# Copyright 1999-2000, The OpenLDAP Foundation, All Rights Reserved.
# COPYING RESTRICTIONS APPLY, see COPYRIGHT.

H1: Replication with slurpd

In certain configurations, a single {{slapd}}(8) instance may be
insufficient to handle the number of clients requiring
directory service via LDAP. It may become necessary to
run more than one slapd instance. At many sites,
for instance, there are multiple slapd servers: one
master and one or more slaves. {{TERM:DNS}} can be set up such that
a lookup of {{EX:ldap.example.com}} returns the {{TERM:IP}} addresses
of these servers, distributing the load among them (or
just the slaves). This master/slave arrangement provides
a simple and effective way to increase capacity, availability,
and reliability.

{{slurpd}}(8) provides the capability for a master slapd to
propagate changes to slave slapd instances,
implementing the master/slave replication scheme
described above. slurpd runs on the same host as the
master slapd instance.

H2: Overview

{{slurpd}}(8) provides replication services "in band". That is, it
uses the LDAP protocol to update a slave database from
the master. Perhaps the easiest way to illustrate this is
with an example. In this example, we trace the propagation
of an LDAP modify operation from its initiation by the LDAP
client to its distribution to the slave slapd instance.

{{B: Sample replication scenario:}}

^ The LDAP client submits an LDAP modify operation to
the slave slapd.

+ The slave slapd returns a referral to the LDAP
client referring the client to the master slapd.

+ The LDAP client submits the LDAP modify operation to
the master slapd.

+ The master slapd performs the modify operation,
writes out the change to its replication log file and returns
a success code to the client.

+ The slurpd process notices that a new entry has
been appended to the replication log file, reads the
replication log entry, and sends the change to the slave
slapd via LDAP.

+ The slave slapd performs the modify operation and
returns a success code to the slurpd process.

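
The client's view of the first two steps can be sketched with the
standard {{ldapmodify}}(1) tool; the hostname, bind DN, and password
here are illustrative only:

> ldapmodify -h slave.example.com -D "uid=bjensen,dc=example,dc=com" -w secret
> dn: uid=bjensen,dc=example,dc=com
> changetype: modify
> replace: description
> description: A dreamer...

Unless the client is configured to chase referrals automatically, it
receives a referral naming the master slapd and must resubmit the
operation there itself.
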
H2: Replication Logs

When slapd is configured to generate a replication logfile,
it writes out a file containing {{TERM:LDIF}} change records.
The replication log gives the replication site(s), a
timestamp, the DN of the entry being modified, and a series
of lines which specify the changes to make. In the
example below, Barbara ({{EX:uid=bjensen}}) has replaced the {{EX:description}}
value. The change is to be propagated
to the slapd instance running on {{EX:slave.example.com}}.
Changes to various operational attributes, such as {{EX:modifiersName}}
and {{EX:modifyTimestamp}}, are included in the change record and
will be propagated to the slave slapd.

> replica: slave.example.com:389
> dn: uid=bjensen,dc=example,dc=com
> changetype: modify
> replace: description
> description: A dreamer...
> -
> replace: modifiersName
> modifiersName: uid=bjensen,dc=example,dc=com
> -
> replace: modifyTimestamp
> modifyTimestamp: 20000805073308Z

The modifications to the {{EX:modifiersName}} and {{EX:modifyTimestamp}}
operational attributes were added by the master {{slapd}}.

H2: Command-Line Options

This section details commonly used {{slurpd}}(8) command-line options.

> -d <level> | ?

This option sets the slurpd debug level to {{EX:<level>}}. When
level is a `?' character, the various debugging levels are
printed and slurpd exits, regardless of any other options
you give it. Current debugging levels (a subset of slapd's
debugging levels) are

!block table; colaligns="RL"; align=Center; \
title="Table 10.1: Debugging Levels"
Level	Description
4	heavy trace debugging
64	configuration file processing
65535	enable all debugging
!endblock

Debugging levels are additive. That is, if you want heavy
trace debugging and want to watch the config file being
processed, you would set level to the sum of those two
levels (in this case, 68).

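
For example, to start slurpd with heavy trace debugging and
configuration file processing enabled (the slapd.conf path shown
is illustrative):

> slurpd -d 68 -f /usr/local/etc/openldap/slapd.conf
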
> -f <filename>

This option specifies an alternate slapd configuration file.
Slurpd does not have its own configuration file. Instead, all
configuration information is read from the slapd
configuration file.

> -r <filename>

This option specifies an alternate slapd replication log file.
Under normal circumstances, slurpd reads the name of
the slapd replication log file from the slapd configuration
file. However, you can override this with the -r flag, to
cause slurpd to process a different replication log file. See
the {{SECT:Advanced slurpd Operation}} section for a discussion
of how you might use this option.

> -o

Operate in "one-shot" mode. Under normal
circumstances, when slurpd finishes processing a
replication log, it remains active and periodically checks to
see if new entries have been added to the replication log.
In one-shot mode, by comparison, slurpd processes a
replication log and exits immediately. If the -o option is
given, the replication log file must be explicitly specified
with the -r option. See the {{SECT:One-shot mode and reject files}}
section for a discussion of this mode.

> -t <directory>

Specify an alternate directory for slurpd's temporary copies of
replication logs. The default location is {{F:/usr/tmp}}.

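
For example, to keep slurpd's temporary replication log copies
under {{F:/var/tmp}} instead of the default (both paths here are
illustrative):

> slurpd -f /usr/local/etc/openldap/slapd.conf -t /var/tmp
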
H2: Configuring slurpd and a slave slapd instance

To bring up a replica slapd instance, you must configure
the master and slave slapd instances for replication, then
shut down the master slapd so you can copy the
database. Finally, you bring up the master slapd instance,
the slave slapd instance, and the slurpd instance. These
steps are detailed in the following sections. You can set
up as many slave slapd instances as you wish.

H3: Set up the master {{slapd}}

The following section assumes you have a properly
working {{slapd}}(8) instance. To configure your working
{{slapd}}(8) server as a replication master, you need
to make the following changes to your {{slapd.conf}}(5).

^ Add a {{EX:replica}} directive for each replica. The {{EX:binddn=}}
parameter should match the {{EX:updatedn}} option in the
corresponding slave slapd configuration file, and should
name an entry with write permission to the slave database
(e.g., an entry listed as {{EX:rootdn}}, or allowed access via
{{EX:access}} directives in the slave slapd configuration file).

+ Add a {{EX:replogfile}} directive, which tells slapd where to log
changes. This file will be read by slurpd.

H3: Set up the slave {{slapd}}

Install the slapd software on the host which is to be the
slave slapd server. The configuration of the slave server
should be identical to that of the master, with the following
exceptions:

^ Do not include a {{EX:replica}} directive. While it is
possible to create "chains" of replicas, in most cases this is
inappropriate.

+ Do not include a {{EX:replogfile}} directive.

+ Do include an {{EX:updatedn}} line. The DN given should
match the DN given in the {{EX:binddn=}} parameter of the
corresponding {{EX:replica=}} directive in the master slapd
config file.

+ Make sure the DN given in the {{EX:updatedn}} directive has
permission to write the database (e.g., it is listed as {{EX:rootdn}}
or is allowed {{EX:access}} by one or more access directives).

+ Use the {{EX:updateref}} directive to define the URL the
slave should return if an update request is received.

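
Putting these together, the slave's {{slapd.conf}}(5) might contain
lines like the following sketch; the DN and master hostname are
illustrative:

> updatedn "cn=Replicator,dc=example,dc=com"
> updateref ldap://master.example.com
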
H3: Shut down the master {{slapd}}

In order to ensure that the slave starts with an exact copy
of the master's data, you must shut down the master
slapd. Do this by sending the master slapd process an
interrupt signal with {{EX:kill -INT <pid>}}, where
{{EX:<pid>}} is the process-id of the master slapd process.

If you like, you may restart the master slapd in read-only
mode while you are replicating the database. During this
time, the master slapd will return an "unwilling to perform"
error to clients that attempt to modify data.

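
One way to do this is with the {{EX:readonly}} directive in the
master's {{slapd.conf}}(5); add it before restarting and remove it
once the copy is complete:

> readonly on
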
H3: Copy the master slapd's database to the slave

Copy the master's database(s) to the slave. For {{TERM:BDB}} and
{{TERM:LDBM}} databases, you must copy all database files located
in the database {{EX:directory}} specified in {{slapd.conf}}(5).
In general, you should copy each file found in the database
{{EX:directory}} unless you know it is not used by {{slapd}}(8).

Note: This copy process assumes homogeneous servers with
identically configured OpenLDAP installations. Alternatively,
you may use {{slapcat}} to output the master's database in LDIF
format and use the LDIF with {{slapadd}} to populate the
slave. Using LDIF avoids any potential incompatibilities due
to differing server architectures or software configurations.
See the {{SECT:Database Creation and Maintenance Tools}}
chapter for details on these tools.

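
As a sketch, the LDIF-based approach looks like this: run {{slapcat}}
on the master, transfer the file to the slave, and load it with
{{slapadd}} (paths and file names illustrative):

> slapcat -f /usr/local/etc/openldap/slapd.conf -l master.ldif
> slapadd -f /usr/local/etc/openldap/slapd.conf -l master.ldif
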
H3: Configure the master slapd for replication

To configure slapd to generate a replication logfile, you
add a "{{EX:replica}}" configuration option to the master slapd's
config file. For example, if we wish to propagate changes
to the slapd instance running on host
{{EX:slave.example.com}}:

> replica host=slave.example.com:389
>     binddn="cn=Replicator,dc=example,dc=com"
>     bindmethod=simple credentials=secret

In this example, changes will be sent to port 389 (the
standard LDAP port) on host slave.example.com. The slurpd
process will bind to the slave slapd as
"{{EX:cn=Replicator,dc=example,dc=com}}" using simple authentication
with password "{{EX:secret}}". Note that the DN given by the {{EX:binddn=}}
directive must exist in the slave slapd's database (or be
the rootdn specified in the slapd config file) in order for the
bind operation to succeed. The DN should also be listed as
the {{EX:updatedn}} for the database in the slave's {{slapd.conf}}(5).

Note: The use of strong authentication and transport security
is highly recommended.

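
The master must also have a {{EX:replogfile}} directive naming the
replication log that slurpd will read; the path shown here is
illustrative:

> replogfile /usr/local/var/openldap/slapd.replog
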
H3: Restart the master slapd and start the slave slapd

Restart the master slapd process. To check that it is
generating replication logs, perform a modification of any
entry in the database, and check that data has been
written to the log file.

H3: Start slurpd

Start the slurpd process. Slurpd should immediately send
the test modification you made to the slave slapd. Watch
the slave slapd's logfile to be sure that the modification
was sent.

> slurpd -f <masterslapdconfigfile>

H2: Advanced slurpd Operation

H3: Replication errors

When slurpd propagates a change to a slave slapd and
receives an error return code, it writes the reason for the
error and the replication record to a reject file. The reject
file is located in the same directory as the per-replica
replication logfile, and has the same name, but with the
string "{{F:.rej}}" appended. For example, for a replica running
on host {{EX:slave.example.com}}, port 389, the reject file, if it
exists, will be named

> /usr/local/var/openldap/replog.slave.example.com:389.rej

A sample rejection log entry follows:

> ERROR: No such attribute
> replica: slave.example.com:389
> dn: uid=bjensen,dc=example,dc=com
> changetype: modify
> replace: description
> description: A dreamer...
> -
> replace: modifiersName
> modifiersName: uid=bjensen,dc=example,dc=com
> -
> replace: modifyTimestamp
> modifyTimestamp: 20000805073308Z

Note that this is precisely the same format as the original
replication log entry, but with an {{EX:ERROR}} line prepended to
the entry.

H3: One-shot mode and reject files

It is possible to use slurpd to process a rejection log with
its "one-shot mode." In normal operation, slurpd watches
for more replication records to be appended to the
replication log file. In one-shot mode, by contrast, slurpd
processes a single log file and exits. Slurpd ignores
{{EX:ERROR}} lines at the beginning of replication log entries, so
it's not necessary to edit them out before feeding it the
rejection log.

To use one-shot mode, specify the name of the rejection
log on the command line as the argument to the -r flag,
and specify one-shot mode with the -o flag. For example,
to process the rejection log file
{{F:/usr/local/var/openldap/replog.slave.example.com:389.rej}}
and exit, use the command

> slurpd -r /usr/local/var/openldap/replog.slave.example.com:389.rej -o

H2: Replication to an X.500 DSA

In mixed environments where both {{TERM:X.500}} DSAs and slapd
are used, it may be desirable to replicate changes from a
slapd directory server to an X.500 {{TERM:DSA}}. This section
discusses issues involved with this method of replication,
and describes the currently-available facilities.

To propagate changes from a slapd directory server to an
X.500 DSA, slurpd runs on the master slapd host, and
sends changes to an ldapd which acts as a gateway to
the X.500 DSA.

!import "replication.gif"; align="center"; \
title="Replication from slapd to an X.500 DSA"
FT: Figure 10.1: Replication from slapd to an X.500 DSA

Note that the X.500 DSA must be a read-only copy. Since
the replication is one-way, updates from {{TERM:DAP}} clients
connecting to the X.500 DSA simply cannot be handled.

A problem arises where attribute names differ between the
slapd directory server and the X.500 DSA. At present,
slapd and slurpd do not support selective replication of
attributes, nor do they support translation of attribute
names and values. For example, slurpd will attempt to
update the {{EX:modifiersName}} and {{EX:modifyTimeStamp}}
attributes on the slave it connects to. However, the X.500
DSA may expect these attributes to be named
{{EX:lastModifiedBy}} and {{EX:lastModifiedTime}}.

A solution to this attribute naming problem is to have the
LDAP/DAP gateway map {{EX:modifiersName}} to the Object
Identifier ({{TERM:OID}}) for the {{EX:lastModifiedBy}}
attribute and {{EX:modifyTimeStamp}} to the OID for the
{{EX:lastModifiedTime}} attribute. Since attribute names
are carried as OIDs over DAP, this should perform the
appropriate translation of attribute names.