# Copyright 1999-2000, The OpenLDAP Foundation, All Rights Reserved.
# COPYING RESTRICTIONS APPLY, see COPYRIGHT.

H1: Replication with slurpd
In certain configurations, a single {{slapd}}(8) instance may
be insufficient to handle the number of clients requiring
directory service via LDAP. It may become necessary to
run more than one slapd instance. At many sites, for
instance, there are multiple slapd servers: one master
and one or more slaves. {{TERM:DNS}} can be set up such that
a lookup of {{EX:ldap.example.com}} returns the {{TERM:IP}} addresses
of these servers, distributing the load among them (or
just the slaves). This master/slave arrangement provides
a simple and effective way to increase capacity,
availability and reliability.
{{slurpd}}(8) provides the capability for a master slapd to
propagate changes to slave slapd instances,
implementing the master/slave replication scheme
described above. slurpd runs on the same host as the
master slapd instance.
{{slurpd}}(8) provides replication services "in band". That is, it
uses the LDAP protocol to update a slave database from
the master. Perhaps the easiest way to illustrate this is
with an example. In this example, we trace the propagation
of an LDAP modify operation from its initiation by the LDAP
client to its distribution to the slave slapd instance.

{{B: Sample replication scenario:}}
^ The LDAP client submits an LDAP modify operation to
the slave slapd.

+ The slave slapd returns a referral to the LDAP
client referring the client to the master slapd.

+ The LDAP client submits the LDAP modify operation to
the master slapd.

+ The master slapd performs the modify operation,
writes out the change to its replication log file and returns
a success code to the client.

+ The slurpd process notices that a new entry has
been appended to the replication log file, reads the
replication log entry, and sends the change to the slave
slapd.

+ The slave slapd performs the modify operation and
returns a success code to the slurpd process.
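The steps above can be sketched as a toy simulation. None of
the class or function names below are OpenLDAP APIs; they are
hypothetical illustrations of the referral-and-replay flow only:

```python
# Toy model of the master/slave replication flow described above.
# All names here are illustrative; slurpd itself is a C program
# that speaks real LDAP rather than calling Python methods.

class MasterSlapd:
    def __init__(self):
        self.data = {}
        self.replog = []          # replication log: list of change records

    def modify(self, dn, changes):
        self.data.setdefault(dn, {}).update(changes)
        self.replog.append((dn, dict(changes)))  # append change to replog
        return "success"

class SlaveSlapd:
    def __init__(self, master):
        self.master = master
        self.data = {}

    def modify(self, dn, changes, from_slurpd=False):
        if not from_slurpd:
            # Slaves refer ordinary clients to the master.
            return ("referral", self.master)
        self.data.setdefault(dn, {}).update(changes)
        return "success"

def client_modify(server, dn, changes):
    """An LDAP client that chases a referral to the master."""
    result = server.modify(dn, changes)
    if isinstance(result, tuple) and result[0] == "referral":
        return result[1].modify(dn, changes)
    return result

def slurpd_run(master, slaves):
    """Replay logged changes against each slave, as slurpd would."""
    for dn, changes in master.replog:
        for slave in slaves:
            slave.modify(dn, changes, from_slurpd=True)
    master.replog.clear()
```

A modify submitted to the slave thus ends up applied on the
master first, then propagated back to the slave by the replay
loop.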
When slapd is configured to generate a replication logfile,
it writes out a file containing {{TERM:LDIF}} change records.
The replication log gives the replication site(s), a
timestamp, the DN of the entry being modified, and a series
of lines which specify the changes to make. In the
example below, Barbara ({{EX:uid=bjensen}}) has replaced the {{EX:description}}
value. The change is to be propagated
to the slapd instance running on {{EX:slave.example.com}}.
Changes to various operational attributes, such as {{EX:modifiersName}}
and {{EX:modifyTimestamp}}, are included in the change record and
will be propagated to the slave slapd.
E: replica: slave.example.com:389

E: dn: uid=bjensen, dc=example, dc=com
E: changetype: modify
E: replace: description
E: description: A dreamer...
E: -
E: replace: modifiersName
E: modifiersName: uid=bjensen, dc=example, dc=com
E: -
E: replace: modifyTimestamp
E: modifyTimestamp: 20000805073308Z
The modifications to {{EX:modifiersName}} and {{EX:modifyTimestamp}}
operational attributes were added by the master {{slapd}}.
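To make the record layout concrete, here is a small parser for
one such entry. The helper is hypothetical and assumes only the
textual format shown above (header lines, then replace/value
pairs separated by "-" lines):

```python
def parse_replog_entry(text):
    """Parse one replication log entry into (replicas, dn, changes).

    Assumes the layout shown above: replica/dn header lines,
    then "replace: <attr>" lines each followed by the new
    attribute value, with "-" separating the modifications.
    """
    replicas, dn, changes = [], None, []
    for line in text.strip().splitlines():
        line = line.strip()
        if not line or line == "-":
            continue                      # skip blanks and separators
        key, _, value = line.partition(": ")
        if key == "replica":
            replicas.append(value)
        elif key == "dn":
            dn = value
        elif key == "replace":
            changes.append(("replace", value, None))
        elif changes and key == changes[-1][1]:
            # Value line for the most recent "replace: <attr>".
            op, attr, _ = changes[-1]
            changes[-1] = (op, attr, value)
    return replicas, dn, changes
```

Feeding it the sample entry above yields one replica, the
entry's DN, and three replace operations.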
H2: Command-Line Options

{{slurpd}}(8) supports the following command-line options.
This option ({{EX:-d}}) sets the slurpd debug level to {{EX:<level>}}. When
level is a `?' character, the various debugging levels are
printed and slurpd exits, regardless of any other options
you give it. Current debugging levels (a subset of slapd's
debugging levels) are:
E: 4		heavy trace debugging
E: 64		configuration file processing
E: 65535	enable all debugging
Debugging levels are additive. That is, if you want heavy
trace debugging and want to watch the config file being
processed, you would set level to the sum of those two
levels (in this case, 68).
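Because each level is a distinct bit flag, combining levels is
just addition (equivalently, bitwise OR). A quick check using
the level numbers listed above:

```python
TRACE = 4     # heavy trace debugging
CONFIG = 64   # configuration file processing

combined = TRACE + CONFIG   # same as TRACE | CONFIG for distinct bits
print(combined)             # 68, as in the example above
```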
This option ({{EX:-f}}) specifies an alternate slapd configuration file.
Slurpd does not have its own configuration file. Instead, all
configuration information is read from the slapd
configuration file.
This option specifies an alternate slapd replication log file.
Under normal circumstances, slurpd reads the name of
the slapd replication log file from the slapd configuration
file. However, you can override this with the -r flag, to
cause slurpd to process a different replication log file. See
section 10.5, Advanced slurpd Operation, for a discussion
of how you might use this option.
Operate in "one-shot" mode. Under normal
circumstances, when slurpd finishes processing a
replication log, it remains active and periodically checks to
see if new entries have been added to the replication log.
In one-shot mode, by comparison, slurpd processes a
replication log and exits immediately. If the -o option is
given, the replication log file must be explicitly specified
with the -r option.
This option ({{EX:-t}}) specifies an alternate directory for slurpd's
temporary copies of replication logs. The default location is
/usr/tmp.
When slurpd uses Kerberos to authenticate to slave slapd
instances, it needs to have an appropriate srvtab file for
the remote slapd. This option allows you to specify an
alternate filename containing Kerberos keys for the remote
slapd. The default filename is /etc/srvtab. You can also
specify the srvtab file to use in the slapd configuration
file's replica option. See the documentation on the srvtab
directive in section 5.2.2, General Backend Options. A
more complete discussion of using Kerberos with slapd
and slurpd may be found in Appendix D.
H2: Configuring slurpd and a slave slapd instance

To bring up a replica slapd instance, you must configure
the master and slave slapd instances for replication, then
shut down the master slapd so you can copy the
database. Finally, you bring up the master slapd instance,
the slave slapd instance, and the slurpd instance. These
steps are detailed in the following sections. You can set
up as many slave slapd instances as you wish.
H3: Set up the master slapd

Follow the procedures in Section 4, Building and Installing
slapd. Be sure that the slapd instance is working properly
before proceeding. Be sure to do the following in the
master slapd configuration file.

^ Add a replica directive for each replica. The {{EX:binddn=}}
parameter should match the {{EX:updatedn}} option in the
corresponding slave slapd configuration file, and should
name an entry with write permission to the slave database
(e.g., an entry listed as rootdn, or allowed access via
access directives in the slave slapd configuration file).

+ Add a replogfile directive, which tells slapd where to log
changes. This file will be read by slurpd.
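Taken together, the master's configuration might contain lines
like the following (the host name, DN, credentials, and log
path are illustrative values, not defaults):

E: replogfile /usr/local/var/openldap/replog
E: replica host=slave.example.com:389
E:         binddn="cn=Replicator,dc=example,dc=com"
E:         bindmethod=simple credentials=secret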
H3: Set up the slave slapd

Install the slapd software on the host which is to be the
slave slapd server. The configuration of the slave server
should be identical to that of the master, with the following
exceptions:

^ Do not include a replica directive. While it is possible to
create "chains" of replicas, in most cases this is
not recommended.

+ Do not include a replogfile directive.

+ Do include an updatedn line. The DN given should
match the DN given in the {{EX:binddn=}} parameter of the
corresponding {{EX:replica=}} directive in the master slapd
configuration file.

+ Make sure the DN given in the {{EX:updatedn}} directive has
permission to write the database (e.g., it is listed as rootdn
or is allowed access by one or more access directives).
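A slave's configuration might therefore include lines like
these (the DN and access rule are illustrative):

E: updatedn "cn=Replicator,dc=example,dc=com"
E: access to *
E:         by dn="cn=Replicator,dc=example,dc=com" write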
H3: Shut down the master slapd

In order to ensure that the slave starts with an exact copy
of the master's data, you must shut down the master
slapd. Do this by sending the master slapd process a
termination signal with {{EX:kill -TERM <pid>}}, where {{EX:<pid>}} is the
process-id of the master slapd process.

If you like, you may restart the master slapd in read-only
mode while you are replicating the database. During this
time, the master slapd will return an "unwilling to perform"
error to clients that attempt to modify data.
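One way to run the master read-only during the copy is the
readonly directive in {{slapd.conf}}(5), removed again once the
copy is complete (shown here as a sketch of one possible
approach):

E: readonly on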
H3: Copy the master slapd's database to the slave

Copy the master's database(s) to the slave. For an
{{TERM:LDBM}}-based database, you must copy all database
files located in the database {{EX:directory}} specified in
{{slapd.conf}}(5). Database files will have a different
suffix depending on the underlying database package used.
The current possibilities are

* {{EX:dbb}}	Berkeley DB B-tree backend
* {{EX:dbh}}	Berkeley DB hash backend
* {{EX:gdbm}}	GNU DBM backend

In general, you should copy all files found in the database
{{EX:directory}} unless you know it is not used by {{slapd}}(8).
H3: Configure the master slapd for replication

To configure slapd to generate a replication logfile, you
add a "{{EX:replica}}" configuration option to the master slapd's
config file. For example, if we wish to propagate changes
to the slapd instance running on host
slave.example.com, we add the following lines:

E: replica host=slave.example.com:389
E: binddn="cn=Replicator,dc=example,dc=com"
E: bindmethod=simple credentials=secret

In this example, changes will be sent to port 389 (the
standard LDAP port) on host slave.example.com. The slurpd
process will bind to the slave slapd as
"cn=Replicator,dc=example,dc=com" using simple authentication
with password "secret". Note that the DN given by the binddn=
directive must either exist in the slave slapd's database or be
the rootdn specified in the slapd config file in order for the
bind operation to succeed. The DN should also be listed as
the {{EX:updatedn}} for the database in the slave's slapd.conf(5).
Note: use of simple authentication is discouraged. Use
of strong SASL mechanisms such as DIGEST-MD5 or GSSAPI is
recommended.
H3: Restart the master slapd and start the slave slapd

Restart the master slapd process. To check that it is
generating replication logs, perform a modification of any
entry in the database, and check that data has been
written to the log file.
Next, start the slave slapd process, and then start the
slurpd process:

E: slurpd -f <masterslapdconfigfile>

Slurpd should immediately send the test modification you
made to the slave slapd. Watch the slave slapd's logfile
to be sure that the modification has been performed.
H2: Advanced slurpd Operation

H3: Replication errors

When slurpd propagates a change to a slave slapd and
receives an error return code, it writes the reason for the
error and the replication record to a reject file. The reject
file is located in the same directory with the per-replica
replication logfile, and has the same name, but with the
string ".rej" appended. For example, for a replica running
on host slave.example.com, port 389, the reject file, if it
exists, will be named

E: /usr/local/var/openldap/replog.slave.example.com:389.rej
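The naming rule above can be sketched as a small helper. The
function name is hypothetical; slurpd implements this
internally:

```python
def reject_file_name(replog_dir, host, port=389):
    """Build a per-replica reject file path: same directory and
    name as the per-replica replication log, ".rej" appended."""
    return "%s/replog.%s:%d.rej" % (replog_dir, host, port)

# Replica on host slave.example.com, port 389:
path = reject_file_name("/usr/local/var/openldap", "slave.example.com")
```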
A sample rejection log entry follows:

E: ERROR: No such attribute
E: replica: slave.example.com:389

E: dn: uid=bjensen, dc=example, dc=com
E: changetype: modify
E: replace: description
E: description: A dreamer...
E: -
E: replace: modifiersName
E: modifiersName: uid=bjensen, dc=example, dc=com
E: -
E: replace: modifyTimestamp
E: modifyTimestamp: 20000805073308Z
Note that this is precisely the same format as the original
replication log entry, but with an ERROR line prepended to
the entry.
H3: {{I:Slurpd}}'s one-shot mode and reject files

It is possible to use slurpd to process a rejection log with
its "one-shot mode." In normal operation, slurpd watches
for more replication records to be appended to the
replication log file. In one-shot mode, by contrast, slurpd
processes a single log file and exits. Slurpd ignores
{{EX:ERROR}} lines at the beginning of replication log entries, so
it's not necessary to edit them out before feeding it the
rejection log.
To use one-shot mode, specify the name of the rejection
log on the command line as the argument to the -r flag,
and specify one-shot mode with the -o flag. For example,
to process the rejection log file
{{F:/usr/local/var/openldap/replog.slave.example.com:389.rej}}
and exit, use the command

E: slurpd -r /usr/local/var/openldap/replog.slave.example.com:389.rej -o
H2: Replication from a slapd directory server to an X.500 DSA

In mixed environments where both X.500 DSAs and slapd
are used, it may be desirable to replicate changes from a
slapd directory server to an X.500 DSA. This section
discusses issues involved with this method of replication,
and describes the currently-available facilities.

To propagate changes from a slapd directory server to an
X.500 DSA, slurpd runs on the master slapd host, and
sends changes to an ldapd which acts as a gateway to
the X.500 DSA.

!import "replication.gif"; align="center"; title="Replication from slapd to an X.500 DSA"
FT: Figure 6: Replication from slapd to an X.500 DSA
Note that the X.500 DSA must be a read-only copy. Since
the replication is one-way, updates from DAP clients
connecting to the X.500 DSA simply cannot be handled.
A problem arises where attribute names differ between the
slapd directory server and the X.500 DSA. At present,
slapd and slurpd do not support selective replication of
attributes, nor do they support translation of attribute
names and values. For example, slurpd will attempt to
update the {{EX:modifiersName}} and {{EX:modifyTimestamp}}
attributes on the slave it connects to. However, the X.500
DSA may expect these attributes to be named
{{EX:lastModifiedBy}} and {{EX:lastModifiedTime}}.
A solution to this attribute naming problem is to have the
ldapd read oidtables that map {{EX:modifiersName}} to the
objectID (OID) for the {{EX:lastModifiedBy}} attribute and
{{EX:modifyTimestamp}} to the OID for the {{EX:lastModifiedTime}}
attribute. Since attribute names are carried as OIDs over
DAP, this should perform the appropriate translation of
attribute names.