diff --git a/doc/guide/admin/replication.sdf b/doc/guide/admin/replication.sdf
index e67a2fb6e5..0df0beab28 100644
--- a/doc/guide/admin/replication.sdf
+++ b/doc/guide/admin/replication.sdf
@@ -1,388 +1,579 @@
 # $OpenLDAP$
-# Copyright 1999-2000, The OpenLDAP Foundation, All Rights Reserved.
+# Copyright 1999-2007 The OpenLDAP Foundation, All Rights Reserved.
 # COPYING RESTRICTIONS APPLY, see COPYRIGHT.
 
-H1: Replication with slurpd
 
-In certain configurations, a single {{slapd}}(8) instance may be
-insufficient to handle the number of clients requiring
-directory service via LDAP. It may become necessary to
-run more than one slapd instance. Many sites,
-for instance, there are multiple slapd servers, one
-master and one or more slaves. {{TERM:DNS}} can be setup such that
-a lookup of {{EX:ldap.example.com}} returns the {{TERM:IP}} addresses
-of these servers, distributing the load among them (or
-just the slaves). This master/slave arrangement provides
-a simple and effective way to increase capacity, availability
-and reliability.
-
-{{slurpd}}(8) provides the capability for a master slapd to
-propagate changes to slave slapd instances,
-implementing the master/slave replication scheme
-described above. slurpd runs on the same host as the
-master slapd instance.
+H1: Replication
 
+Replicated directories are a fundamental requirement for delivering a
+resilient enterprise deployment.
+
+OpenLDAP has various configuration options for creating a replicated
+directory. The following sections discuss them.
+
+H2: Replication Strategies
+
+
+H3: Push Based
+
+
+H4: Replacing Slurpd
+
+Slurpd replication has been deprecated in favor of Syncrepl replication
+and has been completely removed from OpenLDAP 2.4.
+
+{{Why was it replaced?}}
+
+The slurpd daemon was the original replication mechanism inherited from
+UMich's LDAP and operates in push mode: the master pushes changes to the
+slaves. It was replaced for many reasons, in brief:
+
+ * It is not reliable
+ * It is extremely sensitive to the ordering of records in the replog
+ * It can easily go out of sync, at which point manual intervention is
+   required to resync the slave database with the master directory
+ * It isn't very tolerant of unavailable servers. If a slave goes down
+   for a long time, the replog may grow to a size that's too large for
+   slurpd to process
+
+{{What was it replaced with?}}
+
+Syncrepl.
 
-H2: Overview
+{{Why is Syncrepl better?}}
 
-{{slurpd}}(8) provides replication services "in band". That is, it
-uses the LDAP protocol to update a slave database from
-the master. Perhaps the easiest way to illustrate this is
-with an example. In this example, we trace the propagation
-of an LDAP modify operation from its initiation by the LDAP
-client to its distribution to the slave slapd instance.
+ * Syncrepl is self-synchronizing; you can start with a database in any
+   state, from totally empty to fully synced, and it will automatically
+   do the right thing to achieve and maintain synchronization
+ * Syncrepl can operate in either direction
+ * Data updates can be minimal or maximal
+
+{{How do I implement a push-based replication system using Syncrepl?}}
 
-{{B: Sample replication scenario:}}
-
-^ The LDAP client submits an LDAP modify operation to
-the slave slapd.
-
-+ The slave slapd returns a referral to the LDAP
-client referring the client to the master slapd.
-
-+ The LDAP client submits the LDAP modify operation to
-the master slapd.
-
-+ The master slapd performs the modify operation,
-writes out the change to its replication log file and returns
-a success code to the client.
-
-+ The slurpd process notices that a new entry has
-been appended to the replication log file, reads the
-replication log entry, and sends the change to the slave
-slapd via LDAP.
+The easiest way is to point an LDAP backend ({{SECT:Backends}} and
+{{slapd-ldap}}(8)) at your slave directory and set up Syncrepl to
+point at your master database.
 
-+ The slave slapd performs the modify operation and
-returns a success code to the slurpd process.
+Syncrepl pulls changes down from the master server, and the LDAP
+backend then pushes those changes out to your slave servers. This
+configuration is commonly called proxy mode.
 
-H2: Replication Logs
-
-When slapd is configured to generate a replication logfile,
-it writes out a file containing {{TERM:LDIF}} change records.
-The replication log gives the replication site(s), a
-timestamp, the DN of the entry being modified, and a series
-of lines which specify the changes to make. In the
-example below, Barbara ({{EX:uid=bjensen}}) has replaced the {{EX:description}}
-value. The change is to be propagated
-to the slapd instance running on {{EX:slave.example.net}}
-Changes to various operational attributes, such as {{EX:modifiersName}}
-and {{EX:modifyTimestamp}}, are included in the change record and
-will be propagated to the slave slapd.
-
-> replica: slave.example.com:389
-> time: 809618633
-> dn: uid=bjensen, dc=example, dc=com
-> changetype: modify
-> replace: multiLineDescription
-> description: A dreamer...
-> -
-> replace: modifiersName
-> modifiersName: uid=bjensen, dc=example, dc=com
-> -
-> replace: modifyTimestamp
-> modifyTimestamp: 20000805073308Z
-> -
 
-The modifications to {{EX:modifiersName}} and {{EX:modifyTimestamp}}
-operational attributes were added by the master {{slapd}}.
+The {{EX:test045}} and {{EX:test048}} scripts in the OpenLDAP test
+suite demonstrate single-proxy and multi-proxy push configurations,
+respectively.
+Here's an example:
 
-H2: Command-Line Options
+> include ./schema/core.schema
+> include ./schema/cosine.schema
+> include ./schema/inetorgperson.schema
+> include ./schema/openldap.schema
+> include ./schema/nis.schema
+>
+> pidfile /var/run/slapd.pid
+> argsfile /var/run/slapd.args
+>
+> modulepath /usr/local/libexec/openldap
+> moduleload back_bdb.la
+> moduleload back_monitor.la
+> moduleload back_ldap.la
+> moduleload syncprov.la
+>
+> # We don't need any access to this DSA
+> restrict all
+>
+> #######################################################################
+> # consumer proxy database definitions
+> #######################################################################
+>
+> database ldap
+> suffix "dc=example,dc=com"
+> rootdn "cn=Whoever"
+> uri ldap://localhost:9012/
+>
+> lastmod on
+>
+> # HACK: use the RootDN of the monitor database as UpdateDN so ACLs apply
+> # without the need to write the UpdateDN before starting replication
+> acl-bind bindmethod=simple
+>          binddn="cn=Monitor"
+>          credentials=monitor
+>
+> syncrepl rid=1
+>          provider=ldap://localhost:9011/
+>          binddn="cn=Manager,dc=example,dc=com"
+>          bindmethod=simple
+>          credentials=secret
+>          searchbase="dc=example,dc=com"
+>          filter="(objectClass=*)"
+>          attrs="*,structuralObjectClass,entryUUID,entryCSN,creatorsName,createTimestamp,modifiersName,modifyTimestamp"
+>          schemachecking=off
+>          scope=sub
+>          type=refreshAndPersist
+>          retry="5 5 300 5"
+>
+> overlay syncprov
+>
+> database monitor
+
+In this configuration, {{EX:restrict all}} keeps ordinary clients away
+from this DSA. The {{EX:database ldap}} section defines a proxy for
+{{EX:dc=example,dc=com}} that forwards operations to the slave server
+at {{EX:ldap://localhost:9012/}}, binding with the identity given in
+the {{EX:acl-bind}} directive. The {{EX:syncrepl}} directive pulls
+changes from the master at {{EX:ldap://localhost:9011/}} in
+{{EX:refreshAndPersist}} mode; every change received is then written
+through the LDAP backend to the slave.
+
+As you can see, Syncrepl and {{slapd-ldap}}(8) can be combined to
+tailor your replication to fit your specific network topology.
+
+H3: Pull Based
+
+
+H4: syncrepl replication
+
+
+H4: delta-syncrepl replication
+
+
+H2: Replication Types
+
+
+H3: syncrepl replication
+
+
+H3: delta-syncrepl replication
+
+
+H3: N-Way Multi-Master
+
+http://www.connexitor.com/blog/pivot/entry.php?id=105#body
+http://www.openldap.org/lists/openldap-software/200702/msg00006.html
+http://www.openldap.org/lists/openldap-software/200602/msg00064.html
+
+
+H3: MirrorMode
+
+MirrorMode is a hybrid configuration that provides all of the consistency
+guarantees of single-master replication while also providing the high
+availability of multi-master. In MirrorMode two masters are set up to
+replicate from each other (as a multi-master configuration) but an
+external frontend is employed to direct all writes to only one of
+the two servers. The second master will only be used for writes if
+the first master crashes, at which point the frontend will switch to
+directing all writes to the second master. When a crashed master is
+repaired and restarted it will automatically catch up to any changes
+on the running master and resync.
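+A minimal MirrorMode sketch for {{slapd.conf}}(5) follows. The
+hostnames, credentials, and the {{EX:rid}}/{{EX:serverID}} values are
+illustrative assumptions, not prescribed values; consult
+{{slapd.conf}}(5) for the directives supported by your release. Each
+master carries the {{EX:syncprov}} overlay and a {{EX:syncrepl}}
+stanza pointing at the other master, and {{EX:mirrormode}} allows the
+server to accept writes while also acting as a syncrepl consumer:
+
+> # on ldap-1.example.com; ldap-2 mirrors this, pointing back at ldap-1
+> serverID 1
+>
+> database bdb
+> suffix "dc=example,dc=com"
+> rootdn "cn=Manager,dc=example,dc=com"
+> directory /var/ldap/db
+>
+> syncrepl rid=1
+>          provider=ldap://ldap-2.example.com
+>          bindmethod=simple
+>          binddn="cn=Manager,dc=example,dc=com"
+>          credentials=secret
+>          searchbase="dc=example,dc=com"
+>          type=refreshAndPersist
+>          retry="5 5 300 5"
+>
+> mirrormode on
+>
+> overlay syncprov
+
+Remember that MirrorMode itself does not arbitrate writes: the
+external frontend must direct all writes to one master at a time.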
+
+H2: LDAP Sync Replication
+
+The {{TERM:LDAP Sync}} Replication engine, {{TERM:syncrepl}} for
+short, is a consumer-side replication engine that enables the
+consumer {{TERM:LDAP}} server to maintain a shadow copy of a
+{{TERM:DIT}} fragment. A syncrepl engine resides on the consumer side
+as one of the {{slapd}}(8) threads. It creates and maintains a
+consumer replica by connecting to the replication provider to perform
+the initial DIT content load followed either by periodic content
+polling or by timely updates upon content changes.
+
+Syncrepl uses the LDAP Content Synchronization (or LDAP Sync for
+short) protocol as the replica synchronization protocol. It provides
+a stateful replication which supports both pull-based and push-based
+synchronization and does not mandate the use of a history store.
+
+Syncrepl keeps track of the status of the replication content by
+maintaining and exchanging synchronization cookies. Because the
+syncrepl consumer and provider maintain their content status, the
+consumer can poll the provider content to perform incremental
+synchronization by asking for the entries required to make the
+consumer replica up-to-date with the provider content. Syncrepl
+also enables convenient management of replicas by maintaining replica
+status. The consumer replica can be constructed from a consumer-side
+or a provider-side backup at any synchronization status, and syncrepl
+will automatically bring the replica up-to-date with the current
+provider content.
+
+Syncrepl supports both pull-based and push-based synchronization.
+In its basic refreshOnly synchronization mode, the provider uses
+pull-based synchronization where the consumer servers need not be
+tracked and no history information is maintained. The information
+required for the provider to process periodic polling requests is
+contained in the synchronization cookie of the request itself. To
+optimize the pull-based synchronization, syncrepl utilizes the
+present phase of the LDAP Sync protocol as well as its delete phase,
+instead of falling back on frequent full reloads. To further optimize
+the pull-based synchronization, the provider can maintain a per-scope
+session log as a history store. In its refreshAndPersist mode of
+synchronization, the provider uses a push-based synchronization.
+The provider keeps track of the consumer servers that have requested
+a persistent search and sends them necessary updates as the provider
+replication content gets modified.
+
+With syncrepl, a consumer server can create a replica without
+changing the provider's configuration and without restarting the
+provider server, provided the consumer server has appropriate access
+privileges for the DIT fragment to be replicated. The consumer
+server can likewise stop the replication without any provider-side
+changes or restart.
+
+Syncrepl supports both partial and sparse replication. The shadow
+DIT fragment is defined by general search criteria consisting of a
+base, scope, filter, and attribute list. The replica content is
+also subject to the access privileges of the bind identity of the
+syncrepl replication connection.
+
+
+H3: The LDAP Content Synchronization Protocol
+
+The LDAP Sync protocol allows a client to maintain a synchronized
+copy of a DIT fragment. The LDAP Sync operation is defined as a set
+of controls and other protocol elements which extend the LDAP search
+operation. This section introduces the LDAP Content Sync protocol
+only briefly. For more information, refer to {{REF:RFC4533}}.
+
+The LDAP Sync protocol supports both polling and listening for
+changes by defining two respective synchronization operations:
+{{refreshOnly}} and {{refreshAndPersist}}. Polling is implemented
+by the {{refreshOnly}} operation. The client copy is synchronized
+to the server copy at the time of polling.
The server finishes the
+search operation by returning {{SearchResultDone}} at the end of
+the search, as in a normal search. Listening is implemented by the
+{{refreshAndPersist}} operation. Instead of finishing the search
+after returning all entries currently matching the search criteria,
+the synchronization search remains persistent in the server.
+Subsequent updates to the synchronization content in the server
+cause additional entry updates to be sent to the client.
+
+The {{refreshOnly}} operation and the refresh stage of the
+{{refreshAndPersist}} operation can be performed with a present
+phase or a delete phase.
+
+In the present phase, the server sends the client the entries updated
+within the search scope since the last synchronization. The server
+sends all requested attributes of the updated entries, whether
+changed or not. For each unchanged entry which remains in the scope,
+the server sends a present message consisting only of the name of the
+entry and the synchronization control representing state present.
+The present message does not contain any attributes of the entry.
+After the client receives all update and present entries, it can
+reliably determine the new client copy by adding the entries added
+to the server, by replacing the entries modified at the server, and
+by deleting entries in the client copy which have neither been
+updated nor specified as present at the server.
+
+The transmission of the updated entries in the delete phase is the
+same as in the present phase. The server sends all the requested
+attributes of the entries updated within the search scope since the
+last synchronization to the client. In the delete phase, however,
+the server sends a delete message for each entry deleted from the
+search scope, instead of sending present messages. The delete
+message consists only of the name of the entry and the synchronization
+control representing state delete. The new client copy can be
+determined by adding, modifying, and removing entries according to
+the synchronization control attached to the {{SearchResultEntry}}
+message.
+
+If the LDAP Sync server maintains a history store and can determine
+which entries are scoped out of the client copy since the last
+synchronization time, the server can use the delete phase. If the
+server does not maintain any history store, cannot determine the
+scoped-out entries from the history store, or the history store
+does not cover the outdated synchronization state of the client,
+the server should use the present phase. The use of the present
+phase is much more efficient than a full content reload in terms
+of the synchronization traffic. To reduce the synchronization
+traffic further, the LDAP Sync protocol also provides several
+optimizations such as the transmission of the normalized {{EX:entryUUID}}s
+and the transmission of multiple {{EX:entryUUID}}s in a single
+{{syncIdSet}} message.
+
+At the end of the {{refreshOnly}} synchronization, the server sends
+a synchronization cookie to the client as a state indicator of the
+client copy after the synchronization is completed. The client
+will present the received cookie when it requests the next incremental
+synchronization from the server.
+
+When {{refreshAndPersist}} synchronization is used, the server sends
+a synchronization cookie at the end of the refresh stage by sending
+a Sync Info message with TRUE refreshDone.
It also sends a
+synchronization cookie by attaching it to {{SearchResultEntry}}
+messages generated in the persist stage of the synchronization
+search. During the persist stage, the server can also send a Sync
+Info message containing the synchronization cookie at any time the
+server wants to update the client-side state indicator. The server
+also updates a synchronization indicator of the client at the end
+of the persist stage.
+
+In the LDAP Sync protocol, entries are uniquely identified by the
+{{EX:entryUUID}} attribute value. It can function as a reliable
+identifier of the entry. The DN of the entry, on the other hand,
+can be changed over time and hence cannot be considered a reliable
+identifier. The {{EX:entryUUID}} is attached to each
+{{SearchResultEntry}} or {{SearchResultReference}} as a part of the
+synchronization control.
+
+
+H3: Syncrepl Details
+
+The syncrepl engine utilizes both the {{refreshOnly}} and the
+{{refreshAndPersist}} operations of the LDAP Sync protocol. If a
+syncrepl specification is included in a database definition,
+{{slapd}}(8) launches a syncrepl engine as a {{slapd}}(8) thread
+and schedules its execution. If the {{refreshOnly}} operation is
+specified, the syncrepl engine will be rescheduled at the configured
+interval after a synchronization operation is completed. If the
+{{refreshAndPersist}} operation is specified, the engine will remain
+active and process the persistent synchronization messages from the
+provider.
+
+The syncrepl engine utilizes both the present phase and the delete
+phase of the refresh synchronization. It is possible to configure
+a per-scope session log in the provider server which stores the
+{{EX:entryUUID}}s of a finite number of entries deleted from a
+replication content. Multiple replicas of the same provider content
+share the same per-scope session log. The syncrepl engine uses the
+delete phase if the session log is present and the state of the
+consumer server is recent enough that no session log entries are
+truncated after the last synchronization of the client. The syncrepl
+engine uses the present phase if no session log is configured for
+the replication content or if the consumer replica is too outdated
+to be covered by the session log. The current design of the session
+log store is memory based, so the information contained in the
+session log is not persistent over multiple provider invocations.
+Accessing the session log store via LDAP operations is not currently
+supported, nor is imposing access control on the session log.
+
+As a further optimization, even when the synchronization search is
+not associated with any session log, no entries will be transmitted
+to the consumer server when there has been no update in the
+replication context.
+
+The syncrepl engine, which is a consumer-side replication engine,
+can work with any backend. The LDAP Sync provider can be configured
+as an overlay on any backend, but works best with the {{back-bdb}}
+or {{back-hdb}} backend.
+
+The LDAP Sync provider maintains a {{EX:contextCSN}} for each
+database as the current synchronization state indicator of the
+provider content. It is the largest {{EX:entryCSN}} in the provider
+context such that no transactions for an entry having a smaller
+{{EX:entryCSN}} value remain outstanding.
The {{EX:contextCSN}}
+cannot simply be set to the largest issued {{EX:entryCSN}} because
+{{EX:entryCSN}} values are obtained before a transaction starts and
+transactions are not committed in issue order.
+
+The provider stores the {{EX:contextCSN}} of a context in the
+{{EX:contextCSN}} attribute of the context suffix entry. The attribute
+is not written to the database after every update operation though;
+instead it is maintained primarily in memory. At database start
+time the provider reads the last saved {{EX:contextCSN}} into memory
+and uses the in-memory copy exclusively thereafter. By default,
+changes to the {{EX:contextCSN}} as a result of database updates
+will not be written to the database until the server is cleanly
+shut down. A checkpoint facility exists to cause the contextCSN to
+be written out more frequently if desired.
+
+Note that at startup time, if the provider is unable to read a
+{{EX:contextCSN}} from the suffix entry, it will scan the entire
+database to determine the value, and this scan may take quite a
+long time on a large database. When a {{EX:contextCSN}} value is
+read, the database will still be scanned for any {{EX:entryCSN}}
+values greater than it, to make sure the {{EX:contextCSN}} value
+truly reflects the greatest committed {{EX:entryCSN}} in the database.
+On databases which support inequality indexing, setting an eq index
+on the {{EX:entryCSN}} attribute and configuring {{EX:contextCSN}}
+checkpoints will greatly speed up this scanning step.
+
+If no {{EX:contextCSN}} can be determined by reading and scanning
+the database, a new value will be generated. Also, if scanning the
+database yielded a greater {{EX:entryCSN}} than was previously
+recorded in the suffix entry's {{EX:contextCSN}} attribute, a
+checkpoint will be immediately written with the new value.
+
+The consumer also stores its replica state, which is the provider's
+{{EX:contextCSN}} received as a synchronization cookie, in the
+{{EX:contextCSN}} attribute of the suffix entry. The replica state
+maintained by a consumer server is used as the synchronization state
+indicator when it performs subsequent incremental synchronization
+with the provider server. It is also used as a provider-side
+synchronization state indicator when it functions as a secondary
+provider server in a cascading replication configuration. Since
+the consumer and provider state information is maintained in the
+same location within their respective databases, any consumer can
+be promoted to a provider (and vice versa) without any special
+actions.
+
+Because a general search filter can be used in the syncrepl
+specification, some entries in the context may be omitted from the
+synchronization content. The syncrepl engine creates a glue entry
+to fill in the holes in the replica context if any part of the
+replica content is subordinate to the holes. The glue entries will
+not be returned in the search result unless the {{ManageDsaIT}}
+control is provided.
+
+Also as a consequence of the search filter used in the syncrepl
+specification, it is possible for a modification to remove an entry
+from the replication scope even though the entry has not been deleted
+on the provider. Logically the entry must be deleted on the consumer
+but in {{refreshOnly}} mode the provider cannot detect and propagate
+this change without the use of the session log.
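+For example, the {{ManageDsaIT}} control can be attached to an
+ordinary search with the {{EX:-M}} option of {{ldapsearch}}(1). This
+is a minimal sketch, assuming a consumer reachable at
+{{EX:ldap://consumer.example.com}} (an illustrative hostname):
+
+> # -M attaches the ManageDsaIT control, so glue entries created
+> # by the syncrepl engine are included in the search results
+> ldapsearch -x -M -H ldap://consumer.example.com \
+>       -b "dc=example,dc=com" "(objectClass=*)" dn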
+
+
+H3: Configuring Syncrepl
+
+Because syncrepl is a consumer-side replication engine, the syncrepl
+specification is defined in {{slapd.conf}}(5) of the consumer
+server, not in the provider server's configuration file. The initial
+loading of the replica content can be performed either by starting
+the syncrepl engine with no synchronization cookie or by populating
+the consumer replica by adding an {{TERM:LDIF}} file dumped as a
+backup at the provider.
+
+When loading from a backup, it is not required to perform the initial
+loading from an up-to-date backup of the provider content. The
+syncrepl engine will automatically synchronize the initial consumer
+replica to the current provider content. As a result, it is not
+required to stop the provider server in order to avoid the replica
+inconsistency caused by the updates to the provider content during
+the content backup and loading process.
+
+When replicating a large scale directory, especially in a bandwidth
+constrained environment, it is advised to load the consumer replica
+from a backup instead of performing a full initial load using
+syncrepl.
+
+
+H4: Set up the provider slapd
+
+The provider is implemented as an overlay, so the overlay itself
+must first be configured in {{slapd.conf}}(5) before it can be
+used. The provider has only two configuration directives, for setting
+checkpoints on the {{EX:contextCSN}} and for configuring the session
+log. Because the LDAP Sync search is subject to access control,
+proper access control privileges should be set up for the replicated
+content.
+
+The {{EX:contextCSN}} checkpoint is configured by the
+
+> syncprov-checkpoint <ops> <minutes>
+
+directive. Checkpoints are only tested after successful write
+operations. If {{EX:<ops>}} operations or more than {{EX:<minutes>}}
+minutes have passed since the last checkpoint, a new checkpoint is
+performed.
+
+The session log is configured by the
+
+> syncprov-sessionlog <size>
+
+directive, where {{EX:<size>}} is the maximum number of session log
+entries the session log can record. When a session log is configured,
+it is automatically used for all LDAP Sync searches within the
+database.
+
+Note that using the session log requires searching on the {{EX:entryUUID}}
+attribute. Setting an eq index on this attribute will greatly benefit
+the performance of the session log on the provider.
+
+A more complete example of the {{slapd.conf}}(5) content is thus:
+
+> database bdb
+> suffix dc=Example,dc=com
+> rootdn dc=Example,dc=com
+> directory /var/ldap/db
+> index objectclass,entryCSN,entryUUID eq
+>
+> overlay syncprov
+> syncprov-checkpoint 100 10
+> syncprov-sessionlog 100
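+Because the LDAP Sync search runs under the provider's access
+controls, the replication identity needs read access to the
+replicated content. This is a minimal sketch, assuming the consumer
+binds as {{EX:cn=syncuser,dc=example,dc=com}} (the identity used in
+the consumer example below):
+
+> # let the replication identity read the replicated content;
+> # "break" continues evaluation with later access directives
+> # for all other identities
+> access to *
+>       by dn.exact="cn=syncuser,dc=example,dc=com" read
+>       by * break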
+
+
+H4: Set up the consumer slapd
+
+The syncrepl replication is specified in the database section of
+{{slapd.conf}}(5) for the replica context. The syncrepl engine
+is backend independent and the directive can be defined with any
+database type.
+
+> database hdb
+> suffix dc=Example,dc=com
+> rootdn dc=Example,dc=com
+> directory /var/ldap/db
+> index objectclass,entryCSN,entryUUID eq
+>
+> syncrepl rid=123
+>          provider=ldap://provider.example.com:389
+>          type=refreshOnly
+>          interval=01:00:00:00
+>          searchbase="dc=example,dc=com"
+>          filter="(objectClass=organizationalPerson)"
+>          scope=sub
+>          attrs="cn,sn,ou,telephoneNumber,title,l"
+>          schemachecking=off
+>          bindmethod=simple
+>          binddn="cn=syncuser,dc=example,dc=com"
+>          credentials=secret
+
+In this example, the consumer will connect to the provider {{slapd}}(8)
+at port 389 of {{FILE:ldap://provider.example.com}} to perform a
+polling ({{refreshOnly}}) mode of synchronization once a day. It
+will bind as {{EX:cn=syncuser,dc=example,dc=com}} using simple
+authentication with password "secret". Note that the access control
+privileges of {{EX:cn=syncuser,dc=example,dc=com}} should be set
+appropriately in the provider to retrieve the desired replication
+content. Also the search limits must be high enough on the provider
+to allow the syncuser to retrieve a complete copy of the requested
+content. The consumer uses the rootdn to write to its database so
+it always has full permissions to write all content.
+
+The synchronization search in the above example will search for
+entries whose objectClass is organizationalPerson in the entire
+subtree rooted at {{EX:dc=example,dc=com}}. The requested attributes
+are {{EX:cn}}, {{EX:sn}}, {{EX:ou}}, {{EX:telephoneNumber}},
+{{EX:title}}, and {{EX:l}}. Schema checking is turned off, so
+that the consumer {{slapd}}(8) will not enforce entry schema
+checking when it processes updates from the provider {{slapd}}(8).
+
+For more detailed information on the syncrepl directive, see the
+{{SECT:syncrepl}} section of {{SECT:The slapd Configuration File}}
+chapter of this admin guide.
+
+
+H4: Start the provider and the consumer slapd
+
+The provider {{slapd}}(8) is not required to be restarted.
+{{EX:contextCSN}} is automatically generated as needed: it might be
+originally contained in the {{TERM:LDIF}} file, generated by
+{{slapadd}}(8), generated upon changes in the context, or generated
+when the first LDAP Sync search arrives at the provider. If an
+LDIF file is being loaded which did not previously contain the
+{{EX:contextCSN}}, the {{EX:-w}} option should be used with
+{{slapadd}}(8) to cause it to be generated. This will allow the
+server to start up a little quicker the first time it runs.
+
+When starting a consumer {{slapd}}(8), it is possible to provide
+a synchronization cookie as the {{EX:-c cookie}} command line option
+in order to start the synchronization from a specific state. The
+cookie is a comma separated list of name=value pairs. Currently
+supported syncrepl cookie fields are {{EX:csn=<csn>}} and
+{{EX:rid=<rid>}}. {{EX:<csn>}} represents the current synchronization
+state of the consumer replica. {{EX:<rid>}} identifies a consumer
+replica locally within the consumer server. It is used to relate the
+cookie to the syncrepl definition in {{slapd.conf}}(5) which has the
+matching replica identifier. The {{EX:<rid>}} must have no more than
+3 decimal digits. The command line cookie overrides the
+synchronization cookie stored in the consumer replica database.
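+For example, to prime the consumer from a provider backup and then
+start it from a specific synchronization state (the configuration
+file names and the cookie value are illustrative placeholders):
+
+> # dump the current provider content to an LDIF file
+> slapcat -f provider-slapd.conf -l backup.ldif
+>
+> # load the backup on the consumer host; -w updates the stored
+> # syncrepl context information if the LDIF has no contextCSN
+> slapadd -w -f consumer-slapd.conf -l backup.ldif
+>
+> # start the consumer, overriding the stored cookie
+> slapd -f consumer-slapd.conf -c "rid=123,csn=<csn>"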
+
+
+H2: N-Way Multi-Master
+
+
+H2: MirrorMode
 
-This section details commonly used {{slurpd}}(8) command-line options.
-
-> -d <level> | ?
-
-This option sets the slurpd debug level to {{EX:<level>}}. When
-level is a `?' character, the various debugging levels are
-printed and slapd exits, regardless of any other options
-you give it. Current debugging levels (a subset of slapd's
-debugging levels) are
-
-!block table; colaligns="RL"; align=Center; \
-	title="Table 10.1: Debugging Levels"
-Level	Description
-4	heavy trace debugging
-64	configuration file processing
-65535	enable all debugging
-!endblock
-
-Debugging levels are additive. That is, if you want heavy
-trace debugging and want to watch the config file being
-processed, you would set level to the sum of those two
-levels (in this case, 68).
-
-> -f <filename>
-
-This option specifies an alternate slapd configuration file.
-Slurpd does not have its own configuration file. Instead, all
-configuration information is read from the slapd
-configuration file.
-
-> -r <filename>
-
-This option specifies an alternate slapd replication log file.
-Under normal circumstances, slurpd reads the name of
-the slapd replication log file from the slapd configuration
-file. However, you can override this with the -r flag, to
-cause slurpd to process a different replication log file. See
-the {{SECT:Advanced slurpd Operation}} section for a discussion
-of how you might use this option.
-
-> -o
-
-Operate in "one-shot" mode. Under normal
-circumstances, when slurpd finishes processing a
-replication log, it remains active and periodically checks to
-see if new entries have been added to the replication log.
-In one-shot mode, by comparison, slurpd processes a
-replication log and exits immediately. If the -o option is
-given, the replication log file must be explicitly specified
-with the -r option. See the {{SECT:One-shot mode and reject files}}
-section for a discussion of this mode.
-
-> -t <directory>
-
-Specify an alternate directory for slurpd's temporary
-copies of replication logs. The default location is /usr/tmp.
-
-
-H2: Configuring slurpd and a slave slapd instance
-
-To bring up a replica slapd instance, you must configure
-the master and slave slapd instances for replication, then
-shut down the master slapd so you can copy the
-database. Finally, you bring up the master slapd instance,
-the slave slapd instance, and the slurpd instance. These
-steps are detailed in the following sections. You can set
-up as many slave slapd instances as you wish.
-
-
-H3: Set up the master {{slapd}}
-
-The following section assumes you have a properly
-working {{slapd}}(8) instance. To configure your working
-{{slapd}}(8) server as a replication master, you need
-to make the following changes to your {{slapd.conf}}(5).
-
-^ Add a {{EX:replica}} directive for each replica. The {{EX:binddn=}}
-parameter should match the {{EX:updatedn}} option in the
-corresponding slave slapd configuration file, and should
-name an entry with write permission to the slave database
-(e.g., an entry listed as {{EX:rootdn}}, or allowed access via
-{{EX:access}} directives in the slave slapd configuration file).
-
-+ Add a {{EX:replogfile}} directive, which tells slapd where to log
-changes. This file will be read by slurpd.
-
-
-H3: Set up the slave {{slapd}}
-
-Install the slapd software on the host which is to be the
-slave slapd server. The configuration of the slave server
-should be identical to that of the master, with the following
-exceptions:
-
-^ Do not include a {{EX:replica}} directive. While it is
-possible to create "chains" of replicas, in most cases this is
-inappropriate.
-
-+ Do not include a {{EX:replogfile}} directive.
-
-+ Do include an updatedn line.
The DN given should
-match the DN given in the {{EX:binddn=}} parameter of the
-corresponding {{EX:replica=}} directive in the master slapd
-config file.
-
-+ Make sure the DN given in the {{EX:updatedn}} directive has
-permission to write the database (e.g., it is listed as {{EX:rootdn}}
-or is allowed {{EX:access}} by one or more access directives).
-
-+ Use the {{EX:updateref}} directive to define the URL the
-slave should return if an update request is received.
-
-
-H3: Shut down the master {{slapd}}
-
-In order to ensure that the slave starts with an exact copy
-of the master's data, you must shut down the master
-slapd. Do this by sending the master slapd process an
-interrupt signal with {{EX:kill -INT <pid>}}, where
-{{EX:<pid>}} is the process-id of the master slapd process.
-
-If you like, you may restart the master slapd in read-only
-mode while you are replicating the database. During this
-time, the master slapd will return an "unwilling to perform"
-error to clients that attempt to modify data.
-
-
-H3: Copy the master slapd's database to the slave
-
-Copy the master's database(s) to the slave. For an
-{{TERM:LDBM}}-based database, you must copy all database
-files located in the database {{EX:directory}} specified in
-{{slapd.conf}}(5). Database files will have a different
-suffix depending on the underlying database package used.
-The current possibilities are
-
-!block table; align=Center; \
-	title="Table 10.2: Database File Suffixes"
-Suffix	Database
-{{EX:dbb}}	Berkeley DB B-tree backend
-{{EX:dbh}}	Berkeley DB hash backend
-{{EX:gdbm}}	GNU DBM backend
-!endblock
-
-In general, you should copy all files found in the database
-{{EX:directory}} unless you know they are not used by {{slapd}}(8).
-
-Note: The copy process assumes homogeneous servers with
-identically configured OpenLDAP installations.
-
-
-H3: Configure the master slapd for replication
-
-To configure slapd to generate a replication logfile, you
-add a "{{EX:replica}}" configuration option to the master slapd's
-config file. For example, if we wish to propagate changes
-to the slapd instance running on host
-{{EX:slave.example.com}}:
-
-> replica host=slave.example.com:389
->	binddn="cn=Replicator,dc=example,dc=com"
->	bindmethod=simple credentials=secret
-
-In this example, changes will be sent to port 389 (the
-standard LDAP port) on host slave.example.com. The slurpd
-process will bind to the slave slapd as
-"{{EX:cn=Replicator,dc=example,dc=com}}" using simple authentication
-with password "{{EX:secret}}". Note that the DN given by the {{EX:binddn=}}
-directive must either exist in the slave slapd's database (or be
-the rootdn specified in the slapd config file) in order for the
-bind operation to succeed. The DN should also be listed as
-the {{EX:updatedn}} for the database in the slave's {{slapd.conf}}(5).
-
-Note: The use of strong authentication and transport security
-is highly recommended.
-
-
-H3: Restart the master slapd and start the slave slapd
-
-Restart the master slapd process. To check that it is
-generating replication logs, perform a modification of any
-entry in the database, and check that data has been
-written to the log file.
-
-
-H3: Start slurpd
-
-Start the slurpd process. Slurpd should immediately send
-the test modification you made to the slave slapd. Watch
-the slave slapd's logfile to be sure that the modification
-was sent.
-
-> slurpd -f <slapd-config-file>
-
-
-H2: Advanced slurpd Operation
-
-H3: Replication errors
-
-When slurpd propagates a change to a slave slapd and
-receives an error return code, it writes the reason for the
-error and the replication record to a reject file. The reject
-file is located in the same directory with the per-replica
-replication logfile, and has the same name, but with the
-string "{{F:.rej}}" appended. For example, for a replica running
-on host {{EX:slave.example.com}}, port 389, the reject file, if it
-exists, will be named
-
-> /usr/local/var/openldap/replog.slave.example.com:389.rej
-
-A sample rejection log entry follows:
-
-> ERROR: No such attribute
-> replica: slave.example.com:389
-> time: 809618633
-> dn: uid=bjensen, dc=example, dc=com
-> changetype: modify
-> replace: description
-> description: A dreamer...
-> -
-> replace: modifiersName
-> modifiersName: uid=bjensen, dc=example, dc=com
-> -
-> replace: modifyTimestamp
-> modifyTimestamp: 20000805073308Z
-> -
-
-Note that this is precisely the same format as the original
-replication log entry, but with an {{EX:ERROR}} line prepended to
-the entry.
-
-
-H3: One-shot mode and reject files
-
-It is possible to use slurpd to process a rejection log with
-its "one-shot mode." In normal operation, slurpd watches
-for more replication records to be appended to the
-replication log file. In one-shot mode, by contrast, slurpd
-processes a single log file and exits. Slurpd ignores
-{{EX:ERROR}} lines at the beginning of replication log entries, so
-it's not necessary to edit them out before feeding it the
-rejection log.
-
-To use one-shot mode, specify the name of the rejection
-log on the command line as the argument to the -r flag,
-and specify one-shot mode with the -o flag. For example,
-to process the rejection log file
-{{F:/usr/local/var/openldap/replog.slave.example.com:389.rej}}
-and exit, use the command
-
-> slurpd -r /usr/local/var/openldap/replog.slave.example.com:389.rej -o
-
-
-H2: Replication to an X.500 DSA
-
-In mixed environments where both {{TERM:X.500}} DSAs and slapd
-are used, it may be desirable to replicate changes from a
-slapd directory server to an X.500 {{TERM:DSA}}. This section
-discusses issues involved with this method of replication,
-and describes the currently-available facilities.
-
-To propagate changes from a slapd directory server to an
-X.500 DSA, slurpd runs on the master slapd host, and
-sends changes to an ldapd which acts as a gateway to
-the X.500 DSA:
-
-!import "replication.gif"; align="center"; \
-	title="Replication from slapd to an X.500 DSA"
-FT: Figure 10.1: Replication from slapd to an X.500 DSA
-
-Note that the X.500 DSA must be a read-only copy. Since
-the replication is one-way, updates from {{TERM:DAP}} clients
-connecting to the X.500 DSA simply cannot be handled.
-
-A problem arises where attribute names differ between the
-slapd directory server and the X.500 DSA. At present,
-slapd and slurpd do not support selective replication of
-attributes, nor do they support translation of attribute
-names and values. For example, slurpd will attempt to
-update the {{EX:modifiersName}} and {{EX:modifyTimeStamp}}
-attributes on the slave it connects to. However, the X.500
-DSA may expect these attributes to be named
-{{EX:lastModifiedBy}} and {{EX:lastModifiedTime}}.
- -A solution to this attribute naming problem is to have the -ldapd read oidtables that map {{EX:modifiersName}} to the -Object Identifier ({{TERM:OID}}) for the {{EX:lastModifiedBy}} attribute and -{{EX:modifyTimeStamp}} to the OID for the {{EX:lastModifiedTime}} -attribute. Since attribute names are carried as OIDs over -DAP, this should perform the appropriate translation of -attribute names.