resilient enterprise deployment.
{{PRD:OpenLDAP}} has various configuration options for creating a replicated
-directory. The following sections will discuss these.
+directory. In previous releases, replication was discussed in terms of
+a {{master}} server and some number of {{slave}} servers. A master
+accepted directory updates from other clients, and a slave only
+accepted updates from a (single) master. The replication structure
+was rigidly defined and any particular database could only fulfill
+a single role, either master or slave.
+
+As OpenLDAP now supports a wide variety of replication topologies, these
+terms have been deprecated in favor of {{provider}} and
+{{consumer}}: A provider replicates directory updates to consumers;
+consumers receive replication updates from providers. Unlike the
+rigidly defined master/slave relationships, provider/consumer roles
+are quite fluid: replication updates received in a consumer can be
+further propagated by that consumer to other servers, so a consumer
+can also act simultaneously as a provider. The following sections will
+discuss the various replication options that are available.
-H2: Push Based
-
-
-H3: Replacing Slurpd
-
+H2: Pull Based
+
+H3: LDAP Sync Replication
+
+The {{TERM:LDAP Sync}} Replication engine, {{TERM:syncrepl}} for
+short, is a consumer-side replication engine that enables the
+consumer {{TERM:LDAP}} server to maintain a shadow copy of a
+{{TERM:DIT}} fragment. A syncrepl engine resides on the consumer side
+and executes as one of the {{slapd}}(8) threads. It creates and maintains a
+consumer replica by connecting to the replication provider to perform
+the initial DIT content load followed either by periodic content
+polling or by timely updates upon content changes.
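+
+As an illustrative sketch (the provider URI, suffix, and credentials
+here are placeholders rather than values used elsewhere in this guide),
+a minimal syncrepl stanza in a consumer's {{slapd.conf}}(5) might look
+like:
+
+> # hypothetical consumer: shadow dc=example,dc=com, polling hourly
+> syncrepl rid=001
+> provider=ldap://provider.example.com
+> bindmethod=simple
+> binddn="cn=replicator,dc=example,dc=com"
+> credentials=secret
+> searchbase="dc=example,dc=com"
+> type=refreshOnly
+> interval=00:01:00:00
+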
-{{Slurpd}} replication has been deprecated in favor of Syncrepl replication and
-has been completely removed from OpenLDAP 2.4.
+Syncrepl uses the LDAP Content Synchronization (or LDAP Sync for
+short) protocol as the replica synchronization protocol. LDAP Sync
+provides stateful replication that supports both pull-based and
+push-based synchronization and does not mandate the use of a history
+store.
-{{Why was it replaced?}}
+Syncrepl keeps track of the status of the replication content by
+maintaining and exchanging synchronization cookies. Because the
+syncrepl consumer and provider maintain their content status, the
+consumer can poll the provider content to perform incremental
+synchronization by asking for the entries required to make the
+consumer replica up-to-date with the provider content. Syncrepl
+also enables convenient management of replicas by maintaining replica
+status. The consumer replica can be constructed from a consumer-side
+or a provider-side backup at any synchronization status. Syncrepl
+can then automatically bring the consumer replica up to date
+with the current provider content.
-The {{slurpd}} daemon was the original replication mechanism inherited from
-UMich's LDAP and operates in push mode: the master pushes changes to the
-slaves. It has been replaced for many reasons, in brief:
+Syncrepl supports both pull-based and push-based synchronization.
+In its basic refreshOnly synchronization mode, the provider uses
+pull-based synchronization where the consumer servers need not be
+tracked and no history information is maintained. The information
+required for the provider to process periodic polling requests is
+contained in the synchronization cookie of the request itself. To
+optimize the pull-based synchronization, syncrepl utilizes the
+present phase of the LDAP Sync protocol as well as its delete phase,
+instead of falling back on frequent full reloads. To further optimize
+the pull-based synchronization, the provider can maintain a per-scope
+session log as a history store. In its refreshAndPersist mode of
+synchronization, the provider uses a push-based synchronization.
+The provider keeps track of the consumer servers that have requested
+a persistent search and sends them necessary updates as the provider
+replication content gets modified.
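+
+In the consumer configuration, these two modes correspond to the
+{{EX:type}} keyword of the syncrepl directive. The following fragments
+(parts of a full syncrepl directive, not complete on their own)
+illustrate the difference:
+
+> # pull-based: poll the provider once an hour
+> type=refreshOnly
+> interval=00:01:00:00
+>
+> # push-based: keep a persistent search open, retrying on failure
+> type=refreshAndPersist
+> retry="60 10 300 +"
+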
- * It is not reliable
- * It is extremely sensitive to the ordering of records in the replog
- * It can easily go out of sync, at which point manual intervention is
- required to resync the slave database with the master directory
- * It isn't very tolerant of unavailable servers. If a slave goes down
- for a long time, the replog may grow to a size that's too large for
- slurpd to process
+With syncrepl, a consumer server can create a replica without
+changing the provider's configurations and without restarting the
+provider server, if the consumer server has appropriate access
+privileges for the DIT fragment to be replicated. The consumer
+server can likewise stop replication without any provider-side
+changes or restart.
-{{What was it replaced with?}}
+Syncrepl supports both partial and sparse replication. The shadow
+DIT fragment is defined by general search criteria consisting of a
+base, scope, filter, and attribute list. The replica content is
+also subject to the access privileges of the bind identity of the
+syncrepl replication connection.
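+
+For example, a consumer could shadow only the {{EX:person}} entries
+beneath one subtree, and only a few of their attributes, with search
+parameters along these lines (again fragments of a syncrepl directive,
+using placeholder names):
+
+> searchbase="ou=People,dc=example,dc=com"
+> scope=sub
+> filter="(objectClass=person)"
+> attrs="cn,sn,mail"
+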
-Syncrepl
-{{Why is Syncrepl better?}}
+H4: The LDAP Content Synchronization Protocol
- * Syncrepl is self-synchronizing; you can start with a database in any
- state from totally empty to fully synced and it will automatically do
- the right thing to achieve and maintain synchronization
- * Syncrepl can operate in either direction
- * Data updates can be minimal or maximal
+The LDAP Sync protocol allows a client to maintain a synchronized
+copy of a DIT fragment. The LDAP Sync operation is defined as a set
+of controls and other protocol elements which extend the LDAP search
+operation. This section introduces the LDAP Content Sync protocol
+only briefly. For more information, refer to {{REF:RFC4533}}.
-{{How do I implement a pushed based replication system using Syncrepl?}}
+The LDAP Sync protocol supports both polling and listening for
+changes by defining two respective synchronization operations:
+{{refreshOnly}} and {{refreshAndPersist}}. Polling is implemented
+by the {{refreshOnly}} operation. The client copy is synchronized
+to the server copy at the time of polling. The server finishes the
+search operation by returning {{SearchResultDone}}, as in a normal
+search. Listening is
+implemented by the {{refreshAndPersist}} operation. Instead of
+finishing the search after returning all entries currently matching
+the search criteria, the synchronization search remains persistent
+in the server. Subsequent updates to the synchronization content
+in the server cause additional entry updates to be sent to the
+client.
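+
+OpenLDAP's {{ldapsearch}}(1) exposes these operations through its
+{{EX:sync}} search extension, which can be useful for observing the
+protocol (the host and base below are placeholders, and a bind
+identity with sufficient access may be required):
+
+> # refreshOnly: a one-shot synchronization search
+> ldapsearch -x -H ldap://provider.example.com -b "dc=example,dc=com" -E sync=ro
+>
+> # refreshAndPersist: remain connected and stream subsequent updates
+> ldapsearch -x -H ldap://provider.example.com -b "dc=example,dc=com" -E sync=rp
+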
-The easiest way is to point an LDAP backend ({{SECT: Backends}} and {{slapd-ldap(8)}})
-to your slave directory and setup Syncrepl to point to your Master database.
+The {{refreshOnly}} operation and the refresh stage of the
+{{refreshAndPersist}} operation can be performed with a present
+phase or a delete phase.
-If you imagine Syncrepl pulling down changes from the Master server, and then
-pushing those changes out to your slave servers via {{slapd-ldap(8)}}. This is
-called Syncrepl Proxy Mode. You can also use Syncrepl Multi-proxy mode:
+In the present phase, the server sends the client the entries updated
+within the search scope since the last synchronization. The server
+sends all requested attributes of the updated entries, whether
+changed or not. For each unchanged entry that remains in the scope, the
+server sends a present message consisting only of the name of the
+entry and the synchronization control representing state present.
+The present message does not contain any attributes of the entry.
+After the client receives all update and present entries, it can
+reliably determine the new client copy by adding the entries added
+to the server, by replacing the entries modified at the server, and
+by deleting entries in the client copy which have not been updated
+nor specified as being present at the server.
-!import "push-based-complete.png"; align="center"; title="Syncrepl Proxy Mode"
-FT[align="Center"] Figure X.Y: Replacing slurpd
+The transmission of the updated entries in the delete phase is the
+same as in the present phase. The server sends all the requested
+attributes of the entries updated within the search scope since the
+last synchronization to the client. In the delete phase, however,
+the server sends a delete message for each entry deleted from the
+search scope, instead of sending present messages. The delete
+message consists only of the name of the entry and the synchronization
+control representing state delete. The new client copy can be
+determined by adding, modifying, and removing entries according to
+the synchronization control attached to the {{SearchResultEntry}}
+message.
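+
+The synchronization control in question is the Sync State Control of
+{{REF:RFC4533}}, whose value encodes the entry's state alongside its
+{{EX:entryUUID}}:
+
+> syncStateValue ::= SEQUENCE {
+>     state ENUMERATED {
+>         present (0),
+>         add (1),
+>         modify (2),
+>         delete (3)
+>     },
+>     entryUUID syncUUID,
+>     cookie    syncCookie OPTIONAL
+> }
+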
-The following example is for a self-contained push-based replication solution:
+If the LDAP Sync server maintains a history store and can determine
+which entries are scoped out of the client copy since the last
+synchronization time, the server can use the delete phase.
+If the server does not maintain any history store, cannot determine
+the scoped-out entries from the history store, or the history store
+does not cover the outdated synchronization state of the client,
+the server should use the present phase. The use of the present
+phase is much more efficient than a full content reload in terms
+of the synchronization traffic. To reduce the synchronization
+traffic further, the LDAP Sync protocol also provides several
+optimizations such as the transmission of the normalized {{EX:entryUUID}}s
+and the transmission of multiple {{EX:entryUUID}}s in a single
+{{syncIdSet}} message.
-> #######################################################################
-> # Standard OpenLDAP Master/Provider
-> #######################################################################
->
-> include /usr/local/etc/openldap/schema/core.schema
-> include /usr/local/etc/openldap/schema/cosine.schema
-> include /usr/local/etc/openldap/schema/nis.schema
-> include /usr/local/etc/openldap/schema/inetorgperson.schema
->
-> include /usr/local/etc/openldap/slapd.acl
->
-> modulepath /usr/local/libexec/openldap
-> moduleload back_hdb.la
-> moduleload syncprov.la
-> moduleload back_monitor.la
-> moduleload back_ldap.la
->
-> pidfile /usr/local/var/slapd.pid
-> argsfile /usr/local/var/slapd.args
->
-> loglevel sync stats
->
-> database hdb
-> suffix "dc=suretecsystems,dc=com"
-> directory /usr/local/var/openldap-data
->
-> checkpoint 1024 5
-> cachesize 10000
-> idlcachesize 10000
->
-> index objectClass eq
-> # rest of indexes
-> index default sub
->
-> rootdn "cn=admin,dc=suretecsystems,dc=com"
-> rootpw testing
->
-> # syncprov specific indexing
-> index entryCSN eq
-> index entryUUID eq
->
-> # syncrepl Provider for primary db
-> overlay syncprov
-> syncprov-checkpoint 1000 60
->
-> # Let the replica DN have limitless searches
-> limits dn.exact="cn=replicator,dc=suretecsystems,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
->
-> database monitor
->
-> database config
-> rootpw testing
->
-> ##############################################################################
-> # Consumer Proxy that pulls in data via Syncrepl and pushes out via slapd-ldap
-> ##############################################################################
->
-> database ldap
-> # ignore conflicts with other databases, as we need to push out to same suffix
-> hidden on
-> suffix "dc=suretecsystems,dc=com"
-> rootdn "cn=slapd-ldap"
-> uri ldap://localhost:9012/
->
-> lastmod on
->
-> # We don't need any access to this DSA
-> restrict all
->
-> acl-bind bindmethod=simple
-> binddn="cn=replicator,dc=suretecsystems,dc=com"
-> credentials=testing
->
-> syncrepl rid=001
-> provider=ldap://localhost:9011/
-> binddn="cn=replicator,dc=suretecsystems,dc=com"
-> bindmethod=simple
-> credentials=testing
-> searchbase="dc=suretecsystems,dc=com"
-> type=refreshAndPersist
-> retry="5 5 300 5"
->
-> overlay syncprov
+At the end of the {{refreshOnly}} synchronization, the server sends
+a synchronization cookie to the client as a state indicator of the
+client copy after the synchronization is completed. The client
+will present the received cookie when it requests the next incremental
+synchronization from the server.
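+
+In OpenLDAP the cookie is an opaque string carrying at least the
+replica ID and a Change Sequence Number (CSN). A typical value looks
+something like the following, although the exact layout is an
+implementation detail and can vary between releases:
+
+> # illustrative cookie value
+> rid=001,csn=20110223210932.123456Z#000000#000#000000
+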
-A replica configuration for this type of setup could be:
-
-> #######################################################################
-> # Standard OpenLDAP Slave without Syncrepl
-> #######################################################################
->
-> include /usr/local/etc/openldap/schema/core.schema
-> include /usr/local/etc/openldap/schema/cosine.schema
-> include /usr/local/etc/openldap/schema/nis.schema
-> include /usr/local/etc/openldap/schema/inetorgperson.schema
->
-> include /usr/local/etc/openldap/slapd.acl
->
-> modulepath /usr/local/libexec/openldap
-> moduleload back_hdb.la
-> moduleload syncprov.la
-> moduleload back_monitor.la
-> moduleload back_ldap.la
->
-> pidfile /usr/local/var/slapd.pid
-> argsfile /usr/local/var/slapd.args
->
-> loglevel sync stats
->
-> database hdb
-> suffix "dc=suretecsystems,dc=com"
-> directory /usr/local/var/openldap-slave/data
->
-> checkpoint 1024 5
-> cachesize 10000
-> idlcachesize 10000
->
-> index objectClass eq
-> # rest of indexes
-> index default sub
->
-> rootdn "cn=admin,dc=suretecsystems,dc=com"
-> rootpw testing
->
-> # Let the replica DN have limitless searches
-> limits dn.exact="cn=replicator,dc=suretecsystems,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
->
-> updatedn "cn=replicator,dc=suretecsystems,dc=com"
->
-> # Refer updates to the master
-> updateref ldap://localhost:9011
->
-> database monitor
->
-> database config
-> rootpw testing
-
-You can see we use the {{updatedn}} directive here and example ACLs ({{F:usr/local/etc/openldap/slapd.acl}}) for this could be:
-
-> # Give the replica DN unlimited read access. This ACL may need to be
-> # merged with other ACL statements.
->
-> access to *
-> by dn.base="cn=replicator,dc=suretecsystems,dc=com" write
-> by * break
->
-> access to dn.base=""
-> by * read
->
-> access to dn.base="cn=Subschema"
-> by * read
->
-> access to dn.subtree="cn=Monitor"
-> by dn.exact="uid=admin,dc=suretecsystems,dc=com" write
-> by users read
-> by * none
->
-> access to *
-> by self write
-> by * read
-
-In order to support more replicas, just add more {{database ldap}} sections and
-increment the {{syncrepl rid}} number accordingly.
-
-Note: You must populate the Master and Slave directories with the same data,
-unlike when using normal Syncrepl
-
-If you do not have access to modify the master directory configuration you can
-configure a standalone ldap proxy, which might look like:
-
-!import "push-based-standalone.png"; align="center"; title="Syncrepl Standalone Proxy Mode"
-FT[align="Center"] Figure X.Y: Replacing slurpd with a standalone version
-
-The following configuration is an example of a standalone LDAP Proxy:
-
-> include /usr/local/etc/openldap/schema/core.schema
-> include /usr/local/etc/openldap/schema/cosine.schema
-> include /usr/local/etc/openldap/schema/nis.schema
-> include /usr/local/etc/openldap/schema/inetorgperson.schema
->
-> include /usr/local/etc/openldap/slapd.acl
->
-> modulepath /usr/local/libexec/openldap
-> moduleload syncprov.la
-> moduleload back_ldap.la
->
-> ##############################################################################
-> # Consumer Proxy that pulls in data via Syncrepl and pushes out via slapd-ldap
-> ##############################################################################
->
-> database ldap
-> # ignore conflicts with other databases, as we need to push out to same suffix
-> hidden on
-> suffix "dc=suretecsystems,dc=com"
-> rootdn "cn=slapd-ldap"
-> uri ldap://localhost:9012/
->
-> lastmod on
->
-> # We don't need any access to this DSA
-> restrict all
->
-> acl-bind bindmethod=simple
-> binddn="cn=replicator,dc=suretecsystems,dc=com"
-> credentials=testing
->
-> syncrepl rid=001
-> provider=ldap://localhost:9011/
-> binddn="cn=replicator,dc=suretecsystems,dc=com"
-> bindmethod=simple
-> credentials=testing
-> searchbase="dc=suretecsystems,dc=com"
-> type=refreshAndPersist
-> retry="5 5 300 5"
->
-> overlay syncprov
-
-As you can see, you can let your imagination go wild using Syncrepl and
-{{slapd-ldap(8)}} tailoring your replication to fit your specific network
-topology.
-
-H2: Pull Based
-
-H3: LDAP Sync Replication
-
-The {{TERM:LDAP Sync}} Replication engine, {{TERM:syncrepl}} for
-short, is a consumer-side replication engine that enables the
-consumer {{TERM:LDAP}} server to maintain a shadow copy of a
-{{TERM:DIT}} fragment. A syncrepl engine resides at the consumer-side
-as one of the {{slapd}}(8) threads. It creates and maintains a
-consumer replica by connecting to the replication provider to perform
-the initial DIT content load followed either by periodic content
-polling or by timely updates upon content changes.
-
-Syncrepl uses the LDAP Content Synchronization (or LDAP Sync for
-short) protocol as the replica synchronization protocol. It provides
-a stateful replication which supports both pull-based and push-based
-synchronization and does not mandate the use of a history store.
-
-Syncrepl keeps track of the status of the replication content by
-maintaining and exchanging synchronization cookies. Because the
-syncrepl consumer and provider maintain their content status, the
-consumer can poll the provider content to perform incremental
-synchronization by asking for the entries required to make the
-consumer replica up-to-date with the provider content. Syncrepl
-also enables convenient management of replicas by maintaining replica
-status. The consumer replica can be constructed from a consumer-side
-or a provider-side backup at any synchronization status. Syncrepl
-can automatically resynchronize the consumer replica up-to-date
-with the current provider content.
-
-Syncrepl supports both pull-based and push-based synchronization.
-In its basic refreshOnly synchronization mode, the provider uses
-pull-based synchronization where the consumer servers need not be
-tracked and no history information is maintained. The information
-required for the provider to process periodic polling requests is
-contained in the synchronization cookie of the request itself. To
-optimize the pull-based synchronization, syncrepl utilizes the
-present phase of the LDAP Sync protocol as well as its delete phase,
-instead of falling back on frequent full reloads. To further optimize
-the pull-based synchronization, the provider can maintain a per-scope
-session log as a history store. In its refreshAndPersist mode of
-synchronization, the provider uses a push-based synchronization.
-The provider keeps track of the consumer servers that have requested
-a persistent search and sends them necessary updates as the provider
-replication content gets modified.
-
-With syncrepl, a consumer server can create a replica without
-changing the provider's configurations and without restarting the
-provider server, if the consumer server has appropriate access
-privileges for the DIT fragment to be replicated. The consumer
-server can stop the replication also without the need for provider-side
-changes and restart.
-
-Syncrepl supports both partial and sparse replications. The shadow
-DIT fragment is defined by a general search criteria consisting of
-base, scope, filter, and attribute list. The replica content is
-also subject to the access privileges of the bind identity of the
-syncrepl replication connection.
-
-
-H4: The LDAP Content Synchronization Protocol
-
-The LDAP Sync protocol allows a client to maintain a synchronized
-copy of a DIT fragment. The LDAP Sync operation is defined as a set
-of controls and other protocol elements which extend the LDAP search
-operation. This section introduces the LDAP Content Sync protocol
-only briefly. For more information, refer to {{REF:RFC4533}}.
-
-The LDAP Sync protocol supports both polling and listening for
-changes by defining two respective synchronization operations:
-{{refreshOnly}} and {{refreshAndPersist}}. Polling is implemented
-by the {{refreshOnly}} operation. The client copy is synchronized
-to the server copy at the time of polling. The server finishes the
-search operation by returning {{SearchResultDone}} at the end of
-the search operation as in the normal search. The listening is
-implemented by the {{refreshAndPersist}} operation. Instead of
-finishing the search after returning all entries currently matching
-the search criteria, the synchronization search remains persistent
-in the server. Subsequent updates to the synchronization content
-in the server cause additional entry updates to be sent to the
-client.
-
-The {{refreshOnly}} operation and the refresh stage of the
-{{refreshAndPersist}} operation can be performed with a present
-phase or a delete phase.
-
-In the present phase, the server sends the client the entries updated
-within the search scope since the last synchronization. The server
-sends all requested attributes, be it changed or not, of the updated
-entries. For each unchanged entry which remains in the scope, the
-server sends a present message consisting only of the name of the
-entry and the synchronization control representing state present.
-The present message does not contain any attributes of the entry.
-After the client receives all update and present entries, it can
-reliably determine the new client copy by adding the entries added
-to the server, by replacing the entries modified at the server, and
-by deleting entries in the client copy which have not been updated
-nor specified as being present at the server.
-
-The transmission of the updated entries in the delete phase is the
-same as in the present phase. The server sends all the requested
-attributes of the entries updated within the search scope since the
-last synchronization to the client. In the delete phase, however,
-the server sends a delete message for each entry deleted from the
-search scope, instead of sending present messages. The delete
-message consists only of the name of the entry and the synchronization
-control representing state delete. The new client copy can be
-determined by adding, modifying, and removing entries according to
-the synchronization control attached to the {{SearchResultEntry}}
-message.
-
-In the case that the LDAP Sync server maintains a history store and
-can determine which entries are scoped out of the client copy since
-the last synchronization time, the server can use the delete phase.
-If the server does not maintain any history store, cannot determine
-the scoped-out entries from the history store, or the history store
-does not cover the outdated synchronization state of the client,
-the server should use the present phase. The use of the present
-phase is much more efficient than a full content reload in terms
-of the synchronization traffic. To reduce the synchronization
-traffic further, the LDAP Sync protocol also provides several
-optimizations such as the transmission of the normalized {{EX:entryUUID}}s
-and the transmission of multiple {{EX:entryUUIDs}} in a single
-{{syncIdSet}} message.
-
-At the end of the {{refreshOnly}} synchronization, the server sends
-a synchronization cookie to the client as a state indicator of the
-client copy after the synchronization is completed. The client
-will present the received cookie when it requests the next incremental
-synchronization to the server.
-
-When {{refreshAndPersist}} synchronization is used, the server sends
-a synchronization cookie at the end of the refresh stage by sending
-a Sync Info message with TRUE refreshDone. It also sends a
-synchronization cookie by attaching it to {{SearchResultEntry}}
-generated in the persist stage of the synchronization search. During
-the persist stage, the server can also send a Sync Info message
-containing the synchronization cookie at any time the server wants
-to update the client-side state indicator. The server also updates
-a synchronization indicator of the client at the end of the persist
-stage.
+When {{refreshAndPersist}} synchronization is used, the server sends
+a synchronization cookie at the end of the refresh stage by sending
+a Sync Info message with refreshDone set to TRUE. It also sends a
+synchronization cookie by attaching it to {{SearchResultEntry}} messages
+generated in the persist stage of the synchronization search. During
+the persist stage, the server can also send a Sync Info message
+containing the synchronization cookie at any time the server wants
+to update the client-side state indicator. The server also updates
+a synchronization indicator of the client at the end of the persist
+stage.
In the LDAP Sync protocol, entries are uniquely identified by the
{{EX:entryUUID}} attribute value. It can function as a reliable
For configuration, please see the {{SECT:Delta-syncrepl}} section.
+H2: Push Based
+
+
+H3: Replacing Slurpd
+
+{{Slurpd}} replication has been deprecated in favor of Syncrepl replication and
+has been completely removed from OpenLDAP 2.4.
+
+{{Why was it replaced?}}
+
+The {{slurpd}} daemon was the original replication mechanism inherited from
+UMich's LDAP and operates in push mode: the master pushes changes to the
+slaves. It has been replaced for many reasons, in brief:
+
+ * It is not reliable
+ * It is extremely sensitive to the ordering of records in the replog
+ * It can easily go out of sync, at which point manual intervention is
+ required to resync the slave database with the master directory
+ * It isn't very tolerant of unavailable servers. If a slave goes down
+ for a long time, the replog may grow to a size that's too large for
+ slurpd to process
+
+{{What was it replaced with?}}
+
+Syncrepl
+
+{{Why is Syncrepl better?}}
+
+ * Syncrepl is self-synchronizing; you can start with a database in any
+ state from totally empty to fully synced and it will automatically do
+ the right thing to achieve and maintain synchronization
+ * Syncrepl can operate in either direction
+ * Data updates can be minimal or maximal
+
+{{How do I implement a push-based replication system using Syncrepl?}}
+
+The easiest way is to point an LDAP backend ({{SECT:Backends}} and {{slapd-ldap(8)}})
+to your slave directory and set up Syncrepl to point to your Master database.
+
+Syncrepl then pulls down changes from the Master server and pushes
+those changes out to your slave servers via {{slapd-ldap(8)}}. This is
+called Syncrepl Proxy Mode. You can also use Syncrepl Multi-proxy mode:
+
+!import "push-based-complete.png"; align="center"; title="Syncrepl Proxy Mode"
+FT[align="Center"] Figure X.Y: Replacing slurpd
+
+The following example is for a self-contained push-based replication solution:
+
+> #######################################################################
+> # Standard OpenLDAP Master/Provider
+> #######################################################################
+>
+> include /usr/local/etc/openldap/schema/core.schema
+> include /usr/local/etc/openldap/schema/cosine.schema
+> include /usr/local/etc/openldap/schema/nis.schema
+> include /usr/local/etc/openldap/schema/inetorgperson.schema
+>
+> include /usr/local/etc/openldap/slapd.acl
+>
+> modulepath /usr/local/libexec/openldap
+> moduleload back_hdb.la
+> moduleload syncprov.la
+> moduleload back_monitor.la
+> moduleload back_ldap.la
+>
+> pidfile /usr/local/var/slapd.pid
+> argsfile /usr/local/var/slapd.args
+>
+> loglevel sync stats
+>
+> database hdb
+> suffix "dc=suretecsystems,dc=com"
+> directory /usr/local/var/openldap-data
+>
+> checkpoint 1024 5
+> cachesize 10000
+> idlcachesize 10000
+>
+> index objectClass eq
+> # rest of indexes
+> index default sub
+>
+> rootdn "cn=admin,dc=suretecsystems,dc=com"
+> rootpw testing
+>
+> # syncprov specific indexing
+> index entryCSN eq
+> index entryUUID eq
+>
+> # syncrepl Provider for primary db
+> overlay syncprov
+> syncprov-checkpoint 1000 60
+>
+> # Let the replica DN have limitless searches
+> limits dn.exact="cn=replicator,dc=suretecsystems,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
+>
+> database monitor
+>
+> database config
+> rootpw testing
+>
+> ##############################################################################
+> # Consumer Proxy that pulls in data via Syncrepl and pushes out via slapd-ldap
+> ##############################################################################
+>
+> database ldap
+> # ignore conflicts with other databases, as we need to push out to same suffix
+> hidden on
+> suffix "dc=suretecsystems,dc=com"
+> rootdn "cn=slapd-ldap"
+> uri ldap://localhost:9012/
+>
+> lastmod on
+>
+> # We don't need any access to this DSA
+> restrict all
+>
+> acl-bind bindmethod=simple
+> binddn="cn=replicator,dc=suretecsystems,dc=com"
+> credentials=testing
+>
+> syncrepl rid=001
+> provider=ldap://localhost:9011/
+> binddn="cn=replicator,dc=suretecsystems,dc=com"
+> bindmethod=simple
+> credentials=testing
+> searchbase="dc=suretecsystems,dc=com"
+> type=refreshAndPersist
+> retry="5 5 300 5"
+>
+> overlay syncprov
+
+A replica configuration for this type of setup could be:
+
+> #######################################################################
+> # Standard OpenLDAP Slave without Syncrepl
+> #######################################################################
+>
+> include /usr/local/etc/openldap/schema/core.schema
+> include /usr/local/etc/openldap/schema/cosine.schema
+> include /usr/local/etc/openldap/schema/nis.schema
+> include /usr/local/etc/openldap/schema/inetorgperson.schema
+>
+> include /usr/local/etc/openldap/slapd.acl
+>
+> modulepath /usr/local/libexec/openldap
+> moduleload back_hdb.la
+> moduleload syncprov.la
+> moduleload back_monitor.la
+> moduleload back_ldap.la
+>
+> pidfile /usr/local/var/slapd.pid
+> argsfile /usr/local/var/slapd.args
+>
+> loglevel sync stats
+>
+> database hdb
+> suffix "dc=suretecsystems,dc=com"
+> directory /usr/local/var/openldap-slave/data
+>
+> checkpoint 1024 5
+> cachesize 10000
+> idlcachesize 10000
+>
+> index objectClass eq
+> # rest of indexes
+> index default sub
+>
+> rootdn "cn=admin,dc=suretecsystems,dc=com"
+> rootpw testing
+>
+> # Let the replica DN have limitless searches
+> limits dn.exact="cn=replicator,dc=suretecsystems,dc=com" time.soft=unlimited time.hard=unlimited size.soft=unlimited size.hard=unlimited
+>
+> updatedn "cn=replicator,dc=suretecsystems,dc=com"
+>
+> # Refer updates to the master
+> updateref ldap://localhost:9011
+>
+> database monitor
+>
+> database config
+> rootpw testing
+
+You can see we use the {{updatedn}} directive here. Example ACLs
+({{F:/usr/local/etc/openldap/slapd.acl}}) for this setup could be:
+
+> # Give the replica DN unlimited read access. This ACL may need to be
+> # merged with other ACL statements.
+>
+> access to *
+> by dn.base="cn=replicator,dc=suretecsystems,dc=com" write
+> by * break
+>
+> access to dn.base=""
+> by * read
+>
+> access to dn.base="cn=Subschema"
+> by * read
+>
+> access to dn.subtree="cn=Monitor"
+> by dn.exact="uid=admin,dc=suretecsystems,dc=com" write
+> by users read
+> by * none
+>
+> access to *
+> by self write
+> by * read
+
+In order to support more replicas, just add more {{database ldap}} sections and
+increment the {{syncrepl rid}} number accordingly.
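+
+For example, a second slave (assumed here to listen on port 9013, a
+value not taken from the configurations above) could be fed by
+appending another proxy database:
+
+> # second consumer proxy, pushing to a hypothetical slave on port 9013
+> database ldap
+> hidden on
+> suffix "dc=suretecsystems,dc=com"
+> rootdn "cn=slapd-ldap"
+> uri ldap://localhost:9013/
+>
+> lastmod on
+> restrict all
+>
+> acl-bind bindmethod=simple
+> binddn="cn=replicator,dc=suretecsystems,dc=com"
+> credentials=testing
+>
+> syncrepl rid=002
+> provider=ldap://localhost:9011/
+> binddn="cn=replicator,dc=suretecsystems,dc=com"
+> bindmethod=simple
+> credentials=testing
+> searchbase="dc=suretecsystems,dc=com"
+> type=refreshAndPersist
+> retry="5 5 300 5"
+>
+> overlay syncprov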
+
+Note: You must populate the Master and Slave directories with the same data,
+unlike when using normal Syncrepl.
+
+If you do not have access to modify the master directory configuration, you can
+configure a standalone LDAP proxy, which might look like:
+
+!import "push-based-standalone.png"; align="center"; title="Syncrepl Standalone Proxy Mode"
+FT[align="Center"] Figure X.Y: Replacing slurpd with a standalone version
+
+The following configuration is an example of a standalone LDAP Proxy:
+
+> include /usr/local/etc/openldap/schema/core.schema
+> include /usr/local/etc/openldap/schema/cosine.schema
+> include /usr/local/etc/openldap/schema/nis.schema
+> include /usr/local/etc/openldap/schema/inetorgperson.schema
+>
+> include /usr/local/etc/openldap/slapd.acl
+>
+> modulepath /usr/local/libexec/openldap
+> moduleload syncprov.la
+> moduleload back_ldap.la
+>
+> ##############################################################################
+> # Consumer Proxy that pulls in data via Syncrepl and pushes out via slapd-ldap
+> ##############################################################################
+>
+> database ldap
+> # ignore conflicts with other databases, as we need to push out to same suffix
+> hidden on
+> suffix "dc=suretecsystems,dc=com"
+> rootdn "cn=slapd-ldap"
+> uri ldap://localhost:9012/
+>
+> lastmod on
+>
+> # We don't need any access to this DSA
+> restrict all
+>
+> acl-bind bindmethod=simple
+> binddn="cn=replicator,dc=suretecsystems,dc=com"
+> credentials=testing
+>
+> syncrepl rid=001
+> provider=ldap://localhost:9011/
+> binddn="cn=replicator,dc=suretecsystems,dc=com"
+> bindmethod=simple
+> credentials=testing
+> searchbase="dc=suretecsystems,dc=com"
+> type=refreshAndPersist
+> retry="5 5 300 5"
+>
+> overlay syncprov
+
+As you can see, you can let your imagination go wild using Syncrepl and
+{{slapd-ldap(8)}}, tailoring your replication to fit your specific network
+topology.
+
+
H2: Mixture of both Pull and Push based
H3: N-Way Multi-Master replication
H4: MirrorMode Summary
-Hopefully you will now have a directory architecture that provides all of the
-consistency guarantees of single-master replication, whilst also providing the
+You will now have a directory architecture that provides all of the
+consistency guarantees of single-master replication, while also providing the
high availability of multi-master replication.