-# $OpenLDAP$
+# $OpenLDAP$
# Copyright 1999-2007 The OpenLDAP Foundation, All Rights Reserved.
# COPYING RESTRICTIONS APPLY, see COPYRIGHT.
Replicated directories are a fundamental requirement for delivering a
resilient enterprise deployment.
-OpenLDAP has various configuration options for creating a replicated
+{{PRD:OpenLDAP}} has various configuration options for creating a replicated
directory. The following sections will discuss these.
H2: Replication Strategies
-H3: Pull Based
-
-
-H4: syncrepl replication
-
-
-H4: delta-syncrepl replication
-
H3: Push Based
H4: Replacing Slurpd
-Slurpd replication has been deprecated in favor of Syncrepl replication and
-has been completely removed from 2.4.
+{{Slurpd}} replication has been deprecated in favor of Syncrepl replication and
+has been completely removed from OpenLDAP 2.4.
{{Why was it replaced?}}
-The slurpd daemon was the original replication mechanisim inherited from
+The {{slurpd}} daemon was the original replication mechanism inherited from
UMich's LDAP and operates in push mode: the master pushes changes to the
slaves. It has been replaced for many reasons, in brief:
- - It is not reliable
- - It is extremely sensitive to the ordering of records in the replog
- - It can easily go out of sync, at which point manual intervention is
+ * It is not reliable
+ * It is extremely sensitive to the ordering of records in the replog
+ * It can easily go out of sync, at which point manual intervention is
required to resync the slave database with the master directory
- - It isn't very tolerant of unavailable servers. If a slave goes down
+ * It isn't very tolerant of unavailable servers. If a slave goes down
for a long time, the replog may grow to a size that's too large for
slurpd to process
{{What was it replaced with?}}
-Syncrepl is self-synchronizing; you can start with a database in any
-state from totally empty to fully sync'd and it will automatically do
-the right thing to achieve and maintain synchronization.
+Syncrepl.
+
+{{Why is Syncrepl better?}}
+
+ * Syncrepl is self-synchronizing; you can start with a database in any
+ state from totally empty to fully synced and it will automatically do
+ the right thing to achieve and maintain synchronization
+ * Syncrepl can operate in either direction
+ * Data updates can be minimal (delta-syncrepl replicates only the changes)
+   or maximal (plain syncrepl replicates entire changed entries)
+
+{{How do I implement a push based replication system using Syncrepl?}}
+
+The easiest way is to point an LDAP backend ({{SECT:Backends}} and {{slapd-ldap(8)}})
+to your slave directory and set up Syncrepl to point to your Master database.
+
+See {{B:test045}} and {{B:test048}} in the OpenLDAP test suite for working
+examples of this configuration.
+
+Syncrepl pulls down changes from the Master server and then pushes those
+changes out to your slave servers via {{slapd-ldap(8)}}. This is known as
+Syncrepl proxy mode.
+
+Here's an example:
+
+
+> include ./schema/core.schema
+> include ./schema/cosine.schema
+> include ./schema/inetorgperson.schema
+> include ./schema/openldap.schema
+> include ./schema/nis.schema
+>
+> pidfile /home/ghenry/openldap/ldap/tests/testrun/slapd.3.pid
+> argsfile /home/ghenry/openldap/ldap/tests/testrun/slapd.3.args
+>
+> modulepath ../servers/slapd/back-bdb/
+> moduleload back_bdb.la
+> modulepath ../servers/slapd/back-monitor/
+> moduleload back_monitor.la
+> modulepath ../servers/slapd/overlays/
+> moduleload syncprov.la
+> modulepath ../servers/slapd/back-ldap/
+> moduleload back_ldap.la
+>
+> # We don't need any access to this DSA
+> restrict all
+>
+> #######################################################################
+> # consumer proxy database definitions
+> #######################################################################
+>
+> database ldap
+> suffix "dc=example,dc=com"
+> rootdn "cn=Whoever"
+> uri ldap://localhost:9012/
+>
+> lastmod on
+>
+> # HACK: use the RootDN of the monitor database as UpdateDN so ACLs apply
+> # without the need to write the UpdateDN before starting replication
+> acl-bind bindmethod=simple
+>   binddn="cn=Monitor"
+>   credentials=monitor
+>
+> # Pull the changes from the master on port 9011; each change is then
+> # written through this proxy database to the slave on port 9012
+> syncrepl rid=1
+>   provider=ldap://localhost:9011/
+>   binddn="cn=Manager,dc=example,dc=com"
+>   bindmethod=simple
+>   credentials=secret
+>   searchbase="dc=example,dc=com"
+>   filter="(objectClass=*)"
+>   attrs="*,structuralObjectClass,entryUUID,entryCSN,creatorsName,createTimestamp,modifiersName,modifyTimestamp"
+>   schemachecking=off
+>   scope=sub
+>   type=refreshAndPersist
+>   retry="5 5 300 5"
+>
+> overlay syncprov
+>
+> database monitor
+
+A quick explanation of the above: this DSA is a pure replication relay (hence
+{{restrict all}}). The {{B:ldap}} database is a proxy pointing at the slave
+server listening on port 9012, so any entry written to it is forwarded to the
+slave by {{slapd-ldap(8)}}. The {{B:syncrepl}} directive makes that same
+database a consumer of the master on port 9011, so every change pulled from
+the master is immediately pushed out to the slave.
+
+As you can see, you can let your imagination go wild using Syncrepl and
+{{slapd-ldap(8)}}, tailoring your replication to fit your specific network
+topology.
+H3: Pull Based
-* Replication via syncrepl, the LDAP content synchronization operation (LDAP sync, RFC 4533). Introduced in OpenLDAP 2.2, it operates in pull mode: the consumer pulls the updates out of the producer. When used in refreshOnly mode, the producer barely knows it's acting as a master, while the refreshAndPersist mode requires the producer to support persistent searches. Either mode requires the provider and the consumer to support the controls related to the Sync Operation.
-
- Can you elaborate in a reply to me? I have no
-> braindead-automatically-attached-policy about e-mail confidentiality :-)
-Sure...
+H4: syncrepl replication
-> I have set up something using slurpd because I understood that using
-> replsync, the replica would need an access on the master, whereas slurpd
-> allowed a pure push method, where the replicas have no right to connect to
-> the master (the master can even be firewalled)
-Syncrepl can operate in either direction. In the pure push/firewall
-case, just set up a proxy backend as the syncrepl consumer. test045 and
-test048 in the test suite both demonstrate how to configure this. Those
-tests are in OpenLDAP 2.4, but you can do something similar in 2.3. You
-just need to use a separate slapd instance for the consumer in 2.3.
+H4: delta-syncrepl replication
-Just because the protocol was defined a particular way (consumer
-initiated single master replication) doesn't mean it can't be used in
-other ways. OpenLDAP is far more flexible than that. We've enhanced the
-basic syncrepl functionality a number of different ways (delta-syncrepl,
-proxied syncrepl, mirrormode, and multimaster) all without altering any
-of the syncrepl protocol definition. All it takes is a little creativity
-to assemble the pieces in the proper order.
+H2: Replication Types
+H3: syncrepl replication
-What was it replaced with?
-Why is Syncrepl better?
+H3: delta-syncrepl replication
-How do I implement a pushed based replication system using Syncrepl?
-H4: Working with Firewalls
+H3: N-Way Multi-Master replication
+Multi-Master replication is a replication technique using Syncrepl to replicate
+data to multiple Master Directory servers.
-H2: Replication Types
+* Advantages of Multi-Master replication:
+- If any master fails, other masters will continue to accept updates
+- Avoids a single point of failure
+- Masters can be located in several physical sites, i.e. distributed across
+the network/globe
+- Good for automatic failover/high availability
-H3: syncrepl replication
+* Disadvantages of Multi-Master replication:
-
-H3: delta-syncrepl replication
+- It has {{B:NOTHING}} to do with load balancing; see
+{{URL:http://www.openldap.org/faq/data/cache/1240.html}}
+- If connectivity with a master is lost because of a network partition, then
+"automatic failover" can just compound the problem
+- Typically, a particular machine cannot distinguish between losing contact
+with a peer because that peer crashed and losing contact because the network
+link has failed
+- If a network is partitioned and multiple clients start writing to each of the
+"masters" then reconciliation will be a pain; it may be best to simply deny
+writes to the clients that are partitioned from the single master
+- Masters {{B:must}} propagate writes to {{B:all}} the other servers, which
+means the network traffic and write load is constant and spreads across all
+of the servers
-H3: N-Way Multi-Master
+This is discussed in full in the {{SECT:N-Way Multi-Master}} section below.
-http://www.connexitor.com/blog/pivot/entry.php?id=105#body
-http://www.openldap.org/lists/openldap-software/200702/msg00006.html
-http://www.openldap.org/lists/openldap-software/200602/msg00064.html
+H3: MirrorMode replication
+MirrorMode is a hybrid configuration that provides all of the consistency
+guarantees of single-master replication, while also providing the high
+availability of multi-master. In MirrorMode two masters are set up to
+replicate from each other (as a multi-master configuration) but an
+external frontend is employed to direct all writes to only one of
+the two servers. The second master will only be used for writes if
+the first master crashes, at which point the frontend will switch to
+directing all writes to the second master. When a crashed master is
+repaired and restarted it will automatically catch up to any changes
+on the running master and resync.
-H3: MirrorMode
-
+This is discussed in full in the {{SECT:MirrorMode}} section below.
H2: LDAP Sync Replication
H2: N-Way Multi-Master
+For the following example we will be using 3 Master nodes. Keeping in line with
+{{B:test050-syncrepl-multimaster}} of the OpenLDAP test suite, we will be
+configuring {{slapd(8)}} via {{B:cn=config}}.
+
+This sets up the config database:
+
+> dn: cn=config
+> objectClass: olcGlobal
+> cn: config
+> olcServerID: 1
+>
+> dn: olcDatabase={0}config,cn=config
+> objectClass: olcDatabaseConfig
+> olcDatabase: {0}config
+> olcRootPW: secret
+
+The second and third servers will of course have a different olcServerID:
+
+> dn: cn=config
+> objectClass: olcGlobal
+> cn: config
+> olcServerID: 2
+>
+> dn: olcDatabase={0}config,cn=config
+> objectClass: olcDatabaseConfig
+> olcDatabase: {0}config
+> olcRootPW: secret
+
+This loads the {{B:syncprov}} module (since these are all masters, each node
+also acts as a provider):
+
+> dn: cn=module,cn=config
+> objectClass: olcModuleList
+> cn: module
+> olcModulePath: /usr/local/libexec/openldap
+> olcModuleLoad: syncprov.la
+
+Now we set up the first Master Node (replace $URI1, $URI2, $URI3, etc. with your actual LDAP URLs):
+
+> dn: cn=config
+> changetype: modify
+> replace: olcServerID
+> olcServerID: 1 $URI1
+> olcServerID: 2 $URI2
+> olcServerID: 3 $URI3
+>
+> dn: olcOverlay=syncprov,olcDatabase={0}config,cn=config
+> changetype: add
+> objectClass: olcOverlayConfig
+> objectClass: olcSyncProvConfig
+> olcOverlay: syncprov
+>
+> dn: olcDatabase={0}config,cn=config
+> changetype: modify
+> add: olcSyncRepl
+> olcSyncRepl: rid=001 provider=$URI1 binddn="cn=config" bindmethod=simple
+>   credentials=secret searchbase="cn=config" type=refreshAndPersist
+>   retry="5 5 300 5" timeout=1
+> olcSyncRepl: rid=002 provider=$URI2 binddn="cn=config" bindmethod=simple
+>   credentials=secret searchbase="cn=config" type=refreshAndPersist
+>   retry="5 5 300 5" timeout=1
+> olcSyncRepl: rid=003 provider=$URI3 binddn="cn=config" bindmethod=simple
+>   credentials=secret searchbase="cn=config" type=refreshAndPersist
+>   retry="5 5 300 5" timeout=1
+> -
+> add: olcMirrorMode
+> olcMirrorMode: TRUE
+
+Now start up the Master and a consumer or two, and add the above LDIF to
+the first consumer, second consumer, etc. It will then replicate
+{{B:cn=config}}. You now have N-Way Multi-Master on the config database.
+
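+For example, assuming the LDIF above has been saved to a file (the file name
+here is purely illustrative), it can be applied to each running server with
+{{ldapmodify(1)}}, binding as the RootDN of the config database defined
+earlier:
+
+> ldapmodify -x -H $URI1 -D cn=config -w secret -f n-way-config.ldif
+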
+We still have to replicate the actual data, not just the config, so add the
+following to the master (all active and configured consumers/masters will pull
+down this config, as they are all syncing). Also, replace all {{${}}} variables
+with whatever is applicable to your setup:
+
+> dn: olcDatabase={1}$BACKEND,cn=config
+> objectClass: olcDatabaseConfig
+> objectClass: olc${BACKEND}Config
+> olcDatabase: {1}$BACKEND
+> olcSuffix: $BASEDN
+> olcDbDirectory: ./db
+> olcRootDN: $MANAGERDN
+> olcRootPW: $PASSWD
+> olcSyncRepl: rid=004 provider=$URI1 binddn="$MANAGERDN" bindmethod=simple
+>   credentials=$PASSWD searchbase="$BASEDN" type=refreshOnly
+>   interval=00:00:00:10 retry="5 5 300 5" timeout=1
+> olcSyncRepl: rid=005 provider=$URI2 binddn="$MANAGERDN" bindmethod=simple
+>   credentials=$PASSWD searchbase="$BASEDN" type=refreshOnly
+>   interval=00:00:00:10 retry="5 5 300 5" timeout=1
+> olcSyncRepl: rid=006 provider=$URI3 binddn="$MANAGERDN" bindmethod=simple
+>   credentials=$PASSWD searchbase="$BASEDN" type=refreshOnly
+>   interval=00:00:00:10 retry="5 5 300 5" timeout=1
+> olcMirrorMode: TRUE
+>
+> dn: olcOverlay=syncprov,olcDatabase={1}${BACKEND},cn=config
+> changetype: add
+> objectClass: olcOverlayConfig
+> objectClass: olcSyncProvConfig
+> olcOverlay: syncprov
+
+Note: You must have all of your servers' clocks synchronized, e.g. via NTP
+{{URL:http://www.ntp.org/}}.
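+
+As a quick sanity check (a minimal sketch; any equivalent tool will do), you
+can verify on each node that it is actually synchronized to a time source
+with the standard NTP query utility:
+
+> ntpq -p
+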
H2: MirrorMode
+H3: Arguments for MirrorMode
+
+* Provides a high-availability (HA) solution for directory writes (replicas handle reads)
+* As long as one Master is operational, writes can safely be accepted
+* Master nodes replicate from each other, so they are always up to date and
+can be ready to take over (hot standby)
+* Syncrepl also allows the master nodes to re-synchronize after any downtime
+* Delta-Syncrepl can be used, as sketched below
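+
+A minimal sketch of the consumer side of such a delta-syncrepl setup (the
+{{accesslog}} naming follows the usual delta-syncrepl convention and is an
+assumption here; the provider additionally needs the {{accesslog}} overlay
+configured):
+
+> syncrepl rid=001
+>   provider=ldap://ldap-rid1.example.com
+>   bindmethod=simple
+>   binddn="cn=mirrormode,dc=example,dc=com"
+>   credentials=mirrormode
+>   searchbase="dc=example,dc=com"
+>   logbase="cn=accesslog"
+>   logfilter="(&(objectClass=auditWriteObject)(reqResult=0))"
+>   syncdata=accesslog
+>   type=refreshAndPersist
+>   retry="60 +"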
+
+
+H3: Arguments against MirrorMode
+
+* MirrorMode is not what is termed a Multi-Master solution, because writes
+only ever go to one of the mirror nodes at a time
+* MirrorMode can be termed Active-Active Hot-Standby, therefore an external
+server ({{slapd}} in proxy mode) or device (hardware load balancer) is needed
+to manage which master is currently active
+* While syncrepl can recover from a completely empty database, slapadd is much
+faster
+* Does not provide faster or more scalable write performance (neither could
+ any Multi-Master solution)
+* Backups are managed slightly differently
+- If backing up the Berkeley database itself and periodically backing up the
+transaction log files, then the same member of the mirror pair needs to be
+used to collect logfiles until the next database backup is taken
+- To ensure that both databases are consistent, each database might have to be
+put in read-only mode while performing a slapcat.
+- When using slapcat (as sketched below), the generated LDIF files can be
+rather large; this can happen with a non-MirrorMode deployment also
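+
+For reference, taking an LDIF backup with {{slapcat(8)}} might look like this
+minimal sketch (the database number and output file name are illustrative;
+adjust both to your configuration):
+
+> slapcat -n 1 -l backup.ldif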
+
+H3: MirrorMode Configuration
+
+MirrorMode configuration is actually very easy. If you have ever set up a
+normal slapd syncrepl provider, then the only change is the following two
+directives:
+
+> mirrormode on
+> serverID 1
+
+Note: You need to make sure that the {{serverID}} of each mirror node is
+different.
+
+H4: Mirror Node Configuration
+
+This is the same as the {{SECT:Set up the provider slapd}} section; see
+{{SECT:delta-syncrepl replication}} if using {{delta-syncrepl}}.
+
+Here's a specific cut-down example using {{SECT:LDAP Sync Replication}} in
+{{refreshAndPersist}} mode ({{delta-syncrepl}} can also be used):
+
+MirrorMode node 1:
+
+> # syncrepl directives
+> syncrepl rid=001
+>   provider=ldap://ldap-rid1.example.com
+>   bindmethod=simple
+>   binddn="cn=mirrormode,dc=example,dc=com"
+>   credentials=mirrormode
+>   searchbase="dc=example,dc=com"
+>   schemachecking=on
+>   type=refreshAndPersist
+>   retry="60 +"
+>
+> syncrepl rid=002
+>   provider=ldap://ldap-rid2.example.com
+>   bindmethod=simple
+>   binddn="cn=mirrormode,dc=example,dc=com"
+>   credentials=mirrormode
+>   searchbase="dc=example,dc=com"
+>   schemachecking=on
+>   type=refreshAndPersist
+>   retry="60 +"
+>
+> mirrormode on
+> serverID 1
+
+MirrorMode node 2:
+
+> # syncrepl directives
+> syncrepl rid=001
+>   provider=ldap://ldap-rid1.example.com
+>   bindmethod=simple
+>   binddn="cn=mirrormode,dc=example,dc=com"
+>   credentials=mirrormode
+>   searchbase="dc=example,dc=com"
+>   schemachecking=on
+>   type=refreshAndPersist
+>   retry="60 +"
+>
+> syncrepl rid=002
+>   provider=ldap://ldap-rid2.example.com
+>   bindmethod=simple
+>   binddn="cn=mirrormode,dc=example,dc=com"
+>   credentials=mirrormode
+>   searchbase="dc=example,dc=com"
+>   schemachecking=on
+>   type=refreshAndPersist
+>   retry="60 +"
+>
+> mirrormode on
+> serverID 2
+
+It's simple really; each MirrorMode node is set up {{B:exactly}} the same,
+except that the {{serverID}} is unique.
+
+H4: Failover Configuration
+
+There are generally two choices for this: hardware proxies/load-balancing or
+dedicated proxy software, or using a Back-LDAP proxy as a syncrepl provider,
+as sketched below.
+
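+A minimal sketch of the Back-LDAP option (reusing the mirror pair host names
+from the node examples above): {{slapd-ldap(8)}} accepts a whitespace-separated
+list of URIs and fails over to the next one when the current server becomes
+unreachable:
+
+> database ldap
+> suffix "dc=example,dc=com"
+> uri "ldap://ldap-rid1.example.com/ ldap://ldap-rid2.example.com/"
+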
+A typical enterprise example might be:
+
+!import "dual_dc.png"; align="center"; title="MirrorMode Enterprise Configuration"
+FT[align="Center"] Figure X.Y: MirrorMode in a Dual Data Center Configuration
+
+H4: Normal Consumer Configuration
+
+This is exactly the same as the {{SECT:Set up the consumer slapd}} section. It
+can either be set up in normal {{SECT:syncrepl replication}} mode, or in
+{{SECT:delta-syncrepl replication}} mode.
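+
+As a minimal sketch (host name and credentials carried over from the mirror
+node examples above), a normal consumer pointing at one of the mirror nodes,
+or at the failover proxy in front of them, might contain:
+
+> syncrepl rid=003
+>   provider=ldap://ldap-rid1.example.com
+>   bindmethod=simple
+>   binddn="cn=mirrormode,dc=example,dc=com"
+>   credentials=mirrormode
+>   searchbase="dc=example,dc=com"
+>   schemachecking=on
+>   type=refreshAndPersist
+>   retry="60 +"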
+
+H3: MirrorMode Summary
+
+Hopefully you will now have a directory architecture that provides all of the
+consistency guarantees of single-master replication, whilst also providing the
+high availability of multi-master replication.
+