Replicated directories are a fundamental requirement for delivering a
resilient enterprise deployment.
-OpenLDAP has various configuration options for creating a replicated
+{{PRD:OpenLDAP}} has various configuration options for creating a replicated
directory. The following sections will discuss these.
H2: Replication Strategies
-H3: Pull Based
-
-
-H4: syncrepl replication
+H3: Push Based
-H4: delta-syncrepl replication
+H4: Replacing Slurpd
+
+{{Slurpd}} replication has been deprecated in favor of Syncrepl replication and
+has been completely removed from OpenLDAP 2.4.
+
+{{Why was it replaced?}}
+
+The {{slurpd}} daemon was the original replication mechanism inherited from
+UMich's LDAP and operates in push mode: the master pushes changes to the
+slaves. It has been replaced for many reasons, in brief:
+
+ * It is not reliable
+ * It is extremely sensitive to the ordering of records in the replog
+ * It can easily go out of sync, at which point manual intervention is
+ required to resync the slave database with the master directory
+ * It isn't very tolerant of unavailable servers. If a slave goes down
+ for a long time, the replog may grow to a size that's too large for
+ slurpd to process
+
+{{What was it replaced with?}}
+
+Syncrepl
+
+{{Why is Syncrepl better?}}
+
+ * Syncrepl is self-synchronizing; you can start with a database in any
+ state from totally empty to fully synced and it will automatically do
+ the right thing to achieve and maintain synchronization
+ * Syncrepl can operate in either direction
+ * Data updates can be minimal or maximal
+
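+For instance, a minimal pull-based consumer needs only a single {{syncrepl}}
+stanza in its database section (the hostname, DN and credentials below are
+only illustrative):
+
+> syncrepl rid=001
+>          provider=ldap://provider.example.com
+>          type=refreshOnly
+>          interval=00:01:00:00
+>          searchbase="dc=example,dc=com"
+>          bindmethod=simple
+>          binddn="cn=syncuser,dc=example,dc=com"
+>          credentials=secret
+
+Starting slapd with an empty database and this directive is enough for the
+consumer to fetch, and then maintain, a full copy of the provider's content.
+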
+{{How do I implement a push-based replication system using Syncrepl?}}
+
+The easiest way is to point an LDAP backend ({{SECT:Backends}} and {{slapd-ldap(8)}})
+to your slave directory and set up Syncrepl to point to your Master database.
+
+REFERENCE test045/048 for better explanation of above.
+
+Syncrepl pulls down changes from the Master server and then pushes those
+changes out to your slave servers via {{slapd-ldap(8)}}. This is called
+proxy mode.
+
+DIAGRAM HERE
+
+BETTER EXAMPLE here from test045/048 for different push/multiproxy examples.
+
+Here's an example:
+
+
+> include ./schema/core.schema
+> include ./schema/cosine.schema
+> include ./schema/inetorgperson.schema
+> include ./schema/openldap.schema
+> include ./schema/nis.schema
+>
+> pidfile /home/ghenry/openldap/ldap/tests/testrun/slapd.3.pid
+> argsfile /home/ghenry/openldap/ldap/tests/testrun/slapd.3.args
+>
+> modulepath ../servers/slapd/back-bdb/
+> moduleload back_bdb.la
+> modulepath ../servers/slapd/back-monitor/
+> moduleload back_monitor.la
+> modulepath ../servers/slapd/overlays/
+> moduleload syncprov.la
+> modulepath ../servers/slapd/back-ldap/
+> moduleload back_ldap.la
+>
+> # We don't need any access to this DSA
+> restrict all
+>
+> #######################################################################
+> # consumer proxy database definitions
+> #######################################################################
+>
+> database ldap
+> suffix "dc=example,dc=com"
+> rootdn "cn=Whoever"
+> uri ldap://localhost:9012/
+>
+> lastmod on
+>
+> # HACK: use the RootDN of the monitor database as UpdateDN so ACLs apply
+> # without the need to write the UpdateDN before starting replication
+> acl-bind bindmethod=simple
+> binddn="cn=Monitor"
+> credentials=monitor
+>
+> # HACK: use the RootDN of the monitor database as UpdateDN so ACLs apply
+> # without the need to write the UpdateDN before starting replication
+> syncrepl rid=1
+> provider=ldap://localhost:9011/
+> binddn="cn=Manager,dc=example,dc=com"
+> bindmethod=simple
+> credentials=secret
+> searchbase="dc=example,dc=com"
+> filter="(objectClass=*)"
+> attrs="*,structuralObjectClass,entryUUID,entryCSN,creatorsName,createTimestamp,modifiersName,modifyTimestamp"
+> schemachecking=off
+> scope=sub
+> type=refreshAndPersist
+> retry="5 5 300 5"
+>
+> overlay syncprov
+>
+> database monitor
+
+DETAILED EXPLANATION OF ABOVE LIKE IN OTHER SECTIONS (line numbers?)
+
+
+ANOTHER DIAGRAM HERE
+
+As you can see, you can let your imagination go wild using Syncrepl and
+{{slapd-ldap(8)}}, tailoring your replication to fit your specific network
+topology.
-H3: Push Based
+H3: Pull Based
-H4: Working with Firewalls
+H4: syncrepl replication
-H4: Replacing Slurpd
+H4: delta-syncrepl replication
H2: Replication Types
H3: N-Way Multi-Master
+http://www.connexitor.com/blog/pivot/entry.php?id=105#body
+http://www.openldap.org/lists/openldap-software/200702/msg00006.html
+http://www.openldap.org/lists/openldap-software/200602/msg00064.html
+
H3: MirrorMode
+MirrorMode is a hybrid configuration that provides all of the consistency
+guarantees of single-master replication, while also providing the high
+availability of multi-master. In MirrorMode two masters are set up to
+replicate from each other (as a multi-master configuration) but an
+external frontend is employed to direct all writes to only one of
+the two servers. The second master will only be used for writes if
+the first master crashes, at which point the frontend will switch to
+directing all writes to the second master. When a crashed master is
+repaired and restarted it will automatically catch up to any changes
+on the running master and resync.
+
+This is discussed in full in the {{SECT:MirrorMode}} section below.
H2: LDAP Sync Replication
H2: MirrorMode
+H3: Arguments for MirrorMode
+
+* Provides a high-availability (HA) solution for directory writes (replicas handle reads)
+* As long as one Master is operational, writes can safely be accepted
+* Master nodes replicate from each other, so they are always up to date and
+can be ready to take over (hot standby)
+* Syncrepl also allows the master nodes to re-synchronize after any downtime
+* Delta-Syncrepl can be used
+
+
+H3: Arguments against MirrorMode
+
+* MirrorMode is not what is termed a Multi-Master solution, because writes
+have to go to only one of the mirror nodes at a time
+* MirrorMode can be termed Active-Active Hot-Standby; therefore an external
+server (slapd in proxy mode) or device (hardware load balancer) is needed to
+manage which master is currently active
+* While syncrepl can recover from a completely empty database, slapadd is much
+faster
+* Does not provide faster or more scalable write performance (neither could
+ any Multi-Master solution)
+* Backups are managed slightly differently
+- If backing up the Berkeley database itself and periodically backing up the
+transaction log files, then the same member of the mirror pair needs to be
+used to collect logfiles until the next database backup is taken
+- To ensure that both databases are consistent, each database might have to be
+put in read-only mode while performing a slapcat.
+- When using slapcat, the generated LDIF files can be rather large. This can
+happen with a non-MirrorMode deployment also.
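+
+For example, a consistent LDIF export of one mirror node can be taken with
+slapcat while that node is in read-only mode (the configuration path and
+suffix here are only illustrative):
+
+> slapcat -f /usr/local/etc/openldap/slapd.conf \
+>         -b "dc=example,dc=com" -l backup.ldif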
+
+H3: MirrorMode Configuration
+
+MirrorMode configuration is actually very easy. If you have ever set up a
+normal slapd syncrepl provider, then the only change is the directive:
+
+> mirrormode on
+
+You also need to make sure the {{rid}} of each mirror node pair is different
+and that the {{provider}} syncrepl directive points to the other mirror node.
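+
+If the server is managed via {{B:cn=config}} rather than {{slapd.conf(5)}},
+the equivalent setting is the {{olcMirrorMode}} attribute on the database
+entry (the database index {1}bdb is only an assumption for this sketch):
+
+> dn: olcDatabase={1}bdb,cn=config
+> changetype: modify
+> add: olcMirrorMode
+> olcMirrorMode: TRUE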
+
+H4: Mirror Node Configuration
+
+This is the same as the {{SECT:Set up the provider slapd}} section, referencing
+{{SECT:delta-syncrepl replication}} if using {{delta-syncrepl}}.
+
+Here's a specific cut down example using {{SECT:LDAP Sync Replication}} in
+{{refreshAndPersist}} mode ({{delta-syncrepl}} can be used also):
+
+MirrorMode node 1:
+
+> # syncrepl directives
+> syncrepl rid=1
+> provider=ldap://ldap-rid2.example.com
+> bindmethod=simple
+> binddn="cn=mirrormode,dc=example,dc=com"
+> credentials=mirrormode
+> searchbase="dc=example,dc=com"
+> schemachecking=on
+> type=refreshAndPersist
+> retry="60 +"
+>
+> mirrormode on
+
+MirrorMode node 2:
+
+> # syncrepl directives
+> syncrepl rid=2
+> provider=ldap://ldap-rid1.example.com
+> bindmethod=simple
+> binddn="cn=mirrormode,dc=example,dc=com"
+> credentials=mirrormode
+> searchbase="dc=example,dc=com"
+> schemachecking=on
+> type=refreshAndPersist
+> retry="60 +"
+>
+> mirrormode on
+
+It's simple really; each MirrorMode node is set up {{B:exactly}} the same, except
+that the {{B:provider}} directive is set to point to the other MirrorMode node.
+
+H4: Failover Configuration
+
+There are generally two choices for this: 1. hardware proxies/load balancing
+or dedicated proxy software, or 2. using a Back-LDAP proxy as a syncrepl
+provider.
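+
+As a sketch of the second choice, a {{slapd-ldap(8)}} proxy can list both
+mirror nodes in its {{uri}} directive; back-ldap will fail over to the next
+URI when the first becomes unreachable (hostnames as in the examples above):
+
+> database ldap
+> suffix   "dc=example,dc=com"
+> # back-ldap tries the URIs in order, failing over to the
+> # second mirror node if the first is down
+> uri      "ldap://ldap-rid1.example.com/ ldap://ldap-rid2.example.com/"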
+
+MORE HERE and a nice PICTURE
+
+
+H4: Normal Consumer Configuration
+
+This is exactly the same as the {{SECT:Set up the consumer slapd}} section. It
+can either be set up in normal {{SECT:syncrepl replication}} mode, or in
+{{SECT:delta-syncrepl replication}} mode.
+
+H3: MirrorMode Summary
+
+Hopefully you will now have a directory architecture that provides all of the
+consistency guarantees of single-master replication, whilst also providing the
+high availability of multi-master replication.
+