draft-ietf-ldup-model-03.txt
Netscape Communications Corp.

LDAP Replication Architecture

Copyright (C) The Internet Society (1998, 1999, 2000).

This document is an Internet-Draft and is in full conformance with all
provisions of Section 10 of RFC2026.
Internet-Drafts are working documents of the Internet Engineering Task
Force (IETF), its areas, and its working groups. Note that other
groups may also distribute working documents as Internet-Drafts.
Internet-Drafts are draft documents valid for a maximum of six months
and may be updated, replaced, or made obsolete by other documents at
any time. It is inappropriate to use Internet-Drafts as reference
material or to cite them other than as "work in progress."
The list of current Internet-Drafts can be accessed at
http://www.ietf.org/ietf/1id-abstracts.txt

The list of Internet-Draft Shadow Directories can be accessed at
http://www.ietf.org/shadow.html.
This draft, file name draft-ietf-ldup-model-03.txt, is intended to
become a Proposed Standard RFC, to be published by the IETF Working
Group LDUP. Distribution of this document is unlimited. Comments
should be sent to the LDUP Replication mailing list <ldup@imc.org> or
to the authors.

This Internet-Draft expires on 10 September 2000.
Merrells, Reed, Srinivasan [Page 1]
Expires 10 September 2000
INTERNET-DRAFT LDAP Replication Architecture March 10, 2000
1 Abstract

This architectural document outlines a suite of schema and protocol
extensions to LDAPv3 that enables the robust, reliable, server-to-server
exchange of directory content and changes.

The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT",
"SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this
document are to be interpreted as described in RFC 2119 [RFC2119]. The
sections below reiterate these definitions and include some additional
definitions.
2 Table of Contents

1 Abstract......................................................2
2 Table of Contents.............................................2
3 Introduction..................................................4
3.1 Scope.........................................................4
3.2 Document Objectives...........................................5
3.3 Document Non-Objectives.......................................6
3.4 Existing Implementations......................................6
3.4.1 Replication Log Implementations.........................6
3.4.2 State-Based Implementations.............................7
3.5 Terms and Definitions.........................................7
3.6 Consistency Models............................................8
3.7 LDAP Constraints..............................................9
4 Directory Model..............................................10
4.1 Replica Type.................................................10
4.1.1 Primary Replica........................................10
4.1.2 Updatable Replica......................................10
4.1.3 Read-Only Replica......................................10
4.1.4 Fractional Replicas....................................10
4.2 Sub-Entries..................................................11
4.3 Glue Entries.................................................11
4.4 Unique Identifiers...........................................11
4.5 Change Sequence Number.......................................11
4.5.1 CSN Composition........................................11
4.5.2 CSN Representation.....................................12
4.5.3 CSN Generation.........................................12
4.6 State Change Information.....................................13
4.6.1 Entry Change State Storage and Representation..........13
4.6.2 Attribute Change State Storage.........................14
4.6.3 Attribute Value Change State Storage...................14
4.7 LDAP Update Operations.......................................14
5 Information Model............................................15
5.1 Semantics and Relationships..................................15
5.2 Root DSE Attributes..........................................15
5.3 Naming Context...............................................15
5.4 Replica Object Class and Entries.............................16
5.5 Lost and Found Entry.........................................16
5.6 Replication Agreement Object Class and Entries...............16
5.6.1 Replication Schedule...................................17
6 Policy Information...........................................18
6.1 Schema Knowledge.............................................18
7 LDUP Update Transfer Protocol Framework......................18
7.1 Replication Session Initiation...............................19
7.1.1 Authentication.........................................19
7.1.2 Consumer Initiated.....................................19
7.1.3 Supplier Initiated.....................................19
7.2 Start Replication Session....................................20
7.2.1 Start Replication Request..............................20
7.2.2 Start Replication Response.............................20
7.3 Update Transfer..............................................20
7.4 End Replication Session......................................20
7.5 Integrity & Confidentiality..................................21
8 LDUP Update Protocols........................................21
8.1 Replication Updates and Update Primitives....................21
8.2 Fractional Updates...........................................21
9 LDUP Full Update Transfer Protocol...........................22
9.1 Full Update Transfer.........................................22
9.2 Replication Update Generation................................22
9.3 Replication Update Consumption...............................22
9.4 Full Update, End Replication Session.........................22
9.5 Interrupted Transmission.....................................23
10 LDUP Incremental Update Transfer Protocol....................23
10.1 Update Vector................................................23
10.2 Supplier Initiated, Incremental Update,
Start Replication Session................................24
10.3 Replication Update Generation................................24
10.3.1 Replication Log Implementation.......................25
10.3.2 State-Based Implementation...........................25
10.4 Replication Update Consumption...............................25
10.5 Update Resolution Procedures.................................25
10.5.1 URP: Distinguished Names.............................26
10.5.2 URP: Orphaned Entries................................26
10.5.3 URP: Distinguished Not Present.......................26
10.5.4 URP: Schema - Single Valued Attributes...............26
10.5.5 URP: Schema - Required Attributes....................27
10.5.6 URP: Schema - Extra Attributes.......................27
10.5.7 URP: Duplicate Attribute Values......................27
10.5.8 URP: Ancestry Graph Cycle............................27
10.6 Incremental Update, End Replication Session..................27
10.7 Interrupted Transmission.....................................28
11 Purging State Information....................................28
11.1 Purge Vector.................................................28
11.2 Purging Deleted Entries, Attributes, and Attribute Values....29
12 Replication Configuration and Management.....................29
13 Time.........................................................30
14 Security Considerations......................................31
15 Acknowledgements.............................................31
16 References...................................................32
17 Intellectual Property Notice.................................32
18 Copyright Notice.............................................33
19 Authors' Address.............................................33
20 Appendix A - LDAP Constraints................................34
20.1 LDAP Constraints Clauses.....................................34
20.2 LDAP Data Model Constraints..................................35
20.3 LDAP Operation Behaviour Constraints.........................36
20.4 New LDAP Constraints.........................................37
20.4.1 New LDAP Data Model Constraints......................37
20.4.2 New LDAP Operation Behaviour Constraints.............37
3 Introduction

3.1 Scope

This architectural document provides an outline of an LDAP-based
replication scheme. Further detailed design documents will draw upon
the architecture described here.
The design proceeds from prior work in the industry, including
concepts from the ITU-T Recommendation X.525 (1993, 1997) Directory
Information Shadowing Protocol (DISP) [X525], experience with widely
deployed distributed directories in network operating systems,
electronic mail address books, and other database technologies. The
emphasis of the design is on:

1. Simplicity of operation.

2. Flexibility of configuration.

3. Manageability of replica operations among mixed heterogeneous
vendor LDAP servers under common administration.
4. Security of content and configuration information when LDAP servers
from more than one administrative authority are interconnected.

A range of deployment scenarios is supported, including multi-master
and single-master topologies. Replication networks may include
transitive and redundant relationships between LDAP servers.
The controlling framework used to define the relationships, types, and
state of replicas of the directory content is defined. In this way the
directory content can itself be used to monitor and control the
replication network. The directory schema is extended to define object
classes, auxiliary classes, and attributes that describe areas of the
namespace which are replicated, LDAP servers which hold replicas of
various types for the various partitions of the namespace, LDAP Access
Points (network addresses) where such LDAP servers may be contacted,
which namespaces are held on given LDAP servers, and the progress of
replication operations. Among other things, this knowledge of where
directory content is located could serve as the basis for dynamic
generation of LDAP referrals.
An update transfer protocol, which actually brings a replica up to
date with respect to changes in directory content at another replica,
is defined using LDAPv3 protocol extensions. The representation of
directory content and changes will be defined by the LDAP Replication
Update Transfer Protocol sub-team. Incremental and full update
transfer mechanisms are described. Replication protocols are required
to include initial population, change updates, and removal of
directory content.

Security information, including access control policy, will be treated
as directory content by the replication protocols. Confidentiality
and integrity of replication information is required to be provided by
lower-level transport/session protocols such as IPSEC and/or TLS.
3.2 Document Objectives

The objectives of this document are:

a) To define the architectural foundations for LDAP Replication, so
that further detailed design documents may be written. For
instance, the Information Model, Update Transfer Protocol, and
Update Resolution Procedures documents.

b) To provide an architectural solution for each clause of the
requirements document [LDUP Requirements].

c) To preserve the LDAP Data Model and Operation Behavior
defined for LDAP in RFC 2251 [See Appendix A].

d) To avoid tying the LDUP working group to the schedule of any other
working group.

e) Not to infringe upon known registered intellectual property rights.
3.3 Document Non-Objectives

This document does not address the following issues, as they are
considered beyond the scope of the Working Group.

a) How LDAP becomes a distributed directory. There are many issues
beyond replication that should be considered, such as support for
external references, algorithms for computing referrals from the
distributed directory knowledge, etc.

b) Specifying management protocols to create naming contexts or new
replicas. LDAP may be sufficient for this. The document describes
how new replicas and naming contexts are represented, in the
directory, as entries, attributes, and attribute values.

c) How transactions will be replicated. However, the architecture
should not knowingly prevent or impede them, given the Working
Group's incomplete understanding of the issues at this time.

d) The mapping or merging of disparate Schema definitions.

e) Support of overlapping replicated regions.

f) The case where separate attributes of an entry may be mastered by
different LDAP servers. This might be termed a 'Split Primary'.
Replica roles are defined in section 4.1.

g) The specification of a replication system that supports Sparse
Replication. A Sparse Replica contains a subset of the naming
context entries, as selected by an Entry Selection Filter
associated with the replica. An Entry Selection Filter is
an LDAP filter expression that describes the entries to be
replicated. The design and implementation of this functionality is
not yet well enough understood to specify here.
3.4 Existing Implementations

In order to define a standard replication scheme that may be readily
implemented, we must consider the architectures of current LDAP server
implementations. Existing systems currently support proprietary
replication schemes based on one of two general approaches: log-based
or state-based. Some sections of this text may specifically address
the concerns of one approach. They will be clearly marked.
3.4.1 Replication Log Implementations

Implementations based on the original University of Michigan LDAP
server code record LDAP operations to an operation log. During a
replication session operations are replayed from this log to bring the
Consumer replica up to date. Example implementations of this type at
this time are the Innosoft, Netscape, and OpenLDAP Directory Servers.
3.4.2 State-Based Implementations

Directory Server implementations from Novell and Microsoft at this
time do not replay LDAP operations from an operation log. When a
replication session occurs each entry in the Replicated Area is
considered in turn, compared against the update state of the Consumer,
and any resultant changes transmitted. These changes are a set of
assertions about the presence or absence of entries, attributes, and
attribute values.
3.5 Terms and Definitions

The definitions from the Replication Requirements document have been
copied here and extended.

For brevity, an LDAP server implementation is referred to throughout
this document simply as a server.

The LDAP update operations Add, Delete, Modify, Modify RDN (LDAPv2),
and Modify DN (LDAPv3) are collectively referred to as LDAP Update
Operations.
A Naming Context is a subtree of entries in the Directory Information
Tree (DIT). There may be multiple Naming Contexts stored on a single
server. Naming Contexts are defined in section 17 of [X501].

A Naming Context is based at an entry identified as its root and
includes all its subordinate entries down the tree until another
Naming Context is encountered.

A Replica is an instance of a replicated Naming Context.

A replicated Naming Context is said to be single-mastered if there is
only one Replica where it may be updated, and multi-mastered if there
is more than one Replica where it may be updated.
A Replication Relationship is established between two or more Replicas
that are hosted on servers that cooperate to service a common area of
the namespace.

A Replication Agreement is defined between two parties of a
Replication Relationship. The properties of the agreement codify the
Unit of Replication, the Update Transfer Protocol to be used, and the
Replication Schedule of a Replication Session.
A Replication Session is an LDAP session between the two servers
identified by a replication agreement. Interactions occur between the
two servers, resulting in the transfer of updates from the supplier
replica to the consumer replica.
The Initiator of a Replication Session is the initiating server.

A Responder server responds to the replication initiation request from
the Initiator server.

A Supplier server is the source of the updates to be transferred.

A Consumer server is the recipient of the update sequence.

The Update Transfer Protocol is the means by which the Replication
Session proceeds. It defines the protocol for exchanging updates
between the Replication Relationship partners.

A Replication Update is an LDAP Extended Operation that contains
updates to be applied to the DIT. The Update Transfer Protocol carries
a sequence of these messages from the Supplier to the Consumer.

The Update Resolution Procedures repair constraint violations that
occur when updates to a multi-mastered Replica collide.

A Fractional Entry Specification is a list of entry attributes to be
included in, or a list of attributes to be excluded from, a replica.
An empty specification implies that all entry attributes are included.

A Fractional Entry is an entry that contains only a subset of its
original attributes. It results from the replication of changes
governed by a Fractional Entry Specification.

A Fractional Replica is a replica that holds Fractional Entries of its
replicated Naming Context.
3.6 Consistency Models

This replication architecture supports a loose consistency model
between replicas of a naming context. It does not attempt to provide
the appearance of a single copy of a replica. The contents of each
replica may be different, but over time they will be converging
towards the same state. This architecture is not intended to support
LDAP Clients that require a tight consistency model, where the state
of all replicas is always equivalent.

Three levels of consistency are available to LDAP Clients,
characterized by their deployment topologies: single-server, where
there is just the naming context and no replicas; single-master, where
there are replicas, but only one may be updated; and multi-master,
where there is more than one replica to which LDAP update operations
may be directed. The consistency properties of each model are rooted
in their serialization of read and write operations.
1) A single-server deployment of a naming context provides tight
consistency to LDAP applications. LDAP Clients have no choice but to
direct all their operations to a single server, serializing both read
and write operations.
2) A single-mastered deployment of a naming context provides both
tight and loose consistency to LDAP applications. LDAP Clients must
direct all write operations to the single updateable replica, but may
direct their reads to any of the replicas. A client experiences tight
consistency by directing all its operations to the single updatable
replica, and loose consistency by directing any read operations to any
of the other replicas.
3) A multi-mastered deployment of a naming context can provide only
loose consistency to LDAP applications. Across the system writes and
reads are not serialized. An LDAP Client could direct its read and
write operations to a single updateable replica, but it will not
receive tight consistency as interleaved writes could be occurring at
the other updateable replicas.
Tight consistency can be achieved in a multi-master deployment for a
particular LDAP application if and only if all instances of its client
are directed towards the same updateable replica, and the application
data is not updated by any other LDAP application. Introducing these
constraints to an application and deployment of a naming context
ensures that writes are serialized, providing tight consistency for
the application.

Future work could make use of the architecture proposed in this
document as a basis for allowing clients to request session guarantees
from a server when establishing a connection.
3.7 LDAP Constraints

The LDAPv3 Internet RFC [LDAPv3] defines a set of Data Model and
Operation Behaviour constraints that a compliant LDAP server must
enforce. The server must reject an LDAP Update Operation if its
application to the target entry would violate any one of these LDAP
Constraints. [Appendix A contains the original text clauses from RFC
2251, and also a summary.]
In the case of a single-server or single-mastered naming context all
LDAP Constraints are immediately enforced at the single updateable
replica. An error result code is returned to an LDAP Client that
presents an operation that would violate the constraints.
In the case of a multi-mastered naming context not all LDAP
Constraints can be immediately enforced at the updateable replica to
which the LDAP Update Operation is applied. This loosely consistent
replication architecture ensures that at each replica all constraints
are imposed, but as updates are replicated constraint violations may
occur that cannot be reported to the appropriate client. Any
constraint violations that occur are repaired by a set of update
resolution procedures.
Any LDAP client that has been implemented to expect immediate
enforcement of all LDAP Constraints may not behave as expected
against a multi-mastered naming context.
4 Directory Model

This section describes extensions to the LDAP Directory Model that are
required by this replication architecture.
4.1 Replica Type

Each Replica is characterized by a replica type. This may be
Primary, Updatable, or Read-Only. A Read-Only Replica may be further
defined as being Fractional.
4.1.1 Primary Replica

The Primary Replica is a full copy of the replicated Naming Context,
to which all applications that require tight consistency should direct
their LDAP Operations. There can be only one Primary Replica within
the set of Replicas of a given Naming Context. It is also permissible
for none of the Replicas to be designated the Primary. The Primary
Replica MUST NOT be a Fractional Replica.
4.1.2 Updatable Replica

An Updatable Replica is a Replica that accepts all the LDAP Update
Operations, but is not the Primary Replica. There could be none, one,
or many Updatable Replicas within the set of Replicas of a given
Naming Context. An Updatable Replica MUST NOT be a Fractional Replica.
4.1.3 Read-Only Replica

A Read-Only Replica will accept only non-modifying LDAP operations.
All modification operations shall be referred to an updateable
Replica. The server referred to would usually be a Supplier of this
Read-Only Replica.
4.1.4 Fractional Replicas

Fractional Replicas must always be Read-Only. All LDAP Update
Operations must be referred to an Updatable Replica. The server
referred to would usually be a Supplier of this Fractional Replica.
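The rules in sections 4.1.1 through 4.1.4 amount to a small consistency
check over the replica set of a naming context. The following sketch
illustrates them; the type and function names are assumptions made for
illustration, not part of this specification:

```python
# Sketch of the replica-type rules from section 4.1 (illustrative only):
#  - at most one Primary Replica per Naming Context (zero is permitted),
#  - Primary and Updatable Replicas MUST NOT be Fractional,
#  - only Read-Only Replicas may be Fractional.
from enum import Enum

class ReplicaType(Enum):
    PRIMARY = "primary"
    UPDATABLE = "updatable"
    READ_ONLY = "read-only"

def check_replica_set(replicas):
    """replicas: list of (ReplicaType, is_fractional) pairs."""
    primaries = sum(1 for rtype, _ in replicas if rtype is ReplicaType.PRIMARY)
    if primaries > 1:
        return False  # more than one Primary is not permitted
    # Any Fractional replica must be Read-Only.
    return all(rtype is ReplicaType.READ_ONLY
               for rtype, fractional in replicas if fractional)
```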
4.2 Sub-Entries

Replication management entries are to be stored at the base of the
replicated naming context. They will be of the 'ldapSubentry' object
class, which is defined to exclude them from regular searches. Entries
with the objectclass 'ldapSubentry' are not returned as the result of
a search unless the filter component "(objectclass=ldapSubentry)" is
included in the search filter.
4.3 Glue Entries

A glue entry is an entry that contains knowledge of its name only. No
other information is held with it. Such glue entries will be
distinguished through a special object class defined for that purpose.
Glue entries may be created during a replication session to repair a
constraint violation.
4.4 Unique Identifiers

Distinguished names can change, and are therefore unreliable as
identifiers. A Unique Identifier must therefore be assigned to each
entry as it is created. This identifier will be stored as an
operational attribute of the entry, named 'entryUUID'. The entryUUID
attribute is single valued. A consistent algorithm for generating such
unique identifiers should be defined for use in the LDUP standards
documents that detail the LDUP information model and LDUP protocols.
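The generation algorithm is left to the later LDUP documents. As one
common choice, a random UUID could serve; the sketch below is an
assumption for illustration, not the mandated algorithm, and the
function name is hypothetical:

```python
# Sketch: assigning the single-valued 'entryUUID' operational attribute
# when an entry is created. Using a random UUID here is an assumption;
# the LDUP documents are expected to define the actual algorithm.
import uuid

def create_entry(dn, user_attributes):
    entry = dict(user_attributes)
    entry["dn"] = dn
    # The identifier must never change, even if the entry is renamed.
    entry["entryUUID"] = str(uuid.uuid4())
    return entry
```

Because the identifier is fixed at creation time, replicas can match
entries by entryUUID even after a Modify DN operation has changed the
distinguished name.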
4.5 Change Sequence Number

Change Sequence Numbers (CSNs) are used to impose a total ordering
upon the causal sequence of updates applied to all the replicas of a
naming context. Every LDAP Update Operation is assigned at least one
CSN. A Modify operation MUST be assigned one CSN per modification.
4.5.1 CSN Composition

A CSN is formed of four components. In order of significance they
are: the time, a change count, a Replica Identifier, and a
modification number. The CSN is composed thus to ensure the uniqueness
of every generated CSN. When CSNs are compared to determine their
ordering they are compared component by component: first the time,
then the change count, then the replica identifier, and finally the
modification number.
The time component is a year-2000-safe representation of the real
world time, with a granularity of one second.

Because many LDAP Update Operations, at a single replica, may be
applied to the same data in a single second, the change count
component of the CSN is provided to further order the changes. Each
replica maintains a count of LDAP update operations applied against
it. It is reset to zero at the start of each second, and is
monotonically increasing within that second, incremented for each and
every update operation. Should LDAP Update Operations occur at
different replicas, to the same data, within the same single second,
and happen to be assigned the same change count number, then the
Replica Identifier is used to further order the changes.

The Replica Identifier is the value of the RDN attribute on the
Replica Subentry. The Replica Identifier could be assigned
programmatically or administratively; in either case, short values are
advised to minimise resource usage. The IA5CaseIgnoreString syntax is
used to compare and order Replica Identifier values.

The fourth and final CSN component, the modification number, is used
for ordering the modifications within an LDAP Modify operation.
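The component-by-component comparison described above is a tuple
comparison. A minimal sketch follows; the field names and integer
encodings are assumptions made for illustration:

```python
# Sketch: CSN ordering as component-by-component comparison.
# Field order gives the order of significance: time, change count,
# replica identifier, modification number.
from typing import NamedTuple

class CSN(NamedTuple):
    time: int          # real-world time, one-second granularity
    change_count: int  # per-second update counter
    replica_id: str    # RDN value of the Replica Subentry
    mod_number: int    # position within an LDAP Modify operation

    def sort_key(self):
        # Replica Identifiers use IA5CaseIgnoreString matching, so the
        # comparison ignores case.
        return (self.time, self.change_count, self.replica_id.lower(),
                self.mod_number)

def csn_newer(a: CSN, b: CSN) -> bool:
    """True if a orders after b in the total ordering."""
    return a.sort_key() > b.sort_key()
```

Encoding the significance order directly in the tuple makes the total
ordering fall out of ordinary lexicographic comparison.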
4.5.2 CSN Representation

The preferred CSN representation is:

yyyymmddhh:mi:ssz#0xSSSS#replica id#0xssss

The 'z' in the time stipulates that the time is expressed in GMT
without any daylight savings time offsets permitted. The 0xSSSS and
0xssss fields are hexadecimal representations of unsigned integers:
the change count and the modification number, respectively.

Implementations must support 16 bit change counts and should support
longer ones (32, 64, or 128 bits).
An example CSN would be "1998081018:44:31z#0x000F#1#0x0000". The
update assigned this CSN would have been applied at time
1998081018:44:31z, happened to be the 16th operation applied
in that second, was made against the replica with identifier '1', and
was the first modification of the operation that caused the change.
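A parser for the preferred representation can be sketched as follows
(the function name is hypothetical, not part of the specification):

```python
# Sketch: parsing the preferred textual CSN representation
#   yyyymmddhh:mi:ssz#0xSSSS#replica id#0xssss
# into its four components.
def parse_csn(text):
    time_part, count_part, replica_id, mod_part = text.strip().split("#")
    if not time_part.endswith("z"):
        raise ValueError("CSN time must be expressed in GMT ('z' suffix)")
    change_count = int(count_part, 16)  # e.g. '0x000F' -> 15
    mod_number = int(mod_part, 16)      # e.g. '0x0000' -> 0
    return time_part, change_count, replica_id, mod_number
```

Applied to the example above, this yields change count 15 (the 16th
operation, counting from zero), replica identifier '1', and
modification number 0.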
4.5.3 CSN Generation

Because Change Sequence Numbers are primarily based on timestamps,
clock differences between servers can cause unexpected change
ordering. The synchronization of server clocks is not required, though
it is preferable that clocks are accurate. If timestamps are not
accurate, and a server consistently produces timestamps which are
significantly older than those of other servers, its updates will not
have effect and the real world time ordering of updates will not be
preserved.

However, an implementation may choose to require clock
synchronisation. The Network Time Protocol [NTP] [SNTP] offers a
protocol means by which heterogeneous server hosts may be time
synchronised.
The modifications that make up an LDAP Modify operation are presented
in a sequence. This sequence must be preserved when the resultant
changes of the operation are replicated.
4.5.3.1 CSN Generation - Log Based Implementation

The modification number component may not be required, since the
ordering of the modifications within an LDAP Modify operation has
been preserved in the operation log.
4.5.3.2 CSN Generation - State Based Implementation

The modification number component may be needed to ensure that the
order of the modifications within an LDAP Modify operation is
faithfully replicated.
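A per-replica CSN generator following the rules of sections 4.5.1 and
4.5.3 might look like the sketch below. The clock source, counter
width, and class name are implementation choices assumed for
illustration:

```python
# Sketch: CSN generation at a single replica. The change count is
# reset to zero at the start of each second and increases
# monotonically within that second.
import time

class CSNGenerator:
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self._current_second = -1
        self._change_count = 0

    def next_csn(self, modification_number=0):
        now = int(time.time())  # one-second granularity
        if now != self._current_second:
            # New second: reset the per-second change count.
            self._current_second = now
            self._change_count = 0
        else:
            # Same second: increment for each update operation.
            self._change_count += 1
        return (now, self._change_count, self.replica_id,
                modification_number)
```

A Modify operation would call next_csn once per modification, passing
an increasing modification_number to preserve the order of the
modifications within the operation.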
4.6 State Change Information

State changes can be introduced via either LDAP Update Operations or
via Replication Updates. A CSN is included with all changes made to an
entry, its attributes, and attribute values. This state information
must be recorded for the entry to enable a total ordering of updates.
The CSN recorded is the CSN assigned to the state change at the server
where the state change was first made. CSNs are only assigned to state
changes that originate from LDAP Update Operations.
Each of the LDAP Update Operations changes its target entry in a
different way, and records the CSN of the change differently. The
state information for the resultant state changes is recorded at
three levels: the entry level, the attribute level, and the attribute
value level. The state change may be shown through:

1) The creation of a deletion CSN for the entry, an attribute, or an
attribute value.

2) The addition of a new entry, attribute, or attribute value, and
its creation CSN.

3) An update to an existing attribute, attribute value, entry
distinguished name, or entry superior name, and its update CSN.
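The three levels of state just listed can be modelled as nested
records. The draft does not prescribe an internal representation;
these record shapes and names are assumptions made for illustration:

```python
# Sketch: per-entry state information at the three levels described in
# section 4.6. The internal representation is an implementation choice.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class ValueState:
    modification_csn: str              # CSN of the change that last set the value
    deleted_csn: Optional[str] = None  # set when this value is deleted

@dataclass
class AttributeState:
    deletion_csn: Optional[str] = None          # set when all values are deleted
    values: Dict[str, ValueState] = field(default_factory=dict)

@dataclass
class EntryState:
    created_entry_csn: str                      # the 'createdEntryCSN' attribute
    deleted_entry_csn: Optional[str] = None     # the 'deletedEntryCSN' attribute
    rdn_csn: Optional[str] = None               # CSN recorded for the RDN
    superior_csn: Optional[str] = None          # CSN recorded for the Superior DN
    attributes: Dict[str, AttributeState] = field(default_factory=dict)
```

Only the entry-level CSNs have a visible representation as operational
attributes; the attribute-level and value-level state is stored
internally by the server, as the following subsections describe.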
4.6.1 Entry Change State Storage and Representation

When an entry is created, with the LDAP Add operation, the CSN of the
change is added to the entry as the value of an operational attribute
859 named 'createdEntryCSN', of syntax type LDAPChangeSequenceNumber.
861 createdEntryCSN ::= csn
863 Deleted entries are marked as deleted by the addition of the object
864 class 'deletedEntry'. The attribute 'deletedEntryCSN', of syntax type
865 LDAP Change Sequence Number, is added to record where and when the
866 entry was deleted. Deleted entries are not visible to LDAP clients -
867 they may not be read, they don't appear in lists or search results,
868 and they may not be changed once deleted. Names of deleted entries
869 are available for reuse by new entries immediately after the deleted
870 entry is so marked. It may be desirable to allow deleted entries to be
871 accessed and manipulated by management and data recovery applications,
872 but that is outside the scope of this document.
deletedEntryCSN ::= csn
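The deleted-entry behaviour described above can be sketched,
non-normatively, as follows; the function names and the dictionary
representation of an entry are assumptions of this illustration.

```python
# A minimal sketch (not the normative behaviour) of marking an entry
# as deleted rather than physically removing it; the object class and
# attribute names follow the text above.
def delete_entry(entry: dict, csn: str) -> None:
    """Mark `entry` deleted: add the 'deletedEntry' object class and
    record where and when it was deleted in 'deletedEntryCSN'."""
    entry.setdefault("objectClass", []).append("deletedEntry")
    entry["deletedEntryCSN"] = csn

def visible_to_clients(entry: dict) -> bool:
    # Deleted entries never appear in reads, lists, or search results.
    return "deletedEntry" not in entry.get("objectClass", [])

e = {"objectClass": ["person"], "cn": ["Babs"]}
assert visible_to_clients(e)
delete_entry(e, "1998081018:44:31z#0x000F#1#0x0000")
assert not visible_to_clients(e)
```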
A CSN is recorded for both the RDN and the Superior DN of the entry.

Attribute Change State Storage

When all values of an attribute have been deleted, the attribute is
marked as deleted and the CSN of the deletion is recorded. The deleted
state and CSN are stored by the server, but have no representation on
the entry, and may not be the subject of a search operation. This
state information must be stored to enable the Update Resolution
Procedures to be performed.

Attribute Value Change State Storage
The Modification CSN for each value is to be set by the server when it
accepts a modification request for the value, or when a new value with
a later Modification CSN is received via Replication. The modified
value and the Modification CSN changes are required to be atomic, so
that the value and its Modification CSN cannot be out of sync on a
given server. The state information is stored by the server, but it
has no representation on the entry, and may not be the subject of a
search operation.

When the value of an attribute is deleted the state of its deletion
must be recorded, with the CSN of the modifying change. It must be
stored to enable the Update Resolution Procedures to be performed.
4.7 LDAP Update Operations
The server must reject LDAP client update operations with a CSN that
is older than the state information that would be replaced if the
operation were performed. This could occur in a replication topology
where the difference between the clocks of updateable replicas was too
large. Result code 72, serverClocksOutOfSync, is returned to the
client.
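The rejection rule above can be sketched non-normatively as follows;
the CSN is simplified to a comparable tuple, and the function name is
an assumption of this illustration (only the result code value comes
from the text).

```python
# Hedged sketch of the clock-skew rejection rule: if a client update
# would be assigned a CSN no newer than the state it replaces, refuse
# it rather than let an "older" change overwrite newer state.
SERVER_CLOCKS_OUT_OF_SYNC = 72  # result code named in the text above

def try_update(stored_csn, new_csn):
    """Return (success, result_code_or_None) for a client update."""
    if new_csn <= stored_csn:
        # The local clock lags another updateable replica too far.
        return False, SERVER_CLOCKS_OUT_OF_SYNC
    return True, None

ok, code = try_update(("1998081019:00:00z", 0, 2),
                      ("1998081018:44:31z", 1, 1))
assert (ok, code) == (False, 72)
```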
This section describes the object classes of the entries that
represent the replication topology. The operational information for
replication is administered through these entries. The LDUP Working
Group will work towards defining an Internet standard to fully detail
all these schema elements.
5.1 Entries, Semantics and Relationships

This section defines the organization of operational data for
directory replication in terms of the relative placement of the
entries that represent Naming Contexts, their Replicas, and their
associated Replication Agreements. This section also describes the
purpose of these objects and abstractly describes their content.

A Naming Context defines an area of the DIT with independent
replication policies. There are many mechanisms available to identify
the set of Naming Contexts in a Directory, including through special
auxiliary classes or through operational attributes in the root DSE
pointing to such entries. The LDUP information model standards will
detail an appropriate mechanism.
Entries representing the set of Replicas associated with a Naming
Context are created immediately below the Naming Context entry, as its
children. Replica entries are defined as subentries and are intended
to hold attributes that identify the Replica's LDAP Access Point, its
Replica Type, and, if it is a Fractional Replica, the attributes it
does or does not hold. The attribute value of the entry's Relative
Distinguished Name (RDN) is termed the Replica Identifier and is used
as a component of each CSN associated with the replica.
Immediately subordinate to each Replica subentry are the entries
representing the Replication Agreements between this replica and
another replica on some other server in the network. A Replication
Agreement entry is associated with exactly one remote replica. These
entries are defined to hold attributes identifying the remote Replica
associated with this agreement, and the scheduling policy for
replication operations, including times when replication is to be
performed, times when it is not to be performed, and the policies
governing event-driven replication initiation.
5.2 Root DSE Attributes

The LDUP information model will define Root DSE attributes to identify
the set of Naming Contexts and replicas present in an LDAP server.
The LDUP Information Model will define schema elements for
representing configuration and policy information common to all
replicas of the Naming Context. Attributes for recording the location
and time of creation of naming contexts may also be identified by the
LDUP information model.
In the future, LDAP Access Control standards will define mechanisms
for identifying the ACL policy associated with a Naming Context, as
well as the syntax and semantics of its representation.
5.4 Replica Object Class and Entries

Each Replica is characterized by a replica type. This may be Primary,
Updatable, or Read-Only. The latter two types may be further defined
as being Fractional. The Replica entry will include a Fractional
Entry Specification for a Fractional Replica.

There is a need to represent the network addresses of servers holding
replicas participating in Replication Agreements. For this, the LDUP
information model will define an attribute with an appropriate syntax
to represent the LDAP server address with which to replicate.
An Update Vector describes the point to which the Replica has been
updated, with respect to all the other Replicas of the Naming Context.
The vector is used at the initiation of a replication session to
determine the sequence of updates that should be transferred.

Enabling LDAP to be a fully distributed service is not an objective of
the LDUP information model design, though the information stored in
replica entries could facilitate certain distributed operations.
5.5 Lost and Found Entry

When replicating operations between servers, conflicts may arise that
cause a parent entry to be removed, leaving its child entries
orphaned. In this case the Update Resolution Procedures will make the
Lost and Found Entry the child's new superior.

Each Replica Entry names its Lost and Found Entry, which would usually
be an entry below the Replica Entry itself. This well known place
allows administrators, and their tools, to find and repair orphaned
entries.
5.6 Replication Agreement Object Class and Entries

The Replication Agreement defines:

1. The schedule for Replication Session initiation.

2. The server that initiates the Replication Session, either the
Consumer or the Supplier.

3. The authentication credentials that will be presented between the
two servers.

4. The network/transport security scheme that will be employed in
order to ensure data confidentiality.

5. The replication protocols and relevant protocol parameters to be
used for Full and Incremental updates. An OID is used to identify
the update transfer protocol, thus allowing for future extensions
or bilaterally agreed upon alternatives.

6. If the Replica is Fractional, the Fractional Entry Specification
for the attributes to be included or excluded.
Permission to participate in replication sessions will be controlled,
at least in part, by the presence and content of replica agreements.

The Supplier must be subject to the access control policy enforced by
the Consumer. Since the access control policy information is stored
and replicated as directory content, the access control imposed on the
Supplier by the Consumer must be stored in the Consumer's Replication
Agreement.
Replication Schedule

There are two broad mechanisms for initiating replication sessions:
(1) scheduled event driven and (2) change event driven. The mechanism
used to schedule replication operations between two servers is
determined by the Schedule information that is part of the Replication
Agreement governing the Replicas on those two servers. Because each
Replication Agreement describes the policy for one direction of the
relationship, it is possible that updates propagate via scheduled
events in one direction, and by change events in the other.

Change event driven replication sessions are, by their nature,
initiated by suppliers of change information. The server against which
the change is made schedules a replication session in response to the
change itself, so that notification of the change is passed on to its
consumers.
Scheduled event driven replication sessions can be initiated by either
consumers or suppliers of change information. The schedule defines a
calendar of time periods during which Replication Sessions should be
initiated.

Schedule information may include both scheduled and change event
driven mechanisms. For instance, one such policy may be to begin
replication within 15 seconds of any change event, or every 30 minutes
if no change events are received.
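The combined policy quoted above can be sketched, non-normatively, as
a small scheduling function; the function name and the representation
of times as seconds are assumptions of this illustration, and only the
15-second and 30-minute constants come from the example policy.

```python
# Illustrative sketch of a combined schedule: replicate within 15
# seconds of a change event, or every 30 minutes when idle.
def next_session_time(last_session, last_change):
    """Return the time of the next replication session.
    `last_change` is None when no change event is pending."""
    CHANGE_DELAY = 15.0         # seconds after a change event
    IDLE_INTERVAL = 30 * 60.0   # periodic fallback with no changes
    if last_change is not None and last_change >= last_session:
        return last_change + CHANGE_DELAY
    return last_session + IDLE_INTERVAL

# A change at t=100 schedules a session at t=115; with no pending
# change, the next session falls 30 minutes after the previous one.
assert next_session_time(0.0, 100.0) == 115.0
assert next_session_time(0.0, None) == 1800.0
```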
6 Policy Information

Administrative policy information governs the behavior of the server.
This policy information needs to be consistently known and applied by
all replicas of a Naming Context. It may be represented in the DIT as
subentries, attributes, and attribute values. Auxiliary classes are a
convenient way to hold such policy information and to uniformly
replicate it among all the replicas. For a naming context to be
faithfully reproduced, all applicable prescriptive policy information
represented among its ancestral entries must also be replicated. In
all cases such policy information is transmitted as if it were an
element of the Replica root entry.

Policy information is always replicated in the same manner as any
other entries, attributes, and attribute values.
6.1 Schema Knowledge

Schema subentries should be subordinate to the naming contexts to
which they apply. Given our model, a single server may hold replicas
of several naming contexts. It is therefore essential that schema
should not be considered to be a server-wide policy, but rather to be
scoped by the namespace to which it applies.

Schema modifications replicate in the same manner as other directory
data. Given the strict ordering of replication events, schema
modifications will naturally be replicated prior to entry creations
which use them, and subsequent to data deletions which eliminate
references to schema elements to be deleted. Servers MUST NOT
replicate information about entries which are not defined in the
schema. Servers should not replicate modifications to existing schema
definitions for which there are existing entries and/or attributes
which rely on the schema element.

Should a schema change cause an entry to be in violation of the new
schema, it is recommended that the server preserve the entry for
administrative repair. The server could add a known object class to
make the entry valid and to mark the entry for maintenance.
7 LDUP Update Transfer Protocol Framework

A Replication Session occurs between a Supplier server and Consumer
server over an LDAP connection. This section describes the process by
which a Replication Session is initiated, started and stopped.

The session initiator, termed the Initiator, could be either the
Supplier or Consumer. The Initiator sends an LDAP extended operation
to the Responder identifying the replication agreement being acted on.
The Supplier then sends a sequence of updates to the Consumer.
All transfers are in one direction only. A two-way exchange requires
two replication sessions; one session in each direction.
7.1 Replication Session Initiation

The Initiator starts the Replication Session by opening an LDAP
connection to its Responder. The Initiator binds using the
authentication credentials provided in the Replication Agreement. The
LDUP Update Transfer Protocol will define the LDAP extended operation
the Initiator should perform to initialize an LDUP session. For the
sake of convenience, this extended LDAP operation for initializing a
replication session is referred to as the "Start Replication"
operation. Among other things, this operation will identify the role
each server will perform, and what type of replication is to be
performed.

One server is to be the Consumer, the other the Supplier, and the
replication may be either Full or Incremental.
The initiation of a Replication Session is to be restricted to
privileged clients. The identity and the credentials of clients
eligible for initiating a replication session will be defined as
attributes within Replication Agreements.
The Consumer binds to the Supplier using the authentication
credentials provided in the Replication Agreement. The Consumer sends
the "Start Replication" extended request to begin the Replication
Session. The Supplier returns a "Start Replication" extended response
containing a response code. The Consumer then disconnects from the
Supplier. If the Supplier has agreed to the replication session
initiation, it binds to the Consumer and behaves just as if the
Supplier initiated the replication.
The Supplier binds to the Consumer using the authentication
credentials provided in the Replication Agreement. The Supplier sends
the "Start Replication" extended request to begin the Replication
Session. The Consumer returns a "Start Replication" extended response
containing a response code, and possibly its Update Vector. If the
Consumer has agreed to the Replication Session initiation, then the
transfer protocol begins.
7.2 Start Replication Session

Start Replication Request
The LDUP Update Transfer Protocol would define an LDAP Extended
Request, referred to in this document as the "Start Replication
Request", that is sent from the Initiator to the Responder. The
parameters of the "Start Replication Request" would identify the
Replication Agreement associated with the session, the Update Transfer
Protocol associated with the replication session, and other state
information necessary to resume replication between the two servers.
Start Replication Response

The LDUP Update Transfer Protocol would define an LDAP Extended
Response, the "Start Replication Response", sent in reply to a Start
Replication Request, from the Responder to the Initiator. The
parameters of the Start Replication Response include a response code
and an optional Update Vector.
Each Update Transfer Protocol is identified by an OID. An LDUP
conformant server implementation must support those update protocols
defined as mandatory in the Update Transfer Protocol standard, and may
support many others. A server will advertise its protocols in the Root
DSE multi-valued attribute 'supportedReplicationProtocols'.
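The protocol selection implied above can be sketched non-normatively:
an initiator reads the responder's 'supportedReplicationProtocols'
values and picks the first mutually supported OID. The function name
and the OID values are placeholders, not registered identifiers.

```python
# Hedged sketch of update transfer protocol selection by OID.
def choose_protocol(local_prefs, remote_supported):
    """Return the first locally preferred OID that the remote server
    advertises in 'supportedReplicationProtocols', or None."""
    remote = set(remote_supported)
    for oid in local_prefs:      # local preference order wins
        if oid in remote:
            return oid
    return None                  # no protocol in common

assert choose_protocol(["1.3.6.1.4.1.0.1", "1.3.6.1.4.1.0.2"],
                       ["1.3.6.1.4.1.0.2"]) == "1.3.6.1.4.1.0.2"
```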
The Update Transfer Protocol would define the mechanisms for a
Consumer to receive a complete (full) update or incremental update
based on the current state of replication represented in the Update
Vector. A full update is necessary for initializing a consumer
replica upon establishment of replication agreements.
7.4 End Replication Session

A Replication Session is terminated by the "End Replication Request"
initiated by the Supplier. The purpose of this request and response is
to secure the state of the Update Vector associated with the two
replicas that participated in replication. This is necessary for
proper resumption of replication during subsequent LDUP sessions.
7.5 Integrity & Confidentiality

Data integrity (i.e., protection from unintended changes) and
confidentiality (i.e., protection from unintended disclosure to
eavesdroppers) SHOULD be provided by appropriate selection of
underlying transports, for instance TLS or IPSEC. Replication MUST be
supported across TLS LDAP connections. Servers MAY be configured to
refuse replication connections over unprotected TCP connections.
8 LDUP Update Protocols

This Internet-Draft defines two transfer protocols for the supplier to
push changes to the consumer. Other protocols could be defined to
transfer changes, including those which pull changes from the supplier
to the consumer, but those are left for future work.
8.1 Replication Updates and Update Primitives

Both LDUP Update Protocols define how Replication Updates are
transferred from the Supplier to the Consumer. Each Replication Update
consists of a set of Update Primitives that describe the state changes
that have been made to a single entry. Each Replication Update is
associated with a single entry identified by its UUID.
The Update Transfer Protocol would define a set of Update Primitives,
each of which codifies an assertion about the state change of an entry
that resulted from a directory update operation. The primitives will
include sufficient data to allow recreation of the corresponding state
changes on the consumer's replica. An assertion based approach has
been chosen so that the Primitives are idempotent, meaning that
re-application of a Primitive to an Entry to which it has already been
applied will cause no further change to the entry. This is desirable
as it provides some resilience against some kinds of system failures.
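The assertion-based, idempotent style described above can be sketched
non-normatively; the primitive shown, its name, and the dictionary
representation of entry state are assumptions of this illustration.

```python
# Sketch of an assertion-based primitive: it asserts a final state
# ("value v of attribute a has CSN c") rather than describing a
# delta, so re-applying it leaves the entry unchanged.
def apply_primitive(entry: dict, attr: str, value: str, csn: str):
    """Assert that (attr, value) exists with modification CSN `csn`."""
    values = entry.setdefault(attr, {})   # value -> CSN
    if values.get(value, "") < csn:       # state only moves forward
        values[value] = csn

e = {}
apply_primitive(e, "cn", "Babs", "1998081018:44:31z#1")
snapshot = {k: dict(v) for k, v in e.items()}
apply_primitive(e, "cn", "Babs", "1998081018:44:31z#1")  # re-apply
assert e == snapshot  # no further change: the primitive is idempotent
```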
Each Update Primitive contains a CSN that represents an ordering among
all such primitives generated anywhere in the network. This ordering
information is used by the consumer to reconcile primitives that would
otherwise lead to consistency violations.
8.2 Fractional Updates

When fully populating, or incrementally bringing up to date, a
Fractional Replica, each of the Replication Updates must only contain
updates to the attributes in the Fractional Entry Specification.
9 LDUP Full Update Transfer Protocol

9.1 Full Update Transfer
This Full Update Protocol provides a bulk transfer of the replica
contents for the initial population of new replicas, and the
refreshing of existing replicas. The LDUP Update Transfer Protocol
standard will define the ways in which this transfer is initiated.

The Consumer must replace its entire replica contents with the
contents sent by the Supplier.

The Consumer need not service any requests for this Naming Context
whilst the full update is in progress. The Consumer could instead
return a referral to another replica, possibly the supplier.
9.2 Replication Update Generation

The entire state of a Replicated Area can be mapped onto a sequence of
Replication Updates, each of which contains a sequence of Update
Primitives that describe the entire state of a single entry.

The sequence of Replication Updates must be ordered such that no entry
is created before its parent.
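One simple way to satisfy the ordering requirement above (an
illustration, not a mandated mechanism) is to emit entries in order of
increasing DN depth, so every parent precedes its children.

```python
# Non-normative sketch: order a full update so that no entry is
# created before its parent, by sorting DNs on their number of RDN
# separators (shallower DNs first). Assumes simple comma-separated
# DNs with no escaped commas.
def full_update_order(dns):
    return sorted(dns, key=lambda dn: dn.count(","))

dns = ["cn=a,ou=people,o=example", "o=example", "ou=people,o=example"]
assert full_update_order(dns) == [
    "o=example", "ou=people,o=example", "cn=a,ou=people,o=example"]
```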
9.3 Replication Update Consumption

A Consumer will receive the Replication Updates, extract the sequence
of Update Primitives, and must apply them to the DIB in the order
provided.
9.4 Full Update, End Replication Session

A Full Update should also result in the replication of all appropriate
LDUP meta data (which is part of the replicated naming context), such
as the sub-entry representing the Replica being updated and the Update
Vector associated with it.

The Supplier could be accepting updates whilst the update is in
progress. Once the Full Update has completed, an Incremental Update
should be performed to transfer these changes.
9.5 Interrupted Transmission
If the Replication Session terminates before the End Replication
Request is sent, then the Replica could be in an inconsistent state.
Until the replica is restored to a consistent state, the consumer
might not permit LDAP Clients to access the incomplete replica. The
Consumer could refer the Client to the Supplier Replica, or return an
error result code.
10 LDUP Incremental Update Transfer Protocol

For efficiency, the Incremental Update Protocol transmits only those
changes that have been made to the Supplier replica that the Consumer
has not already received. In a replication topology with transitive
redundant replication agreements, changes may propagate through the
replica network via different routes.
The Consumer must not support concurrent replication sessions with
more than one Supplier for the same Naming Context. A Supplier that
attempts to initiate a Replication Session with a Consumer already
participating as a Consumer in another Replication Session will
receive an appropriate error.
10.1 Update Vector

The Supplier uses the Consumer's Update Vector to determine the
sequence of updates that should be sent to the Consumer.
Each Replica entry includes an Update Vector to record the point to
which the replica has been updated. The vector is a set of CSN values,
one value for each known updateable Replica. Each CSN value in the
vector corresponds to the most recent change that occurred in an
updateable replica that has been replicated to the replica whose
replication state this Update Vector represents.
For example, consider two updatable replicas of a naming context, one
is assigned replica identifier '1', the other replica identifier '2'.
Each is responsible for maintaining its own update vector, which will
contain two CSNs, one for each replica. So, if both replicas are
identical they will have equivalent update vectors.

Both Update Vectors =

{1998081018:44:31z#0x000F#1#0x0000, 1998081018:51:20z#0x0001#2#0x0000}

Subsequently, at 7pm, an update is applied to replica '2', so its
update vector is updated.

Replica '1' Update Vector =

{1998081018:44:31z#0x000F#1#0x0000, 1998081018:51:20z#0x0001#2#0x0000}
Replica '2' Update Vector =

{1998081018:44:31z#0x000F#1#0x0000, 1998081019:00:00z#0x0000#2#0x0000}
Since the Update Vector records the state to which the replica has
been updated, a supplier server, during Replication Session
initiation, can determine the sequence of updates that should be sent
to the consumer. From the example above, no updates need to be sent
from replica '1' to replica '2', but there is an update pending from
replica '2' to replica '1'.
Because the Update Vector embodies knowledge of updates made at all
known replicas it supports replication topologies that include
transitive and redundant connections between replicas. It ensures that
changes are not transferred to a consumer multiple times even though
redundant replication agreements may exist. It also ensures that
updates are passed across the replication network between replicas
that are not directly linked to each other.
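The supplier-side use of the consumer's Update Vector can be sketched
non-normatively: send a change only if its CSN is newer than the
consumer's recorded CSN for the change's originating replica. The
function name is an assumption; CSNs are simplified to comparable
strings, and the data reuses the example timestamps above.

```python
# Hedged sketch of incremental update selection from an Update Vector.
def updates_to_send(supplier_log, consumer_vector):
    """supplier_log: (originating_replica_id, csn) pairs in order.
    consumer_vector: replica_id -> most recent CSN seen by consumer."""
    return [(rid, csn) for rid, csn in supplier_log
            if csn > consumer_vector.get(rid, "")]

consumer_vector = {1: "1998081018:44:31z", 2: "1998081018:51:20z"}
supplier_log = [(2, "1998081018:51:20z"),   # consumer already has it
                (2, "1998081019:00:00z")]   # the 7pm update is pending
assert updates_to_send(supplier_log, consumer_vector) == [
    (2, "1998081019:00:00z")]
```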
It may be the case that a CSN for a given replica is absent, for one
of the following reasons:

1. CSNs for Read-Only replicas might be absent because no changes will
have ever been applied to that Replica, so there are no changes to
propagate.

2. CSNs for newly created replicas may be absent because no changes to
that replica have yet been propagated.
An Update Vector might also contain a CSN for a replica that no longer
exists. The replica may have been temporarily taken out of service, or
may have been removed from the replication topology permanently. An
implementation may choose to retire a CSN after some configurable time
period.
10.2 Supplier Initiated, Incremental Update, Start Replication Session

The Consumer Responder must return its Update Vector to the Supplier
Initiator. The Supplier uses this to determine the sequence of
Replication Updates that need to be sent to the Consumer.
10.3 Replication Update Generation

The Supplier generates a sequence of Replication Updates to be sent to
the consumer. To enforce LDAP Constraint 20.1.6, that an LDAP Modify
must be applied atomically, each Replication Update must contain the
entire sequence of Update Primitives for every LDAP Operation for
which it contains any Update Primitives. Stated less formally, for
each primitive the update contains, it must also contain all the other
primitives that came from the same operation.
10.3.1 Replication Log Implementation
A log-based implementation might take the approach of mapping LDAP
Operations onto an equivalent sequence of Update Primitives. A
systematic procedure for achieving this will be fully described in the
standard document defining Update Reconciliation Procedures.

The Consumer Update Vector is used to determine the sequence of LDAP
Operations in the operation log that the Consumer has not yet seen.
10.3.2 State-Based Implementation

A state-based implementation might consider each entry of the replica
in turn, using the Update Vector of the consumer to find all the state
changes that need to be transferred. Each state change (entry,
attribute, or value - creation, deletion, or update) is mapped onto
the equivalent Update Primitive. All the Update Primitives for a
single entry might be collected into a single Replication Update.
Consequently, it could contain the resultant primitives of many LDAP
Operations.
10.4 Replication Update Consumption

A Consumer will receive Replication Updates, extract the sequence of
Update Primitives, and must apply them to the DIB in the order
provided. LDAP Constraint 20.1.6 states that the modifications within
an LDAP Modify operation must be applied in the sequence provided.

Those Update Primitives must be reconciled with the current replica
contents and any previously received updates. In short, updates are
compared to the state information associated with the item being
operated on. If the change has a more recent CSN, then it is applied
to the directory contents. If the change has an older CSN it is no
longer relevant and must not be applied.

If the consumer acts as a supplier to other replicas then the updates
are retained for forwarding.
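The CSN comparison described above can be sketched non-normatively as
follows; the function name and the dictionary representation of an
item's state information are assumptions of this illustration, and
CSNs are simplified to comparable strings.

```python
# Minimal sketch of reconciliation: a replicated change is applied
# only if its CSN is more recent than the state information recorded
# for the item it operates on; otherwise it is discarded as stale.
def reconcile(item_state: dict, change_csn: str, new_value: str):
    """Apply the change iff it is newer; return whether it applied."""
    if change_csn > item_state.get("csn", ""):
        item_state["value"] = new_value
        item_state["csn"] = change_csn
        return True
    return False    # older change: no longer relevant

state = {"value": "blue", "csn": "1998081019:00:00z#2"}
assert reconcile(state, "1998081018:44:31z#1", "red") is False  # stale
assert state["value"] == "blue"
assert reconcile(state, "1998081019:05:00z#1", "green") is True
```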
10.5 Update Resolution Procedures

The LDAP Update Operations must abide by the constraints imposed by
the LDAP Data Model and LDAP Operational Behaviour, Appendix A. An
operation that would violate at least one of these constraints is
rejected with an error result code.
The loose consistency model of this replication architecture and its
support for multiple updateable replicas of a naming context mean that
LDAP Update Operations could be valid at one replica, but not at
another. At the time of acceptance, the accepting replica may not have
received other updates that would cause a constraint to be violated,
and the operation to be rejected.
Replication Updates must never be rejected because of a violation of
an LDAP Constraint. If the result of applying the Replication Update
causes a constraint violation to occur, then some remedial action must
be taken to satisfy the constraint. These Update Resolution Procedures
are introduced here, and will be fully defined within the LDUP Update
Resolution Procedures standard.
10.5.1 URP: Distinguished Names

LDAP Constraints 20.1.1 and 20.1.10 ensure that each entry in the
replicated area has a unique DN. A Replication Update could violate
this constraint, producing two entries, with different unique
identifiers, but with the same DN. The resolution procedure is to
rename the most recently named entry so that its RDN includes its own
unique identifier. This ensures that the new DN of the entry shall be
unique.
10.5.2 URP: Orphaned Entries

LDAP Constraint 20.1.11 ensures that every entry must have a parent
entry. A Replication Update could violate this constraint, producing
an entry with no parent entry. The resolution procedure is to create a
Glue Entry to take the place of the absent parent. The Glue Entry's
superior will be the Lost and Found Entry. This well known place
allows administrators and their tools to find and repair abandoned
entries.
10.5.3 URP: Distinguished Not Present

LDAP Constraints 20.1.8 and 20.1.9 ensure that the components of an
RDN appear as attribute values of the entry. A Replication Update
could violate this constraint, producing an entry without its
distinguished values. The resolution procedure is to add the missing
attribute values, and mark them as distinguished not present, so that
they can be deleted when the attribute values are no longer
distinguished.
10.5.4 URP: Schema - Single Valued Attributes

LDAP Constraint 20.1.7 enforces the single-valued attribute schema
restriction. A Replication Update could violate this constraint,
creating a multi-valued single-valued attribute. The resolution
procedure is to consider the values of a single-valued attribute as
always being equal. In this way the most recently added value will be
retained, and the older one discarded.
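The "newest value wins" resolution above can be sketched
non-normatively; the function name and the (value, CSN) pair
representation are assumptions of this illustration.

```python
# Sketch of single-valued attribute conflict resolution: treat any
# two values of a single-valued attribute as "equal", so the value
# carrying the most recent CSN is retained and the older discarded.
def resolve_single_valued(current, incoming):
    """Each argument is a (value, csn) pair; keep the newest."""
    return incoming if incoming[1] > current[1] else current

kept = resolve_single_valued(("red", "1998081018:44:31z"),
                             ("blue", "1998081019:00:00z"))
assert kept == ("blue", "1998081019:00:00z")
```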
10.5.5 URP: Schema - Required Attributes

LDAP Constraint 20.1.7 enforces the schema objectclass definitions on
an entry. A Replication Update could violate this constraint, creating
an entry that does not have attribute values for required attributes.
The resolution procedure is to ignore the schema violation and mark
the entry for administrative repair.
1751 10.5.6 URP: Schema - Extra Attributes
1753 LDAP Constraints 20.1.3 and 20.1.7 enforce the schema objectclass
1754 definitions on an entry. A Replication Update could violate this
1755 constraint, creating an entry that has attribute values not allowed by
1756 the objectclass values of the entry. The resolution procedure is to
1757 ignore the schema violation and mark the entry for administrative
repair.
1762 10.5.7 URP: Duplicate Attribute Values
1764 LDAP Constraint 20.1.5 ensures that the values of an attribute
1765 constitute a set of unique values. A Replication Update could violate
1766 this constraint. The resolution procedure is to enforce this
1767 constraint, recording the most recently assigned CSN with the value.
1771 10.5.8 URP: Ancestry Graph Cycle
1773 LDAP Constraint 20.4.2.1 prevents cycles in the DIT. A
1774 Replication Update could violate this constraint, causing an entry to
1775 become its own parent, or to appear even higher in its
1776 ancestry graph. The resolution procedure is to break the cycle by
1777 changing the parent of one of the entries in the cycle to be the Lost
and Found Entry.
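The cycle-breaking procedure can be sketched as below. The dict-of-parents representation, the placeholder Lost and Found DN, and the choice of which cycle member to reparent are all illustrative assumptions.

```python
LOST_AND_FOUND = "cn=Lost and Found"   # placeholder DN, an assumption

def break_cycles(parent):
    # parent: dict mapping each entry to its parent (absent key = root).
    # Walk up the ancestry graph from every entry; revisiting a node
    # means the graph has a cycle, so reparent that entry to the Lost
    # and Found entry.  Which member of the cycle is chosen is an
    # implementation detail in this sketch.
    for start in list(parent):
        seen = set()
        node = start
        while node is not None:
            if node in seen:
                parent[node] = LOST_AND_FOUND
                break
            seen.add(node)
            node = parent.get(node)
    return parent

tree = {"a": "b", "b": "c", "c": "a"}   # a -> b -> c -> a: a cycle
break_cycles(tree)
```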
1782 10.6 Incremental Update, End Replication Session
1784 If the Supplier sent none of its own updates to the Consumer, then the
1785 Supplier's CSN within the Supplier's update vector should be updated
1786 with the earliest possible CSN that it could generate, to record the
1787 time of the last successful replication session. The Consumer will
1788 have received the Supplier's Update Vector in the replica sub-entry it
1789 holds for the Supplier replica.
1802 The Consumer's resultant Update Vector CSN values will be at least as
1803 great as those of the Supplier's Update Vector.
1805 The Supplier may request that the Consumer return its resultant Update
1806 Vector so that the Supplier can update its replica sub-entry for the
1807 Consumer Replica. The Supplier requests this by setting a flag in the
1808 End Replication Request. The default flag value is TRUE meaning the
1809 Consumer Update Vector must be returned.
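The resultant-vector property can be sketched as an element-wise merge. The Update Vector representation as a dict of replica-id to highest CSN, with CSNs as plain comparable values, is a simplifying assumption.

```python
def merge_update_vectors(consumer_uv, supplier_uv):
    # An Update Vector is modeled as a dict mapping replica-id -> the
    # highest CSN received from that replica (a simplifying assumption).
    # After a successful session the Consumer's resultant vector is, per
    # replica, at least as great as the Supplier's.
    merged = dict(consumer_uv)
    for replica, csn in supplier_uv.items():
        if replica not in merged or merged[replica] < csn:
            merged[replica] = csn
    return merged

merged = merge_update_vectors({"r1": 5, "r2": 9}, {"r1": 7, "r3": 2})
```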
1813 10.7 Interrupted Transmission
1815 If the Replication Session terminates before the End Replication
1816 Request is sent then the Consumer's Update Vector may or may not be
1817 updated to reflect the updates received. The Start Replication request
1818 includes a Replication Update Ordering flag which states whether the
1819 updates were sent in CSN order per replica.
1821 If updates are sent in CSN order per replica then it is possible to
1822 update the Consumer Update Vector to reflect that some portion of the
1823 updates sent has been received and successfully applied.
1824 The next Incremental Replication Session will pick up where the failed
session left off.
1827 If updates are not sent in CSN order per replica then the Consumer
1828 Update Vector cannot be updated. The next Incremental Replication Session
1829 will begin where the failed session began. Some updates will be
1830 replayed, but because the application of Replication Updates is
1831 idempotent they will not cause any state changes.
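The two recovery cases can be sketched as a single helper. The function name, the dict Update Vector model, and plain comparable CSNs are illustrative assumptions.

```python
def resume_point(consumer_uv, supplier_replica, applied_csns, in_csn_order):
    # consumer_uv: the Consumer's Update Vector (replica-id -> CSN).
    # applied_csns: CSNs from supplier_replica applied before the session
    # was interrupted.  If the Supplier declared CSN order per replica,
    # the vector may safely advance to the highest CSN applied, and the
    # next incremental session resumes there.  Otherwise the vector is
    # left untouched and the next session replays from where the failed
    # one began -- harmless, because applying an update is idempotent.
    if in_csn_order and applied_csns:
        current = consumer_uv.get(supplier_replica)
        highest = max(applied_csns)
        if current is None or highest > current:
            consumer_uv[supplier_replica] = highest
    return consumer_uv
```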
1835 11 Purging State Information
1838 The state information stored with each entry need not be stored
1839 indefinitely. A server implementation may choose to periodically, or
1840 continuously, remove state information that is no longer required. The
1841 mechanism is implementation-dependent, but to ensure interoperability
1842 between implementations, the state information must not be purged
1843 until all known replicas have received and acknowledged the change
1844 associated with a CSN. This is determined from the Purge Vector,
described below.

1847 All the CSNs stored that are lower than the Purge Vector may be
1848 purged, because no changes with older CSNs can be replicated to this
replica.
11.1 The Purge Vector

1855 The Purge Vector is an Update Vector constructed from the Update
1856 Vectors of all known replicas. Each replica has a sub-entry for each
1868 known replica stored below its naming context. Each of those entries
1869 contains the last known update vector for that replica. The lowest CSN
1870 for each replica is taken from these update vectors to form the Purge
1871 Vector. The Purge Vector is used to determine when state information
1872 and updates need no longer be stored.
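The Purge Vector construction can be sketched as a per-replica minimum. For brevity this sketch assumes every Update Vector lists every replica; the dict model and plain comparable CSNs are also assumptions.

```python
def purge_vector(update_vectors):
    # update_vectors: the last known Update Vector of every known
    # replica, each mapping replica-id -> highest CSN received from that
    # replica.  Taking the per-replica minimum yields the Purge Vector:
    # everything at or below it has been seen by all known replicas, so
    # older state information need no longer be stored.
    replicas = update_vectors[0].keys()
    return {r: min(uv[r] for uv in update_vectors) for r in replicas}

pv = purge_vector([{"r1": 5, "r2": 9}, {"r1": 3, "r2": 11}])
```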
1876 11.2 Purging Deleted Entries, Attributes, and Attribute Values
1878 The following conditions must hold before an item can be deleted from
1879 the Directory Information Base.
1881 1) The LDAP delete operation has been propagated to all replicas.
1884 2) All the updates from all the other replicas with CSNs less than the
1885 CSN on the deletion have been propagated to the server holding the
1886 deleted entry (similarly for deleted attributes and attribute values).
1888 3) The CSN generator of the other Replicas must have advanced beyond
1889 the deletion CSN of the deleted entry. Otherwise, it is possible for
1890 one of those Replicas to generate operations with CSNs earlier than
the deletion CSN.
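The three conditions above can be sketched as one check. Representing the other replicas' CSN generator positions as a simple list of clocks, and CSNs as plain comparable values, are illustrative assumptions.

```python
def can_purge(deletion_csn, purge_vec, other_replica_clocks):
    # Conditions 1 and 2: the delete, and every update below it, has
    # reached all known replicas -- i.e. the deletion CSN does not exceed
    # any replica's entry in the Purge Vector.
    propagated = all(csn >= deletion_csn for csn in purge_vec.values())
    # Condition 3: every other replica's CSN generator has advanced past
    # the deletion CSN, so none can still emit an earlier CSN.
    advanced = all(clock > deletion_csn for clock in other_replica_clocks)
    return propagated and advanced
```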
1894 12 Replication Configuration and Management
1897 Replication management entries, such as replica or replication
1898 agreement entries, can be altered on any updateable replica. These
1899 entries are implicitly included in the directory entries governed by
1900 any agreement associated with this naming context. As a result, all
1901 servers with a replica of a naming context will have access to
1902 information about all other replicas and associated agreements.
1904 The deployment and maintenance of a replicated directory network
1905 involves the creation and management of all the replicas of a naming
1906 context and replication agreements among these replicas. This section
1907 outlines, through an example, the administrative actions necessary to
1908 create a new replica and establish replication agreements. Typically,
1909 administrative tools will guide the administrator and facilitate these
1910 actions. The objective of this example is to illustrate the
1911 architectural relationships among the various replication-related
1912 operational information.
1914 A copy of an agreement should exist on both the supplier and consumer
1915 side for the replication update transfer protocol to be able to start.
1916 For this purpose, the root of the naming context, replica objects and
1917 the replication agreement objects are created first on one of the
1918 servers. A copy of these objects is then manually created on the
1919 second server associated with the agreement.
1921 The scenario below starts with a server (named DSA1) that holds an
1922 updateable replica of a naming context NC1. Procedures to establish
1923 an updateable replica of the naming context on a second server (DSA2)
are outlined below.
On DSA1:

1939 1) Add the context prefix for NC1 to the Root DSE attribute
1940 'replicaRoot' if it does not already exist.
1942 2) Alter the 'ObjectClass' attribute of the root entry of NC1 to
1943 include the "namingContext" auxiliary class.
1945 3) Create a replica object, NC1R1, (as a child of the root of NC1) to
1946 represent the replica on DSA1. The attributes include replica type
1947 (updateable, read-only etc.) and DSA1 access point information.
1949 4) Create a copy of the replica object NC1R2 (after it is created on
DSA2).
1952 5) Create a replication agreement, NC1R1-R2 to represent update
1953 transfer from NC1R1 to NC1R2. This object is a child of NC1R1.
On DSA2:

1957 1) Add NC1's context prefix to the Root DSE attribute 'replicaRoot'.
1959 2) Create the root entry of NC1 as a copy of the one on DSA1
1960 (including the namingContext auxiliary class).
1962 3) Create a copy of the replica object NC1R1
1964 4) Create a second replica object, NC1R2 (as a sibling of NC1R1) to
1965 represent the replica on DSA2.
1967 5) Create a copy of the replication agreement, NC1R1-R2
1969 6) Create a replication agreement, NC1R2-R1, to represent update
1970 transfer from NC1R2 to NC1R1. This object is a sibling of NC1R1-R2.
1973 After these actions, update transfer to satisfy either of the two
1974 agreements can commence.
1976 If data already existed in one of the replicas, the update transfer
1977 protocol should perform a complete update of the data associated with
1978 the agreement before normal replication begins.
1985 The server assigns a CSN for every LDAP update operation it receives.
1986 Since the CSN is principally based on time, the CSN is susceptible to
1987 the Replica clocks drifting in relation to each other (either forwards
or backwards).

1990 The server must never assign a CSN older than or equal to the last CSN
it assigned.
2004 The server must reject update operations, from any source, which would
2005 result in setting a CSN on an entry or a value which is earlier than
2006 the one that is there. The error code serverClocksOutOfSync (72)
is returned in this case.
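The never-go-backwards rule can be sketched as a monotonic CSN generator. The (seconds, change_count, replica_id) tuple layout is an illustrative assumption, not the CSN format defined by this draft.

```python
import time

class CSNGenerator:
    # The generator never issues a CSN lower than or equal to the
    # previous one, even if the local clock stalls or is set backwards.
    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.last = (0, 0, replica_id)

    def next_csn(self, now=None):
        now = int(time.time()) if now is None else now
        secs, count, _ = self.last
        if now > secs:
            self.last = (now, 0, self.replica_id)
        else:
            # Clock did not advance: reuse the last timestamp and bump
            # the change count so the CSN still moves forward.
            self.last = (secs, count + 1, self.replica_id)
        return self.last
```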
2010 14 Security Considerations
2013 The preceding architecture discussion covers server
2014 authentication, session confidentiality, and session integrity in
2015 sections 7.1.1 and 7.5.

2017 The Internet-Draft "Authentication Methods for LDAP" [AUTH] provides a
2018 detailed LDAP security discussion. Its introductory passage is
2019 paraphrased below.
2021 A Replication Session can be protected with the following security
services:
2024 1) Authentication by means of the SASL mechanism set, possibly backed
2025 by the TLS credentials exchange mechanism,
2027 2) Authorization by means of access control based on the Initiator's
2028 authenticated identity,
2030 3) Data integrity protection by means of the TLS protocol or data-
2031 integrity SASL mechanisms,
2033 4) Protection against snooping by means of the TLS protocol or data-
2034 encrypting SASL mechanisms.
2036 The configuration entries that represent Replication Agreements may
2037 contain authentication information. This information must never be
2038 replicated between replicas.
2040 Updates to a multi-mastered entry may collide causing the Update
2041 Resolution Procedures [10.5] to reject or reverse one of the changes
2042 to the entry. The URP algorithms resolve conflicts by using the total
2043 ordering of updates imposed by the assignment of CSNs for every
2044 operation. As a consequence, updates originating from system
2045 administrators have no priority over updates originating from regular
users.
15 Acknowledgements

2053 This document is a product of the LDUP Working Group of the IETF. The
2054 contributions of its members are greatly appreciated.
16 References

2073 [AUTH] - M. Wahl, H. Alvestrand, J. Hodges, R.L. "Bob" Morgan,
2074 "Authentication Methods for LDAP", Internet Draft, draft-ietf-ldapext-
2075 authmeth-02.txt, June 1998.
2077 [BCP-11] - R. Hovey, S. Bradner, "The Organizations Involved in the
2078 IETF Standards Process", BCP 11, RFC 2028, October 1996.
2080 [LDAPv3] - M. Wahl, S. Kille, T. Howes, "Lightweight Directory Access
2081 Protocol (v3)", RFC 2251, December 1997.
2083 [LDUP Requirements] - R. Weiser, E. Stokes, "LDAP Replication
2084 Requirements", Internet Draft, draft-weiser-replica-req-02.txt.
2087 [NTP] - D. L. Mills, "Network Time Protocol (Version 3)", RFC 1305,
March 1992.
2090 [RFC2119] - S. Bradner, "Key words for use in RFCs to Indicate
2091 Requirement Levels", RFC 2119, March 1997.
2093 [RFC2252] - M. Wahl, A. Coulbeck, T. Howes, S. Kille, "Lightweight
2094 Directory Access Protocol (v3): Attribute Syntax Definitions", RFC
2095 2252, December 1997.
2097 [SNTP] - D. L. Mills, "Simple Network Time Protocol (SNTP) Version 4
2098 for IPv4, IPv6 and OSI", RFC 2030, University of Delaware, October
1996.
2101 [TLS] - J. Hodges, R. L. "Bob" Morgan, M. Wahl, "Lightweight
2102 Directory Access Protocol (v3): Extension for Transport
2103 Layer Security", Internet draft, draft-ietf-ldapext-ldapv3-tls-01.txt.
2106 [X501] - ITU-T Recommendation X.501 (1993) | ISO/IEC 9594-2:1993,
2107 Information Technology - Open Systems Interconnection - The Directory:
Models.
2110 [X680] - ITU-T Recommendation X.680 (1994) | ISO/IEC 8824-1:1995,
2111 Information technology - Abstract Syntax Notation One (ASN.1):
2112 Specification of Basic Notation.
2114 [X525] - ITU-T Recommendation X.525 (1997) | ISO/IEC 9594-9:1997,
2115 Information Technology - Open Systems Interconnection - The Directory:
Replication.
2119 17 Intellectual Property Notice
2122 The IETF takes no position regarding the validity or scope of any
2123 intellectual property or other rights that might be claimed to
2124 pertain to the implementation or use of the technology described in
2125 this document or the extent to which any license under such rights
2138 might or might not be available; neither does it represent that it has
2139 made any effort to identify any such rights. Information on the
2140 IETF's procedures with respect to rights in standards-track and
2141 standards-related documentation can be found in BCP-11. [BCP-11]
2142 Copies of claims of rights made available for publication and any
2143 assurances of licenses to be made available, or the result of an
2144 attempt made to obtain a general license or permission for the use of
2145 such proprietary rights by implementors or users of this specification
2146 can be obtained from the IETF Secretariat.
2148 The IETF invites any interested party to bring to its attention any
2149 copyrights, patents or patent applications, or other proprietary
2150 rights which may cover technology that may be required to practice
2151 this standard. Please address the information to the IETF Executive
Director.
18 Full Copyright Statement

2158 Copyright (C) The Internet Society (1998, 1999, 2000). All Rights
Reserved.
2160 This document and translations of it may be copied and furnished to
2161 others, and derivative works that comment on or otherwise explain it
2162 or assist in its implementation may be prepared, copied, published and
2163 distributed, in whole or in part, without restriction of any kind,
2164 provided that the above copyright notice and this paragraph are
2165 included on all such copies and derivative works. However, this
2166 document itself may not be modified in any way, such as by removing
2167 the copyright notice or references to the Internet Society or other
2168 Internet organizations, except as needed for the purpose of
2169 developing Internet standards in which case the procedures for
2170 copyrights defined in the Internet Standards process must be followed,
2171 or as required to translate it into languages other than English.
2173 The limited permissions granted above are perpetual and will not be
2174 revoked by the Internet Society or its successors or assigns.
2176 This document and the information contained herein is provided on an
2177 "AS IS" basis and THE INTERNET SOCIETY AND THE INTERNET ENGINEERING
2178 TASK FORCE DISCLAIMS ALL WARRANTIES, EXPRESS OR IMPLIED, INCLUDING BUT
2179 NOT LIMITED TO ANY WARRANTY THAT THE USE OF THE INFORMATION HEREIN
2180 WILL NOT INFRINGE ANY RIGHTS OR ANY IMPLIED WARRANTIES OF
2181 MERCHANTABILITY OR FITNESS FOR A PARTICULAR PURPOSE.
19 Authors' Addresses

John Merrells
2188 Netscape Communications, Inc.
2189 501 East Middlefield Road
2193 E-mail: merrells@netscape.com
2206 Phone: +1 650-937-5739
2214 E-mail: eer@oncalldba.com
2215 Phone: +1 801-796-7065
2222 E-mail: usriniva@us.oracle.com
2223 Phone: +1 650 506 3039
2225 LDUP Engineering Mailing List: ldup-repl@external.cisco.com
2226 LDUP Working Group Mailing List: ietf-ldup@imc.org
2229 20 Appendix A - LDAP Constraints
2232 20.1 LDAP Constraints Clauses
2234 This is an enumeration of the Data Model and Operation Behaviour
2235 constraint clauses defined in RFC 2251. [LDAPv3]
2237 1) Data Model - Entries have names: one or more attribute values from
2238 the entry form its relative distinguished name (RDN), which MUST be
2239 unique among all its siblings. (p5)
2241 2) Data Model - Attributes of Entries - Each entry MUST have an
2242 objectClass attribute. (p6)
2244 3) Data Model - Attributes of Entries - Servers MUST NOT permit
2245 clients to add attributes to an entry unless those attributes are
2246 permitted by the object class definitions. (p6)
2248 4) Relationship to X.500 - This document defines LDAP in terms of
2249 X.500 as an X.500 access mechanism. An LDAP server MUST act in
2250 accordance with the X.500 (1993) series of ITU recommendations when
2251 providing the service. However, it is not required that an LDAP
2252 server make use of any X.500 protocols in providing this service,
2253 e.g. LDAP can be mapped onto any other directory system so long as
2254 the X.500 data and service model as used in LDAP is not violated in
2255 the LDAP interface. (p8)
2257 5) Elements of Protocol - Common Elements - Attribute - Each attribute
2258 value is distinct in the set (no duplicates). (p14)
2260 6) Elements of Protocol - Modify Operation - The entire list of entry
2261 modifications MUST be performed in the order they are listed, as a
2273 single atomic operation. (p33)
2275 7) Elements of Protocol - Modify Operation - While individual
2276 modifications may violate the directory schema, the resulting entry
2277 after the entire list of modifications is performed MUST conform to
2278 the requirements of the directory schema. (p33)
2280 8) Elements of Protocol - Modify Operation - The Modify Operation
2281 cannot be used to remove from an entry any of its distinguished
2282 values, those values which form the entry's relative distinguished
name.
2285 9) Elements of Protocol - Add Operation - Clients MUST include
2286 distinguished values (those forming the entry's own RDN) in this
2287 list, the objectClass attribute, and values of any mandatory
2288 attributes of the listed object classes. (p35)
2290 10) Elements of Protocol - Add Operation - The entry named in the
2291 entry field of the AddRequest MUST NOT exist for the AddRequest to
succeed.
2294 11) Elements of Protocol - Add Operation - The parent of the entry to
2295 be added MUST exist. (p35)
2297 12) Elements of Protocol - Delete Operation - ... only leaf entries
2298 (those with no subordinate entries) can be deleted with this
operation.
2301 13) Elements of Protocol - Modify DN Operation - If there was already
2302 an entry with that name [the new DN], the operation would fail.
2305 14) Elements of Protocol - Modify DN Operation - The server may not
2306 perform the operation and return an error code if the setting of
2307 the deleteoldrdn parameter would cause a schema inconsistency in
the entry.
2312 20.2 LDAP Data Model Constraints
2314 The LDAP Data Model Constraint clauses as written in RFC 2251 [LDAPv3]
2315 may be summarised as follows.
2317 a) The parent of an entry must exist. (LDAP Constraint 11 & 12.)
2319 b) The RDN of an entry is unique among all its siblings. (LDAP
Constraint 1.)
2323 entry. (LDAP Constraint 8 & 9.)
2325 d) An entry must have an objectclass attribute. (LDAP Constraint 2 &
9.)

2328 e) An entry must conform to the schema constraints. (LDAP Constraint
7.)
2343 f) Duplicate attribute values are not permitted. (LDAP Constraint 5.)
2347 20.3 LDAP Operation Behaviour Constraints
2349 The LDAP Operation Behaviour Constraint clauses as written in RFC 2251
2350 [LDAPv3] may be summarised as follows.
2352 A) The Add Operation will fail if an entry with the target DN already
2353 exists. (LDAP Constraint 10.)
2355 B) The Add Operation will fail if the entry violates data constraints:
2357 a - The parent of the entry does not exist. (LDAP Constraint 11.)
2359 b - The entry already exists. (LDAP Constraint 10.)
2361 c - The entry RDN components appear as attribute values on the
2362 entry. (LDAP Constraint 9.)
2364 d - The entry has an objectclass attribute. (LDAP Constraint 9.)
2366 e - The entry conforms to the schema constraints. (LDAP
Constraint 7.)

2369 f - The entry has no duplicated attribute values. (LDAP
Constraint 5.)
2372 C) The modifications of a Modify Operation are applied in the order
2373 presented. (LDAP Constraint 6.)
2375 D) The modifications of a Modify Operation are applied atomically.
2376 (LDAP Constraint 6.)
2378 E) A Modify Operation will fail if it results in an entry that
2379 violates data constraints:
2381 c - If it attempts to remove distinguished attribute values.
2382 (LDAP Constraint 8.)
2384 d - If it removes the objectclass attribute. (LDAP Constraint 2.)
2386 e - If it violates the schema constraints. (LDAP Constraint 7.)
2388 f - If it creates duplicate attribute values. (LDAP Constraint
5.)
2391 F) The Delete Operation will fail if it would result in a DIT that
2392 violates data constraints:
2394 a - The deleted entry must not have any children. (LDAP
Constraint 12.)
2409 G) The ModDN Operation will fail if it would result in a DIT or entry
2410 that violates data constraints:
2412 b - The new Superior entry must exist. (Derived LDAP Data Model
Constraint a.)
2415 c - An entry with the new DN must not already exist. (LDAP
Constraint 13.)
2418 d - The new RDN components do not appear as attribute values on
2419 the entry. (LDAP Constraint 9.)

2421 e - If it removes the objectclass attribute. (LDAP Constraint 2.)

2423 f - It is permitted for the operation to result in an entry that
2424 violates the schema constraints. (LDAP Constraint 14.)
2428 20.4 New LDAP Constraints
2430 The introduction of support for multi-mastered entries, by the
2431 replication scheme presented in this document, necessitates the
2432 imposition of new constraints upon the Data Model and LDAP Operation
Behaviour.
2437 20.4.1 New LDAP Data Model Constraints
2439 1) Each entry shall have a unique identifier generated by the UUID
2440 algorithm available through the 'entryUUID' operational attribute. The
2441 entryUUID attribute is single valued.
2445 20.4.2 New LDAP Operation Behaviour Constraints
2447 1) The LDAP Data Model Constraints do not prevent cycles in the
2448 ancestry graph. Existing Data Model Constraint 20.2 (a) and
2449 Operation Behaviour Constraint 20.3 (B) would prevent this in
2450 the single master case, but not in the presence of multiple
masters.
2453 2) The LDAP Data Model Constraints state that only the LDAP Modify
2454 Operation is atomic. All other LDAP Update Operations are also
2455 considered to be atomically applied to the DIB.