# Copyright 2007 The OpenLDAP Foundation, All Rights Reserved.
# COPYING RESTRICTIONS APPLY, see COPYRIGHT.

H1: Changes Since Previous Release
The following sections attempt to summarize the new features and changes in
OpenLDAP software and in this Admin Guide since the 2.3.x release.
H2: New Guide Sections

In order to make the Admin Guide more thorough and cover the majority of questions
asked on the OpenLDAP mailing lists and scenarios discussed there, we have added
the following new sections:
* {{SECT:When should I use LDAP?}}
* {{SECT:When should I not use LDAP?}}
* {{SECT:LDAP vs RDBMS}}
* {{SECT:Replication}}
* {{SECT:Maintenance}}
* {{SECT:Troubleshooting}}
* {{SECT:Changes Since Previous Release}}
* {{SECT:Configuration File Examples}}
Also, the table of contents is now 3 levels deep to ease navigation.
H2: New Features and Enhancements in 2.4

H3: Better {{B:cn=config}} functionality

There is a new slapd-config(5) manpage for the {{B:cn=config}} backend. The
original design called for auto-renaming of config entries when you insert or
delete entries with ordered names, but that was not implemented in 2.3. It is
now in 2.4. This means, e.g., if you have
> olcDatabase={1}bdb,cn=config
> olcSuffix: dc=example,dc=com
and you want to add a new subordinate, now you can ldapadd:
> olcDatabase={1}bdb,cn=config
> olcSuffix: dc=foo,dc=example,dc=com
This will insert a new BDB database in slot 1 and bump all following databases
down one, so the original BDB database will now be named:

> olcDatabase={2}bdb,cn=config
> olcSuffix: dc=example,dc=com
H3: Better {{B:cn=schema}} functionality

In 2.3 you were only able to add new schema elements, not delete or modify
existing elements. In 2.4 you can modify schema at will. (Except for the
hardcoded system schema, of course.)
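For example, once a user-defined schema entry exists under
{{B:cn=schema,cn=config}}, one of its attribute type definitions can be removed
with ldapmodify. This is only a sketch; the entry name cn={5}myschema and the
value index {2} are hypothetical and depend on your actual configuration:

> dn: cn={5}myschema,cn=schema,cn=config
> changetype: modify
> delete: olcAttributeTypes
> olcAttributeTypes: {2}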
H3: More sophisticated Syncrepl configurations

The original implementation of Syncrepl in OpenLDAP 2.2 was intended to support
multiple consumers within the same database, but that feature never worked and
was removed from OpenLDAP 2.3; you could only configure a single consumer in
any database.
In 2.4 you can configure multiple consumers in a single database. The configuration
possibilities here are quite complex and numerous. You can configure consumers
over arbitrary subtrees of a database (disjoint or overlapping). Any portion
of the database may in turn be provided to other consumers using the Syncprov
overlay. The Syncprov overlay works with any number of consumers over a single
database or over arbitrarily many glued databases.
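As a hedged sketch of one such setup (hostnames, subtrees, and credentials are
hypothetical), a single database can pull two disjoint subtrees from two
different providers and, with the Syncprov overlay, serve the combined result to
further consumers:

> database hdb
> suffix "dc=example,dc=com"
> syncrepl rid=1 provider=ldap://ldap1.example.com
>   searchbase="ou=people,dc=example,dc=com"
>   type=refreshAndPersist retry="5 5 300 5"
>   bindmethod=simple binddn="cn=repl,dc=example,dc=com" credentials=secret
> syncrepl rid=2 provider=ldap://ldap2.example.com
>   searchbase="ou=groups,dc=example,dc=com"
>   type=refreshAndPersist retry="5 5 300 5"
>   bindmethod=simple binddn="cn=repl,dc=example,dc=com" credentials=secret
> overlay syncprov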
H3: N-Way Multimaster Replication

As a consequence of the work to support multiple consumer contexts, the syncrepl
system now supports full N-Way multimaster replication with entry-level conflict
resolution. There are some important constraints, of course: in order to maintain
consistent results across all servers, you must maintain tightly synchronized
clocks across all participating servers (e.g., you must use NTP on all servers).
The entryCSNs used for replication now record timestamps with microsecond
resolution, instead of just seconds. The delta-syncrepl code has not been updated
to support multimaster usage yet; that will come later in the 2.4 cycle.
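A minimal sketch of one node in a two-node multimaster pair (hostnames and
credentials are hypothetical): each server gets a unique serverID, a syncrepl
stanza pointing at its peer, and mirror mode enabled on the database:

> serverID 1
> database hdb
> suffix "dc=example,dc=com"
> syncrepl rid=1 provider=ldap://ldap2.example.com
>   searchbase="dc=example,dc=com" type=refreshAndPersist
>   retry="5 5 300 5" bindmethod=simple
>   binddn="cn=repl,dc=example,dc=com" credentials=secret
> mirrormode on
> overlay syncprov

The second node uses a different serverID and points its syncrepl stanza back at
the first.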
H3: Replicating {{slapd}} Configuration (syncrepl and {{B:cn=config}})

Syncrepl was explicitly disabled on cn=config in 2.3. It is now fully supported
in 2.4; you can use syncrepl to replicate an entire server configuration from
one server to arbitrarily many other servers. It's possible to clone an entire
running slapd using just a small (less than 10 lines) seed configuration, or
you can just replicate the schema subtrees, etc. Tests 049 and 050 in the test
suite provide working examples of these capabilities.
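A sketch of such a seed in LDIF form (the provider URL and credentials are
hypothetical): the new server starts with nothing but a {{B:cn=config}} database
that consumes the provider's configuration:

> dn: cn=config
> objectClass: olcGlobal
> cn: config
>
> dn: olcDatabase={0}config,cn=config
> objectClass: olcDatabaseConfig
> olcDatabase: {0}config
> olcSyncrepl: rid=001 provider=ldap://provider.example.com
>   binddn="cn=config" bindmethod=simple credentials=secret
>   searchbase="cn=config" type=refreshAndPersist retry="5 5 300 5"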
H3: Push-Mode Replication

In 2.3 you could configure syncrepl as a full push-mode replicator by using it
in conjunction with a back-ldap pointed at the target server. But because the
back-ldap database needs to have a suffix corresponding to the target's suffix,
you could only configure one instance per slapd.
In 2.4 you can define a database to be "hidden", which means that its suffix is
ignored when checking for name collisions, and the database will never be used
to answer requests received by the frontend. Using this "hidden" database feature
allows you to configure multiple databases with the same suffix, allowing you to
set up multiple back-ldap instances for pushing replication of a single database
to multiple targets. There may be other uses for hidden databases as well (e.g.,
using a syncrepl consumer to maintain a *local* mirror of a database on a
separate filesystem).
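A sketch of one such push instance (the target host and identities are
hypothetical): a hidden back-ldap database reuses the real database's suffix and
runs its own syncrepl consumer against the local server, so every local change
is pushed through back-ldap to the remote target:

> database ldap
> hidden on
> suffix "dc=example,dc=com"
> rootdn "cn=repl,dc=example,dc=com"
> uri ldap://target1.example.com
> syncrepl rid=001 provider=ldap://localhost
>   searchbase="dc=example,dc=com" type=refreshAndPersist
>   binddn="cn=repl,dc=example,dc=com" bindmethod=simple credentials=secret

Repeating this stanza with a different rid and uri pushes the same database to
additional targets.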
H3: More extensive TLS configuration control

In 2.3, the TLS configuration in slapd was only used by the slapd listeners. For
outbound connections used by, e.g., back-ldap or syncrepl, the TLS parameters
came from the system's ldap.conf file.
In 2.4 all of these sessions inherit their settings from the main slapd
configuration, but settings can be individually overridden on a per-config-item
basis. This is particularly helpful if you use certificate-based authentication
and need to use a different client certificate for different destinations.
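For example (the provider and file paths are hypothetical), a syncrepl stanza
can now name its own client certificate, overriding the server-wide TLS settings
for just that connection:

> syncrepl rid=001 provider=ldaps://provider.example.com
>   searchbase="dc=example,dc=com" type=refreshAndPersist
>   bindmethod=simple binddn="cn=repl,dc=example,dc=com" credentials=secret
>   tls_cert=/etc/openldap/repl-cert.pem
>   tls_key=/etc/openldap/repl-key.pem
>   tls_cacert=/etc/openldap/ca.pem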
H3: Performance enhancements

Too many to list. Some notable changes: ldapadd used to be a couple of orders
of magnitude slower than "slapadd -q". It's now at worst only about half the
speed of slapadd -q. Some comparisons of all the 2.x OpenLDAP releases are
available at {{URL:http://www.openldap.org/pub/hyc/scale2007.pdf}}
(That test compared 2.0.27, 2.1.30, 2.2.30, 2.3.33, and HEAD.) Toward the latter
end of the "Cached Search Performance" chart it gets hard to see the difference
because the run times are so small, but the new code is about 25% faster than 2.3,
which was about 20% faster than 2.2, which was about 100% faster than 2.1, which
was about 100% faster than 2.0, in that particular search scenario. That test
basically searched a 1.3GB DB of 380836 entries (all in the slapd entry cache)
in under 1 second, i.e., on a 2.4GHz CPU with DDR400 ECC/Registered RAM we can
search over 500 thousand entries per second. The search was on an unindexed
attribute using a filter that would not match any entry, forcing slapd to examine
every entry in the DB, testing the filter for a match.
Essentially the slapd entry cache in back-bdb/back-hdb is so efficient the search
processing time is almost invisible; the runtime is limited only by the memory
bandwidth of the machine. (The search data rate corresponds to about 3.5GB/sec;
the memory bandwidth on the machine is only about 4GB/sec due to ECC and register
latency.)
H3: New overlays

* slapo-dds (Dynamic Directory Services, RFC 2589)
* slapo-memberof (reverse group membership maintenance)
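As a sketch of the simplest case (the surrounding database definition is
elided), enabling slapo-memberof is a one-line addition to a database section;
with its default settings the overlay maintains a memberOf attribute on each
entry named in a group's member attribute:

> overlay memberof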
H3: New features in existing Overlays

* slapo-pcache allows cache inspection/maintenance/hot restart
* slapo-rwm can safely interoperate with other overlays
* Dyngroup/Dynlist merge, plus security enhancements
H3: New features in slapd

* monitoring of back-{b,h}db: cache fill-in, non-indexed searches, etc.
* session tracking control (draft-wahl-ldap-session)
* subtree delete in back-sql (draft-armijo-ldap-treedelete)
H3: New features in libldap

* ldap_sync client API (LDAP Content Sync Operation, RFC 4533)

H3: New clients and tools

* ldapexop for arbitrary extended operations
* complete support of controls in request/response for all clients
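For example (the server and credentials are hypothetical), the Who Am I?
extended operation can now be invoked directly from the command line:

> ldapexop -H ldap://localhost -x -D "cn=admin,dc=example,dc=com" -w secret whoami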
H3: New build options

* Support for building against GnuTLS
* Advertisement of LDAP server in DNS
H2: Obsolete Features Removed From 2.4
These features were strongly deprecated in 2.3 and removed in 2.4.

H3: Slurpd

Please read the {{SECT:Replication}} section as to why this is no longer in
OpenLDAP.

H3: back-ldbm
back-ldbm was both slow and unreliable. Its byzantine indexing code was
prone to spontaneous corruption, as were the underlying database libraries
that were commonly used (e.g. GDBM or NDBM). back-bdb and back-hdb are
superior in every aspect, with simplified indexing to avoid index corruption,
fine-grained locking for greater concurrency, hierarchical caching for
greater performance, streamlined on-disk format for greater efficiency
and portability, and full transaction support for greater reliability.