From: Quanah Gibson-Mount Date: Sat, 1 Sep 2007 01:48:46 +0000 (+0000) Subject: Sync 2.4 guide with HEAD for 2.4.5 X-Git-Tag: OPENLDAP_REL_ENG_2_4_5BETA~28 X-Git-Url: https://git.sur5r.net/?a=commitdiff_plain;h=779d6af56da8facfcaf2a4ad7ed689dc36bd986a;p=openldap Sync 2.4 guide with HEAD for 2.4.5 --- diff --git a/doc/guide/COPYRIGHT b/doc/guide/COPYRIGHT index 27a4e73735..3e2fba9504 100644 --- a/doc/guide/COPYRIGHT +++ b/doc/guide/COPYRIGHT @@ -36,9 +36,11 @@ Public License. --- -Portions Copyright 1999-2005 Howard Y.H. Chu. -Portions Copyright 1999-2005 Symas Corporation. +Portions Copyright 1999-2007 Howard Y.H. Chu. +Portions Copyright 1999-2007 Symas Corporation. Portions Copyright 1998-2003 Hallvard B. Furuseth. +Portions Copyright 2007 Gavin Henry +Portions Copyright 2007 Suretec Systems All rights reserved. Redistribution and use in source and binary forms, with or without diff --git a/doc/guide/admin/Makefile b/doc/guide/admin/Makefile index dfae7270e9..6b33980f98 100644 --- a/doc/guide/admin/Makefile +++ b/doc/guide/admin/Makefile @@ -18,16 +18,19 @@ sdf-src: \ ../plain.sdf \ ../preamble.sdf \ abstract.sdf \ + appendix-configs.sdf \ + backends.sdf \ config.sdf \ dbtools.sdf \ glossary.sdf \ guide.sdf \ install.sdf \ intro.sdf \ + maintenance.sdf \ master.sdf \ monitoringslapd.sdf \ + overlays.sdf \ preface.sdf \ - proxycache.sdf \ quickstart.sdf \ referrals.sdf \ replication.sdf \ @@ -36,21 +39,19 @@ sdf-src: \ schema.sdf \ security.sdf \ slapdconfig.sdf \ - syncrepl.sdf \ title.sdf \ tls.sdf \ + troubleshooting.sdf \ tuning.sdf sdf-img: \ ../images/LDAPlogo.gif \ - config_local.gif \ - config_ref.gif \ + config_dit.png \ + config_local.png \ + config_ref.png \ config_repl.gif \ - config_x500fe.gif \ - config_x500ref.gif \ - intro_dctree.gif \ - intro_tree.gif \ - replication.gif + intro_dctree.png \ + intro_tree.png \ guide.html: guide.sdf sdf-src sdf-img sdf -2html guide.sdf @@ -62,6 +63,7 @@ admin.html: admin.sdf sdf-src sdf-img sdf -DPDF -2html 
admin.sdf guide.pdf: admin.html - htmldoc --book --duplex --bottom 36 --top 36 \ - --toclevels 2 \ - -f guide.pdf admin.html + htmldoc --batch guide.book + +clean: + rm -f *.pdf *.html *~ diff --git a/doc/guide/admin/README.spellcheck b/doc/guide/admin/README.spellcheck new file mode 100644 index 0000000000..729b247882 --- /dev/null +++ b/doc/guide/admin/README.spellcheck @@ -0,0 +1,16 @@ +# $OpenLDAP$ +# Copyright 2007 The OpenLDAP Foundation, All Rights Reserved. +# COPYING RESTRICTIONS APPLY, see COPYRIGHT. +# +# README.spellcheck +# + +aspell.en.pws + We use aspell to spell check the Admin Guide and Man Pages. + + Please move aspell.en.pws to ~/.aspell.en.pws and run: + + aspell --lang=en_US -c + + If you add additional words and terms, please add + them or copy them to aspell.en.pws and commit. diff --git a/doc/guide/admin/appendix-changes.sdf b/doc/guide/admin/appendix-changes.sdf new file mode 100644 index 0000000000..4ee1dce248 --- /dev/null +++ b/doc/guide/admin/appendix-changes.sdf @@ -0,0 +1,208 @@ +# $OpenLDAP$ +# Copyright 2007 The OpenLDAP Foundation, All Rights Reserved. +# COPYING RESTRICTIONS APPLY, see COPYRIGHT. + +H1: Changes Since Previous Release + +The following sections summarize the new features and changes in OpenLDAP +software and in this Admin Guide since the 2.3.x release.
+ +H2: New Guide Sections + +In order to make the Admin Guide more thorough and cover the majority of questions +asked on the OpenLDAP mailing lists and scenarios discussed there, we have added the following new sections: + +* {{SECT:When should I use LDAP?}} +* {{SECT:When should I not use LDAP?}} +* {{SECT:LDAP vs RDBMS}} +* {{SECT:Backends}} +* {{SECT:Overlays}} +* {{SECT:Replication}} +* {{SECT:Maintenance}} +* {{SECT:Monitoring}} +* {{SECT:Tuning}} +* {{SECT:Troubleshooting}} +* {{SECT:Changes Since Previous Release}} +* {{SECT:Configuration File Examples}} +* {{SECT:Glossary}} + +Also, the table of contents is now 3 levels deep to ease navigation. + + +H2: New Features and Enhancements in 2.4 + +H3: Better {{B:cn=config}} functionality + +There is a new slapd-config(5) manpage for the {{B:cn=config}} backend. The +original design called for auto-renaming of config entries when you insert or +delete entries with ordered names, but that was not implemented in 2.3. It is +now in 2.4. This means, e.g., if you have + +> olcDatabase={1}bdb,cn=config +> olcSuffix: dc=example,dc=com + +and you want to add a new subordinate, now you can ldapadd: + +> olcDatabase={1}bdb,cn=config +> olcSuffix: dc=foo,dc=example,dc=com + +This will insert a new BDB database in slot 1 and bump all following databases + down one, so the original BDB database will now be named: + +> olcDatabase={2}bdb,cn=config +> olcSuffix: dc=example,dc=com + +H3: Better {{B:cn=schema}} functionality + +In 2.3 you were only able to add new schema elements, not delete or modify +existing elements. In 2.4 you can modify schema at will. (Except for the +hardcoded system schema, of course.) + +H3: More sophisticated Syncrepl configurations + +The original implementation of Syncrepl in OpenLDAP 2.2 was intended to support +multiple consumers within the same database, but that feature never worked and +was removed from OpenLDAP 2.3; you could only configure a single consumer in +any database. 
In 2.4 you can configure multiple consumers in a single database. The configuration +possibilities here are quite complex and numerous. You can configure consumers +over arbitrary subtrees of a database (disjoint or overlapping). Any portion +of the database may in turn be provided to other consumers using the Syncprov +overlay. The Syncprov overlay works with any number of consumers over a single +database or over arbitrarily many glued databases. + +H3: N-Way Multimaster Replication + +As a consequence of the work to support multiple consumer contexts, the syncrepl +system now supports full N-Way multimaster replication with entry-level conflict +resolution. There are some important constraints, of course: In order to maintain +consistent results across all servers, you must maintain tightly synchronized +clocks across all participating servers (e.g., you must use NTP on all servers). + +The entryCSNs used for replication now record timestamps with microsecond resolution, +instead of just seconds. The delta-syncrepl code has not been updated to support +multimaster usage yet; that will come later in the 2.4 cycle. + +H3: Replicating {{slapd}} Configuration (syncrepl and {{B:cn=config}}) + +Syncrepl was explicitly disabled on cn=config in 2.3. It is now fully supported +in 2.4; you can use syncrepl to replicate an entire server configuration from +one server to arbitrarily many other servers. It's possible to clone an entire +running slapd using just a small (less than 10 lines) seed configuration, or +you can just replicate the schema subtrees, etc. Tests 049 and 050 in the test +suite provide working examples of these capabilities. + + +H3: Push-Mode Replication + +In 2.3 you could configure syncrepl as a full push-mode replicator by using it +in conjunction with a back-ldap pointed at the target server. But because the +back-ldap database needs to have a suffix corresponding to the target's suffix, +you could only configure one instance per slapd.
In 2.4 you can define a database to be "hidden", which means that its suffix is +ignored when checking for name collisions, and the database will never be used +to answer requests received by the frontend. Using this "hidden" database feature +allows you to configure multiple databases with the same suffix, allowing you to +set up multiple back-ldap instances for pushing replication of a single database +to multiple targets. There may be other uses for hidden databases as well (e.g., +using a syncrepl consumer to maintain a *local* mirror of a database on a separate filesystem). + + +H3: More extensive TLS configuration control + +In 2.3, the TLS configuration in slapd was only used by the slapd listeners. For +outbound connections used by, e.g., back-ldap or syncrepl, the TLS parameters came +from the system's ldap.conf file. + +In 2.4 all of these sessions inherit their settings from the main slapd configuration, +but settings can be individually overridden on a per-config-item basis. This is +particularly helpful if you use certificate-based authentication and need to use a +different client certificate for different destinations. + + +H3: Performance enhancements + +Too many to list. Some notable changes: ldapadd used to be a couple of orders +of magnitude slower than "slapadd -q". It's now at worst only about half the +speed of slapadd -q. Some comparisons of all the 2.x OpenLDAP releases are available +at {{URL:http://www.openldap.org/pub/hyc/scale2007.pdf}} + +(That comparison covered 2.0.27, 2.1.30, 2.2.30, 2.3.33, and HEAD.) Toward the latter end +of the "Cached Search Performance" chart it gets hard to see the difference +because the run times are so small, but the new code is about 25% faster than 2.3, +which was about 20% faster than 2.2, which was about 100% faster than 2.1, which +was about 100% faster than 2.0, in that particular search scenario. That test +basically searched a 1.3GB DB of 380836 entries (all in the slapd entry cache) +in under 1 second.
That is, on a 2.4GHz CPU with DDR400 ECC/Registered RAM we can +search over 500 thousand entries per second. The search was on an unindexed +attribute using a filter that would not match any entry, forcing slapd to examine +every entry in the DB, testing the filter for a match. + +Essentially the slapd entry cache in back-bdb/back-hdb is so efficient the search +processing time is almost invisible; the runtime is limited only by the memory +bandwidth of the machine. (The search data rate corresponds to about 3.5GB/sec; +the memory bandwidth on the machine is only about 4GB/sec due to ECC and register latency.) + +H3: New overlays + +* slapo-constraint (Attribute value constraints) +* slapo-dds (Dynamic Directory Services, RFC 2589) +* slapo-memberof (reverse group membership maintenance) + +H3: New features in existing Overlays + +* slapo-pcache + - Inspection/Maintenance + -- the cache database can be directly accessed via + LDAP by adding a specific control to each LDAP request; a specific + extended operation allows one to consistently remove cached entries and entire + cached queries + - Hot Restart + -- cached queries are saved on disk at shutdown, and reloaded if + not expired yet at subsequent restart + +* slapo-rwm can safely interoperate with other overlays +* Dyngroup/Dynlist merge, plus security enhancements + - added dgIdentity support (draft-haripriya-dynamicgroup) + +H3: New features in slapd + +* monitoring of back-{b,h}db: cache fill-in, non-indexed searches +* session tracking control (draft-wahl-ldap-session) +* subtree delete in back-sql (draft-armijo-ldap-treedelete) + +H3: New features in libldap + +* ldap_sync client API (LDAP Content Sync Operation, RFC 4533) + +H3: New clients, tools and tool enhancements + +* ldapexop for arbitrary extended operations +* Complete support of controls in request/response for all clients +* LDAP Client tools now honor SRV records + +H3: New build options + +* Support for building against GnuTLS + + +H2: Obsolete
Features Removed From 2.4 + +These features were strongly deprecated in 2.3 and removed in 2.4. + +H3: Slurpd + +Please read the {{SECT:Replication}} section to see why slurpd is no longer part of +OpenLDAP. + +H3: back-ldbm + +back-ldbm was both slow and unreliable. Its byzantine indexing code was +prone to spontaneous corruption, as were the underlying database libraries +that were commonly used (e.g. GDBM or NDBM). back-bdb and back-hdb are +superior in every respect, with simplified indexing to avoid index corruption, +fine-grained locking for greater concurrency, hierarchical caching for +greater performance, streamlined on-disk format for greater efficiency +and portability, and full transaction support for greater reliability. diff --git a/doc/guide/admin/appendix-configs.sdf b/doc/guide/admin/appendix-configs.sdf new file mode 100644 index 0000000000..81aaf86f86 --- /dev/null +++ b/doc/guide/admin/appendix-configs.sdf @@ -0,0 +1,14 @@ +# $OpenLDAP$ +# Copyright 2007 The OpenLDAP Foundation, All Rights Reserved. +# COPYING RESTRICTIONS APPLY, see COPYRIGHT.
+ +H1: Configuration File Examples + + +H2: slapd.conf + + +H2: ldap.conf + + +H2: a-n-other.conf diff --git a/doc/guide/admin/aspell.en.pws b/doc/guide/admin/aspell.en.pws new file mode 100644 index 0000000000..a28b908c2f --- /dev/null +++ b/doc/guide/admin/aspell.en.pws @@ -0,0 +1,1406 @@ +personal_ws-1.1 en 1405 +nattrsets +inappropriateAuthentication +api +olcAttributeTypes +BhY +reqEnd +olcOverlayConfig +shoesize +olcTLSCACertificateFile +CGI +cdx +DCE +DAP +attributename +lsei +dbconfig +arg +kurt +authzID +authzid +authzId +DAs +ddd +userApplications +BNF +attrs +mixin +wholeSubtree +chainingRequired +ldapport +hallvard +ASN +acknowledgements +Chu +ava +monitorCounter +del +DDR +testObject +OrgPerson +IGJlZ +olcUpdateref +ECC +deleteDN +cli +ltdl +CAPI +dev +serverctrls +olcDbDirectory +xvfB +BSI +modv +nonleaf +errCode +PhotoURI +buf +cdef +monitorConnectionLocalAddress +dir +EGD +dit +retoidp +ando +edu +caseExactSubstringsMatch +bvstrdup +AUTHNAME +memrealloc +auditExtended +replog +ludp +metainformation +CRL +CRP +olcReferral +XLDFLAGS +metadirectory +csn +siiiib +stateful +olcModulePath +maxentries +authc +seeAlso +searchbase +searchBase +realnamingcontext +dn's +DNs +DN's +dns +dereference +sortKey +authzTo +lossy +gcc +CWD +lssl +organizationalRole +DSA +derefInSearching +pwdGraceUseTime +DSE +groupOfURLs +modrdn +ModRDN +modrDN +pwdFailureCountInterval +homePhone +eng +paramName +errUnsolicitedData +Heimdal +EOF +authz +XINCPATH +LTFINISH +plaintext +indices +reqAssertion +olcDbUri +dst +env +oplist +MirrorMode +mirrormode +objclass +Bint +dup +hdb +gid +stderr +caseIgnoreOrderingMatch +moduledir +gif +jpegPhoto +lsasl +judgmentday +prepend +subentry +dbcache +mkversion +objectClasses +objectclasses +searchResultReference +fmt +qdescrs +olcSuffix +supportedControl +GHz +libpath +INADDR +compareDN +sizelimit +unixODBC +APIs +blen +attrsOnly +attrsonly +slappasswd +referralsPreferred +oids +OIDs +wBDARESEhgVG +syncIdSet +olcTLSCipherSuite +username 
+sizeLimitExceeded +subst +idl +chroot +iff +auditDelete +numbits +ZKKuqbEKJfKSXhUbHG +reqRespControls +TLSCertificateKeyFile +olcAccess +proxyTemplates +neverDerefaliases +RootDN +rootdn +loglevel +args +caseExactOrderingMatch +olcDbQuarantine +RELEASEDATE +baseDN +basedn +argv +GSS +schemachecking +whoami +WhoAmI +syslogd +dataflow +subentries +attrpair +BerkeleyDB's +singleLevel +entryDN +dSAOperation +includedir +inplace +LDAPAPIFeatureInfo +logbase +ing +moduleload +IPC +Makefile +getpid +GETREALM +numericString +MANSECT +XXXX +domainstyle +bvarray +Choi +iscritical +subschema +slapindex +plugin +distinguishedNameMatch +derefAliases +baseObject +kdz +reqMod +ldb +srcdir +pwdExpireWarning +localstatedir +sockbuf +PENs +ipv +IPv +ghenry +hyc +multimaster +noop +DEFS +joe +testAttr +syncrepl +pwdFailureTime +timestamp +whitespaces +ISP +ldp +monitorInfo +bjensen +newPasswd +irresponsive +len +perl +dynlist +browseable +attrvalue +pers +retcode +rootpw +matchedDN +auditReadObject +idletimeout +intermediateResponse +myOID +structuralObjectClass +integerMatch +openldap +OpenLDAP +moddn +rewriteEngine +AVAs +accesslog +searchDN +reqOld +MDn +aspell +TLSCACertificateFile +mem +peername +syncUUIDs +database's +krb +bool +logins +jts +memberAttr +newpasswdfile +newPasswdFile +ucdata +LLL +confdir +BerValues +olcDbLinearIndex +Elfrink +AUTOREMOVE +countp +realloc +bsize +CThreads +structs +desc +LTCOMPILE +bindmethod +olcDbCheckpoint +modme +refreshOnly +PIII +pwdPolicySubentry +FIXME +realanonymous +caseExactMatch +olcSizeLimit +Bourne +attr +objectidentifier +objectIdentifier +refint +msgtype +OBJEXT +LRL +subtrees +realdnattr +entrymods +admittable +libtool's +dupbv +searchResultEntry +lud +modifyTimestamp +TLSEphemeralDHParamFile +LRU +syncprov +strvals +preread +auth +nis +regexec +adamsom +objclasses +deallocation +strdup +gsMatch +adamson +UniqueName +ppErrStr +DESTDIR +oid +saslpasswd +interoperate +bindwhen +Solaris +oOjM +msg +submatch +refreshAndPersist 
+monitorServer +attributeUsage +soelim +objectIdentiferMatch +olc +PEM +Autoconf +alloc +PDU +OLF +inetorgperson +inetOrgPerson +deleteoldrdn +monitorCounterObject +pid +CPAN +sharedstatedir +OLP +LDFLAGS +dereferencing +errcodep +xeXBkeFxlZ +accessor's +extendedop +ple +NTP +reqSizeLimit +ORed +NUL +namingContexts +num +reqAttrsOnly +ldappasswd +online +libdir +unindexed +ObjectClassDescription +attrdesc +efgh +exopPasswdDN +ranlib +olcAttributeOptions +lineno +storages +nameAndOptionalUID +png +INCPATH +organizationalPerson +integerOrderingMatch +OSI +subschemaSubentry +cond +conf +bvec +rdn +ECHOPROMPT +RDBM +subany +runningslapd +configs +datagram +crlcheck +conn +builddir +OTP +entrylimit +attrdescN +logold +pos +sbi +PRD +reqEntries +pre +bvals +unixusers +olcReadonly +olcReadOnly +pwdChangedTime +mySQL +sdf +suffixmassage +referralDN +sed +statslog +perror +ldapexop +bvecadd +distributedOperation +sel +versa +TBC +telephonenumber +telephoneNumber +DLDAP +peernamestyle +SHA +filename +rpath +argsfile +ptr +INCDIR +pwd +dctree +rnd +quanah +lastmod +TCL +sprintf +shm +logops +dnattr +subdir +searchAttrDN +cctrls +tcp +strlen +spellcheck +ludpp +typedef +olcDbIDLcacheSize +ostring +mwrscdx +SMD +UCD +cancelled +crit +lucyB +slp +rdns +CPUs +TGT +modulepath +quickstart +mySNMP +tgz +UDP +RDBMs +rdbms +Matic +qdstring +gunzip +librewrite +UFl +src +lastName +ufn +cron +sql +pwdPolicyChecker +uid +olcDbConfig +refreshDone +ssf +replogfile +rwm +TOC +vec +LDAPDN +compareAttrDN +endmacro +tls +repl +monitoringslapd +referralsp +tmp +SRP +olcDbNosync +conns +SSL +PDkzODdASFxOQ +SRV +rwx +sss +deallocators +Contribware +URLlist +str +subinitial +CSNs +sbin +dbtools +datasource +sbio +posp +errText +prepended +labeledURI +scdx +startup +const +wBDABALD +octetStringSubstringsStringMatch +ttl +bvalue +bvdup +stringa +stringb +hasSubordinates +oldPasswd +sys +pwdPolicy +slapd +sasl +slapauth +MANCOMPRESS +octetStringOrderingStringMatch +updatedn +UpdateDN +slapdindex 
+searchFilter +uri +slapi +tty +liblunicode +url +entryExpireTimestamp +priv +slapo +UTF +vlv +ctrl +TXN +virtualnamingcontext +eatBlanks +slimit +ldaprc +usr +txt +proc +generalizedTime +loopback +unmassaged +mechs +freemods +initgroups +auditCompare +GDBM +DSA's +compareFalse +resultCode +resultcode +noSuchObject +params +groupnummer +searchEntryDN +negttl +chainingPreferred +TABs +retdatap +errAuxObject +postoperation +realself +olcPasswordHash +concat +debuglevel +addAttrDN +credp +ldaphost +pwdMaxFailure +octetStringMatch +extparam +auditWriteObject +colaligns +Diffie +attributevalue +AttributeValue +SIGTERM +MyCompany +al +AAQSkZJRgABAAAAAQABAAD +cd +contextCSN +ar +pthreads +monitorTimestamp +de +reqAuthzID +backend's +backends +cn +lcrypto +infodir +groupstyle +ldapsearch +cp +displayName +eg +bv +olcBackendConfig +dn +fd +LDAPSync +fG +fi +eq +FIPS +dx +et +eu +hh +olcLogLevel +slurpd +logevels +IG +addDN +tbls +ldapmodify +kb +syslog +io +ip +dynacl +aXRoIGEgc +enum +slapdconf +reqFilter +ld +xyz +TLSCertificateFile +idassert +failover +kerberos +lookups +md +iZ +SysNet +BerValue +idlcachesize +struct +UCASE +errno +syslogged +mk +ng +oc +errOp +pwdMaxAge +truelies +NL +mr +reindex +newentry +ok +mv +preinstalled +regex +saslmech +rc +config +ou +policyDN +sb +olcSyncrepl +QN +strtol +runtime +NOSYNC +slapover +RL +sockname +MANCOMPRESSSUFFIX +makeinfo +coltags +ro +rp +EXEEXT +sockurl +th +sn +ru +UG +ss +su +TP +reqMethod +XLIBS +PhotoObject +tt +keycol +namingContext +rlookups +searchstack +NOECHOPROMPT +sldb +wi +AlmostASearchRequest +xf +param +MChAODQ +caseExactIA +Vu +Za +idlecachesize +ws +errSleepTime +INSTALLFLAGS +pthread +pwdHistory +slen +errUnsolicitedOID +dyngroup +filtertype +rewriteRules +criticality +preoperation +smbk +subord +reqVersion +errp +ZZ +entryCSNs +dlopen +continuated +newsuperior +newSuperior +Preprocessor +XXLIBS +deallocate +reqScope +llber +bitstringa +sbindir +apache's +noidlen +monitorContext +resync +fqdn +authPassword 
+LDAPMatchingRule +olcIdleTimeout +treedelete +auditAdd +reqSession +derated +LDVERSION +IANA +olcDbSearchStack +bitstrings +rscdx +schemas +minssf +ldapadd +pseudorootdn +lldap +gssapi +applicatio +nelems +liblutil +wrscdx +scherr +internet +logfilter +lutil +themself +libexec +dnpattern +proxying +reqType +Kartik +libexecdir +inetd +pwdSafeModify +contrib +FQDNs +bjorn +myLDAP +SNMP +myObjectClass +thru +olcLastMod +commonName +testTwo +olcFrontendConfig +LDAPObjectClass +attributeTypes +LTINSTALL +hostname +Symas +numattrsets +msgid +ldapmodrdn +ldapbis +attributeoptions +serverID +memberof +pseudorootpw +CFLAGS +substr +pwdAllowUserChange +rewriteRule +XXXXXXXXXX +credlen +departmentNumber +rewriteMap +logfile +vals +LDAPAVA +modifyAttrDN +dcedn +olcOverlay +exop +berelement +BerElement +olcRootDN +octetString +SampleLDAP +expr +PostgreSQL +bvstr +filesystem +pathtest +objectClass +objectclass +submatches +newrdn +armijo +addBlanks +reqMessage +exts +SSHA +func +filterlist +modifyDN +syncuser +Masarati +LDAPSyntax +oldpasswdfile +oldPasswdFile +reqDN +SSFs +ietf +unwillingToPerform +oidlen +searchFilterAttrDN +CPPFLAGS +slapadd +Clatworthy +urldesc +substrings +Apurva +slapacl +multiclassing +monitoredInfo +LTLINK +ETCDIR +reqId +setspec +scanf +TLSv +distinguishedname +distinguishedName +BerVarray +caseIgnoreSubstrin +ldapwhoami +URLattr +generalizedTimeOrderingMatch +requestdata +timelimit +subr +cachesize +olcRootPW +SSLv +domainScope +LDAPMessage +LTVERSION +memalloc +refreshDeletes +BerkeleyDB +pathspec +uint +Poitou +whitespace +dynstyle +slaptest +zeilenga +WebUpdate +numericoid +changelog +ChangeLog +creatorsName +ascii +wahl +uniqueMember +slapcat +lwrap +ldapfilter +errDisconnect +sermersheim +rootdns +searchResult +libtool +servercredp +AttributeTypeDescription +LTFLAGS +authcDN +TLSCipherSuite +supportedSASLMechanisms +rootDSE +dsaparam +cachefree +UMich's +schemadir +attribute's +extern +varchar +olcDbCacheSize +olcDbCachesize +authcid +authcID 
+POSIX +hnPk +ldapext +authzFrom +Google +olcSchemaConfig +newsup +sbiod +XXXLIBS +LDAPBASE +Supr +olcDatabaseConfig +rwxrwxrwx +aeeiib +reqStart +sasldb +somevalue +LIBRELEASE +starttls +StartTLS +LDAPSchemaExtensionItem +reqReferral +shtool +Pierangelo +attrstyle +backend +portnumber +subjectAltName +errObject +valsort +bervals +berval's +derefFindingBaseObj +checkpointed +keytab +groupnaam +frontend +sctrls +dbnum +olcLdapConfig +sessionlog +attrset +entryCSN +strcast +kbyte +modifiersName +keytbl +olcHdbConfig +README +memcalloc +inet +saslargs +givenname +givenName +olcDbMode +pidfile +olcLimits +memvfree +tuple +superset +directoryString +proxyTemplate +proxytemplate +wildcards +monitoredObject +TTLs +LxsdLy +olcTimeLimit +stringal +init +Locators +bvalues +reqResult +impl +outvalue +returnCode +returncode +attributeDescription +attrval +dnssrv +ciphersuite +auditlog +reqControls +notypes +myAttributeType +stringbv +keyval +calloc +chmod +Subbarao +setstyle +subdirectories +errlist +slapdn +uncached +ldapapiinfo +groupOfUniqueNames +dhparam +slapd's +slapds +inputfile +RDBMSes +wildcard +Locator +errAbsObject +errABsObject +SASL's +html +searchResultDone +olcBdbConfig +ldapmod +LDAPMod +olcHidden +userPassword +TLSRandFile +use'd +auditBind +requestDN +lockdetect +selfstyle +liblber +ERXRTc +printf +AutoConfig +localhost +lber +noprompt +databasenumber +hasSubordintes +URIs +lang +auditSearch +ldapdelete +reqTimeLimit +cacertdir +queryid +Warper +XDEFS +urls +URL's +postalAddress +postaladdress +passwd +plugins +george +http +uppercased +Poobah +libldap +ldap +ldbm +ursula +LDAPModifying +slapdconfig +dnSubtreeMatch +olcSaslSecProps +olcSaslSecprops +auditModify +groupOfNames +jensen +reloadHint +prepending +olcGlobal +matchingRule +matchingrule +SmVuc +MSSQL +hostnames +ctrlp +lltdl +ctrls +rewriter +secprops +namespace +whsp +realusers +dnstyle +suffixalias +proxyAttrset +proxyAttrSet +proxyattrset +pwdMustChange +ldif +bvfree +sleeptime +pwdCheckQuality 
+msgidp +pwdAttribute +PRNGD +LDAPRDN +entryUUIDs +proxycache +proxyCache +SERATGCgaGBYWGDEjJR +noanonymous +accessee +createTimestamp +nretries +auditAbandon +LDAPAttributeType +logdb +procs +realdn +alwaysDerefAliases +ppolicy +jpeg +functionalities +pcache +caseIgnoreMatch +sysconfdir +checkpointing +rebindproc +dryrun +noplain +exattrs +Jong +proxied +firstName +accesslevel +login +rewriteContext +dcObject +newparent +numericStringMatch +TLSVerifyClient +subtree +multi +immSupr +manpage +assciated +wZFQrDD +serverctrlsp +onelevel +abcd +reqcert +referralsRequired +Hyuk +olcServerID +reqDerefAliases +newSuperiorDN +passwdfile +errMatchedDN +everytime +mkdep +olcDbindex +olcDbIndex +syntaxOID +reqData +databasetype +woid +numericStringOrderingMatch +clientctrls +RetCodes +pwdAccountLockedTime +attrtype +LIBVERSION +proto +endif +reqNewRDN +ldapi +notoc +matcheddnp +mkdir +mech +pwdMinAge +ldaps +userCertificate +LDAPv +IPsec +tokenization +olcModuleList +robert +generalizedTimeMatch +UMLDAP +OpenLDAP's +lookup +ABNF +olcDbShmKey +pwdLockoutDuration +TLSCACertificatePath +ldapuri +ldapurl +ACIs +behera +olcObjectIdentifier +endblock +proxyAuthz +pagedResults +bitstring +ACLs +berptr +olcModuleLoad +attributetype +attributeType +auditModRDN +cacert +freebuf +IDSET +pwdGraceAuthnLimit +invalue +XKYnrjvGT +srvtab +referralAttrDN +requestoid +basename +substring +booleanMatch +babs +pPasswd +msgfree +slapdconfigfile +olcDatabase +builtin +hardcoded +SIGINT +MAXLEN +xpasswd +cleartext +extensibleObject +pwdLockout +SIGHUP +reqDeleteOldRDN +reqAttr +subfinal +berval +octothorpe +LTONLY +filesystems +urandom +NDBM +abcdefgh +olcBackend +errmsgp +boolean +updateref +regcomp +contextp +filtercomp +LDAPNOINIT +deref +preallocated +syntaxes +memberURL +monitorRuntimeConfig +bindDn +bindDN +binddn +methodp +timelimitExceeded +pwdInHistory +LTSTATIC +requestors +requestor's +LDAPCONF +saslauthd +MKDEPFLAG +gecos +entryUUID +gnutls +GNUtls +GnuTLS +postread +timeval +DHAVE 
+caseIgnoreSubstringsMatch +monitorIsShadow +syncdata +olcPidFile +hostport +backload +bindir +olcObjectClasses +auditObject +LDIFv +strcasecmp +LTHREAD +dereferenced +entryTtl +LDAPControl +pwdMinLength +ldapcompare +readonly +readOnly +RANDFILE +attrlist +aci +directoryOperation +selfwrite +pwdReset +acl +attrname +ADH +searchable +bindmethods +logpurge +reqNewSuperior +multiproxy +dereferences +datadir +malloc +UUIDs +veryclean +userid +Kumar +AES +bdb +manageDSAit +ManageDsaIT +bindpw +monitorContainer +pEntry +baz +memfree +lresolv +objectIdentifierMatch +Blowfish +mkln +numericStringSubstringsMatch +openssl +OpenSSL +ModName +cacheable +freeit +pathname +ber +ali +mandir +changetype +CAs +CA's +typeA +bvecfree +ODBC +typeB +unescaped +devel +pwdCheckModule +LDAPURLDesc +authzDN diff --git a/doc/guide/admin/backends.sdf b/doc/guide/admin/backends.sdf new file mode 100644 index 0000000000..013288f453 --- /dev/null +++ b/doc/guide/admin/backends.sdf @@ -0,0 +1,262 @@ +# $OpenLDAP$ +# Copyright 2007 The OpenLDAP Foundation, All Rights Reserved. +# COPYING RESTRICTIONS APPLY, see COPYRIGHT. + +H1: Backends + + +H2: Berkeley DB Backends + + +H3: Overview + +The {{bdb}} backend to {{slapd}}(8) is the recommended primary backend for a +normal {{slapd}} database. It uses the Oracle Berkeley DB ({{TERM:BDB}}) +package to store data. It makes extensive use of indexing and caching +(see the {{SECT:Tuning}} section) to speed data access. + +{{hdb}} is a variant of the {{bdb}} backend that uses a hierarchical database +layout which supports subtree renames. It is otherwise identical to the {{bdb}} + behavior, and all the same configuration options apply. + +Note: An {{hdb}} database needs a large {{idlcachesize}} for good search performance, +typically three times the {{cachesize}} (entry cache size) or larger. 
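The sizing note above can be illustrated with a minimal {{slapd.conf}}(5) fragment; the suffix, directory, and cache figures below are hypothetical examples rather than recommendations:

> database hdb
> suffix "dc=example,dc=com"
> rootdn "cn=Manager,dc=example,dc=com"
> directory /usr/local/var/openldap-data
> cachesize 10000
> idlcachesize 30000
> index objectClass eq

Here {{idlcachesize}} is set to three times {{cachesize}}, following the rule of thumb above; both should be tuned to the actual entry count and workload.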
H3: back-bdb/back-hdb Configuration + +MORE LATER + +H3: Further Information + +{{slapd-bdb}}(5) + +H2: LDAP + + +H3: Overview + +The LDAP backend to {{slapd}}(8) is not an actual database; instead it acts +as a proxy to forward incoming requests to another LDAP server. While +processing requests it will also chase referrals, so that referrals are fully +processed instead of being returned to the {{slapd}} client. + +Sessions that explicitly {{Bind}} to the {{back-ldap}} database always create +their own private connection to the remote LDAP server. Anonymous sessions +will share a single anonymous connection to the remote server. For sessions +bound through other mechanisms, all sessions with the same DN will share the +same connection. This connection pooling strategy can enhance the proxy's +efficiency by reducing the overhead of repeatedly making/breaking multiple +connections. + +The ldap database can also act as an information service, i.e. the identity +of locally authenticated clients is asserted to the remote server, possibly +in some modified form. For this purpose, the proxy binds to the remote server +with some administrative identity, and, if required, authorizes the asserted +identity. + +H3: back-ldap Configuration + +LATER + +H3: Further Information + +{{slapd-ldap}}(5) + +H2: LDIF + + +H3: Overview + +The LDIF backend to {{slapd}}(8) is a basic storage backend that stores +entries in text files in LDIF format, and exploits the filesystem to create +the tree structure of the database. It is intended as a cheap, low-performance, +easy-to-use backend. + +When using the {{cn=config}} dynamic configuration database with persistent +storage, the configuration data is stored using this backend.
See {{slapd-config}}(5) +for more information. + +H3: back-ldif Configuration + +LATER + +H3: Further Information + +{{slapd-ldif}}(5) + +H2: Metadirectory + + +H3: Overview + +The meta backend to {{slapd}}(8) performs basic LDAP proxying with respect +to a set of remote LDAP servers, called "targets". The information contained +in these servers can be presented as belonging to a single Directory Information +Tree ({{TERM:DIT}}). + +A basic knowledge of the functionality of the {{slapd-ldap}}(5) backend is +recommended. This backend has been designed as an enhancement of the ldap +backend. The two backends share many features (actually they also share portions + of code). While the ldap backend is intended to proxy operations directed + to a single server, the meta backend is mainly intended for proxying of + multiple servers and possibly naming context masquerading. + +These features, although useful in many scenarios, may result in excessive +overhead for some applications, so their use should be carefully considered. + + +H3: back-meta Configuration + +LATER + +H3: Further Information + +{{slapd-meta}}(5) + +H2: Monitor + + +H3: Overview + +The monitor backend to {{slapd}}(8) is not an actual database; if enabled, +it is automatically generated and dynamically maintained by slapd with +information about the running status of the daemon. + +To inspect all monitor information, issue a subtree search with base {{cn=Monitor}}, +requesting that attributes "+" and "*" be returned. The monitor backend produces +mostly operational attributes, and LDAP only returns operational attributes +that are explicitly requested. Requesting attribute "+" is an extension which +requests all operational attributes. + +See the {{SECT:Monitoring}} section.
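As a sketch of the subtree search described above, assuming a server on the local host with the monitor backend enabled and an identity authorized to read it:

> ldapsearch -x -H ldap://localhost -b "cn=Monitor" -s sub "(objectClass=*)" "+" "*"

The "+" selector requests all operational attributes and "*" the user attributes.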
H3: back-monitor Configuration + +LATER + +H3: Further Information + +{{slapd-monitor}}(5) + +H2: Null + + +H3: Overview + +The Null backend to {{slapd}}(8) is surely the most useful part of slapd: + +* Searches return success but no entries. +* Compares return compareFalse. +* Updates return success (unless readonly is on) but do nothing. +* Binds other than as the rootdn fail unless the database option "bind on" is given. +* The slapadd(8) and slapcat(8) tools are equally exciting. + +Inspired by the {{F:/dev/null}} device. + +H3: back-null Configuration + +LATER + +H3: Further Information + +{{slapd-null}}(5) + +H2: Passwd + + +H3: Overview + +The PASSWD backend to {{slapd}}(8) serves up the user account information +listed in the system {{passwd}}(5) file. + +This backend is provided for demonstration purposes only. The DN of each entry +is "uid=,". + +H3: back-passwd Configuration + +LATER + +H3: Further Information + +{{slapd-passwd}}(5) + +H2: Perl/Shell + +H3: Overview + +The Perl backend to {{slapd}}(8) works by embedding a {{perl}}(1) interpreter +into {{slapd}}(8). Any perl database section of the configuration file +{{slapd.conf}}(5) must then specify what Perl module to use. Slapd then creates +a new Perl object that handles all the requests for that particular instance of the backend. + +The Shell backend to {{slapd}}(8) executes external programs to implement +operations, and is designed to make it easy to tie an existing database to the +slapd front-end. This backend is primarily intended to be used in prototypes. + +H3: back-perl/back-shell Configuration + +LATER + +H3: Further Information + +{{slapd-shell}}(5) and {{slapd-perl}}(5) + +H2: Relay + + +H3: Overview + +The primary purpose of this {{slapd}}(8) backend is to map a naming context +defined in a database running in the same {{slapd}}(8) instance into a +virtual naming context, with attributeType and objectClass manipulation, if +required. It requires the rwm overlay.
+ +This backend and the above-mentioned overlay are experimental. + +H3: back-relay Configuration + +LATER + +H3: Further Information + +{{slapd-relay}}(5) + +H2: SQL + + +H3: Overview + +The primary purpose of this {{slapd}}(8) backend is to PRESENT information +stored in some RDBMS as an LDAP subtree without any programming (some SQL and +maybe stored procedures can’t be considered programming, anyway ;). + +That is, for example, when you (some ISP) have account information you use in +an RDBMS, and want to use modern solutions that expect such information in LDAP +(to authenticate users, make email lookups etc.). Or you want to synchronize or +distribute information between different sites/applications that use RDBMSes +and/or LDAP. Or whatever else... + +It is {{B:NOT}} designed as a general-purpose backend that uses RDBMS instead of +BerkeleyDB (as the standard BDB backend does), though it can be used as such with +several limitations. Please see {{SECT: LDAP vs RDBMS}} for discussion. + +The idea is to use some meta-information to translate LDAP queries to SQL queries, +leaving the relational schema untouched, so that old applications can continue using +it without any modifications. This allows SQL and LDAP applications to interoperate +without replication, and exchange data as needed. + +The SQL backend is designed to be tunable to virtually any relational schema without +having to change source (through that meta-information mentioned). Also, it uses +ODBC to connect to RDBMSes, and is highly configurable for SQL dialects RDBMSes +may use, so it may be used for integration and distribution of data on different +RDBMSes, OSes, hosts etc., in other words, in a highly heterogeneous environment. + +This backend is experimental.
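As a purely hypothetical illustration of the translation idea (the table and column names here are invented; the real backend derives such statements from its mapping meta-information, see {{slapd-sql}}(5)), a filter like {{EX:(&(objectClass=inetOrgPerson)(cn=John*))}} against a conventional accounts table might end up roughly as:

> SELECT p.id, p.name, p.surname, p.phone
>   FROM persons p
>  WHERE p.name LIKE 'John%'

The relational schema itself stays untouched; only the mapping meta-information tells slapd how to build such statements.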
+ +H3: back-sql Configuration + +LATER + +H3: Further Information + +{{slapd-sql}}(5) diff --git a/doc/guide/admin/config.sdf b/doc/guide/admin/config.sdf index 05700cfe4d..f80ec4a1d3 100644 --- a/doc/guide/admin/config.sdf +++ b/doc/guide/admin/config.sdf @@ -15,7 +15,7 @@ directory service for your local domain only. It does not interact with other directory servers in any way. This configuration is shown in Figure 3.1. -!import "config_local.gif"; align="center"; title="Local service via slapd(8) configuration" +!import "config_local.png"; align="center"; title="Local service via slapd(8) configuration" FT[align="Center"] Figure 3.1: Local service configuration. Use this configuration if you are just starting out (it's the one the @@ -32,7 +32,7 @@ referrals to other servers capable of handling requests. You may run this service (or services) yourself or use one provided to you. This configuration is shown in Figure 3.2. -!import "config_ref.gif"; align="center"; title="Local service with referrals" +!import "config_ref.png"; align="center"; title="Local service with referrals" FT[align="Center"] Figure 3.2: Local service with referrals Use this configuration if you want to provide local service and diff --git a/doc/guide/admin/config_dit.gif b/doc/guide/admin/config_dit.gif deleted file mode 100644 index 2327d03c72..0000000000 Binary files a/doc/guide/admin/config_dit.gif and /dev/null differ diff --git a/doc/guide/admin/config_dit.png b/doc/guide/admin/config_dit.png new file mode 100644 index 0000000000..fd51f296da Binary files /dev/null and b/doc/guide/admin/config_dit.png differ diff --git a/doc/guide/admin/config_local.gif b/doc/guide/admin/config_local.gif deleted file mode 100644 index 6690d46fa0..0000000000 Binary files a/doc/guide/admin/config_local.gif and /dev/null differ diff --git a/doc/guide/admin/config_local.png b/doc/guide/admin/config_local.png new file mode 100644 index 0000000000..5337c7ffee Binary files /dev/null and 
b/doc/guide/admin/config_local.png differ diff --git a/doc/guide/admin/config_ref.gif b/doc/guide/admin/config_ref.gif deleted file mode 100644 index 9108d3a7d4..0000000000 Binary files a/doc/guide/admin/config_ref.gif and /dev/null differ diff --git a/doc/guide/admin/config_ref.png b/doc/guide/admin/config_ref.png new file mode 100644 index 0000000000..cca3dde776 Binary files /dev/null and b/doc/guide/admin/config_ref.png differ diff --git a/doc/guide/admin/config_x500fe.gif b/doc/guide/admin/config_x500fe.gif deleted file mode 100644 index 916a26eae3..0000000000 Binary files a/doc/guide/admin/config_x500fe.gif and /dev/null differ diff --git a/doc/guide/admin/config_x500ref.gif b/doc/guide/admin/config_x500ref.gif deleted file mode 100644 index c986d865e1..0000000000 Binary files a/doc/guide/admin/config_x500ref.gif and /dev/null differ diff --git a/doc/guide/admin/dbtools.sdf b/doc/guide/admin/dbtools.sdf index 3de7710d30..61b2aec692 100644 --- a/doc/guide/admin/dbtools.sdf +++ b/doc/guide/admin/dbtools.sdf @@ -18,7 +18,7 @@ special utilities provided with slapd. This method is best if you have many thousands of entries to create, which would take an unacceptably long time using the LDAP method, or if you want to ensure the database is not accessed while it is being created. Note -that not all database types support these utilitites. +that not all database types support these utilities. H2: Creating a database over LDAP diff --git a/doc/guide/admin/guide.book b/doc/guide/admin/guide.book new file mode 100644 index 0000000000..200a227edd --- /dev/null +++ b/doc/guide/admin/guide.book @@ -0,0 +1,3 @@ +#HTMLDOC 1.8.27 +-t pdf14 -f "OpenLDAP-Admin-Guide.pdf" --book --toclevels 3 --no-numbered --toctitle "Table of Contents" --title --titleimage "../images/LDAPwww.gif" --linkstyle plain --size Universal --left 1.00in --right 0.50in --top 0.50in --bottom 0.50in --header .t. --header1 ... --footer ..1 --nup 1 --tocheader .t. 
--tocfooter ..i --duplex --portrait --color --no-pscommands --no-xrxcomments --compression=1 --jpeg=0 --fontsize 11.0 --fontspacing 1.2 --headingfont Helvetica --bodyfont Times --headfootsize 11.0 --headfootfont Helvetica --charset iso-8859-1 --links --embedfonts --pagemode outline --pagelayout single --firstpage p1 --pageeffect none --pageduration 10 --effectduration 1.0 --no-encryption --permissions all --owner-password "" --user-password "" --browserwidth 680 --no-strict --no-overflow +admin.html diff --git a/doc/guide/admin/install.sdf b/doc/guide/admin/install.sdf index 18e113f529..1d4e7b5ab0 100644 --- a/doc/guide/admin/install.sdf +++ b/doc/guide/admin/install.sdf @@ -21,7 +21,7 @@ directly from the project's {{TERM:FTP}} service at The project makes available two series of packages for {{general use}}. The project makes {{releases}} as new features and bug fixes -come available. Though the project takes steps to improve stablity +come available. Though the project takes steps to improve stability of these releases, it is common for problems to arise only after {{release}}. The {{stable}} release is the latest {{release}} which has demonstrated stability through general use. @@ -63,16 +63,18 @@ installation instructions provided with it. H3: {{TERM[expand]TLS}} -OpenLDAP clients and servers require installation of {{PRD:OpenSSL}} +OpenLDAP clients and servers require installation of either {{PRD:OpenSSL}} +or {{PRD:GnuTLS}} {{TERM:TLS}} libraries to provide {{TERM[expand]TLS}} services. Though some operating systems may provide these libraries as part of the -base system or as an optional software component, OpenSSL often -requires separate installation. +base system or as an optional software component, OpenSSL and GnuTLS often +require separate installation. OpenSSL is available from {{URL: http://www.openssl.org/}}. +GnuTLS is available from {{URL: http://www.gnu.org/software/gnutls/}}. 
OpenLDAP Software will not be fully LDAPv3 compliant unless OpenLDAP's -{{EX:configure}} detects a usable OpenSSL installation. +{{EX:configure}} detects a usable TLS library. H3: {{TERM[expand]SASL}} diff --git a/doc/guide/admin/intro.sdf b/doc/guide/admin/intro.sdf index 8d40e9d724..fe8f23bb09 100644 --- a/doc/guide/admin/intro.sdf +++ b/doc/guide/admin/intro.sdf @@ -57,8 +57,8 @@ support browsing and searching. While some consider the Internet {{TERM[expand]DNS}} (DNS) is an example of a globally distributed directory service, DNS is not -browsable nor searchable. It is more properly described as a -globaly distributed {{lookup}} service. +browseable nor searchable. It is more properly described as a +globally distributed {{lookup}} service. H2: What is LDAP? @@ -96,7 +96,7 @@ units, people, printers, documents, or just about anything else you can think of. Figure 1.1 shows an example LDAP directory tree using traditional naming. -!import "intro_tree.gif"; align="center"; \ +!import "intro_tree.png"; align="center"; \ title="LDAP directory tree (traditional naming)" FT[align="Center"] Figure 1.1: LDAP directory tree (traditional naming) @@ -106,7 +106,7 @@ for directory services to be located using the {{DNS}}. Figure 1.2 shows an example LDAP directory tree using domain-based naming. -!import "intro_dctree.gif"; align="center"; \ +!import "intro_dctree.png"; align="center"; \ title="LDAP directory tree (Internet naming)" FT[align="Center"] Figure 1.2: LDAP directory tree (Internet naming) @@ -154,6 +154,12 @@ LDAP also supports data security (integrity and confidentiality) services. +H2: When should I use LDAP? + + +H2: When should I not use LDAP? + + H2: How does LDAP work? LDAP utilizes a {{client-server model}}. One or more LDAP servers @@ -205,22 +211,127 @@ H2: What is the difference between LDAPv2 and LDAPv3? LDAPv3 was developed in the late 1990's to replace LDAPv2. 
LDAPv3 adds the following features to LDAP: - - Strong authentication and data security services via {{TERM:SASL}} - - Certificate authentication and data security services via {{TERM:TLS}} (SSL) - - Internationalization through the use of Unicode - - Referrals and Continuations - - Schema Discovery - - Extensibility (controls, extended operations, and more) + * Strong authentication and data security services via {{TERM:SASL}} + * Certificate authentication and data security services via {{TERM:TLS}} (SSL) + * Internationalization through the use of Unicode + * Referrals and Continuations + * Schema Discovery + * Extensibility (controls, extended operations, and more) LDAPv2 is historic ({{REF:RFC3494}}). As most {{so-called}} LDAPv2 implementations (including {{slapd}}(8)) do not conform to the -LDAPv2 technical specification, interoperatibility amongst +LDAPv2 technical specification, interoperability amongst implementations claiming LDAPv2 support is limited. As LDAPv2 differs significantly from LDAPv3, deploying both LDAPv2 and LDAPv3 simultaneously is quite problematic. LDAPv2 should be avoided. LDAPv2 is disabled by default. +H2: LDAP vs RDBMS + +This question is raised many times, in different forms. The most common, +however, is: {{Why doesn't OpenLDAP drop Berkeley DB and use a relational +database management system (RDBMS) instead?}} In general, the expectation is that the +sophisticated algorithms implemented by a commercial-grade RDBMS would make +{{OpenLDAP}} faster or somehow better while, at the same time, permitting +data to be shared with other applications. + +The short answer is that use of an embedded database and custom indexing system +allows OpenLDAP to provide greater performance and scalability without loss of +reliability. OpenLDAP, since release 2.1, in its main storage-oriented backends +(back-bdb and, since 2.2, back-hdb) uses Berkeley DB concurrent / transactional +database software.
This is the same software used by leading commercial +directory software. + +Now for the long answer. We are all confronted all the time with the choice +RDBMSes vs. directories. It is a hard choice and no simple answer exists. + +It is tempting to think that having an RDBMS backend to the directory solves all +problems. However, it is a pig. This is because the data models are very +different. Representing directory data with a relational database is going to +require splitting data into multiple tables. + +Think for a moment about the person objectclass. Its definition requires +attribute types objectclass, sn and cn and allows attribute types userPassword, +telephoneNumber, seeAlso and description. All of these attributes are multivalued, +so a normalization requires putting each attribute type in a separate table. + +Now you have to decide on appropriate keys for those tables. The primary key +might be a combination of the DN, but this becomes rather inefficient on most +database implementations. + +The big problem now is that accessing data from one entry requires seeking on +different disk areas. On some applications this may be OK but in many +applications performance suffers. + +The only attribute types that can be put in the main table entry are those that +are mandatory and single-valued. You may also add the optional single-valued +attributes and set them to NULL or something if not present. + +But wait, the entry can have multiple objectclasses and they are organized in +an inheritance hierarchy. An entry of objectclass organizationalPerson now has +the attributes from person plus a few others and some formerly optional attribute +types are now mandatory. + +What to do? Should we have different tables for the different objectclasses? +This way the person would have an entry on the person table, another on +organizationalPerson, etc. Or should we get rid of person and put everything on +the second table?
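Whichever choice is made, the per-attribute normalization sketched above already implies something like the following (hypothetical DDL, just to make the cost visible):

> CREATE TABLE entries ( id INTEGER PRIMARY KEY, dn VARCHAR(255) UNIQUE );
> CREATE TABLE cn ( entry_id INTEGER REFERENCES entries(id), value VARCHAR(255) );
> CREATE TABLE sn ( entry_id INTEGER REFERENCES entries(id), value VARCHAR(255) );
> CREATE TABLE telephoneNumber ( entry_id INTEGER REFERENCES entries(id), value VARCHAR(64) );

Reading back a single entry then means joining, or separately querying, one table per attribute type.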
+ +But what do we do with a filter like (cn=*) where cn is an attribute type that +appears in many, many objectclasses? Should we search all possible tables for +matching entries? Not very attractive. + +Once this point is reached, three approaches come to mind. One is to do full +normalization so that each attribute type, no matter what, has its own separate +table. The simplistic approach where the DN is part of the primary key is +extremely wasteful, and calls for an approach where the entry has a unique +numeric id that is used instead for the keys and a main table that maps DNs to +ids. The approach, anyway, is very inefficient when several attribute types from +one or more entries are requested. Such a database, though cumbersome, +can still be managed from SQL applications. + +The second approach is to put the whole entry as a blob in a table shared by all +entries regardless of the objectclass and have additional tables that act as +indices for the first table. Index tables are not database indices, but are +fully managed by the LDAP server-side implementation. However, the database +becomes unusable from SQL. And, thus, a fully fledged database system provides +little or no advantage. The full generality of the database is unneeded. +Much better to use something light and fast, like Berkeley DB. + +A completely different way to see this is to give up any hopes of implementing +the directory data model. In this case, LDAP is used as an access protocol to +data that only superficially provides the directory data model. For instance, +it may be read only or, where updates are allowed, restrictions are applied, +such as making attribute types single-valued that would otherwise allow multiple values, +or making it impossible to add new objectclasses to an existing entry or to remove +one of those present. The restrictions span the range from allowed restrictions +(that might be elsewhere the result of access control) to outright violations of +the data model.
It can be, however, a method to provide LDAP access to preexisting +data that is used by other applications. But with the understanding that we don't +really have a "directory". + +Existing commercial LDAP server implementations that use a relational database +are either of the first kind or the third. I don't know of any implementation +that uses a relational database to do inefficiently what BDB does efficiently. +For those who are interested in the "third way" (exposing EXISTING data from RDBMS +as LDAP tree, having some limitations compared to classic LDAP model, but making +it possible to interoperate between LDAP and SQL applications): + +OpenLDAP includes back-sql - the backend that makes it possible. It uses ODBC + +additional metainformation about translating LDAP queries to SQL queries in your +RDBMS schema, providing different levels of access - from read-only to full +access depending on the RDBMS you use and your schema. + +For more information on concept and limitations, see the {{slapd-sql}}(5) man page, +or the {{SECT: Backends}} section. There are also examples for several +RDBMSes in the {{F:back-sql/rdbms_depend/*}} subdirectories. + +TO REFERENCE: + +http://blogs.sun.com/treydrake/entry/ldap_vs_relational_database +http://blogs.sun.com/treydrake/entry/ldap_vs_relational_database_part + H2: What is slapd and what can it do? + {{slapd}}(8) is an LDAP directory server that runs on many different @@ -243,7 +354,7 @@ SASL}} software which supports a number of mechanisms including {{B:{{TERM[expand]TLS}}}}: {{slapd}} supports certificate-based authentication and data security (integrity and confidentiality) services through the use of TLS (or SSL). {{slapd}}'s TLS -implementation utilizes {{PRD:OpenSSL}} software. +implementation can utilize either {{PRD:OpenSSL}} or {{PRD:GnuTLS}} software. {{B:Topology control}}: {{slapd}} can be configured to restrict access at the socket layer based upon network topology information.
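When slapd is built with TCP wrapper support, such a restriction is typically expressed through {{F:hosts_access}}(5); a sketch (the network below is an example):

> # /etc/hosts.allow
> slapd: 127.0.0.1 192.168.1. : ALLOW
> slapd: ALL : DENY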
@@ -283,8 +394,7 @@ well-defined {{TERM:C}} {{TERM:API}}, you can write your own customized modules which extend {{slapd}} in numerous ways. Also, a number of {{programmable database}} modules are provided. These allow you to expose external data sources to {{slapd}} using popular -programming languages ({{PRD:Perl}}, {{shell}}, {{TERM:SQL}}, and -{{PRD:TCL}}). +programming languages ({{PRD:Perl}}, {{shell}}, and {{TERM:SQL}}). {{B:Threads}}: {{slapd}} is threaded for high performance. A single multi-threaded {{slapd}} process handles all incoming requests using @@ -294,8 +404,10 @@ required while providing high performance. {{B:Replication}}: {{slapd}} can be configured to maintain shadow copies of directory information. This {{single-master/multiple-slave}} replication scheme is vital in high-volume environments where a -single {{slapd}} just doesn't provide the necessary availability -or reliability. {{slapd}} includes support for {{LDAP Sync}}-based +single {{slapd}} installation just doesn't provide the necessary availability +or reliability. For extremely demanding environments where a +single point of failure is not acceptable, {{multi-master}} replication +is also available. {{slapd}} includes support for {{LDAP Sync}}-based replication. {{B:Proxy Cache}}: {{slapd}} can be configured as a caching @@ -304,5 +416,7 @@ LDAP proxy service. {{B:Configuration}}: {{slapd}} is highly configurable through a single configuration file which allows you to change just about everything you'd ever want to change. Configuration options have -reasonable defaults, making your job much easier. +reasonable defaults, making your job much easier. Configuration can +also be performed dynamically using LDAP itself, which greatly +improves manageability.
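As a sketch of the dynamic approach, assuming the server uses the {{slapd-config}}(5) configuration backend and an identity authorized to modify it (the bind DN here is an example), the log level could be changed at runtime without a restart:

> ldapmodify -x -D "cn=admin,cn=config" -W <<'EOF'
> dn: cn=config
> changetype: modify
> replace: olcLogLevel
> olcLogLevel: stats
> EOF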
diff --git a/doc/guide/admin/intro_dctree.gif b/doc/guide/admin/intro_dctree.gif deleted file mode 100644 index 5be4b171ac..0000000000 Binary files a/doc/guide/admin/intro_dctree.gif and /dev/null differ diff --git a/doc/guide/admin/intro_dctree.png b/doc/guide/admin/intro_dctree.png new file mode 100644 index 0000000000..099588c5bc Binary files /dev/null and b/doc/guide/admin/intro_dctree.png differ diff --git a/doc/guide/admin/intro_tree.gif b/doc/guide/admin/intro_tree.gif deleted file mode 100644 index 376e28778f..0000000000 Binary files a/doc/guide/admin/intro_tree.gif and /dev/null differ diff --git a/doc/guide/admin/intro_tree.png b/doc/guide/admin/intro_tree.png new file mode 100644 index 0000000000..043b51e813 Binary files /dev/null and b/doc/guide/admin/intro_tree.png differ diff --git a/doc/guide/admin/maintenance.sdf b/doc/guide/admin/maintenance.sdf new file mode 100644 index 0000000000..5bba1a5f5a --- /dev/null +++ b/doc/guide/admin/maintenance.sdf @@ -0,0 +1,110 @@ +# $OpenLDAP$ +# Copyright 2007 The OpenLDAP Foundation, All Rights Reserved. +# COPYING RESTRICTIONS APPLY, see COPYRIGHT. + +H1: Maintenance + +System Administration is all about maintenance, so it is only fair that we +discuss how to correctly maintain an OpenLDAP deployment. + + +H2: Directory Backups + +MORE + +You can use {{slapcat}}(8) to generate an LDIF file for each of your {{slapd}}(8) +back-bdb or back-hdb databases. + +> slapcat -f slapd.conf -b "dc=example,dc=com" + +For back-bdb and back-hdb, this command may be run while slapd(8) is running. + +MORE + + +H2: Berkeley DB Logs + +Berkeley DB log files grow, and the administrator has to deal with it. The +procedure is known as log file archival or log file rotation. + +Note: The actual log file rotation is handled by the Berkeley DB engine. + +Logs of current transactions need to be stored into files so that the database +can be recovered in the event of an application crash.
Administrators can change +the size limit of a single log file (by default 10MB), and have old log files +removed automatically, by setting up the DB environment (see below). The reason +Berkeley DB never deletes any log files by default is that the administrator +may wish to back up the log files before removal to make database recovery +possible even after a catastrophic failure, such as file system corruption. + +Log file names are {{F:log.XXXXXXXXXX}} (X is a digit). By default the log files +are located in the BDB backend directory. The {{F:db_archive}} tool knows which +log files are used in current transactions, and which are not. Administrators can +move unused log files to backup media, and delete them. To have them removed +automatically, place a set_flags {{DB_LOG_AUTOREMOVE}} directive in {{F:DB_CONFIG}}. + +Note: If the log files are removed automatically, recovery after a catastrophic +failure is likely to be impossible. + +The files with names {{F:__db.001}}, {{F:__db.002}}, etc. are just shared memory +regions (or whatever). These ARE NOT 'logs', they must be left alone. Don't be +afraid of them, they do not grow like logs do. + +To understand the {{F:db_archive}} interface, the reader should refer to +chapter 9 of the Berkeley DB guide. In particular, the following sections are +recommended: + +* Database and log file archival +* Log file removal +* Recovery procedures +* Hot failover + +Advanced installations can use special environment settings to fine-tune some +Berkeley DB options (change the log file limit, etc). This can be done by using +the {{F:DB_CONFIG}} file. This magic file can be created in the BDB backend directory +set up by {{slapd.conf}}(5). More information on this file can be found in the File +naming chapter. Specific directives can be found in the C Interface chapter; look for +{{DB_ENV->set_XXXX}} calls. + +Note: options set in the {{F:DB_CONFIG}} file override options set by OpenLDAP. +Use them with extreme caution.
Do not use them unless you know what you are doing. + +The advantages of {{F:DB_CONFIG}} usage can be the following: + +* to keep data files and log files on different media (e.g. disks) to improve + performance and/or reliability; +* to fine-tune some specific options (such as shared memory region sizes); +* to set the log file limit (please read Log file limits before doing this). + +To figure out the best-practice BDB backup scenario, the reader is strongly +encouraged to read the whole Chapter 9: Berkeley DB Transactional Data Store Applications. +This chapter is a set of small pages with examples in the C language. Non-programming +people can skip these examples without loss of knowledge. + + +H2: Checkpointing + +MORE/TIDY + +If you put "checkpoint 1024 5" in slapd.conf (to checkpoint after 1024kb or 5 minutes, +for example), this does not checkpoint every 5 minutes as you may think. +The explanation from Howard is: + +'In OpenLDAP 2.1 and 2.2 the checkpoint directive acts as follows - *when there +is a write operation*, and more than minutes have occurred since the +last checkpoint, perform the checkpoint. If more than minutes pass after +a write without any other write operations occurring, no checkpoint is performed, +so it's possible to lose the last write that occurred.' + +In other words, a write operation occurring less than "check" minutes after the +last checkpoint will not be checkpointed until the next write occurs after "check" +minutes have passed since the checkpoint. + +This has been modified in 2.3 to indeed checkpoint every so often; in the meantime +a workaround is to invoke "db_checkpoint" from a cron script every so often, say every 5 minutes. + +H2: Migration + +Exporting to a new system......
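A hedged outline of the usual export/import approach (paths and the suffix are examples; run the import with the new system's configuration before starting its slapd):

> slapcat -f slapd.conf -b "dc=example,dc=com" -l export.ldif
> # copy export.ldif to the new system, then there:
> slapadd -f slapd.conf -b "dc=example,dc=com" -l export.ldif

Since LDIF is plain text, this works even when the OpenLDAP version or backend type differs between the two systems.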
+ + diff --git a/doc/guide/admin/master.sdf b/doc/guide/admin/master.sdf index 7d7b4b2471..f9dc9ee61a 100644 --- a/doc/guide/admin/master.sdf +++ b/doc/guide/admin/master.sdf @@ -48,6 +48,12 @@ PB: !include "dbtools.sdf"; chapter PB: +!include "backends.sdf"; chapter +PB: + +!include "overlays.sdf"; chapter +PB: + !include "schema.sdf"; chapter PB: @@ -60,25 +66,32 @@ PB: !include "tls.sdf"; chapter PB: -!include "monitoringslapd.sdf"; chapter +!include "referrals.sdf"; chapter PB: -#!include "tuning.sdf"; chapter -#PB: +!include "replication.sdf"; chapter +PB: -!include "referrals.sdf"; chapter +!include "maintenance.sdf"; chapter PB: -!include "replication.sdf"; chapter +!include "monitoringslapd.sdf"; chapter PB: -!include "syncrepl.sdf"; chapter +!include "tuning.sdf"; chapter PB: -!include "proxycache.sdf"; chapter +!include "troubleshooting.sdf"; chapter PB: # Appendices +!include "appendix-changes.sdf"; appendix +PB: + +# Config file examples +!include "appendix-configs.sdf"; appendix +PB: + # Terms !include "glossary.sdf"; appendix PB: diff --git a/doc/guide/admin/monitoringslapd.sdf b/doc/guide/admin/monitoringslapd.sdf index cc2311b605..a21ebcaf5b 100644 --- a/doc/guide/admin/monitoringslapd.sdf +++ b/doc/guide/admin/monitoringslapd.sdf @@ -55,7 +55,7 @@ First, ensure {{core.schema}} schema configuration file is included by your {{slapd.conf}}(5) file. The {{monitor}} backend requires it. -Second, instanticate the {{monitor backend}} by adding a +Second, instantiate the {{monitor backend}} by adding a {{database monitor}} directive below your existing database sections. For instance: @@ -64,7 +64,7 @@ sections. For instance: Lastly, add additional global or database directives as needed. Like most other database backends, the monitor backend does honor -slapd(8) access and other adminstrative controls. As some monitor +slapd(8) access and other administrative controls. 
As some monitor information may be sensitive, it is generally recommended that access to cn=monitor be restricted to directory administrators and their monitoring agents. Adding an {{access}} directive immediately below @@ -99,7 +99,7 @@ Note that unlike general purpose database backends, the database suffix is hardcoded. It's always {{EX:cn=Monitor}}. So no {{suffix}} directive should be provided. Also note that unlike general purpose database backends, the monitor backend cannot be instantiated -multiple times. That is, there can only be one (or zero) occurances +multiple times. That is, there can only be one (or zero) occurrences of {{EX:database monitor}} in the server's configuration. @@ -498,3 +498,8 @@ Write waiters: > entryDN: cn=Write,cn=Waiters,cn=Monitor > subschemaSubentry: cn=Subschema > hasSubordinates: FALSE + +Add new monitored things here and discuss, referencing man pages and presenting +examples + + diff --git a/doc/guide/admin/overlays.sdf b/doc/guide/admin/overlays.sdf new file mode 100644 index 0000000000..b153978ece --- /dev/null +++ b/doc/guide/admin/overlays.sdf @@ -0,0 +1,413 @@ +# $OpenLDAP$ +# Copyright 2007 The OpenLDAP Foundation, All Rights Reserved. +# COPYING RESTRICTIONS APPLY, see COPYRIGHT. + +H1: Overlays + +Overlays are software components that provide hooks analogous to the +functions provided by backends; they can be stacked on top of backend calls, +and act as callbacks on backend responses, to alter their behavior. + +Overlays may be compiled statically into slapd, or when module support +is enabled, they may be dynamically loaded. Most of the overlays +are only allowed to be configured on individual databases, but some +may also be configured globally.
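As a sketch of dynamic loading in {{slapd.conf}}(5) (the accesslog overlay is used as an example; the module path and file names vary with the installation, and further overlay-specific directives are omitted):

> modulepath /usr/local/libexec/openldap
> moduleload accesslog.la
> database bdb
> suffix   "dc=example,dc=com"
> overlay  accesslog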
+ +Essentially they represent a means to: + + * customize the behavior of existing backends without changing the backend + code and without requiring one to write a new custom backend with + complete functionality + * write functionality of general usefulness that can be applied to + different backend types + +Overlays are usually documented by separate specific man pages in section 5; +the naming convention is + +> slapo- + +Not all distributed overlays have a man page yet. Feel free to contribute one, +if you feel you understand the behavior of the component and the +implications of all the related configuration directives well enough. + +Official overlays are located in + +> servers/slapd/overlays/ + +That directory also contains the file slapover.txt, which describes the +rationale of the overlay implementation, and may serve as a guideline for the +development of custom overlays. + +Contribware overlays are located in + +> contrib/slapd-modules// + +along with other types of run-time loadable components; they are officially +distributed, but not maintained by the project. + +They can be stacked on the frontend as well; this means that they can be +executed after a request is parsed and validated, but right before the +appropriate database is selected. The main purpose is to affect operations +regardless of the database they will be handled by, and, in some cases, +to influence the selection of the database by massaging the request DN. + +All the current overlays in 2.4 are listed and described in detail in the +following sections. + + +H2: Access Logging + + +H3: Overview + +This overlay can record accesses to a given backend database on another +database. + + +H3: Access Logging Configuration + + +H2: Audit Logging + +This overlay records changes on a given backend database to an LDIF log +file.
+ + +H3: Overview + + +H3: Audit Logging Configuration + + +H2: Chaining + + +H3: Overview + +The chain overlay provides basic chaining capability to the underlying +database. + +What is chaining? It indicates the capability of a DSA to follow referrals on +behalf of the client, so that distributed systems are viewed as a single +virtual DSA by clients that are otherwise unable to "chase" (i.e. follow) +referrals by themselves. + +The chain overlay is built on top of the ldap backend; it is compiled by +default when --enable-ldap is specified. + + +H3: Chaining Configuration + + +H2: Constraints + + +H3: Overview + +This overlay enforces a regular expression constraint on all values +of specified attributes. It is used to enforce a more rigorous +syntax when the underlying attribute syntax is too general. + + +H3: Constraint Configuration + + +H2: Dynamic Directory Services + + +H3: Overview + +This overlay supports dynamic objects, which have a limited life after +which they expire and are automatically deleted. + + +H3: Dynamic Directory Service Configuration + + +H2: Dynamic Groups + + +H3: Overview + +This overlay extends the Compare operation to detect +members of a dynamic group. This overlay is now deprecated +as all of its functions are available using the +{{SECT:Dynamic Lists}} overlay. + + +H3: Dynamic Group Configuration + + +H2: Dynamic Lists + + +H3: Overview + +This overlay allows expansion of dynamic groups and more. + + +H3: Dynamic List Configuration + + +H2: Reverse Group Membership Maintenance + + +H3: Member Of Configuration + + +H2: The Proxy Cache Engine + +{{TERM:LDAP}} servers typically hold one or more subtrees of a +{{TERM:DIT}}. Replica (or shadow) servers hold shadow copies of +entries held by one or more master servers. Changes are propagated +from the master server to replica (slave) servers using LDAP Sync +replication. An LDAP cache is a special type of replica which holds +entries corresponding to search filters instead of subtrees.
+
+H3: Overview
+
+The proxy cache extension of slapd is designed to improve the
+responsiveness of the ldap and meta backends. It handles a search
+request (query) by first determining whether it is contained in any
+cached search filter. Contained requests are answered from the proxy
+cache's local database. Other requests are passed on to the underlying
+ldap or meta backend and processed as usual.
+
+For example, {{EX:(shoesize>=9)}} is contained in {{EX:(shoesize>=8)}} and
+{{EX:(sn=Richardson)}} is contained in {{EX:(sn=Richards*)}}.
+
+Correct matching rules and syntaxes are used while comparing
+assertions for query containment. To simplify the query containment
+problem, a list of cacheable "templates" (defined below) is specified
+at configuration time. A query is cached or answered only if it
+belongs to one of these templates. The entries corresponding to
+cached queries are stored in the proxy cache local database, while
+the associated meta information (filter, scope, base, attributes)
+is stored in main memory.
+
+A template is a prototype for generating LDAP search requests.
+Templates are described by a prototype search filter and a list of
+attributes which are required in queries generated from the template.
+The representation of a prototype filter is similar to {{REF:RFC4515}},
+except that the assertion values are missing. Examples of prototype
+filters are (sn=) and (&(sn=)(givenname=)), which are instantiated by
+the search filters (sn=Doe) and (&(sn=Doe)(givenname=John)) respectively.
+
+The cache replacement policy removes the least recently used (LRU)
+query and the entries belonging to only that query. Queries are allowed
+a maximum time to live (TTL) in the cache, thus providing weak
+consistency. A background task periodically checks the cache for
+expired queries and removes them.
+
+The Proxy Cache paper
+({{URL:http://www.openldap.org/pub/kapurva/proxycaching.pdf}}) provides
+design and implementation details.
+
+
+H3: Proxy Cache Configuration
+
+The cache-specific configuration directives described below must
+appear after an {{EX:overlay proxycache}} directive within a
+{{EX:database meta}} or {{EX:database ldap}} section of
+the server's {{slapd.conf}}(5) file.
+
+H4: Setting cache parameters
+
+> proxyCache <DB> <max_entries> <numattrsets> <entry_limit> <cc_period>
+
+This directive enables proxy caching and sets general cache
+parameters. The <DB> parameter specifies which underlying database
+is to be used to hold cached entries. It should be set to
+{{EX:bdb}} or {{EX:hdb}}. The <max_entries> parameter specifies the
+total number of entries which may be held in the cache. The
+<numattrsets> parameter specifies the total number of attribute sets
+(as specified by the {{EX:proxyAttrset}} directive) that may be
+defined. The <entry_limit> parameter specifies the maximum number of
+entries in a cacheable query. The <cc_period> parameter specifies the
+consistency check period (in seconds). In each period, queries with
+expired TTLs are removed.
+
+H4: Defining attribute sets
+
+> proxyAttrset <index> <attrs...>
+
+Used to associate a set of attributes with an index. Each attribute
+set is associated with an index number from 0 to <numattrsets>-1.
+These indices are used by the proxyTemplate directive to define
+cacheable templates.
+
+H4: Specifying cacheable templates
+
+> proxyTemplate <prototype_string> <attrset_index> <TTL>
+
+Specifies a cacheable template and the "time to live" (in seconds)
+for queries belonging to the template. A template is described by
+its prototype filter string and a set of required attributes identified
+by <attrset_index>.
+
+
+H4: Example
+
+An example {{slapd.conf}}(5) database section for a caching server
+which proxies for the {{EX:"dc=example,dc=com"}} subtree held
+at server {{EX:ldap.example.com}}:
+
+> database ldap
+> suffix "dc=example,dc=com"
+> rootdn "dc=example,dc=com"
+> uri ldap://ldap.example.com/dc=example%2cdc=com
+> overlay proxycache
+> proxycache bdb 100000 1 1000 100
+> proxyAttrset 0 mail postaladdress telephonenumber
+> proxyTemplate (sn=) 0 3600
+> proxyTemplate (&(sn=)(givenName=)) 0 3600
+> proxyTemplate (&(departmentNumber=)(secretary=*)) 0 3600
+>
+> cachesize 20
+> directory ./testrun/db.2.a
+> index objectClass eq
+> index cn,sn,uid,mail pres,eq,sub
+
+
+H5: Cacheable Queries
+
+An LDAP search query is cacheable when its filter matches one of the
+templates defined in the "proxyTemplate" statements and when it references
+only the attributes specified in the corresponding attribute set.
+In the example above, attribute set number 0 defines that only the
+attributes {{EX:mail postaladdress telephonenumber}} are cached for the
+proxyTemplates that follow.
+
+H5: Examples:
+
+> Filter: (&(sn=Richard*)(givenName=jack))
+> Attrs: mail telephoneNumber
+
+ is cacheable, because it matches the template {{EX:(&(sn=)(givenName=))}} and its
+ attributes are contained in proxyAttrset 0.
+
+> Filter: (&(sn=Richard*)(telephoneNumber))
+> Attrs: givenName
+
+ is not cacheable, because the filter does not match the template,
+ nor is the attribute givenName stored in the cache.
+
+> Filter: (|(sn=Richard*)(givenName=jack))
+> Attrs: mail telephoneNumber
+
+ is not cacheable, because the filter does not match the template (logical
+ OR "|" condition instead of logical AND "&").
+
+
+H2: Password Policies
+
+
+H3: Overview
+
+This overlay provides a variety of password control mechanisms,
+e.g. password aging, password reuse and duplication control, mandatory
+password resets, etc.
+
+
+H3: Password Policy Configuration
+
+
+H2: Referential Integrity
+
+
+H3: Overview
+
+This overlay can be used with a backend database such as {{slapd-bdb}}(5)
+to maintain the cohesiveness of a schema which utilizes reference
+attributes.
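+
+A minimal sketch of enabling this overlay in {{slapd.conf}}(5), using
+directives from {{slapo-refint}}(5); the attribute list and the DN below
+are only illustrative:
+
+> overlay refint
+> refint_attributes member manager owner
+> refint_nothing "cn=admin,dc=example,dc=com"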
+
+
+H3: Referential Integrity Configuration
+
+
+H2: Return Code
+
+
+H3: Overview
+
+This overlay is useful for testing the behavior of clients when the
+server returns erroneous and/or unusual responses.
+
+
+H3: Return Code Configuration
+
+
+H2: Rewrite/Remap
+
+
+H3: Overview
+
+This overlay performs basic DN/data rewriting and
+objectClass/attributeType mapping.
+
+
+H3: Rewrite/Remap Configuration
+
+
+H2: Sync Provider
+
+
+H3: Overview
+
+This overlay implements the provider-side support for syncrepl
+replication, including persistent search functionality.
+
+
+H3: Sync Provider Configuration
+
+
+H2: Translucent Proxy
+
+
+H3: Overview
+
+This overlay can be used with a backend database such as {{slapd-bdb}}(5)
+to create a "translucent proxy".
+
+The content of entries retrieved from a remote LDAP server can be partially
+overridden by the database.
+
+
+H3: Translucent Proxy Configuration
+
+
+H2: Attribute Uniqueness
+
+
+H3: Overview
+
+This overlay can be used with a backend database such as {{slapd-bdb}}(5)
+to enforce the uniqueness of some or all attributes within a subtree.
+
+
+H3: Attribute Uniqueness Configuration
+
+
+H2: Value Sorting
+
+
+H3: Overview
+
+This overlay can be used to enforce a specific order for the values
+of an attribute when it is returned in a search.
+
+
+H3: Value Sorting Configuration
+
+
+H2: Overlay Stacking
+
+
+H3: Overview
+
+
+H3: Example Scenarios
+
+
+H4: Samba
diff --git a/doc/guide/admin/preface.sdf b/doc/guide/admin/preface.sdf
index c3d7f320b7..83db7c7c13 100644
--- a/doc/guide/admin/preface.sdf
+++ b/doc/guide/admin/preface.sdf
@@ -9,7 +9,7 @@ P1: Preface
 # document's copyright
 P2[notoc] Copyright
 
-Copyright 1998-2006, The {{ORG[expand]OLF}}, {{All Rights Reserved}}.
+Copyright 1998-2007, The {{ORG[expand]OLF}}, {{All Rights Reserved}}.
 Copyright 1992-1996, Regents of the {{ORG[expand]UM}}, {{All Rights Reserved}}.
@@ -71,5 +71,5 @@ This document was produced using the {{TERM[expand]SDF}} ({{TERM:SDF}}) documentation system ({{URL:http://search.cpan.org/src/IANC/sdf-2.001/doc/catalog.html}}) developed by {{Ian Clatworthy}}. Tools for SDF are available from -{{ORG:CPAN}} ({{URL:http://search.cpan.org/search?query=SDF}}). +{{ORG:CPAN}} ({{URL:http://search.cpan.org/search?query=SDF&mode=dist}}). diff --git a/doc/guide/admin/proxycache.sdf b/doc/guide/admin/proxycache.sdf deleted file mode 100644 index 0d4dcab72b..0000000000 --- a/doc/guide/admin/proxycache.sdf +++ /dev/null @@ -1,148 +0,0 @@ -# $OpenLDAP$ -# Copyright 2003-2007 The OpenLDAP Foundation, All Rights Reserved. -# COPYING RESTRICTIONS APPLY, see COPYRIGHT. - -H1: The Proxy Cache Engine - -{{TERM:LDAP}} servers typically hold one or more subtrees of a -{{TERM:DIT}}. Replica (or shadow) servers hold shadow copies of -entries held by one or more master servers. Changes are propagated -from the master server to replica (slave) servers using LDAP Sync -replication. An LDAP cache is a special type of replica which holds -entries corresponding to search filters instead of subtrees. - -H2: Overview - -The proxy cache extension of slapd is designed to improve the -responseiveness of the ldap and meta backends. It handles a search -request (query) -by first determining whether it is contained in any cached search -filter. Contained requests are answered from the proxy cache's local -database. Other requests are passed on to the underlying ldap or -meta backend and processed as usual. - -E.g. {{EX:(shoesize>=9)}} is contained in {{EX:(shoesize>=8)}} and -{{EX:(sn=Richardson)}} is contained in {{EX:(sn=Richards*)}} - -Correct matching rules and syntaxes are used while comparing -assertions for query containment. To simplify the query containment -problem, a list of cacheable "templates" (defined below) is specified -at configuration time. A query is cached or answered only if it -belongs to one of these templates. 
The entries corresponding to -cached queries are stored in the proxy cache local database while -its associated meta information (filter, scope, base, attributes) -is stored in main memory. - -A template is a prototype for generating LDAP search requests. -Templates are described by a prototype search filter and a list of -attributes which are required in queries generated from the template. -The representation for prototype filter is similar to {{REF:RFC4515}}, -except that the assertion values are missing. Examples of prototype -filters are: (sn=),(&(sn=)(givenname=)) which are instantiated by -search filters (sn=Doe) and (&(sn=Doe)(givenname=John)) respectively. - -The cache replacement policy removes the least recently used (LRU) -query and entries belonging to only that query. Queries are allowed -a maximum time to live (TTL) in the cache thus providing weak -consistency. A background task periodically checks the cache for -expired queries and removes them. - -The Proxy Cache paper -({{URL:http://www.openldap.org/pub/kapurva/proxycaching.pdf}}) provides -design and implementation details. - - -H2: Proxy Cache Configuration - -The cache configuration specific directives described below must -appear after a {{EX:overlay proxycache}} directive within a -{{EX:"database meta"}} or {{EX:database ldap}} section of -the server's {{slapd.conf}}(5) file. - -H3: Setting cache parameters - -> proxyCache - -This directive enables proxy caching and sets general cache -parameters. The parameter specifies which underlying database -is to be used to hold cached entries. It should be set to -{{EX:bdb}} or {{EX:hdb}}. The parameter specifies the -total number of entries which may be held in the cache. The - parameter specifies the total number of attribute sets -(as specified by the {{EX:proxyAttrSet}} directive) that may be -defined. The parameter specifies the maximum number of -entries in a cachable query. The specifies the consistency -check period (in seconds). 
In each period, queries with expired -TTLs are removed. - -H3: Defining attribute sets - -> proxyAttrset - -Used to associate a set of attributes to an index. Each attribute -set is associated with an index number from 0 to -1. -These indices are used by the proxyTemplate directive to define -cacheable templates. - -H3: Specifying cacheable templates - -> proxyTemplate - -Specifies a cacheable template and the "time to live" (in sec) -for queries belonging to the template. A template is described by -its prototype filter string and set of required attributes identified -by . - - -H3: Example - -An example {{slapd.conf}}(5) database section for a caching server -which proxies for the {{EX:"dc=example,dc=com"}} subtree held -at server {{EX:ldap.example.com}}. - -> database ldap -> suffix "dc=example,dc=com" -> rootdn "dc=example,dc=com" -> uri ldap://ldap.example.com/dc=example%2cdc=com -> overlay proxycache -> proxycache bdb 100000 1 1000 100 -> proxyAttrset 0 mail postaladdress telephonenumber -> proxyTemplate (sn=) 0 3600 -> proxyTemplate (&(sn=)(givenName=)) 0 3600 -> proxyTemplate (&(departmentNumber=)(secretary=*)) 0 3600 -> -> cachesize 20 -> directory ./testrun/db.2.a -> index objectClass eq -> index cn,sn,uid,mail pres,eq,sub - - -H4: Cacheable Queries - -A LDAP search query is cacheable when its filter matches one of the -templates as defined in the "proxyTemplate" statements and when it references -only the attributes specified in the corresponding attribute set. -In the example above the attribute set number 0 defines that only the -attributes: {{EX:mail postaladdress telephonenumber}} are cached for the following -proxyTemplates. - -H4: Examples: - -> Filter: (&(sn=Richard*)(givenName=jack)) -> Attrs: mail telephoneNumber - - is cacheable, because it matches the template {{EX:(&(sn=)(givenName=))}} and its - attributes are contained in proxyAttrset 0. 
- -> Filter: (&(sn=Richard*)(telephoneNumber)) -> Attrs: givenName - - is not cacheable, because the filter does not match the template, - nor is the attribute givenName stored in the cache - -> Filter: (|(sn=Richard*)(givenName=jack)) -> Attrs: mail telephoneNumber - - is not cacheable, because the filter does not match the template ( logical - OR "|" condition instead of logical AND "&" ) - diff --git a/doc/guide/admin/referrals.sdf b/doc/guide/admin/referrals.sdf index 0b41a2a355..8756553cb8 100644 --- a/doc/guide/admin/referrals.sdf +++ b/doc/guide/admin/referrals.sdf @@ -132,3 +132,10 @@ or with {{ldapsearch}}(1): Note: the {{EX:ref}} attribute is operational and must be explicitly requested when desired in search results. +Note: the use of referrals to construct a Distributed Directory Service is +extremely clumsy and not well supported by common clients. If an existing +installation has already been built using referrals, the use of the +{{chain}} overlay to hide the referrals will greatly improve the usability +of the Directory system. A better approach would be to use explicitly +defined local and proxy databases in {{subordinate}} configurations to +provide a seamless view of the Distributed Directory. diff --git a/doc/guide/admin/replication.gif b/doc/guide/admin/replication.gif deleted file mode 100644 index 70814033e5..0000000000 Binary files a/doc/guide/admin/replication.gif and /dev/null differ diff --git a/doc/guide/admin/replication.sdf b/doc/guide/admin/replication.sdf index 5f1ba20335..0df0beab28 100644 --- a/doc/guide/admin/replication.sdf +++ b/doc/guide/admin/replication.sdf @@ -1,356 +1,579 @@ # $OpenLDAP$ # Copyright 1999-2007 The OpenLDAP Foundation, All Rights Reserved. # COPYING RESTRICTIONS APPLY, see COPYRIGHT. -H1: Replication with slurpd -Note: this section is provided for historical reasons. {{slurpd}}(8) -is deprecated in favor of LDAP Sync based replication, commonly -referred to as {{syncrepl}}. 
Syncrepl is discussed in -{{SECT:LDAP Sync Replication}} section of this document. +H1: Replication -In certain configurations, a single {{slapd}}(8) instance may be -insufficient to handle the number of clients requiring -directory service via LDAP. It may become necessary to -run more than one slapd instance. At many sites, -for instance, there are multiple slapd servers: one -master and one or more slaves. {{TERM:DNS}} can be setup such that -a lookup of {{EX:ldap.example.com}} returns the {{TERM:IP}} addresses -of these servers, distributing the load among them (or -just the slaves). This master/slave arrangement provides -a simple and effective way to increase capacity, availability -and reliability. +Replicated directories are a fundamental requirement for delivering a +resilient enterprise deployment. + +OpenLDAP has various configuration options for creating a replicated +directory. The following sections will discuss these. + +H2: Replication Strategies + + +H3: Push Based + + +H5: Replacing Slurpd + +Slurpd replication has been deprecated in favor of Syncrepl replication and +has been completely removed from 2.4. + +{{Why was it replaced?}} + +The slurpd daemon was the original replication mechanism inherited from +UMich's LDAP and operates in push mode: the master pushes changes to the +slaves. It has been replaced for many reasons, in brief: + + * It is not reliable + * It is extremely sensitive to the ordering of records in the replog + * It can easily go out of sync, at which point manual intervention is + required to resync the slave database with the master directory + * It isn't very tolerant of unavailable servers. If a slave goes down + for a long time, the replog may grow to a size that's too large for + slurpd to process + +{{What was it replaced with?}} -{{slurpd}}(8) provides the capability for a master slapd to -propagate changes to slave slapd instances, -implementing the master/slave replication scheme -described above. 
slurpd runs on the same host as the
-master slapd instance.
+Syncrepl.
+
+{{Why is Syncrepl better?}}
+
+ * Syncrepl is self-synchronizing; you can start with a database in any
+   state, from totally empty to fully synced, and it will automatically do
+   the right thing to achieve and maintain synchronization
+ * Syncrepl can operate in either direction
+ * Data updates can be minimal or maximal
 
-H2: Overview
 
+{{How do I implement a push based replication system using Syncrepl?}}
 
-{{slurpd}}(8) provides replication services "in band". That is, it
-uses the LDAP protocol to update a slave database from
-the master. Perhaps the easiest way to illustrate this is
-with an example. In this example, we trace the propagation
-of an LDAP modify operation from its initiation by the LDAP
-client to its distribution to the slave slapd instance.
 
+The easiest way is to point an LDAP backend ({{SECT:Backends}} and {{slapd-ldap(8)}})
+to your slave directory and set up Syncrepl to point to your Master database.
 
-{{B: Sample replication scenario:}}
 
+REFERENCE test045/048 for better explanation of above.
 
+Syncrepl pulls down changes from the Master server and then pushes those
+changes out to your slave servers via {{slapd-ldap(8)}}. This is called
+proxy mode.
 
-^ The LDAP client submits an LDAP modify operation to
-the slave slapd.
 
+DIAGRAM HERE
 
-+ The slave slapd returns a referral to the LDAP
-client referring the client to the master slapd.
 
+BETTER EXAMPLE here from test045/048 for different push/multiproxy examples.
 
-+ The LDAP client submits the LDAP modify operation to
-the master slapd.
 
+Here's an example:
 
-+ The master slapd performs the modify operation,
-writes out the change to its replication log file and returns
-a success code to the client.
 
-+ The slurpd process notices that a new entry has
-been appended to the replication log file, reads the
-replication log entry, and sends the change to the slave
-slapd via LDAP.
+> include ./schema/core.schema +> include ./schema/cosine.schema +> include ./schema/inetorgperson.schema +> include ./schema/openldap.schema +> include ./schema/nis.schema +> +> pidfile /home/ghenry/openldap/ldap/tests/testrun/slapd.3.pid +> argsfile /home/ghenry/openldap/ldap/tests/testrun/slapd.3.args +> +> modulepath ../servers/slapd/back-bdb/ +> moduleload back_bdb.la +> modulepath ../servers/slapd/back-monitor/ +> moduleload back_monitor.la +> modulepath ../servers/slapd/overlays/ +> moduleload syncprov.la +> modulepath ../servers/slapd/back-ldap/ +> moduleload back_ldap.la +> +> # We don't need any access to this DSA +> restrict all +> +> ####################################################################### +> # consumer proxy database definitions +> ####################################################################### +> +> database ldap +> suffix "dc=example,dc=com" +> rootdn "cn=Whoever" +> uri ldap://localhost:9012/ +> +> lastmod on +> +> # HACK: use the RootDN of the monitor database as UpdateDN so ACLs apply +> # without the need to write the UpdateDN before starting replication +> acl-bind bindmethod=simple +> binddn="cn=Monitor" +> credentials=monitor +> +> # HACK: use the RootDN of the monitor database as UpdateDN so ACLs apply +> # without the need to write the UpdateDN before starting replication +> syncrepl rid=1 +> provider=ldap://localhost:9011/ +> binddn="cn=Manager,dc=example,dc=com" +> bindmethod=simple +> credentials=secret +> searchbase="dc=example,dc=com" +> filter="(objectClass=*)" +> attrs="*,structuralObjectClass,entryUUID,entryCSN,creatorsName,createTimestamp,modifiersName,modifyTimestamp" +> schemachecking=off +> scope=sub +> type=refreshAndPersist +> retry="5 5 300 5" +> +> overlay syncprov +> +> database monitor + +DETAILED EXPLANATION OF ABOVE LIKE IN OTHER SECTIONS (line numbers?) 
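+
+For completeness, the provider side of such a setup is just an ordinary
+database with the syncprov overlay. A minimal sketch (only the
+replication-relevant directives are shown; the suffix, rootdn and
+credentials follow the consumer example above, everything else is
+illustrative):
+
+> # provider (master), listening on ldap://localhost:9011/
+> database bdb
+> suffix "dc=example,dc=com"
+> rootdn "cn=Manager,dc=example,dc=com"
+> rootpw secret
+> directory ./testrun/db.1.a
+> overlay syncprov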
+
+
+ANOTHER DIAGRAM HERE
+
+As you can see, you can let your imagination go wild using Syncrepl and
+{{slapd-ldap(8)}}, tailoring your replication to fit your specific network
+topology.
+
+H3: Pull Based
+
+
+H4: syncrepl replication
+
+
+H4: delta-syncrepl replication
+
+
+H2: Replication Types
+
+
+H3: syncrepl replication
+
+
+H3: delta-syncrepl replication
+
+
+H3: N-Way Multi-Master
+
+http://www.connexitor.com/blog/pivot/entry.php?id=105#body
+http://www.openldap.org/lists/openldap-software/200702/msg00006.html
+http://www.openldap.org/lists/openldap-software/200602/msg00064.html
+
+
+H3: MirrorMode
+
+MirrorMode is a hybrid configuration that provides all of the consistency
+guarantees of single-master replication while also providing the high
+availability of multi-master. In MirrorMode, two masters are set up to
+replicate from each other (as a multi-master configuration), but an
+external frontend is employed to direct all writes to only one of
+the two servers. The second master will only be used for writes if
+the first master crashes, at which point the frontend will switch to
+directing all writes to the second master. When a crashed master is
+repaired and restarted, it will automatically catch up to any changes
+on the running master and resync.
+
+H2: LDAP Sync Replication
+
+The {{TERM:LDAP Sync}} Replication engine, {{TERM:syncrepl}} for
+short, is a consumer-side replication engine that enables the
+consumer {{TERM:LDAP}} server to maintain a shadow copy of a
+{{TERM:DIT}} fragment. A syncrepl engine resides at the consumer side
+as one of the {{slapd}}(8) threads. It creates and maintains a
+consumer replica by connecting to the replication provider to perform
+the initial DIT content load, followed either by periodic content
+polling or by timely updates upon content changes.
+
+Syncrepl uses the LDAP Content Synchronization (or LDAP Sync for
+short) protocol as the replica synchronization protocol.
It provides +a stateful replication which supports both pull-based and push-based +synchronization and does not mandate the use of a history store. + +Syncrepl keeps track of the status of the replication content by +maintaining and exchanging synchronization cookies. Because the +syncrepl consumer and provider maintain their content status, the +consumer can poll the provider content to perform incremental +synchronization by asking for the entries required to make the +consumer replica up-to-date with the provider content. Syncrepl +also enables convenient management of replicas by maintaining replica +status. The consumer replica can be constructed from a consumer-side +or a provider-side backup at any synchronization status. Syncrepl +can automatically resynchronize the consumer replica up-to-date +with the current provider content. + +Syncrepl supports both pull-based and push-based synchronization. +In its basic refreshOnly synchronization mode, the provider uses +pull-based synchronization where the consumer servers need not be +tracked and no history information is maintained. The information +required for the provider to process periodic polling requests is +contained in the synchronization cookie of the request itself. To +optimize the pull-based synchronization, syncrepl utilizes the +present phase of the LDAP Sync protocol as well as its delete phase, +instead of falling back on frequent full reloads. To further optimize +the pull-based synchronization, the provider can maintain a per-scope +session log as a history store. In its refreshAndPersist mode of +synchronization, the provider uses a push-based synchronization. +The provider keeps track of the consumer servers that have requested +a persistent search and sends them necessary updates as the provider +replication content gets modified. 
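+
+On the consumer, the choice between the two modes is made with the
+{{EX:type}} keyword of the {{EX:syncrepl}} directive. A minimal sketch
+(the host name, identity and credentials below are only illustrative):
+
+> syncrepl rid=2
+>          provider=ldap://provider.example.com
+>          type=refreshOnly
+>          interval=01:00:00:00
+>          searchbase="dc=example,dc=com"
+>          bindmethod=simple
+>          binddn="cn=replicator,dc=example,dc=com"
+>          credentials=secret
+
+Setting {{EX:type=refreshAndPersist}} (and dropping {{EX:interval}})
+selects the push-based persistent mode instead.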
+ +With syncrepl, a consumer server can create a replica without +changing the provider's configurations and without restarting the +provider server, if the consumer server has appropriate access +privileges for the DIT fragment to be replicated. The consumer +server can stop the replication also without the need for provider-side +changes and restart. + +Syncrepl supports both partial and sparse replications. The shadow +DIT fragment is defined by a general search criteria consisting of +base, scope, filter, and attribute list. The replica content is +also subject to the access privileges of the bind identity of the +syncrepl replication connection. + + +H3: The LDAP Content Synchronization Protocol + +The LDAP Sync protocol allows a client to maintain a synchronized +copy of a DIT fragment. The LDAP Sync operation is defined as a set +of controls and other protocol elements which extend the LDAP search +operation. This section introduces the LDAP Content Sync protocol +only briefly. For more information, refer to {{REF:RFC4533}}. + +The LDAP Sync protocol supports both polling and listening for +changes by defining two respective synchronization operations: +{{refreshOnly}} and {{refreshAndPersist}}. Polling is implemented +by the {{refreshOnly}} operation. The client copy is synchronized +to the server copy at the time of polling. The server finishes the +search operation by returning {{SearchResultDone}} at the end of +the search operation as in the normal search. The listening is +implemented by the {{refreshAndPersist}} operation. Instead of +finishing the search after returning all entries currently matching +the search criteria, the synchronization search remains persistent +in the server. Subsequent updates to the synchronization content +in the server cause additional entry updates to be sent to the +client. 
+ +The {{refreshOnly}} operation and the refresh stage of the +{{refreshAndPersist}} operation can be performed with a present +phase or a delete phase. + +In the present phase, the server sends the client the entries updated +within the search scope since the last synchronization. The server +sends all requested attributes, be it changed or not, of the updated +entries. For each unchanged entry which remains in the scope, the +server sends a present message consisting only of the name of the +entry and the synchronization control representing state present. +The present message does not contain any attributes of the entry. +After the client receives all update and present entries, it can +reliably determine the new client copy by adding the entries added +to the server, by replacing the entries modified at the server, and +by deleting entries in the client copy which have not been updated +nor specified as being present at the server. + +The transmission of the updated entries in the delete phase is the +same as in the present phase. The server sends all the requested +attributes of the entries updated within the search scope since the +last synchronization to the client. In the delete phase, however, +the server sends a delete message for each entry deleted from the +search scope, instead of sending present messages. The delete +message consists only of the name of the entry and the synchronization +control representing state delete. The new client copy can be +determined by adding, modifying, and removing entries according to +the synchronization control attached to the {{SearchResultEntry}} +message. + +In the case that the LDAP Sync server maintains a history store and +can determine which entries are scoped out of the client copy since +the last synchronization time, the server can use the delete phase. 
+If the server does not maintain any history store, cannot determine +the scoped-out entries from the history store, or the history store +does not cover the outdated synchronization state of the client, +the server should use the present phase. The use of the present +phase is much more efficient than a full content reload in terms +of the synchronization traffic. To reduce the synchronization +traffic further, the LDAP Sync protocol also provides several +optimizations such as the transmission of the normalized {{EX:entryUUID}}s +and the transmission of multiple {{EX:entryUUIDs}} in a single +{{syncIdSet}} message. + +At the end of the {{refreshOnly}} synchronization, the server sends +a synchronization cookie to the client as a state indicator of the +client copy after the synchronization is completed. The client +will present the received cookie when it requests the next incremental +synchronization to the server. + +When {{refreshAndPersist}} synchronization is used, the server sends +a synchronization cookie at the end of the refresh stage by sending +a Sync Info message with TRUE refreshDone. It also sends a +synchronization cookie by attaching it to {{SearchResultEntry}} +generated in the persist stage of the synchronization search. During +the persist stage, the server can also send a Sync Info message +containing the synchronization cookie at any time the server wants +to update the client-side state indicator. The server also updates +a synchronization indicator of the client at the end of the persist +stage. + +In the LDAP Sync protocol, entries are uniquely identified by the +{{EX:entryUUID}} attribute value. It can function as a reliable +identifier of the entry. The DN of the entry, on the other hand, +can be changed over time and hence cannot be considered as the +reliable identifier. The {{EX:entryUUID}} is attached to each +{{SearchResultEntry}} or {{SearchResultReference}} as a part of the +synchronization control. 
+ + +H3: Syncrepl Details + +The syncrepl engine utilizes both the {{refreshOnly}} and the +{{refreshAndPersist}} operations of the LDAP Sync protocol. If a +syncrepl specification is included in a database definition, +{{slapd}}(8) launches a syncrepl engine as a {{slapd}}(8) thread +and schedules its execution. If the {{refreshOnly}} operation is +specified, the syncrepl engine will be rescheduled at the interval +time after a synchronization operation is completed. If the +{{refreshAndPersist}} operation is specified, the engine will remain +active and process the persistent synchronization messages from the +provider. + +The syncrepl engine utilizes both the present phase and the delete +phase of the refresh synchronization. It is possible to configure +a per-scope session log in the provider server which stores the +{{EX:entryUUID}}s of a finite number of entries deleted from a +replication content. Multiple replicas of single provider content +share the same per-scope session log. The syncrepl engine uses the +delete phase if the session log is present and the state of the +consumer server is recent enough that no session log entries are +truncated after the last synchronization of the client. The syncrepl +engine uses the present phase if no session log is configured for +the replication content or if the consumer replica is too outdated +to be covered by the session log. The current design of the session +log store is memory based, so the information contained in the +session log is not persistent over multiple provider invocations. +It is not currently supported to access the session log store by +using LDAP operations. It is also not currently supported to impose +access control to the session log. + +As a further optimization, even in the case the synchronization +search is not associated with any session log, no entries will be +transmitted to the consumer server when there has been no update +in the replication context. 
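+
+On the provider, the per-scope session log described above is enabled with
+a {{slapo-syncprov}}(5) directive. A minimal sketch (the size is only
+illustrative):
+
+> overlay syncprov
+> syncprov-sessionlog 100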
+
+The syncrepl engine, which is a consumer-side replication engine,
+can work with any backend. The LDAP Sync provider can be configured
+as an overlay on any backend, but works best with the {{back-bdb}}
+or {{back-hdb}} backend.
+
+The LDAP Sync provider maintains a {{EX:contextCSN}} for each
+database as the current synchronization state indicator of the
+provider content. It is the largest {{EX:entryCSN}} in the provider
+context such that no transaction for an entry having a smaller
+{{EX:entryCSN}} value remains outstanding. The {{EX:contextCSN}}
+could not just be set to the largest issued {{EX:entryCSN}} because
+{{EX:entryCSN}} is obtained before a transaction starts and
+transactions are not committed in the issue order.
+
+The provider stores the {{EX:contextCSN}} of a context in the
+{{EX:contextCSN}} attribute of the context suffix entry. The attribute
+is not written to the database after every update operation though;
+instead it is maintained primarily in memory. At database start
+time the provider reads the last saved {{EX:contextCSN}} into memory
+and uses the in-memory copy exclusively thereafter. By default,
+changes to the {{EX:contextCSN}} as a result of database updates
+will not be written to the database until the server is cleanly
+shut down. A checkpoint facility exists to cause the {{EX:contextCSN}}
+to be written out more frequently if desired.
+
+Note that at startup time, if the provider is unable to read a
+{{EX:contextCSN}} from the suffix entry, it will scan the entire
+database to determine the value, and this scan may take quite a
+long time on a large database. When a {{EX:contextCSN}} value is
+read, the database will still be scanned for any {{EX:entryCSN}}
+values greater than it, to make sure the {{EX:contextCSN}} value
+truly reflects the greatest committed {{EX:entryCSN}} in the database.
+On databases which support inequality indexing, setting an eq index
+on the {{EX:entryCSN}} attribute and configuring {{EX:contextCSN}}
+checkpoints will greatly speed up this scanning step.
+
+If no {{EX:contextCSN}} can be determined by reading and scanning
+the database, a new value will be generated. Also, if scanning the
+database yielded a greater {{EX:entryCSN}} than was previously
+recorded in the suffix entry's {{EX:contextCSN}} attribute, a
+checkpoint will be immediately written with the new value.
+
+The consumer also stores its replica state, which is the provider's
+{{EX:contextCSN}} received as a synchronization cookie, in the
+{{EX:contextCSN}} attribute of the suffix entry. The replica state
+maintained by a consumer server is used as the synchronization state
+indicator when it performs subsequent incremental synchronization
+with the provider server. It is also used as a provider-side
+synchronization state indicator when it functions as a secondary
+provider server in a cascading replication configuration. Since
+the consumer and provider state information are maintained in the
+same location within their respective databases, any consumer can
+be promoted to a provider (and vice versa) without any special
+actions.
+
+Because a general search filter can be used in the syncrepl
+specification, some entries in the context may be omitted from the
+synchronization content. The syncrepl engine creates a glue entry
+to fill in the holes in the replica context if any part of the
+replica content is subordinate to the holes. The glue entries will
+not be returned in the search result unless the {{ManageDsaIT}}
+control is provided.
+
+Also as a consequence of the search filter used in the syncrepl
+specification, it is possible for a modification to remove an entry
+from the replication scope even though the entry has not been deleted
+on the provider.
Logically the entry must be deleted on the consumer
+but in {{refreshOnly}} mode the provider cannot detect and propagate
+this change without the use of the session log.
+
+
+H3: Configuring Syncrepl
+
+Because syncrepl is a consumer-side replication engine, the syncrepl
+specification is defined in {{slapd.conf}}(5) of the consumer
+server, not in the provider server's configuration file. The initial
+loading of the replica content can be performed either by starting
+the syncrepl engine with no synchronization cookie or by populating
+the consumer replica by adding an {{TERM:LDIF}} file dumped as a
+backup at the provider.
+
+When loading from a backup, it is not required to perform the initial
+loading from an up-to-date backup of the provider content. The
+syncrepl engine will automatically synchronize the initial consumer
+replica to the current provider content. As a result, it is not
+required to stop the provider server in order to avoid the replica
+inconsistency caused by the updates to the provider content during
+the content backup and loading process.
+
+When replicating a large-scale directory, especially in a
+bandwidth-constrained environment, it is advised to load the consumer
+replica from a backup instead of performing a full initial load
+using syncrepl.
+
+
+H4: Set up the provider slapd
+
+The provider is implemented as an overlay, so the overlay itself
+must first be configured in {{slapd.conf}}(5) before it can be
+used. The provider has only two configuration directives, for setting
+checkpoints on the {{EX:contextCSN}} and for configuring the session
+log. Because the LDAP Sync search is subject to access control,
+proper access control privileges should be set up for the replicated
+content.
+
+The {{EX:contextCSN}} checkpoint is configured by the
+
+> syncprov-checkpoint <ops> <minutes>
+
+directive. Checkpoints are only tested after successful write
+operations.
If {{<ops>}} operations or more than {{<minutes>}}
+minutes have passed since the last checkpoint, a new checkpoint is
+performed.
+
+The session log is configured by the
+
+> syncprov-sessionlog <ops>
+
+directive, where {{<ops>}} is the maximum number of session log
+entries the session log can record. When a session log is configured,
+it is automatically used for all LDAP Sync searches within the
+database.
+
+Note that using the session log requires searching on the {{EX:entryUUID}}
+attribute. Setting an eq index on this attribute will greatly benefit
+the performance of the session log on the provider.
+
+A more complete example of the {{slapd.conf}}(5) content is thus:
+
+> database bdb
+> suffix dc=Example,dc=com
+> rootdn dc=Example,dc=com
+> directory /var/ldap/db
+> index objectclass,entryCSN,entryUUID eq
+>
+> overlay syncprov
+> syncprov-checkpoint 100 10
+> syncprov-sessionlog 100
+
+
+H4: Set up the consumer slapd
+
+The syncrepl replication is specified in the database section of
+{{slapd.conf}}(5) for the replica context. The syncrepl engine
+is backend independent and the directive can be defined with any
+database type.
+
+> database hdb
+> suffix dc=Example,dc=com
+> rootdn dc=Example,dc=com
+> directory /var/ldap/db
+> index objectclass,entryCSN,entryUUID eq
+>
+> syncrepl rid=123
+> provider=ldap://provider.example.com:389
+> type=refreshOnly
+> interval=01:00:00:00
+> searchbase="dc=example,dc=com"
+> filter="(objectClass=organizationalPerson)"
+> scope=sub
+> attrs="cn,sn,ou,telephoneNumber,title,l"
+> schemachecking=off
+> bindmethod=simple
+> binddn="cn=syncuser,dc=example,dc=com"
+> credentials=secret
+
+In this example, the consumer will connect to the provider {{slapd}}(8)
+at port 389 of {{FILE:ldap://provider.example.com}} to perform a
+polling ({{refreshOnly}}) mode of synchronization once a day. It
+will bind as {{EX:cn=syncuser,dc=example,dc=com}} using simple
+authentication with password "secret".
Note that the access control
+privilege of {{EX:cn=syncuser,dc=example,dc=com}} should be set
+appropriately in the provider to retrieve the desired replication
+content. Also the search limits must be high enough on the provider
+to allow the syncuser to retrieve a complete copy of the requested
+content. The consumer uses the rootdn to write to its database so
+it always has full permissions to write all content.
+
+The synchronization search in the above example will search for the
+entries whose objectClass is organizationalPerson in the entire
+subtree rooted at {{EX:dc=example,dc=com}}. The requested attributes
+are {{EX:cn}}, {{EX:sn}}, {{EX:ou}}, {{EX:telephoneNumber}},
+{{EX:title}}, and {{EX:l}}. The schema checking is turned off, so
+that the consumer {{slapd}}(8) will not enforce entry schema
+checking when it processes updates from the provider {{slapd}}(8).
+
+For more detailed information on the syncrepl directive, see the
+{{SECT:syncrepl}} section of {{SECT:The slapd Configuration File}}
+chapter of this admin guide.
+
+
+H4: Start the provider and the consumer slapd
+
+The provider {{slapd}}(8) is not required to be restarted.
+{{contextCSN}} is automatically generated as needed: it might be
+originally contained in the {{TERM:LDIF}} file, generated by
+{{slapadd}}(8), generated upon changes in the context, or generated
+when the first LDAP Sync search arrives at the provider. If an
+LDIF file is being loaded which did not previously contain the
+{{contextCSN}}, the {{-w}} option should be used with {{slapadd}}(8)
+to cause it to be generated. This will allow the server to start
+up a little quicker the first time it runs.
+
+When starting a consumer {{slapd}}(8), it is possible to provide
+a synchronization cookie as the {{-c cookie}} command line option
+in order to start the synchronization from a specific state. The
+cookie is a comma-separated list of name=value pairs. Currently
+supported syncrepl cookie fields are {{csn=<csn>}} and {{rid=<rid>}}.
+{{<csn>}} represents the current synchronization state of the
+consumer replica. {{<rid>}} identifies a consumer replica locally
+within the consumer server. It is used to relate the cookie to the
+syncrepl definition in {{slapd.conf}}(5) which has the matching
+replica identifier. The {{<rid>}} must have no more than 3 decimal
+digits. The command line cookie overrides the synchronization
+cookie stored in the consumer replica database.
+
+
+H2: N-Way Multi-Master
+
+
+H2: MirrorMode
-+ The slave slapd performs the modify operation and
-returns a success code to the slurpd process.
-
-
-Note: {{ldapmodify}}(1) and other clients distributed as part of
-OpenLDAP Software do not support automatic referral chasing
-(for security reasons).
-
-
-
-H2: Replication Logs
-
-When slapd is configured to generate a replication logfile, it
-writes out a file containing {{TERM:LDIF}} change records. The
-replication log gives the replication site(s), a timestamp, the DN
-of the entry being modified, and a series of lines which specify
-the changes to make. In the example below, Barbara ({{EX:uid=bjensen}})
-has replaced the {{EX:description}} value. The change is to be
-propagated to the slapd instance running on {{EX:slave.example.net}}
-Changes to various operational attributes, such as {{EX:modifiersName}}
-and {{EX:modifyTimestamp}}, are included in the change record and
-will be propagated to the slave slapd.
-
-> replica: slave.example.com:389
-> time: 809618633
-> dn: uid=bjensen,dc=example,dc=com
-> changetype: modify
-> replace: multiLineDescription
-> description: A dreamer...
-> -
-
-> replace: modifiersName
-> modifiersName: uid=bjensen,dc=example,dc=com
-> -
-
-> replace: modifyTimestamp
-> modifyTimestamp: 20000805073308Z
-> -
-
-
-The modifications to {{EX:modifiersName}} and {{EX:modifyTimestamp}}
-operational attributes were added by the master {{slapd}}.
-
-
-
-H2: Command-Line Options
-
-This section details commonly used {{slurpd}}(8) command-line options.
-
-> -d <level> | ?
-
-This option sets the slurpd debug level to {{EX:<level>}}. When
-level is a `?' character, the various debugging levels are printed
-and slurpd exits, regardless of any other options you give it.
-Current debugging levels (a subset of slapd's debugging levels) are
-
-!block table; colaligns="RL"; align=Center; \
- title="Table 13.1: Debugging Levels"
-Level Description
-4 heavy trace debugging
-64 configuration file processing
-65535 enable all debugging
-!endblock
-
-Debugging levels are additive. That is, if you want heavy trace
-debugging and want to watch the config file being processed, you
-would set level to the sum of those two levels (in this case, 68).
-
-> -f <filename>
-
-This option specifies an alternate slapd configuration file. Slurpd
-does not have its own configuration file. Instead, all configuration
-information is read from the slapd configuration file.
-
-> -r <filename>
-
-This option specifies an alternate slapd replication log file.
-Under normal circumstances, slurpd reads the name of the slapd
-replication log file from the slapd configuration file. However,
-you can override this with the -r flag, to cause slurpd to process
-a different replication log file. See the {{SECT:Advanced slurpd
-Operation}} section for a discussion of how you might use this
-option.
-
-> -o
-
-Operate in "one-shot" mode. Under normal circumstances, when slurpd
-finishes processing a replication log, it remains active and
-periodically checks to see if new entries have been added to the
-replication log. In one-shot mode, by comparison, slurpd processes
-a replication log and exits immediately. If the -o option is given,
-the replication log file must be explicitly specified with the -r
-option. See the {{SECT:One-shot mode and reject files}} section
-for a discussion of this mode.
-
-> -t <directory>
-
-Specify an alternate directory for slurpd's temporary copies of
-replication logs. The default location is {{F:/usr/tmp}}.
- - -H2: Configuring slurpd and a slave slapd instance - -To bring up a replica slapd instance, you must configure the master -and slave slapd instances for replication, then shut down the master -slapd so you can copy the database. Finally, you bring up the master -slapd instance, the slave slapd instance, and the slurpd instance. -These steps are detailed in the following sections. You can set up -as many slave slapd instances as you wish. - - -H3: Set up the master {{slapd}} - -The following section assumes you have a properly working {{slapd}}(8) -instance. To configure your working {{slapd}}(8) server as a -replication master, you need to make the following changes to your -{{slapd.conf}}(5). - -^ Add a {{EX:replica}} directive for each replica. The {{EX:binddn=}} -parameter should match the {{EX:updatedn}} option in the corresponding -slave slapd configuration file, and should name an entry with write -permission to the slave database (e.g., an entry allowed access via -{{EX:access}} directives in the slave slapd configuration file). -This DN generally {{should not}} be the same as the master's -{{EX:rootdn}}. - -+ Add a {{EX:replogfile}} directive, which tells slapd where to log -changes. This file will be read by slurpd. - - -H3: Set up the slave {{slapd}} - -Install the slapd software on the host which is to be the slave -slapd server. The configuration of the slave server should be -identical to that of the master, with the following exceptions: - -^ Do not include a {{EX:replica}} directive. While it is possible -to create "chains" of replicas, in most cases this is inappropriate. - -+ Do not include a {{EX:replogfile}} directive. - -+ Do include an {{EX:updatedn}} line. The DN given should match the -DN given in the {{EX:binddn=}} parameter of the corresponding -{{EX:replica=}} directive in the master slapd config file. The -{{EX:updatedn}} generally {{should not}} be the same as the -{{EX:rootdn}} of the master database. 
-
-+ Make sure the DN given in the {{EX:updatedn}} directive has
-permission to write the database (e.g., it is allowed {{EX:access}}
-by one or more access directives).
-
-+ Use the {{EX:updateref}} directive to define the URL the slave
-should return if an update request is received.
-
-
-H3: Shut down the master server
-
-In order to ensure that the slave starts with an exact copy of the
-master's data, you must shut down the master slapd. Do this by
-sending the master slapd process an interrupt signal with
-{{EX:kill -INT <pid>}}, where {{EX:<pid>}} is the process-id of the
-master slapd process.
-
-If you like, you may restart the master slapd in read-only mode
-while you are replicating the database. During this time, the master
-slapd will return an "unwilling to perform" error to clients that
-attempt to modify data.
-
-
-H3: Copy the master slapd's database to the slave
-
-Copy the master's database(s) to the slave. For {{TERM:BDB}} and
-{{TERM:HDB}} databases, you must copy all database files located
-in the database {{EX:directory}} specified in {{slapd.conf}}(5).
-In general, you should copy each file found in the database {{EX:
-directory}} unless you know it is not used by {{slapd}}(8).
-
-Note: This copy process assumes homogeneous servers with identically
-configured OpenLDAP installations. Alternatively, you may use
-{{slapcat}} to output the master's database in LDIF format and use
-the LDIF with {{slapadd}} to populate the slave. Using LDIF avoids
-any potential incompatibilities due to differing server architectures
-or software configurations. See the {{SECT:Database Creation and
-Maintenance Tools}} chapter for details on these tools.
-
-
-H3: Configure the master slapd for replication
-
-To configure slapd to generate a replication logfile, you add a
-"{{EX: replica}}" configuration option to the master slapd's config
-file.
For example, if we wish to propagate changes to the slapd -instance running on host {{EX:slave.example.com}}: - -> replica uri=ldap://slave.example.com:389 -> binddn="cn=Replicator,dc=example,dc=com" -> bindmethod=simple credentials=secret - -In this example, changes will be sent to port 389 (the standard -LDAP port) on host slave.example.com. The slurpd process will bind -to the slave slapd as "{{EX:cn=Replicator,dc=example,dc=com}}" using -simple authentication with password "{{EX:secret}}". - -If we wish to perform the same replication using ldaps on port 636: - -> replica uri=ldaps://slave.example.com:636 -> binddn="cn=Replicator,dc=example,dc=com" -> bindmethod=simple credentials=secret - -The host option is deprecated in favor of uri, but the following -replica configuration is still supported: - -> replica host=slave.example.com:389 -> binddn="cn=Replicator,dc=example,dc=com" -> bindmethod=simple credentials=secret - -Note that the DN given by the {{EX:binddn=}} directive must exist -in the slave slapd's database (or be the rootdn specified in the -slapd config file) in order for the bind operation to succeed. The -DN should also be listed as the {{EX:updatedn}} for the database -in the slave's slapd.conf(5). It is generally recommended that -this DN be different than the {{EX:rootdn}} of the master database. - -Note: The use of strong authentication and transport security is -highly recommended. - - -H3: Restart the master slapd and start the slave slapd - -Restart the master slapd process. To check that it is -generating replication logs, perform a modification of any -entry in the database, and check that data has been -written to the log file. - - -H3: Start slurpd - -Start the slurpd process. Slurpd should immediately send -the test modification you made to the slave slapd. Watch -the slave slapd's logfile to be sure that the modification -was sent. 
-
-> slurpd -f <slapd config file>
-
-
-
-H2: Advanced slurpd Operation
-
-H3: Replication errors
-
-When slurpd propagates a change to a slave slapd and receives an
-error return code, it writes the reason for the error and the
-replication record to a reject file. The reject file is located in
-the same directory as the per-replica replication logfile, and has
-the same name, but with the string "{{F:.rej}}" appended. For
-example, for a replica running on host {{EX:slave.example.com}},
-port 389, the reject file, if it exists, will be named
-
-> /usr/local/var/openldap/replog.slave.example.com:389.rej
-
-A sample rejection log entry follows:
-
-> ERROR: No such attribute
-> replica: slave.example.com:389
-> time: 809618633
-> dn: uid=bjensen,dc=example,dc=com
-> changetype: modify
-> replace: description
-> description: A dreamer...
-> -
-
-> replace: modifiersName
-> modifiersName: uid=bjensen,dc=example,dc=com
-> -
-
-> replace: modifyTimestamp
-> modifyTimestamp: 20000805073308Z
-> -
-
-
-Note that this is precisely the same format as the original replication
-log entry, but with an {{EX:ERROR}} line prepended to the entry.
-
-
-
-H3: One-shot mode and reject files
-
-It is possible to use slurpd to process a rejection log with its
-"one-shot mode." In normal operation, slurpd watches for more
-replication records to be appended to the replication log file. In
-one-shot mode, by contrast, slurpd processes a single log file and
-exits. Slurpd ignores {{EX:ERROR}} lines at the beginning of
-replication log entries, so it's not necessary to edit them out
-before feeding it the rejection log.
-
-To use one-shot mode, specify the name of the rejection log on the
-command line as the argument to the -r flag, and specify one-shot
-mode with the -o flag.
For example, to process the rejection log -file {{F:/usr/local/var/openldap/replog.slave.example.com:389}} and -exit, use the command - -> slurpd -r /usr/tmp/replog.slave.example.com:389 -o diff --git a/doc/guide/admin/runningslapd.sdf b/doc/guide/admin/runningslapd.sdf index c96eaf0686..54a4145c80 100644 --- a/doc/guide/admin/runningslapd.sdf +++ b/doc/guide/admin/runningslapd.sdf @@ -104,9 +104,9 @@ H2: Starting slapd In general, slapd is run like this: -> /usr/local/etc/libexec/slapd [