have many thousands of entries to create, which would take an
unacceptably long time using the LDAP method, or if you want to
ensure the database is not accessed while it is being created. Note
that not all database types support these utilities.
H2: Creating a database over LDAP
!catalog terms ''; headings; columns="Term,Definition"
H2: Related Organizations
!catalog organizations ''; headings; columns="ORG:Name,Long,URL:Jump"
H2: Related Products
!catalog products ''; headings; columns="PRD:Name,URL:Jump"
The project makes available two series of packages for {{general
use}}. The project makes {{releases}} as new features and bug fixes
become available. Though the project takes steps to improve stability
of these releases, it is common for problems to arise only after
{{release}}. The {{stable}} release is the latest {{release}} which
has demonstrated stability through general use.
While some consider the Internet {{TERM[expand]DNS}} (DNS) an
example of a globally distributed directory service, DNS is not
browseable nor searchable. It is more properly described as a
globally distributed {{lookup}} service.
H2: What is LDAP?
LDAPv2 is historic ({{REF:RFC3494}}). As most {{so-called}} LDAPv2
implementations (including {{slapd}}(8)) do not conform to the
LDAPv2 technical specification, interoperability amongst
implementations claiming LDAPv2 support is limited. As LDAPv2
differs significantly from LDAPv3, deploying both LDAPv2 and LDAPv3
simultaneously is quite problematic. LDAPv2 should be avoided.
one of those present. The restrictions span the range from allowed restrictions
(that might elsewhere be the result of access control) to outright violations of
the data model. It can, however, be a method to provide LDAP access to preexisting
data that is used by other applications, with the understanding that we don't
really have a "directory".
Existing commercial LDAP server implementations that use a relational database
are either from the first kind or the third. I don't know of any implementation
You can use {{slapcat}}(8) to generate an LDIF file for each of your {{slapd}}(8)
back-bdb or back-hdb databases.
> slapcat -f slapd.conf -b "dc=example,dc=com"
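By default {{slapcat}}(8) writes the LDIF to standard output; the {{EX:-l}} flag directs it to a file instead, and {{slapadd}}(8) can later reload that file. For example (a sketch; the paths and suffix are placeholders):

> slapcat -f slapd.conf -b "dc=example,dc=com" -l backup.ldif
> slapadd -f slapd.conf -b "dc=example,dc=com" -l backup.ldif

Note that {{slapadd}}(8) should only be used to load a database while {{slapd}}(8) is not running or the database is otherwise offline.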
To figure out the best-practice BDB backup scenario, the reader is strongly
encouraged to read the whole Chapter 9: Berkeley DB Transactional Data Store Applications.
This chapter is a set of small pages with examples in the C language. Non-programming
readers can skip these examples without loss of knowledge.
by your {{slapd.conf}}(5) file. The {{monitor}} backend requires
it.
Second, instantiate the {{monitor backend}} by adding a
{{database monitor}} directive below your existing database
sections. For instance:
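A minimal sketch (the {{EX:bdb}} database and suffix are placeholders for whatever your configuration already contains):

> database bdb
> suffix "dc=example,dc=com"
> ...
>
> database monitor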
Lastly, add additional global or database directives as needed.
Like most other database backends, the monitor backend does honor
slapd(8) access and other administrative controls. As some monitor
information may be sensitive, it is generally recommended that access
to cn=monitor be restricted to directory administrators and their
monitoring agents. Adding an {{access}} directive immediately below
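For instance, a directive along these lines (a sketch; the administrator DN is an assumption) restricts {{EX:cn=Monitor}} to the directory administrator:

> access to dn.subtree="cn=Monitor"
>     by dn.exact="cn=Manager,dc=example,dc=com" read
>     by * none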
suffix is hardcoded. It's always {{EX:cn=Monitor}}. So no {{suffix}}
directive should be provided. Also note that, unlike general purpose
database backends, the monitor backend cannot be instantiated
multiple times. That is, there can only be one (or zero) occurrences
of {{EX:database monitor}} in the server's configuration.
Essentially they represent a means to:
 * customize the behavior of existing backends without changing the backend
code and without requiring one to write a new custom backend with
complete functionality
* write functionality of general usefulness that can be applied to
H3: Overview
The proxy cache extension of slapd is designed to improve the
responsiveness of the ldap and meta backends. It handles a search
request (query)
by first determining whether it is contained in any cached search
filter. Contained requests are answered from the proxy cache's local
<nattrsets> parameter specifies the total number of attribute sets
(as specified by the {{EX:proxyAttrSet}} directive) that may be
defined. The <entrylimit> parameter specifies the maximum number of
entries in a cacheable query. The <period> specifies the consistency
check period (in seconds). In each period, queries with expired
TTLs are removed.
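A hypothetical configuration illustrating these parameters (the values are examples only, to be tuned per deployment):

> overlay pcache
> proxycache bdb 100000 1 50 100
> proxyattrset 0 mail postaladdress telephonenumber
> proxytemplate (sn=) 0 3600

Here the cache holds up to 100000 entries, one attribute set is defined, a query is cacheable only if it returns at most 50 entries, and the consistency check runs every 100 seconds; queries matching the {{EX:(sn=)}} template are answerable from the cache for 3600 seconds.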
H3: Overview
This overlay can be used with a backend database such as {{slapd-bdb}}(5)
to maintain the cohesiveness of a schema which utilizes reference
attributes.
H3: Overview
H3: Example Scenarios
H4: Samba
{{Why was it replaced?}}
The slurpd daemon was the original replication mechanism inherited from
UMich's LDAP and operates in push mode: the master pushes changes to the
slaves. It has been replaced for many reasons, in brief:
{{Why is Syncrepl better?}}
 - Syncrepl is self-synchronizing; you can start with a database in any
   state from totally empty to fully synced and it will automatically do
   the right thing to achieve and maintain synchronization
- Syncrepl can operate in either direction
- Data updates can be minimal or maximal
The easiest way is to point an LDAP backend ({{SECT: Backends}} and {{slapd-ldap(8)}})
to your slaves' directories and set up Syncrepl to point to your Master database.
REFERENCE test045/048 for better explanation of above.
Imagine Syncrepl pulling down changes from the Master server, and then
pushing those changes out to your slave servers via {{slapd-ldap(8)}}. This is
> credentials=monitor
>
> # HACK: use the RootDN of the monitor database as UpdateDN so ACLs apply
> # without the need to write the UpdateDN before starting replication
> syncrepl rid=1
> provider=ldap://localhost:9011/
> binddn="cn=Manager,dc=example,dc=com"
>
> database monitor
DETAILED EXPLANATION OF ABOVE LIKE IN OTHER SECTIONS (line numbers?)
ANOTHER DIAGRAM HERE
or pattern, and terms in parenthesis are remembered for the replacement
pattern.
The replacement pattern will produce either a DN or URL referring
to the user. Anything from the authentication request DN that
matched a string in parenthesis in the search pattern is stored in
the variable "$1". That variable "$1" can appear in the replacement
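Putting this together, an {{EX:authz-regexp}} directive might look like the following (a sketch, assuming DIGEST-MD5 authentication and a hypothetical {{EX:uid}}-based search under a placeholder suffix):

> authz-regexp
>     uid=([^,]*),cn=digest-md5,cn=auth
>     ldap:///dc=example,dc=com??one?(uid=$1)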
become that DN, users must first authenticate as one of the persons
on the list. This allows for better auditing of who made changes
to the LDAP database. If people were allowed to authenticate
directly to the privileged account, possibly through the {{EX:rootpw}}
{{slapd.conf}}(5) directive or through a {{EX:userPassword}}
attribute, then auditing becomes more difficult.
In the first form, the <username> is from the same namespace as
the authentication identities above. It is the user's username as
it is referred to by the underlying authentication mechanism.
Authorization identities of this form are converted into a DN format
by the same function that the authentication process used, producing
an {{authorization request DN}} of the form
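For instance (a sketch, assuming the DIGEST-MD5 mechanism with no realm), the authorization identity {{EX:u:bob}} might be converted to an authorization request DN such as:

> uid=bob,cn=digest-md5,cn=auth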
specify the authorization is permitted, the {{EX:authzFrom}}
rules in the authorization DN entry are then checked. If neither
case specifies that the request be honored, the request is denied.
Since the default behavior is to deny authorization requests, rules
only specify that a request be allowed; there are no negative rules
telling what authorizations to deny.
Also note that the values in an authorization rule must be one of
the two forms: an LDAP URL or a DN (with or without regular expression
characters). Anything that does not begin with "{{EX:ldap://}}" is
taken as a DN. It is not permissible to enter another authorization
identity of the form "{{EX:u:<username>}}" as an authorization rule.
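For example, an entry might carry a rule such as the following (a sketch; the suffix and filter are placeholders), permitting its owner to assume the identity of any {{EX:person}} entry under the suffix:

> authzTo: ldap:///dc=example,dc=com??sub?(objectClass=person)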
H1: Schema Specification
This chapter describes how to extend the user schema used by
{{slapd}}(8). The chapter assumes the reader is familiar with the
{{TERM:LDAP}}/{{TERM:X.500}} information model.
The first section, {{SECT:Distributed Schema Files}} details optional
and hence is not discussed here.
There are five steps to defining new schema:
^ obtain Object Identifier
+ choose a name prefix
+ create local schema file
+ define custom attribute types (if necessary)
You are, of course, free to design a hierarchy suitable to your
organizational needs under your organization's OID. No matter what hierarchy you choose, you should maintain a registry of assignments you make. This can be a simple flat file or something more sophisticated such as the {{OpenLDAP OID Registry}} ({{URL:http://www.openldap.org/faq/index.cgi?file=197}}).
For more information about Object Identifiers (and a listing service)
see {{URL:http://www.alvestrand.no/harald/objectid/}}.
.{{Under no circumstances should you hijack OID namespace!}}
The name should be both descriptive and not likely to clash with
names of other schema elements. In particular, any name you choose
should not clash with present or future Standard Track names (this
is assured if you use registered names or names beginning with "x-").
It is noted that you can obtain your own registered name
prefix so as to avoid having to register your names individually.
integer 1.3.6.1.4.1.1466.115.121.1.27 integer
numericString 1.3.6.1.4.1.1466.115.121.1.36 numeric string
OID 1.3.6.1.4.1.1466.115.121.1.38 object identifier
octetString 1.3.6.1.4.1.1466.115.121.1.40 arbitrary octets
!endblock
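As an illustration of how these syntaxes are used, a custom attribute type might be declared as follows (a sketch; the OID and name are hypothetical placeholders under a private arc):

> attributetype ( 1.1.2.1.1 NAME 'x-myCount'
>     DESC 'an example attribute using the integer syntax'
>     EQUALITY integerMatch
>     SYNTAX 1.3.6.1.4.1.1466.115.121.1.27 )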
>
A successful user/password authenticated bind results in a user
authorization identity, the provided name, being associated with
the session. User/password authenticated bind is enabled by default.
However, as this mechanism itself offers no eavesdropping protection
(e.g., the password is sent in the clear), it is recommended that
it be used only in tightly controlled systems or when the LDAP
session is protected by other means (e.g., TLS, {{TERM:IPsec}}).
This directive grants access (specified by <accesslevel>) to a
set of entries and/or attributes (specified by <what>) by one or
more requestors (specified by <who>).
See the {{SECT:Access Control}} section of this chapter for a
summary of basic usage.
A checkpoint operation flushes the database buffers to disk and writes a
checkpoint record in the log.
The checkpoint will occur if either <kbyte> data has been written or
<min> minutes have passed since the last checkpoint. Both arguments default
to zero, in which case they are ignored. When the <min> argument is
non-zero, an internal task will run every <min> minutes to perform the
checkpoint. See the Berkeley DB reference guide for more details.
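For example (the values are illustrative), the following directive checkpoints the database whenever 1024 kilobytes have been written or 15 minutes have elapsed since the last checkpoint:

> checkpoint 1024 15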
no such file exists yet, the {{EX:DB_CONFIG}} file will be created and the
settings in this attribute will be written to it. If the file exists,
its contents will be read and displayed in this attribute. The attribute
is multi-valued, to accommodate multiple configuration directives. No default
is provided, but it is essential to use proper settings here to get the
best server performance.
Ideally the BDB cache must be
at least as large as the working set of the database, the log buffer size
should be large enough to accommodate most transactions without overflowing,
and the log directory must be on a separate physical disk from the main
database files. And both the database directory and the log directory
should be separate from disks used for regular system activities such as
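A sketch of such settings in {{EX:DB_CONFIG}} (the sizes and path are assumptions, to be tuned per deployment):

> # 256MB BDB cache
> set_cachesize 0 268435456 1
> # 2MB transaction log buffer
> set_lg_bsize 2097152
> # transaction logs on a separate disk
> set_lg_dir /var/log/bdb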
H4: olcDbSearchStack: <integer>
Specify the depth of the stack used for search filter evaluation.
Search filters are evaluated on a stack to accommodate nested {{EX:AND}} /
{{EX:OR}} clauses. An individual stack is allocated for each server thread.
The depth of the stack determines how complex a filter can be evaluated
without requiring any additional memory allocation. Filters that are
This directive grants access (specified by <accesslevel>) to a set
of entries and/or attributes (specified by <what>) by one or more
requestors (specified by <who>). See the {{SECT:The access
Configuration Directive}} section of this chapter for a summary of
basic usage.
individual users in their {{.ldaprc}} files.
The LDAP Start TLS operation is used in LDAP to initiate TLS
negotiation. All OpenLDAP command line tools support a {{EX:-Z}}
and {{EX:-ZZ}} flag to indicate whether a Start TLS operation is to
be issued. The latter flag indicates that the tool is to cease
processing if TLS cannot be started while the former allows the
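For example (a sketch; the host and base are placeholders), the following refuses to proceed unless Start TLS succeeds:

> ldapsearch -ZZ -H ldap://ldap.example.com -b "dc=example,dc=com" "(uid=jdoe)"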
The OpenLDAP Project only supports OpenLDAP software.
You may however seek commercial support ({{URL:http://www.openldap.org/support/}}) or join
the general LDAP forum for non-commercial discussions and information relating to LDAP at:
{{URL:http://www.umich.edu/~dirsvcs/ldap/mailinglist.html}}
See {{slapd.conf}}(5) and {{slapindex}}(8) for more information
H3: Presence indexing
If your client application uses presence filters and if the
target attribute exists on the majority of entries in your target scope, then
Also note that {{id2entry}} always uses 16KB per "page", while {{dn2id}} uses whatever
the underlying filesystem uses, typically 4 or 8KB. To avoid thrashing,
your cache must be at least as large as the number of internal pages in both
the {{dn2id}} and {{id2entry}} databases, plus some extra space to accommodate the actual
leaf data pages.
For example, in my OpenLDAP 2.4 test database, I have an input LDIF file that's
I try to cache the most actively used entries. Unless you expect all 400,000 entries of your DB to be accessed regularly, there is no need to cache that many entries. My entry cache is set to 20,000 (out of a little over 400,000 entries).
The idlcache has to do with how many unique result sets of searches you want to store in memory. Setting up this cache will allow your most frequently placed searches to get results much faster, but I doubt you want to try and cache the results of every search that hits your system. ;)
--Quanah