directive only permits a single address to be specified. This directive
should not be used if you specify a DirAddresses (N.B. plural) directive.
+\item [DirSourceAddress = \lt{}IP-Address\gt{}]
+ \index[fd]{DirSourceAddress}
+ \index[fd]{Directive!DirSourceAddress}
+ This record is optional, and if it is specified, it will cause the Director
+ server (when initiating connections to a storage or file daemon) to source
+ its connections from the specified address. Only a single IP address may be
+ specified. If this record is not specified, the Director server will source
+ its outgoing connections according to the system routing table (the default).
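+
+ For example, assuming the directive is placed in the Director resource
+ (the address shown is purely illustrative):
+
+\footnotesize
+\begin{verbatim}
+Director {
+  Name = bacula-dir
+  ...
+  DirSourceAddress = 192.168.1.10
+}
+\end{verbatim}
+\normalsize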
+
\item[Statistics Retention = \lt{}time\gt{}]
\index[dir]{StatisticsRetention}
\index[dir]{Directive!StatisticsRetention}
The default is 5 years.
+\item[VerId = \lt{}string\gt{}]
+ \index[dir]{Directive!VerId}
+ where \lt{}string\gt{} is an identifier which can be used for support purposes.
+ This string is displayed using the \texttt{version} command.
+
\item[MaxConsoleConnections = \lt{}number\gt{}]
\index[dir]{MaximumConsoleConnections}
\index[dir]{MaxConsoleConnections}
if a client has a really huge number of files (more than several million),
you might want to split it into two Jobs each with a different FileSet
covering only part of the total files.
+
+Multiple Storage daemons are not currently supported for Jobs, so if
+you do want to use multiple Storage daemons, you will need to create
+a different Job for each and ensure that the combination of Client and
+FileSet is unique for each Job. The Client and FileSet are what Bacula
+uses to restore a client, so if there are multiple Jobs with the same
+Client and FileSet, or if multiple Storage daemons are used, the
+restore will not work. This problem can be resolved by defining multiple
+FileSet definitions (the names must be different, but the contents of
+the FileSets may be the same).
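+
+For example, a client might be split between two Storage daemons by
+giving each Job its own FileSet (all resource names here are
+illustrative):
+
+\footnotesize
+\begin{verbatim}
+Job {
+  Name = "client1-system"
+  Client = client1-fd
+  FileSet = "System Files"
+  Storage = File1
+  ...
+}
+Job {
+  Name = "client1-data"
+  Client = client1-fd
+  FileSet = "Data Files"
+  Storage = File2
+  ...
+}
+\end{verbatim}
+\normalsize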
\begin{description}
specify here followed by the date and time the job was scheduled for
execution. This directive is required.
-\item [Enabled = \lt{}yes|no\gt{}]
+\item [Enabled = \lt{}yes\vb{}no\gt{}]
\index[dir]{Enable}
\index[dir]{Directive!Enable}
This directive allows you to enable or disable automatic execution
different FileSet.
\item The Job was a Full, Differential, or Incremental backup.
\item The Job terminated normally (i.e. did not fail or was not canceled).
-\item The Job started no longer ago than {\bf Max Full Age}.
+\item The Job started no longer ago than {\bf Max Full Interval}.
\end{itemize}
If any of the above conditions does not hold, the Director will upgrade the
different FileSet.
\item The Job was a FULL backup.
\item The Job terminated normally (i.e. did not fail or was not canceled).
-\item The Job started no longer ago than {\bf Max Full Age}.
+\item The Job started no longer ago than {\bf Max Full Interval}.
\end{itemize}
If any of the above conditions does not hold, the Director will upgrade the
have been deleted.
\end{description}
-\item [Accurate = \lt{}yes|no\gt{}]
+\item [Accurate = \lt{}yes\vb{}no\gt{}]
\index[dir]{Accurate}
- In accurate mode, FileDaemon known exactly which files were present
- after last backup. So it is able to handle deleted or renamed files.
+ In accurate mode, the File daemon knows exactly which files were present
+ after the last backup. So it is able to handle deleted or renamed files.
- When restoring a fileset for a specified date (including "most
- recent"), Bacula is able to give you exactly the files and
+ When restoring a FileSet for a specified date (including "most
+ recent"), Bacula is able to restore exactly the files and
directories that existed at the time of the last backup prior to
- that date.
+ that date including ensuring that deleted files are actually deleted,
+ and renamed directories are restored properly.
- In this mode, FileDaemon have to keep all files in memory. So you have
- to check that your memory and swap are sufficent.
+ In this mode, the File daemon must keep data concerning all files in
+ memory. So if you do not have sufficient memory, the restore may
+ either be terribly slow or fail.
- $$ memory = \sum_{i=1}^{n}(strlen(path_i + file_i) + sizeof(CurFile))$$
+%% $$ memory = \sum_{i=1}^{n}(strlen(path_i + file_i) + sizeof(CurFile))$$
- For 500.000 files (typical desktop linux system), it will take
- around 64MB of RAM on your FileDaemon.
+ For 500,000 files (a typical desktop Linux system), it will require
+ approximately 64 Megabytes of RAM on your File daemon to hold the
+ required information.
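+
+ For example, accurate mode can be enabled on a per-Job basis (the Job
+ name is illustrative):
+
+\footnotesize
+\begin{verbatim}
+Job {
+  Name = "BackupClient1"
+  Accurate = yes
+  ...
+}
+\end{verbatim}
+\normalsize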
\item [Verify Job = \lt{}Job-Resource-Name\gt{}]
\index[dir]{Verify Job}
\addcontentsline{lof}{figure}{Job time control directives}
\includegraphics{\idir different_time.eps}
-\item [Max Full Age = \lt{}time\gt{}]
-\index[dir]{Max Full Age}
-\index[dir]{Directive!Max Full Age}
+\item [Max Full Interval = \lt{}time\gt{}]
+\index[dir]{Max Full Interval}
+\index[dir]{Directive!Max Full Interval}
The time specifies the maximum allowed age (counting from start time) of
the most recent successful Full backup that is required in order to run
Incremental or Differential backup jobs. If the most recent Full backup
considered.
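+
+ For example, to force an upgrade to a Full backup whenever the most
+ recent successful Full is older than 30 days (the Job name is
+ illustrative):
+
+\footnotesize
+\begin{verbatim}
+Job {
+  Name = "BackupClient1"
+  Max Full Interval = 30 days
+  ...
+}
+\end{verbatim}
+\normalsize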
\label{PreferMountedVolumes}
-\item [Prefer Mounted Volumes = \lt{}yes|no\gt{}]
+\item [Prefer Mounted Volumes = \lt{}yes\vb{}no\gt{}]
\index[dir]{Prefer Mounted Volumes}
\index[dir]{Directive!Prefer Mounted Volumes}
If the Prefer Mounted Volumes directive is set to {\bf yes} (default
a drive with a valid Volume already mounted in preference to a drive
that is not ready. This means that all jobs will attempt to append
to the same Volume (providing the Volume is appropriate -- right Pool,
- ... for that job). If no drive with a suitable Volume is available, it
+ ... for that job), unless you are using multiple pools.
+ If no drive with a suitable Volume is available, it
will select the first available drive. Note, any Volume that has
been requested to be mounted, will be considered valid as a mounted
volume by another job. Thus if multiple jobs start at the same time
This means that the job will prefer to use an unused drive rather
than use a drive that is already in use.
-\item [Prune Jobs = \lt{}yes|no\gt{}]
+ Despite the above, we recommend against setting this directive to
+ {\bf no} since
+ it tends to add a lot of swapping of Volumes between the different
+ drives and can easily lead to deadlock situations in the Storage
+ daemon. We will accept bug reports against it, but we cannot guarantee
+ that we will be able to fix the problem in a reasonable time.
+
+ A better alternative for using multiple drives is to use multiple
+ pools so that Bacula will be forced to mount Volumes from those Pools
+ on different drives.
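+
+ For example, two Pools such as the following (names are illustrative)
+ ensure that Jobs writing to different Pools cannot share a mounted
+ Volume, so Bacula must use different drives:
+
+\footnotesize
+\begin{verbatim}
+Pool {
+  Name = FullPool
+  Pool Type = Backup
+  ...
+}
+Pool {
+  Name = IncPool
+  Pool Type = Backup
+  ...
+}
+\end{verbatim}
+\normalsize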
+
+\item [Prune Jobs = \lt{}yes\vb{}no\gt{}]
\index[dir]{Prune Jobs}
\index[dir]{Directive!Prune Jobs}
Normally, pruning of Jobs from the Catalog is specified on a Client by
default is {\bf no}.
-\item [Prune Files = \lt{}yes|no\gt{}]
+\item [Prune Files = \lt{}yes\vb{}no\gt{}]
\index[dir]{Prune Files}
\index[dir]{Directive!Prune Files}
Normally, pruning of Files from the Catalog is specified on a Client by
yes}, it will override the value specified in the Client resource. The
default is {\bf no}.
-\item [Prune Volumes = \lt{}yes|no\gt{}]
+\item [Prune Volumes = \lt{}yes\vb{}no\gt{}]
\index[dir]{Prune Volumes}
\index[dir]{Directive!Prune Volumes}
Normally, pruning of Volumes from the Catalog is specified on a Client
\index[dir]{RunScript}
\index[dir]{Directive!Run Script}
- This directive is implemented in version 1.39.22 and later.
The RunScript directive behaves like a resource in that it
requires opening and closing braces around a number of directives
that make up the body of the runscript.
- The specified {\bf Command} (see below for details) is run as an
- external program prior or after the current Job. This is optional.
+ The specified {\bf Command} (see below for details) is run as an external
+ program before or after the current Job. This is optional. By default, the
+ program is executed on the Client side like in \texttt{ClientRunXXXJob}.
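+
+ For example, the following RunScript stops a (purely illustrative)
+ service on the Client before the backup begins:
+
+\footnotesize
+\begin{verbatim}
+RunScript {
+  RunsWhen = Before
+  FailJobOnError = Yes
+  Command = "/etc/init.d/apache stop"
+}
+\end{verbatim}
+\normalsize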
\textbf{Console} options are special commands that are sent to the director instead
of the OS. At this time, console command outputs are redirected to the log with
Note, please see the notes above in {\bf RunScript}
concerning Windows clients.
-\item [Rerun Failed Levels = \lt{}yes|no\gt{}]
+\item [Rerun Failed Levels = \lt{}yes\vb{}no\gt{}]
\index[dir]{Rerun Failed Levels}
\index[dir]{Directive!Rerun Failed Levels}
If this directive is set to {\bf yes} (default no), and Bacula detects that
when checking for failed levels, which means that any FileSet change will
trigger a rerun.
-\item [Spool Data = \lt{}yes|no\gt{}]
+\item [Spool Data = \lt{}yes\vb{}no\gt{}]
\index[dir]{Spool Data}
\index[dir]{Directive!Spool Data}
NOTE: When this directive is set to yes, Spool Attributes is also
automatically set to yes.
-\item [Spool Attributes = \lt{}yes|no\gt{}]
+\item [Spool Attributes = \lt{}yes\vb{}no\gt{}]
\index[dir]{Spool Attributes}
\index[dir]{Directive!Spool Attributes}
\index[dir]{slow}
if the backed up file already exists, Bacula skips restoring this file.
\end{description}
-\item [Prefix Links=\lt{}yes|no\gt{}]
+\item [Prefix Links=\lt{}yes\vb{}no\gt{}]
\index[dir]{Prefix Links}
\index[dir]{Directive!Prefix Links}
If a {\bf Where} path prefix is specified for a recovery job, apply it
documented under \ilink{ Maximum Concurrent Jobs}{DirMaxConJobs} in the
Director's resource.
-\item [Reschedule On Error = \lt{}yes|no\gt{}]
+\item [Reschedule On Error = \lt{}yes\vb{}no\gt{}]
\index[dir]{Reschedule On Error}
\index[dir]{Directive!Reschedule On Error}
If this directive is enabled, and the job terminates in error, the job
correct order, and that your priority scheme will be respected.
\label{AllowMixedPriority}
-\item [Allow Mixed Priority = \lt{}yes|no\gt{}]
+\item [Allow Mixed Priority = \lt{}yes\vb{}no\gt{}]
\index[dir]{Allow Mixed Priority}
This directive is only implemented in version 2.5 and later. When
set to {\bf yes} (default {\bf no}), this job may run even if lower
be run until the priority 5 job has finished.
\label{WritePartAfterJob}
-\item [Write Part After Job = \lt{}yes|no\gt{}]
+\item [Write Part After Job = \lt{}yes\vb{}no\gt{}]
\index[dir]{Write Part After Job}
\index[dir]{Directive!Write Part After Job}
This directive is only implemented in version 1.37 and later.
specifies to use the Pool named {\bf Incremental} if the job is an
incremental backup.
-\item [SpoolData=yes|no]
+\item [SpoolData=yes\vb{}no]
\index[dir]{SpoolData}
\index[dir]{Directive!SpoolData}
tells Bacula to request the Storage daemon to spool data to a disk file
This directive is available only in Bacula version 2.3.5 or
later.
-\item [WritePartAfterJob=yes|no]
+\item [WritePartAfterJob=yes\vb{}no]
\index[dir]{WritePartAfterJob}
\index[dir]{Directive!WritePartAfterJob}
tells Bacula to request the Storage daemon to write the current part
<wday-range>
<date> = <date-keyword> | <day> | <range>
<date-spec> = <date> | <date-spec>
-<day-spec> = <day> | <wday-keyword> |
- <day-range> | <wday-range> |
- <daily-keyword>
<day-spec> = <day> | <wday-keyword> |
    <day-range> | <wday-range> |
<week-keyword> <wday-keyword> |
- <week-keyword> <wday-range>
+ <week-keyword> <wday-range> |
+ <daily-keyword>
<month-spec> = <month-keyword> | <month-range> |
<monthly-keyword>
<date-time-spec> = <month-spec> <day-spec> <time-spec>
The default is 180 days.
\label{AutoPrune}
-\item [AutoPrune = \lt{}yes|no\gt{}]
+\item [AutoPrune = \lt{}yes\vb{}no\gt{}]
\index[dir]{AutoPrune}
\index[dir]{Directive!AutoPrune}
If AutoPrune is set to {\bf yes} (default), Bacula (version 1.20 or greater)
check so that you don't try to write data for a DLT onto an 8mm device.
\label{Autochanger1}
-\item [Autochanger = \lt{}yes|no\gt{}]
+\item [Autochanger = \lt{}yes\vb{}no\gt{}]
\index[dir]{Autochanger}
\index[dir]{Directive!Autochanger}
If you specify {\bf yes} for this command (the default is {\bf no}),
the Job resource or in the Pool, but it must be specified in
one or the other. If not, a configuration error will result.
-\item [Use Volume Once = \lt{}yes|no\gt{}]
+\item [Use Volume Once = \lt{}yes\vb{}no\gt{}]
\index[dir]{Use Volume Once}
\index[dir]{Directive!Use Volume Once}
This directive, if set to {\bf yes}, specifies that each volume is to be
must use the
\ilink{\bf update volume}{UpdateCommand} command in the Console.
-\item [Catalog Files = \lt{}yes|no\gt{}]
+\item [Catalog Files = \lt{}yes\vb{}no\gt{}]
\index[dir]{Catalog Files}
\index[dir]{Directive!Catalog Files}
This directive defines whether or not you want the names of the files
restore} command nor any other command that references File entries.
\label{PoolAutoPrune}
-\item [AutoPrune = \lt{}yes|no\gt{}]
+\item [AutoPrune = \lt{}yes\vb{}no\gt{}]
\index[dir]{AutoPrune}
\index[dir]{Directive!AutoPrune}
If AutoPrune is set to {\bf yes} (default), Bacula (version 1.20 or
\label{PoolRecycle}
-\item [Recycle = \lt{}yes|no\gt{}]
+\item [Recycle = \lt{}yes\vb{}no\gt{}]
\index[dir]{Recycle}
\index[dir]{Directive!Recycle}
This directive specifies whether or not Purged Volumes may be recycled.
\label{RecycleOldest}
-\item [Recycle Oldest Volume = \lt{}yes|no\gt{}]
+\item [Recycle Oldest Volume = \lt{}yes\vb{}no\gt{}]
\index[dir]{Recycle Oldest Volume}
\index[dir]{Directive!Recycle Oldest Volume}
This directive instructs the Director to search for the oldest used
\label{RecycleCurrent}
-\item [Recycle Current Volume = \lt{}yes|no\gt{}]
+\item [Recycle Current Volume = \lt{}yes\vb{}no\gt{}]
\index[dir]{Recycle Current Volume}
\index[dir]{Directive!Recycle Current Volume}
If Bacula needs a new Volume, this directive instructs Bacula to Prune
\label{PurgeOldest}
-\item [Purge Oldest Volume = \lt{}yes|no\gt{}]
+\item [Purge Oldest Volume = \lt{}yes\vb{}no\gt{}]
\index[dir]{Purge Oldest Volume}
\index[dir]{Directive!Purge Oldest Volume}
This directive instructs the Director to search for the oldest used
by MySQL and PostgreSQL and is ignored by SQLite if provided. This
directive is optional.
-%% \item [Multiple Connections = \lt{}yes|no\gt{}]
+%% \item [Multiple Connections = \lt{}yes\vb{}no\gt{}]
%% \index[dir]{Multiple Connections}
%% \index[dir]{Directive!Multiple Connections}
%% By default, this directive is set to no. In that case, each job that uses