\sloppy
\newfont{\bighead}{cmr17 at 36pt}
-%% \topmargin -0.5in
-%% \textheight 9in
\parskip 10pt
\parindent 0pt
-%% \oddsidemargin -0.0in
-%% \evensidemargin -0.25in
-%% \textwidth 7in
\title{\includegraphics{./bacula-logo.eps} \\ \bigskip
\Large{It comes in the night and sucks the essence
-1.38.3 (12 December 2005)
+1.38.3 (22 December 2005)
-1.38.3 (12 December 2005)
+1.38.3 (22 December 2005)
\item [Prefer Mounted Volumes = \lt{}yes|no\gt{}]
\index[dir]{Prefer Mounted Volumes}
- It the Prefer Mounted Volumes directive is set to {\bf yes}
- (default yes), it is used to inform the Storage daemon
- to select either an Autochanger or a drive with a valid
- Volume already mounted in preference to a drive that is
- not ready. If none is available, it will select the first
- available drive. If the directive is set to {\bf no}, the
- Storage daemon will prefer finding an unused drive. This
- can potentially be useful for those sites that prefer to
- maximum backup throughput at the expense of using additional
- drives and Volumes.
+ If the Prefer Mounted Volumes directive is set to {\bf yes} (default
+ yes), the Storage daemon is requested to select either an Autochanger or
+ a drive with a valid Volume already mounted in preference to a drive
+ that is not ready. If no drive with a suitable Volume is available, it
+ will select the first available drive.
+
+ If the directive is set to {\bf no}, the Storage daemon will prefer
+ finding an unused drive; otherwise, each job started will append to the
+ same Volume (assuming the Pool is the same for all jobs). Setting
+ Prefer Mounted Volumes to no can be useful for sites, particularly
+ those with multiple-drive autochangers, that prefer to maximize backup
+ throughput at the expense of using additional drives and Volumes. As an
+ optimization, when using multiple drives, you will probably want to
+ start your jobs one after another at approximately five-second
+ intervals. This helps ensure that each night the same drive
+ (Volume) is selected for the same job; otherwise, when you do a restore,
+ you may find the files spread over many more Volumes than necessary.
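+
+ As a minimal sketch of the second case (the Job name and the other
+ lines shown are invented for illustration), the directive simply goes
+ in the Job resource:
+
+\footnotesize
+\begin{verbatim}
+Job {
+  Name = "NightlySave"   # hypothetical job
+  ...
+  # prefer an unused drive so jobs spread across the autochanger
+  Prefer Mounted Volumes = no
+}
+\end{verbatim}
+\normalsize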
\item [Prune Jobs = \lt{}yes|no\gt{}]
(for example DVD), so you are sure that the current part, containing
this job's data, is written to the device, and that no data is left in
the temporary file on the hard disk. However, on some media, like DVD+R
- and DVD-R, a lot of space (about 10Mb) is lost everytime a part is
+ and DVD-R, a lot of space (about 10Mb) is lost every time a part is
written. So, if you run several jobs each after another, you could set
this directive to {\bf no} for all jobs, except the last one, to avoid
wasting too much space, but to ensure that the data is written to the
must specify a unique Media Type for that drive. This is an important
point that should be carefully understood. Note, this applies equally
to Disk Volumes. If you define more than one disk Device resource in
- your Storage daemon's conf file, the Volumes on thoes two devices are in
+ your Storage daemon's conf file, the Volumes on those two devices are in
fact incompatible because one cannot be mounted on the other device
since they are found in different directories. For this reason, you
probably should use two different Media Types for your two disk Devices
Storage resources in your bacula-dir.conf.
OK, so now you should understand that you need multiple Device definitions
-in the case of different directorys or different Pools, but you also
+in the case of different directories or different Pools, but you also
need to know that the catalog data that Bacula keeps contains only
the Media Type and not the specific storage device. This permits a tape
for example to be re-read on any compatible tape drive. The compatibility
the proof of concept in a relatively short period of time. The example
consists of two clients that are backed up to a set of 12 archive files
(Volumes) for each client into different directories on the Storage
-maching. Each Volume is used (written) only once, and there are four Full
+machine. Each Volume is used (written) only once, and there are four Full
saves done every hour (so the whole thing cycles around after three hours).
What is key here is that each physical device on the Storage daemon
mounted at one time on a disk as defined in your Device resource in
the Storage daemon's conf file. You can have multiple concurrent
jobs running that all write to the one Volume that is being used, but
-if you want to have multiple concurrent jobs that are writting to
+if you want to have multiple concurrent jobs that are writing to
separate disk drives (or partitions), you will need to define
separate Device resources for each one, exactly as you would do for
two different tape drives. There is one fundamental difference, however.
-The Volumes that you creat on the two drives cannot be easily exchanged
+The Volumes that you create on the two drives cannot be easily exchanged
as they can for a tape drive, because they are physically resident (already
mounted in a sense) on the particular drive. As a consequence, you will
probably want to give them different Media Types so that Bacula can
Catalog = BackupDB
Password = client2_password
}
-# Two Storage definitions with differen Media Types
+# Two Storage definitions with different Media Types
# permits different directories
Storage {
Name = File1
\section*{Messages Resource}
\label{_ChapterStart15}
-\index[general]{Resource!Messages }
-\index[general]{Messages Resource }
+\index[general]{Resource!Messages}
+\index[general]{Messages Resource}
\addcontentsline{toc}{section}{Messages Resource}
\subsection*{The Messages Resource}
\label{MessageResource}
-\index[general]{Resource!Messages }
-\index[general]{Messages Resource }
+\index[general]{Resource!Messages}
+\index[general]{Messages Resource}
\addcontentsline{toc}{subsection}{Messages Resource}
The Messages resource defines how messages are to be handled and destinations
\begin{description}
\item [destination = message-type1, message-type2, message-type3, ... ]
- \index[dir]{destination }
+ \index[dir]{destination}
\end{description}
or for those destinations that need an address specification (e.g. email):
\item [destination = address = message-type1, message-type2,
message-type3, ... ]
- \index[dir]{destination }
+ \index[dir]{destination}
Where {\bf destination} is one of a predefined set of keywords that define
where the message is to be sent ({\bf stdout}, {\bf file}, ...), {\bf
\begin{description}
\item [Messages]
- \index[dir]{Messages }
+ \index[dir]{Messages}
Start of the Messages records.
\item [Name = \lt{}name\gt{}]
- \index[dir]{Name }
+ \index[dir]{Name}
The name of the Messages resource. The name you specify here will be used to
tie this Messages resource to a Job and/or to the daemon.
\label{mailcommand}
\item [MailCommand = \lt{}command\gt{}]
- \index[dir]{MailCommand }
+ \index[dir]{MailCommand}
In the absence of this resource, Bacula will send all mail using the
following command:
{\bf mailcommand = "/home/kern/bacula/bin/bsmtp -h mail.example.com -f
\textbackslash{}"\textbackslash{}(Bacula\textbackslash{})
\%r\textbackslash{}" -s \textbackslash{}"Bacula: \%t \%e of \%c
-\%l\textbackslash{}" \%r" }
+\%l\textbackslash{}" \%r"}
Note, the {\bf bsmtp} program is provided as part of {\bf Bacula}. For
additional details, please see the
selective as to what forms are permitted particularly in the from part.
\item [OperatorCommand = \lt{}command\gt{}]
- \index[fd]{OperatorCommand }
+ \index[fd]{OperatorCommand}
This resource specification is similar to the {\bf MailCommand} except that
it is used for Operator messages. The substitutions performed for the {\bf
MailCommand} are also done for this command. Normally, you will set this
command to the same value as specified for the {\bf MailCommand}.
\item [Debug = \lt{}debug-level\gt{}]
- \index[fd]{Debug }
+ \index[fd]{Debug}
This sets the debug message level to the debug level, which is an integer.
Higher debug levels cause more debug information to be produced. You are
requested not to use this record since it will be deprecated.
\item [\lt{}destination\gt{} = \lt{}message-type1\gt{},
\lt{}message-type2\gt{}, ...]
- \index[fd]{\lt{}destination\gt{} }
+ \index[fd]{\lt{}destination\gt{}}
Where {\bf destination} may be one of the following:
\begin{description}
\item [stdout]
- \index[fd]{stdout }
+ \index[fd]{stdout}
Send the message to standard output.
\item [stderr]
- \index[fd]{stderr }
+ \index[fd]{stderr}
Send the message to standard error.
\item [console]
- \index[console]{console }
+ \index[console]{console}
Send the message to the console (Bacula Console). These messages are held
until the console program connects to the Director.
\end{description}
\item {\bf \lt{}destination\gt{} = \lt{}address\gt{} =
\lt{}message-type1\gt{}, \lt{}message-type2\gt{}, ...}
- \index[console]{\lt{}destination\gt{} }
+ \index[console]{\lt{}destination\gt{}}
Where {\bf address} depends on the {\bf destination}, which may be one of the
following:
\begin{description}
\item [director]
- \index[dir]{director }
+ \index[dir]{director}
Send the message to the Director whose name is given in the {\bf address}
field. Note, in the current implementation, the Director Name is ignored, and
the message is sent to the Director that started the Job.
\item [file]
- \index[dir]{file }
+ \index[dir]{file}
Send the message to the filename given in the {\bf address} field. If the
file already exists, it will be overwritten.
\item [append]
- \index[dir]{append }
+ \index[dir]{append}
Append the message to the filename given in the {\bf address} field. If the
-file already exists, it will be appended to. If the file does not exist, it
-will be created.
+ file already exists, it will be appended to. If the file does not exist, it
+ will be created.
\item [syslog]
- \index[fd]{syslog }
+ \index[fd]{syslog}
Send the message to the system log (syslog) using the facility specified in
-the {\bf address} field. Note, for the moment, the {\bf address} field is
-ignored and the message is always sent to the {\bf LOG\_ERR} facility.
+ the {\bf address} field. Note, for the moment, the {\bf address} field is
+ ignored and the message is always sent to the {\bf LOG\_DAEMON} facility
+ with level {\bf LOG\_ERR}. See {\bf man 3 syslog} for more details. Example:
+\begin{verbatim}
+ syslog = all, !skipped, !saved
+\end{verbatim}
\item [mail]
- \index[fd]{mail }
- Send the message to the email addresses that are given as a comma separated
-list in the {\bf address} field. Mail messages are grouped together during a
-job and then sent as a single email message when the job terminates. The
-advantage of this destination is that you are notified about every Job that
-runs. However, if you backup 5 or 10 machines every night, the volume of
-email messages can be important. Some users use filter programs such as {\bf
-procmail} to automatically file this email based on the Job termination code
-(see {\bf mailcommand}).
+ \index[fd]{mail}
+ Send the message to the email addresses that are given as a comma
+ separated list in the {\bf address} field. Mail messages are grouped
+ together during a job and then sent as a single email message when the
+ job terminates. The advantage of this destination is that you are
+ notified about every Job that runs. However, if you back up 5 or 10
+ machines every night, the volume of email messages can be considerable.
+ Some users use filter programs such as {\bf procmail} to automatically
+ file this email based on the Job termination code (see {\bf
+ mailcommand}).
\item [mail on error]
- \index[fd]{mail on error }
- Send the message to the email addresses that are given as a comma separated
-list in the {\bf address} field if the Job terminates with an error
-condition. MailOnError messages are grouped together during a job and then
-sent as a single email message when the job terminates. This destination
-differs from the {\bf mail} destination in that if the Job terminates
-normally, the message is totally discarded (for this destination). If the Job
-terminates in error, it is emailed. By using other destinations such as {\bf
-append} you can ensure that even if the Job terminates normally, the output
-information is saved.
+ \index[fd]{mail on error}
+ Send the message to the email addresses that are given as a comma
+ separated list in the {\bf address} field if the Job terminates with an
+ error condition. MailOnError messages are grouped together during a job
+ and then sent as a single email message when the job terminates. This
+ destination differs from the {\bf mail} destination in that if the Job
+ terminates normally, the message is totally discarded (for this
+ destination). If the Job terminates in error, it is emailed. By using
+ other destinations such as {\bf append} you can ensure that even if the
+ Job terminates normally, the output information is saved.
\item [operator]
- \index[fd]{operator }
- Send the message to the email addresses that are specified as a comma
-separated list in the {\bf address} field. This is similar to {\bf mail}
-above, except that each message is sent as received. Thus there is one email
-per message. This is most useful for {\bf mount} messages (see below).
-\end{description}
+ \index[fd]{operator}
+ Send the message to the email addresses that are specified as a comma
+ separated list in the {\bf address} field. This is similar to {\bf
+ mail} above, except that each message is sent as received. Thus there
+ is one email per message. This is most useful for {\bf mount} messages
+ (see below).
+\end{description}
-For any destination, the {\bf message-type} field is a comma separated list
-of the following types or classes of messages:
+ For any destination, the {\bf message-type} field is a comma separated
+ list of the following types or classes of messages:
\begin{description}
\item [info]
- \index[fd]{info }
+ \index[fd]{info}
General information messages.
\item [warning]
- \index[fd]{warning }
+ \index[fd]{warning}
Warning messages. Generally this is some unusual condition but not expected
to be serious.
\item [error]
- \index[fd]{error }
+ \index[fd]{error}
Non-fatal error messages. The job continues running. Any error message should
be investigated as it means that something went wrong.
\item [fatal]
- \index[fd]{fatal }
+ \index[fd]{fatal}
Fatal error messages. Fatal errors cause the job to terminate.
\item [terminate]
- \index[fd]{terminate }
+ \index[fd]{terminate}
Message generated when the daemon shuts down.
\item [saved]
- \index[fd]{saved }
+ \index[fd]{saved}
Files saved normally.
\item [notsaved]
- \index[fd]{notsaved }
+ \index[fd]{notsaved}
Files not saved because of some error. Usually because the file cannot be
accessed (i.e. it does not exist or is not mounted).
\item [skipped]
- \index[fd]{skipped }
- Files that were skipped because of a user supplied option such as an
-incremental backup or a file that matches an exclusion pattern. This is not
-considered an error condition such as the files listed for the {\bf
-notsaved} type because the configuration file explicitly requests these types
-of files to be skipped. For example, any unchanged file during an incremental
-backup, or any subdirectory if the no recursion option is specified.
+ \index[fd]{skipped}
+ Files that were skipped because of a user supplied option such as an
+ incremental backup or a file that matches an exclusion pattern. This is
+ not considered an error condition such as the files listed for the {\bf
+ notsaved} type because the configuration file explicitly requests these
+ types of files to be skipped. For example, any unchanged file during an
+ incremental backup, or any subdirectory if the no recursion option is
+ specified.
\item [mount]
- \index[dir]{mount }
- Volume mount or intervention requests from the Storage daemon. These requests
-require a specific operator intervention for the job to continue.
+ \index[dir]{mount}
+ Volume mount or intervention requests from the Storage daemon. These
+ requests require a specific operator intervention for the job to
+ continue.
\item [restored]
- \index[dir]{restored }
- The {\bf ls} style listing generated for each file restored is sent to this
-message class.
+ \index[dir]{restored}
+ The {\bf ls} style listing generated for each file restored is sent to
+ this message class.
\item [all]
- \index[fd]{all }
+ \index[fd]{all}
All message types.
\item [*security]
- \index[fd]{*security }
- Security info/warning messages (not currently implemented).
+ \index[fd]{*security}
+ Security info/warning messages principally from unauthorized
+ connection attempts.
\end{description}
\end{description}
-The following is an example of a valid Messages resource definition, where all
-messages except files explicitly skipped or daemon termination messages are
-sent by email to enforcement@sec.com. In addition all mount messages are sent
-to the operator (i.e. emailed to enforcement@sec.com). Finally all messages
-other than explicitly skipped files and files saved are sent to the console:
+The following is an example of a valid Messages resource definition, where
+all messages except files explicitly skipped or daemon termination messages
+are sent by email to enforcement@sec.com. In addition all mount messages
+are sent to the operator (i.e. emailed to enforcement@sec.com). Finally
+all messages other than explicitly skipped files and files saved are sent
+to the console:
\footnotesize
\begin{verbatim}
the name {\bf foobaz-fd}. For the Director, you should use {\bf foobaz-dir},
and for the storage daemon, you might use {\bf foobaz-sd}.
Each of your Bacula components {\bf must} have a unique name. If you
-make them all the same, aside fromt the fact that you will not
+make them all the same, aside from the fact that you will not
know what daemon is sending what message, if they share the same
working directory, the daemons' temporary file names will not
be unique, and you will get many strange failures.
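+
+For example, a minimal sketch of such a naming scheme (the foobaz names
+follow the example above; the surrounding resource contents are elided):
+
+\footnotesize
+\begin{verbatim}
+# bacula-dir.conf
+Director {
+  Name = foobaz-dir
+  ...
+}
+# bacula-fd.conf
+FileDaemon {
+  Name = foobaz-fd
+  ...
+}
+# bacula-sd.conf
+Storage {
+  Name = foobaz-sd
+  ...
+}
+\end{verbatim}
+\normalsize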
\index[general]{Automatic Volume Recycling }
\addcontentsline{toc}{section}{Automatic Volume Recycling}
-By default, once Bacula starts writing a Volume, it can
-append to the volume, but it will not overwrite the existing
-data thus destroying it.
-However when Bacula {\bf recycles} a Volume, the Volume becomes
-available for being reused, and Bacula can at some later time
-over write the previous contents of that Volume.
-Thus all previous data will be lost. If the Volume is a tape,
-the tape will be rewritten from the beginning. If the Volume is
-a disk file, the file will be truncated before being rewritten.
+By default, once Bacula starts writing a Volume, it can append to the
+volume, but it will not overwrite and thereby destroy the existing data.
+However when Bacula {\bf recycles} a Volume, the Volume becomes available
+for being reused, and Bacula can at some later time overwrite the previous
+contents of that Volume. Thus all previous data will be lost. If the
+Volume is a tape, the tape will be rewritten from the beginning. If the
+Volume is a disk file, the file will be truncated before being rewritten.
You may not want Bacula to automatically recycle (reuse) tapes. This would
require a large number of tapes though, and in such a case, it is possible
\item AutoPrune = yes
\item VolumeRetention = \lt{}time\gt{}
\item Recycle = yes
- \end{itemize}
+\end{itemize}
+
+The above three directives are all you need assuming that you fill
+each of your Volumes then wait the Volume Retention period before
+reusing them. If you want Bacula to stop using a Volume and recycle
+it before it is full, you will need to use one or more additional
+directives such as:
+\begin{itemize}
+\item Use Volume Once = yes
+\item Volume Use Duration = ttt
+\item Maximum Volume Jobs = nnn
+\item Maximum Volume Bytes = mmm
+\end{itemize}
+Please see below and
+the \ilink{Basic Volume Management}{_ChapterStart39} chapter
+of this manual for more complete examples.
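+
+As a minimal sketch combining these directives (the Pool name and the
+values shown are only illustrative):
+
+\footnotesize
+\begin{verbatim}
+Pool {
+  Name = WeeklyPool            # hypothetical pool
+  Pool Type = Backup
+  AutoPrune = yes              # prune expired catalog records
+  Volume Retention = 30 days   # guard period before reuse
+  Recycle = yes                # allow purged Volumes to be reused
+  Maximum Volume Jobs = 1      # stop appending after one job
+}
+\end{verbatim}
+\normalsize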
Automatic recycling of Volumes is performed by Bacula only when it wants a
new Volume and no appendable Volumes are available in the Pool. It will then
Append}. If necessary, you can manually set a volume to {\bf Full}. The reason
for this is that Bacula wants to preserve the data on your old tapes (even
though purged from the catalog) as long as absolutely possible before
-overwriting it.
+overwriting it. There are also a number of directives such as
+{\bf Volume Use Duration} that will automatically mark a volume as {\bf
+Used} and thus no longer appendable.
\label{AutoPruning}
\subsection*{Automatic Pruning}
-\index[general]{Automatic Pruning }
-\index[general]{Pruning!Automatic }
+\index[general]{Automatic Pruning}
+\index[general]{Pruning!Automatic}
\addcontentsline{toc}{subsection}{Automatic Pruning}
-As Bacula writes files to tape, it keeps a list of files, jobs, and volumes in
-a database called the catalog. Among other things, the database helps Bacula
-to decide which files to back up in an incremental or differential backup, and
-helps you locate files on past backups when you want to restore something.
-However, the catalog will grow larger and larger as time goes on, and
-eventually it can become unacceptably large.
-
-Bacula's process for removing entries from the catalog is called Pruning. The
-default is Automatic Pruning, which means that once an entry reaches a certain
-age (e.g. 30 days old) it is removed from the catalog. Once a job has been
-pruned, you can still restore it from the backup tape, but one additional step
-is required: scanning the volume with bscan. The alternative to Automatic
-Pruning is Manual Pruning, in which you explicitly tell Bacula to erase the
-catalog entries for a volume. You'd usually do this when you want to reuse a
-Bacula volume, because there's no point in keeping a list of files that USED
-TO BE on a tape. Or, if the catalog is starting to get too big, you could
-prune the oldest jobs to save space. Manual pruning is done with the
-\ilink{ prune command}{ManualPruning} in the console. (thanks to
-Bryce Denney for the above explanation).
-
-\subsection*{Prunning Directives}
-\index[general]{Prunning Directives }
-\index[general]{Directives!Prunning }
-\addcontentsline{toc}{subsection}{Prunning Directives}
+As Bacula writes files to tape, it keeps a list of files, jobs, and volumes
+in a database called the catalog. Among other things, the database helps
+Bacula to decide which files to back up in an incremental or differential
+backup, and helps you locate files on past backups when you want to restore
+something. However, the catalog will grow larger and larger as time goes
+on, and eventually it can become unacceptably large.
+
+Bacula's process for removing entries from the catalog is called Pruning.
+The default is Automatic Pruning, which means that once an entry reaches a
+certain age (e.g. 30 days old) it is removed from the catalog. Once a job
+has been pruned, you can still restore it from the backup tape, but one
+additional step is required: scanning the volume with bscan. The
+alternative to Automatic Pruning is Manual Pruning, in which you explicitly
+tell Bacula to erase the catalog entries for a volume. You'd usually do
+this when you want to reuse a Bacula volume, because there's no point in
+keeping a list of files that USED TO BE on a tape. Or, if the catalog is
+starting to get too big, you could prune the oldest jobs to save space.
+Manual pruning is done with the \ilink{prune command}{ManualPruning} in
+the console. (Thanks to Bryce Denney for the above explanation.)
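+
+For example, from the Console, manual pruning might look like the
+following (the client and Volume names here are invented):
+
+\footnotesize
+\begin{verbatim}
+prune files client=foobaz-fd
+prune jobs client=foobaz-fd
+prune volume=Vol-0001
+\end{verbatim}
+\normalsize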
+
+\subsection*{Pruning Directives}
+\index[general]{Pruning Directives}
+\index[general]{Directives!Pruning}
+\addcontentsline{toc}{subsection}{Pruning Directives}
There are three pruning durations. All apply to catalog database records and
not to the actual data in a Volume. The pruning (or retention) durations are
Busy, or Cleaning, the Volume status will not be changed to Purged.
\item [Volume Retention = \lt{}time-period-specification\gt{}]
- \index[console]{Volume Retention }
- The Volume Retention record defines the length of time that Bacula will
- guarantee that the Volume is not reused counting from the time the last job
- stored on the Volume terminated.
-
- When this time period expires, and if {\bf AutoPrune} is set to {\bf yes},
- and a new Volume is needed, but no appendable Volume is available, Bacula
- will prune (remove) Job records that are older than the specified Volume
- Retention period.
-
- The Volume Retention period takes precedence over any Job Retention period
- you have specified in the Client resource. It should also be noted, that the
- Volume Retention period is obtained by reading the Catalog Database Media
- record rather than the Pool resource record. This means that if you change
- the VolumeRetention in the Pool resource record, you must ensure that the
- corresponding change is made in the catalog by using the {\bf update pool}
- command. Doing so will insure that any new Volumes will be created with the
- changed Volume Retention period. Any existing Volumes will have their own
- copy of the Volume Retention period that can only be changed on a Volume by
- Volume basis using the {\bf update volume} command.
+ \index[console]{Volume Retention}
+ The Volume Retention record defines the length of time that Bacula will
+ guarantee that the Volume is not reused counting from the time the last
+ job stored on the Volume terminated. A key point is that this time
+ period is not even considered as long as the Volume remains appendable.
+ The Volume Retention period countdown begins only when the Append
+ status has been changed to some other status (Full, Used, Purged, ...).
+
+ When this time period expires, and if {\bf AutoPrune} is set to {\bf
+ yes}, and a new Volume is needed, but no appendable Volume is available,
+ Bacula will prune (remove) Job records that are older than the specified
+ Volume Retention period.
+
+ The Volume Retention period takes precedence over any Job Retention
+ period you have specified in the Client resource. It should also be
+ noted, that the Volume Retention period is obtained by reading the
+ Catalog Database Media record rather than the Pool resource record.
+ This means that if you change the VolumeRetention in the Pool resource
+ record, you must ensure that the corresponding change is made in the
+ catalog by using the {\bf update pool} command. Doing so will ensure
+ that any new Volumes will be created with the changed Volume Retention
+ period. Any existing Volumes will have their own copy of the Volume
+ Retention period that can only be changed on a Volume by Volume basis
+ using the {\bf update volume} command.
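+
+ For example, from the Console (the Pool and Volume names are
+ invented), the first command below pushes the changed Pool defaults
+ into the catalog, and the second adjusts the copy held by an existing
+ Volume:
+
+\footnotesize
+\begin{verbatim}
+update pool=Default
+update volume=Vol-0001
+\end{verbatim}
+\normalsize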
When all file catalog entries are removed from the volume, its VolStatus is
set to {\bf Purged}. The files remain physically on the Volume until the
\item [Recycle = \lt{}yes|no\gt{}]
\index[fd]{Recycle }
- This statement tells Bacula whether or not the particular Volume can be
- recycled (i.e. rewritten). If Recycle is set to {\bf no} (the default), then
- even if Bacula prunes all the Jobs on the volume and it is marked {\bf
- Purged}, it will not consider the tape for recycling. If Recycle is set to
- {\bf yes} and all Jobs have been pruned, the volume status will be set to
- {\bf Purged} and the volume may then be reused when another volume is needed.
- If the volume is reused, it is relabeled with the same Volume Name, however
- all previous data will be lost.
+ This statement tells Bacula whether or not the particular Volume can be
+ recycled (i.e. rewritten). If Recycle is set to {\bf no} (the
+ default), then even if Bacula prunes all the Jobs on the volume and it
+ is marked {\bf Purged}, it will not consider the tape for recycling. If
+ Recycle is set to {\bf yes} and all Jobs have been pruned, the volume
+ status will be set to {\bf Purged} and the volume may then be reused
+ when another volume is needed. If the volume is reused, it is relabeled
+ with the same Volume Name, however all previous data will be lost.
\end{description}
It is also possible to "force" pruning of all Volumes in the Pool
and if the {\bf Recycle} flag is on (Recycle=yes) for that Volume, Bacula will
relabel it and write new data on it.
+As mentioned above, there are two key points for getting a Volume
+to be recycled. First, the Volume must no longer be marked Append (there
+are a number of directives to automatically make this change), and
+second, since the last write on the Volume, one or more of the Retention
+periods must have expired so that no catalog backup Job records still
+reference that Volume. Once both those conditions are satisfied, the
+Volume can be marked Purged and hence recycled.
+
The full algorithm that Bacula uses when it needs a new Volume is:
\begin{itemize}
\subsection*{Making Bacula Use a Single Tape}
\label{singletape}
-\index[general]{Tape!Making Bacula Use a Single }
-\index[general]{Making Bacula Use a Single Tape }
+\index[general]{Tape!Making Bacula Use a Single}
+\index[general]{Making Bacula Use a Single Tape}
\addcontentsline{toc}{subsection}{Making Bacula Use a Single Tape}
Most people will want Bacula to fill a tape and when it is full, a new tape
\item
\ilink{I'm building my own rpms but on all platforms and compiles I get an
unresolved dependency for something called
-/usr/afsws/bin/pagsh.}{faq5}
+ /usr/afsws/bin/pagsh.}{faq5}
\end{enumerate}
\subsection*{Answers}
\item
\label{faq1}
{\bf How do I build Bacula for platform xxx?}
-The bacula spec file contains defines to build for several platforms: RedHat
-7.x (rh7), RedHat 8.0 (rh8), RedHat 9 (rh9), Fedora Core (fc1), Whitebox
-Enterprise Linux (RHEL) 3.0 (wb3), Mandrake 10.x (mdk) and SuSE 9.x (su9).
-The package build is controlled by a mandatory define set at the beginning of
-the file. These defines basically just control the dependency information that
- gets coded into the finished rpm package.
-The platform define may be edited in the spec file directly (by default all
-defines are set to 0 or "not set"). For example, to build the RedHat 7.x
-package find the line in the spec file which reads
+ The bacula spec file contains defines to build for several platforms:
+ RedHat 7.x (rh7), RedHat 8.0 (rh8), RedHat 9 (rh9), Fedora Core (fc1,
+ fc3, fc4), Whitebox Enterprise Linux (RHEL) 3.0 (wb3), Mandrake 10.x
+ (mdk) and SuSE 9.x (su9). The package build is controlled by a
+ mandatory define set at the beginning of the file. These defines
+ basically just control the dependency information that gets coded into
+ the finished rpm package. The platform define may be edited in the spec
+ file directly (by default all defines are set to 0 or "not set"). For
+ example, to build the RedHat 7.x package find the line in the spec file
+ which reads
\footnotesize
\begin{verbatim}
\item
\label{faq2}
{\bf How do I control which database support gets built?}
-Another mandatory build define controls which database support is compiled,
-one of build\_sqlite, build\_mysql or build\_postgresql. To get the MySQL
-package and support either set the
+ Another mandatory build define controls which database support is compiled,
+ one of build\_sqlite, build\_mysql or build\_postgresql. To get the MySQL
+ package and support either set the
\footnotesize
\begin{verbatim}
\item
\label{faq3}
{\bf What other defines are used?}
-Two other building defines of note are the depkgs\_version and tomsrtbt
-identifiers. These two defines are set with each release and must match the
-version of those sources that are being used to build the packages. You would
-not ordinarily need to edit these.
+ Two other building defines of note are the depkgs\_version and tomsrtbt
+ identifiers. These two defines are set with each release and must match the
+ version of those sources that are being used to build the packages. You would
+ not ordinarily need to edit these.
\item
\label{faq4}
{\bf I'm getting errors about not having permission when I try to build the
-packages. Do I need to be root?}
-No, you do not need to be root and, in fact, it is better practice to build
-rpm packages as a non-root user. Bacula packages are designed to be built by
-a regular user but you must make a few changes on your system to do this. If
-you are building on your own system then the simplest method is to add write
-permissions for all to the build directory (/usr/src/redhat/). To accomplish
-this, execute the following command as root:
+ packages. Do I need to be root?}
+ No, you do not need to be root and, in fact, it is better practice to
+ build rpm packages as a non-root user. Bacula packages are designed to
+ be built by a regular user but you must make a few changes on your
+ system to do this. If you are building on your own system then the
+ simplest method is to add write permissions for all to the build
+ directory (/usr/src/redhat/). To accomplish this, execute the following
+ command as root:
\footnotesize
\begin{verbatim}
\end{verbatim}
\normalsize
-If you are working on a shared system where you can not use the method above
-then you need to recreate the /usr/src/redhat directory tree with all of its
-subdirectories inside your home directory. Then create a file named {\tt
-.rpmmacros} in your home directory (or edit the file if it already exists)
+If you are working on a shared system where you cannot use the method
+above then you need to recreate the /usr/src/redhat directory tree with all
+of its subdirectories inside your home directory. Then create a file named
+
+{\tt .rpmmacros}
+
+in your home directory (or edit the file if it already exists)
and add the following line:
\footnotesize
\item
\label{faq5}
- {\bf I'm building my own rpms but on all platforms and compiles I get an
-unresolved dependency for something called /usr/afsws/bin/pagsh.}
-This is a shell from the OpenAFS (Andrew File System). If you are seeing this
-then you chose to include the docs/examples directory in your package. One of
-the example scripts in this directory is a pagsh script. Rpmbuild, when
-scanning for dependencies, looks at the shebang line of all packaged scripts
-in addition to checking shared libraries. To avoid this do not package the
-examples directory.
+ {\bf I'm building my own rpms but on all platforms and compiles I get an
+ unresolved dependency for something called /usr/afsws/bin/pagsh.} This
+ is a shell from the OpenAFS (Andrew File System). If you are seeing
+ this then you chose to include the docs/examples directory in your
+ package. One of the example scripts in this directory is a pagsh
+ script. Rpmbuild, when scanning for dependencies, looks at the shebang
+ line of all packaged scripts in addition to checking shared libraries.
+ To avoid this do not package the examples directory.
\end{enumerate}
\item {\bf Support for RHEL4, CentOS 4 and x86\_64}
-The examples below
-explicit build support for RHEL4 (I think) and CentOS 4. Build support
-for x86_64 has also been added. Test builds have been done on CentOS but
-not RHEL4.
+ The examples below add
+ explicit build support for RHEL4 (I think) and CentOS 4. Build support
+ for x86\_64 has also been added. Test builds have been done on CentOS but
+ not RHEL4.
\footnotesize
\begin{verbatim}
rpmbuild --rebuild \
--define "build_rhel4 1" \
--define "build_sqlite 1" \
- bacula-1.36.2-4.src.rpm
+ bacula-1.38.3-1.src.rpm
rpmbuild --rebuild \
--define "build_rhel4 1" \
--define "build_postgresql 1" \
- bacula-1.36.2-4.src.rpm
+ bacula-1.38.3-1.src.rpm
rpmbuild --rebuild \
--define "build_rhel4 1" \
--define "build_mysql 1" \
--define "build_mysql4 1" \
- bacula-1.36.2-4.src.rpm
+ bacula-1.38.3-1.src.rpm
For CentOS substitute '--define "build_centos4 1"' in place of rhel4.
\end{verbatim}
\normalsize
-
\item You can restrict what IP addresses Bacula will bind to by using the
appropriate {\bf DirAddress}, {\bf FDAddress}, or {\bf SDAddress} records in
the respective daemon configuration files.
+\item Be aware that if you are backing up your database using the default
+ script and you have a password on your database, the password will be
+ passed as a command line option to that script, and any user will be
+ able to see this information. If you want it to be secure, you will
+ need to pass it via an environment variable or a secure file, as in
+ the sketch below.
\end{itemize}
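+
+As a hedged sketch of one such approach (the wrapper script path, the
+password file, and the use of the MYSQL\_PWD environment variable are
+assumptions for illustration; the stock catalog backup script may
+differ between versions):
+
+\footnotesize
+\begin{verbatim}
+# bacula-dir.conf: no password on the command line
+RunBeforeJob = "/etc/bacula/my_catalog_backup"
+
+# /etc/bacula/my_catalog_backup (mode 0700):
+#!/bin/sh
+# mysqldump reads the password from MYSQL_PWD instead of argv
+MYSQL_PWD=`cat /etc/bacula/.db_password`
+export MYSQL_PWD
+mysqldump -u bacula bacula > /var/lib/bacula/bacula.sql
+\end{verbatim}
+\normalsize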
and even if the catalog contains references to files saved in file 11,
everything will be OK and nothing will be lost. Note that if the SD had
written several file marks to the volume, the difference between the Volume
-cound and the Catalog count could be larger than one, but this is unusual.
+count and the Catalog count could be larger than one, but this is unusual.
If on the other hand the catalog is marked as having more files than Bacula
found on the tape, you need to consider the possible negative consequences of
replaced it with a fresh tape that I labeled as DLT-28Jun03, thus assuring
myself that the backups would all complete without my intervention.
-\subsection*{How to Excude File on Windows Regardless of Case}
+\subsection*{How to Exclude Files on Windows Regardless of Case}
\label{Case}
-\index[general]{How to Excude File on Windows Regardless of Case }
-\index[general]{Case!How to Excude File on Windows Regardless of }
-\addcontentsline{toc}{subsection}{How to Excude File on Windows Regardless of
+\index[general]{How to Exclude Files on Windows Regardless of Case}
+\index[general]{Case!How to Exclude Files on Windows Regardless of}
+\addcontentsline{toc}{subsection}{How to Exclude Files on Windows Regardless of
Case}
This tip was submitted by Marc Brueckner who wasn't sure of the case of some
by the addition of {\bf ClientRunBeforeJob} and {\bf ClientRunAfterJob} Job
records, but the technique still could be useful.) First I thought the "Run
Before Job" statement in the Job-resource is for executing a script on the
-remote machine(the machine to be backed up). It could be usefull to execute
+remote machine (the machine to be backed up). It could be useful to execute
scripts on the remote machine e.g. for stopping databases or other services
while doing the backup. (Of course I have to start the services again when the
backup has finished) I found the following solution: Bacula could execute
\end{verbatim}
\normalsize
-The above should be all on one line, and it effectivly tells Bacula that
+The above should be all on one line, and it effectively tells Bacula that
volume "VOL-0001" is located in slot 1 (which is our lowest slot), that
volume "VOL-0002" is located in slot 2 and so on..
The script also maintains a logfile (/var/log/mtx.log) where you can monitor
time.
For example, if you want two different jobs to run simultaneously backing up
-the same Client to the same Storage device, they will run concurrentl only if
+the same Client to the same Storage device, they will run concurrently only if
you have set {\bf Maximum Concurrent Jobs} greater than one in the Director
resource, the Client resource, and the Storage resource in bacula-dir.conf.
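+
+As an illustrative sketch (the resource names are examples only), the
+directive appears in all three resources of bacula-dir.conf:
+
+\footnotesize
+\begin{verbatim}
+Director {
+  Name = foobaz-dir
+  Maximum Concurrent Jobs = 2
+  ...
+}
+Client {
+  Name = foobaz-fd
+  Maximum Concurrent Jobs = 2
+  ...
+}
+Storage {
+  Name = File1
+  Maximum Concurrent Jobs = 2
+  ...
+}
+\end{verbatim}
+\normalsize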
-1.38.3 (12 December 2005)
+1.38.3 (22 December 2005)