http://www.postgresql.org/docs/faqs.FAQ.html\#3.3}
{http://www.postgresql.org/docs/faqs.FAQ.html\#3.3}.
-Also for PostgreSQL, look at what "effective_cache_size". For a 2GB memory
+Also for PostgreSQL, look at the "effective\_cache\_size" setting. For a 2GB memory
machine, you probably want to set it at 131072, but don't set it too high.
-In addition, for a 2GB system, work_mem = 256000 and
-maintenance_work_mem = 256000 seem to be reasonable values. Make
-sure your checkpoint_segments is set to at least 8.
+In addition, for a 2GB system, work\_mem = 256000 and
+maintenance\_work\_mem = 256000 seem to be reasonable values. Make
+sure your checkpoint\_segments is set to at least 8.
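+
+As a rough sketch, and assuming a PostgreSQL version of that era (where
+work\_mem is expressed in kilobytes and effective\_cache\_size in 8KB
+pages), these suggestions correspond to {\bf postgresql.conf} entries
+like:
+
+\begin{verbatim}
+# postgresql.conf tuning sketch for a 2GB machine
+effective_cache_size = 131072   # 8KB pages, about 1GB
+work_mem = 256000               # kilobytes, about 250MB
+maintenance_work_mem = 256000   # kilobytes, about 250MB
+checkpoint_segments = 8         # 16MB WAL segments each
+\end{verbatim}
+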
# This schedule does the catalog. It starts after the WeeklyCycle
Schedule {
Name = "WeeklyCycleAfterBackup
- Run = Full sun-sat at 1:10
+ Run = Level=Full sun-sat at 1:10
}
# This is the backup of the catalog
FileSet {
\index[general]{General }
\addcontentsline{toc}{subsection}{General}
-We recommend you take your time before implementing a Bacula backup system
-since Bacula is a rather complex program, and if you make a mistake, you may
-suddenly find that you cannot restore your files in case of a disaster.
-This is especially true if you have not previously used a major backup
-product.
+We recommend you take your time before implementing a production Bacula
+backup system since Bacula is a rather complex program, and if you make a
+mistake, you may suddenly find that you cannot restore your files in case
+of a disaster. This is especially true if you have not previously used a
+major backup product.
If you follow the instructions in this chapter, you will have covered most of
the major problems that can occur. It goes without saying that if you ever
equivalent experience, and that you have set up a basic production
configuration. If you haven't done the above, please do so and then come back
here. The following is a sort of checklist of points, with perhaps a brief
-explanation of why you should do it. You will find the details elsewhere in the
-manual. The order is more or less the order you would use in setting up a
-production system (if you already are in production, use the checklist anyway).
+explanation of why you should do it. In most cases, you will find the
+details elsewhere in the manual. The order is more or less the order you
+would use in setting up a production system (if you already are in
+production, use the checklist anyway).
\begin{itemize}
\item Test your tape drive for compatibility with Bacula by using the test
\item If you are using a 2.4 kernel, make sure that /lib/tls is disabled. Bacula
does not work with this library. See the second point under
\ilink{Supported Operating Systems}{SupportedOSes}.
-\item Do at least one restore of files. If you backup both Unix and Win32
- systems, restore files from each system type. The
+\item Do at least one restore of files. If you back up multiple OS types
+ (Linux, Solaris, HP, MacOS, FreeBSD, Win32, ...),
+ restore files from each system type. The
\ilink{Restoring Files}{_ChapterStart13} chapter shows you how.
\item Write a bootstrap file to a separate system for each backup job. The
Write Bootstrap directive is described in the
CDROM}{_ChapterRescue} chapter. It is trivial to make such a CDROM,
and it can make system recovery in the event of a lost hard disk infinitely
easier.
+\item Bacula assumes all filenames are in UTF-8 format. This is important
+  when saving the filenames to the catalog. For Win32 machines, Bacula will
+  automatically convert from Unicode to UTF-8, but on Unix, Linux, *BSD,
+  and MacOS X machines, you must explicitly ensure that your locale is set
+  properly. Typically this means that the {\bf LANG} environment variable
+  must end in {\bf .UTF-8}. A full example is {\bf en\_US.UTF-8}. The
+  exact syntax may vary a bit from OS to OS, as does exactly where you
+  define it (see the sketch after this list).
\end{itemize}
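+
+A minimal sketch of checking and setting a UTF-8 locale on a Unix-like
+system (the locale name {\bf en\_US.UTF-8} is only an example; pick one
+that your system actually provides):
+
+\begin{verbatim}
+# List the UTF-8 locales available on this system
+locale -a | grep -i utf
+# Set a UTF-8 locale in the environment of the Bacula daemons (bash)
+export LANG=en_US.UTF-8
+\end{verbatim}
+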
\subsection*{Recommended Items}
./configure --with-openssl ...
\end{verbatim}
+\subsection*{Encryption Technical Details}
+\index[general]{Encryption Technical Details}
+\addcontentsline{toc}{subsection}{Encryption Technical Details}
+The implementation uses 128-bit AES-CBC, with RSA-encrypted symmetric
+session keys. The RSA key is user-supplied.
+If you are running OpenSSL 0.9.8 or later, the signed file hash uses
+SHA-256 -- otherwise, SHA-1 is used.
+
+End-user configuration settings for the algorithms are not currently
+exposed -- only the algorithms listed above are used. However, the
+data written to the Volume supports arbitrary symmetric, asymmetric, and
+digest algorithms for future extensibility, and the back-end
+implementation currently supports:
+
+\begin{verbatim}
+Symmetric Encryption:
+ - 128, 192, and 256-bit AES-CBC
+ - Blowfish-CBC
+
+Asymmetric Encryption (used to encrypt symmetric session keys):
+ - RSA
+
+Digest Algorithms:
+ - MD5
+ - SHA1
+ - SHA256
+ - SHA512
+\end{verbatim}
+
+The various algorithms are exposed via an entirely re-usable,
+OpenSSL-agnostic API (i.e., it is possible to drop in a new encryption
+backend). The Volume format is DER-encoded ASN.1, modeled after the
+Cryptographic Message Syntax from RFC 3852. Unfortunately, using CMS
+directly was not possible, as at the time of coding a free software
+streaming DER decoder/encoder was not available.
\subsection*{Generating Private/Public Encryption Keypairs}
\index[dir]{Write Bootstrap}
\index[dir]{Directive!Write Bootstrap}
The {\bf writebootstrap} directive specifies a file name where Bacula
- will write a {\bf bootstrap} file for each Backup job run. Thus this
+ will write a {\bf bootstrap} file for each Backup job run. This
directive applies only to Backup Jobs. If the Backup job is a Full
save, Bacula will erase any current contents of the specified file
before writing the bootstrap records. If the Job is an Incremental
+ or Differential
save, Bacula will append the current bootstrap record to the end of the
file.
specified should be a mounted drive on another machine, so that if your
hard disk is lost, you will immediately have a bootstrap record
available. Alternatively, you should copy the bootstrap file to another
- machine after it is updated.
+   machine after it is updated. Note, it is a good idea to write a separate
+   bootstrap file for each Job backed up, including the job that backs up
+   your catalog database.
If the {\bf bootstrap-file-specification} begins with a vertical bar
(|), Bacula will use the specification as the name of a program to which
On versions 1.39.22 or greater, before opening the file or executing the
specified command, Bacula performs
\ilink{character substitution}{character substitution} like in RunScript
- directive. To manage automatically yours bootstrap files, you can use
- this in your {\bf JobDefs} :
+ directive. To automatically manage your bootstrap files, you can use
+ this in your {\bf JobDefs} resources:
\begin{verbatim}
JobDefs {
Write Bootstrap = "%c_%n.bsr"
wasting too much space, but to ensure that the data is written to the
medium when all jobs are finished.
- It is ignored with tape and FIFO devices.
+ This directive is ignored with tape and FIFO devices.
\end{description}
The following is an example of a valid Job resource definition:
# When to do the backups
Schedule {
Name = "WeeklyCycle"
- Run = Full sun at 1:05
- Run = Incremental mon-sat at 1:05
+  Run = Level=Full sun at 1:05
+  Run = Level=Incremental mon-sat at 1:05
}
# Client (File Services) to backup
Client {
\addcontentsline{toc}{section}{Bacula Frequently Asked Questions}
These are questions that have been submitted over time by the
-Bacula users.
+Bacula users. The following
+FAQ is very useful, but it is not always up to date,
+so after reading it, if you don't find what you
+want, you might try the following wiki maintained by Frank Sweetser, which
+contains more than just a FAQ:
+\elink{http://paramount.ind.wpi.edu/wiki/}{http://paramount.ind.wpi.edu/wiki/}
+or go directly to his FAQ at:
+\elink{http://paramount.ind.wpi.edu/wiki/doku.php?id=faq}{http://paramount.ind.wpi.edu/wiki/doku.php?id=faq}.
Please also see
\ilink{the bugs section}{_ChapterStart4} of this document for a list
\index[general]{On what machines does Bacula run? }
{\bf Bacula} builds and executes on RedHat Linux (versions RH7.1-RHEL
4.0, Fedora, SuSE, Gentoo, Debian, Mandriva, ...), FreeBSD, Solaris,
- Alpha, SGI (client), NetBSD, OpenBSD, Mac OS X (client), and Win32
- (client).
+ Alpha, SGI (client), NetBSD, OpenBSD, Mac OS X (client), and Win32.
- Bacula has been my only backup tool for over five years backing up 7
- machines nightly (5 Linux boxes running Fedora Core, previously
- RedHat, a WinXP machine, and a WinNT machine).
+ Bacula has been my only backup tool for over seven years backing up 8
+ machines nightly (6 Linux boxes running SuSE, previously
+ RedHat and Fedora, a WinXP machine, and a WinNT machine).
\label{stable}
\index[general]{Is Bacula Stable? }
Yes, it is remarkably stable, but remember, there are still a lot of
unimplemented or partially implemented features. With a program of this
- size (140,000+ lines of C++ code not including the SQL programs) there
+ size (150,000+ lines of C++ code not including the SQL programs) there
are bound to be bugs. The current test environment (a twisted pair
local network and an HP DLT backup tape) is not exactly ideal, so
additional testing on other sites is necessary. The File daemon has
never crashed -- running months at a time with no intervention. The
Storage daemon is remarkably stable with most of the problems arising
- during labeling or switching tapes. Storage daemon crashes are rare.
+  during labeling or switching tapes. Storage daemon crashes are rare,
+  but running multiple drives and simultaneous jobs sometimes (rarely)
+  causes problems.
The Director, given the multitude of functions it fulfills, is also
relatively stable. In a production environment, it rarely if ever
crashes. Of the three daemons, the Director is the most prone to having
There are a number of reasons for this stability.
\begin{enumerate}
- \item The program was largely written by one person to date
- (Kern).\\
\item The program is constantly checking the chain of allocated
memory buffers to ensure that no overruns have occurred. \\
\item All memory leaks (orphaned buffers) are reported each time the
in all respects with the program defined here.
\label{docversion}
-\subsection*{Why is Your Online Document for Version 1.37 but the Released Version is 1.36?}
-\item [Why is Your Online Document for Version 1.37 of Bacula when the
- Currently Release Version is 1.36?]
+\subsection*{Why is Your Online Document for Version 1.39 when the Released Version is 1.38?}
+\item [Why is Your Online Document for Version 1.39 of Bacula when the
+  Currently Released Version is 1.38?]
\index[general]{Multiple manuals}
-As Bacula is being developed, the document is also being enhanced, more often
-than not it has clarifications of existing features that can be very useful
-to our users, so we publish the very latest document. Fortunately it is rare
-that there are confusions with new features.
+As Bacula is being developed, the document is also being enhanced. More
+often than not, it has clarifications of existing features that can be very
+useful to our users, so we publish the very latest document. Fortunately,
+it is rare that new features cause confusion.
If you want to read a document that pertains only to a specific version,
-please use the one distributed in the source code.
+please use the one distributed in the source code. The web site also has
+online versions of both the released manual and the current development
+manual.
\label{sure}
\subsection*{Does Bacula really save and restore all files?}
other destinations such as {\bf append} you can ensure that even if the
Job terminates normally, the output information is saved.
+\item [mail on success]
+ \index[fd]{mail on success}
+  Send the message to the email addresses that are given as a
+  comma-separated list in the {\bf address} field if the Job terminates
+  normally (no error condition). MailOnSuccess messages are grouped
+  together during a job and then sent as a single email message when the
+  job terminates. This destination differs from the {\bf mail}
+  destination in that if the Job terminates abnormally, the message is
+  totally discarded (for this destination). If the Job terminates
+  normally, it is emailed.
+
+
\item [operator]
\index[fd]{operator}
Send the message to the email addresses that are specified as a comma
\item a Volume
\item a Client
\item a regular expression matching a Job, Volume, or Client name
-\item the time a Job is on a Volume
+\item the time a Job has been on a Volume
\item high and low water marks (usage or occupation) of a Pool
\item Volume size
\end{itemize}
to a Backup Job but with {\bf Type = Migrate} instead of {\bf Type =
Backup}. One of the key points to remember is that the Pool that is
specified for the migration job is the only pool from which jobs will
-be migrated, with one exception noted below. Also, Bacula permits pools
-to contain Volumes with different Media Types. However, when doing
-migration, this is a very undesirable condition. For migration to work
-properly, you should use pools containing only Volumes of the same
-Media Type for all migration jobs.
+be migrated, with one exception noted below. In addition, the Pool to
+which the selected Job or Jobs will be migrated is defined by the {\bf
+Next Pool = ...} directive in the Pool resource specified for the Migration Job.
+
+Bacula permits pools to contain Volumes with different Media Types.
+However, when doing migration, this is a very undesirable condition. For
+migration to work properly, you should use pools containing only Volumes of
+the same Media Type for all migration jobs.
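+
+For example, a minimal sketch of a Pool set up for migration might look
+like this (the names are hypothetical; the complete example later in this
+chapter shows the full picture):
+
+\begin{verbatim}
+Pool {
+  Name = File              # Pool whose Jobs are candidates for migration
+  Pool Type = Backup
+  Storage = FileStorage    # where Volumes of this Pool are written
+  Next Pool = Tape         # Pool into which the Jobs will be migrated
+}
+\end{verbatim}
+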
The migration job normally is either manually started or starts
from a Schedule much like a backup job. It searches
If the Migration control job finds a number of JobIds to migrate (e.g.
it is asked to migrate one or more Volumes), it will start one new
migration backup job for each JobId found on the specified Volumes.
+Please note that Migration doesn't scale too well since Migrations are
+done on a Job-by-Job basis. Thus, if you select a very large volume or
+a number of volumes for migration, you may have a large number of
+Jobs that start. Because each job must read the same Volume, they will
+run consecutively (not simultaneously).
\subsection*{Migration Job Resource Directives}
\addcontentsline{toc}{section}{Migration Job Resource Directives}
particularly important because it determines what Pool will be examined for
finding JobIds to migrate. The exception to this is when {\bf Selection
Type = SQLQuery}, in which case no Pool is used, unless you
- specifically include it in the SQL query.
+ specifically include it in the SQL query. Note, the Pool resource
+ referenced must contain a {\bf Next Pool = ...} directive to define
+ the Pool to which the data will be migrated.
\item [Type = Migrate]
{\bf Migrate} is a new type that defines the job that is run as being a
\item [Next Pool = \lt{}pool-specification\gt{}]
The Next Pool directive specifies the pool to which Jobs will be
- migrated.
+ migrated. This directive is required to define the Pool into which
+ the data will be migrated. Without this directive, the migration job
+ will terminate in error.
\item [Storage = \lt{}storage-specification\gt{}]
The Storage directive specifies what Storage resource will be used
for all Jobs that use this Pool. It takes precedence over any other
Storage specifications that may have been given such as in the
- Schedule Run directive, or in the Job resource.
+ Schedule Run directive, or in the Job resource. We highly recommend
+ that you define the Storage resource to be used in the Pool rather
+ than elsewhere (job, schedule run, ...).
\end{description}
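+
+Putting these directives together, a rough sketch of a Migration control
+job might look like the following (the names are hypothetical; see the
+full example later in this chapter):
+
+\begin{verbatim}
+Job {
+  Name = "migrate-file-to-tape"
+  Type = Migrate
+  Pool = File                 # Pool examined for JobIds to migrate;
+                              #   it must define Next Pool = ...
+  Selection Type = Volume
+  Selection Pattern = "File"  # regex applied to Volume names
+  Client = rufus-fd
+  FileSet = "Full Set"
+  Messages = Standard
+}
+\end{verbatim}
+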
\subsection*{Important Migration Considerations}
\item Migration is done only when you run a Migration job. If you set a
Migration High Bytes and that number of bytes is exceeded in the Pool,
no migration job will automatically start. You must schedule the
- migration jobs yourself.
+ migration jobs, and they must run for any migration to take place.
\item If you migrate a number of Volumes, a very large number of Migration
jobs may start.
will never find. In general, ensure that all your migration
pools contain only one Media Type, and that you always
migrate to pools with different Media Types.
+
+\item The {\bf Next Pool = ...} directive must be defined in the Pool
+ referenced in the Migration Job to define the Pool into which the
+ data will be migrated.
+
+\item Pay particular attention to the fact that data is migrated on a
+      Job-by-Job basis, and for any particular Volume, only one Job can read
+      that Volume at a time (no simultaneous read), so migration jobs that
+      all reference the same Volume will run sequentially. This can be a
+      potential bottleneck and does not scale very well to large numbers
+      of jobs.
\end{itemize}
because the values from the original job are used instead.
As an example, suppose you have the following Job that
-you run every night:
+you run every night. Note that there is no Storage directive in the
+Job resource; there is a Storage directive in each of the Pool
+resources; and the Pool to be migrated (File) contains a Next Pool
+directive that defines the output Pool (where the data is written
+by the migration job).
\footnotesize
\begin{verbatim}
Client = rufus-fd
FileSet = "Full Set"
Messages = Standard
- Storage = DLTDrive
Pool = Default
Maximum Concurrent Jobs = 4
Selection Type = Volume
Client = rufus-fd
FileSet="Full Set"
Messages = Standard
- Storage = DLTDrive
Pool = Default
Maximum Concurrent Jobs = 4
Selection Type = Job
}
Schedule {
Name = "WeeklyCycle"
- Run = Full 1st sun at 2:05
- Run = Differential 2nd-5th sun at 2:05
- Run = Incremental mon-sat at 2:05
+ Run = Level=Full 1st sun at 2:05
+ Run = Level=Differential 2nd-5th sun at 2:05
+ Run = Level=Incremental mon-sat at 2:05
}
# This schedule does the catalog. It starts after the WeeklyCycle
Schedule {
Name = "WeeklyCycleAfterBackup"
- Run = Full sun-sat at 2:10
+ Run = Level=Full sun-sat at 2:10
}
# This is the backup of the catalog
have filenames that are not encoded in UTF8, either because you have
not set UTF8 as your default character set or because you have imported
files from elsewhere (e.g. MacOS X). For this reason, Bacula uses
- SQL_ASCII as the default encoding. If you want to change this,
+ SQL\_ASCII as the default encoding. If you want to change this,
please modify the script before running it.
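+
+As a sketch, the line to look for in the script is something like the
+following (an assumption for illustration; the details vary between
+Bacula versions):
+
+\begin{verbatim}
+CREATE DATABASE bacula ENCODING 'SQL_ASCII';
+\end{verbatim}
+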
If running the script fails, it is probably because the database is
Data spooling is exactly that: "spooling". It is not a way to first write a
"backup" to a disk file and then to a tape. When the backup has only been
spooled to disk, it is not complete yet and cannot be restored until it is
-written to tape. In a future version, Bacula will support writing a backup
-to disk then later {\bf Migrating} or {\bf Copying} it to a tape.
+written to tape.
+
+Bacula version 1.39.x and later supports writing a backup
+to disk then later {\bf Migrating} or moving it to a tape (or any
+other medium). For
+details on this, please see the \ilink{Migration}{MigrationChapter} chapter
+of this manual.
The remainder of this chapter explains the various directives that you can use
in the spooling process.
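+
+As a quick preview (a sketch; the directives are described below), turning
+on data spooling for a particular Job is typically just:
+
+\begin{verbatim}
+Job {
+  ...
+  Spool Data = yes
+}
+\end{verbatim}
+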
-\label{directives}
+\label{directives}
\subsection*{Data Spooling Directives}
\index[general]{Directives!Data Spooling }
\index[general]{Data Spooling Directives }
file. Breaking long sequences of data blocks with file marks permits
quicker positioning to the start of a given stream of data and can
improve recovery from read errors on the volume. The default is one
- Gigabyte.
+  Gigabyte. This directive creates EOF marks only on tape media.
+  However, regardless of the medium type (tape, disk, DVD, ...), each time
+  the Maximum File Size is exceeded, a record is put into the catalog
+  database that permits seeking to that position on the medium for
+  restore operations. If you set this to a small value (e.g. 1MB),
+  you will generate lots of database records (JobMedia) and may
+  significantly increase CPU/disk overhead.
+
+ Note, this directive does not limit the size of Volumes that Bacula
+ will create regardless of whether they are tape or disk volumes. It
+ changes only the number of EOF marks on a tape and the number of
+ block positioning records (see below) that are generated. If you
+ want to limit the size of all Volumes for a particular device, use
+ the {\bf Maximum Volume Size} directive (above), or use the
+ {\bf Maximum Volume Bytes} directive in the Director's Pool resource,
+ which does the same thing but on a Pool (Volume) basis.
\item [Block Positioning = {\it yes|no}]
\index[sd]{Block Positioning}
\index[sd]{Directive!Block Positioning}
- This directive is not normally used (and has not yet been tested). It will
- tell Bacula not to use block positioning when it is reading tapes. This can
- cause Bacula to be {\bf extremely} slow when restoring files. You might use
- this directive if you wrote your tapes with Bacula in variable block mode
- (the default), but your drive was in fixed block mode. If it then works as I
- hope, Bacula will be able to re-read your tapes.
+  If set to {\bf no}, this directive tells Bacula not to use block
+  positioning when doing restores. Turning this directive off can cause
+  Bacula to be {\bf extremely} slow
+  when restoring files. You might use this directive if you wrote your
+  tapes with Bacula in variable block mode (the default), but your drive
+  was in fixed block mode. The default is {\bf yes}.
\item [Maximum Network Buffer Size = {\it bytes}]
\index[sd]{Maximum Network Buffer Size}
\hline {Fedora} & {Overland } & {LTO } & {Overland PowerLoader LTO-2 } & {10-19} & {200/400GB } \\
\hline {FreeBSD 5.4-Stable} & {Overland} & {LTO-2} & {Overland Powerloader tape} & {17} & {100GB } \\
\hline {- } & {Overland} & {LTO } & {Overland Neo2000 LTO } & {26-30} & {100GB } \\
- \hline {- } & {Quantum } & {?? } & {Super Loader } & {??} & {?? } \\
+ \hline {Linux} & {Quantum } & {DLT-S4} & {Superloader 3} & {16} & {800/1600GB } \\
+ \hline {Linux} & {Quantum } & {LTO-2} & {Superloader 3} & {16} & {200/400GB } \\
+ \hline {Linux} & {Quantum } & {LTO-3 } & {PX502 } & {??} & {?? } \\
\hline {FreeBSD 4.9 } & {QUALSTAR TLS-4210 (Qualstar) } & {AIT1: 36GB, AIT2: 50GB all
uncomp } & {QUALSTAR TLS-4210 } & {12} & {AIT1: 36GB, AIT2: 50GB all uncomp }\\
\hline {Linux } & {Skydata } & {DLT } & {ATL-L200 } & {8} & {40/80 } \\
\item [TLS Allowed CN = \lt{}string list\gt{}]
Common name attribute of allowed peer certificates. If this directive is
-specified, all client certificates will be verified against this list.
+specified, all server certificates will be verified against this list. This
+can be used to ensure that only the CA-approved Director may connect.
This directive may be specified more than once. It is not valid in a client
context.
-1.39.29 (28 November 2006)
+1.39.30 (08 December 2006)
\end{verbatim}
\normalsize
+TopView is another program that has been recommended, but it is not a
+standard Win32 program, so you must find and download it from the Internet.
+
\subsection*{Windows Disaster Recovery}
\index[general]{Recovery!Windows Disaster}
\index[general]{Windows Disaster Recovery}