--- /dev/null
+
+\chapter{Data Encryption}
+\label{DataEncryption}
+\index[general]{Data Encryption}
+\index[general]{Encryption!Data}
+
+Bacula permits file data encryption and signing within the File Daemon (or
+Client) prior to sending data to the Storage Daemon. Upon restoration,
+file signatures are validated and any mismatches are reported. At no time
+does the Director or the Storage Daemon have access to unencrypted file
+contents.
+
+
+It is very important to specify what this implementation does NOT
+do:
+\begin{itemize}
+\item There is one important restore problem to be aware of, namely, it's
+  possible for the director to restore new keys or a Bacula configuration
+  file to the client, and thus force later backups to be made with a
+  compromised key and/or with no encryption at all. You can avoid this by
+  not changing the location of the keys in your Bacula File daemon
+  configuration file and by not changing your File daemon keys. If you do
+  change either one, you must ensure that no restore is done that restores
+  the old configuration or the old keys. In general, the worst effect of
+  this will be that you can no longer connect to the File daemon.
+
+\item The implementation does not encrypt file metadata such as file path
+ names, permissions, and ownership. Extended attributes are also currently
+ not encrypted. However, Mac OS X resource forks are encrypted.
+\end{itemize}
+
+Encryption and signing are implemented using RSA private keys coupled with
+self-signed x509 public certificates. This is also sometimes known as PKI
+or Public Key Infrastructure.
+
+Each File Daemon should be given its own unique private/public key pair.
+In addition to this key pair, any number of "Master Keys" may be specified
+-- these are key pairs that may be used to decrypt any backups should the
+File Daemon key be lost. Only the Master Key's public certificate should
+be made available to the File Daemon. Under no circumstances should the
+Master Private Key be shared or stored on the Client machine.
+
+The Master Keys should be backed up to a secure location, such as a CD
+placed in a fire-proof safe or bank safety deposit box. The Master
+Keys should never be kept on the same machine as the Storage Daemon or
+Director if you are worried about an unauthorized party compromising either
+machine and accessing your encrypted backups.
+
+While less critical than the Master Keys, File Daemon Keys are also a prime
+candidate for off-site backups; burn the key pair to a CD and send the CD
+home with the owner of the machine.
+
+NOTE!!! If you lose your encryption keys, backups will be unrecoverable.
+{\bf ALWAYS} store a copy of your master keys in a secure, off-site location.
+
+The basic algorithm used for each backup session (Job) is:
+\begin{enumerate}
+\item The File daemon generates a session key.
+\item The FD encrypts that session key via public-key (RSA) encryption for each
+recipient (the File daemon itself and any Master Keys).
+\item The FD uses that session key to perform symmetric encryption on the data.
+\end{enumerate}
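+
+The following {\bf openssl} commands give a rough, conceptual illustration of
+this scheme; the file names are arbitrary, and Bacula performs the equivalent
+steps internally rather than storing its data in this form:
+
+\footnotesize
+\begin{verbatim}
+ # generate a random symmetric session key (hex-encoded)
+ openssl rand -hex 16 > session.key
+ # encrypt the session key with a recipient's RSA public certificate
+ openssl rsautl -encrypt -certin -inkey fd-example.cert \
+     -in session.key -out session.key.rsa
+ # encrypt the file data with the symmetric session key
+ openssl enc -aes-128-cbc -in somefile -out somefile.enc \
+     -pass file:session.key
+\end{verbatim}
+\normalsize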
+
+
+\section{Building Bacula with Encryption Support}
+\index[general]{Building Bacula with Encryption Support}
+
+The configuration option for enabling OpenSSL encryption support has not changed
+since Bacula 1.38. To build Bacula with encryption support, you will need
+the OpenSSL libraries and headers installed. When configuring Bacula, use:
+
+\begin{verbatim}
+ ./configure --with-openssl ...
+\end{verbatim}
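+
+If configure cannot find OpenSSL, check that both the OpenSSL libraries and
+the development headers are installed. On an RPM-based system, for example,
+you might verify this with the following (the exact package names vary by
+distribution):
+
+\footnotesize
+\begin{verbatim}
+  rpm -q openssl openssl-devel
+\end{verbatim}
+\normalsize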
+
+\section{Encryption Technical Details}
+\index[general]{Encryption Technical Details}
+
+The implementation uses 128-bit AES-CBC, with RSA-encrypted symmetric
+session keys. The RSA key is supplied by the user.
+If you are running OpenSSL 0.9.8 or later, the signed file hash uses
+SHA-256 -- otherwise, SHA-1 is used.
+
+End-user configuration settings for the algorithms are not currently
+exposed -- only the algorithms listed above are used. However, the
+data written to the Volume supports arbitrary symmetric, asymmetric, and
+digest algorithms for future extensibility, and the back-end
+implementation currently supports:
+
+\begin{verbatim}
+Symmetric Encryption:
+ - 128, 192, and 256-bit AES-CBC
+ - Blowfish-CBC
+
+Asymmetric Encryption (used to encrypt symmetric session keys):
+ - RSA
+
+Digest Algorithms:
+ - MD5
+ - SHA1
+ - SHA256
+ - SHA512
+\end{verbatim}
+
+The various algorithms are exposed via an entirely reusable,
+OpenSSL-agnostic API (i.e., it is possible to drop in a new encryption
+backend). The Volume format is DER-encoded ASN.1, modeled after the
+Cryptographic Message Syntax from RFC 3852. Unfortunately, using CMS
+directly was not possible, as at the time of coding a free software
+streaming DER decoder/encoder was not available.
+
+
+\section{Decrypting with a Master Key}
+\index[general]{Decrypting with a Master Key}
+
+It is preferable to retain a secure, non-encrypted copy of the
+client's own encryption keypair. However, should you lose the
+client's keypair, recovery with the master keypair is possible.
+
+You must:
+\begin{itemize}
+\item Concatenate the master private and public key into a single
+  keypair file, for example:
+
+\begin{verbatim}
+  cat master.key master.cert >master.keypair
+\end{verbatim}
+
+\item Set the PKI Keypair statement in your Bacula File daemon configuration file:
+
+\begin{verbatim}
+ PKI Keypair = master.keypair
+\end{verbatim}
+
+\item Start the restore. The master keypair will be used to decrypt
+ the file data.
+
+\end{itemize}
+
+
+\section{Generating Private/Public Encryption Keys}
+\index[general]{Generating Private/Public Encryption Keypairs}
+
+Generate a Master Key Pair with:
+
+\footnotesize
+\begin{verbatim}
+ openssl genrsa -out master.key 2048
+ openssl req -new -key master.key -x509 -out master.cert
+\end{verbatim}
+\normalsize
+
+Generate a File Daemon Key Pair for each FD:
+
+\footnotesize
+\begin{verbatim}
+ openssl genrsa -out fd-example.key 2048
+ openssl req -new -key fd-example.key -x509 -out fd-example.cert
+ cat fd-example.key fd-example.cert >fd-example.pem
+\end{verbatim}
+\normalsize
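+
+Note that a self-signed certificate created with {\bf openssl req -x509} is,
+by default, only valid for a short period (30 days with most OpenSSL
+versions). You will probably want to pass an explicit {\bf -days} value so
+that the certificates do not expire while you still need them, for example:
+
+\footnotesize
+\begin{verbatim}
+  openssl req -new -key master.key -x509 -days 3650 -out master.cert
+  openssl req -new -key fd-example.key -x509 -days 3650 -out fd-example.cert
+\end{verbatim}
+\normalsize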
+
+Note, there seems to be a lot of confusion around the file extensions given
+to these keys. For example, a .pem file can contain all of the following:
+private keys (RSA and DSA), public keys (RSA and DSA), and (x509) certificates.
+It is the default format for OpenSSL. It stores data in Base64-encoded DER format,
+surrounded by ASCII headers, so it is suitable for text mode transfers between
+systems. A .pem file may contain any number of keys, either public or
+private. We use it in cases where there is both a public and a private
+key.
+
+Above, we have used the .cert extension to refer to an X509
+certificate encoding that contains only a single public key.
+
+
+\section{Example Data Encryption Configuration}
+\index[general]{Example!File Daemon Configuration File}
+\index[general]{Example!Data Encryption Configuration File}
+\index[general]{Example Data Encryption Configuration}
+
+{\bf bacula-fd.conf}
+\footnotesize
+\begin{verbatim}
+FileDaemon {
+ Name = example-fd
+ FDport = 9102 # where we listen for the director
+ WorkingDirectory = /var/bacula/working
+ Pid Directory = /var/run
+ Maximum Concurrent Jobs = 20
+
+ PKI Signatures = Yes # Enable Data Signing
+ PKI Encryption = Yes # Enable Data Encryption
+ PKI Keypair = "/etc/bacula/fd-example.pem" # Public and Private Keys
+ PKI Master Key = "/etc/bacula/master.cert" # ONLY the Public Key
+}
+\end{verbatim}
+\normalsize
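+
+After editing {\bf bacula-fd.conf}, it is a good idea to have the File daemon
+check the configuration for errors before restarting it, for example:
+
+\footnotesize
+\begin{verbatim}
+  bacula-fd -t -c /etc/bacula/bacula-fd.conf
+\end{verbatim}
+\normalsize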
--- /dev/null
+
+\chapter{Migration}
+\label{MigrationChapter}
+\index[general]{Migration}
+
+The term Migration, as used in the context of Bacula, means moving data from
+one Volume to another. In particular it refers to a Job (similar to a backup
+job) that reads data that was previously backed up to a Volume and writes
+it to another Volume. As part of this process, the File catalog records
+associated with the first backup job are purged. In other words, Migration
+moves Bacula Job data from one Volume to another by reading the Job data
+from the Volume it is stored on, writing it to a different Volume in a
+different Pool, and then purging the database records for the first Job.
+
+The selection process for which Job or Jobs are migrated
+can be based on quite a number of different criteria such as:
+\begin{itemize}
+\item a single previous Job
+\item a Volume
+\item a Client
+\item a regular expression matching a Job, Volume, or Client name
+\item the time a Job has been on a Volume
+\item high and low water marks (usage or occupation) of a Pool
+\item Volume size
+\end{itemize}
+
+The details of these selection criteria will be defined below.
+
+To run a Migration job, you must first define a Job resource very similar
+to a Backup Job but with {\bf Type = Migrate} instead of {\bf Type =
+Backup}. One of the key points to remember is that the Pool that is
+specified for the migration job is the only pool from which jobs will
+be migrated, with one exception noted below. In addition, the Pool to
+which the selected Job or Jobs will be migrated is defined by the {\bf
+Next Pool = ...} in the Pool resource specified for the Migration Job.
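+
+As a minimal sketch (a complete, working example is given in the Example
+Migration Jobs section below; the resource names here are placeholders, and
+required directives such as Client, FileSet, and Messages are omitted), the
+relevant pieces fit together like this:
+
+\footnotesize
+\begin{verbatim}
+Job {
+  Name = "migrate-example"        # hypothetical name
+  Type = Migrate
+  Pool = SourcePool               # Pool whose Jobs will be examined
+  Selection Type = Volume
+  Selection Pattern = ".*"
+  ...
+}
+
+Pool {
+  Name = SourcePool
+  Next Pool = DestPool            # Pool to which the data is migrated
+  ...
+}
+\end{verbatim}
+\normalsize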
+
+Bacula permits pools to contain Volumes with different Media Types.
+However, when doing migration, this is a very undesirable condition. For
+migration to work properly, you should use pools containing only Volumes of
+the same Media Type for all migration jobs.
+
+The migration job is normally either started manually or started
+from a Schedule, much like a backup job. It searches
+for a previous backup Job or Jobs that match the parameters you have
+specified in the migration Job resource, primarily a {\bf Selection Type}
+(detailed a bit later). Then for
+each previous backup JobId found, the Migration Job will run a new Job which
+copies the old Job data from the previous Volume to a new Volume in
+the Migration Pool. It is possible that no prior Jobs are found for
+migration, in which case, the Migration job will simply terminate having
+done nothing, but normally at a minimum, three jobs are involved during a
+migration:
+
+\begin{itemize}
+\item The currently running Migration control Job. This is only
+ a control job for starting the migration child jobs.
+\item The previous Backup Job (already run). The File records
+ for this Job are purged if the Migration job successfully
+ terminates. The original data remains on the Volume until
+ it is recycled and rewritten.
+\item A new Migration Backup Job that moves the data from the
+ previous Backup job to the new Volume. If you subsequently
+ do a restore, the data will be read from this Job.
+\end{itemize}
+
+If the Migration control job finds a number of JobIds to migrate (e.g.
+it is asked to migrate one or more Volumes), it will start one new
+migration backup job for each JobId found on the specified Volumes.
+Please note that Migration doesn't scale too well since Migrations are
+done on a Job by Job basis. Thus if you select a very large volume or
+a number of volumes for migration, you may have a large number of
+Jobs that start. Because each job must read the same Volume, they will
+run consecutively (not simultaneously).
+
+\section{Migration Job Resource Directives}
+
+The following directives can appear in a Director's Job resource, and they
+are used to define a Migration job.
+
+\begin{description}
+\item [Pool = \lt{}Pool-name\gt{}] The Pool specified in the Migration
+ control Job is not a new directive for the Job resource, but it is
+ particularly important because it determines what Pool will be examined for
+ finding JobIds to migrate. The exception to this is when {\bf Selection
+ Type = SQLQuery}, in which case no Pool is used, unless you
+ specifically include it in the SQL query. Note, the Pool resource
+ referenced must contain a {\bf Next Pool = ...} directive to define
+ the Pool to which the data will be migrated.
+
+\item [Type = Migrate]
+  {\bf Migrate} is a new type that defines the job that is run as being a
+  Migration Job. A Migration Job is a sort of control job and does not have
+  any Files associated with it, and in that sense it is more or less like
+  an Admin job. Migration jobs simply check to see if there is anything to
+  Migrate, then possibly start and control new Backup jobs to migrate the data
+  from the specified Pool to another Pool.
+
+\item [Selection Type = \lt{}Selection-type-keyword\gt{}]
+ The \lt{}Selection-type-keyword\gt{} determines how the migration job
+ will go about selecting what JobIds to migrate. In most cases, it is
+ used in conjunction with a {\bf Selection Pattern} to give you fine
+ control over exactly what JobIds are selected. The possible values
+ for \lt{}Selection-type-keyword\gt{} are:
+ \begin{description}
+ \item [SmallestVolume] This selection keyword selects the volume with the
+ fewest bytes from the Pool to be migrated. The Pool to be migrated
+ is the Pool defined in the Migration Job resource. The migration
+ control job will then start and run one migration backup job for
+ each of the Jobs found on this Volume. The Selection Pattern, if
+ specified, is not used.
+
+ \item [OldestVolume] This selection keyword selects the volume with the
+ oldest last write time in the Pool to be migrated. The Pool to be
+ migrated is the Pool defined in the Migration Job resource. The
+ migration control job will then start and run one migration backup
+ job for each of the Jobs found on this Volume. The Selection
+ Pattern, if specified, is not used.
+
+ \item [Client] The Client selection type, first selects all the Clients
+ that have been backed up in the Pool specified by the Migration
+ Job resource, then it applies the {\bf Selection Pattern} (defined
+ below) as a regular expression to the list of Client names, giving
+ a filtered Client name list. All jobs that were backed up for those
+ filtered (regexed) Clients will be migrated.
+ The migration control job will then start and run one migration
+ backup job for each of the JobIds found for those filtered Clients.
+
+ \item [Volume] The Volume selection type, first selects all the Volumes
+ that have been backed up in the Pool specified by the Migration
+ Job resource, then it applies the {\bf Selection Pattern} (defined
+ below) as a regular expression to the list of Volume names, giving
+ a filtered Volume list. All JobIds that were backed up for those
+ filtered (regexed) Volumes will be migrated.
+ The migration control job will then start and run one migration
+ backup job for each of the JobIds found on those filtered Volumes.
+
+ \item [Job] The Job selection type, first selects all the Jobs (as
+ defined on the {\bf Name} directive in a Job resource)
+ that have been backed up in the Pool specified by the Migration
+ Job resource, then it applies the {\bf Selection Pattern} (defined
+ below) as a regular expression to the list of Job names, giving
+ a filtered Job name list. All JobIds that were run for those
+    filtered (regexed) Job names will be migrated. Note, for a given
+    Job name, there can be many jobs (JobIds) that ran.
+ The migration control job will then start and run one migration
+ backup job for each of the Jobs found.
+
+  \item [SQLQuery] The SQLQuery selection type uses the {\bf Selection
+    Pattern} as an SQL query to obtain the JobIds to be migrated.
+    The Selection Pattern must be a valid SELECT SQL statement for your
+    SQL engine, and it must return the JobId as the first field
+    of the SELECT (see the example query following this list of directives).
+
+ \item [PoolOccupancy] This selection type will cause the Migration job
+ to compute the total size of the specified pool for all Media Types
+ combined. If it exceeds the {\bf Migration High Bytes} defined in
+ the Pool, the Migration job will migrate all JobIds beginning with
+ the oldest Volume in the pool (determined by Last Write time) until
+ the Pool bytes drop below the {\bf Migration Low Bytes} defined in the
+    Pool. This calculation should be considered rather approximate because
+    it is made once by the Migration job before migration is begun, and
+    thus does not take into account additional data written into the Pool
+    during the migration. In addition, the calculation of the total Pool
+    byte size is based on the Volume bytes saved in the Volume (Media)
+    database entries. The bytes calculated for Migration are based on the values
+    stored in the Job records of the Jobs to be migrated. These do not include the
+    Storage daemon overhead that is included in the total Pool size. As a consequence,
+    the migration will normally migrate more bytes than strictly necessary.
+
+ \item [PoolTime] The PoolTime selection type will cause the Migration job to
+ look at the time each JobId has been in the Pool since the job ended.
+    All Jobs that have been in the Pool longer than the time specified by the
+    {\bf Migration Time} directive in the Pool resource will be migrated.
+ \end{description}
+
+\item [Selection Pattern = \lt{}Quoted-string\gt{}]
+ The Selection Patterns permitted for each Selection-type-keyword are
+ described above.
+
+ For the OldestVolume and SmallestVolume, this
+ Selection pattern is not used (ignored).
+
+ For the Client, Volume, and Job
+ keywords, this pattern must be a valid regular expression that will filter
+ the appropriate item names found in the Pool.
+
+ For the SQLQuery keyword, this pattern must be a valid SELECT SQL statement
+ that returns JobIds.
+
+\end{description}
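+
+As a rough illustration of the SQLQuery selection type, a query along the
+following lines would select all Backup JobIds stored in the Default Pool,
+oldest first. The table and column names are taken from the standard Bacula
+catalog schema, but you should verify them against your catalog version:
+
+\footnotesize
+\begin{verbatim}
+  SELECT Job.JobId FROM Job, Pool
+    WHERE Job.PoolId = Pool.PoolId
+      AND Pool.Name = 'Default'
+      AND Job.Type = 'B'
+    ORDER BY Job.StartTime;
+\end{verbatim}
+\normalsize
+
+In the Job resource, the statement would be supplied as the quoted value of
+the {\bf Selection Pattern} directive.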
+
+\section{Migration Pool Resource Directives}
+
+The following directives can appear in a Director's Pool resource, and they
+are used to define a Migration job.
+
+\begin{description}
+\item [Migration Time = \lt{}time-specification\gt{}]
+ If a PoolTime migration is done, the time specified here in seconds (time
+ modifiers are permitted -- e.g. hours, ...) will be used. If the
+ previous Backup Job or Jobs selected have been in the Pool longer than
+ the specified PoolTime, then they will be migrated.
+
+\item [Migration High Bytes = \lt{}byte-specification\gt{}]
+ This directive specifies the number of bytes in the Pool which will
+ trigger a migration if a {\bf PoolOccupancy} migration selection
+ type has been specified. The fact that the Pool
+ usage goes above this level does not automatically trigger a migration
+ job. However, if a migration job runs and has the PoolOccupancy selection
+ type set, the Migration High Bytes will be applied. Bacula does not
+ currently restrict a pool to have only a single Media Type, so you
+ must keep in mind that if you mix Media Types in a Pool, the results
+ may not be what you want, as the Pool count of all bytes will be
+ for all Media Types combined.
+
+\item [Migration Low Bytes = \lt{}byte-specification\gt{}]
+ This directive specifies the number of bytes in the Pool which will
+ stop a migration if a {\bf PoolOccupancy} migration selection
+ type has been specified and triggered by more than Migration High
+ Bytes being in the pool. In other words, once a migration job
+ is started with {\bf PoolOccupancy} migration selection and it
+ determines that there are more than Migration High Bytes, the
+ migration job will continue to run jobs until the number of
+ bytes in the Pool drop to or below Migration Low Bytes.
+
+\item [Next Pool = \lt{}pool-specification\gt{}]
+ The Next Pool directive specifies the pool to which Jobs will be
+ migrated. This directive is required to define the Pool into which
+ the data will be migrated. Without this directive, the migration job
+ will terminate in error.
+
+\item [Storage = \lt{}storage-specification\gt{}]
+ The Storage directive specifies what Storage resource will be used
+ for all Jobs that use this Pool. It takes precedence over any other
+ Storage specifications that may have been given such as in the
+ Schedule Run directive, or in the Job resource. We highly recommend
+ that you define the Storage resource to be used in the Pool rather
+ than elsewhere (job, schedule run, ...).
+\end{description}
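+
+Putting these directives together, a Pool intended as the source of
+PoolOccupancy or PoolTime migrations might look roughly like the following
+sketch; the names and values are placeholders, not recommendations:
+
+\footnotesize
+\begin{verbatim}
+Pool {
+  Name = FilePool
+  Pool Type = Backup
+  Storage = FileStorage             # Storage used for this Pool
+  Next Pool = TapePool              # where migrated data is written
+  Migration High Bytes = 40G        # start migrating above this usage
+  Migration Low Bytes = 20G         # stop once usage falls below this
+  Migration Time = 30 days          # for PoolTime selection
+}
+\end{verbatim}
+\normalsize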
+
+\section{Important Migration Considerations}
+\index[general]{Important Migration Considerations}
+\begin{itemize}
+\item Each Pool into which you migrate Jobs or Volumes {\bf must}
+ contain Volumes of only one Media Type.
+
+\item Migration takes place on a JobId by JobId basis. That is
+ each JobId is migrated in its entirety and independently
+ of other JobIds. Once the Job is migrated, it will be
+ on the new medium in the new Pool, but for the most part,
+ aside from having a new JobId, it will appear with all the
+ same characteristics of the original job (start, end time, ...).
+ The column RealEndTime in the catalog Job table will contain the
+ time and date that the Migration terminated, and by comparing
+ it with the EndTime column you can tell whether or not the
+ job was migrated. The original job is purged of its File
+ records, and its Type field is changed from "B" to "M" to
+ indicate that the job was migrated.
+
+\item Jobs on Volumes will be considered for Migration only if the Volume is
+  marked Full, Used, or Error. Volumes that are still
+  marked Append will not be considered for migration. This
+  prevents Bacula from attempting to read the Volume at
+  the same time it is writing it. It also reduces other deadlock
+  situations, as well as avoiding the problem of migrating a
+  Volume and later finding new files appended to that Volume.
+
+\item As noted above, for the Migration High Bytes, the calculation
+ of the bytes to migrate is somewhat approximate.
+
+\item If you keep Volumes of different Media Types in the same Pool,
+ it is not clear how well migration will work. We recommend only
+ one Media Type per pool.
+
+\item It is possible to get into a resource deadlock where Bacula does
+  not find enough drives to simultaneously read and write all the
+  Volumes needed to do Migrations. For the moment, you must take
+  care, because not all of the resource deadlock algorithms have been implemented yet.
+
+\item Migration is done only when you run a Migration job. If you set a
+  Migration High Bytes and that number of bytes is exceeded in the Pool,
+  no migration job will automatically start. You must schedule the
+ migration jobs, and they must run for any migration to take place.
+
+\item If you migrate a number of Volumes, a very large number of Migration
+ jobs may start.
+
+\item Figuring out what jobs will actually be migrated can be a bit complicated
+ due to the flexibility provided by the regex patterns and the number of
+ different options. Turning on a debug level of 100 or more will provide
+ a limited amount of debug information about the migration selection
+ process.
+
+\item Bacula currently does only minimal Storage conflict resolution, so you
+ must take care to ensure that you don't try to read and write to the
+ same device or Bacula may block waiting to reserve a drive that it
+ will never find. In general, ensure that all your migration
+ pools contain only one Media Type, and that you always
+ migrate to pools with different Media Types.
+
+\item The {\bf Next Pool = ...} directive must be defined in the Pool
+ referenced in the Migration Job to define the Pool into which the
+ data will be migrated.
+
+\item Pay particular attention to the fact that data is migrated on a Job
+ by Job basis, and for any particular Volume, only one Job can read
+ that Volume at a time (no simultaneous read), so migration jobs that
+ all reference the same Volume will run sequentially. This can be a
+  potential bottleneck and does not scale very well to large numbers
+ of jobs.
+
+\item Only migration using the Selection Types Job and Volume has
+  been carefully tested. All the other migration methods (time,
+  occupancy, smallest, oldest, ...) need additional testing.
+
+\item Migration is only implemented for a single Storage daemon. You
+ cannot read on one Storage daemon and write on another.
+\end{itemize}
+
+
+\section{Example Migration Jobs}
+\index[general]{Example Migration Jobs}
+
+When you specify a Migration Job, you must specify all the standard
+directives as for a Job. However, certain directives, such as the Level, Client, and
+FileSet, though they must be defined, are ignored by the Migration job
+because the values from the original job are used instead.
+
+As an example, suppose you have the following Job that
+you run every night. Note that there is no Storage directive in the
+Job resource; there is a Storage directive in each of the Pool
+resources; and the Pool to be migrated (Default, which uses File storage)
+contains a Next Pool directive that defines the output Pool (where the data
+is written by the migration job).
+
+\footnotesize
+\begin{verbatim}
+# Define the backup Job
+Job {
+ Name = "NightlySave"
+ Type = Backup
+ Level = Incremental # default
+ Client=rufus-fd
+ FileSet="Full Set"
+ Schedule = "WeeklyCycle"
+ Messages = Standard
+ Pool = Default
+}
+
+# Default pool definition
+Pool {
+ Name = Default
+ Pool Type = Backup
+ AutoPrune = yes
+ Recycle = yes
+ Next Pool = Tape
+ Storage = File
+ LabelFormat = "File"
+}
+
+# Tape pool definition
+Pool {
+ Name = Tape
+ Pool Type = Backup
+ AutoPrune = yes
+ Recycle = yes
+ Storage = DLTDrive
+}
+
+# Definition of File storage device
+Storage {
+ Name = File
+ Address = rufus
+ Password = "ccV3lVTsQRsdIUGyab0N4sMDavui2hOBkmpBU0aQKOr9"
+ Device = "File" # same as Device in Storage daemon
+ Media Type = File # same as MediaType in Storage daemon
+}
+
+# Definition of DLT tape storage device
+Storage {
+ Name = DLTDrive
+ Address = rufus
+ Password = "ccV3lVTsQRsdIUGyab0N4sMDavui2hOBkmpBU0aQKOr9"
+ Device = "HP DLT 80" # same as Device in Storage daemon
+ Media Type = DLT8000 # same as MediaType in Storage daemon
+}
+
+\end{verbatim}
+\normalsize
+
+Here we have included only the essential information -- i.e., the
+Director, FileSet, Catalog, Client, Schedule, and Messages resources are
+omitted.
+
+As you can see, by running the NightlySave Job, the data will be backed up
+to File storage using the Default pool to specify the Storage as File.
+
+Now, if we add the following Job resource to this conf file:
+
+\footnotesize
+\begin{verbatim}
+Job {
+ Name = "migrate-volume"
+ Type = Migrate
+ Level = Full
+ Client = rufus-fd
+ FileSet = "Full Set"
+ Messages = Standard
+ Pool = Default
+ Maximum Concurrent Jobs = 4
+ Selection Type = Volume
+ Selection Pattern = "File"
+}
+\end{verbatim}
+\normalsize
+
+and then run the job named {\bf migrate-volume}, all volumes in the Pool
+named Default (as specified in the migrate-volume Job) that match the
+regular expression pattern {\bf File} will be migrated to tape storage
+DLTDrive because the {\bf Next Pool} in the Default Pool specifies that
+Migrations should go to the pool named {\bf Tape}, which uses
+Storage {\bf DLTDrive}.
+
+If instead, we use a Job resource as follows:
+
+\footnotesize
+\begin{verbatim}
+Job {
+ Name = "migrate"
+ Type = Migrate
+ Level = Full
+ Client = rufus-fd
+ FileSet="Full Set"
+ Messages = Standard
+ Pool = Default
+ Maximum Concurrent Jobs = 4
+ Selection Type = Job
+ Selection Pattern = ".*Save"
+}
+\end{verbatim}
+\normalsize
+
+All jobs whose names end with {\bf Save} will be migrated from the Default (File)
+Pool to the Tape Pool, i.e., from File storage to Tape storage.
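+
+Remember that migration only takes place when a Migration job actually runs,
+so you would either schedule these jobs or start them manually from the
+console, for example:
+
+\footnotesize
+\begin{verbatim}
+*run job=migrate-volume yes
+\end{verbatim}
+\normalsize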
+++ /dev/null
-%%
-%%
-
-\section*{Supported Systems and Hardware}
-\label{_ChapterStart}
-\index[general]{Supported Systems and Hardware }
-\index[general]{Hardware!Supported Systems and }
-\addcontentsline{toc}{section}{Supported Systems and Hardware}
-
-\label{SysReqs}
-
-\subsection*{System Requirements}
-\index[general]{System Requirements }
-\index[general]{Requirements!System }
-\addcontentsline{toc}{subsection}{System Requirements}
-
-\begin{itemize}
-\item {\bf Bacula} has been compiled and run on Linux RedHat, FreeBSD, and
-Solaris systems.
-\item It requires GNU C++ version 2.95 or higher to compile. You can try with
-other compilers and older versions, but you are on your own. We have
-successfully compiled and used Bacula on RH8.0/RH9/RHEL 3.0/FC3 with GCC 3.4.
-Note, in general GNU C++ is a separate package (e.g. RPM) from GNU C, so you
-need them both loaded. On RedHat systems, the C++ compiler is part of the
-{\bf gcc-c++} rpm package.
-\item There are certain third party packages that Bacula needs. Except for
-MySQL and PostgreSQL, they can all be found in the {\bf depkgs} and {\bf
-depkgs1} releases.
-\item If you want to build the Win32 binaries, you will need a Microsoft
-Visual C++ compiler (or Visual Studio). Although all components build
-(console has some warnings), only the File daemon has been tested.
-\item {\bf Bacula} requires a good implementation of pthreads to work. This
-is not the case on some of the BSD systems.
-\item The source code has been written with portability in mind and is mostly
-POSIX compatible. Thus porting to any POSIX compatible operating system
-should be relatively easy.
-\item The GNOME Console program is developed and tested under GNOME 2.x. It
-also runs under GNOME 1.4 but this version is deprecated and thus no longer
-maintained.
-\item The wxWidgets Console program is developed and tested with the latest
-stable ANSI (not Unicode) version of
-\elink{wxWidgets}{http://www.wxwidgets.org/} (2.6.0). It works fine with the
-Windows and GTK+-2.x version of wxWidgets, and should also works on other
-platforms supported by wxWidgets.
-\item The Tray Monitor program is developed for GTK+-2.x. It needs Gnome less
-or equal to 2.2, KDE greater or equal to 3.1 or any window manager supporting
-the
-\elink{ FreeDesktop system tray
-standard}{http://www.freedesktop.org/Standards/systemtray-spec}.
-\item If you want to enable command line editing and history, you will need
-to have /usr/include/termcap.h and either the termcap or the ncurses library
-loaded (libtermcap-devel or ncurses-devel).
-\item If you want to use DVD as backup medium, you will need to download and
-install the
-\elink{dvd+rw-tools}{http://fy.chalmers.se/~appro/linux/DVD+RW/}.
-\end{itemize}
-
-\subsection*{Supported Operating Systems}
-\label{SupportedOSes}
-\index[general]{Systems!Supported Operating }
-\index[general]{Supported Operating Systems }
-\addcontentsline{toc}{subsection}{Supported Operating Systems}
-
-\begin{itemize}
-\item Linux systems (built and tested on RedHat Fedora Core 3).
-\item If you have a recent Red Hat Linux system running the 2.4.x kernel and
-you have the directory {\bf /lib/tls} installed on your system (normally by
-default), bacula will {\bf NOT} run. This is the new pthreads library and it
-is defective. You must remove this directory prior to running Bacula, or you
-can simply change the name to {\bf /lib/tls-broken}) then you must reboot
-your machine (one of the few times Linux must be rebooted). If you are not
-able to remove/rename /lib/tls, an alternative is to set the environment
-variable ``LD\_ASSUME\_KERNEL=2.4.19'' prior to executing Bacula. For this
-option, you do not need to reboot, and all programs other than Bacula will
-continue to use /lib/tls.
-
-The feedback that we have for 2.6 kernels is that the same problem may
-exist. However, we have not been able to reproduce the above mentioned
-problem (bizarre hangs) on 2.6 kernels. If you do experience problems, we
-recommend using the environment variable override
-(LD\_ASSUME\_KERNEL=2.4.19) rather than removing /lib/tls, because TLS
-is designed to work with 2.6 kernels.
-
-\item Most flavors of Linux (Gentoo, SuSE, Mandrake, Debian, ...).
-\item Solaris various versions.
-\item FreeBSD (tape driver supported in 1.30 -- please see some {\bf
-important} considerations in the
-\ilink{ Tape Modes on FreeBSD}{tapetesting.tex#FreeBSDTapes} section of the
-Tape Testing chapter of this manual.)
-\item Windows (Win98/Me, WinNT/2K/XP) Client (File daemon) binaries.
-\item MacOS X/Darwin (see
-\elink{ http://fink.sourceforge.net/}{http://fink.sourceforge.net/} for
-obtaining the packages)
-\item OpenBSD Client (File daemon).
-\item Irix Client (File daemon).
-\item Tru64
-\item Bacula is said to work on other systems (AIX, BSDI, HPUX, ...) but we
-do not have first hand knowledge of these systems.
-\item See the Porting chapter of the Bacula Developer's Guide for information
-on porting to other systems.
-\end{itemize}
-
-\subsection*{Supported Tape Drives}
-\label{SupportedDrives}
-\index[general]{Drives!Supported Tape }
-\index[general]{Supported Tape Drives }
-\addcontentsline{toc}{subsection}{Supported Tape Drives}
-
-Even if your drive is on the list below, please check the
-\ilink{Tape Testing Chapter}{tapetesting.tex#btape} of this manual for
-procedures that you can use to verify if your tape drive will work with
-Bacula. If your drive is in fixed block mode, it may appear to work with
-Bacula until you attempt to do a restore and Bacula wants to position the
-tape. You can be sure only by following the procedures suggested above and
-testing.
-
-It is very difficult to supply a list of supported tape drives, or drives that
-are known to work with Bacula because of limited feedback (so if you use
-Bacula on a different drive, please let us know). Based on user feedback, the
-following drives are known to work with Bacula. A dash in a column means
-unknown:
-
-\addcontentsline{lot}{table}{Supported Tape Drives}
-\begin{longtable}{|p{1.2in}|l|l|p{1.3in}|l|}
- \hline
-\multicolumn{1}{|c| }{\bf OS } & \multicolumn{1}{c| }{\bf Man. } &
-\multicolumn{1}{c| }{\bf Media } & \multicolumn{1}{c| }{\bf Model } &
-\multicolumn{1}{c| }{\bf Capacity } \\
- \hline
-{- } & {ADIC } & {DLT } & {Adic Scalar 100 DLT } & {100GB } \\
- \hline
-{- } & {ADIC } & {DLT } & {Adic Fastor 22 DLT } & {- } \\
- \hline
-{- } & {- } & {DDS } & {Compaq DDS 2,3,4 } & {- } \\
- \hline
-{- } & {Exabyte } & {- } & {Exabyte drives less than 10 years old } & {- }
-\\
- \hline
-{- } & {Exabyte } & {- } & {Exabyte VXA drives } & {- } \\
- \hline
-{- } & {HP } & {Travan 4 } & {Colorado T4000S } & {- } \\
- \hline
-{- } & {HP } & {DLT } & {HP DLT drives } & {- } \\
- \hline
-{- } & {HP } & {LTO } & {HP LTO Ultrium drives } & {- } \\
- \hline
-{FreeBSD 4.10 RELEASE } & {HP } & {DAT } & {HP StorageWorks DAT72i } & {- }
-\\
- \hline
-{- } & {Overland } & {LTO } & {LoaderXpress LTO } & {- } \\
- \hline
-{- } & {Overland } & {- } & {Neo2000 } & {- } \\
- \hline
-{- } & {OnStream } & {- } & {OnStream drives (see below) } & {- } \\
- \hline
-{- } & {Quantum } & {DLT } & {DLT-8000 } & {40/80GB } \\
- \hline
-{Linux } & {Seagate } & {DDS-4 } & {Scorpio 40 } & {20/40GB } \\
- \hline
-{FreeBSD 4.9 STABLE } & {Seagate } & {DDS-4 } & {STA2401LW } & {20/40GB } \\
- \hline
-{FreeBSD 5.2.1 pthreads patched RELEASE } & {Seagate } & {AIT-1 } & {STA1701W
-} & {35/70GB } \\
- \hline
-{Linux } & {Sony } & {DDS-2,3,4 } & {- } & {4-40GB } \\
- \hline
-{Linux } & {Tandberg } & {- } & {Tandbert MLR3 } & {- } \\
- \hline
-{FreeBSD } & {Tandberg } & {- } & {Tandberg SLR6 } & {- } \\
- \hline
-{Solaris } & {Tandberg } & {- } & {Tandberg SLR75 } & {- }
-\\ \hline
-
-\end{longtable}
-
-There is a list of
-\ilink{supported autochangers}{autochangers.tex#Models} models in the
-\ilink{autochangers chapter}{autochangers.tex#_ChapterStart} of this document,
-where you will find other tape drives that work with Bacula.
-
-\subsection*{Unsupported Tape Drives}
-\label{UnSupportedDrives}
-\index[general]{Unsupported Tape Drives }
-\index[general]{Drives!Unsupported Tape }
-\addcontentsline{toc}{subsection}{Unsupported Tape Drives}
-
-Previously OnStream IDE-SCSI tape drives did not work with Bacula. As of
-Bacula version 1.33 and the osst kernel driver version 0.9.14 or later, they
-now work. Please see the testing chapter as you must set a fixed block size.
-
-QIC tapes are known to have a number of particularities (fixed block size, and
-one EOF rather than two to terminate the tape). As a consequence, you will
-need to take a lot of care in configuring them to make them work correctly
-with Bacula.
-
-\subsection*{FreeBSD Users Be Aware!!!}
-\index[general]{FreeBSD Users Be Aware }
-\index[general]{Aware!FreeBSD Users Be }
-\addcontentsline{toc}{subsection}{FreeBSD Users Be Aware!!!}
-
-Unless you have patched the pthreads library on most FreeBSD systems, you will
-lose data when Bacula spans tapes. This is because the unpatched pthreads
-library fails to return a warning status to Bacula that the end of the tape is
-near. Please see the
-\ilink{Tape Testing Chapter}{tapetesting.tex#FreeBSDTapes} of this manual for
-{\bf important} information on how to configure your tape drive for
-compatibility with Bacula.
-
-\subsection*{Supported Autochangers}
-\index[general]{Autochangers!Supported }
-\index[general]{Supported Autochangers }
-\addcontentsline{toc}{subsection}{Supported Autochangers}
-
-For information on supported autochangers, please see the
-\ilink{Autochangers Known to Work with Bacula}{autochangers.tex#Models}
-section of the Autochangers chapter of this manual.