configure the number of reload requests that can be done while jobs are
running.
\begin{lstlisting}
Director {
Name = localhost-dir
Maximum Reload Requests = 64
...
}
\end{lstlisting}
\subsection{FD Storage Address}
When the Director is behind a NAT, in a WAN area, to connect to
% the FileDaemon or
the StorageDaemon, the Director uses an ``external'' IP address,
and the FileDaemon should use an ``internal'' IP address to contact the
StorageDaemon.
The normal way to handle this situation is to use a canonical name such as
``storage-server'' that will be resolved on the Director side as the WAN address
and on the Client side as the LAN address. It is now possible to configure
this parameter using the new \texttt{FDStorageAddress} Storage
% or Client
directive.
\bsysimageH{BackupOverWan1}{Backup Over WAN}{figbs6:fdstorageaddress}
\begin{lstlisting}
Storage {
Name = storage1
Address = 65.1.1.1
SD Port = 9103
...
}
\end{lstlisting}
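As a sketch of how the directive is used (the LAN address \texttt{10.0.0.1} is an assumed value for illustration), \texttt{FDStorageAddress} is added to the same Storage resource so the FileDaemon contacts the Storage Daemon on the internal address while the Director keeps using the WAN address:

\begin{lstlisting}
Storage {
  Name = storage1
  Address = 65.1.1.1              # WAN address, used by the Director
  FD Storage Address = 10.0.0.1   # assumed LAN address, used by the FileDaemon
  SD Port = 9103
  ...
}
\end{lstlisting}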
% # or in the Client resource
%
% Client {
% Name = client1
% Address = 65.1.1.2
% FD Port 9103
% ...
% }
% \end{lstlisting}
%
% Note that using the Client \texttt{FDStorageAddress} directive will not allow
% to use multiple Storage Daemon, all Backup or Restore requests will be sent to
% the specified \texttt{FDStorageAddress}.
they don't monopolize all the Storage drives causing a deadlock situation
where all the drives are allocated for reading but none remain for
writing. This deadlock situation can occur when running multiple
simultaneous Copy, Migration, and VirtualFull jobs.
\smallskip
The default value is set to 0 (zero), which means there is no
limit. When you issue the {\bf stop} command, you will be presented a
list of running jobs allowing you to select one, which might
look like the following:
\begin{lstlisting}
*stop
Select Job:
1: JobId=3 Job=Incremental.2012-03-26_12.04.26_07
2: JobId=4 Job=Incremental.2012-03-26_12.04.30_08
Choose Job to stop (1-3): 2
2001 Job "Incremental.2012-03-26_12.04.30_08" marked to be stopped.
3000 JobId=4 Job="Incremental.2012-03-26_12.04.30_08" marked to be stopped.
\end{lstlisting}
\subsection{The Restart Command}
The new {\bf Restart command} allows console users to restart
a canceled, failed, or incomplete Job. For canceled and failed
Jobs, the Job will restart from the beginning. For incomplete
Jobs the Job will restart at the point that it was stopped either
by a stop command or by some recoverable failure.
If you enter the {\bf restart} command in bconsole, you will get the
following prompts:
\begin{lstlisting}
*restart
You have the following choices:
1: Incomplete
2: Canceled
3: Failed
4: All
Select termination code: (1-4):
\end{lstlisting}
If you select the {\bf All} option, you may see something like:
\begin{lstlisting}
Select termination code: (1-4): 4
+-------+-------------+---------------------+------+-------+----------+-----------+-----------+
| jobid | name        | starttime           | type | level | jobfiles | jobbytes  | jobstatus |
+-------+-------------+---------------------+------+-------+----------+-----------+-----------+
|     4 | Incremental | 2012-03-26 12:18:38 | B    | F     |      331 | 3,548,058 | I         |
+-------+-------------+---------------------+------+-------+----------+-----------+-----------+
Enter the JobId list to select:
\end{lstlisting}
Then you may enter one or more JobIds to be restarted, which may
take the form of a list of JobIds separated by commas, and/or JobId
ranges such as {\bf 1-4}, which indicates you want to restart JobIds
1 through 4, inclusive.
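For instance, continuing the prompt above, the following (illustrative) answer would select JobIds 1 through 4 plus JobId 6:

\begin{lstlisting}
Enter the JobId list to select: 1-4,6
\end{lstlisting}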
Enterprise Edition.
\subsection{Support for MSSQL Block Level Backups}
With the addition of block level backup support to the
Bacula Enterprise VSS MSSQL component, you can now do
Differential backups in addition to Full backups.
Differential backups use Microsoft's partial block backup.
This partial block backup permits backing up only those
blocks that have changed. Database restores can be made while
the MSSQL server is running, but any databases selected for
restore will be automatically taken offline by the MSSQL
server during the restore process.
Incremental backups for MSSQL are not supported by
Microsoft. We strongly recommend that you not perform Incremental
backups with MSSQL as they will probably produce a situation
where restore will no longer work correctly.
\smallskip
We are currently working on producing a white paper that will give more
\smallskip
It is possible to restore the {\bf master} database, but you must
first shut down the MSSQL server, then you must perform special
recovery commands. Please see the Microsoft documentation on how
to restore the master database.
that File daemon, or it can be set for each Job in the Director's conf file.
For example:
\begin{lstlisting}
FileDaemon {
Name = localhost-fd
Working Directory = /some/path
...
Maximum Bandwidth Per Job = 5Mb/s
}
\end{lstlisting}
The above example would cause any jobs running with the FileDaemon to not
exceed 5Mb/s of throughput when sending data to the Storage Daemon.
You can specify the speed parameter in k/s, Kb/s, m/s, or Mb/s.
For example:
\begin{lstlisting}
Job {
Name = localhost-data
FileSet = FS_localhost
Maximum Bandwidth = 5Mb/s
...
}
\end{lstlisting}
The above example would cause Job \texttt{localhost-data} to not exceed 5Mb/s
of throughput when sending data from the File daemon to the Storage daemon.
A new console command, \texttt{setbandwidth}, permits dynamically setting the
maximum throughput of a running Job or of future jobs of a Client.
\begin{lstlisting}
* setbandwidth limit=1000000 jobid=10
\end{lstlisting}
The \texttt{limit} parameter is in Kb/s.
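The command can also target a Client rather than a single Job, limiting future jobs for that Client; a minimal sketch, reusing the \texttt{localhost-fd} Client name from the earlier example:

\begin{lstlisting}
* setbandwidth limit=5000 client=localhost-fd
\end{lstlisting}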
platforms, including Windows 32 and 64 bit.
The Accurate option should be turned on in the Job resource.
\begin{lstlisting}
Job {
Accurate = yes
FileSet = DeltaFS
}
\end{lstlisting}
Please contact Bacula Systems support to get Delta Plugin specific
documentation.
\texttt{update slots} command. This script can be scheduled once a day in
an Admin job.
\begin{lstlisting}
$ /opt/bacula/scripts/reset-storageid MediaType StorageName
$ bconsole
* update slots storage=StorageName drive=0
\end{lstlisting}
Please contact Bacula Systems support to get help on this advanced
configuration.
On some NDMP devices such as Celerra or Blueray, the administrator can use an
arbitrary volume structure name, for example:
\begin{lstlisting}
/dev/volume_home
/rootvolume/volume_tmp
/VG/volume_var
\end{lstlisting}
The NDMP plugin should be aware of the structure organization in order to
detect whether the administrator wants to restore to a new volume
(\texttt{where=/dev/vol\_tmp}) or inside a subdirectory of the targeted volume
(\texttt{where=/tmp}).
\begin{lstlisting}
FileSet {
Name = NDMPFS
...
Plugin = "ndmp:host=nasbox user=root pass=root file=/dev/vol1 volume_format=/dev/"
}
}
\end{lstlisting}
Please contact Bacula Systems support to get NDMP Plugin specific
documentation.
When the Accurate mode is turned on, you can decide to always back up a file
by using the new {\bf A} Accurate option in your FileSet. For example:
\begin{lstlisting}
Job {
Name = ...
FileSet = FS_Example
}
...
}
\end{lstlisting}
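The {\bf A} option itself is placed in the FileSet's Options resource. A minimal sketch, assuming the FileSet is named \texttt{FS\_Example} as above and that \texttt{/data} is a hypothetical directory to always back up:

\begin{lstlisting}
FileSet {
  Name = FS_Example
  Include {
    Options {
      Accurate = A     # always back up matching files
    }
    File = /data
  }
}
\end{lstlisting}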
This project was funded by Bacula Systems based on an idea of James Harper and
is available with the Bacula Enterprise Edition.
You are now able to specify the Accurate mode on the \texttt{run} command and
in the Schedule resource.
\begin{lstlisting}
* run accurate=yes job=Test
\end{lstlisting}
\begin{lstlisting}
Schedule {
Name = WeeklyCycle
Run = Full 1st sun at 23:05
Run = Differential accurate=yes 2nd-5th sun at 23:05
Run = Incremental accurate=no mon-sat at 23:05
}
\end{lstlisting}
It can allow you to save memory and CPU resources on the catalog server in
some cases.
You can access the JobBytes, JobFiles, and Director name using \%b, \%F, and \%D
in your runscript command. The Client address is now available through \%h.
\begin{lstlisting}
RunAfterJob = "/bin/echo Job=%j JobBytes=%b JobFiles=%F ClientAddress=%h Dir=%D"
\end{lstlisting}
\subsection{LZO Compression}
{\bf compression=LZO}).
For example:
\begin{lstlisting}
Include {
Options { compression=LZO }
File = /home
File = /data
}
\end{lstlisting}
LZO provides much faster compression and decompression speed but a lower
compression ratio than GZIP. It is a good option when you back up to disk.
LZO is a good alternative to GZIP1 when you don't want to slow down your
backup. On a modern CPU it should be able to run almost as fast as:
\begin{bsysitemize}
\item your client can read data from disk, unless you have very fast disks like
  SSD or a large/fast RAID array.
\item the data transfers between the file daemon and the storage daemon even on
  a 1Gb/s link.
\end{bsysitemize}
Note that Bacula only uses one compression level, LZO1X-1.
Since the old integrated Windows tray monitor doesn't work with
recent Windows versions, we have written a new Qt Tray Monitor that is available
for both Linux and Windows. In addition to all the previous features,
this new version allows you to run Backups from
the tray monitor menu.
\bsysimageH{tray-monitor}{New tray monitor}{figbs6:traymonitor}
\bsysimageH{tray-monitor1}{Run a Job through the new tray monitor}{figbs6:traymonitor1}
To be able to run a job from the tray monitor, you need to
allow specific commands in the Director monitor console:
\begin{lstlisting}
Console {
Name = win2003-mon
Password = "xxx"
FileSetACL = *all*
WhereACL = *all*
}
\end{lstlisting}
\medskip
This project was funded by Bacula Systems and is available with Bacula
\subsection{Purge Migration Job}
The new {\bf Purge Migration Job} directive may be added to the Migration
Job definition in the Director's configuration file. When it is enabled
the Job that was migrated during a migration will be purged at
the end of the migration job.
For example:
\begin{lstlisting}
Job {
Name = "migrate-job"
Type = Migrate
...
Purge Migration Job = yes
}
\end{lstlisting}
\medskip
no longer be the case. Now, Bacula won't automatically prune a Job if this
particular Job is needed to restore data. Example:
\begin{lstlisting}
JobId: 1 Level: Full
JobId: 2 Level: Incremental
JobId: 3 Level: Incremental
JobId: 4 Level: Differential
.. Other incrementals up to now
\end{lstlisting}
In this example, if the Job Retention defined in the Pool or in the Client
resource would allow Jobs with JobIds 1, 2, 3, and 4 to be pruned, Bacula will
To verify a given job, just specify its jobid as an argument when starting the
job.
\begin{lstlisting}
*run job=VerifyVolume jobid=1 level=VolumeToCatalog
Run Verify job
JobName: VerifyVolume
When: 2010-09-08 14:17:31
Priority: 10
OK to run? (yes/mod/no):
\end{lstlisting}
\medskip
This project was funded by Bacula Systems and is available with Bacula