diff --git a/docs/manuals/en/main/newfeatures.tex b/docs/manuals/en/main/newfeatures.tex
index 5c817af1..77e6068f 100644
--- a/docs/manuals/en/main/newfeatures.tex
+++ b/docs/manuals/en/main/newfeatures.tex
@@ -1,14 +1,2133 @@
+\chapter{New Features in 9.0.0}
+Note: The first beta versions are released as version 7.9.0, and the first
+production release will be 9.0.0.
+
+\subsection{Maximum Virtual Full Interval Option}
+Two new Director directives have been added:
+
+\begin{verbatim}
+  Max Virtual Full Interval
+and
+  Virtual Full Backup Pool
+\end{verbatim}
+
+The {\bf Max Virtual Full Interval} directive behaves much like
+{\bf Max Full Interval}, but applies to Virtual Full jobs. If Bacula sees
+that there has not been a Full backup within the Max Virtual Full Interval
+time, it will upgrade the job to a Virtual Full. If you have both {\bf Max
+Full Interval} and {\bf Max Virtual Full Interval} set, then {\bf Max Full
+Interval} takes precedence.
+
+The {\bf Virtual Full Backup Pool} directive allows you to specify the
+pool to be used for the Virtual Full as well. You will probably want to
+use these two directives together, though that depends on the specifics of
+your setup. If you set {\bf Max Virtual Full Interval} without setting
+{\bf Virtual Full Backup Pool}, then Bacula will use whatever the
+``default'' pool is set to, which is the same behavior as with {\bf Max
+Full Interval}.
+
+\subsection{Progressive Virtual Full}
+
+In Bacula version 9.0.0, we have added a new directive named {\bf Backups
+To Keep} that permits you to implement Progressive Virtual Fulls within
+Bacula. This feature is sometimes known as Incremental Forever with
+Consolidation.
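The consolidation arithmetic behind {\bf Backups To Keep} (detailed in the
next subsection) can be sketched in a few lines. This is a hypothetical
illustration only, not Bacula code; the helper name is invented:

```python
def jobs_to_consolidate(jobs_since_full: int, backups_to_keep: int) -> int:
    """Return how many of the jobs that follow the last Full backup
    are merged into the new Virtual Full.

    Hypothetical helper for illustration; not part of Bacula itself.
    The Virtual Full runs only when the last Full is followed by more
    than `backups_to_keep` jobs; otherwise nothing is consolidated.
    """
    if jobs_since_full <= backups_to_keep:
        return 0  # the job terminates without consolidating anything
    return jobs_since_full - backups_to_keep

# 32 Incrementals after the last Full with Backups To Keep = 30:
# the Full plus the first 2 Incrementals are consolidated, leaving
# a new Full followed by 30 Incrementals.
print(jobs_to_consolidate(32, 30))
```

With the default of zero, every job since the last Full is consolidated,
which is the standard Virtual Full behavior.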
+
+\smallskip
+
+\begin{figure}[htbp]
+  \centering
+  \includegraphics[width=.8\linewidth]{pvf-slidingbackups}
+  \caption{Backup Sequence Slides Forward One Day, Each Day}
+  \label{fig:slidingbackups}
+\end{figure}
+
+To implement the Progressive Virtual Full feature, simply add the
+{\bf Backups To Keep} directive to your Virtual Full backup Job resource.
+The value specified on the directive indicates the number of backup jobs
+that should not be merged into the Virtual Full (i.e. the number of backup
+jobs that should remain after the Virtual Full has completed). The default
+is zero, which reverts to a standard Virtual Full that consolidates all the
+backup jobs that it finds.
+
+\subsubsection{Backups To Keep Directive}
+The new {\bf BackupsToKeep} directive is specified in the Job Resource and
+has the form:
+
+\begin{verbatim}
+  Backups To Keep = 30
+\end{verbatim}
+
+where the value (30 in the above figure and example) is the number of
+backups to retain. When this directive is present during a Virtual Full
+(it is ignored for other Job types), Bacula will look for the most recent
+Full backup that has more subsequent backups than the value specified. In
+the above example, the Job will simply terminate unless there is a Full
+backup followed by at least 31 backups of either level Differential or
+Incremental.
+
+\smallskip
+Assuming that the last Full backup is followed by 32 Incremental backups, a
+Virtual Full will be run that consolidates the Full with the first two
+Incrementals that were run after the Full. The result is that you will end
+up with a Full followed by 30 Incremental backups.
The Job Resource
+in {\bf bacula-dir.conf} to accomplish this would be:
+
+\begin{verbatim}
+  Job {
+    Name = "VFull"
+    Type = Backup
+    Level = VirtualFull
+    Client = "my-fd"
+    File Set = "FullSet"
+    Accurate = Yes
+    Backups To Keep = 30
+  }
+\end{verbatim}
+
+\subsubsection{Delete Consolidated Jobs}
+The new directive {\bf Delete Consolidated Jobs} expects a {\bf yes}
+or {\bf no} value. If set to {\bf yes}, any old Job that is
+consolidated during a Virtual Full will be deleted. In the example above
+we saw that a Full plus one other job (either an Incremental or
+Differential) were consolidated into a new Full backup. The original Full
+plus the other consolidated Job will then be deleted. The default value is
+{\bf no}.
+
+\subsubsection{Virtual Full Compatibility}
+Virtual Full, as well as Progressive Virtual Full, works with any
+standard backup Job.
+
+\smallskip
+However, it should be noted that Virtual Full jobs are not compatible with
+any plugins that you may be using.
+
+\subsection{TapeAlert Enhancements}
+There are some significant enhancements to the TapeAlert feature of
+Bacula. Several directives are used slightly differently, which
+unfortunately causes a compatibility problem with the old TapeAlert
+implementation. Consequently, if you are already using TapeAlert, you
+must modify your {\bf bacula-sd.conf} in order for Tape Alerts to work.
+See below for the details.
+
+\subsubsection{What is New}
+First, you must define an \textbf{Alert Command} directive in the Device
+resource that calls the new \textbf{tapealert} script that is installed in
+the scripts directory (normally: /opt/bacula/scripts). It is defined as
+follows:
+
+\begin{verbatim}
+Device {
+  Name = ...
+  Archive Device = /dev/nst0
+  Alert Command = "/opt/bacula/scripts/tapealert %l"
+  Control Device = /dev/sg1 # must be SCSI ctl for /dev/nst0
+  ...
+
+}
+\end{verbatim}
+
+In addition, the \textbf{Control Device} directive in the Storage Daemon's
+conf file must be specified in each Device resource to permit Bacula to
+detect tape alerts on specific devices (normally only tape devices).
+
+Once the two directives mentioned above (Alert Command and Control Device)
+are in place in each of your Device resources, Bacula will check for tape
+alerts at two points:
+
+\begin{itemize}
+\item After the Drive is used and it becomes idle.
+\item After each read or write error on the drive.
+\end{itemize}
+
+At each of the above times, Bacula will call the new \textbf{tapealert}
+script, which uses the \textbf{tapeinfo} program. The tapeinfo utility is
+part of the sg3-utils (apt) or sg3\_utils (rpm) package, which must be
+installed on your systems. After each alert that Bacula finds for
+that drive, Bacula will emit a Job message that is either INFO, WARNING, or
+FATAL depending on the designation in the Tape Alert published by the T10
+Technical Committee on SCSI Storage Interfaces (www.t10.org). For the
+specification, please see: www.t10.org/ftp/t10/document.02/02-142r0.pdf
+
+\smallskip
+As a somewhat extreme example, if tape alerts 3, 5, and 39 are set, you
+will get the following output in your backup job.
+
+{\small
+ \begin{verbatim}
+ 17-Nov 13:37 rufus-sd JobId 1: Error: block.c:287
+ Write error at 0:17 on device "tape"
+ (/home/kern/bacula/k/regress/working/ach/drive0)
+ Vol=TestVolume001. ERR=Input/output error.
+
+ 17-Nov 13:37 rufus-sd JobId 1: Fatal error: Alert:
+ Volume="TestVolume001" alert=3: ERR=The operation has stopped because
+ an error has occurred while reading or writing data which the drive
+ cannot correct. The drive had a hard read or write error
+
+ 17-Nov 13:37 rufus-sd JobId 1: Fatal error: Alert:
+ Volume="TestVolume001" alert=5: ERR=The tape is damaged or the drive
+ is faulty. Call the tape drive supplier helpline.
The drive can no
+ longer read data from the tape
+
+ 17-Nov 13:37 rufus-sd JobId 1: Warning: Disabled Device "tape"
+ (/home/kern/bacula/k/regress/working/ach/drive0) due to tape alert=39.
+
+ 17-Nov 13:37 rufus-sd JobId 1: Warning: Alert: Volume="TestVolume001"
+ alert=39: ERR=The tape drive may have a fault. Check for availability
+ of diagnostic information and run extended diagnostics if applicable.
+ The drive may have had a failure which may be identified by stored
+ diagnostic information or by running extended diagnostics (eg Send
+ Diagnostic). Check the tape drive users manual for instructions on
+ running extended diagnostic tests and retrieving diagnostic data.
+
+ \end{verbatim}
+}
+
+Without the tape alert feature enabled, you would only get the first error
+message above, which is the error return that Bacula received when the
+error occurred. Notice also that in the above output alert number 5 is a
+critical error, which causes two things to happen: first, the tape drive
+is disabled, and second, the Job is failed.
+
+\smallskip
+If you attempt to run another Job using the Device that has been disabled,
+you will get a message similar to the following:
+
+\begin{verbatim}
+17-Nov 15:08 rufus-sd JobId 2: Warning:
+     Device "tape" requested by DIR is disabled.
+\end{verbatim}
+
+and the Job may be failed if no other drive can be found.
+
+\smallskip
+Once the problem with the tape drive has been corrected, you can
+clear the tape alerts and re-enable the device with a bconsole
+command such as the following:
+
+\begin{verbatim}
+  enable Storage=Tape
+\end{verbatim}
+
+Note, when you enable the device, the list of prior tape alerts for that
+drive will be discarded.
+
+\smallskip
+Since it is possible to miss tape alerts, Bacula maintains a temporary list
+of the last 8 alerts, and each time Bacula calls the \textbf{tapealert}
+script, it will keep up to 10 alert status codes.
Normally there will only
+be one or two alert errors for each call to the tapealert script.
+
+\smallskip
+Once a drive has one or more tape alerts, you can see them by using the
+bconsole status command as follows:
+\begin{verbatim}
+status storage=Tape
+\end{verbatim}
+which produces the following output:
+\begin{verbatim}
+Device Vtape is "tape" (/home/kern/bacula/k/regress/working/ach/drive0)
+mounted with:
+    Volume:      TestVolume001
+    Pool:        Default
+    Media type:  tape
+    Device is disabled. User command.
+    Total Bytes Read=0 Blocks Read=1 Bytes/block=0
+    Positioned at File=1 Block=0
+   Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
+      alert=Hard Error
+   Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
+      alert=Read Failure
+   Warning Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
+      alert=Diagnostics Required
+\end{verbatim}
+If you want to see the long message associated with each of the alerts,
+simply set the debug level to 10 or more and re-issue the status command:
+\begin{verbatim}
+setdebug storage=Tape level=10
+status storage=Tape
+\end{verbatim}
+\begin{verbatim}
+  ...
+   Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
+      flags=0x0 alert=The operation has stopped because an error has occurred
+      while reading or writing data which the drive cannot correct. The drive had
+      a hard read or write error
+   Critical Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001"
+      flags=0x0 alert=The tape is damaged or the drive is faulty. Call the tape
+      drive supplier helpline. The drive can no longer read data from the tape
+   Warning Alert: at 17-Nov-2016 15:08:01 Volume="TestVolume001" flags=0x1
+      alert=The tape drive may have a fault. Check for availability of diagnostic
+      information and run extended diagnostics if applicable. The drive may
+      have had a failure which may be identified by stored diagnostic information
+      or by running extended diagnostics (eg Send Diagnostic).
Check the tape
+      drive users manual for instructions on running extended diagnostic tests
+      and retrieving diagnostic data.
+  ...
+\end{verbatim}
+The next time you \textbf{enable} the Device, either by using
+\textbf{bconsole} or by restarting the Storage Daemon, all the saved alert
+messages will be discarded.
+
+\subsubsection{Handling of Alerts}
+Tape Alerts numbered 7, 8, 13, 14, 20, 22, 52, 53, and 54 will cause
+Bacula to disable the current Volume.
+
+\smallskip
+Tape Alerts numbered 14, 20, 29, 30, 31, 38, and 39 will cause Bacula to
+disable the drive.
+
+\smallskip
+Please note that certain tape alerts, such as 14, have multiple effects
+(they disable both the Volume and the drive).
+
+\subsection{New Console ACL Directives}
+By default, if a Console ACL directive is not set, Bacula will assume that
+the ACL list is empty. If the current Bacula Director configuration uses
+restricted Consoles and allows restore jobs, it is mandatory to configure
+the new directives.
+
+\subsubsection{DirectoryACL}
+\index[dir]{Directive!DirectoryACL}
+
+This directive is used to specify a list of directories that can be
+accessed by a restore session. Without this directive, a restricted
+console cannot restore any file. Multiple directory names may be
+specified by separating them with commas, and/or by specifying multiple
+DirectoryACL directives.
For example, the directive may be specified as:
+
+\footnotesize
+\begin{verbatim}
+ DirectoryACL = /home/bacula/, "/etc/", "/home/test/*"
+\end{verbatim}
+\normalsize
+
+With the above specification, the console can access the following
+directories:
+\begin{itemize}
+\item \texttt{/etc/password}
+\item \texttt{/etc/group}
+\item \texttt{/home/bacula/.bashrc}
+\item \texttt{/home/test/.ssh/config}
+\item \texttt{/home/test/Desktop/Images/something.png}
+\end{itemize}
+
+But not the following files or directories:
+\begin{itemize}
+\item \texttt{/etc/security/limits.conf}
+\item \texttt{/home/bacula/.ssh/id\_dsa.pub}
+\item \texttt{/home/guest/something}
+\item \texttt{/usr/bin/make}
+\end{itemize}
+
+If a directory starts with a Windows pattern (e.g. c:/), Bacula will
+automatically ignore the case when checking directory names.
+
+\subsection{New Bconsole ``list'' Command Behavior}
+
+The bconsole \texttt{list} commands can now be used safely from a
+restricted bconsole session. The information displayed will respect the
+ACL configured for the Console session. For example, if a restricted
+Console has access to JobA, JobB and JobC, information about JobD will not
+appear in the \texttt{list jobs} command.
+
+\subsection{New Console ACL Directives}
+\index[dir]{Directive!BackupClientACL}
+It is now possible to configure a restricted Console to distinguish Backup
+and Restore job permissions. The \texttt{BackupClientACL} directive can
+restrict backup jobs to a specific set of clients, while the
+\texttt{RestoreClientACL} directive can restrict restore jobs.
+
+{\small
+\begin{verbatim}
+# cat /opt/bacula/etc/bacula-dir.conf
+...
+
+Console {
+ Name = fd-cons             # Name of the FD Console
+ Password = yyy
+...
+
+ ClientACL = localhost-fd        # everything allowed
+ RestoreClientACL = test-fd      # restore only
+ BackupClientACL = production-fd # backup only
+}
+\end{verbatim}
+}
+
+The \texttt{ClientACL} directive takes precedence over the
+\texttt{RestoreClientACL} and the \texttt{BackupClientACL}. In the Console
+resource above, it means that the bconsole linked to the Console
+named ``fd-cons'' will be able to run:
+
+\begin{itemize}
+\item backup and restore for ``localhost-fd''
+\item backup for ``production-fd''
+\item restore for ``test-fd''
+\end{itemize}
+
+At restore time, jobs for client ``localhost-fd'', ``test-fd'' and
+``production-fd'' will be available.
+
+If \texttt{*all*} is set for \texttt{ClientACL}, backup and restore will be
+allowed for all clients, regardless of the use of \texttt{RestoreClientACL}
+or \texttt{BackupClientACL}.
+
+\subsection{Client Initiated Backup}
+\label{sec:featurecib}
+A console program such as the new \texttt{tray-monitor} or
+\texttt{bconsole} can now be configured to connect to a File Daemon. There
+are many new features available (see the New Tray Monitor section below),
+but probably the most important is the ability for the user to initiate a
+backup of his own machine. The connection established by the FD to the
+Director will be used by the Director for the backup. Thus, not only can
+clients (users) initiate backups, but a File Daemon that is NATed (i.e.
+cannot be reached by the Director) can now be backed up without advanced
+tunneling techniques, provided that the File Daemon can connect to the
+Director.
+
+\smallskip
+The flow of information is shown in the picture below:
+\bsysimageH{nat}{Client Initiated Backup Network Flow}{fig:nat3}
+
+\newpage
+\subsection{Configuring Client Initiated Backup}
+\smallskip
+In order to ensure security, there are a number of new directives
+that must be enabled in the new \texttt{tray-monitor}, the File
+Daemon, and the Director.
+
+A typical configuration might look like the following:
+
+{\small
+\begin{verbatim}
+# cat /opt/bacula/etc/bacula-dir.conf
+...
+
+Console {
+ Name = fd-cons             # Name of the FD Console
+ Password = yyy
+
+ # These commands are used by the tray-monitor; it is possible to restrict them
+ CommandACL = run, restore, wait, .status, .jobs, .clients
+ CommandACL = .storages, .pools, .filesets, .defaults, .estimate
+
+ # Adapt for your needs
+ jobacl = *all*
+ poolacl = *all*
+ clientacl = *all*
+ storageacl = *all*
+ catalogacl = *all*
+ filesetacl = *all*
+}
+\end{verbatim}
+}
+
+{\small
+\begin{verbatim}
+# cat /opt/bacula/etc/bacula-fd.conf
+...
+
+Console {               # Console to connect to the Director
+ Name = fd-cons
+ DIRPort = 9101
+ address = localhost
+ Password = "yyy"
+}
+
+Director {
+ Name = remote-cons     # Name of the tray monitor/bconsole
+ Password = "xxx"       # Password of the tray monitor/bconsole
+ Remote = yes           # Allow sending commands to the Console defined above
+}
+\end{verbatim}
+}
+
+{\small
+\begin{verbatim}
+# cat /opt/bacula/etc/bconsole-remote.conf
+....
+
+Director {
+ Name = localhost-fd
+ address = localhost    # Specify the FD address
+ DIRport = 9102         # Specify the FD Port
+ Password = "notused"
+}
+
+Console {
+ Name = remote-cons     # Name used in the auth process
+ Password = "xxx"
+}
+\end{verbatim}
+}
+
+{\small
+\begin{verbatim}
+# cat ~/.bacula-tray-monitor.conf
+Monitor {
+ Name = remote-cons
+}
+
+Client {
+ Name = localhost-fd
+ address = localhost    # Specify the FD address
+ Port = 9102            # Specify the FD Port
+ Password = "xxx"
+ Remote = yes
+}
+\end{verbatim}
+}
+
+\bsysimageH{conf-nat}{Relation Between Resources (bconsole)}{fig:nat}
+\bsysimageH{conf-nat2}{Relation Between Resources (tray-monitor)}{fig:nat2}
+
+\medskip
+A more detailed description with complete examples is available in
+chapter~\ref{TrayMonitorChapter}.
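+
+With the above resources in place, a client-initiated backup can be
+started from the remote console. A sketch of such a session follows; the
+job name shown is hypothetical and must be one permitted by the Console's
+ACL directives:
+
+\begin{verbatim}
+$ bconsole -c /opt/bacula/etc/bconsole-remote.conf
+* run job=backup-localhost-fd yes
+\end{verbatim}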
+
+\subsection{New Tray Monitor}
+
+A new tray monitor has been added to the 9.0 release. The tray monitor
+offers the following features:
+
+\begin{itemize}
+\item Director, File and Storage Daemon status page
+\item Support for the Client Initiated Backup protocol (see
+  \vref{sec:featurecib}). To use the Client Initiated Backup option from the
+  tray monitor, the Client option ``Remote'' should be checked in the
+  configuration (Fig \vref{fig:tray2}).
+\item Wizard to run new jobs (Fig \vref{fig:tray4})
+\item Display of an estimation of the number of files and the size of the
+  next backup job (Fig \vref{fig:tray4})
+\item Ability to configure the tray monitor configuration file directly from
+  the GUI (Fig \vref{fig:tray2})
+\item Ability to monitor a component and adapt the tray monitor task bar
+  icon if jobs are running
+\item TLS support
+\item Better network connection handling
+\item Default configuration file stored under
+  \texttt{\$HOME/.bacula-tray-monitor.conf}
+\item Ability to ``schedule'' jobs
+\item Available on Linux and Windows platforms
+\end{itemize}
+
+% \medskip
+% Please see chapter \ref{TrayMonitorChapter} for more details about this new
+% functionality.
+
+
+\begin{figure}[htbp]
+  \centering
+  \includegraphics[width=0.8\linewidth]{tray-monitor-status}
+  \caption{Tray Monitor Status}
+  \label{fig:tray0}
+\end{figure}
+
+\begin{figure}[htbp]
+  \centering
+  \includegraphics[width=0.9\linewidth]{tray-monitor-conf-fd}
+  \caption{Tray Monitor Client Configuration}
+  \label{fig:tray2}
+\end{figure}
+
+\begin{figure}[htbp]
+  \centering
+  \includegraphics[width=0.8\linewidth]{tray-monitor-run1}
+  \smallskip
+  \includegraphics[width=0.8\linewidth]{tray-monitor-run2}
+  \caption{Tray Monitor Run a Job}
+  \label{fig:tray4}
+\end{figure}
+
+\subsection{Schedule Jobs via the Tray Monitor}
+
+The Tray Monitor can periodically scan a specific directory (the ``Command
+Directory'') and process ``*.bcmd'' files to find jobs to run.
+
+The format of the ``file.bcmd'' command file is the following:
+\begin{verbatim}
+<component>: <command>
+<component>: <command>
+...
+
+<component> = string
+<command>   = string (bconsole command line)
+\end{verbatim}
+
+For example:
+\begin{verbatim}
+localhost-fd: run job=backup-localhost-fd level=full
+localhost-dir: run job=BackupCatalog
+\end{verbatim}
+
+The command file should contain at least one command. The component
+specified in the first part of the command line should be defined in the
+tray monitor. Once the command file is detected by the tray monitor, a
+popup is displayed to the user, and the user can cancel the job directly.
+
+\smallskip{}
+
+The file can be created with tools such as ``cron'' or the ``task
+scheduler'' on Windows. It is possible to verify the network connection at
+that time to avoid network errors.
+
+\begin{verbatim}
+#!/bin/sh
+if ping -c 1 director &> /dev/null
+then
+   echo "my-dir: run job=backup" > /path/to/commands/backup.bcmd
+fi
+\end{verbatim}
+
+%\bsysimageH{tray-monitor-status}{Tray Monitor Status}{fig:tray0}
+%\bsysimageH{tray-monitor1}{Tray Monitor Configuration}{fig:tray1}
+%\bsysimageH{tray-monitor-conf-fd}{Tray Monitor Client Configuration}{fig:tray2}
+%\bsysimageH{tray-monitor-conf-dir}{Tray Monitor Director Configuration}{fig:tray3}
+%\bsysimageH{tray-monitor-run1}{Tray Monitor Run new Job}{fig:tray4}
+% find a way to group them together
+%\bsysimageH{tray-monitor-run2}{Tray Monitor Setup new Job}{fig:tray5}
+
+
+\subsection{Accurate Option for Verify ``Volume Data'' Job}
+
+Since Bacula version 8.4.1, it has been possible to have a Verify Job
+configured with \texttt{level=Data} that will reread all records from a job
+and optionally check the size and the checksum of all files.
+
+\smallskip
+Starting with Bacula version 9.0, it is now possible to use the
+\texttt{accurate} option to check catalog records at the same time.
A Verify job with
+\texttt{level=Data} and \texttt{accurate=yes} can replace the
+\texttt{level=VolumeToCatalog} option.
+
+For more information on how to set up a Verify Data job, see
+\vref{label:verifyvolumedata}.
+
+To run a Verify Job with the \texttt{accurate} option, it is possible to
+set the option in the Job definition, or to use \texttt{accurate=yes} on
+the command line.
+
+\begin{verbatim}
+* run job=VerifyData jobid=10 accurate=yes
+\end{verbatim}
+
+\subsection{FileDaemon Saved Messages Resource Destination}
+
+It is now possible to send the list of all saved files to a Messages
+resource with the \texttt{saved} message type. It is not recommended to
+send this flow of information to the Director and/or the catalog when the
+client FileSet is large. To avoid side effects, the \texttt{all}
+keyword does not include the \texttt{saved} message type; the
+\texttt{saved} message type must be explicitly set.
+
+\begin{verbatim}
+# cat /opt/bacula/etc/bacula-fd.conf
+...
+Messages {
+ Name = Standard
+ director = mydirector-dir = all, !terminate, !restored, !saved
+ append = /opt/bacula/working/bacula-fd.log = all, saved, restored
+}
+\end{verbatim}
+
+\subsection{Minor Enhancements}
+
+\subsubsection{New Bconsole ".estimate" Command}
+
+The new \texttt{.estimate} command can be used to get statistics about the
+next run of a job. The command uses the database to approximate the size
+and the number of files of the next job. On a PostgreSQL database, the
+command uses a regression slope to compute the values. On SQLite or MySQL,
+where these statistical functions are not available, the command uses a
+simple ``average'' estimation. The correlation number is given for each
+value.
+
+{\small
+\begin{verbatim}
+*.estimate job=backup
+level=I
+nbjob=0
+corrbytes=0
+jobbytes=0
+corrfiles=0
+jobfiles=0
+duration=0
+job=backup
+
+*.estimate job=backup level=F
+level=F
+nbjob=1
+corrbytes=0
+jobbytes=210937774
+corrfiles=0
+jobfiles=2545
+duration=0
+job=backup
+\end{verbatim}
+}
+
+\subsubsection{Traceback and Lockdump}
+
+After the reception of a signal, \texttt{traceback} and \texttt{lockdump}
+information are now stored in the same file.
+
+\subsection{Bconsole ``list jobs'' command options}
+
+The \texttt{list jobs} bconsole command now accepts new command line options:
+
+\begin{itemize}
+\item \textbf{joberrors} Display jobs with JobErrors
+\item \textbf{jobstatus=T} Display jobs with the specified status code
+\item \textbf{client=cli} Display jobs for a specified client
+\item \textbf{order=asc/desc} Change the output order of the job list. The
+  jobs are sorted by start time and JobId; the sort can use ascending (asc)
+  or descending (desc, the default) order.
+\end{itemize}
+
+\subsection{New Bconsole ``Tee All'' Command}
+
+The ``@tall'' command allows logging all input/output from a console session.
+
+\begin{verbatim}
+*@tall /tmp/log
+*st dir
+...
+\end{verbatim}
+
+\subsection{New Job Edit Codes \%I}
+In various places such as RunScripts, you now have access to \%I, which
+gives the JobId of the copy or migration job started by a migrate job.
+
+\begin{verbatim}
+Job {
+  Name = Migrate-Job
+  Type = Migrate
+  ...
+  RunAfter = "echo New JobId is %I"
+}
+\end{verbatim}
+
+
+\subsection*{.api version 2}
+
+In Bacula version 9.0 and later, we introduced a new .api version
+to help external tools parse various Bacula bconsole output.
+
+% waa - 20150317 - this section needs just a little more to explain what the "43" in "s43" mean. Perhaps
+%                  if it is not a good place to list the possibilities here, then list where a reference
+%                  is. Also, I think .api 2 ... Means "use API version 2" but that should be stated too
+
+The \texttt{api\_opts} option can use the following arguments:
+\begin{itemize}
+\item [C] Clear current options
+\item [tn] Use a specific time format (1 ISO format, 2 Unix Timestamp, 3 Default Bacula time format)
+\item [sn] Use a specific separator between items (new line by default).
+\item [Sn] Use a specific separator between objects (new line by default).
+\item [o] Convert all keywords to lowercase and convert all non \textsl{isalpha} characters to \_
+\end{itemize}
+
+% waa - 20150317 - I think there should either be more output listed here to give a better feeling
+%                  or, perhaps another output listing for different .status commands
+
+\begin{verbatim}
+  .api 2 api_opts=t1s43S35
+  .status dir running
+==================================
+jobid=10
+job=AJob
+...
+\end{verbatim}
+
+\subsection*{New Debug Options}
+
+In Bacula version 9.0 and later, we introduced a new \texttt{options}
+parameter for the \texttt{setdebug} bconsole command.
+
+\smallskip{}
+
+The following arguments to the new \texttt{options} parameter are
+available to control debug functions.
+
+\begin{itemize}
+\item [0] Clear debug flags
+\item [i] Turn off, ignore bwrite() errors on restore on File Daemon
+\item [d] Turn off decomp of BackupRead() streams on File Daemon
+\item [t] Turn on timestamps in traces
+\item [T] Turn off timestamps in traces
+
+% waa - 20150306 - does this "c" item mean to say "Truncate trace file if one exists, otherwise append to it" ???
+\item [c] Truncate trace file if trace file is activated
+
+\item [l] Turn on recording events on P() and V()
+\item [p] Turn on the display of the event ring when doing a bactrace
+\end{itemize}
+
+\smallskip{}
+
+The following command will enable debugging for the File Daemon, truncate
+an existing trace file, and turn on timestamps when writing to the trace
+file.
+
+\begin{verbatim}
+* setdebug level=10 trace=1 options=ct fd
+\end{verbatim}
+
+\smallskip{}
+
+It is now possible to use a \textsl{class} of debug messages called
+\texttt{tags} to control the debug output of Bacula daemons.
+
+\begin{itemize}
+\item [all] Display all debug messages
+\item [bvfs] Display BVFS debug messages
+\item [sql] Display SQL related debug messages
+\item [memory] Display memory and poolmem allocation messages
+\item [scheduler] Display scheduler related debug messages
+\end{itemize}
+
+\begin{verbatim}
+* setdebug level=10 tags=bvfs,sql,memory
+* setdebug level=10 tags=!bvfs
+
+# bacula-dir -t -d 200,bvfs,sql
+\end{verbatim}
+
+The \texttt{tags} option is composed of a list of tags. Tags are separated
+by ``,'' or ``+'' or ``-'' or ``!''. To disable a specific tag, use ``-''
+or ``!'' in front of the tag. Note that more tags are planned for future
+versions.
+
+%\LTXtable{\linewidth}{table_debugtags}
+
+\subsection{Communication Line Compression}
+Bacula version 9.0.0 and later now includes communication line
+compression. It is turned on by default: if the two communicating Bacula
+components (Dir, FD, SD, bconsole) are both version 6.6.0 or greater,
+communication line compression will be enabled.
If for some reason you do not want
+communication line compression, you may disable it with the
+following directive:
+
+\begin{verbatim}
+Comm Compression = no
+\end{verbatim}
+
+This directive can appear in the following resources:
+\begin{verbatim}
+bacula-dir.conf: Director resource
+bacula-fd.conf:  Client (or FileDaemon) resource
+bacula-sd.conf:  Storage resource
+bconsole.conf:   Console resource
+bat.conf:        Console resource
+\end{verbatim}
+
+\smallskip
+In many cases, the volume of data transmitted across the communications
+line can be reduced by a factor of three when this directive is enabled
+(the default). If the compression is not effective, Bacula turns it off on
+a record-by-record basis.
+
+\smallskip
+If you are backing up data that is already compressed, the comm line
+compression will not be effective, and you are likely to end up with an
+average compression ratio that is very small. In this case, Bacula reports
+{\bf None} in the Job report.
+
+\subsection{Deduplication Optimized Volumes}
+This version of Bacula includes a new alternative (or additional)
+volume format that optimizes the placement of files so
+that an underlying deduplicating filesystem such as ZFS
+can optimally deduplicate the backup data that is written
+by Bacula. These are called Deduplication Optimized Volumes,
+or Aligned Volumes for short. The details of how to use this
+feature and its considerations are in the
+Deduplication Optimized Volumes whitepaper.
+
+\smallskip
+This feature is available if you have Bacula Community produced binaries
+and the Aligned Volumes plugin.
+
+\subsection{New Message Identification Format}
+We are starting to add unique message identifiers to each message (other
+than debug and the Job report) that Bacula prints. At the current time
+only two files in the Storage Daemon have these message identifiers, and
+over time with subsequent releases we will modify all messages.
+
+\smallskip
+The message identifier will be kept unique for each message, and once
+assigned to a message it will not change even if the text of the message
+changes. This means that the message identifier will be the same no matter
+what language the text is displayed in, and more importantly, it will
+allow us to make listings of the messages with, in some cases, additional
+explanations or instructions on how to correct the problem. All this will
+take several years, since it is a lot of work and requires some new
+programs that are not yet written to manage these message identifiers.
+
+\smallskip
+The format of the message identifier is:
+
+\begin{verbatim}
+   [AAnnnn]
+\end{verbatim}
+where A is an upper case character and nnnn is a four digit number, where
+the first character indicates the software component (daemon), the second
+letter indicates the severity, and the number is unique for a given
+component and severity.
+
+\smallskip
+For example:
+
+\begin{verbatim}
+   [SF0001]
+\end{verbatim}
+
+The first character, representing the component, is at the current time
+one of the following:
+
+\begin{verbatim}
+  S   Storage daemon
+  D   Director
+  F   File daemon
+\end{verbatim}
+
+\smallskip
+The second character, representing the severity or level, can be:
+
+\begin{verbatim}
+  A   Abort
+  F   Fatal
+  E   Error
+  W   Warning
+  S   Security
+  I   Info
+  D   Debug
+  O   OK (i.e. operation completed normally)
+\end{verbatim}
+
+So in the example above, [SF0001] indicates that it is a message id,
+because of the brackets and because it is at the beginning of the message,
+and that it was generated by the Storage daemon as a fatal error.
+\smallskip
+As mentioned above, it will take some time to implement these message ids
+everywhere, and over time we may add more component letters and more
+severity levels as needed.
+
+
+\chapter{New Features in 7.4.0}
+This chapter presents the new features that have been added to
+the various versions of Bacula.
+
+\section{New Features in 7.4.3}
+\subsection{RunScripts}
+There are two new RunScript shortcut directives implemented in
+the Director. They are:
+
+\begin{verbatim}
+Job {
+  ...
+  ConsoleRunBeforeJob = "console-command"
+  ...
+}
+\end{verbatim}
+
+\begin{verbatim}
+Job {
+  ...
+  ConsoleRunAfterJob = "console-command"
+  ...
+}
+\end{verbatim}
+
+As with other RunScript commands, you may have multiple copies
+of either the {\bf ConsoleRunBeforeJob} or the {\bf ConsoleRunAfterJob}
+in the same Job resource definition.
+\smallskip
+Please note that not all console commands are permitted, and that
+if you run a console command that requires a response, the results
+are undetermined (i.e. it will probably fail).
+
+
+
+\section{New Features in 7.4.0}
+\subsection{Verify Volume Data}
+
+It is now possible to have a Verify Job configured with \texttt{level=Data} to
+reread all records from a job and optionally check the size and the checksum
+of all files.
+
+\begin{verbatim}
+# Verify Job definition
+Job {
+  Name = VerifyData
+  Level = Data
+  Client = 127.0.0.1-fd    # Use local file daemon
+  FileSet = Dummy          # Will be adapted during the job
+  Storage = File           # Should be the right one
+  Messages = Standard
+  Pool = Default
+}
+
+# Backup Job definition
+Job {
+  Name = MyBackupJob
+  Type = Backup
+  Client = windows1
+  FileSet = MyFileSet
+  Pool = 1Month
+  Storage = File
+}
+
+FileSet {
+  Name = MyFileSet
+  Include {
+    Options {
+      Verify = s5
+      Signature = MD5
+    }
+    File = /
+  }
+}
+\end{verbatim}
+
+To run the Verify job, it is possible to use the ``jobid'' parameter of the ``run'' command.
+
+\begin{verbatim}
+*run job=VerifyData jobid=10
+Run Verify Job
+JobName: VerifyData
+Level: Data
+Client: 127.0.0.1-fd
+FileSet: Dummy
+Pool: Default (From Job resource)
+Storage: File (From Job resource)
+Verify Job: MyBackupJob.2015-11-11_09.41.55_03
+Verify List: /opt/bacula/working/working/VerifyVol.bsr
+When: 2015-11-11 09:47:38
+Priority: 10
+OK to run?
(yes/mod/no): yes
+Job queued. JobId=14
+
+...
+
+11-Nov 09:46 my-dir JobId 14: Bacula 7.4.0 (13Nov15):
+  Build OS: x86_64-unknown-linux-gnu archlinux
+  JobId: 14
+  Job: VerifyData.2015-11-11_09.46.29_03
+  FileSet: MyFileSet
+  Verify Level: Data
+  Client: 127.0.0.1-fd
+  Verify JobId: 10
+  Verify Job:
+  Start time: 11-Nov-2015 09:46:31
+  End time: 11-Nov-2015 09:46:32
+  Files Expected: 1,116
+  Files Examined: 1,116
+  Non-fatal FD errors: 0
+  SD Errors: 0
+  FD termination status: Verify differences
+  SD termination status: OK
+  Termination: Verify Differences
+\end{verbatim}
+
+The current Verify Data implementation requires specifying the correct Storage
+resource in the Verify job. The Storage resource can be changed with the bconsole
+command line and with the menu.
+
+\subsection{Bconsole ``list jobs'' command options}
+
+The \texttt{list jobs} bconsole command now accepts new command line options:
+
+\begin{itemize}
+\item \textbf{joberrors} Display jobs with JobErrors
+\item \textbf{jobstatus=T} Display jobs with the specified status code
+\item \textbf{client=cli} Display jobs for a specified client
+\item \textbf{order=asc/desc} Change the sort order of the job list. Jobs are
+  sorted by start time and JobId; the sort can be ascending (asc) or
+  descending (desc, the default).
+\end{itemize}
+
+\subsection{Minor Enhancements}
+
+\subsubsection{New Bconsole ``Tee All'' Command}
+
+The ``@tall'' command allows logging all input/output from a console session.
+
+\begin{verbatim}
+*@tall /tmp/log
+*st dir
+...
+\end{verbatim}
+
+\subsection{Windows Encrypted File System (EFS) Support}
+
+The Bacula Windows File Daemon for the community version
+7.4.0 now automatically supports files and
+directories that are encrypted on the Windows filesystem.
+
+\subsection{SSL Connections to MySQL}
+
+There are four new Directives for the Catalog resource in the
+{\bf bacula-dir.conf} file that you can use to encrypt the
+communications between Bacula and MySQL for additional
+security.
+
+\begin{description}
+\item [dbsslkey] takes a string variable that specifies the filename of an
+SSL key file.
+\item [dbsslcert] takes a string variable that specifies the filename of an
+SSL certificate file.
+\item [dbsslca] takes a string variable that specifies the filename of an
+SSL CA (certificate authority) certificate.
+\item [dbsslcipher] takes a string variable that specifies the cipher
+to be used.
+\end{description}
+
+\subsection{Max Virtual Full Interval}
+This is a new Job resource directive that specifies the maximum time in
+seconds between Virtual Full jobs. It is much like the
+Max Full Interval directive but applies to Virtual Full jobs rather
+than Full jobs.
+
+\subsection{New List Volumes Output}
+The {\bf list} and {\bf llist} commands have been modified so that when
+listing Volumes a new pseudo field {\bf expiresin} will be printed. This
+field is the number of seconds in which the retention period will expire.
+If the retention period has already expired, the value will be zero. Any
+non-zero value means that the retention period is still in effect.
+
+An example with many columns shortened for display purposes is:
+
+\begin{verbatim}
+*list volumes
+Pool: Default
++----+---------------+-----------+---------+-------------+-----------+
+| id | volumename    | volstatus | enabled | volbytes    | expiresin |
++----+---------------+-----------+---------+-------------+-----------+
+|  1 | TestVolume001 | Full      |       1 | 249,940,696 |         0 |
+|  2 | TestVolume002 | Full      |       1 | 249,961,704 |         1 |
+|  3 | TestVolume003 | Full      |       1 | 249,961,704 |         2 |
+|  4 | TestVolume004 | Append    |       1 | 127,367,896 |         3 |
++----+---------------+-----------+---------+-------------+-----------+
+\end{verbatim}
+
+%%
+%%
+\chapter{New Features in 7.2.0}
+This chapter presents the new features that have been added to
+the various versions of Bacula.
+
+\section{New Features in 7.2.0}
+
+\subsection{New Job Edit Codes \%E \%R}
+In various places such as RunScripts, you now have access to \%E to get the
+number of non-fatal errors for the current Job and \%R to get the number of
+bytes read from disk or from the network during a job.
+
+\subsection{Enable/Disable commands}
+The \textbf{bconsole} \textbf{enable} and \textbf{disable} commands have
+been extended from enabling/disabling Jobs to include Clients, Schedules,
+and Storage devices. Examples:
+
+\begin{verbatim}
+   disable Job=NightlyBackup Client=Windows-fd
+\end{verbatim}
+
+will disable the Job named \textbf{NightlyBackup} as well as the
+client named \textbf{Windows-fd}.
+
+\begin{verbatim}
+   disable Storage=LTO-changer Drive=1
+\end{verbatim}
+
+will disable the first drive in the autochanger named \textbf{LTO-changer}.
+
+Please note that doing a \textbf{reload} command will set any values
+changed by the enable/disable commands back to the values in the
+bacula-dir.conf file.
+
+The Client and Schedule resources in the bacula-dir.conf file now permit
+the directive Enable = yes or Enable = no.
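+
+For example (a minimal sketch; the client name and address are illustrative,
+not from a real configuration), a Client that should stay disabled across
+Director restarts and \textbf{reload} commands can be declared disabled
+directly in the configuration file:
+
+\begin{verbatim}
+Client {
+  Name = Windows-fd
+  Address = windows1.example.com
+  Password = "notused"
+  Enable = no
+}
+\end{verbatim}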
+
+
+\section{Bacula 7.2}
+
+\subsection{Snapshot Management}
+
+Bacula 7.2 is now able to handle Snapshots on Linux/Unix
+systems. Snapshots can be automatically created and used to back up files. It is
+also possible to manage Snapshots from Bacula's \texttt{bconsole} tool through a
+single interface.
+
+\subsubsection{Snapshot Backends}
+
+The following Snapshot backends are supported with Bacula version 7.2:
+
+\begin{itemize}
+\item BTRFS
+\item ZFS
+\item LVM\footnote{Some restrictions described in \vref{LVMBackend} apply to
+  the LVM backend}
+\end{itemize}
+
+By default, Snapshots are mounted (or directly available) under the
+\textbf{.snapshots} directory on the root filesystem. (On ZFS, the default
+is \textbf{.zfs/snapshots}.)
+
+\smallskip{}
+
+The Snapshot backend program is called \textbf{bsnapshot} and is available in
+the \textbf{bacula-enterprise-snapshot} package. In order to use the Snapshot
+Management feature, the package must be installed on the Client.
+
+\smallskip{}
+\label{bsnapshotconf}
+The \textbf{bsnapshot} program can be configured using the
+\texttt{/opt/bacula/etc/bsnapshot.conf} file. The following parameters can
+be adjusted in the configuration file:
+
+\begin{itemize}
+\item \texttt{trace=} Specify a trace file
+\item \texttt{debug=} Specify a debug level
+\item \texttt{sudo=} Use sudo to run commands
+\item \texttt{disabled=} Disable snapshot support
+\item \texttt{retry=} Configure the number of retries for some operations
+\item \texttt{snapshot\_dir=} Use a custom name for the Snapshot directory. (\textbf{.SNAPSHOT}, \textbf{.snapdir}, etc.)
+
+\item \texttt{lvm\_snapshot\_size=} Specify a custom snapshot size for a given LVM volume
+\end{itemize}
+
+\begin{verbatim}
+# cat /opt/bacula/etc/bsnapshot.conf
+trace=/tmp/snap.log
+debug=10
+lvm_snapshot_size=/dev/ubuntu-vg/root:5%
+\end{verbatim}
+
+
+\subsubsection{Application Quiescing}
+
+When using Snapshots, it is very important to quiesce applications that are
+running on the system. The simplest way to quiesce an application is to stop
+it. Usually, taking the Snapshot is very fast, and the downtime is only a
+couple of seconds. If downtime is not possible and/or the application provides
+a way to quiesce, a more advanced script can be used. An example is
+described in \vref{SnapRunScriptExample}.
+
+\subsubsection{New Director Directives}
+
+The use of the Snapshot Engine on the FileDaemon is determined by the
+new \textbf{Enable Snapshot} FileSet directive. The default is \textbf{no}.
+
+\begin{verbatim}
+FileSet {
+  Name = LinuxHome
+
+  Enable Snapshot = yes
+
+  Include {
+    Options { Compression = LZO }
+
+    File = /home
+  }
+}
+\end{verbatim}
+
+By default, Snapshots are deleted from the Client at the end of the backup. To
+keep Snapshots on the Client and record them in the Catalog for a given
+period, it is possible to use the \textbf{Snapshot Retention} directive in the
+Client or in the Job resource. The default value is 0 seconds. If, for a given Job,
+both Client and Job \textbf{Snapshot Retention} directives are set, the Job
+directive will be used.
+
+\begin{verbatim}
+Client {
+   Name = linux1
+   ...
+
+   Snapshot Retention = 5 days
+}
+\end{verbatim}
+
+To automatically prune Snapshots, it is possible to use the following RunScript
+command:
+
+\begin{verbatim}
+Job {
+   ...
+   Client = linux1
+   ...
+   RunScript {
+     RunsOnClient = no
+     Console = "prune snapshot client=%c yes"
+     RunsAfter = yes
+   }
+}
+\end{verbatim}
+
+
+\smallskip{}
+
+In RunScripts, the \texttt{AfterSnapshot} keyword for the \texttt{RunsWhen} directive will
+allow a command to be run just after the Snapshot creation. \texttt{AfterSnapshot} is a
+synonym for the \texttt{AfterVSS} keyword.
+
+\label{SnapRunScriptExample}
+\begin{verbatim}
+Job {
+  ...
+  RunScript {
+    Command = "/etc/init.d/mysql start"
+    RunsWhen = AfterSnapshot
+    RunsOnClient = yes
+  }
+  RunScript {
+    Command = "/etc/init.d/mysql stop"
+    RunsWhen = Before
+    RunsOnClient = yes
+  }
+}
+\end{verbatim}
+
+\subsubsection{Job Output Information}
+
+Information about Snapshots is displayed in the Job output. The list of all
+devices used by the Snapshot Engine is displayed, and the Job summary
+indicates whether Snapshots were available.
+
+\begin{verbatim}
+JobId 3: Create Snapshot of /home/build
+JobId 3: Create Snapshot of /home/build/subvol
+JobId 3: Delete snapshot of /home/build
+JobId 3: Delete snapshot of /home/build/subvol
+...
+JobId 3: Bacula 127.0.0.1-dir 7.2.0 (23Jul15):
+  Build OS: x86_64-unknown-linux-gnu archlinux
+  JobId: 3
+  Job: Incremental.2015-02-24_11.20.27_08
+  Backup Level: Full
+...
+  Snapshot/VSS: yes
+...
+ Termination: Backup OK +\end{verbatim} + + +\subsubsection{New ``snapshot'' Bconsole Commands} + +The new \textbf{snapshot} command will display by default the following menu: +\begin{verbatim} +*snapshot +Snapshot choice: + 1: List snapshots in Catalog + 2: List snapshots on Client + 3: Prune snapshots + 4: Delete snapshot + 5: Update snapshot parameters + 6: Update catalog with Client snapshots + 7: Done +Select action to perform on Snapshot Engine (1-7): +\end{verbatim} + +The \textbf{snapshot} command can also have the following parameters: +\begin{verbatim} +[client= | job= | jobid=] + [delete | list | listclient | prune | sync | update] +\end{verbatim} + +It is also possible to use traditional \texttt{list}, \texttt{llist}, +\texttt{update}, \texttt{prune} or \texttt{delete} commands on Snapshots. + +\begin{verbatim} +*llist snapshot jobid=5 + snapshotid: 1 + name: NightlySave.2015-02-24_12.01.00_04 + createdate: 2015-02-24 12:01:03 + client: 127.0.0.1-fd + fileset: Full Set + jobid: 5 + volume: /home/.snapshots/NightlySave.2015-02-24_12.01.00_04 + device: /home/btrfs + type: btrfs + retention: 30 + comment: +\end{verbatim} + +\begin{verbatim} +* snapshot listclient +Automatically selected Client: 127.0.0.1-fd +Connecting to Client 127.0.0.1-fd at 127.0.0.1:8102 +Snapshot NightlySave.2015-02-24_12.01.00_04: + Volume: /home/.snapshots/NightlySave.2015-02-24_12.01.00_04 + Device: /home + CreateDate: 2015-02-24 12:01:03 + Type: btrfs + Status: OK + Error: +\end{verbatim} + +\smallskip{} + +With the \textsl{Update catalog with Client snapshots} option (or +\textbf{snapshot sync}), the Director contacts the FileDaemon, lists snapshots +of the system and creates catalog records of the Snapshots. 
+ +\begin{verbatim} +*snapshot sync +Automatically selected Client: 127.0.0.1-fd +Connecting to Client 127.0.0.1-fd at 127.0.0.1:8102 +Snapshot NightlySave.2015-02-24_12.35.47_06: + Volume: /home/.snapshots/NightlySave.2015-02-24_12.35.47_06 + Device: /home + CreateDate: 2015-02-24 12:35:47 + Type: btrfs + Status: OK + Error: +Snapshot added in Catalog + +*llist snapshot + snapshotid: 13 + name: NightlySave.2015-02-24_12.35.47_06 + createdate: 2015-02-24 12:35:47 + client: 127.0.0.1-fd + fileset: + jobid: 0 + volume: /home/.snapshots/NightlySave.2015-02-24_12.35.47_06 + device: /home + type: btrfs + retention: 0 + comment: +\end{verbatim} + +% list +% llist +% prune +% delete +% update snapshot +% sync + +\subsubsection{LVM Backend Restrictions} +\label{LVMBackend} + +LVM Snapshots are quite primitive compared to ZFS, BTRFS, NetApp and other +systems. For example, it is not possible to use Snapshots if the Volume Group +(VG) is full. The administrator must keep some free space in the VG +to create Snapshots. The amount of free space required depends on the activity of the +Logical Volume (LV). \textbf{bsnapshot} uses 10\% of the LV by +default. This number can be configured per LV in the +\textbf{bsnapshot.conf} file. + +\begin{verbatim} +[root@system1]# vgdisplay + --- Volume group --- + VG Name vg_ssd + System ID + Format lvm2 +... + VG Size 29,81 GiB + PE Size 4,00 MiB + Total PE 7632 + Alloc PE / Size 125 / 500,00 MiB + Free PE / Size 7507 / 29,32 GiB +... +\end{verbatim} + +It is also not advisable to leave snapshots on the LVM backend. Having multiple +snapshots of the same LV on LVM will slow down the system. + +\subsubsection{Debug Options} + +To get low level information about the Snapshot Engine, the debug tag ``snapshot'' +should be used in the \textbf{setdebug} command. 
+
+\begin{verbatim}
+* setdebug level=10 tags=snapshot client
+* setdebug level=10 tags=snapshot dir
+\end{verbatim}
+
+\subsection{Minor Enhancements}
+\subsubsection{Storage Daemon Reports Disk Usage}
+
+The \texttt{status storage} command now reports the space available on disk devices:
+\begin{verbatim}
+...
+Device status:
+
+Device file: "FileStorage" (/bacula/arch1) is not open.
+   Available Space=5.762 GB
+==
+
+Device file: "FileStorage1" (/bacula/arch2) is not open.
+   Available Space=5.862 GB
+\end{verbatim}
+
+\subsection{Data Encryption Cipher Configuration}
+Bacula Enterprise version 8.0 and later now allows configuration of the data
+encryption cipher and the digest algorithm. Previously, the cipher was forced to AES 128,
+but it is now possible to choose between the following ciphers:
+
+\begin{itemize}
+\item AES128 (default)
+\item AES192
+\item AES256
+\item blowfish
+\end{itemize}
+
+The digest algorithm was set to SHA1 or SHA256 depending on the local OpenSSL
+options. We advise you not to modify the PkiDigest default setting. Please
+refer to the OpenSSL documentation to understand the pros and cons regarding these options.
+
+\begin{verbatim}
+  FileDaemon {
+    ...
+    PkiCipher = AES256
+  }
+\end{verbatim}
+
+\subsubsection*{New Option Letter ``M'' for Accurate Directive in FileSet}
+
+% waa - 20150317 - is 8.0.5 correct here?
+Added in version 8.0.5, the new ``M'' option letter for the Accurate directive
+in the FileSet Options block allows comparing the modification time and/or
+creation time against the last backup timestamp. This is in contrast to the
+existing option letters ``m'' and/or ``c'', mtime and ctime, which are checked
+against the stored catalog values, which can vary across different machines
+when using the BaseJob feature.
+
+The advantage of the new ``M'' option letter for Jobs that refer to BaseJobs is
+that it will instruct Bacula to back up files based on the last backup time, which
+is more useful because the mtime/ctime timestamps may differ on various Clients,
+causing files to be needlessly backed up.
+
+\smallskip{}
+
+\begin{verbatim}
+  Job {
+    Name = USR
+    Level = Base
+    FileSet = BaseFS
+...
+  }
+
+  Job {
+    Name = Full
+    FileSet = FullFS
+    Base = USR
+...
+  }
+
+  FileSet {
+    Name = BaseFS
+    Include {
+      Options {
+        Signature = MD5
+      }
+      File = /usr
+    }
+  }
+
+  FileSet {
+    Name = FullFS
+    Include {
+      Options {
+        Accurate = Ms  # check for mtime/ctime of last backup timestamp and Size
+        Signature = MD5
+      }
+      File = /home
+      File = /usr
+    }
+  }
+\end{verbatim}
+
+\subsection{Read Only Storage Devices}
+This version of Bacula allows you to define a Storage daemon device
+to be read-only. If the {\bf Read Only} directive is specified and
+enabled, the drive can only be used for read operations.
+The {\bf Read Only} directive can be defined in any bacula-sd.conf
+Device resource, and is most useful for reserving one or more
+drives for restores. An example is:
+
+\begin{verbatim}
+Read Only = yes
+\end{verbatim}
+
+\subsection{New Resume Command}
+The new \texttt{resume} command does exactly the same thing as a
+{\bf restart} command, but for some users the
+name may be more logical because in general the
+{\bf restart} command is used to resume running
+a Job that was incomplete.
+
+\subsection{New Prune ``Expired'' Volume Command}
+In Bacula Enterprise 6.4, it is now possible to prune all volumes
+(from a pool, or globally) that are ``expired''. This option can be
+scheduled after or before the backup of the catalog and can be
+combined with the \texttt{Truncate On Purge} option. The \texttt{prune expired volume} command may
+be used instead of the \texttt{manual\_prune.pl} script.
+
+\begin{verbatim}
+* prune expired volume
+
+* prune expired volume pool=FullPool
+\end{verbatim}
+
+To schedule this option automatically, it can be added to the Catalog backup job
+definition.
+
+\begin{verbatim}
+ Job {
+   Name = CatalogBackup
+   ...
+   RunScript {
+     Console = "prune expired volume yes"
+     RunsWhen = Before
+   }
+ }
+\end{verbatim}
+
+
+\subsection{New Job Edit Codes \%P \%C}
+In various places such as RunScripts, you now have access to \%P to get the
+current Bacula process ID (PID) and \%C to know if the current job is a
+cloned job.
+
+\subsection{Enhanced Status and Error Messages}
+We have enhanced the Storage daemon status output to be more
+readable. This is important when there are a large number of
+devices. In addition to formatting changes, it also includes more
+details on which devices are reading and writing.
+
+A number of error messages have been enhanced to have more specific
+data on what went wrong.
+
+If a file changes size while being backed up, the old and new sizes
+are reported.
+
+\subsection{Miscellaneous New Features}
+\begin{itemize}
+\item Allow unlimited line lengths in .conf files (previously limited
+to 2000 characters).
+
+\item Allow /dev/null in ChangerCommand to indicate a Virtual Autochanger.
+
+\item Add a --fileprune option to the manual\_prune.pl script.
+
+\item Add a -m option to make\_catalog\_backup.pl to do maintenance
+on the catalog.
+
+\item Safer code that cleans up the working directory when starting
+the daemons. It limits what files can be deleted, and hence enhances
+security.
+
+\item Added a new .ls command in bconsole to permit browsing a client's
+filesystem.
+
+\item Fixed a number of bugs, including some obscure seg faults, and a
+race condition that occurred infrequently when running Copy, Migration,
+or Virtual Full backups.
+
+\item Upgraded to a newer version of Qt4 for bat. All indications
+are that this will improve bat's stability on Windows machines.
+
+\item The Windows installers now detect and refuse to install on
+an OS that does not match the 32/64 bit value of the installer.
+\end{itemize}
+
+\subsection{FD Storage Address}
+
+When the Director is behind a NAT, in a WAN area, to connect to
+% the FileDaemon or
+the StorageDaemon, the Director uses an ``external'' IP address,
+and the FileDaemon should use an ``internal'' IP address to contact the
+StorageDaemon.
+
+The normal way to handle this situation is to use a canonical name such as
+``storage-server'' that will be resolved on the Director side as the WAN
+address and on the Client side as the LAN address. It is now possible to
+configure this parameter using the new directive \texttt{FDStorageAddress} in
+the Storage or Client resource.
+
+
+%%\bsysimageH{BackupOverWan1}{Backup Over WAN}{figbs6:fdstorageaddress}
+% \label{fig:fdstorageaddress}
+
+\begin{verbatim}
+Storage {
+     Name = storage1
+     Address = 65.1.1.1
+     FD Storage Address = 10.0.0.1
+     SD Port = 9103
+     ...
+}
+\end{verbatim}
+
+% # or in the Client resource
+%
+
+\begin{verbatim}
+  Client {
+      Name = client1
+      Address = 65.1.1.2
+      FD Storage Address = 10.0.0.1
+      FD Port = 9102
+      ...
+  }
+\end{verbatim}
+
+Note that using the Client \texttt{FDStorageAddress} directive will not allow
+the use of multiple Storage Daemons; all Backup or Restore requests will be
+sent to the specified \texttt{FDStorageAddress}.
+
+\subsection{Maximum Concurrent Read Jobs}
+This is a new directive that can be used in the {\bf bacula-dir.conf} file
+in the Storage resource. The main purpose is to limit the number
+of concurrent Copy, Migration, and VirtualFull jobs so that
+they don't monopolize all the Storage drives, causing a deadlock situation
+where all the drives are allocated for reading but none remain for
+writing. This deadlock situation can occur when running multiple
+simultaneous Copy, Migration, and VirtualFull jobs.
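+
+As an illustration (the Storage name and the value chosen are hypothetical),
+the directive is placed in the Director's Storage resource:
+
+\begin{verbatim}
+Storage {
+  Name = Autochanger-1
+  ...
+  Maximum Concurrent Read Jobs = 2
+}
+\end{verbatim}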
+
+\smallskip
+The default value is set to 0 (zero), which means there is no
+limit on the number of read jobs. Note, limiting the read jobs
+does not apply to Restore jobs, which are normally started by
+hand. A reasonable value for this directive is one half the number
+of drives that the Storage resource has, rounded down. Doing so
+will leave the same number of drives for writing and will generally
+avoid overcommitting drives and a deadlock.
+
+\subsection{Incomplete Jobs}
+During a backup, if the Storage daemon experiences a disconnection
+from the File daemon (normally a comm line problem
+or possibly an FD failure), then under conditions that the SD determines
+to be safe, it will mark the failed job as Incomplete rather than
+Failed. This is done only if there is sufficient valid backup
+data that was written to the Volume. The advantage of an Incomplete
+job is that it can be restarted by the new bconsole {\bf restart}
+command from the point where it left off rather than from the
+beginning of the job, as is the case with a cancel.
+
+\subsection{The Stop Command}
+Bacula has been enhanced to provide a {\bf stop} command,
+very similar to the {\bf cancel} command, with the main difference
+that the Job that is stopped is marked as Incomplete so that
+it can be restarted later by the {\bf restart} command where
+it left off (see below). The {\bf stop} command with no
+arguments will, like the cancel command, prompt you with the
+list of running jobs, allowing you to select one, which might
+look like the following:
+
+\begin{verbatim}
+*stop
+Select Job:
+  1: JobId=3 Job=Incremental.2012-03-26_12.04.26_07
+  2: JobId=4 Job=Incremental.2012-03-26_12.04.30_08
+  3: JobId=5 Job=Incremental.2012-03-26_12.04.36_09
+Choose Job to stop (1-3): 2
+2001 Job "Incremental.2012-03-26_12.04.30_08" marked to be stopped.
+3000 JobId=4 Job="Incremental.2012-03-26_12.04.30_08" marked to be stopped.
+\end{verbatim}
+
+\subsection{The Restart Command}
+The new {\bf restart} command allows console users to restart
+a canceled, failed, or incomplete Job. For canceled and failed
+Jobs, the Job will restart from the beginning. For incomplete
+Jobs, the Job will restart at the point that it was stopped either
+by a stop command or by some recoverable failure.
+
+\smallskip
+If you enter the {\bf restart} command in bconsole, you will get the
+following prompts:
+
+\begin{verbatim}
+*restart
+You have the following choices:
+     1: Incomplete
+     2: Canceled
+     3: Failed
+     4: All
+Select termination code: (1-4):
+\end{verbatim}
+
+If you select the {\bf All} option, you may see something like:
+
+\begin{verbatim}
+Select termination code: (1-4): 4
++-------+-------------+---------------------+------+-------+----------+-----------+-----------+
+| jobid | name        | starttime           | type | level | jobfiles | jobbytes  | jobstatus |
++-------+-------------+---------------------+------+-------+----------+-----------+-----------+
+|     1 | Incremental | 2012-03-26 12:15:21 | B    | F     |        0 |         0 | A         |
+|     2 | Incremental | 2012-03-26 12:18:14 | B    | F     |      350 | 4,013,397 | I         |
+|     3 | Incremental | 2012-03-26 12:18:30 | B    | F     |        0 |         0 | A         |
+|     4 | Incremental | 2012-03-26 12:18:38 | B    | F     |      331 | 3,548,058 | I         |
++-------+-------------+---------------------+------+-------+----------+-----------+-----------+
+Enter the JobId list to select:
+\end{verbatim}
+
+Then you may enter one or more JobIds to be restarted, which may
+take the form of a list of JobIds separated by commas, and/or JobId
+ranges such as {\bf 1-4}, which indicates you want to restart JobIds
+1 through 4, inclusive.
+
+\subsection{Job Bandwidth Limitation}
+
+The new {\bf Job Bandwidth Limitation} directive may be added to the File
+daemon's and/or Director's configuration to limit the bandwidth used by a
+Job on a Client.
It can be set in the File daemon's conf file for all Jobs
+run in that File daemon, or it can be set for each Job in the Director's
+conf file. The speed is always specified in bytes per second.
+
+For example:
+\begin{verbatim}
+FileDaemon {
+  Name = localhost-fd
+  Working Directory = /some/path
+  Pid Directory = /some/path
+  ...
+  Maximum Bandwidth Per Job = 5Mb/s
+}
+\end{verbatim}
+
+The above example would cause any jobs running with that File daemon to not
+exceed 5 megabytes per second of throughput when sending data to the
+Storage Daemon. Note, the speed is always specified in bytes per second
+(not in bits per second), and the case (upper/lower) of the specification
+characters is ignored (i.e. 1MB/s = 1Mb/s).
+
+You may specify the following speed parameter modifiers:
+   k/s (1,000 bytes per second), kb/s (1,024 bytes per second),
+   m/s (1,000,000 bytes per second), or mb/s (1,048,576 bytes per second).
+
+For example:
+\begin{verbatim}
+Job {
+  Name = localhost-data
+  FileSet = FS_localhost
+  Accurate = yes
+  ...
+  Maximum Bandwidth = 5Mb/s
+  ...
+}
+\end{verbatim}
+
+The above example would cause Job \texttt{localhost-data} to not exceed 5MB/s
+of throughput when sending data from the File daemon to the Storage daemon.
+
+A new console command \texttt{setbandwidth} permits dynamically setting the
+maximum throughput of a running Job, or of future jobs of a Client.
+
+\begin{verbatim}
+* setbandwidth limit=1000 jobid=10
+\end{verbatim}
+
+Please note that the value specified for the \texttt{limit} command
+line parameter is always in units of 1024 bytes (i.e. the number
+is multiplied by 1024 to give the number of bytes per second). As
+a consequence, the above limit of 1000 will be interpreted as a
+limit of 1000 * 1024 = 1,024,000 bytes per second.
+
+\subsection{Always Backup a File}
+
+When the Accurate mode is turned on, you can decide to always back up a file
+by using the new {\bf A} Accurate option in your FileSet.
For example:
+
+\begin{verbatim}
+Job {
+   Name = ...
+   FileSet = FS_Example
+   Accurate = yes
+   ...
+}
+
+FileSet {
+ Name = FS_Example
+ Include {
+   Options {
+     Accurate = A
+   }
+   File = /file
+   File = /file2
+ }
+ ...
+}
+\end{verbatim}
+
+This project was funded by Bacula Systems based on an idea of James Harper and
+is available with the Bacula Enterprise Edition.
+
+\subsection{Setting Accurate Mode at Runtime}
+
+You are now able to specify the Accurate mode on the \texttt{run} command and
+in the Schedule resource.
+
+\begin{verbatim}
+* run accurate=yes job=Test
+\end{verbatim}
+
+\begin{verbatim}
+Schedule {
+ Name = WeeklyCycle
+ Run = Full 1st sun at 23:05
+ Run = Differential accurate=yes 2nd-5th sun at 23:05
+ Run = Incremental accurate=no mon-sat at 23:05
+}
+\end{verbatim}
+
+It can allow you to save memory and CPU resources on the catalog server in
+some cases.
+
+\medskip
+These advanced tuning options are available with the Bacula Enterprise Edition.
+
+% Common with community
+\subsection{Additions to RunScript variables}
+You now have access to JobBytes, JobFiles and the Director name using \%b, \%F
+and \%D in your RunScript commands. The Client address is now available through \%h.
+
+\begin{verbatim}
+RunAfterJob = "/bin/echo Job=%j JobBytes=%b JobFiles=%F ClientAddress=%h Dir=%D"
+\end{verbatim}
+
+\subsection{LZO Compression}
+
+LZO compression was added to the Unix File Daemon. From the user's point of view,
+it works like the GZIP compression (just replace {\bf compression=GZIP} with
+{\bf compression=LZO}).
+
+For example:
+\begin{verbatim}
+Include {
+   Options { compression=LZO }
+   File = /home
+   File = /data
+}
+\end{verbatim}
+
+LZO provides much faster compression and decompression speed but a lower
+compression ratio than GZIP. It is a good option when you back up to disk. For
+tape, the built-in compression may be a better option.
+
+LZO is a good alternative to GZIP1 when you don't want to slow down your
+backup.
On a modern CPU it should be able to run almost as fast as:
+
+\begin{itemize}
+\item your client can read data from disk, unless you have very fast disks like
+  an SSD or a large/fast RAID array.
+\item the data transfers between the file daemon and the storage daemon, even on
+  a 1Gb/s link.
+\end{itemize}
+
+Note that Bacula only uses one compression level, LZO1X-1.
+
+\medskip
+The code for this feature was contributed by Laurent Papier.
+
+\subsection{Purge Migration Job}
+
+The new {\bf Purge Migration Job} directive may be added to the Migration
+Job definition in the Director's configuration file. When it is enabled,
+the Job that was migrated during a migration will be purged at
+the end of the migration job.
+
+For example:
+\begin{verbatim}
+Job {
+  Name = "migrate-job"
+  Type = Migrate
+  Level = Full
+  Client = localhost-fd
+  FileSet = "Full Set"
+  Messages = Standard
+  Storage = DiskChanger
+  Pool = Default
+  Selection Type = Job
+  Selection Pattern = ".*Save"
+...
+  Purge Migration Job = yes
+}
+\end{verbatim}
+
+\medskip
+
+This project was submitted by Dunlap Blake; testing and documentation were funded
+by Bacula Systems.
+
+\subsection{Changes in the Pruning Algorithm}
+
+We rewrote the job pruning algorithm in this version. Previously, some
+users reported that the pruning process at the end of jobs took a very long
+time. This should no longer be the case. Now, Bacula will not automatically
+prune a Job if that particular Job is needed to restore data. Example:
+
+\begin{verbatim}
+JobId: 1  Level: Full
+JobId: 2  Level: Incremental
+JobId: 3  Level: Incremental
+JobId: 4  Level: Differential
+..  Other incrementals up to now
+\end{verbatim}
+
+In this example, if the Job Retention defined in the Pool or in the Client
+resource allows Jobs with JobIds 1 through 4 to be pruned, Bacula will
+detect that JobIds 1 and 4 are essential to restore data at the current state
+and will prune only JobIds 2 and 3.
+
+\texttt{Important}: this change affects only the automatic pruning step
+after a Job and the \texttt{prune jobs} Bconsole command. If a volume
+expires after the \texttt{VolumeRetention} period, important jobs can still
+be pruned.
+
+\subsection{Ability to Verify any specified Job}
+You now have the ability to tell Bacula which Job to verify instead of
+automatically verifying just the last one.
+
+This feature can be used with the VolumeToCatalog, DiskToCatalog and Catalog
+levels.
+
+To verify a given job, just specify its JobId as an argument when starting the
+job.
+\begin{verbatim}
+*run job=VerifyVolume jobid=1 level=VolumeToCatalog
+Run Verify job
+JobName:     VerifyVolume
+Level:       VolumeToCatalog
+Client:      127.0.0.1-fd
+FileSet:     Full Set
+Pool:        Default (From Job resource)
+Storage:     File (From Job resource)
+Verify Job:  VerifyVol.2010-09-08_14.17.17_03
+Verify List: /tmp/regress/working/VerifyVol.bsr
+When:        2010-09-08 14:17:31
+Priority:    10
+OK to run? (yes/mod/no):
+\end{verbatim}
+
+
+
 \chapter{New Features in 7.0.0}
-This chapter presents the new features that have been added to the next
-Community version of Bacula that is not yet released.
+This chapter presents the new features that have been added to
+the various versions of Bacula.

 \section{New Features in 7.0.0}

+\subsection{Storage daemon to Storage daemon}
+Bacula version 7.0 permits SD to SD transfer of Copy and Migration
+Jobs. This permits what is commonly referred to as replication or
+off-site transfer of Bacula backups. It occurs automatically if
+the source SD and destination SD of a Copy or Migration job are
+different. The following picture shows how this works.
+
+\includegraphics[width=0.8\linewidth]{sd-to-sd}
+
+\subsection{SD Calls Client}
+If the {\bf SD Calls Client} directive is set to true in a Client resource,
+then for any Backup, Restore, Verify, Copy, or Migration Job where the client
+is involved, the client will wait for the Storage daemon to contact it.
+
+By default this directive is set to false, and the Client will call
+the Storage daemon. This directive can be useful if your Storage daemon
+is behind a firewall that permits outgoing connections but not incoming
+ones. The following picture shows the communications connection paths in
+both cases.
+
+\includegraphics[width=0.8\linewidth]{sd-calls-client}
+
+\subsection{Next Pool}
+In previous versions of Bacula, the Next Pool directive could be
+specified in the Pool resource for use with Migration and Copy Jobs.
+The Next Pool concept has been
+extended in Bacula version 7.0.0 to allow you to specify the
+Next Pool directive in the Job resource as well. If specified in
+the Job resource, it will override any value specified in the Pool
+resource.
+
+In addition to being permitted in the Job resource, the
+{\bf nextpool=xxx} specification can be specified as a run
+override in the {\bf run} directive of a Schedule resource.
+Any {\bf nextpool} specification in a {\bf run}
+directive will override any other specification in either
+the Job or the Pool.
+
+In general, more information is displayed in the Job log
+on exactly which Next Pool specification is ultimately used.
+
+\subsection{status storage}
+The bconsole {\bf status storage} command has been modified to attempt to
+eliminate duplicate storage resources and only show one that references any
+given storage daemon. This might be confusing at first, but tends to make a
+much more compact list of storage resources from which to select if there
+are multiple storage devices in the same storage daemon.
+
+If you want the old behavior (always display all storage resources) simply
+add the keyword {\bf select} to the command -- i.e. use
+{\bf status select storage}.
+
+\subsection{status schedule}
+A new status command option called {\bf scheduled} has been implemented
+in bconsole. By default it will display 20 lines of the next scheduled
+jobs.
For example, with the default bacula-dir.conf configuration file,
+a bconsole command {\bf status scheduled} produces:
+
+\begin{verbatim}
+Scheduled Jobs:
+Level        Type    Pri  Scheduled         Job Name       Schedule
+======================================================================
+Differential Backup  10   Sun 30-Mar 23:05  BackupClient1  WeeklyCycle
+Incremental  Backup  10   Mon 24-Mar 23:05  BackupClient1  WeeklyCycle
+Incremental  Backup  10   Tue 25-Mar 23:05  BackupClient1  WeeklyCycle
+...
+Full         Backup  11   Mon 24-Mar 23:10  BackupCatalog  WeeklyCycleAfterBackup
+Full         Backup  11   Wed 26-Mar 23:10  BackupCatalog  WeeklyCycleAfterBackup
+...
+====
+\end{verbatim}
+
+Note: the output is listed by the Jobs found, and is not sorted
+chronologically.
+
+\smallskip
+This command has a number of options, most of which act as filters:
+\begin{itemize}
+\item {\bf days=nn} This specifies the number of days to list. The default is
+  10 but can be set from 0 to 500.
+\item {\bf limit=nn} This specifies the limit to the number of lines to print.
+  The default is 100 but can be any number in the range 0 to 2000.
+\item {\bf time="YYYY-MM-DD HH:MM:SS"} Sets the start time for listing the
+  scheduled jobs. The default is to use the current time. Note, the
+  time value must be specified inside double quotes and must be in
+  the exact form shown above.
+\item {\bf schedule=schedule-name} This option restricts the output to
+  the named schedule.
+\item {\bf job=job-name} This option restricts the output to the specified
+  Job name.
+\end{itemize}
+
 \subsection{Data Encryption Cipher Configuration}
 Bacula version 7.0 and later now allows you to configure the data
 encryption cipher and the digest algorithm.
 The cipher was forced to AES
-128,
-and it is now possible to choose between the following ciphers:
+128, and it is now possible to choose between the following ciphers:

 \begin{itemize}
 \item AES128 (default)
@@ -47,13 +2166,6 @@ The above command is now simplified to be:
 truncate storage=File pool=Default
 \end{verbatim}

-\subsection{New Resume Command}
-This command does exactly the same thing as a
-{\bf restart} command but for some users the
-name may be more logical since in general the
-{\bf restart} command is used to resume running
-a Job that was incompleted.
-
 \subsection{Migration/Copy/VirtualFull Performance Enhancements}
 The Bacula Storage daemon now permits multiple jobs to simultaneously read
 the same disk Volume, which gives substantial performance enhancements when
@@ -63,6 +2175,63 @@ finish up to ten times faster with this version of Bacula.
 This is built-in to the Storage daemon, so it happens automatically
 and transparently.

+\subsection{VirtualFull Backup Consolidation Enhancements}
+By default Bacula selects jobs automatically for a VirtualFull;
+however, you may want to create the Virtual backup based on a
+particular backup (point in time) that exists.
+
+For example, if you have the following backup Jobs in your catalog:
+\begin{verbatim}
++-------+---------+-------+----------+----------+-----------+
+| JobId | Name    | Level | JobFiles | JobBytes | JobStatus |
++-------+---------+-------+----------+----------+-----------+
+|     1 | Vbackup | F     |     1754 | 50118554 | T         |
+|     2 | Vbackup | I     |        1 |        4 | T         |
+|     3 | Vbackup | I     |        1 |        4 | T         |
+|     4 | Vbackup | D     |        2 |        8 | T         |
+|     5 | Vbackup | I     |        1 |        6 | T         |
+|     6 | Vbackup | I     |       10 |       60 | T         |
+|     7 | Vbackup | I     |       11 |       65 | T         |
+|     8 | Save    | F     |     1758 | 50118564 | T         |
++-------+---------+-------+----------+----------+-----------+
+\end{verbatim}
+
+and you want to consolidate only the first 3 jobs and create a
+virtual backup equivalent to Job 1 + Job 2 + Job 3, you will use
+\texttt{jobid=3} in the \texttt{run} command, then Bacula will select the
+previous Full backup, the previous Differential (if any) and all subsequent
+Incremental jobs.
+
+\begin{verbatim}
+run job=Vbackup jobid=3 level=VirtualFull
+\end{verbatim}
+
+If you want to consolidate a specific job list, you must specify the exact
+list of jobs to merge in the run command line. For example, to consolidate
+the last Differential and all subsequent Incrementals, you will use
+\texttt{jobid=4,5,6,7} or \texttt{jobid=4-7} on the run command line. As one
+of the Jobs in the list is a Differential backup, Bacula will set the new job
+level to Differential. If the list is composed only of Incremental jobs,
+the new job will have its level set to Incremental.
+
+\begin{verbatim}
+run job=Vbackup jobid=4-7 level=VirtualFull
+\end{verbatim}
+
+When using this feature, Bacula will automatically discard jobs that are
+not related to the current Job. For example, specifying
+\texttt{jobid=7,8}, Bacula will discard JobId 8 because it is not
+part of the same backup Job.
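+
+In practice, such consolidations are often run on a schedule rather than by
+hand. A hypothetical Schedule resource (the name and times are illustrative
+only) that runs daily Incrementals and a weekly VirtualFull consolidation
+might look like:
+
+\begin{verbatim}
+Schedule {
+  Name = "ConsolidationCycle"          # illustrative name
+  Run = VirtualFull sun at 23:05       # weekly consolidation
+  Run = Incremental mon-sat at 23:05   # daily incrementals
+}
+\end{verbatim}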
+
+We do not recommend it, but if you really want to consolidate jobs that have
+different names (and so probably different clients, filesets, etc.), you
+must use the \texttt{alljobid=} keyword instead of \texttt{jobid=}.
+
+\begin{verbatim}
+run job=Vbackup alljobid=1-3,6-8 level=VirtualFull
+\end{verbatim}
+
+
 \subsection{FD Storage Address}

 When the Director is behind a NAT, in a WAN area, to connect to
@@ -209,7 +2378,7 @@ RunAfterJob = "/bin/echo Pid=%P isCloned=%C"
 \end{verbatim}

 \subsection{Read Only Storage Devices}
-This version of Bacula permits defining a Storage deamon device
+This version of Bacula permits defining a Storage daemon device
 to be read-only. That is, if the {\bf ReadOnly} directive is
 specified and enabled, the drive can only be used for read
 operations. The {\bf ReadOnly} directive can be defined in any bacula-sd.conf
@@ -247,6 +2416,139 @@ definition.
 }
 \end{verbatim}

+\subsection{Hardlink Performance Enhancements}
+If you use a program such as Cyrus IMAP that creates very large numbers
+of hardlinks, the time to build the interactive restore tree can be
+excessively long. This version of Bacula has a new feature that
+automatically keeps the hardlinks associated with the restore tree
+in memory, which consumes a bit more memory but vastly speeds up
+building the tree. If the memory usage is too big for your system, you
+can reduce the amount of memory used during the restore command by
+adding the option {\bf optimizespeed=false} on the bconsole run
+command line.
+
+This feature was developed by Josip Almasi, and enhanced to be runtime
+dynamic by Kern Sibbald.
+
+\subsection{DisableCommand Directive}
+There is a new Directive named {\bf Disable Command} that
+can be put in the File daemon Client or Director resource.
+If it is in the Client, it applies globally, otherwise the
+directive applies only to the Director in which it is found.
+The Disable Command adds security to your File daemon by
+disabling certain commands.
The commands that can be
+disabled are:
+
+\begin{verbatim}
+backup
+cancel
+setdebug=
+setbandwidth=
+estimate
+fileset
+JobId=
+level =
+restore
+endrestore
+session
+status
+.status
+storage
+verify
+RunBeforeNow
+RunBeforeJob
+RunAfterJob
+Run
+accurate
+\end{verbatim}
+
+One or more of these command keywords can be placed in quotes and separated
+by spaces on the Disable Command directive line. Note: the commands must
+be written exactly as they appear above.
+
+\subsection{Multiple Console Directors}
+Support for multiple bconsole and bat Directors in the bconsole.conf and
+bat.conf files has been implemented and/or improved.
+
+\subsection{Restricted Consoles}
+Better support for Restricted consoles has been implemented for bconsole and
+bat.
+
+\subsection{Configuration Files}
+In previous versions of Bacula the configuration files for each component
+were limited to a maximum of 499 bytes per configuration file line. This
+version of Bacula permits unlimited input line lengths. This can be
+especially useful for specifying more complicated Migration/Copy SQL
+statements and in creating long restricted console ACL lists.
+
+\subsection{Maximum Spawned Jobs}
+The Job resource now permits specifying a number of {\bf Maximum Spawned
+Jobs}. The default is 600. This directive can be useful if you have
+big hardware and you do a lot of Migration/Copy jobs which start
+at the same time. In prior versions of Bacula, Migration/Copy
+was limited to spawning a maximum of 100 jobs at a time.
+
+\subsection{Progress Meter}
+The new File daemon has been enhanced to send its progress (files
+processed and bytes written) to the Director every 30 seconds. These
+figures can then be displayed with a bconsole {\bf status dir}
+command.
+
+\subsection{Scheduling a 6th Week}
+Prior versions of Bacula permitted specifying the 1st through 5th week of
+a month (first through fifth) as a keyword on the {\bf run}
+directive of a Schedule resource.
This version of Bacula also permits
+specifying the 6th week of a month with the keyword {\bf sixth} or
+{\bf 6th}.
+
+\subsection{Scheduling the Last Day of a Month}
+This version of Bacula now permits specifying the {\bf lastday}
+keyword in the {\bf run} directive of a Schedule resource.
+If {\bf lastday} is specified, it will apply only to those months
+specified on the {\bf run} directive. Note: by default all months
+are specified.
+
+\subsection{Improvements to Cancel and Restart bconsole Commands}
+The Restart bconsole command now allows selection of either
+canceled or failed jobs to be restarted. In addition, both the
+{\bf cancel} and {\bf restart} bconsole commands permit entering
+a number of JobIds separated by commas, or a range of JobIds indicated
+by a dash between the beginning and end of the range (e.g. 3-10). Finally,
+the two commands also allow one to enter the special keyword {\bf all}
+to select all the appropriate Jobs.
+
+\subsection{bconsole Performance Improvements}
+In previous versions of Bacula certain bconsole commands could wait a long
+time due to catalog lock contention. This was especially noticeable
+when a large number of jobs were running and putting their attributes
+into the catalog. This version uses a separate catalog connection that
+should significantly enhance performance.
+
+\subsection{New .bvfs\_decode\_lstat Command}
+There is a new bconsole command, {\bf .bvfs\_decode\_lstat}, which
+requires one argument: {\bf lstat="lstat value to decode"}. An example
+command in bconsole and the output might be:
+
+\small
+\begin{verbatim}
+.bvfs_decode_lstat lstat="A A EHt B A A A JP BAA B BTL/A7 BTL/A7 BTL/A7 A A C"
+
+st_nlink=1
+st_mode=16877
+st_uid=0
+st_gid=0
+st_size=591
+st_blocks=1
+st_ino=0
+st_ctime=1395650619
+st_mtime=1395650619
+st_mtime=1395650619
+st_dev=0
+LinkFI=0
+\end{verbatim}
+\normalsize
+
 \subsection*{New Debug Options}