This chapter presents the new features added to the development 2.5.x
versions to be released as Bacula version 3.0.0 near the end of 2008.
+\section{Accurate}
+\index[general]{Accurate Backup}
+
+As with most other backup programs, Bacula decides which files to back up for
+Incremental and Differential backups by comparing the change (st\_ctime) and
+modification (st\_mtime) times of the file to the time the last backup
+completed. If either of those two times differs from the last backup time,
+then the file will be backed up. This does not, however, permit tracking which
+files have been deleted, and it will miss any file with an old time that may
+have been restored or moved onto the client filesystem.
+
+\subsection{Accurate = \lt{}yes|no\gt{}}
+If the {\bf Accurate = \lt{}yes|no\gt{}} directive is enabled (default no) in
+the Job resource, the job will be run as an Accurate Job. For a {\bf Full}
+backup, there is no difference, but for {\bf Differential} and {\bf
+ Incremental} backups, the Director will send a list of all previous files
+backed up, and the File daemon will use that list to determine if any new files
+have been added or moved and if any files have been deleted. This allows
+Bacula to make an accurate backup of your system to that point in time so that
+if you do a restore, it will restore your system exactly. One note of caution
+about using Accurate backup is that it requires more resources (CPU and memory)
+on both the Director and the Client machines to create the list of previous
+files backed up, to send that list to the File daemon, for the File daemon to
+keep the list (possibly very big) in memory, and for the File daemon to do
+comparisons between every file in the FileSet and the list.
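+
+A minimal sketch of enabling the feature (the job and client names here are
+hypothetical) only requires the one directive in the Job resource:
+
+\begin{verbatim}
+Job {
+  Name = "BackupClient1"      # hypothetical job name
+  Type = Backup
+  Client = client1-fd
+  FileSet = "Full Set"
+  Accurate = yes              # run this job as an Accurate Job
+  ...
+}
+\end{verbatim}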
+
+
+\section{Copy Jobs}
+\index[general]{Copy Jobs}
+A new {\bf Copy} job type has been implemented. It is essentially
+identical to the existing Migration feature with the exception that
+the Job that is copied is left unchanged. This essentially creates
+two identical copies of the same backup. The Copy Job runs without
+using the File daemon by copying the data from the old backup Volume to
+a different Volume in a different Pool. See the Migration documentation
+for additional details.
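+
+A minimal Copy Job might be sketched as follows; the names are illustrative,
+and the selection directives are the same ones used by Migration jobs (see the
+Migration documentation for the full list):
+
+\begin{verbatim}
+Job {
+  Name = "CopyDiskToTape"          # hypothetical name
+  Type = Copy
+  Pool = Default                   # Pool whose Jobs are to be copied
+  Selection Type = PoolUncopiedJobs
+  ...
+}
+\end{verbatim}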
+
\section{Virtual Backup (Vbackup)}
\index[general]{Virtual Backup}
\index[general]{Vbackup}
In some respects the Vbackup feature works similarly to a Migration job, in
that Bacula normally reads the data from the pool specified in the
-Job resource, and writes it to the \bf{Next Pool} specified in the
+Job resource, and writes it to the {\bf Next Pool} specified in the
Job resource. The input Storage resource and the Output Storage resource
must be different.
The Vbackup is enabled on a Job by Job basis in the Job resource by specifying
-a level of \bf{VirtualFull}.
+a level of {\bf VirtualFull}.
A typical Job resource definition might look like the following:
Pool {
Name = Default
Pool Type = Backup
- Recycle = yes # Bacula can automatically recycle Volumes
- AutoPrune = yes # Prune expired volumes
- Volume Retention = 365d # one year
+ Recycle = yes # Automatically recycle Volumes
+ AutoPrune = yes # Prune expired volumes
+ Volume Retention = 365d # one year
NextPool = Full
Storage = File
}
Pool {
Name = Full
Pool Type = Backup
- Recycle = yes # Bacula can automatically recycle Volumes
- AutoPrune = yes # Prune expired volumes
- Volume Retention = 365d # one year
+ Recycle = yes # Automatically recycle Volumes
+ AutoPrune = yes # Prune expired volumes
+ Volume Retention = 365d # one year
Storage = DiskChanger
}
# Definition of DDS Virtual tape disk storage device
Storage {
Name = DiskChanger
- Address = localhost # N.B. Use a fully qualified name here
+ Address = localhost # N.B. Use a fully qualified name here
Password = "yyy"
Device = DiskChanger
Media Type = DiskChangerMedia
run job=MyBackup level=Incremental
\end{verbatim}
-So providing there were changes between each of those jobs, you would end up with
-a Full backup, a Differential, which includes the first Incremental backup, then two
-Incremental backups. All the above jobs would be written to the \bf{Default} pool.
+So, provided there were changes between each of those jobs, you would end up
+with a Full backup, a Differential backup (which includes the first
+Incremental), and then two Incremental backups. All the above jobs would be
+written to the {\bf Default} pool.
-To consolidate those backups into a new Full backup, you would run the following:
+To consolidate those backups into a new Full backup, you would run the
+following:
\begin{verbatim}
run job=MyBackup level=VirtualFull
\end{verbatim}
-And it would produce a new Full backup without using the client, and the output would
-be written to the \bf{Full} Pool which uses the Diskchanger Storage.
+And it would produce a new Full backup without using the client, and the output
+would be written to the {\bf Full} Pool which uses the Diskchanger Storage.
+
+If the Virtual Full is run, and there are no prior Jobs, the Virtual Full will
+fail with an error.
+
+\section{Duplicate Job Control}
+\index[general]{Duplicate Jobs}
+The new version of Bacula provides four new directives that
+give additional control over what Bacula does if duplicate jobs
+are started. A duplicate job, in the sense we use it here, means that
+a second or subsequent job with the same name is started. This
+happens most frequently when the first job runs longer than expected because no
+tapes are available.
+
+The four directives each take as an argument a {\bf yes} or {\bf no} value and
+are specified in the Job resource.
+
+They are:
+
+\subsection{Allow Duplicate Jobs = \lt{}yes|no\gt{}}
+ If this directive is enabled, duplicate jobs will be run. If
+ the directive is set to {\bf no} (default) then only one job of a given name
+ may run at one time, and the action that Bacula takes to ensure only
+ one job runs is determined by the other directives (see below).
+
+\subsection{Allow Higher Duplicates = \lt{}yes|no\gt{}}
+ If this directive is set to {\bf yes} (default) the job with a higher
+ priority (lower priority number) will be permitted to run. If the
+ priorities of the two jobs are the same, the outcome is determined by
+ other directives (see below).
+
+\subsection{Cancel Queued Duplicates = \lt{}yes|no\gt{}}
+ If this directive is set to {\bf yes} (default) any job that is
+ already queued to run but not yet running will be canceled.
+
+\subsection{Cancel Running Duplicates = \lt{}yes|no\gt{}}
+ If this directive is set to {\bf yes} any job that is already running
+ will be canceled. The default is {\bf no}.
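+
+Putting the four directives together, a Job that refuses duplicates and
+cancels any queued copy might be sketched as follows (the job name is
+hypothetical; the values shown are the defaults, written out for clarity):
+
+\begin{verbatim}
+Job {
+  Name = "NightlyBackup"           # hypothetical name
+  ...
+  Allow Duplicate Jobs = no        # only one NightlyBackup at a time
+  Allow Higher Duplicates = yes
+  Cancel Queued Duplicates = yes
+  Cancel Running Duplicates = no
+}
+\end{verbatim}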
+
+
+\section{TLS Authentication}
+\index[general]{TLS Authentication}
+In Bacula version 2.5.x and later, in addition to the normal Bacula
+CRAM-MD5 authentication that is used to authenticate each Bacula
+connection, you can specify that you want TLS Authentication as well,
+which will provide more secure authentication.
+
+This new feature uses Bacula's existing TLS code (normally used for
+communications encryption) to do authentication. To use it, you must
+specify all the TLS directives normally used to enable communications
+encryption (TLS Enable, TLS Verify Peer, TLS Certificate, ...) and
+a new directive:
+
+\subsection{TLS Authenticate = yes}
+\begin{verbatim}
+TLS Authenticate = yes
+\end{verbatim}
+
+in the main daemon configuration resource (Director for the Director,
+Client for the File daemon, and Storage for the Storage daemon).
+
+When {\bf TLS Authenticate} is enabled, after doing the CRAM-MD5
+authentication, Bacula will do the normal TLS authentication, then TLS
+encryption will be turned off.
+
+If you want to encrypt communications data, do not turn on {\bf TLS
+Authenticate}.
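+
+As a sketch, the Director resource in the Director's configuration might
+combine the usual TLS directives with the new one as follows (the certificate
+paths are hypothetical):
+
+\begin{verbatim}
+Director {                    # in bacula-dir.conf
+  Name = bacula-dir
+  ...
+  TLS Enable = yes
+  TLS Require = yes
+  TLS Verify Peer = yes
+  TLS Certificate = "/etc/bacula/tls/dir.pem"        # hypothetical path
+  TLS Key = "/etc/bacula/tls/dir.key"                # hypothetical path
+  TLS CA Certificate File = "/etc/bacula/tls/ca.pem"
+  TLS Authenticate = yes      # authenticate via TLS, then turn encryption off
+}
+\end{verbatim}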
+
+\section{bextract non-portable Win32 data}
+\index[general]{bextract handles Win32 non-portable data}
+{\bf bextract} has been enhanced to be able to restore
+non-portable Win32 data to any OS. Previous versions were
+unable to restore non-portable Win32 data to machines that
+did not have the Win32 BackupRead and BackupWrite API calls.
+
+\section{State File updated at Job Termination}
+\index[general]{State File}
+In previous versions of Bacula, the state file, which provides a
+summary of previous jobs run in the {\bf status} command output, was
+updated only when Bacula terminated; thus, if the daemon crashed, the
+state file might not contain all the run data. This version of
+the Bacula daemons updates the state file on each job termination.
+
+\section{MaxFullInterval = \lt{}time-interval\gt{}}
+\index[general]{MaxFullInterval}
+The new Job resource directive {\bf Max Full Interval = \lt{}time-interval\gt{}}
+can be used to specify the maximum time interval between {\bf Full} backup
+jobs. When a job starts, if the time since the last Full backup is
+greater than the specified interval, and the job would normally be an
+{\bf Incremental} or {\bf Differential}, it will be automatically
+upgraded to a {\bf Full} backup.
+
+\section{MaxDiffInterval = \lt{}time-interval\gt{}}
+\index[general]{MaxDiffInterval}
+The new Job resource directive {\bf Max Diff Interval = \lt{}time-interval\gt{}}
+can be used to specify the maximum time interval between {\bf Differential} backup
+jobs. When a job starts, if the time since the last Differential backup is
+greater than the specified interval, and the job would normally be an
+{\bf Incremental}, it will be automatically
+upgraded to a {\bf Differential} backup.
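+
+The two interval directives can be combined in one Job resource; a sketch
+(the job name and the intervals are arbitrary examples):
+
+\begin{verbatim}
+Job {
+  Name = "BackupClient1"      # hypothetical name
+  ...
+  Max Full Interval = 30 days # force a Full at least every month
+  Max Diff Interval = 7 days  # force a Differential at least every week
+}
+\end{verbatim}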
+
+\section{Honor No Dump Flag = \lt{}yes|no\gt{}}
+\index[general]{HonorNoDumpFlag}
+On FreeBSD systems, each file has a {\bf no dump flag} that can be set
+by the user, and when it is set, it is an indication to backup programs
+not to back up that particular file. This version of Bacula contains a
+new Options directive within a FileSet resource, which instructs Bacula to
+obey this flag. The new directive is:
+
+\begin{verbatim}
+ Honor No Dump Flag = yes|no
+\end{verbatim}
+
+The default value is {\bf no}.
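+
+For example, a FileSet that obeys the FreeBSD no dump flag might be sketched
+as follows (the FileSet name is hypothetical):
+
+\begin{verbatim}
+FileSet {
+  Name = "FreeBSDSet"         # hypothetical name
+  Include {
+    Options {
+      signature = MD5
+      Honor No Dump Flag = yes
+    }
+    File = /home
+  }
+}
+\end{verbatim}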
+
+
+\section{Exclude Dirs Containing = \lt{}filename-string\gt{}}
+\index[general]{IgnoreDir}
+The {\bf Exclude Dirs Containing = \lt{}filename\gt{}} directive is new and
+can be added to the Include section of the FileSet resource. If the specified
+filename is found on the Client in any directory to be backed up,
+the whole directory will be ignored (not backed up).
+For example:
+
+\begin{verbatim}
+ # List of files to be backed up
+ FileSet {
+ Name = "MyFileSet"
+ Include {
+ Options {
+ signature = MD5
+ }
+ File = /home
+ Exclude Dirs Containing = .excludeme
+ }
+ }
+\end{verbatim}
+
+In /home, there may be hundreds of directories of users, and some
+people want to indicate that they don't want to have certain
+directories backed up. For example, with the above FileSet, if
+the user or sysadmin creates a file named {\bf .excludeme} in
+specific directories, such as
+
+\begin{verbatim}
+ /home/user/www/cache/.excludeme
+ /home/user/temp/.excludeme
+\end{verbatim}
+
+then Bacula will not back up the two directories named:
+
+\begin{verbatim}
+ /home/user/www/cache
+ /home/user/temp
+\end{verbatim}
+
+NOTE: subdirectories of those directories will not be backed up either. That
+is, the directive applies to the two directories in question and any children
+(be they files, directories, etc).
+
+
+
+\section{Bacula Plugins}
+\index[general]{Plugin}
+Support for shared object plugins has been implemented in the Linux
+(and Unix) File daemon. The API will be documented separately in
+the Developer's Guide or in a new document. For the moment, there is
+a single plugin named {\bf bpipe} that allows an external program to
+get control to back up and restore a file.
+
+Plugins are also planned (partially implemented) in the Director and the
+Storage daemon. The code is also implemented to work on Win32 machines,
+but it has not yet been tested.
+
+\subsection{Plugin Directory}
+Each daemon (DIR, FD, SD) has a new {\bf Plugin Directory} directive that may
+be added to the daemon definition resource. The directory takes a quoted
+string argument, which is the name of the directory in which the daemon can
+find the Bacula plugins. If this directive is not specified, Bacula will not
+load any plugins. Since each plugin has a distinctive name, all the daemons
+can share the same plugin directory.
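+
+A sketch for the File daemon (the directory path is only an example; use
+wherever your plugins are actually installed):
+
+\begin{verbatim}
+FileDaemon {                  # in bacula-fd.conf
+  Name = client1-fd
+  ...
+  Plugin Directory = "/usr/lib/bacula/plugins"   # example path
+}
+\end{verbatim}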
+
+
+
+\subsection{Plugin Options}
+The {\bf Plugin Options} directive takes a quoted string
+argument (after the equal sign) and may be specified in the
+Job resource. The options specified will be passed to the plugin
+when it is run. The value defined in the Job resource can be modified
+by the user when he runs a Job via the {\bf bconsole} command line
+prompts.
+
+Note: this directive may be specified, but it is not yet passed to
+the plugin (i.e. not fully implemented).
+
+\subsection{Plugin Options ACL}
+The {\bf Plugin Options ACL} directive may be specified in the
+Director's Console resource. It functions as all the other ACL commands
+do, by permitting users running restricted consoles to specify
+{\bf Plugin Options} that override the ones specified in the Job
+definition. Without this directive, restricted consoles may not modify
+the Plugin Options.
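+
+A sketch of the two directives together (the resource names and the option
+string are purely illustrative):
+
+\begin{verbatim}
+# In the Job resource (bacula-dir.conf)
+Job {
+  ...
+  Plugin Options = "bpipe:..."      # passed to the plugin when it is run
+}
+
+# In a restricted Console resource
+Console {
+  Name = restricted-user            # hypothetical name
+  ...
+  Plugin Options ACL = *all*        # permit overriding Plugin Options
+}
+\end{verbatim}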
+
+\subsection{Plugin = \lt{}plugin-command-string\gt{}}
+The {\bf Plugin} directive is specified in the Include section of
+a FileSet resource where you put your {\bf File = xxx} directives.
+For example:
+
+\begin{verbatim}
+ FileSet {
+ Name = "MyFileSet"
+ Include {
+ Options {
+ signature = MD5
+ }
+ File = /home
+ Plugin = "bpipe:..."
+ }
+ }
+\end{verbatim}
+
+In the above example, when the File daemon is processing the directives
+in the Include section, it will first backup all the files in {\bf /home}
+then it will load the plugin named {\bf bpipe} (actually bpipe-fd.so) from
+the Plugin Directory. The syntax and semantics of the Plugin directive
+require the first part of the string up to the colon (:) to be the name
+of the plugin. Everything after the first colon is ignored by the File daemon but
+is passed to the plugin. Thus the plugin writer may define the meaning of the
+rest of the string as he wishes.
+
+Please see the next section for information about the {\bf bpipe} Bacula
+plugin.
+
+\section{The bpipe Plugin}
+The {\bf bpipe} plugin is provided in the directory src/plugins/fd/bpipe-fd.c of
+the Bacula source distribution. When the plugin is compiled and linked into
+the resulting dynamic shared object (DSO), it will have the name {\bf bpipe-fd.so}.
+
+The purpose of the plugin is to provide an interface to any system program for
+backup and restore. As specified above the {\bf bpipe} plugin is specified in
+the Include section of your Job's FileSet resource. The full syntax of the
+plugin directive as interpreted by the {\bf bpipe} plugin (each plugin is free
+to specify the syntax as it wishes) is:
+
+\begin{verbatim}
+ Plugin = "<field1>:<field2>:<field3>:<field4>"
+\end{verbatim}
+
+where
+\begin{description}
+\item {\bf field1} is the name of the plugin with the trailing {\bf -fd.so}
+stripped off, so in this case, we would put {\bf bpipe} in this field.
+
+\item {\bf field2} specifies the namespace, which for {\bf bpipe} is the
+pseudo path and filename under which the backup will be saved. This pseudo
+path and filename will be seen by the user in the restore file tree.
+For example, if the value is {\bf /MYSQL/regress.sql}, the data
+backed up by the plugin will be put under that "pseudo" path and filename.
+You must be careful to choose a naming convention that is unique to avoid
+a conflict with a path and filename that actually exists on your system.
+
+\item {\bf field3} for the {\bf bpipe} plugin
+specifies the "reader" program that is called by the plugin during
+backup to read the data. {\bf bpipe} will call this program by doing a
+{\bf popen} on it.
+
+\item {\bf field4} for the {\bf bpipe} plugin
+specifies the "writer" program that is called by the plugin during
+restore to write the data back to the filesystem.
+\end{description}
+
+Putting it all together, the full plugin directive line might look
+like the following:
+
+\begin{verbatim}
+Plugin = "bpipe:/MYSQL/regress.sql:mysqldump -f
+ --opt --databases bacula:mysql"
+\end{verbatim}
+
+The directive has been split into two lines here, but within the
+{\bf bacula-dir.conf} file it would be written on a single line.
+
+This causes the File daemon to call the {\bf bpipe} plugin, which will write
+its data into the "pseudo" file {\bf /MYSQL/regress.sql} by calling the
+program {\bf mysqldump -f --opt --databases bacula} to read the data during
+backup. The mysqldump command outputs all the data for the database named
+{\bf bacula}, which will be read by the plugin and stored in the backup.
+During restore, the data that was backed up will be sent to the program
+specified in the last field, which in this case is {\bf mysql}. When
+{\bf mysql} is called, it will read the data sent to it by the plugin
+then write it back to the same database from which it came ({\bf bacula}
+in this case).
+
+The {\bf bpipe} plugin is a generic pipe program that simply transmits
+the data from a specified program to Bacula for backup, and then from Bacula to
+a specified program for restore.
+
+By using different command lines to {\bf bpipe},
+you can backup any kind of data (ASCII or binary) depending
+on the program called.
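+
+For instance, assuming a PostgreSQL database named {\bf bacula} (a
+hypothetical adaptation of the MySQL example above), the same pattern might be
+sketched as:
+
+\begin{verbatim}
+Plugin = "bpipe:/POSTGRESQL/bacula.sql:pg_dump bacula:psql bacula"
+\end{verbatim}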
+
+\section{Microsoft Exchange Server 2003/2007 Plugin}
+
+\subsection{Concepts}
+
+Although it is possible to back up Exchange using Bacula VSS, the Exchange
+plugin adds a good deal of functionality, because while Bacula VSS
+completes a full backup (snapshot) of Exchange, it does
+not support Incremental or Differential backups; restoring is also more
+complicated, and a single database restore is not possible.
+
+Microsoft Exchange organises its storage into Storage Groups with
+Databases inside them. A default installation of Exchange will have a
+single Storage Group called 'First Storage Group', with two Databases
+inside it, "Mailbox Store (SERVER NAME)" and
+"Public Folder Store (SERVER NAME)",
+which hold user email and public folders respectively.
+
+In the default configuration, Exchange logs everything that happens to
+log files, such that if you have a backup, and all the log files since,
+you can restore to the present time. Each Storage Group has its own set
+of log files and operates independently of any other Storage Groups. At
+the Storage Group level, the logging can be turned off by enabling a
+function called "Enable circular logging". At this time the Exchange
+plugin will not function if this option is enabled.
+
+The plugin allows backing up of entire storage groups, and the restoring
+of entire storage groups or individual databases. Backing up and
+restoring at the individual mailbox or email item is not supported but
+can be simulated by use of the "Recovery" Storage Group (see below).
+
+\subsection{Installing}
+
+The Exchange plugin requires a DLL that is shipped with Microsoft
+Exchange Server called {\bf esebcli2.dll}. Assuming Exchange is installed
+correctly the Exchange plugin should find this automatically and run
+without any additional installation.
+
+If the DLL cannot be found automatically, it will need to be copied into
+the Bacula installation
+directory (eg C:\verb+\+Program Files\verb+\+Bacula\verb+\+bin). The Exchange API DLL is
+named esebcli2.dll and is found in C:\verb+\+Program Files\verb+\+Exchsrvr\verb+\+bin on a
+default Exchange installation.
+
+\subsection{Backing Up}
+
+To back up an Exchange server, the FileSet definition must contain at
+least {\bf Plugin = "exchange:/@EXCHANGE/Microsoft Information Store"} for
+the backup to work correctly. The 'exchange:' part tells Bacula to look
+for the exchange plugin, the '@EXCHANGE' part makes sure all the backed-up
+files are prefixed with something that isn't going to share a name
+with anything outside the plugin, and the 'Microsoft Information Store'
+part is also required. It is also possible to add the name of a storage
+group to the "Plugin =" line, eg \\
+{\bf Plugin = "exchange:/@EXCHANGE/Microsoft Information Store/First Storage Group"} \\
+if you want only a single storage group backed up.
+
+Additionally, you can suffix the 'Plugin =' directive with
+":notrunconfull" which will tell the plugin not to truncate the Exchange
+database at the end of a full backup.
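+
+A sketch of that suffix in use:
+
+\begin{verbatim}
+Plugin = "exchange:/@EXCHANGE/Microsoft Information Store:notrunconfull"
+\end{verbatim}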
+
+An Incremental or Differential backup will backup only the database logs
+for each Storage Group by inspecting the "modified date" on each
+physical log file. Because of the way the Exchange API works, the last
+logfile backed up on each backup will always be backed up by the next
+Incremental or Differential backup too. This adds 5MB to each
+Incremental or Differential backup size but otherwise does not cause any
+problems.
+
+By default, a normal VSS fileset containing all the drive letters will
+also back up the Exchange databases using VSS. This will interfere with
+the plugin and Exchange's shared ideas of when the last full backup was
+done, and may also truncate log files incorrectly. It is important,
+therefore, that the Exchange database files be excluded from the backup,
+although the folders the files are in should be included, or they will
+have to be recreated manually if a bare metal restore is done.
+
+\begin{verbatim}
+FileSet {
+ Include {
+ File = C:/Program Files/Exchsrvr/mdbdata
+ Plugin = "exchange:..."
+ }
+ Exclude {
+ File = C:/Program Files/Exchsrvr/mdbdata/E00.chk
+ File = C:/Program Files/Exchsrvr/mdbdata/E00.log
+ File = C:/Program Files/Exchsrvr/mdbdata/E000000F.log
+ File = C:/Program Files/Exchsrvr/mdbdata/E0000010.log
+ File = C:/Program Files/Exchsrvr/mdbdata/E0000011.log
+ File = C:/Program Files/Exchsrvr/mdbdata/E00tmp.log
+ File = C:/Program Files/Exchsrvr/mdbdata/priv1.edb
+ }
+}
+\end{verbatim}
+
+The advantage of excluding the above files is that you can significantly
+reduce the size of your backup since all the important Exchange files
+will be properly saved by the Plugin.
+
+
+\subsection{Restoring}
+
+The restore operation is much the same as a normal Bacula restore, with
+the following provisos:
+
+\begin{itemize}
+\item The {\bf Where} restore option must not be specified
+\item Each Database directory must be marked as a whole. You cannot just
+ select (say) the .edb file and not the others.
+\item If a Storage Group is restored, the directory of the Storage Group
+ must be marked too.
+\item It is possible to restore only a subset of the available log files,
+ but they {\bf must} be contiguous. Exchange will fail to restore correctly
+ if a log file is missing from the sequence of log files.
+\item Each database to be restored must be dismounted and marked as "Can be
+ overwritten by restore".
+\item If an entire Storage Group is to be restored (eg all databases and
+ logs in the Storage Group), then it is best to manually delete the
+ database files from the server (eg C:\verb+\+Program Files\verb+\+Exchsrvr\verb+\+mdbdata\verb+\+*)
+ as Exchange can get confused by stray log files lying around.
+\end{itemize}
+
+\subsection{Restoring to the Recovery Storage Group}
+
+The concept of the Recovery Storage Group is well documented by
+Microsoft
+\elink{http://support.microsoft.com/kb/824126}{http://support.microsoft.com/kb/824126},
+but to briefly summarize...
+
+Microsoft Exchange allows the creation of an additional Storage Group
+called the Recovery Storage Group, into which an older
+copy of a database (e.g. from before a mailbox was deleted) can be restored
+without disturbing the current live data. This is required because the Standard
+and Small Business Server versions of Exchange cannot ordinarily have more
+than one Storage Group.
+
+To create the Recovery Storage Group, drill down to the Server in
+Exchange System Manager, right click, and select
+{\bf "New -> Recovery Storage Group..."}. Accept or change the file locations and click OK. On
+the Recovery Storage Group, right click and select
+{\bf "Add Database to Recover..."} and select the database you will be restoring.
+
+In Bacula, select the Database and the log files, making sure to mark
+the Storage Group directory itself too. Once you have selected the files
+to back up, use the RegexWhere clause to remove the prefix of
+"/@EXCHANGE/Microsoft Information Store/\lt{}storage group name\gt{}/" and
+replace it with "/@EXCHANGE/Microsoft Information Store/Recovery Storage Group/".
+Then run the restore.
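+
+Assuming the default storage group name, the RegexWhere expression for that
+substitution might be sketched as follows (a sed-style substitution, entered
+on a single line in bconsole):
+
+\begin{verbatim}
+!/@EXCHANGE/Microsoft Information Store/First Storage Group/
+!/@EXCHANGE/Microsoft Information Store/Recovery Storage Group/!
+\end{verbatim}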
+
+\subsection{Caveats}
+
+This plugin is still being developed, so you should consider it
+currently in BETA test, and thus use in a production environment
+should be done only after very careful testing.
+
+The "Enable Circular Logging" option cannot be enabled or the plugin
+will fail.
+
+Exchange insists that a successful Full backup must have taken place if
+an Incremental or Differential backup is desired, and the plugin will
+fail if this is not the case. If a restore is done, Exchange will
+require that a Full backup be done before an Incremental or Differential
+backup is done.
+
+The plugin will most likely not work well if another backup application
+(eg NTBACKUP) is backing up the Exchange database, especially if the
+other backup application is truncating the log files.
+
+The Exchange plugin has not been tested with the {\bf Accurate} option, so
+we recommend either carefully testing or that you avoid this option for
+the current time.
+
+The Exchange plugin is not called during processing of the bconsole
+{\bf estimate} command, so anything that would be backed up by the plugin
+will not be added to the estimate total that is displayed.
+
+
+\section{libdbi Framework}
+As a general guideline, Bacula has support for a few catalog database drivers
+(MySQL, PostgreSQL, SQLite)
+coded natively by the Bacula team. With the libdbi implementation, which is a
+Bacula driver that uses libdbi to access the catalog, we have an open field to
+use many different kinds of database engines according to the needs of users.
+
+According to the libdbi project (http://libdbi.sourceforge.net/): libdbi
+implements a database-independent abstraction layer in C, similar to the
+DBI/DBD layer in Perl. By writing one generic set of code, programmers can
+leverage the power of multiple databases and multiple simultaneous database
+connections by using this framework.
+
+Currently the libdbi driver in the Bacula project supports only the same
+drivers natively coded in Bacula. However, the libdbi project has support for
+many other database engines. You can view the list at
+http://libdbi-drivers.sourceforge.net/. In the future all those drivers may be
+supported by Bacula; however, they must first be properly tested by the Bacula team.
+
+Some of the benefits of using libdbi are:
+\begin{itemize}
+\item The possibility to use proprietary database engines whose
+ licenses prevent the Bacula team from developing a native driver.
+ \item The possibility to use the drivers written for the libdbi project.
+ \item The possibility to use other database engines without recompiling Bacula
+ to use them. Just change one line in bacula-dir.conf.
+ \item Abstract database access, that is, a single point for coding and
+ profiling catalog database access.
+ \end{itemize}
+
+ The following drivers have been tested:
+ \begin{itemize}
+ \item PostgreSQL, with and without batch insert
+ \item MySQL, with and without batch insert
+ \item SQLite
+ \item SQLite3
+ \end{itemize}
+
+ In the future, we will test and approve the use of other database engines
+ (proprietary or not), such as DB2, Oracle, and Microsoft SQL Server.
+
+ To compile Bacula to support libdbi, we need to configure the code with the
+ --with-dbi and --with-dbi-driver=[database] ./configure options, where
+ [database] is the database engine to be used with Bacula (of course, we can
+ change the driver in the file bacula-dir.conf; see below). We must configure
+ the access port of the database engine with the option --with-db-port, because
+ the libdbi framework doesn't know the default access port of each database.
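+
+A sketch of such a configure invocation for MySQL (3306 is MySQL's usual
+default port):
+
+\begin{verbatim}
+./configure --with-dbi --with-dbi-driver=mysql --with-db-port=3306
+\end{verbatim}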
+
+The next phase is checking (or configuring) the bacula-dir.conf, example:
+\begin{verbatim}
+Catalog {
+ Name = MyCatalog
+ dbdriver = dbi:mysql; dbaddress = 127.0.0.1; dbport = 3306
+ dbname = regress; user = regress; password = ""
+}
+\end{verbatim}
+
+The parameter {\bf dbdriver} indicates that we will use the driver dbi with a
+mysql database. Currently the drivers supported by Bacula are: postgresql,
+mysql, sqlite, sqlite3; these are the names that may be added to string "dbi:".
+
+The following limitations apply when Bacula is set to use the libdbi framework:
+\begin{itemize}
+\item It has not been tested on the Win32 platform.
+\item A small amount of performance is lost compared with the native database
+ drivers. This is due to the database driver provided by libdbi and the
+ simple fact that one more layer of code was added.
+\end{itemize}
+
+It is important to remember, when compiling Bacula with libdbi, the
+following packages are needed:
+ \begin{itemize}
+ \item libdbi version 1.0.0, http://libdbi.sourceforge.net/
+ \item libdbi-drivers 1.0.0, http://libdbi-drivers.sourceforge.net/
+ \end{itemize}
+
+ You can download them and compile them on your system or install the packages
+ from your OS distribution.
+
+
+\section{Display Autochanger Content}
+\index[general]{StatusSlots}
+
+The {\bf status slots storage=\lt{}storage-name\gt{}} command displays
+autochanger content.
+
+\footnotesize
+\begin{verbatim}
+ Slot | Volume Name | Status | Media Type | Pool |
+------+---------------+----------+-------------------+------------|
+ 1 | 00001 | Append | DiskChangerMedia | Default |
+ 2 | 00002 | Append | DiskChangerMedia | Default |
+ 3*| 00003 | Append | DiskChangerMedia | Scratch |
+ 4 | | | | |
+\end{verbatim}
+\normalsize
+
+If an asterisk ({\bf *}) appears after the slot number, you must run an
+{\bf update slots} command to synchronize autochanger content with your
+catalog.
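+
+For example, after loading new tapes you might run the following in bconsole
+(the storage name must match your configuration):
+
+\begin{verbatim}
+*status slots storage=DiskChanger
+*update slots storage=DiskChanger
+\end{verbatim}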
+
+\section{Miscellaneous}
+\index[general]{Misc New Features}
+
+\subsection{Allow Mixed Priority = \lt{}yes|no\gt{}}
+ This directive is only implemented in version 2.5 and later. When
+ set to {\bf yes} (default {\bf no}), this job may run even if lower
+ priority jobs are already running. This means a high priority job
+ will not have to wait for other jobs to finish before starting.
+ The scheduler will only mix priorities when all running jobs have
+ this directive set to {\bf yes}.
+
+ Note that only higher priority jobs will start early. Suppose the
+ director will allow two concurrent jobs, and that two jobs with
+ priority 10 are running, with two more in the queue. If a job with
+ priority 5 is added to the queue, it will be run as soon as one of
+ the running jobs finishes. However, new priority 10 jobs will not
+ be run until the priority 5 job has finished.
+
+\subsection{Bootstrap File Directive -- FileRegex}
+ {\bf FileRegex} is a new command that can be added to the bootstrap
+ (.bsr) file. The value is a regular expression. When specified, only
+ matching filenames will be restored.
+
+ During a restore, if all File records have been pruned from the catalog
+ for a Job, normally Bacula can only restore all of the files that were saved;
+ that is, there is no way to use the catalog to select individual files.
+ With this new command, Bacula will ask if you want to specify a Regex
+ expression for extracting only a part of the full backup.
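+
+ A sketch of a bootstrap file fragment using FileRegex (the volume name and
+ the pattern are examples):
+
+\begin{verbatim}
+Volume = "Vol001"
+FileRegex = ^/etc/.*\.conf$
+\end{verbatim}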
+
+
+\subsection{Virtual Tape Emulation}
+We now have a Virtual Tape emulator that allows us to run through 99.9\% of
+the tape code while actually reading and writing to a disk file. Used with the
+\textbf{disk-changer} script, you can now emulate an autochanger with 10 drives
+and 700 slots. This feature is most useful for testing. It is enabled
+by using {\bf Device Type = vtape} in the Storage daemon's Device
+resource. This feature is only implemented on Linux machines.
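+
+A sketch of a vtape Device resource in the Storage daemon's configuration
+(the names and the path are examples):
+
+\begin{verbatim}
+Device {
+  Name = Drive-0                        # hypothetical name
+  Device Type = vtape
+  Archive Device = /var/bacula/vtapes   # example disk directory
+  Media Type = vtape
+  Autochanger = yes
+}
+\end{verbatim}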
+
+\subsection{Bat Enhancements}
+Bat (the Bacula Administration Tool) GUI program has been significantly
+enhanced and stabilized. In particular, there are new table based status
+commands; it can now be easily localized using Qt4 Linguist.
+
+The Bat communications protocol has been significantly enhanced to improve
+GUI handling.
+
+\subsection{RunScript Enhancements}
+The {\bf RunScript} resource has been enhanced to permit multiple
+commands per RunScript. Simply specify multiple {\bf Command} directives
+in your RunScript.
+
+\begin{verbatim}
+Job {
+ Name = aJob
+ RunScript {
+ Command = "/bin/echo test"
+    Command = "/bin/echo another test"
+ Command = "/bin/echo 3 commands in the same runscript"
+ RunsWhen = Before
+ }
+ ...
+}
+\end{verbatim}
+
+A new Client RunScript {\bf RunsWhen} keyword of {\bf AfterVSS} has been
+implemented, which runs the command after the Volume Shadow Copy has been made.
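+
+For example, a Client RunScript that should run only after the VSS snapshot
+has been taken could be written as follows (the command path is illustrative):
+
+\begin{verbatim}
+RunScript {
+  RunsWhen = AfterVSS
+  RunsOnClient = yes
+  Command = "c:/bacula/after_snapshot.bat"
+}
+\end{verbatim}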
+
+Console commands can be specified within a RunScript by using
+{\bf Console = \lt{}command\gt{}}; however, this feature has not been
+carefully tested and debugged and is known to easily crash the Director.
+We would appreciate feedback. Due to the recursive nature of this command, we
+may remove it before the final release.
+
+\subsection{Status Enhancements}
+The bconsole {\bf status dir} output has been enhanced to indicate
+Storage daemon job spooling and despooling activity.
+
+\subsection{Connect Timeout}
+The default connect timeout to the File
+daemon has been set to 3 minutes. Previously it was 30 minutes.
+
+\subsection{ftruncate for NFS Volumes}
+In previous Bacula versions, if you wrote to a Volume mounted by NFS (say on
+a local file server), the Volume was not properly truncated when it was
+recycled, because NFS does not implement ftruncate (file truncate). This is
+now corrected in the new version: code (contributed by a kind user) deletes
+and recreates the Volume, thus accomplishing the same thing as a truncate.
+
+\subsection{Support for Ubuntu}
+The new version of Bacula now recognizes the Ubuntu (and Kubuntu)
+version of Linux, and thus now provides correct autostart routines.
+Since Ubuntu officially supports Bacula, you can also obtain any
+recent release of Bacula from the Ubuntu repositories.
+
+\subsection{Recycle Pool = \lt{}pool-name\gt{}}
+The new \textbf{RecyclePool} directive defines the pool into which the Volume
+will be moved when it is recycled. Without this directive, a Volume remains in
+the same pool when it is recycled. With this directive, it can be moved
+automatically to any existing pool during a recycle. This directive is
+probably most useful when defined in the Scratch pool, so that volumes are
+recycled back into the Scratch pool.
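+
+For example, a Scratch pool that always receives its own recycled volumes
+could be defined as follows (illustrative values):
+
+\begin{verbatim}
+Pool {
+  Name = Scratch
+  Pool Type = Backup
+  RecyclePool = Scratch
+}
+\end{verbatim}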
+
+\subsection{FD Version}
+The File daemon to Director protocol now includes a version
+number. Although there is no visible change for users, this will
+help us in future versions to automatically determine
+whether a File daemon is incompatible.
+
+\subsection{Max Run Sched Time = \lt{}time-period-in-seconds\gt{}}
+This directive specifies the maximum allowed time that a job may run, counted
+from when the job was scheduled. It can be useful to prevent jobs from running
+during working hours, and can be thought of as \texttt{Max Start Delay + Max
+ Run Time}.
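+
+For example, the following sketch (illustrative values) limits a job to eight
+hours counted from its scheduled time:
+
+\begin{verbatim}
+Job {
+  Name = NightlyBackup
+  Max Run Sched Time = 8h
+  ...
+}
+\end{verbatim}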
+
+\subsection{Max Wait Time = \lt{}time-period-in-seconds\gt{}}
+
+The previous \textbf{MaxWaitTime} directives did not work as expected:
+instead of limiting the maximum time that a job may block waiting for a
+resource, those directives behaved like \textbf{MaxRunTime}. Some users have
+been using \textbf{Incr/Diff/Full Max Wait Time} to control the maximum run
+time of their jobs depending on the level. They must now use
+\textbf{Incr/Diff/Full Max Run Time} instead; the \textbf{Incr/Diff/Full Max
+Wait Time} directives are now deprecated.
+
+\subsection{Incremental|Differential Max Wait Time = \lt{}time-period-in-seconds\gt{}}
+These directives have been deprecated in favor of
+\texttt{Incremental|Differential Max Run Time}.
+
+\subsection{Max Run Time directives}
+Using \textbf{Full/Diff/Incr Max Run Time}, it is now possible to specify the
+maximum allowed time that a job can run, depending on its level.
+
+\addcontentsline{lof}{figure}{Job time control directives}
+\includegraphics{\idir different_time.eps}
+
+\subsection{Statistics Enhancements}
+If you (or probably your boss) want to have statistics on your backups to
+provide some \textit{Service Level Agreement} indicators, you could use a few
+SQL queries on the Job table to report how many:
+
+\begin{itemize}
+\item jobs have run
+\item jobs have been successful
+\item files have been backed up
+\item ...
+\end{itemize}
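+
+For example, the following query (a sketch in MySQL syntax; JobStatus 'T'
+denotes a job that terminated normally and Type 'B' a backup) counts the
+successful backup jobs of the last month:
+
+\begin{verbatim}
+SELECT COUNT(*) FROM Job
+ WHERE JobStatus = 'T'
+   AND Type = 'B'
+   AND StartTime > NOW() - INTERVAL 31 DAY;
+\end{verbatim}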
+
+However, these statistics are accurate only if your job retention is greater
+than your statistics period; that is, if jobs are purged from the catalog, you
+won't be able to use them.
+
+Now, you can use the \textbf{update stats [days=num]} console command to fill
+the JobHistory table with new Job records. If you want to be sure to take into
+account only \textbf{good jobs} (for example, if one of your important jobs
+failed but you fixed the problem and restarted it on time, you probably want
+to delete the first \textit{bad} job record and keep only the successful one),
+simply let your staff do the job, and update the JobHistory table after two or
+three days, depending on your organization, using the \textbf{[days=num]}
+option.
+
+These statistics records aren't used for restores, but mainly for
+capacity planning, billing, etc.
+
+The Bweb interface provides a statistics module that can use this feature. You
+can also use tools like Talend or extract information by yourself.
+
+The \textbf{Statistics Retention = \lt{}time\gt{}} Director directive defines
+the length of time that Bacula will keep statistics job records (in the
+\texttt{JobHistory} table) in the Catalog database after the Job End time.
+When this time period expires and the user runs the \texttt{prune stats}
+command, Bacula will prune (remove) Job records that are older than the
+specified period.
+
+You can use the following Job resource in your nightly \textbf{BackupCatalog}
+job to maintain statistics.
+\begin{verbatim}
+Job {
+ Name = BackupCatalog
+ ...
+ RunScript {
+ Console = "update stats days=3"
+ Console = "prune stats yes"
+ RunsWhen = After
+ RunsOnClient = no
+ }
+}
+\end{verbatim}
+
+\subsection{SpoolSize = \lt{}size-specification-in-bytes\gt{}}
+A new Job directive permits specifying the spool size per job:
+{\bf SpoolSize={\it bytes}}. This is used for advanced job tuning.
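+
+For example (illustrative values):
+
+\begin{verbatim}
+Job {
+  Name = BigBackup
+  SpoolData = yes
+  SpoolSize = 10GB
+  ...
+}
+\end{verbatim}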
+
+
+\section{Building Bacula Plugins}
+There is currently one sample program {\bf example-plugin-fd.c} and
+one working plugin {\bf bpipe-fd.c} that can be found in the Bacula
+{\bf src/plugins/fd} directory. Both are built with the following:
+
+\begin{verbatim}
+ cd <bacula-source>
+ ./configure <your-options>
+ make
+ ...
+ cd src/plugins/fd
+ make
+ make test
+\end{verbatim}
+
+After building Bacula and changing into the src/plugins/fd directory,
+the {\bf make} command will build the {\bf bpipe-fd.so} plugin, which
+is a very useful and working program.
+
+The {\bf make test} command will build the {\bf example-plugin-fd.so}
+plugin and a binary named {\bf main}, which is built from the source
+code located in {\bf src/filed/fd\_plugins.c}.
+
+If you execute {\bf ./main}, it will load and run the example-plugin-fd
+plugin simulating a small number of the calling sequences that Bacula uses
+in calling a real plugin. This allows you to do initial testing of
+your plugin prior to trying it with Bacula.
+
+You can get a good idea of how to write your own plugin by first
+studying the example-plugin-fd, and actually running it. Then
+it can also be instructive to read the bpipe-fd.c code as it is
+a real plugin, which is still rather simple and small.
+
+When actually writing your own plugin, you may use the example-plugin-fd.c
+code as a template for your code.
+
+
+%%
+%%
+
+\chapter{Bacula FD Plugin API}
+To write a Bacula plugin, you create a dynamic shared object
+program (or a DLL on Win32) with a particular name and two
+exported entry points, and place it in the {\bf Plugins Directory}, which is
+defined in the {\bf bacula-fd.conf} file in the {\bf Client} resource. When
+the FD starts, it will load all the plugins ending in {\bf -fd.so} (or
+{\bf -fd.dll} on Win32) found in that directory.
+
+\section{Normal vs Command Plugins}
+In general, there are two ways that plugins are called. The first way is
+when a particular event is detected in Bacula: control is transferred in turn
+to each loaded plugin, informing it of the event.
+This is very similar to how a {\bf RunScript} works, and the events are very similar.
+Once the plugin gets control, it can interact with Bacula by getting and
+setting Bacula variables. In this way, it behaves much like a RunScript.
+Currently very few Bacula variables are defined, but they will be implemented
+as the need arises, and the mechanism is very extensible.
+
+We plan to have plugins register to receive events that they normally would
+not receive, such as an event for each file examined for backup or restore.
+This feature is not yet implemented.
+
+The second type of plugin, which is more useful and is fully implemented
+in the current version, is what we call a command plugin. As with all
+plugins, it gets notified of important events as noted above (details are
+described below), but in addition, this kind of plugin can accept a command
+line, which is a:
+
+\begin{verbatim}
+ Plugin = <command-string>
+\end{verbatim}
+
+directive that is placed in the Include section of a FileSet and is very
+similar to the "File = " directive. When this Plugin directive is encountered
+by Bacula during backup, it passes the "command" part of the Plugin directive
+only to the plugin that is explicitly named in the first field of that command string.
+This allows that plugin to back up any file or files on the system that it wants. It can
+even create "virtual files" in the catalog that contain data to be restored but do
+not necessarily correspond to actual files on the filesystem.
+
+The important features of the command plugin entry points are:
+\begin{itemize}
+ \item It is triggered by a "Plugin =" directive in the FileSet
+ \item Only a single plugin is called that is named on the "Plugin =" directive.
+ \item The full command string after the "Plugin =" is passed to the plugin
+ so that it can be told what to backup/restore.
+\end{itemize}
+
+
+\section{Loading Plugins}
+Once the File daemon loads the plugins, it asks the OS for the
+two entry points (loadPlugin and unloadPlugin) then calls the
+{\bf loadPlugin} entry point (see below).
+
+Bacula passes information to the plugin through this call and it gets
+back information that it needs to use the plugin. Later, Bacula
+ will call particular functions that are defined by the
+{\bf loadPlugin} interface.
+
+When Bacula is finished with the plugin
+(when Bacula is going to exit), it will call the {\bf unloadPlugin}
+entry point.
+
+The two entry points are:
+
+\begin{verbatim}
+bRC loadPlugin(bInfo *lbinfo, bFuncs *lbfuncs, pInfo **pinfo, pFuncs **pfuncs)
+
+and
+
+bRC unloadPlugin()
+\end{verbatim}
+
+Both of these external entry points to the shared object are defined as C
+entry points to avoid name mangling complications with C++. However, the
+shared object can actually be written in any language (preferably C or C++),
+provided that it follows C language calling conventions.
+
+The definitions for {\bf bRC} and the arguments are in {\bf
+src/filed/fd-plugins.h}, so this header file needs to be included in
+your plugin. Together with {\bf src/lib/plugins.h}, it defines basically the
+whole plugin interface. This header file includes the following
+files:
+
+\begin{verbatim}
+#include <sys/types.h>
+#include "config.h"
+#include "bc_types.h"
+#include "lib/plugins.h"
+#include <sys/stat.h>
+\end{verbatim}
+
+Aside from the {\bf bc\_types.h} and {\bf config.h} headers, the plugin
+definition uses the minimum code from Bacula. The bc\_types.h file is required
+to ensure that the data type definitions in arguments correspond to the Bacula
+core code.
+
+The return codes are defined as:
+\begin{verbatim}
+typedef enum {
+ bRC_OK = 0, /* OK */
+ bRC_Stop = 1, /* Stop calling other plugins */
+ bRC_Error = 2, /* Some kind of error */
+ bRC_More = 3, /* More files to backup */
+} bRC;
+\end{verbatim}
+
+
+At a future point in time, we hope to make the Bacula libbac.a into a
+shared object so that the plugin can use much more of Bacula's
+infrastructure, but for this first cut, we have tried to minimize the
+dependence on Bacula.
+
+\section{loadPlugin}
+As previously mentioned, the {\bf loadPlugin} entry point in the plugin
+is called immediately after Bacula loads the plugin when the File daemon
+itself is first starting. This entry point is only called once during the
+execution of the File daemon. In calling the
+plugin, the first two arguments are information from Bacula that
+is passed to the plugin, and the last two arguments are information
+about the plugin that the plugin must return to Bacula. The call is:
+
+\begin{verbatim}
+bRC loadPlugin(bInfo *lbinfo, bFuncs *lbfuncs, pInfo **pinfo, pFuncs **pfuncs)
+\end{verbatim}
+
+and the arguments are:
+
+\begin{description}
+\item [lbinfo]
+This is information about Bacula in general. Currently, the only value
+defined in the bInfo structure is the version, which is the Bacula plugin
+interface version, currently defined as 1. The {\bf size} is set to the
+byte size of the structure. The exact definition of the bInfo structure
+as of this writing is:
+
+\begin{verbatim}
+typedef struct s_baculaInfo {
+ uint32_t size;
+ uint32_t version;
+} bInfo;
+\end{verbatim}
+
+\item [lbfuncs]
+The bFuncs structure defines the callback entry points within Bacula
+that the plugin can use to register events, get Bacula values, set
+Bacula values, and send messages to the Job output or debug output.
+
+The exact definition as of this writing is:
+\begin{verbatim}
+typedef struct s_baculaFuncs {
+ uint32_t size;
+ uint32_t version;
+ bRC (*registerBaculaEvents)(bpContext *ctx, ...);
+ bRC (*getBaculaValue)(bpContext *ctx, bVariable var, void *value);
+ bRC (*setBaculaValue)(bpContext *ctx, bVariable var, void *value);
+ bRC (*JobMessage)(bpContext *ctx, const char *file, int line,
+ int type, time_t mtime, const char *fmt, ...);
+ bRC (*DebugMessage)(bpContext *ctx, const char *file, int line,
+ int level, const char *fmt, ...);
+ void *(*baculaMalloc)(bpContext *ctx, const char *file, int line,
+ size_t size);
+ void (*baculaFree)(bpContext *ctx, const char *file, int line, void *mem);
+} bFuncs;
+\end{verbatim}
+
+We will discuss these entry points and how to use them a bit later when
+describing the plugin code.
+
+
+\item [pInfo]
+When the loadPlugin entry point is called, the plugin must initialize
+an information structure about the plugin and return a pointer to
+this structure to Bacula.
+
+The exact definition as of this writing is:
+
+\begin{verbatim}
+typedef struct s_pluginInfo {
+ uint32_t size;
+ uint32_t version;
+ const char *plugin_magic;
+ const char *plugin_license;
+ const char *plugin_author;
+ const char *plugin_date;
+ const char *plugin_version;
+ const char *plugin_description;
+} pInfo;
+\end{verbatim}
+
+Where:
+ \begin{description}
+ \item [version] is the current Bacula defined plugin interface version, currently
+ set to 1. If the interface version differs from the current version of
+ Bacula, the plugin will not be run (not yet implemented).
+ \item [plugin\_magic] is a pointer to the text string "*FDPluginData*", a
+ sort of sanity check. If this value is not specified, the plugin
+ will not be run (not yet implemented).
+ \item [plugin\_license] is a pointer to a text string that describes the
+ plugin license. Bacula will only accept compatible licenses (not yet
+ implemented).
+ \item [plugin\_author] is a pointer to the text name of the author of the program.
+ This string can be anything but is generally the author's name.
+ \item [plugin\_date] is a pointer to a text string containing the date of
+   the plugin. This string can be anything but is generally some human
+   readable form of the date.
+ \item [plugin\_version] is a pointer to a text string containing the version of
+ the plugin. The contents are determined by the plugin writer.
+ \item [plugin\_description] is a pointer to a string describing what the
+ plugin does. The contents are determined by the plugin writer.
+ \end{description}
+
+The pInfo structure must be defined in static memory because Bacula does not
+copy it and may refer to the values at any time while the plugin is
+loaded. All values must be supplied or the plugin will not run (not yet
+implemented). All text strings must be either ASCII or UTF-8 strings that
+are terminated with a zero byte.
+
+\item [pFuncs]
+When the loadPlugin entry point is called, the plugin must initialize
+an entry point structure about the plugin and return a pointer to
+this structure to Bacula. This structure contains pointers to each
+of the entry points that the plugin must provide for Bacula. When
+Bacula is actually running the plugin, it will call the defined
+entry points at particular times. All entry points must be defined.
+
+The pFuncs structure must be defined in static memory because Bacula does not
+copy it and may refer to the values at any time while the plugin is
+loaded.
+
+The exact definition as of this writing is:
+
+\begin{verbatim}
+typedef struct s_pluginFuncs {
+ uint32_t size;
+ uint32_t version;
+ bRC (*newPlugin)(bpContext *ctx);
+ bRC (*freePlugin)(bpContext *ctx);
+ bRC (*getPluginValue)(bpContext *ctx, pVariable var, void *value);
+ bRC (*setPluginValue)(bpContext *ctx, pVariable var, void *value);
+ bRC (*handlePluginEvent)(bpContext *ctx, bEvent *event, void *value);
+ bRC (*startBackupFile)(bpContext *ctx, struct save_pkt *sp);
+ bRC (*endBackupFile)(bpContext *ctx);
+ bRC (*startRestoreFile)(bpContext *ctx, const char *cmd);
+ bRC (*endRestoreFile)(bpContext *ctx);
+ bRC (*pluginIO)(bpContext *ctx, struct io_pkt *io);
+ bRC (*createFile)(bpContext *ctx, struct restore_pkt *rp);
+ bRC (*setFileAttributes)(bpContext *ctx, struct restore_pkt *rp);
+} pFuncs;
+\end{verbatim}
+
+The details of the entry points will be presented in
+separate sections below.
+
+Where:
+ \begin{description}
+ \item [size] is the byte size of the structure.
+ \item [version] is the plugin interface version currently set to 1.
+ \end{description}
+
+Sample code for loadPlugin:
+\begin{verbatim}
+ bfuncs = lbfuncs; /* set Bacula funct pointers */
+ binfo = lbinfo;
+ *pinfo = &pluginInfo; /* return pointer to our info */
+ *pfuncs = &pluginFuncs; /* return pointer to our functions */
+
+ return bRC_OK;
+\end{verbatim}
+
+where pluginInfo and pluginFuncs are statically defined structures.
+See bpipe-fd.c for details.
+
+
+
+\end{description}
+
+\section{Plugin Entry Points}
+This section will describe each of the entry points (subroutines) within
+the plugin that the plugin must provide for Bacula, when they are called
+and their arguments. As noted above, pointers to these subroutines are
+passed back to Bacula in the pFuncs structure when Bacula calls the
+loadPlugin() externally defined entry point.
+
+\subsection{newPlugin(bpContext *ctx)}
+ This is the entry point that Bacula will call
+ when a new "instance" of the plugin is created. This typically
+ happens at the beginning of a Job. If 10 Jobs are running
+ simultaneously, there will be at least 10 instances of the
+ plugin.
+
+ The bpContext structure will be passed to the plugin, and
+ during this call, if the plugin needs to have its private
+ working storage that is associated with the particular
+ instance of the plugin, it should create it from the heap
+ (malloc the memory) and store a pointer to
+ its private working storage in the {\bf pContext} variable.
+ Note: since Bacula is a multi-threaded program, you must not
+ keep any variable data in your plugin unless it is truly meant
+ to apply globally to the whole plugin. In addition, you must
+ be aware that, except for the first and last calls to the plugin
+ (loadPlugin and unloadPlugin), all the other calls will be
+ made by threads that correspond to a Bacula job. The
+ bpContext that will be passed for each thread will remain the
+ same throughout the Job, so you can keep your private Job specific
+ data in it ({\bf pContext}).
+
+\begin{verbatim}
+typedef struct s_bpContext {
+ void *pContext; /* Plugin private context */
+ void *bContext; /* Bacula private context */
+} bpContext;
+
+\end{verbatim}
+
+ This context pointer will be passed as the first argument to all
+ the entry points that Bacula calls within the plugin. Needless
+ to say, the plugin should not change the bContext variable, which
+ is Bacula's private context pointer for this instance (Job) of this
+ plugin.
+
+\subsection{freePlugin(bpContext *ctx)}
+This entry point is called when this instance of the plugin is no longer
+needed (the Job is ending), and the plugin should release all memory it may
+have allocated for this particular instance (Job), i.e. the pContext.
+This is not the final termination
+of the plugin signaled by a call to {\bf unloadPlugin}.
+Any other instances (Job) will
+continue to run, and the entry point {\bf newPlugin} may be called
+again if other jobs start.
+
+\subsection{getPluginValue(bpContext *ctx, pVariable var, void *value)}
+Bacula will call this entry point to get
+a value from the plugin. This entry point is currently not called.
+
+\subsection{setPluginValue(bpContext *ctx, pVariable var, void *value)}
+Bacula will call this entry point to set
+a value in the plugin. This entry point is currently not called.
+
+\subsection{handlePluginEvent(bpContext *ctx, bEvent *event, void *value)}
+This entry point is called when Bacula
+encounters certain events (discussed below). This is, in fact, the
+main way that most plugins get control when a Job runs and how
+they know what is happening in the job. It can be likened to the
+{\bf RunScript} feature that calls external programs and scripts,
+and is very similar to the Bacula Python interface.
+When the plugin is called, Bacula passes it the pointer to an event
+structure (bEvent), which currently has one item, the eventType:
+
+\begin{verbatim}
+typedef struct s_bEvent {
+ uint32_t eventType;
+} bEvent;
+\end{verbatim}
+
+ which defines what event has been triggered, and for each event,
+ Bacula will pass a pointer to a value associated with that event.
+ If no value is associated with a particular event, Bacula will
+ pass a NULL pointer, so the plugin must be careful to always check the
+ value pointer prior to dereferencing it.
+
+ The current list of events are:
+
+\begin{verbatim}
+typedef enum {
+ bEventJobStart = 1,
+ bEventJobEnd = 2,
+ bEventStartBackupJob = 3,
+ bEventEndBackupJob = 4,
+ bEventStartRestoreJob = 5,
+ bEventEndRestoreJob = 6,
+ bEventStartVerifyJob = 7,
+ bEventEndVerifyJob = 8,
+ bEventBackupCommand = 9,
+ bEventRestoreCommand = 10,
+ bEventLevel = 11,
+ bEventSince = 12,
+} bEventType;
+
+\end{verbatim}
+
+Most of the above are self-explanatory.
+
+\begin{description}
+ \item [bEventJobStart] is called whenever a Job starts. The value
+ passed is a pointer to a string that contains: "Jobid=nnn
+ Job=job-name". Where nnn will be replaced by the JobId and job-name
+ will be replaced by the Job name. The variable is temporary so if you
+ need the values, you must copy them.
+
+ \item [bEventJobEnd] is called whenever a Job ends. No value is passed.
+
+ \item [bEventStartBackupJob] is called when a Backup Job begins. No value
+ is passed.
+
+ \item [bEventEndBackupJob] is called when a Backup Job ends. No value is
+ passed.
+
+ \item [bEventStartRestoreJob] is called when a Restore Job starts. No value
+ is passed.
+
+ \item [bEventEndRestoreJob] is called when a Restore Job ends. No value is
+ passed.
+
+ \item [bEventStartVerifyJob] is called when a Verify Job starts. No value
+ is passed.
+
+ \item [bEventEndVerifyJob] is called when a Verify Job ends. No value
+ is passed.
+
+ \item [bEventBackupCommand] is called prior to the bEventStartBackupJob, and
+   the plugin is passed the command string (everything after the equal sign
+   in "Plugin =") as the value.
+
+   Note, if you intend to back up a file, this is an important point at which
+   to write code that copies the command string passed into your pContext
+   area, so that you will know that a backup is being performed and you will
+   know the full contents of the "Plugin =" command (i.e. what to back up and
+   what virtual filename the user wants to call it).
+
+ \item [bEventRestoreCommand] is called prior to the bEventStartRestoreJob,
+   and the plugin is passed the command string (everything after the equal
+   sign in "Plugin =") as the value.
+
+ See the notes above concerning backup and the command string. This is the
+ point at which Bacula passes you the original command string that was
+ specified during the backup, so you will want to save it in your pContext
+ area for later use when Bacula calls the plugin again.
+
+ \item [bEventLevel] is called when the level is set for a new Job. The value
+ is a 32 bit integer stored in the void*, which represents the Job Level code.
+
+ \item [bEventSince] is called when the since time is set for a new Job. The
+ value is a time\_t time at which the last job was run.
+\end{description}
+
+During each of the above calls, the plugin receives either no specific value
+or only one value, which in some cases may not be sufficient. However, knowing
+the context of the event, the plugin can call back to the Bacula entry points
+it was passed during the {\bf loadPlugin} call and access a number of Bacula
+variables. (At the current time few Bacula variables are implemented, but this
+can easily be extended in the future as needs require.)
+
+\subsection{startBackupFile(bpContext *ctx, struct save\_pkt *sp)}
+This entry point is called only if your plugin is a command plugin, and
+it is called when Bacula encounters the "Plugin = " directive in
+the Include section of the FileSet.
+Called when beginning the backup of a file. Here Bacula provides you
+with a pointer to the {\bf save\_pkt} structure and you must fill in
+this packet with the "attribute" data of the file.
+
+\begin{verbatim}
+struct save_pkt {
+ int32_t pkt_size; /* size of this packet */
+ char *fname; /* Full path and filename */
+ char *link; /* Link name if any */
+ struct stat statp; /* System stat() packet for file */
+ int32_t type; /* FT_xx for this file */
+ uint32_t flags; /* Bacula internal flags */
+ bool portable; /* set if data format is portable */
+ char *cmd; /* command */
+ int32_t pkt_end; /* end packet sentinel */
+};
+\end{verbatim}
+
+The second argument is a pointer to the {\bf save\_pkt} structure for the file
+to be backed up. The plugin is responsible for filling in all the fields
+of the {\bf save\_pkt}. If you are backing up
+a real file, then generally, the statp structure can be filled in by doing
+a {\bf stat} system call on the file.
+
+If you are backing up a database or
+something that is more complex, you might want to create a virtual file.
+That is a file that does not actually exist on the filesystem, but represents
+say an object that you are backing up. In that case, you need to ensure
+that the {\bf fname} string that you pass back is unique so that it
+does not conflict with a real file on the system, and you need to
+artificially create values in the statp packet.
+
+Example programs such as {\bf bpipe-fd.c} show how to set these fields.
+You must take care not to store pointers to the stack in the pointer fields
+such as fname and link, because when you return from your function, your stack
+entries will be destroyed. The solution in that case is to malloc() the memory
+and return a pointer to it. To avoid memory leaks, you should store pointers
+to all memory allocated in your pContext structure so that in subsequent calls
+or at termination, you can release it back to the system.
+
+Once the backup has begun, Bacula will call your plugin at the {\bf pluginIO}
+entry point to "read" the data to be backed up. Please see the {\bf bpipe-fd.c}
+plugin for how to do I/O.
+
+Example of filling in the save\_pkt as used in bpipe-fd.c:
+
+\begin{verbatim}
+ struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;
+ time_t now = time(NULL);
+ sp->fname = p_ctx->fname;
+ sp->statp.st_mode = 0700 | S_IFREG;
+ sp->statp.st_ctime = now;
+ sp->statp.st_mtime = now;
+ sp->statp.st_atime = now;
+ sp->statp.st_size = -1;
+ sp->statp.st_blksize = 4096;
+ sp->statp.st_blocks = 1;
+ p_ctx->backup = true;
+ return bRC_OK;
+\end{verbatim}
+
+Note: the filename to be created has already been constructed from the
+command string previously sent to the plugin; it is stored in the plugin
+context (p\_ctx->fname) and is a malloc()ed string. This example
+creates a regular file (S\_IFREG), with the various fields filled in.
+
+In general, the sequence of commands issued from Bacula to the plugin
+to do a backup while processing the "Plugin = " directive is:
+
+\begin{enumerate}
+ \item generate a bEventBackupCommand event to the specified plugin
+ and pass it the command string.
+ \item make a startBackupFile call to the plugin, which
+ fills in the data needed in save\_pkt to save as the file
+ attributes and to put on the Volume and in the catalog.
+ \item call Bacula's internal save\_file() subroutine to save the specified
+ file. The plugin will then be called at pluginIO() to "open"
+ the file, and then to read the file data.
+ Note, if you are dealing with a virtual file, the "open" operation
+ is something the plugin does internally and it doesn't necessarily
+ mean opening a file on the filesystem. For example in the case of
+ the bpipe-fd.c program, it initiates a pipe to the requested program.
+ Finally when the plugin signals to Bacula that all the data was read,
+ Bacula will call the plugin with the "close" pluginIO() function.
+\end{enumerate}
+
+
+\subsection{endBackupFile(bpContext *ctx)}
+Called at the end of backing up a file for a command plugin. If the plugin's work
+is done, it should return bRC\_OK. If the plugin wishes to create another
+file and back it up, then it must return bRC\_More (not yet implemented).
+This is probably a good time to release any malloc()ed memory you used to
+pass back filenames.
+
+\subsection{startRestoreFile(bpContext *ctx, const char *cmd)}
+Called when the first record is read from the Volume that was
+previously written by the command plugin.
+
+\subsection{createFile(bpContext *ctx, struct restore\_pkt *rp)}
+Called for a command plugin to create a file during a Restore job before
+restoring the data.
+This entry point is called before any I/O is done on the file. After
+this call, Bacula will call pluginIO() to open the file for write.
+
+The data in the
+restore\_pkt is passed to the plugin and is based on the data that was
+originally given by the plugin during the backup and the current user
+restore settings (e.g. where, RegexWhere, replace). This allows the
+plugin to first create a file (if necessary) so that the data can
+be transmitted to it. The next call to the plugin will be a
+pluginIO command with a request to open the file write-only.
+
+This call must return one of the following values:
+
+\begin{verbatim}
+ enum {
+ CF_SKIP = 1, /* skip file (not newer or something) */
+ CF_ERROR, /* error creating file */
+ CF_EXTRACT, /* file created, data to extract */
+ CF_CREATED /* file created, no data to extract */
+};
+\end{verbatim}
+
+in the restore\_pkt value {\bf create\_status}. For a normal file,
+unless there is an error, you must return {\bf CF\_EXTRACT}.
+
+\begin{verbatim}
+
+struct restore_pkt {
+ int32_t pkt_size; /* size of this packet */
+ int32_t stream; /* attribute stream id */
+ int32_t data_stream; /* id of data stream to follow */
+ int32_t type; /* file type FT */
+ int32_t file_index; /* file index */
+ int32_t LinkFI; /* file index to data if hard link */
+ uid_t uid; /* userid */
+ struct stat statp; /* decoded stat packet */
+ const char *attrEx; /* extended attributes if any */
+ const char *ofname; /* output filename */
+ const char *olname; /* output link name */
+ const char *where; /* where */
+ const char *RegexWhere; /* regex where */
+ int replace; /* replace flag */
+ int create_status; /* status from createFile() */
+ int32_t pkt_end; /* end packet sentinel */
+
+};
+\end{verbatim}
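A minimal createFile along these lines might simply decide the
create\_status from the file type. This is only a sketch: the restore\_pkt
shown here is trimmed to the fields used, and the FT codes are illustrative
placeholders (the real values come from the Bacula headers).

```c
#include <stdint.h>

/* Illustrative stand-ins for the Bacula plugin types */
typedef enum { bRC_OK = 0, bRC_Error } bRC;
typedef struct { void *pContext; } bpContext;

enum {
   CF_SKIP = 1,        /* skip file (not newer or something) */
   CF_ERROR,           /* error creating file */
   CF_EXTRACT,         /* file created, data to extract */
   CF_CREATED          /* file created, no data to extract */
};

#define FT_REG    3    /* illustrative FT codes; real values are */
#define FT_DIREND 11   /* defined in the Bacula headers          */

struct restore_pkt {   /* trimmed to the fields used below */
   int32_t type;           /* file type FT */
   const char *ofname;     /* output filename */
   int create_status;      /* status from createFile() */
};

bRC createFile(bpContext *ctx, struct restore_pkt *rp)
{
   (void)ctx;
   switch (rp->type) {
   case FT_REG:
      /* Regular file: Bacula will next call pluginIO() to open it
       * for write and stream the data, so ask for extraction. */
      rp->create_status = CF_EXTRACT;
      return bRC_OK;
   case FT_DIREND:
      /* A directory carries no data stream to extract. */
      rp->create_status = CF_CREATED;
      return bRC_OK;
   default:
      rp->create_status = CF_ERROR;
      return bRC_Error;
   }
}
```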
+
+Typical code to create a regular file would be the following:
+
+\begin{verbatim}
+ struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;
+ time_t now = time(NULL);
+ sp->fname = p_ctx->fname; /* set the full path/filename I want to create */
+ sp->type = FT_REG;
+ sp->statp.st_mode = 0700 | S_IFREG;
+ sp->statp.st_ctime = now;
+ sp->statp.st_mtime = now;
+ sp->statp.st_atime = now;
+ sp->statp.st_size = -1;
+ sp->statp.st_blksize = 4096;
+ sp->statp.st_blocks = 1;
+ return bRC_OK;
+\end{verbatim}
+
+This will create a virtual file. If you are creating a file that actually
+exists, you will most likely want to fill the statp packet using the
+stat() system call.
+
+Creating a directory is similar, but requires a few extra steps:
+
+\begin{verbatim}
+ struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;
+ time_t now = time(NULL);
+ sp->fname = p_ctx->fname; /* set the full path I want to create */
+ /* link must be the full path with a trailing forward slash */
+ sp->link = p_ctx->link;  /* p_ctx->fname with '/' appended */
+ sp->type = FT_DIREND;
+ sp->statp.st_mode = 0700 | S_IFDIR;
+ sp->statp.st_ctime = now;
+ sp->statp.st_mtime = now;
+ sp->statp.st_atime = now;
+ sp->statp.st_size = -1;
+ sp->statp.st_blksize = 4096;
+ sp->statp.st_blocks = 1;
+ return bRC_OK;
+\end{verbatim}
+
+The link field must be set to the full canonical path name, which always
+ends with a forward slash. If you do not terminate it with a forward slash,
+you will surely have problems later.
+
+As with the example that creates a file, if you are backing up a real
+directory, you will want to do a stat() on the directory.
+
+Note, if you want the directory permissions and times to be correctly
+restored, you must create the directory {\bf after} all the files in it
+have been sent to Bacula. That allows the restore process to restore all the
+files in a directory using default directory options, then, at the end,
+restore the directory permissions. If you do it the other way around, each
+time you restore a file, the OS will modify the time values for the
+directory entry.
+
+\subsection{setFileAttributes(bpContext *ctx, struct restore\_pkt *rp)}
+This call is not yet implemented. Called for a command plugin.
+
+See the definition of {\bf restore\_pkt} in the section above.
+
+\subsection{endRestoreFile(bpContext *ctx)}
+Called when a command plugin is done restoring a file.
+
+\subsection{pluginIO(bpContext *ctx, struct io\_pkt *io)}
+Called to do the input (backup) or output (restore) of data from or to a
+file for a command plugin. These routines simulate the Unix read(), write(), open(), close(),
+and lseek() I/O calls, and the arguments are passed in the packet and
+the return values are also placed in the packet. In addition for Win32
+systems the plugin must return two additional values (described below).
+
+\begin{verbatim}
+ enum {
+ IO_OPEN = 1,
+ IO_READ = 2,
+ IO_WRITE = 3,
+ IO_CLOSE = 4,
+ IO_SEEK = 5
+};
+
+struct io_pkt {
+ int32_t pkt_size; /* Size of this packet */
+ int32_t func; /* Function code */
+ int32_t count; /* read/write count */
+ mode_t mode; /* permissions for created files */
+ int32_t flags; /* Open flags */
+ char *buf; /* read/write buffer */
+ const char *fname; /* open filename */
+ int32_t status; /* return status */
+ int32_t io_errno; /* errno code */
+ int32_t lerror; /* Win32 error code */
+ int32_t whence; /* lseek argument */
+ boffset_t offset; /* lseek argument */
+ bool win32; /* Win32 GetLastError returned */
+ int32_t pkt_end; /* end packet sentinel */
+};
+\end{verbatim}
+
+The particular Unix function being simulated is indicated by the {\bf func},
+which will have one of the IO\_OPEN, IO\_READ, ... codes listed above.
+The status code that would be returned from a Unix call is returned in
+{\bf status} for IO\_OPEN, IO\_CLOSE, IO\_READ, and IO\_WRITE. The return value for
+IO\_SEEK is returned in {\bf offset} which in general is a 64 bit value.
+
+When there is an error on Unix systems, you must always set io\_errno, and
+on a Win32 system, you must always set win32 and place the value returned
+by the OS call GetLastError() in lerror.
+
+For all except IO\_SEEK, {\bf status} is the return result. In general it is
+a positive integer unless there is an error in which case it is -1.
+
+The following describes each call and what you get and what you
+should return:
+
+\begin{description}
+ \item [IO\_OPEN]
+ You will be passed fname, mode, and flags.
+ You must set on return: status, and if there is a Unix error
+ io\_errno must be set to the errno value, and if there is a
+ Win32 error win32 and lerror.
+
+ \item [IO\_READ]
+ You will be passed: count, and buf (buffer of size count).
+ You must set on return: status to the number of bytes
+ read into the buffer (buf) or -1 on an error,
+ and if there is a Unix error
+ io\_errno must be set to the errno value, and if there is a
+ Win32 error, win32 and lerror must be set.
+
+ \item [IO\_WRITE]
+ You will be passed: count, and buf (buffer of size count).
+ You must set on return: status to the number of bytes
+ written from the buffer (buf) or -1 on an error,
+ and if there is a Unix error
+ io\_errno must be set to the errno value, and if there is a
+ Win32 error, win32 and lerror must be set.
+
+ \item [IO\_CLOSE]
+ Nothing will be passed to you. On return you must set
+ status to 0 on success and -1 on failure. If there is a Unix error
+ io\_errno must be set to the errno value, and if there is a
+ Win32 error, win32 and lerror must be set.
+
+  \item [IO\_SEEK]
+  You will be passed: offset and whence. offset is a 64 bit value
+  and is the position to seek to relative to whence. whence is one
+  of SEEK\_SET, SEEK\_CUR, or SEEK\_END, indicating whether to seek
+  to an absolute position, relative to the current position, or
+  relative to the end of the file.
+  You must pass back in offset the absolute location to which you
+  seeked. If there is an error, offset should be set to -1.
+ If there is a Unix error
+ io\_errno must be set to the errno value, and if there is a
+ Win32 error, win32 and lerror must be set.
+
+ Note: Bacula will call IO\_SEEK only when writing a sparse file.
+
+\end{description}
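Putting the calls together, a minimal Unix-only pluginIO for an ordinary
file could map the function codes directly onto the POSIX calls they
simulate. This is a sketch under stated assumptions: the plugin\_ctx holding
the file descriptor is illustrative, the stand-in type definitions replace
the Bacula headers, and the Win32 fields are simply cleared.

```c
#include <errno.h>
#include <fcntl.h>
#include <stdbool.h>
#include <stdint.h>
#include <sys/types.h>
#include <unistd.h>

/* Illustrative stand-ins for the Bacula plugin types */
typedef enum { bRC_OK = 0, bRC_Error } bRC;
typedef struct { void *pContext; } bpContext;
typedef int64_t boffset_t;

enum { IO_OPEN = 1, IO_READ, IO_WRITE, IO_CLOSE, IO_SEEK };

struct io_pkt {        /* trimmed to the fields used below */
   int32_t func;           /* Function code */
   int32_t count;          /* read/write count */
   mode_t mode;            /* permissions for created files */
   int32_t flags;          /* Open flags */
   char *buf;              /* read/write buffer */
   const char *fname;      /* open filename */
   int32_t status;         /* return status */
   int32_t io_errno;       /* errno code */
   int32_t lerror;         /* Win32 error code */
   int32_t whence;         /* lseek argument */
   boffset_t offset;       /* lseek argument */
   bool win32;             /* Win32 GetLastError returned */
};

struct plugin_ctx {
   int fd;                 /* descriptor opened by IO_OPEN */
};

bRC pluginIO(bpContext *ctx, struct io_pkt *io)
{
   struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;

   io->io_errno = 0;
   io->lerror = 0;
   io->win32 = false;      /* Unix-only sketch */

   switch (io->func) {
   case IO_OPEN:
      p_ctx->fd = open(io->fname, io->flags, io->mode);
      io->status = p_ctx->fd;
      break;
   case IO_READ:
      io->status = read(p_ctx->fd, io->buf, io->count);
      break;
   case IO_WRITE:
      io->status = write(p_ctx->fd, io->buf, io->count);
      break;
   case IO_CLOSE:
      io->status = close(p_ctx->fd);
      break;
   case IO_SEEK:
      io->offset = lseek(p_ctx->fd, io->offset, io->whence);
      io->status = (io->offset < 0) ? -1 : 0;
      break;
   }
   if (io->status < 0) {
      io->io_errno = errno;   /* always set on a Unix error */
   }
   return io->status < 0 ? bRC_Error : bRC_OK;
}
```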
+
+\section{Bacula Plugin Entrypoints}
+When Bacula calls one of your plugin entrypoints, you can call back to
+the entrypoints in Bacula that were supplied during the xxx plugin call
+to get or set information within Bacula.
+
+\subsection{bRC registerBaculaEvents(bpContext *ctx, ...)}
+This Bacula entrypoint will allow you to register to receive events
+that are not automatically passed to your plugin by default. This
+entrypoint is currently unimplemented.
+
+\subsection{bRC getBaculaValue(bpContext *ctx, bVariable var, void *value)}
+Calling this entrypoint, you can obtain specific values that are available
+in Bacula.
+
+\subsection{bRC setBaculaValue(bpContext *ctx, bVariable var, void *value)}
+Calling this entrypoint allows you to set particular values in
+Bacula.
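The calling convention is to pass the address of the variable to be filled
in through the void *value argument. The sketch below shows that shape;
the bVariable code and the stub body are illustrative assumptions (the real
codes come from the Bacula headers, and the real implementation lives inside
Bacula, not the plugin).

```c
#include <string.h>

/* Illustrative stand-ins for the Bacula plugin types */
typedef enum { bRC_OK = 0, bRC_Error } bRC;
typedef struct { void *pContext; } bpContext;
typedef enum { bVarJobName = 1 } bVariable;   /* illustrative code */

/* Stub standing in for Bacula's side of the call: it hands back a
 * fixed job name so the pointer-passing convention can be seen. */
bRC getBaculaValue(bpContext *ctx, bVariable var, void *value)
{
   (void)ctx;
   if (var == bVarJobName) {
      *(const char **)value = "NightlySave.2008-10-01";
      return bRC_OK;
   }
   return bRC_Error;
}
```

A plugin would call it as `getBaculaValue(ctx, bVarJobName, &jobname)` with
`const char *jobname` and check the returned bRC before using the result.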
+
+\subsection{bRC JobMessage(bpContext *ctx, const char *file, int line,
+ int type, time\_t mtime, const char *fmt, ...)}
+This call permits you to put a message in the Job Report.
+
+
+\subsection{bRC DebugMessage(bpContext *ctx, const char *file, int line,
+  int level, const char *fmt, ...)}
+This call permits you to print a debug message.
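Both message calls are normally reached through the function table Bacula
hands the plugin at load time. The sketch below shows typical call sites;
the stub bodies, the M\_INFO code, and the bFuncs layout shown here are
illustrative assumptions standing in for the Bacula headers (the real
JobMessage adds to the Job Report and the real DebugMessage honors the
debug level).

```c
#include <stdarg.h>
#include <stdio.h>
#include <time.h>

/* Illustrative stand-ins for the Bacula plugin types */
typedef enum { bRC_OK = 0 } bRC;
typedef struct { void *pContext; } bpContext;
#define M_INFO 1                  /* illustrative message-type code */

typedef struct {
   bRC (*JobMessage)(bpContext *ctx, const char *file, int line,
                     int type, time_t mtime, const char *fmt, ...);
   bRC (*DebugMessage)(bpContext *ctx, const char *file, int line,
                       int level, const char *fmt, ...);
} bFuncs;

static bRC stub_msg(bpContext *ctx, const char *file, int line,
                    int type, time_t mtime, const char *fmt, ...)
{
   va_list ap;
   (void)ctx; (void)file; (void)line; (void)type; (void)mtime;
   va_start(ap, fmt);
   vprintf(fmt, ap);              /* real call adds to the Job Report */
   va_end(ap);
   return bRC_OK;
}

static bRC stub_dbg(bpContext *ctx, const char *file, int line,
                    int level, const char *fmt, ...)
{
   va_list ap;
   (void)ctx; (void)file; (void)line; (void)level;
   va_start(ap, fmt);
   vprintf(fmt, ap);              /* real call honors the debug level */
   va_end(ap);
   return bRC_OK;
}

static bFuncs stubs = { stub_msg, stub_dbg };
static bFuncs *bfuncs = &stubs;   /* a real plugin saves this pointer
                                   * when it is loaded */

/* Typical call sites inside a plugin entrypoint: */
bRC report_progress(bpContext *ctx, int nfiles)
{
   bfuncs->DebugMessage(ctx, __FILE__, __LINE__, 100,
                        "entering backup phase\n");
   return bfuncs->JobMessage(ctx, __FILE__, __LINE__, M_INFO, 0,
                             "Processed %d files\n", nfiles);
}
```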
+\subsection{void *baculaMalloc(bpContext *ctx, const char *file, int line,
+  size\_t size)}
+This call permits you to obtain memory from Bacula's memory allocator.
+\subsection{void baculaFree(bpContext *ctx, const char *file, int line, void *mem)}
+This call permits you to free memory obtained from Bacula's memory allocator.