This chapter presents the new features added to the development 2.5.x
versions to be released as Bacula version 3.0.0 near the end of 2008.
+\section{Accurate}
+\index[general]{Accurate Backup}
+As with most other backup programs, Bacula decides which files to back up
+for Incremental and Differential backups by comparing the change (st\_ctime)
+and modification (st\_mtime) times of the file to the time the last backup completed.
+If one of those two times is later than the last backup time, then the file
+will be backed up. This does not, however, permit tracking what files have
+been deleted and will miss any file with an old time that may have been
+restored or moved on the client filesystem.
+
+If the {\bf Accurate = \lt{}yes|no\gt{}} directive is enabled (default no) in the
+Job resource, the job will be run as an Accurate Job. For a {\bf Full}
+backup, there is no difference, but for {\bf Differential} and {\bf Incremental}
+backups, the Director will send a list of all previous files backed up, and the
+File daemon will use that list to determine if any new files have been added
+or moved and if any files have been deleted. This allows Bacula to make an accurate
+backup of your system to that point in time so that if you do a restore, it
+will restore your system exactly. One note of caution about using Accurate backup is that
+it requires more resources (CPU and memory) on both the Director and
+the Client machines to create the list of previous files backed up, to send that
+list to the File daemon, for the File daemon to keep the list (possibly very big)
+in memory, and for the File daemon to do comparisons between every file in the
+FileSet and the list.
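+
+For example, a Job that should run as an Accurate job might be defined as
+follows (the Client, FileSet and Pool names here are illustrative only):
+
+\begin{verbatim}
+Job {
+  Name = "AccurateBackup"
+  Type = Backup
+  Client = myclient-fd
+  FileSet = "Full Set"
+  Pool = Default
+  Accurate = yes
+}
+\end{verbatim}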
+
+
\section{Copy Jobs}
\index[general]{Copy Jobs}
A new {\bf Copy} job type has been implemented. It is essentially
Pool {
Name = Default
Pool Type = Backup
- Recycle = yes # Bacula can automatically recycle Volumes
- AutoPrune = yes # Prune expired volumes
- Volume Retention = 365d # one year
+ Recycle = yes # Automatically recycle Volumes
+ AutoPrune = yes # Prune expired volumes
+ Volume Retention = 365d # one year
NextPool = Full
Storage = File
}
Pool {
Name = Full
Pool Type = Backup
- Recycle = yes # Bacula can automatically recycle Volumes
- AutoPrune = yes # Prune expired volumes
- Volume Retention = 365d # one year
+ Recycle = yes # Automatically recycle Volumes
+ AutoPrune = yes # Prune expired volumes
+ Volume Retention = 365d # one year
Storage = DiskChanger
}
# Definition of DDS Virtual tape disk storage device
Storage {
Name = DiskChanger
- Address = localhost # N.B. Use a fully qualified name here
+ Address = localhost # N.B. Use a fully qualified name here
Password = "yyy"
Device = DiskChanger
Media Type = DiskChangerMedia
And it would produce a new Full backup without using the client, and the output
would be written to the {\bf Full} Pool which uses the Diskchanger Storage.
+\section{Duplicate Job Control}
+\index[general]{Duplicate Jobs}
+The new version of Bacula provides four new directives that
+give additional control over what Bacula does if duplicate jobs
+are started. A duplicate job in the sense we use it here means that
+a second or subsequent job with the same name starts. This
+happens most frequently when the first job runs longer than expected because no
+tapes are available.
+
+The four directives each take as an argument a {\bf yes} or {\bf no} value and
+are specified in the Job resource.
+
+They are:
+
+\begin{description}
+\item [Allow Duplicate Jobs = \lt{}yes|no\gt{}]
+  If this directive is enabled, duplicate jobs will be run. If
+ the directive is set to {\bf no} (default) then only one job of a given name
+ may run at one time, and the action that Bacula takes to ensure only
+ one job runs is determined by the other directives (see below).
+
+\item [Allow Higher Duplicates = \lt{}yes|no\gt{}]
+ If this directive is set to {\bf yes} (default) the job with a higher
+ priority (lower priority number) will be permitted to run. If the
+ priorities of the two jobs are the same, the outcome is determined by
+ other directives (see below).
+
+\item [Cancel Queued Duplicates = \lt{}yes|no\gt{}]
+ If this directive is set to {\bf yes} (default) any job that is
+ already queued to run but not yet running will be canceled.
+
+\item [Cancel Running Duplicates = \lt{}yes|no\gt{}]
+ If this directive is set to {\bf yes} any job that is already running
+ will be canceled. The default is {\bf no}.
+\end{description}
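+
+For example, to ensure that a second invocation of a Job cancels a copy
+that is still running, one might write (the Job name and the elided
+directives are illustrative only):
+
+\begin{verbatim}
+Job {
+  Name = "NightlySave"
+  ...
+  Allow Duplicate Jobs = no
+  Cancel Running Duplicates = yes
+}
+\end{verbatim}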
+
\section{TLS Authentication}
\index[general]{TLS Authentication}
In Bacula version 2.5.x and later, in addition to the normal Bacula
The default value is {\bf no}.
-\section{Duplicate Job Control}
-\index[general]{Duplicate Jobs}
-The new version of Bacula provides four new directives that
-give additional control over what Bacula does if duplicate jobs
-are started. A duplicate job in the sense we use it here means
-a second or subsequent job with the same name starts. This
-happens most frequently when the first job runs longer than expected because no
-tapes are available.
-
-The four directives each take as an argument a yes or no value and
-are specified in the Job resource.
-
-They are:
-
-\begin{description}
-\item [Allow Duplicate Jobs = \lt{}yes|no\gt{}]
- If this directive is enabled duplicate jobs will be run. If
- the directive is set to {\bf no} (default) then only one job of a given name
- may run at one time, and the action that Bacula takes to ensure only
- one job runs is determined by the other directives (see below).
-
-\item [Allow Higher Duplicates = \lt{}yes|no\gt{}]
- If this directive is set to {\bf yes} (default) the job with a higher
- priority (lower priority number) will be permitted to run. If the
- priorities of the two jobs are the same, the outcome is determined by
- other directives (see below).
-
-\item [Cancel Queued Duplicates = \lt{}yes|no\gt{}]
- If this directive is set to {\bf yes} (default) any job that is
- already queued to run but not yet running will be canceled.
-
-\item [Cancel Running Duplicates = \lt{}yes|no\gt{}]
- If this directive is set to {\bf yes} any job that is already running
- will be canceled. The default is {\bf no}.
-\end{description}
-
-
\section{Ignore Dir}
\index[general]{IgnoreDir}
The {\bf Ignore Dir = \lt{}filename\gt{}} is a new directive that can be added to the Include
For example:
\begin{verbatim}
- # List of files to be backed up
- FileSet {
- Name = "MyFileSet"
- Include {
- Options {
- signature = MD5
- }
- File = /home
- IgnoreDir = .excludeme
- }
- }
+ # List of files to be backed up
+ FileSet {
+ Name = "MyFileSet"
+ Include {
+ Options {
+ signature = MD5
+ }
+ File = /home
+ IgnoreDir = .excludeme
+ }
+ }
\end{verbatim}
But in /home, there may be hundreds of directories of users and some
-\section{Accurate}
-\index[general]{Accurate Backup}
-As with most other backup programs, Bacula decides what files to backup
-for Incremental and Differental backup by comparing the change (st\_ctime)
-and modification (st\_mtime) times of the file to the time the last backup completed.
-If one of those two times is later than the last backup time, then the file
-will be backed up. This does not, however, permit tracking what files have
-been deleted and will miss any file with an old time that may have been
-restored or moved on the client filesystem.
-
-If the {\bf Accurate = \lt{}yes|no\gt{}} directive is enabled (default no) in the
-Job resource, the job will be run as an Accurate Job. For a {\bf Full}
-backup, there is no difference, but for {\bf Differential} and {\bf Incremental}
-backups, the Director will send a list of all previous files backed up, and the
-File daemon will use that list to determine if any new files have been added or
-or moved and if any files have been deleted. This allows Bacula to make an accurate
-backup of your system to that point in time so that if you do a restore, it
-will restore your system exactly. The downside of using Accurate backup is that
-it requires significantly more resources (CPU and memory) on both the Director and
-the Client machine to create the list of previous files backed up, to send that
-list to the File daemon, and do comparisons on the File daemon between every file
-and the list.
\section{Bacula Plugins}
\index[general]{Plugin}
get control to backup and restore a file.
Plugins are also planned (partially implemented) in the Director and the
-Storage daemon. Also we plan (at some point) to port (partially implemented)
-the plugin code to Win32 machines.
+Storage daemon. The code is also implemented to work on Win32 machines,
+but it has not yet been tested.
+
+\subsection{Plugin Directory}
+Each daemon (DIR, FD, SD) has a new {\bf Plugin Directory} directive that may
+be added to the daemon definition resource. The directive takes a quoted
+string argument, which is the name of the directory in which the daemon can
+find the Bacula plugins. If this directive is not specified, Bacula will not
+load any plugins. Since each plugin has a distinctive name, all the daemons
+can share the same plugin directory.
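+
+For example, in the File daemon's configuration file one might write
+(the directory path shown is illustrative only):
+
+\begin{verbatim}
+FileDaemon {
+  Name = myclient-fd
+  ...
+  Plugin Directory = "/usr/lib/bacula/plugins"
+}
+\end{verbatim}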
+
+
+
+\subsection{Plugin Options}
+The {\bf Plugin Options} directive takes a quoted string
+argument (after the equal sign) and may be specified in the
+Job resource. The options specified will be passed to the plugin
+when it is run. The value defined in the Job resource can be modified
+by the user when he runs a Job via the {\bf bconsole} command line
+prompts.
+
+Note: this directive may be specified, but it is not yet passed to
+the plugin (i.e. not fully implemented).
+
+\subsection{Plugin Options ACL}
+The {\bf Plugin Options ACL} directive may be specified in the
+Director's Console resource. It functions as all the other ACL commands
+do by permitting users running restricted consoles to specify a
+{\bf Plugin Options} that overrides the one specified in the Job
+definition. Without this directive restricted consoles may not modify
+the Plugin Options.
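+
+For example, a restricted Console could be permitted to pass any options
+to the plugins with (the Console name and password are illustrative only):
+
+\begin{verbatim}
+Console {
+  Name = restricted-user
+  Password = "xxx"
+  Plugin Options ACL = *all*
+}
+\end{verbatim}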
+
+\subsection{Plugin}
+The {\bf Plugin} directive is specified in the Include section of
+a FileSet resource where you put your {\bf File = xxx} directives.
+For example:
+
+\begin{verbatim}
+ FileSet {
+ Name = "MyFileSet"
+ Include {
+ Options {
+ signature = MD5
+ }
+ File = /home
+ Plugin = "bpipe:..."
+ }
+ }
+\end{verbatim}
+
+In the above example, when the File daemon is processing the directives
+in the Include section, it will first back up all the files in {\bf /home},
+then it will load the plugin named {\bf bpipe} (actually bpipe-fd.so) from
+the Plugin Directory. The syntax and semantics of the Plugin directive
+require the first part of the string up to the colon (:) to be the name
+of the plugin. Everything after the first colon is ignored by the File daemon but
+is passed to the plugin. Thus the plugin writer may define the meaning of the
+rest of the string as he wishes.
+
+Please see the next section for information about the {\bf bpipe} Bacula
+plugin.
+
+\section{The bpipe Plugin}
+The {\bf bpipe} plugin is provided in the file src/plugins/fd/bpipe-fd.c of
+the Bacula source distribution. When the plugin is compiled and linked into
+the resulting dynamic shared object (DSO), it will have the name {\bf bpipe-fd.so}.
+
+The purpose of the plugin is to provide an interface to any system program for
+backup and restore. As specified above the {\bf bpipe} plugin is specified in
+the Include section of your Job's FileSet resource. The full syntax of the
+plugin directive as interpreted by the {\bf bpipe} plugin (each plugin is free
+to specify the syntax as it wishes) is:
+
+\begin{verbatim}
+ Plugin = "<field1>:<field2>:<field3>:<field4>"
+\end{verbatim}
+
+where
+\begin{description}
+\item {\bf field1} is the name of the plugin with the trailing {\bf -fd.so}
+stripped off, so in this case, we would put {\bf bpipe} in this field.
+
+\item {\bf field2} specifies the namespace, which for {\bf bpipe} is the
+pseudo path and filename under which the backup will be saved. This pseudo
+path and filename will be seen by the user in the restore file tree.
+For example, if the value is {\bf /MYSQL/regress.sql}, the data
+backed up by the plugin will be put under that "pseudo" path and filename.
+You must be careful to choose a naming convention that is unique to avoid
+a conflict with a path and filename that actually exists on your system.
+
+\item {\bf field3} for the {\bf bpipe} plugin
+specifies the "reader" program that is called by the plugin during
+backup to read the data. {\bf bpipe} will call this program by doing a
+{\bf popen} on it.
+
+\item {\bf field4} for the {\bf bpipe} plugin
+specifies the "writer" program that is called by the plugin during
+restore to write the data back to the filesystem.
+\end{description}
+
+Putting it all together, the full plugin directive line might look
+like the following:
+
+\begin{verbatim}
+Plugin = "bpipe:/MYSQL/regress.sql:mysqldump -f
+ --opt --databases bacula:mysql"
+\end{verbatim}
+
+The directive has been split into two lines here, but within the {\bf bacula-dir.conf} file
+it would be written on a single line.
+
+This causes the File daemon to call the {\bf bpipe} plugin, which will write
+its data into the "pseudo" file {\bf /MYSQL/regress.sql} by calling the
+program {\bf mysqldump -f --opt --databases bacula} to read the data during
+backup. The mysqldump command outputs all the data for the database named
+{\bf bacula}, which will be read by the plugin and stored in the backup.
+During restore, the data that was backed up will be sent to the program
+specified in the last field, which in this case is {\bf mysql}. When
+{\bf mysql} is called, it will read the data sent to it by the plugin
+then write it back to the same database from which it came ({\bf bacula}
+in this case).
+
+The {\bf bpipe} plugin is a generic pipe program that simply transmits
+the data from a specified program to Bacula for backup, and then from Bacula to
+a specified program for restore.
+
+By using different command lines to {\bf bpipe},
+you can backup any kind of data (ASCII or binary) depending
+on the program called.
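+
+As another purely illustrative example, a PostgreSQL database could be
+dumped and restored through the same pipe mechanism:
+
+\begin{verbatim}
+Plugin = "bpipe:/PGSQL/bacula.sql:pg_dump bacula:psql bacula"
+\end{verbatim}
+
+Here {\bf pg\_dump} writes the dump to its standard output during the
+backup, and {\bf psql} reads it back from its standard input during the
+restore.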
+
\section{Display Autochanger Content}
The bconsole {\bf status dir} output has been enhanced to indicate
Storage daemon job spooling and despooling activity.
-\item [Statistics Enhancements]
+\item [Connect Timeout]
+The default connect timeout to the File
+daemon has been set to 3 minutes. Previously it was 30 minutes.
+
+\item [ftruncate for NFS Volumes]
+If you write to a Volume mounted by NFS (say on a local file server),
+in previous Bacula versions, when the Volume was recycled, it was not
+properly truncated because NFS does not implement ftruncate (file
+truncate). This is now corrected in the new version because we have
+written code (actually contributed by a kind user) that deletes and recreates the Volume,
+thus accomplishing the same thing as a truncate.
+
+\item [Support for Ubuntu]
+The new version of Bacula now recognizes the Ubuntu (and Kubuntu)
+version of Linux, and thus now provides correct autostart routines.
+Since Ubuntu officially supports Bacula, you can also obtain any
+recent release of Bacula from the Ubuntu repositories.
+
+
+\item [FD Version]
+The File daemon to Director protocol now includes a version
+number, which will help us in future versions to automatically determine
+whether a File daemon is compatible.
+
+\item [Max Run Sched Time]
+
+\item [Full Max Wait Time]
+
+\item [Incremental Max Wait Time]
-If you (or you boss) want to have statistics on your backups, you could use
-some SQL stuffs on the Job table to report how many:
+\item [Differential Max Wait Time]
+
+\item [Full Max Run Time]
+
+\item [Differential Max Run Time]
+
+\item [Incremental Max Run Time]
+
+
+\item [Statistics Enhancements]
+If you (or your boss) want to have statistics on your backups, you could use
+a few SQL queries on the Job table to report how many:
\begin{itemize}
\item jobs have run
\item jobs have been successful
-\item files have been backuped
+\item files have been backed up
\item ...
\end{itemize}
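+For example, such numbers can be obtained with queries like the
+following (run against the standard catalog {\bf Job} table; 'T' is the
+status code of a Job that terminated normally):
+
+\begin{verbatim}
+SELECT COUNT(*) FROM Job;
+SELECT COUNT(*) FROM Job WHERE JobStatus = 'T';
+SELECT SUM(JobFiles) FROM Job;
+\end{verbatim}
+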
-Theses statistics are accurate only if your job retention is greater than
-your statistic period. Ie, if jobs are purged from the catalog, you won't be
+However, these statistics are accurate only if your job retention is greater than
+your statistics period; i.e., if jobs are purged from the catalog, you won't be
able to use them.
-Now, you can use the \textbf{update stats [days=num]} console to fill the
+Now, you can use the \textbf{update stats [days=num]} console command to fill the
JobStat table with new Job records.
-The \textbt{Statistics Retention = \lt{}time\gt{}} director directive defines
+The \textbf{Statistics Retention = \lt{}time\gt{}} director directive defines
the length of time that Bacula will keep statistics job records in the Catalog
database after the Job End time. (In \texttt{JobStat} table) When this time
period expires, and if user runs \texttt{prune stats} command, Bacula will
prune (remove) Job records that are older than the specified period.
-Theses statistics records aren't use for restore purpose, but mainly for
+These statistics records aren't used for restores, but mainly for
capacity planning, billings, etc.
-You can use this setup in your \textbf{BackupCatalog} job to maintain
+You can use the following Job resource in your nightly \textbf{BackupCatalog} job to maintain
statistics.
\begin{verbatim}
Job {
\end{verbatim}
\item [Spooling Enhancements]
-
A new job directive permits to specify the spool size per job. This is used
-in advance job tunning. {\bf SpoolSize={\it bytes}}
+in advanced job tuning. {\bf SpoolSize={\it bytes}}
+
+\end{description}
+
+\section{Building Bacula Plugins}
+There is currently one sample program {\bf example-plugin-fd.c} and
+one working plugin {\bf bpipe-fd.c} that can be found in the Bacula
+{\bf src/plugins/fd} directory. Both are built with the following:
+
+\begin{verbatim}
+ cd <bacula-source>
+ ./configure <your-options>
+ make
+ ...
+ cd src/plugins/fd
+ make
+ make test
+\end{verbatim}
+
+After building Bacula and changing into the src/plugins/fd directory,
+the {\bf make} command will build the {\bf bpipe-fd.so} plugin, which
+is a very useful and working program.
+
+The {\bf make test} command will build the {\bf example-plugin-fd.so}
+plugin and a binary named main, which is built from the source
+code located in {\bf src/filed/fd\_plugins.c}.
+
+If you execute {\bf ./main}, it will load and run the example-plugin-fd
+plugin simulating a small number of the calling sequences that Bacula uses
+in calling a real plugin. This allows you to do initial testing of
+your plugin prior to trying it with Bacula.
+
+You can get a good idea of how to write your own plugin by first
+studying the example-plugin-fd, and actually running it. Then
+it can also be instructive to read the bpipe-fd.c code as it is
+a real plugin, which is still rather simple and small.
+
+When actually writing your own plugin, you may use the example-plugin-fd.c
+code as a template for your code.
+
+
+%%
+%%
+
+\chapter{Bacula FD Plugin API}
+To write a Bacula plugin, you create a dynamic shared object
+program (or dll on Win32) with a particular name and two
+entry points, place it in the {\bf Plugin Directory}, and when the FD
+starts, it will load all the plugins found in that directory.
+Once it loads them, it calls the {\bf loadPlugin} entry point (see below)
+then later, it will call particular functions that are defined by the
+{\bf loadPlugin} interface. When Bacula is finished with the plugin
+(when Bacula is going to exit), it will call the {\bf unloadPlugin}
+entry point.
+
+The two entry points are:
+
+\begin{verbatim}
+bRC loadPlugin(bInfo *lbinfo, bFuncs *lbfuncs, pInfo **pinfo, pFuncs **pfuncs)
+
+and
+
+bRC unloadPlugin()
+\end{verbatim}
+
+Both of these entry points to the shared object are defined as C entry points
+to avoid name mangling complications with C++. However, the shared object
+can actually be written in any language.
+
+The definitions for {\bf bRC} and the arguments are in {\bf
+src/filed/fd-plugins.h}, and so this header file needs to be included in
+your plugin. It, along with {\bf lib/plugins.h}, defines essentially the whole
+plugin interface. This header file in turn includes the following
+files:
+
+\begin{verbatim}
+#include <sys/types.h>
+#include "config.h"
+#include "bc_types.h"
+#include "lib/plugins.h"
+#include <sys/stat.h>
+\end{verbatim}
+
+Aside from the {\bf bc\_types.h} header, the plugin definition uses the
+minimum code from Bacula. The bc\_types.h file is required to ensure that
+the data type definitions in arguments correspond to the Bacula core code.
+
+The return codes are defined as:
+\begin{verbatim}
+typedef enum {
+ bRC_OK = 0, /* OK */
+ bRC_Stop = 1, /* Stop calling other plugins */
+ bRC_Error = 2, /* Some kind of error */
+ bRC_More = 3, /* More files to backup */
+} bRC;
+\end{verbatim}
+
+
+At a future point in time, we hope to make the Bacula libbac.a into a
+shared object so that the plugin can use much more of Bacula's
+infrastructure, but for this first cut, we have tried to minimize the
+dependence on Bacula.
+
+\section{loadPlugin}
+As previously mentioned, the {\bf loadPlugin} entry point in the plugin
+is called immediately after Bacula loads the plugin. In calling the
+plugin, the first two arguments are information from Bacula that
+is passed to the plugin, and the last two arguments are information
+about the plugin that is returned to Bacula. The call is:
+
+\begin{verbatim}
+bRC loadPlugin(bInfo *lbinfo, bFuncs *lbfuncs, pInfo **pinfo, pFuncs **pfuncs)
+\end{verbatim}
+and the arguments are:
+\begin{description}
+\item [lbinfo]
+This is information about Bacula in general. Currently, the only value
+defined in the bInfo structure is version, which is the Bacula plugin
+interface version, currently defined as 1.
+The exact definition as of this writing is:
+
+\begin{verbatim}
+typedef struct s_baculaInfo {
+ uint32_t size;
+ uint32_t version;
+} bInfo;
+\end{verbatim}
+
+\item [lbfuncs]
+The bFuncs structure defines the callback entry points within Bacula
+that the plugin can use to register events, get Bacula values, set
+Bacula values, and send messages to the Job output.
+
+The exact definition as of this writing is:
+
+\begin{verbatim}
+typedef struct s_baculaFuncs {
+ uint32_t size;
+ uint32_t version;
+ bRC (*registerBaculaEvents)(bpContext *ctx, ...);
+ bRC (*getBaculaValue)(bpContext *ctx, bVariable var, void *value);
+ bRC (*setBaculaValue)(bpContext *ctx, bVariable var, void *value);
+ bRC (*JobMessage)(bpContext *ctx, const char *file, int line,
+ int type, time_t mtime, const char *fmt, ...);
+ bRC (*DebugMessage)(bpContext *ctx, const char *file, int line,
+ int level, const char *fmt, ...);
+} bFuncs;
+\end{verbatim}
+
+We will discuss these entry points and how to use them a bit later when
+describing the plugin code.
+
+\item [pInfo]
+When the loadPlugin entry point is called, the plugin must initialize
+an information structure about the plugin and return a pointer to
+this structure to Bacula.
+
+The exact definition as of this writing is:
+
+\begin{verbatim}
+typedef struct s_pluginInfo {
+ uint32_t size;
+ uint32_t version;
+ const char *plugin_magic;
+ const char *plugin_license;
+ const char *plugin_author;
+ const char *plugin_date;
+ const char *plugin_version;
+ const char *plugin_description;
+} pInfo;
+\end{verbatim}
+
+Where:
+ \begin{description}
+ \item [version] is the current plugin interface version, currently
+ set to 1.
+ \item [plugin\_magic] is a pointer to the string "*FDPluginData*", a
+ sort of sanity check.
+ \item [plugin\_license] is a pointer to a string that describes the
+ plugin license.
+ \item [plugin\_author] is a pointer to the name of the author of the program.
+ \item [plugin\_date] is a pointer to a string containing the date of the plugin.
+ \item [plugin\_version] is a pointer to a string containing the version of
+ the plugin.
+ \item [plugin\_description] is a pointer to a string describing what the
+ plugin does.
+ \end{description}
+
+The pInfo structure must be defined in static memory because Bacula does not
+copy it and may refer to the values at any time while the plugin is
+loaded.
+
+\item [pFuncs]
+When the loadPlugin entry point is called, the plugin must initialize
+an entry point structure about the plugin and return a pointer to
+this structure to Bacula. This structure contains a pointer to each
+of the entry points that the plugin must provide for Bacula. When
+Bacula is actually running the plugin, it will call the defined
+entry points at particular times. All entry points must be defined.
+
+The pFuncs structure must be defined in static memory because Bacula does not
+copy it and may refer to the values at any time while the plugin is
+loaded.
+
+
+The exact definition as of this writing is:
+
+\begin{verbatim}
+typedef struct s_pluginFuncs {
+ uint32_t size;
+ uint32_t version;
+ bRC (*newPlugin)(bpContext *ctx);
+ bRC (*freePlugin)(bpContext *ctx);
+ bRC (*getPluginValue)(bpContext *ctx, pVariable var, void *value);
+ bRC (*setPluginValue)(bpContext *ctx, pVariable var, void *value);
+ bRC (*handlePluginEvent)(bpContext *ctx, bEvent *event, void *value);
+ bRC (*startBackupFile)(bpContext *ctx, struct save_pkt *sp);
+ bRC (*endBackupFile)(bpContext *ctx);
+ bRC (*startRestoreFile)(bpContext *ctx, const char *cmd);
+ bRC (*endRestoreFile)(bpContext *ctx);
+ bRC (*pluginIO)(bpContext *ctx, struct io_pkt *io);
+ bRC (*createFile)(bpContext *ctx, struct restore_pkt *rp);
+ bRC (*setFileAttributes)(bpContext *ctx, struct restore_pkt *rp);
+} pFuncs;
+\end{verbatim}
+
+The details of the entry points will be presented in
+separate sections below.
+
+Where:
+ \begin{description}
+ \item [size] is the size of the structure.
+ \item [version] is the plugin interface version.
+ \end{description}
+
+\end{description}
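+
+Putting this together, a minimal {\bf loadPlugin} might be sketched as
+follows. This is an illustrative skeleton only: it assumes the
+definitions from {\bf src/filed/fd-plugins.h} and static {\bf pluginInfo}
+and {\bf pluginFuncs} structures that are filled in elsewhere in the
+plugin source:
+
+\begin{verbatim}
+static bFuncs *bfuncs = NULL;     /* Bacula callback entry points */
+static bInfo  *binfo  = NULL;     /* Bacula information */
+
+bRC loadPlugin(bInfo *lbinfo, bFuncs *lbfuncs,
+               pInfo **pinfo, pFuncs **pfuncs)
+{
+   bfuncs = lbfuncs;        /* save Bacula's entry points */
+   binfo  = lbinfo;
+   *pinfo  = &pluginInfo;   /* return pointer to our pInfo */
+   *pfuncs = &pluginFuncs;  /* return pointer to our pFuncs */
+   return bRC_OK;
+}
+\end{verbatim}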
+
+\section{Plugin Entry Points}
+This section will describe each of the entry points that
+the plugin must provide for Bacula, when they are called
+and their arguments.
+
+\subsection{newPlugin(bpContext *ctx)}
+ This is the entry point that Bacula will call
+ when a new instance of the plugin is created. This typically
+ happens at the beginning of a Job. If 10 Jobs are running
+ simultaneously, there will be at least 10 instances of the
+ plugin.
+
+ The bpContext structure will be passed to the plugin, and
+ during this call, if the plugin needs to have its private
+ working storage that is associated with the particular
+ instance of the plugin, it should create it from the heap
+ (malloc the memory) and store a pointer to
+ its private working storage in the {\bf pContext} variable.
+
+\begin{verbatim}
+typedef struct s_bpContext {
+ void *pContext; /* Plugin private context */
+ void *bContext; /* Bacula private context */
+} bpContext;
+
+\end{verbatim}
+
+ This context pointer will be passed as the first argument to all
+ the entry points that Bacula calls within the plugin. Needless
+ to say, the plugin should not change the bContext variable, which
+ is Bacula's private context pointer for this instance of this
+ plugin.
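+
+ As an illustrative sketch only (error checking is omitted, and
+ {\bf plugin\_ctx} is a hypothetical private structure), this might
+ look like:
+
+\begin{verbatim}
+struct plugin_ctx {            /* plugin private working storage */
+   char *cmd;
+};
+
+static bRC newPlugin(bpContext *ctx)
+{
+   struct plugin_ctx *p = (struct plugin_ctx *)
+                          malloc(sizeof(struct plugin_ctx));
+   memset(p, 0, sizeof(struct plugin_ctx));
+   ctx->pContext = (void *)p;  /* save our private context */
+   return bRC_OK;
+}
+\end{verbatim}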
+
+\subsection{freePlugin(bpContext *ctx)}
+This entry point is called when this
+instance of the plugin is no longer needed (the Job is
+ending), and the plugin should release all memory it may
+have allocated for the pContext.
+
+\subsection{getPluginValue(bpContext *ctx, pVariable var, void *value)}
+Bacula will call this entry point to get
+a value from the plugin. This entry point is currently not called.
+
+\subsection{setPluginValue(bpContext *ctx, pVariable var, void *value)}
+Bacula will call this entry point to set
+a value in the plugin. This entry point is currently not called.
+
+\subsection{handlePluginEvent(bpContext *ctx, bEvent *event, void *value)}
+This entry point is called when Bacula
+encounters certain events (discussed below). Bacula passes the pointer to an event
+structure (bEvent), which currently has one item, the eventType:
+
+\begin{verbatim}
+typedef struct s_bEvent {
+ uint32_t eventType;
+} bEvent;
+\end{verbatim}
+
+ which defines what event has been triggered, and for each event,
+ Bacula will pass a pointer to a value associated with that event.
+ If no value is associated with a particular event, Bacula will
+ pass a NULL pointer, so you must always check for it.
+
+ The current list of events are:
+
+\begin{verbatim}
+typedef enum {
+ bEventJobStart = 1,
+ bEventJobEnd = 2,
+ bEventStartBackupJob = 3,
+ bEventEndBackupJob = 4,
+ bEventStartRestoreJob = 5,
+ bEventEndRestoreJob = 6,
+ bEventStartVerifyJob = 7,
+ bEventEndVerifyJob = 8,
+ bEventBackupCommand = 9,
+ bEventRestoreCommand = 10,
+ bEventLevel = 11,
+ bEventSince = 12,
+} bEventType;
+
+\end{verbatim}
+
+Most of these are self-explanatory.
+
+\begin{description}
+ \item [bEventJobStart] is called whenever a Job starts. The value
+ passed is a pointer to a string that contains: "Jobid=nnn
+ Job=job-name". Where nnn will be replaced by the JobId and job-name
+ will be replaced by the Job name. The variable is temporary so if you
+ need the values, you must copy them.
+ \item [bEventJobEnd] is called whenever a Job ends. No value is passed.
+ \item [bEventStartBackupJob] is called when a Backup Job begins. No value
+ is passed.
+ \item [bEventEndBackupJob] is called when a Backup Job ends. No value is
+ passed.
+ \item [bEventStartRestoreJob] is called when a Restore Job starts. No value
+ is passed.
+ \item [bEventEndRestoreJob] is called when a Restore Job ends. No value is
+ passed.
+ \item [bEventStartVerifyJob] is called when a Verify Job starts. No value
+ is passed.
+ \item [bEventEndVerifyJob] is called when a Verify Job ends. No value
+ is passed.
+ \item [bEventBackupCommand] is called prior to the bEventStartBackupJob and
+   the plugin is passed the command string (everything after the equal sign
+   in "Plugin =") as the value.
+ \item [bEventRestoreCommand] is called prior to the bEventStartRestoreJob and
+   the plugin is passed the command string (everything after the equal sign
+   in "Plugin =") as the value.
+ \item [bEventLevel] is called when the level is set for a new Job. The value
+ is a 32 bit integer stored in the void*, which represents the Job Level code.
+ \item [bEventSince] is called when the since time is set for a new Job. The
+ value is a time\_t time at which the last job was run.
\end{description}
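+
+Within the plugin, these events are typically dispatched with a switch
+on the event type, for example (a sketch only; a real plugin would act
+on the events it cares about):
+
+\begin{verbatim}
+static bRC handlePluginEvent(bpContext *ctx, bEvent *event, void *value)
+{
+   switch (event->eventType) {
+   case bEventBackupCommand:
+      /* value is the string after the equal sign in "Plugin =" */
+      break;
+   case bEventLevel:
+      /* value holds the Job Level code as a 32 bit integer */
+      break;
+   default:
+      break;               /* ignore events we do not handle */
+   }
+   return bRC_OK;
+}
+\end{verbatim}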
+
+\subsection{startBackupFile(bpContext *ctx, struct save\_pkt *sp)}
+Called when beginning the backup of a file.
+
+\begin{verbatim}
+ struct save_pkt {
+ char *fname; /* Full path and filename */
+ char *link; /* Link name if any */
+ struct stat statp; /* System stat() packet for file */
+ int32_t type; /* FT_xx for this file */
+ uint32_t flags; /* Bacula internal flags */
+ bool portable; /* set if data format is portable */
+ char *cmd; /* command */
+};
+\end{verbatim}
+
+The second argument is a pointer to the {\bf save\_pkt} structure for the file
+to be backed up. The plugin is responsible for filling in all the fields
+of the {\bf save\_pkt}. The values in the {\bf save\_pkt} are used to create a virtual file
+entry in the Bacula catalog database. The full path and filename should be
+unique on the system to avoid conflicts with real files. Example programs such
+as {\bf bpipe.c} show how to set these fields.
+
+\subsection{endBackupFile(bpContext *ctx)}
+Called at the end of backing up a file. If the plugin's work
+is done, it should return bRC\_OK. If the plugin wishes to create another
+file and back it up, then it must return bRC\_More.
+
+\subsection{startRestoreFile(bpContext *ctx, const char *cmd)}
+Not implemented.
+
+
+\subsection{endRestoreFile(bpContext *ctx)}
+Called when done restoring a file.
+
+\subsection{pluginIO(bpContext *ctx, struct io\_pkt *io)}
+Called to do the input (backup) or output (restore) of data from or to a
+file.
+
+\begin{verbatim}
+ enum {
+ IO_OPEN = 1,
+ IO_READ = 2,
+ IO_WRITE = 3,
+ IO_CLOSE = 4,
+ IO_SEEK = 5
+};
+
+struct io_pkt {
+ int32_t func; /* Function code */
+ int32_t count; /* read/write count */
+ mode_t mode; /* permissions for created files */
+ int32_t flags; /* open flags (e.g. O_WRONLY ...) */
+ char *buf; /* read/write buffer */
+ int32_t status; /* return status */
+ int32_t io_errno; /* errno code */
+ int32_t whence;
+ boffset_t offset;
+};
+
+\end{verbatim}
+
+
+\subsection{createFile(bpContext *ctx, struct restore\_pkt *rp)}
+Called to create a file before restoring the data. The data in the
+restore\_pkt is passed to the plugin and is based on the data that was
+originally given by the plugin during the backup and the current user
+restore settings (e.g. where, RegexWhere, replace). This allows the
+plugin to first create a file (if necessary) so that the data can
+be transmitted to it. The next call to the plugin will be a
+pluginIO command with a request to open the file write-only.
+
+\begin{verbatim}
+
+struct restore_pkt {
+ int32_t stream; /* attribute stream id */
+ int32_t data_stream; /* id of data stream to follow */
+ int32_t type; /* file type FT */
+ int32_t file_index; /* file index */
+ int32_t LinkFI; /* file index to data if hard link */
+ uid_t uid; /* userid */
+ struct stat statp; /* decoded stat packet */
+ const char *attrEx; /* extended attributes if any */
+ const char *ofname; /* output filename */
+ const char *olname; /* output link name */
+ const char *where; /* where */
+ const char *RegexWhere; /* regex where */
+ int replace; /* replace flag */
+};
+\end{verbatim}
+
+\subsection{setFileAttributes(bpContext *ctx, struct restore\_pkt *rp)}
+This call is not yet implemented.
+
+\begin{verbatim}
+struct restore_pkt {
+ int32_t stream; /* attribute stream id */
+ int32_t data_stream; /* id of data stream to follow */
+ int32_t type; /* file type FT */
+ int32_t file_index; /* file index */
+ int32_t LinkFI; /* file index to data if hard link */
+ uid_t uid; /* userid */
+ struct stat statp; /* decoded stat packet */
+ const char *attrEx; /* extended attributes if any */
+ const char *ofname; /* output filename */
+ const char *olname; /* output link name */
+ const char *where; /* where */
+ const char *RegexWhere; /* regex where */
+ int replace; /* replace flag */
+};
+\end{verbatim}
\ No newline at end of file