\label{NewFeaturesChapter}
\index[general]{New Features}

This chapter presents the new features added to the development 2.5.x
versions to be released as Bacula version 3.0.0 sometime in April 2009.
\section{Accurate Backup}
\index[general]{Accurate Backup}

As with most other backup programs, by default Bacula decides which files to
back up for Incremental and Differential backups by comparing the change
(st\_ctime) and modification (st\_mtime) times of each file to the time the last
backup completed. If either of those two times is later than the last backup
time, the file will be backed up. This does not, however, permit tracking
which files have been deleted, and it will miss any file with an old timestamp
that has been restored to or moved onto the client filesystem.
\subsection{Accurate = \lt{}yes|no\gt{}}
If the {\bf Accurate = \lt{}yes|no\gt{}} directive is enabled (default no) in
the Job resource, the job will be run as an Accurate Job. For a {\bf Full}
backup there is no difference, but for {\bf Differential} and {\bf
Incremental} backups the Director will send a list of all files previously
backed up, and the File daemon will use that list to determine whether any new
files have been added or moved and whether any files have been deleted. This
allows Bacula to make an accurate backup of your system to that point in time,
so that if you do a restore, it will restore your system exactly.

One note of caution about using Accurate backups: they require more resources
(CPU and memory) on both the Director and the Client machines. The Director
must create the list of previously backed-up files and send it to the File
daemon; the File daemon must keep that list (which may be very large) in
memory and compare every file in the FileSet against it.

Accurate must not be enabled when backing up with a plugin that is not
specifically designed to work with Accurate. If you enable it anyway, your
restores will probably not work correctly.
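The difference between the default time-based selection and an Accurate job's
list comparison can be sketched in Python. This is a simplified illustration,
not Bacula code; file metadata is represented as plain dicts:

```python
# Simplified illustration of the two selection strategies (not Bacula code).

def classic_selection(files, last_backup_time):
    """Default behaviour: pick files whose st_mtime or st_ctime is newer
    than the last backup. Deleted files go unnoticed, and a restored or
    moved file with old timestamps is silently skipped."""
    return [name for name, meta in files.items()
            if meta["mtime"] > last_backup_time
            or meta["ctime"] > last_backup_time]

def accurate_selection(files, previous_list):
    """Accurate mode: compare the current FileSet against the list of
    previously backed-up files sent by the Director."""
    added = [n for n in files if n not in previous_list]
    deleted = [n for n in previous_list if n not in files]
    return added, deleted

# A file moved onto the client with an old timestamp, and a deleted file:
files = {"/etc/passwd": {"mtime": 100, "ctime": 100},
         "/home/old-restored-file": {"mtime": 50, "ctime": 50}}
previous = ["/etc/passwd", "/home/since-deleted"]

print(classic_selection(files, last_backup_time=150))  # [] (misses both cases)
print(accurate_selection(files, previous))
# (['/home/old-restored-file'], ['/home/since-deleted'])
```

The second call shows why Accurate mode needs the full previous file list in
memory: both the addition and the deletion are only visible by comparison.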
\section{Copy Jobs}
\index[general]{Copy Jobs}

A new {\bf Copy} job type 'C' has been implemented. It is similar to the
existing Migration feature, with the exception that the Job that is copied is
left unchanged. This essentially creates two identical copies of the same
backup. However, the copy is treated as a copy rather than a backup job, and
hence is not directly available for restore. The {\bf restore} command lists
copy jobs and allows selection of copies by using the \texttt{jobid=}
option. If the keyword {\bf copies} is present on the command line, Bacula will
display the list of all copies for the selected jobs.
\begin{verbatim}
These JobIds have copies as follows:
+-------+------------------------------------+-----------+------------------+
| JobId | Job                                | CopyJobId | MediaType        |
+-------+------------------------------------+-----------+------------------+
|     2 | CopyJobSave.2009-02-17_16.31.00.11 |         7 | DiskChangerMedia |
+-------+------------------------------------+-----------+------------------+
+-------+-------+----------+----------+---------------------+------------------+
| JobId | Level | JobFiles | JobBytes | StartTime           | VolumeName       |
+-------+-------+----------+----------+---------------------+------------------+
|    19 | F     |     6274 | 76565018 | 2009-02-17 16:30:45 | ChangerVolume002 |
|     2 | I     |        1 |        5 | 2009-02-17 16:30:51 | FileVolume001    |
+-------+-------+----------+----------+---------------------+------------------+
You have selected the following JobIds: 19,2

Building directory tree for JobId(s) 19,2 ...  ++++++++++++++++++++++++++++++
5,611 files inserted into the tree.
\end{verbatim}
The Copy Job runs without using the File daemon by copying the data from the
old backup Volume to a different Volume in a different Pool. See the Migration
documentation for additional details. For Copy Jobs there is a new selection
criterion named {\bf PoolUncopiedJobs}, which copies all jobs from one pool to
another pool that were not previously copied. In addition, the client, volume,
job, or SQL query selection types are possible ways of selecting the jobs to
be copied. Selection types such as smallestvolume, oldestvolume, pooloccupancy,
and pooltime are probably better suited to migration jobs only, but some people
may have a valid use for those kinds of copy jobs too.
If Bacula finds a copy when a job record is purged (deleted) from the catalog,
it will promote the copy to a \textsl{real} backup and make it available for
automatic restore. If more than one copy is available, it will promote the
copy with the smallest JobId.
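The promotion rule ("smallest JobId wins") can be sketched as follows. This is
illustrative only; Bacula performs this selection internally via the catalog:

```python
def promote_copy(copies):
    """Given (jobid, copy_of) pairs for the surviving copies of a purged
    job, return the JobId that is promoted to a real backup: per the
    rule above, the copy with the smallest JobId."""
    return min(jobid for jobid, _ in copies)

# Two copies of the purged job 2 exist, written by jobs 7 and 11:
print(promote_copy([(11, 2), (7, 2)]))  # 7
```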
A nice solution that can be built with the new Copy Jobs is what is called
disk-to-disk-to-tape backup (DTDTT). A sample configuration could look
something like the one below (elided lines are marked with \texttt{...}):
\begin{verbatim}
Pool {
  Name = FullBackupsVirtualPool
  Purge Oldest Volume = Yes
  NextPool = FullBackupsTapePool
  ...
}

Pool {
  Name = FullBackupsTapePool
  Volume Retention = 365 days
  Storage = superloader
  ...
}

# Fake fileset for copy jobs
...

# Fake client for copy jobs
...

# Default template for a CopyDiskToTape Job
JobDefs {
  Name = CopyDiskToTape
  Messages = StandardCopy
  Selection Type = PoolUncopiedJobs
  Maximum Concurrent Jobs = 10
  Allow Duplicate Jobs = Yes
  Allow Higher Duplicates = No
  Cancel Queued Duplicates = No
  Cancel Running Duplicates = No
  ...
}

Schedule {
  Name = DaySchedule7:00
  Run = Level=Full daily at 7:00
}

Job {
  Name = CopyDiskToTapeFullBackups
  Schedule = DaySchedule7:00
  Pool = FullBackupsVirtualPool
  JobDefs = CopyDiskToTape
}
\end{verbatim}
The example above has two pools that are copied using the PoolUncopiedJobs
selection criterion. Normal Full backups go to the Virtual pool and are copied
to the Tape pool the next morning.
The command \texttt{list copies [jobid=x,y,z]} lists the copies for the given
JobIds:

\begin{verbatim}
+-------+------------------------------------+-----------+------------------+
| JobId | Job                                | CopyJobId | MediaType        |
+-------+------------------------------------+-----------+------------------+
|     9 | CopyJobSave.2008-12-20_22.26.49.05 |        11 | DiskChangerMedia |
+-------+------------------------------------+-----------+------------------+
\end{verbatim}
\section{ACL Updates}
The whole ACL code has been overhauled, and in this version each platform has
different streams for each type of ACL available on that platform. As ACLs
between platforms tend not to be very portable (most implement POSIX ACLs, but
some use another draft or a completely different format), we currently only
allow certain platform-specific ACL streams to be decoded and restored on the
same platform that they were created on. The old code allowed restoring ACLs
cross-platform, but the comments in that code already mentioned that this was
not too wise. For backward compatibility the new code will accept the two old
ACL streams and handle them with the platform-specific handler, but for all
new backups it will save the ACLs using the new streams.
Currently the following platforms support ACLs:

\begin{itemize}
\item {\bf AIX}
\item {\bf Darwin/OSX}
\item {\bf FreeBSD}
\item {\bf HPUX}
\item {\bf IRIX}
\item {\bf Linux}
\item {\bf Tru64}
\item {\bf Solaris}
\end{itemize}
Currently we support the following ACL types (these ACL streams use a reserved
part of the stream numbers):

\begin{itemize}
\item {\bf STREAM\_ACL\_AIX\_TEXT} 1000 AIX specific string representation
from acl\_get
\item {\bf STREAM\_ACL\_DARWIN\_ACCESS\_ACL} 1001 Darwin (OSX) specific acl\_t
string representation from acl\_to\_text (POSIX acl)
\item {\bf STREAM\_ACL\_FREEBSD\_DEFAULT\_ACL} 1002 FreeBSD specific acl\_t
string representation from acl\_to\_text (POSIX acl) for default acls.
\item {\bf STREAM\_ACL\_FREEBSD\_ACCESS\_ACL} 1003 FreeBSD specific acl\_t
string representation from acl\_to\_text (POSIX acl) for access acls.
\item {\bf STREAM\_ACL\_HPUX\_ACL\_ENTRY} 1004 HPUX specific acl\_entry
string representation from acltostr (POSIX acl)
\item {\bf STREAM\_ACL\_IRIX\_DEFAULT\_ACL} 1005 IRIX specific acl\_t string
representation from acl\_to\_text (POSIX acl) for default acls.
\item {\bf STREAM\_ACL\_IRIX\_ACCESS\_ACL} 1006 IRIX specific acl\_t string
representation from acl\_to\_text (POSIX acl) for access acls.
\item {\bf STREAM\_ACL\_LINUX\_DEFAULT\_ACL} 1007 Linux specific acl\_t
string representation from acl\_to\_text (POSIX acl) for default acls.
\item {\bf STREAM\_ACL\_LINUX\_ACCESS\_ACL} 1008 Linux specific acl\_t string
representation from acl\_to\_text (POSIX acl) for access acls.
\item {\bf STREAM\_ACL\_TRU64\_DEFAULT\_ACL} 1009 Tru64 specific acl\_t
string representation from acl\_to\_text (POSIX acl) for default acls.
\item {\bf STREAM\_ACL\_TRU64\_DEFAULT\_DIR\_ACL} 1010 Tru64 specific acl\_t
string representation from acl\_to\_text (POSIX acl) for default acls.
\item {\bf STREAM\_ACL\_TRU64\_ACCESS\_ACL} 1011 Tru64 specific acl\_t string
representation from acl\_to\_text (POSIX acl) for access acls.
\item {\bf STREAM\_ACL\_SOLARIS\_ACLENT} 1012 Solaris specific aclent\_t
string representation from acltotext or acl\_totext (POSIX acl)
\item {\bf STREAM\_ACL\_SOLARIS\_ACE} 1013 Solaris specific ace\_t string
representation from acl\_totext (NFSv4 or ZFS acl)
\end{itemize}
In future versions we might support conversion functions from one type of ACL
to another, for types that are either the same or easily convertible. For now
the streams are separate, and restoring them on a platform that doesn't
recognize them will give you a warning.
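The restore-time rule (a platform-specific stream is only decoded on its own
platform; anything else produces a warning) can be sketched as follows. The
stream numbers come from the list above, but the mapping and function are
illustrative, not Bacula's actual code:

```python
# Illustrative mapping of a few ACL stream numbers to their platform
# (numbers taken from the list above; not Bacula source code).
ACL_STREAM_PLATFORM = {
    1000: "aix",
    1001: "darwin",
    1002: "freebsd", 1003: "freebsd",
    1007: "linux",   1008: "linux",
    1012: "solaris", 1013: "solaris",
}

def can_restore_acl(stream, platform):
    """A platform-specific ACL stream is only restored on the platform
    it was created on; on any other platform it is skipped with a
    warning."""
    return ACL_STREAM_PLATFORM.get(stream) == platform

print(can_restore_acl(1008, "linux"))    # True
print(can_restore_acl(1008, "freebsd"))  # False: warning, not restored
```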
\section{Extended Attributes}
Something that was on the project list for some time is now implemented for
platforms that support a similar kind of interface: support for backup and
restore of so-called extended attributes. As extended attributes are very
platform specific, these attributes are saved in separate streams for each
platform. Restores can only be performed on the same platform the backup was
done on. There is support for all types of extended attributes, but restoring
from one type of filesystem onto another type of filesystem on the same
platform may lead to surprises. As extended attributes can contain any type of
data, they are stored as a series of so-called name-value pairs. This data
must be seen as mostly binary and is stored as such. As security labels from
SELinux are also extended attributes, this option also stores those labels,
and no SELinux-specific code is needed for handling them.
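Since extended attributes are arbitrary binary name/value pairs, a simple
length-prefixed serialization of such pairs might look like this. This is an
illustrative sketch of the idea, not Bacula's actual stream format:

```python
import struct

def pack_xattrs(pairs):
    """Serialize extended-attribute (name, value) pairs as one binary
    blob: each name and value is length-prefixed, since values may hold
    arbitrary binary data (including SELinux security labels)."""
    out = b""
    for name, value in pairs:
        out += struct.pack(">I", len(name)) + name
        out += struct.pack(">I", len(value)) + value
    return out

def unpack_xattrs(blob):
    """Inverse of pack_xattrs: walk the blob and recover the pairs."""
    pairs, i = [], 0
    while i < len(blob):
        (nlen,) = struct.unpack_from(">I", blob, i); i += 4
        name = blob[i:i + nlen]; i += nlen
        (vlen,) = struct.unpack_from(">I", blob, i); i += 4
        pairs.append((name, blob[i:i + vlen])); i += vlen
    return pairs

attrs = [(b"user.comment", b"hello"),
         (b"security.selinux", b"system_u:object_r:etc_t:s0\x00")]
assert unpack_xattrs(pack_xattrs(attrs)) == attrs
```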
Currently the following platforms support extended attributes:

\begin{itemize}
\item {\bf Darwin/OSX}
\item {\bf FreeBSD}
\item {\bf Linux}
\item {\bf NetBSD}
\end{itemize}

On Linux, ACLs are also stored as extended attributes, so when you enable
ACLs on a Linux platform Bacula will NOT save the same data twice; i.e., it
will save the ACLs and skip the corresponding extended attribute.
To enable the backup of extended attributes, please add the following to your
fileset:
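The example that belongs here enables the FileSet option for extended
attributes; in Bacula 3.0 that option is spelled {\tt xattrsupport}. The
fragment below is reconstructed from memory (FileSet name and file list are
placeholders), so verify it against the FileSet Options documentation:

```
FileSet {
  Name = "MyFileSet"          # placeholder name
  Include {
    Options {
      signature = MD5
      xattrsupport = yes      # enable extended-attribute backup
    }
    File = /                  # placeholder file list
  }
}
```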
\section{Shared objects}
A default build of Bacula will now create the libraries as shared objects
(.so) rather than static libraries as was previously the case.
The shared libraries are built using {\bf libtool}, so it should be quite
portable.

An important advantage of using shared objects is that on a machine with the
Director, File daemon, Storage daemon, and a console, you will have only
one copy of the code in memory rather than four copies. Also, the total size of
the binary release is smaller, since the library code appears only once rather
than once for every program that uses it; this results in a significant
reduction in the size of the binaries, particularly for the utility tools.
In order for the system loader to find the shared objects when loading the
Bacula binaries, the Bacula shared objects must either be in a shared object
directory known to the loader (typically /usr/lib) or they must be in the
directory that may be specified on the {\bf ./configure} line using the {\bf
{-}{-}libdir} option as:

\begin{verbatim}
./configure --libdir=/full-path/dir
\end{verbatim}

the default is /usr/lib. If {-}{-}libdir is specified, there should be
no need to modify your loader configuration provided that
the shared objects are installed in that directory (Bacula
does this with the make install command). The shared objects
that Bacula references are:

\begin{verbatim}
libbac.so
libbaccfg.so
libbacfind.so
libbacpy.so
\end{verbatim}

These files are symbolically linked to the real shared object file,
which has a version number to permit running multiple versions of
the libraries if desired (not normally the case).
If you have problems with libtool, or you wish to use the old
way of building static libraries, you can do so by disabling
libtool on the configure command line with:

\begin{verbatim}
./configure --disable-libtool
\end{verbatim}
\section{Virtual Backup (Vbackup)}
\index[general]{Virtual Backup}
\index[general]{Vbackup}

Bacula's virtual backup feature is often called Synthetic Backup or
Consolidation in other backup products. It permits you to consolidate
the previous Full backup plus the most recent Differential backup and any
subsequent Incremental backups into a new Full backup. This is accomplished
without contacting the client, by reading the previous backup data and
writing it to a volume in a different pool.

In some respects the Vbackup feature works similarly to a Migration job, in
that Bacula normally reads the data from the pool specified in the
Job resource and writes it to the {\bf Next Pool} specified in the
Job resource. The input Storage resource and the Output Storage resource
must be different.

Vbackup is enabled on a Job-by-Job basis in the Job resource by specifying
a level of {\bf VirtualFull}.

A typical Job resource definition might look like the following (elided
lines are marked with \texttt{...}):
\begin{verbatim}
# Default pool definition
Pool {
  Name = Default
  ...
  Recycle = yes             # Automatically recycle Volumes
  AutoPrune = yes           # Prune expired volumes
  Volume Retention = 365d   # one year
}

Pool {
  Name = Full
  ...
  Recycle = yes             # Automatically recycle Volumes
  AutoPrune = yes           # Prune expired volumes
  Volume Retention = 365d   # one year
  Storage = DiskChanger
}

# Definition of file storage device
Storage {
  ...
  Maximum Concurrent Jobs = 5
}

# Definition of DDS Virtual tape disk storage device
Storage {
  Name = DiskChanger
  Address = localhost       # N.B. Use a fully qualified name here
  ...
  Media Type = DiskChangerMedia
  Maximum Concurrent Jobs = 4
}
\end{verbatim}
Then in bconsole or via a Run schedule, you would run the job as:

\begin{verbatim}
run job=MyBackup level=Full
run job=MyBackup level=Incremental
run job=MyBackup level=Differential
run job=MyBackup level=Incremental
run job=MyBackup level=Incremental
\end{verbatim}

So, provided there were changes between each of those jobs, you would end up
with a Full backup, a Differential (which includes the first Incremental
backup), then two Incremental backups. All the above jobs would be written to
the {\bf Default} pool.
To consolidate those backups into a new Full backup, you would run the
following:

\begin{verbatim}
run job=MyBackup level=VirtualFull
\end{verbatim}

And it would produce a new Full backup without using the client, and the
output would be written to the {\bf Full} Pool, which uses the DiskChanger
Storage.
If the Virtual Full is run and there are no prior Jobs, the Virtual Full will
fail with an error.

Note: the Start and End time of the Virtual Full backup are set to the
values for the last job included in the Virtual Full (in the above example,
it is an Incremental). This is so that if another Incremental is done, which
will be based on the Virtual Full, it will back up all files changed since the
last Job included in the Virtual Full rather than since the time the Virtual
Full was actually run.
\section{Catalog Format}
Bacula 3.0 comes with some changes to the catalog format. The upgrade
operation will convert an essential field of the File table to permit
handling more than 4 billion objects over time. This operation will
take TIME and will likely DOUBLE THE SIZE of your catalog during the
conversion. Depending on your catalog backend, you may not be able to run
jobs during this period. For example, a catalog with 3 million files will
take about 2 minutes to upgrade on a normal machine. Please don't forget to
make a valid backup of your database before executing the upgrade script.
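Assuming the conversion time scales roughly linearly with the number of File
records (an assumption; the 3-million-files/2-minute figure above is the only
data point given), a back-of-the-envelope estimate for planning downtime is:

```python
def estimated_upgrade_minutes(n_file_records):
    """Rough linear estimate from the single data point above:
    3 million File records take about 2 minutes on a normal machine.
    Real times will vary with hardware and catalog backend."""
    return n_file_records / 3_000_000 * 2

print(estimated_upgrade_minutes(30_000_000))  # 20.0 minutes for 30M records
```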
\section{Duplicate Job Control}
\index[general]{Duplicate Jobs}
The new version of Bacula provides four new directives that
give additional control over what Bacula does if duplicate jobs
are started. A duplicate job, in the sense we use it here, means that
a second or subsequent job with the same name starts. This
happens most frequently when the first job runs longer than expected because
no tapes are available.

The four directives each take as an argument a {\bf yes} or {\bf no} value and
are specified in the Job resource.
\subsection{Allow Duplicate Jobs = \lt{}yes|no\gt{}}
If this directive is enabled, duplicate jobs will be run. If
the directive is set to {\bf no} (default), then only one job of a given name
may run at one time, and the action that Bacula takes to ensure only
one job runs is determined by the other directives (see below).

\subsection{Allow Higher Duplicates = \lt{}yes|no\gt{}}
If this directive is set to {\bf yes} (default), the job with a higher
priority (lower priority number) will be permitted to run. If the
priorities of the two jobs are the same, the outcome is determined by
the other directives (see below).

\subsection{Cancel Queued Duplicates = \lt{}yes|no\gt{}}
If this directive is set to {\bf yes} (default), any job that is
already queued to run but not yet running will be canceled.

\subsection{Cancel Running Duplicates = \lt{}yes|no\gt{}}
If this directive is set to {\bf yes}, any job that is already running
will be canceled. The default is {\bf no}.
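Put together in a Job resource, the four directives might be used like this.
This is a hypothetical fragment (job name and elided directives are
placeholders) that keeps the defaults described above except that running
duplicates are also canceled:

```
Job {
  Name = "NightlySave"             # hypothetical job name
  ...
  Allow Duplicate Jobs = no
  Allow Higher Duplicates = yes
  Cancel Queued Duplicates = yes
  Cancel Running Duplicates = yes  # non-default: kill the running duplicate
}
```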
\section{TLS Authentication}
\index[general]{TLS Authentication}
In Bacula version 2.5.x and later, in addition to the normal Bacula
CRAM-MD5 authentication that is used to authenticate each Bacula
connection, you can specify that you want TLS Authentication as well,
which will provide more secure authentication.

This new feature uses Bacula's existing TLS code (normally used for
communications encryption) to do authentication. To use it, you must
specify all the TLS directives normally used to enable communications
encryption (TLS Enable, TLS Verify Peer, TLS Certificate, ...) and
a new directive:

\subsection{TLS Authenticate = yes}
\begin{verbatim}
TLS Authenticate = yes
\end{verbatim}

in the main daemon configuration resource (Director for the Director,
Client for the File daemon, and Storage for the Storage daemon).

When {\bf TLS Authenticate} is enabled, after doing the CRAM-MD5
authentication, Bacula will also do the normal TLS authentication; then TLS
encryption will be turned off.

If you want to encrypt communications data, do not turn on {\bf TLS
Authenticate}.
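For example, in the Director's own configuration resource, an
authentication-only setup might look like the following sketch. The
certificate paths and resource name are placeholders; the TLS directives
themselves are the standard ones referred to above:

```
Director {
  Name = my-dir                                       # placeholder name
  ...
  TLS Enable = yes
  TLS Require = yes
  TLS Verify Peer = yes
  TLS Certificate = "/etc/bacula/tls/dir-cert.pem"    # placeholder path
  TLS Key = "/etc/bacula/tls/dir-key.pem"             # placeholder path
  TLS CA Certificate File = "/etc/bacula/tls/ca.pem"  # placeholder path
  TLS Authenticate = yes    # authenticate via TLS, then drop encryption
}
```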
\section{bextract non-portable Win32 data}
\index[general]{bextract handles Win32 non-portable data}
{\bf bextract} has been enhanced to be able to restore
non-portable Win32 data to any OS. Previous versions were
unable to restore non-portable Win32 data to machines that
did not have the Win32 BackupRead and BackupWrite API calls.
\section{State File updated at Job Termination}
\index[general]{State File}
In previous versions of Bacula, the state file, which provides a
summary of previous jobs run in the {\bf status} command output, was
updated only when Bacula terminated; thus, if the daemon crashed, the
state file might not contain all the run data. This version of
the Bacula daemons updates the state file on each job termination.
\section{MaxFullInterval = \lt{}time-interval\gt{}}
\index[general]{MaxFullInterval}
The new Job resource directive {\bf Max Full Interval = \lt{}time-interval\gt{}}
can be used to specify the maximum time interval between {\bf Full} backup
jobs. When a job starts, if the time since the last Full backup is
greater than the specified interval, and the job would normally be an
{\bf Incremental} or {\bf Differential}, it will be automatically
upgraded to a {\bf Full} backup.
\section{MaxDiffInterval = \lt{}time-interval\gt{}}
\index[general]{MaxDiffInterval}
The new Job resource directive {\bf Max Diff Interval = \lt{}time-interval\gt{}}
can be used to specify the maximum time interval between {\bf Differential}
backup jobs. When a job starts, if the time since the last Differential backup
is greater than the specified interval, and the job would normally be an
{\bf Incremental}, it will be automatically
upgraded to a {\bf Differential} backup.
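For example, a Job that normally runs Incrementals but should be forced to a
Differential at least weekly and a Full at least monthly might be sketched as
follows (the job name, elided directives, and interval values are hypothetical):

```
Job {
  Name = "BackupClient1"        # hypothetical job name
  ...
  Level = Incremental
  Max Diff Interval = 7 days    # upgrade to Differential after a week
  Max Full Interval = 30 days   # upgrade to Full after a month
}
```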
\section{Honor No Dump Flag = \lt{}yes|no\gt{}}
\index[general]{Honor No Dump Flag}
On FreeBSD systems, each file has a {\bf no dump flag} that can be set
by the user; when it is set, it is an indication to backup programs
not to back up that particular file. This version of Bacula contains a
new Options directive within a FileSet resource, which instructs Bacula to
obey this flag. The new directive is:

\begin{verbatim}
Honor No Dump Flag = yes|no
\end{verbatim}

The default value is {\bf no}.
\section{Exclude Dirs Containing = \lt{}filename-string\gt{}}
\index[general]{IgnoreDir}
The {\bf Exclude Dirs Containing = \lt{}filename-string\gt{}} directive is new
and can be added to the Include section of the FileSet resource. If the
specified filename ({\bf filename-string}) is found on the Client in any
directory to be backed up, the whole directory will be ignored (not backed
up). For example (elided lines are marked with \texttt{...}):

\begin{verbatim}
# List of files to be backed up
FileSet {
  ...
  Include {
    ...
    File = /home
    Exclude Dirs Containing = .excludeme
  }
}
\end{verbatim}

But in /home, there may be hundreds of directories of users, and some
people want to indicate that they don't want certain directories backed up.
For example, with the above FileSet, if the user or sysadmin creates a file
named {\bf .excludeme} in specific directories, such as

\begin{verbatim}
/home/user/www/cache/.excludeme
/home/user/temp/.excludeme
\end{verbatim}

then Bacula will not back up the two directories named:

\begin{verbatim}
/home/user/www/cache
/home/user/temp
\end{verbatim}

NOTE: subdirectories will not be backed up either. That is, the directive
applies to the two directories in question and any children (be they
files, directories, etc.).
\section{Bacula Plugins}
\index[general]{Plugin}
Support for shared object plugins has been implemented in the Linux, Unix,
and Win32 File daemons. The API will be documented separately in
the Developer's Guide or in a new document. For the moment, there is
a single plugin named {\bf bpipe} that allows an external program to
get control to backup and restore a file.

Plugins are also planned (partially implemented) in the Director and the
Storage daemon.
\subsection{Plugin Directory}
Each daemon (DIR, FD, SD) has a new {\bf Plugin Directory} directive that may
be added to the daemon definition resource. The directive takes a quoted
string argument, which is the name of the directory in which the daemon can
find the Bacula plugins. If this directive is not specified, Bacula will not
load any plugins. Since each plugin has a distinctive name, all the daemons
can share the same plugin directory.
\subsection{Plugin Options}
The {\bf Plugin Options} directive takes a quoted string
argument (after the equal sign) and may be specified in the
Job resource. The options specified will be passed to all plugins
when they are run. Thus each plugin must know what it is looking
for. The value defined in the Job resource can be modified
by the user when he runs a Job via the {\bf bconsole} command line
prompts.

Note: this directive may be specified, and there is code to modify
the string in the run command, but the plugin options are not yet passed to
the plugin (i.e. not fully implemented).
\subsection{Plugin Options ACL}
The {\bf Plugin Options ACL} directive may be specified in the
Director's Console resource. It functions as all the other ACL commands
do, by permitting users running restricted consoles to specify a
{\bf Plugin Options} that overrides the one specified in the Job
definition. Without this directive, restricted consoles may not modify
the Plugin Options.
\subsection{Plugin = \lt{}plugin-command-string\gt{}}
The {\bf Plugin} directive is specified in the Include section of
a FileSet resource where you put your {\bf File = xxx} directives.
For example:

\begin{verbatim}
FileSet {
  ...
  Include {
    ...
    File = /home
    Plugin = "bpipe:..."
  }
}
\end{verbatim}

In the above example, when the File daemon is processing the directives
in the Include section, it will first back up all the files in {\bf /home},
then it will load the plugin named {\bf bpipe} (actually bpipe-fd.so) from
the Plugin Directory. The syntax and semantics of the Plugin directive
require the first part of the string, up to the colon (:), to be the name
of the plugin. Everything after the first colon is ignored by the File daemon
but is passed to the plugin. Thus the plugin writer may define the meaning of
the rest of the string as he wishes.
Please see the next section for information about the {\bf bpipe} Bacula
plugin.
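The parsing rule just described (the plugin name is everything up to the first
colon; the remainder is opaque to the File daemon) can be sketched as:

```python
def split_plugin_command(command):
    """Split a FileSet Plugin string: the part before the first colon
    names the plugin; everything after it is passed to the plugin
    untouched (the File daemon does not interpret it)."""
    name, _, rest = command.partition(":")
    return name, rest

# The bpipe example from the following section:
cmd = "bpipe:/MYSQL/regress.sql:mysqldump -f --opt --databases bacula:mysql"
print(split_plugin_command(cmd)[0])  # bpipe
```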
\section{The bpipe Plugin}
The {\bf bpipe} plugin is provided in the directory src/plugins/fd/bpipe-fd.c
of the Bacula source distribution. When the plugin is compiled and linked into
the resulting dynamic shared object (DSO), it will have the name
{\bf bpipe-fd.so}.

The purpose of the plugin is to provide an interface to any system program for
backup and restore. As specified above, the {\bf bpipe} plugin is specified in
the Include section of your Job's FileSet resource. The full syntax of the
plugin directive as interpreted by the {\bf bpipe} plugin (each plugin is free
to specify the syntax as it wishes) is:
\begin{verbatim}
Plugin = "<field1>:<field2>:<field3>:<field4>"
\end{verbatim}
\begin{itemize}
\item {\bf field1} is the name of the plugin with the trailing {\bf -fd.so}
stripped off, so in this case, we would put {\bf bpipe} in this field.

\item {\bf field2} specifies the namespace, which for {\bf bpipe} is the
pseudo path and filename under which the backup will be saved. This pseudo
path and filename will be seen by the user in the restore file tree.
For example, if the value is {\bf /MYSQL/regress.sql}, the data
backed up by the plugin will be put under that "pseudo" path and filename.
You must be careful to choose a naming convention that is unique, to avoid
a conflict with a path and filename that actually exists on your system.

\item {\bf field3} for the {\bf bpipe} plugin
specifies the "reader" program that is called by the plugin during
backup to read the data. {\bf bpipe} will call this program by doing a
{\bf popen} on it.

\item {\bf field4} for the {\bf bpipe} plugin
specifies the "writer" program that is called by the plugin during
restore to write the data back to the filesystem.
\end{itemize}
Putting it all together, the full plugin directive line might look
like the following:

\begin{verbatim}
Plugin = "bpipe:/MYSQL/regress.sql:mysqldump -f
          --opt --databases bacula:mysql"
\end{verbatim}

The directive has been split into two lines here, but within the
{\bf bacula-dir.conf} file it would be written on a single line.
This causes the File daemon to call the {\bf bpipe} plugin, which will write
its data into the "pseudo" file {\bf /MYSQL/regress.sql} by calling the
program {\bf mysqldump -f --opt --databases bacula} to read the data during
backup. The mysqldump command outputs all the data for the database named
{\bf bacula}, which will be read by the plugin and stored in the backup.
During restore, the data that was backed up will be sent to the program
specified in the last field, which in this case is {\bf mysql}. When
{\bf mysql} is called, it will read the data sent to it by the plugin
and then write it back to the same database from which it came ({\bf bacula}
in this case).

The {\bf bpipe} plugin is a generic pipe program that simply transmits
the data from a specified program to Bacula for backup, and then from Bacula
to a specified program for restore.

By using different command lines to {\bf bpipe},
you can back up any kind of data (ASCII or binary) depending
on the program called.
\section{Microsoft Exchange Server 2003/2007 Plugin}
\subsection{Concepts}
Although it is possible to back up Exchange using Bacula VSS, the Exchange
plugin adds a good deal of functionality: while Bacula VSS
completes a full backup (snapshot) of Exchange, it does
not support Incremental or Differential backups, restoring is more
complicated, and a single database restore is not possible.

Microsoft Exchange organises its storage into Storage Groups with
Databases inside them. A default installation of Exchange will have a
single Storage Group called 'First Storage Group', with two Databases
inside it, "Mailbox Store (SERVER NAME)" and
"Public Folder Store (SERVER NAME)",
which hold user email and public folders respectively.
In the default configuration, Exchange logs everything that happens to
log files, such that if you have a backup and all the log files since,
you can restore to the present time. Each Storage Group has its own set
of log files and operates independently of any other Storage Groups. At
the Storage Group level, logging can be turned off by enabling a
function called "Enable circular logging". At this time the Exchange
plugin will not function if this option is enabled.

The plugin allows backing up of entire Storage Groups, and restoring
of entire Storage Groups or individual databases. Backing up and
restoring at the individual mailbox or email item level is not supported, but
can be simulated by use of the "Recovery" Storage Group (see below).
\subsection{Installing}
The Exchange plugin requires a DLL that is shipped with Microsoft
Exchange Server called {\bf esebcli2.dll}. Assuming Exchange is installed
correctly, the Exchange plugin should find this automatically and run
without any additional installation.

If the DLL cannot be found automatically, it will need to be copied into
the Bacula installation
directory (eg C:\verb+\+Program Files\verb+\+Bacula\verb+\+bin). The Exchange
API DLL is named esebcli2.dll and is found in
C:\verb+\+Program Files\verb+\+Exchsrvr\verb+\+bin on a
default Exchange installation.
\subsection{Backing Up}
To back up an Exchange server, the Fileset definition must contain at
least {\bf Plugin = "exchange:/@EXCHANGE/Microsoft Information Store"} for
the backup to work correctly. The 'exchange:' part tells Bacula to look
for the exchange plugin, the '@EXCHANGE' part makes sure all the backed
up files are prefixed with something that isn't going to share a name
with something outside the plugin, and the 'Microsoft Information Store'
part is also required. It is also possible to add the name of a storage
group to the "Plugin =" line, eg \\
{\bf Plugin = "exchange:/@EXCHANGE/Microsoft Information Store/First Storage Group"} \\
if you want only a single storage group backed up.

Additionally, you can suffix the 'Plugin =' directive with
":notrunconfull", which will tell the plugin not to truncate the Exchange
database at the end of a full backup.
An Incremental or Differential backup will back up only the database logs
for each Storage Group, by inspecting the "modified date" on each
physical log file. Because of the way the Exchange API works, the last
log file backed up on each backup will always be backed up by the next
Incremental or Differential backup too. This adds 5MB to each
Incremental or Differential backup size, but otherwise does not cause any
problems.

By default, a normal VSS fileset containing all the drive letters will
also back up the Exchange databases using VSS. This will interfere with
the plugin and Exchange's shared ideas of when the last full backup was
done, and may also truncate log files incorrectly. It is important,
therefore, that the Exchange database files be excluded from the backup,
although the folders the files are in should be included, or they will
have to be recreated manually if a bare metal restore is done.
\begin{verbatim}
FileSet {
   Include {
      File = C:/Program Files/Exchsrvr/mdbdata
      Plugin = "exchange:..."
   }
   Exclude {
      File = C:/Program Files/Exchsrvr/mdbdata/E00.chk
      File = C:/Program Files/Exchsrvr/mdbdata/E00.log
      File = C:/Program Files/Exchsrvr/mdbdata/E000000F.log
      File = C:/Program Files/Exchsrvr/mdbdata/E0000010.log
      File = C:/Program Files/Exchsrvr/mdbdata/E0000011.log
      File = C:/Program Files/Exchsrvr/mdbdata/E00tmp.log
      File = C:/Program Files/Exchsrvr/mdbdata/priv1.edb
   }
}
\end{verbatim}

The advantage of excluding the above files is that you can significantly
reduce the size of your backup since all the important Exchange files
will be properly saved by the Plugin.
\subsection{Restoring}

The restore operation is much the same as a normal Bacula restore, with
the following provisos:

\begin{itemize}
\item The {\bf Where} restore option must not be specified.
\item Each Database directory must be marked as a whole. You cannot just
   select (say) the .edb file and not the others.
\item If a Storage Group is restored, the directory of the Storage Group
   must be selected too.
\item It is possible to restore only a subset of the available log files,
   but they {\bf must} be contiguous. Exchange will fail to restore correctly
   if a log file is missing from the sequence of log files.
\item Each database to be restored must be dismounted and marked as "Can be
   overwritten by restore".
\item If an entire Storage Group is to be restored (eg all databases and
   logs in the Storage Group), then it is best to manually delete the
   database files from the server (eg C:\verb+\+Program Files\verb+\+Exchsrvr\verb+\+mdbdata\verb+\+*)
   as Exchange can get confused by stray log files lying around.
\end{itemize}
\subsection{Restoring to the Recovery Storage Group}

The concept of the Recovery Storage Group is well documented by
Microsoft at
\elink{http://support.microsoft.com/kb/824126}{http://support.microsoft.com/kb/824126},
but to briefly summarize...

Microsoft Exchange allows the creation of an additional Storage Group
called the Recovery Storage Group, which is used to restore an older
copy of a database (e.g. before a mailbox was deleted) without
disturbing the current live data. This is required as the Standard and
Small Business Server versions of Exchange cannot ordinarily have more
than one Storage Group.
To create the Recovery Storage Group, drill down to the Server in Exchange
System Manager, right click, and select
{\bf "New -> Recovery Storage Group..."}. Accept or change the file
locations and click OK. On the Recovery Storage Group, right click and
select {\bf "Add Database to Recover..."} and select the database you will
be restoring.

Restore only the single database nominated as the database in the
Recovery Storage Group. Exchange will redirect the restore to the
Recovery Storage Group automatically.
Then run the restore.
\subsection{Restoring on Microsoft Server 2007}
Apparently the {\bf Exmerge} program no longer exists in Microsoft Server
2007, and hence you must use a new procedure for recovering a single mail box.
This procedure is documented by Microsoft at:
\elink{http://technet.microsoft.com/en-us/library/aa997694.aspx}{http://technet.microsoft.com/en-us/library/aa997694.aspx},
and involves using the {\bf Restore-Mailbox} and {\bf
Get-MailboxStatistics} shell commands.
\subsection{Caveats}
This plugin is still being developed, so you should consider it
currently in BETA test, and thus use in a production environment
should be done only after very careful testing.
When doing a full backup, the Exchange database logs are truncated by
Exchange as soon as the plugin has completed the backup. If the data
never makes it to the backup medium (eg because of spooling) then the
logs will still be truncated, but they will also not have been backed
up. A solution to this is being worked on.
918 The "Enable Circular Logging" option cannot be enabled or the plugin
Exchange insists that a successful Full backup must have taken place if
an Incremental or Differential backup is desired, and the plugin will
fail if this is not the case. If a restore is done, Exchange will
require that a Full backup be done before an Incremental or Differential
backup can be done.
The plugin will most likely not work well if another backup application
(eg NTBACKUP) is backing up the Exchange database, especially if the
other backup application is truncating the log files.
The Exchange plugin has not been tested with the {\bf Accurate} option, so
we recommend either carefully testing or that you avoid this option for
the current time.
The Exchange plugin is not called when processing the bconsole {\bf
estimate} command, so anything that would be backed up by the plugin
will not be added to the estimate total that is displayed.
\section{libdbi Framework}
As a general guideline, Bacula has support for a few catalog database drivers
(MySQL, PostgreSQL, SQLite)
coded natively by the Bacula team. With the libdbi implementation, which is a
Bacula driver that uses libdbi to access the catalog, we have an open field to
use many different kinds of database engines according to the needs of users.
According to the libdbi (http://libdbi.sourceforge.net/) project: libdbi
implements a database-independent abstraction layer in C, similar to the
DBI/DBD layer in Perl. Writing one generic set of code, programmers can
leverage the power of multiple databases and multiple simultaneous database
connections by using this framework.
Currently the libdbi driver in the Bacula project supports only the same drivers
natively coded in Bacula. However, the libdbi project has support for many
other database engines. You can view the list at
http://libdbi-drivers.sourceforge.net/. In the future all those drivers can be
supported by Bacula; however, they must be tested properly by the Bacula team.
Some of the benefits of using libdbi are:
\begin{itemize}
\item The possibility to use proprietary database engines whose
   licenses prevent the Bacula team from developing the driver.
\item The possibility to use the drivers written for the libdbi project.
\item The possibility to use other database engines without recompiling Bacula
   to use them. Just change one line in bacula-dir.conf.
\item Abstract database access, that is, a single point for coding and profiling
   catalog database access.
\end{itemize}
The following drivers have been tested:
\begin{itemize}
\item PostgreSQL, with and without batch insert
\item MySQL, with and without batch insert
\end{itemize}
In the future, we will test and approve the use of other database engines
(proprietary or not) such as DB2, Oracle, and Microsoft SQL.
To compile Bacula to support libdbi we need to configure the code with the
--with-dbi and --with-dbi-driver=[database] ./configure options, where
[database] is the database engine to be used with Bacula (of course we can
change the driver in the file bacula-dir.conf, see below). We must configure the
access port of the database engine with the option --with-db-port, because the
libdbi framework doesn't know the default access port of each database.
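As an illustrative sketch (the option names are those described above; the
port value assumes MySQL's default of 3306), the configure step might look
like:

\begin{verbatim}
./configure --with-dbi --with-dbi-driver=mysql --with-db-port=3306
\end{verbatim}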
The next phase is checking (or configuring) the bacula-dir.conf, for example:

\begin{verbatim}
Catalog {
  dbdriver = dbi:mysql; dbaddress = 127.0.0.1; dbport = 3306
  dbname = regress; user = regress; password = ""
}
\end{verbatim}
The parameter {\bf dbdriver} indicates that we will use the driver dbi with a
mysql database. Currently the drivers supported by Bacula are: postgresql,
mysql, sqlite, sqlite3; these are the names that may be added to the string "dbi:".
The following limitations apply when Bacula is set to use the libdbi framework:
\begin{itemize}
\item Not tested on the Win32 platform.
\item A little performance is lost compared with the native database drivers.
   The reason is bound to the database driver provided by libdbi and the
   simple fact that one more layer of code was added.
\end{itemize}
It is important to remember that when compiling Bacula with libdbi, the
following packages are needed:
\begin{itemize}
\item libdbi version 1.0.0, http://libdbi.sourceforge.net/
\item libdbi-drivers 1.0.0, http://libdbi-drivers.sourceforge.net/
\end{itemize}

You can download them and compile them on your system or install the packages
from your OS distribution.
\section{Console Command Additions and Enhancements}

\subsection{Display Autochanger Content}
\index[general]{StatusSlots}

The {\bf status slots storage=\lt{}storage-name\gt{}} command displays
autochanger content.

\begin{verbatim}
 Slot |  Volume Name  |  Status  |     Media Type    |    Pool    |
------+---------------+----------+-------------------+------------|
    1 |         00001 |   Append |  DiskChangerMedia |    Default |
    2 |         00002 |   Append |  DiskChangerMedia |    Default |
    3*|         00003 |   Append |  DiskChangerMedia |    Scratch |
\end{verbatim}
If an asterisk ({\bf *}) appears after the slot number, you must run an
{\bf update slots} command to synchronize the autochanger content with your
catalog.
\subsection{list joblog job=xxx or jobid=nnn}
A new list command has been added that allows you to list the contents
of the Job Log stored in the catalog for either a Job Name (fully qualified)
or for a particular JobId. The {\bf llist} command will include a line with
the time and date of the entry.
Note that for the catalog to have Job Log entries, you must have a directive
such as:

\begin{verbatim}
  catalog = all
\end{verbatim}

in your Director's {\bf Messages} resource.
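For example, assuming an existing job named {\bf BackupClient1} and a JobId
of 23 (both hypothetical), either form works in bconsole:

\begin{verbatim}
list joblog jobid=23
llist joblog job=BackupClient1
\end{verbatim}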
\subsection{Use separator for multiple commands}
When using bconsole with readline, you can set the command separator to one
of the following characters in order to enter commands that require multiple
inputs on a single line:

\begin{verbatim}
  !$%&'()*+,-/:;<>?[]^`{|}~
\end{verbatim}
\section{Miscellaneous}
\index[general]{Misc New Features}

\subsection{Allow Mixed Priority = \lt{}yes|no\gt{}}
This directive is only implemented in version 2.5 and later. When
set to {\bf yes} (default {\bf no}), this job may run even if lower
priority jobs are already running. This means a high priority job
will not have to wait for other jobs to finish before starting.
The scheduler will only mix priorities when all running jobs have
this directive set.
Note that only higher priority jobs will start early. Suppose the
director will allow two concurrent jobs, and that two jobs with
priority 10 are running, with two more in the queue. If a job with
priority 5 is added to the queue, it will be run as soon as one of
the running jobs finishes. However, new priority 10 jobs will not
be run until the priority 5 job has finished.
\subsection{Bootstrap File Directive -- FileRegex}
{\bf FileRegex} is a new command that can be added to the bootstrap
(.bsr) file. The value is a regular expression. When specified, only
matching filenames will be restored.
During a restore, if all File records are pruned from the catalog
for a Job, normally Bacula can only restore all the files saved; that
is, there is no way using the catalog to select individual files.
With this new command, Bacula will ask if you want to specify a Regex
expression for extracting only a part of the full backup.
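A minimal sketch of a bootstrap fragment using this command (the Volume name
and the regular expression here are purely illustrative):

\begin{verbatim}
Volume="Vol001"
FileRegex=^/etc/
\end{verbatim}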
\subsection{Bootstrap File Optimization Changes}
In order to permit proper seeking on disk files, we have extended the bootstrap
file format to include {\bf VolStartAddr} and {\bf VolEndAddr} records. Each
takes a 64 bit unsigned integer range (i.e. nnn-mmm) which defines the start
address range and end address range respectively. These two directives replace
the {\bf VolStartFile}, {\bf VolEndFile}, {\bf VolStartBlock} and {\bf
VolEndBlock} directives. Bootstrap files containing the old directives will
still work, but will not properly take advantage of disk seeking, and
may read completely to the end of a disk volume during a restore. With the new
format (automatically generated by the new Director), restores will seek
properly and stop reading the volume when all the files have been restored.
\subsection{Solaris ZFS/NFSv4 ACLs}
This is an upgrade of the previous Solaris ACL backup code
to the new library format, which will backup both the old
POSIX(UFS) ACLs as well as the ZFS ACLs.

The new code can also restore POSIX(UFS) ACLs to a ZFS filesystem
(it will translate the POSIX(UFS) ACL into a ZFS/NFSv4 one), so it can also
be used to transfer from UFS to ZFS filesystems.
\subsection{Virtual Tape Emulation}
We now have a Virtual Tape emulator that allows us to run through 99.9\% of
the tape code while actually reading and writing to a disk file. Used with the
\textbf{disk-changer} script, you can now emulate an autochanger with 10 drives
and 700 slots. This feature is most useful in testing. It is enabled
by using {\bf Device Type = vtape} in the Storage daemon's Device
resource. This feature is only implemented on Linux machines.
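For illustration only (the resource name and path are invented, not from the
source), such a Device resource might look like:

\begin{verbatim}
Device {
  Name = vDrive-1
  Device Type = vtape
  Archive Device = /var/bacula/vtape
  AutomaticMount = yes
  RemovableMedia = no
}
\end{verbatim}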
\subsection{Bat Enhancements}
Bat (the Bacula Administration Tool) GUI program has been significantly
enhanced and stabilized. In particular, there are new table based status
commands; it can now be easily localized using Qt4 Linguist.

The Bat communications protocol has been significantly enhanced to improve
performance.
\subsection{RunScript Enhancements}
The {\bf RunScript} resource has been enhanced to permit multiple
commands per RunScript. Simply specify multiple {\bf Command} directives
in your RunScript, for example:

\begin{verbatim}
Job {
  Name = aJob
  RunScript {
    Command = "/bin/echo test"
    Command = "/bin/echo an other test"
    Command = "/bin/echo 3 commands in the same runscript"
    RunsWhen = Before
  }
  ...
}
\end{verbatim}
A new Client RunScript {\bf RunsWhen} keyword of {\bf AfterVSS} has been
implemented, which runs the command after the Volume Shadow Copy has been made.

Console commands can be specified within a RunScript by using:
{\bf Console = \lt{}command\gt{}}, however, this command has not been
carefully tested and debugged and is known to easily crash the Director.
We would appreciate feedback. Due to the recursive nature of this command, we
may remove it before the final release.
\subsection{Status Enhancements}
The bconsole {\bf status dir} output has been enhanced to indicate
Storage daemon job spooling and despooling activity.

\subsection{Connect Timeout}
The default connect timeout to the File
daemon has been set to 3 minutes. Previously it was 30 minutes.
\subsection{ftruncate for NFS Volumes}
If you write to a Volume mounted by NFS (say on a local file server),
in previous Bacula versions, when the Volume was recycled, it was not
properly truncated because NFS does not implement ftruncate (file
truncate). This is now corrected in the new version because we have
added code (actually contributed by a kind user) that deletes and recreates
the Volume, thus accomplishing the same thing as a truncate.
\subsection{Support for Ubuntu}
The new version of Bacula now recognizes the Ubuntu (and Kubuntu)
version of Linux, and thus now provides correct autostart routines.
Since Ubuntu officially supports Bacula, you can also obtain any
recent release of Bacula from the Ubuntu repositories.
\subsection{Recycle Pool = \lt{}pool-name\gt{}}
The new \textbf{RecyclePool} directive defines the pool into which the Volume
will be placed (moved) when it is recycled. Without this directive, a Volume
will remain in the same pool when it is recycled. With this directive, it can
be moved automatically to any existing pool during a recycle. This directive is
probably most useful when defined in the Scratch pool, so that volumes will
be recycled back into the Scratch pool.
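For illustration (a minimal sketch; a real Pool resource needs further
directives), a Scratch pool that reclaims its own volumes might be defined as:

\begin{verbatim}
Pool {
  Name = Scratch
  Pool Type = Backup
  RecyclePool = Scratch
}
\end{verbatim}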
\subsection{FD Version}
The File daemon to Director protocol now includes a version
number. Although there is no visible change for users, it
will help us in future versions to automatically determine
if a File daemon is not compatible.
\subsection{Max Run Sched Time = \lt{}time-period-in-seconds\gt{}}
The time specifies the maximum allowed time that a job may run, counted from
when the job was scheduled. This can be useful to prevent jobs from running
during working hours. We can see it like \texttt{Max Start Delay + Max Run
Time}.
\subsection{Max Wait Time = \lt{}time-period-in-seconds\gt{}}

Previous \textbf{MaxWaitTime} directives did not work as expected: instead
of checking the maximum allowed time that a job may block waiting for a
resource, those directives worked like \textbf{MaxRunTime}. Some users were
using \textbf{Incr/Diff/Full Max Wait Time} to control the maximum run time of
their job depending on the level. Now, they have to use
\textbf{Incr/Diff/Full Max Run Time}. The \textbf{Incr/Diff/Full Max Wait Time}
directives are now deprecated.
\subsection{Incremental|Differential Max Wait Time = \lt{}time-period-in-seconds\gt{}}
These directives have been deprecated in favor of
\texttt{Incremental|Differential Max Run Time}.
\subsection{Max Run Time directives}
Using \textbf{Full/Diff/Incr Max Run Time}, it is now possible to specify the
maximum allowed time that a job can run depending on the level.
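As a sketch (the job name and time values are invented), the level-dependent
limits might be set in a Job resource like this:

\begin{verbatim}
Job {
  Name = "NightlySave"
  ...
  Full Max Run Time = 20 hours
  Differential Max Run Time = 5 hours
  Incremental Max Run Time = 1 hour
}
\end{verbatim}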
\addcontentsline{lof}{figure}{Job time control directives}
\includegraphics{\idir different_time.eps}
\subsection{Statistics Enhancements}
If you (or probably your boss) want to have statistics on your backups to
provide some \textit{Service Level Agreement} indicators, you could use a few
SQL queries on the Job table to report how many:

\begin{itemize}
\item jobs have run
\item jobs have been successful
\item files have been backed up
\item ...
\end{itemize}
However, these statistics are accurate only if your job retention is greater
than your statistics period. I.e., if jobs are purged from the catalog, you
won't be able to use them.
Now, you can use the \textbf{update stats [days=num]} console command to fill
the JobHistory table with new Job records. If you want to be sure to take into
account only \textbf{good jobs}, i.e. if one of your important jobs has failed
but you have fixed the problem and restarted it on time, you probably want to
delete the first \textit{bad} job record and keep only the successful one. For
that, simply let your staff do the job, and update the JobHistory table after
two or three days, depending on your organization, using the
\textbf{[days=num]} option.

These statistics records aren't used for restoring, but mainly for
capacity planning, billings, etc.
The Bweb interface provides a statistics module that can use this feature. You
can also use tools like Talend or extract information by yourself.
The \textbf{Statistics Retention = \lt{}time\gt{}} Director directive defines
the length of time that Bacula will keep statistics job records in the Catalog
database after the Job End time (in the \texttt{JobHistory} table). When this
time period expires, and if the user runs the \texttt{prune stats} command,
Bacula will prune (remove) Job records that are older than the specified period.
You can use the following Job resource in your nightly \textbf{BackupCatalog}
job to maintain statistics.

\begin{verbatim}
Job {
  Name = BackupCatalog
  ...
  RunScript {
    Console = "update stats days=3"
    Console = "prune stats yes"
    RunsWhen = After
    RunsOnClient = no
  }
}
\end{verbatim}
\subsection{ScratchPool = \lt{}pool-resource-name\gt{}}
This directive permits specifying a dedicated \textsl{Scratch} pool for the
current pool. This is useful when using multiple storage resources sharing the
same media type or when you want to dedicate volumes to a particular set of
pools.
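A sketch of dedicating a scratch pool to a particular pool (the pool names
here are invented for illustration):

\begin{verbatim}
Pool {
  Name = TapePool1
  Pool Type = Backup
  ScratchPool = ScratchTape
}
\end{verbatim}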
\subsection{Enhanced Attribute Despooling}
If the Storage daemon and the Director are on the same machine, the spool file
that contains attributes is read directly by the Director instead of being
transmitted across the network. That should reduce load and speed up insertion.
\subsection{SpoolSize = \lt{}size-specification-in-bytes\gt{}}
A new Job directive permits specifying the spool size per job. This is used
in advanced job tuning. {\bf SpoolSize={\it bytes}}
\subsection{MaxConsoleConnections = \lt{}number\gt{}}
A new Director directive permits specifying the maximum number of Console
Connections that can run concurrently. The default is set to 20, but you may
set it to a larger number.
\subsection{dbcheck enhancements}
If you are using MySQL, dbcheck will now ask you if you want to create
temporary indexes to speed up orphaned Path and Filename elimination.

A new \texttt{-B} option allows you to print catalog information in a simple
text based format. This is useful for backing it up in a secure way.

You can now specify the database connection port on the command line.
\section{Building Bacula Plugins}
There is currently one sample program {\bf example-plugin-fd.c} and
one working plugin {\bf bpipe-fd.c} that can be found in the Bacula
{\bf src/plugins/fd} directory. Both are built with the following:

\begin{verbatim}
 ./configure <your-options>
 make
\end{verbatim}

After building Bacula and changing into the src/plugins/fd directory,
the {\bf make} command will build the {\bf bpipe-fd.so} plugin, which
is a very useful and working program.
The {\bf make test} command will build the {\bf example-plugin-fd.so}
plugin and a binary named {\bf main}, which is built from the source
code located in {\bf src/filed/fd\_plugins.c}.

If you execute {\bf ./main}, it will load and run the example-plugin-fd
plugin, simulating a small number of the calling sequences that Bacula uses
in calling a real plugin. This allows you to do initial testing of
your plugin prior to trying it with Bacula.
You can get a good idea of how to write your own plugin by first
studying the example-plugin-fd, and actually running it. Then
it can also be instructive to read the bpipe-fd.c code as it is
a real plugin, which is still rather simple and small.

When actually writing your own plugin, you may use the example-plugin-fd.c
code as a template for your code.
\chapter{Bacula FD Plugin API}
To write a Bacula plugin, you create a dynamic shared object program (or dll on
Win32) with a particular name and two exported entry points, and place it in
the {\bf Plugins Directory}, which is defined in the {\bf bacula-fd.conf} file
in the {\bf Client} resource. When the FD starts, it will load all the plugins
that end with {\bf -fd.so} (or {\bf -fd.dll} on Win32) found in that directory.
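For example (the client name and path are invented for illustration), the
directory is declared in the {\bf Client} resource of bacula-fd.conf:

\begin{verbatim}
Client {
  Name = myclient-fd
  ...
  Plugin Directory = /usr/lib/bacula/plugins
}
\end{verbatim}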
\section{Normal vs Command Plugins}
In general, there are two ways that plugins are called. The first way is that
when a particular event is detected in Bacula, it will transfer control to each
loaded plugin in turn, informing the plugin of the event. This is very
similar to how a {\bf RunScript} works, and the events are very similar. Once
the plugin gets control, it can interact with Bacula by getting and setting
Bacula variables. In this way, it behaves much like a RunScript. Currently
very few Bacula variables are defined, but they will be implemented as the need
arises, and the mechanism is very extensible.

We plan to have plugins register to receive events that they normally would
not receive, such as an event for each file examined for backup or restore.
This feature is not yet implemented.
The second type of plugin, which is more useful and fully implemented in the
current version, is what we call a command plugin. As with all plugins, it gets
notified of important events as noted above (details described below), but in
addition, this kind of plugin can accept a command line, which is a:

\begin{verbatim}
  Plugin = <command-string>
\end{verbatim}

directive that is placed in the Include section of a FileSet and is very
similar to the "File = " directive. When this Plugin directive is encountered
by Bacula during backup, it passes the "command" part of the Plugin directive
only to the plugin that is explicitly named in the first field of that command
string. This allows that plugin to backup any file or files on the system that
it wants. It can even create "virtual files" in the catalog that contain data
to be restored but do not necessarily correspond to actual files on the
filesystem.
The important features of the command plugin entry points are:
\begin{itemize}
\item It is triggered by a "Plugin =" directive in the FileSet.
\item Only a single plugin is called: the one named on the "Plugin =" directive.
\item The full command string after the "Plugin =" is passed to the plugin
   so that it can be told what to backup/restore.
\end{itemize}
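For instance, with the bundled bpipe plugin (see the Building Bacula Plugins
section of this document), such a directive might look like the following
sketch. The paths and commands are illustrative only, and the exact bpipe
argument syntax should be checked against the plugin's documentation:

\begin{verbatim}
Include {
  Options { signature = MD5 }
  Plugin = "bpipe:/MYSQL/dump.sql:mysqldump mydb:mysql mydb"
}
\end{verbatim}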
\section{Loading Plugins}
Once the File daemon loads the plugins, it asks the OS for the
two entry points (loadPlugin and unloadPlugin) then calls the
{\bf loadPlugin} entry point (see below).

Bacula passes information to the plugin through this call and it gets
back information that it needs to use the plugin. Later, Bacula
will call particular functions that are defined by the
{\bf loadPlugin} interface.

When Bacula is finished with the plugin
(when Bacula is going to exit), it will call the {\bf unloadPlugin}
entry point.
The two entry points are:

\begin{verbatim}
bRC loadPlugin(bInfo *lbinfo, bFuncs *lbfuncs, pInfo **pinfo, pFuncs **pfuncs)

bRC unloadPlugin()
\end{verbatim}
Both of these external entry points to the shared object are defined as C entry
points to avoid name mangling complications with C++. However, the shared
object can actually be written in any language (preferably C or C++) provided
that it follows C language calling conventions.
The definitions for {\bf bRC} and the arguments are in {\bf
src/filed/fd-plugins.h} and so this header file needs to be included in
your plugin. It, along with {\bf src/lib/plugins.h}, defines basically the
whole plugin interface. Within this header file, it includes the following
files:

\begin{verbatim}
#include <sys/types.h>
#include "config.h"
#include "bc_types.h"
#include "lib/plugins.h"
#include <sys/stat.h>
\end{verbatim}
Aside from the {\bf bc\_types.h} and {\bf config.h} headers, the plugin
definition uses the minimum code from Bacula. The bc\_types.h file is required
to ensure that the data type definitions in arguments correspond to the Bacula
machine definitions.
The return codes are defined as:

\begin{verbatim}
typedef enum {
  bRC_OK    = 0,                         /* OK */
  bRC_Stop  = 1,                         /* Stop calling other plugins */
  bRC_Error = 2,                         /* Some kind of error */
  bRC_More  = 3,                         /* More files to backup */
} bRC;
\end{verbatim}
At a future point in time, we hope to make the Bacula libbac.a into a
shared object so that the plugin can use much more of Bacula's
infrastructure, but for this first cut, we have tried to minimize the
dependence on Bacula.
\section{loadPlugin}
As previously mentioned, the {\bf loadPlugin} entry point in the plugin
is called immediately after Bacula loads the plugin when the File daemon
itself is first starting. This entry point is only called once during the
execution of the File daemon. In calling the
plugin, the first two arguments are information from Bacula that
is passed to the plugin, and the last two arguments are information
about the plugin that the plugin must return to Bacula. The call is:

\begin{verbatim}
bRC loadPlugin(bInfo *lbinfo, bFuncs *lbfuncs, pInfo **pinfo, pFuncs **pfuncs)
\end{verbatim}
and the arguments are:

{\bf lbinfo:}
This is information about Bacula in general. Currently, the only value
defined in the bInfo structure is the version, which is the Bacula plugin
interface version, currently defined as 1. The {\bf size} is set to the
byte size of the structure. The exact definition of the bInfo structure
as of this writing is:

\begin{verbatim}
typedef struct s_baculaInfo {
   uint32_t size;
   uint32_t version;
} bInfo;
\end{verbatim}
{\bf lbfuncs:}
The bFuncs structure defines the callback entry points within Bacula
that the plugin can use to register events, get Bacula values, set
Bacula values, and send messages to the Job output or debug output.

The exact definition as of this writing is:
\begin{verbatim}
typedef struct s_baculaFuncs {
   uint32_t size;
   uint32_t version;
   bRC (*registerBaculaEvents)(bpContext *ctx, ...);
   bRC (*getBaculaValue)(bpContext *ctx, bVariable var, void *value);
   bRC (*setBaculaValue)(bpContext *ctx, bVariable var, void *value);
   bRC (*JobMessage)(bpContext *ctx, const char *file, int line,
       int type, utime_t mtime, const char *fmt, ...);
   bRC (*DebugMessage)(bpContext *ctx, const char *file, int line,
       int level, const char *fmt, ...);
   void *(*baculaMalloc)(bpContext *ctx, const char *file, int line,
       size_t size);
   void (*baculaFree)(bpContext *ctx, const char *file, int line, void *mem);
} bFuncs;
\end{verbatim}
We will discuss these entry points and how to use them a bit later when
describing the plugin code.
{\bf pinfo:}
When the loadPlugin entry point is called, the plugin must initialize
an information structure about the plugin and return a pointer to
this structure to Bacula.
The exact definition as of this writing is:

\begin{verbatim}
typedef struct s_pluginInfo {
   uint32_t size;
   uint32_t version;
   const char *plugin_magic;
   const char *plugin_license;
   const char *plugin_author;
   const char *plugin_date;
   const char *plugin_version;
   const char *plugin_description;
} pInfo;
\end{verbatim}
Where the fields are:

\begin{description}
\item [version] is the current Bacula defined plugin interface version,
   currently set to 1. If the interface version differs from the current
   version of Bacula, the plugin will not be run (not yet implemented).
\item [plugin\_magic] is a pointer to the text string "*FDPluginData*", a
   sort of sanity check. If this value is not specified, the plugin
   will not be run (not yet implemented).
\item [plugin\_license] is a pointer to a text string that describes the
   plugin license. Bacula will only accept compatible licenses (not yet
   implemented).
\item [plugin\_author] is a pointer to the text name of the author of the
   program. This string can be anything but is generally the author's name.
\item [plugin\_date] is a pointer to a text string containing the date of the
   plugin. This string can be anything but is generally some human readable
   form of the date.
\item [plugin\_version] is a pointer to a text string containing the version of
   the plugin. The contents are determined by the plugin writer.
\item [plugin\_description] is a pointer to a string describing what the
   plugin does. The contents are determined by the plugin writer.
\end{description}
The pInfo structure must be defined in static memory because Bacula does not
copy it and may refer to the values at any time while the plugin is
loaded. All values must be supplied or the plugin will not run (not yet
implemented). All text strings must be either ASCII or UTF-8 strings that
are terminated with a zero byte.
{\bf pfuncs:}
When the loadPlugin entry point is called, the plugin must initialize
an entry point structure about the plugin and return a pointer to
this structure to Bacula. This structure contains a pointer to each
of the entry points that the plugin must provide for Bacula. When
Bacula is actually running the plugin, it will call the defined
entry points at particular times. All entry points must be defined.

The pFuncs structure must be defined in static memory because Bacula does not
copy it and may refer to the values at any time while the plugin is
loaded.

The exact definition as of this writing is:
\begin{verbatim}
typedef struct s_pluginFuncs {
   uint32_t size;
   uint32_t version;
   bRC (*newPlugin)(bpContext *ctx);
   bRC (*freePlugin)(bpContext *ctx);
   bRC (*getPluginValue)(bpContext *ctx, pVariable var, void *value);
   bRC (*setPluginValue)(bpContext *ctx, pVariable var, void *value);
   bRC (*handlePluginEvent)(bpContext *ctx, bEvent *event, void *value);
   bRC (*startBackupFile)(bpContext *ctx, struct save_pkt *sp);
   bRC (*endBackupFile)(bpContext *ctx);
   bRC (*startRestoreFile)(bpContext *ctx, const char *cmd);
   bRC (*endRestoreFile)(bpContext *ctx);
   bRC (*pluginIO)(bpContext *ctx, struct io_pkt *io);
   bRC (*createFile)(bpContext *ctx, struct restore_pkt *rp);
   bRC (*setFileAttributes)(bpContext *ctx, struct restore_pkt *rp);
   bRC (*checkFile)(bpContext *ctx, char *fname);
} pFuncs;
\end{verbatim}
The details of the entry points will be presented in
separate sections below. Where:

\begin{description}
\item [size] is the byte size of the structure.
\item [version] is the plugin interface version currently set to 3.
\end{description}
Sample code for loadPlugin:

\begin{verbatim}
  bfuncs = lbfuncs;        /* set Bacula funct pointers */
  *pinfo  = &pluginInfo;   /* return pointer to our info */
  *pfuncs = &pluginFuncs;  /* return pointer to our functions */
\end{verbatim}

where pluginInfo and pluginFuncs are statically defined structures.
See bpipe-fd.c for details.
\section{Plugin Entry Points}
This section will describe each of the entry points (subroutines) within
the plugin that the plugin must provide for Bacula, when they are called,
and their arguments. As noted above, pointers to these subroutines are
passed back to Bacula in the pFuncs structure when Bacula calls the
loadPlugin() externally defined entry point.
\subsection{newPlugin(bpContext *ctx)}
This is the entry point that Bacula will call
when a new "instance" of the plugin is created. This typically
happens at the beginning of a Job. If 10 Jobs are running
simultaneously, there will be at least 10 instances of the plugin.
The bpContext structure will be passed to the plugin, and
during this call, if the plugin needs to have its private
working storage that is associated with the particular
instance of the plugin, it should create it from the heap
(malloc the memory) and store a pointer to
its private working storage in the {\bf pContext} variable.
Note: since Bacula is a multi-threaded program, you must not
keep any variable data in your plugin unless it is truly meant
to apply globally to the whole plugin. In addition, you must
be aware that except for the first and last calls to the plugin
(loadPlugin and unloadPlugin) all the other calls will be
made by threads that correspond to a Bacula job. The
bpContext that will be passed for each thread will remain the
same throughout the Job, thus you can keep your private Job specific
data in it ({\bf pContext}).
\begin{verbatim}
typedef struct s_bpContext {
   void *pContext;   /* Plugin private context */
   void *bContext;   /* Bacula private context */
} bpContext;
\end{verbatim}
This context pointer will be passed as the first argument to all
the entry points that Bacula calls within the plugin. Needless
to say, the plugin should not change the bContext variable, which
is Bacula's private context pointer for this instance (Job) of this
plugin.
\subsection{freePlugin(bpContext *ctx)}
This entry point is called when
this instance of the plugin is no longer needed (the Job is
ending), and the plugin should release all memory it may
have allocated for this particular instance (Job), i.e. the pContext.
This is not the final termination
of the plugin signaled by a call to {\bf unloadPlugin}.
Any other instances (Jobs) will
continue to run, and the entry point {\bf newPlugin} may be called
again if other jobs start.
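Putting newPlugin and freePlugin together, a minimal context life cycle
might look like the following sketch. The bpContext and bRC typedefs are
simplified stand-ins for the ones in Bacula's fd\_plugins.h, and
plugin\_ctx is a hypothetical private structure, not part of the API.

```c
#include <assert.h>
#include <stdlib.h>

/* Simplified stand-ins for the Bacula plugin types. */
typedef enum { bRC_OK = 0, bRC_Error = 1 } bRC;
typedef struct s_bpContext {
   void *pContext;   /* Plugin private context */
   void *bContext;   /* Bacula private context */
} bpContext;

/* Hypothetical per-Job private storage for this plugin instance. */
struct plugin_ctx {
   char *fname;      /* filename from the "Plugin =" command */
   int backup;       /* nonzero while a backup is in progress */
};

/* Called once per Job: allocate private storage on the heap and
 * anchor it in ctx->pContext.  Never keep Job data in globals,
 * because several Jobs (threads) may run the plugin at once. */
static bRC newPlugin(bpContext *ctx)
{
   struct plugin_ctx *p_ctx = calloc(1, sizeof(struct plugin_ctx));
   if (!p_ctx) {
      return bRC_Error;
   }
   ctx->pContext = p_ctx;
   return bRC_OK;
}

/* Called when the Job ends: release everything newPlugin (or any
 * later call) allocated for this instance. */
static bRC freePlugin(bpContext *ctx)
{
   struct plugin_ctx *p_ctx = ctx->pContext;
   if (p_ctx) {
      free(p_ctx->fname);
      free(p_ctx);
      ctx->pContext = NULL;
   }
   return bRC_OK;
}
```

The same pairing appears in bpipe-fd.c, where the private context holds
the parsed command string and the pipe state.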
\subsection{getPluginValue(bpContext *ctx, pVariable var, void *value)}
Bacula will call this entry point to get
a value from the plugin. This entry point is currently not called.

\subsection{setPluginValue(bpContext *ctx, pVariable var, void *value)}
Bacula will call this entry point to set
a value in the plugin. This entry point is currently not called.
\subsection{handlePluginEvent(bpContext *ctx, bEvent *event, void *value)}
This entry point is called when Bacula
encounters certain events (discussed below). This is, in fact, the
main way that most plugins get control when a Job runs and how
they know what is happening in the job. It can be likened to the
{\bf RunScript} feature that calls external programs and scripts,
and is very similar to the Bacula Python interface.
When the plugin is called, Bacula passes it a pointer to an event
structure (bEvent), which currently has one item, the eventType:
\begin{verbatim}
typedef struct s_bEvent {
   uint32_t eventType;
} bEvent;
\end{verbatim}
which defines what event has been triggered, and for each event,
Bacula will pass a pointer to a value associated with that event.
If no value is associated with a particular event, Bacula will
pass a NULL pointer, so the plugin must be careful to always check
the value pointer prior to dereferencing it.
The current list of events is:

\begin{verbatim}
typedef enum {
  bEventJobStart        = 1,
  bEventJobEnd          = 2,
  bEventStartBackupJob  = 3,
  bEventEndBackupJob    = 4,
  bEventStartRestoreJob = 5,
  bEventEndRestoreJob   = 6,
  bEventStartVerifyJob  = 7,
  bEventEndVerifyJob    = 8,
  bEventBackupCommand   = 9,
  bEventRestoreCommand  = 10,
  bEventLevel           = 11,
  bEventSince           = 12
} bEventType;
\end{verbatim}
Most of the above are self-explanatory.

\begin{description}
\item [bEventJobStart] is called whenever a Job starts. The value
  passed is a pointer to a string that contains: "Jobid=nnn
  Job=job-name", where nnn will be replaced by the JobId and job-name
  will be replaced by the Job name. The variable is temporary so if you
  need the values, you must copy them.

\item [bEventJobEnd] is called whenever a Job ends. No value is passed.

\item [bEventStartBackupJob] is called when a Backup Job begins. No value
  is passed.

\item [bEventEndBackupJob] is called when a Backup Job ends. No value is
  passed.

\item [bEventStartRestoreJob] is called when a Restore Job starts. No value
  is passed.

\item [bEventEndRestoreJob] is called when a Restore Job ends. No value is
  passed.

\item [bEventStartVerifyJob] is called when a Verify Job starts. No value
  is passed.

\item [bEventEndVerifyJob] is called when a Verify Job ends. No value is
  passed.

\item [bEventBackupCommand] is called prior to the bEventStartBackupJob, and
  the plugin is passed the command string (everything after the equal sign
  in "Plugin =") as the value.

  Note, if you intend to back up a file, this is an important point at which
  to write code that copies the command string passed into your pContext
  area, so that you will know that a backup is being performed and you will
  know the full contents of the "Plugin =" command (i.e. what to back up and
  what virtual filename the user wants to call it).

\item [bEventRestoreCommand] is called prior to the bEventStartRestoreJob, and
  the plugin is passed the command string (everything after the equal sign
  in "Plugin =") as the value.

  See the notes above concerning backup and the command string. This is the
  point at which Bacula passes you the original command string that was
  specified during the backup, so you will want to save it in your pContext
  area for later use when Bacula calls the plugin again.

\item [bEventLevel] is called when the level is set for a new Job. The value
  is a 32 bit integer stored in the void*, which represents the Job Level
  code.

\item [bEventSince] is called when the since time is set for a new Job. The
  value is a time\_t at which the last job was run.
\end{description}
During each of the above calls, the plugin receives either no specific value or
only one value, which in some cases may not be sufficient. However, knowing
the context of the event, the plugin can call back to the Bacula entry points
it was passed during the {\bf loadPlugin} call and get to a number of Bacula
variables. (At the current time few Bacula variables are implemented, but the
list can easily be extended in the future as needs require.)
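To make the event dispatch concrete, here is a minimal sketch of a
handlePluginEvent implementation. The typedefs are simplified stand-ins
for those in Bacula's fd\_plugins.h, and plugin\_ctx is a hypothetical
private context; the key point is copying the temporary command string
into the context, as recommended above.

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Simplified stand-ins for the Bacula plugin types. */
typedef enum { bRC_OK = 0, bRC_Error = 1 } bRC;
typedef struct s_bEvent { uint32_t eventType; } bEvent;
typedef struct s_bpContext { void *pContext; void *bContext; } bpContext;

typedef enum {
   bEventJobStart = 1, bEventJobEnd = 2,
   bEventStartBackupJob = 3, bEventEndBackupJob = 4,
   bEventBackupCommand = 9, bEventRestoreCommand = 10
} bEventType;

/* Hypothetical private context for this plugin instance. */
struct plugin_ctx { char *cmd; };

static bRC handlePluginEvent(bpContext *ctx, bEvent *event, void *value)
{
   struct plugin_ctx *p_ctx = ctx->pContext;
   switch (event->eventType) {
   case bEventBackupCommand:
   case bEventRestoreCommand: {
      /* value may be NULL, and the string is temporary: copy it. */
      if (!value) {
         return bRC_Error;
      }
      size_t len = strlen((const char *)value) + 1;
      free(p_ctx->cmd);              /* drop any earlier command */
      p_ctx->cmd = malloc(len);
      if (!p_ctx->cmd) {
         return bRC_Error;
      }
      memcpy(p_ctx->cmd, value, len);
      break;
   }
   default:
      break;                         /* ignore events we do not need */
   }
   return bRC_OK;
}
```

Events the plugin does not care about are simply ignored, and the plugin
still returns bRC\_OK for them.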
\subsection{startBackupFile(bpContext *ctx, struct save\_pkt *sp)}
This entry point is called only if your plugin is a command plugin, and
it is called when Bacula encounters the "Plugin = " directive in
the Include section of the FileSet, at the point where the backup of a
file is beginning. Here Bacula provides you
with a pointer to the {\bf save\_pkt} structure and you must fill in
this packet with the "attribute" data of the file.
\begin{verbatim}
struct save_pkt {
   int32_t pkt_size;         /* size of this packet */
   char *fname;              /* Full path and filename */
   char *link;               /* Link name if any */
   struct stat statp;        /* System stat() packet for file */
   int32_t type;             /* FT_xx for this file */
   uint32_t flags;           /* Bacula internal flags */
   bool portable;            /* set if data format is portable */
   char *cmd;                /* command */
   int32_t pkt_end;          /* end packet sentinel */
};
\end{verbatim}
The second argument is a pointer to the {\bf save\_pkt} structure for the file
to be backed up. The plugin is responsible for filling in all the fields
of the {\bf save\_pkt}. If you are backing up
a real file, then generally, the statp structure can be filled in by doing
a {\bf stat} system call on the file.
If you are backing up a database or
something that is more complex, you might want to create a virtual file.
That is a file that does not actually exist on the filesystem, but
represents, say, an object that you are backing up. In that case, you need
to ensure that the {\bf fname} string that you pass back is unique so that
it does not conflict with a real file on the system, and you need to
artificially create values in the statp packet.
Example programs such as {\bf bpipe-fd.c} show how to set these fields. You
must take care not to store pointers to the stack in the pointer fields such
as fname and link, because when you return from your function, your stack
entries will be destroyed. The solution in that case is to malloc() the
memory and return a pointer to it. In order not to have memory leaks, you
should store a pointer to all memory allocated in your pContext structure so
that in subsequent calls or at termination, you can release it back to the
system.
Once the backup has begun, Bacula will call your plugin at the {\bf pluginIO}
entry point to "read" the data to be backed up. Please see the {\bf bpipe-fd.c}
plugin for how to do I/O.
Example of filling in the save\_pkt as used in bpipe-fd.c:

\begin{verbatim}
   struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;
   time_t now = time(NULL);
   sp->fname = p_ctx->fname;
   sp->statp.st_mode = 0700 | S_IFREG;
   sp->statp.st_ctime = now;
   sp->statp.st_mtime = now;
   sp->statp.st_atime = now;
   sp->statp.st_size = -1;
   sp->statp.st_blksize = 4096;
   sp->statp.st_blocks = 1;
   p_ctx->backup = true;
\end{verbatim}
Note: the filename to be created has already been extracted from the
command string previously sent to the plugin; it is stored in the plugin
context (p\_ctx->fname) and is a malloc()ed string. This example
creates a regular file (S\_IFREG), with various fields being filled in.
In general, the sequence of commands issued from Bacula to the plugin
to do a backup while processing the "Plugin = " directive is:

\begin{enumerate}
\item generate a bEventBackupCommand event to the specified plugin
  and pass it the command string.
\item make a startPluginBackup call to the plugin, which
  fills in the data needed in save\_pkt to save as the file
  attributes and to put on the Volume and in the catalog.
\item call Bacula's internal save\_file() subroutine to save the specified
  file. The plugin will then be called at pluginIO() to "open"
  the file, and then to read the file data.
  Note, if you are dealing with a virtual file, the "open" operation
  is something the plugin does internally and it doesn't necessarily
  mean opening a file on the filesystem. For example, in the case of
  the bpipe-fd.c program, it initiates a pipe to the requested program.
  Finally when the plugin signals to Bacula that all the data was read,
  Bacula will call the plugin with the "close" pluginIO() function.
\end{enumerate}
\subsection{endBackupFile(bpContext *ctx)}
Called at the end of backing up a file for a command plugin. If the plugin's
work is done, it should return bRC\_OK. If the plugin wishes to create another
file and back it up, then it must return bRC\_More (not yet implemented). This
is probably a good time to release any malloc()ed memory you used to pass
back the file attributes.
\subsection{startRestoreFile(bpContext *ctx, const char *cmd)}
Called when the first record is read from the Volume that was
previously written by the command plugin.
\subsection{createFile(bpContext *ctx, struct restore\_pkt *rp)}
Called for a command plugin to create a file during a Restore job before
restoring the data.
This entry point is called before any I/O is done on the file. After
this call, Bacula will call pluginIO() to open the file for write.

The {\bf restore\_pkt} structure is passed to the plugin and is based on the
data that was
originally given by the plugin during the backup and the current user
restore settings (e.g. where, RegexWhere, replace). This allows the
plugin to first create a file (if necessary) so that the data can
be transmitted to it. The next call to the plugin will be a
pluginIO command with a request to open the file write-only.
This call must return one of the following values:

\begin{verbatim}
enum {
   CF_SKIP = 1,     /* skip file (not newer or something) */
   CF_ERROR,        /* error creating file */
   CF_EXTRACT,      /* file created, data to extract */
   CF_CREATED       /* file created, no data to extract */
};
\end{verbatim}
in the restore\_pkt value {\bf create\_status}. For a normal file,
unless there is an error, you must return {\bf CF\_EXTRACT}.
\begin{verbatim}
struct restore_pkt {
   int32_t pkt_size;          /* size of this packet */
   int32_t stream;            /* attribute stream id */
   int32_t data_stream;       /* id of data stream to follow */
   int32_t type;              /* file type FT */
   int32_t file_index;        /* file index */
   int32_t LinkFI;            /* file index to data if hard link */
   uid_t uid;                 /* userid */
   struct stat statp;         /* decoded stat packet */
   const char *attrEx;        /* extended attributes if any */
   const char *ofname;        /* output filename */
   const char *olname;        /* output link name */
   const char *where;         /* where */
   const char *RegexWhere;    /* regex where */
   int replace;               /* replace flag */
   int create_status;         /* status from createFile() */
   int32_t pkt_end;           /* end packet sentinel */
};
\end{verbatim}
Typical code to create a regular file would be the following:

\begin{verbatim}
   struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;
   time_t now = time(NULL);
   sp->fname = p_ctx->fname;   /* set the full path/filename I want to create */
   sp->type = FT_REG;
   sp->statp.st_mode = 0700 | S_IFREG;
   sp->statp.st_ctime = now;
   sp->statp.st_mtime = now;
   sp->statp.st_atime = now;
   sp->statp.st_size = -1;
   sp->statp.st_blksize = 4096;
   sp->statp.st_blocks = 1;
\end{verbatim}
This will create a virtual file. If you are creating a file that actually
exists, you will most likely want to fill the statp packet using the
system stat() call.
Creating a directory is similar, but requires a few extra steps:

\begin{verbatim}
   struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;
   time_t now = time(NULL);
   sp->fname = p_ctx->fname;   /* set the full path I want to create */
   sp->link = xxx;             /* xxx is p_ctx->fname with a trailing slash */
   sp->type = FT_DIREND;
   sp->statp.st_mode = 0700 | S_IFDIR;
   sp->statp.st_ctime = now;
   sp->statp.st_mtime = now;
   sp->statp.st_atime = now;
   sp->statp.st_size = -1;
   sp->statp.st_blksize = 4096;
   sp->statp.st_blocks = 1;
\end{verbatim}
The link field must be set to the full canonical path name, which always
ends with a forward slash. If you do not terminate it with a forward slash,
you will surely have problems later.
As with the example that creates a file, if you are backing up a real
directory, you will want to do a stat() on the directory.
Note, if you want the directory permissions and times to be correctly
restored, you must create the directory {\bf after} all the files it
contains have been sent to Bacula. That allows the restore process to
restore all the files in a directory using default directory options, then
at the end, restore the directory permissions. If you do it the other way
around, each time you restore a file, the OS will modify the time values
for the directory entry.
\subsection{setFileAttributes(bpContext *ctx, struct restore\_pkt *rp)}
This call is not yet implemented. It is called for a command plugin.

See the definition of {\bf restore\_pkt} in the above section.
\subsection{endRestoreFile(bpContext *ctx)}
Called when a command plugin is done restoring a file.
\subsection{pluginIO(bpContext *ctx, struct io\_pkt *io)}
Called to do the input (backup) or output (restore) of data from or to a file
for a command plugin. These routines simulate the Unix read(), write(), open(),
close(), and lseek() I/O calls, and the arguments are passed in the packet and
the return values are also placed in the packet. In addition, for Win32 systems
the plugin must return two additional values (described below).
The io\_pkt structure is defined as:

\begin{verbatim}
enum {
   IO_OPEN  = 1,
   IO_READ  = 2,
   IO_WRITE = 3,
   IO_CLOSE = 4,
   IO_SEEK  = 5
};

struct io_pkt {
   int32_t pkt_size;        /* Size of this packet */
   int32_t func;            /* Function code */
   int32_t count;           /* read/write count */
   mode_t mode;             /* permissions for created files */
   int32_t flags;           /* Open flags */
   char *buf;               /* read/write buffer */
   const char *fname;       /* open filename */
   int32_t status;          /* return status */
   int32_t io_errno;        /* errno code */
   int32_t lerror;          /* Win32 error code */
   int32_t whence;          /* lseek argument */
   boffset_t offset;        /* lseek argument */
   bool win32;              /* Win32 GetLastError returned */
   int32_t pkt_end;         /* end packet sentinel */
};
\end{verbatim}
The particular Unix function being simulated is indicated by the {\bf func}
field, which will have one of the IO\_OPEN, IO\_READ, ... codes listed above.
The
status code that would be returned from a Unix call is returned in {\bf status}
for IO\_OPEN, IO\_CLOSE, IO\_READ, and IO\_WRITE. The return value for IO\_SEEK
is returned in {\bf offset}, which in general is a 64 bit value.

When there is an error on Unix systems, you must always set io\_errno, and
on a Win32 system, you must always set win32, and put the value returned by
the OS call GetLastError() in lerror.

For all except IO\_SEEK, {\bf status} is the return result. In general it is
a positive integer unless there is an error, in which case it is -1.
The following describes each call and what you get and what you
must return:

\begin{description}
\item [IO\_OPEN]
  You will be passed fname, mode, and flags.
  You must set on return: status, and if there is a Unix error,
  io\_errno must be set to the errno value, and if there is a
  Win32 error, win32 and lerror.

\item [IO\_READ]
  You will be passed: count, and buf (buffer of size count).
  You must set on return: status to the number of bytes
  read into the buffer (buf) or -1 on an error,
  and if there is a Unix error,
  io\_errno must be set to the errno value, and if there is a
  Win32 error, win32 and lerror must be set.

\item [IO\_WRITE]
  You will be passed: count, and buf (buffer of size count).
  You must set on return: status to the number of bytes
  written from the buffer (buf) or -1 on an error,
  and if there is a Unix error,
  io\_errno must be set to the errno value, and if there is a
  Win32 error, win32 and lerror must be set.

\item [IO\_CLOSE]
  Nothing will be passed to you. On return you must set
  status to 0 on success and -1 on failure. If there is a Unix error,
  io\_errno must be set to the errno value, and if there is a
  Win32 error, win32 and lerror must be set.

\item [IO\_SEEK]
  You will be passed: offset, and whence. offset is a 64 bit value
  and is the position to seek to relative to whence. whence is one
  of SEEK\_SET, SEEK\_CUR, or SEEK\_END, indicating whether to
  seek to an absolute position, relative to the current
  position, or relative to the end of the file.
  You must pass back in offset the absolute location to which you
  seeked. If there is an error, offset should be set to -1.
  If there is a Unix error,
  io\_errno must be set to the errno value, and if there is a
  Win32 error, win32 and lerror must be set.

  Note: Bacula will call IO\_SEEK only when writing a sparse file.
\end{description}
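As an illustration of this calling convention, here is a minimal pluginIO
sketch. The io\_pkt below is a simplified stand-in for the one in Bacula's
fd\_plugins.h (Win32 fields omitted), plugin\_ctx is hypothetical, and a
real plugin would read from its own data source rather than an ordinary
file; the sketch only shows how func, status, and io\_errno interact.

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>
#include <errno.h>
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

/* Simplified stand-ins for the Bacula plugin types. */
typedef enum { bRC_OK = 0, bRC_Error = 1 } bRC;
enum { IO_OPEN = 1, IO_READ = 2, IO_WRITE = 3, IO_CLOSE = 4, IO_SEEK = 5 };
struct io_pkt {
   int32_t func;        /* Function code: IO_OPEN, IO_READ, ... */
   int32_t count;       /* read/write count */
   mode_t  mode;        /* permissions for created files */
   int32_t flags;       /* Open flags */
   char   *buf;         /* read/write buffer */
   const char *fname;   /* open filename */
   int32_t status;      /* return status */
   int32_t io_errno;    /* errno code */
};
typedef struct s_bpContext { void *pContext; void *bContext; } bpContext;

/* Hypothetical private context holding the open descriptor. */
struct plugin_ctx { int fd; };

static bRC pluginIO(bpContext *ctx, struct io_pkt *io)
{
   struct plugin_ctx *p_ctx = ctx->pContext;
   io->io_errno = 0;
   switch (io->func) {
   case IO_OPEN:
      p_ctx->fd = open(io->fname, io->flags, io->mode);
      io->status = p_ctx->fd;
      break;
   case IO_READ:
      io->status = read(p_ctx->fd, io->buf, io->count);
      break;
   case IO_WRITE:
      io->status = write(p_ctx->fd, io->buf, io->count);
      break;
   case IO_CLOSE:
      io->status = close(p_ctx->fd);
      break;
   default:
      io->status = -1;
   }
   if (io->status < 0) {
      io->io_errno = errno;   /* always set io_errno on a Unix error */
      return bRC_Error;
   }
   return bRC_OK;
}
```

The bpipe-fd.c plugin follows the same pattern, except that its "open"
creates a pipe to the requested program instead of opening a file.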
\subsection{bool checkFile(bpContext *ctx, char *fname)}
If this entry point is set, Bacula will call it after backing up all file
data during an Accurate backup. It will be passed the full filename for
each file that Bacula is proposing to mark as deleted. Only files
previously backed up but not backed up in the current session will be
marked to be deleted. If you return {\bf false}, the file will be
marked deleted. If you return {\bf true}, the file will not be marked
deleted. This permits a plugin to ensure that previously saved virtual
files or files controlled by your plugin that have not changed (not backed
up in the current job) are not marked to be deleted. This entry point will
only be called during Accurate Incremental and Differential backup jobs.
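For example, a plugin that owns a single virtual filename might implement
checkFile as in the following sketch (simplified stand-in types; the
virtual filename in the private context is hypothetical), returning true
so its unchanged virtual file is never marked deleted:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Simplified stand-in for the Bacula plugin context. */
typedef struct s_bpContext { void *pContext; void *bContext; } bpContext;

/* Hypothetical private context remembering the virtual filename
 * this plugin backs up. */
struct plugin_ctx { const char *fname; };

/* Return true for files this plugin is responsible for, so Bacula
 * does not mark them deleted; false lets Bacula decide normally. */
static bool checkFile(bpContext *ctx, char *fname)
{
   struct plugin_ctx *p_ctx = ctx->pContext;
   return p_ctx->fname && strcmp(fname, p_ctx->fname) == 0;
}
```

A plugin that manages many files would check fname against whatever set
of names it keeps in its private context instead of a single string.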
\section{Bacula Plugin Entrypoints}
When Bacula calls one of your plugin entrypoints, you can call back to
the entrypoints in Bacula that were supplied during the xxx plugin call
to get or set information within Bacula.
\subsection{bRC registerBaculaEvents(bpContext *ctx, ...)}
This Bacula entrypoint will allow you to register to receive events
that are not automatically passed to your plugin by default. This
entrypoint is currently unimplemented.
\subsection{bRC getBaculaValue(bpContext *ctx, bVariable var, void *value)}
Calling this entrypoint, you can obtain specific values that are available
in Bacula. The following variables can be referenced:

\begin{itemize}
\item bVarJobId returns an int
\item bVarFDName returns a char *
\item bVarLevel returns an int
\item bVarClient returns a char *
\item bVarJobName returns a char *
\item bVarJobStatus returns an int
\item bVarSinceTime returns an int (time\_t)
\item bVarAccurate returns an int
\end{itemize}
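The calling pattern is to pass the address of a variable of the listed
type in value. The sketch below stubs getBaculaValue purely to show the
shape of the call (the real function arrives in the bFuncs table during
loadPlugin, and the bVariable enum values here are stand-ins, not the
real ones):

```c
#include <assert.h>

/* Simplified stand-ins for the Bacula plugin types; the bVariable
 * values are assumed, not taken from the real header. */
typedef enum { bRC_OK = 0, bRC_Error = 1 } bRC;
typedef enum { bVarJobId = 1, bVarFDName = 2, bVarLevel = 3, bVarClient = 4,
               bVarJobName = 5, bVarJobStatus = 6, bVarSinceTime = 7,
               bVarAccurate = 8 } bVariable;
typedef struct s_bpContext { void *pContext; void *bContext; } bpContext;

/* Stub standing in for the getBaculaValue entry Bacula hands the
 * plugin in its bFuncs table; here it only knows bVarJobId. */
static bRC getBaculaValue(bpContext *ctx, bVariable var, void *value)
{
   (void)ctx;
   switch (var) {
   case bVarJobId:
      *(int *)value = 4211;          /* made-up JobId for the stub */
      return bRC_OK;
   default:
      return bRC_Error;
   }
}

/* Typical plugin-side call: pass the address of an int for bVarJobId. */
static int current_jobid(bpContext *ctx)
{
   int jobid = 0;
   if (getBaculaValue(ctx, bVarJobId, &jobid) != bRC_OK) {
      return -1;
   }
   return jobid;
}
```

For the char * variables (bVarFDName, bVarClient, bVarJobName) the value
argument would instead be the address of a char * to be filled in.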
\subsection{bRC setBaculaValue(bpContext *ctx, bVariable var, void *value)}
Calling this entrypoint allows you to set particular values in
Bacula. The only variable that can currently be set is
{\bf bVarFileSeen}, and the value passed is a char * that points
to the full filename for a file that you are indicating has been
seen and hence is not deleted.
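In practice, bVarFileSeen is the proactive counterpart of checkFile. A
hedged sketch of the call shape follows, with a recording stub in place of
Bacula's real setBaculaValue from the bFuncs table (the enum value and the
filename are stand-ins):

```c
#include <assert.h>
#include <string.h>

/* Simplified stand-ins; bVarFileSeen's numeric value is assumed. */
typedef enum { bRC_OK = 0, bRC_Error = 1 } bRC;
typedef enum { bVarFileSeen = 1 } bVariable;
typedef struct s_bpContext { void *pContext; void *bContext; } bpContext;

/* Stub standing in for Bacula's setBaculaValue; it just records the
 * last filename marked seen so the call shape can be demonstrated. */
static char last_seen[256];
static bRC setBaculaValue(bpContext *ctx, bVariable var, void *value)
{
   (void)ctx;
   if (var != bVarFileSeen || !value) {
      return bRC_Error;
   }
   strncpy(last_seen, (const char *)value, sizeof(last_seen) - 1);
   return bRC_OK;
}

/* Plugin-side helper: mark one of our virtual files as seen so an
 * Accurate backup does not list it as deleted. */
static bRC mark_file_seen(bpContext *ctx, const char *fname)
{
   return setBaculaValue(ctx, bVarFileSeen, (void *)fname);
}
```

A plugin would typically make this call while handling an event during an
Accurate backup, once for each of its files that still exists.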
\subsection{bRC JobMessage(bpContext *ctx, const char *file, int line,
  int type, utime\_t mtime, const char *fmt, ...)}
This call permits you to put a message in the Job Report.

\subsection{bRC DebugMessage(bpContext *ctx, const char *file, int line,
  int level, const char *fmt, ...)}
This call permits you to print a debug message.
\subsection{void *baculaMalloc(bpContext *ctx, const char *file, int line,
  size\_t size)}
This call permits you to obtain memory from Bacula's memory allocator.

\subsection{void baculaFree(bpContext *ctx, const char *file, int line, void *mem)}
This call permits you to free memory obtained from Bacula's memory allocator.