\chapter{New Features in Bacula Enterprise 6.0.x}
This chapter presents the new features that have been added to the
current Enterprise version of Bacula.
These features are available only with a Bacula Systems subscription.

In addition to the features in this chapter, the Enterprise version
includes the Community features described in the Community new features
chapter of this manual.
\chapter{New Features in Bacula Enterprise 6.0.6}

\section{Incremental Accelerator Plugin for NetApp}

The Incremental Accelerator for NetApp Plugin is designed to simplify the
backup and restore procedure of a NetApp NAS hosting a very large number of
files.

\smallskip{} When using the NetApp HFC Plugin, Bacula Enterprise will query
the NetApp device for the list of all files modified since the last backup
instead of having to walk through the entire filesystem. Once Bacula has the
list of all files to back up, it uses a standard network share (such as NFS
or CIFS) to access the files.

This project was funded by Bacula Systems and is available with the Bacula
Enterprise Edition.
\section{PostgreSQL Plugin}

The PostgreSQL plugin is designed to simplify the backup and restore
procedure of your PostgreSQL cluster; the backup administrator does not need
to learn the internals of PostgreSQL backup techniques or write complex
scripts. The plugin automatically backs up essential information such as the
configuration, user definitions, and tablespaces. The PostgreSQL plugin
supports both dump and Point In Time Recovery (PITR) backup techniques.

This project was funded by Bacula Systems and is available with the Bacula
Enterprise Edition.
\chapter{New Features in Bacula Enterprise 6.0.5}

\section{Maximum Reload Requests}

The new Director directive \texttt{Maximum Reload Requests} allows you to
configure the number of reload requests that can be done while jobs are
running.

\begin{verbatim}
Maximum Reload Requests = 64
\end{verbatim}
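For context, the directive is set in the Director resource of
\texttt{bacula-dir.conf}. The sketch below is illustrative only; the resource
name, paths, and password are assumptions, not taken from this manual:

\begin{verbatim}
Director {
  Name = mydir-dir                        # illustrative name
  QueryFile = "/opt/bacula/scripts/query.sql"
  Working Directory = "/opt/bacula/working"
  Pid Directory = "/opt/bacula/working"
  Password = "dir-password"
  Maximum Reload Requests = 64            # allow up to 64 reload requests
                                          # while jobs are running
}
\end{verbatim}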
\section{FD Storage Address}

When the Director is behind a NAT in a WAN area, the Director must use an
``external'' IP address to connect to the Storage Daemon, while the File
Daemon should use an ``internal'' IP address to contact the Storage Daemon.

The normal way to handle this situation is to use a canonical name such as
``storage-server'' that is resolved on the Director side as the WAN address
and on the Client side as the LAN address. It is now also possible to
configure this explicitly using the new \texttt{FDStorageAddress} Storage
directive.

\begin{figure}[htbp]
  \centering
  \includegraphics[width=10cm]{\idir BackupOverWan1}
  \caption{Backup over WAN}
  \label{fig:fdstorageaddress}
\end{figure}
\begin{verbatim}
FD Storage Address = 10.0.0.1
\end{verbatim}

% # or in the Client resource
% FD Storage Address = 10.0.0.1

% Note that using the Client \texttt{FDStorageAddress} directive will not allow
% to use multiple Storage Daemons; all Backup or Restore requests will be sent to
% the specified \texttt{FDStorageAddress}.
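As a sketch of how the directive fits into a Storage resource on the Director
side (the resource name, addresses, password, and device names below are
illustrative assumptions):

\begin{verbatim}
Storage {
  Name = File1
  Address = storage-server.example.com  # WAN name resolved by the Director
  FD Storage Address = 10.0.0.1         # LAN address used by the File Daemon
  SD Port = 9103
  Password = "sd-password"
  Device = FileStorage
  Media Type = File
}
\end{verbatim}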
\chapter{New Features in Bacula Enterprise 6.0.0}

\section{Incomplete Jobs}
During a backup, if the Storage daemon experiences a disconnection
from the File daemon (normally a comm line problem
or possibly an FD failure), under conditions that the SD determines
to be safe, it will mark the failed job as Incomplete rather than
Failed. This is done only if there is sufficient valid backup
data that was written to the Volume. The advantage of an Incomplete
job is that it can be restarted by the new bconsole {\bf restart}
command from the point where it left off rather than from the
beginning of the job, as is the case with a canceled job.
\section{The Stop Command}
Bacula has been enhanced to provide a {\bf stop} command,
very similar to the {\bf cancel} command, with the main difference
that the Job that is stopped is marked as Incomplete so that
it can later be restarted where it left off by the {\bf restart}
command (see below). The {\bf stop} command with no
arguments will, like the cancel command, prompt you with the
list of running jobs, allowing you to select one, which might
look like the following:

\begin{verbatim}
     1: JobId=3 Job=Incremental.2012-03-26_12.04.26_07
     2: JobId=4 Job=Incremental.2012-03-26_12.04.30_08
     3: JobId=5 Job=Incremental.2012-03-26_12.04.36_09
Choose Job to stop (1-3): 2
2001 Job "Incremental.2012-03-26_12.04.30_08" marked to be stopped.
3000 JobId=4 Job="Incremental.2012-03-26_12.04.30_08" marked to be stopped.
\end{verbatim}
\section{The Restart Command}
The new {\bf restart} command allows console users to restart
a canceled, failed, or incomplete Job. For canceled and failed
Jobs, the Job will restart from the beginning. For incomplete
Jobs the Job will restart at the point where it was stopped, either
by a stop command or by some recoverable failure.

If you enter the {\bf restart} command in bconsole, you will get a
prompt such as the following:

\begin{verbatim}
You have the following choices:
     1: Incomplete
     2: Canceled
     3: Failed
     4: All
Select termination code: (1-4):
\end{verbatim}

If you select the {\bf All} option, you may see something like:
\begin{verbatim}
Select termination code: (1-4): 4
+-------+-------------+---------------------+------+-------+----------+----------+-----------+
| jobid | name        | starttime           | type | level | jobfiles | jobbytes | jobstatus |
+-------+-------------+---------------------+------+-------+----------+----------+-----------+
|     1 | Incremental | 2012-03-26 12:15:21 | B    | F     |        0 |      ... | ...       |
|     2 | Incremental | 2012-03-26 12:18:14 | B    | F     |      350 |      ... | ...       |
|     3 | Incremental | 2012-03-26 12:18:30 | B    | F     |        0 |      ... | ...       |
|     4 | Incremental | 2012-03-26 12:18:38 | B    | F     |      331 |      ... | ...       |
+-------+-------------+---------------------+------+-------+----------+----------+-----------+
Enter the JobId list to select:
\end{verbatim}
Then you may enter one or more JobIds to be restarted, which may
take the form of a list of JobIds separated by commas, and/or JobId
ranges such as {\bf 1-4}, which indicates you want to restart JobIds
1 through 4, inclusive.
\section{Support for Exchange Incremental Backups}
The Bacula Enterprise version 6.0 VSS plugin now supports
Full and Incremental backups for Exchange. We strongly
recommend that you do not attempt to run Differential jobs with
Exchange, as it is likely to produce a situation where restores
will no longer select the correct jobs, and thus the
Windows Exchange VSS writer will fail when applying log files.
There is a Bacula Systems Enterprise white paper that provides
the details of backup and restore of Exchange 2010 with the
Bacula VSS plugin.

Restores can be done while Exchange is running, but you
must first unmount (dismount in Microsoft terms) any databases
you wish to restore and explicitly mark them to permit a
restore operation (see the white paper for details).

This project was funded by Bacula Systems and is available with the Bacula
Enterprise Edition.
\section{Support for MSSQL Block Level Backups}
With the addition of block level backup support to the
Bacula Enterprise VSS MSSQL component, you can now do
Differential backups in addition to Full backups.
Differential backups use Microsoft's partial block backup
(a block differencing or deduplication that we call Delta).
This partial block backup permits backing up only those
blocks that have changed. Database restores can be made while
the MSSQL server is running, but any databases selected for
restore will be automatically taken offline by the MSSQL
server during the restore process.

Incremental backups for MSSQL are not supported by
Microsoft. We strongly recommend that you not perform Incremental
backups with MSSQL, as they will probably produce a situation
where restores will no longer work correctly.
We are currently working on producing a white paper that will give more
details of backup and restore with MSSQL. One point to note is that during
a restore, you will normally not want to restore the {\bf master} database.
You must exclude it from the backup selections that you have made, or the
restore will fail.

It is possible to restore the {\bf master} database, but you must
first shut down the MSSQL server, then you must perform special
recovery commands. Please see the Microsoft documentation on how
to restore the master database.

This project was funded by Bacula Systems and is available with the Bacula
Enterprise Edition.
\section{Job Bandwidth Limitation}

The new {\bf Job Bandwidth Limitation} directive may be added to the File
daemon's and/or Director's configuration to limit the bandwidth used by a Job
on a Client. It can be set in the File daemon's conf file for all Jobs run by
that File daemon, or it can be set for each Job in the Director's conf file.

\begin{verbatim}
FileDaemon {
  ...
  Working Directory = /some/path
  Pid Directory = /some/path
  Maximum Bandwidth Per Job = 5Mb/s
}
\end{verbatim}

The above example would cause any jobs running with that File daemon to not
exceed 5Mb/s of throughput when sending data to the Storage Daemon.

You can specify the speed parameter in k/s, Kb/s, m/s, or Mb/s.

\begin{verbatim}
Job {
  Name = localhost-data
  FileSet = FS_localhost
  Maximum Bandwidth = 5Mb/s
  ...
}
\end{verbatim}

The above example would cause Job \texttt{localhost-data} to not exceed 5Mb/s
of throughput when sending data from the File daemon to the Storage daemon.

A new console command, \texttt{setbandwidth}, allows you to dynamically set
the maximum throughput of a running Job, or of future jobs of a Client.

\begin{verbatim}
* setbandwidth limit=1000000 jobid=10
\end{verbatim}

The \texttt{limit} parameter is in Kb/s.
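As a sketch, the same command can also target a Client so that its future
jobs are throttled (the client name below is an illustrative assumption):

\begin{verbatim}
* setbandwidth limit=5000 client=localhost-fd    # roughly 5Mb/s
\end{verbatim}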
This project was funded by Bacula Systems and is available in
the Enterprise Edition.
\section{Incremental/Differential Block Level Difference Backup}

The new \texttt{delta} Plugin is able to compute and apply signature-based
file differences. It can be used to back up only the changes in a large
binary file such as an Outlook PST, a VirtualBox/VMware image, or a database
file.

It supports both Incremental and Differential backups and stores its
signature database in the File Daemon working directory. This plugin is
available on all platforms, including 32 and 64-bit Windows.

The Accurate option should be turned on in the Job resource.

\begin{verbatim}
FileSet {
  ...
  Include {
    Plugin = "delta:/home/eric/.VirtualBox/HardDisks/lenny-i386.vdi"
  }
}
\end{verbatim}

\begin{verbatim}
FileSet {
  Name = DeltaFS-Include
  Include {
    ...
    # Use the Options{} filtering and options
    File = /home/user/.VirtualBox
  }
}
\end{verbatim}
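Putting the pieces together, a minimal sketch of a Job using the delta Plugin
follows; the job, client, and FileSet names and the signature option are
illustrative assumptions, not a definitive configuration:

\begin{verbatim}
Job {
  Name = vbox-delta-backup     # illustrative
  Client = localhost-fd        # illustrative
  FileSet = DeltaFS
  Accurate = yes               # required for the delta Plugin
  ...
}

FileSet {
  Name = DeltaFS
  Include {
    Options { signature = MD5 }   # assumed option, for illustration
    Plugin = "delta:/home/eric/.VirtualBox/HardDisks/lenny-i386.vdi"
  }
}
\end{verbatim}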
Please contact Bacula Systems support to get Delta Plugin specific
documentation.

This project was funded by Bacula Systems and is available with the Bacula
Enterprise Edition.
\section{SAN Shared Tape Storage Plugin}

The problem with backing up multiple servers at the same time to the
same tape library (or autoloader) is that if both servers access the
same tape drive at the same time, you will very likely get data corruption.
This is where the Bacula Systems shared tape storage plugin comes into play.
The plugin ensures that only one server at a time can connect to each device
(tape drive) by using the SPC-3 SCSI reservation protocol. Please contact
Bacula Systems support to get SAN Shared Storage Plugin specific
documentation.

This project was funded by Bacula Systems and is available with the Bacula
Enterprise Edition.
\section{Advanced Autochanger Usage}

The new \texttt{Shared Storage} Director directive is a Bacula Enterprise
feature that allows you to share volumes between different Storage
resources. This directive should be used \textbf{only} if all \texttt{Media
Type} directives are correctly set across all Devices.

The \texttt{Shared Storage} directive should be used when using the SAN
Shared Storage plugin or when the Director's Storage resources access the
Devices of an Autochanger directly.

When sharing volumes between different Storage resources, you will
also need to use the \texttt{reset-storageid} script before using the
\texttt{update slots} command. This script can be scheduled once a day, for
example in an Admin job.

\begin{verbatim}
$ /opt/bacula/scripts/reset-storageid MediaType StorageName
* update slots storage=StorageName drive=0
\end{verbatim}
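The directive itself goes in the Director's Storage resource; a minimal
sketch, where the resource name and media type are illustrative assumptions:

\begin{verbatim}
Storage {
  Name = Autochanger-Drive0   # illustrative
  ...
  Media Type = LTO-5          # must be set consistently across all
                              # Devices that share volumes
  Shared Storage = yes
}
\end{verbatim}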
Please contact Bacula Systems support to get help with this advanced
configuration.

This project was funded by Bacula Systems and is available with the Bacula
Enterprise Edition.
\section{Enhancement of the NDMP Plugin}

The previous NDMP Plugin 4.0 fully supported only NetApp hardware; the
new NDMP Plugin should now be able to support all NAS vendors through the
\texttt{volume\_format} plugin command option.

On some NDMP devices, such as the EMC Celerra or BlueArc, the administrator
can use arbitrary volume structure names, for example:

\begin{verbatim}
/rootvolume/volume_tmp
\end{verbatim}

The NDMP plugin must be aware of the volume structure organization in order
to detect whether the administrator wants to restore to a new volume
(\texttt{where=/dev/vol\_tmp}) or into a subdirectory of the targeted volume
(\texttt{where=/tmp}).

\begin{verbatim}
Plugin = "ndmp:host=nasbox user=root pass=root file=/dev/vol1 volume_format=/dev/"
\end{verbatim}

Please contact Bacula Systems support to get NDMP Plugin specific
documentation.

This project was funded by Bacula Systems and is available with the Bacula
Enterprise Edition.
\section{Always Backup a File}

When the Accurate mode is turned on, you can decide to always back up a file
by using the new {\bf A} Accurate option in your FileSet.
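As a sketch of how the option is used (the job, FileSet, and file names are
illustrative assumptions), the {\bf A} flag in an Options resource forces the
files matched by that block to always be backed up:

\begin{verbatim}
Job {
  Name = test-always        # illustrative
  FileSet = FS_Example
  Accurate = yes
  ...
}

FileSet {
  Name = FS_Example
  Include {
    Options {
      Accurate = A          # always back up files in this Options block
    }
    File = /etc/passwd      # illustrative
  }
}
\end{verbatim}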
This project was funded by Bacula Systems based on an idea of James Harper and
is available with the Bacula Enterprise Edition.
\section{Setting Accurate Mode at Runtime}

You are now able to specify the Accurate mode on the \texttt{run} command and
in the Schedule resource.

\begin{verbatim}
* run accurate=yes job=Test
\end{verbatim}

\begin{verbatim}
Schedule {
  ...
  Run = Full 1st sun at 23:05
  Run = Differential accurate=yes 2nd-5th sun at 23:05
  Run = Incremental accurate=no mon-sat at 23:05
}
\end{verbatim}

This can allow you to save memory and CPU resources on the catalog server in
some cases.

These advanced tuning options are available with the Bacula Enterprise Edition.
% Common with community
\section{Additions to RunScript variables}
You can now access JobBytes, JobFiles, and the Director name using \%b, \%F,
and \%D in your RunScript commands. The Client address is now available
through \%h.

\begin{verbatim}
RunAfterJob = "/bin/echo Job=%j JobBytes=%b JobFiles=%F ClientAddress=%h Dir=%D"
\end{verbatim}
\section{LZO Compression}

LZO compression has been added to the Unix File Daemon. From the user's point
of view, it works like GZIP compression (just replace {\bf compression=GZIP}
with {\bf compression=LZO}).

\begin{verbatim}
Options { compression=LZO }
\end{verbatim}

LZO provides much faster compression and decompression speeds but a lower
compression ratio than GZIP. It is a good option when you back up to disk.
For tape, the drive's built-in compression may be a better option.

LZO is a good alternative to GZIP1 when you don't want to slow down your
backup. On a modern CPU it should be able to run almost as fast as:

\begin{itemize}
\item your client can read data from disk, unless you have very fast disks
  like SSDs or a large/fast RAID array.
\item the data transfers between the File daemon and the Storage daemon, even
  on a 1Gb/s link.
\end{itemize}

Note that Bacula uses only one LZO compression level, LZO1X-1.
The code for this feature was contributed by Laurent Papier.
\section{New Tray Monitor}

Since the old integrated Windows tray monitor doesn't work with
recent Windows versions, we have written a new Qt Tray Monitor that is
available for both Linux and Windows. In addition to all the previous
features, this new version allows you to run Backups from
the tray monitor menu.

\begin{figure}[htbp]
  \centering
  \includegraphics[width=10cm]{\idir tray-monitor}
  \caption{New tray monitor}
  \label{fig:traymonitor}
\end{figure}

\begin{figure}[htbp]
  \centering
  \includegraphics[width=10cm]{\idir tray-monitor1}
  \caption{Run a Job through the new tray monitor}
  \label{fig:traymonitor1}
\end{figure}

To be able to run a job from the tray monitor, you need to
allow specific commands in the Director monitor console:

\begin{verbatim}
Console {
  ...
  CommandACL = status, .clients, .jobs, .pools, .storage, .filesets, .messages, run
  ClientACL = *all*         # you can restrict to a specific host
  ...
}
\end{verbatim}

This project was funded by Bacula Systems and is available with the Bacula
Enterprise Edition and the Community Edition.
\section{Purge Migration Job}

The new {\bf Purge Migration Job} directive may be added to the Migration
Job definition in the Director's configuration file. When it is enabled,
the Job that was migrated during a migration will be purged at
the end of the migration job.

\begin{verbatim}
Job {
  ...
  Client = localhost-fd
  Storage = DiskChanger
  Selection Pattern = ".*Save"
  Purge Migration Job = yes
}
\end{verbatim}

This project was submitted by Dunlap Blake; testing and documentation was
funded by Bacula Systems.
\section{Changes in the Pruning Algorithm}

We rewrote the job pruning algorithm in this version. Previously, some users
reported that the pruning process at the end of jobs took a very long time;
that should no longer be the case. Now, Bacula will not automatically prune
a Job if that particular Job is needed to restore data. Example:

\begin{verbatim}
JobId: 1  Level: Full
JobId: 2  Level: Incremental
JobId: 3  Level: Incremental
JobId: 4  Level: Differential
.. Other incrementals up to now
\end{verbatim}

In this example, if the Job Retention defined in the Pool or in the Client
resource allows Jobs with JobIds 1 through 4 to be pruned, Bacula will
detect that JobIds 1 and 4 are essential to restore data to the current state
and will prune only JobIds 2 and 3.

{\bf Important}: this change affects only the automatic pruning step after a
Job and the \texttt{prune jobs} bconsole command. If a volume expires after
the \texttt{VolumeRetention} period, important jobs can still be pruned.
\section{Ability to Verify any specified Job}
You now have the ability to tell Bacula which Job to verify instead of
automatically verifying just the last one.

This feature can be used with the VolumeToCatalog, DiskToCatalog, and
Catalog levels.

To verify a given job, just specify its JobId as an argument when starting
the job:

\begin{verbatim}
*run job=VerifyVolume jobid=1 level=VolumeToCatalog
JobName:     VerifyVolume
Level:       VolumeToCatalog
...
Pool:        Default (From Job resource)
Storage:     File (From Job resource)
Verify Job:  VerifyVol.2010-09-08_14.17.17_03
Verify List: /tmp/regress/working/VerifyVol.bsr
When:        2010-09-08 14:17:31
...
OK to run? (yes/mod/no):
\end{verbatim}

This project was funded by Bacula Systems and is available with the Bacula
Enterprise Edition and Community Edition.