Bacula Projects Roadmap
Status updated 14 Jun 2009
Item 1: Ability to restart failed jobs
*Item 2: 'restore' menu: enter a JobId, automatically select dependents
Item 3: Scheduling syntax that permits more flexibility and options
Item 4: Data encryption on storage daemon
Item 5: Deletion of disk Volumes when pruned
Item 6: Implement Base jobs
Item 7: Add ability to Verify any specified Job
Item 8: Improve Bacula's tape and drive usage and cleaning management
Item 9: Allow FD to initiate a backup
*Item 10: Restore from volumes on multiple storage daemons
Item 11: Implement Storage daemon compression
Item 12: Reduction of communications bandwidth for a backup
Item 13: Ability to reconnect a disconnected comm line
Item 14: Start spooling even when waiting on tape
Item 15: Enable/disable compression depending on storage device (disk/tape)
Item 16: Include all conf files in specified directory
Item 17: Multiple threads in file daemon for the same job
Item 18: Possibility to schedule Jobs on the last Friday of the month
Item 19: Include timestamp of job launch in "stat clients" output
*Item 20: Cause daemons to use a specific IP address to source communications
Item 21: Message mailing based on backup types
Item 22: Ability to import/export Bacula database entities
*Item 23: "Maximum Concurrent Jobs" for drives when used with changer device
Item 24: Implementation of running Job speed limit
Item 25: Add an override in Schedule for Pools based on backup types
Item 26: Automatic promotion of backup levels based on backup size
Item 27: Allow inclusion/exclusion of files in a fileset by creation/mod times
Item 28: Archival (removal) of User Files to Tape
Item 29: An option to operate on all pools with update vol parameters
Item 30: Automatic disabling of devices
*Item 31: List InChanger flag when doing restore
Item 32: Ability to defer Batch Insert to a later time
Item 33: Add MaxVolumeSize/MaxVolumeBytes statement to Storage resource
Item 34: Enable persistent naming/number of SQL queries
Item 35: Port bat to Win32
Item 36: Bacula Dir, FD and SD to support proxies
Item 37: Add Minimum Spool Size directive
Item 38: Backup and Restore of Windows Encrypted Files using Win raw encryption
Item 39: Implement an interface between Bacula and Amazon's S3
Item 40: Convert Bacula's existing tray monitor on Windows to a standalone program
Item 1: Ability to restart failed jobs

What: Often jobs fail because of a communications line drop, exceeding the
  maximum run time, a cancel, or some other non-critical problem. Currently
  any data saved is lost. This implementation should modify the Storage
  daemon so that it saves all the files that it knows are completely backed
  up. The jobs should then be marked as incomplete, and a subsequent
  Incremental Accurate backup will then take into account all the files
  already saved.

Why: Avoids backing up data already saved.

Notes: Requires Accurate mode to restart correctly. A minimum volume of
  data or files must be stored on the Volume before this is enabled.
Item 2: 'restore' menu: enter a JobId, automatically select dependents
Origin: Graham Keeling (graham@equiinet.com)

What: Add to the bconsole 'restore' menu the ability to select a job
  by JobId, and have Bacula automatically select all the dependent jobs.

Why: Currently, you either have to...
  a) laboriously type in a date that is greater than the date of the
  backup that you want and is less than the subsequent backup (Bacula
  then figures out the dependent jobs), or
  b) manually figure out all the JobIds that you want and laboriously
  type them all in. It would be extremely useful (in a programmatic
  sense, as well as for humans) to be able to just give it a single JobId
  and let Bacula do the hard work (work that it already knows how to do).

Notes (Kern): I think this should either be modified to have Bacula
  print a list of dates that the user can choose from, as is done in
  bwx-console and bat, or the name of this command must be carefully
  chosen so that the user clearly understands that the JobId is being
  used to specify what Job and the date to which he wishes the restore
  to be done.
Item 3: Scheduling syntax that permits more flexibility and options
Date: 15 December 2006
Origin: Gregory Brauer (greg at wildbrain dot com) and
  Florian Schnabel <florian.schnabel at docufy dot de>

What: Currently, Bacula only understands how to deal with weeks of the
  month or weeks of the year in schedules. This makes it impossible
  to do a true weekly rotation of tapes. There will always be a
  discontinuity that will require disruptive manual intervention at
  least monthly or yearly because week boundaries never align with
  month or year boundaries.

  A solution would be to add a new syntax that defines (at least)
  a start timestamp and a repetition period.

  Also useful: an easy option to skip a certain job on a certain date.

Why: Rotated backups done at weekly intervals are useful, and Bacula
  cannot currently do them without extensive hacking.

  You could then easily skip tape backups on holidays. Especially
  if you have no autochanger and can only fit one backup on a tape,
  that would be really handy: other jobs could proceed normally
  and you won't get errors that way.

Notes: Here is an example syntax showing a 3-week rotation where Full
  backups would be performed every week on Saturday, and an
  Incremental would be performed every week on Tuesday. Each
  set of tapes could be removed from the loader for the following
  two cycles before coming back and being reused on the third
  week. Since the execution times are determined by intervals
  from a given point in time, there will never be any issues with
  having to adjust to any sort of arbitrary time boundary. In
  the example provided, I even define the starting schedule
  as crossing both a year and a month boundary, but the run times
  would be based on the "Repeat" value and would therefore happen
  at the intended intervals regardless.

  Schedule {
      Name = "Week 1 Rotation"
      #Saturday. Would run Dec 30, Jan 20, Feb 10, etc.
      Run {
          Options {
              Type   = Full
              Start  = 2006-12-30 01:00
              Repeat = 3w
          }
      }
      #Tuesday. Would run Jan 2, Jan 23, Feb 13, etc.
      Run {
          Options {
              Type   = Incremental
              Start  = 2007-01-02 01:00
              Repeat = 3w
          }
      }
  }

  Schedule {
      Name = "Week 2 Rotation"
      #Saturday. Would run Jan 6, Jan 27, Feb 17, etc.
      Run {
          Options {
              Type   = Full
              Start  = 2007-01-06 01:00
              Repeat = 3w
          }
      }
      #Tuesday. Would run Jan 9, Jan 30, Feb 20, etc.
      Run {
          Options {
              Type   = Incremental
              Start  = 2007-01-09 01:00
              Repeat = 3w
          }
      }
  }

  Schedule {
      Name = "Week 3 Rotation"
      #Saturday. Would run Jan 13, Feb 3, Feb 24, etc.
      Run {
          Options {
              Type   = Full
              Start  = 2007-01-13 01:00
              Repeat = 3w
          }
      }
      #Tuesday. Would run Jan 16, Feb 6, Feb 27, etc.
      Run {
          Options {
              Type   = Incremental
              Start  = 2007-01-16 01:00
              Repeat = 3w
          }
      }
  }

Notes: Kern: I have merged the previously separate project of skipping
  jobs (via Schedule syntax) into this one.
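The interval-based semantics proposed above (a Start timestamp plus a Repeat period) can be illustrated with a short Python sketch. This is purely illustrative, not Bacula code; `run_times` is a hypothetical helper and the 3-week repeat is taken from the example schedules.

```python
from datetime import datetime, timedelta

def run_times(start, repeat, until):
    """Yield run times at fixed intervals from a start timestamp.

    Unlike week-of-month rules, pure interval arithmetic never has
    to adjust to a month or year boundary.
    """
    t = start
    while t <= until:
        yield t
        t += repeat

# Week 1 Rotation, Full: Start = 2006-12-30 01:00, Repeat = 3 weeks
runs = list(run_times(datetime(2006, 12, 30, 1, 0),
                      timedelta(weeks=3),
                      datetime(2007, 2, 28)))
# Runs fall on Dec 30, Jan 20, Feb 10 -- matching the comments above.
```

Note how the second run (Jan 20) lands cleanly despite crossing both a month and a year boundary, which is the point of the proposal.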
Item 4: Data encryption on storage daemon
Origin: Tobias Barth <tobias.barth at web-arts.com>
Date: 04 February 2009

What: The storage daemon should be able to do the data encryption that can
  currently be done by the file daemon.

Why: This would have two advantages:
  1) one could encrypt the data of unencrypted tapes by doing a
  migration job
  2) the storage daemon would be the only machine that would have
  to keep the encryption keys.

Notes: As an addendum to the feature request, here are some crypto
  implementation details I wrote up regarding SD-encryption back in January:
  http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg28860.html
Item 5: Deletion of disk Volumes when pruned
Origin: Ross Boylan <RossBoylan at stanfordalumni dot org> (edited)
Status: Truncate operation implemented in 3.1.4

What: Provide a way for Bacula to automatically remove Volumes
  from the filesystem, or optionally to truncate them.
  Obviously, the Volume must be pruned prior to removal.

Why: This would allow users more control over their Volumes and
  prevent disk-based volumes from consuming too much space.

Notes: The following two directives might do the trick:

  Volume Data Retention = <time period>
  Remove Volume After = <time period>

  The migration project should also remove a Volume that is
  migrated. This might also work for tape Volumes.

Notes: (Kern) The data fields to control this have been added
  to the new 3.0.0 database table structure.
Item 6: Implement Base jobs
Date: 28 October 2005

What: A base job is sort of like a Full save except that you
  will want the FileSet to contain only files that are
  unlikely to change in the future (i.e. a snapshot of
  most of your system after installing it). After the
  base job has been run, when you are doing a Full save,
  you specify one or more Base jobs to be used. All
  files that have been backed up in the Base job/jobs but
  not modified will then be excluded from the backup.
  During a restore, the Base jobs will be automatically
  pulled in where necessary.

Why: This is something none of the competition does, as far as
  we know (except perhaps BackupPC, which is a Perl program that
  saves to disk only). It is a big win for the user; it
  makes Bacula stand out as offering a unique
  optimization that immediately saves time and money.
  Basically, imagine that you have 100 nearly identical
  Windows or Linux machines containing the OS and user
  files. Now for the OS part, a Base job will be backed
  up once, and rather than making 100 copies of the OS,
  there will be only one. If one or more of the systems
  have some files updated, no problem; they will be
  automatically restored.

Notes: Huge savings in tape usage even for a single machine.
  Will require more resources because the DIR must send the
  FD a list of files/attribs, and the FD must search the
  list and compare it for each file to be saved.
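The compare step the FD would have to perform can be sketched as follows. This is a hypothetical illustration, not the actual list/compare protocol: paths map to attribute tuples, and a file is skipped when the Base job already holds an identical copy.

```python
def files_to_save(current, base):
    """current/base: dict mapping path -> (mtime, size) attribute tuple.

    Returns only the files the Full backup still has to save: anything
    absent from the base job, or present with different attributes.
    """
    return {p: a for p, a in current.items() if base.get(p) != a}

# OS files unchanged since the Base job are excluded automatically.
base = {"/bin/sh": (100, 500), "/etc/motd": (100, 20)}
cur  = {"/bin/sh": (100, 500), "/etc/motd": (300, 25), "/home/x": (1, 2)}
saved = files_to_save(cur, base)   # only /etc/motd and /home/x
```

With 100 nearly identical machines, each client's `files_to_save` result excludes the shared OS portion, which is the savings described above.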
Item 7: Add ability to Verify any specified Job
Date: 17 January 2008
Origin: portrix.net Hamburg, Germany
Contact: Christian Sabelmann
Status: 70% of the required code is part of the Verify function since v. 2.x

What: The ability to tell Bacula which Job it should verify, instead of
  automatically verifying just the last one.

Why: It is sad that such a powerful feature as Verify Jobs
  (VolumeToCatalog) is restricted to being used only with the last backup
  Job of a client. Users who have to do daily Backups are forced to
  also do daily Verify Jobs in order to take advantage of this useful
  feature. This "daily Verify after Backup" practice is not always desired,
  and Verify Jobs sometimes have to be scheduled (not necessarily
  scheduled in Bacula). With this feature, admins could verify Jobs once a
  week or a few times per month, selecting the Jobs they want to verify.
  This feature is also not too difficult to implement, taking into account
  older bug reports about this feature and the selection of the Job to be
  verified.

Notes: For the verify Job, the user could select the Job to be verified
  from a list of the latest Jobs of a client. It would also be possible to
  verify a certain volume. All of this would naturally apply only to
  Jobs whose file information is still in the catalog.
Item 8: Improve Bacula's tape and drive usage and cleaning management
Date: 8 November 2005, November 11, 2005
Origin: Adam Thornton <athornton at sinenomine dot net>,
  Arno Lehmann <al at its-lehmann dot de>

What: Make Bacula manage tape life cycle information, tape reuse
  times, and drive cleaning cycles.

Why: All three parts of this project are important when operating tape
  backups. We need to know which tapes need replacement, and we need to
  make sure the drives are cleaned when necessary. While many
  tape libraries and even autoloaders can handle all this
  automatically, support by Bacula can be helpful for smaller
  (older) libraries and single drives. Limiting the number of
  times a tape is used might prevent tape errors, compared to using
  tapes until the drives can't read them any more. Also, checking
  drive status during operation can prevent some failures (as I
  [Arno] had to learn the hard way...).

Notes: First, Bacula could (and even does, to some limited extent)
  record tape and drive usage. For tapes, the number of mounts,
  the amount of data, and the time the tape has actually been
  running could be recorded. Data fields for Read and Write
  time and Number of mounts already exist in the catalog (I'm
  not sure if VolBytes is the sum of all bytes ever written to
  that volume by Bacula). This information can be important
  when determining which media to replace. The ability to mark
  Volumes as "used up" after a given number of write cycles
  should also be implemented so that a tape is never actually
  worn out. For the tape drives known to Bacula, similar
  information is interesting to determine the device status and
  expected lifetime: time it's been reading and writing, number
  of tape loads / unloads / errors. This information is not yet
  recorded as far as I [Arno] know. A new volume status would
  be necessary for the new state, like "Used up" or "Worn out".
  Volumes with this state could be used for restores, but not
  for writing. These volumes should be migrated first (assuming
  migration is implemented) and, once they are no longer needed,
  could be moved to a Trash pool.

  The next step would be to implement a drive cleaning setup.
  Bacula already has knowledge about cleaning tapes. Once it
  has some information about cleaning cycles (measured in drive
  run time, number of tapes used, or calendar days, for example)
  it could automatically execute tape cleaning (with an
  autochanger, obviously) or ask for operator assistance loading
  a cleaning tape.

  The final step would be to implement TAPEALERT checks not only
  when changing tapes and only sending the information to the
  administrator, but rather checking after each tape error,
  checking on a regular basis (for example after each tape
  file), and also before unloading and after loading a new tape.
  Then, depending on the drive's TAPEALERT state and the known
  drive cleaning state, Bacula could automatically schedule later
  cleaning, clean immediately, or inform the operator.

  Implementing this would perhaps require another catalog change
  and perhaps major changes in SD code and the DIR-SD protocol,
  so I'd only consider this worth implementing if it would
  actually be used or even needed by many people.

  Implementation of these projects could happen in three distinct
  sub-projects: measuring tape and drive usage, retiring
  volumes, and handling drive cleaning and TAPEALERTs.
Item 9: Allow FD to initiate a backup
Origin: Frank Volf (frank at deze dot org)
Date: 17 November 2005

What: Provide some means, possibly via a restricted console, that
  allows an FD to initiate a backup, and that uses the connection
  established by the FD to the Director for the backup, so that
  a Director that is firewalled can do the backup.

Why: Makes backup of laptops much easier.
Item 10: Restore from volumes on multiple storage daemons
Origin: Graham Keeling (graham@equiinet.com)
Status: Done in 3.0.2

What: The ability to restore from volumes held by multiple storage daemons
  would be very useful.

Why: It is useful to be able to back up to any number of different storage
  daemons. For example, your first storage daemon may run out of space,
  so you switch to your second and carry on. Bacula will currently let
  you do this. However, once you come to restore, Bacula cannot cope
  when volumes on different storage daemons are required.

Notes: The director knows that more than one storage daemon is needed,
  as bconsole outputs something like the following table.

    The job will require the following
      Volume(s)        Storage(s)       SD Device(s)
    =====================================================================
      backup-0001      Disk 1           Disk 1.0
      backup-0002      Disk 2           Disk 2.0

  However, the bootstrap file that it creates gets sent to the first
  storage daemon only, which then stalls for a long time, 'waiting for a
  mount request' for the volume that it doesn't have. The bootstrap file
  contains no knowledge of the storage daemon. Under the current design:

    The director connects to the storage daemon, and gets an sd_auth_key.
    The director then connects to the file daemon, and gives it the
    sd_auth_key with the 'jobcmd'. (restoring of files happens) The
    director does a 'wait_for_storage_daemon_termination()'. The director
    waits for the file daemon to indicate the end of the job.

  Proposed design:

    The director connects to the file daemon.
    Then, for each storage daemon in the .bsr file... {
      The director connects to the storage daemon, and gets an sd_auth_key.
      The director then connects to the file daemon, and gives it the
      sd_auth_key with the 'storaddr' command.
      (restoring of files happens)
      The director does a 'wait_for_storage_daemon_termination()'.
      The director waits for the file daemon to indicate the end of the
      work on this storage.
    }
    The director tells the file daemon that there are no more storages to
    contact. The director waits for the file daemon to indicate the end of
    the job.

  As you can see, each restore between the file daemon and
  storage daemon is handled in the same way that it is currently handled,
  using the same method for authentication, except that the sd_auth_key
  is moved from the 'jobcmd' to the 'storaddr' command, where it
  serves the same purpose.
Item 11: Implement Storage daemon compression
Date: 18 December 2006
Origin: Vadim A. Umanski, e-mail umanski@ext.ru

What: The ability to compress backup data on the SD receiving the data
  instead of doing it on the client sending the data.

Why: The need is practical. I've got some machines that can send
  data to the network 4 or 5 times faster than they can compress
  it (I've measured that). They're using fast enough SCSI/FC
  disk subsystems but rather slow CPUs (e.g. UltraSPARC II),
  and the backup server has quite fast CPUs (e.g. dual P4
  Xeons) and quite a low load. When you have 20, 50 or 100 GB
  of raw data, running a job 4 to 5 times faster really
  matters. On the other hand, the data can be compressed 50% or
  better, so losing twice the space for disk backup is not good
  at all. And the network is all mine (I have a dedicated
  management/provisioning network) and I can get as much bandwidth
  as I need - 100Mbps, 1000Mbps... That's why the server-side
  compression feature is needed!
Item 12: Reduction of communications bandwidth for a backup
Date: 14 October 2008
Origin: Robin O'Leary (Equiinet)

What: Using rdiff techniques, Bacula could significantly reduce
  the network data transfer volume needed to do a backup.

Why: Faster backup across the Internet.

Notes: This requires retaining certain data on the client during a Full
  backup that will speed up subsequent backups.
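The data retained on the client could be block signatures of the previous Full, so later backups ship only changed blocks. The sketch below is a deliberate simplification (all names are illustrative, not Bacula code): it uses fixed-offset blocks, whereas rdiff proper uses a rolling checksum that also matches data shifted by insertions.

```python
import hashlib

BLOCK = 4  # tiny block size for the demo; real tools use KB-sized blocks

def signatures(data):
    """Block signatures a client could retain from the previous Full."""
    return [hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)]

def delta(new, old_sigs):
    """Return only blocks whose signature changed, as (index, bytes) pairs.

    This is what would travel over the network instead of the whole file.
    """
    out = []
    for i in range(0, len(new), BLOCK):
        block = new[i:i + BLOCK]
        n = i // BLOCK
        if n >= len(old_sigs) or hashlib.sha256(block).digest() != old_sigs[n]:
            out.append((n, block))
    return out

old = b"aaaabbbbcccc"
new = b"aaaaXXXXcccc"
changes = delta(new, signatures(old))   # only the middle block is sent
```

Only the signatures (a few bytes per block) need to be kept on the client; the server reconstructs the file from its old copy plus the delta.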
Item 13: Ability to reconnect a disconnected comm line

What: Often jobs fail because of a communications line drop. In that
  case, Bacula should be able to reconnect to the other daemon and
  resume the job.

Why: Avoids backing up data already saved.

Notes: *Very* complicated from a design point of view because of
  authentication.
Item 14: Start spooling even when waiting on tape
Origin: Tobias Barth <tobias.barth@web-arts.com>

What: If a job can be spooled to disk before writing it to tape, it should
  be spooled immediately. Currently, Bacula waits until the correct
  tape is inserted into the drive.

Why: It could save hours. When Bacula waits on the operator who must insert
  the correct tape (e.g. a new tape or a tape from another media
  pool), Bacula could already prepare the spooled data in the spooling
  directory and immediately start despooling when the tape is
  inserted by the operator.

  As a second step: use two or more spooling directories. When one
  directory is currently despooling, the next (on different disk drives)
  could already be spooling the next data.

Notes: I am using Bacula 2.2.8, which has none of those features.
Item 15: Enable/disable compression depending on storage device (disk/tape)
Origin: Ralf Gross <ralf-lists@ralfgross.de>
Status: Initial Request

What: Add a new option to the storage resource of the director. Depending
  on this option, compression will be enabled/disabled for a device.

Why: If different devices (disks/tapes) are used for full/diff/incr
  backups, software compression will be enabled for all backups
  because of the FileSet compression option. For backup to tapes
  which are able to do hardware compression this is not desired.
  (See also the discussion at
  http://news.gmane.org/gmane.comp.sysutils.backup.bacula.devel/cutoff=11124)

  It must be clear to the user that the FileSet compression option
  must still be enabled to use compression for a backup job at all.
  Thus a name for the new option in the director must be chosen
  carefully.

Notes: KES: I think the Storage definition should probably override what
  is in the Job definition or vice versa, but in any case, it must
  be clearly defined.
Item 16: Include all conf files in specified directory
Date: 18 October 2008
Origin: Database, Lda. Maputo, Mozambique
Contact: Cameron Smith / cameron.ord@database.co.mz

What: A directive something like "IncludeConf = /etc/bacula/subconfs". Every
  time the Bacula Director restarts or reloads, it will walk the given
  directory (non-recursively) and include the contents of any files
  therein, as though they were appended to bacula-dir.conf.

Why: Permits simplified and safer configuration for larger installations with
  many client PCs. Currently, through judicious use of JobDefs and
  similar directives, it is possible to reduce the client-specific part of
  a configuration to a minimum. The client-specific directives can be
  prepared according to a standard template and dropped into a known
  directory. However, it is still necessary to add a line to the "master"
  (bacula-dir.conf) referencing each new file. This exposes the master to
  unnecessary risk of accidental mistakes and makes automation of adding
  new client confs more difficult (it is easier to automate dropping a
  file into a dir than rewriting an existing file). Ken has previously
  made a convincing argument for NOT including Bacula's core configuration
  in an RDBMS, but I believe that the present request is a reasonable
  extension to the current "flat-file-based" configuration philosophy.

Notes: There is NO need for any special syntax in these files. They should
  contain standard directives which are simply "inlined" into the parent
  file, as already happens when you explicitly reference an external file.

Notes: (kes) This can already be done with scripting.
  From: John Jorgensen <jorgnsn@lcd.uregina.ca>
  The bacula-dir.conf at our site contains these lines:

    # Include subfiles associated with configuration of clients.
    # They define the bulk of the Clients, Jobs, and FileSets.
    @|"sh -c 'for f in /etc/bacula/clientdefs/*.conf ; do echo @${f} ; done'"

  and when we get a new client, we just put its configuration into
  a new file called something like:

    /etc/bacula/clientdefs/clientname.conf
Item 17: Multiple threads in file daemon for the same job
Date: 27 November 2005
Origin: Ove Risberg (Ove.Risberg at octocode dot com)

What: I want the file daemon to start multiple threads for a backup
  job so the fastest possible backup can be made.

  The file daemon could parse the FileSet information and start
  one thread for each File entry located on a separate
  filesystem.

  A configuration option in the job section should be used to
  enable or disable this feature. The configuration option could
  specify the maximum number of threads in the file daemon.

  If the threads could spool the data to separate spool files,
  the restore process would not be much slower.

Why: Multiple concurrent backups of a large fileserver with many
  disks and controllers will be much faster.
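The per-File-entry threading idea can be sketched with a thread pool standing in for the FD's worker threads. This is a hypothetical illustration, not FD code; `backup_entry` is a placeholder for reading and shipping one File entry's data.

```python
from concurrent.futures import ThreadPoolExecutor

def backup_entry(path):
    """Placeholder: back up one FileSet File entry and report status."""
    # A real worker would walk the tree, read files, and stream them
    # (ideally to its own spool file, as suggested above).
    return (path, "ok")

def parallel_backup(entries, max_threads=4):
    """One worker per File entry, capped by a configured maximum,
    mirroring the proposed 'maximum number of threads' job option."""
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        return dict(pool.map(backup_entry, entries))

# Entries on separate disks/controllers can proceed concurrently.
result = parallel_backup(["/disk1", "/disk2", "/disk3"])
```

Capping `max_threads` matters: more workers than independent spindles would just make the disks seek against each other.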
Item 18: Possibility to schedule Jobs on the last Friday of the month
Origin: Carsten Menke <bootsy52 at gmx dot net>

What: Currently, if you want to run your monthly Backups on the last
  Friday of each month, this is only possible with workarounds (e.g.
  scripting), as some months have 4 Fridays and some have 5.
  The same is true if you plan to run your yearly Backups on the
  last Friday of the year. It would be nice to have the ability to
  use the built-in scheduler for this.

Why: In many companies the last working day of the week is Friday (or
  Saturday), so to get the most data of the month onto the monthly
  tape, the employees are advised to insert the tape for the
  monthly backups on the last Friday of the month.

Notes: To give this complete functionality, it would be nice if the
  "first" and "last" keywords could be implemented in the
  scheduler, so it is also possible to run monthly backups on the
  first Friday of the month, and many things more. So if the syntax
  would expand to {first|last} {Month|Week|Day|Mo-Fri} of the
  {Year|Month|Week}, you would be able to run really flexible jobs.

  To get a certain Job to run on the last Friday of the month, for
  example:

    Run = pool=Monthly last Fri of the Month at 23:50

    ## Yearly Backup on the last Friday of the year
    Run = pool=Yearly last Fri of the Year at 23:50

    ## Certain Jobs the last Week of a Month
    Run = pool=LastWeek last Week of the Month at 23:50

    ## Monthly Backup on the last day of the month
    Run = pool=Monthly last Day of the Month at 23:50
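The "last Friday" computation such a scheduler would need is simple calendar arithmetic. A minimal sketch (not Bacula's scheduler; `last_weekday` is a hypothetical helper):

```python
import calendar
from datetime import date

def last_weekday(year, month, weekday):
    """Date of the last given weekday (0=Mon .. 6=Sun) in a month.

    Works identically for 4-Friday and 5-Friday months: start from the
    month's last day and step back to the wanted weekday.
    """
    last_day = calendar.monthrange(year, month)[1]
    d = date(year, month, last_day)
    offset = (d.weekday() - weekday) % 7
    return d.replace(day=last_day - offset)

# Last Friday (weekday 4) of a 4-Friday month and a 5-Friday month:
feb = last_weekday(2009, 2, 4)   # February 2009 has four Fridays
may = last_weekday(2009, 5, 4)   # May 2009 has five Fridays
```

The same stepping-back trick generalizes to "last Day of the Month" (offset 0) and, run against December, to "last Fri of the Year".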
Item 19: Include timestamp of job launch in "stat clients" output
Origin: Mark Bergman <mark.bergman@uphs.upenn.edu>
Date: Tue Aug 22 17:13:39 EDT 2006

What: The "stat clients" command doesn't include any detail on when
  the active backup jobs were launched.

Why: Including the timestamp would make it much easier to decide whether
  a job is running properly.

Notes: It may be helpful to have the output from "stat clients" formatted
  more like that from "stat dir" (and other commands), in a column
  format. The per-client information that's currently shown (level,
  client name, JobId, Volume, pool, device, Files, etc.) is good, but
  somewhat hard to parse (both programmatically and visually),
  particularly when there are many active clients.
Item 20: Cause daemons to use a specific IP address to source communications
Origin: Bill Moran <wmoran@collaborativefusion.com>
Status: Done in 3.0.2

What: Cause Bacula daemons (dir, fd, sd) to always use the IP address
  specified in the [DIR|FD|SD]Addr directive as the source IP
  for initiating communication.

Why: On complex networks, as well as extremely secure networks, it's
  not unusual to have multiple possible routes through the network.
  Often, each of these routes is secured by different policies
  (effectively, firewalls allow or deny different traffic depending
  on the source address). Unfortunately, it can sometimes be difficult
  or impossible to represent this in a system routing table, as the
  result is excessive subnetting that quickly exhausts available IP
  space. The best available workaround is to provide multiple IPs to
  a single machine that are all on the same subnet. In order
  for this to work properly, applications must support the ability
  to bind outgoing connections to a specified address; otherwise
  the operating system will always choose the first IP that
  matches the required route.

Notes: Many other programs support this. For example, the following
  can be configured in BIND:

    query-source address 10.0.0.1;
    transfer-source 10.0.0.2;

  which means queries from this server will always come from
  10.0.0.1 and zone transfers will always originate from 10.0.0.2.
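At the socket level, the change described above amounts to binding the outgoing socket to the configured address before connecting. A minimal illustration (the function name is hypothetical; the daemons are written in C, where the equivalent is a `bind(2)` before `connect(2)`):

```python
import socket

def connect_from(source_ip, dest, port):
    """Open a TCP connection whose source address is pinned to source_ip,
    instead of letting the kernel pick the first IP matching the route."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((source_ip, 0))       # port 0 = let the kernel pick a local port
    s.connect((dest, port))
    return s

# Demo against a local listener; on a multi-homed host source_ip would be
# one of the machine's several addresses on the shared subnet.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))
srv.listen(1)
conn = connect_from("127.0.0.1", "127.0.0.1", srv.getsockname()[1])
local_ip = conn.getsockname()[0]
conn.close()
srv.close()
```

Without the explicit `bind`, the kernel chooses the source address from the routing table, which is exactly the behavior this item wants to override.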
Item 21: Message mailing based on backup types
Origin: Evan Kaufman <evan.kaufman@gmail.com>
Date: January 6, 2006

What: In the "Messages" resource definitions, allow messages
  to be mailed based on the type (backup, restore, etc.) and level
  (full, differential, etc.) of the job that created the originating
  message.

Why: It would, for example, allow someone's boss to be emailed
  automatically only when a Full Backup job runs, so he can
  retrieve the tapes for offsite storage, even if the IT dept.
  doesn't (or can't) explicitly notify him. At the same time, his
  mailbox wouldn't be filled with notifications of Verifies, Restores,
  or Incremental/Differential Backups (which would likely be kept
  onsite).

Notes: One way this could be done is through additional message types,
  for example:

    Messages {
      # email the boss only on full system backups
      Mail = boss@mycompany.com = full, !incremental, !differential,
             !restore, !verify
      # email us only when something breaks
      MailOnError = itdept@mycompany.com = all
    }

Notes: Kern: This should be rather trivial to implement.
Item 22: Ability to import/export Bacula database entities

What: Create a Bacula ASCII, SQL-database-independent format that permits
  importing and exporting database catalog Job entities.

Why: For archival, database clustering, and transfer to other databases.

Notes: Job selection should be by Job, time, Volume, Client, Pool, and
  possibly other criteria.
Item 23: "Maximum Concurrent Jobs" for drives when used with changer device
Origin: Ralf Gross <ralf-lists at ralfgross.de>
Status: Done in 3.0.3

What: Respect the "Maximum Concurrent Jobs" directive in the _drives_
  Storage section in addition to the changer section.

Why: I have a 3-drive changer where I want to be able to let 3 concurrent
  jobs run in parallel, but only one job per drive at the same time.
  Right now I don't see how I could limit the number of concurrent jobs
  per drive in this situation.

Notes: Using different priorities for these jobs leads to problems where
  other jobs are blocked. On the user list I got the advice to use the
  "Prefer Mounted Volumes" directive, but Kern advised against using
  "Prefer Mounted Volumes" in another thread:
  http://article.gmane.org/gmane.comp.sysutils.backup.bacula.devel/11876/

  In addition, I'm not sure if this would be the same as respecting the
  drive's "Maximum Concurrent Jobs" setting.

  Example configuration (abridged):

    Storage {
      ...
      Maximum Concurrent Jobs = 3
    }

    Storage {
      Name = Neo4100-LTO4-D1
      ...
      Device = ULTRIUM-TD4-D1
      Maximum Concurrent Jobs = 1
    }

  The "Maximum Concurrent Jobs = 1" directive in the drive's section is
  currently not respected.
Item 24: Implementation of running Job speed limit
Origin: Alex F, alexxzell at yahoo dot com
Date: 29 January 2009

What: I noticed the need for an integrated bandwidth limiter for
  running jobs. It would be very useful just to specify another
  field in bacula-dir.conf, like speed = how much speed you wish
  for that specific job to run at.

Why: A couple of reasons. First, it's very hard to implement a
  traffic shaping utility and also make it reliable. Second, it is very
  uncomfortable to have to deploy such apps to, let's say, 50 clients
  (including desktops and servers). This would also be unreliable because
  you have to make sure that the apps are working properly when needed;
  users could also disable them (accidentally or not). It would be very
  useful to give Bacula this ability. All information would be
  centralized; you would not have to go to 50 different clients in 10
  different locations for configuration, and eliminating 3rd party
  additions helps in establishing efficiency. It would also avoid
  bandwidth congestion, especially where there is little available.
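A per-job speed limit of this kind is commonly built as a token bucket that paces the sender. The sketch below is illustrative only (not how Bacula implements it): before each network write, the job asks the limiter for permission and sleeps just long enough to keep the average rate under the configured limit.

```python
import time

class RateLimiter:
    """Token bucket: throttle() delays the caller so the average
    throughput never exceeds bytes_per_sec."""

    def __init__(self, bytes_per_sec):
        self.rate = bytes_per_sec
        self.allowance = bytes_per_sec   # start with a full bucket
        self.last = time.monotonic()

    def throttle(self, nbytes):
        now = time.monotonic()
        # Refill the bucket for the time elapsed, capped at one second's worth.
        self.allowance = min(self.rate,
                             self.allowance + (now - self.last) * self.rate)
        self.last = now
        if nbytes > self.allowance:
            # Not enough tokens: sleep until the deficit is covered.
            time.sleep((nbytes - self.allowance) / self.rate)
            self.allowance = 0
        else:
            self.allowance -= nbytes

# A job configured with a hypothetical speed=1000 bytes/sec would call
# limiter.throttle(len(chunk)) before sending each chunk.
limiter = RateLimiter(1000)
limiter.throttle(400)          # bucket started full: no sleep needed
```

Because the limiter lives in the daemon, the limit is centralized in the Director configuration rather than depending on per-client traffic-shaping tools.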
Item 25: Add an override in Schedule for Pools based on backup types
  Origin: Chad Slater <chad.slater@clickfox.com>

  What: Adding a FullStorage=BigTapeLibrary option in the Schedule
        resource would help those of us who use different storage
        devices for different backup levels cope with the
        "auto-upgrade" of a backup.
  Why: Assume I add several new devices to be backed up, e.g. several
       hosts with 1TB RAID. To avoid tape-switching hassles, the
       incrementals are stored in a disk set on a 2TB RAID. If you add
       these devices in the middle of the month, the incrementals are
       upgraded to "full" backups, but they try to use the same storage
       device as requested in the incremental job, filling up the RAID
       holding the incrementals. If we could override the Storage
       parameter for full and/or differential backups, then the Full
       job would use the proper Storage device, which has more capacity
       (e.g. an 8TB tape library).
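       A sketch of what such an override might look like in a Schedule
       resource; the per-Run Storage override shown here is the
       proposal, not an existing directive, and the resource names are
       invented:

         Schedule {
           Name = "MonthlyCycle"
           # Proposed: per-level Storage override
           Run = Level=Full Storage=BigTapeLibrary 1st sun at 03:05
           Run = Level=Incremental Storage=IncrRAID mon-sat at 03:05
         }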
Item 26: Automatic promotion of backup levels based on backup size
  Date:   19 January 2006
  Origin: Adam Thornton <athornton@sinenomine.net>

  What: Other backup programs have a feature whereby they estimate the
        space that a differential, incremental, and full backup would
        take. If the difference in space required between the scheduled
        level and the next level up is beneath some user-defined
        critical threshold, the backup level is bumped to the next
        type. Doing this minimizes the number of volumes necessary
        during a restore, with a fairly minimal cost in backup media
        space.
  Why: I know at least one (quite sophisticated and smart) user for
       whom the absence of this feature is a deal-breaker in terms of
       using Bacula; if we had it, it would eliminate the one cool
       thing other backup programs can do and we can't (at least, the
       one cool thing I know of).
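       One possible configuration shape for this; the directive name
       and percentage syntax below are purely hypothetical:

         Job {
           Name = "BackupClient1"
           ...
           # Hypothetical: bump an Incremental to a Full when the
           # estimated Incremental size exceeds 90% of the estimated
           # Full size
           Promote Level Threshold = 90%
         }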
Item 27: Allow inclusion/exclusion of files in a fileset by creation/mod times
  Origin: Evan Kaufman <evan.kaufman@gmail.com>
  Date:   January 11, 2006

  What: In the vein of the Wild and Regex directives in a Fileset's
        Options, it would be helpful to allow a user to include or
        exclude files and directories by creation or modification
        times.

        You could factor in the Exclude=yes|no option in much the same
        way it affects the Wild and Regex directives. For example, you
        could exclude all files modified before a certain date:
          Modified Before = ####
        Or you could exclude all files created/modified since a certain
        date:

          Created Modified Since = ####
        The format of the time/date could be done several ways, say the
        number of seconds since the epoch:

          1137008553 = Jan 11 2006, 1:42:33PM   # result of `date +%s`

        Or a human-readable date in a cryptic form:

          20060111134233 = Jan 11 2006, 1:42:33PM   # YYYYMMDDhhmmss
  Why: I imagine a feature like this could have many uses. It would
       allow a user to do a full backup while excluding the base
       operating system files: if I installed a Linux snapshot from a
       CD yesterday, I'll *exclude* all files modified *before* today.
       If I need to recover the system, I use the CD I already have,
       plus the tape backup. Or if, say, a Windows client is hit by a
       particularly corrosive virus, and I need to *exclude* any files
       created/modified *since* the time of infection.
  Notes: Of course, this feature would work in concert with other
       include/exclude rules, and wouldn't override them (or each
       other).

  Notes: The directives I'd imagine would be along the lines of
       "[Created] [Modified] [Before|Since] = <date>", so one could
       compare against 'ctime' and/or 'mtime', but ONLY 'before' or
       'since'.
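       Put together, a FileSet using the proposed directives might read
       as follows; these directives do not exist yet, and the names
       simply follow the sketch above:

         FileSet {
           Name = "Full_Minus_OS"
           Include {
             Options {
               # Proposed: skip anything last modified before the
               # given date (here in YYYYMMDDhhmmss form)
               Exclude = yes
               Modified Before = 20060111134233
             }
             File = /
           }
         }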
Item 28: Archival (removal) of User Files to Tape
  Origin: Ray Pengelly [ray at biomed dot queensu dot ca]

  What: The ability to archive data to storage based on certain
        parameters such as age, size, or location. Once the data has
        been written to storage and logged, it is then pruned from the
        originating filesystem. Note! We are talking about user's files
        and not Bacula's files.
  Why: This would allow fully automatic storage management, which
       becomes useful for large datastores. It would also allow
       auto-staging from one media type to another.

       Example 1) Medical imaging needs to store large amounts of data.
       They decide to keep data on their servers for 6 months and then
       put it away for long-term storage. The server then finds all
       files older than 6 months and writes them to tape. The files are
       then removed from disk.

       Example 2) All data that hasn't been accessed in 2 months could
       be moved from high-cost fibre-channel disk storage to a
       low-cost, large-capacity SATA disk storage pool, which doesn't
       have as quick an access time. Then, after another 6 months (or
       possibly as one storage pool gets full), the data is migrated to
       tape.
Item 29: An option to operate on all pools with update vol parameters
  Origin: Dmitriy Pinchukov <absh@bossdev.kiev.ua>
  Status: Patch made by Nigel Stepp

  What: When I do update -> Volume parameters -> All Volumes from Pool,
        I have to select pools one by one. I'd like the console to have
        an option like "0: All Pools" in the list of pools.
  Why: I have many pools and am therefore unhappy with manually
       updating each of them using update -> Volume parameters -> All
       Volumes from Pool -> pool #.
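       A hypothetical console exchange with such an option; the
       "0: All Pools" entry is the proposal, and the pool names are
       invented:

         *update
         ...
         Defined Pools:
              0: All Pools        (proposed)
              1: Default
              2: Monthly
         Select the Pool (0-2): 0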
Item 30: Automatic disabling of devices
  Origin: Peter Eriksson <peter at ifm.liu dot se>

  What: After a configurable number of fatal errors with a tape drive,
        Bacula should automatically disable further use of that tape
        drive. There should also be "disable"/"enable" commands in the
        console.

  Why: On a multi-drive jukebox there is a possibility of tape drives
       going bad during large backups (needing a cleaning-tape run,
       tapes getting stuck). It would be advantageous if Bacula
       automatically disabled further use of a problematic tape drive
       after a configurable number of errors has occurred.
       An example: I have a multi-drive jukebox (6 drives, 380+ slots)
       where tapes occasionally get stuck inside a drive. Bacula will
       notice that the "mtx-changer" command fails and then fail any
       backup jobs trying to use that drive. However, it will still
       keep trying to run new jobs using that drive, and fail -
       forever, failing lots and lots of jobs. Since we have many
       drives, Bacula could have just automatically disabled further
       use of that drive and used one of the other ones instead.
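       A sketch of how this might be configured and driven; the
       "Maximum Errors" directive and the per-drive enable/disable
       commands shown here are the proposal, not existing features:

         Device {
           Name = Drive-1
           ...
           # Hypothetical: take the drive out of service after 5
           # fatal errors
           Maximum Errors = 5
         }

       and, manually, in the console:

         *disable storage=Autochanger drive=1
         *enable  storage=Autochanger drive=1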
Item 31: List InChanger flag when doing restore.
  Origin: Jesper Krogh <jesper@krogh.cc>
  Status: Done in version 3.0.2

  What: When doing a restore, the restore selection dialog ends by
        telling you:

          The job will require the following
             Volume(s)            Storage(s)           SD Device(s)
          ===========================================================================

        When you have an autochanger, it would be really nice to have
        an InChanger column, so the operator knew whether this restore
        job would stop to wait for operator intervention. This is done
        just by selecting the InChanger flag from the catalog and
        printing it in a separate column.
  Why: This would help get large restores through by minimizing the
       time spent waiting for an operator to drop by and change tapes
       in the library.

  Notes: [Kern] I think it would also be good to have the Slot as well,
       or some indication that Bacula thinks the volume is in the
       autochanger, because that depends on both the InChanger flag and
       the Slot being set.
Item 32: Ability to defer Batch Insert to a later time

  What: Instead of doing the Job's Batch Insert at the end of the Job,
        which might create resource contention when there are lots of
        Jobs, defer the insert to a later time.

  Why: Permits focusing on getting the data onto the Volume and putting
       the metadata into the Catalog outside the backup window.

  Notes: Will use the proposed Bacula ASCII database import/export
       format (i.e. dependent on the import/export entities project).
Item 33: Add MaxVolumeSize/MaxVolumeBytes statement to Storage resource
  Origin: Bastian Friedrich <bastian.friedrich@collax.com>

  What: The SD has a "Maximum Volume Size" statement, which is
        deprecated and superseded by the Pool resource statement
        "Maximum Volume Bytes". It would be good if either statement
        could be used in Storage resources as well.

  Why: Pools do not have to be restricted to a single storage
       type/device; thus, it may be impossible to define Maximum Volume
       Bytes in the Pool resource. The old MaxVolSize statement is
       deprecated, as it is SD-side only. I am using the same pool for
       different devices.
  Notes: State of the idea currently unknown. Storage resources in the
       Director config currently translate to very slim catalog
       entries; these entries would require extensions to implement
       what is described here. Quite possibly, numerous other
       statements that are currently available in Pool resources could
       be used in Storage resources too.
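       As requested, the statement might appear in a Director Storage
       resource like this; accepting Maximum Volume Bytes here is the
       proposal, and the resource names and address are invented for
       illustration:

         Storage {
           Name = File-Storage
           Address = sd.example.org
           Device = FileStorage
           Media Type = File
           # Proposed: cap volume size here rather than in every Pool
           Maximum Volume Bytes = 50G
         }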
Item 34: Enable persistent naming/numbering of SQL queries
  Origin: Mark Bergman

  What: Change the parsing of the query.sql file and the query command
        so that queries are named/numbered by a fixed value, not by
        their order in the file.

  Why: One of the real strengths of bacula is the ability to query the
       database, and the fact that complex queries can be saved and
       referenced from a file is very powerful. However, the choice of
       query (both for interactive use and for scripted input to the
       bconsole command) is completely dependent on the order within
       the query.sql file. The descriptive labels are helpful for
       interactive use, but users become used to calling a particular
       query "by number", or may use scripts to execute queries. This
       presents a problem if the number or order of queries in the file
       changes.
       If the query.sql file used the numeric tags as a real value
       (rather than a comment), then users could have higher confidence
       that they are executing the intended query and that their local
       changes wouldn't conflict with future bacula upgrades.

       For scripting, it's very important that the intended query is
       what's actually executed. The current method of parsing the
       query.sql file discourages scripting, because the addition or
       deletion of queries within the file requires corresponding
       changes to scripts. It may not be obvious to users that deleting
       query "17" in the query.sql file will require changing all
       references to higher-numbered queries. Similarly, when new
       bacula distributions change the number of "official" queries,
       user-developed queries cannot simply be appended to the file
       without also changing any references to those queries in
       scripts, procedural documentation, etc.
       In addition, using fixed numbers for queries would encourage
       more user-initiated development of queries by supporting
       conventions such as:

         queries numbered 1-50 are supported/developed/distributed with
         official bacula releases

         queries numbered 100-200 are community contributed, and are
         related to media management

         queries numbered 201-300 are community contributed, and are
         related to checksums, finding duplicated files across
         different backups, etc.

         queries numbered 301-400 are community contributed, and are
         related to backup statistics (average file size, size per
         client per backup level, time for all clients by backup level,
         storage capacity by media type, etc.)

         queries numbered 500-999 are locally created

       Alternatively, queries could be called by keyword (tag) rather
       than by number.
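       A sketch of what a fixed-tag entry in query.sql might look like;
       the ':<number>' header syntax and the query itself are invented
       here purely for illustration:

         # The number on the ':' line is the query's permanent tag;
         # the position of the entry in the file no longer matters.
         :112
         # List volumes and the pool they belong to
         SELECT VolumeName, Pool.Name AS Pool
           FROM Media JOIN Pool USING (PoolId);

       A user (or script) would then run "query 112" and always get
       this query, regardless of where it sits in the file.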
Item 35: Port bat to Win32

  What: Make bat run on Win32/64.

  Why: To have a GUI on Windows.
Item 36: Bacula Dir, FD and SD to support proxies
  Origin: Karl Grindley @ MIT Lincoln Laboratory <kgrindley at ll dot mit dot edu>

  What: Support alternate methods of nailing up a TCP session, such as
        SOCKS5, SOCKS4 and HTTP (CONNECT) proxies. Such a feature would
        allow tunneling of bacula traffic in and out of proxied
        networks.

  Why: Currently, bacula is architected to only function on a flat
       network, with no barriers or limitations. Given the wide variety
       of network configurations, and the many places file daemons and
       storage daemons may sit in relation to one another, bacula is
       often not usable on networks where filtered or air-gapped
       segments exist. While solutions such as firewall ACL
       modifications or port redirection via SNAT or DNAT will often
       solve the issue, these solutions are frequently not adequate or
       not allowed by hard policy.

       In an air-gapped network where only highly locked-down proxy
       services are provided (SOCKS4/5 and/or HTTP and/or SSH
       outbound), ACLs or iptables rules will not work.
  Notes: Director resource tunneling: the configuration option to
       utilize a proxy to connect to a client should be specified in
       the Client resource. Client resource tunneling: should this be
       configured in the Client resource in the director config file,
       or in the bacula-fd configuration file on the FD host itself? If
       the latter, this would allow only certain clients to use a
       proxy, where others do not, when establishing the TCP connection
       to the storage server.

       Also worth noting: there are other third-party, lightweight apps
       that could be utilized to bootstrap this. Instead of socksifying
       bacula itself, use an external program to broker proxy
       authentication and the connection to the remote host. OpenSSH
       does this with the "ProxyCommand" syntax in the client
       configuration, using stdin and stdout to the command. Connect.c
       is a very popular one
       (http://bent.latency.net/bent/darcs/goto-san-connect-1.85/src/connect.html).
       One could also possibly use stunnel, netcat, etc.
Item 37: Add Minimum Spool Size directive
  Origin: Frank Sweetser <fs@wpi.edu>

  What: Add a new SD directive, "Minimum Spool Size" (or similar). This
        directive would specify a minimum level of free space available
        for spooling. If the unused spool space is less than this
        level, any new spooling requests would be blocked as if the
        "Maximum Spool Size" threshold had been reached. Jobs already
        spooling would be unaffected by this directive.
  Why: I've been bitten by this scenario a couple of times:

       Assume a maximum spool size of 100M. Two concurrent jobs, A and
       B, are both running. Due to timing quirks and previously running
       jobs, job A has used 99.9M of space in the spool directory.
       While A is busy despooling, B is happily using the remaining
       0.1M of spool space. This results in a spool/despool sequence
       for every 0.1M of data. In addition to fragmenting the data on
       the volume far more than necessary, with larger data sets (i.e.,
       tens or hundreds of gigabytes) it can easily produce
       multi-megabyte report emails!
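       The proposed directive would sit alongside the existing spool
       settings in an SD Device resource; "Minimum Spool Size" is the
       proposal, while the other directives shown already exist:

         Device {
           Name = LTO-4
           ...
           Spool Directory = /var/spool/bacula
           Maximum Spool Size = 100M
           # Proposed: block new spooling jobs once free spool space
           # drops below this threshold
           Minimum Spool Size = 20M
         }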
Item 38: Backup and Restore of Windows Encrypted Files using Win raw encryption
  Origin: Michael Mohr, SAG  Mohr.External@infineon.com
  Date:   22 February 2008
  Origin: Alex Ehrlich (Alex.Ehrlich-at-mail.ee)
  Date:   05 August 2008

  What: Make it possible to back up and restore encrypted files from
        and to Windows systems without the need to decrypt them, by
        using the raw encryption functions API (see
        http://msdn2.microsoft.com/en-us/library/aa363783.aspx)
        that Microsoft provides for exactly this purpose. Whether a
        file is encrypted can be determined by evaluating the
        FILE_ATTRIBUTE_ENCRYPTED flag returned by GetFileAttributes.

        For each file backed up or restored by the FD on Windows, check
        whether the file is encrypted; if so, use OpenEncryptedFileRaw,
        ReadEncryptedFileRaw, WriteEncryptedFileRaw and
        CloseEncryptedFileRaw instead of BackupRead and BackupWrite.
  Why: Without this interface, the FD running under the system account
       cannot read encrypted files, because it lacks the key needed for
       decryption. As a result, encrypted files are currently not
       backed up by bacula, and no error is reported for the skipped
       files.
  Notes: Using the xxxEncryptedFileRaw API would make it possible to
       back up and restore EFS-encrypted files without decrypting their
       data. Note that such files cannot be restored "portably" (at
       least, not easily), but they would be restorable to a different
       (or reinstalled) Win32 machine; the restore would require
       setting up an EFS recovery agent in advance, of course, and this
       shall be clearly reflected in the documentation, but that is the
       normal Windows sysadmin's business. When a "portable" backup is
       requested, the EFS-encrypted files shall be clearly reported as
       errors.

       See MSDN on the "Backup and Restore of Encrypted Files" topic:
       http://msdn.microsoft.com/en-us/library/aa363783.aspx

       Maybe the EFS support requires a new flag in the database for
       each file.

       Unfortunately, the implementation is not as straightforward as a
       1-to-1 replacement of BackupRead with ReadEncryptedFileRaw; it
       requires some FD code rework to use the encrypted-file-related
       callback functions.
Item 39: Implement an interface between Bacula and cloud storage like Amazon's S3.
  Date:   25 August 2008
  Origin: Soren Hansen <soren@ubuntu.com>
  Status: Not started.

  What: Enable the storage daemon to store backup data on Amazon's S3.

  Why: Amazon's S3 is a cheap way to store data off-site.

  Notes: If we configure the Pool to put only one job per volume (S3
       doesn't support an append operation), and the volume size isn't
       too big (100MB?), it should be easy to adapt the disk-changer
       script to add get/put procedures using curl, so that the data
       would be safely copied out during the job.

       The cloud should only be used with Copy jobs; users should
       always have a copy of their data on their own site.
       We should also think about having our own cache, trying always
       to keep the cloud volume on the local disk. (I don't know if
       users want to store 100GB in the cloud, so disk size shouldn't
       be a problem.) For example, if bacula wants to recycle a volume,
       it would start by downloading the file only to truncate it a few
       seconds later; we should avoid that if we can.
Item 40: Convert the existing Bacula tray monitor on Windows to a stand-alone program

  What: Separate the Win32 tray monitor into a stand-alone program.

  Why: Vista does not allow SYSTEM services to interact with the
       desktop, so the current tray monitor does not work on Vista.

  Notes: Requires communicating with the FD via the network (simulating
       a console connection).
========= End items voted on May 2009 ==================

========= New items after last vote ====================
Item 1: Relabel disk volume after recycling
  Origin: Pasi Kärkkäinen <pasik@iki.fi>
  Status: Not implemented yet, no code written.

  What: The ability to relabel the disk volume (and thus rename the
        file on the disk) after it has been recycled. Useful when you
        have a single job per disk volume and you use a custom Label
        Format, for example:

        "${Client}-${Level}-${NumVols:p/4/0/r}-${Year}_${Month}_${Day}-${Hour}_${Minute}"

  Why: Disk volumes in Bacula get their label/filename when they are
       used for the first time. If you use recycling and a custom label
       format like the above, the disk volume name doesn't match the
       contents after it has been recycled. This feature makes it
       possible to keep the label/filename in sync with the content,
       which makes it easy to check/monitor the backups from the shell
       and/or normal file-management tools, because the filenames of
       the disk volumes match their content.

  Notes: The configuration option could be "Relabel after Recycling = Yes".
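       In a Pool resource this might look as follows; "Relabel after
       Recycling" is the proposed directive, and the pool name is
       invented:

         Pool {
           Name = ClientDisk
           Pool Type = Backup
           Recycle = yes
           Label Format = "${Client}-${Level}-${NumVols:p/4/0/r}-${Year}_${Month}_${Day}-${Hour}_${Minute}"
           # Proposed: redo label substitution when the volume is
           # recycled
           Relabel after Recycling = yes
         }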
Item n: Command that releases all drives in an autochanger
  Origin: Blake Dunlap (blake@nxs.net)

  What: It would be nice if there were a release command that would
        release all drives in an autochanger, instead of having to do
        each one in turn.

  Why: It can take some time for a release to occur, and the commands
       must be given for each drive in turn, which can quickly add up
       if there are several drives in the library. (Having to watch the
       console to give each command can waste a good bit of time once
       you get into the 16-drive range, where tapes can take up to 3
       minutes each to eject.)
  Notes: Due to the way some autochangers/libraries work, you cannot
       assume that newly inserted tapes will go into slots that are not
       currently believed to be in use by bacula (the tape from that
       slot is in a drive). This command would make any configuration
       changes quicker/easier, as all drives need to be released before
       any modifications to slots.
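       A hypothetical form of the command; the "alldrives" keyword is
       the proposal:

         *release storage=Autochanger alldrives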
Item n: Run bscan on a remote storage daemon from within bconsole.
  Date:   07 October 2009
  Origin: Graham Keeling <graham@equiinet.com>

  What: The ability to run bscan on a remote storage daemon from within
        bconsole in order to populate your catalog.

  Why: Currently, it seems you have to:
       a) log in to a console on the remote machine
       b) figure out where the storage daemon config file is
       c) figure out the storage device from the config file
       d) figure out the catalog IP address
       e) figure out the catalog port
       f) open the port on the catalog firewall
       g) configure the catalog database to accept connections from the
          remote machine
       h) build a 'bscan' command from (b)-(e) above and run it

       It would be much nicer to be able to type something like this
       into the console:

         *bscan storage=<storage> device=<device> volume=<volume>
       or:
         *bscan storage=<storage> all
       It seems to me that the scan could also do a better job than the
       external bscan program currently does. It would possibly be able
       to deduce some extra details, such as the catalog StorageId for
       the volumes.

  Notes: (Kern). If you need to do a bscan, you have done something
       wrong, so this functionality should not need to be integrated
       into the Storage daemon. However, I am not opposed to someone
       implementing this feature, provided that all the code is in a
       shared object (or DLL) and does not add significantly to the
       size of the Storage daemon. In addition, the code should be
       written in such a way that the same source code is used in both
       the bscan program and the Storage daemon, to avoid adding a lot
       of new code that must be maintained by the project.
Item n: Implement a Migration job type that will create a reverse
        incremental (or decremental) backup from two existing full backups.
  Date:   05 October 2009
  Origin: Griffith College Dublin. Some sponsorship available.
          Contact: Gavin McCullagh <gavin.mccullagh@gcd.ie>

  What: The ability to take two full backup jobs and derive a reverse
        incremental backup from them. The older full backup data may
        then be discarded.

  Why: Long-term backups based on keeping full backups can be expensive
       in media. In many cases (e.g. a NAS), as the client accumulates
       files over months and years, the same file will be duplicated
       unchanged across many media and datasets. E.g., less than 10%
       (and shrinking) of our monthly full mail server backup is new
       files; the other 90% is also in the previous full backup.
       Regularly converting the oldest full backup into a reverse
       incremental backup allows the admin to keep access to old backup
       jobs but remove all of the duplicated files, freeing up media.

  Notes: This feature was previously discussed on the bacula-devel
       list, here:
       http://www.mail-archive.com/bacula-devel@lists.sourceforge.net/msg04962.html
========= Add new items above this line =================


============= Empty Feature Request form ===========
Item n: One line summary ...
  Date:   Date submitted
  Origin: Name and email of originator.

  What: More detailed explanation ...

  Why: Why it is important ...

  Notes: Additional notes or features (omit if not used)
============== End Feature Request form ==============


========== Items put on hold by Kern ============================