Bacula Projects Roadmap
Status updated 04 February 2009
Item  2: Allow FD to initiate a backup
Item  6: Deletion of disk Volumes when pruned
Item  7: Implement Base jobs
Item  9: Scheduling syntax that permits more flexibility and options
Item 10: Message mailing based on backup types
Item 11: Cause daemons to use a specific IP address to source communications
Item 14: Add an override in Schedule for Pools based on backup types
Item 15: Implement more Python events and functions --- Abandoned for plugins
Item 16: Allow inclusion/exclusion of files in a fileset by creation/mod times
Item 17: Automatic promotion of backup levels based on backup size
Item 19: Automatic disabling of devices
Item 20: An option to operate on all pools with update vol parameters
Item 21: Include timestamp of job launch in "stat clients" output
Item 22: Implement Storage daemon compression
Item 23: Improve Bacula's tape and drive usage and cleaning management
Item 24: Multiple threads in file daemon for the same job
Item 25: Archival (removal) of User Files to Tape
Item 2: Allow FD to initiate a backup
Origin: Frank Volf (frank at deze dot org)
Date: 17 November 2005

What: Provide some means, possibly via a restricted console, that
      allows an FD to initiate a backup and that uses the connection
      established by the FD to the Director for the backup, so that
      a firewalled Director can do the backup.

Why: Makes backup of laptops much easier.
Item 6: Deletion of disk Volumes when pruned

Origin: Ross Boylan <RossBoylan at stanfordalumni dot org> (edited

What: Provide a way for Bacula to automatically remove Volumes
      from the filesystem, or optionally to truncate them.
      Obviously, the Volume must be pruned prior to removal.

Why: This would give users more control over their Volumes and
     prevent disk-based Volumes from consuming too much space.

Notes: The following two directives might do the trick:

       Volume Data Retention = <time period>
       Remove Volume After = <time period>

       The migration project should also remove a Volume that is
       migrated. This might also work for tape Volumes.
Item 7: Implement Base jobs

What: A base job is sort of like a Full save, except that you
      will want the FileSet to contain only files that are
      unlikely to change in the future (i.e. a snapshot of
      most of your system after installing it). After the
      base job has been run, when you are doing a Full save,
      you specify one or more Base jobs to be used. All
      files that have been backed up in the Base job(s) but
      not modified will then be excluded from the backup.
      During a restore, the Base jobs will be automatically
      pulled in where necessary.

Why: This is something none of the competition does, as far as
     we know (except perhaps BackupPC, which is a Perl program that
     saves to disk only). It is a big win for the user: it
     makes Bacula stand out as offering a unique
     optimization that immediately saves time and money.
     Basically, imagine that you have 100 nearly identical
     Windows or Linux machines containing the OS and user
     files. For the OS part, a Base job will be backed
     up once, and rather than making 100 copies of the OS,
     there will be only one. If one or more of the systems
     have some files updated, no problem: they will be
     automatically restored.

Notes: Huge savings in tape usage even for a single machine.
       Will require more resources because the DIR must send the
       FD a list of files/attribs, and the FD must search the
       list and compare it for each file to be saved.
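The comparison the Notes describe — the FD checking each candidate file against the list of files/attributes the Director sends from the Base job — can be sketched as follows. This is an illustrative sketch, not Bacula's actual code; real attribute records would hold more than a single value:

```python
def files_to_back_up(current, base_catalog):
    """current: {path: attribs} as seen on the client now;
    base_catalog: {path: attribs} recorded by the Base job(s).
    A file is skipped only if the Base job saved it AND its
    attributes are unchanged; everything else is backed up."""
    return {path: attribs
            for path, attribs in current.items()
            if base_catalog.get(path) != attribs}
```

Applied to 100 near-identical machines, each client's Full would shrink to only the files that differ from the shared Base.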
Item 9: Scheduling syntax that permits more flexibility and options
Date: 15 December 2006
Origin: Gregory Brauer (greg at wildbrain dot com) and
        Florian Schnabel <florian.schnabel at docufy dot de>

What: Currently, Bacula only understands how to deal with weeks of the
      month or weeks of the year in schedules. This makes it impossible
      to do a true weekly rotation of tapes. There will always be a
      discontinuity that will require disruptive manual intervention at
      least monthly or yearly, because week boundaries never align with
      month or year boundaries.

      A solution would be to add a new syntax that defines (at least)
      a start timestamp and a repetition period.

      Also wanted: an easy option to skip a certain job on a certain date.

Why: Rotated backups done at weekly intervals are useful, and Bacula
     cannot currently do them without extensive hacking.

     You could then easily skip tape backups on holidays. Especially
     if you have no autochanger and can only fit one backup on a tape,
     that would be really handy; other jobs could proceed normally,
     and you won't get errors that way.

Notes: Here is an example syntax showing a 3-week rotation where Full
       backups would be performed every week on Saturday, and an
       Incremental would be performed every week on Tuesday. Each
       set of tapes could be removed from the loader for the following
       two cycles before coming back and being reused on the third
       week. Since the execution times are determined by intervals
       from a given point in time, there will never be any issues with
       having to adjust to any sort of arbitrary time boundary. In
       the example provided, I even define the starting schedule
       as crossing both a year and a month boundary, but the run times
       would be based on the "Repeat" value and would therefore happen
       at regular intervals regardless:
       Schedule {
           Name = "Week 1 Rotation"
           #Saturday. Would run Dec 30, Jan 20, Feb 10, etc.
           Run {
               Options {
                   Type   = Full
                   Start  = 2006-12-30 01:00
                   Repeat = 3w
               }
           }
           #Tuesday. Would run Jan 2, Jan 23, Feb 13, etc.
           Run {
               Options {
                   Type   = Incremental
                   Start  = 2007-01-02 01:00
                   Repeat = 3w
               }
           }
       }

       Schedule {
           Name = "Week 2 Rotation"
           #Saturday. Would run Jan 6, Jan 27, Feb 17, etc.
           Run {
               Options {
                   Type   = Full
                   Start  = 2007-01-06 01:00
                   Repeat = 3w
               }
           }
           #Tuesday. Would run Jan 9, Jan 30, Feb 20, etc.
           Run {
               Options {
                   Type   = Incremental
                   Start  = 2007-01-09 01:00
                   Repeat = 3w
               }
           }
       }

       Schedule {
           Name = "Week 3 Rotation"
           #Saturday. Would run Jan 13, Feb 3, Feb 24, etc.
           Run {
               Options {
                   Type   = Full
                   Start  = 2007-01-13 01:00
                   Repeat = 3w
               }
           }
           #Tuesday. Would run Jan 16, Feb 6, Feb 27, etc.
           Run {
               Options {
                   Type   = Incremental
                   Start  = 2007-01-16 01:00
                   Repeat = 3w
               }
           }
       }
Notes: Kern: I have merged the previously separate project of skipping
       jobs (via Schedule syntax) into this.
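The interval semantics proposed above (an anchor Start plus a Repeat period, indifferent to week/month/year boundaries) can be sketched in a few lines. The dates below mirror the "Week 1 Rotation" comments; the function name is illustrative:

```python
from datetime import datetime, timedelta

def run_times(start, repeat, until):
    """Yield job start times at fixed intervals from an anchor
    timestamp; month and year boundaries never matter because
    each run is simply the previous run plus the Repeat period."""
    t = start
    while t <= until:
        yield t
        t += repeat
```

With `start = 2006-12-30 01:00` and `repeat = 3 weeks`, the runs fall on Dec 30, Jan 20, Feb 10, ... exactly as the example comments state, with no special casing at the year boundary.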
Item 10: Message mailing based on backup types
Origin: Evan Kaufman <evan.kaufman@gmail.com>
Date: January 6, 2006

What: In the "Messages" resource definitions, allow messages
      to be mailed based on the type (backup, restore, etc.) and level
      (full, differential, etc.) of the job that created the originating
      message.

Why: It would, for example, allow someone's boss to be emailed
     automatically only when a Full Backup job runs, so he can
     retrieve the tapes for offsite storage, even if the IT dept.
     doesn't (or can't) explicitly notify him. At the same time, his
     mailbox wouldn't be filled by notifications of Verifies, Restores,
     or Incremental/Differential Backups (which would likely be kept
     onsite).

Notes: One way this could be done is through additional message types, for example:

       # email the boss only on full system backups
       Mail = boss@mycompany.com = full, !incremental, !differential, !restore,

       # email us only when something breaks
       MailOnError = itdept@mycompany.com = all

Notes: Kern: This should be rather trivial to implement.
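One plausible evaluation of such a selector list — a sketch of the idea, not Bacula's actual Messages parser — is that positive tokens match the job's type or level, and a "!" token vetoes the mail outright:

```python
def should_mail(selectors, job_type, job_level):
    """selectors: e.g. ['full', '!incremental', '!restore'].
    Returns True when the job's type or level is selected and
    no negated selector matches it; 'all' matches everything."""
    tokens = {job_type, job_level}
    selected = False
    for sel in selectors:
        if sel.startswith('!'):
            # a matching negation vetoes the mail entirely
            if sel[1:] == 'all' or sel[1:] in tokens:
                return False
        elif sel == 'all' or sel in tokens:
            selected = True
    return selected
```

Under this reading, the boss's rule above fires for a Full backup but stays silent for incrementals and restores.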
Item 11: Cause daemons to use a specific IP address to source communications
Origin: Bill Moran <wmoran@collaborativefusion.com>

What: Cause Bacula daemons (dir, fd, sd) to always use the IP address
      specified in the [DIR|FD|SD]Addr directive as the source IP
      for initiating communication.

Why: On complex networks, as well as extremely secure networks, it's
     not unusual to have multiple possible routes through the network.
     Often, each of these routes is secured by different policies
     (effectively, firewalls allow or deny different traffic depending
     on the source address).
     Unfortunately, it can sometimes be difficult or impossible to
     represent this in a system routing table, as the result is
     excessive subnetting that quickly exhausts available IP space.
     The best available workaround is to provide multiple IPs to
     a single machine that are all on the same subnet. In order
     for this to work properly, applications must support the ability
     to bind outgoing connections to a specified address; otherwise
     the operating system will always choose the first IP that
     matches the required route.

Notes: Many other programs support this. For example, the following
       can be configured in BIND:

       query-source address 10.0.0.1;
       transfer-source 10.0.0.2;

       This means queries from this server will always come from
       10.0.0.1 and zone transfers will always originate from 10.0.0.2.
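The OS capability the request relies on is simply binding the outgoing socket to the desired local address before connecting. A minimal sketch (the helper name is mine, not a Bacula API):

```python
import socket

def connect_from(source_ip, dest_host, dest_port):
    """Open a TCP connection whose source address is pinned to
    source_ip. Without the explicit bind(), the kernel picks the
    first local address that matches the route to the destination,
    which is exactly the behaviour this request wants to avoid."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((source_ip, 0))   # port 0: let the kernel pick an ephemeral port
    s.connect((dest_host, dest_port))
    return s
```

The same bind-before-connect pattern applies in C via `bind(2)` followed by `connect(2)`.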
Item 14: Add an override in Schedule for Pools based on backup types

Origin: Chad Slater <chad.slater@clickfox.com>

What: Adding a FullStorage=BigTapeLibrary in the Schedule resource
      would help those of us who use different storage devices for different
      backup levels cope with the "auto-upgrade" of a backup.

Why: Assume I add several new devices to be backed up, i.e. several
     hosts with 1 TB RAID. To avoid tape-switching hassles, incrementals are
     stored in a disk set on a 2 TB RAID. If you add these devices in the
     middle of the month, the incrementals are upgraded to "full" backups,
     but they try to use the same storage device as requested in the
     incremental job, filling up the RAID holding the incrementals. If we
     could override the Storage parameter for full and/or differential
     backups, then the Full job would use the proper Storage device, which
     has more capacity (e.g. an 8 TB tape library).
Item 15: Implement more Python events and functions
Date: 28 October 2005

Status: Project abandoned in favor of plugins.

What: Allow Python scripts to be called at more places
      within Bacula and provide additional access to Bacula
      internals.

      Implement an interface for Python scripts to access the
      catalog through Bacula.

Why: This will permit users to customize Bacula through
     Python scripts.

     Also add a way to get a listing of currently running
     jobs (possibly also scheduled jobs), and a way
     to start the appropriate job.
Item 16: Allow inclusion/exclusion of files in a fileset by creation/mod times
Origin: Evan Kaufman <evan.kaufman@gmail.com>
Date: January 11, 2006

What: In the vein of the Wild and Regex directives in a FileSet's
      Options, it would be helpful to allow a user to include or exclude
      files and directories by creation or modification times.

      You could factor in the Exclude=yes|no option in much the same way it
      affects the Wild and Regex directives. For example, you could exclude
      all files modified before a certain date:

      Modified Before = ####

      Or you could exclude all files created/modified since a certain date:

      Created/Modified Since = ####

      The format of the time/date could be done several ways, say the number
      of seconds since the epoch:

      1137008553 = Jan 11 2006, 1:42:33PM   # result of `date +%s`

      Or a human-readable date in a cryptic form:

      20060111134233 = Jan 11 2006, 1:42:33PM   # YYYYMMDDhhmmss

Why: I imagine a feature like this could have many uses. It would
     allow a user to do a full backup while excluding the base operating
     system files, so if I installed a Linux snapshot from a CD yesterday,
     I'll *exclude* all files modified *before* today. If I need to
     recover the system, I use the CD I already have, plus the tape backup.
     Or if, say, a Windows client is hit by a particularly corrosive
     virus, I need to *exclude* any files created/modified *since* the
     time of infection.

Notes: Of course, this feature would work in concert with other
       in/exclude rules, and wouldn't override them (or each other).

Notes: The directives I'd imagine would be along the lines of
       "[Created] [Modified] [Before|Since] = <date>".
       So one could compare against 'ctime' and/or 'mtime', but ONLY
       'before' or 'since' a given date.
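The proposed Modified Before/Since tests reduce to comparing each file's mtime (or ctime) against an epoch cutoff such as the `date +%s` value above. A sketch of the selection logic (function and parameter names are illustrative, not proposed directive names):

```python
import os

def select_files(root, modified_before=None, modified_since=None):
    """Walk root and yield files whose mtime satisfies the given
    bounds, expressed in epoch seconds (as from `date +%s`)."""
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            mtime = os.stat(path).st_mtime
            if modified_before is not None and mtime >= modified_before:
                continue   # too new for a 'Modified Before' rule
            if modified_since is not None and mtime <= modified_since:
                continue   # too old for a 'Modified Since' rule
            yield path
```

A ctime-based variant would read `st_ctime` instead of `st_mtime`.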
Item 17: Automatic promotion of backup levels based on backup size
Date: 19 January 2006
Origin: Adam Thornton <athornton@sinenomine.net>

What: Amanda has a feature whereby it estimates the space that a
      differential, incremental, and full backup would take. If the
      difference in space required between the scheduled level and the next
      level up is beneath some user-defined critical threshold, the backup
      level is bumped to the next type. Doing this minimizes the number of
      volumes necessary during a restore, at a fairly minimal cost in
      media.

Why: I know of at least one (quite sophisticated and smart) user
     for whom the absence of this feature is a deal-breaker in terms of
     using Bacula; if we had it, it would eliminate the one cool thing
     Amanda can do and we can't (at least, the one cool thing I know of).
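The promotion rule described above is a threshold comparison between size estimates for adjacent levels. A sketch of the decision (the level names, dict shape, and default threshold are illustrative assumptions, not Amanda's or Bacula's actual parameters):

```python
def promote_level(scheduled, estimates, threshold=0.2):
    """If the next level up is estimated to cost at most `threshold`
    (as a fraction) more than the scheduled level, bump the job to
    that level; repeat until the gap is too large."""
    order = ["incremental", "differential", "full"]
    i = order.index(scheduled)
    while i + 1 < len(order):
        cur, nxt = estimates[order[i]], estimates[order[i + 1]]
        if nxt <= cur * (1 + threshold):
            i += 1        # promotion is cheap enough: take it
        else:
            break
    return order[i]
```

For example, an incremental estimated at 90 GB would be promoted to a 100 GB differential (about 11% more), but not to a 500 GB full.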
Item 19: Automatic disabling of devices

Origin: Peter Eriksson <peter at ifm.liu dot se>

What: After a configurable number of fatal errors with a tape drive,
      Bacula should automatically disable further use of that
      tape drive. There should also be "disable"/"enable" commands in
      the console.

Why: On a multi-drive jukebox there is a possibility of tape drives
     going bad during large backups (needing a cleaning-tape run,
     tapes getting stuck). It would be advantageous if Bacula would
     automatically disable further use of a problematic tape drive
     after a configurable number of errors has occurred.

     An example: I have a multi-drive jukebox (6 drives, 380+ slots)
     where tapes occasionally get stuck inside the drive. Bacula will
     notice that the "mtx-changer" command fails and then fail
     any backup jobs trying to use that drive. However, it will still
     keep trying to run new jobs using that drive and fail -
     forever, thus failing lots and lots of jobs... Since we have
     many drives, Bacula could have just automatically disabled
     further use of that drive and used one of the other ones instead.
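The requested behaviour amounts to a per-drive fatal-error counter with a configurable limit. A bookkeeping sketch (class name and threshold are hypothetical, not Bacula internals):

```python
class DriveGuard:
    """Track fatal errors per drive and disable a drive once it
    reaches the configured limit, so job scheduling can skip it."""

    def __init__(self, max_fatal_errors=3):
        self.max_fatal_errors = max_fatal_errors
        self.errors = {}
        self.disabled = set()

    def record_fatal(self, drive):
        self.errors[drive] = self.errors.get(drive, 0) + 1
        if self.errors[drive] >= self.max_fatal_errors:
            self.disabled.add(drive)   # no new jobs get this drive

    def usable(self, drives):
        """Drives still eligible for new jobs."""
        return [d for d in drives if d not in self.disabled]
```

A console "enable" command would then just remove the drive from the disabled set and reset its counter.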
Item 20: An option to operate on all pools with update vol parameters
Origin: Dmitriy Pinchukov <absh@bossdev.kiev.ua>

Status: Patch made by Nigel Stepp

What: When I do update -> Volume parameters -> All Volumes
      from Pool, I have to select pools one by one. I'd like the
      console to have an option like "0: All Pools" in the list of
      defined pools.

Why: I have many pools and am therefore unhappy with manually
     updating each of them using update -> Volume parameters -> All
     Volumes from Pool -> pool #.
Item 21: Include timestamp of job launch in "stat clients" output
Origin: Mark Bergman <mark.bergman@uphs.upenn.edu>
Date: Tue Aug 22 17:13:39 EDT 2006

What: The "stat clients" command doesn't include any detail on when
      the active backup jobs were launched.

Why: Including the timestamp would make it much easier to decide whether
     a job is running properly.

Notes: It may be helpful to have the output from "stat clients" formatted
       more like that from "stat dir" (and other commands), in a column
       format. The per-client information that's currently shown (level,
       client name, JobId, Volume, pool, device, Files, etc.) is good, but
       somewhat hard to parse (both programmatically and visually),
       particularly when there are many active clients.
Item 22: Implement Storage daemon compression
Date: 18 December 2006
Origin: Vadim A. Umanski, e-mail umanski@ext.ru

What: The ability to compress backup data on the SD receiving the data
      instead of doing so on the client sending the data.

Why: The need is practical. I've got some machines that can send
     data to the network 4 or 5 times faster than they can compress
     it (I've measured that). They're using fast enough SCSI/FC
     disk subsystems but rather slow CPUs (e.g. UltraSPARC II),
     while the backup server has quite fast CPUs (e.g. dual P4
     Xeons) and quite a low load. When you have 20, 50 or 100 GB
     of raw data, running a job 4 to 5 times faster really
     matters. On the other hand, the data can be compressed 50%
     or better, so losing twice the space for disk backup is not
     good at all. And the network is all mine (I have a dedicated
     management/provisioning network) and I can get as high a
     bandwidth as I need - 100 Mbps, 1000 Mbps...
     That's why the server-side compression feature is needed!
Item 23: Improve Bacula's tape and drive usage and cleaning management
Date: 8 November 2005, November 11, 2005
Origin: Adam Thornton <athornton at sinenomine dot net>,
        Arno Lehmann <al at its-lehmann dot de>

What: Make Bacula manage tape life-cycle information, tape reuse
      times and drive cleaning cycles.

Why: All three parts of this project are important when operating
     backups.
     We need to know which tapes need replacement, and we need to
     make sure the drives are cleaned when necessary. While many
     tape libraries and even autoloaders can handle all this
     automatically, support by Bacula can be helpful for smaller
     (older) libraries and single drives. Limiting the number of
     times a tape is used might prevent tape errors when using
     tapes until the drives can't read them any more. Also, checking
     drive status during operation can prevent some failures (as I
     [Arno] had to learn the hard way...)

Notes: First, Bacula could (and even does, to some limited extent)
       record tape and drive usage. For tapes, the number of mounts,
       the amount of data, and the time the tape has actually been
       running could be recorded. Data fields for Read and Write
       time and Number of mounts already exist in the catalog (I'm
       not sure if VolBytes is the sum of all bytes ever written to
       that volume by Bacula). This information can be important
       when determining which media to replace. The ability to mark
       Volumes as "used up" after a given number of write cycles
       should also be implemented so that a tape is never actually
       worn out. For the tape drives known to Bacula, similar
       information is interesting to determine the device status and
       expected lifetime: time spent Reading and Writing, number
       of tape Loads / Unloads / Errors. This information is not yet
       recorded as far as I [Arno] know. A new volume status would
       be necessary for the new state, like "Used up" or "Worn out".
       Volumes with this state could be used for restores, but not
       for writing. These volumes should be migrated first (assuming
       migration is implemented) and, once they are no longer needed,
       could be moved to a Trash pool.

       The next step would be to implement a drive cleaning setup.
       Bacula already has knowledge about cleaning tapes. Once it
       has some information about cleaning cycles (measured in drive
       run time, number of tapes used, or calendar days, for example)
       it can automatically execute tape cleaning (with an
       autochanger, obviously) or ask for operator assistance loading
       a cleaning tape.

       The final step would be to implement TAPEALERT checks not only
       when changing tapes and only sending the information to the
       administrator, but rather checking after each tape error,
       checking on a regular basis (for example after each tape
       file), and also before unloading and after loading a new tape.
       Then, depending on the drive's TAPEALERT state and the known
       drive cleaning state, Bacula could automatically schedule later
       cleaning, clean immediately, or inform the operator.

       Implementing this would perhaps require another catalog change
       and perhaps major changes in SD code and the DIR-SD protocol,
       so I'd only consider this worth implementing if it would
       actually be used or even needed by many people.

       Implementation of these projects could happen in three distinct
       sub-projects: measuring tape and drive usage, retiring
       volumes, and handling drive cleaning and TAPEALERTs.
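The cleaning-cycle trigger described in the second step is an "any threshold reached" check over the three suggested measures. A sketch (the policy keys and units are illustrative assumptions, not proposed directive names):

```python
def needs_cleaning(run_hours, tape_loads, days_since_clean, policy):
    """True when any configured cleaning threshold has been reached:
    accumulated drive run time, number of tape loads, or calendar
    days since the last cleaning."""
    return (run_hours >= policy["hours"]
            or tape_loads >= policy["loads"]
            or days_since_clean >= policy["days"])
```

With an autochanger, a True result would trigger loading the cleaning tape automatically; without one, it would prompt the operator.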
Item 24: Multiple threads in file daemon for the same job
Date: 27 November 2005
Origin: Ove Risberg (Ove.Risberg at octocode dot com)

What: I want the file daemon to start multiple threads for a backup
      job so the fastest possible backup can be made.

      The file daemon could parse the FileSet information and start
      one thread for each File entry located on a separate
      filesystem.

      A configuration option in the job section should be used to
      enable or disable this feature. The configuration option could
      specify the maximum number of threads in the file daemon.

      If the threads could spool the data to separate spool files,
      the restore process would not be much slower.

Why: Multiple concurrent backups of a large fileserver with many
     disks and controllers will be much faster.
Item 25: Archival (removal) of User Files to Tape

Origin: Ray Pengelly [ray at biomed dot queensu dot ca]

What: The ability to archive data to storage based on certain parameters
      such as age, size, or location. Once the data has been written to
      storage and logged, it is then pruned from the originating
      filesystem. Note! We are talking about user's files and not
      Bacula's backup data.

Why: This would allow fully automatic storage management, which becomes
     useful for large datastores. It would also allow for auto-staging
     from one media type to another.

     Example 1) Medical imaging needs to store large amounts of data.
     They decide to keep data on their servers for 6 months and then put
     it away for long-term storage. The server then finds all files
     older than 6 months and writes them to tape. The files are then
     removed from the server.

     Example 2) All data that hasn't been accessed in 2 months could be
     moved from high-cost, fibre-channel disk storage to a low-cost,
     large-capacity SATA disk storage pool which doesn't have as quick an
     access time. Then after another 6 months (or possibly as one
     storage pool gets full) the data is migrated to tape.
========= New Items since the last vote =================
Item 26: Add a new directive to bacula-dir.conf which permits inclusion of all sub-configuration files in a given directory
Date: 18 October 2008
Origin: Database, Lda. Maputo, Mozambique
Contact: Cameron Smith / cameron.ord@database.co.mz

What: A directive something like "IncludeConf = /etc/bacula/subconfs". Every
      time the Bacula Director restarts or reloads, it will walk the given
      directory (non-recursively) and include the contents of any files
      therein, as though they were appended to bacula-dir.conf.

Why: Permits simplified and safer configuration for larger installations with
     many client PCs. Currently, through judicious use of JobDefs and
     similar directives, it is possible to reduce the client-specific part of
     a configuration to a minimum. The client-specific directives can be
     prepared according to a standard template and dropped into a known
     directory. However, it is still necessary to add a line to the "master"
     (bacula-dir.conf) referencing each new file. This exposes the master to
     unnecessary risk of accidental mistakes and makes automation of adding
     new client confs more difficult (it is easier to automate dropping a
     file into a dir than rewriting an existing file). Ken has previously
     made a convincing argument for NOT including Bacula's core configuration
     in an RDBMS, but I believe that the present request is a reasonable
     extension to the current "flat-file-based" configuration philosophy.

Notes: There is NO need for any special syntax in these files. They should
       contain standard directives which are simply "inlined" into the parent
       file, as already happens when you explicitly reference an external file.

Notes: (kes) This can already be done with scripting.
       From: John Jorgensen <jorgnsn@lcd.uregina.ca>
       The bacula-dir.conf at our site contains these lines:

       # Include subfiles associated with configuration of clients.
       # They define the bulk of the Clients, Jobs, and FileSets.
       @|"sh -c 'for f in /etc/bacula/clientdefs/*.conf ; do echo @${f} ; done'"

       and when we get a new client, we just put its configuration into
       a new file called something like:

       /etc/bacula/clientdefs/clientname.conf
Item n: List inChanger flag when doing restore.
Origin: Jesper Krogh <jesper@krogh.cc>

What: When doing a restore, the restore selection dialog ends by listing
      the resources the job requires:

      The job will require the following
         Volume(s)                 Storage(s)                SD Device(s)
      ===========================================================================

      When you have an autochanger, it would be really nice to have an
      inChanger column so the operator knew whether this restore job would
      stop waiting for operator intervention. This is done just by selecting
      the inChanger flag from the catalog and printing it in a separate
      column.

Why: This would help in getting large restores through by minimizing the
     time spent waiting for an operator to drop by and change tapes in the
     library.

Notes: [Kern] I think it would also be good to have the Slot as well,
       or some indication that Bacula thinks the volume is in the autochanger,
       because it depends on both the InChanger flag and the Slot being
       set.
Item 1: Implement an interface between Bacula and Amazon's S3.

Origin: Soren Hansen <soren@ubuntu.com>

What: Enable the storage daemon to store backup data on Amazon's
      S3 service.

Why: Amazon's S3 is a cheap way to store data off-site. Current
     ways to integrate Bacula and S3 involve storing all the data
     locally and syncing it to S3, and manually fetching it
     again when it's needed. This is very cumbersome.
Item 1: Enable/disable compression depending on storage device (disk/tape)
Origin: Ralf Gross <ralf-lists@ralfgross.de>

Status: Initial Request

What: Add a new option to the storage resource of the director. Depending
      on this option, compression will be enabled/disabled for a device.

Why: If different devices (disks/tapes) are used for full/diff/incr
     backups, software compression will be enabled for all backups
     because of the FileSet compression option. For backup to tapes
     which are able to do hardware compression this is not desired.

     http://news.gmane.org/gmane.comp.sysutils.backup.bacula.devel/cutoff=11124

     It must be clear to the user that the FileSet compression option
     must still be enabled to use compression for a backup job at all.
     Thus a name for the new option in the director must be
     chosen carefully.

Notes: KES: I think the Storage definition should probably override what
       is in the Job definition, or vice-versa, but in any case it must
       be clearly defined which takes precedence.
Item 1: Backup and Restore of Windows Encrypted Files through raw encryption

Origin: Michael Mohr, SAG  Mohr.External@infineon.com

Date: 22 February 2008

What: Make it possible to back up and restore Encrypted Files from and to
      Windows systems without the need to decrypt them, by using the raw
      encryption functions API (see:
      http://msdn2.microsoft.com/en-us/library/aa363783.aspx)
      that is provided for that reason by Microsoft.
      Whether a file is encrypted can be determined by evaluating the
      FILE_ATTRIBUTE_ENCRYPTED flag returned by GetFileAttributes.

Why: Without the use of this interface, the fd daemon running
     under the system account can't read encrypted files, because
     it lacks the key needed for decryption. As a result,
     encrypted files are currently not backed up
     by Bacula, and no error is shown for these missing files.
Item 1: Possibility to schedule Jobs on last Friday of the month
Origin: Carsten Menke <bootsy52 at gmx dot net>

What: Currently, if you want to run your monthly backups on the last
      Friday of each month, this is only possible with workarounds (e.g.
      scripting), as some months have 4 Fridays and some have 5.
      The same is true if you plan to run your yearly backups on the
      last Friday of the year. It would be nice to have the ability to
      use the built-in scheduler for this.

Why: In many companies the last working day of the week is Friday (or
     Saturday), so to get the most data of the month onto the monthly
     tape, the employees are advised to insert the tape for the
     monthly backups on the last Friday of the month.

Notes: To give this complete functionality, it would be nice if the
       "first" and "last" keywords could be implemented in the
       scheduler, so it would also be possible to run monthly backups on
       the first Friday of the month, and many things more. If the
       syntax were expanded to {first|last} {Month|Week|Day|Mo-Fri} of
       the {Year|Month|Week}, you would be able to run really flexible
       jobs.

       To get a certain Job to run on the last Friday of the month, for
       example, one could use:

       Run = pool=Monthly last Fri of the Month at 23:50

       ## Yearly Backup on the last Friday of the year
       Run = pool=Yearly last Fri of the Year at 23:50

       ## Certain Jobs the last Week of a Month
       Run = pool=LastWeek last Week of the Month at 23:50

       ## Monthly Backup on the last day of the month
       Run = pool=Monthly last Day of the Month at 23:50
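The "last Friday of the month" date itself is easy to compute, which is part of why it is frustrating that the scheduler cannot express it: step back from the month's final day to the wanted weekday. A sketch (function name is illustrative):

```python
import calendar
import datetime

def last_weekday(year, month, weekday):
    """Date of the last given weekday (0=Monday .. 4=Friday ..
    6=Sunday) in a month: take the month's final day and step
    back to the most recent matching weekday."""
    last_day = calendar.monthrange(year, month)[1]
    d = datetime.date(year, month, last_day)
    return d - datetime.timedelta(days=(d.weekday() - weekday) % 7)
```

A scheduler implementing "last Fri of the Month" would fire a Run whenever today equals `last_weekday(year, month, 4)`.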
Item n: Add a "minimum spool size" directive

Origin: Frank Sweetser <fs@wpi.edu>

What: Add a new SD directive, "minimum spool size" (or similar). This
      directive would specify a minimum level of free space available for
      spooling. If the unused spool space is less than this level, any
      new spooling requests would be blocked as if the "maximum spool
      size" threshold had been reached. Already-spooling jobs would be
      unaffected by this directive.

Why: I've been bitten by this scenario a couple of times:

     Assume a maximum spool size of 100M. Two concurrent jobs, A and B,
     are both running. Due to timing quirks and previously running jobs,
     job A has used 99.9M of space in the spool directory. While A is
     busy despooling to disk, B is happily using the remaining 0.1M of
     spool space. This ends up in a spool/despool sequence every 0.1M of
     data. In addition to fragmenting the data on the volume far more
     than was necessary, in larger data sets (i.e. tens or hundreds of
     gigabytes) it can easily produce multi-megabyte report emails!
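The directive's admission rule is essentially a one-line check: new spool requests are refused below the minimum, while jobs already spooling continue untouched. A sketch (names are mine, not proposed SD identifiers):

```python
def may_start_spooling(free_bytes, minimum_spool_bytes, already_spooling):
    """Block NEW spool requests when free spool space is under the
    configured minimum; jobs that are already spooling proceed
    as before, per the request above."""
    return already_spooling or free_bytes >= minimum_spool_bytes
```

In the 100M scenario above, job B would simply wait for A to despool instead of thrashing in 0.1M increments.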
Item n?: Expand the Verify Job capability to verify Jobs older than the
         last one, for VolumeToCatalog Jobs

Origin: portrix.net Hamburg, Germany.
Contact: Christian Sabelmann
Status: 70% of the required code is part of the Verify function since v. 2.x

What: The ability to tell Bacula which Job should be verified instead of
      automatically verifying just the last one.

Why: It is sad that such a powerful feature like Verify Jobs
     (VolumeToCatalog) is restricted to being used only with the last backup
     Job of a client. Users who have to do daily backups are forced to
     also do daily Verify Jobs in order to take advantage of this useful
     feature. This daily verify-after-backup routine is not always desired,
     and Verify Jobs sometimes have to be scheduled separately (not
     necessarily scheduled in Bacula). With this feature, admins could
     verify Jobs once a week or less per month, selecting the Jobs they
     want to verify. This feature is also not too difficult to implement,
     taking into account older bug reports about this feature and the
     selection of the Job to be verified.

Notes: For the verify Job, the user could select the Job to be verified
       from a list of the latest Jobs of a client. It would also be possible
       to verify a certain volume. All of these would naturally apply only
       to Jobs whose file information is still in the catalog.
Item X: Add EFS support on Windows
Origin: Alex Ehrlich (Alex.Ehrlich-at-mail.ee)

What: For each file backed up or restored by the FD on Windows, check if
      the file is encrypted; if so, then use OpenEncryptedFileRaw,
      ReadEncryptedFileRaw, WriteEncryptedFileRaw, and
      CloseEncryptedFileRaw instead of BackupRead and BackupWrite.

Why: Many laptop users utilize the EFS functionality today; so do
     some non-laptop ones, too.
     Currently files encrypted by means of EFS cannot be backed up.
     It means a Windows boutique cannot rely on Bacula as its
     backup solution, at least when using Windows 2K, XPP,
     "better" Vista etc. on workstations, unless EFS is
     forbidden by policies.
     The current situation might result in a "false sense of
     security" among the end-users.

Notes: Using the xxxEncryptedFileRaw API would allow backing up and
       restoring EFS-encrypted files without decrypting their data.
       Note that such files cannot be restored "portably" (at least,
       easily), but they would be restorable to a different (or
       reinstalled) Win32 machine; the restore would require setup
       of an EFS recovery agent in advance, of course, and this shall
       be clearly reflected in the documentation, but this is the
       normal Windows SysAdmin's business.
       When "portable" backup is requested, the EFS-encrypted files
       shall be clearly reported as errors.
       See MSDN on the "Backup and Restore of Encrypted Files" topic:
       http://msdn.microsoft.com/en-us/library/aa363783.aspx
       Maybe the EFS support requires a new flag in the database for
       each file.
       Unfortunately, the implementation is not as straightforward as
       a 1-to-1 replacement of BackupRead with ReadEncryptedFileRaw,
       requiring some FD code rewrite to work with
       encrypted-file-related callback functions.
Item n: Data encryption on storage daemon
Origin: Tobias Barth <tobias.barth at web-arts.com>
Date: 04 February 2009

What: The storage daemon should be able to do the data encryption
  that can currently only be done by the file daemon.

Why: This would have two advantages: 1) one could encrypt the data of
  unencrypted tapes by running a migration job, and 2) the storage
  daemon would be the only machine that has to keep the encryption
  keys.

Notes: As an addendum to the feature request, here are some crypto
  implementation details I wrote up regarding SD encryption back in
  January:
  http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg28860.html
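  Purely as an illustration of the request, a configuration for this
  might look roughly as follows.  The directive names "Data
  Encryption" and "Encryption Keypair" are invented for this sketch;
  they are not real Bacula directives.

```conf
# Hypothetical bacula-sd.conf fragment -- directive names are
# assumptions, not current Bacula syntax.
Device {
  Name = LTO4-Drive
  Media Type = LTO4
  Archive Device = /dev/nst0
  Data Encryption = yes                      # encrypt blocks before writing
  Encryption Keypair = /etc/bacula/sd-crypt.pem
}
```

  With something like this, a migration job reading plaintext volumes
  and writing through such a device would produce encrypted copies,
  and only the SD host would hold key material.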
Item 1: "Maximum Concurrent Jobs" for drives when used with changer device
Origin: Ralf Gross ralf-lists <at> ralfgross.de
Status: Initial Request

What: Respect the "Maximum Concurrent Jobs" directive in the _drives_
  Storage resources in addition to the changer resource.

Why: I have a 3-drive changer where I want to be able to let 3
  concurrent jobs run in parallel, but only one job per drive at the
  same time.  Right now I don't see how I could limit the number of
  concurrent jobs per drive in this situation.

Notes: Using different priorities for these jobs led to problems with
  other jobs being blocked.  On the users list I got the advice to use
  the "Prefer Mounted Volumes" directive, but Kern advised against
  using "Prefer Mounted Volumes" in another thread:
  http://article.gmane.org/gmane.comp.sysutils.backup.bacula.devel/11876/

  In addition, I'm not sure whether this would be the same as
  respecting the drive's "Maximum Concurrent Jobs" setting.
  Storage {                # changer Storage resource
    ...
    Maximum Concurrent Jobs = 3
  }

  Storage {                # drive Storage resource
    Name = Neo4100-LTO4-D1
    ...
    Device = ULTRIUM-TD4-D1
    Maximum Concurrent Jobs = 1
  }

  The "Maximum Concurrent Jobs = 1" directive in the drive's Storage
  resource is ignored.
Item n: Add MaxVolumeSize/MaxVolumeBytes statement to Storage resource
Origin: Bastian Friedrich <bastian.friedrich@collax.com>

What: The SD has a "Maximum Volume Size" statement, which is
  deprecated and superseded by the Pool resource statement "Maximum
  Volume Bytes".  It would be good if either statement could be used
  in Storage resources.

Why: Pools do not have to be restricted to a single storage
  type/device; thus, it may be impossible to define Maximum Volume
  Bytes in the Pool resource.  The old MaxVolSize statement is
  deprecated, and I am using the same pool for different devices.

Notes: State of the idea is currently unknown.  Storage resources in
  the dir config currently translate to very slim catalog entries;
  these entries would require extensions to implement what is
  described here.  Quite possibly, numerous other statements that are
  currently available in Pool resources could usefully be made
  available in Storage resources as well.
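  A minimal sketch of what the proposal might look like in
  bacula-dir.conf.  Placing "Maximum Volume Bytes" inside a Storage
  resource is exactly what this item requests; it is not valid syntax
  today, and the resource names here are made up for the example.

```conf
# Hypothetical: a per-storage volume size cap, as proposed above.
Storage {
  Name = FileStorage
  Address = backup.example.com       # assumed hostname
  Device = FileDev
  Media Type = File
  Maximum Volume Bytes = 50G         # proposed directive, not yet valid here
}
```

  A Pool shared between this storage and a tape storage could then
  inherit a sensible size limit per device instead of one global value.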
Item 1: Start spooling even when waiting on tape
Origin: Tobias Barth <tobias.barth@web-arts.com>

What: If a job can be spooled to disk before writing it to tape, it
  should be spooled immediately.  Currently, bacula waits until the
  correct tape is inserted.

Why: It could save hours.  While bacula waits for the operator to
  insert the correct tape (e.g. a new tape, or a tape from another
  media pool), it could already prepare the spooled data in the
  spooling directory and immediately start despooling once the tape
  has been inserted by the operator.

  A second step: use 2 or more spooling directories.  While one
  directory is despooling, the next (on different disk drives) could
  already be spooling the next data.

Notes: I am using bacula 2.2.8, which has none of these features.
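  For context, data spooling itself is enabled with the existing
  "Spool Data" directive; the second directive below is an invented
  name, sketching only how this request might be expressed:

```conf
Job {
  Name = nightly-to-tape
  Spool Data = yes                    # existing directive: spool before tape write
  # Hypothetical directive for this request -- not real Bacula syntax:
  Spool While Waiting For Mount = yes
}
```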
Item 1: Enable persistent naming/numbering of SQL queries

What: Change the parsing of the query.sql file and the query command
  so that queries are named/numbered by a fixed value, not by their
  order in the file.

Why: One of the real strengths of bacula is the ability to query the
  database, and the fact that complex queries can be saved and
  referenced from a file is very powerful.  However, the choice of
  query (both for interactive use, and for scripted input to the
  bconsole command) is completely dependent on the order within the
  query.sql file.  The descriptive labels are helpful for interactive
  use, but users become used to calling a particular query "by
  number", or may use scripts to execute queries.  This presents a
  problem if the number or order of queries in the file changes.

  If the query.sql file used the numeric tags as a real value (rather
  than a comment), then users could have higher confidence that they
  are executing the intended query, and that their local changes
  wouldn't conflict with future bacula upgrades.

  For scripting, it's very important that the intended query is
  what's actually executed.  The current method of parsing the
  query.sql file discourages scripting, because the addition or
  deletion of queries within the file will require corresponding
  changes to scripts.  It may not be obvious to users that deleting
  query "17" in the query.sql file will require changing all
  references to higher-numbered queries.  Similarly, when new bacula
  distributions change the number of "official" queries,
  user-developed queries cannot simply be appended to the file
  without also changing any references to those queries in scripts,
  procedural documentation, etc.

  In addition, using fixed numbers for queries would encourage more
  user-initiated development of queries, by supporting conventions
  such as:

    queries numbered 1-50 are supported/developed/distributed with
    official bacula releases

    queries numbered 100-200 are community contributed, and are
    related to media management

    queries numbered 201-300 are community contributed, and are
    related to checksums, finding duplicated files across different
    backups, etc.

    queries numbered 301-400 are community contributed, and are
    related to backup statistics (average file size, size per client
    per backup level, time for all clients by backup level, storage
    capacity by media type, etc.)

    queries numbered 500-999 are locally created

  Alternatively, queries could be called by keyword (tag), rather
  than by number.
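  A sketch of what a tagged query.sql might look like under this
  proposal.  The ":<number>:" prefix is an invented syntax for this
  example (today, position in the file determines a query's number),
  and the SQL bodies are illustrative, not taken from the shipped
  query.sql:

```sql
-- Hypothetical fixed-tag entries; ":<number>:" is not current syntax.
:1: List up to 20 Jobs in which a given file name was saved
SELECT DISTINCT Job.JobId AS JobId, Client.Name AS Client
  FROM Client, Job, File, Filename
 WHERE Client.ClientId = Job.ClientId
   AND Job.JobId = File.JobId
   AND File.FilenameId = Filename.FilenameId
   AND Filename.Name = '%1'
 LIMIT 20;

:500: Count full volumes (numbers 500-999 reserved for local queries)
SELECT COUNT(*) FROM Media WHERE VolStatus = 'Full';
```

  With fixed tags, "query 500" would keep working even if queries were
  added to, or removed from, the distributed portion of the file.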
Item 1: Implementation of a running Job speed limit
Origin: Alex F, alexxzell at yahoo dot com
Date: 29 January 2009

What: I noticed the need for an integrated bandwidth limiter for
  running jobs.  It would be very useful to be able to specify
  another field in bacula-dir.conf, like speed = <the speed you wish
  that specific job to run at>.

Why: For a couple of reasons.  First, it's very hard to implement a
  traffic-shaping utility and also make it reliable.  Second, it is
  very inconvenient to have to deploy such tools on, let's say, 50
  clients (including desktops and servers).  This would also be
  unreliable, because you have to make sure that the tools are
  working properly when needed; users could also disable them
  (accidentally or not).  It would be very useful for Bacula to
  provide this ability.  All information would be centralized; you
  would not have to go to 50 different clients in 10 different
  locations for configuration, and eliminating 3rd-party additions
  helps establish efficiency.  It would also avoid bandwidth
  congestion, especially where there is little available.
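  A sketch of how this might read in bacula-dir.conf.  The directive
  name "Speed Limit" and its value syntax are assumptions made up for
  this example, not existing Bacula configuration:

```conf
Job {
  Name = branch-office-backup
  Client = branch-fd
  # Hypothetical directive -- invented name for this sketch:
  Speed Limit = 512 kb/s             # cap this job's transfer rate
}
```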
Item n: Restore from volumes on multiple storage daemons
Origin: Graham Keeling (graham@equiinet.com)

What: The ability to restore from volumes held by multiple storage
  daemons would be very useful.

Why: It is useful to be able to back up to any number of different
  storage daemons.  For example, your first storage daemon may run
  out of space, so you switch to your second and carry on.  Bacula
  will currently let you do this.  However, once you come to restore,
  bacula cannot cope when volumes on different storage daemons are
  required.

Notes: The director knows that more than one storage daemon is
  needed, as bconsole outputs something like the following table.
    The job will require the following
       Volume(s)         Storage(s)         SD Device(s)
    ===========================================================================
       backup-0001       Disk 1             Disk 1.0
       backup-0002       Disk 2             Disk 2.0
  However, the bootstrap file that it creates gets sent to the first
  storage daemon only, which then stalls for a long time, "waiting
  for a mount request" for the volume that it doesn't have.  The
  bootstrap file contains no knowledge of the storage daemon.

  Under the current design:

    The director connects to the storage daemon, and gets an
    sd_auth_key.
    The director then connects to the file daemon, and gives it the
    sd_auth_key with the 'jobcmd'.
    (restoring of files happens)
    The director does a 'wait_for_storage_daemon_termination()'.
    The director waits for the file daemon to indicate the end of the
    job.
  Under the proposed design:

    The director connects to the file daemon.
    Then, for each storage daemon in the .bsr file... {
      The director connects to the storage daemon, and gets an
      sd_auth_key.
      The director then connects to the file daemon, and gives it the
      sd_auth_key with the 'storaddr' command.
      (restoring of files happens)
      The director does a 'wait_for_storage_daemon_termination()'.
      The director waits for the file daemon to indicate the end of
      the work on this storage.
    }
    The director tells the file daemon that there are no more
    storages to contact.
    The director waits for the file daemon to indicate the end of the
    job.
  As you can see, each restore between the file daemon and storage
  daemon is handled in the same way that it is currently handled,
  using the same method for authentication, except that the
  sd_auth_key is moved from the 'jobcmd' to the 'storaddr' command --
  where it logically belongs.
Item n: 'restore' menu: enter a JobId, automatically select dependents
Origin: Graham Keeling (graham@equiinet.com)

What: Add to the bconsole 'restore' menu the ability to select a job
  by JobId, and have bacula automatically select all the dependent
  jobs.

Why: Currently, you either have to...
  a) laboriously type in a date that is greater than the date of the
     backup that you want and less than the date of the subsequent
     backup (bacula then figures out the dependent jobs), or
  b) manually figure out all the JobIds that you want and laboriously
     type them all in.
  It would be extremely useful (in a programmatic sense, as well as
  for humans) to be able to just give it a single JobId and let
  bacula do the hard work (work that it already knows how to do).

Notes (Kern): I think this should either be modified to have Bacula
  print a list of dates that the user can choose from, as is done in
  bwx-console and bat, or the name of this command must be carefully
  chosen so that the user clearly understands that the JobId is being
  used to specify what Job and the date to which he wishes the
  restore to happen.
============= Empty Feature Request form ===========
Item n: One line summary ...
Date: Date submitted
Origin: Name and email of originator.
Status:

What: More detailed explanation ...

Why: Why it is important ...

Notes: Additional notes or features (omit if not used)
============== End Feature Request form ==============

========== Items put on hold by Kern ============================
Item h2: Implement support for stacking arbitrary stream filters, sinks.
Date: 23 November 2006
Origin: Landon Fuller <landonf@threerings.net>
Status: Planning.  Assigned to landonf.

What: Implement support for the following:
  - Stacking arbitrary stream filters (e.g., encryption, compression,
    sparse data handling)
  - Attaching file sinks to terminate stream filters (i.e., write out
    the resultant data to a file)
  - Refactoring the restoration state machine accordingly
Why: The existing stream implementation suffers from the following:
  - All state (compression, encryption, stream restoration) is global
    across the entire restore process, for all streams.  There are
    multiple entry and exit points in the restoration state machine,
    and thus multiple places where state must be allocated,
    deallocated, initialized, or reinitialized.  This results in
    exceptional complexity for the author of a stream filter.
  - The developer must enumerate all possible combinations of filters
    and stream types (i.e., win32 data with encryption, without
    encryption, with encryption AND compression, etc.).
Notes: This feature request only covers implementing the stream
  filters/sinks, and refactoring the file daemon's restoration
  implementation accordingly.  If I have extra time, I will also
  rewrite the backup implementation.  My intent in implementing the
  restoration first is to solve pressing bugs in the restoration
  handling, and to ensure that the new restore implementation handles
  existing backups correctly.

  I do not plan on changing the network or tape data structures to
  support defining arbitrary stream filters, but supporting that
  functionality is the ultimate goal.

  Assistance with either code or testing would be fantastic.
Notes: Kern: This project has a lot of merit, and we need to do it,
  but it is really an issue for developers rather than a new feature
  for users, so I have removed it from the voting list but kept it
  here; at some point it will be implemented.
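  To make the stacking idea concrete, here is a minimal C++ sketch of
  a chained filter/sink interface.  The names (Stage, MemorySink,
  XorFilter) and the trivial XOR transform are inventions for
  illustration only; a real filter would wrap the existing
  compression/encryption code, and this is not the planned Bacula
  implementation.

```cpp
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical stacked-filter interface (names invented for this sketch).
// A Stage receives a buffer, transforms it, and hands it downstream;
// a sink terminates the chain.
struct Stage {
    virtual ~Stage() = default;
    virtual void write(std::vector<std::uint8_t> data) = 0;
};

// Sink: collects the resultant data (stands in for a file writer).
struct MemorySink : Stage {
    std::vector<std::uint8_t> out;
    void write(std::vector<std::uint8_t> data) override {
        out.insert(out.end(), data.begin(), data.end());
    }
};

// Example filter: a trivial XOR "cipher", only to demonstrate stacking.
struct XorFilter : Stage {
    std::uint8_t key;
    Stage* next;
    XorFilter(std::uint8_t k, Stage* n) : key(k), next(n) {}
    void write(std::vector<std::uint8_t> data) override {
        for (auto& b : data) b ^= key;   // transform in place
        next->write(std::move(data));    // pass downstream
    }
};

// Round trip: stacking the same XOR filter twice is the identity, so the
// sink receives exactly the bytes written at the top of the chain.
inline bool demo_round_trip() {
    MemorySink sink;
    XorFilter decrypt(0x5A, &sink);
    XorFilter encrypt(0x5A, &decrypt);
    encrypt.write({1, 2, 3});
    return sink.out == std::vector<std::uint8_t>({1, 2, 3});
}
```

  The point of the shape above is that a filter author implements a
  single write() method with purely local state; combinations
  (encryption plus compression, etc.) fall out of chaining rather than
  having to be enumerated case by case.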
Item h3: Filesystem-watch-triggered backup.
Date: 31 August 2006
Origin: Jesper Krogh <jesper@krogh.cc>

What: With inotify and similar filesystem-change notification
  systems, it is possible to have the file daemon monitor filesystem
  changes and initiate a backup.

Why: There are 2 situations where this is nice to have.
  1) It is possible to get a much finer-grained backup than with the
     fixed schedules used now.  A file created and deleted a few
     hours later can automatically be caught.
  2) The load introduced on the system will probably be distributed
     more evenly over time.

Notes: This can be combined with configuration that specifies
  something like: "at most every 15 minutes or when changes ..."

Kern Notes: I would rather see this implemented by an external
  program that monitors the filesystem changes and then uses the
  console to initiate a backup.
Item h4: Directive/mode to back up only file changes, not the entire file
Date: 11 November 2005
Origin: Joshua Kugler <joshua dot kugler at uaf dot edu>
        Marek Bajon <mbajon at bimsplus dot com dot pl>

What: Currently when a file changes, the entire file will be backed
  up in the next incremental or full backup.  To save space on the
  tapes, it would be nice to have a mode whereby only the changes to
  the file would be backed up when it is changed.

Why: This would save lots of space when backing up large files such
  as logs, mbox files, Outlook PST files and the like.

Notes: This would require the use of disk-based volumes, as comparing
  files would not be feasible using a tape drive.

Notes: Kern: I don't know how to implement this.  Put on hold until
  someone provides a detailed implementation plan.
Item h5: Implement multiple numeric backup levels as supported by dump
Origin: Daniel Rich <drich@employees.org>

What: Dump allows specification of backup levels numerically instead
  of just "full", "incr", and "diff".  In this system, at any given
  level, all files are backed up that were modified since the last
  backup at a higher level (with 0 being the highest and 9 being the
  lowest).  A level 0 is therefore equivalent to a full, a level 9 to
  an incremental, and levels 1 through 8 are varying levels of
  differentials.  For bacula's sake, these could be represented as
  "full", "incr", and "diff1", "diff2", etc.

Why: Support for multiple backup levels would provide for more
  advanced backup rotation schemes such as "Towers of Hanoi".  This
  would allow better flexibility in performing backups, and can lead
  to shorter recovery times.

Notes: Legato NetWorker supports a similar system, with full, incr,
  and 1-9 as levels.

Notes: Kern: I don't see the utility of this, and it would be a
  *huge* modification to existing code.
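  For illustration, a Towers-of-Hanoi-style rotation under this
  proposal might be expressed in a Schedule resource roughly as
  below.  The "diff1"/"diff2" level names are the hypothetical levels
  this item proposes and are not valid in Bacula today; the schedule
  itself is only a sketch.

```conf
# Hypothetical schedule using the proposed numeric differential levels.
Schedule {
  Name = "Hanoi"
  Run = Level=Full 1st sun at 02:00          # level 0
  Run = Level=diff1 3rd sun at 02:00         # proposed level 1
  Run = Level=diff2 2nd,4th sun at 02:00     # proposed level 2
  Run = Level=Incremental mon-sat at 02:00   # level 9
}
```

  Each "diffN" run would capture everything changed since the most
  recent run at a numerically lower (higher-priority) level, which is
  what makes such rotation schemes compact.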
Item h6: Implement NDMP protocol support

What: The Network Data Management Protocol is implemented by a number
  of NAS filer vendors to enable backups using third-party backup
  software.

Why: This would allow NAS filer backups in Bacula without incurring
  the overhead of NFS or SMB/CIFS.

Notes: Further information is available:
  http://www.ndmp.org/wp/wp.shtml
  http://www.traakan.com/ndmjob/index.html

  There are currently no viable open-source NDMP implementations.
  There is a reference SDK and example app available from ndmp.org,
  but it has problems compiling on recent Linux and Solaris OSes.
  The ndmjob reference implementation from Traakan is known to
  compile on Solaris 10.

Notes: Kern: I am not at all in favor of this until NDMP becomes an
  open standard, or until there are open-source libraries that
  interface to it.
Item h7: Commercial database support
Origin: Russell Howe <russell_howe dot wreckage dot org>

What: It would be nice for the database backend to support more
  databases.  I'm thinking of SQL Server at the moment, but I guess
  Oracle, DB2, MaxDB, etc. are all candidates.  SQL Server would
  presumably be implemented using FreeTDS or maybe an ODBC library.

Why: We only really have one database server, which is MS SQL Server
  2000, and maintaining a second one just for the backup software is
  a burden (we grew out of SQLite, which I liked, but which didn't
  work so well with our database size).  We don't really have a
  machine with the resources to run postgres, and would rather only
  maintain a single DBMS.  We're stuck with SQL Server because pretty
  much all the company's custom applications (written by consultants)
  are locked into SQL Server 2000.  I can imagine this scenario is
  fairly common, and it would be nice to use the existing properly
  specced database server for storing Bacula's catalog, rather than
  having to run a second one.

Notes: This might be nice, but someone other than me will probably
  need to implement it, and at the moment, proprietary code cannot
  legally be mixed with Bacula GPLed code.  This would be possible
  only if the vendors provide GPLed (or open-source) interface code.
Item h8: Incorporation of XACML2/SAML2 parsing
Date: 19 January 2006
Origin: Adam Thornton <athornton@sinenomine.net>

What: XACML is the "eXtensible Access Control Markup Language", and
  SAML is the "Security Assertion Markup Language" -- XML standards
  for making statements about identity and authorization.  Having
  these would give us a framework to approach ACLs in a generic
  manner, and in a way flexible enough to support the four major
  sorts of ACLs I see as a concern to Bacula at this point, as well
  as (probably) to deal with new sorts of ACLs that may appear in the
  future.

Why: Bacula is beginning to need to back up systems with ACLs that do
  not map cleanly onto traditional Unix permissions.  I see four sets
  of ACLs -- in general, mutually incompatible with one another --
  that we're going to need to deal with.  These are: NTFS ACLs, POSIX
  ACLs, NFSv4 ACLs, and AFS ACLs.  (Some may question the relevance
  of AFS; AFS is one of Sine Nomine's core consulting businesses, and
  having a reputable file-level backup and restore technology for it
  (as Tivoli is probably going to drop AFS support soon, since IBM no
  longer supports AFS) would be of huge benefit to our customers.
  We'd most likely create the AFS support at Sine Nomine for
  inclusion into the Bacula core code, perhaps with some changes to
  the OpenAFS volserver.)

  Now, obviously, Bacula already handles NTFS just fine.  However, I
  think there's a lot of value in implementing a generic ACL model,
  so that it's easy to support whatever particular instances of ACLs
  come down the pike: POSIX ACLs (think SELinux) and NFSv4 are the
  obvious things arriving in the Linux world in a big way in the near
  future.  XACML, although overcomplicated for our needs, provides
  this framework, and we should be able to leverage other people's
  implementations to minimize the amount of work *we* have to do to
  get a generic ACL framework.  Basically, the costs of
  implementation are high, but they're largely both external to
  Bacula and already sunk.

Notes: As you indicate, this is a bit of "blue sky", or in other
  words, at the moment it is a bit esoteric for Bacula to consider.
Item h9: Archive data
Origin: calvin streeting calvin at absentdream dot com

What: The ability to archive to media (DVD/CD) in an uncompressed
  format for dead filing (archiving, not backing up).

Why: At work, when jobs are finished, their data is moved off of the
  main file servers (RAID-based systems) onto a simple Linux file
  server (IDE-based system) so users can find old information without
  contacting the IT dept.

  This data doesn't really change, it only gets added to, but it also
  needs backing up.  At the moment it takes about 8 hours to back up
  our servers (working data), so rather than add more time to the
  existing backups I am trying to implement a system where we back up
  the archive data to CD/DVD.  These disks would only need to be
  appended to (burn only new/changed files to new disks for off-site
  storage).  Basically, understand the difference between archive
  data and live data.

Notes: Scan the data and email me when it needs burning.  Divide it
  into predefined chunks.  Keep a record of what is on which disk.
  Make me a label (simple php->mysql->pdf stuff; I could do this
  bit).  Provide the ability to save data uncompressed so it can be
  read on any other system (future-proof data).  Save the catalog
  with the disk as some kind of menu.

Notes: Kern: I don't understand this item, and in any case, if it is
  specific to DVDs/CDs, which we do not recommend using, it is
  unlikely to be implemented except as a user-contributed project.
Item h10: Clustered file daemons
Origin: Alan Brown ajb2 at mssl dot ucl dot ac dot uk

What: A "virtual" file daemon, which is actually a cluster of real
  ones.

Why: In the case of clustered filesystems (SAN setups, GFS, OCFS2,
  etc.), multiple machines may have access to the same set of
  filesystems.

  For performance reasons, one may wish to initiate backups from
  several of these machines simultaneously, instead of just using one
  backup source for the common clustered filesystem.

  For obvious reasons, backups of $A-FD/$PATH and $B-FD/$PATH are
  normally treated as different backup sets.  In this case they are
  the same communal set.

  Likewise when restoring, it would be easier to just specify one of
  the cluster machines and let bacula decide which to use.

  This can be faked to some extent using DNS round-robin entries and
  a virtual IP address, but it means "status client" will always give
  bogus answers.  Additionally, there is no way of spreading the load
  evenly among the servers.

  What is required is something similar to the storage daemon
  autochanger directives, so that Bacula can keep track of running
  backups/restores and direct new jobs to a "free" file daemon.

Notes: Kern: I don't understand the request well enough to be able to
  implement it.  A lot more design detail should be presented before
  voting on this project.
Feature Request Form