Bacula Projects Roadmap
(prioritized by user vote)
Item  1: Implement data encryption (as opposed to comm encryption)
Item  2: Implement Migration that moves Jobs from one Pool to another.
Item  3: Accurate restoration of renamed/deleted files from
         Incremental/Differential backups.
Item  4: Implement a Bacula GUI/management tool using Python.
Item  5: Implement Base jobs.
Item  6: Allow FD to initiate a backup.
Item  7: Improve Bacula's tape and drive usage and cleaning management.
Item  8: Implement creation and maintenance of copy pools.
Item  9: Implement new {Client}Run{Before|After}Job feature.
Item 10: Merge multiple backups (Synthetic Backup or Consolidation).
Item 11: Deletion of disk-based Bacula Volumes.
Item 12: Directive/mode to backup only file changes, not the entire file.
Item 13: Multiple threads in file daemon for the same job.
Item 14: Implement red/black binary tree routines.
Item 15: Add support for FileSets in user directories (CACHEDIR.TAG).
Item 16: Implement extraction of Win32 BackupWrite data.
Item 17: Implement a Python interface to the Bacula catalog.
Item 18: Archival (removal) of user files to tape.
Item 19: Add plug-ins to the FileSet Include statements.
Item 20: Implement more Python events in Bacula.
Item 21: Quick release of FD-SD connection after backup.
Item 22: Permit multiple Media Types in an Autochanger.
Item 23: Allow different autochanger definitions for one autochanger.
Item 24: Automatic disabling of devices.
Item 25: Implement huge exclude list support using hashing.
Below, you will find more information on future projects:
Item  1: Implement data encryption (as opposed to comm encryption)
  Origin: Sponsored by Landon and 13 contributors to EFF.
  Status: Landon Fuller has implemented this in 1.39.x.

  What: Currently the data that is stored on the Volume is not
        encrypted. For confidentiality, encryption of data at the File
        daemon level is essential. Data encryption encrypts the data in
        the File daemon and decrypts the data in the File daemon during
        a restore.

  Why:  Large sites require this.
Item  2: Implement Migration that moves Jobs from one Pool to another.
  Origin: Sponsored by Riege Software International GmbH. Contact:
          Daniel Holtkamp <holtkamp at riege dot com>
  Status: Partially working in 1.39, more to do. Assigned to

  What: The ability to copy, move, or archive data that is on a device
        to another device is very important.

  Why:  An ISP might want to backup to disk, but after 30 days migrate
        the data to tape backup and delete it from disk. Bacula should
        be able to handle this automatically. It needs to know what was
        put where, and when, and what to migrate -- it is a bit like
        retention periods. Doing so would allow space to be freed up
        for current backups while maintaining older copies.

  Notes: Riege Software have asked for migration triggered by a
         highwater mark (stopped by a lowwater mark?).

  Notes: Migration could additionally be triggered by other criteria.
Item  3: Accurate restoration of renamed/deleted files from
         Incremental/Differential backups
  Date:   28 November 2005
  Origin: Martin Simmons (martin at lispworks dot com)

  What: When restoring a fileset for a specified date (including "most
        recent"), Bacula should give you exactly the files and
        directories that existed at the time of the last backup prior
        to that date.

        Currently this only works if the last backup was a Full backup.
        When the last backup was Incremental/Differential, files and
        directories that have been renamed or deleted since the last
        Full backup are not currently restored correctly. Ditto for
        files with extra/fewer hard links than at the time of the last
        Full backup.

  Why:  Incremental/Differential would be much more useful if this
        worked.

  Notes: Item 10 (merging of multiple backups into a single one) seems
         to rely on this working; otherwise the merged backups will not
         be truly equivalent to a Full backup.

         Kern: notes shortened. This can be done without the need for
         inodes. It is essentially the same as the current Verify job,
         but one additional database record must be written, which does
         not need any database change.

         Kern: see if we can correct restoration of directories if
         replace=ifnewer is set. Currently, if the directory does not
         exist, a "dummy" directory is created; then, when all the
         files are updated, the dummy directory is newer, so the real
         values are never restored.
Item  4: Implement a Bacula GUI/management tool using Python.
  Date:   28 October 2005
  Status: Lucas is working on this for Python GTK+.

  What: Implement a Bacula console and management tools using Python
        and Qt or GTK.

  Why:  Don't we already have a wxWidgets GUI? Yes, but it is written
        in C++, and changes to the user interface must be hand tailored
        in C++ code. By developing the user interface with Qt Designer,
        the interface can be very easily updated and most of the new
        Python code will be automatically created. User interface
        changes become very simple, and only the new features must be
        implemented. In addition, the code will be in Python, which
        will give many more users easy (or easier) access to making
        additions or modifications.

  Notes: This is currently being implemented using Python-GTK by
         Lucas Di Pentima <lucas at lunix dot com dot ar>
Item  5: Implement Base jobs.
  Date: 28 October 2005

  What: A base job is sort of like a Full save, except that you will
        want the FileSet to contain only files that are unlikely to
        change in the future (i.e. a snapshot of most of your system
        after installing it). After the base job has been run, when you
        are doing a Full save, you specify one or more Base jobs to be
        used. All files that have been backed up in the Base job/jobs
        but not modified will then be excluded from the backup. During
        a restore, the Base jobs will be automatically pulled in where
        necessary.

  Why:  This is something none of the competition does, as far as we
        know (except perhaps BackupPC, which is a Perl program that
        saves to disk only). It is a big win for the user; it makes
        Bacula stand out as offering a unique optimization that
        immediately saves time and money. Basically, imagine that you
        have 100 nearly identical Windows or Linux machines containing
        the OS and user files. For the OS part, a Base job will be
        backed up once, and rather than making 100 copies of the OS,
        there will be only one. If one or more of the systems have some
        files updated, no problem; they will be automatically restored.

  Notes: Huge savings in tape usage even for a single machine. Will
         require more resources because the DIR must send the FD a list
         of files/attribs, and the FD must search the list and compare
         it for each file to be saved.
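To make the workflow above concrete, a hypothetical configuration might look like the sketch below. Nothing here is implemented; the Base level and Base directive, and all names, are illustrative only:

```conf
# Run once after OS installation; captures the rarely-changing files.
Job {
  Name = "BaseOS"              # hypothetical name
  Level = Base                 # proposed new level, not implemented
  FileSet = "OS-Files"
  Client = client1-fd
}

# Later Full saves reference the Base job; files unchanged since the
# Base job would be excluded from this backup.
Job {
  Name = "FullSave"
  Level = Full
  Base = "BaseOS"              # proposed directive, not implemented
  FileSet = "Full-Set"
  Client = client1-fd
}
```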
Item  6: Allow FD to initiate a backup
  Origin: Frank Volf (frank at deze dot org)
  Date:   17 November 2005

  What: Provide some means, possibly via a restricted console, that
        allows an FD to initiate a backup, and that uses the connection
        established by the FD to the Director for the backup, so that a
        Director that is firewalled can do the backup.

  Why:  Makes backup of laptops much easier.
Item  7: Improve Bacula's tape and drive usage and cleaning management.
  Date:   8 November 2005, November 11, 2005
  Origin: Adam Thornton <athornton at sinenomine dot net>,
          Arno Lehmann <al at its-lehmann dot de>

  What: Make Bacula manage tape life cycle information, tape reuse
        times, and drive cleaning cycles.

  Why:  All three parts of this project are important when operating in
        production. We need to know which tapes need replacement, and
        we need to make sure the drives are cleaned when necessary.
        While many tape libraries and even autoloaders can handle all
        this automatically, support by Bacula can be helpful for
        smaller (older) libraries and single drives. Limiting the
        number of times a tape is used might prevent tape errors that
        come from using tapes until the drives can't read them any
        more. Also, checking drive status during operation can prevent
        some failures (as I [Arno] had to learn the hard way...)

  Notes: First, Bacula could (and even does, to some limited extent)
         record tape and drive usage. For tapes, the number of mounts,
         the amount of data, and the time the tape has actually been
         running could be recorded. Data fields for Read and Write time
         and Number of mounts already exist in the catalog (I'm not
         sure if VolBytes is the sum of all bytes ever written to that
         volume by Bacula). This information can be important when
         determining which media to replace. The ability to mark
         Volumes as "used up" after a given number of write cycles
         should also be implemented, so that a tape is never actually
         worn out. For the tape drives known to Bacula, similar
         information is interesting for determining device status and
         expected lifetime: time spent Reading and Writing, number of
         tape Loads / Unloads / Errors. This information is not yet
         recorded as far as I [Arno] know. A new volume status would be
         necessary for the new state, like "Used up" or "Worn out".
         Volumes with this state could be used for restores, but not
         for writing. These volumes should be migrated first (assuming
         migration is implemented) and, once they are no longer needed,
         could be moved to a Trash pool.

         The next step would be to implement a drive cleaning setup.
         Bacula already has knowledge about cleaning tapes. Once it has
         some information about cleaning cycles (measured in drive run
         time, number of tapes used, or calendar days, for example) it
         can automatically execute tape cleaning (with an autochanger,
         obviously) or ask for operator assistance loading a cleaning
         tape.

         The final step would be to implement TAPEALERT checks not only
         when changing tapes and only sending the information to the
         administrator, but rather checking after each tape error,
         checking on a regular basis (for example after each tape
         file), and also before unloading and after loading a new tape.
         Then, depending on the drive's TAPEALERT state and the known
         drive cleaning state, Bacula could automatically schedule
         later cleaning, clean immediately, or inform the operator.

         Implementing this would perhaps require another catalog change
         and perhaps major changes in SD code and the DIR-SD protocol,
         so I'd only consider this worth implementing if it would
         actually be used or even needed by many people.

         Implementation of these projects could happen in three
         distinct sub-projects: measuring tape and drive usage,
         retiring volumes, and handling drive cleaning and TAPEALERTs.
Item  8: Implement creation and maintenance of copy pools
  Date:   27 November 2005
  Origin: David Boyes (dboyes at sinenomine dot net)

  What: I would like Bacula to have the capability to write copies of
        backed-up data on multiple physical volumes selected from
        different pools without transferring the data multiple times,
        and to accept any of the copy volumes as valid for restore.

  Why:  In many cases, businesses are required to keep offsite copies
        of backup volumes, or just wish for simple protection against a
        human operator dropping a storage volume and damaging it. The
        ability to generate multiple volumes in the course of a single
        backup job allows customers to simply check out one copy and
        send it offsite, marking it as out of changer or otherwise
        unavailable. Currently, the library and magazine management
        capability in Bacula does not make this process simple.

        Restores would use the copy of the data on the first available
        volume, in order of copy pool chain definition.

        This is also a major scalability issue -- as the number of
        clients increases beyond several thousand, and the volume of
        data increases, transferring the data multiple times to produce
        additional copies of the backups will become physically
        impossible due to transfer speed issues. Generating multiple
        copies at the server side will become the only practical
        option.

  How:  I suspect that this will require adding a multiplexing SD that
        appears to be an SD to a specific FD, but 1-n FDs to the
        specific back-end SDs managing the primary and copy pools.
        Storage pools will also need to acquire parameters to define
        the pools to be used for copies.

  Notes: I would commit some of my developers' time if we can agree on
         the design and behavior.
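As a rough illustration of the parameters mentioned under "How", a Pool might one day reference its copy pools like this. The Copy Pool directive and all names are purely hypothetical:

```conf
Pool {
  Name = "Primary"
  Pool Type = Backup
  Storage = "Tape-Library-1"
  Copy Pool = "Offsite-A"    # hypothetical directive: also write a copy here
  Copy Pool = "Offsite-B"    # restores could use any listed copy
}
```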
Item  9: Implement new {Client}Run{Before|After}Job feature.
  Date:   26 September 2005
  Origin: Phil Stracchino
  Status: Done. This has been implemented by Eric Bollengier.

  What: Some time ago, there was a discussion of RunAfterJob and
        ClientRunAfterJob, and the fact that they do not run after
        failed jobs. At the time, there was a suggestion to add a
        RunAfterFailedJob directive (and, presumably, a matching
        ClientRunAfterFailedJob directive), but to my knowledge these
        were never implemented.

        The current implementation doesn't permit adding new features
        easily.

        An alternate way of approaching the problem has just occurred
        to me. Suppose the RunBeforeJob and RunAfterJob directives were
        expanded in a manner like this example:

          RunBeforeJob {
            Command = "/opt/bacula/etc/checkhost %c"
            RunsOnClient = No        # default
            AbortJobOnError = Yes    # default
          }
          RunBeforeJob {
            Command = c:/bacula/systemstate.bat
          }
          RunAfterJob {
            Command = c:/bacula/deletestatefile.bat
          }

        It's now possible to specify more than one command per Job
        (you can stop your database and your webserver without a
        script):

          Job {
            JobDefs = "DefaultJob"
            Write Bootstrap = "/tmp/bacula/var/bacula/working/Client1.bsr"
            RunBeforeJob = "echo test before ; echo test before2"
            RunBeforeJob = "echo test before (2nd time)"
            RunBeforeJob = "echo test before (3rd time)"
            RunAfterJob = "echo test after"
            ClientRunAfterJob = "echo test after client"
            RunScript {
              Command = "echo test RunScript in error"
              RunsWhen = After       # never by default
            }
            RunScript {
              Command = "echo test RunScript on success"
              RunsWhen = After
              RunsOnSuccess = yes    # default
              RunsOnFailure = no     # default
            }
          }

  Why:  It would be a significant change to the structure of the
        directives, but allows for a lot more flexibility, including
        RunAfter commands that will run regardless of whether the job
        succeeds, or RunBefore tasks that still allow the job to run
        even if that specific RunBefore fails.
  Notes: (More notes from Phil, Kern, David and Eric.)
         I would prefer to have a single new resource called RunScript,
         with directives such as:

           RunsWhen = After|Before|Always
           RunsAtJobLevels = All|Full|Diff|Inc    # not yet implemented

         The AbortJobOnError, RunsOnSuccess and RunsOnFailure
         directives could be optional, and possibly RunsWhen as well.

         AbortJobOnError would be ignored unless RunsWhen was set to
         Before, and would default to Yes if omitted. If
         AbortJobOnError was set to No, failure of the script would
         still generate a warning.

         RunsOnSuccess would be ignored unless RunsWhen was set to
         After (or RunsBeforeJob set to No), and would default to Yes.

         RunsOnFailure would be ignored unless RunsWhen was set to
         After.

         Allow having the before/after status on the script command
         line so that the same script can be used both before and
         after.
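Putting these notes together, a Job using the proposed unified RunScript resource might read as follows. This is only a sketch: the directive names are taken from the discussion above, and the job name and script paths are hypothetical:

```conf
Job {
  Name = "NightlyBackup"                    # hypothetical job
  JobDefs = "DefaultJob"
  RunScript {
    Command = "/opt/bacula/etc/prejob.sh"   # hypothetical script
    RunsWhen = Before
    RunsOnClient = No
    AbortJobOnError = Yes     # only meaningful when RunsWhen = Before
  }
  RunScript {
    Command = "/opt/bacula/etc/postjob.sh"  # hypothetical script
    RunsWhen = After
    RunsOnSuccess = Yes
    RunsOnFailure = Yes       # run even after a failed job
  }
}
```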
Item 10: Merge multiple backups (Synthetic Backup or Consolidation).
  Origin: Marc Cousin and Eric Bollengier
  Date:   15 November 2005
  Status: Depends on first implementing project Item 2 (Migration).

  What: A merged backup is a backup made without connecting to the
        Client. It would be a merge of existing backups into a single
        backup; in effect, it is like a restore, but to the backup
        medium.

        For instance, say that last Sunday we made a full backup. Then
        all week long, we created incremental backups, in order to do
        them fast. Now comes Sunday again, and we need another full.
        The merged backup makes it possible to instead do an
        incremental backup (during the night, for instance), and then
        create a merged backup during the day, using the full and the
        incrementals from the week. The merged backup will be exactly
        like a full made Sunday night on the tape, but the production
        interruption on the Client will be minimal, as the Client will
        only have to send an incremental.

        In fact, if it's done correctly, you could merge all the
        incrementals into a single incremental, or all the incrementals
        and the last differential into a new differential, or the full,
        the last differential, and all the incrementals into a new full
        backup. And there is no need to involve the Client.

  Why:  The benefit is that:
        - the Client just does an incremental;
        - the merged backup on tape is just like a single full backup,
          and can be restored very fast.

        This is also a way of reducing the backup data, since the old
        data can then be pruned (or not) from the catalog, possibly
        allowing older volumes to be recycled.
Item 11: Deletion of Disk-Based Bacula Volumes
  Origin: Ross Boylan <RossBoylan at stanfordalumni dot org> (edited)

  What: Provide a way for Bacula to automatically remove Volumes from
        the filesystem, or optionally to truncate them. Obviously, the
        Volume must be pruned prior to removal.

  Why:  This would allow users more control over their Volumes and
        prevent disk-based volumes from consuming too much space.

  Notes: The following two directives might do the trick:

           Volume Data Retention = <time period>
           Remove Volume After = <time period>

         The migration project should also remove a Volume that is
         migrated. This might also work for tape Volumes.
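For illustration, the proposed directives might sit in a Pool resource like this. Neither proposed directive exists yet; the names and periods are taken from the notes above and are illustrative only:

```conf
Pool {
  Name = "File-Pool"
  Pool Type = Backup
  Volume Retention = 30 days         # existing directive
  Volume Data Retention = 30 days    # proposed: data may be pruned after this
  Remove Volume After = 60 days      # proposed: then delete/truncate the file
}
```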
Item 12: Directive/mode to backup only file changes, not entire file
  Date:   11 November 2005
  Origin: Joshua Kugler <joshua dot kugler at uaf dot edu>
          Marek Bajon <mbajon at bimsplus dot com dot pl>

  What: Currently when a file changes, the entire file will be backed
        up in the next incremental or full backup. To save space on
        the tapes, it would be nice to have a mode whereby only the
        changes to the file would be backed up when it is changed.

  Why:  This would save lots of space when backing up large files such
        as logs, mbox files, Outlook PST files and the like.

  Notes: This would require the usage of disk-based volumes, as
         comparing files would not be feasible using a tape drive.
Item 13: Multiple threads in file daemon for the same job
  Date:   27 November 2005
  Origin: Ove Risberg (Ove.Risberg at octocode dot com)

  What: I want the file daemon to start multiple threads for a backup
        job so the fastest possible backup can be made.

        The file daemon could parse the FileSet information and start
        one thread for each File entry located on a separate
        filesystem.

        A configuration option in the job section should be used to
        enable or disable this feature. The configuration option could
        specify the maximum number of threads in the file daemon.

        If the threads could spool the data to separate spool files,
        the restore process would not be much slower.

  Why:  Multiple concurrent backups of a large fileserver with many
        disks and controllers will be much faster.

  Notes: I am willing to try to implement this, but I will probably
         need some help and advice. (No problem -- Kern)
Item 14: Implement red/black binary tree routines.
  Date:   28 October 2005
  Status: Class code is complete. Code needs to be integrated into
          the rest of Bacula.

  What: Implement a red/black binary tree class. This could then
        replace the current binary insert/search routines used in the
        restore in-memory tree. This could significantly speed up the
        creation of the in-memory restore tree.

  Why:  Performance enhancement.
Item 15: Add support for FileSets in user directories (CACHEDIR.TAG)
  Origin: Norbert Kiesel <nkiesel at tbdnetworks dot com>
  Date:   21 November 2005
  Status: (I think this is better done using a Python event that I
          will implement in version 1.39.x.)

  What: CACHEDIR.TAG is a proposal for identifying directories which
        should be ignored for archiving/backup. It works by ignoring
        directory trees which have a file named CACHEDIR.TAG with a
        specific content. See
        http://www.brynosaurus.com/cachedir/spec.html

        I suggest that if this is implemented (I also asked for this
        feature some years ago) it be made compatible with Legato
        NetWorker's ".nsr" files, where you can specify a lot of
        options on how to handle files/directories (including denying
        further parsing of .nsr files lower down in the directory
        trees). A PDF version of the .nsr man page can be viewed at:

        http://www.ifm.liu.se/~peter/nsr.pdf

  Why:  It's a nice alternative to "exclude" patterns for directories
        which don't have regular pathnames. Also, it allows users to
        control backup for themselves. Implementation should be pretty
        simple. GNU tar >= 1.14 or so supports it, too.

  Notes: I envision this as an optional feature of a FileSet
         specification.
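Per the spec referenced above, a directory is marked by placing a file named CACHEDIR.TAG in it whose first line is a fixed signature; anything after that line is ignored by conforming tools, so comments are conventional:

```
Signature: 8a477f597d28d172789f06886806bc55
# This file is a cache directory tag.
# Backup software honoring the CACHEDIR.TAG spec may skip this tree.
```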
Item 16: Implement extraction of Win32 BackupWrite data.
  Origin: Thorsten Engel <thorsten.engel at matrix-computer dot com>
  Date:   28 October 2005
  Status: Done. Assigned to Thorsten. Implemented in current CVS.

  What: This provides the Bacula File daemon with code that can pick
        apart the stream output that Microsoft writes for BackupWrite
        data, so that the data can be read and restored on non-Win32
        machines.

  Why:  BackupWrite data is the portable=no option in Win32 FileSets,
        and in previous Bacula versions this data could only be
        extracted using a Win32 FD. With this new code, the Windows
        data can be extracted and restored on non-Win32 machines.
Item 17: Implement a Python interface to the Bacula catalog.
  Date: 28 October 2005

  What: Implement an interface for Python scripts to access the
        catalog through Bacula.

  Why:  This will permit users to customize Bacula through Python
        scripts.
Item 18: Archival (removal) of User Files to Tape
  Origin: Ray Pengelly <ray at biomed dot queensu dot ca>

  What: The ability to archive data to storage based on certain
        parameters such as age, size, or location. Once the data has
        been written to storage and logged, it is then pruned from the
        originating filesystem. Note! We are talking about users'
        files, not about Bacula's own files.

  Why:  This would allow fully automatic storage management, which
        becomes useful for large datastores. It would also allow for
        auto-staging from one media type to another.

        Example 1) Medical imaging needs to store large amounts of
        data. They decide to keep data on their servers for 6 months
        and then put it away for long-term storage. The server then
        finds all files older than 6 months and writes them to tape.
        The files are then removed from the filesystem.

        Example 2) All data that hasn't been accessed in 2 months could
        be moved from high-cost fibre-channel disk storage to a
        low-cost, large-capacity SATA disk storage pool, which doesn't
        have as quick an access time. Then, after another 6 months (or
        possibly as one storage pool gets full), data is migrated to
        tape.
Item 19: Add Plug-ins to the FileSet Include statements.
  Date:   28 October 2005
  Status: Partially coded in 1.37 -- much more to do.

  What: Allow users to specify wild-card and/or regular expressions to
        be matched in both the Include and Exclude directives in a
        FileSet. At the same time, allow users to define plug-ins to
        be called (based on regular expression/wild-card matching).

  Why:  This would give users the ultimate ability to control how files
        are backed up/restored. A user could write a plug-in that
        knows how to backup his Oracle database without
        stopping/starting it, for example.
Item 20: Implement more Python events in Bacula.
  Date: 28 October 2005

  What: Allow Python scripts to be called at more places within Bacula,
        and provide additional access to Bacula internals.

  Why:  This will permit users to customize Bacula through Python
        scripts.

        Also add a way to get a listing of currently running jobs
        (possibly also scheduled jobs).
Item 21: Quick release of FD-SD connection after backup.
  Origin: Frank Volf (frank at deze dot org)
  Date:   17 November 2005

  What: In the Bacula implementation, a backup is finished only after
        all data and attributes are successfully written to storage.
        When using a tape backup, it is very annoying that a backup can
        take a day simply because the current tape (or whatever) is
        full and the administrator has not put a new one in. During
        that time the system cannot be taken off-line, because there is
        still an open session between the storage daemon and the file
        daemon on the client.

        Although this is a very good strategy for making "safe
        backups", it can be annoying for e.g. laptops, which must
        remain connected until the backup is completed.

        Using a new feature called "migration" it will be possible to
        spool first to hard disk (using a special 'spool' migration
        scheme) and then migrate the backup to tape.

        There is still the problem of getting the attributes committed.
        If that takes a very long time, with the current code the job
        has not terminated, and the File daemon is not freed up. The
        Storage daemon should release the File daemon as soon as all
        the file data and all the attributes have been sent to it (the
        SD). Currently the SD waits until everything is on tape and all
        the attributes are transmitted to the Director before signaling
        completion to the FD. I don't think I would have any problem
        changing this. The reason is that even if the FD reports back
        to the Dir that all is OK, the job will not terminate until the
        SD has done the same thing -- so in a way, keeping the SD-FD
        link open to the very end is not really very productive ...

  Why:  Makes backup of laptops much easier.
Item 22: Permit multiple Media Types in an Autochanger
  Status: Done. Implemented in 1.38.9 (I think).

  What: Modify the Storage daemon so that multiple Media Types can be
        specified in an autochanger. This would be somewhat of a
        simplistic implementation, in that each drive would still be
        allowed to have only one Media Type. However, the Storage
        daemon will ensure that only a drive with the Media Type that
        matches what the Director specifies is used.

  Why:  This will permit users with several different drive types to
        make full use of their autochangers.
Item 23: Allow different autochanger definitions for one autochanger.
  Date: 28 October 2005

  What: Currently, the autochanger script is locked based on the
        autochanger. That is, if multiple drives are being
        simultaneously used, the Storage daemon ensures that only one
        drive at a time can access the mtx-changer script. This change
        would base the locking on the control device, rather than on
        the autochanger. It would then permit two autochanger
        definitions for the same autochanger, but with different
        drives. Logically, the autochanger could then be "partitioned"
        for different jobs, clients, or classes of jobs, and if the
        locking is based on the control device (e.g. /dev/sg0), the
        mtx-changer script will be locked appropriately.

  Why:  This will permit users to partition autochangers for specific
        use. It would also permit implementation of multiple Media
        Types with no changes to the Storage daemon.
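To illustrate the idea, the sketch below defines two Autochanger resources for the same physical library, each owning one drive but sharing the control device. The resource and drive names are hypothetical; with this project, mtx-changer locking would key on /dev/sg0 rather than on the Autochanger resource:

```conf
Autochanger {
  Name = "Lib-Partition-A"      # hypothetical names throughout
  Changer Device = /dev/sg0     # shared control device
  Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
  Device = Drive-0
}

Autochanger {
  Name = "Lib-Partition-B"
  Changer Device = /dev/sg0     # same control device
  Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
  Device = Drive-1
}
```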
Item 24: Automatic disabling of devices
  Origin: Peter Eriksson <peter at ifm.liu dot se>

  What: After a configurable number of fatal errors with a tape drive,
        Bacula should automatically disable further use of that tape
        drive. There should also be "disable"/"enable" commands in the
        console.

  Why:  On a multi-drive jukebox there is a possibility of tape drives
        going bad during large backups (needing a cleaning tape run,
        tapes getting stuck). It would be advantageous if Bacula would
        automatically disable further use of a problematic tape drive
        after a configurable number of errors has occurred.

        An example: I have a multi-drive jukebox (6 drives, 380+ slots)
        where tapes occasionally get stuck inside the drive. Bacula
        will notice that the "mtx-changer" command fails, and will then
        fail any backup jobs trying to use that drive. However, it will
        still keep trying to run new jobs using that drive, and fail --
        forever, thus failing lots and lots of jobs... Since we have
        many drives, Bacula could have just automatically disabled
        further use of that drive and used one of the others instead.
Item 25: Implement huge exclude list support using hashing.
  Date: 28 October 2005

  What: Allow users to specify very large exclude lists (currently
        more than about 1000 files is too many).

  Why:  This would give users the ability to exclude all files that
        are loaded with the OS (e.g. using rpms or debs). If the user
        can restore the base OS from CDs, there is no need to backup
        all those files. A complete restore would be to restore the
        base OS, then do a Bacula restore. By excluding the base OS
        files, the backup set will be *much* smaller.
============= Empty Feature Request form ===========
Item  n: One line summary ...
  Origin: Name and email of originator.

  What: More detailed explanation ...

  Why:  Why it is important ...

  Notes: Additional notes or features (omit if not used)
============== End Feature Request form ==============
===============================================
Feature requests submitted after cutoff for December 2005 vote
===============================================

Item  n: Allow skipping execution of Jobs
  Date:   29 November 2005
  Origin: Florian Schnabel <florian.schnabel at docufy dot de>

  What: An easy option to skip a certain job on a certain date.

  Why:  You could then easily skip tape backups on holidays. Especially
        if you have no autochanger and can only fit one backup on a
        tape, that would be really handy; other jobs could proceed
        normally, and you won't get errors that way.
===================================================

Item  n: Archive to media (dvd/cd) in an uncompressed format
  Origin: Calvin Streeting <calvin at absentdream dot com>

  What: The ability to archive to media (dvd/cd) in an uncompressed
        format for dead filing (archiving, not backing up).

  Why:  At my work, when jobs are finished they are moved off of the
        main file servers (RAID-based systems) onto a simple Linux file
        server (IDE-based system) so users can find old information
        without contacting the IT department.

        This data doesn't really change, it only gets added to, but it
        also needs backing up. At the moment it takes about 8 hours to
        back up our servers (working data), so rather than add more
        time to existing backups, I am trying to implement a system
        where we backup the archive data to cd/dvd. These disks would
        only need to be appended to (burn only new/changed files to new
        disks for off-site storage) -- basically, understand the
        difference between archive data and live data.

  Notes: Scan the data and email me when it needs burning. Divide it
         into predefined chunks. Keep a record of what is on what disk.
         Make me a label (simple php->mysql->pdf stuff -- I could do
         this bit). Ability to save data uncompressed so it can be read
         on any other system (future-proof data). Save the catalog with
         the disk as some kind of menu.