<? require_once("inc/header.php"); ?>
Bacula Projects Roadmap
(prioritized by user vote)

Item  1: Implement data encryption (as opposed to comm encryption)
Item  2: Implement Migration that moves Jobs from one Pool to another.
Item  3: Accurate restoration of renamed/deleted files from
         Incremental/Differential backups
Item  4: Implement a Bacula GUI/management tool using Python.
Item  5: Implement Base jobs.
Item  6: Allow FD to initiate a backup
Item  7: Improve Bacula's tape and drive usage and cleaning management.
Item  8: Implement creation and maintenance of copy pools
Item  9: Implement new {Client}Run{Before|After}Job feature.
Item 10: Merge multiple backups (Synthetic Backup or Consolidation).
Item 11: Deletion of Disk-Based Bacula Volumes
Item 12: Directive/mode to backup only file changes, not entire file
Item 13: Multiple threads in file daemon for the same job
Item 14: Implement red/black binary tree routines.
Item 15: Add support for FileSets in user directories CACHEDIR.TAG
Item 16: Implement extraction of Win32 BackupWrite data.
Item 17: Implement a Python interface to the Bacula catalog.
Item 18: Archival (removal) of User Files to Tape
Item 19: Add Plug-ins to the FileSet Include statements.
Item 20: Implement more Python events in Bacula.
Item 21: Quick release of FD-SD connection after backup.
Item 22: Permit multiple Media Types in an Autochanger
Item 23: Allow different autochanger definitions for one autochanger.
Item 24: Automatic disabling of devices
Item 25: Implement huge exclude list support using hashing.
Below, you will find more information on future projects:
Item  1: Implement data encryption (as opposed to comm encryption)
Origin:  Sponsored by Landon and 13 contributors to EFF.
Status:  Landon Fuller has implemented this in 1.39.x.

What:    Currently the data that is stored on the Volume is not
         encrypted. For confidentiality, encryption of data at
         the File daemon level is essential. Data encryption
         encrypts the data in the File daemon and decrypts it
         in the File daemon during a restore.

Why:     Large sites require this.
Item  2: Implement Migration that moves Jobs from one Pool to another.
Origin:  Sponsored by Riege Software International GmbH. Contact:
         Daniel Holtkamp <holtkamp at riege dot com>
Status:  Partially working in 1.39; more to do. Assigned to

What:    The ability to copy, move, or archive data that is on a
         device to another device is very important.

Why:     An ISP might want to backup to disk, but after 30 days
         migrate the data to tape backup and delete it from
         disk. Bacula should be able to handle this
         automatically. It needs to know what was put where,
         and when, and what to migrate -- it is a bit like
         retention periods. Doing so would allow space to be
         freed up for current backups while maintaining older
         backups.

Notes:   Riege Software have asked for the following migration
         triggers:
           - Highwater mark (stopped by Lowwater mark?)

Notes:   Migration could be additionally triggered by:
Item  3: Accurate restoration of renamed/deleted files from
         Incremental/Differential backups
Date:    28 November 2005
Origin:  Martin Simmons (martin at lispworks dot com)

What:    When restoring a fileset for a specified date (including "most
         recent"), Bacula should give you exactly the files and directories
         that existed at the time of the last backup prior to that date.

         Currently this only works if the last backup was a Full backup.
         When the last backup was Incremental/Differential, files and
         directories that have been renamed or deleted since the last Full
         backup are not currently restored correctly. Ditto for files with
         extra/fewer hard links than at the time of the last Full backup.

Why:     Incremental/Differential would be much more useful if this worked.

Notes:   Item 10 (Merge multiple backups) seems to rely on this working;
         otherwise the merged backups will not be truly equivalent to a
         Full backup.

         Kern: notes shortened. This can be done without the need for
         inodes. It is essentially the same as the current Verify job,
         but one additional database record must be written, which does
         not need any database change.

         Kern: see if we can correct restoration of directories if
         replace=ifnewer is set. Currently, if the directory does not
         exist, a "dummy" directory is created, then when all the files
         are updated, the dummy directory is newer so the real values
         are not restored.
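The restore-time logic described above can be sketched as follows. This is a hypothetical illustration, not Bacula code: it assumes the proposed extra database record exists, so that each Incremental also carries a list of files deleted since the previous backup (which Bacula's catalog does not record today).

```python
def restore_set(full, incrementals):
    """Compute the exact file set at restore time.

    full: dict of name -> attributes from the last Full backup.
    incrementals: list of (changed, deleted) pairs, oldest first, where
    'changed' is a dict of new/modified files and 'deleted' is the
    proposed list of files removed since the previous backup.
    """
    files = dict(full)
    for changed, deleted in incrementals:
        files.update(changed)          # new or modified files win
        for name in deleted:           # honor deletions/renames
            files.pop(name, None)
    return files
```

Without the deletion records, the loop can only ever add files, which is exactly the incorrect behavior the item describes.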
Item  4: Implement a Bacula GUI/management tool using Python.
Date:    28 October 2005
Status:  Lucas is working on this for Python GTK+.

What:    Implement a Bacula console and management tools
         using Python and Qt or GTK.

Why:     Don't we already have a wxWidgets GUI? Yes, but
         it is written in C++ and changes to the user interface
         must be hand tailored using C++ code. By developing
         the user interface using Qt Designer, the interface
         can be very easily updated and most of the new Python
         code will be automatically created. The user interface
         changes become very simple, and only the new features
         must be implemented. In addition, the code will be in
         Python, which will give many more users easy (or easier)
         access to making additions or modifications.

Notes:   This is currently being implemented using Python-GTK by
         Lucas Di Pentima <lucas at lunix dot com dot ar>
Item  5: Implement Base jobs.
Date:    28 October 2005

What:    A base job is sort of like a Full save except that you
         will want the FileSet to contain only files that are
         unlikely to change in the future (i.e. a snapshot of
         most of your system after installing it). After the
         base job has been run, when you are doing a Full save,
         you specify one or more Base jobs to be used. All
         files that have been backed up in the Base job/jobs but
         not modified will then be excluded from the backup.
         During a restore, the Base jobs will be automatically
         pulled in where necessary.

Why:     This is something none of the competition does, as far as
         we know (except perhaps BackupPC, which is a Perl program that
         saves to disk only). It is a big win for the user: it
         makes Bacula stand out as offering a unique
         optimization that immediately saves time and money.
         Basically, imagine that you have 100 nearly identical
         Windows or Linux machines containing the OS and user
         files. Now for the OS part, a Base job will be backed
         up once, and rather than making 100 copies of the OS,
         there will be only one. If one or more of the systems
         have some files updated, no problem; they will be
         automatically restored.

Notes:   Huge savings in tape usage even for a single machine.
         Will require more resources because the DIR must send
         the FD a list of files/attribs, and the FD must search the
         list and compare it for each file to be saved.
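The FD-side comparison described in the Notes could look roughly like this. It is an illustrative sketch only: it assumes the DIR sends the FD a name-to-checksum map built from the Base job(s), and that the FD has computed the same kind of checksum for each candidate file.

```python
def files_to_back_up(candidates, base_list):
    """Return only the files the Full save must actually write.

    candidates: dict of name -> checksum for files on the client.
    base_list:  dict of name -> checksum from the Base job(s), as sent
                by the DIR (hypothetical protocol detail).
    A file is skipped only when its checksum matches the Base copy.
    """
    return {name: csum for name, csum in candidates.items()
            if base_list.get(name) != csum}
```

Unchanged OS files drop out of the backup, which is where the tape savings come from; modified files still flow through normally.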
Item  6: Allow FD to initiate a backup
Origin:  Frank Volf (frank at deze dot org)
Date:    17 November 2005

What:    Provide some means, possibly by a restricted console, that
         allows an FD to initiate a backup, and that uses the connection
         established by the FD to the Director for the backup so that
         a Director that is firewalled can do the backup.

Why:     Makes backup of laptops much easier.
Item  7: Improve Bacula's tape and drive usage and cleaning management.
Date:    8 November 2005, November 11, 2005
Origin:  Adam Thornton <athornton at sinenomine dot net>,
         Arno Lehmann <al at its-lehmann dot de>

What:    Make Bacula manage tape life cycle information, tape reuse
         times and drive cleaning cycles.

Why:     All three parts of this project are important when operating
         tape drives and libraries.
         We need to know which tapes need replacement, and we need to
         make sure the drives are cleaned when necessary. While many
         tape libraries and even autoloaders can handle all this
         automatically, support by Bacula can be helpful for smaller
         (older) libraries and single drives. Limiting the number of
         times a tape is used might prevent tape errors when using
         tapes until the drives can't read them any more. Also, checking
         drive status during operation can prevent some failures (as I
         [Arno] had to learn the hard way...)

Notes:   First, Bacula could (and even does, to some limited extent)
         record tape and drive usage. For tapes, the number of mounts,
         the amount of data, and the time the tape has actually been
         running could be recorded. Data fields for Read and Write
         time and Number of mounts already exist in the catalog (I'm
         not sure if VolBytes is the sum of all bytes ever written to
         that volume by Bacula). This information can be important
         when determining which media to replace. The ability to mark
         Volumes as "used up" after a given number of write cycles
         should also be implemented so that a tape is never actually
         worn out. For the tape drives known to Bacula, similar
         information is interesting to determine the device status and
         expected life time: time spent Reading and Writing, number
         of tape Loads / Unloads / Errors. This information is not yet
         recorded as far as I [Arno] know. A new volume status would
         be necessary for the new state, like "Used up" or "Worn out".
         Volumes with this state could be used for restores, but not
         for writing. These volumes should be migrated first (assuming
         migration is implemented) and, once they are no longer needed,
         could be moved to a Trash pool.

         The next step would be to implement a drive cleaning setup.
         Bacula already has knowledge about cleaning tapes. Once it
         has some information about cleaning cycles (measured in drive
         run time, number of tapes used, or calendar days, for example)
         it can automatically execute tape cleaning (with an
         autochanger, obviously) or ask for operator assistance loading
         a cleaning tape.

         The final step would be to implement TAPEALERT checks not only
         when changing tapes and only sending the information to the
         administrator, but rather checking after each tape error,
         checking on a regular basis (for example after each tape
         file), and also before unloading and after loading a new tape.
         Then, depending on the drive's TAPEALERT state and the known
         drive cleaning state, Bacula could automatically schedule later
         cleaning, clean immediately, or inform the operator.

         Implementing this would perhaps require another catalog change
         and perhaps major changes in SD code and the DIR-SD protocol,
         so I'd only consider this worth implementing if it would
         actually be used or even needed by many people.

         Implementation of these projects could happen in three distinct
         sub-projects: measuring tape and drive usage, retiring
         volumes, and handling drive cleaning and TAPEALERTs.
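The "Used up" decision in the first sub-project reduces to comparing recorded usage against configured limits. A minimal sketch, with purely illustrative threshold values (real limits would come from the media vendor and site policy, and the counters from the existing catalog fields):

```python
def volume_worn_out(mounts, write_time_hours,
                    max_mounts=5000, max_write_hours=10000):
    """Return True once any configured usage limit is reached, at which
    point the volume would get the new 'Used up' status: still readable
    for restores, never selected for writing again.

    mounts / write_time_hours: usage counters as recorded in the catalog.
    max_mounts / max_write_hours: hypothetical per-media limits.
    """
    return mounts >= max_mounts or write_time_hours >= max_write_hours
```

Keeping the check this simple is deliberate: the hard part of the project is recording the counters reliably, not evaluating them.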
Item  8: Implement creation and maintenance of copy pools
Date:    27 November 2005
Origin:  David Boyes (dboyes at sinenomine dot net)

What:    I would like Bacula to have the capability to write copies
         of backed-up data on multiple physical volumes selected
         from different pools without transferring the data
         multiple times, and to accept any of the copy volumes
         as valid for restore.

Why:     In many cases, businesses are required to keep offsite
         copies of backup volumes, or just wish for simple
         protection against a human operator dropping a storage
         volume and damaging it. The ability to generate multiple
         volumes in the course of a single backup job allows
         customers to simply check out one copy and send it
         offsite, marking it as out of changer or otherwise
         unavailable. Currently, the library and magazine
         management capability in Bacula does not make this process
         simple.

         Restores would use the copy of the data on the first
         available volume, in order of copy pool chain definition.

         This is also a major scalability issue -- as the number of
         clients increases beyond several thousand, and the volume
         of data increases, transferring the data multiple times to
         produce additional copies of the backups will become
         physically impossible due to transfer speed
         issues. Generating multiple copies at server side will
         become the only practical option.

How:     I suspect that this will require adding a multiplexing
         SD that appears to be an SD to a specific FD, but 1-n FDs
         to the specific back end SDs managing the primary and copy
         pools. Storage pools will also need to acquire parameters
         to define the pools to be used for copies.

Notes:   I would commit some of my developers' time if we can agree
         on the design and behavior.
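The core of the multiplexing-SD idea above is a fan-out: the client transfers each data block once, and the server side replicates it to every destination pool. A deliberately tiny sketch (the volume objects here are just lists standing in for real SD volume writers):

```python
def fan_out(stream, volumes):
    """Write each block from the FD's single data stream to every
    destination volume, so the client never sends the data twice.

    stream:  iterable of data blocks from one FD connection.
    volumes: list of writable destinations, one per primary/copy pool
             (illustrative stand-ins for back-end SD volumes).
    """
    for block in stream:
        for vol in volumes:
            vol.append(block)
```

Any of the resulting volumes is byte-identical, which is what makes "accept any of the copy volumes as valid for restore" possible.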
Item  9: Implement new {Client}Run{Before|After}Job feature.
Date:    26 September 2005
Origin:  Phil Stracchino
Status:  Done. This has been implemented by Eric Bollengier.

What:    Some time ago, there was a discussion of RunAfterJob and
         ClientRunAfterJob, and the fact that they do not run after failed
         jobs. At the time, there was a suggestion to add a
         RunAfterFailedJob directive (and, presumably, a matching
         ClientRunAfterFailedJob directive), but to my knowledge these
         were never implemented.

         The current implementation doesn't permit adding new features
         easily.

         An alternate way of approaching the problem has just occurred to
         me. Suppose the RunBeforeJob and RunAfterJob directives were
         expanded in a manner like this example:

           RunScript {
             Command = "/opt/bacula/etc/checkhost %c"
             RunsOnClient = No          # default
             AbortJobOnError = Yes      # default
           }
           RunScript {
             Command = c:/bacula/systemstate.bat
           }
           RunScript {
             Command = c:/bacula/deletestatefile.bat
           }

         It's now possible to specify more than one command per Job
         (you can stop your database and your webserver without a
         script):

           Job {
             JobDefs = "DefaultJob"
             Write Bootstrap = "/tmp/bacula/var/bacula/working/Client1.bsr"
             RunBeforeJob = "echo test before ; echo test before2"
             RunBeforeJob = "echo test before (2nd time)"
             RunBeforeJob = "echo test before (3rd time)"
             RunAfterJob = "echo test after"
             ClientRunAfterJob = "echo test after client"
             RunScript {
               Command = "echo test RunScript in error"
               RunsWhen = After         # never by default
             }
             RunScript {
               Command = "echo test RunScript on success"
               RunsWhen = After
               RunsOnSuccess = yes      # default
               RunsOnFailure = no       # default
             }
           }

Why:     It would be a significant change to the structure of the
         directives, but allows for a lot more flexibility, including
         RunAfter commands that will run regardless of whether the job
         succeeds, or RunBefore tasks that still allow the job to run even
         if that specific RunBefore fails.

Notes:   (More notes from Phil, Kern, David and Eric)
         I would prefer to have a single new Resource called RunScript:

           RunScript {
             RunsWhen = After|Before|Always
             RunsAtJobLevels = All|Full|Diff|Inc   # not yet implemented
           }

         The AbortJobOnError, RunsOnSuccess and RunsOnFailure directives
         could be optional, and possibly RunsWhen as well.

         AbortJobOnError would be ignored unless RunsWhen was set to Before
         and would default to Yes if omitted.
         If AbortJobOnError was set to No, failure of the script
         would still generate a warning.

         RunsOnSuccess would be ignored unless RunsWhen was set to After
         (or RunsBeforeJob set to No), and default to Yes.

         RunsOnFailure would be ignored unless RunsWhen was set to After,
         and would default to No.

         Allow having the before/after status on the script command
         line so that the same script can be used both before/after.
Item 10: Merge multiple backups (Synthetic Backup or Consolidation).
Origin:  Marc Cousin and Eric Bollengier
Date:    15 November 2005
Status:  Depends on first implementing project Item 2 (Migration).

What:    A merged backup is a backup made without connecting to the Client.
         It would be a Merge of existing backups into a single backup.
         In effect, it is like a restore but to the backup medium.

         For instance, say that last Sunday we made a full backup. Then
         all week long, we created incremental backups, in order to do
         them fast. Now comes Sunday again, and we need another full.
         The merged backup makes it possible to do instead an incremental
         backup (during the night for instance), and then create a merged
         backup during the day, by using the full and incrementals from
         the week. The merged backup will be exactly like a full made
         Sunday night on the tape, but the production interruption on the
         Client will be minimal, as the Client will only have to send
         an incremental.

         In fact, if it's done correctly, you could merge all the
         Incrementals into a single Incremental, or all the Incrementals
         and the last Differential into a new Differential, or the Full,
         last Differential and all the Incrementals into a new Full
         backup. And there is no need to involve the Client.

Why:     The benefits are:
         - the Client just does an incremental;
         - the merged backup on tape is just like a single Full backup,
           and can be restored very fast.

         This is also a way of reducing the backup data since the old
         data can then be pruned (or not) from the catalog, possibly
         allowing older volumes to be recycled.
Item 11: Deletion of Disk-Based Bacula Volumes
Origin:  Ross Boylan <RossBoylan at stanfordalumni dot org> (edited)

What:    Provide a way for Bacula to automatically remove Volumes
         from the filesystem, or optionally to truncate them.
         Obviously, the Volume must be pruned prior to removal.

Why:     This would allow users more control over their Volumes and
         prevent disk based volumes from consuming too much space.

Notes:   The following two directives might do the trick:

           Volume Data Retention = <time period>
           Remove Volume After = <time period>

         The migration project should also remove a Volume that is
         migrated. This might also work for tape Volumes.
Item 12: Directive/mode to backup only file changes, not entire file
Date:    11 November 2005
Origin:  Joshua Kugler <joshua dot kugler at uaf dot edu>
         Marek Bajon <mbajon at bimsplus dot com dot pl>

What:    Currently when a file changes, the entire file will be backed up in
         the next incremental or full backup. To save space on the tapes
         it would be nice to have a mode whereby only the changes to the
         file would be backed up when it is changed.

Why:     This would save lots of space when backing up large files such as
         logs, mbox files, Outlook PST files and the like.

Notes:   This would require the usage of disk-based volumes as comparing
         files would not be feasible using a tape drive.
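One plausible way to detect "only the changes" is fixed-block hashing: keep the block digests from the previous backup and ship only the blocks that differ. This is a simplified sketch (an rsync-style rolling checksum would additionally handle insertions that shift block boundaries); the block size is illustrative.

```python
import hashlib

BLOCK = 64 * 1024  # illustrative block size

def changed_blocks(data, prev_digests):
    """Split a file into fixed-size blocks and return (offset, block)
    pairs for every block whose SHA-1 digest differs from the digest
    recorded at the previous backup.

    prev_digests: list of hex digests, one per block, from last time.
    """
    out = []
    for i in range(0, len(data), BLOCK):
        block = data[i:i + BLOCK]
        idx = i // BLOCK
        if idx >= len(prev_digests) or \
                hashlib.sha1(block).hexdigest() != prev_digests[idx]:
            out.append((i, block))
    return out
```

The Notes are right that this only pays off on disk-based volumes: applying a delta at restore time means random access to the previous version, which a tape drive cannot provide.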
Item 13: Multiple threads in file daemon for the same job
Date:    27 November 2005
Origin:  Ove Risberg (Ove.Risberg at octocode dot com)

What:    I want the file daemon to start multiple threads for a backup
         job so the fastest possible backup can be made.

         The file daemon could parse the FileSet information and start
         one thread for each File entry located on a separate
         physical disk.

         A configuration option in the job section should be used to
         enable or disable this feature. The configuration option could
         specify the maximum number of threads in the file daemon.

         If the threads could spool the data to separate spool files
         the restore process will not be much slower.

Why:     Multiple concurrent backups of a large fileserver with many
         disks and controllers will be much faster.

Notes:   I am willing to try to implement this but I will probably
         need some help and advice. (No problem -- Kern)
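The thread-per-File-entry idea, with a configurable cap, maps directly onto a worker pool. A hypothetical sketch (the `backup_entry` body is a placeholder for reading and spooling one directory tree; in the real FD this would be C++ pthreads, not Python):

```python
from concurrent.futures import ThreadPoolExecutor

def backup_entry(path):
    # Placeholder: read the tree under 'path' and spool it to its own
    # spool file, as suggested in the item.
    return (path, "spooled")

def parallel_backup(entries, max_threads=4):
    """Run one worker per FileSet File entry, capped at max_threads
    (the proposed job-section configuration option)."""
    with ThreadPoolExecutor(max_workers=max_threads) as pool:
        return list(pool.map(backup_entry, entries))
```

Capping the pool matters: more threads than physical disks just turns sequential reads into seek-bound thrashing.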
Item 14: Implement red/black binary tree routines.
Date:    28 October 2005
Status:  Class code is complete. Code needs to be integrated into
         Bacula.

What:    Implement a red/black binary tree class. This could
         then replace the current binary insert/search routines
         used in the restore in-memory tree. This could significantly
         speed up the creation of the in-memory restore tree.

Why:     Performance enhancement.
Item 15: Add support for FileSets in user directories CACHEDIR.TAG
Origin:  Norbert Kiesel <nkiesel at tbdnetworks dot com>
Date:    21 November 2005
Status:  (I think this is better done using a Python event that I
         will implement in version 1.39.x).

What:    CACHEDIR.TAG is a proposal for identifying directories which
         should be ignored for archiving/backup. It works by ignoring
         directory trees which have a file named CACHEDIR.TAG with a
         specific content. See
         http://www.brynosaurus.com/cachedir/spec.html

         I suggest that if this is implemented (I've also asked for this
         feature some years ago) that it is made compatible with Legato
         NetWorker's ".nsr" files, where you can specify a lot of options
         on how to handle files/directories (including denying further
         parsing of .nsr files lower down into the directory trees). A
         PDF version of the .nsr man page can be viewed at:
         http://www.ifm.liu.se/~peter/nsr.pdf

Why:     It's a nice alternative to "exclude" patterns for directories
         which don't have regular pathnames. Also, it allows users to
         control backup for themselves. Implementation should be pretty
         simple. GNU tar >= 1.14 or so supports it, too.

Notes:   I envision this as an optional feature of a FileSet.
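The detection side is indeed simple: per the spec cited above, a directory is a cache directory when it contains a file named CACHEDIR.TAG whose content starts with a fixed signature line. A minimal sketch:

```python
import os

# Signature line defined by the CACHEDIR.TAG specification.
SIGNATURE = b"Signature: 8a477f597d28d172789f06886806bc55"

def is_cache_dir(path):
    """True if 'path' carries a valid CACHEDIR.TAG, i.e. the tag file
    exists and begins with the spec's signature line.  The FD would
    skip such a directory tree during backup."""
    tag = os.path.join(path, "CACHEDIR.TAG")
    try:
        with open(tag, "rb") as f:
            return f.read(len(SIGNATURE)) == SIGNATURE
    except OSError:
        return False
```

Walking the FileSet, the FD would call this once per directory and prune the tree on a match, which is exactly how GNU tar's `--exclude-caches` behaves.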
Item 16: Implement extraction of Win32 BackupWrite data.
Origin:  Thorsten Engel <thorsten.engel at matrix-computer dot com>
Date:    28 October 2005
Status:  Done. Assigned to Thorsten. Implemented in current CVS.

What:    This provides the Bacula File daemon with code that
         can pick apart the stream output that Microsoft writes
         for BackupWrite data, and thus the data can be read
         and restored on non-Win32 machines.

Why:     BackupWrite data is the portable=no option in Win32
         FileSets, and in previous Baculas, this data could
         only be extracted using a Win32 FD. With this new code,
         the Windows data can be extracted and restored on
         non-Win32 machines.
Item 17: Implement a Python interface to the Bacula catalog.
Date:    28 October 2005

What:    Implement an interface for Python scripts to access
         the catalog through Bacula.

Why:     This will permit users to customize Bacula through
         Python scripts.
Item 18: Archival (removal) of User Files to Tape
Origin:  Ray Pengelly [ray at biomed dot queensu dot ca]

What:    The ability to archive data to storage based on certain parameters
         such as age, size, or location. Once the data has been written to
         storage and logged it is then pruned from the originating
         filesystem. Note! We are talking about user's files and not
         system files.

Why:     This would allow fully automatic storage management which becomes
         useful for large datastores. It would also allow for auto-staging
         from one media type to another.

         Example 1) Medical imaging needs to store large amounts of data.
         They decide to keep data on their servers for 6 months and then
         put it away for long term storage. The server then finds all files
         older than 6 months and writes them to tape. The files are then
         removed from disk.

         Example 2) All data that hasn't been accessed in 2 months could be
         moved from high-cost, fibre-channel disk storage to a low-cost
         large-capacity SATA disk storage pool which doesn't have as quick
         an access time. Then after another 6 months (or possibly as one
         storage pool gets full) data is migrated to Tape.
Item 19: Add Plug-ins to the FileSet Include statements.
Date:    28 October 2005
Status:  Partially coded in 1.37 -- much more to do.

What:    Allow users to specify wild-card and/or regular
         expressions to be matched in both the Include and
         Exclude directives in a FileSet. At the same time,
         allow users to define plug-ins to be called (based on
         regular expression/wild-card matching).

Why:     This would give the users the ultimate ability to control
         how files are backed up/restored. A user could write a
         plug-in that knows how to backup his Oracle database without
         stopping/starting it, for example.
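The pattern-to-plug-in dispatch could be as simple as a first-match table. Everything here is hypothetical naming for illustration: `oracle_plugin` stands in for a user-supplied handler, and non-matching paths fall through to the normal file backup path.

```python
import re

def default_backup(path):
    # Normal FD behavior: back the file up as a plain file.
    return ("file", path)

def oracle_plugin(path):
    # Hypothetical user plug-in that knows how to back up Oracle
    # datafiles without stopping the database.
    return ("oracle", path)

# Regex patterns from the FileSet Include mapped to plug-in callables;
# first match wins.
PLUGINS = [
    (re.compile(r".*\.dbf$"), oracle_plugin),
]

def dispatch(path):
    for pattern, plugin in PLUGINS:
        if pattern.match(path):
            return plugin(path)
    return default_backup(path)
```

Keeping the table ordered lets more specific patterns shadow general ones, the same precedence rule a FileSet's Include/Exclude lines already follow.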
Item 20: Implement more Python events in Bacula.
Date:    28 October 2005

What:    Allow Python scripts to be called at more places
         within Bacula and provide additional access to Bacula
         internals.

Why:     This will permit users to customize Bacula through
         Python scripts.

         Also add a way to get a listing of currently running
         jobs (possibly also scheduled jobs).
Item 21: Quick release of FD-SD connection after backup.
Origin:  Frank Volf (frank at deze dot org)
Date:    17 November 2005

What:    In the Bacula implementation a backup is finished after all data
         and attributes are successfully written to storage. When using a
         tape backup it is very annoying that a backup can take a day,
         simply because the current tape (or whatever) is full and the
         administrator has not put a new one in. During that time the
         system cannot be taken off-line, because there is still an open
         session between the storage daemon and the file daemon on the
         client.

         Although this is a very good strategy for making "safe backups",
         it can be annoying for e.g. laptops, which must remain
         connected until the backup is completed.

         Using a new feature called "migration" it will be possible to
         spool first to harddisk (using a special 'spool' migration
         scheme) and then migrate the backup to tape.

         There is still the problem of getting the attributes committed.
         If it takes a very long time to do, with the current code, the
         job has not terminated, and the File daemon is not freed up. The
         Storage daemon should release the File daemon as soon as all the
         file data and all the attributes have been sent to it (the SD).
         Currently the SD waits until everything is on tape and all the
         attributes are transmitted to the Director before signaling
         completion to the FD. I don't think I would have any problem
         changing this. The reason is that even if the FD reports back to
         the Dir that all is OK, the job will not terminate until the SD
         has done the same thing -- so in a way keeping the SD-FD link
         open to the very end is not really very productive ...

Why:     Makes backup of laptops much easier.
Item 22: Permit multiple Media Types in an Autochanger
Status:  Done. Implemented in 1.38.9 (I think).

What:    Modify the Storage daemon so that multiple Media Types
         can be specified in an autochanger. This would be somewhat
         of a simplistic implementation in that each drive would
         still be allowed to have only one Media Type. However,
         the Storage daemon will ensure that only a drive with
         the Media Type that matches what the Director specifies
         is used.

Why:     This will permit users with several different drive types
         to make full use of their autochangers.
Item 23: Allow different autochanger definitions for one autochanger.
Date:    28 October 2005

What:    Currently, the autochanger script is locked based on
         the autochanger. That is, if multiple drives are being
         simultaneously used, the Storage daemon ensures that only
         one drive at a time can access the mtx-changer script.
         This change would base the locking on the control device,
         rather than the autochanger. It would then permit two autochanger
         definitions for the same autochanger, but with different
         drives. Logically, the autochanger could then be "partitioned"
         for different jobs, clients, or classes of jobs, and if the
         locking is based on the control device (e.g. /dev/sg0) the
         mtx-changer script will be locked appropriately.

Why:     This will permit users to partition autochangers for specific
         use. It would also permit implementation of multiple Media
         Types with no changes to the Storage daemon.
Item 24: Automatic disabling of devices
Origin:  Peter Eriksson <peter at ifm.liu dot se>

What:    After a configurable number of fatal errors with a tape drive
         Bacula should automatically disable further use of that
         tape drive. There should also be "disable"/"enable" commands in
         the console.

Why:     On a multi-drive jukebox there is a possibility of tape drives
         going bad during large backups (needing a cleaning tape run,
         tapes getting stuck). It would be advantageous if Bacula would
         automatically disable further use of a problematic tape drive
         after a configurable number of errors has occurred.

         An example: I have a multi-drive jukebox (6 drives, 380+ slots)
         where tapes occasionally get stuck inside the drive. Bacula will
         notice that the "mtx-changer" command will fail and then fail
         any backup jobs trying to use that drive. However, it will still
         keep on trying to run new jobs using that drive and fail --
         forever, and thus failing lots and lots of jobs... Since we have
         many drives, Bacula could have just automatically disabled
         further use of that drive and used one of the other ones
         instead.
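The requested behavior is a small state machine per drive: count fatal errors, trip a disabled flag at a configurable threshold, and let the console re-enable the drive. A hypothetical sketch (in Bacula this state would live in the SD's device structure):

```python
class DriveGuard:
    """Track fatal errors for one tape drive and disable it after a
    configurable number of failures.  enable() models the proposed
    console 'enable' command; threshold is illustrative."""

    def __init__(self, max_errors=3):
        self.max_errors = max_errors
        self.errors = 0
        self.enabled = True

    def record_error(self):
        self.errors += 1
        if self.errors >= self.max_errors:
            self.enabled = False   # stop scheduling jobs on this drive

    def enable(self):
        self.errors = 0
        self.enabled = True
```

With the flag tripped, the scheduler would simply skip the drive when picking a device, so new jobs fail over to the remaining drives instead of failing forever.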
Item 25: Implement huge exclude list support using hashing.
Date:    28 October 2005

What:    Allow users to specify very large exclude lists (currently
         more than about 1000 files is too many).

Why:     This would give the users the ability to exclude all
         files that are loaded with the OS (e.g. using rpms
         or debs). If the user can restore the base OS from
         CDs, there is no need to backup all those files. A
         complete restore would be to restore the base OS, then
         do a Bacula restore. By excluding the base OS files, the
         backup set will be *much* smaller.
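The hashing approach above amounts to loading the exclude list into a hash set once, so each membership test is O(1) instead of a linear scan over thousands of entries. A minimal sketch (the path list would come from e.g. the package manager's file database):

```python
def build_exclude_set(paths):
    """Load the exclude list once into a hash set; lookups then stay
    constant-time no matter how many entries the list has."""
    return frozenset(paths)

def is_excluded(path, exclude_set):
    # O(1) hash lookup per file examined during the backup walk.
    return path in exclude_set
```

This handles exact-path excludes only; wild-card and regex patterns would still need the existing matching code, with the hash set as a fast path in front of it.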
============= Empty Feature Request form ===========
Item  n: One line summary ...
Origin:  Name and email of originator.

What:    More detailed explanation ...

Why:     Why it is important ...

Notes:   Additional notes or features (omit if not used)
============== End Feature Request form ==============
===============================================
Feature requests submitted after cutoff for December 2005 vote
===============================================

Item  n: Allow skipping execution of Jobs
Date:    29 November 2005
Origin:  Florian Schnabel <florian.schnabel at docufy dot de>

What:    An easy option to skip a certain job on a certain date.

Why:     You could then easily skip tape backups on holidays. Especially
         if you have no autochanger and can only fit one backup on a
         tape, that would be really handy; other jobs could proceed
         normally and you won't get errors that way.
===================================================

Item  n: Archive to removable media (CD/DVD) in an uncompressed format
Origin:  Calvin Streeting <calvin at absentdream dot com>

What:    The ability to archive to media (DVD/CD) in an uncompressed
         format for dead filing (archiving, not backing up).

Why:     At my work, when jobs are finished they are moved off of the
         main file servers (RAID based systems) onto a simple Linux file
         server (IDE based system) so users can find old information
         without contacting the IT department.

         This data doesn't really change, it only gets added to, but it
         also needs backing up. At the moment it takes about 8 hours to
         back up our servers (working data), so rather than add more
         time to existing backups I am trying to implement a system
         where we back up the archive data to CD/DVD. These disks would
         only need to be appended to (burn only new/changed files to new
         disks for off-site storage) -- basically, understand the
         difference between archive data and live data.

Notes:   Scan the data and email me when it needs burning; divide it
         into predefined chunks; keep a record of what is on what disk;
         make me a label (simple php->mysql=>pdf stuff -- I could do
         this bit); ability to save data uncompressed so it can be read
         on any other system (future proof data); save the catalog with
         the disk as some kind of menu.
<? require_once("inc/footer.php"); ?>