Bacula Projects Roadmap
(prioritized by user vote)
Item 1:  Implement data encryption (as opposed to comm encryption)
Item 2:  Implement Migration that moves Jobs from one Pool to another.
Item 3:  Accurate restoration of renamed/deleted files from
         Incremental/Differential backups
Item 4:  Implement a Bacula GUI/management tool using Python.
Item 5:  Implement Base jobs.
Item 6:  Allow FD to initiate a backup
Item 7:  Improve Bacula's tape and drive usage and cleaning management.
Item 8:  Implement creation and maintenance of copy pools
Item 9:  Implement new {Client}Run{Before|After}Job feature.
Item 10: Merge multiple backups (Synthetic Backup or Consolidation).
Item 11: Deletion of Disk-Based Bacula Volumes
Item 12: Directive/mode to backup only file changes, not entire file
Item 13: Multiple threads in file daemon for the same job
Item 14: Implement red/black binary tree routines.
Item 15: Add support for FileSets in user directories (CACHEDIR.TAG)
Item 16: Implement extraction of Win32 BackupWrite data.
Item 17: Implement a Python interface to the Bacula catalog.
Item 18: Archival (removal) of User Files to Tape
Item 19: Add Plug-ins to the FileSet Include statements.
Item 20: Implement more Python events in Bacula.
Item 21: Quick release of FD-SD connection after backup.
Item 22: Permit multiple Media Types in an Autochanger
Item 23: Allow different autochanger definitions for one autochanger.
Item 24: Automatic disabling of devices
Item 25: Implement huge exclude list support using hashing.
Below, you will find more information on future projects:
Item 1:  Implement data encryption (as opposed to comm encryption)
Origin:  Sponsored by Landon and 13 contributors to EFF.
Status:  Landon Fuller is currently implementing this.

What:    Currently the data that is stored on the Volume is not
         encrypted. For confidentiality, encryption of data at the
         File daemon level is essential. Data encryption encrypts
         the data in the File daemon and decrypts the data in the
         File daemon during a restore.

Why:     Large sites require this.
Item 2:  Implement Migration that moves Jobs from one Pool to another.
Origin:  Sponsored by Riege Software International GmbH. Contact:
         Daniel Holtkamp <holtkamp at riege dot com>
Status:  Partially coded in 1.37 -- much more to do. Assigned to

What:    The ability to copy, move, or archive data that is on a
         device to another device is very important.

Why:     An ISP might want to backup to disk, but after 30 days
         migrate the data to tape backup and delete it from disk.
         Bacula should be able to handle this automatically. It
         needs to know what was put where, and when, and what to
         migrate -- it is a bit like retention periods. Doing so
         would allow space to be freed up for current backups
         while maintaining older backups.

Notes:   Riege Software has asked for the following migration
         triggers:
         Highwater mark (stopped by Lowwater mark?)

Notes:   Migration could be additionally triggered by:
Item 3:  Accurate restoration of renamed/deleted files from
         Incremental/Differential backups
Date:    28 November 2005
Origin:  Martin Simmons (martin at lispworks dot com)

What:    When restoring a fileset for a specified date (including
         "most recent"), Bacula should give you exactly the files
         and directories that existed at the time of the last
         backup prior to that date.

         Currently this only works if the last backup was a Full
         backup. When the last backup was Incremental/Differential,
         files and directories that have been renamed or deleted
         since the last Full backup are not currently restored
         correctly. Ditto for files with extra/fewer hard links
         than at the time of the last Full backup.

Why:     Incremental/Differential would be much more useful if
         this worked.

Notes:   Item 10 (merging of multiple backups into a single one)
         seems to rely on this working, otherwise the merged
         backups will not be truly equivalent to a Full backup.

         Kern: notes shortened. This can be done without the need
         for inodes. It is essentially the same as the current
         Verify job, but one additional database record must be
         written, which does not need any database change.

         Kern: see if we can correct restoration of directories if
         replace=ifnewer is set. Currently, if the directory does
         not exist, a "dummy" directory is created, then when all
         the files are updated, the dummy directory is newer so
         the real values are not restored.
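The point-in-time logic can be pictured as a replay over catalog records. This is only a sketch: the record shapes below are invented for illustration and are not Bacula's catalog schema. The key idea matches the extra per-file record Kern mentions: each accurate backup lists every file that existed at backup time, so names missing from a later listing are treated as renamed or deleted.

```python
def files_at(backups):
    """Compute the exact file set at the time of the last backup.

    backups: ordered list of (level, {path: attrs}), Full first.
    For an accurate Incremental/Differential, the listing names
    every file that existed at backup time; attrs of None mean
    "unchanged since an earlier backup", so old attrs carry forward.
    """
    current = {}
    for level, listing in backups:
        nxt = {}
        for path, attrs in listing.items():
            nxt[path] = attrs if attrs is not None else current.get(path)
        # Paths absent from the listing drop out: renamed or deleted.
        current = nxt
    return current

full = ("Full", {"/a": 1, "/b": 2})
inc = ("Incr", {"/a": None, "/c": 3})   # /b deleted, /c new, /a unchanged
print(files_at([full, inc]))            # /b is correctly gone
```

The same replay gives "most recent" restores for free: feed in every backup up to the requested date and restore exactly what `files_at` returns.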
Item 4:  Implement a Bacula GUI/management tool using Python.
Date:    28 October 2005

What:    Implement a Bacula console and management tools using
         Python and Qt or GTK.

Why:     Don't we already have a wxWidgets GUI? Yes, but it is
         written in C++ and changes to the user interface must be
         hand tailored using C++ code. By developing the user
         interface using Qt designer, the interface can be very
         easily updated and most of the new Python code will be
         automatically created. The user interface changes become
         very simple, and only the new features must be
         implemented. In addition, the code will be in Python,
         which will give many more users easy (or easier) access
         to making additions or modifications.

Notes:   This is currently being implemented using Python-GTK by
         Lucas Di Pentima <lucas at lunix dot com dot ar>
Item 5:  Implement Base jobs.
Date:    28 October 2005

What:    A base job is sort of like a Full save except that you
         will want the FileSet to contain only files that are
         unlikely to change in the future (i.e. a snapshot of most
         of your system after installing it). After the base job
         has been run, when you are doing a Full save, you specify
         one or more Base jobs to be used. All files that have
         been backed up in the Base job/jobs but not modified will
         then be excluded from the backup. During a restore, the
         Base jobs will be automatically pulled in where
         necessary.

Why:     This is something none of the competition does, as far as
         we know (except perhaps BackupPC, which is a Perl program
         that saves to disk only). It is a big win for the user;
         it makes Bacula stand out as offering a unique
         optimization that immediately saves time and money.
         Basically, imagine that you have 100 nearly identical
         Windows or Linux machines containing the OS and user
         files. Now for the OS part, a Base job will be backed up
         once, and rather than making 100 copies of the OS, there
         will be only one. If one or more of the systems have some
         files updated, no problem, they will be automatically
         restored.

Notes:   Huge savings in tape usage even for a single machine.
         Will require more resources because the DIR must send the
         FD a list of files/attribs, and the FD must search the
         list and compare it for each file to be saved.
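The FD-side comparison described in the Notes can be sketched as follows. The attribute tuple (size, mtime) is a simplification chosen for illustration; the real comparison would use whatever attributes the catalog records.

```python
def files_to_save(client_files, base_listing):
    """Return only the files not covered by the Base job(s).

    client_files, base_listing: {path: (size, mtime)}.
    A file is skipped when it appears in the base listing with
    identical attributes; new or modified files are kept.
    """
    return {path: attrs
            for path, attrs in client_files.items()
            if base_listing.get(path) != attrs}

base = {"/bin/sh": (100, 1), "/etc/motd": (10, 1)}
now = {"/bin/sh": (100, 1), "/etc/motd": (12, 2), "/home/f": (5, 3)}
print(files_to_save(now, base))   # /bin/sh excluded, rest saved
```

A restore would then walk the opposite direction: any path not in the Full's own records is pulled from the referenced Base job, which is what "automatically pulled in where necessary" amounts to.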
Item 6:  Allow FD to initiate a backup
Origin:  Frank Volf (frank at deze dot org)
Date:    17 November 2005

What:    Provide some means, possibly by a restricted console that
         allows a FD to initiate a backup, and that uses the
         connection established by the FD to the Director for the
         backup so that a Director that is firewalled can do the
         backup.

Why:     Makes backup of laptops much easier.
Item 7:  Improve Bacula's tape and drive usage and cleaning management.
Date:    8 November 2005, November 11, 2005
Origin:  Adam Thornton <athornton at sinenomine dot net>,
         Arno Lehmann <al at its-lehmann dot de>

What:    Make Bacula manage tape life cycle information, tape
         reuse times and drive cleaning cycles.

Why:     All three parts of this project are important when
         operating tape drives and libraries. We need to know
         which tapes need replacement, and we need to make sure
         the drives are cleaned when necessary. While many tape
         libraries and even autoloaders can handle all this
         automatically, support by Bacula can be helpful for
         smaller (older) libraries and single drives. Limiting the
         number of times a tape is used might prevent the tape
         errors that come from using tapes until the drives can't
         read them any more. Also, checking drive status during
         operation can prevent some failures (as I [Arno] had to
         learn the hard way...)

Notes:   First, Bacula could (and even does, to some limited
         extent) record tape and drive usage. For tapes, the
         number of mounts, the amount of data, and the time the
         tape has actually been running could be recorded. Data
         fields for Read and Write time and Number of mounts
         already exist in the catalog (I'm not sure if VolBytes is
         the sum of all bytes ever written to that volume by
         Bacula). This information can be important when
         determining which media to replace. The ability to mark
         Volumes as "used up" after a given number of write cycles
         should also be implemented so that a tape is never
         actually worn out. For the tape drives known to Bacula,
         similar information is interesting to determine the
         device status and expected life time: time it's been
         Reading and Writing, number of tape Loads / Unloads /
         Errors. This information is not yet recorded as far as I
         [Arno] know. A new volume status would be necessary for
         the new state, like "Used up" or "Worn out". Volumes with
         this state could be used for restores, but not for
         writing. These volumes should be migrated first (assuming
         migration is implemented) and, once they are no longer
         needed, could be moved to a Trash pool.

         The next step would be to implement a drive cleaning
         setup. Bacula already has knowledge about cleaning tapes.
         Once it has some information about cleaning cycles
         (measured in drive run time, number of tapes used, or
         calendar days, for example) it can automatically execute
         tape cleaning (with an autochanger, obviously) or ask for
         operator assistance loading a cleaning tape.

         The final step would be to implement TAPEALERT checks not
         only when changing tapes and only sending the information
         to the administrator, but rather checking after each tape
         error, checking on a regular basis (for example after
         each tape file), and also before unloading and after
         loading a new tape. Then, depending on the drive's
         TAPEALERT state and the known drive cleaning state,
         Bacula could automatically schedule later cleaning, clean
         immediately, or inform the operator.

         Implementing this would perhaps require another catalog
         change and perhaps major changes in SD code and the
         DIR-SD protocol, so I'd only consider this worth
         implementing if it would actually be used or even needed
         by many people.

         Implementation of these projects could happen in three
         distinct sub-projects: measuring Tape and Drive usage,
         retiring volumes, and handling drive cleaning and
         TAPEALERTs.
Item 8:  Implement creation and maintenance of copy pools
Date:    27 November 2005
Origin:  David Boyes (dboyes at sinenomine dot net)

What:    I would like Bacula to have the capability to write
         copies of backed-up data on multiple physical volumes
         selected from different pools without transferring the
         data multiple times, and to accept any of the copy
         volumes as valid for restore.

Why:     In many cases, businesses are required to keep offsite
         copies of backup volumes, or just wish for simple
         protection against a human operator dropping a storage
         volume and damaging it. The ability to generate multiple
         volumes in the course of a single backup job allows
         customers to simply check out one copy and send it
         offsite, marking it as out of changer or otherwise
         unavailable. Currently, the library and magazine
         management capability in Bacula does not make this
         process simple.

         Restores would use the copy of the data on the first
         available volume, in order of copy pool chain definition.

         This is also a major scalability issue -- as the number
         of clients increases beyond several thousand, and the
         volume of data increases, transferring the data multiple
         times to produce additional copies of the backups will
         become physically impossible due to transfer speed
         issues. Generating multiple copies at server side will
         become the only practical option.

How:     I suspect that this will require adding a multiplexing
         SD that appears to be a SD to a specific FD, but 1-n FDs
         to the specific back end SDs managing the primary and
         copy pools. Storage pools will also need to acquire
         parameters to define the pools to be used for copies.

Notes:   I would commit some of my developers' time if we can
         agree on the design and behavior.
Item 9:  Implement new {Client}Run{Before|After}Job feature.
Date:    26 September 2005
Origin:  Phil Stracchino <phil.stracchino at speakeasy dot net>

What:    Some time ago, there was a discussion of RunAfterJob and
         ClientRunAfterJob, and the fact that they do not run
         after failed jobs. At the time, there was a suggestion to
         add a RunAfterFailedJob directive (and, presumably, a
         matching ClientRunAfterFailedJob directive), but to my
         knowledge these were never implemented.

         An alternate way of approaching the problem has just
         occurred to me. Suppose the RunBeforeJob and RunAfterJob
         directives were expanded in a manner something like this
         example:

           Command = "/opt/bacula/etc/checkhost %c"
           RunsAtJobLevels = All      # All, Full, Diff, Inc
           AbortJobOnError = Yes

           Command = c:/bacula/systemstate.bat
           RunsAtJobLevels = All      # All, Full, Diff, Inc

           Command = c:/bacula/deletestatefile.bat
           RunsAtJobLevels = All      # All, Full, Diff, Inc

           Command = c:/bacula/somethingelse.bat
           RunsAtJobLevels = All

           Command = "/opt/bacula/etc/checkhost -v %c"
           RunsAtJobLevels = All

Why:     It would be a significant change to the structure of the
         directives, but allows for a lot more flexibility,
         including RunAfter commands that will run regardless of
         whether the job succeeds, or RunBefore tasks that still
         allow the job to run even if that specific RunBefore
         fails.

Notes:   By Kern: I would prefer to have a single new Resource
         called RunScript. More notes from Phil:

           RunBeforeJob = yes|no
           RunsAtJobLevels = All|Full|Diff|Inc

         The AbortJobOnError, RunsOnSuccess and RunsOnFailure
         directives could be optional, and possibly RunsWhen as
         well.

         AbortJobOnError would be ignored unless RunsWhen was set
         to Before (or RunsBeforeJob set to Yes), and would
         default to Yes if omitted. If AbortJobOnError was set to
         No, failure of the script would still generate a warning.

         RunsOnSuccess would be ignored unless RunsWhen was set to
         After (or RunsBeforeJob set to No), and default to Yes.

         RunsOnFailure would be ignored unless RunsWhen was set to
         After.

         Allow having the before/after status on the script
         command line so that the same script can be used both
         before/after.
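Pulling Kern's RunScript suggestion and Phil's directives together, a single resource might look like the following. This is a hypothetical sketch of the proposed syntax only -- none of these directives exist yet, and the defaults shown are the ones discussed above, not implemented behavior:

```
  RunScript {
    Command = "/opt/bacula/etc/checkhost %c"  # %c = client name
    RunsWhen = Before                         # Before | After
    RunsAtJobLevels = All                     # All | Full | Diff | Inc
    AbortJobOnError = Yes                     # meaningful only when RunsWhen = Before
    RunsOnSuccess = Yes                       # meaningful only when RunsWhen = After
    RunsOnFailure = No                        # meaningful only when RunsWhen = After
  }
```

Several RunScript resources per Job would cover every combination in the example above (before/after, success/failure) without multiplying directive names.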
Item 10: Merge multiple backups (Synthetic Backup or Consolidation).
Origin:  Marc Cousin and Eric Bollengier
Date:    15 November 2005
Status:  Depends on first implementing project Item 2 (Migration).

What:    A merged backup is a backup made without connecting to
         the Client. It would be a Merge of existing backups into
         a single backup. In effect, it is like a restore but to
         the backup medium.

         For instance, say that last Sunday we made a full backup.
         Then all week long, we created incremental backups, in
         order to do them fast. Now comes Sunday again, and we
         need another full. The merged backup makes it possible to
         do instead an incremental backup (during the night for
         instance), and then create a merged backup during the
         day, by using the full and incrementals from the week.
         The merged backup will be exactly like a full made Sunday
         night on the tape, but the production interruption on the
         Client will be minimal, as the Client will only have to
         send an incremental.

         In fact, if it's done correctly, you could merge all the
         Incrementals into a single Incremental, or all the
         Incrementals and the last Differential into a new
         Differential, or the Full, last Differential and all the
         Incrementals into a new Full backup. And there is no need
         to involve the Client.

Why:     The benefit is that:
         - the Client just does an incremental;
         - the merged backup on tape is just like a single full
           backup, and can be restored very fast.

         This is also a way of reducing the backup data since the
         old data can then be pruned (or not) from the catalog,
         possibly allowing older volumes to be recycled.
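The merge rule itself is simple to sketch: overlay the backups in order, newest record per path winning, with no Client involved. Backup contents are modeled here as plain {path: data} dicts purely for illustration; a real job would stream records between Volumes.

```python
def merge(full, incrementals):
    """Build a synthetic Full from a Full plus ordered Incrementals."""
    synthetic = dict(full)
    for inc in incrementals:          # oldest -> newest
        synthetic.update(inc)         # a newer version of a path wins
    return synthetic

full = {"/a": "v1", "/b": "v1"}
incs = [{"/a": "v2"}, {"/c": "v1"}]
print(merge(full, incs))
```

Note that a plain overlay like this never removes a path, which is exactly why Item 3 (accurate recording of renamed/deleted files) is needed before the merged backup can be truly equivalent to a real Full.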
Item 11: Deletion of Disk-Based Bacula Volumes
Origin:  Ross Boylan <RossBoylan at stanfordalumni dot org> (edited)

What:    Provide a way for Bacula to automatically remove Volumes
         from the filesystem, or optionally to truncate them.
         Obviously, the Volume must be pruned prior to removal.

Why:     This would allow users more control over their Volumes
         and prevent disk based volumes from consuming too much
         space.

Notes:   The following two directives might do the trick:

           Volume Data Retention = <time period>
           Remove Volume After = <time period>

         The migration project should also remove a Volume that is
         migrated. This might also work for tape Volumes.
Item 12: Directive/mode to backup only file changes, not entire file
Date:    11 November 2005
Origin:  Joshua Kugler <joshua dot kugler at uaf dot edu>
         Marek Bajon <mbajon at bimsplus dot com dot pl>

What:    Currently when a file changes, the entire file will be
         backed up in the next incremental or full backup. To save
         space on the tapes it would be nice to have a mode
         whereby only the changes to the file would be backed up
         when it is changed.

Why:     This would save lots of space when backing up large files
         such as logs, mbox files, Outlook PST files and the like.

Notes:   This would require the usage of disk-based volumes as
         comparing files would not be feasible using a tape drive.
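One common way to detect "only the changes" is block-level hashing: split the file into fixed-size blocks, hash each, and store only blocks whose hash differs from the previous run. The block size and the choice of SHA-1 below are illustrative assumptions, not anything Bacula prescribes.

```python
import hashlib

BLOCK = 64 * 1024  # illustrative block size

def block_hashes(data):
    """Hash each fixed-size block of the file's content."""
    return [hashlib.sha1(data[i:i + BLOCK]).hexdigest()
            for i in range(0, len(data), BLOCK)]

def changed_blocks(old_hashes, data):
    """Return {block_index: bytes} for blocks that differ from last run."""
    new_hashes = block_hashes(data)
    return {i: data[i * BLOCK:(i + 1) * BLOCK]
            for i, h in enumerate(new_hashes)
            if i >= len(old_hashes) or old_hashes[i] != h}
```

A restore must then reassemble the file from the last full copy plus every stored delta, which is why (as the Notes say) random-access disk volumes are a practical prerequisite.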
Item 13: Multiple threads in file daemon for the same job
Date:    27 November 2005
Origin:  Ove Risberg (Ove.Risberg at octocode dot com)

What:    I want the file daemon to start multiple threads for a
         backup job so the fastest possible backup can be made.

         The file daemon could parse the FileSet information and
         start one thread for each File entry located on a
         separate filesystem.

         A configuration option in the job section should be used
         to enable or disable this feature. The configuration
         option could specify the maximum number of threads in the
         file daemon.

         If the threads could spool the data to separate spool
         files the restore process will not be much slower.

Why:     Multiple concurrent backups of a large fileserver with
         many disks and controllers will be much faster.

Notes:   I am willing to try to implement this but I will probably
         need some help and advice. (No problem -- Kern)
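The thread-per-File-entry idea can be sketched with a worker pool. Hashing the tree's contents stands in for "read it and ship it to the SD"; the split of entries across a bounded number of threads is the point being illustrated, not the I/O details.

```python
import concurrent.futures
import hashlib
import os

def backup_entry(path):
    """Read everything under one File entry (stand-in for sending to SD)."""
    digest = hashlib.sha256()
    for root, _dirs, files in os.walk(path):
        for name in sorted(files):
            with open(os.path.join(root, name), "rb") as f:
                digest.update(f.read())
    return path, digest.hexdigest()

def parallel_backup(file_entries, max_threads=4):
    """One worker per File entry, capped at the configured maximum."""
    with concurrent.futures.ThreadPoolExecutor(max_threads) as pool:
        return dict(pool.map(backup_entry, file_entries))
```

With entries on separate disks/controllers the reads overlap, which is where the speedup in the Why paragraph comes from; entries on the same spindle would mostly just contend.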
Item 14: Implement red/black binary tree routines.
Date:    28 October 2005

What:    Implement a red/black binary tree class. This could then
         replace the current binary insert/search routines used in
         the restore in-memory tree. This could significantly
         speed up the creation of the in-memory restore tree.

Why:     Performance enhancement.
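The gain comes from the fact that file names often arrive in nearly sorted order, which degenerates a plain binary tree into a linked list; a red/black tree stays balanced regardless of insertion order. A minimal sketch of the insertion side, using the left-leaning red/black variant (shown in Python for brevity rather than the C++ class the item proposes):

```python
RED, BLACK = True, False

class Node:
    def __init__(self, key):
        self.key, self.left, self.right, self.color = key, None, None, RED

def is_red(h):
    return h is not None and h.color == RED

def rotate_left(h):
    x = h.right
    h.right, x.left = x.left, h
    x.color, h.color = h.color, RED
    return x

def rotate_right(h):
    x = h.left
    h.left, x.right = x.right, h
    x.color, h.color = h.color, RED
    return x

def insert(h, key):
    if h is None:
        return Node(key)
    if key < h.key:
        h.left = insert(h.left, key)
    elif key > h.key:
        h.right = insert(h.right, key)
    # Rebalance on the way back up to restore the red/black invariants.
    if is_red(h.right) and not is_red(h.left):
        h = rotate_left(h)
    if is_red(h.left) and is_red(h.left.left):
        h = rotate_right(h)
    if is_red(h.left) and is_red(h.right):      # split a temporary 4-node
        h.color = RED
        h.left.color = h.right.color = BLACK
    return h
```

After each top-level insert the root is recolored black. Inserting N sorted keys this way yields a tree of height O(log N) instead of the N-deep chain a naive insert produces, which is exactly the restore-tree construction speedup this item is after.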
Item 15: Add support for FileSets in user directories (CACHEDIR.TAG)
Origin:  Norbert Kiesel <nkiesel at tbdnetworks dot com>
Date:    21 November 2005

What:    CACHEDIR.TAG is a proposal for identifying directories
         which should be ignored for archiving/backup. It works by
         ignoring directory trees which have a file named
         CACHEDIR.TAG with a specific content. See
         http://www.brynosaurus.com/cachedir/spec.html

         I suggest that if this is implemented (I've also asked
         for this feature some years ago) that it is made
         compatible with Legato Networker's ".nsr" files, where
         you can specify a lot of options on how to handle
         files/directories (including denying further parsing of
         .nsr files lower down into the directory trees). A PDF
         version of the .nsr man page can be viewed at:

         http://www.ifm.liu.se/~peter/nsr.pdf

Why:     It's a nice alternative to "exclude" patterns for
         directories which don't have regular pathnames. Also, it
         allows users to control backup for themselves.
         Implementation should be pretty simple. GNU tar >= 1.14
         or so supports it, too.

Notes:   I envision this as an optional feature to a FileSet
         specification.
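The pruning walk the spec calls for is short to sketch. Per the specification at brynosaurus.com, a directory is a cache directory when it contains a CACHEDIR.TAG file starting with the fixed signature line; only that header needs to be checked.

```python
import os

SIGNATURE = b"Signature: 8a477f597d28d172789f06886806bc55"

def is_cache_dir(path):
    """True if path contains a valid CACHEDIR.TAG per the spec."""
    tag = os.path.join(path, "CACHEDIR.TAG")
    try:
        with open(tag, "rb") as f:
            return f.read(len(SIGNATURE)) == SIGNATURE
    except OSError:
        return False

def backup_candidates(top):
    """Yield file paths under top, pruning tagged cache directories."""
    for root, dirs, files in os.walk(top):
        dirs[:] = [d for d in dirs
                   if not is_cache_dir(os.path.join(root, d))]
        for name in files:
            yield os.path.join(root, name)
```

An .nsr-style extension would replace `is_cache_dir` with a per-directory option parser, which is where the compatibility suggestion above would plug in.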
Item 16: Implement extraction of Win32 BackupWrite data.
Origin:  Thorsten Engel <thorsten.engel at matrix-computer dot com>
Date:    28 October 2005
Status:  Assigned to Thorsten. Implemented in current CVS

What:    This provides the Bacula File daemon with code that can
         pick apart the stream output that Microsoft writes for
         BackupWrite data, and thus the data can be read and
         restored on non-Win32 machines.

Why:     BackupWrite data is the portable=no option in Win32
         FileSets, and in previous Baculas, this data could only
         be extracted using a Win32 FD. With this new code, the
         Windows data can be extracted and restored on non-Win32
         machines.
Item 17: Implement a Python interface to the Bacula catalog.
Date:    28 October 2005

What:    Implement an interface for Python scripts to access the
         catalog through Bacula.

Why:     This will permit users to customize Bacula through
         Python scripts.
Item 18: Archival (removal) of User Files to Tape
Origin:  Ray Pengelly [ray at biomed dot queensu dot ca]

What:    The ability to archive data to storage based on certain
         parameters such as age, size, or location. Once the data
         has been written to storage and logged it is then pruned
         from the originating filesystem. Note! We are talking
         about user's files and not Bacula's files.

Why:     This would allow fully automatic storage management,
         which becomes useful for large datastores. It would also
         allow for auto-staging from one media type to another.

         Example 1) Medical imaging needs to store large amounts
         of data. They decide to keep data on their servers for 6
         months and then put it away for long term storage. The
         server then finds all files older than 6 months and
         writes them to tape. The files are then removed from the
         server.

         Example 2) All data that hasn't been accessed in 2 months
         could be moved from high-cost, fibre-channel disk storage
         to a low-cost large-capacity SATA disk storage pool which
         doesn't have as quick an access time. Then after another
         6 months (or possibly as one storage pool gets full) data
         is migrated to Tape.
Item 19: Add Plug-ins to the FileSet Include statements.
Date:    28 October 2005
Status:  Partially coded in 1.37 -- much more to do.

What:    Allow users to specify wild-card and/or regular
         expressions to be matched in both the Include and Exclude
         directives in a FileSet. At the same time, allow users to
         define plug-ins to be called (based on regular
         expression/wild-card matching).

Why:     This would give the users the ultimate ability to control
         how files are backed up/restored. A user could write a
         plug-in that knows how to backup his Oracle database
         without stopping/starting it, for example.
Item 20: Implement more Python events in Bacula.
Date:    28 October 2005

What:    Allow Python scripts to be called at more places within
         Bacula and provide additional access to Bacula's internal
         state.

Why:     This will permit users to customize Bacula through
         Python scripts.

         Also add a way to get a listing of currently running jobs
         (possibly also scheduled jobs).
Item 21: Quick release of FD-SD connection after backup.
Origin:  Frank Volf (frank at deze dot org)
Date:    17 November 2005

What:    In the Bacula implementation a backup is finished after
         all data and attributes are successfully written to
         storage. When using a tape backup it is very annoying
         that a backup can take a day, simply because the current
         tape (or whatever) is full and the administrator has not
         put a new one in. During that time the system cannot be
         taken off-line, because there is still an open session
         between the storage daemon and the file daemon on the
         client.

         Although this is a very good strategy for making "safe
         backups", it can be annoying for e.g. laptops, which must
         remain connected until the backup is completed.

         Using a new feature called "migration" it will be
         possible to spool first to harddisk (using a special
         'spool' migration scheme) and then migrate the backup to
         tape.

         There is still the problem of getting the attributes
         committed. If it takes a very long time to do, with the
         current code, the job has not terminated, and the File
         daemon is not freed up. The Storage daemon should release
         the File daemon as soon as all the file data and all the
         attributes have been sent to it (the SD). Currently the
         SD waits until everything is on tape and all the
         attributes are transmitted to the Director before
         signaling completion to the FD. I don't think I would
         have any problem changing this. The reason is that even
         if the FD reports back to the Dir that all is OK, the job
         will not terminate until the SD has done the same thing
         -- so in a way keeping the SD-FD link open to the very
         end is not really very productive ...

Why:     Makes backup of laptops much easier.
Item 22: Permit multiple Media Types in an Autochanger
Status:  Now implemented

What:    Modify the Storage daemon so that multiple Media Types
         can be specified in an autochanger. This would be
         somewhat of a simplistic implementation in that each
         drive would still be allowed to have only one Media Type.
         However, the Storage daemon will ensure that only a drive
         with the Media Type that matches what the Director
         specifies is used.

Why:     This will permit users with several different drive types
         to make full use of their autochangers.
Item 23: Allow different autochanger definitions for one autochanger.
Date:    28 October 2005

What:    Currently, the autochanger script is locked based on the
         autochanger. That is, if multiple drives are being
         simultaneously used, the Storage daemon ensures that only
         one drive at a time can access the mtx-changer script.
         This change would base the locking on the control device,
         rather than the autochanger. It would then permit two
         autochanger definitions for the same autochanger, but
         with different drives. Logically, the autochanger could
         then be "partitioned" for different jobs, clients, or
         classes of jobs, and if the locking is based on the
         control device (e.g. /dev/sg0) the mtx-changer script
         will be locked appropriately.

Why:     This will permit users to partition autochangers for
         specific use. It would also permit implementation of
         multiple Media Types with no changes to the Storage
         daemon.
Item 24: Automatic disabling of devices
Origin:  Peter Eriksson <peter at ifm.liu dot se>

What:    After a configurable amount of fatal errors with a tape
         drive Bacula should automatically disable further use of
         a certain tape drive. There should also be
         "disable"/"enable" commands in the console.

Why:     On a multi-drive jukebox there is a possibility of tape
         drives going bad during large backups (needing a cleaning
         tape run, tapes getting stuck). It would be advantageous
         if Bacula would automatically disable further use of a
         problematic tape drive after a configurable amount of
         errors has occurred.

         An example: I have a multi-drive jukebox (6 drives, 380+
         slots) where tapes occasionally get stuck inside the
         drive. Bacula will notice that the "mtx-changer" command
         will fail and then fail any backup jobs trying to use
         that drive. However, it will still keep on trying to run
         new jobs using that drive and fail -- forever, and thus
         failing lots and lots of jobs... Since we have many
         drives Bacula could have just automatically disabled
         further use of that drive and used one of the other ones
         instead.
Item 25: Implement huge exclude list support using hashing.
Date:    28 October 2005

What:    Allow users to specify very large exclude lists
         (currently more than about 1000 files is too many).

Why:     This would give the users the ability to exclude all
         files that are loaded with the OS (e.g. using rpms or
         debs). If the user can restore the base OS from CDs,
         there is no need to backup all those files. A complete
         restore would be to restore the base OS, then do a Bacula
         restore. By excluding the base OS files, the backup set
         will be *much* smaller.
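The reason hashing removes the ~1000-entry ceiling is asymptotic: a linear scan over N exclude entries costs O(N) per file examined, while a hash-table lookup is O(1) regardless of list size. A sketch of the lookup side (Python's built-in set stands in for the hash table the item proposes):

```python
def make_excluder(exclude_paths):
    """Build an O(1) membership test over an arbitrarily large list."""
    excluded = set(exclude_paths)     # hashed once at FileSet load time
    def is_excluded(path):
        return path in excluded       # constant-time per file examined
    return is_excluded
```

With tens of thousands of OS-package paths in the set, the per-file cost during the backup walk stays flat instead of growing with the exclude list.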
===============================================
Not in Dec 2005 Vote:

Item n:  Allow skipping execution of Jobs
Date:    29 November 2005
Origin:  Florian Schnabel <florian.schnabel at docufy dot de>

What:    An easy option to skip a certain job on a certain date.

Why:     You could then easily skip tape backups on holidays.
         Especially if you have no autochanger and can only fit
         one backup on a tape, that would be really handy; other
         jobs could proceed normally and you won't get errors that
         way.
============= Empty Feature Request form ===========
Item n:  One line summary ...
Origin:  Name and email of originator.

What:    More detailed explanation ...

Why:     Why it is important ...

Notes:   Additional notes or features (omit if not used)
============== End Feature Request form ==============