X-Git-Url: https://git.sur5r.net/?a=blobdiff_plain;f=bacula%2Fprojects;h=94c6047c4e3948a1d347b6b936f06286bfd0c5b4;hb=2fe63270075b56f945034210985242ba26ea24c7;hp=4af15b1ce71a4e35a8cfb704ab4c2d2223efc073;hpb=ce0c0525164af818757fbf5fe8f733a1ca4ea5b1;p=bacula%2Fbacula diff --git a/bacula/projects b/bacula/projects index 4af15b1ce7..94c6047c4e 100644 --- a/bacula/projects +++ b/bacula/projects @@ -1,85 +1,53 @@ Projects: Bacula Projects Roadmap - Prioritized by user vote 07 December 2005 - Status updated 30 July 2006 + Status updated 15 December 2006 + Summary: -Item 1: Implement data encryption (as opposed to comm encryption) -Item 2: Implement Migration that moves Jobs from one Pool to another. -Item 3: Accurate restoration of renamed/deleted files from -Item 4: Implement a Bacula GUI/management tool using Python. -Item 5: Implement Base jobs. -Item 6: Allow FD to initiate a backup -Item 7: Improve Bacula's tape and drive usage and cleaning management. -Item 8: Implement creation and maintenance of copy pools -Item 9: Implement new {Client}Run{Before|After}Job feature. -Item 10: Merge multiple backups (Synthetic Backup or Consolidation). -Item 11: Deletion of Disk-Based Bacula Volumes -Item 12: Directive/mode to backup only file changes, not entire file -Item 13: Multiple threads in file daemon for the same job -Item 14: Implement red/black binary tree routines. -Item 15: Add support for FileSets in user directories CACHEDIR.TAG -Item 16: Implement extraction of Win32 BackupWrite data. -Item 17: Implement a Python interface to the Bacula catalog. -Item 18: Archival (removal) of User Files to Tape -Item 19: Add Plug-ins to the FileSet Include statements. -Item 20: Implement more Python events in Bacula. -Item 21: Quick release of FD-SD connection after backup. -Item 22: Permit multiple Media Types in an Autochanger -Item 23: Allow different autochanger definitions for one autochanger. 
-Item 24: Automatic disabling of devices -Item 25: Implement huge exclude list support using hashing. +Item 1: Accurate restoration of renamed/deleted files +Item 2: Implement a Bacula GUI/management tool. +Item 3: Implement Base jobs. +Item 4: Implement from-client and to-client on restore command line. +Item 5: Implement creation and maintenance of copy pools +Item 6: Merge multiple backups (Synthetic Backup or Consolidation). +Item 8: Deletion of Disk-Based Bacula Volumes +Item 9: Implement a Python interface to the Bacula catalog. +Item 10: Archival (removal) of User Files to Tape +Item 11: Add Plug-ins to the FileSet Include statements. +Item 12: Implement more Python events in Bacula. +Item 13: Quick release of FD-SD connection after backup. +Item 14: Implement huge exclude list support using hashing. +Item 15: Allow skipping execution of Jobs +Item 16: Tray monitor window cleanups +Item 17: Split documentation +Item 18: Automatic promotion of backup levels +Item 19: Add an override in Schedule for Pools based on backup types. +Item 20: An option to operate on all pools with update vol parameters +Item 21: Include JobID in spool file name +Item 22: Include timestamp of job launch in "stat clients" output +Item 23: Message mailing based on backup types +Item 24: Allow inclusion/exclusion of files in a fileset by creation/mod times +Item 25: Add a scheduling syntax that permits weekly rotations +Item 26: Improve Bacula's tape and drive usage and cleaning management. +Item 27: Implement support for stacking arbitrary stream filters, sinks. +Item 28: Allow FD to initiate a backup +Item 29: Directive/mode to backup only file changes, not entire file +Item 30: Automatic disabling of devices +Item 31: Incorporation of XACML2/SAML2 parsing +Item 32: Clustered file-daemons +Item 33: Commercial database support +Item 34: Archive data +Item 35: Filesystem watch triggered backup. 
+Item 36: Implement multiple numeric backup levels as supported by dump Below, you will find more information on future projects: -Item 1: Implement data encryption (as opposed to comm encryption) - Date: 28 October 2005 - Origin: Sponsored by Landon and 13 contributors to EFF. - Status: Done: Landon Fuller has implemented this in 1.39.x. - - What: Currently the data that is stored on the Volume is not - encrypted. For confidentiality, encryption of data at - the File daemon level is essential. - Data encryption encrypts the data in the File daemon and - decrypts the data in the File daemon during a restore. - - Why: Large sites require this. - -Item 2: Implement Migration that moves Jobs from one Pool to another. - Origin: Sponsored by Riege Software International GmbH. Contact: - Daniel Holtkamp - Date: 28 October 2005 - Status: 90% complete: Working in 1.39, more to do. Assigned to - Kern. - - What: The ability to copy, move, or archive data that is on a - device to another device is very important. - - Why: An ISP might want to backup to disk, but after 30 days - migrate the data to tape backup and delete it from - disk. Bacula should be able to handle this - automatically. It needs to know what was put where, - and when, and what to migrate -- it is a bit like - retention periods. Doing so would allow space to be - freed up for current backups while maintaining older - data on tape drives. - - Notes: Riege Software have asked for the following migration - triggers: - Age of Job - Highwater mark (stopped by Lowwater mark?) 
-
- Notes: Migration could be additionally triggered by:
- Number of Jobs
- Number of Volumes
-
-Item 3: Accurate restoration of renamed/deleted files from
- Incremental/Differential backups
+Item 1: Accurate restoration of renamed/deleted files
 Date: 28 November 2005
 Origin: Martin Simmons (martin at lispworks dot com)
- Status:
+ Status: Robert Nelson will implement this

 What: When restoring a fileset for a specified date (including "most
 recent"), Bacula should give you exactly the files and directories
@@ -93,7 +61,7 @@ Item 3: Accurate restoration of renamed/deleted files from

 Why: Incremental/Differential would be much more useful if this worked.

- Notes: Item 14 (Merging of multiple backups into a single one) seems to
+ Notes: Merging of multiple backups into a single one seems to
 rely on this working, otherwise the merged backups will not be
 truly equivalent to a Full backup.

@@ -108,13 +76,13 @@ Item 3: Accurate restoration of renamed/deleted files from
 are updated, the dummy directory is newer so the real values
 are not updated.

-Item 4: Implement a Bacula GUI/management tool using Python.
+Item 2: Implement a Bacula GUI/management tool.
 Origin: Kern
 Date: 28 October 2005
- Status: Lucus is working on this for Python GTK+.
+ Status:

 What: Implement a Bacula console, and management tools
- using Python and Qt or GTK.
+ probably using Qt3 and C++.

 Why: Don't we already have a wxWidgets GUI? Yes, but
 it is written in C++ and changes to the user interface
@@ -127,10 +95,12 @@ Item 4: Implement a Bacula GUI/management tool using Python.
 Python, which will give many more users easy (or easier)
 access to making additions or modifications.

- Notes: This is currently being implemented using Python-GTK by
- Lucas Di Pentima
+ Notes: There is a partial Python-GTK implementation by
+ Lucas Di Pentima, but
+ it is no longer being developed.
+

-Item 5: Implement Base jobs.
+Item 3: Implement Base jobs.
Date: 28 October 2005 Origin: Kern Status: @@ -164,87 +134,28 @@ Item 5: Implement Base jobs. FD a list of files/attribs, and the FD must search the list and compare it for each file to be saved. -Item 6: Allow FD to initiate a backup - Origin: Frank Volf (frank at deze dot org) - Date: 17 November 2005 - Status: - - What: Provide some means, possibly by a restricted console that - allows a FD to initiate a backup, and that uses the connection - established by the FD to the Director for the backup so that - a Director that is firewalled can do the backup. - - Why: Makes backup of laptops much easier. - -Item 7: Improve Bacula's tape and drive usage and cleaning management. - Date: 8 November 2005, November 11, 2005 - Origin: Adam Thornton , - Arno Lehmann - Status: - - What: Make Bacula manage tape life cycle information, tape reuse - times and drive cleaning cycles. - - Why: All three parts of this project are important when operating - backups. - We need to know which tapes need replacement, and we need to - make sure the drives are cleaned when necessary. While many - tape libraries and even autoloaders can handle all this - automatically, support by Bacula can be helpful for smaller - (older) libraries and single drives. Limiting the number of - times a tape is used might prevent tape errors when using - tapes until the drives can't read it any more. Also, checking - drive status during operation can prevent some failures (as I - [Arno] had to learn the hard way...) - - Notes: First, Bacula could (and even does, to some limited extent) - record tape and drive usage. For tapes, the number of mounts, - the amount of data, and the time the tape has actually been - running could be recorded. Data fields for Read and Write - time and Number of mounts already exist in the catalog (I'm - not sure if VolBytes is the sum of all bytes ever written to - that volume by Bacula). This information can be important - when determining which media to replace. 
The ability to mark - Volumes as "used up" after a given number of write cycles - should also be implemented so that a tape is never actually - worn out. For the tape drives known to Bacula, similar - information is interesting to determine the device status and - expected life time: Time it's been Reading and Writing, number - of tape Loads / Unloads / Errors. This information is not yet - recorded as far as I [Arno] know. A new volume status would - be necessary for the new state, like "Used up" or "Worn out". - Volumes with this state could be used for restores, but not - for writing. These volumes should be migrated first (assuming - migration is implemented) and, once they are no longer needed, - could be moved to a Trash pool. - - The next step would be to implement a drive cleaning setup. - Bacula already has knowledge about cleaning tapes. Once it - has some information about cleaning cycles (measured in drive - run time, number of tapes used, or calender days, for example) - it can automatically execute tape cleaning (with an - autochanger, obviously) or ask for operator assistance loading - a cleaning tape. - - The final step would be to implement TAPEALERT checks not only - when changing tapes and only sending the information to the - administrator, but rather checking after each tape error, - checking on a regular basis (for example after each tape - file), and also before unloading and after loading a new tape. - Then, depending on the drives TAPEALERT state and the known - drive cleaning state Bacula could automatically schedule later - cleaning, clean immediately, or inform the operator. - - Implementing this would perhaps require another catalog change - and perhaps major changes in SD code and the DIR-SD protocol, - so I'd only consider this worth implementing if it would - actually be used or even needed by many people. 
- - Implementation of these projects could happen in three distinct - sub-projects: Measuring Tape and Drive usage, retiring - volumes, and handling drive cleaning and TAPEALERTs. - -Item 8: Implement creation and maintenance of copy pools +Item 4: Implement from-client and to-client on restore command line. + Date: 11 December 2006 + Origin: Discussion on Bacula-users entitled 'Scripted restores to + different clients', December 2006 + Status: New feature request + + What: While using bconsole interactively, you can specify the client + that a backup job is to be restored for, and then you can + specify later a different client to send the restored files + back to. However, using the 'restore' command with all options + on the command line, this cannot be done, due to the ambiguous + 'client' parameter. Additionally, this parameter means different + things depending on if it's specified on the command line or + afterwards, in the Modify Job screens. + + Why: This feature would enable restore jobs to be more completely + automated, for example by a web or GUI front-end. + + Notes: client can also be implied by specifying the jobid on the command + line + +Item 5: Implement creation and maintenance of copy pools Date: 27 November 2005 Origin: David Boyes (dboyes at sinenomine dot net) Status: @@ -286,111 +197,11 @@ Item 8: Implement creation and maintenance of copy pools Notes: I would commit some of my developers' time if we can agree on the design and behavior. -Item 9: Implement new {Client}Run{Before|After}Job feature. - Date: 26 September 2005 - Origin: Phil Stracchino - Status: Done. This has been implemented by Eric Bollengier - - What: Some time ago, there was a discussion of RunAfterJob and - ClientRunAfterJob, and the fact that they do not run after failed - jobs. At the time, there was a suggestion to add a - RunAfterFailedJob directive (and, presumably, a matching - ClientRunAfterFailedJob directive), but to my knowledge these - were never implemented. 
- - The current implementation doesn't permit to add new feature easily. - - An alternate way of approaching the problem has just occurred to - me. Suppose the RunBeforeJob and RunAfterJob directives were - expanded in a manner like this example: - - RunScript { - Command = "/opt/bacula/etc/checkhost %c" - RunsOnClient = No # default - AbortJobOnError = Yes # default - RunsWhen = Before - } - RunScript { - Command = c:/bacula/systemstate.bat - RunsOnClient = yes - AbortJobOnError = No - RunsWhen = After - RunsOnFailure = yes - } - - RunScript { - Command = c:/bacula/deletestatefile.bat - Target = rico-fd - RunsWhen = Always - } - - It's now possible to specify more than 1 command per Job. - (you can stop your database and your webserver without a script) - - ex : - Job { - Name = "Client1" - JobDefs = "DefaultJob" - Write Bootstrap = "/tmp/bacula/var/bacula/working/Client1.bsr" - FileSet = "Minimal" - - RunBeforeJob = "echo test before ; echo test before2" - RunBeforeJob = "echo test before (2nd time)" - RunBeforeJob = "echo test before (3rd time)" - RunAfterJob = "echo test after" - ClientRunAfterJob = "echo test after client" - - RunScript { - Command = "echo test RunScript in error" - Runsonclient = yes - RunsOnSuccess = no - RunsOnFailure = yes - RunsWhen = After # never by default - } - RunScript { - Command = "echo test RunScript on success" - Runsonclient = yes - RunsOnSuccess = yes # default - RunsOnFailure = no # default - RunsWhen = After - } - } - - Why: It would be a significant change to the structure of the - directives, but allows for a lot more flexibility, including - RunAfter commands that will run regardless of whether the job - succeeds, or RunBefore tasks that still allow the job to run even - if that specific RunBefore fails. - - Notes: (More notes from Phil, Kern, David and Eric) - I would prefer to have a single new Resource called - RunScript. 
- - RunsWhen = After|Before|Always - RunsAtJobLevels = All|Full|Diff|Inc # not yet implemented - - The AbortJobOnError, RunsOnSuccess and RunsOnFailure directives - could be optional, and possibly RunWhen as well. - - AbortJobOnError would be ignored unless RunsWhen was set to Before - and would default to Yes if omitted. - If AbortJobOnError was set to No, failure of the script - would still generate a warning. - - RunsOnSuccess would be ignored unless RunsWhen was set to After - (or RunsBeforeJob set to No), and default to Yes. - - RunsOnFailure would be ignored unless RunsWhen was set to After, - and default to No. - - Allow having the before/after status on the script command - line so that the same script can be used both before/after. - -Item 10: Merge multiple backups (Synthetic Backup or Consolidation). +Item 6: Merge multiple backups (Synthetic Backup or Consolidation). Origin: Marc Cousin and Eric Bollengier Date: 15 November 2005 Status: Waiting implementation. Depends on first implementing - project Item 2 (Migration). + project Item 2 (Migration) which is now done. What: A merged backup is a backup made without connecting to the Client. It would be a Merge of existing backups into a single backup. @@ -422,7 +233,7 @@ Item 10: Merge multiple backups (Synthetic Backup or Consolidation). data can then be pruned (or not) from the catalog, possibly allowing older volumes to be recycled -Item 11: Deletion of Disk-Based Bacula Volumes +Item 8: Deletion of Disk-Based Bacula Volumes Date: Nov 25, 2005 Origin: Ross Boylan (edited by Kern) @@ -443,111 +254,7 @@ Item 11: Deletion of Disk-Based Bacula Volumes The migration project should also remove a Volume that is migrated. This might also work for tape Volumes. 
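The cleanup in Item 8 above can be sketched as a catalog-plus-filesystem pass. This is only an illustration: the dict-shaped `catalog`, the field names, and the one-file-per-volume layout are assumptions standing in for Bacula's real SQL Media table.

```python
import os

def delete_purged_disk_volumes(catalog, archive_dir):
    """Delete disk-based volumes the catalog marks as Purged: unlink
    the backing file to reclaim space, then drop the catalog entry.
    `catalog` is a stand-in for Bacula's Media table as a plain dict:
    {volume_name: {"VolStatus": ..., "MediaType": ...}}."""
    deleted = []
    for name, media in list(catalog.items()):
        # Tape volumes have no file to unlink, so restrict the file
        # removal to disk ("File" media type) volumes.
        if media["VolStatus"] != "Purged" or media["MediaType"] != "File":
            continue
        path = os.path.join(archive_dir, name)
        if os.path.exists(path):
            os.unlink(path)       # reclaim the disk space
        del catalog[name]         # remove the volume from the catalog
        deleted.append(name)
    return sorted(deleted)
```

A migration-aware variant could run the same pass over volumes whose jobs have just been migrated away, as the notes above suggest.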
-Item 12: Directive/mode to backup only file changes, not entire file - Date: 11 November 2005 - Origin: Joshua Kugler - Marek Bajon - Status: - - What: Currently when a file changes, the entire file will be backed up in - the next incremental or full backup. To save space on the tapes - it would be nice to have a mode whereby only the changes to the - file would be backed up when it is changed. - - Why: This would save lots of space when backing up large files such as - logs, mbox files, Outlook PST files and the like. - - Notes: This would require the usage of disk-based volumes as comparing - files would not be feasible using a tape drive. - -Item 13: Multiple threads in file daemon for the same job - Date: 27 November 2005 - Origin: Ove Risberg (Ove.Risberg at octocode dot com) - Status: - - What: I want the file daemon to start multiple threads for a backup - job so the fastest possible backup can be made. - - The file daemon could parse the FileSet information and start - one thread for each File entry located on a separate - filesystem. - - A configuration option in the job section should be used to - enable or disable this feature. The configuration option could - specify the maximum number of threads in the file daemon. - - If the theads could spool the data to separate spool files - the restore process will not be much slower. - - Why: Multiple concurrent backups of a large fileserver with many - disks and controllers will be much faster. - - Notes: I am willing to try to implement this but I will probably - need some help and advice. (No problem -- Kern) - -Item 14: Implement red/black binary tree routines. - Date: 28 October 2005 - Origin: Kern - Status: Class code is complete. Code needs to be integrated into - restore tree code. - - What: Implement a red/black binary tree class. This could - then replace the current binary insert/search routines - used in the restore in memory tree. 
This could significantly - speed up the creation of the in memory restore tree. - - Why: Performance enhancement. - -Item 15: Add support for FileSets in user directories CACHEDIR.TAG - Origin: Norbert Kiesel - Date: 21 November 2005 - Status: (I think this is better done using a Python event that I - will implement in version 1.39.x). - - What: CACHDIR.TAG is a proposal for identifying directories which - should be ignored for archiving/backup. It works by ignoring - directory trees which have a file named CACHEDIR.TAG with a - specific content. See - http://www.brynosaurus.com/cachedir/spec.html - for details. - - From Peter Eriksson: - I suggest that if this is implemented (I've also asked for this - feature some year ago) that it is made compatible with Legato - Networkers ".nsr" files where you can specify a lot of options on - how to handle files/directories (including denying further - parsing of .nsr files lower down into the directory trees). A - PDF version of the .nsr man page can be viewed at: - - http://www.ifm.liu.se/~peter/nsr.pdf - - Why: It's a nice alternative to "exclude" patterns for directories - which don't have regular pathnames. Also, it allows users to - control backup for themselves. Implementation should be pretty - simple. GNU tar >= 1.14 or so supports it, too. - - Notes: I envision this as an optional feature to a fileset - specification. - - -Item 16: Implement extraction of Win32 BackupWrite data. - Origin: Thorsten Engel - Date: 28 October 2005 - Status: Done. Assigned to Thorsten. Implemented in current CVS - - What: This provides the Bacula File daemon with code that - can pick apart the stream output that Microsoft writes - for BackupWrite data, and thus the data can be read - and restored on non-Win32 machines. - - Why: BackupWrite data is the portable=no option in Win32 - FileSets, and in previous Baculas, this data could - only be extracted using a Win32 FD. 
With this new code, - the Windows data can be extracted and restored on - any OS. - - -Item 18: Implement a Python interface to the Bacula catalog. +Item 9: Implement a Python interface to the Bacula catalog. Date: 28 October 2005 Origin: Kern Status: @@ -558,7 +265,7 @@ Item 18: Implement a Python interface to the Bacula catalog. Why: This will permit users to customize Bacula through Python scripts. -Item 18: Archival (removal) of User Files to Tape +Item 10: Archival (removal) of User Files to Tape Date: Nov. 24/2005 @@ -587,7 +294,7 @@ Item 18: Archival (removal) of User Files to Tape access time. Then after another 6 months (or possibly as one storage pool gets full) data is migrated to Tape. -Item 19: Add Plug-ins to the FileSet Include statements. +Item 11: Add Plug-ins to the FileSet Include statements. Date: 28 October 2005 Origin: Status: Partially coded in 1.37 -- much more to do. @@ -603,9 +310,9 @@ Item 19: Add Plug-ins to the FileSet Include statements. plug-in knows how to backup his Oracle database without stopping/starting it, for example. -Item 20: Implement more Python events in Bacula. +Item 12: Implement more Python events in Bacula. Date: 28 October 2005 - Origin: + Origin: Kern Status: What: Allow Python scripts to be called at more places @@ -624,7 +331,7 @@ Item 20: Implement more Python events in Bacula. jobs (possibly also scheduled jobs). -Item 21: Quick release of FD-SD connection after backup. +Item 13: Quick release of FD-SD connection after backup. Origin: Frank Volf (frank at deze dot org) Date: 17 November 2005 Status: @@ -659,71 +366,11 @@ Item 21: Quick release of FD-SD connection after backup. has done the same thing -- so in a way keeping the SD-FD link open to the very end is not really very productive ... - Why: Makes backup of laptops much easier. - -Item 22: Permit multiple Media Types in an Autochanger - Origin: Kern - Status: Done. Implemented in 1.38.9 (I think). 
- - What: Modify the Storage daemon so that multiple Media Types - can be specified in an autochanger. This would be somewhat - of a simplistic implementation in that each drive would - still be allowed to have only one Media Type. However, - the Storage daemon will ensure that only a drive with - the Media Type that matches what the Director specifies - is chosen. - - Why: This will permit user with several different drive types - to make full use of their autochangers. - -Item 23: Allow different autochanger definitions for one autochanger. - Date: 28 October 2005 - Origin: Kern - Status: - - What: Currently, the autochanger script is locked based on - the autochanger. That is, if multiple drives are being - simultaneously used, the Storage daemon ensures that only - one drive at a time can access the mtx-changer script. - This change would base the locking on the control device, - rather than the autochanger. It would then permit two autochanger - definitions for the same autochanger, but with different - drives. Logically, the autochanger could then be "partitioned" - for different jobs, clients, or class of jobs, and if the locking - is based on the control device (e.g. /dev/sg0) the mtx-changer - script will be locked appropriately. - - Why: This will permit users to partition autochangers for specific - use. It would also permit implementation of multiple Media - Types with no changes to the Storage daemon. - -Item 24: Automatic disabling of devices - Date: 2005-11-11 - Origin: Peter Eriksson - Status: - - What: After a configurable amount of fatal errors with a tape drive - Bacula should automatically disable further use of a certain - tape drive. There should also be "disable"/"enable" commands in - the "bconsole" tool. + Why: Makes backup of laptops much faster. - Why: On a multi-drive jukebox there is a possibility of tape drives - going bad during large backups (needing a cleaning tape run, - tapes getting stuck). 
It would be advantageous if Bacula would - automatically disable further use of a problematic tape drive - after a configurable amount of errors has occurred. - An example: I have a multi-drive jukebox (6 drives, 380+ slots) - where tapes occasionally get stuck inside the drive. Bacula will - notice that the "mtx-changer" command will fail and then fail - any backup jobs trying to use that drive. However, it will still - keep on trying to run new jobs using that drive and fail - - forever, and thus failing lots and lots of jobs... Since we have - many drives Bacula could have just automatically disabled - further use of that drive and used one of the other ones - instead. -Item 25: Implement huge exclude list support using hashing. +Item 14: Implement huge exclude list support using hashing. Date: 28 October 2005 Origin: Kern Status: @@ -740,25 +387,7 @@ Item 25: Implement huge exclude list support using hashing. backup set will be *much* smaller. -============= Empty Feature Request form =========== -Item n: One line summary ... - Date: Date submitted - Origin: Name and email of originator. - Status: - - What: More detailed explanation ... - - Why: Why it is important ... - - Notes: Additional notes or features (omit if not used) -============== End Feature Request form ============== - - -=============================================== -Feature requests submitted after cutoff for December 2005 vote - and not yet discussed. -=============================================== -Item n: Allow skipping execution of Jobs +Item 15: Allow skipping execution of Jobs Date: 29 November 2005 Origin: Florian Schnabel Status: @@ -769,40 +398,8 @@ Item n: Allow skipping execution of Jobs that would be really handy, other jobs could proceed normally and you won't get errors that way. 
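The hashed exclude-list idea in Item 14 above can be sketched as follows. This is a rough illustration, not Bacula's matching code: real FileSet exclude rules carry options this sketch ignores, and the split between exact names and wildcard patterns is an assumption.

```python
import fnmatch

def build_exclude_index(entries):
    """Hash the exact-path entries of a huge exclude list into a set
    (O(1) membership test); only the usually small remainder of
    wildcard patterns still needs a linear scan."""
    exact = set()
    patterns = []
    for entry in entries:
        if any(ch in entry for ch in "*?["):
            patterns.append(entry)   # wildcards still need a scan
        else:
            exact.add(entry)         # exact names become hash lookups
    return exact, patterns

def is_excluded(index, path):
    exact, patterns = index
    if path in exact:                # constant time, however huge the list
        return True
    return any(fnmatch.fnmatch(path, p) for p in patterns)
```

With, say, 100,000 exact exclusions the per-file cost stays flat instead of growing with the length of the list.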
-=================================================== - -Item n: archive data - - Origin: calvin streeting calvin at absentdream dot com - Date: 15/5/2006 - - What: The abilty to archive to media (dvd/cd) in a uncompressd format - for dead filing (archiving not backing up) - - Why: At my works when jobs are finished and moved off of the main file - servers (raid based systems) onto a simple linux file server (ide based - system) so users can find old information without contacting the IT - dept. - - So this data dosn't realy change it only gets added to, - But it also needs backing up. At the moment it takes - about 8 hours to back up our servers (working data) so - rather than add more time to existing backups i am trying - to implement a system where we backup the acrhive data to - cd/dvd these disks would only need to be appended to - (burn only new/changed files to new disks for off site - storage). basialy understand the differnce between - achive data and live data. - - Notes: scan the data and email me when it needs burning divide - into predifind chunks keep a recored of what is on what - disk make me a label (simple php->mysql=>pdf stuff) i - could do this bit ability to save data uncompresed so - it can be read in any other system (future proof data) - save the catalog with the disk as some kind of menu - system -Item : Tray monitor window cleanups +Item 16: Tray monitor window cleanups Origin: Alan Brown ajb2 at mssl dot ucl dot ac dot uk Date: 24 July 2006 Status: @@ -812,135 +409,452 @@ Item : Tray monitor window cleanups window often ends up larger than the available screen, making the trailing items difficult to read. - Notes: - Item : Clustered file-daemons - Origin: Alan Brown ajb2 at mssl dot ucl dot ac dot uk - Date: 24 July 2006 +Item 17: Split documentation + Origin: Maxx + Date: 27th July 2006 Status: - What: A "virtual" filedaemon, which is actually a cluster of real ones. 
- Why: In the case of clustered filesystems (SAN setups, GFS, or OCFS2, etc)
- multiple machines may have access to the same set of filesystems
+ What: Split documentation in several books

- For performance reasons, one may wish to initate backups from
- several of these machines simultaneously, instead of just using
- one backup source for the common clustered filesystem.
+ Why: The Bacula manual now has more than 600 pages, and looking for
+ implementation details is getting complicated. I think
+ it would be good to split the single volume in two or
+ maybe three parts:

- For obvious reasons, normally backups of $A-FD/$PATH and
- B-FD/$PATH are treated as different backup sets. In this case
- they are the same communal set.
+ 1) Introduction, requirements and tutorial, typically
+ are useful only until first installation time

- Likewise when restoring, it would be easier to just specify
- one of the cluster machines and let bacula decide which to use.
+ 2) Basic installation and configuration, with all the
+ gory details about the directives supported
+
+ 3) Advanced Bacula: testing, troubleshooting, GUI and
+ ancillary programs, security management, scripting,
+ etc.

- This can be faked to some extent using DNS round robin entries
- and a virtual IP address, however it means "status client" will
- always give bogus answers. Additionally there is no way of
- spreading the load evenly among the servers.

- What is required is something similar to the storage daemon
- autochanger directives, so that Bacula can keep track of
- operating backups/restores and direct new jobs to a "free"
- client.

- Notes:

+Item 18: Automatic promotion of backup levels
+ Date: 19 January 2006
+ Origin: Adam Thornton
+ Status: Blue sky

-Item : Tray monitor window cleanups
- Origin: Alan Brown ajb2 at mssl dot ucl dot ac dot uk
- Date: 24 July 2006
- Status:

- What: Resizeable and scrollable windows in the tray monitor.
+ What: Amanda has a feature whereby it estimates the space that a
+ differential, incremental, and full backup would take. If the
+ difference in space required between the scheduled level and the next
+ level up is beneath some user-defined critical threshold, the backup
+ level is bumped to the next type. Doing this minimizes the number of
+ volumes necessary during a restore, with a fairly minimal cost in
+ backup media space.

- Why: With multiple clients, or with many jobs running, the displayed
- window often ends up larger than the available screen, making
- the trailing items difficult to read.
+ Why: I know at least one (quite sophisticated and smart) user
+ for whom the absence of this feature is a deal-breaker in terms of
+ using Bacula; if we had it, it would eliminate the one cool thing
+ Amanda can do and we can't (at least, the one cool thing I know of).

- Notes:

-Item: Commercial database support
- Origin: Russell Howe
- Date: 26 July 2006

+Item 19: Add an override in Schedule for Pools based on backup types.
+Date: 19 Jan 2005
+Origin: Chad Slater
+Status:
+
+ What: Adding a FullStorage=BigTapeLibrary in the Schedule resource
+ would help those of us who use different storage devices for different
+ backup levels cope with the "auto-upgrade" of a backup.
+
+ Why: Assume I add several new devices to be backed up, i.e. several
+ hosts with 1TB RAID. To avoid tape switching hassles, incrementals are
+ stored in a disk set on a 2TB RAID. If you add these devices in the
+ middle of the month, the incrementals are upgraded to "full" backups,
+ but they try to use the same storage device as requested in the
+ incremental job, filling up the RAID holding the differentials. If we
+ could override the Storage parameter for full and/or differential
+ backups, then the Full job would use the proper Storage device, which
+ has more capacity (i.e. an 8TB tape library).
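Items 18 and 19 above both come down to a small decision taken before a job's level and storage are fixed. A hedged sketch: the level order, the estimate map, and the per-level override names below are all hypothetical, not existing Bacula configuration.

```python
def promote_level(scheduled, estimates, threshold_bytes):
    """Amanda-style promotion (Item 18): while the next level up is
    estimated to cost no more than threshold_bytes extra, run that
    bigger backup instead.  estimates maps level name -> bytes."""
    order = ["Incremental", "Differential", "Full"]
    i = order.index(scheduled)
    while i + 1 < len(order):
        extra = estimates[order[i + 1]] - estimates[order[i]]
        if extra > threshold_bytes:
            break
        i += 1                        # bump to the next level up
    return order[i]

def resolve_storage(level, job_storage, overrides):
    """Per-level storage override (Item 19): after any promotion or
    auto-upgrade, prefer the Schedule's override for the final level,
    falling back to the Storage configured on the job itself."""
    return overrides.get(level, job_storage)
```

In the Item 19 scenario, an Incremental that gets upgraded to Full would then resolve to the override (e.g. the tape library) instead of filling the incremental disk set.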
+
+Item 20:  An option to operate on all pools with update vol parameters
+  Origin: Dmitriy Pinchukov
+  Date:   16 August 2006
+  Status:
+
+  What:  When I do update -> Volume parameters -> All Volumes
+         from Pool, then I have to select pools one by one.  I'd like
+         the console to have an option like "0: All Pools" in the list
+         of defined pools.
+
+  Why:   I have many pools and therefore am unhappy with manually
+         updating each of them using update -> Volume parameters -> All
+         Volumes from Pool -> pool #.
+
+
+
+Item 21:  Include JobID in spool file name
+  Origin: Mark Bergman
+  Date:   Tue Aug 22 17:13:39 EDT 2006
   Status:
-  What: It would be nice for the database backend to support more
-  databases. I'm thinking of SQL Server at the moment, but I guess Oracle,
-  DB2, MaxDB, etc are all candidates. SQL Server would presumably be
-  implemented using FreeTDS or maybe an ODBC library?
+  What:  Change the name of the spool file to include the JobID
-  Why: We only really have one database server, which is MS SQL Server
-  2000. Maintaining a second one for the backup software (we grew out of
-  SQLite, which I liked, but which didn't work so well with our database
-  size). We don't really have a machine with the resources to run
-  postgres, and would rather only maintain a single DBMS. We're stuck with
-  SQL Server because pretty much all the company's custom applications
-  (written by consultants) are locked into SQL Server 2000. I can imagine
-  this scenario is fairly common, and it would be nice to use the existing
-  properly specced database server for storing Bacula's catalog, rather
-  than having to run a second DBMS.
+  Why:   JobIDs are the common key used to refer to jobs, yet the
+         spool file name doesn't include that information.  The date/time
+         stamp is useful (and should be retained).
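As a purely hypothetical illustration of the request (this is not Bacula's actual spool-file naming code, and the name layout shown here is invented), keeping the existing date/time stamp while embedding the JobID might look like:

```python
def spool_name(daemon, jobid, timestamp):
    """Build a spool file name that retains the date/time stamp
    and adds the JobID (the common key used to refer to jobs).
    Layout is illustrative only."""
    return "%s.JobId-%d.%s.spool" % (daemon, jobid, timestamp)

print(spool_name("bacula-sd", 1234, "2006-08-22_17.13.39"))
# -> bacula-sd.JobId-1234.2006-08-22_17.13.39.spool
```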
-Item n: Split documentation - Origin: Maxx - Date: 27th July 2006 + +Item 22: Include timestamp of job launch in "stat clients" output + Origin: Mark Bergman + Date: Tue Aug 22 17:13:39 EDT 2006 Status: - What: Split documentation in several books + What: The "stat clients" command doesn't include any detail on when + the active backup jobs were launched. - Why: Bacula manual has now more than 600 pages, and looking for - implementation details is getting complicated. I think - it would be good to split the single volume in two or - maybe three parts: + Why: Including the timestamp would make it much easier to decide whether + a job is running properly. - 1) Introduction, requirements and tutorial, typically - are useful only until first installation time + Notes: It may be helpful to have the output from "stat clients" formatted + more like that from "stat dir" (and other commands), in a column + format. The per-client information that's currently shown (level, + client name, JobId, Volume, pool, device, Files, etc.) is good, but + somewhat hard to parse (both programmatically and visually), + particularly when there are many active clients. - 2) Basic installation and configuration, with all the - gory details about the directives supported 3) - Advanced Bacula: testing, troubleshooting, GUI and - ancillary programs, security managements, scripting, - etc. - Notes: -Item n: Include an option to operate on all pools when doing - update vol parameters +Item 23: Message mailing based on backup types +Origin: Evan Kaufman + Date: January 6, 2006 +Status: - Origin: Dmitriy Pinchukov - Date: 16 August 2006 - Status: + What: In the "Messages" resource definitions, allowing messages + to be mailed based on the type (backup, restore, etc.) and level + (full, differential, etc) of job that created the originating + message(s). - What: When I do update -> Volume parameters -> All Volumes - from Pool, then I have to select pools one by one. 
I'd like
-  console to have an option like "0: All Pools" in the list of
-  defined pools.
+Why:   It would, for example, allow someone's boss to be emailed
+       automatically only when a Full Backup job runs, so he can
+       retrieve the tapes for offsite storage, even if the IT dept.
+       doesn't (or can't) explicitly notify him.  At the same time, his
+       mailbox wouldn't be filled by notifications of Verifies, Restores,
+       or Incremental/Differential Backups (which would likely be kept
+       onsite).
-  Why: I have many pools and therefore unhappy with manually
-  updating each of them using update -> Volume parameters -> All
-  Volumes from Pool -> pool #.
+
+Notes: One way this could be done is through additional message types, for example:
-
-Item  n: Automatic promotion of backup levels
-   Date: 19 January 2006
-  Origin: Adam Thornton
-  Status: Blue sky
+
+   Messages {
+     # email the boss only on full system backups
+     Mail = boss@mycompany.com = full, !incremental, !differential, !restore,
+       !verify, !admin
+     # email us only when something breaks
+     MailOnError = itdept@mycompany.com = all
+   }
-
-  What: Amanda has a feature whereby it estimates the space that a
-  differential, incremental, and full backup would take.  If the
-  difference in space required between the scheduled level and the next
-  level up is beneath some user-defined critical threshold, the backup
-  level is bumped to the next type.  Doing this minimizes the number of
-  volumes necessary during a restore, with a fairly minimal cost in
-  backup media space.
-
-  Why: I know at least one (quite sophisticated and smart) user
-  for whom the absence of this feature is a deal-breaker in terms of
-  using Bacula; if we had it it would eliminate the one cool thing
-  Amanda can do and we can't (at least, the one cool thing I know of).
+Item 24:  Allow inclusion/exclusion of files in a fileset by creation/mod times
+  Origin: Evan Kaufman
+  Date:   January 11, 2006
+  Status:
+
+  What:  In the vein of the Wild and Regex directives in a Fileset's
+         Options, it would be helpful to allow a user to include or exclude
+         files and directories by creation or modification times.
+
+         You could factor the Exclude=yes|no option in much the same way it
+         affects the Wild and Regex directives.  For example, you could
+         exclude all files modified before a certain date:
+
+   Options {
+     Exclude = yes
+     Modified Before = ####
+   }
+
+         Or you could exclude all files created/modified since a certain date:
+
+   Options {
+     Exclude = yes
+     Created Modified Since = ####
+   }
+
+         The format of the time/date could be done several ways, say the
+         number of seconds since the epoch:
+         1137008553 = Jan 11 2006, 1:42:33PM   # result of `date +%s`
+
+         Or a human-readable date in a cryptic form:
+         20060111134233 = Jan 11 2006, 1:42:33PM   # YYYYMMDDhhmmss
+
+  Why:   I imagine a feature like this could have many uses.  It would
+         allow a user to do a full backup while excluding the base
+         operating system files, so if I installed a Linux snapshot from a
+         CD yesterday, I'll *exclude* all files modified *before* today.
+         If I need to recover the system, I use the CD I already have,
+         plus the tape backup.  Or if, say, a Windows client is hit by a
+         particularly corrosive virus, and I need to *exclude* any files
+         created/modified *since* the time of infection.
+
+  Notes: Of course, this feature would work in concert with other
+         in/exclude rules, and wouldn't override them (or each other).
+
+  Notes: The directives I'd imagine would be along the lines of
+         "[Created] [Modified] [Before|Since] = ".
+         So one could compare against 'ctime' and/or 'mtime', but ONLY
+         'before' or 'since'.
+Item 25: Add a scheduling syntax that permits weekly rotations + Date: 15 December 2006 + Origin: Gregory Brauer (greg at wildbrain dot com) + Status: + + What: Currently, Bacula only understands how to deal with weeks of the + month or weeks of the year in schedules. This makes it impossible + to do a true weekly rotation of tapes. There will always be a + discontinuity that will require disruptive manual intervention at + least monthly or yearly because week boundaries never align with + month or year boundaries. + + A solution would be to add a new syntax that defines (at least) + a start timestamp, and repetition period. + + Why: Rotated backups done at weekly intervals are useful, and Bacula + cannot currently do them without extensive hacking. + + Notes: Here is an example syntax showing a 3-week rotation where full + Backups would be performed every week on Saturday, and an + incremental would be performed every week on Tuesday. Each + set of tapes could be removed from the loader for the following + two cycles before coming back and being reused on the third + week. Since the execution times are determined by intervals + from a given point in time, there will never be any issues with + having to adjust to any sort of arbitrary time boundary. In + the example provided, I even define the starting schedule + as crossing both a year and a month boundary, but the run times + would be based on the "Repeat" value and would therefore happen + weekly as desired. + + + Schedule { + Name = "Week 1 Rotation" + #Saturday. Would run Dec 30, Jan 20, Feb 10, etc. + Run { + Options { + Type = Full + Start = 2006-12-30 01:00 + Repeat = 3w + } + } + #Tuesday. Would run Jan 2, Jan 23, Feb 13, etc. + Run { + Options { + Type = Incremental + Start = 2007-01-02 01:00 + Repeat = 3w + } + } + } + + Schedule { + Name = "Week 2 Rotation" + #Saturday. Would run Jan 6, Jan 27, Feb 17, etc. + Run { + Options { + Type = Full + Start = 2007-01-06 01:00 + Repeat = 3w + } + } + #Tuesday. 
Would run Jan 9, Jan 30, Feb 20, etc. + Run { + Options { + Type = Incremental + Start = 2007-01-09 01:00 + Repeat = 3w + } + } + } + + Schedule { + Name = "Week 3 Rotation" + #Saturday. Would run Jan 13, Feb 3, Feb 24, etc. + Run { + Options { + Type = Full + Start = 2007-01-13 01:00 + Repeat = 3w + } + } + #Tuesday. Would run Jan 16, Feb 6, Feb 27, etc. + Run { + Options { + Type = Incremental + Start = 2007-01-16 01:00 + Repeat = 3w + } + } + } + + +Item 26: Improve Bacula's tape and drive usage and cleaning management. + Date: 8 November 2005, November 11, 2005 + Origin: Adam Thornton , + Arno Lehmann + Status: + + What: Make Bacula manage tape life cycle information, tape reuse + times and drive cleaning cycles. + + Why: All three parts of this project are important when operating + backups. + We need to know which tapes need replacement, and we need to + make sure the drives are cleaned when necessary. While many + tape libraries and even autoloaders can handle all this + automatically, support by Bacula can be helpful for smaller + (older) libraries and single drives. Limiting the number of + times a tape is used might prevent tape errors when using + tapes until the drives can't read it any more. Also, checking + drive status during operation can prevent some failures (as I + [Arno] had to learn the hard way...) -Item n+1: Incorporation of XACML2/SAML2 parsing + Notes: First, Bacula could (and even does, to some limited extent) + record tape and drive usage. For tapes, the number of mounts, + the amount of data, and the time the tape has actually been + running could be recorded. Data fields for Read and Write + time and Number of mounts already exist in the catalog (I'm + not sure if VolBytes is the sum of all bytes ever written to + that volume by Bacula). This information can be important + when determining which media to replace. 
The ability to mark
+      Volumes as "used up" after a given number of write cycles
+      should also be implemented so that a tape is never actually
+      worn out.  For the tape drives known to Bacula, similar
+      information is interesting to determine the device status and
+      expected lifetime: time it's been Reading and Writing, number
+      of tape Loads / Unloads / Errors.  This information is not yet
+      recorded as far as I [Arno] know.  A new volume status would
+      be necessary for the new state, like "Used up" or "Worn out".
+      Volumes with this state could be used for restores, but not
+      for writing.  These volumes should be migrated first (assuming
+      migration is implemented) and, once they are no longer needed,
+      could be moved to a Trash pool.
+
+      The next step would be to implement a drive cleaning setup.
+      Bacula already has knowledge about cleaning tapes.  Once it
+      has some information about cleaning cycles (measured in drive
+      run time, number of tapes used, or calendar days, for example)
+      it can automatically execute tape cleaning (with an
+      autochanger, obviously) or ask for operator assistance loading
+      a cleaning tape.
+
+      The final step would be to implement TAPEALERT checks not only
+      when changing tapes and only sending the information to the
+      administrator, but rather checking after each tape error,
+      checking on a regular basis (for example after each tape
+      file), and also before unloading and after loading a new tape.
+      Then, depending on the drive's TAPEALERT state and the known
+      drive cleaning state, Bacula could automatically schedule later
+      cleaning, clean immediately, or inform the operator.
+
+      Implementing this would perhaps require another catalog change
+      and perhaps major changes in SD code and the DIR-SD protocol,
+      so I'd only consider this worth implementing if it would
+      actually be used or even needed by many people.
+
+      Implementation of these projects could happen in three distinct
+      sub-projects: measuring tape and drive usage, retiring
+      volumes, and handling drive cleaning and TAPEALERTs.
+
+Item 27:  Implement support for stacking arbitrary stream filters, sinks.
+Date:   23 November 2006
+Origin: Landon Fuller
+Status: Planning. Assigned to landonf.
+
+What:
+      Implement support for the following:
+      - Stacking arbitrary stream filters (e.g., encryption, compression,
+        sparse data handling)
+      - Attaching file sinks to terminate stream filters (i.e., write out
+        the resultant data to a file)
+      - Refactor the restoration state machine accordingly
+
+Why:
+      The existing stream implementation suffers from the following:
+      - All state (compression, encryption, stream restoration) is
+        global across the entire restore process, for all streams.  There
+        are multiple entry and exit points in the restoration state
+        machine, and thus multiple places where state must be allocated,
+        deallocated, initialized, or reinitialized.  This results in
+        exceptional complexity for the author of a stream filter.
+      - The developer must enumerate all possible combinations of filters
+        and stream types (i.e., win32 data with encryption, without
+        encryption, with encryption AND compression, etc).
+
+Notes:
+      This feature request only covers implementing the stream filters/
+      sinks, and refactoring the file daemon's restoration implementation
+      accordingly.  If I have extra time, I will also rewrite the backup
+      implementation.  My intent in implementing the restoration first is
+      to solve pressing bugs in the restoration handling, and to ensure
+      that the new restore implementation handles existing backups
+      correctly.
+
+      I do not plan on changing the network or tape data structures to
+      support defining arbitrary stream filters, but supporting that
+      functionality is the ultimate goal.
+
+      Assistance with either code or testing would be fantastic.
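The filter/sink stacking described above can be sketched in a few lines (Python purely for illustration; Bacula itself is written in C/C++, and the class names here are invented). The key property is that each filter holds only its own state and forwards output to the next stage, so nothing is global to the restore process:

```python
import io
import zlib

class CompressFilter:
    """Hypothetical stream filter: compresses data and forwards it
    downstream.  All of its state is local to the instance."""
    def __init__(self, downstream):
        self.downstream = downstream
        self._z = zlib.compressobj()

    def write(self, data):
        self.downstream.write(self._z.compress(data))

    def close(self):
        self.downstream.write(self._z.flush())
        self.downstream.close()

class FileSink:
    """Terminates a filter stack, collecting the resultant data
    (standing in for a real output file)."""
    def __init__(self):
        self.buffer = io.BytesIO()

    def write(self, data):
        self.buffer.write(data)

    def close(self):
        pass

# Filters expose the same write/close interface as the sink, so they
# can be stacked in any order and to any depth (e.g. an encryption
# filter could wrap the compression filter in exactly the same way).
sink = FileSink()
stack = CompressFilter(sink)
stack.write(b"hello " * 1000)
stack.close()

restored = zlib.decompress(sink.buffer.getvalue())
print(len(sink.buffer.getvalue()), len(restored))
```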
+ +Item 28: Allow FD to initiate a backup + Origin: Frank Volf (frank at deze dot org) + Date: 17 November 2005 + Status: + + What: Provide some means, possibly by a restricted console that + allows a FD to initiate a backup, and that uses the connection + established by the FD to the Director for the backup so that + a Director that is firewalled can do the backup. + + Why: Makes backup of laptops much easier. + +Item 29: Directive/mode to backup only file changes, not entire file + Date: 11 November 2005 + Origin: Joshua Kugler + Marek Bajon + Status: + + What: Currently when a file changes, the entire file will be backed up in + the next incremental or full backup. To save space on the tapes + it would be nice to have a mode whereby only the changes to the + file would be backed up when it is changed. + + Why: This would save lots of space when backing up large files such as + logs, mbox files, Outlook PST files and the like. + + Notes: This would require the usage of disk-based volumes as comparing + files would not be feasible using a tape drive. + +Item 30: Automatic disabling of devices + Date: 2005-11-11 + Origin: Peter Eriksson + Status: + + What: After a configurable amount of fatal errors with a tape drive + Bacula should automatically disable further use of a certain + tape drive. There should also be "disable"/"enable" commands in + the "bconsole" tool. + + Why: On a multi-drive jukebox there is a possibility of tape drives + going bad during large backups (needing a cleaning tape run, + tapes getting stuck). It would be advantageous if Bacula would + automatically disable further use of a problematic tape drive + after a configurable amount of errors has occurred. + + An example: I have a multi-drive jukebox (6 drives, 380+ slots) + where tapes occasionally get stuck inside the drive. Bacula will + notice that the "mtx-changer" command will fail and then fail + any backup jobs trying to use that drive. 
However, it will still + keep on trying to run new jobs using that drive and fail - + forever, and thus failing lots and lots of jobs... Since we have + many drives Bacula could have just automatically disabled + further use of that drive and used one of the other ones + instead. + +Item 31: Incorporation of XACML2/SAML2 parsing Date: 19 January 2006 Origin: Adam Thornton Status: Blue sky @@ -977,23 +891,201 @@ Item n+1: Incorporation of XACML2/SAML2 parsing a generic ACL framework. Basically, the costs of implementation are high, but they're largely both external to Bacula and already sunk. -Item 1: Add an over-ride in the Schedule configuration to use a - different pool for different backup types. -Date: 19 Jan 2005 -Origin: Chad Slater -Status: - - What: Adding a FullStorage=BigTapeLibrary in the Schedule resource - would help those of us who use different storage devices for different - backup levels cope with the "auto-upgrade" of a backup. +Item 32: Clustered file-daemons + Origin: Alan Brown ajb2 at mssl dot ucl dot ac dot uk + Date: 24 July 2006 + Status: + What: A "virtual" filedaemon, which is actually a cluster of real ones. - Why: Assume I add several new device to be backed up, i.e. several - hosts with 1TB RAID. To avoid tape switching hassles, incrementals are - stored in a disk set on a 2TB RAID. If you add these devices in the - middle of the month, the incrementals are upgraded to "full" backups, - but they try to use the same storage device as requested in the - incremental job, filling up the RAID holding the differentials. If we - could override the Storage parameter for full and/or differential - backups, then the Full job would use the proper Storage device, which - has more capacity (i.e. a 8TB tape library. 
+
+  Why:   In the case of clustered filesystems (SAN setups, GFS, or OCFS2,
+         etc.) multiple machines may have access to the same set of
+         filesystems.
+
+         For performance reasons, one may wish to initiate backups from
+         several of these machines simultaneously, instead of just using
+         one backup source for the common clustered filesystem.
+
+         For obvious reasons, normally backups of $A-FD/$PATH and
+         $B-FD/$PATH are treated as different backup sets.  In this case
+         they are the same communal set.
+
+         Likewise when restoring, it would be easier to just specify
+         one of the cluster machines and let Bacula decide which to use.
+
+         This can be faked to some extent using DNS round robin entries
+         and a virtual IP address; however, it means "status client" will
+         always give bogus answers.  Additionally there is no way of
+         spreading the load evenly among the servers.
+
+         What is required is something similar to the storage daemon
+         autochanger directives, so that Bacula can keep track of
+         operating backups/restores and direct new jobs to a "free"
+         client.
+
+  Notes:
+
+Item 33:  Commercial database support
+  Origin: Russell Howe
+  Date:   26 July 2006
+  Status:
+
+  What:  It would be nice for the database backend to support more
+         databases.  I'm thinking of SQL Server at the moment, but I guess
+         Oracle, DB2, MaxDB, etc. are all candidates.  SQL Server would
+         presumably be implemented using FreeTDS or maybe an ODBC library?
+
+  Why:   We only really have one database server, which is MS SQL Server
+         2000.  Maintaining a second one for the backup software is a
+         burden (we grew out of SQLite, which I liked, but which didn't
+         work so well with our database size).  We don't really have a
+         machine with the resources to run postgres, and would rather only
+         maintain a single DBMS.  We're stuck with SQL Server because
+         pretty much all the company's custom applications (written by
+         consultants) are locked into SQL Server 2000.
I can imagine
+         this scenario is fairly common, and it would be nice to use the
+         existing properly specced database server for storing Bacula's
+         catalog, rather than having to run a second DBMS.
+
+
+Item 34:  Archive data
+  Date:   15 May 2006
+  Origin: Calvin Streeting calvin at absentdream dot com
+  Status:
+
+  What:  The ability to archive to media (DVD/CD) in an uncompressed
+         format for dead filing (archiving, not backing up).
+
+  Why:   At my workplace, when jobs are finished they are moved off of the
+         main file servers (RAID-based systems) onto a simple Linux file
+         server (IDE-based system) so users can find old information
+         without contacting the IT dept.
+
+         So this data doesn't really change, it only gets added to, but it
+         also needs backing up.  At the moment it takes about 8 hours to
+         back up our servers (working data), so rather than add more time
+         to existing backups I am trying to implement a system where we
+         back up the archive data to CD/DVD.  These disks would only need
+         to be appended to (burn only new/changed files to new disks for
+         off-site storage) - basically, understand the difference between
+         archive data and live data.
+
+  Notes: Scan the data and email me when it needs burning; divide it into
+         predefined chunks; keep a record of what is on what disk; make me
+         a label (simple php->mysql->pdf stuff - I could do this bit);
+         the ability to save data uncompressed so it can be read on any
+         other system (future-proofing the data); save the catalog with
+         the disk as some kind of menu system.
+
+Item 35:  Filesystem watch triggered backup.
+  Date:   31 August 2006
+  Origin: Jesper Krogh
+  Status: Unimplemented, probably depends on "client initiated backups"
+
+  What:  With inotify and similar filesystem-triggered notification
+         systems it is possible to have the file daemon monitor
+         filesystem changes and initiate a backup.
+
+  Why:   There are two situations where this is nice to have.
+         1) It is possible to get a much finer-grained backup than with
+            the fixed schedules used now.
A file created and deleted
+            a few hours later can automatically be caught.
+
+         2) The load introduced on the system will probably be
+            distributed more evenly.
+
+  Notes: This can be combined with configuration that specifies
+         something like: "at most every 15 minutes or when changes
+         consumed XX MB".
+
+Kern Notes: I would rather see this implemented by an external program
+         that monitors the filesystem changes, then uses the console
+         to start the appropriate job.
+
+Item 36:  Implement multiple numeric backup levels as supported by dump
+Date:   3 April 2006
+Origin: Daniel Rich
+Status:
+What:  Dump allows specification of backup levels numerically instead of
+       just "full", "incr", and "diff".  In this system, at any given
+       level, all files are backed up that were modified since the last
+       backup of a higher level (with 0 being the highest and 9 being the
+       lowest).  A level 0 is therefore equivalent to a full, level 9 an
+       incremental, and the levels 1 through 8 are varying levels of
+       differentials.  For Bacula's sake, these could be represented as
+       "full", "incr", and "diff1", "diff2", etc.
+
+Why:   Support for multiple backup levels would provide for more advanced
+       backup rotation schemes such as "Towers of Hanoi".  This would
+       allow better flexibility in performing backups, and can lead to
+       shorter recovery times.
+
+Notes: Legato Networker supports a similar system with full, incr, and
+       1-9 as levels.
+
+Item 37:  Implement a server-side compression feature
+  Date:   18 December 2006
+  Origin: Vadim A. Umanski, e-mail umanski@ext.ru
+  Status:
+  What:  The ability to compress backup data on the server receiving the
+         data instead of doing that on the client sending it.
+  Why:   The need is practical.  I've got some machines that can send
+         data to the network 4 or 5 times faster than they can compress
+         it (I've measured that).  They're using fast enough SCSI/FC
+         disk subsystems but rather slow CPUs (e.g. UltraSPARC II).
+         And the backup server has quite fast CPUs (e.g. dual P4
+         Xeons) and quite a low load.  When you have 20, 50 or 100 GB
+         of raw data, running a job 4 to 5 times faster really
+         matters.  On the other hand, the data can be compressed 50%
+         or better, so losing twice the space for disk backup is not
+         good at all.  And the network is all mine (I have a dedicated
+         management/provisioning network) and I can get as high a
+         bandwidth as I need - 100Mbps, 1000Mbps...  That's why the
+         server-side compression feature is needed!
+  Notes:
+
+Item 38:  Cause daemons to use a specific IP address to source communications
+  Origin: Bill Moran
+  Date:   18 Dec 2006
+  Status:
+  What:  Cause Bacula daemons (dir, fd, sd) to always use the IP address
+         specified in the [DIR|FD|SD]Addr directive as the source IP
+         for initiating communication.
+  Why:   On complex networks, as well as extremely secure networks, it's
+         not unusual to have multiple possible routes through the network.
+         Often, each of these routes is secured by different policies
+         (effectively, firewalls allow or deny different traffic depending
+         on the source address).
+         Unfortunately, it can sometimes be difficult or impossible to
+         represent this in a system routing table, as the result is
+         excessive subnetting that quickly exhausts available IP space.
+         The best available workaround is to provide multiple IPs to
+         a single machine that are all on the same subnet.  In order
+         for this to work properly, applications must support the ability
+         to bind outgoing connections to a specified address; otherwise
+         the operating system will always choose the first IP that
+         matches the required route.
+  Notes: Many other programs support this.  For example, the following
+         can be configured in BIND:
+
+           query-source address 10.0.0.1;
+           transfer-source 10.0.0.2;
+
+         This means queries from this server will always come from
+         10.0.0.1 and zone transfers will always originate from
+         10.0.0.2.
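The requested behavior is the standard bind-before-connect pattern; a minimal sketch follows (Python purely for illustration - Bacula's daemons are written in C/C++, and the helper name is invented). Binding the socket before connect() pins the source address instead of letting the kernel pick one by route lookup:

```python
import socket

def connect_from(src_ip, dst_host, dst_port):
    """Open a TCP connection whose source address is pinned to src_ip."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    # Binding before connect() forces the kernel to use src_ip as the
    # source address rather than choosing one by route lookup.
    s.bind((src_ip, 0))          # port 0 = any free local port
    s.connect((dst_host, dst_port))
    return s

# Demonstration against a throwaway listener on the loopback interface:
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))
listener.listen(1)
conn = connect_from("127.0.0.1", "127.0.0.1", listener.getsockname()[1])
print(conn.getsockname()[0])     # the source address we asked for
```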
+ +Kern notes: I think this would add very little functionality, but a *lot* of + additional overhead to Bacula. + + + +============= Empty Feature Request form =========== +Item n: One line summary ... + Date: Date submitted + Origin: Name and email of originator. + Status: + + What: More detailed explanation ... + + Why: Why it is important ... + + Notes: Additional notes or features (omit if not used) +============== End Feature Request form ==============