Projects:
Bacula Projects Roadmap
- Status updated 15 December 2006
-
+ Status updated 3 January 2007
Summary:
-Item 1: Accurate restoration of renamed/deleted files
+Item 1: Accurate restoration of renamed/deleted files
Item 2: Implement a Bacula GUI/management tool.
Item 3: Implement Base jobs.
Item 4: Implement from-client and to-client on restore command line.
Item 34: Archive data
Item 35: Filesystem watch triggered backup.
Item 36: Implement multiple numeric backup levels as supported by dump
-
+Item 37: Implement a server-side compression feature
+Item 38: Cause daemons to use a specific IP address to source communications
+Item 39: Multiple threads in file daemon for the same job
+Item 40: Restore only file attributes (permissions, ACL, owner, group...)
+Item 41: Add an item to the restore option where you can select a pool
Below, you will find more information on future projects:
FD a list of files/attribs, and the FD must search the
list and compare it for each file to be saved.
-Item 4: Implement from-client and to-client on restore command line.
- Date: 11 December 2006
- Origin: Discussion on Bacula-users entitled 'Scripted restores to
- different clients', December 2006
- Status: New feature request
+Item 4: Implement from-client and to-client on restore command line.
+ Date: 11 December 2006
+ Origin: Discussion on Bacula-users entitled 'Scripted restores to
+ different clients', December 2006
+ Status: New feature request
- What: While using bconsole interactively, you can specify the client
- that a backup job is to be restored for, and then you can
- specify later a different client to send the restored files
- back to. However, using the 'restore' command with all options
- on the command line, this cannot be done, due to the ambiguous
- 'client' parameter. Additionally, this parameter means different
- things depending on if it's specified on the command line or
- afterwards, in the Modify Job screens.
+ What: While using bconsole interactively, you can specify the client
+ that a backup job is to be restored for, and then you can
+ specify later a different client to send the restored files
+ back to. However, using the 'restore' command with all options
+ on the command line, this cannot be done, due to the ambiguous
+ 'client' parameter. Additionally, this parameter means different
+         things depending on whether it's specified on the command line or
+ afterwards, in the Modify Job screens.
- Why: This feature would enable restore jobs to be more completely
- automated, for example by a web or GUI front-end.
+ Why: This feature would enable restore jobs to be more completely
+ automated, for example by a web or GUI front-end.
Notes: client can also be implied by specifying the jobid on the command
- line
+ line
Item 5: Implement creation and maintenance of copy pools
Date: 27 November 2005
Origin: Florian Schnabel <florian.schnabel at docufy dot de>
Status:
- What: An easy option to skip a certain job on a certain date.
- Why: You could then easily skip tape backups on holidays. Especially
- if you got no autochanger and can only fit one backup on a tape
- that would be really handy, other jobs could proceed normally
- and you won't get errors that way.
+  What: An easy option to skip a certain job on a certain date.
+  Why: You could then easily skip tape backups on holidays. Especially
+       if you have no autochanger and can only fit one backup on a tape,
+       that would be really handy; other jobs could proceed normally
+       and you won't get errors that way.
Item 16: Tray monitor window cleanups
Item 18: Automatic promotion of backup levels
- Date: 19 January 2006
- Origin: Adam Thornton <athornton@sinenomine.net>
- Status: Blue sky
+ Date: 19 January 2006
+ Origin: Adam Thornton <athornton@sinenomine.net>
+ Status:
- What: Amanda has a feature whereby it estimates the space that a
- differential, incremental, and full backup would take. If the
- difference in space required between the scheduled level and the next
- level up is beneath some user-defined critical threshold, the backup
- level is bumped to the next type. Doing this minimizes the number of
- volumes necessary during a restore, with a fairly minimal cost in
- backup media space.
+ What: Amanda has a feature whereby it estimates the space that a
+ differential, incremental, and full backup would take. If the
+ difference in space required between the scheduled level and the next
+ level up is beneath some user-defined critical threshold, the backup
+ level is bumped to the next type. Doing this minimizes the number of
+ volumes necessary during a restore, with a fairly minimal cost in
+ backup media space.
- Why: I know at least one (quite sophisticated and smart) user
- for whom the absence of this feature is a deal-breaker in terms of
- using Bacula; if we had it it would eliminate the one cool thing
- Amanda can do and we can't (at least, the one cool thing I know of).
+ Why: I know at least one (quite sophisticated and smart) user
+ for whom the absence of this feature is a deal-breaker in terms of
+ using Bacula; if we had it it would eliminate the one cool thing
+ Amanda can do and we can't (at least, the one cool thing I know of).
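The promotion rule described above amounts to a threshold check over per-level size estimates. A minimal sketch, assuming such estimates were available (all names here are hypothetical; Bacula currently has no such estimator):

```python
# Sketch of Amanda-style automatic level promotion. `estimates` maps
# each level to its predicted backup size in bytes; `threshold` is the
# user-defined extra cost we are willing to pay for a higher level.
LEVELS = ["incremental", "differential", "full"]

def promote_level(scheduled, estimates, threshold):
    """Bump the backup level while the next level up would cost no more
    than `threshold` extra bytes over the currently chosen level."""
    i = LEVELS.index(scheduled)
    while (i + 1 < len(LEVELS)
           and estimates[LEVELS[i + 1]] - estimates[LEVELS[i]] <= threshold):
        i += 1
    return LEVELS[i]

# A differential barely larger than the incremental gets promoted;
# the much larger full does not.
print(promote_level("incremental",
                    {"incremental": 100, "differential": 150, "full": 10000},
                    threshold=100))  # prints: differential
```

Promoting minimizes the number of volumes a restore needs, at a bounded cost in media space, exactly as the Amanda behaviour is described above.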
Item 19: Add an override in Schedule for Pools based on backup types.
has more capacity (i.e. an 8TB tape library).
Item 20: An option to operate on all pools with update vol parameters
- Origin: Dmitriy Pinchukov <absh@bossdev.kiev.ua>
- Date: 16 August 2006
- Status:
+ Origin: Dmitriy Pinchukov <absh@bossdev.kiev.ua>
+ Date: 16 August 2006
+ Status:
What: When I do update -> Volume parameters -> All Volumes
from Pool, then I have to select pools one by one. I'd like
Item 23: Message mailing based on backup types
-Origin: Evan Kaufman <evan.kaufman@gmail.com>
- Date: January 6, 2006
-Status:
+ Origin: Evan Kaufman <evan.kaufman@gmail.com>
+ Date: January 6, 2006
+ Status:
- What: In the "Messages" resource definitions, allowing messages
- to be mailed based on the type (backup, restore, etc.) and level
- (full, differential, etc) of job that created the originating
- message(s).
+  What: In the "Messages" resource definitions, allow messages
+        to be mailed based on the type (backup, restore, etc.) and level
+        (full, differential, etc.) of the job that created the originating
+        message(s).
-Why: It would, for example, allow someone's boss to be emailed
- automatically only when a Full Backup job runs, so he can
- retrieve the tapes for offsite storage, even if the IT dept.
- doesn't (or can't) explicitly notify him. At the same time, his
- mailbox wouldnt be filled by notifications of Verifies, Restores,
- or Incremental/Differential Backups (which would likely be kept
- onsite).
+ Why: It would, for example, allow someone's boss to be emailed
+ automatically only when a Full Backup job runs, so he can
+ retrieve the tapes for offsite storage, even if the IT dept.
+ doesn't (or can't) explicitly notify him. At the same time, his
+        mailbox wouldn't be filled by notifications of Verifies, Restores,
+ or Incremental/Differential Backups (which would likely be kept
+ onsite).
-Notes: One way this could be done is through additional message types, for example:
+ Notes: One way this could be done is through additional message types, for example:
Messages {
# email the boss only on full system backups
Origin: Landon Fuller <landonf@threerings.net>
Status: Planning. Assigned to landonf.
-What:
- Implement support for the following:
- - Stacking arbitrary stream filters (eg, encryption, compression,
- sparse data handling))
- - Attaching file sinks to terminate stream filters (ie, write out
- the resultant data to a file)
- - Refactor the restoration state machine accordingly
-
-Why:
- The existing stream implementation suffers from the following:
- - All state (compression, encryption, stream restoration), is
- global across the entire restore process, for all streams. There are
- multiple entry and exit points in the restoration state machine, and
- thus multiple places where state must be allocated, deallocated,
- initialized, or reinitialized. This results in exceptional complexity
- for the author of a stream filter.
- - The developer must enumerate all possible combinations of filters
- and stream types (ie, win32 data with encryption, without encryption,
- with encryption AND compression, etc).
-
-Notes:
- This feature request only covers implementing the stream filters/
- sinks, and refactoring the file daemon's restoration implementation
- accordingly. If I have extra time, I will also rewrite the backup
- implementation. My intent in implementing the restoration first is to
- solve pressing bugs in the restoration handling, and to ensure that
- the new restore implementation handles existing backups correctly.
-
- I do not plan on changing the network or tape data structures to
- support defining arbitrary stream filters, but supporting that
- functionality is the ultimate goal.
-
- Assistance with either code or testing would be fantastic.
+ What: Implement support for the following:
+ - Stacking arbitrary stream filters (eg, encryption, compression,
+       sparse data handling)
+ - Attaching file sinks to terminate stream filters (ie, write out
+ the resultant data to a file)
+ - Refactor the restoration state machine accordingly
+
+ Why: The existing stream implementation suffers from the following:
+ - All state (compression, encryption, stream restoration), is
+ global across the entire restore process, for all streams. There are
+ multiple entry and exit points in the restoration state machine, and
+ thus multiple places where state must be allocated, deallocated,
+ initialized, or reinitialized. This results in exceptional complexity
+ for the author of a stream filter.
+ - The developer must enumerate all possible combinations of filters
+ and stream types (ie, win32 data with encryption, without encryption,
+ with encryption AND compression, etc).
+
+ Notes: This feature request only covers implementing the stream filters/
+ sinks, and refactoring the file daemon's restoration implementation
+ accordingly. If I have extra time, I will also rewrite the backup
+ implementation. My intent in implementing the restoration first is to
+ solve pressing bugs in the restoration handling, and to ensure that
+ the new restore implementation handles existing backups correctly.
+
+ I do not plan on changing the network or tape data structures to
+ support defining arbitrary stream filters, but supporting that
+ functionality is the ultimate goal.
+
+ Assistance with either code or testing would be fantastic.
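The filter/sink design can be illustrated with a toy chain (illustrative only, not Bacula's actual classes): each filter owns its own state and only knows its downstream neighbour, and a sink terminates the chain, so adding a new filter never requires enumerating filter combinations:

```python
import zlib

class FileSink:
    """Terminates a filter chain by collecting the resulting bytes
    (a real sink would write them to a file)."""
    def __init__(self):
        self.chunks = []
    def write(self, data):
        self.chunks.append(data)
    def close(self):
        pass

class CompressFilter:
    """A stream filter: compresses data, then hands it downstream.
    Its zlib state is private to this instance, not global."""
    def __init__(self, downstream):
        self.downstream = downstream
        self._z = zlib.compressobj()
    def write(self, data):
        self.downstream.write(self._z.compress(data))
    def close(self):
        self.downstream.write(self._z.flush())
        self.downstream.close()

# Stack filters in any combination; an encryption filter could wrap
# the chain the same way without knowing what is below it.
sink = FileSink()
chain = CompressFilter(sink)
chain.write(b"hello bacula " * 64)
chain.close()
assert zlib.decompress(b"".join(sink.chunks)) == b"hello bacula " * 64
```

Because all per-stream state lives inside each filter object, the multiple entry/exit points of the current restoration state machine collapse into plain `write`/`close` calls on the top of the chain.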
Item 28: Allow FD to initiate a backup
Origin: Frank Volf (frank at deze dot org)
files would not be feasible using a tape drive.
Item 30: Automatic disabling of devices
- Date: 2005-11-11
- Origin: Peter Eriksson <peter at ifm.liu dot se>
- Status:
+ Date: 2005-11-11
+ Origin: Peter Eriksson <peter at ifm.liu dot se>
+ Status:
What: After a configurable amount of fatal errors with a tape drive
Bacula should automatically disable further use of a certain
What: The ability to archive to media (dvd/cd) in an uncompressed format
for dead filing (archiving not backing up)
- Why: At my works when jobs are finished and moved off of the main file
- servers (raid based systems) onto a simple linux file server (ide based
- system) so users can find old information without contacting the IT
- dept.
-
- So this data dosn't realy change it only gets added to,
- But it also needs backing up. At the moment it takes
- about 8 hours to back up our servers (working data) so
- rather than add more time to existing backups i am trying
- to implement a system where we backup the acrhive data to
- cd/dvd these disks would only need to be appended to
- (burn only new/changed files to new disks for off site
- storage). basialy understand the differnce between
- achive data and live data.
-
- Notes: Scan the data and email me when it needs burning divide
- into predifind chunks keep a recored of what is on what
- disk make me a label (simple php->mysql=>pdf stuff) i
- could do this bit ability to save data uncompresed so
- it can be read in any other system (future proof data)
- save the catalog with the disk as some kind of menu
- system
+  Why: At my workplace, when jobs are finished they are moved off of the
+       main file servers (RAID-based systems) onto a simple Linux file
+       server (IDE-based system) so users can find old information
+       without contacting the IT dept.
+
+       This data doesn't really change, it only gets added to, but it
+       also needs backing up. At the moment it takes about 8 hours to
+       back up our servers (working data), so rather than add more time
+       to existing backups I am trying to implement a system where we
+       back up the archive data to CD/DVD. These disks would only need
+       to be appended to (burn only new/changed files to new disks for
+       off-site storage). Basically, Bacula would understand the
+       difference between archive data and live data.
+
+  Notes: Scan the data and email me when it needs burning; divide it
+       into predefined chunks; keep a record of what is on what disk;
+       make me a label (simple php->mysql->pdf stuff, which I could do
+       myself); the ability to save data uncompressed so it can be read
+       on any other system (future-proofing the data); save the catalog
+       with the disk as some kind of menu system.
Item 35: Filesystem watch triggered backup.
Date: 31 August 2006
Notes: Legato Networker supports a similar system with full, incr, and 1-9 as
levels.
-Item 1: Implement a server-side compression feature
+
+Item 37: Implement a server-side compression feature
Date: 18 December 2006
Origin: Vadim A. Umanski , e-mail umanski@ext.ru
Status:
That's why the server-side compression feature is needed!
Notes:
-Item 1: Cause daemons to use a specific IP address to source communications
- Origin: Bill Moran <wmoran@collaborativefusion.com>
- Date: 18 Dec 2006
+Item 38: Cause daemons to use a specific IP address to source communications
+ Origin: Bill Moran <wmoran@collaborativefusion.com>
+ Date: 18 Dec 2006
Status:
- What: Cause Bacula daemons (dir, fd, sd) to always use the ip address
- specified in the [DIR|DF|SD]Addr directive as the source IP
- for initiating communication.
- Why: On complex networks, as well as extremely secure networks, it's
- not unusual to have multiple possible routes through the network.
- Often, each of these routes is secured by different policies
- (effectively, firewalls allow or deny different traffic depending
- on the source address)
- Unfortunately, it can sometimes be difficult or impossible to
- represent this in a system routing table, as the result is
- excessive subnetting that quickly exhausts available IP space.
- The best available workaround is to provide multiple IPs to
- a single machine that are all on the same subnet. In order
- for this to work properly, applications must support the ability
- to bind outgoing connections to a specified address, otherwise
- the operating system will always choose the first IP that
- matches the required route.
- Notes: Many other programs support this. For example, the following
- can be configured in BIND:
- query-source address 10.0.0.1;
- transfer-source 10.0.0.2;
- Which means queries from this server will always come from
- 10.0.0.1 and zone transfers will always originate from
- 10.0.0.2.
-
-Item n: Multiple threads in file daemon for the same job
+  What: Cause Bacula daemons (dir, fd, sd) to always use the IP address
+        specified in the [DIR|FD|SD]Addr directive as the source IP
+        for initiating communication.
+ Why: On complex networks, as well as extremely secure networks, it's
+ not unusual to have multiple possible routes through the network.
+ Often, each of these routes is secured by different policies
+ (effectively, firewalls allow or deny different traffic depending
+        on the source address).
+ Unfortunately, it can sometimes be difficult or impossible to
+ represent this in a system routing table, as the result is
+ excessive subnetting that quickly exhausts available IP space.
+ The best available workaround is to provide multiple IPs to
+ a single machine that are all on the same subnet. In order
+ for this to work properly, applications must support the ability
+ to bind outgoing connections to a specified address, otherwise
+ the operating system will always choose the first IP that
+ matches the required route.
+ Notes: Many other programs support this. For example, the following
+ can be configured in BIND:
+ query-source address 10.0.0.1;
+ transfer-source 10.0.0.2;
+ Which means queries from this server will always come from
+ 10.0.0.1 and zone transfers will always originate from
+ 10.0.0.2.
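The same source-binding technique BIND uses is available to any application at the socket level: bind the outgoing socket to the desired address (with port 0 for an ephemeral port) before connecting. A sketch in Python, with illustrative addresses:

```python
import socket

def connect_from(source_ip, dest_host, dest_port):
    """Open a TCP connection whose source address is pinned to
    `source_ip` by binding before connect (port 0 = OS-chosen port)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((source_ip, 0))          # pin the source address first
    s.connect((dest_host, dest_port))
    return s

# e.g. connect_from("10.0.0.1", "backup-server", 9103) would make all
# traffic for this connection originate from 10.0.0.1.
```

Once the socket is bound, the OS routes from the given address instead of picking the first IP that matches the route, which is exactly the behaviour the item requests for the Bacula daemons.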
+
+Item 39: Multiple threads in file daemon for the same job
Date: 27 November 2005
Origin: Ove Risberg (Ove.Risberg at octocode dot com)
Status:
Why: Multiple concurrent backups of a large fileserver with many
disks and controllers will be much faster.
-Item n: Restore only file attributes (permissions, ACL, owner, group...)
+Item 40: Restore only file attributes (permissions, ACL, owner, group...)
Origin: Eric Bollengier
Date: 30/12/2006
Status:
If the file isn't here, we can create an empty one and apply
rights or do nothing.
+Item 41: Add an item to the restore option where you can select a pool
+ Origin: kshatriyak at gmail dot com
+ Date: 1/1/2006
+ Status:
+
+ What: In the restore option (Select the most recent backup for a
+ client) it would be useful to add an option where you can limit
+ the selection to a certain pool.
+
+  Why: When using cloned jobs, most of the time you have 2 pools - a
+       disk pool and a tape pool. People who have 2 pools would like to
+       select the most recent backup from disk, not from tape (tape
+       would only be needed in an emergency). However, the most recent
+       backup (which may differ by just a second from the disk backup)
+       may be on tape and would be selected. The problem becomes bigger
+       if you have a full and a differential - the most "recent" full
+       backup may be on disk, while the most recent differential may be
+       on tape (though the differential on disk may differ by only a
+       second or so). Bacula will complain that the backups reside on
+       different media then. For now, the only solution when restoring
+       with 2 pools is to manually search for the right job-ids and
+       enter them by hand, which is a bit error-prone.
+
============= Empty Feature Request form ===========
Item n: One line summary ...
Date: Date submitted