Projects:
Bacula Projects Roadmap
- Status updated 04 Jun 2009
+ Status updated 14 Jun 2009
Summary:
-
-Item 1: Ability to restart failed jobs
-Item 2: 'restore' menu: enter a JobId, automatically select dependents
-Item 3: Scheduling syntax that permits more flexibility and options
-Item 4: Data encryption on storage daemon
-Item 5: Deletion of disk Volumes when pruned
-Item 6: Implement Base jobs
-Item 7: Add ability to Verify any specified Job.
-Item 8: Improve Bacula's tape and drive usage and cleaning management
-Item 9: Allow FD to initiate a backup
-Item 10: Restore from volumes on multiple storage daemons
-Item 11: Implement Storage daemon compression
-Item 12: Reduction of communications bandwidth for a backup
-Item 13: Ability to reconnect a disconnected comm line
-Item 14: Start spooling even when waiting on tape
-Item 15: Enable/disable compression depending on storage device (disk/tape)
-Item 16: Include all conf files in specified directory
-Item 17: Multiple threads in file daemon for the same job
-Item 18: Possibilty to schedule Jobs on last Friday of the month
-Item 19: Include timestamp of job launch in "stat clients" output
-Item 20: Cause daemons to use a specific IP address to source communications
-Item 21: Message mailing based on backup types
-Item 22: Ability to import/export Bacula database entities
-Item 23: "Maximum Concurrent Jobs" for drives when used with changer device
-Item 24: Implementation of running Job speed limit.
-Item 25: Add an override in Schedule for Pools based on backup types
-Item 26: Automatic promotion of backup levels based on backup size
-Item 27: Allow inclusion/exclusion of files in a fileset by creation/mod times
-Item 28: Archival (removal) of User Files to Tape
-Item 29: An option to operate on all pools with update vol parameters
-Item 30: Automatic disabling of devices
-Item 31: List InChanger flag when doing restore.
-Item 32: Ability to defer Batch Insert to a later time
-Item 33: Add MaxVolumeSize/MaxVolumeBytes statement to Storage resource
-Item 34: Enable persistent naming/number of SQL queries
-Item 35: Port bat to Win32
-Item 36: Bacula Dir, FD and SD to support proxies
-Item 37: Add Minumum Spool Size directive
-Item 38: Backup and Restore of Windows Encrypted Files using Win raw encryption
-Item 39: Implement an interface between Bacula and Amazon's S3.
-Item 40: Convert Bacula existing tray monitor on Windows to a stand alone program
+* => item complete
+
+ Item 1: Ability to restart failed jobs
+*Item 2: 'restore' menu: enter a JobId, automatically select dependents
+ Item 3: Scheduling syntax that permits more flexibility and options
+ Item 4: Data encryption on storage daemon
+ Item 5: Deletion of disk Volumes when pruned
+ Item 6: Implement Base jobs
+ Item 7: Add ability to Verify any specified Job.
+ Item 8: Improve Bacula's tape and drive usage and cleaning management
+ Item 9: Allow FD to initiate a backup
+ Item 10: Restore from volumes on multiple storage daemons
+ Item 11: Implement Storage daemon compression
+ Item 12: Reduction of communications bandwidth for a backup
+ Item 13: Ability to reconnect a disconnected comm line
+ Item 14: Start spooling even when waiting on tape
+ Item 15: Enable/disable compression depending on storage device (disk/tape)
+ Item 16: Include all conf files in specified directory
+ Item 17: Multiple threads in file daemon for the same job
+ Item 18: Possibility to schedule Jobs on last Friday of the month
+ Item 19: Include timestamp of job launch in "stat clients" output
+ Item 20: Cause daemons to use a specific IP address to source communications
+ Item 21: Message mailing based on backup types
+ Item 22: Ability to import/export Bacula database entities
+ Item 23: "Maximum Concurrent Jobs" for drives when used with changer device
+ Item 24: Implementation of running Job speed limit.
+ Item 25: Add an override in Schedule for Pools based on backup types
+ Item 26: Automatic promotion of backup levels based on backup size
+ Item 27: Allow inclusion/exclusion of files in a fileset by creation/mod times
+ Item 28: Archival (removal) of User Files to Tape
+ Item 29: An option to operate on all pools with update vol parameters
+ Item 30: Automatic disabling of devices
+*Item 31: List InChanger flag when doing restore.
+ Item 32: Ability to defer Batch Insert to a later time
+ Item 33: Add MaxVolumeSize/MaxVolumeBytes statement to Storage resource
+ Item 34: Enable persistent naming/number of SQL queries
+ Item 35: Port bat to Win32
+ Item 36: Bacula Dir, FD and SD to support proxies
+ Item 37: Add Minimum Spool Size directive
+ Item 38: Backup and Restore of Windows Encrypted Files using Win raw encryption
+ Item 39: Implement an interface between Bacula and Amazon's S3.
+ Item 40: Convert Bacula's existing tray monitor on Windows to a stand-alone program
Item 1: Ability to restart failed jobs
Date: 26 April 2009
Item 2: 'restore' menu: enter a JobId, automatically select dependents
Origin: Graham Keeling (graham@equiinet.com)
Date: 13 March 2009
+Status: Done in 3.0.2
-Status: Proposing
-
-What: Add to the bconsole 'restore' menu the ability to select a job
- by JobId, and have bacula automatically select all the dependent jobs.
+What: Add to the bconsole 'restore' menu the ability to select a job
+ by JobId, and have bacula automatically select all the
+ dependent jobs.
Why: Currently, you either have to...
a) laboriously type in a date that is greater than the date of the backup that
Item 31: List InChanger flag when doing restore.
Origin: Jesper Krogh<jesper@krogh.cc>
Date: 17 Oct 2008
- Status:
+ Status: Done in version 3.0.2
What: When doing a restore the restore selection dialog ends by telling stuff
like this:
encrypted-file-related callback functions.
-Item 39: Implement an interface between Bacula and Amazon's S3.
+Item 39: Implement an interface between Bacula and cloud storage like Amazon's S3.
Date: 25 August 2008
Origin: Soren Hansen <soren@ubuntu.com>
Status: Not started.
What: Enable the storage daemon to store backup data on Amazon's
S3 service.
- Why: Amazon's S3 is a cheap way to store data off-site. Current
- ways to integrate Bacula and S3 involve storing all the data
- locally and syncing them to S3, and manually fetching them
- again when they're needed. This is very cumbersome.
+ Why: Amazon's S3 is a cheap way to store data off-site.
+
+ Notes: If we configure the Pool to put only one job per volume (S3 does
+   not support append operations), and the volume size isn't too big
+   (100MB?), it should be easy to adapt the disk-changer script to add
+   get/put procedures with curl, so the data would be safely copied
+   during the Job.
+
+   The cloud should only be used with Copy jobs; users should always
+   keep a copy of their data on site.
+   We should also consider maintaining our own cache, always trying to
+   keep the cloud volume on the local disk. (I don't know whether users
+   want to store 100GB in the cloud, so disk size shouldn't be a
+   problem.) For example, if Bacula wants to recycle a volume, it would
+   start by downloading the file only to truncate it a few seconds
+   later; we should avoid that if we can...
Item 40: Convert Bacula's existing tray monitor on Windows to a stand-alone program
Date: 26 April 2009