diff --git a/bacula/projects b/bacula/projects
index 85bff3dcf3..e67543e3a2 100644
--- a/bacula/projects
+++ b/bacula/projects
@@ -1,50 +1,51 @@
 
 Projects:
                      Bacula Projects Roadmap 
-                    Status updated 04 Jun 2009
+                    Status updated 14 Jun 2009
 
 Summary:
-
-Item 1: Ability to restart failed jobs
-Item 2: 'restore' menu: enter a JobId, automatically select dependents
-Item 3: Scheduling syntax that permits more flexibility and options
-Item 4: Data encryption on storage daemon
-Item 5: Deletion of disk Volumes when pruned
-Item 6: Implement Base jobs
-Item 7: Add ability to Verify any specified Job.
-Item 8: Improve Bacula's tape and drive usage and cleaning management
-Item 9: Allow FD to initiate a backup
-Item 10: Restore from volumes on multiple storage daemons
-Item 11: Implement Storage daemon compression
-Item 12: Reduction of communications bandwidth for a backup
-Item 13: Ability to reconnect a disconnected comm line
-Item 14: Start spooling even when waiting on tape
-Item 15: Enable/disable compression depending on storage device (disk/tape)
-Item 16: Include all conf files in specified directory
-Item 17: Multiple threads in file daemon for the same job
-Item 18: Possibilty to schedule Jobs on last Friday of the month
-Item 19: Include timestamp of job launch in "stat clients" output
-Item 20: Cause daemons to use a specific IP address to source communications
-Item 21: Message mailing based on backup types
-Item 22: Ability to import/export Bacula database entities
-Item 23: "Maximum Concurrent Jobs" for drives when used with changer device
-Item 24: Implementation of running Job speed limit.
-Item 25: Add an override in Schedule for Pools based on backup types
-Item 26: Automatic promotion of backup levels based on backup size
-Item 27: Allow inclusion/exclusion of files in a fileset by creation/mod times
-Item 28: Archival (removal) of User Files to Tape
-Item 29: An option to operate on all pools with update vol parameters
-Item 30: Automatic disabling of devices
-Item 31: List InChanger flag when doing restore.
-Item 32: Ability to defer Batch Insert to a later time
-Item 33: Add MaxVolumeSize/MaxVolumeBytes statement to Storage resource
-Item 34: Enable persistent naming/number of SQL queries
-Item 35: Port bat to Win32
-Item 36: Bacula Dir, FD and SD to support proxies
-Item 37: Add Minumum Spool Size directive
-Item 38: Backup and Restore of Windows Encrypted Files using Win raw encryption
-Item 39: Implement an interface between Bacula and Amazon's S3.
-Item 40: Convert Bacula existing tray monitor on Windows to a stand alone program
+* => item complete
+
+ Item 1: Ability to restart failed jobs
+*Item 2: 'restore' menu: enter a JobId, automatically select dependents
+ Item 3: Scheduling syntax that permits more flexibility and options
+ Item 4: Data encryption on storage daemon
+ Item 5: Deletion of disk Volumes when pruned
+ Item 6: Implement Base jobs
+ Item 7: Add ability to Verify any specified Job.
+ Item 8: Improve Bacula's tape and drive usage and cleaning management
+ Item 9: Allow FD to initiate a backup
+*Item 10: Restore from volumes on multiple storage daemons
+ Item 11: Implement Storage daemon compression
+ Item 12: Reduction of communications bandwidth for a backup
+ Item 13: Ability to reconnect a disconnected comm line
+ Item 14: Start spooling even when waiting on tape
+ Item 15: Enable/disable compression depending on storage device (disk/tape)
+ Item 16: Include all conf files in specified directory
+ Item 17: Multiple threads in file daemon for the same job
+ Item 18: Possibility to schedule Jobs on last Friday of the month
+ Item 19: Include timestamp of job launch in "stat clients" output
+*Item 20: Cause daemons to use a specific IP address to source communications
+ Item 21: Message mailing based on backup types
+ Item 22: Ability to import/export Bacula database entities
+*Item 23: "Maximum Concurrent Jobs" for drives when used with changer device
+ Item 24: Implementation of running Job speed limit.
+ Item 25: Add an override in Schedule for Pools based on backup types
+ Item 26: Automatic promotion of backup levels based on backup size
+ Item 27: Allow inclusion/exclusion of files in a fileset by creation/mod times
+ Item 28: Archival (removal) of User Files to Tape
+ Item 29: An option to operate on all pools with update vol parameters
+ Item 30: Automatic disabling of devices
+*Item 31: List InChanger flag when doing restore.
+ Item 32: Ability to defer Batch Insert to a later time
+ Item 33: Add MaxVolumeSize/MaxVolumeBytes statement to Storage resource
+ Item 34: Enable persistent naming/number of SQL queries
+ Item 35: Port bat to Win32
+ Item 36: Bacula Dir, FD and SD to support proxies
+ Item 37: Add Minimum Spool Size directive
+ Item 38: Backup and Restore of Windows Encrypted Files using Win raw encryption
+ Item 39: Implement an interface between Bacula and Amazon's S3.
+ Item 40: Convert Bacula existing tray monitor on Windows to a stand alone program
 
 
 Item 1: Ability to restart failed jobs
   Date: 26 April 2009
@@ -70,27 +71,28 @@ Item 1: Ability to restart failed jobs
 Item 2: 'restore' menu: enter a JobId, automatically select dependents
  Origin: Graham Keeling (graham@equiinet.com)
  Date: 13 March 2009
- Status: Done in 3.0.2
 
-What: Add to the bconsole 'restore' menu the ability to select a job
-      by JobId, and have bacula automatically select all the dependent jobs.
+What: Add to the bconsole 'restore' menu the ability to select a job
+      by JobId, and have bacula automatically select all the
+      dependent jobs.
 
 Why: Currently, you either have to...
-     a) laboriously type in a date that is greater than the date of the backup that
-     you want and is less than the subsequent backup (bacula then figures out the
-     dependent jobs), or
-     b) manually figure out all the JobIds that you want and laboriously type them
-     all in.
-     It would be extremely useful (in a programmatical sense, as well as for humans)
-     to be able to just give it a single JobId and let bacula do the hard work (work
-     that it already knows how to do).
-
-     Notes (Kern): I think this should either be modified to have Bacula print
-     a list of dates that the user can choose from as is done in bwx-console and
-     bat or the name of this command must be carefully chosen so that the user
-     clearly understands that the JobId is being used to specify what Job and the
-     date to which he wishes the restore to happen.
+
+     a) laboriously type in a date that is greater than the date of the
+     backup that you want and is less than the subsequent backup (bacula
+     then figures out the dependent jobs), or
+     b) manually figure out all the JobIds that you want and laboriously
+     type them all in. It would be extremely useful (in a programmatic
+     sense, as well as for humans) to be able to just give it a single JobId
+     and let bacula do the hard work (work that it already knows how to do).
+
+     Notes (Kern): I think this should either be modified to have Bacula
+     print a list of dates that the user can choose from as is done in
+     bwx-console and bat, or the name of this command must be carefully
+     chosen so that the user clearly understands that the JobId is being
+     used to specify what Job and the date to which he wishes the restore to
+     happen.
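+
+     For illustration, the proposal would reduce the dialogue to something
+     like the sketch below (JobIds and prompt wording are invented; today
+     one would instead use the "Enter list of comma separated JobIds to
+     select" menu option and type all of the ids by hand):
+
+        *restore
+        ...
+        Enter JobId: 127
+        Selected JobId 127 and its dependencies: 112,119,127
+        Building directory tree for JobId(s) 112,119,127 ...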
 
 
 Item 3: Scheduling syntax that permits more flexibility and options
 
@@ -204,9 +206,14 @@ Item 4: Data encryption on storage daemon
   Date:  04 February 2009
   Status: new
 
-  What: The storage demon should be able to do the data encryption that can currently be done by the file daemon.
+  What: The storage daemon should be able to do the data encryption that can
+        currently be done by the file daemon.
 
-  Why: This would have 2 advantages: 1) one could encrypt the data of unencrypted tapes by doing a migration job, and 2) the storage daemon would be the only machine that would have to keep the encryption keys.
+  Why: This would have 2 advantages:
+       1) one could encrypt the data of unencrypted tapes by doing a
+          migration job
+       2) the storage daemon would be the only machine that would have
+          to keep the encryption keys.
 
   Notes from Landon:
    As an addendum to the feature request, here are some crypto
@@ -387,59 +394,58 @@ Why:  Makes backup of laptops much easier.
 
 Item 10: Restore from volumes on multiple storage daemons
  Origin: Graham Keeling (graham@equiinet.com)
  Date: 12 March 2009
-Status: Proposing
+Status: Done in 3.0.2
 
 What: The ability to restore from volumes held by multiple storage daemons
       would be very useful.
 
-Why: It is useful to be able to backup to any number of different storage
-     daemons. For example, your first storage daemon may run out of space, so you
-     switch to your second and carry on. Bacula will currently let you do this.
-     However, once you come to restore, bacula cannot cope when volumes on different
-     storage daemons are required.
+Why: It is useful to be able to back up to any number of different storage
+     daemons. For example, your first storage daemon may run out of space,
+     so you switch to your second and carry on. Bacula will currently let
+     you do this. However, once you come to restore, bacula cannot cope
+     when volumes on different storage daemons are required.
 
-     Notes: The director knows that more than one storage daemon is needed, as
-     bconsole outputs something like the following table.
+     Notes: The director knows that more than one storage daemon is needed,
+     as bconsole outputs something like the following table.
 
         The job will require the following
            Volume(s)                 Storage(s)                SD Device(s)
-   ===========================================================================
+   =====================================================================
 
-            backup-0001               Disk 1                    Disk 1.0
-            backup-0002               Disk 2                    Disk 2.0
-
-     However, the bootstrap file that it creates gets sent to the first storage
-     daemon only, which then stalls for a long time, 'waiting for a mount request'
-     for the volume that it doesn't have.
-     The bootstrap file contains no knowledge of the storage daemon.
-     Under the current design:
-
-     The director connects to the storage daemon, and gets an sd_auth_key.
-     The director then connects to the file daemon, and gives it the
-     sd_auth_key with the 'jobcmd'.
-     (restoring of files happens)
-     The director does a 'wait_for_storage_daemon_termination()'.
-     The director waits for the file daemon to indicate the end of the job.
+            backup-0001               Disk 1                    Disk 1.0
+            backup-0002               Disk 2                    Disk 2.0
+
+     However, the bootstrap file that it creates gets sent to the first
+     storage daemon only, which then stalls for a long time, 'waiting for a
+     mount request' for the volume that it doesn't have. The bootstrap file
+     contains no knowledge of the storage daemon. Under the current design:
+
+     The director connects to the storage daemon, and gets an sd_auth_key.
+     The director then connects to the file daemon, and gives it the
+     sd_auth_key with the 'jobcmd'.
+     (restoring of files happens)
+     The director does a 'wait_for_storage_daemon_termination()'.
+     The director waits for the file daemon to indicate the end of the job.
 
      With my idea:
 
      The director connects to the file daemon.
      Then, for each storage daemon in the .bsr file...
      {
-       The director connects to the storage daemon, and gets an sd_auth_key.
-       The director then connects to the file daemon, and gives it the
-       sd_auth_key with the 'storaddr' command.
-       (restoring of files happens)
-       The director does a 'wait_for_storage_daemon_termination()'.
-       The director waits for the file daemon to indicate the end of the
-       work on this storage.
+        The director connects to the storage daemon, and gets an sd_auth_key.
+        The director then connects to the file daemon, and gives it the
+        sd_auth_key with the 'storaddr' command.
+        (restoring of files happens)
+        The director does a 'wait_for_storage_daemon_termination()'.
+        The director waits for the file daemon to indicate the end of the
+        work on this storage.
      }
-      The director tells the file daemon that there are no more storages to contact.
-      The director waits for the file daemon to indicate the end of the job.
-      As you can see, each restore between the file daemon and storage daemon is
-      handled in the same way that it is currently handled, using the same method
-      for authentication, except that the sd_auth_key is moved from the 'jobcmd' to
-      the 'storaddr' command - where it logically belongs.
+      The director tells the file daemon that there are no more storages to
+      contact. The director waits for the file daemon to indicate the end of
+      the job. As you can see, each restore between the file daemon and
+      storage daemon is handled in the same way that it is currently handled,
+      using the same method for authentication, except that the sd_auth_key
+      is moved from the 'jobcmd' to the 'storaddr' command - where it
+      logically belongs.
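+
+      To make the loop concrete, a storage-aware bootstrap might look like
+      the sketch below. The Storage= keyword is hypothetical (as noted
+      above, today's .bsr carries no storage daemon information), and the
+      volume and index values are invented for illustration:
+
+         Volume="backup-0001"
+         MediaType="File"
+         Device="Disk 1.0"
+         Storage="Disk 1"
+         FileIndex=1-1000
+
+         Volume="backup-0002"
+         MediaType="File"
+         Device="Disk 2.0"
+         Storage="Disk 2"
+         FileIndex=1001-2000
+
+      The director would then run the bracketed sequence once per distinct
+      Storage= value in the file.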
 
 
 Item 11: Implement Storage daemon compression
 
@@ -632,8 +638,8 @@ Status:
       would expand to this {first|last} {Month|Week|Day|Mo-Fri} of the
       {Year|Month|Week} you would be able to run really flexible jobs.
 
-   To got a certain Job run on the last Friday of the Month for example one could
-   then write
+   To get a certain Job to run on the last Friday of the Month, for
+   example, one could then write
 
      Run = pool=Monthly last Fri of the Month at 23:50
 
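+   Further examples that the proposed grammar would permit (sketches only,
+   composed from the {first|last} {Month|Week|Day|Mo-Fri} of the
+   {Year|Month|Week} pattern above; none of these parse today):
+
+     Run = pool=Weekly last Mo-Fri of the Month at 23:50
+     Run = pool=Yearly first Week of the Year at 01:00
 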
@@ -672,7 +678,7 @@ Item 19: Include timestamp of job launch in "stat clients" output
 Item 20: Cause daemons to use a specific IP address to source communications
  Origin: Bill Moran
  Date: 18 Dec 2006
- Status: Done
+ Status: Done in 3.0.2
 
  What: Cause Bacula daemons (dir, fd, sd) to always use the ip address
        specified in the [DIR|DF|SD]Addr directive as the source IP
        for initiating communication.
@@ -717,7 +723,8 @@ Item 21: Message mailing based on backup types
        or Incremental/Differential Backups (which would likely be kept
        onsite).
 
- Notes: One way this could be done is through additional message types, for example:
+ Notes: One way this could be done is through additional message types, for
+        example:
 
    Messages {
      # email the boss only on full system backups
@@ -748,7 +755,7 @@ Item 22: Ability to import/export Bacula database entities
 Item 23: "Maximum Concurrent Jobs" for drives when used with changer device
  Origin: Ralf Gross ralf-lists ralfgross.de
  Date: 2008-12-12
- Status: Initial Request
+ Status: Done in 3.0.3
 
 What: respect the "Maximum Concurrent Jobs" directive in the _drives_
       Storage section in addition to the changer section
@@ -759,9 +766,9 @@ Item 23: "Maximum Concurrent Jobs" for drives when used with changer device
       per drive in this situation.
 
 Notes: Using different priorities for these jobs lead to problems that other
-      jobs are blocked. On the user list I got the advice to use the "Prefer Mounted
-      Volumes" directive, but Kern advised against using "Prefer Mounted
-      Volumes" in an other thread:
+      jobs are blocked. On the user list I got the advice to use the
+      "Prefer Mounted Volumes" directive, but Kern advised against using
+      "Prefer Mounted Volumes" in another thread:
 
       http://article.gmane.org/gmane.comp.sysutils.backup.bacula.devel/11876/
 
       In addition I'm not sure if this would be the same as respecting the
@@ -792,7 +799,8 @@ Item 23: "Maximum Concurrent Jobs" for drives when used with changer device
 
       [2 more drives]
 
-      The "Maximum Concurrent Jobs = 1" directive in the drive's section is ignored.
+      The "Maximum Concurrent Jobs = 1" directive in the drive's section is
+      ignored.
 
 
 Item 24: Implementation of running Job speed limit.
@@ -980,10 +988,10 @@ Item 30: Automatic disabling of devices
 Item 31: List InChanger flag when doing restore.
  Origin: Jesper Krogh
  Date: 17 Oct 2008
- Status:
+ Status: Done in 3.0.2
 
- What: When doing a restore the restore selection dialog ends by telling stuff
-       like this:
+ What: When doing a restore, the restore selection dialog ends by printing
+       output like this:
 
       The job will require the following
          Volume(s)                 Storage(s)                SD Device(s)
    ===========================================================================
@@ -1241,18 +1249,29 @@ Item 38: Backup and Restore of Windows Encrypted Files using Win raw encryption
    encrypted-file-related callback functions.
 
 
-Item 39: Implement an interface between Bacula and Amazon's S3.
+Item 39: Implement an interface between Bacula and storage clouds like Amazon's S3.
   Date: 25 August 2008
   Origin: Soren Hansen
   Status: Not started.
   What: Enable the storage daemon to store backup data on Amazon's S3 service.
 
-  Why: Amazon's S3 is a cheap way to store data off-site. Current
-       ways to integrate Bacula and S3 involve storing all the data
-       locally and syncing them to S3, and manually fetching them
-       again when they're needed. This is very cumbersome.
+  Why: Amazon's S3 is a cheap way to store data off-site.
+
+  Notes: If we configure the Pool to put only one job per volume (they don't
+       support an append operation), and the volume size isn't too big
+       (100MB?), it should be easy to adapt the disk-changer script to add a
+       get/put procedure with curl. So, the data would be safely copied
+       during the Job.
+
+       The cloud should only be used with Copy jobs; users should always
+       have a copy of their data on their own site.
+
+       We should also think about having our own cache, trying always to
+       keep the cloud volume on the local disk. (I don't know if users want
+       to store 100GB on the cloud, so it shouldn't be a disk size problem.)
+       For example, if Bacula wants to recycle a volume, it will start by
+       downloading the file only to truncate it a few seconds later; it
+       would be good if we could avoid that...
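+
+       A minimal sketch of such a get/put procedure for the disk-changer
+       script (shell), assuming S3 signature-v2 request signing; the
+       bucket, credentials and archive directory names are invented:
+
+         # Upload (put) or download (get) one volume file to/from S3.
+         # S3_KEY, S3_SECRET, S3_BUCKET and the paths are illustrative.
+         s3_xfer()   # usage: s3_xfer put|get <volume-name>
+         {
+            verb=PUT; [ "$1" = get ] && verb=GET
+            file="/var/bacula/archive/$2"
+            res="/${S3_BUCKET}/$2"
+            date=`date -R`
+            # Sign "VERB\n<md5>\n<type>\n<date>\n<resource>" with the secret
+            sig=`printf "${verb}\n\n\n${date}\n${res}" \
+                 | openssl sha1 -hmac "${S3_SECRET}" -binary | openssl base64`
+            if [ "$verb" = PUT ]; then
+               curl -T "$file" -H "Date: ${date}" \
+                    -H "Authorization: AWS ${S3_KEY}:${sig}" \
+                    "https://${S3_BUCKET}.s3.amazonaws.com/$2"
+            else
+               curl -o "$file" -H "Date: ${date}" \
+                    -H "Authorization: AWS ${S3_KEY}:${sig}" \
+                    "https://${S3_BUCKET}.s3.amazonaws.com/$2"
+            fi
+         }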
 
 Item 40: Convert Bacula existing tray monitor on Windows to a stand alone program
   Date: 26 April 2009
@@ -1279,18 +1298,19 @@ Item 1: Relabel disk volume after recycling
   Date: 07 May 2009.
   Status: Not implemented yet, no code written.
 
-  What: The ability to relabel the disk volume (and thus rename the file on the disk)
-        after it has been recycled. Useful when you have a single job per disk volume,
-        and you use a custom Label format, for example:
-        Label Format = "${Client}-${Level}-${NumVols:p/4/0/r}-${Year}_${Month}_${Day}-${Hour}_${Minute}"
-
-  Why: Disk volumes in Bacula get the label/filename when they are used for the first time.
-       If you use recycling and custom label format like above, the disk
-       volume name doesn't match the contents after it has been recycled.
-       This feature makes it possible to keep the label/filename in sync
-       with the content and thus makes it easy to check/monitor the backups
-       from the shell and/or normal file management tools, because the filenames
-       of the disk volumes match the content.
+  What: The ability to relabel the disk volume (and thus rename the file on
+        the disk) after it has been recycled. Useful when you have a single
+        job per disk volume, and you use a custom Label format, for example:
+        Label Format =
+        "${Client}-${Level}-${NumVols:p/4/0/r}-${Year}_${Month}_${Day}-${Hour}_${Minute}"
+
+  Why: Disk volumes in Bacula get the label/filename when they are used for
+       the first time. If you use recycling and a custom label format like
+       the one above, the disk volume name doesn't match the contents after
+       it has been recycled. This feature makes it possible to keep the
+       label/filename in sync with the content and thus makes it easy to
+       check/monitor the backups from the shell and/or normal file
+       management tools, because the filenames of the disk volumes match
+       the content.
 
   Notes: The configuration option could be "Relabel after Recycling = Yes".
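+
+  For example, the directive might sit in the Pool resource next to the
+  Label Format it keeps in sync (a sketch only: the directive is the
+  proposal above and is not implemented; the rest is ordinary Pool syntax):
+
+     Pool {
+       Name = FilePool          # illustrative name
+       Pool Type = Backup
+       Label Format = "${Client}-${Level}-${NumVols:p/4/0/r}-${Year}_${Month}_${Day}-${Hour}_${Minute}"
+       Relabel after Recycling = Yes   # proposed directive
+     }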