X-Git-Url: https://git.sur5r.net/?a=blobdiff_plain;f=bacula%2Fprojects;h=0f997e0bf600dc5c4f4ec019c50e4c2708253739;hb=cdf1f277887756e94a8c663791e90ca1b39faf2a;hp=3653905742db006563ec0853c4f8542b04d3f8fd;hpb=220906ea289c39fc82584629257c0a8794ce54af;p=bacula%2Fbacula diff --git a/bacula/projects b/bacula/projects index 3653905742..0f997e0bf6 100644 --- a/bacula/projects +++ b/bacula/projects @@ -1,322 +1,384 @@ Projects: Bacula Projects Roadmap - Status updated 18 August 2007 - After removing items completed in version - 2.2.0 and renumbering - -Items Completed: + Status updated 8 August 2010 Summary: -Item 1: Accurate restoration of renamed/deleted files -Item 2: Allow FD to initiate a backup -Item 3: Merge multiple backups (Synthetic Backup or Consolidation) -Item 4: Implement Catalog directive for Pool resource in Director -Item 5: Add an item to the restore option where you can select a Pool -Item 6: Deletion of disk Volumes when pruned -Item 7: Implement Base jobs -Item 8: Implement Copy pools -Item 9: Scheduling syntax that permits more flexibility and options -Item 10: Message mailing based on backup types -Item 11: Cause daemons to use a specific IP address to source communications -Item 12: Add Plug-ins to the FileSet Include statements. -Item 13: Restore only file attributes (permissions, ACL, owner, group...) -Item 14: Add an override in Schedule for Pools based on backup types -Item 15: Implement more Python events and functions -Item 16: Allow inclusion/exclusion of files in a fileset by creation/mod times -Item 17: Automatic promotion of backup levels based on backup size -Item 18: Better control over Job execution -Item 19: Automatic disabling of devices -Item 20: An option to operate on all pools with update vol parameters -Item 21: Include timestamp of job launch in "stat clients" output -Item 22: Implement Storage daemon compression -Item 23: Improve Bacula's tape and drive usage and cleaning management -Item 24: Multiple threads in file daemon for the same job -Item 25: Archival (removal) of User Files to Tape - - -Item 1: Accurate restoration of renamed/deleted files - Date: 28 November 2005 - Origin: Martin Simmons (martin at lispworks dot com) - Status: Robert Nelson will implement this - - What: When restoring a fileset for a specified date (including "most - recent"), Bacula should give you exactly the files and directories - that existed at the time of the last backup prior to that date. - - Currently this only works if the last backup was a Full backup. - When the last backup was Incremental/Differential, files and - directories that have been renamed or deleted since the last Full - backup are not currently restored correctly. Ditto for files with - extra/fewer hard links than at the time of the last Full backup. - - Why: Incremental/Differential would be much more useful if this worked. - - Notes: Merging of multiple backups into a single one seems to - rely on this working, otherwise the merged backups will not be - truly equivalent to a Full backup. - - Note: Kern: notes shortened. This can be done without the need for - inodes. It is essentially the same as the current Verify job, - but one additional database record must be written, which does - not need any database change. - - Notes: Kern: see if we can correct restoration of directories if - replace=ifnewer is set. Currently, if the directory does not - exist, a "dummy" directory is created, then when all the files - are updated, the dummy directory is newer so the real values - are not updated. 
-
-Item  2:  Allow FD to initiate a backup
-  Origin: Frank Volf (frank at deze dot org)
-  Date:   17 November 2005
-  Status:
+* => item complete
+
+Item  1: Ability to restart failed jobs
+Item  2: SD redesign
+Item* 3: NDMP backup/restore
+Item  4: SAP backup/restore
+Item  5: Oracle backup/restore
+Item  6: Zimbra and Zarafa backup/restore
+Item* 7: Include timestamp of job launch in "stat clients" output
+Item  8: Include all conf files in specified directory
+Item  9: Reduction of communications bandwidth for a backup
+Item 10: Concurrent spooling and despooling within a single job.
+Item 11: Start spooling even when waiting on tape
+Item*12: Add ability to Verify any specified Job.
+Item 13: Data encryption on storage daemon
+Item 14: Possibility to schedule Jobs on last Friday of the month
+Item 15: Scheduling syntax that permits more flexibility and options
+Item 16: Ability to defer Batch Insert to a later time
+Item 17: Add MaxVolumeSize/MaxVolumeBytes to Storage resource
+Item 18: Message mailing based on backup types
+Item 19: Handle Windows Encrypted Files using Win raw encryption
+Item 20: Job migration between different SDs
+Item 19: Allow FD to initiate a backup
+Item 21: Implement Storage daemon compression
+Item 22: Ability to import/export Bacula database entities
+Item*23: Implementation of running Job speed limit.
+Item 24: Add an override in Schedule for Pools based on backup types
+Item 25: Automatic promotion of backup levels based on backup size
+Item 26: Allow FileSet inclusion/exclusion by creation/mod times
+Item 27: Archival (removal) of User Files to Tape
+Item 28: Ability to reconnect a disconnected comm line
+Item 29: Multiple threads in file daemon for the same job
+Item 30: Automatic disabling of devices
+Item 31: Enable persistent naming/number of SQL queries
+Item 32: Bacula Dir, FD and SD to support proxies
+Item 33: Add Minimum Spool Size directive
+Item 34: Command that releases all drives in an autochanger
+Item 35: Run bscan on a remote storage daemon from within bconsole.
+Item 36: Implement a Migration job type that will create a reverse
+Item 37: Separate "Storage" and "Device" in the bacula-dir.conf
+Item 38: Least recently used device selection for tape drives in autochanger.
+Item 39: Implement a Storage device like Amazon's S3.
+Item*40: Convert tray monitor on Windows to a stand alone program
+Item 41: Improve Bacula's tape and drive usage and cleaning management
+Item 42: Relabel disk volume after recycling
+
+Item  1: Ability to restart failed jobs
+  Date:   26 April 2009
+  Origin: Kern/Eric
+  Status:
+
+  What:  Often jobs fail because of a communications line drop, the maximum
+         run time being exceeded, a cancel, or some other non-critical
+         problem.  Currently any data saved is lost.  This implementation
+         should modify the Storage daemon so that it saves all the files
+         that it knows are completely backed up to the Volume.
+
+         The jobs should then be marked as incomplete, and a subsequent
+         Incremental Accurate backup will then take into account all the
+         previously saved jobs.
+
+  Why:   Avoids backing up data that has already been saved.
+
+  Notes: Requires Accurate mode to restart correctly.  A minimum volume of
+         data or number of files must be stored on the Volume before this
+         is enabled.
+
+Item  2: SD redesign
+  Date:   8 August 2010
+  Origin: Kern
+  Status:
+
+  What:  Various ideas for redesigns planned for the SD:
+         1. One thread per drive.
+         2. Design a class structure for all objects in the SD.
+         3. Make Device into C++ classes for each device type.
+         4. Make Device have a proxy (a front-end intercept class) that
+            permits control over locking and over changing the real device
+            pointer.  It can also permit delaying the open, so that we can
+            adapt to having another program tell us the Archive device
+            name (a rough sketch is given in the Notes below).
+         5. Allow plugins to create new devices on the fly.
+         6. Separate SD volume manager.
+         7. Volume manager tells Bacula what drive or device to use for a
+            given volume.
+
+  Why:   It will simplify the SD, make it more modular, reduce locking
+         conflicts, and allow multiple buffer backups.
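+
+  Notes: A minimal C++ sketch of the proxy idea in point 4 might look like
+         the following.  All class and member names here are hypothetical
+         illustrations, not existing SD code:
+
+            #include <mutex>
+            #include <string>
+
+            // Hypothetical device interface -- a stand-in for the SD device class.
+            class Device {
+            public:
+               virtual ~Device() {}
+               virtual bool open(const std::string &archive_name) = 0;
+               virtual long write(const void *buf, long len) = 0;
+            };
+
+            // Proxy that fronts the real device: it serializes access and
+            // allows the underlying device pointer to be swapped, or the
+            // real open to be delayed until the Archive name is known.
+            class DeviceProxy : public Device {
+            public:
+               explicit DeviceProxy(Device *real) : m_real(real) {}
+               void set_real_device(Device *real) {
+                  std::lock_guard<std::mutex> guard(m_lock);
+                  m_real = real;                  // change the real device pointer
+               }
+               bool open(const std::string &archive_name) override {
+                  std::lock_guard<std::mutex> guard(m_lock);
+                  m_archive_name = archive_name;  // remember name for a delayed open
+                  return m_real ? m_real->open(archive_name) : true;
+               }
+               long write(const void *buf, long len) override {
+                  std::lock_guard<std::mutex> guard(m_lock);
+                  return m_real ? m_real->write(buf, len) : -1;
+               }
+            private:
+               std::mutex m_lock;
+               Device *m_real;
+               std::string m_archive_name;
+            };
+
+         Callers would hold only a DeviceProxy pointer, so the locking policy
+         and the choice of the real device could change without touching them.
+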
-  What:  Provide some means, possibly by a restricted console that
-         allows a FD to initiate a backup, and that uses the connection
-         established by the FD to the Director for the backup so that
-         a Director that is firewalled can do the backup.
-
-  Why:   Makes backup of laptops much easier.
-
+Item  3: NDMP backup/restore
+  Date:   8 August 2010
+  Origin: Bacula Systems
+  Status: Enterprise only if implemented by Bacula Systems
+
+  What:  Backup/restore via NDMP -- NetApp compatibility is the most
+         important target.
+
-Item  3:  Merge multiple backups (Synthetic Backup or Consolidation)
-  Origin: Marc Cousin and Eric Bollengier
-  Date:   15 November 2005
-  Status:
-  What:   A merged backup is a backup made without connecting to the Client.
-          It would be a Merge of existing backups into a single backup.
-          In effect, it is like a restore but to the backup medium.
-
-          For instance, say that last Sunday we made a full backup.  Then
-          all week long, we created incremental backups, in order to do
-          them fast.  Now comes Sunday again, and we need another full.
-          The merged backup makes it possible to do instead an incremental
-          backup (during the night for instance), and then create a merged
-          backup during the day, by using the full and incrementals from
-          the week.  The merged backup will be exactly like a full made
-          Sunday night on the tape, but the production interruption on the
-          Client will be minimal, as the Client will only have to send
-          incrementals.
-
-          In fact, if it's done correctly, you could merge all the
-          Incrementals into single Incremental, or all the Incrementals
-          and the last Differential into a new Differential, or the Full,
-          last differential and all the Incrementals into a new Full
-          backup.  And there is no need to involve the Client.
-
-  Why:    The benefit is that :
-          - the Client just does an incremental ;
-          - the merged backup on tape is just as a single full backup,
-            and can be restored very fast.
-
-          This is also a way of reducing the backup data since the old
-          data can then be pruned (or not) from the catalog, possibly
-          allowing older volumes to be recycled
-
-Item  4:  Implement Catalog directive for Pool resource in Director
-  Origin: Alan Davis adavis@ruckus.com
-  Date:   6 March 2007
-  Status: Submitted
-
-  What:   The current behavior is for the director to create all pools
-          found in the configuration file in all catalogs.  Add a
-          Catalog directive to the Pool resource to specify which
-          catalog to use for each pool definition.
-
-  Why:    This allows different catalogs to have different pool
-          attributes and eliminates the side-effect of adding
-          pools to catalogs that don't need/use them.
-
-  Notes:  Kern: I think this is relatively easy to do, and it is really
-          a pre-requisite to a number of the Copy pool, ... projects
-          that are listed here.
-
-Item 5: Add an item to the restore option where you can select a Pool - Origin: kshatriyak at gmail dot com - Date: 1/1/2006 - Status: +Item 4: SAP backup/restore + Date: 8 August 2010 + Origin: Bacula Systems + Status: Enterprise only if implemented by Bacula Systems - What: In the restore option (Select the most recent backup for a - client) it would be useful to add an option where you can limit - the selection to a certain pool. + What: Backup/restore SAP databases (MaxDB, Oracle, possibly DB2) - Why: When using cloned jobs, most of the time you have 2 pools - a - disk pool and a tape pool. People who have 2 pools would like to - select the most recent backup from disk, not from tape (tape - would be only needed in emergency). However, the most recent - backup (which may just differ a second from the disk backup) may - be on tape and would be selected. The problem becomes bigger if - you have a full and differential - the most "recent" full backup - may be on disk, while the most recent differential may be on tape - (though the differential on disk may differ even only a second or - so). Bacula will complain that the backups reside on different - media then. For now the only solution now when restoring things - when you have 2 pools is to manually search for the right - job-id's and enter them by hand, which is a bit fault tolerant. - Notes: Kern: This is a nice idea. It could also be the way to support - Jobs that have been Copied (similar to migration, but not yet - implemented). +Item 5: Oracle backup/restore + Date: 8 August 2010 + Origin: Bacula Systems + Status: Enterprise only if implemented by Bacula Systems + What: Backup/restore Oracle databases -Item 6: Deletion of disk Volumes when pruned - Date: Nov 25, 2005 - Origin: Ross Boylan (edited - by Kern) - Status: - What: Provide a way for Bacula to automatically remove Volumes - from the filesystem, or optionally to truncate them. - Obviously, the Volume must be pruned prior removal. +Item 6: Zimbra and Zarafa backup/restore + Date: 8 August 2010 + Origin: Bacula Systems + Status: Enterprise only if implemented by Bacula Systems - Why: This would allow users more control over their Volumes and - prevent disk based volumes from consuming too much space. + What: Backup/restore for Zimbra and Zarafa - Notes: The following two directives might do the trick: - Volume Data Retention =