X-Git-Url: https://git.sur5r.net/?a=blobdiff_plain;ds=sidebyside;f=bacula%2Fprojects;h=37cc6dcd883fde7684c8a3f0edbf5dfcdc6f84b9;hb=eadfe3c5f02b3140a18864e828f3609ac67a9584;hp=8704fca47120eed53ab29c6a3a9c1b467dd4d597;hpb=3196de4605769d5f49af9218968fd5a8cbb4aaae;p=bacula%2Fbacula

diff --git a/bacula/projects b/bacula/projects
index 8704fca471..37cc6dcd88 100644
--- a/bacula/projects
+++ b/bacula/projects
@@ -1,105 +1,386 @@
 Projects:
                     Bacula Projects Roadmap
-                    Status updated 04 February 2009
+                    Status updated 8 August 2010
 
 Summary:
-Item  2: Allow FD to initiate a backup
-Item  6: Deletion of disk Volumes when pruned
-Item  7: Implement Base jobs
-Item  9: Scheduling syntax that permits more flexibility and options
-Item 10: Message mailing based on backup types
-Item 11: Cause daemons to use a specific IP address to source communications
-Item 14: Add an override in Schedule for Pools based on backup types
-Item 15: Implement more Python events and functions --- Abandoned for plugins
-Item 16: Allow inclusion/exclusion of files in a fileset by creation/mod times
-Item 17: Automatic promotion of backup levels based on backup size
-Item 19: Automatic disabling of devices
-Item 20: An option to operate on all pools with update vol parameters
-Item 21: Include timestamp of job launch in "stat clients" output
-Item 22: Implement Storage daemon compression
-Item 23: Improve Bacula's tape and drive usage and cleaning management
-Item 24: Multiple threads in file daemon for the same job
-Item 25: Archival (removal) of User Files to Tape
-
-
-Item  2: Allow FD to initiate a backup
-  Origin: Frank Volf (frank at deze dot org)
-  Date:   17 November 2005
+* => item complete
+
+Item  1: Ability to restart failed jobs
+Item  2: SD redesign
+Item  3: NDMP backup/restore
+Item  4: SAP backup/restore
+Item  5: Oracle backup/restore
+Item  6: Zimbra and Zarafa backup/restore
+Item* 7: Include timestamp of job launch in "stat clients" output
+Item  8: Include all conf files in specified directory
+Item  9: Reduction of communications bandwidth for a backup
+Item 10: Concurrent spooling and despooling within a single job
+Item 11: Start spooling even when waiting on tape
+Item 12: Add ability to Verify any specified Job
+Item 13: Data encryption on storage daemon
+Item 14: Possibility to schedule Jobs on last Friday of the month
+Item 15: Scheduling syntax that permits more flexibility and options
+Item 16: Ability to defer Batch Insert to a later time
+Item 17: Add MaxVolumeSize/MaxVolumeBytes to Storage resource
+Item 18: Message mailing based on backup types
+Item 19: Handle Windows Encrypted Files using Win raw encryption
+Item 20: Job migration between different SDs
+Item 19: Allow FD to initiate a backup
+Item 21: Implement Storage daemon compression
+Item 22: Ability to import/export Bacula database entities
+Item 23: Implementation of a running Job speed limit
+Item 24: Add an override in Schedule for Pools based on backup types
+Item 25: Automatic promotion of backup levels based on backup size
+Item 26: Allow FileSet inclusion/exclusion by creation/mod times
+Item 27: Archival (removal) of User Files to Tape
+Item 28: Ability to reconnect a disconnected comm line
+Item 29: Multiple threads in file daemon for the same job
+Item 30: Automatic disabling of devices
+Item 31: Enable persistent naming/number of SQL queries
+Item 32: Bacula Dir, FD and SD to support proxies
+Item 33: Add Minimum Spool Size directive
+Item 34: Command that releases all drives in an autochanger
+Item 35: Run bscan on a remote storage daemon from within bconsole
+Item 36: Implement a Migration job type that will create a reverse
+Item 37: Extend the verify code to make it possible to verify
+Item 38: Separate "Storage" and "Device" in the bacula-dir.conf
+Item 39: Least recently used device selection for tape drives in autochanger
+Item 40: Implement a Storage device like Amazon's S3
+Item 41: Convert tray monitor on Windows to a stand alone program
+Item 42: Improve Bacula's tape and drive usage and cleaning management
+Item 43: Relabel disk volume after recycling
+
+
+Item  1: Ability to restart failed jobs
+  Date:  26 April 2009
+  Origin: Kern/Eric
+  Status:
+
+  What:  Often jobs fail because of a communications line drop, a max run
+         time, a cancel, or some other non-critical problem.  Currently any
+         data saved is lost.  This implementation should modify the Storage
+         daemon so that it saves all the files that it knows are completely
+         backed up to the Volume.
+
+         The jobs should then be marked as incomplete, and a subsequent
+         Incremental Accurate backup will take into account all the
+         previously saved jobs.
+
+  Why:   Avoids backing up data that was already saved.
+
+  Notes: Requires Accurate mode to restart correctly.  A minimum volume of
+         data or number of files must have been stored on the Volume before
+         restarting is enabled.
+
+Item  2: SD redesign
+  Date:  8 August 2010
+  Origin: Kern
+  Status:
+
+  What:  Various ideas for redesigns planned for the SD:
+         1. One thread per drive
+         2. Design a class structure for all objects in the SD
+         3. Make Device into C++ classes for each device type
+         4. Make Device have a proxy (a front-end intercept class) that will
+            permit control over locking and over changing the real device
+            pointer.  It can also permit delaying opening, so that we can
+            adapt to having another program tell us the Archive device name.
+         5. Allow plugins to create new devices on the fly
+         6. Separate SD volume manager
+         7. Volume manager tells Bacula what drive or device to use for a
+            given volume
+
+  Why:   It will simplify the SD, make it more modular, reduce locking
+         conflicts, and allow multiple buffer backups.
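+
+  Notes: A minimal C++ sketch of the proxy idea in point 4.  All names here
+         are hypothetical illustrations, not the actual SD classes; it only
+         shows how a front-end intercept class could centralize locking,
+         delay the open, and allow swapping the real device pointer:
+
+         #include <mutex>
+         #include <sys/types.h>
+
+         class Device {                        // would be subclassed per device type
+         public:
+            virtual ~Device() {}
+            virtual bool open(const char *archive_name) = 0;
+            virtual ssize_t write(const void *buf, size_t len) = 0;
+         };
+
+         class DeviceProxy {
+         public:
+            explicit DeviceProxy(Device *real) : m_real(real), m_opened(false) {}
+
+            void set_real_device(Device *real) {   // e.g. called by the volume manager
+               std::lock_guard<std::mutex> g(m_lock);
+               m_real = real;
+               m_opened = false;
+            }
+
+            ssize_t write(const char *archive_name, const void *buf, size_t len) {
+               std::lock_guard<std::mutex> g(m_lock);
+               if (!m_opened) {                    // open is delayed until first use,
+                  if (!m_real->open(archive_name)) // when the name is finally known
+                     return -1;
+                  m_opened = true;
+               }
+               return m_real->write(buf, len);
+            }
+
+         private:
+            std::mutex m_lock;    // all access to the real device goes through here
+            Device *m_real;
+            bool m_opened;
+         };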
+
+
+Item  3: NDMP backup/restore
+  Date:  8 August 2010
+  Origin: Bacula Systems
+  Status: Enterprise only if implemented by Bacula Systems
+
+  What:  Backup/restore via NDMP; the most important target is NetApp
+         compatibility.
+
+
+Item  4: SAP backup/restore
+  Date:  8 August 2010
+  Origin: Bacula Systems
+  Status: Enterprise only if implemented by Bacula Systems
+
+  What:  Backup/restore of SAP databases (MaxDB, Oracle, possibly DB2).
+
+
+Item  5: Oracle backup/restore
+  Date:  8 August 2010
+  Origin: Bacula Systems
+  Status: Enterprise only if implemented by Bacula Systems
+
+  What:  Backup/restore of Oracle databases.
+
+
+Item  6: Zimbra and Zarafa backup/restore
+  Date:  8 August 2010
+  Origin: Bacula Systems
+  Status: Enterprise only if implemented by Bacula Systems
+
+  What:  Backup/restore for Zimbra and Zarafa.
+
+
+Item* 7: Include timestamp of job launch in "stat clients" output
+  Origin: Mark Bergman
+  Date:  Tue Aug 22 17:13:39 EDT 2006
+  Status: Done
+
+  What:  The "stat clients" command doesn't include any detail on when the
+         active backup jobs were launched.
+
+  Why:   Including the timestamp would make it much easier to decide whether
+         a job is running properly.
+
+  Notes: It may be helpful to have the output from "stat clients" formatted
+         more like that from "stat dir" (and other commands), in a column
+         format.  The per-client information that's currently shown (level,
+         client name, JobId, Volume, pool, device, Files, etc.) is good, but
+         somewhat hard to parse (both programmatically and visually),
+         particularly when there are many active clients.
+
+
+Item  8: Include all conf files in specified directory
+Date: 18 October 2008
+Origin: Database, Lda., Maputo, Mozambique
+Contact: Cameron Smith / cameron.ord@database.co.mz
+Status: New request
+
+What:  A directive something like "IncludeConf = /etc/bacula/subconfs".
+       Every time the Bacula Director restarts or reloads, it will walk the
+       given directory (non-recursively) and include the contents of any
+       files therein, as though they were appended to bacula-dir.conf.
+
+Why:   Permits simplified and safer configuration for larger installations
+       with many client PCs.  Currently, through judicious use of JobDefs
+       and similar directives, it is possible to reduce the client-specific
+       part of a configuration to a minimum.  The client-specific directives
+       can be prepared according to a standard template and dropped into a
+       known directory.  However, it is still necessary to add a line to the
+       "master" (bacula-dir.conf) referencing each new file.  This exposes
+       the master to unnecessary risk of accidental mistakes and makes
+       automation of adding new client confs more difficult (it is easier to
+       automate dropping a file into a directory than rewriting an existing
+       file).  Kern has previously made a convincing argument for NOT
+       including Bacula's core configuration in an RDBMS, but I believe that
+       the present request is a reasonable extension to the current
+       flat-file-based configuration philosophy.
+
+Notes: There is NO need for any special syntax in these files.  They should
+       contain standard directives which are simply "inlined" into the
+       parent file, as already happens when you explicitly reference an
+       external file.
+
+Notes: (kes) this can already be done with scripting
+       From: John Jorgensen
+       The bacula-dir.conf at our site contains these lines:
+
+       #
+       # Include subfiles associated with configuration of clients.
+       # They define the bulk of the Clients, Jobs, and FileSets.
+ # + @|"sh -c 'for f in /etc/bacula/clientdefs/*.conf ; do echo @${f} ; done'" + + and when we get a new client, we just put its configuration into + a new file called something like: + + /etc/bacula/clientdefs/clientname.conf + + + + +Item 9: Reduction of communications bandwidth for a backup + Date: 14 October 2008 + Origin: Robin O'Leary (Equiinet) + Status: + + What: Using rdiff techniques, Bacula could significantly reduce + the network data transfer volume to do a backup. + + Why: Faster backup across the Internet + + Notes: This requires retaining certain data on the client during a Full + backup that will speed up subsequent backups. + + +Item 10: Concurrent spooling and despooling within a single job. +Date: 17 nov 2009 +Origin: Jesper Krogh +Status: NEW +What: When a job has spooling enabled and the spool area size is + less than the total volumes size the storage daemon will: + 1) Spool to spool-area + 2) Despool to tape + 3) Go to 1 if more data to be backed up. + + Typical disks will serve data with a speed of 100MB/s when + dealing with large files, network it typical capable of doing 115MB/s + (GbitE). Tape drives will despool with 50-90MB/s (LTO3) 70-120MB/s + (LTO4) depending on compression and data. + + As bacula currently works it'll hold back data from the client until + de-spooling is done, now matter if the spool area can handle another + block of data. Say given a FileSet of 4TB and a spool-area of 100GB and + a Maximum Job Spool Size set to 50GB then above sequence could be + changed to allow to spool to the other 50GB while despooling the first + 50GB and not holding back the client while doing it. As above numbers + show, depending on tape-drive and disk-arrays this potentially leads to + a cut of the backup-time of 50% for the individual jobs. + + Real-world example, backing up 112.6GB (large files) to LTO4 tapes + (despools with ~75MB/s, data is gzipped on the remote filesystem. + Maximum Job Spool Size = 8GB + + Current: + Size: 112.6GB + Elapsed time (total time): 46m 15s => 2775s + Despooling time: 25m 41s => 1541s (55%) + Spooling time: 20m 34s => 1234s (45%) + Reported speed: 40.58MB/s + Spooling speed: 112.6GB/1234s => 91.25MB/s + Despooling speed: 112.6GB/1541s => 73.07MB/s + + So disk + net can "keep up" with the LTO4 drive (in this test) + + Prosed change would effectively make the backup run in the "despooling + time" 1541s giving a reduction to 55% of the total run time. + + In the situation where the individual job cannot keep up with LTO-drive + spooling enables efficient multiplexing of multiple concurrent jobs onto + the same drive. + +Why: When dealing with larger volumes the general utillization of the + network/disk is important to maximize in order to be able to run a full + backup over a weekend. Current work-around is to split the FileSet in + smaller FileSet and Jobs but that leads to more configuration mangement + and is harder to review for completeness. Subsequently it makes restores + more complex. + + + +Item 11: Start spooling even when waiting on tape + Origin: Tobias Barth + Date: 25 April 2008 Status: - What: Provide some means, possibly by a restricted console that - allows a FD to initiate a backup, and that uses the connection - established by the FD to the Director for the backup so that - a Director that is firewalled can do the backup. + What: If a job can be spooled to disk before writing it to tape, it should + be spooled immediately. Currently, bacula waits until the correct + tape is inserted into the drive. 
+
+  Why:   It could save hours.  While Bacula waits on the operator to insert
+         the correct tape (e.g. a new tape or a tape from another media
+         pool), it could already prepare the spooled data in the spooling
+         directory and start despooling immediately once the tape has been
+         inserted.
+
+         A second step: use two or more spooling directories.  While one
+         directory is despooling, the next (on different disk drives) could
+         already be spooling the next data.
+
+  Notes: I am using Bacula 2.2.8, which has none of these features
+         implemented.
+
+
+Item 12: Add ability to Verify any specified Job
+Date: 17 January 2008
+Origin: portrix.net, Hamburg, Germany
+Contact: Christian Sabelmann
+Status: 70% of the required code has been part of the Verify function since v2.x
 
-   Why: Makes backup of laptops much easier.
 
+What:  The ability to tell Bacula which Job to verify, instead of it
+       automatically verifying just the last one.
+
+Why:   It is a pity that such a powerful feature as Verify Jobs
+       (VolumeToCatalog) is restricted to the last backup Job of a client.
+       Users who do daily backups are currently forced to also run daily
+       Verify Jobs in order to take advantage of this useful feature.  This
+       daily verify-after-backup practice is not always desired, and Verify
+       Jobs sometimes have to be scheduled separately (not necessarily
+       within Bacula).  With this feature, admins could verify Jobs once a
+       week or even less often, selecting the Jobs they want to verify.
+       This feature should also not be too difficult to implement, taking
+       into account older bug reports about it and the selection of the Job
+       to be verified.
+
+Notes: For the verify Job, the user could select the Job to be verified from
+       a list of the latest Jobs of a client.  It would also be possible to
+       verify a certain Volume.  All of this would naturally apply only to
+       Jobs whose file information is still in the catalog.
+
+
+Item 13: Data encryption on storage daemon
+  Origin: Tobias Barth
+  Date:  04 February 2009
+  Status: new
 
-Item  6: Deletion of disk Volumes when pruned
-  Date:  Nov 25, 2005
-  Origin: Ross Boylan (edited
-          by Kern)
-  Status:
 
+  What:  The storage daemon should be able to do the data encryption that
+         can currently be done by the file daemon.
 
-  What:  Provide a way for Bacula to automatically remove Volumes
-         from the filesystem, or optionally to truncate them.
-         Obviously, the Volume must be pruned prior removal.
 
+  Why:   This would have two advantages:
+         1) one could encrypt the data of unencrypted tapes by doing a
+            migration job
+         2) the storage daemon would be the only machine that would have
+            to keep the encryption keys
 
-  Why:   This would allow users more control over their Volumes and
-         prevent disk based volumes from consuming too much space.
 
+  Notes from Landon:
+         As an addendum to the feature request, here are some crypto
+         implementation details I wrote up regarding SD-encryption back in
+         Jan 2008:
+         http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg28860.html
+         (An illustrative sketch follows at the end of this section.)
 
-  Notes: The following two directives might do the trick:
-     Volume Data Retention =
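+
+  Notes: (for Item 13 above) A minimal sketch of the kind of block-level
+         encryption the SD could apply, e.g. during a migration job.  It
+         assumes OpenSSL's EVP interface and hypothetical key handling; it
+         is an illustration, not Bacula's actual crypto layer:
+
+         #include <openssl/evp.h>
+
+         /* Encrypt one block of job data before it is written to the
+          * volume.  The out buffer must have room for in_len plus one
+          * cipher block (16 bytes for AES).  Returns the number of
+          * output bytes, or -1 on error. */
+         int sd_encrypt_block(const unsigned char *key, const unsigned char *iv,
+                              const unsigned char *in, int in_len,
+                              unsigned char *out)
+         {
+            EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
+            int len = 0, out_len = 0;
+            if (!ctx) return -1;
+            /* AES-256-CBC: key is 32 bytes, iv is 16 bytes */
+            if (EVP_EncryptInit_ex(ctx, EVP_aes_256_cbc(), NULL, key, iv) != 1 ||
+                EVP_EncryptUpdate(ctx, out, &len, in, in_len) != 1) {
+               EVP_CIPHER_CTX_free(ctx);
+               return -1;
+            }
+            out_len = len;
+            if (EVP_EncryptFinal_ex(ctx, out + len, &len) != 1) {
+               EVP_CIPHER_CTX_free(ctx);
+               return -1;
+            }
+            out_len += len;
+            EVP_CIPHER_CTX_free(ctx);
+            return out_len;
+         }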