-Item 1: Accurate restoration of renamed/deleted files
-Item 2: Allow FD to initiate a backup
-Item 3: Merge multiple backups (Synthetic Backup or Consolidation)
-Item 4: Implement Catalog directive for Pool resource in Director
-Item 5: Add an item to the restore option where you can select a Pool
-Item 6: Deletion of disk Volumes when pruned
-Item 7: Implement Base jobs
-Item 8: Implement Copy pools
-Item 9: Scheduling syntax that permits more flexibility and options
-Item 10: Message mailing based on backup types
-Item 11: Cause daemons to use a specific IP address to source communications
-Item 12: Add Plug-ins to the FileSet Include statements.
-Item 13: Restore only file attributes (permissions, ACL, owner, group...)
-Item 14: Add an override in Schedule for Pools based on backup types
-Item 15: Implement more Python events and functions
-Item 16: Allow inclusion/exclusion of files in a fileset by creation/mod times
-Item 17: Automatic promotion of backup levels based on backup size
-Item 18: Better control over Job execution
-Item 19: Automatic disabling of devices
-Item 20: An option to operate on all pools with update vol parameters
-Item 21: Include timestamp of job launch in "stat clients" output
-Item 22: Implement Storage daemon compression
-Item 23: Improve Bacula's tape and drive usage and cleaning management
-Item 24: Multiple threads in file daemon for the same job
-Item 25: Archival (removal) of User Files to Tape
-
-
-Item 1: Accurate restoration of renamed/deleted files
- Date: 28 November 2005
- Origin: Martin Simmons (martin at lispworks dot com)
- Status:
-
- What: When restoring a fileset for a specified date (including "most
- recent"), Bacula should give you exactly the files and directories
- that existed at the time of the last backup prior to that date.
-
- Currently this only works if the last backup was a Full backup.
- When the last backup was Incremental/Differential, files and
- directories that have been renamed or deleted since the last Full
- backup are not currently restored correctly. Ditto for files with
- extra/fewer hard links than at the time of the last Full backup.
-
- Why: Incremental/Differential would be much more useful if this worked.
-
- Notes: Merging of multiple backups into a single one seems to
- rely on this working, otherwise the merged backups will not be
- truly equivalent to a Full backup.
-
- Notes: Kern: notes shortened. This can be done without the need for
- inodes. It is essentially the same as the current Verify job,
- but one additional database record must be written, which does
- not need any database change.
-
- Notes: Kern: see if we can correct restoration of directories if
- replace=ifnewer is set. Currently, if the directory does not
- exist, a "dummy" directory is created, then when all the files
- are updated, the dummy directory is newer so the real values
- are not updated.
-
-Item 2: Allow FD to initiate a backup
- Origin: Frank Volf (frank at deze dot org)
- Date: 17 November 2005
- Status:
-
- What: Provide some means, possibly via a restricted console, that
- allows an FD to initiate a backup, and that uses the connection
- established by the FD to the Director for the backup so that
- a Director behind a firewall can still do the backup.
-
- Why: Makes backup of laptops much easier.
-
-
-Item 3: Merge multiple backups (Synthetic Backup or Consolidation)
- Origin: Marc Cousin and Eric Bollengier
- Date: 15 November 2005
- Status:
-
- What: A merged backup is a backup made without connecting to the Client.
- It would be a Merge of existing backups into a single backup.
- In effect, it is like a restore but to the backup medium.
-
- For instance, say that last Sunday we made a full backup. Then
- all week long, we created incremental backups, in order to do
- them fast. Now comes Sunday again, and we need another full.
- The merged backup makes it possible to do instead an incremental
- backup (during the night for instance), and then create a merged
- backup during the day, by using the full and incrementals from
- the week. The merged backup will be exactly like a full made
- Sunday night on the tape, but the production interruption on the
- Client will be minimal, as the Client will only have to send
- incrementals.
-
- In fact, if it's done correctly, you could merge all the
- Incrementals into a single Incremental, or all the Incrementals
- and the last Differential into a new Differential, or the Full,
- the last Differential, and all the Incrementals into a new Full
- backup. And there is no need to involve the Client.
-
- Why: The benefits are that:
- - the Client just does an incremental;
- - the merged backup on tape is just like a single full backup,
- and can be restored very fast.
-
- This is also a way of reducing the backup data, since the old
- data can then be pruned (or not) from the catalog, possibly
- allowing older volumes to be recycled.
-
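As a rough illustration of the merge described above, the following sketch picks, for each path, the newest backed-up version. It is only a toy model (the dict-based catalog and all names here are assumptions for illustration, not Bacula's schema, and it ignores the renamed/deleted-file problem noted under Item 1):

```python
# Illustrative sketch only: consolidate a Full plus Incrementals into a
# synthetic Full by letting the newest backed-up version of each path win.
# The dict-of-dicts "catalog" model is an assumption, not Bacula's schema.

def merge_backups(full, incrementals):
    """Return a synthetic Full: for each path, the newest version wins."""
    merged = dict(full)            # path -> (job_id, mtime)
    for inc in incrementals:       # apply in order, oldest first
        merged.update(inc)
    return merged

full = {"/etc/passwd": ("full-1", 100), "/home/a": ("full-1", 100)}
inc1 = {"/home/a": ("inc-1", 200)}
inc2 = {"/home/b": ("inc-2", 300)}

synthetic = merge_backups(full, [inc1, inc2])
```

Note that this naive merge keeps files that were deleted or renamed after the Full, which is why consolidation depends on accurate restoration (Item 1).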
-Item 4: Implement Catalog directive for Pool resource in Director
- Origin: Alan Davis adavis@ruckus.com
- Date: 6 March 2007
- Status: Submitted
-
- What: The current behavior is for the director to create all pools
- found in the configuration file in all catalogs. Add a
- Catalog directive to the Pool resource to specify which
- catalog to use for each pool definition.
-
- Why: This allows different catalogs to have different pool
- attributes and eliminates the side-effect of adding
- pools to catalogs that don't need/use them.
-
- Notes: Kern: I think this is relatively easy to do, and it is really
- a pre-requisite to a number of the Copy pool, ... projects
- that are listed here.
-
-Item 5: Add an item to the restore option where you can select a Pool
- Origin: kshatriyak at gmail dot com
- Date: 1 January 2006
- Status:
-
- What: In the restore option (Select the most recent backup for a
- client) it would be useful to add an option where you can limit
- the selection to a certain pool.
-
- Why: When using cloned jobs, most of the time you have 2 pools - a
- disk pool and a tape pool. People who have 2 pools would like to
- select the most recent backup from disk, not from tape (tape
- would only be needed in an emergency). However, the most recent
- backup (which may differ by just a second from the disk backup) may
- be on tape and would be selected. The problem becomes bigger if
- you have a full and a differential - the most "recent" full backup
- may be on disk, while the most recent differential may be on tape
- (even though the differential on disk may differ by only a second or
- so). Bacula will then complain that the backups reside on different
- media. For now, the only solution when restoring with 2 pools is to
- manually search for the right job-ids and enter them by hand, which
- is a bit error prone.
-
- Notes: Kern: This is a nice idea. It could also be the way to support
- Jobs that have been Copied (similar to migration, but not yet
- implemented).
-
-
-
-Item 6: Deletion of disk Volumes when pruned
- Date: 25 November 2005
- Origin: Ross Boylan <RossBoylan at stanfordalumni dot org> (edited
- by Kern)
- Status:
-
- What: Provide a way for Bacula to automatically remove Volumes
- from the filesystem, or optionally to truncate them.
- Obviously, the Volume must be pruned prior to removal.
-
- Why: This would allow users more control over their Volumes and
- prevent disk based volumes from consuming too much space.
-
- Notes: The following two directives might do the trick:
-
- Volume Data Retention = <time period>
- Remove Volume After = <time period>
-
- The migration project should also remove a Volume that is
- migrated. This might also work for tape Volumes.
-
-Item 7: Implement Base jobs
- Date: 28 October 2005
- Origin: Kern
- Status:
-
- What: A base job is sort of like a Full save except that you
- will want the FileSet to contain only files that are
- unlikely to change in the future (i.e. a snapshot of
- most of your system after installing it). After the
- base job has been run, when you are doing a Full save,
- you specify one or more Base jobs to be used. All
- files that have been backed up in the Base job/jobs but
- not modified will then be excluded from the backup.
- During a restore, the Base jobs will be automatically
- pulled in where necessary.
-
- Why: This is something none of the competition does, as far as
- we know (except perhaps BackupPC, which is a Perl program that
- saves to disk only). It is a big win for the user; it
- makes Bacula stand out as offering a unique
- optimization that immediately saves time and money.
- Basically, imagine that you have 100 nearly identical
- Windows or Linux machines containing the OS and user
- files. Now for the OS part, a Base job will be backed
- up once, and rather than making 100 copies of the OS,
- there will be only one. If one or more of the systems
- have some files updated, no problem, they will be
- automatically restored.
-
- Notes: Huge savings in tape usage even for a single machine.
- Will require more resources because the DIR must send
- FD a list of files/attribs, and the FD must search the
- list and compare it for each file to be saved.
-
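The exclusion step described above can be sketched as follows. This is only an illustration; the (path, mtime) comparison and all names are assumptions, not Bacula's actual attribute matching:

```python
# Illustrative sketch only: during a Full save that references a Base job,
# files already backed up in the Base job and unmodified since are excluded.
# Comparing path -> mtime maps is an assumption for illustration.

def select_for_full(client_files, base_job_files):
    """Keep only files whose attributes differ from the Base job."""
    return {path: mtime for path, mtime in client_files.items()
            if base_job_files.get(path) != mtime}

base   = {"/bin/ls": 100, "/etc/motd": 100}
client = {"/bin/ls": 100, "/etc/motd": 150, "/home/doc": 200}

to_save = select_for_full(client, base)  # /bin/ls is unchanged, so skipped
```

During a restore, the excluded files would be pulled from the Base job, which is the lookup the Director must also support.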
-
-Item 8: Implement Copy pools
- Date: 27 November 2005
- Origin: David Boyes (dboyes at sinenomine dot net)
- Status:
-
- What: I would like Bacula to have the capability to write copies
- of backed-up data on multiple physical volumes selected
- from different pools without transferring the data
- multiple times, and to accept any of the copy volumes
- as valid for restore.
-
- Why: In many cases, businesses are required to keep offsite
- copies of backup volumes, or just wish for simple
- protection against a human operator dropping a storage
- volume and damaging it. The ability to generate multiple
- volumes in the course of a single backup job allows
- customers to simply check out one copy and send it
- offsite, marking it as out of changer or otherwise
- unavailable. Currently, the library and magazine
- management capability in Bacula does not make this process
- simple.
-
- Restores would use the copy of the data on the first
- available volume, in order of Copy pool chain definition.
-
- This is also a major scalability issue -- as the number of
- clients increases beyond several thousand, and the volume
- of data increases, transferring the data multiple times to
- produce additional copies of the backups will become
- physically impossible due to transfer speed
- issues. Generating multiple copies at server side will
- become the only practical option.
-
- How: I suspect that this will require adding a multiplexing
- SD that appears to be an SD to a specific FD, but as 1-n FDs
- to the specific back-end SDs managing the primary and copy
- pools. Storage pools will also need to acquire parameters
- to define the pools to be used for copies.
-
- Notes: I would commit some of my developers' time if we can agree
- on the design and behavior.
-
- Notes: Additional notes from David:
- I think there are two areas where new configuration would be needed.
-
- 1) Identify an "SD mux" SD (specify it in the config just like a normal
- SD). The SD configuration would need something like a "Daemon Type =
- Normal/Mux" keyword to identify it as a multiplexor. (The director code
- would need modification to add the ability to do the multiple session
- setup, but the impact of the change would be new code that was invoked
- only when a SDmux is needed).
-
- 2) Additional keywords in the Pool definition to identify the need to
- create copies. Each pool would acquire a Copypool= attribute (which may
- be repeated to generate more than one copy; 3 is about the practical
- limit, but there is no point in hardcoding that).
-
- Example:
- Pool {
- Name = Primary
- Pool Type = Backup
- Copypool = Copy1
- Copypool = OffsiteCopy2
- }
-
- where Copy1 and OffsiteCopy2 are valid pools.
-
- In terms of function (shorthand):
- Backup job X is defined normally, specifying pool Primary as the pool to
- use. Job gets scheduled, and Bacula starts scheduling resources.
- Scheduler looks at the pool definition for Primary, sees that there is
- a non-zero number of Copypool keywords. The director then connects to an
- available SDmux, passes it the pool ids for Primary, Copy1, and
- OffsiteCopy2 and waits. SDmux then goes out and reserves devices and
- volumes in the normal SDs that serve Primary, Copy1 and OffsiteCopy2.
- When all are ready, the SDmux signals ready back to the director, and
- the FD is given the address of the SDmux as the SD to communicate with.
- Backup proceeds normally, with the SDmux duplicating blocks to each
- connected normal SD, and returning ready when all defined copies have
- been written. At EOJ, FD shuts down connection with SDmux, which closes
- down the normal SD connections and goes back to an idle state.
- SDmux does not update the database; the normal SDs do (noting that the
- file is present on each volume to which it has been written).
-
- On restore, the director looks for the volume containing the file in
- pool Primary first, then Copy1, then OffsiteCopy2. If the volume
- holding the file in pool Primary is missing or busy (being written by
- another job, etc.), or if one of the copypool volumes holding the file
- is already mounted and ready for some reason, that volume is used for
- the restore; otherwise one of the copypool volumes is mounted and the
- restore proceeds.
-
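The block-duplication core of the proposed SDmux might look roughly like this sketch. None of these names are real Bacula APIs; plain in-memory buffers stand in for the back-end SD connections:

```python
# Illustrative sketch only: an "SD mux" fans each data block out to every
# connected back-end SD and reports ready only when all copies are written.
import io

def duplicate_blocks(blocks, outputs):
    """Write every block to every output; return bytes written per copy."""
    written = 0
    for block in blocks:
        for out in outputs:   # one output per back-end SD (Primary, copies)
            out.write(block)
        written += len(block)
    return written

primary, copy1, offsite = io.BytesIO(), io.BytesIO(), io.BytesIO()
n = duplicate_blocks([b"block-1", b"block-2"], [primary, copy1, offsite])
```

A real implementation would also have to handle one back-end stalling or failing mid-job, which this sketch ignores.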
-
-Item 9: Scheduling syntax that permits more flexibility and options
- Date: 15 December 2006
+* => item complete
+
+Item 1: Ability to restart failed jobs
+Item 2: Scheduling syntax that permits more flexibility and options
+Item 3: Data encryption on storage daemon
+Item 4: Add ability to Verify any specified Job.
+Item 5: Improve Bacula's tape and drive usage and cleaning management
+Item 6: Allow FD to initiate a backup
+Item 7: Implement Storage daemon compression
+Item 8: Reduction of communications bandwidth for a backup
+Item 9: Ability to reconnect a disconnected comm line
+Item 10: Start spooling even when waiting on tape
+Item 11: Include all conf files in specified directory
+Item 12: Multiple threads in file daemon for the same job
+Item 13: Possibility to schedule Jobs on last Friday of the month
+Item 14: Include timestamp of job launch in "stat clients" output
+Item 15: Message mailing based on backup types
+Item 16: Ability to import/export Bacula database entities
+Item 17: Implementation of running Job speed limit.
+Item 18: Add an override in Schedule for Pools based on backup types
+Item 19: Automatic promotion of backup levels based on backup size
+Item 20: Allow FileSet inclusion/exclusion by creation/mod times
+Item 21: Archival (removal) of User Files to Tape
+Item 22: An option to operate on all pools with update vol parameters
+Item 23: Automatic disabling of devices
+Item 24: Ability to defer Batch Insert to a later time
+Item 25: Add MaxVolumeSize/MaxVolumeBytes to Storage resource
+Item 26: Enable persistent naming/number of SQL queries
+Item 27: Bacula Dir, FD and SD to support proxies
+Item 28: Add Minimum Spool Size directive
+Item 29: Handle Windows Encrypted Files using Win raw encryption
+Item 30: Implement a Storage device like Amazon's S3.
+Item 31: Convert tray monitor on Windows to a stand alone program
+Item 32: Relabel disk volume after recycling
+Item 33: Command that releases all drives in an autochanger
+Item 34: Run bscan on a remote storage daemon from within bconsole.
+Item 35: Implement a Migration job type that will create a reverse
+Item 36: Job migration between different SDs
+Item 37: Concurrent spooling and despooling within a single job.
+Item 39: Extend the verify code to make it possible to verify
+Item 40: Separate "Storage" and "Device" in the bacula-dir.conf
+Item 41: Least recently used device selection for tape drives in autochanger.
+
+
+Item 1: Ability to restart failed jobs
+ Date: 26 April 2009
+ Origin: Kern/Eric
+ Status:
+
+ What: Often jobs fail because of a communications line drop, max run time,
+ cancel, or some other non-critical problem. Currently any data
+ saved is lost. This implementation should modify the Storage daemon
+ so that it saves all the files that it knows are completely backed
+ up to the Volume.
+
+ The jobs should then be marked as incomplete, and a subsequent
+ Incremental Accurate backup will then take into account all the
+ previously saved files.
+
+ Why: Avoids re-backing up data that has already been saved.
+
+ Notes: Requires Accurate mode to restart correctly. The incomplete job
+ must have stored a minimum amount of data or number of files on the
+ Volume before restart is enabled.
+
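The skip logic for the subsequent Accurate Incremental might look roughly like this sketch. The set-based bookkeeping and all names here are assumptions for illustration, not the actual catalog representation:

```python
# Illustrative sketch only: once the SD has recorded which files of a
# failed (incomplete) job were completely written to the Volume, the
# next Accurate Incremental can skip exactly those files.

def files_to_backup(all_files, saved_in_incomplete_job):
    """Select only the files not already safely on the Volume."""
    return [f for f in all_files if f not in saved_in_incomplete_job]

client_files = ["/a", "/b", "/c", "/d"]
already_saved = {"/a", "/c"}   # completely written before the job failed

todo = files_to_backup(client_files, already_saved)
```

In practice the "already saved" set would come from the catalog records the Storage daemon writes for the incomplete job, and partially written files must not be included in it.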
+
+Item 2: Scheduling syntax that permits more flexibility and options
+ Date: 15 December 2006