%%
\section*{The Bacula Console Restore Command}
-\label{_ChapterStart13}
+\label{RestoreChapter}
\index[general]{Command!Bacula Console Restore }
\index[general]{Bacula Console Restore Command }
\addcontentsline{toc}{section}{Bacula Console Restore Command}
restored. This mode is somewhat similar to the standard Unix {\bf restore}
program's interactive file selection mode.
-If your Files have been pruned, the {\bf restore} command will be unable
-to find any files to restore. See below for more details on this.
+If a Job's file records have been pruned from the catalog, the {\bf
+restore} command will be unable to find any files to restore. See below
+for more details on this.
Within the Console program, after entering the {\bf restore} command, you are
presented with the following selection prompt:
\end{verbatim}
\normalsize
+There are a lot of options, and as a point of reference, most people will
+want to select item 5 (the most recent backup for a client). The details
+of the above options are:
+
\begin{itemize}
\item Item 1 will list the last 20 jobs run. If you find the Job you want,
you can then select item 3 and enter its JobId(s).
\item Item 3 allows you to enter a list of comma separated JobIds whose
files will be put into the directory tree. You may then select which
- files from those JobIds to restore.
-
-\item Item 4 allows you to enter any arbitrary SQL command. This is probably
- the most primitive way of finding the desired JobIds, but at the same time,
- the most flexible. Once you have found the JobId(s), you can select item 3
- and enter them.
-
-\item Item 5 will automatically select the most recent Full backup and all
+ files from those JobIds to restore (see the example after this list).
+ Normally, you would use this option if you have a particular version of
+ a file that you want to restore and you know its JobId. The most common
+ options (5 and 6) will not select a job that did not terminate normally,
+ so if you know a file was backed up by a Job that failed (possibly
+ because of a system crash), you can access it through this option by
+ specifying the JobId.
+
+\item Item 4 allows you to enter any arbitrary SQL command. This is
+ probably the most primitive way of finding the desired JobIds, but at
+ the same time, the most flexible. Once you have found the JobId(s), you
+ can select item 3 and enter them.
+
+\item Item 5 will automatically select the most recent Full backup and all
subsequent incremental and differential backups for a specified Client.
These are the Jobs and Files which, if reloaded, will restore your
system to the most current saved state. It automatically enters the
- JobIds found into the directory tree. This is probably the most
- convenient of all the above options to use if you wish to restore a
- selected Client to its most recent state.
+ JobIds found into the directory tree in an optimal way such that only
+ the most recent copy of any particular file found in the set of Jobs
+ will be restored. This is probably the most convenient of all the above
+ options to use if you wish to restore a selected Client to its most
+ recent state.
There are two important things to note. First, this automatic selection
will never select a job that failed (terminated with an error status).
Second, if the File records for the selected Jobs have already been
pruned from the catalog, leaving nothing to put into the tree, Bacula
will then propose doing a full restore (non-selective) of those JobIds.
This is possible because Bacula still knows where the beginning of the
Job data is on the Volumes, even if it does not know where particular
- files are located.
+ files are located or what their names are.
\item Item 6 allows you to specify a date and time, after which Bacula will
automatically select the most recent Full backup and all subsequent
incremental and differential backups that started before the specified date
- and time.
+ and time.
\item Item 7 allows you to specify one or more filenames (complete path
required) to be restored. Each filename is entered one at a time, or if you
prefix a filename with the less-than symbol (\lt{}), Bacula will read that
- file and assume it is a list of filenames to be restored. The filename entry
- mode is terminated by entering a blank line.
+ file and assume it is a list of filenames to be restored. If you
+ prefix the filename with a question mark (?), then the filename will
+ be interpreted as an SQL table name, and Bacula will include the rows
+ of that table in the list to be restored. The table must contain the
+ JobId in the first column and the FileIndex in the second column.
+ This table feature is intended for external programs that want to build
+ their own list of files to be restored.
+ The filename entry mode is terminated by entering a blank line.
\item Item 8 allows you to specify a date and time before entering the
filenames. See Item 7 above for more details.
\end{itemize}
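+
+As a brief example of item 3, after selecting it you are prompted for
+the JobIds to load into the tree (the JobIds and output below are
+invented for illustration; your output will differ):
+
+\footnotesize
+\begin{verbatim}
+Enter JobId(s), comma separated, to restore: 1234,1235
+You have selected the following JobIds: 1234,1235
+
+Building directory tree for JobId 1234 ...
+Building directory tree for JobId 1235 ...
+\end{verbatim}
+\normalsize
+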
As an example, suppose that we select item 5 (restore to most recent state).
-It will then ask for the desired Client, which on my system, will print all
+If you have not specified a client=xxx on the command line, it
+will then ask for the desired Client, which on my system, will print all
the Clients found in the database as follows:
\footnotesize
8: RufusVerify
9: Watchdog
Select Client (File daemon) resource (1-9):
-
\end{verbatim}
\normalsize
-You will probably have far fewer Clients than this example, and if you have
-only one Client, it will be automatically selected. In this case, I enter
+You will probably have far fewer Clients than this example, and if you have
+only one Client, it will be automatically selected. In this case, I enter
{\bf Rufus} to select the Client. Then Bacula needs to know what FileSet is
-to be restored, so it prompts with:
+to be restored, so it prompts with:
\footnotesize
\begin{verbatim}
1: Full Set
2: Kerns Files
Select FileSet resource (1-2):
-
\end{verbatim}
\normalsize
-I choose item 1, which is my full backup. Normally, you will only have a
-single FileSet for each Job, and if your machines are similar (all Linux) you
-may only have one FileSet for all your Clients.
+If you have only one FileSet defined for the Client, it will be selected
+automatically. I choose item 1, which is my full backup. Normally, you
+will only have a single FileSet for each Job, and if your machines are
+similar (all Linux) you may only have one FileSet for all your Clients.
At this point, {\bf Bacula} has all the information it needs to find the most
recent set of backups. It will then query the database, which may take a bit
move around the directory tree and to select files.
If you want all the files to automatically be marked when the directory
-tree is built, enter the command {\bf restore all}.
+tree is built, you could have entered the command {\bf restore all}, or
+at the \$ prompt, you can simply enter {\bf mark *}.
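+
+For example, a minimal session at the \$ prompt that marks everything
+under /etc and then ends file selection might look like the following
+(the file count shown is, of course, invented):
+
+\footnotesize
+\begin{verbatim}
+cwd is: /
+$ cd /etc
+cwd is: /etc/
+$ mark *
+199 files marked.
+$ done
+\end{verbatim}
+\normalsize
+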
Instead of choosing item 5 on the first menu (Select the most recent backup
for a client), if we had chosen item 3 (Enter list of JobIds to select) and we
\footnotesize
\begin{verbatim}
Bootstrap records written to /home/kern/bacula/working/restore.bsr
-The restore job will require the following Volumes:
-
- DLT-19Jul02
- DLT-04Aug02
+The job will require the following
+ Volume(s) Storage(s) SD Device(s)
+===========================================================================
+
+ DLT-19Jul02 Tape DLT8000
+ DLT-04Aug02 Tape DLT8000
+
128401 files selected to restore.
Run Restore job
JobName: kernsrestore
Replace: always
FileSet: Kerns Files
Client: Rufus
-Storage: SDT-10000
-JobId: *None*
+Storage: Tape
+When: 2006-12-11 18:20:33
+Catalog: MyCatalog
+Priority: 10
OK to run? (yes/mod/no):
\end{verbatim}
\item {\bf before=YYYY-MM-DD HH:MM:SS} -- specify a date and time to which
the system should be restored. Only Jobs started before the specified
date/time will be selected, and as is the case for {\bf current} Bacula will
-automatically find the most recent prior Full save and all Differential and
-Incremental saves run before the date you specify. Note, this command is not
-too user friendly in that you must specify the date/time exactly as shown.
+ automatically find the most recent prior Full save and all Differential and
+ Incremental saves run before the date you specify. Note, this command is not
+ too user friendly in that you must specify the date/time exactly as shown.
\item {\bf file=filename} -- specify a filename to be restored. You must
specify the full path and filename. Prefixing the entry with a less-than
-sign
+ sign
(\lt{}) will cause Bacula to assume that the filename is on your system and
-contains a list of files to be restored. Bacula will thus read the list from
-that file. Multiple file=xxx specifications may be specified on the command
-line.
+ contains a list of files to be restored. Bacula will thus read the list from
+ that file. Multiple file=xxx specifications may be specified on the command
+ line.
\item {\bf jobid=nnn} -- specify a JobId to be restored.
\item {\bf pool=pool-name} -- specify a Pool name to be used for selection of
Volumes when specifying options 5 and 6 (restore current system, and restore
current system before given date). This permits you to have several Pools,
-possibly one offsite, and to select the Pool to be used for restoring.
+ possibly one offsite, and to select the Pool to be used for restoring.
\item {\bf yes} -- automatically run the restore without prompting for
  modifications (most useful in batch scripts; see the sketch after this
  list).
- \end{itemize}
+\end{itemize}
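+
+Putting several of these keywords together, a restore can be run with
+no interaction at all. The following sketch shows one such command line
+(the client name and date are examples only, and the exact set of
+keywords accepted may vary with your Bacula version):
+
+\footnotesize
+\begin{verbatim}
+restore client=Rufus before="2006-12-11 18:20:33" select all done yes
+\end{verbatim}
+\normalsize
+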
\subsection*{Restoring Directory Attributes}
\index[general]{Attributes!Restoring Directory }
\item Set "Minimum Block Size = 512" and "Maximum Block Size = 512" and
try the restore. If you are able to determine the block size your drive
was previously using, you should try that size if 512 does not work.
+ This is a really horrible solution, and it is not at all recommended
+ to continue backing up your data without correcting this condition.
+ Please see the Tape Testing chapter for more on this.
\item Try editing the restore.bsr file at the Run xxx yes/mod/no prompt
before starting the restore job and remove all the VolBlock statements.
These are what causes Bacula to reposition the tape, and where problems
\begin{verbatim}
*query
Available queries:
- 1: List Job totals:
- 2: List up to 20 places where a File is saved regardless of the directory:
- 3: List where the most recent copies of a file are saved:
- 4: List last 20 Full Backups for a Client:
- 5: List all backups for a Client after a specified time
- 6: List all backups for a Client
- 7: List Volume Attributes for a selected Volume:
- 8: List Volumes used by selected JobId:
- 9: List Volumes to Restore All Files:
- 10: List Pool Attributes for a selected Pool:
- 11: List total files/bytes by Job:
- 12: List total files/bytes by Volume:
- 13: List Files for a selected JobId:
- 14: List Jobs stored in a selected MediaId:
- 15: List Jobs stored for a given Volume name:
-Choose a query (1-15):
+ 1: List up to 20 places where a File is saved regardless of the directory
+ 2: List where the most recent copies of a file are saved
+ 3: List last 20 Full Backups for a Client
+ 4: List all backups for a Client after a specified time
+ 5: List all backups for a Client
+ 6: List Volume Attributes for a selected Volume
+ 7: List Volumes used by selected JobId
+ 8: List Volumes to Restore All Files
+ 9: List Pool Attributes for a selected Pool
+ 10: List total files/bytes by Job
+ 11: List total files/bytes by Volume
+ 12: List Files for a selected JobId
+ 13: List Jobs stored on a selected MediaId
+ 14: List Jobs stored for a given Volume name
+ 15: List Volumes Bacula thinks are in changer
+ 16: List Volumes likely to need replacement from age or errors
+Choose a query (1-16):
\end{verbatim}
\normalsize
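+
+For example, to find the Volumes needed to restore all files for a
+client, you might choose item 8, which then prompts for the Client name
+(the name below is an example, and the exact prompts may differ
+slightly between versions):
+
+\footnotesize
+\begin{verbatim}
+Choose a query (1-16): 8
+Enter Client Name: Rufus
+\end{verbatim}
+\normalsize
+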
--- /dev/null
+
+Projects:
+ Bacula Projects Roadmap
+ Prioritized by user vote 07 December 2005
+ Status updated 15 December 2006
+
+Summary:
+Item 1: Implement data encryption (as opposed to comm encryption)
+Item 2: Implement Migration that moves Jobs from one Pool to another.
+Item 3: Accurate restoration of renamed/deleted files from
+        Incremental/Differential backups
+Item 4: Implement a Bacula GUI/management tool using Python.
+Item 5: Implement Base jobs.
+Item 6: Allow FD to initiate a backup
+Item 7: Improve Bacula's tape and drive usage and cleaning management.
+Item 8: Implement creation and maintenance of copy pools
+Item 9: Implement new {Client}Run{Before|After}Job feature.
+Item 10: Merge multiple backups (Synthetic Backup or Consolidation).
+Item 11: Deletion of Disk-Based Bacula Volumes
+Item 12: Directive/mode to backup only file changes, not entire file
+Item 13: Multiple threads in file daemon for the same job
+Item 14: Implement red/black binary tree routines.
+Item 15: Add support for FileSets in user directories CACHEDIR.TAG
+Item 16: Implement extraction of Win32 BackupWrite data.
+Item 17: Implement a Python interface to the Bacula catalog.
+Item 18: Archival (removal) of User Files to Tape
+Item 19: Add Plug-ins to the FileSet Include statements.
+Item 20: Implement more Python events in Bacula.
+Item 21: Quick release of FD-SD connection after backup.
+Item 22: Permit multiple Media Types in an Autochanger
+Item 23: Allow different autochanger definitions for one autochanger.
+Item 24: Automatic disabling of devices
+Item 25: Implement huge exclude list support using hashing.
+
+Items complete and to be released in version 1.40.0:
+Item 1: Implement data encryption (as opposed to comm encryption)
+Item 2: Implement Migration that moves Jobs from one Pool to another.
+Item 9: Implement new {Client}Run{Before|After}Job feature.
+Item 16: Implement extraction of Win32 BackupWrite data.
+
+Items implemented but not tested and hence consequences are unknown:
+Item 22: Permit multiple Media Types in an Autochanger
+
+
+Below, you will find more information on future projects:
+
+Item 1: Implement data encryption (as opposed to comm encryption)
+ Date: 28 October 2005
+ Origin: Sponsored by Landon and 13 contributors to EFF.
+ Status: Done: Landon Fuller has implemented this in 1.39.x.
+
+ What: Currently the data that is stored on the Volume is not
+ encrypted. For confidentiality, encryption of data at
+ the File daemon level is essential.
+ Data encryption encrypts the data in the File daemon and
+ decrypts the data in the File daemon during a restore.
+
+ Why: Large sites require this.
+
+Item 2: Implement Migration that moves Jobs from one Pool to another.
+ Origin: Sponsored by Riege Software International GmbH. Contact:
+ Daniel Holtkamp <holtkamp at riege dot com>
+ Date: 28 October 2005
+ Status: Done. Completed in version 1.39.31 by Kern.
+
+ What: The ability to copy, move, or archive data that is on a
+ device to another device is very important.
+
+ Why: An ISP might want to backup to disk, but after 30 days
+ migrate the data to tape backup and delete it from
+ disk. Bacula should be able to handle this
+ automatically. It needs to know what was put where,
+ and when, and what to migrate -- it is a bit like
+ retention periods. Doing so would allow space to be
+ freed up for current backups while maintaining older
+ data on tape drives.
+
+ Notes: Riege Software have asked for the following migration
+ triggers:
+ Age of Job
+ Highwater mark (stopped by Lowwater mark?)
+
+ Notes: Migration could be additionally triggered by:
+ Number of Jobs
+ Number of Volumes
+
+Item 3: Accurate restoration of renamed/deleted files from
+ Incremental/Differential backups
+ Date: 28 November 2005
+ Origin: Martin Simmons (martin at lispworks dot com)
+ Status:
+
+ What: When restoring a fileset for a specified date (including "most
+ recent"), Bacula should give you exactly the files and directories
+ that existed at the time of the last backup prior to that date.
+
+ Currently this only works if the last backup was a Full backup.
+ When the last backup was Incremental/Differential, files and
+ directories that have been renamed or deleted since the last Full
+ backup are not currently restored correctly. Ditto for files with
+ extra/fewer hard links than at the time of the last Full backup.
+
+ Why: Incremental/Differential would be much more useful if this worked.
+
+ Notes: Item 10 (Merging of multiple backups into a single one) seems to
+ rely on this working, otherwise the merged backups will not be
+ truly equivalent to a Full backup.
+
+ Kern: notes shortened. This can be done without the need for
+ inodes. It is essentially the same as the current Verify job,
+ but one additional database record must be written, which does
+ not need any database change.
+
+ Kern: see if we can correct restoration of directories if
+ replace=ifnewer is set. Currently, if the directory does not
+ exist, a "dummy" directory is created, then when all the files
+ are updated, the dummy directory is newer so the real values
+ are not updated.
+
+Item 4: Implement a Bacula GUI/management tool using Python.
+ Origin: Kern
+ Date: 28 October 2005
+ Status: Lucas is working on this for Python GTK+.
+
+ What: Implement a Bacula console, and management tools
+ using Python and Qt or GTK.
+
+ Why: Don't we already have a wxWidgets GUI? Yes, but
+ it is written in C++ and changes to the user interface
+ must be hand tailored using C++ code. By developing
+ the user interface using Qt designer, the interface
+ can be very easily updated and most of the new Python
+ code will be automatically created. The user interface
+ changes become very simple, and only the new features
+ must be implemented. In addition, the code will be in
+ Python, which will give many more users easy (or easier)
+ access to making additions or modifications.
+
+ Notes: This is currently being implemented using Python-GTK by
+ Lucas Di Pentima <lucas at lunix dot com dot ar>
+
+Item 5: Implement Base jobs.
+ Date: 28 October 2005
+ Origin: Kern
+ Status:
+
+ What: A base job is sort of like a Full save except that you
+ will want the FileSet to contain only files that are
+ unlikely to change in the future (i.e. a snapshot of
+ most of your system after installing it). After the
+ base job has been run, when you are doing a Full save,
+ you specify one or more Base jobs to be used. All
+ files that have been backed up in the Base job/jobs but
+ not modified will then be excluded from the backup.
+ During a restore, the Base jobs will be automatically
+ pulled in where necessary.
+
+ Why: This is something none of the competition does, as far as
+ we know (except perhaps BackupPC, which is a Perl program that
+ saves to disk only). It is a big win for the user, as it
+ makes Bacula stand out as offering a unique
+ optimization that immediately saves time and money.
+ Basically, imagine that you have 100 nearly identical
+ Windows or Linux machines containing the OS and user
+ files. Now for the OS part, a Base job will be backed
+ up once, and rather than making 100 copies of the OS,
+ there will be only one. If one or more of the systems
+ have some files updated, no problem, they will be
+ automatically restored.
+
+ Notes: Huge savings in tape usage even for a single machine.
+ Will require more resources because the DIR must send
+ FD a list of files/attribs, and the FD must search the
+ list and compare it for each file to be saved.
+
+Item 6: Allow FD to initiate a backup
+ Origin: Frank Volf (frank at deze dot org)
+ Date: 17 November 2005
+ Status:
+
+ What: Provide some means, possibly by a restricted console that
+ allows a FD to initiate a backup, and that uses the connection
+ established by the FD to the Director for the backup so that
+ a Director that is firewalled can do the backup.
+
+ Why: Makes backup of laptops much easier.
+
+Item 7: Improve Bacula's tape and drive usage and cleaning management.
+ Date: 8 November 2005, November 11, 2005
+ Origin: Adam Thornton <athornton at sinenomine dot net>,
+ Arno Lehmann <al at its-lehmann dot de>
+ Status:
+
+ What: Make Bacula manage tape life cycle information, tape reuse
+ times and drive cleaning cycles.
+
+ Why: All three parts of this project are important when operating
+ backups.
+ We need to know which tapes need replacement, and we need to
+ make sure the drives are cleaned when necessary. While many
+ tape libraries and even autoloaders can handle all this
+ automatically, support by Bacula can be helpful for smaller
+ (older) libraries and single drives. Limiting the number of
+ times a tape is used might prevent the tape errors that occur when
+ tapes are used until the drives can't read them any more. Also, checking
+ drive status during operation can prevent some failures (as I
+ [Arno] had to learn the hard way...)
+
+ Notes: First, Bacula could (and even does, to some limited extent)
+ record tape and drive usage. For tapes, the number of mounts,
+ the amount of data, and the time the tape has actually been
+ running could be recorded. Data fields for Read and Write
+ time and Number of mounts already exist in the catalog (I'm
+ not sure if VolBytes is the sum of all bytes ever written to
+ that volume by Bacula). This information can be important
+ when determining which media to replace. The ability to mark
+ Volumes as "used up" after a given number of write cycles
+ should also be implemented so that a tape is never actually
+ worn out. For the tape drives known to Bacula, similar
+ information is interesting to determine the device status and
+ expected life time: Time it's been Reading and Writing, number
+ of tape Loads / Unloads / Errors. This information is not yet
+ recorded as far as I [Arno] know. A new volume status would
+ be necessary for the new state, like "Used up" or "Worn out".
+ Volumes with this state could be used for restores, but not
+ for writing. These volumes should be migrated first (assuming
+ migration is implemented) and, once they are no longer needed,
+ could be moved to a Trash pool.
+
+ The next step would be to implement a drive cleaning setup.
+ Bacula already has knowledge about cleaning tapes. Once it
+ has some information about cleaning cycles (measured in drive
+ run time, number of tapes used, or calendar days, for example)
+ it can automatically execute tape cleaning (with an
+ autochanger, obviously) or ask for operator assistance loading
+ a cleaning tape.
+
+ The final step would be to implement TAPEALERT checks not only
+ when changing tapes and only sending the information to the
+ administrator, but rather checking after each tape error,
+ checking on a regular basis (for example after each tape
+ file), and also before unloading and after loading a new tape.
+ Then, depending on the drive's TAPEALERT state and the known
+ drive cleaning state, Bacula could automatically schedule later
+ cleaning, clean immediately, or inform the operator.
+
+ Implementing this would perhaps require another catalog change
+ and perhaps major changes in SD code and the DIR-SD protocol,
+ so I'd only consider this worth implementing if it would
+ actually be used or even needed by many people.
+
+ Implementation of these projects could happen in three distinct
+ sub-projects: Measuring Tape and Drive usage, retiring
+ volumes, and handling drive cleaning and TAPEALERTs.
+
+Item 8: Implement creation and maintenance of copy pools
+ Date: 27 November 2005
+ Origin: David Boyes (dboyes at sinenomine dot net)
+ Status:
+
+ What: I would like Bacula to have the capability to write copies
+ of backed-up data on multiple physical volumes selected
+ from different pools without transferring the data
+ multiple times, and to accept any of the copy volumes
+ as valid for restore.
+
+ Why: In many cases, businesses are required to keep offsite
+ copies of backup volumes, or just wish for simple
+ protection against a human operator dropping a storage
+ volume and damaging it. The ability to generate multiple
+ volumes in the course of a single backup job allows
+ customers to simply check out one copy and send it
+ offsite, marking it as out of changer or otherwise
+ unavailable. Currently, the library and magazine
+ management capability in Bacula does not make this process
+ simple.
+
+ Restores would use the copy of the data on the first
+ available volume, in order of copy pool chain definition.
+
+ This is also a major scalability issue -- as the number of
+ clients increases beyond several thousand, and the volume
+ of data increases, transferring the data multiple times to
+ produce additional copies of the backups will become
+ physically impossible due to transfer speed
+ issues. Generating multiple copies at server side will
+ become the only practical option.
+
+ How: I suspect that this will require adding a multiplexing
+ SD that appears to be a SD to a specific FD, but 1-n FDs
+ to the specific back end SDs managing the primary and copy
+ pools. Storage pools will also need to acquire parameters
+ to define the pools to be used for copies.
+
+ Notes: I would commit some of my developers' time if we can agree
+ on the design and behavior.
+
+Item 9: Implement new {Client}Run{Before|After}Job feature.
+ Date: 26 September 2005
+ Origin: Phil Stracchino
+ Status: Done. This has been implemented by Eric Bollengier
+
+ What: Some time ago, there was a discussion of RunAfterJob and
+ ClientRunAfterJob, and the fact that they do not run after failed
+ jobs. At the time, there was a suggestion to add a
+ RunAfterFailedJob directive (and, presumably, a matching
+ ClientRunAfterFailedJob directive), but to my knowledge these
+ were never implemented.
+
+ The current implementation doesn't make it easy to add new features.
+
+ An alternate way of approaching the problem has just occurred to
+ me. Suppose the RunBeforeJob and RunAfterJob directives were
+ expanded in a manner like this example:
+
+ RunScript {
+ Command = "/opt/bacula/etc/checkhost %c"
+ RunsOnClient = No # default
+ AbortJobOnError = Yes # default
+ RunsWhen = Before
+ }
+ RunScript {
+ Command = c:/bacula/systemstate.bat
+ RunsOnClient = yes
+ AbortJobOnError = No
+ RunsWhen = After
+ RunsOnFailure = yes
+ }
+
+ RunScript {
+ Command = c:/bacula/deletestatefile.bat
+ Target = rico-fd
+ RunsWhen = Always
+ }
+
+ It's now possible to specify more than one command per Job
+ (you can stop your database and your webserver without a script).
+
+ Example:
+ Job {
+ Name = "Client1"
+ JobDefs = "DefaultJob"
+ Write Bootstrap = "/tmp/bacula/var/bacula/working/Client1.bsr"
+ FileSet = "Minimal"
+
+ RunBeforeJob = "echo test before ; echo test before2"
+ RunBeforeJob = "echo test before (2nd time)"
+ RunBeforeJob = "echo test before (3rd time)"
+ RunAfterJob = "echo test after"
+ ClientRunAfterJob = "echo test after client"
+
+ RunScript {
+ Command = "echo test RunScript in error"
+ Runsonclient = yes
+ RunsOnSuccess = no
+ RunsOnFailure = yes
+ RunsWhen = After # never by default
+ }
+ RunScript {
+ Command = "echo test RunScript on success"
+ Runsonclient = yes
+ RunsOnSuccess = yes # default
+ RunsOnFailure = no # default
+ RunsWhen = After
+ }
+ }
+
+ Why: It would be a significant change to the structure of the
+ directives, but allows for a lot more flexibility, including
+ RunAfter commands that will run regardless of whether the job
+ succeeds, or RunBefore tasks that still allow the job to run even
+ if that specific RunBefore fails.
+
+ Notes: (More notes from Phil, Kern, David and Eric)
+ I would prefer to have a single new Resource called
+ RunScript.
+
+ RunsWhen = After|Before|Always
+ RunsAtJobLevels = All|Full|Diff|Inc # not yet implemented
+
+ The AbortJobOnError, RunsOnSuccess and RunsOnFailure directives
+ could be optional, and possibly RunWhen as well.
+
+ AbortJobOnError would be ignored unless RunsWhen was set to Before
+ and would default to Yes if omitted.
+ If AbortJobOnError was set to No, failure of the script
+ would still generate a warning.
+
+ RunsOnSuccess would be ignored unless RunsWhen was set to After
+ (or RunsBeforeJob set to No), and default to Yes.
+
+ RunsOnFailure would be ignored unless RunsWhen was set to After,
+ and default to No.
+
+ Allow having the before/after status on the script command
+ line so that the same script can be used both before/after.
+
+Item 10: Merge multiple backups (Synthetic Backup or Consolidation).
+ Origin: Marc Cousin and Eric Bollengier
+ Date: 15 November 2005
+ Status: Waiting implementation. Depends on first implementing
+ project Item 2 (Migration).
+
+ What: A merged backup is a backup made without connecting to the Client.
+ It would be a Merge of existing backups into a single backup.
+ In effect, it is like a restore but to the backup medium.
+
+ For instance, say that last Sunday we made a full backup. Then
+ all week long, we created incremental backups, in order to do
+ them fast. Now comes Sunday again, and we need another full.
+ The merged backup makes it possible to do instead an incremental
+ backup (during the night for instance), and then create a merged
+ backup during the day, by using the full and incrementals from
+ the week. The merged backup will be exactly like a full made
+ Sunday night on the tape, but the production interruption on the
+ Client will be minimal, as the Client will only have to send
+ incrementals.
+
+ In fact, if it's done correctly, you could merge all the
+ Incrementals into single Incremental, or all the Incrementals
+ and the last Differential into a new Differential, or the Full,
+ last differential and all the Incrementals into a new Full
+ backup. And there is no need to involve the Client.
+
+ Why: The benefits are that:
+ - the Client just does an incremental;
+ - the merged backup on tape is just like a single full backup,
+ and can be restored very fast.
+
+ This is also a way of reducing the backup data since the old
+ data can then be pruned (or not) from the catalog, possibly
+ allowing older volumes to be recycled.
+
+Item 11: Deletion of Disk-Based Bacula Volumes
+ Date: Nov 25, 2005
+ Origin: Ross Boylan <RossBoylan at stanfordalumni dot org> (edited
+ by Kern)
+ Status:
+
+ What: Provide a way for Bacula to automatically remove Volumes
+ from the filesystem, or optionally to truncate them.
+ Obviously, the Volume must be pruned prior to removal.
+
+ Why: This would allow users more control over their Volumes and
+ prevent disk based volumes from consuming too much space.
+
+ Notes: The following two directives might do the trick:
+
+ Volume Data Retention = <time period>
+ Remove Volume After = <time period>
+
+ The migration project should also remove a Volume that is
+ migrated. This might also work for tape Volumes.
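+
+ As a sketch, the two proposed (not yet implemented) directives
+ might sit in a Pool resource like this:
+
+   Pool {
+     Name = FilePool
+     Pool Type = Backup
+     Volume Data Retention = 30 days   # proposed, hypothetical
+     Remove Volume After = 60 days     # proposed, hypothetical
+   }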
+
+Item 12: Directive/mode to backup only file changes, not entire file
+ Date: 11 November 2005
+ Origin: Joshua Kugler <joshua dot kugler at uaf dot edu>
+ Marek Bajon <mbajon at bimsplus dot com dot pl>
+ Status:
+
+ What: Currently when a file changes, the entire file will be backed up in
+ the next incremental or full backup. To save space on the tapes
+ it would be nice to have a mode whereby only the changes to the
+ file would be backed up when it is changed.
+
+ Why: This would save lots of space when backing up large files such as
+ logs, mbox files, Outlook PST files and the like.
+
+ Notes: This would require the usage of disk-based volumes as comparing
+ files would not be feasible using a tape drive.
+
+Item 13: Multiple threads in file daemon for the same job
+ Date: 27 November 2005
+ Origin: Ove Risberg (Ove.Risberg at octocode dot com)
+ Status:
+
+ What: I want the file daemon to start multiple threads for a backup
+ job so the fastest possible backup can be made.
+
+ The file daemon could parse the FileSet information and start
+ one thread for each File entry located on a separate
+ filesystem.
+
+ A configuration option in the job section should be used to
+ enable or disable this feature. The configuration option could
+ specify the maximum number of threads in the file daemon.
+
+ If the threads could spool the data to separate spool files
+ the restore process will not be much slower.
+
+ Why: Multiple concurrent backups of a large fileserver with many
+ disks and controllers will be much faster.
+
+ Notes: I am willing to try to implement this but I will probably
+ need some help and advice. (No problem -- Kern)
+
+Item 14: Implement red/black binary tree routines.
+ Date: 28 October 2005
+ Origin: Kern
+ Status: Class code is complete. Code needs to be integrated into
+ restore tree code.
+
+ What: Implement a red/black binary tree class. This could
+ then replace the current binary insert/search routines
+ used in the restore in memory tree. This could significantly
+ speed up the creation of the in memory restore tree.
+
+ Why: Performance enhancement.
+
+Item 15: Add support for FileSets in user directories CACHEDIR.TAG
+ Origin: Norbert Kiesel <nkiesel at tbdnetworks dot com>
+ Date: 21 November 2005
+ Status: (I think this is better done using a Python event that I
+ will implement in version 1.39.x).
+
+ What: CACHEDIR.TAG is a proposal for identifying directories which
+ should be ignored for archiving/backup. It works by ignoring
+ directory trees which have a file named CACHEDIR.TAG with a
+ specific content. See
+ http://www.brynosaurus.com/cachedir/spec.html
+ for details.
+
+ From Peter Eriksson:
+ I suggest that if this is implemented (I've also asked for this
+ feature some year ago) that it is made compatible with Legato
+ Networkers ".nsr" files where you can specify a lot of options on
+ how to handle files/directories (including denying further
+ parsing of .nsr files lower down into the directory trees). A
+ PDF version of the .nsr man page can be viewed at:
+
+ http://www.ifm.liu.se/~peter/nsr.pdf
+
+ Why: It's a nice alternative to "exclude" patterns for directories
+ which don't have regular pathnames. Also, it allows users to
+ control backup for themselves. Implementation should be pretty
+ simple. GNU tar >= 1.14 or so supports it, too.
+
+ Notes: I envision this as an optional feature to a fileset
+ specification.
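+
+ For reference, per the specification above, a directory is
+ tagged by placing in it a file named CACHEDIR.TAG whose
+ contents begin with this exact signature line:
+
+   Signature: 8a477f597d28d172789f06886806bc55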
+
+
+Item 16: Implement extraction of Win32 BackupWrite data.
+ Origin: Thorsten Engel <thorsten.engel at matrix-computer dot com>
+ Date: 28 October 2005
+ Status: Done. Assigned to Thorsten. Implemented in current CVS
+
+ What: This provides the Bacula File daemon with code that
+ can pick apart the stream output that Microsoft writes
+ for BackupWrite data, and thus the data can be read
+ and restored on non-Win32 machines.
+
+ Why: BackupWrite data is the portable=no option in Win32
+ FileSets, and in previous Baculas, this data could
+ only be extracted using a Win32 FD. With this new code,
+ the Windows data can be extracted and restored on
+ any OS.
+
+
+Item 17: Implement a Python interface to the Bacula catalog.
+ Date: 28 October 2005
+ Origin: Kern
+ Status:
+
+ What: Implement an interface for Python scripts to access
+ the catalog through Bacula.
+
+ Why: This will permit users to customize Bacula through
+ Python scripts.
+
+Item 18: Archival (removal) of User Files to Tape
+
+ Date: Nov. 24/2005
+
+ Origin: Ray Pengelly [ray at biomed dot queensu dot ca]
+ Status:
+
+ What: The ability to archive data to storage based on certain parameters
+ such as age, size, or location. Once the data has been written to
+ storage and logged it is then pruned from the originating
+ filesystem. Note! We are talking about user's files and not
+ Bacula Volumes.
+
+ Why: This would allow fully automatic storage management which becomes
+ useful for large datastores. It would also allow for auto-staging
+ from one media type to another.
+
+ Example 1) Medical imaging needs to store large amounts of data.
+ They decide to keep data on their servers for 6 months and then put
+ it away for long term storage. The server then finds all files
+ older than 6 months and writes them to tape. The files are then removed
+ from the server.
+
+ Example 2) All data that hasn't been accessed in 2 months could be
+ moved from high-cost, fibre-channel disk storage to a low-cost
+ large-capacity SATA disk storage pool which doesn't have as quick an
+ access time. Then after another 6 months (or possibly as one
+ storage pool gets full) data is migrated to Tape.
+
+Item 19: Add Plug-ins to the FileSet Include statements.
+ Date: 28 October 2005
+ Origin:
+ Status: Partially coded in 1.37 -- much more to do.
+
+ What: Allow users to specify wild-card and/or regular
+ expressions to be matched in both the Include and
+ Exclude directives in a FileSet. At the same time,
+ allow users to define plug-ins to be called (based on
+ regular expression/wild-card matching).
+
+ Why: This would give the users the ultimate ability to control
+ how files are backed up/restored. A user could write a
+ plug-in that knows how to back up his Oracle database without
+ stopping/starting it, for example.
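+
+ Notes: Wild-card matching of this sort already exists in the FileSet
+ Options resource, which the proposed plug-in call-out could
+ extend. A small sketch of the existing syntax (the pattern is
+ an example only):
+
+   FileSet {
+     Name = "ExampleSet"
+     Include {
+       Options {
+         Wild = "*.o"      # exclude files matching the pattern
+         Exclude = yes
+       }
+       File = /home
+     }
+   }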
+
+Item 20: Implement more Python events in Bacula.
+ Date: 28 October 2005
+ Origin:
+ Status:
+
+ What: Allow Python scripts to be called at more places
+ within Bacula and provide additional access to Bacula
+ internal variables.
+
+ Why: This will permit users to customize Bacula through
+ Python scripts.
+
+ Notes: Recycle event
+ Scratch pool event
+ NeedVolume event
+ MediaFull event
+
+ Also add a way to get a listing of currently running
+ jobs (possibly also scheduled jobs).
+
+
+Item 21: Quick release of FD-SD connection after backup.
+ Origin: Frank Volf (frank at deze dot org)
+ Date: 17 November 2005
+ Status:
+
+ What: In the Bacula implementation a backup is finished after all data
+ and attributes are successfully written to storage. When using a
+ tape backup it is very annoying that a backup can take a day,
+ simply because the current tape (or whatever) is full and the
+ administrator has not put a new one in. During that time the
+ system cannot be taken off-line, because there is still an open
+ session between the storage daemon and the file daemon on the
+ client.
+
+ Although this is a very good strategy for making "safe backups",
+ it can be annoying for e.g. laptops, which must remain
+ connected until the backup is completed.
+
+ Using a new feature called "migration" it will be possible to
+ spool first to harddisk (using a special 'spool' migration
+ scheme) and then migrate the backup to tape.
+
+ There is still the problem of getting the attributes committed.
+ If it takes a very long time to do, with the current code, the
+ job has not terminated, and the File daemon is not freed up. The
+ Storage daemon should release the File daemon as soon as all the
+ file data and all the attributes have been sent to it (the SD).
+ Currently the SD waits until everything is on tape and all the
+ attributes are transmitted to the Director before signaling
+ completion to the FD. I don't think I would have any problem
+ changing this. The reason is that even if the FD reports back to
+ the Dir that all is OK, the job will not terminate until the SD
+ has done the same thing -- so in a way keeping the SD-FD link
+ open to the very end is not really very productive ...
+
+ Why: Makes backup of laptops much easier.
+
+Item 22: Permit multiple Media Types in an Autochanger
+ Origin: Kern
+ Status: Done. Implemented in 1.38.9 (I think).
+
+ What: Modify the Storage daemon so that multiple Media Types
+ can be specified in an autochanger. This would be somewhat
+ of a simplistic implementation in that each drive would
+ still be allowed to have only one Media Type. However,
+ the Storage daemon will ensure that only a drive with
+ the Media Type that matches what the Director specifies
+ is chosen.
+
+ Why: This will permit user with several different drive types
+ to make full use of their autochangers.
+
+Item 23: Allow different autochanger definitions for one autochanger.
+ Date: 28 October 2005
+ Origin: Kern
+ Status:
+
+ What: Currently, the autochanger script is locked based on
+ the autochanger. That is, if multiple drives are being
+ simultaneously used, the Storage daemon ensures that only
+ one drive at a time can access the mtx-changer script.
+ This change would base the locking on the control device,
+ rather than the autochanger. It would then permit two autochanger
+ definitions for the same autochanger, but with different
+ drives. Logically, the autochanger could then be "partitioned"
+ for different jobs, clients, or class of jobs, and if the locking
+ is based on the control device (e.g. /dev/sg0) the mtx-changer
+ script will be locked appropriately.
+
+ Why: This will permit users to partition autochangers for specific
+ use. It would also permit implementation of multiple Media
+ Types with no changes to the Storage daemon.
+
+Item 24: Automatic disabling of devices
+ Date: 2005-11-11
+ Origin: Peter Eriksson <peter at ifm.liu dot se>
+ Status:
+
+ What: After a configurable amount of fatal errors with a tape drive
+ Bacula should automatically disable further use of a certain
+ tape drive. There should also be "disable"/"enable" commands in
+ the "bconsole" tool.
+
+ Why: On a multi-drive jukebox there is a possibility of tape drives
+ going bad during large backups (needing a cleaning tape run,
+ tapes getting stuck). It would be advantageous if Bacula would
+ automatically disable further use of a problematic tape drive
+ after a configurable amount of errors has occurred.
+
+ An example: I have a multi-drive jukebox (6 drives, 380+ slots)
+ where tapes occasionally get stuck inside the drive. Bacula will
+ notice that the "mtx-changer" command will fail and then fail
+ any backup jobs trying to use that drive. However, it will still
+ keep on trying to run new jobs using that drive and fail -
+ forever, and thus failing lots and lots of jobs... Since we have
+ many drives Bacula could have just automatically disabled
+ further use of that drive and used one of the other ones
+ instead.
+
+Item 25: Implement huge exclude list support using hashing.
+ Date: 28 October 2005
+ Origin: Kern
+ Status:
+
+ What: Allow users to specify very large exclude lists (currently
+ more than about 1000 files is too many).
+
+ Why: This would give the users the ability to exclude all
+ files that are loaded with the OS (e.g. using rpms
+ or debs). If the user can restore the base OS from
+ CDs, there is no need to backup all those files. A
+ complete restore would be to restore the base OS, then
+ do a Bacula restore. By excluding the base OS files, the
+ backup set will be *much* smaller.
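+
+ Notes: Such an exclude list might be generated from the package
+ database and fed in with the existing "<" file-list syntax
+ (paths below are examples only, and this assumes the "<"
+ syntax is honored in Exclude as it is in Include); the
+ hashing in this item is what would make lists of this size
+ practical:
+
+   rpm -qal | sort > /opt/bacula/working/base-os.list
+
+   FileSet {
+     Name = "HomeGrown"
+     Include {
+       File = /
+     }
+     Exclude {
+       File = "</opt/bacula/working/base-os.list"
+     }
+   }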
+
+
+============= Empty Feature Request form ===========
+Item n: One line summary ...
+ Date: Date submitted
+ Origin: Name and email of originator.
+ Status:
+
+ What: More detailed explanation ...
+
+ Why: Why it is important ...
+
+ Notes: Additional notes or features (omit if not used)
+============== End Feature Request form ==============
+
+
+===============================================
+Feature requests submitted after cutoff for December 2005 vote
+ and not yet discussed.
+===============================================
+Item n: Allow skipping execution of Jobs
+ Date: 29 November 2005
+ Origin: Florian Schnabel <florian.schnabel at docufy dot de>
+ Status:
+
+ What: An easy option to skip a certain job on a certain date.
+ Why: You could then easily skip tape backups on holidays. Especially
+ if you have no autochanger and can only fit one backup on a tape,
+ that would be really handy; other jobs could proceed normally
+ and you won't get errors that way.
+
+===================================================
+
+Item n: archive data
+
+ Origin: calvin streeting calvin at absentdream dot com
+ Date: 15/5/2006
+
+ What: The ability to archive to media (dvd/cd) in an uncompressed
+ format for dead filing (archiving, not backing up)
+
+ Why: At my workplace, when jobs are finished they are moved off of the
+ main file servers (raid based systems) onto a simple linux file
+ server (ide based system) so users can find old information
+ without contacting the IT dept.
+
+ This data doesn't really change, it only gets added to,
+ but it also needs backing up. At the moment it takes
+ about 8 hours to back up our servers (working data), so
+ rather than add more time to existing backups I am trying
+ to implement a system where we back up the archive data to
+ cd/dvd. These disks would only need to be appended to
+ (burn only new/changed files to new disks for off site
+ storage). Basically, understand the difference between
+ archive data and live data.
+
+ Notes: Scan the data and email me when it needs burning. Divide it
+ into predefined chunks. Keep a record of what is on what
+ disk. Make me a label (simple php->mysql=>pdf stuff; I
+ could do this bit). Ability to save data uncompressed so
+ it can be read on any other system (future proof data).
+ Save the catalog with the disk as some kind of menu
+ system.
+
+Item : Tray monitor window cleanups
+ Origin: Alan Brown ajb2 at mssl dot ucl dot ac dot uk
+ Date: 24 July 2006
+ Status:
+ What: Resizeable and scrollable windows in the tray monitor.
+
+ Why: With multiple clients, or with many jobs running, the displayed
+ window often ends up larger than the available screen, making
+ the trailing items difficult to read.
+
+ Notes:
+
+Item : Clustered file-daemons
+ Origin: Alan Brown ajb2 at mssl dot ucl dot ac dot uk
+ Date: 24 July 2006
+ Status:
+ What: A "virtual" filedaemon, which is actually a cluster of real ones.
+
+ Why: In the case of clustered filesystems (SAN setups, GFS, or OCFS2, etc)
+ multiple machines may have access to the same set of filesystems
+
+ For performance reasons, one may wish to initiate backups from
+ several of these machines simultaneously, instead of just using
+ one backup source for the common clustered filesystem.
+
+ For obvious reasons, normally backups of $A-FD/$PATH and
+ $B-FD/$PATH are treated as different backup sets. In this case
+ they are the same communal set.
+
+ Likewise when restoring, it would be easier to just specify
+ one of the cluster machines and let bacula decide which to use.
+
+ This can be faked to some extent using DNS round robin entries
+ and a virtual IP address, however it means "status client" will
+ always give bogus answers. Additionally there is no way of
+ spreading the load evenly among the servers.
+
+ What is required is something similar to the storage daemon
+ autochanger directives, so that Bacula can keep track of
+ operating backups/restores and direct new jobs to a "free"
+ client.
+
+ Notes:
+
+Item: Commercial database support
+ Origin: Russell Howe <russell_howe dot wreckage dot org>
+ Date: 26 July 2006
+ Status:
+
+ What: It would be nice for the database backend to support more
+ databases. I'm thinking of SQL Server at the moment, but I guess Oracle,
+ DB2, MaxDB, etc are all candidates. SQL Server would presumably be
+ implemented using FreeTDS or maybe an ODBC library?
+
+ Why: We only really have one database server, which is MS SQL Server
+ 2000, and maintaining a second one for the backup software is a burden
+ (we grew out of SQLite, which I liked, but which didn't work so well
+ with our database size). We don't really have a machine with the
+ resources to run postgres, and would rather only maintain a single
+ DBMS. We're stuck with
+ SQL Server because pretty much all the company's custom applications
+ (written by consultants) are locked into SQL Server 2000. I can imagine
+ this scenario is fairly common, and it would be nice to use the existing
+ properly specced database server for storing Bacula's catalog, rather
+ than having to run a second DBMS.
+
+
+Item n: Split documentation
+ Origin: Maxx <maxxatworkat gmail dot com>
+ Date: 27th July 2006
+ Status:
+
+ What: Split documentation in several books
+
+ Why: Bacula manual has now more than 600 pages, and looking for
+ implementation details is getting complicated. I think
+ it would be good to split the single volume in two or
+ maybe three parts:
+
+ 1) Introduction, requirements and tutorial, typically
+ are useful only until first installation time
+
+ 2) Basic installation and configuration, with all the
+ gory details about the directives supported
+
+ 3) Advanced Bacula: testing, troubleshooting, GUI and
+ ancillary programs, security management, scripting,
+ etc.
+
+ Notes:
+
+Item n: Include an option to operate on all pools when doing
+ update vol parameters
+
+ Origin: Dmitriy Pinchukov <absh@bossdev.kiev.ua>
+ Date: 16 August 2006
+ Status:
+
+ What: When I do update -> Volume parameters -> All Volumes
+ from Pool, then I have to select pools one by one. I'd like the
+ console to have an option like "0: All Pools" in the list of
+ defined pools.
+
+ Why: I have many pools and therefore unhappy with manually
+ updating each of them using update -> Volume parameters -> All
+ Volumes from Pool -> pool #.
+
+Item n: Automatic promotion of backup levels
+ Date: 19 January 2006
+ Origin: Adam Thornton <athornton@sinenomine.net>
+ Status: Blue sky
+
+ What: Amanda has a feature whereby it estimates the space that a
+ differential, incremental, and full backup would take. If the
+ difference in space required between the scheduled level and the next
+ level up is beneath some user-defined critical threshold, the backup
+ level is bumped to the next type. Doing this minimizes the number of
+ volumes necessary during a restore, with a fairly minimal cost in
+ backup media space.
+
+ Why: I know at least one (quite sophisticated and smart) user
+ for whom the absence of this feature is a deal-breaker in terms of
+ using Bacula; if we had it, it would eliminate the one cool thing
+ Amanda can do and we can't (at least, the one cool thing I know of).
+
+Item n+1: Incorporation of XACML2/SAML2 parsing
+ Date: 19 January 2006
+ Origin: Adam Thornton <athornton@sinenomine.net>
+ Status: Blue sky
+
+ What: XACML is "eXtensible Access Control Markup Language" and
+ "SAML is the "Security Assertion Markup Language"--an XML standard
+ for making statements about identity and authorization. Having these
+ would give us a framework to approach ACLs in a generic manner, and
+ in a way flexible enough to support the four major sorts of ACLs I
+ see as a concern to Bacula at this point, as well as (probably) to
+ deal with new sorts of ACLs that may appear in the future.
+
+ Why: Bacula is beginning to need to back up systems with ACLs
+ that do not map cleanly onto traditional Unix permissions. I see
+ four sets of ACLs--in general, mutually incompatible with one
+ another--that we're going to need to deal with. These are: NTFS
+ ACLs, POSIX ACLs, NFSv4 ACLs, and AFS ACLs. (Some may question the
+ relevance of AFS; AFS is one of Sine Nomine's core consulting
+ businesses, and having a reputable file-level backup and restore
+ technology for it (as Tivoli is probably going to drop AFS support
+ soon since IBM no longer supports AFS) would be of huge benefit to
+ our customers; we'd most likely create the AFS support at Sine Nomine
+ for inclusion into the Bacula (and perhaps some changes to the
+ OpenAFS volserver) core code.)
+
+ Now, obviously, Bacula already handles NTFS just fine. However, I
+ think there's a lot of value in implementing a generic ACL model, so
+ that it's easy to support whatever particular instances of ACLs come
+ down the pike: POSIX ACLS (think SELinux) and NFSv4 are the obvious
+ things arriving in the Linux world in a big way in the near future.
+ XACML, although overcomplicated for our needs, provides this
+ framework, and we should be able to leverage other people's
+ implementations to minimize the amount of work *we* have to do to get
+ a generic ACL framework. Basically, the costs of implementation are
+ high, but they're largely both external to Bacula and already sunk.
+
+Item 1: Add an over-ride in the Schedule configuration to use a
+ different pool for different backup types.
+
+Date: 19 Jan 2005
+Origin: Chad Slater <chad.slater@clickfox.com>
+Status:
+
+ What: Adding a FullStorage=BigTapeLibrary in the Schedule resource
+ would help those of us who use different storage devices for different
+ backup levels cope with the "auto-upgrade" of a backup.
+
+ Why: Assume I add several new devices to be backed up, i.e. several
+ hosts with 1TB RAID. To avoid tape switching hassles, incrementals are
+ stored in a disk set on a 2TB RAID. If you add these devices in the
+ middle of the month, the incrementals are upgraded to "full" backups,
+ but they try to use the same storage device as requested in the
+ incremental job, filling up the RAID holding the incrementals. If we
+ could override the Storage parameter for full and/or differential
+ backups, then the Full job would use the proper Storage device, which
+ has more capacity (i.e. an 8TB tape library).
+
+
+Item: Implement multiple numeric backup levels as supported by dump
+Date: 3 April 2006
+Origin: Daniel Rich <drich@employees.org>
+Status:
+What: Dump allows specification of backup levels numerically instead of just
+ "full", "incr", and "diff". In this system, at any given level, all
+ files are backed up that were modified since the last backup of a
+ higher level (with 0 being the highest and 9 being the lowest). A
+ level 0 is therefore equivalent to a full, level 9 an incremental, and
+ the levels 1 through 8 are varying levels of differentials. For
+ Bacula's sake, these could be represented as "full", "incr", and
+ "diff1", "diff2", etc.
+
+Why: Support of multiple backup levels would provide for more advanced backup
+ rotation schemes such as "Towers of Hanoi". This would allow better
+ flexibility in performing backups, and can lead to shorter recovery
+ times.
+
+Notes: Legato Networker supports a similar system with full, incr, and 1-9 as
+ levels.
+
+Kern notes: I think this would add very little functionality, but a *lot* of
+ additional overhead to Bacula.
+
+Item 1: include JobID in spool file name
+ Origin: Mark Bergman <mark.bergman@uphs.upenn.edu>
+ Date: Tue Aug 22 17:13:39 EDT 2006
+ Status:
+
+ What: Change the name of the spool file to include the JobID
+
+ Why: JobIDs are the common key used to refer to jobs, yet the
+ spoolfile name doesn't include that information. The date/time
+ stamp is useful (and should be retained).
+
+
+
+Item 2: include timestamp of job launch in "stat clients" output
+ Origin: Mark Bergman <mark.bergman@uphs.upenn.edu>
+ Date: Tue Aug 22 17:13:39 EDT 2006
+ Status:
+
+ What: The "stat clients" command doesn't include any detail on when
+ the active backup jobs were launched.
+
+ Why: Including the timestamp would make it much easier to decide whether
+ a job is running properly.
+
+ Notes: It may be helpful to have the output from "stat clients" formatted
+ more like that from "stat dir" (and other commands), in a column
+ format. The per-client information that's currently shown (level,
+ client name, JobId, Volume, pool, device, Files, etc.) is good, but
+ somewhat hard to parse (both programmatically and visually),
+ particularly when there are many active clients.
+
+Item 1: Filesystemwatch triggered backup.
+ Date: 31 August 2006
+ Origin: Jesper Krogh <jesper@krogh.cc>
+ Status: Unimplemented, depends probably on "client initiated backups"
+
+ What: With inotify and similar filesystem-triggered notification
+ systems, it is possible to have the file daemon monitor
+ filesystem changes and initiate a backup.
+
+ Why: There are 2 situations where this is nice to have.
+ 1) It is possible to get a much finer-grained backup than
+ the fixed schedules used now. A file created and deleted
+ a few hours later can automatically be caught.
+
+ 2) The load introduced on the system will probably be
+ distributed more evenly.
+
+ Notes: This can be combined with configuration that specifies
+ something like: "at most every 15 minutes or when changes
+ consumed XX MB".
+
+Item n: Message mailing based on backup types
+Origin: Evan Kaufman <evan.kaufman@gmail.com>
+ Date: January 6, 2006
+Status:
+
+ What: In the "Messages" resource definitions, allowing messages
+ to be mailed based on the type (backup, restore, etc.) and level
+ (full, differential, etc) of job that created the originating
+ message(s).
+
+Why: It would, for example, allow someone's boss to be emailed
+ automatically only when a Full Backup job runs, so he can
+ retrieve the tapes for offsite storage, even if the IT dept.
+ doesn't (or can't) explicitly notify him. At the same time, his
+ mailbox wouldn't be filled by notifications of Verifies, Restores,
+ or Incremental/Differential Backups (which would likely be kept
+ onsite).
+
+Notes:
+ One way this could be done is through additional message types, for example:
+
+ Messages {
+ # email the boss only on full system backups
+ Mail = boss@mycompany.com = full, !incremental, !differential, !restore,
+ !verify, !admin
+ # email us only when something breaks
+ MailOnError = itdept@mycompany.com = all
+ }
+
+
+Item n: Allow inclusion/exclusion of files in a fileset by creation/mod times
+ Origin: Evan Kaufman <evan.kaufman@gmail.com>
+ Date: January 11, 2006
+ Status:
+
+ What: In the vein of the Wild and Regex directives in a Fileset's
+ Options, it would be helpful to allow a user to include or exclude
+ files and directories by creation or modification times.
+
+ You could factor the Exclude=yes|no option in much the same way it
+ affects the Wild and Regex directives. For example, you could exclude
+ all files modified before a certain date:
+
+ Options {
+ Exclude = yes
+ Modified Before = ####
+ }
+
+ Or you could exclude all files created/modified since a certain date:
+
+ Options {
+ Exclude = yes
+ Created Modified Since = ####
+ }
+
+ The format of the time/date could be done several ways, say the number
+ of seconds since the epoch:
+ 1137008553 = Jan 11 2006, 1:42:33PM # result of `date +%s`
+
+ Or a human readable date in a cryptic form:
+ 20060111134233 = Jan 11 2006, 1:42:33PM # YYYYMMDDhhmmss
+
+ Why: I imagine a feature like this could have many uses. It would
+ allow a user to do a full backup while excluding the base operating
+ system files, so if I installed a Linux snapshot from a CD yesterday,
+ I'll *exclude* all files modified *before* today. If I need to
+ recover the system, I use the CD I already have, plus the tape backup.
+ Or if, say, a Windows client is hit by a particularly corrosive
+ virus, I may need to *exclude* any files created/modified *since* the
+ time of infection.
+
+ Notes: Of course, this feature would work in concert with other
+ in/exclude rules, and wouldn't override them (or each other).
+
+ Notes: The directives I'd imagine would be along the lines of
+ "[Created] [Modified] [Before|Since] = <date>".
+ So one could compare against 'ctime' and/or 'mtime', but ONLY 'before'
+ or 'since'.
+
+
+Item: Implement support for stacking arbitrary stream filters, sinks.
+Date: 23 November 2006
+Origin: Landon Fuller <landonf@threerings.net>
+Status: Planning. Assigned to landonf.
+
+What:
+ Implement support for the following:
+ - Stacking arbitrary stream filters (eg, encryption, compression,
+ sparse data handling)
+ - Attaching file sinks to terminate stream filters (ie, write out
+ the resultant data to a file)
+ - Refactor the restoration state machine accordingly
+
+Why:
+ The existing stream implementation suffers from the following:
+ - All state (compression, encryption, stream restoration), is
+ global across the entire restore process, for all streams. There are
+ multiple entry and exit points in the restoration state machine, and
+ thus multiple places where state must be allocated, deallocated,
+ initialized, or reinitialized. This results in exceptional complexity
+ for the author of a stream filter.
+ - The developer must enumerate all possible combinations of filters
+ and stream types (ie, win32 data with encryption, without encryption,
+ with encryption AND compression, etc).
+
+Notes:
+ This feature request only covers implementing the stream filters/
+ sinks, and refactoring the file daemon's restoration implementation
+ accordingly. If I have extra time, I will also rewrite the backup
+ implementation. My intent in implementing the restoration first is to
+ solve pressing bugs in the restoration handling, and to ensure that
+ the new restore implementation handles existing backups correctly.
+
+ I do not plan on changing the network or tape data structures to
+ support defining arbitrary stream filters, but supporting that
+ functionality is the ultimate goal.
+
+ Assistance with either code or testing would be fantastic.
+
+Item 1: On the bconsole "restore" command line, implement a separate
+ option for specifying the host to restore from, and the
+ host to restore to.
+
+ Date: 11 December 2006
+
+ Origin: Discussion on Bacula-users entitled 'Scripted restores to
+ different clients', December 2006
+
+ Status: New feature request
+
+ What: While using bconsole interactively, you can specify the client
+ that a backup job is to be restored for, and then you can
+ specify later a different client to send the restored files
+ back to. However, using the 'restore' command with all options
+ on the command line, this cannot be done, due to the ambiguous
+ 'client' parameter. Additionally, this parameter means different
+ things depending on whether it's specified on the command line or
+ afterwards, in the Modify Job screens.
+
+ Why: This feature would enable restore jobs to be more completely
+ automated, for example by a web or GUI front-end.
+
+ Notes: client can also be implied by specifying the jobid on the command
+ line