- Repeat = 3w
- }
- }
- }
-
- Notes: Kern: I have merged the previously separate project of skipping
- jobs (via Schedule syntax) into this.
-
-
-Item 3: Data encryption on storage daemon
- Origin: Tobias Barth <tobias.barth at web-arts.com>
- Date: 04 February 2009
- Status: new
-
- What: The storage daemon should be able to do the data encryption that can
- currently be done by the file daemon.
-
- Why: This would have 2 advantages:
- 1) one could encrypt the data of unencrypted tapes by doing a
- migration job
- 2) the storage daemon would be the only machine that would have
- to keep the encryption keys.
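-
- As a rough sketch: today the encryption is configured in the File
- Daemon via its PKI directives; an SD-side equivalent might reuse them
- at the Device level (the directives in the second resource below are
- hypothetical, invented here only for illustration):
-
-   # Existing: encryption is done by the FD
-   FileDaemon {
-     Name = client1-fd                        # example name
-     PKI Signatures = Yes
-     PKI Encryption = Yes
-     PKI Keypair = "/etc/bacula/client1-fd.pem"
-     # (other required FD directives omitted)
-   }
-
-   # Proposed: encryption done by the SD (hypothetical directives)
-   Device {
-     Name = LTO4-drive                        # example name
-     PKI Encryption = Yes                     # hypothetical
-     PKI Keypair = "/etc/bacula/sd.pem"       # hypothetical
-     # (other required Device directives omitted)
-   }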
-
- Notes from Landon:
- As an addendum to the feature request, here are some crypto
- implementation details I wrote up regarding SD-encryption back in Jan
- 2008:
- http://www.mail-archive.com/bacula-users@lists.sourceforge.net/msg28860.html
-
-
-Item 4: Add ability to Verify any specified Job.
-Date: 17 January 2008
-Origin: portrix.net Hamburg, Germany.
-Contact: Christian Sabelmann
-Status: 70% of the required Code is part of the Verify function since v. 2.x
-
- What:
- The ability to tell Bacula which Job it should verify, instead of
- automatically verifying just the last one.
-
- Why:
- It is a pity that such a powerful feature as Verify Jobs
- (VolumeToCatalog) is restricted to the last backup Job of a client.
- Users who run daily Backups are currently forced to also run daily
- Verify Jobs in order to take advantage of this useful feature. This
- daily Verify-after-Backup practice is not always desired, and Verify
- Jobs sometimes have to be scheduled separately (not necessarily
- scheduled in Bacula). With this feature, admins could verify Jobs once
- a week or once a month, selecting the Jobs they want to verify. Taking
- into account older bug reports about this feature and the selection of
- the Job to be verified, it should also not be too difficult to
- implement.
-
- Notes: For the verify Job, the user could select the Job to be verified
- from a list of the latest Jobs of a client. It would also be possible to
- verify a certain volume. All of this would naturally apply only to
- Jobs whose file information is still in the catalog.
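-
- A possible console interaction, as a sketch only (the jobid parameter
- on the run command is the requested syntax, not something that exists
- today, and the job name is invented for illustration):
-
-   * run job=VerifyClient1 jobid=1234 yes
-
- Here VerifyClient1 would be an ordinary Job with Type = Verify and
- Level = VolumeToCatalog. When jobid is omitted, the console could
- present the selection list of recent Jobs described above.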
-
-
-Item 5: Improve Bacula's tape and drive usage and cleaning management
- Date: 8 November 2005, November 11, 2005
- Origin: Adam Thornton <athornton at sinenomine dot net>,
- Arno Lehmann <al at its-lehmann dot de>
- Status:
-
- What: Make Bacula manage tape life cycle information, tape reuse
- times and drive cleaning cycles.
-
- Why: All three parts of this project are important when operating
- backups.
- We need to know which tapes need replacement, and we need to
- make sure the drives are cleaned when necessary. While many
- tape libraries and even autoloaders can handle all this
- automatically, support by Bacula can be helpful for smaller
- (older) libraries and single drives. Limiting the number of
- times a tape is used might prevent the tape errors that occur
- when tapes are simply used until the drives can no longer read
- them. Also, checking drive status during operation can prevent
- some failures (as I [Arno] had to learn the hard way...)
-
- Notes: First, Bacula could (and even does, to some limited extent)
- record tape and drive usage. For tapes, the number of mounts,
- the amount of data, and the time the tape has actually been
- running could be recorded. Data fields for Read and Write
- time and Number of mounts already exist in the catalog (I'm
- not sure if VolBytes is the sum of all bytes ever written to
- that volume by Bacula). This information can be important
- when determining which media to replace. The ability to mark
- Volumes as "used up" after a given number of write cycles
- should also be implemented so that a tape is never actually
- worn out. For the tape drives known to Bacula, similar
- information is interesting for determining the device status and
- expected lifetime: the time spent reading and writing, and the
- number of tape loads / unloads / errors. This information is not yet
- recorded as far as I [Arno] know. A new volume status would
- be necessary for the new state, like "Used up" or "Worn out".
- Volumes with this state could be used for restores, but not
- for writing. These volumes should be migrated first (assuming
- migration is implemented) and, once they are no longer needed,
- could be moved to a Trash pool.
-
- The next step would be to implement a drive cleaning setup.
- Bacula already has knowledge about cleaning tapes. Once it
- has some information about cleaning cycles (measured in drive
- run time, number of tapes used, or calendar days, for example)
- it can automatically execute tape cleaning (with an
- autochanger, obviously) or ask for operator assistance loading
- a cleaning tape.
-
- The final step would be to extend the TAPEALERT checks: instead
- of checking only when changing tapes and merely sending the
- information to the administrator, Bacula would also check after
- each tape error, on a regular basis (for example after each tape
- file), and before unloading and after loading a tape. Then,
- depending on the drive's TAPEALERT state and the known drive
- cleaning state, Bacula could automatically schedule a later
- cleaning, clean immediately, or inform the operator.
-
- Implementing this would perhaps require another catalog change
- and perhaps major changes in SD code and the DIR-SD protocol,
- so I'd only consider this worth implementing if it would
- actually be used or even needed by many people.
-
- Implementation of these projects could happen in three distinct
- sub-projects: Measuring Tape and Drive usage, retiring
- volumes, and handling drive cleaning and TAPEALERTs.
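-
- As a starting point, several of these counters already exist and can
- be inspected from the console, and the Pool resource can already
- retire volumes based on some of them; a mounts-based limit would be
- new (the last directive below is hypothetical, and all names and
- values are examples only):
-
-   * llist volume=Vol0001
-     (the output includes VolMounts, VolErrors, VolReadTime and
-     VolWriteTime among other Media fields)
-
-   Pool {
-     Name = TapePool
-     Pool Type = Backup
-     Maximum Volume Jobs = 200      # existing: marks the volume Used
-     Volume Use Duration = 90 days  # existing: limits how long it is written
-     Maximum Volume Mounts = 500    # hypothetical "worn out" limit
-   }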
-
-
-Item 6: Allow FD to initiate a backup
-Origin: Frank Volf (frank at deze dot org)
-Date: 17 November 2005
-Status:
-
-What: Provide some means, possibly via a restricted console, that
- allows an FD to initiate a backup, and that uses the connection
- established by the FD to the Director for the backup, so that
- a firewalled Director can do the backup.
-Why: Makes backup of laptops much easier.
-Notes: - The FD already has code for the monitor interface
- - It would be nice to have a .job command that lists authorized jobs
- - Commands need to be restricted on the Director side
- (for example by re-using the runscript flag)
- - The Client resource can be used to authorize the connection
- - Initially, the client will not be able to modify job parameters
- - We need a way to run a status command to follow job progression
-
- This project consists of the following points:
- 1. Modify the FD to have a "mini-console" interface that
- permits it to connect to the Director and start a
- backup job of itself.
- 2. The list of jobs that can be started by the FD is
- defined in the Director (possibly via a restricted
- console; see the sketch after this list).
- 3. Modify the existing tray monitor code in the Win32 FD
- so that it is a separate program from the FD.
- 4. The tray monitor program should be extended to permit
- initiating a backup.
- 5. No new Director directives should be added without
- prior consultation with the Bacula developers.
- 6. The comm line used by the FD to connect to the Director
- should be re-used by the Director to do the backup.
- This feature is partially implemented in the Director.
- 7. The FD may have a new directive that allows it to start
- a backup when the FD starts.
- 8. The console interface to the FD should be extended to
- permit a properly authorized console to initiate a
- backup via the FD.
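-
- As a sketch of point 2 using today's restricted console support (all
- names and the password below are invented for illustration), the
- Director could define one restricted console per client:
-
-   Console {
-     Name = laptop1-console
-     Password = "secret"
-     JobACL = laptop1-backup
-     ClientACL = laptop1-fd
-     CommandACL = run, status, .status
-   }
-
- The FD's "mini-console" would authenticate as this console and could
- therefore run, and follow with the status command, only its own job.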
-
-
-Item 7: Implement Storage daemon compression
- Date: 18 December 2006
- Origin: Vadim A. Umanski , e-mail umanski@ext.ru
- Status:
- What: The ability to compress backup data on the SD as it receives the
- data, instead of on the client as it sends the data.
- Why: The need is practical. I've got some machines that can send
- data to the network 4 or 5 times faster than they can compress
- it (I've measured that). They have fast enough SCSI/FC
- disk subsystems but rather slow CPUs (e.g. UltraSPARC II),
- while the backup server has quite fast CPUs (e.g. dual P4
- Xeons) and quite a low load. When you have 20, 50 or 100 GB
- of raw data, running a job 4 to 5 times faster really
- matters. On the other hand, the data compresses by 50% or
- better, so wasting twice the space for a disk backup is not
- good at all. And the network is all mine (I have a dedicated
- management/provisioning network), so I can get as much
- bandwidth as I need: 100 Mbps, 1000 Mbps...
- That's why the server-side compression feature is needed!
- Notes:
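- For contrast, this is how client-side compression is enabled today,
- in the FileSet; an SD-side equivalent would need a new directive (the
- Device-level line below is hypothetical, and the resource names are
- examples):
-
-   FileSet {
-     Name = "FullSet"
-     Include {
-       Options {
-         signature = MD5
-         compression = GZIP    # existing: compression done by the FD
-       }
-       File = /home
-     }
-   }
-
-   Device {
-     Name = FileStorage
-     Compression = GZIP        # hypothetical: compression done by the SD
-     # (other required Device directives omitted)
-   }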
-
-
-Item 8: Reduction of communications bandwidth for a backup
- Date: 14 October 2008
- Origin: Robin O'Leary (Equiinet)
- Status:
-
- What: Using rdiff techniques, Bacula could significantly reduce
- the network data transfer volume to do a backup.
-
- Why: Faster backup across the Internet
-
- Notes: This requires retaining certain data on the client during a Full
- backup that will speed up subsequent backups.
-
-
-Item 9: Ability to reconnect a disconnected comm line
- Date: 26 April 2009
- Origin: Kern/Eric
- Status:
-
- What: Often jobs fail because of a communications line drop. In that
- case, Bacula should be able to reconnect to the other daemon and
- resume the job.
-
- Why: Avoids backing up data that has already been saved.
-
- Notes: *Very* complicated from a design point of view because of authentication.
-
-Item 10: Start spooling even when waiting on tape
- Origin: Tobias Barth <tobias.barth@web-arts.com>
- Date: 25 April 2008
- Status:
-
- What: If a job can be spooled to disk before writing it to tape, it should
- be spooled immediately. Currently, Bacula waits until the correct
- tape is inserted into the drive.
-
- Why: It could save hours. While Bacula waits for the operator to insert
- the correct tape (e.g. a new tape or a tape from another media
- pool), it could already prepare the spooled data in the spooling
- directory and start despooling immediately once the tape has been
- inserted.
-
- 2nd step: use two or more spooling directories. While one directory is
- despooling, the next (on different disk drives) could already
- be spooling the next data.
-
- Notes: I am using Bacula 2.2.8, which has none of these features
- implemented.
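-
- The directives involved already exist; the request concerns when
- spooling starts, not new syntax. A typical setup today (all names, the
- path, and the size are examples only):
-
-   Job {
-     Name = "WeeklyFull"
-     Spool Data = yes            # existing: spool before writing to tape
-     # (other required Job directives omitted)
-   }
-
-   Device {
-     Name = LTO4-drive
-     Spool Directory = /var/spool/bacula
-     Maximum Spool Size = 200G
-     # (other required Device directives omitted)
-   }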
-
-
-Item 11: Include all conf files in specified directory
-Date: 18 October 2008
-Origin: Database, Lda. Maputo, Mozambique
-Contact: Cameron Smith / cameron.ord@database.co.mz
-Status: New request
-
-What: A directive something like "IncludeConf = /etc/bacula/subconfs". Every
- time the Bacula Director restarts or reloads, it would walk the given
- directory (non-recursively) and include the contents of any files
- therein, as though they were appended to bacula-dir.conf.
-
-Why: Permits simplified and safer configuration for larger installations with
- many client PCs. Currently, through judicious use of JobDefs and
- similar directives, it is possible to reduce the client-specific part of
- a configuration to a minimum. The client-specific directives can be
- prepared according to a standard template and dropped into a known
- directory. However, it is still necessary to add a line to the "master"
- file (bacula-dir.conf) referencing each new file. This exposes the
- master to unnecessary risk of accidental mistakes and makes automating
- the addition of new client confs more difficult (it is easier to
- automate dropping a file into a directory than rewriting an existing
- file). Kern has previously made a convincing argument for NOT keeping
- Bacula's core configuration in an RDBMS, but I believe that the present
- request is a reasonable extension to the current "flat-file-based"
- configuration philosophy.
-
-Notes: There is NO need for any special syntax in these files. They should
- contain standard directives which are simply "inlined" into the parent
- file, as already happens when you explicitly reference an external file.
-
-Notes: (kes) this can already be done with scripting
- From: John Jorgensen <jorgnsn@lcd.uregina.ca>
- The bacula-dir.conf at our site contains these lines:
-
- #
- # Include subfiles associated with configuration of clients.
- # They define the bulk of the Clients, Jobs, and FileSets.
- #
- @|"sh -c 'for f in /etc/bacula/clientdefs/*.conf ; do echo @${f} ; done'"
-
- and when we get a new client, we just put its configuration into
- a new file called something like:
-
- /etc/bacula/clientdefs/clientname.conf
-
-
-Item 12: Multiple threads in file daemon for the same job
- Date: 27 November 2005
- Origin: Ove Risberg (Ove.Risberg at octocode dot com)
- Status:
-
- What: I want the file daemon to start multiple threads for a backup
- job so the fastest possible backup can be made.
-
- The file daemon could parse the FileSet information and start
- one thread for each File entry located on a separate
- filesystem.
-
- A configuration option in the Job section should be used to
- enable or disable this feature. The configuration option could
- specify the maximum number of threads in the file daemon (see
- the sketch below).
-
- If the threads could spool the data to separate spool files,
- the restore process would not be much slower.
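-
- A sketch of what such an option might look like; the "Maximum FD
- Threads" directive below is hypothetical, invented for this request:
-
-   Job {
-     Name = "BigFileServer"    # example name
-     Maximum FD Threads = 4    # hypothetical: one thread per File entry,
-                               # up to this limit
-     # (other required Job directives omitted)
-   }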
-
- Why: Multiple concurrent backups of a large fileserver with many
- disks and controllers will be much faster.
-
- Notes: (KES) This is not necessary and could be accomplished
- by having two jobs. In addition, the current VSS code
- is single-threaded.
-
-
-Item 13: Possibility to schedule Jobs on last Friday of the month
-Origin: Carsten Menke <bootsy52 at gmx dot net>
-Date: 02 March 2008
-Status:
-
- What: Currently, if you want to run your monthly Backups on the last
- Friday of each month, this is only possible with workarounds (e.g.
- scripting), as some months have four Fridays and some have five.
- The same is true if you plan to run your yearly Backups on the
- last Friday of the year. It would be nice to have the ability to
- use the built-in scheduler for this.
-
- Why: In many companies the last working day of the week is Friday (or
- Saturday), so to get the most data of the month onto the monthly
- tape, the employees are advised to insert the tape for the
- monthly backups on the last Friday of the month.
-
- Notes: To make this fully functional, it would be nice if the
- "first" and "last" keywords could be implemented in the
- scheduler, so that it is also possible to run monthly backups on
- the first Friday of the month, and much more. If the syntax
- were expanded to {first|last} {Month|Week|Day|Mo-Fri} of the
- {Year|Month|Week}, really flexible jobs could be scheduled.
-
- To have a certain Job run on the last Friday of the month, for
- example, one could then write:
-
- Run = pool=Monthly last Fri of the Month at 23:50
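-
- Embedded in a complete Schedule resource, the proposed syntax might
- read as follows (a sketch: the "last ... of the Month" form is the
- requested syntax and does not exist yet; the resource name is an
- example):
-
-   Schedule {
-     Name = "MonthEnd"
-     Run = Level=Full Pool=Monthly last Fri of the Month at 23:50
-   }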