+ Origin: Michael Mohr, SAG Mohr.External@infineon.com
+
+ Date: 22 February 2008
+
+ Status:
+
+ What: Make it possible to back up and restore encrypted files from
+ and to Windows systems without the need to decrypt them, by using
+ the raw encryption functions API that Microsoft provides for this
+ purpose (see:
+ http://msdn2.microsoft.com/en-us/library/aa363783.aspx).
+ Whether a file is encrypted can be determined by checking the
+ FILE_ATTRIBUTE_ENCRYPTED flag returned by the GetFileAttributes
+ function.
+
+ Why: Without this interface the fd-daemon running
+ under the system account can't read encrypted files, because
+ it lacks the key needed for decryption. As a result,
+ encrypted files are currently not backed up
+ by Bacula, and no error is reported for the skipped files.
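+
+ As a rough illustration (not Bacula FD code; the helper name and path
+ are made up), checking for an EFS-encrypted file could look like this:
+
+   #include <windows.h>
+
+   /* returns nonzero if the file carries the EFS-encrypted attribute */
+   static int is_efs_encrypted(const char *path)
+   {
+       DWORD attrs = GetFileAttributesA(path);
+       if (attrs == INVALID_FILE_ATTRIBUTES)
+           return 0;               /* file not accessible */
+       return (attrs & FILE_ATTRIBUTE_ENCRYPTED) != 0;
+   }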
+
+ Notes: ./.
+
+ Item 1: Possibility to schedule Jobs on the last Friday of the month
+ Origin: Carsten Menke <bootsy52 at gmx dot net>
+ Date: 02 March 2008
+ Status:
+
+ What: Currently, if you want to run your monthly backups on the last
+ Friday of each month, this is only possible with workarounds (e.g.
+ scripting), since some months have four Fridays and some have five.
+ The same is true if you plan to run your yearly backups on the
+ last Friday of the year. It would be nice to be able to
+ use the built-in scheduler for this.
+
+ Why: In many companies the last working day of the week is Friday (or
+ Saturday), so to get the most data of the month onto the monthly
+ tape, the employees are advised to insert the tape for the
+ monthly backups on the last Friday of the month.
+
+ Notes: To make this feature complete, it would be nice if the
+ "first" and "last" keywords could be implemented in the
+ scheduler, so it would also be possible to run monthly backups on
+ the first Friday of the month, and more. If the syntax
+ were extended to {first|last} {Month|Week|Day|Mo-Fri} of the
+ {Year|Month|Week}, really flexible jobs could be scheduled.
+
+ To have a certain job run on the last Friday of the month, for example,
+ one could then write
+
+ Run = pool=Monthly last Fri of the Month at 23:50
+
+ ## Yearly Backup
+
+ Run = pool=Yearly last Fri of the Year at 23:50
+
+ ## Certain Jobs the last Week of a Month
+
+ Run = pool=LastWeek last Week of the Month at 23:50
+
+ ## Monthly Backup on the last day of the month
+
+ Run = pool=Monthly last Day of the Month at 23:50
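+
+ A complete Schedule resource using the proposed keywords (hypothetical
+ syntax; "last ... of the ..." does not exist in the scheduler yet)
+ might then read:
+
+ Schedule {
+   Name = "MonthlyLastFriday"
+   Run = pool=Monthly last Fri of the Month at 23:50
+ }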
+
+ Item n: Add a minimum spool size directive
+ Origin: Frank Sweetser <fs@wpi.edu>
+ Date: 20 March 2008
+
+ What: Add a new SD directive, "minimum spool size" (or similar). This
+ directive would specify a minimum level of free space available for
+ spooling. If the unused spool space is less than this level, any
+ new spooling requests would be blocked as if the "maximum spool
+ size" threshold had been reached. Already-spooling jobs would be
+ unaffected by this directive.
+
+ Why: I've been bitten by this scenario a couple of times:
+
+ Assume a maximum spool size of 100M. Two concurrent jobs, A and B,
+ are both running. Due to timing quirks and previously running jobs,
+ job A has used 99.9M of space in the spool directory. While A is
+ busy despooling to tape, B is happily using the remaining 0.1M of
+ spool space. This ends up in a spool/despool sequence every 0.1M of
+ data. In addition to fragmenting the data on the volume far more
+ than was necessary, in larger data sets (i.e., tens or hundreds of
+ gigabytes) it can easily produce multi-megabyte report emails!
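+
+ With the proposed directive (hypothetical name and values; only
+ "Maximum Spool Size" exists today), a device in bacula-sd.conf might
+ look like:
+
+ Device {
+   Name = LTO4-Drive
+   Archive Device = /dev/nst0
+   Media Type = LTO4
+   Spool Directory = /var/spool/bacula
+   Maximum Spool Size = 100M
+   Minimum Spool Size = 10M  # proposed: block new spool requests below this
+ }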
+
+ Item n: Expand the Verify Job capability to verify Jobs older than the
+ last one (for VolumeToCatalog Jobs)
+ Date: 17 January 2008
+ Origin: portrix.net Hamburg, Germany.
+ Contact: Christian Sabelmann
+ Status: 70% of the required code has been part of the Verify function since v. 2.x
+
+ What:
+ The ability to tell Bacula which Job to verify, instead of
+ automatically verifying just the last one.
+
+ Why:
+ It is a pity that such a powerful feature as Verify Jobs
+ (VolumeToCatalog) can only be used with the last backup Job
+ of a client. Currently, users who have to do daily backups are forced
+ to also do daily Verify Jobs in order to take advantage of this useful
+ feature. This daily verify-after-backup routine is not always desired,
+ and Verify Jobs sometimes have to be scheduled separately (not
+ necessarily within Bacula). With this feature, admins could verify Jobs
+ once a week or a few times per month, selecting the Jobs they want to
+ verify. This feature should also not be too difficult to implement,
+ taking into account older bug reports about it and the selection of
+ the Job to be verified.
+
+ Notes: For the Verify Job, the user could select the Job to be verified
+ from a list of the latest Jobs of a client. It would also be possible to
+ verify a certain volume. All of this would naturally apply only to
+ Jobs whose file information is still in the catalog.
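+
+ A possible console interaction (hypothetical syntax; a jobid argument
+ for Verify Jobs does not exist yet) could look like:
+
+ *run job=VerifyVolume level=VolumeToCatalog jobid=1234 yes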
+
+Item X: Add EFS support on Windows
+ Origin: Alex Ehrlich (Alex.Ehrlich-at-mail.ee)
+ Date: 05 August 2008
+ Status:
+
+ What: For each file backed up or restored by FD on Windows, check if
+ the file is encrypted; if so then use OpenEncryptedFileRaw,
+ ReadEncryptedFileRaw, WriteEncryptedFileRaw,
+ CloseEncryptedFileRaw instead of BackupRead and BackupWrite
+ API calls.
+
+ Why: Many laptop users utilize the EFS functionality today, and so
+ do some non-laptop ones, too.
+ Currently, files encrypted by means of EFS cannot be backed up.
+ This means a Windows shop cannot rely on Bacula as its
+ backup solution, at least when using Windows 2K, XP Pro,
+ the "better" Vista editions, etc. on workstations, unless EFS is
+ forbidden by policies.
+ The current situation might result in a "false sense of
+ security" among the end-users.
+
+ Notes: Using the xxxEncryptedFileRaw API would allow backing up and
+ restoring EFS-encrypted files without decrypting their data.
+ Note that such files cannot be restored "portably" (at least,
+ not easily), but they would be restorable to a different (or
+ reinstalled) Win32 machine; the restore would require setting up
+ an EFS recovery agent in advance, of course, and this should
+ be clearly reflected in the documentation, but that is the
+ normal Windows sysadmin's business.
+ When a "portable" backup is requested, EFS-encrypted files
+ should be clearly reported as errors.
+ See MSDN on the "Backup and Restore of Encrypted Files" topic:
+ http://msdn.microsoft.com/en-us/library/aa363783.aspx
+ Maybe the EFS support requires a new flag in the database for
+ each file, too?
+ Unfortunately, the implementation is not as straightforward as
+ 1-to-1 replacement of BackupRead with ReadEncryptedFileRaw,
+ requiring some FD code rewrite to work with
+ encrypted-file-related callback functions.
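+
+ As a rough sketch of the callback style the raw API imposes (the
+ FILE* destination and function names are examples, not FD code; link
+ with Advapi32.lib):
+
+   #include <windows.h>
+   #include <stdio.h>
+
+   /* called repeatedly by ReadEncryptedFileRaw with encrypted chunks */
+   static DWORD WINAPI export_cb(PBYTE data, PVOID ctx, ULONG len)
+   {
+       FILE *out = (FILE *)ctx;    /* backup stream destination */
+       return fwrite(data, 1, len, out) == len ? ERROR_SUCCESS
+                                               : ERROR_WRITE_FAULT;
+   }
+
+   static int backup_efs_file(const wchar_t *path, FILE *out)
+   {
+       PVOID raw = NULL;
+       if (OpenEncryptedFileRawW(path, 0, &raw) != ERROR_SUCCESS)
+           return -1;
+       DWORD rc = ReadEncryptedFileRaw(export_cb, out, raw);
+       CloseEncryptedFileRaw(raw);
+       return rc == ERROR_SUCCESS ? 0 : -1;
+   }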
+
+Item n: Data encryption on storage daemon
+ Origin: Tobias Barth <tobias.barth at web-arts.com>
+ Date: 04 February 2009
+ Status: new
+
+ What: The storage daemon should be able to do the data encryption that
+ can currently be done by the file daemon.
+
+ Why: This would have two advantages: 1) one could encrypt the data of
+ unencrypted tapes by running a migration job, and 2) the storage daemon
+ would be the only machine that has to keep the encryption keys.
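+
+ Hypothetically (no such SD directives exist today), this could mirror
+ the file daemon's PKI directives in bacula-sd.conf:
+
+ Storage {
+   Name = my-sd
+   ...
+   PKI Encryption = yes                  # proposed, not implemented
+   PKI Keypair = "/etc/bacula/sd.pem"    # proposed, not implemented
+ }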
+
+
+Item 1: "Maximum Concurrent Jobs" for drives when used with changer device
+ Origin: Ralf Gross ralf-lists <at> ralfgross.de
+ Date: 2008-12-12
+ Status: Initial Request
+
+ What: Respect the "Maximum Concurrent Jobs" directive in the _drives_
+ Storage section in addition to the changer section.
+
+ Why: I have a 3-drive changer where I want to be able to let 3
+ concurrent jobs run in parallel, but only one job per drive at a time.
+ Right now I don't see how I could limit the number of concurrent jobs
+ per drive in this situation.
+
+ Notes: Using different priorities for these jobs leads to other jobs
+ being blocked. On the users list I got the advice to use the "Prefer
+ Mounted Volumes" directive, but Kern advised against using "Prefer
+ Mounted Volumes" in another thread:
+ http://article.gmane.org/gmane.comp.sysutils.backup.bacula.devel/11876/
+
+ In addition I'm not sure if this would be the same as respecting the
+ drive's "Maximum Concurrent Jobs" setting.
+
+ Example:
+
+ Storage {
+ Name = Neo4100
+ Address = ....
+ SDPort = 9103
+ Password = "wiped"
+ Device = Neo4100
+ Media Type = LTO4
+ Autochanger = yes
+ Maximum Concurrent Jobs = 3
+ }
+
+ Storage {
+ Name = Neo4100-LTO4-D1
+ Address = ....
+ SDPort = 9103
+ Password = "wiped"
+ Device = ULTRIUM-TD4-D1
+ Media Type = LTO4
+ Maximum Concurrent Jobs = 1
+ }
+
+ [2 more drives]
+
+ The "Maximum Concurrent Jobs = 1" directive in the drive's section is ignored.
+
+ Item n: Add MaxVolumeSize/MaxVolumeBytes statement to Storage resource
+ Origin: Bastian Friedrich <bastian.friedrich@collax.com>
+ Date: 2008-07-09
+ Status: -
+
+ What: SD has a "Maximum Volume Size" statement, which is deprecated
+ and superseded by the Pool resource statement "Maximum Volume Bytes". It
+ would be good if either statement could be used in Storage resources.
+
+ Why: Pools do not have to be restricted to a single storage
+ type/device; thus, it may be impossible to define Maximum Volume Bytes in
+ the Pool resource. The old MaxVolSize statement is deprecated, as it is
+ SD side only.
+ I am using the same pool for different devices.
+
+ Notes: State of the idea is currently unknown. Storage resources in the
+ dir config currently translate to very slim catalog entries; these
+ entries would require extensions to implement what is described here.
+ Quite possibly, numerous other statements that are currently available
+ in Pool resources could usefully be allowed in Storage resources too.
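+
+ With the proposed change (hypothetical placement; the directive is not
+ accepted there today), a director Storage resource could carry the
+ limit directly:
+
+ Storage {
+   Name = FileStore
+   Address = ....
+   Password = "wiped"
+   Device = FileChgr
+   Media Type = File
+   Maximum Volume Bytes = 50G   # proposed for Storage resources
+ }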
+
+Item 1: Start spooling even when waiting on tape
+ Origin: Tobias Barth <tobias.barth@web-arts.com>
+ Date: 25 April 2008
+ Status:
+
+ What: If a job can be spooled to disk before writing it to tape, it
+ should be spooled immediately. Currently, Bacula waits until the
+ correct tape is inserted into the drive.
+
+ Why: It could save hours. While Bacula waits on the operator who
+ must insert the correct tape (e.g. a new tape or a tape from another
+ media pool), Bacula could already prepare the spooled data in the
+ spooling directory and immediately start despooling once the tape
+ has been inserted by the operator.
+
+ 2nd step: Use 2 or more spooling directories. When one directory is
+ currently despooling, the next (on a different disk drive) could
+ already be spooling the next data, as sketched below.
+
+ Notes: I am using Bacula 2.2.8, which has none of these features
+ implemented.
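+
+ For the 2nd step, hypothetical syntax (a device currently accepts only
+ one Spool Directory) might look like:
+
+ Device {
+   Name = LTO4-Drive
+   ...
+   Spool Directory = /spool1   # despooling from here ...
+   Spool Directory = /spool2   # ... while spooling continues here
+ }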
+
+Item 1: enable persistent naming/numbering of SQL queries
+
+ Date: 24 January 2007
+ Origin: Mark Bergman
+ Status:
+
+ What:
+ Change the parsing of the query.sql file and the query command so that
+ queries are named/numbered by a fixed value, not their order in the
+ file.
+
+
+ Why:
+ One of the real strengths of bacula is the ability to query the
+ database, and the fact that complex queries can be saved and
+ referenced from a file is very powerful. However, the choice
+ of query (both for interactive use, and by scripting input
+ to the bconsole command) is completely dependent on the order
+ within the query.sql file. The descriptive labels are helpful for
+ interactive use, but users become used to calling a particular
+ query "by number", or may use scripts to execute queries. This
+ presents a problem if the number or order of queries in the file
+ changes.
+
+ If the query.sql file used the numeric tags as real values (rather
+ than comments), then users could have higher confidence that they
+ are executing the intended query and that their local changes wouldn't
+ conflict with future bacula upgrades.
+
+ For scripting, it's very important that the intended query is
+ what's actually executed. The current method of parsing the
+ query.sql file discourages scripting because the addition or
+ deletion of queries within the file will require corresponding
+ changes to scripts. It may not be obvious to users that deleting
+ query "17" in the query.sql file will require changing all
+ references to higher numbered queries. Similarly, when new
+ bacula distributions change the number of "official" queries,
+ user-developed queries cannot simply be appended to the file
+ without also changing any references to those queries in scripts
+ or procedural documentation, etc.
+
+ In addition, using fixed numbers for queries would encourage more
+ user-initiated development of queries, by supporting conventions
+ such as:
+
+ queries numbered 1-50 are supported/developed/distributed
+ with official bacula releases
+
+ queries numbered 100-200 are community contributed, and are
+ related to media management
+
+ queries numbered 201-300 are community contributed, and are
+ related to checksums, finding duplicated files across
+ different backups, etc.
+
+ queries numbered 301-400 are community contributed, and are
+ related to backup statistics (average file size, size per
+ client per backup level, time for all clients by backup level,
+ storage capacity by media type, etc.)
+
+ queries numbered 500-999 are locally created
+
+ Notes:
+ Alternatively, queries could be called by keyword (tag), rather
+ than by number.
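+
+ A hypothetical query.sql fragment using fixed tags (the ":101:" prefix
+ is invented here to illustrate the proposal; today a leading ":" only
+ marks a query name, and position determines the number) might read:
+
+ :101: List Volumes Bacula thinks are in the changer
+ SELECT VolumeName, StorageId, MediaId
+   FROM Media
+  WHERE InChanger = 1;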
+
+Item 1: Implementation of a speed limit for running Jobs
+Origin: Alex F, alexxzell at yahoo dot com
+Date: 29 January 2009
+
+What: I noticed the need for an integrated bandwidth limiter for
+ running jobs. It would be very useful to be able to specify another
+ field in bacula-dir.conf, like "speed = <the rate at which that
+ specific job should run>".
+
+Why: For a couple of reasons. First, it is very hard to implement a
+ traffic-shaping utility and also make it reliable. Second, it is very
+ inconvenient to have to deploy such tools to, let's say, 50 clients
+ (including desktops and servers). This would also be unreliable, because
+ you have to make sure that the tools work properly when needed; users
+ could also disable them (accidentally or not). It would be very useful
+ for Bacula to provide this ability. All information would be
+ centralized: you would not have to go to 50 different clients in 10
+ different locations for configuration, and eliminating 3rd-party
+ additions helps in establishing efficiency. It would also avoid
+ bandwidth congestion, especially where little bandwidth is available.
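+
+As a sketch of the proposed field (hypothetical directive name and
+units; nothing like it exists yet), a Job resource might read:
+
+Job {
+  Name = "BackupDesktop42"
+  Type = Backup
+  Client = desktop42-fd
+  FileSet = "Full Set"
+  Speed = 512kb/s   # proposed: throttle this job's transfer rate
+}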