- What: Make it possible to back up and restore encrypted files from
- and to Windows systems without needing to decrypt them, by using
- the raw encryption functions API that Microsoft provides for
- exactly this purpose (see:
- http://msdn2.microsoft.com/en-us/library/aa363783.aspx).
- Whether a file is encrypted can be determined by checking the
- FILE_ATTRIBUTE_ENCRYPTED flag returned by the GetFileAttributes
- function.
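-
- A minimal sketch (untested, for illustration only) of such a check,
- assuming a Win32 build environment; the helper name is hypothetical:
-
-   #include <windows.h>
-
-   /* Return true if the file at 'path' is EFS-encrypted. */
-   static bool is_efs_encrypted(const char *path)
-   {
-      DWORD attrs = GetFileAttributesA(path);
-      if (attrs == INVALID_FILE_ATTRIBUTES) {
-         return false;            /* missing or inaccessible file */
-      }
-      return (attrs & FILE_ATTRIBUTE_ENCRYPTED) != 0;
-   }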
-
- Why: Without this interface, the fd-daemon running under the system
- account cannot read encrypted files, because it lacks the key
- needed for decryption. As a result, encrypted files are
- currently not backed up by Bacula, and no error is reported
- when these files are skipped.
-
- Notes: ./.
-
- Item 1: Possibility to schedule Jobs on last Friday of the month
- Origin: Carsten Menke <bootsy52 at gmx dot net>
- Date: 02 March 2008
- Status:
-
- What: Currently, if you want to run your monthly backups on the last
- Friday of each month, this is only possible with workarounds
- (e.g. scripting), as some months have four Fridays and some
- have five. The same is true if you plan to run your yearly
- backups on the last Friday of the year. It would be nice to
- have the ability to use the built-in scheduler for this.
-
- Why: In many companies the last working day of the week is Friday
- (or Saturday), so to get the most data of the month onto the
- monthly tape, the employees are advised to insert the tape for
- the monthly backups on the last Friday of the month.
-
- Notes: To make this feature complete, it would be nice if the "first"
- and "last" keywords could be implemented in the scheduler, so
- that it would also be possible to run monthly backups on the
- first Friday of the month, and much more. If the syntax were
- expanded to {first|last} {Month|Week|Day|Mo-Fri} of the
- {Year|Month|Week}, you would be able to run really flexible
- jobs.
-
- To have a certain Job run on the last Friday of the month, for
- example, one could then write
-
- Run = pool=Monthly last Fri of the Month at 23:50
-
- ## Yearly Backup
-
- Run = pool=Yearly last Fri of the Year at 23:50
-
- ## Certain Jobs the last Week of a Month
-
- Run = pool=LastWeek last Week of the Month at 23:50
-
- ## Monthly Backup on the last day of the month
-
- Run = pool=Monthly last Day of the Month at 23:50
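-
- As an illustration of the matching logic, here is a sketch (untested,
- not existing Bacula code) of how the scheduler could decide whether a
- given time falls on the last Friday of its month; a "last <weekday>"
- match holds exactly when the same weekday seven days later would land
- in the next month:
-
-   #include <time.h>
-
-   static int days_in_month(int year, int mon)   /* mon: 0..11 */
-   {
-      static const int days[12] =
-         {31,28,31,30,31,30,31,31,30,31,30,31};
-      int d = days[mon];
-      if (mon == 1 && ((year % 4 == 0 && year % 100 != 0) ||
-                       year % 400 == 0)) {
-         d = 29;                  /* February in a leap year */
-      }
-      return d;
-   }
-
-   static bool is_last_friday(time_t now)
-   {
-      struct tm tm;
-      localtime_r(&now, &tm);
-      if (tm.tm_wday != 5) {      /* 5 = Friday */
-         return false;
-      }
-      /* last Friday iff the next Friday is in the next month */
-      return tm.tm_mday + 7 > days_in_month(tm.tm_year + 1900,
-                                            tm.tm_mon);
-   }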
-
- Item n: Add a "minimum spool size" SD directive
- Date: 20 March 2008
- Origin: Frank Sweetser <fs@wpi.edu>
- Status:
- What: Add a new SD directive, "minimum spool size" (or similar). This
- directive would specify a minimum level of free space available
- for spooling. If the unused spool space is less than this
- level, any new spooling requests would be blocked as if the
- "maximum spool size" threshold had been reached. Jobs already
- spooling would be unaffected by this directive.
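-
- A sketch of the proposed admission check (untested; all names are
- hypothetical, not existing Bacula code):
-
-   #include <stdint.h>
-
-   struct SpoolState {
-      uint64_t max_spool_size;   /* existing "Maximum Spool Size" */
-      uint64_t min_free_spool;   /* proposed "Minimum Spool Size" */
-      uint64_t bytes_in_use;     /* space used by spooling jobs   */
-   };
-
-   /* A new job may begin spooling only if the free spool space is
-    * still above the configured minimum; jobs that are already
-    * spooling are never interrupted by this check. */
-   static bool may_start_spooling(const SpoolState &s)
-   {
-      uint64_t free_space = s.bytes_in_use >= s.max_spool_size
-                               ? 0
-                               : s.max_spool_size - s.bytes_in_use;
-      return free_space >= s.min_free_spool;
-   }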
-
- Why: I've been bitten by this scenario a couple of times:
-
- Assume a maximum spool size of 100M. Two concurrent jobs, A and
- B, are both running. Due to timing quirks and previously running
- jobs, job A has used 99.9M of space in the spool directory.
- While A is busy despooling, B is happily using the remaining
- 0.1M of spool space. This results in a spool/despool cycle for
- every 0.1M of data. In addition to fragmenting the data on the
- volume far more than necessary, with larger data sets (i.e.,
- tens or hundreds of gigabytes) it can easily produce
- multi-megabyte report emails!
-
- Item n: Expand the Verify Job capability to verify Jobs older than
- the last one, for VolumeToCatalog Jobs
- Date: 17 January 2008
- Origin: portrix.net Hamburg, Germany.
- Contact: Christian Sabelmann
- Status: 70% of the required code is part of the Verify function
- since v. 2.x
-
- What: The ability to tell Bacula which Job should be verified,
- instead of automatically verifying just the last one.
-
- Why: It is a pity that such a powerful feature as Verify Jobs
- (VolumeToCatalog) is restricted to the last backup Job of a
- client. Currently, users who do daily backups are forced to
- also do daily Verify Jobs in order to take advantage of this
- useful feature. This verify-after-every-backup routine is not
- always desired, and Verify Jobs sometimes have to be scheduled
- separately (not necessarily within Bacula). With this feature,
- admins could verify Jobs once a week or once a month, selecting
- the Jobs they want to verify. This feature should also not be
- too difficult to implement, taking into account older bug
- reports about this feature and the selection of the Job to be
- verified.
-
- Notes: For the Verify Job, the user could select the Job to be
- verified from a list of the latest Jobs of a client. It would
- also be possible to verify a certain volume. All of this would
- naturally apply only to Jobs whose file information is still in
- the catalog.
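-
- One possible console syntax for this (purely hypothetical, not
- current Bacula syntax) would let the run command accept an explicit
- JobId to verify against:
-
- run job=VerifyClientA level=VolumeToCatalog jobid=1234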
-
-Item X: Add EFS support on Windows
- Origin: Alex Ehrlich (Alex.Ehrlich-at-mail.ee)
- Date: 05 August 2008
- Status:
-
- What: For each file backed up or restored by FD on Windows, check if
- the file is encrypted; if so then use OpenEncryptedFileRaw,
- ReadEncryptedFileRaw, WriteEncryptedFileRaw,
- CloseEncryptedFileRaw instead of BackupRead and BackupWrite
- API calls.
-
- Why: Many laptop users utilize the EFS functionality today, and so
- do some non-laptop ones.
- Currently, files encrypted by means of EFS cannot be backed up.
- This means a Windows shop cannot rely on Bacula as its backup
- solution, at least when using Windows 2000, XP Pro, the "better"
- editions of Vista, etc. on workstations, unless EFS is
- forbidden by policy.
- The current situation may result in a false sense of security
- among end users.
-
- Notes: Using the xxxEncryptedFileRaw API would allow backing up and
- restoring EFS-encrypted files without decrypting their data.
- Note that such files cannot (easily) be restored "portably",
- but they would be restorable to a different (or reinstalled)
- Win32 machine; the restore would of course require an EFS
- recovery agent to be set up in advance, and this should be
- clearly reflected in the documentation, but that is the normal
- Windows sysadmin's business.
- When a "portable" backup is requested, EFS-encrypted files
- should be clearly reported as errors.
- See MSDN on the "Backup and Restore of Encrypted Files" topic:
- http://msdn.microsoft.com/en-us/library/aa363783.aspx
- Maybe EFS support also requires a new per-file flag in the
- database?
- Unfortunately, the implementation is not as straightforward as
- a 1-to-1 replacement of BackupRead with ReadEncryptedFileRaw:
- it requires some FD code rewriting to work with the
- encrypted-file-related callback functions.
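-
- For illustration, a minimal sketch (untested) of reading a file in
- its raw, still-encrypted form with this callback-based API; the names
- backup_efs_file_raw and export_cb are hypothetical:
-
-   #include <windows.h>
-   #include <stdio.h>
-
-   /* Called repeatedly by ReadEncryptedFileRaw with chunks of the
-    * raw (still encrypted) data; a real FD would hand pbData to
-    * its backup stream instead of a plain FILE*. */
-   static DWORD WINAPI export_cb(PBYTE pbData, PVOID pvCtx,
-                                 ULONG ulLen)
-   {
-      FILE *out = (FILE *)pvCtx;
-      return fwrite(pbData, 1, ulLen, out) == ulLen
-                ? ERROR_SUCCESS : ERROR_WRITE_FAULT;
-   }
-
-   static bool backup_efs_file_raw(const wchar_t *path, FILE *out)
-   {
-      PVOID ctx = NULL;
-      if (OpenEncryptedFileRawW(path, 0, &ctx) != ERROR_SUCCESS) {
-         return false;
-      }
-      DWORD rc = ReadEncryptedFileRaw(export_cb, out, ctx);
-      CloseEncryptedFileRaw(ctx);
-      return rc == ERROR_SUCCESS;
-   }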
-
-Item n: Data encryption on storage daemon
- Origin: Tobias Barth <tobias.barth at web-arts.com>
- Date: 04 February 2009
- Status: new
-
- What: The storage daemon should be able to do the data encryption
- that can currently be done by the file daemon.
-
- Why: This would have two advantages: 1) one could encrypt the data
- of unencrypted tapes by running a migration job, and 2) the
- storage daemon would be the only machine that would have to
- keep the encryption keys.
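-
- For illustration, the file daemon's existing PKI directives could
- conceivably be mirrored in the storage daemon's configuration. A
- purely hypothetical sketch of what such a resource might look like
- (not existing syntax; the key path is an example):
-
- Storage {
-   Name = ...
-   PKI Encryption = yes
-   PKI Keypair = "/etc/bacula/sd-example.pem"  # hypothetical
- }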
-
-
-Item 1: "Maximum Concurrent Jobs" for drives when used with changer device
- Origin: Ralf Gross ralf-lists <at> ralfgross.de
- Date: 2008-12-12
- Status: Initial Request
-
- What: Respect the "Maximum Concurrent Jobs" directive in the _drives_
- Storage section in addition to the changer section.
-
- Why: I have a 3-drive changer and want to be able to let 3 concurrent
- jobs run in parallel, but only one job per drive at a time.
- Right now I don't see how I could limit the number of
- concurrent jobs per drive in this situation.
-
- Notes: Using different priorities for these jobs leads to other jobs
- being blocked. On the users list I was advised to use the
- "Prefer Mounted Volumes" directive, but Kern advised against
- using "Prefer Mounted Volumes" in another thread:
- http://article.gmane.org/gmane.comp.sysutils.backup.bacula.devel/11876/
-
- In addition, I'm not sure this would be the same as respecting
- the drive's "Maximum Concurrent Jobs" setting.
-
- Example:
-
- Storage {
- Name = Neo4100
- Address = ....
- SDPort = 9103
- Password = "wiped"
- Device = Neo4100
- Media Type = LTO4
- Autochanger = yes
- Maximum Concurrent Jobs = 3
- }
-
- Storage {
- Name = Neo4100-LTO4-D1
- Address = ....
- SDPort = 9103
- Password = "wiped"
- Device = ULTRIUM-TD4-D1
- Media Type = LTO4
- Maximum Concurrent Jobs = 1
- }
-
- [2 more drives]
-
- The "Maximum Concurrent Jobs = 1" directive in the drive's section is ignored.
-
- Item n: Add MaxVolumeSize/MaxVolumeBytes statement to Storage resource
- Origin: Bastian Friedrich <bastian.friedrich@collax.com>
- Date: 2008-07-09
- Status: -
-
- What: SD has a "Maximum Volume Size" statement, which is deprecated
- and superseded by the Pool resource statement "Maximum Volume Bytes". It
- would be good if either statement could be used in Storage resources.
-
- Why: Pools do not have to be restricted to a single storage
- type/device; thus, it may be impossible to define Maximum Volume Bytes in
- the Pool resource. The old MaxVolSize statement is deprecated, as it is
- SD side only.
- I am using the same pool for different devices.
-
- Notes: State of the idea currently unknown. Storage resources in the
- Director config currently translate to very slim catalog
- entries; these entries would require extensions to implement
- what is described here. Quite possibly, numerous other
- statements that are currently available in Pool resources
- could usefully be applied to Storage resources as well.
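-
- A hypothetical example of the proposed syntax (not valid in current
- Bacula; the resource names and the 4G figure are arbitrary):
-
- Storage {
-   Name = FileStorage
-   Address = ....
-   Device = FileDev
-   Media Type = File
-   Maximum Volume Bytes = 4G   # proposed per-storage limit
- }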
-
-Item 1: Start spooling even when waiting on tape
- Origin: Tobias Barth <tobias.barth@web-arts.com>
- Date: 25 April 2008
- Status: