-
-Item 26: Store and restore extended attributes, especially SELinux file contexts
- Date: 28 December 2007
- Origin: Frank Sweetser <fs@wpi.edu>
- Status: Done
- What: The ability to store and restore extended attributes on
- filesystems that support them, such as ext3.
-
- Why: Security Enhanced Linux (SELinux) enabled systems make extensive
- use of extended attributes. In addition to the standard user,
- group, and permissions, each file has an associated SELinux context
- stored as an extended attribute. This context is used to define
- which operations a given program is permitted to perform on that
- file. Storing contexts on an SELinux system is as critical as
- storing ownership and permissions. In the case of a full system
- restore, the system will not even be able to boot until all
- critical system files have been properly relabeled.
-
- Notes: Fedora ships with a version of tar that has been patched to handle
- extended attributes. The patch has not been integrated upstream
- yet, so it could serve as a good starting point.
-
- http://linux.die.net/man/2/getxattr
- http://linux.die.net/man/2/setxattr
- http://linux.die.net/man/2/listxattr
- ===
- http://linux.die.net/man/3/getfilecon
- http://linux.die.net/man/3/setfilecon
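- As an illustration of the xattr calls referenced above, here is a minimal
- Python sketch using the os.listxattr/os.getxattr/os.setxattr wrappers
- (available on Linux only; function names save_xattrs/restore_xattrs are
- hypothetical, not part of Bacula). It is deliberately tolerant of
- filesystems without xattr support:

```python
import os

def save_xattrs(path):
    """Return a dict of all extended attributes on path.

    Returns an empty dict if the filesystem does not support
    extended attributes."""
    try:
        names = os.listxattr(path)
    except OSError:
        return {}
    return {name: os.getxattr(path, name) for name in names}

def restore_xattrs(path, attrs):
    """Re-apply previously saved extended attributes to path.

    Attributes that cannot be set (e.g. security.* without
    privilege) are skipped, mirroring a best-effort restore."""
    for name, value in attrs.items():
        try:
            os.setxattr(path, name, value)
        except OSError:
            pass
```

- A real implementation would store the attribute names and values in the
- backup stream; an SELinux context is simply the security.selinux attribute
- handled the same way.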
-
-Item 27: Make changing "spooldata=yes|no" possible for
- manual/interactive jobs
- Origin: Marc Schiffbauer <marc@schiffbauer.net>
- Date: 12 April 2007
- Status: Done
-
- What: Make it possible to modify the spooldata option
- for a job when being run from within the console.
- Currently it is possible to modify the backup level
- and the spooldata setting in a Schedule resource.
- It is also possible to modify the backup level when using
- the "run" command in the console.
- But it is currently not possible to do the same
- with "spooldata=yes|no" like:
-
- run job=MyJob level=incremental spooldata=yes
-
- Why: In some situations it would be handy to be able to switch
- spooldata on or off for interactive/manual jobs based on
- which data the admin expects or how fast the LAN/WAN
- connection currently is.
-
- Notes: ./.
-
-Item 28: Implement an option to modify the last written date for volumes
-Date: 16 September 2008
-Origin: Franck (xeoslaenor at gmail dot com)
-Status: Done
-What: The ability to modify the last written date for a volume
-Why: It is sometimes necessary to skip a volume when you have a pool
-     that recycles the oldest volume at each backup.
-     For example, when a whole day's backups have to be cancelled, we
-     want to prevent Bacula from choosing the volume from the
-     cancelled backups (which was not written to at all); it should
-     skip to the next volume instead. In this case, we just need to
-     update the last written date manually to avoid the "oldest
-     volume" recycling.
-Notes: An option could be added to the "update volume" command (for
-     example, a 'written date' choice)
-
-
-========= New Items since the last vote =================
-
-Item 26: Add a new directive to bacula-dir.conf which permits inclusion of all subconfiguration files in a given directory
-Date: 18 October 2008
-Origin: Database, Lda. Maputo, Mozambique
-Contact: Cameron Smith / cameron.ord@database.co.mz
-Status: New request
-
-What: A directive something like "IncludeConf = /etc/bacula/subconfs".
- Every time the Bacula Director restarts or reloads, it will walk the
- given directory (non-recursively) and include the contents of any
- files therein, as though they were appended to bacula-dir.conf.
-
-Why: Permits simplified and safer configuration for larger installations with
- many client PCs. Currently, through judicious use of JobDefs and
- similar directives, it is possible to reduce the client-specific part of
- a configuration to a minimum. The client-specific directives can be
- prepared according to a standard template and dropped into a known
- directory. However it is still necessary to add a line to the "master"
- (bacula-dir.conf) referencing each new file. This exposes the master to
- unnecessary risk of accidental mistakes and makes automation of adding
- new client-confs more difficult (it is easier to automate dropping a
- file into a dir, than rewriting an existing file). Ken has previously
- made a convincing argument for NOT including Bacula's core configuration
- in an RDBMS, but I believe that the present request is a reasonable
- extension to the current "flat-file-based" configuration philosophy.
-
-Notes: There is NO need for any special syntax in these files. They should
- contain standard directives which are simply "inlined" to the parent
- file as already happens when you explicitly reference an external file.
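- A minimal sketch of the intended semantics (the function name
- inline_subconfs is hypothetical, not an existing Bacula routine): a
- non-recursive walk that appends each file's contents as though it had
- been written at the end of bacula-dir.conf:

```python
import os

def inline_subconfs(master_text, subconf_dir):
    """Return the master configuration text with every regular file
    in subconf_dir appended (non-recursively, in sorted order), as
    though the files had been written at the end of bacula-dir.conf."""
    parts = [master_text]
    for name in sorted(os.listdir(subconf_dir)):
        path = os.path.join(subconf_dir, name)
        if os.path.isfile(path):  # skip subdirectories
            with open(path) as f:
                parts.append(f.read())
    return "\n".join(parts)
```

- Sorting the directory listing makes the inclusion order deterministic
- across restarts, which matters if subconf files reference each other.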
-
- Item n: List inChanger flag when doing restore.
- Origin: Jesper Krogh <jesper@krogh.cc>
- Date: 17 October 2008
- Status:
-
- What: When doing a restore the restore selection dialog ends by telling stuff
- like this:
- The job will require the following
- Volume(s) Storage(s) SD Device(s)
- ===========================================================================
- 000741L3 LTO-4 LTO3
- 000866L3 LTO-4 LTO3
- 000765L3 LTO-4 LTO3
- 000764L3 LTO-4 LTO3
- 000756L3 LTO-4 LTO3
- 001759L3 LTO-4 LTO3
- 001763L3 LTO-4 LTO3
- 001762L3 LTO-4 LTO3
- 001767L3 LTO-4 LTO3
-
- When using an autochanger, it would be really nice to have an
- inChanger column so the operator knows whether this restore job
- will stop and wait for operator intervention. This could be done
- just by selecting the inChanger flag from the catalog and
- printing it in a separate column.
-
-
- Why: This would help get large restores through by minimizing the
- time spent waiting for an operator to drop by and change tapes in the library.
-
- Notes: [Kern] I think it would also be good to have the Slot as well,
- or some indication that Bacula thinks the volume is in the autochanger
- because it depends on both the InChanger flag and the Slot being
- valid.
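- A sketch of the extended summary table (the InChanger and Slot column
- names follow the Bacula catalog's Media table; the formatting helper and
- row data here are illustrative, not actual Bacula code):

```python
def format_restore_volumes(rows):
    """Format the restore volume summary with InChanger and Slot
    columns, so the operator can see at a glance whether the job
    will wait for a tape to be loaded.

    rows: list of (volume, storage, device, in_changer, slot)
    tuples, e.g. from a catalog query joining JobMedia and Media."""
    header = (f"{'Volume(s)':<12} {'Storage(s)':<12} "
              f"{'SD Device(s)':<14} {'InChanger':<10} {'Slot':<5}")
    lines = [header, "=" * len(header)]
    for vol, stor, dev, in_changer, slot in rows:
        flag = "yes" if in_changer else "no"
        lines.append(f"{vol:<12} {stor:<12} {dev:<14} {flag:<10} {slot:<5}")
    return "\n".join(lines)
```

- As Kern notes, showing the Slot alongside the flag is what actually tells
- the operator whether Bacula believes the volume is loadable.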
-
-
-Item 1: Implement an interface between Bacula and Amazon's S3.
- Date: 25 August 2008
- Origin: Soren Hansen <soren@ubuntu.com>
- Status: Not started.
- What: Enable the storage daemon to store backup data on Amazon's
- S3 service.
-
- Why: Amazon's S3 is a cheap way to store data off-site. Current
- ways to integrate Bacula and S3 involve storing all the data
- locally and syncing them to S3, and manually fetching them
- again when they're needed. This is very cumbersome.
-
-
-Item 1: Enable/disable compression depending on storage device (disk/tape)
- Origin: Ralf Gross ralf-lists@ralfgross.de
- Date: 2008-01-11
- Status: Initial Request
-
- What: Add a new option to the storage resource of the director. Depending
- on this option, compression will be enabled/disabled for a device.
-
- Why: If different devices (disks/tapes) are used for full/diff/incr
- backups, software compression will be enabled for all backups
- because of the FileSet compression option. For backups to tapes
- which are able to do hardware compression this is not desired.
-
-
- Notes:
- http://news.gmane.org/gmane.comp.sysutils.backup.bacula.devel/cutoff=11124
- It must be clear to the user that the FileSet compression option
- must still be enabled to use compression for a backup job at all.
- Thus the name of the new option in the director must be
- well-defined.
-
- Notes: KES I think the Storage definition should probably override what
- is in the Job definition or vice-versa, but in any case, it must
- be well defined.
-
-
-Item 1: Backup and Restore of Windows Encrypted Files through raw encryption
- functions
-
- Origin: Michael Mohr, SAG Mohr.External@infineon.com
-
- Date: 22 February 2008
-
- Status:
-
- What: Make it possible to back up and restore encrypted files from
- and to Windows systems without the need to decrypt them, by
- using the raw encryption functions API (see:
- http://msdn2.microsoft.com/en-us/library/aa363783.aspx)
- that Microsoft provides for this purpose.
- Whether a file is encrypted can be determined by evaluating
- the FILE_ATTRIBUTE_ENCRYPTED flag returned by the
- GetFileAttributes function.
-
- Why: Without this interface, the File daemon running under the
- system account cannot read encrypted files, because it lacks
- the key needed for decryption. As a result, encrypted files
- are currently not backed up by Bacula, and no error is
- reported for the skipped files.
-
- Notes: ./.
-
- Item 1: Possibility to schedule Jobs on the last Friday of the month
- Origin: Carsten Menke <bootsy52 at gmx dot net>
- Date: 02 March 2008
- Status:
-
- What: Currently, if you want to run your monthly backups on the last
- Friday of each month, this is only possible with workarounds
- (e.g. scripting), as some months have four Fridays and some
- have five. The same is true if you plan to run your yearly
- backups on the last Friday of the year. It would be nice to
- have the ability to use the built-in scheduler for this.
-
- Why: In many companies the last working day of the week is Friday (or
- Saturday), so to get the most data of the month onto the monthly
- tape, the employees are advised to insert the tape for the
- monthly backups on the last Friday of the month.
-
- Notes: To make this fully functional, it would be nice if the
- "first" and "last" keywords could be implemented in the
- scheduler, so it is also possible to run monthly backups on
- the first Friday of the month, and much more. If the syntax
- were expanded to {first|last} {Month|Week|Day|Mo-Fri} of the
- {Year|Month|Week}, you would be able to run really flexible
- jobs.
-
- To have a certain Job run on the last Friday of the month, for
- example, one could then write
-
- Run = pool=Monthly last Fri of the Month at 23:50
-
- ## Yearly Backup
-
- Run = pool=Yearly last Fri of the Year at 23:50
-
- ## Certain Jobs the last Week of a Month
-
- Run = pool=LastWeek last Week of the Month at 23:50
-
- ## Monthly Backup on the last day of the month
-
- Run = pool=Monthly last Day of the Month at 23:50
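- For reference, computing the date such a "last Fri of the Month"
- schedule would fire on is straightforward with Python's standard
- calendar module (the helper name last_weekday_of_month is hypothetical,
- not part of Bacula):

```python
import calendar
import datetime

def last_weekday_of_month(year, month, weekday):
    """Return the date of the last given weekday in a month.

    weekday uses datetime's convention: Monday=0 ... Sunday=6,
    so Friday is 4."""
    last_day = calendar.monthrange(year, month)[1]  # 28..31
    d = datetime.date(year, month, last_day)
    # Walk back from the month's last day to the wanted weekday.
    offset = (d.weekday() - weekday) % 7
    return d - datetime.timedelta(days=offset)
```

- This handles four-Friday and five-Friday months uniformly, which is
- exactly what the proposed "last" keyword would have to do internally.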
-
- Item n: Add a "minimum spool size" SD directive
- Date: 20 March 2008
-
- Origin: Frank Sweetser <fs@wpi.edu>
-
- What: Add a new SD directive, "minimum spool size" (or similar). This
- directive would specify a minimum level of free space available for
- spooling. If the unused spool space is less than this level, any
- new spooling requests would be blocked as if the "maximum spool
- size" threshold had been reached. Already-spooling jobs would be
- unaffected by this directive.
-
- Why: I've been bitten by this scenario a couple of times:
-
- Assume a maximum spool size of 100M. Two concurrent jobs, A and B,
- are both running. Due to timing quirks and previously running jobs,
- job A has used 99.9M of space in the spool directory. While A is
- busy despooling to the volume, B is happily using the remaining
- 0.1M of spool space. This ends up in a spool/despool cycle every
- 0.1M of data. In addition to fragmenting the data on the volume
- far more than necessary, with larger data sets (i.e., tens or
- hundreds of gigabytes) it can easily produce multi-megabyte
- report emails!
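- The admission check being requested is simple; a sketch (function and
- parameter names are hypothetical, not actual Bacula directives):

```python
def may_start_spooling(max_spool_size, bytes_in_use, minimum_spool_free):
    """Return True if a job may begin spooling.

    Blocks new spooling requests when the unused spool space has
    fallen below the configured minimum, so a job never starts with
    only a sliver of spool space left (the 0.1M thrashing scenario).
    Jobs that are already spooling are not affected by this check."""
    free = max_spool_size - bytes_in_use
    return free >= minimum_spool_free
```

- With minimum_spool_free set to, say, 10M, job B in the scenario above
- would block until A finishes despooling, instead of thrashing.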
-
- Item n: Expand the Verify Job capability to verify Jobs older than the
- last one, for VolumeToCatalog Jobs
- Date: 17 January 2008
- Origin: portrix.net Hamburg, Germany.
- Contact: Christian Sabelmann
- Status: 70% of the required code is part of the Verify function since v. 2.x
-
- What:
- The ability to tell Bacula which Job it should verify, instead of
- it automatically verifying just the last one.
-
- Why:
- It is sad that such a powerful feature as Verify Jobs
- (VolumeToCatalog) is restricted to the last backup Job of a
- client. Currently, users who do daily backups are forced to also
- run daily Verify Jobs in order to take advantage of this useful
- feature. This daily verify-after-backup practice is not always
- desired, and Verify Jobs sometimes have to be scheduled
- separately (not necessarily in Bacula). With this feature, admins
- could verify Jobs once a week, or a few times per month, selecting
- the Jobs they want to verify. This feature should also not be too
- difficult to implement, taking into account older bug reports
- about this feature and the selection of the Job to be verified.
-
- Notes: For the verify Job, the user could select the Job to be verified
- from a list of the latest Jobs of a client. It would also be possible to
- verify a certain volume. All of this would naturally apply only to
- Jobs whose file information is still in the catalog.
-
-Item X: Add EFS support on Windows
- Origin: Alex Ehrlich (Alex.Ehrlich-at-mail.ee)
- Date: 05 August 2008
- Status:
-
- What: For each file backed up or restored by FD on Windows, check if
- the file is encrypted; if so then use OpenEncryptedFileRaw,
- ReadEncryptedFileRaw, WriteEncryptedFileRaw,
- CloseEncryptedFileRaw instead of BackupRead and BackupWrite
- API calls.
-
- Why: Many laptop users utilize the EFS functionality today, and so
- do some non-laptop users.
- Currently files encrypted by means of EFS cannot be backed up.
- It means a Windows shop cannot rely on Bacula as its
- backup solution, at least when using Windows 2K, XPP,
- "better" Vista etc. on workstations, unless EFS is
- forbidden by policies.
- The current situation might result in a "false sense of
- security" among the end-users.
-
- Notes: Using the xxxEncryptedFileRaw API would allow backing up and
- restoring EFS-encrypted files without decrypting their data.
- Note that such files cannot be restored "portably" (at least,
- not easily), but they would be restorable to a different (or
- reinstalled) Win32 machine; the restore would require setting
- up an EFS recovery agent in advance, of course, and this shall
- be clearly reflected in the documentation, but this is the
- normal Windows sysadmin's business.
- When a "portable" backup is requested, the EFS-encrypted files
- shall be clearly reported as errors.
- See MSDN on the "Backup and Restore of Encrypted Files" topic:
- http://msdn.microsoft.com/en-us/library/aa363783.aspx
- Maybe the EFS support requires a new flag in the database for
- each file, too?
- Unfortunately, the implementation is not as straightforward as
- a 1-to-1 replacement of BackupRead with ReadEncryptedFileRaw,
- requiring some FD code rewrite to work with
- encrypted-file-related callback functions.
-
-========== Already implemented ================================
-
-
-============= Empty Feature Request form ===========
-Item n: One line summary ...
- Date: Date submitted
- Origin: Name and email of originator.
- Status:
-
- What: More detailed explanation ...
-
- Why: Why it is important ...
-
- Notes: Additional notes or features (omit if not used)
-============== End Feature Request form ==============
-
-========== Items put on hold by Kern ============================
-
-Item h1: Split documentation
- Origin: Maxx <maxxatworkat gmail dot com>
- Date: 27th July 2006
- Status: Approved, awaiting implementation
-
- What: Split documentation in several books
-
- Why: The Bacula manual now has more than 600 pages, and looking up
- implementation details is getting complicated. I think
- it would be good to split the single volume into two or
- maybe three parts:
-
- 1) Introduction, requirements and tutorial, typically
- are useful only until first installation time
-
- 2) Basic installation and configuration, with all the
- gory details about the directives supported
-
- 3) Advanced Bacula: testing, troubleshooting, GUI and
- ancillary programs, security management, scripting,
- etc.
-
- Notes: This is a project that needs to be done, and will be implemented,
- but it is really a developer issue of timing, and does not
- need to be included in the voting.
-
-
-Item h2: Implement support for stacking arbitrary stream filters and sinks.
-Date: 23 November 2006
-Origin: Landon Fuller <landonf@threerings.net>
-Status: Planning. Assigned to landonf.
-
- What: Implement support for the following:
- - Stacking arbitrary stream filters (e.g., encryption, compression,
- sparse data handling)
- - Attaching file sinks to terminate stream filters (i.e., write out
- the resultant data to a file)
- - Refactor the restoration state machine accordingly
-
- Why: The existing stream implementation suffers from the following:
- - All state (compression, encryption, stream restoration) is
- global across the entire restore process, for all streams. There are
- multiple entry and exit points in the restoration state machine, and
- thus multiple places where state must be allocated, deallocated,
- initialized, or reinitialized. This results in exceptional complexity
- for the author of a stream filter.
- - The developer must enumerate all possible combinations of filters
- and stream types (i.e., win32 data with encryption, without encryption,
- with encryption AND compression, etc.).
-
- Notes: This feature request only covers implementing the stream filters/
- sinks, and refactoring the file daemon's restoration
- implementation accordingly. If I have extra time, I will also
- rewrite the backup implementation. My intent in implementing the
- restoration first is to solve pressing bugs in the restoration
- handling, and to ensure that the new restore implementation
- handles existing backups correctly.
-
- I do not plan on changing the network or tape data structures to
- support defining arbitrary stream filters, but supporting that
- functionality is the ultimate goal.
-
- Assistance with either code or testing would be fantastic.
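- The filter/sink stacking idea can be sketched as follows (class and
- method names are illustrative, not Landon's actual design): each filter
- transforms chunks and forwards them downstream, and a sink terminates
- the chain, so per-stream state lives in the filter objects rather than
- in globals:

```python
import zlib

class Sink:
    """Terminates a filter chain by collecting the resulting bytes."""
    def __init__(self):
        self.data = bytearray()
    def write(self, chunk):
        self.data.extend(chunk)
    def close(self):
        pass

class CompressFilter:
    """A stackable stream filter: compresses chunks, then forwards
    them to the next stage (another filter or a sink)."""
    def __init__(self, downstream):
        self.downstream = downstream
        self._z = zlib.compressobj()
    def write(self, chunk):
        self.downstream.write(self._z.compress(chunk))
    def close(self):
        # Flush this filter's own state, then close the rest of the chain.
        self.downstream.write(self._z.flush())
        self.downstream.close()

def build_chain(filter_classes, sink):
    """Stack filters right-to-left so data flows through them in order."""
    stage = sink
    for cls in reversed(filter_classes):
        stage = cls(stage)
    return stage
```

- Because each stage owns and flushes its own state in close(), an
- encryption or sparse-data filter could be inserted anywhere in the chain
- without touching global restore state, which is the complexity this item
- aims to remove.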
-
- Notes: Kern: this project has a lot of merit, and we need to do it, but
- it is really an issue for developers rather than a new feature
- for users, so I have removed it from the voting list but kept it
- here; at some point, it will be implemented.
-
-Item h3: Filesystem watch triggered backup.
- Date: 31 August 2006
- Origin: Jesper Krogh <jesper@krogh.cc>
- Status:
-
- What: With inotify and similar filesystem-triggered notification
- systems, it is possible to have the File daemon monitor
- filesystem changes and initiate a backup.
-
- Why: There are two situations where this is nice to have.
- 1) It is possible to get a much finer-grained backup than
- with the fixed schedules used now. A file created and deleted
- a few hours later can automatically be caught.
-
- 2) The load introduced on the system will probably be
- distributed more evenly across the system.
-
- Notes: This can be combined with configuration that specifies
- something like: "at most every 15 minutes or when changes
- have consumed XX MB".
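- That throttling policy is easy to sketch (a hypothetical helper, not
- part of Bacula): trigger a backup only when there are pending changes
- and either enough data has changed or enough time has passed:

```python
def should_trigger_backup(changed_bytes, seconds_since_last_backup,
                          min_interval_seconds, size_threshold_bytes):
    """Decide whether a watch-triggered backup should start now.

    Implements "at most every N minutes, or sooner when changes
    exceed a size threshold": nothing changed means no backup; a
    large burst of changes fires immediately; otherwise we wait
    out the minimum interval."""
    if changed_bytes == 0:
        return False
    if changed_bytes >= size_threshold_bytes:
        return True
    return seconds_since_last_backup >= min_interval_seconds
```

- An external watcher process, as Kern suggests, could apply this check
- and then start the job through the console.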
-
-Kern Notes: I would rather see this implemented by an external program
- that monitors the Filesystem changes, then uses the console
-
-
-Item h4: Directive/mode to backup only file changes, not entire file
- Date: 11 November 2005
- Origin: Joshua Kugler <joshua dot kugler at uaf dot edu>
- Marek Bajon <mbajon at bimsplus dot com dot pl>