+ What:
+ Change the parsing of the query.sql file and the query command so that
+ queries are named/numbered by a fixed value, not their order in the
+ file.
+
+
+ Why:
+ One of the real strengths of bacula is the ability to query the
+ database, and the fact that complex queries can be saved and
+ referenced from a file is very powerful. However, the choice
+ of query (both for interactive use, and by scripting input
+ to the bconsole command) is completely dependent on the order
+ within the query.sql file. The descriptive labels are helpful for
+ interactive use, but users become used to calling a particular
+ query "by number", or may use scripts to execute queries. This
+ presents a problem if the number or order of queries in the file
+ changes.
+
+ If the query.sql file used the numeric tags as a real value (rather
+ than a comment), then users could have higher confidence that they
+ are executing the intended query, and that their local changes wouldn't
+ conflict with future bacula upgrades.
+
+ For scripting, it's very important that the intended query is
+ what's actually executed. The current method of parsing the
+ query.sql file discourages scripting because the addition or
+ deletion of queries within the file will require corresponding
+ changes to scripts. It may not be obvious to users that deleting
+ query "17" in the query.sql file will require changing all
+ references to higher numbered queries. Similarly, when new
+ bacula distributions change the number of "official" queries,
+ user-developed queries cannot simply be appended to the file
+ without also changing any references to those queries in scripts
+ or procedural documentation, etc.
+
+ In addition, using fixed numbers for queries would encourage more
+ user-initiated development of queries, by supporting conventions
+ such as:
+
+ queries numbered 1-50 are supported/developed/distributed
+ with official bacula releases
+
+ queries numbered 100-200 are community contributed, and are
+ related to media management
+
+ queries numbered 201-300 are community contributed, and are
+ related to checksums, finding duplicated files across
+ different backups, etc.
+
+ queries numbered 301-400 are community contributed, and are
+ related to backup statistics (average file size, size per
+ client per backup level, time for all clients by backup level,
+ storage capacity by media type, etc.)
+
+ queries numbered 500-999 are locally created
+
+ Notes:
+ Alternatively, queries could be called by keyword (tag), rather
+ than by number.
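+
+ As a rough illustration of the proposed parsing, the sketch below uses a
+ hypothetical per-entry header format ":<number>: <description>" for
+ query.sql; the tag format and sample queries are assumptions, not the
+ current file format.

```python
# Sketch of order-independent query lookup, assuming a hypothetical
# per-entry header format ":<number>: <description>" in query.sql.
# The fixed tag number, not file position, selects the query.

def parse_queries(text):
    """Map fixed numeric tags to (description, sql) pairs."""
    queries = {}
    tag, desc, sql = None, None, []
    for line in text.splitlines():
        if line.startswith(":"):
            if tag is not None:                  # flush previous entry
                queries[tag] = (desc, "\n".join(sql))
            num, _, rest = line[1:].partition(":")
            tag, desc, sql = int(num), rest.strip(), []
        elif tag is not None and line.strip():
            sql.append(line)
    if tag is not None:
        queries[tag] = (desc, "\n".join(sql))
    return queries

sample = """\
:12: List all media
SELECT * FROM Media;
:501: Locally developed report
SELECT Name FROM Client;
"""
print(parse_queries(sample)[501][0])   # Locally developed report
```

+ With such a parser, deleting or reordering entries in the file would no
+ longer renumber the remaining queries.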
+
+
+Item 35: Port bat to Win32
+ Date: 26 April 2009
+ Origin: Kern/Eric
+ Status:
+
+ What: Make bat run on Win32/64.
+
+ Why: To have GUI on Windows
+
+ Notes:
+
+
+Item 36: Bacula Dir, FD and SD to support proxies
+Origin: Karl Grindley @ MIT Lincoln Laboratory <kgrindley at ll dot mit dot edu>
+Date: 25 March 2009
+Status: proposed
+
+What: Support alternate methods for nailing up a TCP session such
+ as SOCKS5, SOCKS4 and HTTP (CONNECT) proxies. Such a feature
+ would allow tunneling of bacula traffic in and out of proxied
+ networks.
+
+Why: Currently, bacula is architected to function only on a flat network, with
+ no barriers or limitations. Because networks vary widely in topology, and
+ file daemons and storage daemons may sit anywhere in relation to one
+ another, bacula is often not usable on a network where filtered or
+ air-gapped segments exist. While solutions such as firewall ACL changes or
+ port redirection via SNAT or DNAT will often solve the issue, these
+ solutions are frequently inadequate or disallowed by hard policy.
+
+ In an air-gapped network where only highly locked-down proxy services
+ are provided (SOCKS4/5 and/or HTTP and/or outbound SSH), ACLs or
+ iptables rules will not work.
+
+Notes: Director resource tunneling: the configuration option to utilize a
+ proxy when connecting to a client should be specified in the Client
+ resource. Client resource tunneling: should this be configured in the
+ Client resource in the director config file, or in the bacula-fd
+ configuration file on the fd host itself? If the latter, this would
+ allow only certain clients to use a proxy, where others do not, when
+ establishing the TCP connection to the storage server.
+
+ Also worth noting, there are other 3rd party, lightweight apps that
+ could be utilized to bootstrap this. Instead of socksifying bacula
+ itself, use an external program to broker proxy authentication and
+ the connection to the remote host. OpenSSH does this by using the
+ "ProxyCommand" option in the client configuration, talking to the
+ command over stdin and stdout. Connect.c is a very popular one.
+ (http://bent.latency.net/bent/darcs/goto-san-connect-1.85/src/connect.html).
+ One could also possibly use stunnel, netcat, etc.
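+
+ As a sketch of what such an external broker does, the code below performs
+ the HTTP CONNECT handshake that opens a tunnel through a proxy; the host
+ and port are illustrative, and proxy authentication is omitted.

```python
# Sketch of the HTTP CONNECT handshake an external broker (in the style
# of OpenSSH's "ProxyCommand") would perform before handing the raw TCP
# stream to bacula. Host and port are illustrative; authentication and
# the actual byte-shuttling loop are omitted.

def build_connect_request(host, port):
    """Form the CONNECT request asking the proxy to open a tunnel."""
    return (f"CONNECT {host}:{port} HTTP/1.1\r\n"
            f"Host: {host}:{port}\r\n\r\n").encode("ascii")

def tunnel_established(response):
    """A 2xx status line means the proxy opened the tunnel."""
    status_line = response.split(b"\r\n", 1)[0].decode("ascii", "replace")
    parts = status_line.split()
    return len(parts) >= 2 and parts[1].startswith("2")

print(tunnel_established(b"HTTP/1.1 200 Connection established\r\n\r\n"))  # True
```

+ After a 2xx reply, the broker simply shuttles bytes between bacula and
+ the proxy socket, much as OpenSSH's ProxyCommand helpers do.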
+
+
+Item 37: Add Minimum Spool Size directive
+Date: 20 March 2008
+Origin: Frank Sweetser <fs@wpi.edu>
+
+ What: Add a new SD directive, "minimum spool size" (or similar). This
+ directive would specify a minimum level of free space available for
+ spooling. If the unused spool space is less than this level, any
+ new spooling requests would be blocked as if the "maximum spool
+ size" threshold had been reached. Jobs already spooling would be
+ unaffected by this directive.
+
+ Why: I've been bitten by this scenario a couple of times:
+
+ Assume a maximum spool size of 100M. Two concurrent jobs, A and B,
+ are both running. Due to timing quirks and previously running jobs,
+ job A has used 99.9M of space in the spool directory. While A is
+ busy despooling to disk, B is happily using the remaining 0.1M of
+ spool space. This ends up in a spool/despool sequence every 0.1M of
+ data. In addition to fragmenting the data on the volume far more
+ than was necessary, in larger data sets (i.e., tens or hundreds of
+ gigabytes) it can easily produce multi-megabyte report emails!
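+
+ The proposed admission rule can be sketched as follows; the directive
+ value and function names here are hypothetical.

```python
# Sketch of the proposed admission rule; the directive value and function
# names are hypothetical. New spooling requests are refused while free
# spool space is below "minimum spool size"; jobs already spooling
# continue untouched.

def may_start_spooling(free_bytes, min_spool_bytes, already_spooling):
    """Decide whether a job may begin spooling right now."""
    if already_spooling:
        return True                     # already-spooling jobs unaffected
    return free_bytes >= min_spool_bytes

MIN_SPOOL = 10 * 1024**3                # e.g. require 10 GB free to admit

print(may_start_spooling(2 * 1024**3, MIN_SPOOL, already_spooling=False))  # False
print(may_start_spooling(2 * 1024**3, MIN_SPOOL, already_spooling=True))   # True
```

+ In the scenario above, job B would be held until A finished despooling
+ and freed spool space, instead of cycling every 0.1M.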
+
+
+Item 38: Backup and Restore of Windows Encrypted Files using Win raw encryption
+ Origin: Michael Mohr, SAG Mohr.External@infineon.com
+ Date: 22 February 2008
+ Origin: Alex Ehrlich (Alex.Ehrlich-at-mail.ee)
+ Date: 05 August 2008
+ Status:
+
+ What: Make it possible to back up and restore encrypted files from and to
+ Windows systems without the need to decrypt them, by using the raw
+ encryption functions API (see:
+ http://msdn2.microsoft.com/en-us/library/aa363783.aspx)
+ that Microsoft provides for this purpose.
+ Whether a file is encrypted can be determined by evaluating the
+ FILE_ATTRIBUTE_ENCRYPTED flag returned by the GetFileAttributes
+ function.
+ For each file backed up or restored by FD on Windows, check if
+ the file is encrypted; if so then use OpenEncryptedFileRaw,
+ ReadEncryptedFileRaw, WriteEncryptedFileRaw,
+ CloseEncryptedFileRaw instead of BackupRead and BackupWrite
+ API calls.
+
+ Why: Without using this interface, the fd-daemon running
+ under the system account can't read encrypted files, because
+ it does not have the key needed for decryption. As a result,
+ encrypted files are currently not backed up
+ by bacula, and no error is reported for the skipped files.
+
+ Notes: Using the xxxEncryptedFileRaw API would make it possible to back
+ up and restore EFS-encrypted files without decrypting their data.
+ Note that such files cannot be restored "portably" (at least,
+ not easily), but they would be restorable to a different (or
+ reinstalled) Win32 machine; the restore would require setup
+ of an EFS recovery agent in advance, of course, and this shall
+ be clearly reflected in the documentation, but this is the
+ normal Windows SysAdmin's business.
+ When "portable" backup is requested the EFS-encrypted files
+ shall be clearly reported as errors.
+ See MSDN on the "Backup and Restore of Encrypted Files" topic:
+ http://msdn.microsoft.com/en-us/library/aa363783.aspx
+ Maybe the EFS support requires a new flag in the database for
+ each file, too?
+ Unfortunately, the implementation is not as straightforward as a
+ 1-to-1 replacement of BackupRead with ReadEncryptedFileRaw;
+ it requires some rewriting of FD code to work with the
+ encrypted-file-related callback functions.
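+
+ The attribute check itself is a simple bit test; the sketch below uses
+ the Win32 constant value and a hypothetical dispatch helper to show the
+ intended per-file decision.

```python
# Sketch of the per-file dispatch described above. On Windows the
# attribute mask would come from GetFileAttributes(); the constant value
# is from the Win32 headers, while choose_backup_api() is a hypothetical
# stand-in for the FD's backup path selection.

FILE_ATTRIBUTE_ENCRYPTED = 0x4000       # Win32 header value

def is_efs_encrypted(attrs):
    """True if the FILE_ATTRIBUTE_ENCRYPTED bit is set in the mask."""
    return bool(attrs & FILE_ATTRIBUTE_ENCRYPTED)

def choose_backup_api(attrs):
    """Pick the raw-EFS calls for encrypted files, BackupRead otherwise."""
    return "OpenEncryptedFileRaw" if is_efs_encrypted(attrs) else "BackupRead"

print(choose_backup_api(0x4020))  # OpenEncryptedFileRaw (encrypted + archive)
print(choose_backup_api(0x0020))  # BackupRead
```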
+
+
+Item 39: Implement an interface between Bacula and storage clouds like Amazon's S3.
+ Date: 25 August 2008
+ Origin: Soren Hansen <soren@ubuntu.com>
+ Status: Not started.
+ What: Enable the storage daemon to store backup data on Amazon's
+ S3 service.
+
+ Why: Amazon's S3 is a cheap way to store data off-site.
+
+ Notes: If we configure the Pool to put only one job per volume (S3 doesn't
+ support an append operation), and the volume size isn't too big (100MB?),
+ it should be easy to adapt the disk-changer script to add get/put
+ procedures with curl. That way, the data would be safely copied during the
+ Job.
+
+ Cloud storage should only be used with Copy jobs; users should always
+ have a copy of their data on their own site.
+
+ We should also think about having our own cache, trying always to keep
+ the cloud volume on the local disk. (I don't know whether users want to
+ store 100GB in the cloud, so disk size shouldn't be a problem.) For
+ example, if bacula wants to recycle a volume, it will start by downloading
+ the file only to truncate it a few seconds later; if we can avoid that...
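+
+ A sketch of the get/put adaptation mentioned above; the bucket URL and
+ helper names are illustrative only, and real S3 access would also need
+ request signing/authentication.

```python
# Sketch of the get/put commands a modified disk-changer script might
# issue, assuming one job per volume. The bucket URL and helper names are
# illustrative only; real S3 access would also need request signing.

BUCKET_URL = "https://s3.example.com/bacula-volumes"   # illustrative

def put_volume_cmd(volume_path, volume_name):
    """curl command to upload a finished volume file to the bucket."""
    return ["curl", "-T", volume_path, f"{BUCKET_URL}/{volume_name}"]

def get_volume_cmd(volume_name, dest_path):
    """curl command to fetch a volume back to the local cache."""
    return ["curl", "-o", dest_path, f"{BUCKET_URL}/{volume_name}"]

print(" ".join(put_volume_cmd("/var/bacula/Vol-0001", "Vol-0001")))
```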
+
+Item 40: Convert Bacula's existing tray monitor on Windows to a stand-alone program
+ Date: 26 April 2009
+ Origin: Kern/Eric
+ Status:
+
+ What: Make the Win32 tray monitor a separate, stand-alone program.
+
+ Why: Vista does not allow SYSTEM services to interact with the
+ desktop, so the current tray monitor does not work on Vista
+ machines.
+
+ Notes: Requires communicating with the FD via the network (simulate
+ a console connection).
+
+
+
+========= End items voted on May 2009 ==================
+
+========= New items after last vote ====================
+
+Item 1: Relabel disk volume after recycling
+ Origin: Pasi Kärkkäinen <pasik@iki.fi>
+ Date: 07 May 2009.
+ Status: Not implemented yet, no code written.
+
+ What: The ability to relabel the disk volume (and thus rename the file on the
+ disk) after it has been recycled. Useful when you have a single job
+ per disk volume, and you use a custom Label format, for example:
+ Label Format =
+ "${Client}-${Level}-${NumVols:p/4/0/r}-${Year}_${Month}_${Day}-${Hour}_${Minute}"
+
+ Why: Disk volumes in Bacula get their label/filename when they are used for
+ the first time. If you use recycling and a custom label format like the
+ one above, the disk volume name doesn't match the contents after it has been
+ recycled. This feature makes it possible to keep the label/filename
+ in sync with the content and thus makes it easy to check/monitor the
+ backups from the shell and/or normal file management tools, because
+ the filenames of the disk volumes match the content.
+
+ Notes: The configuration option could be "Relabel after Recycling = Yes".
+
+Item n: Command that releases all drives in an autochanger
+ Origin: Blake Dunlap (blake@nxs.net)
+ Date: 10/07/2009
+ Status: Request
+
+ What: It would be nice if there were a release command that
+ would release all drives in an autochanger instead of having to
+ do each one in turn.
+
+ Why: It can take some time for a release to occur, and the
+ command must be given for each drive in turn, which can quickly
+ add up if there are several drives in the library. (Having to
+ watch the console to give each command in turn can waste a good
+ bit of time once you get into the 16-drive range, where tapes
+ can take up to 3 minutes each to eject.)
+
+ Notes: Due to the way some autochangers/libraries work, you
+ cannot assume that newly inserted tapes will go into slots that
+ bacula does not currently believe to be in use (because the tape
+ from that slot is in a drive). This command would make
+ configuration changes quicker/easier, as all drives need to be
+ released before any modifications to slots.
+
+Item n: Run bscan on a remote storage daemon from within bconsole.
+ Date: 07 October 2009
+ Origin: Graham Keeling <graham@equiinet.com>
+ Status: Proposing
+
+ What: The ability to run bscan on a remote storage daemon from
+ within bconsole in order to populate your catalog.
+
+ Why: Currently, it seems you have to:
+ a) log in to a console on the remote machine
+ b) figure out where the storage daemon config file is
+ c) figure out the storage device from the config file
+ d) figure out the catalog IP address
+ e) figure out the catalog port
+ f) open the port on the catalog firewall
+ g) configure the catalog database to accept connections from the
+ remote host
+ h) build a 'bscan' command from (b)-(e) above and run it
+ It would be much nicer to be able to type something like this into
+ bconsole:
+ *bscan storage=<storage> device=<device> volume=<volume>
+ or something like:
+ *bscan storage=<storage> all
+ It seems to me that the scan could also do a better job than the
+ external bscan program currently does. It would possibly be able to
+ deduce some extra details, such as the catalog StorageId for the
+ volumes.
+
+ Notes: (Kern). If you need to do a bscan, you have done something wrong,
+ so this functionality should not need to be integrated into
+ the Storage daemon. However, I am not opposed to someone implementing
+ this feature providing that all the code is in a shared object (or dll)
+ and does not add significantly to the size of the Storage daemon. In
+ addition, the code should be written in a way such that the same source
+ code is used in both the bscan program and the Storage daemon to avoid
+ adding a lot of new code that must be maintained by the project.
+
+Item n: Implement a Migration job type that will create a reverse
+ incremental (or decremental) backup from two existing full backups.
+ Date: 05 October 2009
+ Origin: Griffith College Dublin. Some sponsorship available.
+ Contact: Gavin McCullagh <gavin.mccullagh@gcd.ie>
+ Status: