+
+Item 31: Enable persistent naming/numbering of SQL queries
+ Date: 24 Jan, 2007
+ Origin: Mark Bergman
+ Status:
+
+ What:
+ Change the parsing of the query.sql file and the query command so that
+ queries are named/numbered by a fixed value, not their order in the
+ file.
+
+
+ Why:
+ One of the real strengths of bacula is the ability to query the
+ database, and the fact that complex queries can be saved and
+ referenced from a file is very powerful. However, the choice
+ of query (both for interactive use, and by scripting input
+ to the bconsole command) is completely dependent on the order
+ within the query.sql file. The descriptive labels are helpful for
+ interactive use, but users become used to calling a particular
+ query "by number", or may use scripts to execute queries. This
+ presents a problem if the number or order of queries in the file
+ changes.
+
+ If the query.sql file used the numeric tags as a real value (rather
+ than a comment), then users could have a higher confidence that they
+ are executing the intended query, and that their local changes
+ wouldn't conflict with future bacula upgrades.
+
+ For scripting, it's very important that the intended query is
+ what's actually executed. The current method of parsing the
+ query.sql file discourages scripting because the addition or
+ deletion of queries within the file will require corresponding
+ changes to scripts. It may not be obvious to users that deleting
+ query "17" in the query.sql file will require changing all
+ references to higher numbered queries. Similarly, when new
+ bacula distributions change the number of "official" queries,
+ user-developed queries cannot simply be appended to the file
+ without also changing any references to those queries in scripts
+ or procedural documentation, etc.
+
+ In addition, using fixed numbers for queries would encourage more
+ user-initiated development of queries, by supporting conventions
+ such as:
+
+ queries numbered 1-50 are supported/developed/distributed
+ with official bacula releases
+
+ queries numbered 100-200 are community contributed, and are
+ related to media management
+
+ queries numbered 201-300 are community contributed, and are
+ related to checksums, finding duplicated files across
+ different backups, etc.
+
+ queries numbered 301-400 are community contributed, and are
+ related to backup statistics (average file size, size per
+ client per backup level, time for all clients by backup level,
+ storage capacity by media type, etc.)
+
+ queries numbered 500-999 are locally created
+
+ Notes:
+ Alternatively, queries could be called by keyword (tag), rather
+ than by number.
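+ A minimal sketch of the tag-based parsing this item proposes is
+ below. The ':<number>:' header format and the parser are
+ hypothetical, not Bacula's current query.sql syntax; the point is
+ that lookups key on the fixed tag rather than file position.

```python
import re

def parse_tagged_queries(text):
    # Parse a query.sql-style file where each query begins with a
    # ":<number>: <description>" header line (hypothetical format).
    # Returns {number: (description, sql)} keyed by the fixed tag,
    # so adding or deleting queries never renumbers the others.
    queries = {}
    current = None
    for line in text.splitlines():
        m = re.match(r"^:(\d+):\s*(.*)$", line)
        if m:
            current = int(m.group(1))
            queries[current] = (m.group(2), [])
        elif current is not None:
            queries[current][1].append(line)
    return {n: (desc, "\n".join(sql).strip())
            for n, (desc, sql) in queries.items()}

sample = """\
:17: List all media
SELECT * FROM Media;
:501: Locally created report
SELECT Name FROM Client;
"""
parsed = parse_tagged_queries(sample)
```

+ With this scheme, deleting query 17 leaves query 501 callable by
+ the same number, which is the property scripts need.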
+
+
+Item 32: Bacula Dir, FD and SD to support proxies
+Origin: Karl Grindley @ MIT Lincoln Laboratory <kgrindley at ll dot mit dot edu>
+Date: 25 March 2009
+Status: proposed
+
+What: Support alternate methods for nailing up a TCP session such
+ as SOCKS5, SOCKS4 and HTTP (CONNECT) proxies. Such a feature
+ would allow tunneling of bacula traffic in and out of proxied
+ networks.
+
+Why: Currently, bacula is architected to only function on a flat network, with
+ no barriers or limitations. Due to the wide variety of network
+ configurations, and the many ways file daemons and storage
+ daemons may sit in relation to one another, bacula is often not
+ usable on networks where filtering or air gaps exist. While
+ solutions such as firewall ACL modifications or port redirection
+ via SNAT or DNAT will often solve the issue, these solutions are
+ frequently not adequate or not allowed by hard policy.
+
+ In an air-gapped network where only highly locked-down proxy
+ services are provided (SOCKS4/5 and/or HTTP and/or SSH outbound),
+ ACLs or iptables rules will not work.
+
+Notes: Director resource tunneling: this configuration option to utilize a
+       proxy to connect to a client should be specified in the client
+       resource. Client resource tunneling: should this be configured in
+       the client resource in the director config file, or in the
+       bacula-fd configuration file on the fd host itself? If the latter,
+       this would allow only certain clients to use a proxy, while others
+       do not, when establishing the TCP connection to the storage server.
+
+       Also worth noting, there are other 3rd-party, lightweight apps that
+       could be utilized to bootstrap this. Instead of socksifying bacula
+       itself, use an external program to broker proxy authentication and
+       the connection to the remote host. OpenSSH does this with the
+       "ProxyCommand" directive in the client configuration, using stdin
+       and stdout to the command. connect.c is a very popular one.
+ (http://bent.latency.net/bent/darcs/goto-san-connect-1.85/src/connect.html).
+ One could also possibly use stunnel, netcat, etc.
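+ As a rough illustration of the HTTP (CONNECT) variant, the sketch
+ below builds the handshake a daemon-side tunnel would need before
+ the normal bacula protocol can flow. The helper names and host
+ names are illustrative, not part of bacula.

```python
import base64

def build_connect_request(target_host, target_port, proxy_auth=None):
    # Build the HTTP CONNECT request a daemon would send to an HTTP
    # proxy to nail up a raw TCP tunnel to its peer (e.g. FD -> SD).
    lines = [
        f"CONNECT {target_host}:{target_port} HTTP/1.1",
        f"Host: {target_host}:{target_port}",
    ]
    if proxy_auth is not None:
        # proxy_auth is "user:password"; HTTP Basic auth to the proxy.
        token = base64.b64encode(proxy_auth.encode("ascii")).decode("ascii")
        lines.append(f"Proxy-Authorization: Basic {token}")
    return ("\r\n".join(lines) + "\r\n\r\n").encode("ascii")

def tunnel_established(status_line):
    # A 2xx status in the proxy's reply means the tunnel is up and
    # the socket can then carry bacula traffic end to end.
    parts = status_line.split()
    return len(parts) >= 2 and parts[1].startswith("2")
```

+ The SOCKS4/5 cases would follow the same shape with a binary
+ handshake instead of the text request shown here.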
+
+
+Item 33: Add Minimum Spool Size directive
+Date: 20 March 2008
+Origin: Frank Sweetser <fs@wpi.edu>
+
+ What: Add a new SD directive, "minimum spool size" (or similar). This
+ directive would specify a minimum level of free space available for
+ spooling. If the unused spool space is less than this level, any
+           new spooling requests would be blocked as if the "maximum spool
+           size" threshold had been reached. Jobs that are already spooling
+           would be unaffected by this directive.
+
+ Why: I've been bitten by this scenario a couple of times:
+
+ Assume a maximum spool size of 100M. Two concurrent jobs, A and B,
+ are both running. Due to timing quirks and previously running jobs,
+ job A has used 99.9M of space in the spool directory. While A is
+ busy despooling to disk, B is happily using the remaining 0.1M of
+ spool space. This ends up in a spool/despool sequence every 0.1M of
+ data. In addition to fragmenting the data on the volume far more
+           than was necessary, in larger data sets (i.e., tens or hundreds of
+ gigabytes) it can easily produce multi-megabyte report emails!
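+ A minimal sketch of the proposed admission check, assuming the
+ directive simply compares free spool space against a configured
+ floor (names and units are illustrative):

```python
def may_start_spooling(spool_used, max_spool, min_free_spool):
    # Admission check for a *new* spooling job under the proposed
    # "Minimum Spool Size" directive: refuse to start spooling when
    # the unused spool space has fallen below the configured floor,
    # exactly as if "maximum spool size" had been reached.
    # Jobs that are already spooling are not affected by this check.
    free = max_spool - spool_used
    return free >= min_free_spool
```

+ In the 100M example above, a 10M floor would make job B wait for
+ job A to despool instead of thrashing in 0.1M increments.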
+
+
+Item 34: Command that releases all drives in an autochanger
+ Origin: Blake Dunlap (blake@nxs.net)
+ Date: 10/07/2009
+ Status: Request
+
+ What: It would be nice if there was a release command that
+ would release all drives in an autochanger instead of having to
+ do each one in turn.
+
+  Why: It can take some time for a release to occur, and the
+       commands must be given for each drive in turn, which can quickly
+       add up if there are several drives in the library. Having to
+       watch the console and give each command in turn can waste a good
+       bit of time once you get into the 16-drive range, where the
+       tapes can take up to 3 minutes each to eject.
+
+  Notes: Due to the way some autochangers/libraries work, you
+       cannot assume that newly inserted tapes will go into slots that
+       bacula does not currently believe to be in use (because the tape
+       from that slot is in a drive). Since all drives need to be
+       released before any modifications to slots, a single release-all
+       command would make such configuration changes quicker and easier.
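+ Until such a command exists, the per-drive release commands can be
+ generated mechanically; the sketch below (with a hypothetical
+ storage resource name) shows the loop the proposed single command
+ would replace:

```python
def release_all_commands(storage, num_drives):
    # Emit the per-drive bconsole "release" commands that a single
    # release-all command would replace; drives are numbered from 0,
    # matching bacula's Drive Index.
    return [f"release storage={storage} drive={d}"
            for d in range(num_drives)]
```

+ The resulting lines can be piped into bconsole non-interactively
+ rather than typed one at a time at the console.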
+
+Item 35: Run bscan on a remote storage daemon from within bconsole.
+ Date: 07 October 2009
+ Origin: Graham Keeling <graham@equiinet.com>
+ Status: Proposing
+
+  What: The ability to run bscan on a remote storage daemon from
+        within bconsole in order to populate your catalog.
+
+ Why: Currently, it seems you have to:
+ a) log in to a console on the remote machine
+ b) figure out where the storage daemon config file is
+ c) figure out the storage device from the config file
+ d) figure out the catalog IP address
+ e) figure out the catalog port
+ f) open the port on the catalog firewall
+ g) configure the catalog database to accept connections from the
+ remote host
+ h) build a 'bscan' command from (b)-(e) above and run it
+ It would be much nicer to be able to type something like this into
+ bconsole:
+ *bscan storage=<storage> device=<device> volume=<volume>
+ or something like:
+ *bscan storage=<storage> all
+ It seems to me that the scan could also do a better job than the
+ external bscan program currently does. It would possibly be able to
+ deduce some extra details, such as the catalog StorageId for the
+ volumes.
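+ The manual procedure in steps (b)-(e) amounts to assembling a
+ bscan command line by hand; a sketch is below. The paths and host
+ names are placeholders, and the flag names should be checked
+ against 'bscan -?' on your version before scripting this.

```python
def build_bscan_command(sd_conf_path, device_name, volume,
                        db_host, db_name="bacula", db_user="bacula"):
    # Assemble the bscan invocation that steps (b)-(h) above build
    # by hand: the SD config, the storage device, the volume to scan,
    # and the catalog database connection details.
    return (f"bscan -c {sd_conf_path} -n {db_name} -u {db_user} "
            f"-h {db_host} -V {volume} {device_name}")

cmd = build_bscan_command("/etc/bacula/bacula-sd.conf", "FileStorage",
                          "Vol-0001", "catalog.example.com")
```

+ An integrated '*bscan storage=... device=... volume=...' command
+ would gather all of these values itself from the existing
+ director and catalog configuration.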
+
+ Notes: (Kern). If you need to do a bscan, you have done something wrong,
+          so this functionality should not need to be integrated into
+          the Storage daemon. However, I am not opposed to someone implementing
+ this feature providing that all the code is in a shared object (or dll)
+ and does not add significantly to the size of the Storage daemon. In
+ addition, the code should be written in a way such that the same source
+ code is used in both the bscan program and the Storage daemon to avoid
+ adding a lot of new code that must be maintained by the project.
+
+Item 36: Implement a Migration job type that will create a reverse
+ incremental (or decremental) backup from two existing full backups.
+ Date: 05 October 2009
+ Origin: Griffith College Dublin. Some sponsorship available.
+ Contact: Gavin McCullagh <gavin.mccullagh@gcd.ie>