+ What:
+ Change the parsing of the query.sql file and the query command so that
+ queries are named/numbered by a fixed value, not their order in the
+ file.
+
+
+ Why:
+ One of the real strengths of bacula is the ability to query the
+ database, and the fact that complex queries can be saved and
+ referenced from a file is very powerful. However, the choice
+ of query (both for interactive use, and by scripting input
+ to the bconsole command) is completely dependent on the order
+ within the query.sql file. The descriptive labels are helpful for
+ interactive use, but users become used to calling a particular
+ query "by number", or may use scripts to execute queries. This
+ presents a problem if the number or order of queries in the file
+ changes.
+
+ If the query.sql file used the numeric tags as real values (rather
+ than comments), then users could have higher confidence that they
+ are executing the intended query, and that their local changes wouldn't
+ conflict with future bacula upgrades.
+
+ For scripting, it's very important that the intended query is
+ what's actually executed. The current method of parsing the
+ query.sql file discourages scripting because the addition or
+ deletion of queries within the file will require corresponding
+ changes to scripts. It may not be obvious to users that deleting
+ query "17" in the query.sql file will require changing all
+ references to higher numbered queries. Similarly, when new
+ bacula distributions change the number of "official" queries,
+ user-developed queries cannot simply be appended to the file
+ without also changing any references to those queries in scripts
+ or procedural documentation, etc.
+
+ In addition, using fixed numbers for queries would encourage more
+ user-initiated development of queries, by supporting conventions
+ such as:
+
+ queries numbered 1-50 are supported/developed/distributed
+ with official bacula releases
+
+ queries numbered 100-200 are community contributed, and are
+ related to media management
+
+ queries numbered 201-300 are community contributed, and are
+ related to checksums, finding duplicated files across
+ different backups, etc.
+
+ queries numbered 301-400 are community contributed, and are
+ related to backup statistics (average file size, size per
+ client per backup level, time for all clients by backup level,
+ storage capacity by media type, etc.)
+
+ queries numbered 500-999 are locally created
+
+ Notes:
+ Alternatively, queries could be called by keyword (tag), rather
+ than by number.
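+
+ As a rough illustration, suppose each entry in query.sql carried an
+ explicit tag in its header line, e.g. ":17:List where a File is saved".
+ This ":NNN:" header and the parser below are hypothetical sketches,
+ not the current Bacula format or code; they only show how a fixed
+ tag (numeric or keyword) could decouple lookups from file order:
+
+     import re
+
+     def parse_queries(path):
+         queries = {}                  # tag -> (label, sql text)
+         tag, label, sql = None, None, []
+         with open(path) as f:
+             for line in f:
+                 m = re.match(r':(\w+):(.*)', line)
+                 if m:                 # header starts a new tagged query
+                     if tag is not None:
+                         queries[tag] = (label, ''.join(sql))
+                     tag, label, sql = m.group(1), m.group(2).strip(), []
+                 elif tag is not None: # SQL body line of the current query
+                     sql.append(line)
+         if tag is not None:
+             queries[tag] = (label, ''.join(sql))
+         return queries
+
+ Because \w+ matches both digits and words, the same scheme would also
+ cover the keyword (tag) alternative mentioned above.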
+
+Item 29: Implementation of a running Job speed limit.
+Origin: Alex F, alexxzell at yahoo dot com
+Date: 29 January 2009
+
+What: I noticed the need for an integrated bandwidth limiter for
+ running jobs. It would be very useful to be able to specify another
+ field in bacula-dir.conf, such as speed = <rate>, giving the maximum
+ rate at which that specific job should run.
+
+Why: For a couple of reasons. First, it's very hard to implement a
+ traffic shaping utility and also make it reliable. Second, it is very
+ awkward to have to deploy such tools to, say, 50 clients
+ (including desktops and servers). That would also be unreliable, because you
+ have to make sure that the tools are working properly when needed; users
+ could also disable them (accidentally or not). It would be very useful
+ to give Bacula this ability. All information would be centralized:
+ you would not have to go to 50 different clients in 10 different
+ locations for configuration, and eliminating 3rd-party additions helps in
+ establishing efficiency. It would also avoid bandwidth congestion,
+ especially where there is little bandwidth available.
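+
+ A per-job limit of this kind is commonly implemented as a token
+ bucket wrapped around the data stream. The sketch below is purely
+ illustrative (neither the Throttle class nor the speed directive
+ exists in Bacula today): the sender calls wait() before each write,
+ and the long-run rate then never exceeds the configured maximum.
+
+     import time
+
+     class Throttle:
+         def __init__(self, max_bytes_per_sec):
+             self.rate = float(max_bytes_per_sec)
+             self.allowance = self.rate      # start with a full bucket
+             self.last = time.monotonic()
+
+         def wait(self, nbytes):
+             # Refill the bucket for the elapsed time, then sleep off
+             # any deficit before the caller sends nbytes.
+             now = time.monotonic()
+             self.allowance = min(self.rate,
+                                  self.allowance + (now - self.last) * self.rate)
+             self.last = now
+             if nbytes > self.allowance:
+                 time.sleep((nbytes - self.allowance) / self.rate)
+             self.allowance = max(0.0, self.allowance - nbytes)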
+
+Item 30: Restore from volumes on multiple storage daemons
+Origin: Graham Keeling (graham@equiinet.com)
+Date: 12 March 2009
+Status: Proposing
+
+What: The ability to restore from volumes held by multiple storage daemons
+ would be very useful.
+
+Why: It is useful to be able to back up to any number of different
+ storage daemons. For example, your first storage daemon may run out
+ of space, so you switch to your second and carry on. Bacula will
+ currently let you do this. However, once you come to restore, bacula
+ cannot cope when volumes on different storage daemons are required.
+
+ Notes: The director knows that more than one storage daemon is needed, as
+ bconsole outputs something like the following table.
+
+ The job will require the following
+    Volume(s)        Storage(s)       SD Device(s)
+ ===========================================================================
+    backup-0001      Disk 1           Disk 1.0
+    backup-0002      Disk 2           Disk 2.0
+
+ However, the bootstrap file that it creates gets sent to the first storage
+ daemon only, which then stalls for a long time, 'waiting for a mount request'
+ for the volume that it doesn't have.
+ The bootstrap file contains no knowledge of the storage daemon.
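+
+ For reference, a bootstrap file is essentially a list of volume and
+ session selectors, along the lines of the abbreviated sketch below
+ (illustrative values only); nothing in it names a storage daemon:
+
+     Volume="backup-0001"
+     MediaType="File"
+     VolSessionId=1
+     VolSessionTime=1236816000
+     FileIndex=1-100
+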
+ Under the current design:
+
+ The director connects to the storage daemon, and gets an sd_auth_key.
+ The director then connects to the file daemon, and gives it the
+ sd_auth_key with the 'jobcmd'.
+ (restoring of files happens)
+ The director does a 'wait_for_storage_daemon_termination()'.
+ The director waits for the file daemon to indicate the end of the job.
+
+ With my idea:
+
+ The director connects to the file daemon.
+ Then, for each storage daemon in the .bsr file... {
+ The director connects to the storage daemon, and gets an sd_auth_key.
+ The director then connects to the file daemon, and gives it the
+ sd_auth_key with the 'storaddr' command.
+ (restoring of files happens)
+ The director does a 'wait_for_storage_daemon_termination()'.
+ The director waits for the file daemon to indicate the end of the
+ work on this storage.
+ }
+ The director tells the file daemon that there are no more storages to contact.
+ The director waits for the file daemon to indicate the end of the job.
+
+ As you can see, each restore between the file daemon and storage daemon is
+ handled in the same way that it is currently handled, using the same method
+ for authentication, except that the sd_auth_key is moved from the 'jobcmd' to
+ the 'storaddr' command - where it logically belongs.
+
+
+Item 31: 'restore' menu: enter a JobId, automatically select dependents
+Origin: Graham Keeling (graham@equiinet.com)
+Date: 13 March 2009
+Status: Proposing
+
+What: Add to the bconsole 'restore' menu the ability to select a job
+ by JobId, and have bacula automatically select all the dependent jobs.
+
+Why: Currently, you either have to...
+ a) laboriously type in a date that is greater than the date of the backup that
+ you want and less than the date of the subsequent backup (bacula then figures
+ out the dependent jobs), or
+ b) manually figure out all the JobIds that you want and laboriously type them
+ all in.
+ It would be extremely useful (in a programmatic sense, as well as for humans)
+ to be able to just give it a single JobId and let bacula do the hard work (work
+ that it already knows how to do).
+
+ Notes (Kern): I think this should either be modified to have Bacula print
+ a list of dates that the user can choose from, as is done in bwx-console and
+ bat, or the name of this command must be carefully chosen so that the user
+ clearly understands that the JobId is being used to specify both the Job and
+ the date to which he wishes the restore to happen.
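+
+ The dependency walk itself maps directly onto the catalog. A sketch
+ (table and column names follow the Bacula catalog schema; the
+ function itself is hypothetical, and it ignores the pruning of
+ Incrementals superseded by a later Differential):
+
+     def dependent_jobids(cur, target):
+         # Anchor on the target job's client, fileset and start time.
+         cur.execute("SELECT ClientId, FileSetId, StartTime FROM Job "
+                     "WHERE JobId = %s", (target,))
+         client, fileset, t_end = cur.fetchone()
+         # Most recent successful Full at or before the target.
+         cur.execute("SELECT JobId, StartTime FROM Job "
+                     "WHERE ClientId = %s AND FileSetId = %s "
+                     "AND Level = 'F' AND JobStatus = 'T' "
+                     "AND StartTime <= %s "
+                     "ORDER BY StartTime DESC LIMIT 1",
+                     (client, fileset, t_end))
+         full_id, t_full = cur.fetchone()
+         # Everything in between, oldest first.
+         cur.execute("SELECT JobId FROM Job "
+                     "WHERE ClientId = %s AND FileSetId = %s "
+                     "AND JobStatus = 'T' "
+                     "AND StartTime > %s AND StartTime <= %s "
+                     "ORDER BY StartTime",
+                     (client, fileset, t_full, t_end))
+         return [full_id] + [row[0] for row in cur.fetchall()]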
+
+
+
+Item 32: Possibility to schedule Jobs on last Friday of the month
+Origin: Carsten Menke <bootsy52 at gmx dot net>
+Date: 02 March 2008
+Status:
+
+ What: Currently, if you want to run your monthly Backups on the last
+ Friday of each month, this is only possible with workarounds
+ (e.g. scripting), as some months have 4 Fridays and some have 5.
+ The same is true if you plan to run your yearly Backups on the
+ last Friday of the year. It would be nice to have the ability to
+ use the built-in scheduler for this.
+
+ Why: In many companies the last working day of the week is Friday (or
+ Saturday), so to get the most data of the month onto the monthly
+ tape, the employees are advised to insert the tape for the
+ monthly backups on the last Friday of the month.
+
+ Notes: To make this feature complete, it would be nice if the
+ "first" and "last" keywords could be implemented in the
+ scheduler, so that it is also possible to run monthly backups on the
+ first Friday of the month, and much more. If the syntax
+ were expanded to {first|last} {Month|Week|Day|Mo-Fri} of the
+ {Year|Month|Week}, you would be able to run really flexible jobs.
+
+ To get a certain Job to run on the last Friday of the month, for example,
+ one could then write
+
+ Run = pool=Monthly last Fri of the Month at 23:50
+
+ ## Yearly Backup
+
+ Run = pool=Yearly last Fri of the Year at 23:50
+
+ ## Certain Jobs the last Week of a Month
+
+ Run = pool=LastWeek last Week of the Month at 23:50
+
+ ## Monthly Backup on the last day of the month
+
+ Run = pool=Monthly last Day of the Month at 23:50
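+
+ The date arithmetic behind "last Fri of the Month" is cheap to do at
+ schedule-evaluation time. A standalone sketch of it (not Bacula's
+ scheduler code):
+
+     import calendar
+     import datetime
+
+     def last_weekday(year, month, weekday=4):   # Monday=0 ... Friday=4
+         # Step back from the last day of the month until the weekday fits.
+         last_day = calendar.monthrange(year, month)[1]
+         d = datetime.date(year, month, last_day)
+         return d - datetime.timedelta(days=(d.weekday() - weekday) % 7)
+
+     print(last_weekday(2009, 3))    # 2009-03-27, last Friday of March 2009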
+
+Item 33: Add Minimum Spool Size directive
+Date: 20 March 2008
+Origin: Frank Sweetser <fs@wpi.edu>
+
+ What: Add a new SD directive, "minimum spool size" (or similar). This
+ directive would specify a minimum level of free space available for
+ spooling. If the unused spool space is less than this level, any
+ new spooling requests would be blocked as if the "maximum spool
+ size" threshold had been reached. Already-spooling jobs would be
+ unaffected by this directive.
+
+ Why: I've been bitten by this scenario a couple of times:
+
+ Assume a maximum spool size of 100M. Two concurrent jobs, A and B,
+ are both running. Due to timing quirks and previously running jobs,
+ job A has used 99.9M of space in the spool directory. While A is
+ busy despooling to the volume, B is happily using the remaining 0.1M of
+ spool space. This ends up in a spool/despool sequence every 0.1M of
+ data. In addition to fragmenting the data on the volume far more
+ than necessary, with larger data sets (i.e., tens or hundreds of
+ gigabytes) it can easily produce multi-megabyte report emails!
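+
+ The admission check itself would be simple; a sketch of the intended
+ behaviour (the directive name comes from this proposal, and the
+ helper below is hypothetical, not SD code):
+
+     import os
+
+     def may_start_spooling(spool_dir, minimum_spool_size):
+         # Refuse to *start* a new spool cycle when free space is below
+         # the floor; jobs that are already spooling are unaffected.
+         st = os.statvfs(spool_dir)
+         free = st.f_bavail * st.f_frsize   # bytes available to the daemon
+         return free >= minimum_spool_size
+
+ In the scenario above, job B would then be held at the start of its
+ next spool cycle instead of thrashing in 0.1M slices.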
+
+
+Item 34: Cause daemons to use a specific IP address to source communications
+ Origin: Bill Moran <wmoran@collaborativefusion.com>
+ Date: 18 Dec 2006
+ Status:
+ What: Cause Bacula daemons (dir, fd, sd) to always use the IP address
+ specified in the [DIR|FD|SD]Addr directive as the source IP
+ for initiating communication.
+ Why: On complex networks, as well as extremely secure networks, it's
+ not unusual to have multiple possible routes through the network.
+ Often, each of these routes is secured by different policies
+ (effectively, firewalls allow or deny different traffic depending
+ on the source address).
+ Unfortunately, it can sometimes be difficult or impossible to
+ represent this in a system routing table, as the result is
+ excessive subnetting that quickly exhausts available IP space.
+ The best available workaround is to provide multiple IPs to
+ a single machine that are all on the same subnet. In order
+ for this to work properly, applications must support the ability
+ to bind outgoing connections to a specified address, otherwise
+ the operating system will always choose the first IP that
+ matches the required route.
+ Notes: Many other programs support this. For example, the following
+ can be configured in BIND:
+ query-source address 10.0.0.1;
+ transfer-source 10.0.0.2;
+ This means queries from this server will always come from
+ 10.0.0.1 and zone transfers will always originate from
+ 10.0.0.2.
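+
+ At the socket level this amounts to a bind() before the connect().
+ A minimal sketch of what each daemon would do with its configured
+ address (illustrative code, not Bacula's network layer):
+
+     import socket
+
+     def connect_from(source_ip, dest_ip, dest_port):
+         s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
+         s.bind((source_ip, 0))   # port 0: any local port, fixed source IP
+         s.connect((dest_ip, dest_port))
+         return s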
+
+
+
+Item 35: Add ability to Verify any specified Job.
+Date: 17 January 2008
+Origin: portrix.net Hamburg, Germany.
+Contact: Christian Sabelmann
+Status: 70% of the required code has been part of the Verify function since v. 2.x
+
+ What:
+ The ability to tell Bacula which Job to verify, instead of
+ automatically verifying just the last one.
+
+ Why:
+ It is sad that such a powerful feature as Verify Jobs
+ (VolumeToCatalog) is restricted to the last backup Job
+ of a client. Users who do daily Backups are currently forced to
+ also do daily Verify Jobs in order to take advantage of this useful
+ feature. This verify-after-every-backup routine is not always desired,
+ and Verify Jobs sometimes have to be scheduled separately (not
+ necessarily within Bacula). With this feature, Admins could verify Jobs
+ once a week or a few times per month, selecting the Jobs they want to
+ verify. This feature should also not be too difficult to implement,
+ taking into account older bug reports about this feature and the
+ selection of the Job to be verified.
+
+ Notes: For the verify Job, the user could select the Job to be verified
+ from a list of the latest Jobs of a client. It would also be possible to
+ verify a certain volume. All of this would naturally apply only to
+ Jobs whose file information is still in the catalog.
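+
+ The selection list described in the Notes maps onto a simple catalog
+ query; a sketch (Job and Client columns as in the Bacula catalog,
+ the function itself hypothetical):
+
+     def latest_jobs(cur, client_name, limit=20):
+         # Latest successful jobs of one client, newest first, so the
+         # user can pick the JobId to verify.
+         cur.execute("SELECT Job.JobId, Job.Name, Job.Level, Job.StartTime "
+                     "FROM Job, Client "
+                     "WHERE Job.ClientId = Client.ClientId "
+                     "AND Client.Name = %s AND Job.JobStatus = 'T' "
+                     "ORDER BY Job.StartTime DESC LIMIT %s",
+                     (client_name, limit))
+         return cur.fetchall()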
+
+Item 36: Automatic disabling of devices
+ Date: 2005-11-11
+ Origin: Peter Eriksson <peter at ifm.liu dot se>
+ Status: