+Item 37: Concurrent spooling and despooling within a single job.
+Date: 17 November 2009
+Origin: Jesper Krogh <jesper@krogh.cc>
+Status: NEW
+What: When a job has spooling enabled and the spool area size is
+ less than the total volume size, the storage daemon will:
+ 1) Spool to the spool area
+ 2) Despool to tape
+ 3) Go to 1 if there is more data to be backed up.
+
+ Typical disks will serve data at about 100MB/s when dealing with
+ large files, and the network is typically capable of 115MB/s
+ (GbitE). Tape drives will despool at 50-90MB/s (LTO-3) or
+ 70-120MB/s (LTO-4), depending on compression and data.
+
+ As Bacula currently works, it holds back data from the client until
+ despooling is done, no matter whether the spool area could handle
+ another block of data. Given a FileSet of 4TB, a spool area of 100GB
+ and a Maximum Job Spool Size of 50GB, the above sequence could be
+ changed to spool into the other 50GB while despooling the first
+ 50GB, without holding back the client while doing it (see the
+ sketch below). As the numbers above show, depending on the tape
+ drive and disk arrays this can potentially cut the backup time of
+ individual jobs by 50%.
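+
+ A minimal sketch of the double-buffered idea (illustrative C++ only,
+ not Bacula's actual storage daemon code; all names here are
+ hypothetical): one thread keeps filling one half of the spool area
+ with client data while a second thread despools the other half to
+ tape.
+
+   // Hypothetical double-buffered spooling: the despool thread drains
+   // one half of the spool area to tape while the client keeps
+   // filling the other half. Illustrative only.
+   #include <condition_variable>
+   #include <cstdio>
+   #include <mutex>
+   #include <thread>
+   #include <vector>
+
+   struct SpoolHalf {
+       std::vector<char> data;   // stand-in for a spool file segment
+       bool full = false;        // ready to be despooled
+   };
+
+   static std::mutex mtx;
+   static std::condition_variable cv;
+   static SpoolHalf half[2];
+   static bool done = false;
+
+   void spool_client_data(int n_chunks, size_t chunk_size) {
+       int w = 0;                                // half being filled
+       for (int i = 0; i < n_chunks; i++) {
+           std::unique_lock<std::mutex> lk(mtx);
+           cv.wait(lk, [&] { return !half[w].full; });
+           half[w].data.assign(chunk_size, 'x'); // receive from client
+           half[w].full = true;
+           cv.notify_all();
+           w ^= 1;                               // switch halves
+       }
+       std::lock_guard<std::mutex> lk(mtx);
+       done = true;
+       cv.notify_all();
+   }
+
+   void despool_to_tape() {
+       int r = 0;                                // half being drained
+       for (;;) {
+           std::unique_lock<std::mutex> lk(mtx);
+           cv.wait(lk, [&] { return half[r].full || done; });
+           if (!half[r].full && done) break;
+           std::vector<char> chunk;
+           chunk.swap(half[r].data);             // take the data
+           half[r].full = false;
+           cv.notify_all();
+           lk.unlock();                          // write without the lock
+           std::printf("despooled %zu bytes to tape\n", chunk.size());
+           r ^= 1;
+       }
+   }
+
+   int main() {
+       std::thread writer(spool_client_data, 8, size_t(50) << 20);
+       std::thread drainer(despool_to_tape);
+       writer.join();
+       drainer.join();
+   }
+
+ The client only blocks when both halves are full, i.e. when the tape
+ drive genuinely cannot keep up, rather than during every despool pass.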
+
+ Real-world example: backing up 112.6GB (large files) to LTO-4 tape
+ (despools at ~75MB/s); the data is gzipped on the remote filesystem.
+ Maximum Job Spool Size = 8GB
+
+ Current:
+ Size: 112.6GB
+ Elapsed time (total time): 46m 15s => 2775s
+ Despooling time: 25m 41s => 1541s (55%)
+ Spooling time: 20m 34s => 1234s (45%)
+ Reported speed: 40.58MB/s
+ Spooling speed: 112.6GB/1234s => 91.25MB/s
+ Despooling speed: 112.6GB/1541s => 73.07MB/s
+
+ So disk + network can "keep up" with the LTO-4 drive (in this test).
+
+ The proposed change would effectively make the backup run in the
+ "despooling time" of 1541s, reducing the total run time to 55% of
+ its current value.
+
+ In the situation where an individual job cannot keep up with the
+ LTO drive, spooling enables efficient multiplexing of multiple
+ concurrent jobs onto the same drive.
+
+Why: When dealing with larger volumes it is important to maximize the
+ general utilization of the network/disk in order to be able to run
+ a full backup over a weekend. The current work-around is to split
+ the FileSet into smaller FileSets and Jobs, but that leads to more
+ configuration management, is harder to review for completeness, and
+ makes restores more complex.
+
+Item 39: Extend the verify code to make it possible to verify
+ older jobs, not only the last one that has finished
+ Date: 10 April 2009
+ Origin: Ralf Gross (Ralf-Lists <at> ralfgross.de)
+ Status: not implemented or documented
+
+ What: At the moment a VolumeToCatalog job compares only the
+ last job with the data in the catalog. It is not possible
+ to compare the data (MD5 sums) of an older volume with the
+ data in the catalog.
+
+ Why: If a verify job fails, one has to immediately check the
+ source of the problem, fix it, and rerun the verify job.
+ This has to happen before the next backup of the
+ verified backup job starts.
+ More importantly, it is not possible to check jobs that are
+ kept for a long time (archives). If a jobid could be
+ specified for a verify job, older backups/tapes could be
+ checked on a regular basis.
+
+ Notes: verify documentation:
+ VolumeToCatalog: This level causes Bacula to read the file
+ attribute data written to the Volume from the last Job [...]
+
+ Verify Job = <Job-Resource-Name> If you run a verify job
+ without this directive, the last job run will be compared
+ with the catalog, which means that you must immediately
+ follow a backup by a verify command. If you specify a Verify
+ Job Bacula will find the last job with that name that ran [...]
+
+ example bconsole verify dialog:
+
+ Run Verify job
+ JobName: VerifyServerXXX
+ Level: VolumeToCatalog
+ Client: ServerXXX-fd
+ FileSet: ServerXXX-Vol1
+ Pool: Full (From Job resource)
+ Storage: Neo4100 (From Pool resource)
+ Verify Job: ServerXXX-Vol1
+ Verify List:
+ When: 2009-04-20 09:03:04
+ Priority: 10
+ OK to run? (yes/mod/no): m
+ Parameters to modify:
+ 1: Level
+ 2: Storage
+ 3: Job
+ 4: FileSet
+ 5: Client
+ 6: When
+ 7: Priority
+ 8: Pool
+ 9: Verify Job
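+
+ A possible extension (hypothetical syntax, not an existing
+ bconsole option) would accept the jobid directly, either as an
+ additional entry in the modify list above or on the command
+ line, for example:
+
+   verify jobid=1234 level=VolumeToCatalog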
+
+
+
+Item 40: Separate "Storage" and "Device" in the bacula-dir.conf
+ Date: 29 April 2009
+ Origin: "James Harper" <james.harper@bendigoit.com.au>
+ Status: not implemented or documented
+
+ What: Separate "Storage" and "Device" in the bacula-dir.conf
+ The resulting config would look something like this:
+
+ Storage {
+ Name = name_of_server
+ Address = hostname/IP address
+ SDPort = 9103
+ Password = shh_its_a_secret
+ Maximum Concurrent Jobs = 7
+ }
+
+ Device {
+ Name = name_of_device
+ Storage = name_of_server
+ Device = name_of_device_on_sd
+ Media Type = media_type
+ Maximum Concurrent Jobs = 1
+ }
+
+ Maximum Concurrent Jobs would be specified with a server and a device
+ maximum, which would both be honoured by the director. Almost everything
+ that mentions a 'Storage' would need to be changed to 'Device', although
+ perhaps a 'Storage' would just be a synonym for 'Device' for backwards
+ compatibility...
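+
+ For example, a Job resource would then reference the Device
+ instead of the Storage (hypothetical syntax under this
+ proposal):
+
+ Job {
+   Name = backup_of_client
+   Device = name_of_device   # instead of Storage = name_of_server
+ }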
+
+ Why: If you have multiple Storage definitions pointing to different
+ Devices in the same Storage daemon, the "status storage" command
+ prompts for each different device, but they all give the same
+ information.
+
+ Notes:
+
+Item 41: Least recently used device selection for tape drives in an autochanger.
+Date: 12 October 2009
+Origin: Thomas Carter <tcarter@memc.com>
+Status: Proposal
+
+What: A better tape drive selection algorithm for multi-drive
+ autochangers. The AUTOCHANGER class contains an array list of tape
+ devices. When a tape drive is needed, this list is always searched in
+ order. This causes lower-numbered drives (specifically drive 0) to do
+ the majority of the work, with higher-numbered drives possibly never
+ being used. When a drive in an autochanger is reserved for use, its
+ entry should be moved to the end of the list; this would give a rough
+ LRU drive selection (sketched below).
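+
+ A minimal sketch of the move-to-end selection (illustrative C++
+ only; Bacula's actual AUTOCHANGER structures differ, and all names
+ here are hypothetical):
+
+   // Rough-LRU drive selection: take the first free drive from the
+   // front of the list, then rotate it to the back so the next
+   // reservation prefers a different drive.
+   #include <deque>
+   #include <optional>
+   #include <string>
+
+   struct Drive {
+       std::string name;
+       bool busy = false;
+   };
+
+   // Returns the reserved drive's name, or nothing if all are busy.
+   std::optional<std::string> reserve_drive(std::deque<Drive>& drives) {
+       for (size_t i = 0; i < drives.size(); i++) {
+           if (!drives[i].busy) {
+               Drive d = drives[i];
+               d.busy = true;
+               drives.erase(drives.begin() + i);
+               drives.push_back(d);      // move to end: rough LRU
+               return drives.back().name;
+           }
+       }
+       return std::nullopt;
+   }
+
+ Each successful reservation rotates the chosen drive to the back of
+ the list, so over time all drives in the changer share the load
+ roughly evenly instead of drive 0 taking most of it.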
+
+Why: The current implementation places a majority of use and wear on drive
+ 0 of a multi-drive autochanger.
+
+Notes:
+
+========= New items after last vote ====================
+
+