6 - !!! Cannot restore two jobs at the same time that were
7 written simultaneously unless they were totally spooled.
8 - Document cleaning up the spool files:
9 db, pid, state, bsr, mail, conmsg, spool
10 - Document the multiple-drive-changer.txt script.
11 - Pruning with Admin job.
12 - Does WildFile match against full name? Doc.
13 - %d and %v only valid on Director, not for ClientRunBefore/After.
14 - During tests with the 260 char fix code, I found one problem:
15 if the system "sees" a long path once, it seems to forget its
16 working drive (e.g. c:\), which will lead to a problem during
17 the next job (create bootstrap file will fail). Here is the
18 workaround: specify absolute working and pid directory in
19 bacula-fd.conf (e.g. c:\bacula\working instead of
21 - Document techniques for restoring large numbers of files.
22 - Document setting my.cnf to big file usage.
23 - Add example of proper index output to doc. show index from File;
24 - Correct the Include syntax in the m4.xxx files in examples/conf
25 - Document JobStatus and Termination codes.
26 - Fix the error with the "DVI file can't be opened" while
27 building the French PDF.
28 - Document more DVD stuff
36 - Document all the little details of setting up certificates for
37 the Bacula data encryption code.
38 - Document more precisely how to use master keys -- especially
39 for disaster recovery.
42 - Migration from other vendors
46 - Backup conf/exe (all daemons)
47 - Backup up system state
48 - Detect state change of system (verify)
49 - Synthetic Full, Diff, Inc (Virtual, Reconstructed)
51 - Modules for Databases, Exchange, ...
52 - Novell NSS backup http://www.novell.com/coolsolutions/tools/18952.html
53 - Compliance norms that compare hash codes of restored files.
54 - When glibc crashes, get the address with
56 - How to sync remote offices.
58 http://www.microsoft.com/technet/itshowcase/content/exchbkup.mspx
61 Extract capability (#25)
62 Continued enhancement of bweb
63 Threshold triggered migration jobs (not currently in list, but will be
65 Client triggered backups
66 Complete rework of the scheduling system (not in list)
67 Performance and usage instrumentation (not in list)
68 See email of 21Aug2007 for details.
69 - Look at: http://tech.groups.yahoo.com/group/cfg2html
70 and http://www.openeyet.nl/scc/ for managing customer changes
73 - Look at in src/filed/backup.c
74 > pm_strcpy(ff_pkt->fname, ff_pkt->fname_save);
75 > pm_strcpy(ff_pkt->link, ff_pkt->link_save);
76 - Add Catalog = to Pool resource so that pools will exist
77 in only one catalog -- currently Pools are "global".
78 - New directive "Delete purged Volumes"
80 - Prune by Job Level (Full, Differential, Incremental)
81 - Strict automatic pruning
82 - Implement unmount of USB volumes.
83 - Use "./config no-idea no-mdc2 no-rc5" on building OpenSSL for
84 Win32 to avoid patent problems.
85 - Implement Bacula plugins -- design API
86 - modify pruning to keep a fixed number of versions of a file,
88 === Duplicate jobs ===
89 These apply only to backup jobs.
91 1. Allow Duplicate Jobs = Yes | No | Higher (Yes)
93 2. Duplicate Job Interval = <time-interval> (0)
95 The defaults are in parentheses and would produce the same behavior as today.
97 If Allow Duplicate Jobs is set to No, then any job starting while a job of the
98 same name is running will be canceled.
100 If Allow Duplicate Jobs is set to Higher, then any job starting with the same
101 or lower level will be canceled, but any job with a Higher level will start.
102 The Levels are from High to Low: Full, Differential, Incremental
104 Finally, if you have Duplicate Job Interval set to a non-zero value, any job
105 of the same name which starts <time-interval> after a previous job of the
106 same name would run; any one that starts within <time-interval> would be
107 subject to the above rules. Another way of looking at it is that the Allow
108 Duplicate Jobs directive will only apply after <time-interval> from when the
109 previous job finished (i.e. it is the minimum interval between jobs).
113 Allow Duplicate Jobs = Yes | No | HigherLevel | CancelLowerLevel (Yes)
115 Where HigherLevel cancels any waiting job but not any running job.
116 Where CancelLowerLevel is the same as HigherLevel but cancels any running job or
119 Duplicate Job Proximity = <time-interval> (0)
121 Skip = Do not allow two or more jobs with the same name to run
122 simultaneously within the proximity interval. The second and subsequent
123 jobs are skipped without further processing (other than to note the job
124 and exit immediately), and are not considered errors.
126 Fail = The second and subsequent jobs that attempt to run during the
127 proximity interval are cancelled and treated as error-terminated jobs.
129 Promote = If a job is running, and a second/subsequent job of higher
130 level attempts to start, the running job is promoted to the higher level
131 of processing using the resources already allocated, and the subsequent
132 job is treated as in Skip above.
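The Skip/Fail/Promote proposals above boil down to one decision routine. The sketch below is illustrative only: the enum and function names are invented here, not Bacula directives or code, and it follows the HigherLevel/CancelLowerLevel semantics described above.

```c
/* Sketch of the proposed duplicate-job decision logic (names invented). */
#include <stdbool.h>

typedef enum { L_INCREMENTAL = 1, L_DIFFERENTIAL = 2, L_FULL = 3 } job_level;
typedef enum { DUP_YES, DUP_NO, DUP_HIGHER_LEVEL, DUP_CANCEL_LOWER_LEVEL } dup_policy;
typedef enum { START_NEW, CANCEL_NEW, CANCEL_OTHER } dup_action;

/* Decide what happens when a job starts while another job of the same
 * name exists.  other_level: level of the existing job; new_level: the
 * newcomer; other_is_running: existing job is running (vs. waiting). */
dup_action duplicate_action(dup_policy policy, job_level other_level,
                            job_level new_level, bool other_is_running)
{
    switch (policy) {
    case DUP_YES:
        return START_NEW;                 /* duplicates always allowed */
    case DUP_NO:
        return CANCEL_NEW;                /* newcomer is canceled */
    case DUP_HIGHER_LEVEL:
        /* cancel a *waiting* duplicate of lower level, never a running one */
        if (!other_is_running && new_level > other_level)
            return CANCEL_OTHER;
        return CANCEL_NEW;
    case DUP_CANCEL_LOWER_LEVEL:
        /* same, but a running lower-level job may also be canceled */
        if (new_level > other_level)
            return CANCEL_OTHER;
        return CANCEL_NEW;
    }
    return CANCEL_NEW;                    /* unreachable */
}
```

The Levels compare as Full > Differential > Incremental, matching the High-to-Low ordering stated above.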
134 - the cd-command should allow complete paths
135 i.e. cd /foo/bar/foo/bar
136 -> if a customer mails me the path to a certain file,
137 it's faster to enter the specified directory
138 - Fix bpipe.c so that it does not modify results pointer.
139 ***FIXME*** calling sequence should be changed.
140 - Make tree walk routines like cd, ls, ... more user friendly
141 by handling spaces better.
145 MA = (last_MA * 3 + rate) / 4
146 rate = (bytes - last_bytes) / (runtime - last_runtime)
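The two formulas above form an exponential moving average that weights the history 3:1 against the newest rate sample. A small sketch, with struct and function names invented for illustration:

```c
/* Smoothed transfer rate per the MA formula above (names illustrative). */
typedef struct {
    double last_ma;       /* previous moving average (bytes/sec) */
    long long last_bytes; /* byte count at previous sample */
    double last_runtime;  /* runtime (seconds) at previous sample */
} rate_ma;

/* Feed a new (bytes, runtime) sample; returns the updated moving average. */
double rate_ma_update(rate_ma *m, long long bytes, double runtime)
{
    double rate = (double)(bytes - m->last_bytes) / (runtime - m->last_runtime);
    m->last_ma = (m->last_ma * 3.0 + rate) / 4.0;  /* MA = (last_MA*3 + rate)/4 */
    m->last_bytes = bytes;
    m->last_runtime = runtime;
    return m->last_ma;
}
```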
147 - Add a recursive mark command (rmark) to restore.
148 - "Minimum Job Interval = nnn" sets minimum interval between Jobs
149 of the same level and does not permit multiple simultaneous
150 running of that Job (i.e. lets any previous invocation finish
151 before doing Interval testing).
152 - Look at simplifying File exclusions.
154 - Auto update of slot:
155 rufus-dir: ua_run.c:456-10 JobId=10 NewJobId=10 using pool Full priority=10
156 02-Nov 12:58 rufus-dir JobId 10: Start Backup JobId 10, Job=kernsave.2007-11-02_12.58.03
157 02-Nov 12:58 rufus-dir JobId 10: Using Device "DDS-4"
158 02-Nov 12:58 rufus-sd JobId 10: Invalid slot=0 defined in catalog for Volume "Vol001" on "DDS-4" (/dev/nst0). Manual load may be required.
159 02-Nov 12:58 rufus-sd JobId 10: 3301 Issuing autochanger "loaded? drive 0" command.
160 02-Nov 12:58 rufus-sd JobId 10: 3302 Autochanger "loaded? drive 0", result is Slot 2.
161 02-Nov 12:58 rufus-sd JobId 10: Wrote label to prelabeled Volume "Vol001" on device "DDS-4" (/dev/nst0)
162 02-Nov 12:58 rufus-sd JobId 10: Alert: TapeAlert[7]: Media Life: The tape has reached the end of its useful life.
163 02-Nov 12:58 rufus-dir JobId 10: Bacula rufus-dir 2.3.6 (26Oct07): 02-Nov-2007 12:58:51
164 - Eliminate: /var is a different filesystem. Will not descend from / into /var
165 - Separate Files and Directories in catalog
166 - Create FileVersions table
167 - Look at rsync for incremental updates and deduping
168 - Add MD5 or SHA1 check in SD for data validation
169 - finish implementation of fdcalled -- see ua_run.c:105
170 - Fix problem in postgresql.c in my_postgresql_query, where the
171 generation of the error message doesn't differentiate between result==NULL
172 and a bad status from that result. Not only that, the result is
173 cleared on a bail_out without having generated the error message.
175 - Implement SDErrors (must return from SD)
176 - Implement USB keyboard support in rescue CD.
177 - Implement continue spooling while despooling.
178 - Remove all install temp files in Win32 PLUGINSDIR.
179 - Audit retention periods to make sure everything is 64 bit.
180 - No where in restore causes kaboom.
181 - Performance: multiple spool files for a single job.
182 - Performance: despool attributes when despooling data (problem
183 multiplexing Dir connection).
184 - Make restore use the in-use volume reservation algorithm.
185 - Add TLS to bat (should be done).
186 - When Pool specifies Storage command override does not work.
187 - Implement wait_for_sysop() message display in wait_for_device(), which
188 now prints warnings too often.
189 - Ensure that each device in an Autochanger has a different
191 - Look at sg_logs -a /dev/sg0 for getting soft errors.
192 - btape "test" command with Offline on Unmount = yes
194 This test is essential to Bacula.
196 I'm going to write one record in file 0,
197 two records in file 1,
198 and three records in file 2
200 02-Feb 11:00 btape: ABORTING due to ERROR in dev.c:715
201 dev.c:714 Bad call to rewind. Device "LTO" (/dev/nst0) not open
202 02-Feb 11:00 btape: Fatal Error because: Bacula interrupted by signal 11: Segmentation violation
203 Kaboom! btape, btape got signal 11. Attempting traceback.
205 - Encryption -- email from Landon
206 > The backup encryption algorithm is currently not configurable, and is
207 > set to AES_128_CBC in src/filed/backup.c. The encryption code
208 > supports a number of different ciphers (as well as adding arbitrary
209 > new ones) -- only a small bit of code would be required to map a
210 > configuration string value to a CRYPTO_CIPHER_* value, if anyone is
211 > interested in implementing this functionality.
213 - Figure out some way to "automatically" backup conf changes.
214 - Add the OS version back to the Win32 client info.
215 - Restarted jobs have a NULL in the from field.
216 - Modify SD status command to indicate when the SD is writing
217 to a DVD (the device is not open -- see bug #732).
218 - Look at the possibility of adding "SET NAMES UTF8" for MySQL,
219 and possibly changing the blobs into varchar.
220 - Ensure that the SD re-reads the Media record if the JobFiles
221 does not match -- it may have been updated by another job.
223 - Test Volume compatibility between machine architectures
224 - Encryption documentation
225 - Wrong jobbytes with query 12 (todo)
226 - Bare-metal recovery Windows (todo)
231 - Access Mode = Read-Only, Read-Write, Unavailable, Destroyed, Offsite
233 - Maximum number of scratch volumes
235 - Next Pool (already have)
236 - Reclamation threshold
238 - Reuse delay (after all files purged from volume before it can be used)
239 - Copy Pool = xx, yyy (or multiple lines).
241 - Allow pool selection during restore.
243 - Average tape size from Eric
244 SELECT COALESCE(media_avg_size.volavg,0) * count(Media.MediaId) AS volmax,
245        count(Media.MediaId)  AS volnum,
246        sum(Media.VolBytes)   AS voltotal,
247        Media.PoolId          AS PoolId,
248        Media.MediaType       AS MediaType
249   FROM Media
250   LEFT JOIN (SELECT avg(Media.VolBytes) AS volavg,
251               Media.MediaType AS MediaType
252                FROM Media
253               WHERE Media.VolStatus = 'Full'
254               GROUP BY Media.MediaType
255             ) AS media_avg_size ON (Media.MediaType = media_avg_size.MediaType)
256  GROUP BY Media.MediaType, Media.PoolId, media_avg_size.volavg
260 - Add doc for bweb -- especially Installation
262 http://www.orangecrate.com/modules.php?name=News&file=article&sid=501
264 - Despool attributes in separate thread
267 - Check why restore repeatedly sends Rechdrs between
268 each data chunk -- according to James Harper 9Jan07.
271 - Full at least once a month, ...
272 - Cancel Inc if Diff/Full running
273 - More intelligent re-run
274 - New/deleted file backup
276 - Incremental backup -- rsync, Stow
280 - Try to fix bscan not working with multiple DVD volumes bug #912.
281 - Look at mondo/mindi
282 - Make Bacula by default not backup tmpfs, procfs, sysfs, ...
283 - Fix hardlinked immutable files when linking a second file, the
284 immutable flag must be removed prior to trying to link it.
285 - Implement Python event for backing up/restoring a file.
286 - Change dbcheck to tell users to use native tools for fixing
287 broken databases, and to ensure they have the proper indexes.
288 - add udev rules for Bacula devices.
289 - If a job terminates, the DIR connection can close before the
290 Volume info is updated, leaving the File count wrong.
291 - Look at why SIGPIPE during connection can cause seg fault in
292 writing the daemon message, when Dir dropped to bacula:bacula
293 - Look at zlib 32 => 64 problems.
294 - Possibly turn on St. Bernard code.
295 - Fix bextract to restore ACLs, or better yet, use common routines.
296 - Do we migrate appendable Volumes?
297 - Remove queue.c code.
298 - Print warning message if LANG environment variable does not specify
300 - New dot commands from Arno.
301 .show device=xxx lists information from one storage device, including
302 devices (I'm not even sure that information exists in the DIR...)
303 .move eject device=xxx mostly the same as 'unmount xxx' but perhaps with
304 better machine-readable output like "Ok" or "Error busy"
305 .move eject device=xxx toslot=yyy the same as above, but with a new
306 target slot. The catalog should be updated accordingly.
307 .move transfer device=xxx fromslot=yyy toslot=zzz
310 - Article: http://www.heise.de/open/news/meldung/83231
311 - Article: http://www.golem.de/0701/49756.html
312 - Article: http://lwn.net/Articles/209809/
313 - Article: http://www.onlamp.com/pub/a/onlamp/2004/01/09/bacula.html
314 - Article: http://www.linuxdevcenter.com/pub/a/linux/2005/04/07/bacula.html
315 - Article: http://www.osreviews.net/reviews/admin/bacula
316 - Article: http://www.debianhelp.co.uk/baculaweb.htm
318 - Wikis mentioning Bacula
319 http://wiki.finkproject.org/index.php/Admin:Backups
320 http://wiki.linuxquestions.org/wiki/Bacula
321 http://www.openpkg.org/product/packages/?package=bacula
322 http://www.iterating.com/products/Bacula
323 http://net-snmp.sourceforge.net/wiki/index.php/Net-snmp_extensions
324 http://www.section6.net/wiki/index.php/Using_Bacula_for_Tape_Backups
325 http://bacula.darwinports.com/
326 http://wiki.mandriva.com/en/Releases/Corporate/Server_4/Notes#Bacula
327 http://en.wikipedia.org/wiki/Bacula
330 http://www.devco.net/pubwiki/Bacula/
331 http://paramount.ind.wpi.edu/wiki/doku.php
332 http://gentoo-wiki.com/HOWTO_Backup
333 http://www.georglutz.de/wiki/Bacula
334 http://www.clarkconnect.com/wiki/index.php?title=Modules_-_LAN_Backup/Recovery
335 http://linuxwiki.de/Bacula (in German)
337 - Possibly allow SD to spool even if a tape is not mounted.
338 - Fix re-read of last block to check if job has actually written
339 a block, and check if block was written by a different job
340 (i.e. multiple simultaneous jobs writing).
341 - Figure out how to configure query.sql. Suggestion to use m4:
342 == changequote.m4 ===
343 changequote(`[',`]')dnl
344 ==== query.sql.in ===
345 :List next 20 volumes to expire
347 Pool.Name AS PoolName,
352 [ FROM_UNIXTIME(UNIX_TIMESTAMP(Media.LastWritten) + Media.VolRetention) AS Expire, ])dnl
354 [ media.lastwritten + interval '1 second' * media.volretention as expire, ])dnl
358 ON Media.PoolId=Pool.PoolId
359 WHERE Media.LastWritten>0
363 Command: m4 -DmySQL changequote.m4 query.sql.in >query.sql
365 The problem is that it requires m4, which is not present on all machines.
367 - Given all the problems with FIFOs, I think the solution is to do something a
368 little different, though I will look at the code and see if there is not some
369 simple solution (i.e. some bug that was introduced). What might be a better
370 solution would be to use a FIFO as a sort of "key" to tell Bacula to read and
371 write data to a program rather than the FIFO. For example, suppose you
376 Then, I could imagine if you backup and restore this file with a direct
377 reference as is currently done for fifos, instead, during backup Bacula will
380 /home/kern/my-fifo.backup
382 and read the data that my-fifo.backup writes to stdout. For restore, Bacula
385 /home/kern/my-fifo.restore
387 and send the data backed up to stdout. These programs can either be an
388 executable or a shell script and they need only read/write to stdin/stdout.
390 I think this would give a lot of flexibility to the user without making any
391 significant changes to Bacula.
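The "FIFO as key" idea above could look roughly like the sketch below in the FD: derive the helper-program name from the fifo path (following the my-fifo.backup convention in the example) and read its stdout instead of the fifo itself. The functions here are hypothetical, not Bacula's device API.

```c
/* Hedged sketch of backing up via a <fifo>.backup helper program. */
#include <stdio.h>
#include <string.h>

/* Build the helper name, e.g. "/home/kern/my-fifo" -> ".../my-fifo.backup".
 * Returns 0 on success, -1 if the buffer is too small. */
int fifo_helper_name(const char *fifo_path, const char *suffix,
                     char *out, size_t outlen)
{
    if (strlen(fifo_path) + strlen(suffix) + 1 > outlen)
        return -1;
    snprintf(out, outlen, "%s%s", fifo_path, suffix);
    return 0;
}

/* Run the helper and consume its stdout, as the FD would during backup.
 * Returns bytes captured, or -1 on error. */
long backup_via_helper(const char *fifo_path)
{
    char cmd[4096];
    if (fifo_helper_name(fifo_path, ".backup", cmd, sizeof(cmd)) < 0)
        return -1;
    FILE *p = popen(cmd, "r");          /* helper writes backup data to stdout */
    if (!p)
        return -1;
    long total = 0;
    char buf[8192];
    size_t n;
    while ((n = fread(buf, 1, sizeof(buf), p)) > 0)
        total += (long)n;               /* real code would stream this to the SD */
    return pclose(p) == 0 ? total : -1;
}
```

Restore would be symmetric, piping the saved data into a <fifo>.restore helper's stdin.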
396 select FilenameId from Filename where Name='';
397 # Get list of all directories referenced in a Backup.
398 select Path.Path from Path,File where File.JobId=nnn and
399 File.FilenameId=(FilenameId-from-above) and File.PathId=Path.PathId
400 order by Path.Path ASC;
402 - Look into using Dart for testing
403 http://public.kitware.com/Dart/HTML/Index.shtml
405 - Look into replacing autotools with cmake
406 http://www.cmake.org/HTML/Index.html
408 - Mount on an Autochanger with no tape in the drive causes:
409 Automatically selected Storage: LTO-changer
410 Enter autochanger drive[0]: 0
411 3301 Issuing autochanger "loaded drive 0" command.
412 3302 Autochanger "loaded drive 0", result: nothing loaded.
413 3301 Issuing autochanger "loaded drive 0" command.
414 3302 Autochanger "loaded drive 0", result: nothing loaded.
415 3902 Cannot mount Volume on Storage Device "LTO-Drive1" (/dev/nst0) because:
416 Couldn't rewind device "LTO-Drive1" (/dev/nst0): ERR=dev.c:678 Rewind error on "LTO-Drive1" (/dev/nst0). ERR=No medium found.
417 3905 Device "LTO-Drive1" (/dev/nst0) open but no Bacula volume is mounted.
418 If this is not a blank tape, try unmounting and remounting the Volume.
419 - If Drive 0 is blocked, and drive 1 is set "Autoselect=no", drive 1 will
421 - Autochanger did not change volumes.
422 select * from Storage;
423 +-----------+-------------+-------------+
424 | StorageId | Name | AutoChanger |
425 +-----------+-------------+-------------+
426 | 1 | LTO-changer | 0 |
427 +-----------+-------------+-------------+
428 05-May 03:50 roxie-sd: 3302 Autochanger "loaded drive 0", result is Slot 11.
429 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Warning: Director wanted Volume "LT
430 Current Volume "LT0-002" not acceptable because:
431 1997 Volume "LT0-002" not in catalog.
432 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Error: Autochanger Volume "LT0-002"
433 Setting InChanger to zero in catalog.
434 05-May 03:50 roxie-dir: Tibs.2006-05-05_03.05.02 Error: Unable to get Media record
436 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: Error getting Volume i
437 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: Job 530 canceled.
438 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: spool.c:249 Fatal appe
439 05-May 03:49 Tibs: Tibs.2006-05-05_03.05.02 Fatal error: c:\cygwin\home\kern\bacula
448 FirstWritten: 2006-05-05 03:11:54
449 LastWritten: 2006-05-05 03:50:23
450 LabelDate: 2005-12-26 16:52:40
461 VolRetention: 31,536,000
473 Note VolStatus is blank!!!!!
480 FirstWritten: 0000-00-00 00:00:00
481 LastWritten: 0000-00-00 00:00:00
482 LabelDate: 2005-12-26 16:52:40
493 VolRetention: 31,536,000
506 Automatically selected Storage: LTO-changer
507 Enter autochanger drive[0]: 0
508 3301 Issuing autochanger "loaded drive 0" command.
509 3302 Autochanger "loaded drive 0", result: nothing loaded.
510 3301 Issuing autochanger "loaded drive 0" command.
511 3302 Autochanger "loaded drive 0", result: nothing loaded.
512 3902 Cannot mount Volume on Storage Device "LTO-Drive1" (/dev/nst0) because:
513 Couldn't rewind device "LTO-Drive1" (/dev/nst0): ERR=dev.c:678 Rewind error on "LTO-Drive1" (/dev/nst0). ERR=No medium found.
515 3905 Device "LTO-Drive1" (/dev/nst0) open but no Bacula volume is mounted.
516 If this is not a blank tape, try unmounting and remounting the Volume.
518 - http://www.dwheeler.com/essays/commercial-floss.html
519 - Add VolumeLock to prevent all but lock holder (SD) from updating
520 the Volume data (with the exception of VolumeState).
521 - The btape fill command does not seem to use the Autochanger
522 - Make Windows installer default to system disk drive.
523 - Look at using ioctl(FIOBMAP, ...) on Linux, and
524 DeviceIoControl(..., FSCTL_QUERY_ALLOCATED_RANGES, ...) on
525 Win32 for sparse files.
526 http://www.flexhex.com/docs/articles/sparse-files.phtml
527 http://www.informatik.uni-frankfurt.de/~loizides/reiserfs/fibmap.html
528 - Directive: at <event> "command"
529 - Command: pycmd "command" generates "command" event. How to
530 attach to a specific job?
531 - Integrate Christopher's St. Bernard code.
532 - run_cmd() returns int should return JobId_t
533 - get_next_jobid_from_list() returns int should return JobId_t
534 - Document export LDFLAGS=-L/usr/lib64
535 - Don't attempt to restore from "Disabled" Volumes.
536 - Network error on Win32 should set Win32 error code.
537 - What happens when you rename a Disk Volume?
538 - Job retention period in a Pool (and hence Volume). The job would
540 - Look at -D_FORTIFY_SOURCE=2
541 - Add Win32 FileSet definition somewhere
542 - Look at fixing restore status stats in SD.
543 - Look at using ioctl(FIMAP) and FIGETBSZ for sparse files.
544 http://www.informatik.uni-frankfurt.de/~loizides/reiserfs/fibmap.html
545 - Implement a mode that says when a hard read error is
546 encountered, read many times (as it currently does), and if the
547 block cannot be read, skip to the next block, and try again. If
548 that fails, skip to the next file and try again, ...
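A minimal sketch of that read policy, simplified to block-level skipping over a simulated device (the struct and names are invented; real code would also forward-space to the next file when block skipping fails, as described above):

```c
/* Simulated tape: blocks[] marks which blocks read cleanly. */
#include <stdbool.h>

typedef struct {
    const bool *blocks;   /* readability of each block */
    int nblocks;
    int pos;              /* current block */
    int retries;          /* re-reads attempted on the current block */
} sim_dev;

enum rd { RD_OK, RD_SKIPPED, RD_EOT };

/* Read the next block: retry a hard error up to max_retries times,
 * then give up and skip to the next block. */
enum rd read_next_block(sim_dev *d, int max_retries)
{
    while (d->pos < d->nblocks) {
        if (d->blocks[d->pos]) {
            d->pos++; d->retries = 0;
            return RD_OK;
        }
        if (++d->retries <= max_retries)
            continue;                     /* re-read the same block */
        d->pos++; d->retries = 0;         /* give up: skip the bad block */
        return RD_SKIPPED;
    }
    return RD_EOT;
}
```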
550 create table LevelType (LevelType binary(1), LevelTypeLong tinyblob);
551 insert into LevelType (LevelType,LevelTypeLong) values
555 - Show files/second in client status output.
556 - new pool XXX with ScratchPoolId = MyScratchPool's PoolId and
557 let it fill itself, and RecyclePoolId = XXX's PoolId so I can
558 see if it becomes stable and I just have to supervise
560 - If I want to remove this pool, I set RecyclePoolId = MyScratchPool's
561 PoolId, and when it is empty remove it.
563 - Allow Check Labels to be used with Bacula labels.
564 - "Resuming" a failed backup (lost line for example) by using the
565 failed backup as a sort of "base" job.
567 - Email the user when the tape is about to need changing, x
568 days in advance.
569 - Command to show next tape that will be used for a job even
570 if the job is not scheduled.
571 - From: Arunav Mandal <amandal@trolltech.com>
572 1. When jobs are running and Bacula for some reason crashes or if I do a
573 restart, it should remember the jobs it was running before it crashed or
574 restarted; as of now I lose all jobs if I restart it.
576 2. When spooling, if the client is disconnected midway (for instance a
577 laptop), Bacula completely discards the spool. It would be nice if it could
578 write that spool to tape so there would be some backups for that client, if not all.
580 3. We have around 150 client machines; it would be nice to have an option to
581 upgrade the Bacula version on all the client machines automatically.
583 4. At least one connection should be reserved for bconsole, so that at heavy load
584 I can still connect to the director via bconsole, which at times I can't
586 5. Another important feature that is missing: say at 10am I manually
587 started a backup of client abc, and it was a full backup since client abc has
588 no backup history, and at 10.30am Bacula again automatically started a backup of
589 client abc as that was in the schedule. So now we have 2 Full
590 backups of the same client, and if we again try to start a full backup of
591 client abc, Bacula won't complain. That should be fixed.
593 - For Windows disaster recovery see http://unattended.sf.net/
594 - regardless of the retention period, Bacula will not prune the
595 last Full, Diff, or Inc File data until a month after the
596 retention period for the last Full backup that was done.
597 - update volume=xxx --- add status=Full
598 - Remove old spool files on startup.
599 - Exclude SD spool/working directory.
600 - Refuse to prune last valid Full backup. Same goes for Catalog.
602 - Make a callback when Rerun failed levels is called.
603 - Give Python program access to Scheduled jobs.
604 - Add setting Volume State via Python.
605 - Python script to save with Python, not save, save with Bacula.
606 - Python script to do backup.
608 - Change the Priority, Client, Storage, JobStatus (error)
609 at the start of a job.
610 - Why is SpoolDirectory = /home/bacula/spool; not reported
611 as an error when writing a DVD?
612 - Make bootstrap file handle multiple MediaTypes (SD)
613 - Remove all old Device resource code in Dir and code to pass it
614 back in SD -- better, rework it to pass back device statistics.
615 - Check locking of resources -- be sure to lock devices where previously
616 resources were locked.
617 - The last part is left in the spool dir.
620 - In restore don't compare byte count on a raw device -- directory
621 entry does not contain bytes.
624 - Max Vols limit in Pool off by one?
625 - Implement Files/Bytes,... stats for restore job.
626 - Implement Total Bytes Written, ... for restore job.
627 - Despool attributes simultaneously with data in a separate
628 thread, rejoined at end of data spooling.
629 - Implement new Console commands to allow offlining/reserving drives,
630 and possibly manipulating the autochanger (much asked for).
631 - Add start/end date editing in messages (%t %T, %e?) ...
632 - Add ClientDefs similar to JobDefs.
633 - Print more info when bextract -p accepts a bad block.
634 - Fix FD JobType to be set before RunBeforeJob in FD.
635 - Look at adding full Volume and Pool information to a Volume
636 label so that bscan can get *all* the info.
637 - If the user puts "Purge Oldest Volume = yes" or "Recycle Oldest Volume = yes"
638 and there is only one volume in the pool, refuse to do it -- otherwise
639 he fills the Volume, then immediately starts reusing it.
640 - Implement copies and stripes.
641 - Add history file to console.
642 - Each file on tape creates a JobMedia record. Peter has 4 million
643 files spread over 10000 tape files and four tapes. A restore takes
644 16 hours to build the restore list.
645 - Add an option to check if the file size changed during backup.
646 - Make sure SD deletes spool files on error exit.
647 - Delete old spool files when SD starts.
648 - When labeling tapes, if you enter 000026, Bacula uses
649 the tape index rather than the Volume name 000026.
650 - Add offline tape command to Bacula console.
652 Enter MediaId or Volume name: 32
653 Enter new Volume name: DLT-20Dec04
654 Automatically selected Pool: Default
655 Connecting to Storage daemon DLTDrive at 192.168.68.104:9103 ...
656 Sending relabel command from "DLT-28Jun03" to "DLT-20Dec04" ...
657 block.c:552 Write error at 0:0 on device /dev/nst0. ERR=Bad file descriptor.
658 Error writing final EOF to tape. This tape may not be readable.
659 dev.c:1207 ioctl MTWEOF error on /dev/nst0. ERR=Permission denied.
660 askdir.c:219 NULL Volume name. This shouldn't happen!!!
661 3912 Failed to label Volume: ERR=dev.c:1207 ioctl MTWEOF error on /dev/nst0. ERR=Permission denied.
662 Label command failed for Volume DLT-20Dec04.
663 Do not forget to mount the drive!!!
664 - Bug: if a job is manually scheduled to run later, it does not appear
665 in any status report and cannot be cancelled.
667 ==== Keeping track of deleted/new files ====
668 - To mark files as deleted, run essentially a Verify to disk, and
669 when a file is found missing (MarkId != JobId), then create
670 a new File record with FileIndex == -1. This could be done
671 by the FD at the same time as the backup.
673 My "trick" for keeping track of deletions is the following.
674 Assuming the user turns on this option, after all the files
675 have been backed up, but before the job has terminated, the
676 FD will make a pass through all the files and send their
677 names to the DIR (*exactly* the same as what a Verify job
678 currently does). This will probably be done at the same
679 time the files are being sent to the SD avoiding a second
680 pass. The DIR will then compare that to what is stored in
681 the catalog. Any files in the catalog but not in what the
682 FD sent will receive a catalog File entry that indicates
683 that at that point in time the file was deleted. This
684 is either transmitted to the FD or simultaneously computed in
685 the FD, so that the FD can put a record on the tape that
686 indicates that the file has been deleted at this point.
687 A delete file entry could potentially be one with a FileIndex
688 of 0 or perhaps -1 (need to check if FileIndex is used for
689 some other thing as many of the Bacula fields are "overloaded"
692 During a restore, any file initially picked up by some
693 backup (Full, ...) then subsequently having a File entry
694 marked "delete" will be removed from the tree, so will not
695 be restored. If a file with the same name is later OK it
696 will be inserted in the tree -- this already happens. All
697 will be consistent except for possible changes during the
700 Since I'm on the subject, some of you may be wondering what
701 the utility of the in memory tree is if you are going to
702 restore everything (at least it comes up from time to time
703 on the list). Well, it is still *very* useful because it
704 allows only the last item found for a particular filename
705 (full path) to be entered into the tree, and thus if a file
706 is backed up 10 times, only the last copy will be restored.
707 I recently (last Friday) restored a complete directory, and
708 the Full and all the Differential and Incremental backups
709 spanned 3 Volumes. The first Volume was not even mounted
710 because all the files had been updated and hence backed up
711 since the Full backup was made. In this case, the tree
712 saved me a *lot* of time.
714 Make sure this information is stored on the tape too so
715 that it can be restored directly from the tape.
717 All the code (with the exception of formally generating and
718 saving the delete file entries) already exists in the Verify
719 Catalog command. It explicitly recognizes added/deleted files since
720 the last InitCatalog. It is more or less a "simple" matter of
721 taking that code and adapting it slightly to work for backups.
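The catalog comparison described above amounts to a diff of two sorted name lists: anything in the catalog that the FD did not send gets a deletion record. A sketch, using FileIndex -1 for the delete entry as suggested earlier (0 is also mooted); the function and parameter names are invented:

```c
/* Mark catalog files missing from the FD's list as deleted. */
#include <string.h>

#define DELETED_FILE_INDEX (-1)

/* Both lists sorted by name.  out_index[] receives, per catalog file,
 * its position in fd_sent[] or DELETED_FILE_INDEX if it vanished.
 * Returns the number of deletions detected. */
int mark_deleted(const char **catalog, int ncat,
                 const char **fd_sent, int nsent, int *out_index)
{
    int deleted = 0, j = 0;
    for (int i = 0; i < ncat; i++) {
        while (j < nsent && strcmp(fd_sent[j], catalog[i]) < 0)
            j++;                                /* advance past unrelated names */
        if (j < nsent && strcmp(fd_sent[j], catalog[i]) == 0) {
            out_index[i] = j;                   /* file still exists */
        } else {
            out_index[i] = DELETED_FILE_INDEX;  /* emit a deletion File record */
            deleted++;
        }
    }
    return deleted;
}
```

This is the same merge-style walk the Verify Catalog code already performs, which is why adapting it for backups is described above as "simple".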
723 Comments from Martin Simmons (I think they are all covered):
724 Ok, that should cover the basics. There are a few issues though:
726 - Restore will depend on the catalog. I think it is better to include the
727 extra data in the backup as well, so it can be seen by bscan and bextract.
729 - I'm not sure if it will preserve multiple hard links to the same inode. Or
730 maybe adding or removing links will cause the data to be dumped again?
732 - I'm not sure if it will handle renamed directories. Possibly it will work
733 by dumping the whole tree under a renamed directory?
735 - It remains to be seen how the backup performance of the DIR will be
736 affected when comparing the catalog for a large filesystem.
738 1. Use the current Director in-memory tree code (very fast), though it is
739 currently memory-resident. It probably could be paged.
741 2. Use some DB such as Berkeley DB or SQLite. SQLite is already compiled and
742 built for Win32, and it is something we could compile into the program.
744 3. Implement our own custom DB code.
746 Note, by appropriate use of Directives in the Director, we can dynamically
747 decide if the work is done in the Director or in the FD, and we can even
748 allow the user to choose.
750 === most recent accurate file backup/restore ===
751 Here is a sketch (i.e. more details must be filled in later) that I recently
752 made of an algorithm for doing Accurate Backup.
754 1. Dir informs FD that it is doing an Accurate backup and lookup done by
757 2. FD passes through the file system doing a normal backup based on normal
758 conditions, recording the names of all files and their attributes, and
759 indicating which files were backed up. This is very similar to what Verify
762 3. The Director receives the two lists of files at the end of the FD backup.
763 One, files backed up, and one files not backed up. It then looks up all the
764 files not backed up (using Verify style code).
766 4. The Dir sends the FD a list of:
767 a. Additional files to backup (based on user specified criteria, name, size
768 inode date, hash, ...).
771 5. Dir deletes list of files not backed up.
773 6. FD backs up the additional files, generates a list of those backed up, and sends
774 it to the Director, which adds it to the list of files backed up. The list
775 is now complete and current.
777 7. The FD generates delete records for all the files that were deleted and
780 8. The Dir deletes the previous CurrentBackup list, and then does a
781 transaction insert of the new list that it has.
783 9. The rest works as before ...
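The list bookkeeping in steps 2-7 can be sketched as follows. This is an
illustrative Python sketch only, not Bacula code; all function and variable
names are invented for this example.

```python
# Illustrative sketch (not Bacula code) of the list bookkeeping in the
# Accurate Backup algorithm above. All names here are invented.

def fd_scan(fs_files, is_changed):
    """Step 2: the FD walks the filesystem, backs up files meeting the
    normal conditions, and records which files were / were not backed up."""
    backed_up, not_backed_up = {}, {}
    for name, attrs in fs_files.items():
        (backed_up if is_changed(name, attrs) else not_backed_up)[name] = attrs
    return backed_up, not_backed_up

def dir_extra_files(not_backed_up, current_backup):
    """Steps 3-4: the Director looks up the skipped files in the
    CurrentBackup list and requests any that are new or whose recorded
    attributes differ (standing in for the user-specified criteria)."""
    return sorted(n for n, attrs in not_backed_up.items()
                  if current_backup.get(n) != attrs)

def fd_delete_records(fs_files, current_backup):
    """Step 7: delete records for files present in the previous
    CurrentBackup list but no longer on disk."""
    return sorted(set(current_backup) - set(fs_files))
```

The point of the sketch is that the FD never needs the whole catalog: it
sends two lists, and only the "not backed up" list is compared on the
Director side.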
Two new tables are needed.

1. A CurrentBackupId table that contains Client, JobName, FileSet, and a
   unique BackupId. This is created during a Full save, and the BackupId can
   be set to the JobId of the Full save. It will remain the same until
   another Full backup is done. That is, when new records are added during a
   Differential or Incremental, they must use the same BackupId.

2. A CurrentBackup table that contains essentially a File record (less a
   number of fields, but with a few extra fields) -- e.g. a flag that the
   File was backed up by a Full save (this permits doing a Differential).
   The unique BackupId allows us to look up the CurrentBackup for a
   particular Client, JobName, FileSet using that unique BackupId as the
   key, so this table must be indexed by the BackupId.

Note: any time a file is saved by the FD other than during a Full save, the
Full save flag is cleared. When doing a Differential backup, if a file has
the Full save flag set, it is skipped; otherwise it is backed up. For an
Incremental backup, we check to see if the file has changed since the last
time we backed it up.

Deleted files should have FileIndex == 0.
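A minimal sketch of the two tables, using SQLite purely for illustration.
Only Client/JobName/FileSet/BackupId, the Full-save flag, the FileIndex == 0
convention, and the BackupId index come from the description above; the
remaining column choices are assumptions.

```python
import sqlite3

# Sketch of the two proposed tables (SQLite used only for illustration;
# column set beyond what the text names is guessed).
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE CurrentBackupId (
    BackupId INTEGER PRIMARY KEY,  -- JobId of the Full save
    Client   TEXT,
    JobName  TEXT,
    FileSet  TEXT
);
CREATE TABLE CurrentBackup (
    BackupId  INTEGER,             -- same BackupId until the next Full
    FileIndex INTEGER,             -- 0 marks a deleted file
    Name      TEXT,
    FullSave  INTEGER              -- cleared when saved by a non-Full job
);
CREATE INDEX current_backup_idx ON CurrentBackup (BackupId);
""")

# A Differential would then back up only files whose Full-save flag
# has been cleared:
def differential_candidates(con, backup_id):
    cur = con.execute("SELECT Name FROM CurrentBackup "
                      "WHERE BackupId = ? AND FullSave = 0", (backup_id,))
    return [row[0] for row in cur]
```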
How about introducing a Type = MgmtPolicy job type? That job type would
be responsible for scanning the Bacula environment looking for specific
conditions, and submitting the appropriate jobs for implementing said

  Name = "Migration-Policy"
  Policy Selection Job Type = Migrate
  Scope = "<keyword> <operator> <regexp>"
  Threshold = "<keyword> <operator> <regexp>"
  Job Template = <template-name>

Where <keyword> is any legal job keyword, <operator> is a comparison
operator (=, <, >, !=, logical operators AND/OR/NOT) and <regexp> is an
appropriate regexp. I could see an argument for Scope and Threshold
being SQL queries if we want to support full flexibility. The
Migration-Policy job would then get scheduled as frequently as a site
felt necessary (suggested default: every 15 minutes).

  Name = "Migration-Policy"
  Policy Selection Job Type = Migration
  Threshold = "Migration Selection Type = LowestUtil"
  Job Template = "MigrationTemplate"

would select all pools for examination and generate a job based on
MigrationTemplate to automatically select the volume with the lowest
usage and migrate its contents to the NextPool defined for that pool.

This policy abstraction would be really handy for adjusting the behavior
of Bacula according to site-selectable criteria (one thing that pops
into mind is Amanda's ability to automatically adjust backup levels
depending on various criteria).
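What a "Migration Selection Type = LowestUtil" policy job might compute can
be sketched as follows. This is a hypothetical illustration, not an existing
Bacula feature; the pool/volume data structures are invented for the example.

```python
# Hypothetical sketch of the LowestUtil selection described above: scan
# all pools and pick the volume with the lowest utilization, which the
# policy job would then hand to a job built from MigrationTemplate.
# The dictionary shapes here are invented.

def lowest_util_volume(pools):
    """Return (pool_name, volume_name) for the least-utilized volume,
    or None if there are no volumes. pools maps pool name to a dict of
    volume name -> (used_bytes, capacity_bytes)."""
    best = None
    for pool, volumes in pools.items():
        for vol, (used, capacity) in volumes.items():
            util = used / capacity
            if best is None or util < best[0]:
                best = (util, pool, vol)
    return (best[1], best[2]) if best else None
```

A policy job scheduled every 15 minutes would call something like this and
submit a migration job for the selected volume's pool and its NextPool.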
- Add Pool/Storage override regression test.
- Add delete JobId to regression.
- Add a regression test for dbcheck.
- New test to add bscan to the four-concurrent-jobs regression,
  i.e. after the four concurrent jobs zap the
  database as is done in the bscan-test, then use bscan to
  restore the database, do a restore and compare with the
- Add restore of a specific JobId to regression (item 3
  on the restore prompt).
- Add IPv6 to regression.
- Add database test to regression. Test each function like delete,
- AntiVir can slow down backups on Win32 systems.
- Win32 systems with FAT32 can be much slower than NTFS for
  more than 1000 files per directory.

- A HOLD command to stop all jobs from starting.
- A PAUSE command to pause all running jobs ==> release the
- Media Type = LTO,LTO-2,LTO-3
  Media Type Read = LTO,LTO2,LTO3
  Media Type Write = LTO2, LTO3
=== From Carsten Menke <bootsy52@gmx.net>

Following is a list of what I think, in the situations I'm faced with,
could be useful enhancements to Bacula, which I'm certain other users
will benefit from as well.

1. NextJob/NextJobs Directive within a Job Resource in the form of
   NextJobs = job1,job2.

   I currently solve the problem of running multiple jobs one after the
   other by setting the Max Wait Time for a job to 8 hours and giving
   the jobs different Priorities. However, there are scenarios where
   one job directly depends on another job, so that if the former job
   fails, the job after it need not be run,
   while maybe other jobs should run despite that.

   A Backup Job and a Verify job: if the backup job fails there is no need
   to run the verify job, as the backup job already failed. However, one may
   like to back up the Catalog to disk despite the main backup job failing.

   I see that this is related to the Event Handlers which are on the ToDo
   list; also it is maybe a good idea to check the return value and
   execute different actions based on the return value.

3. Offline capability for bconsole

   Currently I use a script which I execute within the last Job via the
   RunAfterJob Directive, to release and eject the tape.
   So I have to call bconsole "release=Storage-Name" and afterwards
   mt -f /dev/nst0 eject to get the tape out.

   If I have multiple Storage Devices, then these may not be /dev/nst0 and
   I have to modify the script or call it with parameters etc.
   This would actually not be needed, as everything is already defined
   in bacula-sd.conf, and if I can invoke bconsole with the
   storage name via $1 in the script then I'm done and the information is

4. %s for Storage Name added to the chars being substituted in "RunAfterJob"

   For the reason mentioned in 3., to have the ability to call a
   script with /scripts/foobar %s and in the script use $1
   to pass the Storage Name to bconsole.

5. Setting Volume State within a Job Resource

   Instead of using "Maximum Volume Jobs" in the Pool Resource,
   I would have the possibility to define
   in a Job Resource that after this certain job is run, the Volume State
   should be set to "Volume State = Used"; this gives more flexibility (IMHO).

6. Localization of Bacula Messages

   Unfortunately many, many people I work with don't speak English very well.
   So if at least the reporting messages were localized, they
   would understand that they have to change the tape, etc.

   I volunteer to do the German translations, and if I can convince my wife,
   also French and Morre (a West African language).

7. OK, this is evil, probably bound to security risks and maybe not possible
   due to the design of Bacula.

   Implementation of backticks ( `command` ) for shell command execution in
   the "Label Format" Directive.

   Currently I have defined BACULA_DAY_OF_WEEK="day1|day2..." resulting in
   Label Format = "HolyBackup-${BACULA_DAY_OF_WEEK[${WeekDay}]}". If I could
   use backticks then I could use "Label Format = HolyBackup-`date +%A`" to
   have the localized name for the day of the week appended to the
   format string. Then I have the tape labeled automatically with the weekday
   name in the correct language.
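As a rough sketch of the requested behavior (this is not an existing Bacula
feature), expanding backticks inside a Label Format string could look like
the following; the function name is invented for this illustration.

```python
import re
import subprocess

def expand_backticks(label_format):
    """Replace each `command` in the format string with the command's
    stdout (trailing whitespace stripped) -- shell-style backtick
    expansion. Note the security concern raised above: this executes
    arbitrary shell commands from the configuration."""
    def run(match):
        out = subprocess.run(match.group(1), shell=True,
                             capture_output=True, text=True)
        return out.stdout.strip()
    return re.sub(r"`([^`]*)`", run, label_format)
```

With this, `expand_backticks("HolyBackup-`date +%A`")` would append the
locale-dependent weekday name, e.g. "HolyBackup-Montag" under a German
locale.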
- Make output from status use HTML table tags for nicely
  presenting in a browser.
- Can one write tapes faster with 8192 byte block sizes?
- Document security problems with the same password for everyone in
  rpm and Win32 releases.
- Browse generations of files.
- I've seen an error when my catalog's File table fills up. I
  then have to recreate the File table with a larger maximum row
  size. Relevant information is at
  http://dev.mysql.com/doc/mysql/en/Full_table.html ; I think the
  "Installing and Configuring MySQL" chapter should talk a bit
  about this potential problem, and recommend a solution.
- For Solaris, must use POSIX awk.
- Want speed of writing to tape while despooling.
- Supported autochanger:
  Wangtek 6525ES (SCSI-1 QIC drive, 525MB), under Linux 2.4.something,
  bacula 1.36.0/1 works with blocksize 16k INSIDE bacula-sd.conf.
- Add regex from http://www.pcre.org to Bacula for Win32.
- Use only shell tools, no make, in the CDROM package.
- Include within include: does it work?
- Implement a Pool of type Cleaning?
- Implement VolReadTime and VolWriteTime in SD.
- Modify Backing up Your Database to include a bootstrap file.
- Think about making certain database errors fatal.
- Look at correcting the time jump in the scheduler for daylight
  saving time changes.
- Add a "real" timer to network connections.
- Promote to Full = Time period.
- Check dates entered by user for correctness (month/day/... ranges).
- Compress restore Volume listing by date and first file.
- Look at patches/bacula_db.b2z postgresql that loops during restore.
- Perhaps add read/write programs and/or plugins to FileSets.
- How to handle backing up portables ...
- Add some sort of guaranteed Interval for upgrading jobs.
- Can we write the state file after every job terminates? On Win32
  the system crashes and the state file is not updated.
Documentation to do: (any release, a little bit at a time)
- Document doing an unmount before removing the magazine.
- Alternative to static linking: "ldd prog", save all binaries listed,
  restore them and point LD_LIBRARY_PATH to them.
- Document adding "</dev/null >/dev/null 2>&1" to the bacula-fd command line.
- Document the query file format.
- Add more documentation for bsr files.
- Document problems with Verify and pruning.
- Document how to use multiple databases.
- VXA drives have a "cleaning required"
  indicator, but Exabyte recommends preventive cleaning after every 75

  In this context, it should be noted that Exabyte has a command-line
  vxatool utility available for free download. (The current version is
  vxatool-3.72.) It can get diagnostic info, read, write and erase tapes,
  test the drive, unload tapes, change drive settings, flash new firmware,

  Of particular interest in this context is that vxatool <device> -i will
  report, among other details, the time since last cleaning in tape motion
  minutes. This information can be retrieved (and settings changed, for
  that matter) through the generic-SCSI device even when Bacula has the
  regular tape device locked. (Needless to say, I don't recommend
  changing tape settings while a job is running.)
- Look up HP cleaning recommendations.
- Look up HP tape replacement recommendations (see troubleshooting autochanger).
- Document doing table repair.
===================================
- Add macro expansions in JobDefs.
  Run Before Job = "SomeFile %{Level} %{Client}"
  Write Bootstrap = "/some/dir/%{JobName}_%{Client}.bsr"
- Use non-blocking network I/O, but if no data is available, use
- Use gather write() for network I/O.
- Autorestart on crash.
- Add bandwidth limiting.
- Add acks every once in a while from the SD to keep
  the line from timing out.
- When an error in input occurs and conio beeps, you can back
  up through the prompt.
- Detect fixed tape block mode during positioning by looking at
  block numbers in btape "test". Possibly adjust in Bacula.
- Fix list volumes to output volume retention in some other
  units, perhaps via a directive.
- Allow Simultaneous Priorities = yes => run up to Max concurrent jobs even
  with multiple priorities.
- If you use restore replace=never, the directory attributes for
  non-existent directories will not be restored properly.

- See lzma401.zip in the others directory for new compression
- Allow the user to select JobType for manual pruning/purging.
- bscan does not put the first of two volumes back with all info in
- Implement the FreeBSD nodump flag in chflags.
- Figure out how to make named console messages go only to that
  console and to the non-restricted console (new console class?).
- Make restricted console prompt for password if *ask* is set or
  perhaps if password is undefined.
- Implement "from ISO-date/time every x hours/days/weeks/months" in
==== from Marc Schoechlin
- the help command should be more verbose
  (it should explain the parameters of the different
  -> it's time-consuming to consult the manual any time
     you need a special parameter
  -> maybe it's easier to maintain this if the
     descriptions of those commands are outsourced to
- if the password is not configured in bconsole.conf
  you should be asked for it.
  -> sometimes you want to do a restore on a customer machine
     which shouldn't know the password for Bacula.
  -> adding the password to the file makes it easy for admins
     to forget to remove the password after usage
  the protection of that file is less important
- long listed output of commands should be scrollable
  like the Unix more/less commands do
  -> if someone runs 200 and more machines, the lists could
     be a little long and complex
- command output should be shown column by column
  to reduce scrolling and to increase clarity
- lsmark should list the selected files with full
- wildcards for selecting files and directories would be nice
- any action should be interruptible with Ctrl+C
- command expansion would be pretty cool
- When the replace Never option is set, new directory permissions
  are not restored. See bug 213. To fix this requires creating a
  list of newly restored directories so that those directory
  permissions *can* be restored.
- Add prune all command.
- Document the fact that purge can destroy a part of a restore by purging
  one volume while others remain valid -- perhaps mark Jobs.
- Add multiple-media-types.txt
- Look at mxt-changer.html
- Make ? do a help command (no return needed).
- Implement restore directory.
- Document streams and how to implement them.
- Try not to re-backup a file if a new hard link is added.
- Add feature to back up hard links only, but not the data.
- Fix stream handling to be simpler.
- Add Priority and Bootstrap to Run a Job.
- Eliminate Restore "Run Restore Job" prompt by allowing new "run command
- Remove View FileSet button from Run a Job dialog.
- Handle prompt for restore job at end of Restore command.
- Add display of total selected files to Restore window.
- Add tree pane to left of window.
- Add progress meter.
- Max wait time or max run time causes seg fault -- see runtime-bug.txt
- Add message to user to check for fixed block size when the forward
  space test fails in btape.
- When unmarking a directory, check if all files below are unmarked and
  then remove the + flag -- in the restore tree.
- Possibly implement: Action = Unmount Device="TapeDrive1" in Admin jobs.
- Setup lrrd graphs: (http://www.linpro.no/projects/lrrd/) Mike Acar.
- Revisit the question of multiple Volumes (disk) on a single device.
- Add a block copy option to bcopy.
- Finish work on Gnome restore GUI.
- Fix "llist jobid=xx" where no fileset or client exists.
- For each job type (Admin, Restore, ...) require only the really necessary
  fields.
- Pass Director resource name as an option to the Console.
- Add a "batch" mode to the Console (no unsolicited queries, ...).
- Add a .list all files in the restore tree (probably also a list all files).
  Do both a long and short form.
- Allow browsing the catalog to see all versions of a file (with
  stat data on each file).
- Restore attributes of directory if replace=never set but directory
- Use SHA1 on authentication if possible.
- See comtest-xxx.zip for Windows code to talk to USB.
- Add John's appended files:
  Appended = { /files/server/logs/http/*log }
  and such files would be treated as follows. On a FULL backup, they would
  be backed up like any other file. On an INCREMENTAL backup, where a
  previous INCREMENTAL or FULL was already in the catalogue and the length
  of the file was greater than the length of the last backup, only the data
  added since the last backup will be dumped. On an INCREMENTAL backup, if
  the length of the file is less than the length of the file with the same
  name last backed up, the complete file is dumped. On Windows systems, with
  creation date of files, we can be even smarter about this and not count
  entirely upon the length. On a restore, the full and all incrementals
  since it will be applied in sequence to restore the file.
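The rules above reduce to a small decision table. A minimal sketch (all
names invented; the "skip on unchanged length" case is an assumption, since
the text only covers the greater-than and less-than cases):

```python
# Sketch of the decision rules for "Appended" files described above.
# Level strings and action labels are invented for illustration.

def appended_file_action(level, cur_len, last_len):
    """Return (action, offset): what to dump for an Appended file."""
    if level == "FULL" or last_len is None:
        return ("dump_all", 0)          # backed up like any other file
    if cur_len > last_len:
        return ("dump_tail", last_len)  # only data added since last backup
    if cur_len < last_len:
        return ("dump_all", 0)          # file shrank: dump the complete file
    return ("skip", cur_len)            # unchanged length: assumed no-op
```

On restore, the full plus each incremental tail would be applied in sequence
at the recorded offsets to rebuild the file.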
- Check new HAVE_WIN32 open bits.
- Check if the tape has moved before writing.
- Handling removable disks -- see below:
- Keep track of tape use time, and report when cleaning is necessary.
- Add FromClient and ToClient keywords on restore command (or
  BackupClient RestoreClient).
- Implement a JobSet, which groups any number of jobs. If the
  JobSet is started, all the jobs are started together.
  Allow Pool, Level, and Schedule overrides.
- Enhance cancel to timeout BSOCK packets after a specific delay.
- Do scheduling by UTC using gmtime_r() in run_conf, scheduler, and
  ua_status. !!! Thanks to Alan Brown for this tip.
- Look at updating Volume Jobs so that Max Volume Jobs = 1 will work
  correctly for multiple simultaneous jobs.
- Correct code so that FileSet MD5 is calculated for < and | filename
- Implement the Media record flag that indicates that the Volume does disk
- Implement VolAddr, which is used when the Volume is addressed like a disk,
  and form it from VolFile and VolBlock.
- Make multiple restore jobs for multiple media types specifying
  the proper storage type.
- Fix fast block rejection (stored/read_record.c:118). It passes a null
  pointer (rec) to try_repositioning().
- Look at extracting Win data from BackupRead.
- Implement RestoreJobRetention? Maybe better "JobRetention" in a Job,
  which would take precedence over the Catalog "JobRetention".
- Implement Label Format in Add and Label console commands.
- Possibly up network buffers to 65K. Put on variable.
- Put email tape request delays on one or more variables. User wants
  to cancel the job after a certain time interval. Maximum Mount Wait?
- Job, Client, Device, Pool, or Volume?
  Is it possible to make this a directive which is *optional* in multiple
  resources, like Level? If so, I think I'd make it an optional directive
  in Job, Client, and Pool, with precedence such that Job overrides Client
  which in turn overrides Pool.
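The proposed precedence (Job overrides Client, which overrides Pool) is easy
to sketch. This is an illustrative Python sketch, with plain dictionaries
standing in for parsed conf resources; the function name is invented.

```python
# Sketch of the proposed optional-directive precedence: a value set in
# the Job resource wins over Client, which wins over Pool. Dictionaries
# stand in for parsed configuration resources.

def resolve_directive(name, job, client, pool):
    """Return the effective value of an optional directive, or None if
    it is not set in any of the three resources."""
    for resource in (job, client, pool):  # highest precedence first
        if name in resource:
            return resource[name]
    return None
```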
- New Storage specifications:
  - Want to write to multiple storage devices simultaneously
  - Want to write to multiple storage devices sequentially (in one job)
  - Want to read/write simultaneously
  - Key is MediaType -- it must match

  Passed to SD as a sort of BSR record called Storage Specification

  MediaType -> Next MediaType
  Device -> Next Device

  Allow multiple Storage specifications
  Allow multiple Pool specifications (note, Pool currently
  Allow multiple MediaType specifications in Dir conf
  Allow multiple Device specifications in Dir conf
  Perhaps keep this in a single SSR
  Tie a Volume to a specific device by using a MediaType that
  is contained in only one device.
  In SD allow Device to have multiple MediaTypes
- Ideas from Jerry Scharf:
  First let's point out some big pluses that bacula has for this
    more importantly it's active. Thank you so much for that
    even more important, it's not flaky
    it has an open access catalog, opening many possibilities
    it's pushing toward heterogeneous systems capability
  Macintosh file client
    macs are an interesting niche, but I fear a server is a rathole
  working bare iron recovery for windows
  the option for inc/diff backups not reset on fileset revision
    a) use both change and inode update time against base time
    b) do the full catalog check (expensive but accurate)
  sizing guide (how much system is needed to back up N systems/files)
  consultants on using bacula in building a disaster recovery system
  an integration guide
    or how to get at fancy things that one could do with bacula
  logwatch code for bacula logs (or similar)
  linux distro inclusion of bacula (brings good and bad, but necessary)
  win2k/XP server capability (icky but you asked)
  support for Oracle database ??
- Look at adding SQL server and Exchange support for Windows.
- Make dev->file and dev->block_num signed integers so that -1 can
  be an invalid value, which happens with BSR.
- Create VolAddr for disk files in place of VolFile and VolBlock. This
  is needed to properly specify ranges.
- Add progress of files/bytes to SD and FD.
- Print a warning message if FileId > 4 billion.
- Do a "messages" before the first prompt in Console.
- Client does not show busy during Estimate command.
- Implement Console mtx commands.
- Implement a Mount Command and an Unmount Command where
  the users could specify a system command to be performed
  to do the mount, after which Bacula could attempt to
  read the device. This is for removable media such as a CDROM.
  - Most likely, this mount command would be invoked explicitly
    by the user using the current Console "mount" and "unmount"
    commands -- the Storage Daemon would do the right thing
    depending on the exact nature of the device.
  - As with tape drives, when Bacula wanted a new removable
    disk mounted, it would unmount the old one, and send a message
    to the user, who would then use "mount" as described above
    once he had actually inserted the disk.
- Implement dump/print label to UA.
- Spool to disk only when the tape is full, then when a tape is hung move
- bextract is sending everything to the log file ****FIXME****
- Allow multiple Storage specifications (or multiple names on
  a single Storage specification) in the Job record. Thus a job
  can be backed up to a number of storage devices.
- Implement some way for the File daemon to contact the Director
  to start a job or pass its DHCP obtained IP number.
- Implement a query tape prompt/replace feature for a console.
- Copy console @ code to gnome2-console.
- Make sure that Bacula rechecks the tape after the 20 min wait.
- Set IO_NOWAIT on Bacula TCP/IP packets.
- Try doing a raw partition backup and restore by mounting a
- From Lars Kellers:
  Yes, it would allow highly automating the request for new tapes. If a
  tape is empty, bacula reads the barcodes (native or simulated), and if
  an unused tape is found, it runs the label command with all the
  necessary parameters.

  By the way, can bacula automatically "move" an empty/purged volume, say
  in the "short" pool, to the "long" pool if this pool runs out of volume
- What to do about "list files job=xxx".
- Look at how fuser works and /proc/PID/fd; that is how Nic found the
  file descriptor leak in Bacula.
- Implement WrapCounters in Counters.
- Add heartbeat from FD to SD if the hb interval expires.
- Can we dynamically change FileSets?
- If a pool is specified to the label command and Label Format is
  specified, automatically generate the Volume name.
- Why can't SQL do the filename sort for restore?
- Add ExhaustiveRestoreSearch.
- Look at the possibility of loading only the necessary
  data into the restore tree (i.e. do it one directory at a
  time as the user walks through the tree).
- Possibly use the hash code if the user selects all for a restore command.
- Fix "restore all" to bypass building the tree.
- Prohibit backing up the archive device (findlib/find_one.c:128).
- Implement Release Device in the Job resource to unmount a drive.
- Implement Acquire Device in the Job resource to mount a drive;
  be sure this works with admin jobs so that the user can get
  prompted to insert the correct tape. Possibly some way to say to
  run the job but don't save the files.
- Make things like list where a file is saved case independent for
- Use autochanger to handle multiple devices.
- Implement a Recycle command.
- Start working on Base jobs.
- Implement UnsavedFiles DB record.
- From Phil Stracchino:
  It would probably be a per-client option, and would be called
  something like, say, "Automatically purge obsoleted jobs". What it
  would do is, when you successfully complete a Differential backup of a
  client, it would automatically purge all Incremental backups for that
  client that are rendered redundant by that Differential. Likewise,
  when a Full backup on a client completed, it would automatically purge
  all Differential and Incremental jobs obsoleted by that Full backup.
  This would let people minimize the number of tapes they're keeping on
  hand without having to master the art of retention times.
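Which catalog jobs such an option would purge can be sketched as follows.
This is an illustrative Python sketch of Phil Stracchino's idea; the job
record fields and level codes (F/D/I) are invented for the example.

```python
# Sketch of "Automatically purge obsoleted jobs": a just-finished
# Differential obsoletes earlier Incrementals for the same client;
# a just-finished Full obsoletes earlier Differentials and Incrementals.
# Job-record field names and level codes here are invented.

def obsoleted_jobs(catalog_jobs, finished):
    """Return the jobs made redundant by the just-finished backup."""
    doomed = {"D": {"I"}, "F": {"D", "I"}}.get(finished["level"], set())
    return [j for j in catalog_jobs
            if j["client"] == finished["client"]
            and j["level"] in doomed
            and j["end"] < finished["end"]]
```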
- When doing a Backup, send all attributes back to the Director, who
  would then figure out what files have been deleted.
- Currently in mount.c:236 the SD simply creates a Volume. It should have
  explicit permission to do so. It should also mark the tape in error
  if there is an error.
- Cancel waiting for Client connect in SD if FD goes away.

- Implement timeout in response() when it should come quickly.
- Implement a Slot priority (loaded/not loaded).
- Implement "vacation" Incremental only saves.
- Implement create "FileSet"?
- Add prefixlinks to where or not where absolute links to FD.
- Issue message to mount a new tape before the rewind.
- Simplified client job initiation for portables.
- If SD cannot open a drive, make it periodically retry.
- Add more of the config info to the tape label.

- Refine SD waiting output:
    Device is being positioned
    >     Device is being positioned for append
    >     Device is being positioned to file x
- Figure out some way to estimate output size and to avoid splitting
  a backup across two Volumes -- this could be useful for writing CDROMs
  where you really prefer not to have it split -- not serious.
- Have SD compute MD5 or SHA1 and compare to what the FD computes.
- Make VolumeToCatalog calculate an MD5 or SHA1 from the
  actual data on the Volume and compare it.
- Make bcopy read through bad tape records.
- Program files (i.e. execute a program to read/write files).
  Pass read date of last backup, size of file last time.
- Add Signature type to File DB record.
- CD into subdirectory when open()ing files for backup to
  speed up things. Test with testfind().
- Priority job to go to top of list.
- Why are save/restore of device different sizes (sparse?) Yup! Fix it.
- Implement some way for the Console to dynamically create a job.
- Solaris -I on tar for include list.
- Need a verbose mode in restore, perhaps to bsr.
- bscan without -v is too quiet -- perhaps show jobs.
- Add code to reject whole blocks if not wanted on restore.
- Check if we can increase Bacula FD priority in Win2000.
- Make sure MaxVolFiles is fully implemented in the SD.
- Check if both CatalogFiles and UseCatalog are set to SD.
- Possibly add email to Watchdog if drive is unmounted too
  long and a job is waiting on the drive.
- After unmount, if restore job started, ask to mount.
- Add UA rc and history files.
- Put termcap (used by console) in ./configure and
  allow --with-termcap-dir.
- Fix Autoprune for Volumes to respect need for full save.
- Compare tape to Client files (attributes, or attributes and data).
- Make all database Ids 64 bit.
- Allow console commands to detach or run in background.
- Add SD message variables to control operator wait time:
  - Maximum Operator Wait
  - Minimum Message Interval
  - Maximum Message Interval
- Send Operator message when cannot read tape label.
- Verify level=Volume (scan only), level=Data (compare of data to file).
  Verify level=Catalog, level=InitCatalog

- Add keyword search to show command in Console.
- Events: tape has more than xxx bytes.
- Complete code in Bacula Resources -- this will permit
  reading a new config file at any time.
- Handle ctl-c in Console.
- Implement script driven addition of File daemon to config files.
- Think about how to make Bacula work better with File (non-tape) archives.
- Write Unix emulator for Windows.
- Put memory utilization in Status output of each daemon
  if full status requested or if some level of debug on.
- Make database type selectable by .conf files, i.e. at runtime.
- Set flag for uname -a. Add to Volume label.
- Restore files modified after date.
- SET LD_RUN_PATH=$HOME/mysql/lib/mysql
- Remove duplicate fields from jcr (e.g. jcr.level and jcr.jr.Level, ...).
- Timeout a job or terminate if link goes down, or reopen link and query.
- Concept of precious tapes (cannot be reused).
- Make bcopy copy with a single tape drive.
- Permit changing ownership during restore.
> My suggestion: Add a feature on the systray menu-icon menu to request
> an immediate backup now. This would be useful for laptop users who may
> not be on the network when the regular scheduled backup is run.

> My wife's suggestion: Add a setting to the win32 client to allow it to
> shut down the machine after backup is complete (after, of course,
> displaying a "System will shut down in one minute, click here to cancel"
> warning dialog). This would be useful for sites that want user
> workstations to be shut down overnight to save power.
1432 - Autolabel should be specified by DIR instead of SD.
1434 - Add media capacity
1435 - AutoScan (check checksum of tape)
1436 - Format command = "format /dev/nst0"
1440 - Seek resolution (usually corresponds to buffer size)
1441 - EODErrorCode=ENOSPC or code
1442 - Partial Read error code
1443 - Partial write error code
1444 - Nonformatted read error
1445 - Nonformatted write error
1446 - WriteProtected error
1450 - IgnoreCloseErrors=yes
1460 - FD sends unsaved file list to Director at end of job (see
1462 - File daemon should build list of files skipped, and then
1463 at end of save retry and report any errors.
1464 - Write a Storage daemon that uses pipes and
1465 standard Unix programs to write to the tape.
1467 - Need something that monitors the JCR queue and
1468 times out jobs by asking the deamons where they are.
1469 - Enhance Jmsg code to permit buffering and saving to disk.
1470 - device driver = "xxxx" for drives.
1471 - Verify from Volume
1472 - Ensure that /dev/null works
1473 - Need report class for messages. Perhaps
1474 report resource where report=group of messages
1475 - enhance scan_attrib and rename scan_jobtype, and
1476 fill in code for "since" option
1477 - Director needs a time after which the report status is sent
1478 anyway -- or better yet, a retry time for the job.
1479 - Don't reschedule a job if previous incarnation is still running.
1480 - Some way to automatically backup everything is needed????
1481 - Need a structure for pending actions:
1483 - termination status (part of buffered msgs?)
1485 Read, Write, Clean, Delete
1486 - Login to Bacula; Bacula users with different permissions:
1487 owner, group, user, quotas
1488 - Store info on each file system type (probably in the job header on tape).
1489 This could be the output of df; or perhaps some sort of /etc/mtab record.
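The /etc/mtab idea above might amount to recording, per mount point, the device and filesystem type at backup time. A toy sketch of parsing mtab-format lines (purely illustrative, not anything Bacula does):

```python
def parse_mtab(text):
    """Parse mtab/fstab-style lines into (device, mountpoint, fstype)
    tuples, skipping blank lines and comments."""
    records = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        fields = line.split()
        if len(fields) >= 3:
            records.append((fields[0], fields[1], fields[2]))
    return records
```

The resulting tuples could then be written into the job header alongside the df output.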
1491 ========= ideas ===============
1492 From: "Jerry K. Schieffer" <jerry@skylinetechnology.com>
1493 To: <kern@sibbald.com>
1494 Subject: RE: [Bacula-users] future large programming jobs
1495 Date: Thu, 26 Feb 2004 11:34:54 -0600
1497 I noticed the subject thread and thought I would offer the following
1498 merely as sources of ideas, i.e. something to think about, not even as
1499 strong as a request. In my former life (before retiring) I often
1500 dealt with backups and storage management issues/products as a
1501 developer and as a consultant. I am currently migrating my personal
1502 network from amanda to bacula specifically because of the ability to
1503 cross media boundaries during storing backups.
1504 Are you familiar with the commercial product called ADSM (I think IBM
1505 now sells it under the Tivoli label)? It has a couple of interesting
1506 ideas that may apply to the following topics.
1508 1. Migration: Consider that when you need to restore a system, there
1509 may be pressure to hurry. If all the information for a single client
1510 can eventually end up on the same media (and in chronological order),
1511 the restore is facilitated by not having to search past information
1512 from other clients. ADSM has the concept of "client affinity" that
1513 may be associated with its storage pools.  It seems to me that this
1514 concept (as an optional feature) might fit in your architecture for
1517 ADSM also has the concept of defining one or more storage pools as
1518 "copy pools" (almost mirrors, but only in the sense of contents).
1519 These pools provide the ability to have duplicate data stored both
1520 onsite and offsite. The copy process can be scheduled to be handled
1521 by their storage manager during periods when there is no backup
1522 activity. Again, the migration process might be a place to consider
1523 implementing something like this.
1526 > It strikes me that it would be very nice to be able to do things
1528 > have the Job(s) backing up the machines run, and once they have all
1529 > completed, start a migration job to copy the data from disks Volumes
1531 > a tape library and then to offsite storage. Maybe this can already be
1533 > done with some careful scheduling and Job prioritization; the events
1534 > mechanism described below would probably make it very easy.
1536 This is the goal. In the first step (before events), you simply
1538 the Migration to tape later.
1540 2. Base jobs: In ADSM, each copy of each stored file is tracked in
1541 the database. Once a file (unique by path and metadata such as dates,
1542 size, ownership, etc.) is in a copy pool, no more copies are made. In
1543 other words, when you start ADSM, it begins like your concept of a
1544 base job. After that it is in the "incremental" mode. You can
1545 configure the number of "generations" of files to be retained, plus a
1546 retention date after which even old generations are purged. The
1547 database tracks the contents of media and projects the percentage of
1548 each volume that is valid. When the valid content of a volume drops
1549 below a configured percentage, the valid data are migrated to another
1550 volume and the old volume is marked as empty. Note, this requires
1551 ADSM to have an idea of the contents of a client, i.e. marking the
1552 database when an existing file was deleted, but this would solve your
1553 issue of restoring a client without restoring deleted files.
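The reclamation rule described here (migrate a volume's remaining valid data once its valid content drops below a configured percentage) can be sketched as follows; the function name and the data shape are illustrative, not an existing Bacula interface:

```python
def volumes_to_reclaim(volumes, threshold=0.30):
    """Given {volname: (valid_bytes, total_bytes)}, return the volumes
    whose still-valid content has fallen below the threshold and whose
    remaining valid data should therefore be migrated elsewhere."""
    reclaim = []
    for name, (valid, total) in volumes.items():
        if total and valid / total < threshold:
            reclaim.append(name)
    return sorted(reclaim)
```

After migration the selected volumes would be marked empty and returned to the pool, as in the ADSM scheme described above.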
1555 This is pretty far from what bacula now does, but if you are going to
1556 rip things up for Base jobs,.....
1557 Also, the benefits of this are huge for very large shops, especially
1558 with media robots, but are a pain for shops with manual media
1562 > Base jobs sound pretty useful, but I'm not dying for them.
1564 Nobody is dying for them, but when you see what it does, you will die
1567 3. Restoring deleted files: Since I think my comments in (2) above
1568 have low probability of implementation, I'll also suggest that you
1569 could approach the issue of deleted files by a mechanism of having the
1570 fd report to the dir, a list of all files on the client for every
1571 backup job. The dir could note in the database entry for each file
1572 the date that the file was seen. Then if a restore as of date X takes
1573 place, only files that exist from before X until after X would be
1574 restored. Probably the major cost here is the extra date container in
1575 each row of the files table.
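A minimal sqlite sketch of the idea in (3): record the dates between which each file was seen, then restore "as of date X" by selecting only files whose observed lifetime spans X. The table and column names here are hypothetical, not Bacula's actual schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE file_seen (path TEXT, first_seen INTEGER, last_seen INTEGER)")
# 'a' was seen in every backup from day 1 through day 10;
# 'b' disappeared (was deleted on the client) after day 4.
con.executemany("INSERT INTO file_seen VALUES (?, ?, ?)",
                [("a", 1, 10), ("b", 1, 4)])

def restore_as_of(day):
    """Files that existed both before and after the restore point."""
    rows = con.execute(
        "SELECT path FROM file_seen"
        " WHERE first_seen <= ? AND last_seen >= ? ORDER BY path",
        (day, day))
    return [r[0] for r in rows]
```

A restore as of day 7 would then pick up 'a' but correctly skip the deleted 'b'; the cost is exactly the extra date column per row noted above.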
1577 Thanks for "listening". I hope some of this helps. If you want to
1578 contact me, please send me an email - I read some but not all of the
1579 mailing list traffic and might miss a reply there.
1581 Please accept my compliments for bacula. It is doing a great job for
1582 me!!  I sympathize with you in the need to wrestle with excellence in
1583 execution vs. excellence in feature inclusion.
1588 ==============================
1591 - Design a hierarchical storage scheme for Bacula. Migration and Clone.
1592 - Implement FSM (File System Modules).
1593 - Audit M_ error codes to ensure they are correct and consistent.
1594 - Add variable break characters to lex analyzer.
1595 Either a bit mask or a string of chars so that
1596 the caller can change the break characters.
1597 - Make a single T_BREAK to replace T_COMMA, etc.
1598 - Ensure that File daemon and Storage daemon can
1599 continue a save if the Director goes down (this
1600 is NOT currently the case). Must detect socket error,
1601 buffer messages for later.
1602 - Enhance time/duration input to allow multiple qualifiers e.g. 3d2h
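The multiple-qualifier duration syntax could be handled along these lines; a sketch only, not Bacula's actual scanner, and the accepted qualifiers (s, m, h, d, w) are an assumption:

```python
import re

# Seconds per qualifier -- assumed set, modeled on common duration syntax.
UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "w": 604800}

def parse_duration(text):
    """Parse a duration such as '3d2h' into seconds, accepting any
    sequence of number+qualifier pairs."""
    pairs = re.findall(r"(\d+)([smhdw])", text)
    # Reject input containing anything besides the matched pairs.
    if "".join(n + u for n, u in pairs) != text:
        raise ValueError("malformed duration: %r" % text)
    return sum(int(n) * UNITS[u] for n, u in pairs)
```

E.g. "3d2h" comes out as 3*86400 + 2*3600 seconds.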
1603 - Add ability to backup to two Storage devices (two SD sessions) at
1604 the same time -- e.g. onsite, offsite.
1605 - Compress or consolidate Volumes of old possibly deleted files. Perhaps
1606 some way to do so with every volume that has less than x% valid
1610 Migration: Move a backup from one Volume to another
1611 Clone: Copy a backup -- two Volumes
1614 ======================================================
1616 It is somewhat as if a Full save becomes an incremental, since
1617 the restore consists of the Base job (or jobs) plus the other non-base files.
1619 - A Base backup is same as Full backup, just different type.
1620 - New BaseFiles table that contains:
1622 BaseJobId - Base JobId referenced for this FileId (needed ???)
1623 JobId - JobId currently running
1624 FileId - File not backed up, exists in Base Job
1625 FileIndex - FileIndex from Base Job.
1626 i.e. for each base file that exists but is not saved because
1627 it has not changed, the File daemon sends the JobId, BaseId,
1628 FileId, FileIndex back to the Director who creates the DB entry.
1629 - To initiate a Base save, the Director sends the FD
1630 the FileId, and full filename for each file in the Base.
1631 - When the FD finds a Base file, he requests the Director to
1632 send him the full File entry (stat packet plus MD5), or
1633 conversely, the FD sends it to the Director and the Director
1634 says yes or no. This can be quite rapid if the FileId is kept
1635 by the FD for each Base Filename.
1636 - It is probably better to have the comparison done by the FD
1637 despite the fact that the File entry must be sent across the
1639 - An alternative would be to send the FD the whole File entry
1640 from the start. The disadvantage is that it requires a lot of
1641 space. The advantage is that it requires less communications
1643 - The Job record must be updated to indicate that one or more
1645 - At end of Job, FD returns:
1646 1. Count of base files/bytes not written to tape (i.e. matches)
1647 2. Count of base files that were saved, i.e. had changed.
1648 - No tape record would be written for a Base file that matches, in the
1649 same way that no tape record is written for Incremental jobs where
1650 the file is not saved because it is unchanged.
1651 - On a restore, all the Base file records must explicitly be
1652 found from the BaseFile table. I.e. for each Full save that is marked
1653 to have one or more Base Jobs, search the BaseFile for all occurrences
1655 - An optimization might be to make the BaseFile have:
1661 This would avoid the need to explicitly fetch each File record for
1662 the Base job. The Base Job record will be fetched to get the
1663 VolSessionId and VolSessionTime.
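The BaseFiles bookkeeping proposed above could look like this in SQL terms; a toy sqlite model built from the column list given earlier, not the actual schema:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE BaseFiles (
    BaseJobId INTEGER,  -- JobId of the referenced Base job
    JobId     INTEGER,  -- JobId of the job currently running
    FileId    INTEGER,  -- file unchanged, already present in the Base job
    FileIndex INTEGER   -- FileIndex from the Base job
)""")
# For each unchanged base file the FD reports, the Director inserts a row.
con.execute("INSERT INTO BaseFiles VALUES (12, 55, 1001, 7)")

def base_refs_for_job(jobid):
    """On restore, fetch every Base-file reference the job depends on."""
    rows = con.execute(
        "SELECT BaseJobId, FileId, FileIndex FROM BaseFiles"
        " WHERE JobId = ? ORDER BY FileIndex", (jobid,))
    return list(rows)
```

The restore path would walk these rows to pull the matching records out of the Base job instead of off the current job's tape.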
1664 =========================================================
1669 Multiple drive autochanger data: see Alan Brown
1670 > mtx -f xxx unload
Storage Element 1 is Already Full (drive 0 was empty)
1671 Unloading Data Transfer Element into Storage Element 1...source Element
1672 Address 480 is Empty
1674 (drive 0 was empty and so was slot 1)
1675 > mtx -f xxx load 15 0
1676 no response, just returns to the command prompt when complete.
1677 > mtx -f xxx status
Storage Changer /dev/changer:2 Drives, 60 Slots ( 2 Import/Export )
1678 Data Transfer Element 0:Full (Storage Element 15 Loaded):VolumeTag = HX001
1679 Data Transfer Element 1:Empty
1680 Storage Element 1:Empty
1681 Storage Element 2:Full :VolumeTag=HX002
1682 Storage Element 3:Full :VolumeTag=HX003
1683 Storage Element 4:Full :VolumeTag=HX004
1684 Storage Element 5:Full :VolumeTag=HX005
1685 Storage Element 6:Full :VolumeTag=HX006
1686 Storage Element 7:Full :VolumeTag=HX007
1687 Storage Element 8:Full :VolumeTag=HX008
1688 Storage Element 9:Full :VolumeTag=HX009
1689 Storage Element 10:Full :VolumeTag=HX010
1690 Storage Element 11:Empty
1691 Storage Element 12:Empty
1692 Storage Element 13:Empty
1693 Storage Element 14:Empty
1694 Storage Element 15:Empty
1695 Storage Element 16:Empty....
1696 Storage Element 28:Empty
1697 Storage Element 29:Full :VolumeTag=CLNU01L1
1698 Storage Element 30:Empty....
1699 Storage Element 57:Empty
1700 Storage Element 58:Full :VolumeTag=NEX261L2
1701 Storage Element 59 IMPORT/EXPORT:Empty
1702 Storage Element 60 IMPORT/EXPORT:Empty
1704 Unloading Data Transfer Element into Storage Element 15...done
1706 (just to verify it remembers where it came from, however it can be
1707 overridden with mtx unload {slotnumber} to go to any storage slot.)
1709 There needs to be a table of drive # to devices somewhere - If there are
1710 multiple changers or drives there may not be a 1:1 correspondence between
1711 changer drive number and system device name - and depending on the way the
1712 drives are hooked up to scsi busses, they may not be linearly numbered
1713 from an offset point either. Something like:
1715 Autochanger drives = 2
1716 Autochanger drive 0 = /dev/nst1
1717 Autochanger drive 1 = /dev/nst2
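For reference, Bacula's Storage daemon can express exactly this mapping with an Autochanger resource whose member Device resources each carry a Drive Index; a sketch only, with example device paths:

```
Autochanger {
  Name = "Changer-1"
  Changer Device = /dev/changer
  Changer Command = "/etc/bacula/mtx-changer %c %o %S %a %d"
  Device = Drive-0, Drive-1
}
Device {
  Name = Drive-0
  Drive Index = 0
  Archive Device = /dev/nst1
  Autochanger = yes
}
Device {
  Name = Drive-1
  Drive Index = 1
  Archive Device = /dev/nst2
  Autochanger = yes
}
```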
1718 IMHO, it would be _safest_ to use explicit mtx unload commands at all
1719 times, not just for multidrive changers. For a 1 drive changer, that's
1725 MTX's manpage (1.2.15):
1726 unload [<slotnum>] [ <drivenum> ]
1727 Unloads media from drive <drivenum> into slot
1728 <slotnum>. If <drivenum> is omitted, defaults to
1729 drive 0 (as do all commands). If <slotnum> is
1730 omitted, defaults to the slot that the drive was
1731 loaded from. Note that there's currently no way
1732 to say 'unload drive 1's media to the slot it
1733 came from', other than to explicitly use that
1734 slot number as the destination. -- AB
1740 # camcontrol devlist
1741 <WANGTEK 51000 SCSI M74H 12B3> at scbus0 target 2 lun 0 (pass0,sa0)
1742 <ARCHIVE 4586XX 28887-XXX 4BGD> at scbus0 target 4 lun 0 (pass1,sa1)
1743 <ARCHIVE 4586XX 28887-XXX 4BGD> at scbus0 target 4 lun 1 (pass2)
1745 tapeinfo -f /dev/sg0 with a bad tape in drive 1:
1746 [kern@rufus mtx-1.2.17kes]$ ./tapeinfo -f /dev/sg0
1747 Product Type: Tape Drive
1749 Product ID: 'C5713A '
1751 Attached Changer: No
1752 TapeAlert[3]: Hard Error: Uncorrectable read/write error.
1753 TapeAlert[20]:    Clean Now: The tape drive needs cleaning NOW.
1760 Medium Type: Not Loaded
1763 DataCompEnabled: yes
1764 DataCompCapable: yes
1765 DataDeCompEnabled: yes
1772 Handling removable disks
1774 From: Karl Cunningham <karlc@keckec.com>
1776 My backups are only to hard disk these days, in removable bays. This is my
1777 idea of how a backup to hard disk would work more smoothly. Some of these
1778 things Bacula does already, but I mention them for completeness. If others
1779 have better ways to do this, I'd like to hear about it.
1781 1. Accommodate several disks, rotated similar to how tapes are. Identified
1782 by partition volume ID or perhaps by the name of a subdirectory.
1783 2. Abort & notify the admin if the wrong disk is in the bay.
1784 3. Write backups to different subdirectories for each machine to be backed up.
1786 4. Volumes (files) get created as needed in the proper subdirectory, one
1788 5. When a disk is recycled, remove or zero all old backup files. This is
1789 important as the disk being recycled may be close to full. This may be
1790 better done manually since the backup files for many machines may be
1791 scattered in many subdirectories.
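Point 2 above (abort and notify if the wrong disk is in the bay) could be done by stamping each disk with an identity file and checking it before the job runs; everything here (file name, path, function) is illustrative:

```python
import os

def check_disk(mount_point, expected_id, id_file=".bacula-volume-id"):
    """Return True only if the mounted removable disk carries the
    expected identity stamp; the admin writes this file once per disk."""
    path = os.path.join(mount_point, id_file)
    try:
        with open(path) as f:
            return f.read().strip() == expected_id
    except OSError:
        return False  # no stamp at all: treat it as the wrong disk
```

A RunBeforeJob wrapper could call this and exit non-zero when the check fails, which makes Bacula cancel the job and mail the admin.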
1796 - Why the heck doesn't bacula drop root privileges before connecting to
1798 - Look at using posix_fadvise(2) for backups -- see bug #751.
1799 Possibly add the code at findlib/bfile.c:795
1800 /* TCP socket options */
1801 #define TCP_KEEPIDLE 4        /* Start keepalives after this period */
1802 - Fix bnet_connect() code to set a timer and to use time to
1804 - Implement 4th argument to make_catalog_backup that passes hostname.
1805 - Test FIFO backup/restore -- make regression
1806 - Please mount volume "xxx" on Storage device ... should also list
1807 Pool and MediaType in case user needs to create a new volume.
1808 - On restore add Restore Client, Original Client.
1809 01-Apr 00:42 rufus-dir: Start Backup JobId 55, Job=kernsave.2007-04-01_00.42.48
1810 01-Apr 00:42 rufus-sd: Python SD JobStart: JobId=55 Client=Rufus
1811 01-Apr 00:42 rufus-dir: Created new Volume "Full0001" in catalog.
1812 01-Apr 00:42 rufus-dir: Using Device "File"
1813 01-Apr 00:42 rufus-sd: kernsave.2007-04-01_00.42.48 Warning: Device "File" (/tmp) not configured to autolabel Volumes.
1814 01-Apr 00:42 rufus-sd: kernsave.2007-04-01_00.42.48 Warning: Device "File" (/tmp) not configured to autolabel Volumes.
1815 01-Apr 00:42 rufus-sd: Please mount Volume "Full0001" on Storage Device "File" (/tmp) for Job kernsave.2007-04-01_00.42.48
1816 01-Apr 00:44 rufus-sd: Wrote label to prelabeled Volume "Full0001" on device "File" (/tmp)
1817 - Check if gnome-console works with TLS.
1818 - the director seg faulted when I omitted the pool directive from a
1819   job resource. I was experimenting and thought it redundant that I had
1820   specified Pool, Full Backup Pool, and Differential Backup Pool, but
1821   apparently not. This happened when I removed the Pool directive and
1822   started the director.
1823 - Add Where: client:/.... to restore job report.
1824 - Ensure that moving a purged Volume in ua_purge.c to the RecyclePool
1825 does the right thing.
1826 - FD-SD quick disconnect
1827 - Building the in-memory restore tree is slow.
1828 - Abort if min_block_size > max_block_size
1829 - Add the ability to consolidate old backup sets (basically do a restore
1830 to tape and appropriately update the catalog). Compress Volume sets.
1831 Might need to spool via file if only one drive is available.
1832 - Why doesn't @"xxx abc" work in a conf file?
1833 - Don't restore Solaris Door files:
1834 #define S_IFDOOR in st_mode.
1835 see: http://docs.sun.com/app/docs/doc/816-5173/6mbb8ae23?a=view#indexterm-360
1836 - Figure out how to recycle Scratch volumes back to the Scratch Pool.
1837 - Implement Despooling data status.
1838 - Use E'xxx' to escape PostgreSQL strings.
1839 - Look at mincore: http://insights.oetiker.ch/linux/fadvise.html
1840 - Unicode input http://en.wikipedia.org/wiki/Byte_Order_Mark
1841 - Look at moving the Storage directive from the Job to the
1842 Pool in the default conf files.