Kern's ToDo List
23 March 2004

Documentation to do: (any release, a little bit at a time)
- DB upgrade to version 5 in bacula-1.27b; DB upgrade to version 6 in 1.31;
  DB upgrade to version 7 in 1.33/4.
- Document running a test version.
- Document query file format.
- Document static linking.
- Document problems with Verify and pruning.
- Document how to use multiple databases.
- For FreeBSD, the typical tape device is /dev/nrsa0 and for mtx /dev/pass1.
- VXA drives have a "cleaning required" indicator, but Exabyte recommends
  preventive cleaning after every 75 hours of operation.
  From Phil: In this context, it should be noted that Exabyte has a
  command-line vxatool utility available for free download. (The current
  version is vxatool-3.72.) It can get diagnostic info; read, write and
  erase tapes; test the drive; unload tapes; change drive settings; flash
  new firmware; etc. Of particular interest in this context is that
  "vxatool -i" will report, among other details, the time since last
  cleaning in tape motion minutes. This information can be retrieved (and
  settings changed, for that matter) through the generic-SCSI device even
  when Bacula has the regular tape device locked. (Needless to say, I
  don't recommend changing tape settings while a job is running.)
- Look up HP cleaning recommendations.
- Look up HP tape replacement recommendations (see troubleshooting
  autochanger).
- Create a man page for each binary (Debian package requirement).

Testing to do: (painful)
- Test drive polling!
- Test that ALL console command line options work and are always
  implemented.
- Test blocksize recognition code.
- Test whether rewind at end of tape waits for the tape to rewind.
- Test cancel at EOM.

For 1.33 Testing/Documentation:
- Document new Include/Exclude ...
- Add counter variable test.
- Document "ln -sf /usr/lib/libncurses.so /usr/lib/libtermcap.so" and
  installing the esound-dev package for compiling Console on SuSE. This
  should read 'Document LDFLAGS="-L/usr/lib/termcap" ...'
- Add an example of using a FIFO in dirdconf.wml.
- Add an item to the FAQ about running jobs in different timezones.
- Add some examples of job editing codes.
- Document Dan's new --with-dir-user, ... options. See userid.txt.
- Figure out how to use ssh or stunnel to protect Bacula communications.
  Add Dan's work to the manual. See ssl.txt.
- Add db check test to regression. Test each function such as delete,
  purge, ...
- Add subsections to the Disaster Recovery index section.
- Document the Pool keyword for restore.
- If you use restore replace=never, the directory attributes for
  non-existent directories will not be restored properly.
- In the Bacula User Guide you write: "Note, one major disadvantage of
  writing to an NFS mounted volume as I do is that if the other machine
  goes down, the OS will wait forever on the fopen() call that Bacula
  makes. As a consequence, Bacula will completely stall until the machine
  exporting the NFS mounts comes back up. If someone knows a way around
  this, please let me know." I haven't tried using NFS in years, but I
  think that the "soft" and "intr" mount options may well help you. The
  only way of being sure would be to try it. See, for example,
  http://howtos.linux.com/guides/nag2/x-087-2-nfs.mountd.shtml
- Add the following devices as working:
    Adic Scalar 100 DLT
    Adic Fastor 22 DLT (both HVD)
    Overland LoaderXpress LTO (LVD)
    Overland Neo2000 (LVD)

For 1.33:
- Complete Win32 installer.
- Finish work on Gnome restore GUI.
- On unknown client in restore "client=xxx":
    Could not find Client "Matou": ERR=Query failed: DROP TABLE temp1:
    ERR=no such table: temp1
- Implement multiple Volumes in "purge jobs volume=".
- Fix "llist jobid=xx" where no fileset or client exists.
- Build console in client-only build.
- Test Qmsg() code to be used in bnet.c to prevent recursion. Queue the
  message; if dequeuing, toss the messages. Lock while dequeuing so that
  it cannot be called recursively, and set a dequeuing flag.
- Finish work on conio.c -- particularly linking.
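To illustrate the "soft" and "intr" suggestion for the NFS stall item above, an NFS volume could be mounted so that I/O errors are returned instead of hanging forever. This is only an example fstab entry; the server name and paths are hypothetical, and whether it actually cures the fopen() hang would need to be tested:

```
# example /etc/fstab entry (server and mount point hypothetical)
fileserver:/export/bacula  /mnt/bacula  nfs  soft,intr  0 0
```

Note that "soft" trades the hang for possible I/O errors on a slow server, so it is not an unconditional win.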
- Phil says that Windows file sizes mismatch in Verify when they should
  not, and that either the file size or the catalog size was zero.
- Check time/dates printed during restore when using the Win32 API.

--- Maybe in 1.33
- From Chris Hull: it seems to be complaining about 12:00pm, which
  should be a valid 12-hour time. I changed the time to 11:59am and
  everything works fine. Also 12:00am works fine. 0:00pm also works
  (which I don't think should). None of the values 12:00pm - 12:59pm
  work, for that matter.
- Add level to the estimate command.
- Fix option 2 of restore -- list where a file is backed up -- require
  Client, then list the last 20 backups.
- Add all pools in the Dir conf to the DB; also update them to catch
  changed LabelFormats and such.
- "update volume FromPool" (or FromPool=xxx) refreshes the Volume
  defaults from the Pool.
- "update volumes FromPool=xxx" does all volumes.

For version 1.35:
- Add a .list of all files in the restore tree (probably also a list of
  all files). Do both a long and a short form.
- Allow browsing the catalog to see all versions of a file (with stat
  data on each file).
- Restore attributes of a directory if replace=never is set but the
  directory did not exist.
- Allow "delete job jobid=xxx,yyy,aaa-bbb", i.e. list + ranges.
- Use SHA1 on authentication if possible.
- See comtest-xxx.zip for Windows code to talk to USB.
- Make btape accept Device Names in addition to Archive names.
- Add Events and Perl scripting.
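The 12:00pm bug reported by Chris Hull above is the classic 12-hour clock edge case: 12am is hour 0, 12pm is hour 12, and all other pm hours get +12. A minimal sketch of the correct mapping (a hypothetical helper for illustration, not Bacula's actual scheduler parser):

```c
#include <assert.h>
#include <stdbool.h>

/* Convert a 12-hour clock hour to a 24-hour hour value.
 * 12am maps to 0, 12pm maps to 12, other pm hours get +12.
 * Rejects hour 0 (e.g. "0:00pm"), which is not valid on a
 * 12-hour clock even though some parsers accept it. */
int to_24_hour(int hour, bool pm)
{
    if (hour < 1 || hour > 12)
        return -1;              /* invalid on a 12-hour clock */
    if (hour == 12)
        return pm ? 12 : 0;     /* 12pm = noon, 12am = midnight */
    return pm ? hour + 12 : hour;
}
```

The failing 12:00pm - 12:59pm range suggests the scheduler was adding 12 to hour 12 as well, producing an out-of-range hour 24.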
- Add John's appended files:
    Appended = { /files/server/logs/http/*log }
  and such files would be treated as follows. On a FULL backup, they
  would be backed up like any other file. On an INCREMENTAL backup,
  where a previous INCREMENTAL or FULL was already in the catalogue and
  the length of the file was greater than the length of the last
  backup, only the data added since the last backup will be dumped. On
  an INCREMENTAL backup, if the length of the file is less than the
  length of the file with the same name last backed up, the complete
  file is dumped. On Windows systems, with the creation date of files,
  we can be even smarter about this and not count entirely upon the
  length. On a restore, the full and all incrementals since it will be
  applied in sequence to restore the file.
- Add a regression test for dbcheck.
- Add disk seeking on restore.
- Allow for optional cancelling of SD and FD in case DIR gets a fatal
  error. Requested by Jesse Guardiani.
- Bizarre message: Error: Could not open WriteBootstrap file:
- Build console in client-only build.
- Add "limit=n" for "list jobs".
- Check new HAVE_WIN32 open bits.
- Check if the tape has moved before writing.
- Handling removable disks -- see below.
- Multiple drive autochanger support -- see below.
- Keep track of tape use time, and report when cleaning is necessary.
- Fix FreeBSD mt_count problem.
- Add FromClient and ToClient keywords on the restore command (or
  BackupClient/RestoreClient).
- Automatic "update slots" on user configuration directive when a slot
  error occurs.
- Implement a JobSet, which groups any number of jobs. If the JobSet is
  started, all the jobs are started together. Allow Pool, Level, and
  Schedule overrides.
- Enhance cancel to timeout BSOCK packets after a specific delay.
- When I restore to Windows, the Created, Accessed and Modified times
  are those of the time of the restore, not those of the original file.
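The appended-files rules above boil down to a length comparison per INCREMENTAL. A sketch of that decision (hypothetical names; the equal-length SKIP case is my assumption, since the rules above only cover "greater than" and "less than"):

```c
#include <assert.h>
#include <stdint.h>

enum append_action {
    DUMP_WHOLE,   /* back up the complete file */
    DUMP_TAIL,    /* back up only bytes [prev_len, cur_len) */
    SKIP          /* length unchanged (assumed: nothing to do) */
};

/* Decide how to handle an "Appended" file on an INCREMENTAL backup.
 * prev_len < 0 means no previous FULL/INCREMENTAL is in the catalog.
 * If the file grew, dump only the added tail; if it shrank, the file
 * was replaced, so dump it whole. */
enum append_action append_decision(int64_t prev_len, int64_t cur_len)
{
    if (prev_len < 0)
        return DUMP_WHOLE;      /* nothing in catalog: treat as new */
    if (cur_len > prev_len)
        return DUMP_TAIL;       /* grew: dump appended data only */
    if (cur_len < prev_len)
        return DUMP_WHOLE;      /* shrank: complete file is dumped */
    return SKIP;
}
```

On restore, as the note says, the full plus each incremental tail would be applied in sequence to rebuild the file.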
  The dates you will find in your restore log seem to be the original
  creation dates.
- A Volume "add"ed to a Pool gets recycled on first use. VolBytes=0.
- Get rid of 0 dates in LastWritten, ...
- If a tape is recycled while it is mounted, Stanislav Tvrudy must do
  an additional mount to deblock the job.
- From Johan Decock:
    bscan: sql_update.c:65 UPDATE File SET MD5='Ij+5kwN6TFIxK+8l8+/I+A'
      WHERE FileId=0
    bscan: bscan.c:1074 Could not add MD5/SHA1 to File record.
      ERR=sql_update.c:65 Update problem: affected_rows=0
- Do scheduling by UTC using gmtime_r() in run_conf, scheduler, and
  ua_status!!! Thanks to Alan Brown for this tip.
- Look at updating Volume Jobs so that Max Volume Jobs = 1 will work
  correctly for multiple simultaneous jobs.
- Correct code so that the FileSet MD5 is calculated for < and |
  filename generation.
- Mark the Volume in error on an error from WEOF.
- Implement the Media record flag that indicates that the Volume does
  disk addressing.
- Implement VolAddr, which is used when the Volume is addressed like a
  disk, and form it from VolFile and VolBlock.
- Make multiple restore jobs for multiple media types, specifying the
  proper storage type.
- Implement MediaType keyword in bsr?
- Fix fast block rejection (stored/read_record.c:118). It passes a
  null pointer (rec) to try_repositioning().
- Look at extracting Win data from BackupRead.
- Having dashes in filenames apparently creates problems for restore
  by filename??? Hard to believe.
- Implement RestoreJobRetention? Maybe better "JobRetention" in a Job,
  which would take precedence over the Catalog "JobRetention".
- Implement Label Format in the Add and Label console commands.
- Possibly up network buffers to 65K. Put on a variable.
- Put email tape request delays on one or more variables. User wants
  to cancel the job after a certain time interval. Maximum Mount Wait?
- Job, Client, Device, Pool, or Volume? Is it possible to make this a
  directive which is *optional* in multiple resources, like Level?
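The VolAddr item above suggests collapsing VolFile and VolBlock into a single disk-style address. One plausible packing is file in the high 32 bits and block in the low 32 bits; this layout is an assumption for illustration, not Bacula's actual on-disk format:

```c
#include <assert.h>
#include <stdint.h>

/* Pack VolFile/VolBlock into one 64-bit VolAddr and unpack it again.
 * High 32 bits = file, low 32 bits = block (assumed layout).  A single
 * integer makes range comparisons for BSR records straightforward. */
uint64_t make_vol_addr(uint32_t vol_file, uint32_t vol_block)
{
    return ((uint64_t)vol_file << 32) | vol_block;
}

uint32_t addr_to_file(uint64_t addr)  { return (uint32_t)(addr >> 32); }
uint32_t addr_to_block(uint64_t addr) { return (uint32_t)(addr & 0xffffffffu); }
```

Because file is in the high bits, ordinary integer comparison of two VolAddrs orders them by (file, block), which is exactly what specifying ranges needs.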
  If so, I think I'd make it an optional directive in Job, Client, and
  Pool, with precedence such that Job overrides Client, which in turn
  overrides Pool.
- Print a message when a job starts if the conf file is not current.
- To pass Include, 1 or 2 letter commands:
    I Name       Include name - first record
    B Name       Base name - repeat
    R "xxx"      Regexp
    W "xxx"      Wild card
    E zzz        Exclude expression (wild card)
    P "plugin"   Plugin
    D "reader"   Reader program
    T "writer"   Writer program
    O Options    In current compressed format (compression, signature,
                 onefs, recurse, sparse, replace, verify options, ...)
    N            End option set
    B BaseName   Start second option set
    any letter ...
    E
    F Number     Number of filenames to follow
    B Name
    ...
    N            End option set
    F Number     Number of filenames to follow
    ...
- Spooling ideas taken from Volker Sauer's and others' emails:

  > IMHO job spooling should be turned on
  >  1) by job
  >  2) by schedule
  >  3) by sd
  > where 2) overrides 1) and 3) is independent.

  Yes, this is the minimum that I think is necessary.

  > Reason(s):
  > It should be switched by job, because the job that backs up the
  > machine with the bacula-sd on it doesn't need spooling.
  > It should be switched by schedule, because for full backups I don't
  > need spooling, so I can switch it off (because the network is
  > faster than the tape drive).

  True, with the exception that if you have enough disk spool space and
  you want to run concurrent jobs, spooling can eliminate the block
  interleaving restore inefficiencies.

  > And you should be able to turn it off by sd for sd-machines with
  > low disk capacity, or if you just don't need or want this feature.
  >
  > There should be:
  > - definitely the possibility for multiple spool directories

  Having multiple directories is no problem -- having different maximum
  sizes creates specification problems. At some point, I will probably
  have a common SD pool of spool directories as well as a set of
  private spool directories for each device.
  The first implementation will be a set of private spool directories
  for each device, since managing a global pool with a bunch of threads
  writing into the same directory is *much* more complicated and prone
  to error.

  > - the ability to spool parts of a backup (not the whole client)

  This may change in the future, but for the moment, it will spool
  either to a job high water mark, or until the directory is full
  (reaches max spool size or I/O error). It will then write to tape,
  truncate the spool file, and begin spooling again.

  > - spooling while writing to tape

  Not within a job, but yes, if you run concurrent jobs -- each is a
  different thread. Within a job could be a feature, but *much* later.

  > - parallel spooling (like parallel jobs / concurrent jobs) of clients

  Yes, this is one of my main motivations for doing it (aside from
  eliminating tape "shoe shine" during incremental backups).

  > - flushing a backup that only went to disk (like amflush in amanda)

  This will be a future feature, since spooling is different from
  backing up to disk. The future feature will be "migration", which
  will move a job from one backup Volume to another.

- New Storage specifications:
  Passed to the SD as a sort of BSR record called a Storage
  Specification Record or SSR.
    SSR
      Next      -> Next SSR
      MediaType -> Next MediaType
      Pool      -> Next Pool
      Device    -> Next Device
  Write a Copy Resource that makes a copy of a resource.
  Job Resource:
    Allow multiple Storage specifications.
    New flags:
      One Archive = yes
      One Device = yes
      One Storage = yes
      One MediaType = yes
      One Pool = yes
  Storage:
    Allow multiple Pool specifications (note, Pool is currently in the
      Job resource).
    Allow multiple MediaType specifications.
    Allow multiple Device specifications.
    Perhaps keep this in a single SSR.
  Tie a Volume to a specific device by using a MediaType that is
  contained in only one device.
  In the SD, allow a Device to have multiple MediaTypes.

After 1.33:
- Look at www.nu2.nu/pebuilder as a helper for full Windows bare metal
  restore.
- Ideas from Jerry Scharf:
  First let's point out some big pluses that bacula has for this:
    it's open source
    more importantly it's active. Thank you so much for that
    even more important, it's not flaky
    it has an open access catalog, opening many possibilities
    it's pushing toward heterogeneous systems capability
  simple things:
    I don't remember an include file directive for config files
      (not filesets, actual config directives)
    can you check the configs without starting the daemon?
    some warnings about possible common mistakes
  big things:
    doing the testing and blessing of concurrent backup writes
      this is absolutely necessary in the enterprise
    easy user recovery GUI with full access checking
    Macintosh file client
      macs are an interesting niche, but I fear a server is a rathole
    working bare iron recovery for windows
    much better handling of running config changes
      thinking through the logic of what happens to jobs in progress
    the option for inc/diff backups not reset on fileset revision
      a) use both change and inode update time against base time
      b) do the full catalog check (expensive but accurate)
    sizing guide (how much system is needed to back up N systems/files)
    consultants on using bacula in building a disaster recovery system
    an integration guide, or how to get at fancy things that one could
      do with bacula
    logwatch code for bacula logs (or similar)
    linux distro inclusion of bacula (brings good and bad, but necessary)
    win2k/XP server capability (icky but you asked)
    support for Oracle database ??
===
- Look at adding SQL server and Exchange support for Windows.
- Restore: entering the filename 'C:/Documents and
  Settings/Comercial/My Documents/MOP/formulário de registro BELAS
  ARTES.doc' causes Bacula to crash.
- Each DVD-RAM disk would be a volume, just like each tape is a volume.
  It's a 4.7GB medium with random access, but there's nothing about it
  that I can see that makes it so different from a tape from bacula's
  perspective.
  Why couldn't I back up to a bare floppy as a volume (ignoring the
  media capacity)?
- Make dev->file and dev->block_num signed integers so that -1 can be
  an invalid value, which happens with BSR.
- Create VolAddr for disk files in place of VolFile and VolBlock. This
  is needed to properly specify ranges.
- Print bsmtp output to the job report so that problems will be seen.
- Pass the number of files to be restored to the FD for reporting.
- Add progress of files/bytes to SD and FD.
- Don't continue Restore if no files are selected.
- Print a warning message if FileId > 4 billion.
- Do a "messages" before the first prompt in Console.
- Add a date and time stamp at the beginning of every line in the Job
  report (Volker Sauer).
- Client does not show busy during the Estimate command.
- Implement Console mtx commands.
- Add a default DB password to MySQL:
    GRANT all privileges ON bacula.* TO bacula@localhost
      IDENTIFIED BY 'bacula_password';
    FLUSH PRIVILEGES;
- Implement a Mount Command and an Unmount Command where the user could
  specify a system command to be performed to do the mount, after which
  Bacula could attempt to read the device. This is for removable media
  such as a CDROM.
  - Most likely, this mount command would be invoked explicitly by the
    user using the current Console "mount" and "unmount" commands --
    the Storage daemon would do the right thing depending on the exact
    nature of the device.
  - As with tape drives, when Bacula wanted a new removable disk
    mounted, it would unmount the old one and send a message to the
    user, who would then use "mount" as described above once he had
    actually inserted the disk.
- Implement dump/print label to UA.
- Implement disk spooling. Two parts:
    1. Spool to disk, then immediately to tape, to speed up tape
       operations.
    2. Spool to disk only when the tape is full; then when a tape is
       hung, move it to tape.
- Scratch Pool where the volumes can be re-assigned to any Pool.
- bextract is sending everything to the log file ****FIXME****
- Add a Progress command that periodically reports the progress of a
  job or all jobs.
- Restrict characters permitted in a Resource name, and don't permit
  duplicate names.
- Allow multiple Storage specifications (or multiple names on a single
  Storage specification) in the Job record. Thus a job can be backed
  up to a number of storage devices.
- Implement some way for the File daemon to contact the Director to
  start a job or pass its DHCP-obtained IP number.
- Implement multiple Consoles.
- Implement a query tape prompt/replace feature for a console.
- From Johan? Two jobs ready to go, first one blocked waiting for
  media. Cancel the 2nd job (the "waiting execution" one). Cancel the
  blocked job. Boom -- segfault.
- Copy console @ code to gnome2-console.
- Make AES the only encryption algorithm (see
  http://csrc.nist.gov/CryptoToolkit/aes/). It's an officially adopted
  standard, has survived peer review, and provides keys up to 256 bits.
- Think about how space could be freed up on a tape -- perhaps this is
  a Merge or Compact feature that is needed.
- Modifying a FileSet did not upgrade the current Incremental job, but
  waited for the next job to be upgraded.
- Take a careful look at SetACL http://setacl.sourceforge.net
- Implement a "where" command for the tree, telling where a file is
  located.
- Take a careful look at Level for the estimate command; maybe make it
  a command line option.
- Add the Volume name to "I cannot write on this volume because".
- Make tree walk routines like cd, ls, ... more user friendly by
  handling spaces better.
- Write your PID file and chown root:wheel before dropping privileges.
- Make sure there is no symlink in a file before creating a file
  (attack).
- Look at mktemp or mkstemp(3). mktemp and mkstemp create files with
  predictable names too. That's not the vulnerability.
  The vulnerability is in creating files without using the O_EXCL flag,
  which means "only create this file if it doesn't exist, including if
  the file is a dangling symlink." It is *NOT* enough to do the
  equivalent of
      if doesn't exist $filename
      then create $filename
  because between the test and the create another process could have
  gotten the CPU and created the file. You must use atomic functions
  (those that don't get interrupted by other processes), and O_EXCL is
  the only way for this particular example.
- Automatically create pools, but instead of looking for what is in
  Job records, walk through the pool resources.
- Check and double check tree code -- why does it take so long?
- Add the device name to "Current Volume not acceptable because ..."
- Make sure that Bacula rechecks the tape after the 20 min wait.
- Set IO_NOWAIT on Bacula TCP/IP packets.
- Try doing a raw partition backup and restore by mounting a Windows
  partition.
- From Lars Kellers: Yes, it would allow highly automating the request
  for new tapes. If a tape is empty, bacula reads the barcodes (native
  or simulated), and if an unused tape is found, it runs the label
  command with all the necessary parameters. By the way, can bacula
  automatically "move" an empty/purged volume, say, in the "short"
  pool to the "long" pool if this pool runs out of volume space?
- Eliminate orphaned jobs: dbcheck, normal pruning, delete job
  command. Hm.
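The O_EXCL discussion above can be sketched directly: with O_CREAT|O_EXCL the existence test and the create are a single atomic system call, so the check-then-create race (and the dangling-symlink attack) disappears. A minimal sketch, with a hypothetical helper name:

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

/* Create a file only if it does not already exist -- including when
 * the name is a dangling symlink.  O_CREAT|O_EXCL makes the test and
 * the create one atomic operation, so no other process can slip in
 * between them.  Returns an open fd, or -1 with errno == EEXIST if
 * something already sits at that name. */
int create_exclusive(const char *path)
{
    return open(path, O_WRONLY | O_CREAT | O_EXCL, 0600);
}
```

If the name is a symlink (even a dangling one), open() with O_EXCL fails with EEXIST instead of following the link, which is exactly the protection the note asks for.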
  Well, there are the remaining orphaned job records:

  |   105 | Llioness Save  | 0000-00-00 00:00:00 | B    | D     |        0 |             0 | f         |
  |   110 | Llioness Save  | 0000-00-00 00:00:00 | B    | I     |        0 |             0 | f         |
  |   115 | Llioness Save  | 2003-09-10 02:22:03 | B    | I     |        0 |             0 | A         |
  |   128 | Catalog Save   | 2003-09-11 03:53:32 | B    | I     |        0 |             0 | C         |
  |   131 | Catalog Save   | 0000-00-00 00:00:00 | B    | I     |        0 |             0 | f         |

  As you can see, three of the five are failures. I already deleted the
  one restore and one other failure using the by-client option.
  Deciding what is an orphaned job is a tricky problem though, I agree.
  All these records have or had 0 files / 0 bytes, except for the
  restore. With no files, of course, I don't know if the job ever
  actually becomes associated with a Volume. (I'm not sure if this is
  documented anywhere -- what are the meanings of all the possible
  JobStatus codes?)

  Looking at my database, it appears to me as though all the "orphaned"
  jobs fit into one of two categories:
    1) The Job record has a StartTime but no EndTime, and the job is
       not currently running; or
    2) The Job record has an EndTime, indicating that it completed, but
       it has no associated JobMedia record.

  This does suggest an approach. If failed jobs (or jobs that, for some
  other reason, write no files) are associated with a volume via a
  JobMedia record, then they should be purged when the associated
  volume is purged. I see two ways to handle jobs that are NOT
  associated with a specific volume:
    1) purge them automatically whenever any volume is manually purged;
       or
    2) add an option to the purge command to manually purge all jobs
       with no associated volume.

  I think Restore jobs also fall into category 2 above ....
  So one might want to make that "The Job record has an EndTime, but no
  associated JobMedia record, and is not a Restore job."
- Make "btape /tmp" work.
- Make sure a rescheduled job is properly reported by status.
- Walk through the Pool records rather than the Job records in dird.c
  to create/update pools.
- What to do about "list files job=xxx".
- When a job is rescheduled, status says it is waiting for Client Rufus
  to connect to Storage File. The Dir needs to inform the SD that the
  job is rescheduled.
- Make Dmsg look at the global before calling the subroutine.
- Enable trace output at runtime for Win32.
- Available volumes for autochangers (see patrick@baanboard.com 3 Sep
  03 and 4 Sep): scan slots.
- Get and test MySQL 4.0.
- Do a complete audit of all pthread mutexes, condition variables, ...
  to ensure that any that are dynamically initialized are destroyed
  when no longer used.
- Look at how fuser works and /proc/PID/fd -- that is how Nic found the
  file descriptor leak in Bacula.
- Implement WrapCounters in Counters.
- Turn on SIGHUP in dird.c and test.
- Use system dependent calls to get more precise info on tape errors.
- Add heartbeat from FD to SD if the heartbeat interval expires.
- Suppress the read error on a blank tape when doing a label.
- Can we dynamically change FileSets?
- If a pool is specified to the label command and Label Format is
  specified, automatically generate the Volume name.
- Why can't SQL do the filename sort for restore?
- Look at libkse (man kse) for FreeBSD threading.
- Look into the Microsoft Volume Shadow Copy Service (VSS) for backing
  up system state components (Active Directory, System Volume, ...).
- Add ExhaustiveRestoreSearch.
- Look at the possibility of loading only the necessary data into the
  restore tree (i.e. do it one directory at a time as the user walks
  through the tree).
- Possibly use the hash code if the user selects all for a restore
  command.
- Orphaned Dir buffer at parse_conf.c:373 => store_dir
- Fix "restore all" to bypass building the tree.
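The two orphan categories discussed above (plus the Restore-job refinement) can be stated as a small predicate. This is only a sketch of the classification rule; the boolean fields stand in for catalog lookups (StartTime/EndTime present, running-job check, JobMedia count) and are not Bacula's actual schema:

```c
#include <assert.h>
#include <stdbool.h>

/* Classify a Job record as "orphaned":
 * 1) it has a StartTime but no EndTime and is not currently running, or
 * 2) it has an EndTime (i.e. it completed) but no associated JobMedia
 *    record, and is not a Restore job ('R'), since Restores
 *    legitimately write no media. */
bool job_is_orphaned(bool has_start, bool has_end, bool running,
                     int jobmedia_count, char job_type)
{
    if (has_start && !has_end && !running)
        return true;                           /* category 1 */
    if (has_end && jobmedia_count == 0 && job_type != 'R')
        return true;                           /* category 2 */
    return false;
}
```

Either purge strategy from the note (purge on any manual volume purge, or an explicit purge option) could then walk the Job table applying this test.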
- Prohibit backing up the archive device (findlib/find_one.c:128).
- Implement Release Device in the Job resource to unmount a drive.
- Implement Acquire Device in the Job resource to mount a drive; be
  sure this works with admin jobs so that the user can get prompted to
  insert the correct tape. Possibly some way to say to run the job but
  don't save the files.
- Implement FileOptions (see end of this document).
- Make things like "list where a file is saved" case independent for
  Windows.
- Implement migrate.
- Bacula needs to propagate SD errors:
    cluster-dir: Start Backup JobId 252, Job=REUTERS.2003-08-11_15.04.12
    prod4-sd: REUTERS.2003-08-11_15.04.12 Error: Write error on device
      /dev/nst0. ERR=Input/output error.
    prod4-sd: REUTERS.2003-08-11_15.04.12 Error: Re-read of last block
      failed. Last block=5162 Current block=5164.
    prod4-sd: End of medium on Volume "REU007" Bytes=16,303,521,933
- Use the autochanger to handle multiple devices.
- Add SuSE install doc to list.
- Check and recheck "Invalid block number".
- Make bextract release the drive properly between tapes so that an
  autochanger can be made to work.
- User wants to NOT back up certain big files (email files).
- Maybe remove the multiple simultaneous devices code in the SD.
- On Windows with very long path names, it may be impossible to create
  a file (and thus restore it) because the total length is too long.
  We must cd into the directory, then create the file without the full
  path name.
- lstat() is not going to work on Win32 for testing dates.
- Implement a Recycle command.
- Add the client name to the cram-md5 challenge so the Director can
  immediately verify that it is the correct client.
- Add JobLevel in FD status (but make sure it is defined).
- Audit all UA commands to ensure that we always prompt where possible.
- Check Jmsg in bnet; it may not work -- must dup the bsock.
- Suppress Job Name in Jmsg for console.
- Create Pools that are referenced in a Run statement at startup if
  possible.
- Using runbeforejob to unload, then reload, a volume previously used,
  the next job run gets an error reading the drive.
- Make the bootstrap filename unique.
- Test a second language, e.g. French.
- Start working on Base jobs.
- Make "make binary-release" work from any directory.
- Implement UnsavedFiles DB record.
- Implement argc/argv for daemon command line scanning using the table
  driven stuff below.
- Implement a table driven single argc/argv scanner to pick up all
  arguments. Much like the xxx_conf.c scan table:
    keyword, handler(store_routine), store_address, code, flags, default.
- From Phil Stracchino: It would probably be a per-client option, and
  would be called something like, say, "Automatically purge obsoleted
  jobs". What it would do is, when you successfully complete a
  Differential backup of a client, it would automatically purge all
  Incremental backups for that client that are rendered redundant by
  that Differential. Likewise, when a Full backup on a client
  completed, it would automatically purge all Differential and
  Incremental jobs obsoleted by that Full backup. This would let people
  minimize the number of tapes they're keeping on hand without having
  to master the art of retention times.
- Implement a M_SECURITY message class.
- When doing a Backup, send all attributes back to the Director, who
  would then figure out what files have been deleted.
- Currently in mount.c:236 the SD simply creates a Volume. It should
  have explicit permission to do so. It should also mark the tape in
  error if there is an error.
- Make sure all restore counters are working correctly in the FD.
- SD Bytes Read is wrong.
- Look at ALL higher level routines that call block.c to be sure they
  don't expect something in errmsg.
- Investigate doing a RAW backup of a Win32 partition.
- Add thread specific data to hold the jcr -- send error messages from
  low level routines by accessing it and using Jmsg().
- Cancel waiting for Client connect in SD if the FD goes away.
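The table-driven scanner item above names the entry layout (keyword, handler, store address, code, flags, default). A minimal sketch of that table shape and its dispatch loop, modeled on the conf-file scan tables but with entirely hypothetical names, not actual Bacula code:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* One scan-table entry: keyword, store routine, destination,
 * plus code/flags/default, per the layout suggested above. */
struct arg_item {
    const char *keyword;
    void      (*handler)(void *store, const char *value);
    void       *store;
    int         code;
    int         flags;
    const char *deflt;
};

/* Example store routine: copy a string value into a fixed buffer. */
static void store_str(void *store, const char *value)
{
    char *dst = (char *)store;
    strncpy(dst, value, 255);
    dst[255] = '\0';
}

/* Look up a keyword in a NULL-terminated table and dispatch its
 * handler on the value.  Returns 1 on a match, 0 if unknown. */
int scan_arg(struct arg_item *tbl, const char *keyword, const char *value)
{
    for (int i = 0; tbl[i].keyword; i++) {
        if (strcmp(tbl[i].keyword, keyword) == 0) {
            tbl[i].handler(tbl[i].store, value);
            return 1;
        }
    }
    return 0;
}
```

Each daemon would then declare one static table of its options and share a single scanner, exactly as the conf-file code shares its store routines.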
- Examine Bare Metal restore problem (a FD crash exists somewhere ...).
- Implement timeout in response() when it should come quickly.
- Implement console @echo command.
- Implement a Slot priority (loaded/not loaded).
- Implement "vacation" Incremental-only saves.
- Implement single pane restore (much like the Gftp panes).
- Implement Automatic Mount even in operator wait.
- Implement create "FileSet"?
- Fix watchdog pthread crash on Win32 (this is a pthread_kill() Cygwin
  bug).
- Implement "scratch pool" where tapes are defined and can be taken by
  any pool that needs them.
- Implement restore "current system", but take all files without doing
  the selection tree -- so that jobs without File records can be
  restored.
- Add prefixlinks to where or not where absolute links to FD.
- Issue a message to mount a new tape before the rewind.
- Simplified client job initiation for portables.
- If the SD cannot open a drive, make it periodically retry.
- Add more of the config info to the tape label.
- If a tape is marked read-only, then try opening it read-only rather
  than failing, and remember that it cannot be written.
- Refine SD waiting output:
    Device is being positioned
    Device is being positioned for append
    Device is being positioned to file x
- Figure out some way to estimate output size and to avoid splitting a
  backup across two Volumes -- this could be useful for writing CDROMs
  where you really prefer not to have it split -- not serious.
- Have the SD compute MD5 or SHA1 and compare to what the FD computes.
- Make VolumeToCatalog calculate an MD5 or SHA1 from the actual data on
  the Volume and compare it.
- Implement Bacula plugins -- design API.
- Make bcopy read through bad tape records.
- Program files (i.e. execute a program to read/write files). Pass read
  date of last backup, size of file last time.
- Add Signature type to the File DB record.
- CD into subdirectory when open()ing files for backup to speed things
  up. Test with testfind().
- Priority job to go to top of list.
- Why are save/restore of device different sizes (sparse?) Yup! Fix it.
- Implement some way for the Console to dynamically create a job.
- Restore to a particular time -- e.g. before date, after date.
- Solaris -I on tar for include list.
- Need a verbose mode in restore, perhaps to bsr.
- bscan without -v is too quiet -- perhaps show jobs.
- Add code to reject whole blocks if not wanted on restore.
- Check if we can increase Bacula FD priority in Win2000.
- Make sure the MaxVolFiles is fully implemented in SD.
- Check if both CatalogFiles and UseCatalog are set to SD.
- Figure out how to do a bare metal Windows restore.
- Possibly add email to Watchdog if a drive is unmounted too long and a
  job is waiting on the drive.
- Restore program that errs in SD due to no tape reports OK incorrectly
  in output.
- After unmount, if a restore job started, ask to mount.
- Convert all %x substitution variables, which are hard to remember and
  read, to %(variable-name). Idea from TMDA.
- Remove NextId for SQLite. Optimize.
- Move all SQL statements into a single location.
- Add UA rc and history files.
- Put termcap (used by console) in ./configure and allow
  --with-termcap-dir.
- Fix Autoprune for Volumes to respect the need for a full save.
- Fix Win32 config file definition name on /install.
- Compare tape to Client files (attributes, or attributes and data).
- Make all database Ids 64 bit.
- Write an applet for Linux.
- Allow console commands to detach or run in background.
- Fix status delay on storage daemon during rewind.
- Add SD message variables to control operator wait time:
    Maximum Operator Wait
    Minimum Message Interval
    Maximum Message Interval
- Send an Operator message when the tape label cannot be read.
- Verify level=Volume (scan only), level=Data (compare of data to
  file). Verify level=Catalog, level=InitCatalog.
- Events file.
- Add keyword search to the show command in Console.
- Events: tape has more than xxx bytes.
- Complete code in Bacula Resources -- this will permit reading a new
  config file at any time.
- Handle ctl-c in Console.
- Implement script driven addition of File daemon to config files.
- Think about how to make Bacula work better with File (non-tape)
  archives.
- Write a Unix emulator for Windows.
- Put memory utilization in Status output of each daemon if full status
  is requested or if some level of debug is on.
- Make database type selectable by .conf files, i.e. at runtime.
- Set a flag for uname -a. Add to Volume label.
- Implement throttled work queue.
- Restore files modified after date.
- SET LD_RUN_PATH=$HOME/mysql/lib/mysql
- Implement Restore FileSet=.
- Create a protocol.h and protocol.c where all protocol messages are
  concentrated.
- Remove duplicate fields from jcr (e.g. jcr.level and jcr.jr.Level,
  ...).
- Timeout a job or terminate if the link goes down, or reopen the link
  and query.
- Concept of precious tapes (cannot be reused).
- Make bcopy copy with a single tape drive.
- Permit changing ownership during restore.
- From Phil:
  > My suggestion: Add a feature on the systray menu-icon menu to
  > request an immediate backup now. This would be useful for laptop
  > users who may not be on the network when the regular scheduled
  > backup is run.
  >
  > My wife's suggestion: Add a setting to the win32 client to allow it
  > to shut down the machine after backup is complete (after, of
  > course, displaying a "System will shut down in one minute, click
  > here to cancel" warning dialog). This would be useful for sites
  > that want user workstations to be shut down overnight to save
  > power.
- From Terry Manderson:

    jobdefs {  # new structure
      name = "monthlyUnixBoxen"
      type = backup
      level = full
      schedule = monthly
      storage = DLT
      messages = Standard
      pool = MonthlyPool
      priority = 10
    }

    job {
      name = "wakame"
      jobdefs = "genericUnixSet"
      client = wakame-fd
    }

    job {
      name = "durian"
      jobdefs = "genericUnixSet"
      client = durian-fd
    }

    job {
      name = "soy"
      jobdefs = "UnixDevelBoxSet"
      client = soy-fd
    }

- Autolabel should be specified by DIR instead of SD.
- Storage daemon:
  - Add media capacity
  - AutoScan (check checksum of tape)
  - Format command = "format /dev/nst0"
  - MaxRewindTime
  - MinRewindTime
  - MaxBufferSize
  - Seek resolution (usually corresponds to buffer size)
  - EODErrorCode=ENOSPC or code
  - Partial Read error code
  - Partial write error code
  - Nonformatted read error
  - Nonformatted write error
  - WriteProtected error
  - IOTimeout
  - OpenRetries
  - OpenTimeout
  - IgnoreCloseErrors=yes
  - Tape=yes
  - NoRewind=yes
- Pool:
  - Maxwrites
  - Recycle period
- Job:
  - MaxWarnings
  - MaxErrors (job?)
=====
- FD sends unsaved file list to Director at end of job (see RFC below).
- File daemon should build list of files skipped, and then at end of
  save retry and report any errors.
- Write a Storage daemon that uses pipes and standard Unix programs
  to write to the tape. See afbackup.
- Need something that monitors the JCR queue and times out jobs by
  asking the daemons where they are.
- Enhance Jmsg code to permit buffering and saving to disk.
- device driver = "xxxx" for drives.
- Verify from Volume
- Ensure that /dev/null works
- Need report class for messages. Perhaps report resource where
  report=group of messages
- Enhance scan_attrib and rename scan_jobtype, and fill in code for
  "since" option
- Director needs a time after which the report status is sent
  anyway -- or better yet, a retry time for the job.
- Don't reschedule a job if previous incarnation is still running.
- Some way to automatically backup everything is needed????
- Need a structure for pending actions:
  - buffered messages
  - termination status (part of buffered msgs?)
- Drive management: Read, Write, Clean, Delete
- Login to Bacula; Bacula users with different permissions:
  owner, group, user, quotas
- Store info on each file system type (probably in the job header on
  tape). This could be the output of df; or perhaps some sort of
  /etc/mtab record.

Longer term to do:
- Design a hierarchical storage for Bacula. Migration and Clone.
- Implement FSM (File System Modules).
- Audit M_ error codes to ensure they are correct and consistent.
- Add variable break characters to lex analyzer. Either a bit mask
  or a string of chars so that the caller can change the break
  characters.
- Make a single T_BREAK to replace T_COMMA, etc.
- Ensure that File daemon and Storage daemon can continue a save if
  the Director goes down (this is NOT currently the case). Must detect
  socket error, buffer messages for later.
- Enhance time/duration input to allow multiple qualifiers e.g. 3d2h
- Add ability to backup to two Storage devices (two SD sessions) at
  the same time -- e.g. onsite, offsite.
- Add the ability to consolidate old backup sets (basically do a
  restore to tape and appropriately update the catalog). Compress
  Volume sets. Might need to spool via file if only one drive is
  available.
- Compress or consolidate Volumes of old possibly deleted files.
  Perhaps some way to do so with every volume that has less than x%
  valid files.

  Migration: Move a backup from one Volume to another
  Clone:     Copy a backup -- two Volumes

  Bacula Migration is based on Jobs (apparently Networker is file
  by file).

  Migration triggered by:
    Number of Jobs
    Number of Volumes
    Age of Jobs
    Highwater mark (keep total size)
    Lowwater mark

======================================================
Base Jobs design
It is somewhat like a Full save becomes an incremental since the Base
job (or jobs) plus other non-base files.
Need:
- A Base backup is same as Full backup, just different type.
- New BaseFiles table that contains:
    BaseId    - index
    BaseJobId - Base JobId referenced for this FileId (needed ???)
    JobId     - JobId currently running
    FileId    - File not backed up, exists in Base Job
    FileIndex - FileIndex from Base Job.
  i.e. for each base file that exists but is not saved because it has
  not changed, the File daemon sends the JobId, BaseId, FileId,
  FileIndex back to the Director who creates the DB entry.
- To initiate a Base save, the Director sends the FD the FileId, and
  full filename for each file in the Base.
- When the FD finds a Base file, he requests the Director to send him
  the full File entry (stat packet plus MD5), or conversely, the FD
  sends it to the Director and the Director says yes or no. This can
  be quite rapid if the FileId is kept by the FD for each Base
  Filename.
- It is probably better to have the comparison done by the FD despite
  the fact that the File entry must be sent across the network.
- An alternative would be to send the FD the whole File entry from the
  start. The disadvantage is that it requires a lot of space. The
  advantage is that it requires less communications during the save.
- The Job record must be updated to indicate that one or more Bases
  were used.
- At end of Job, FD returns:
    1. Count of base files/bytes not written to tape (i.e. matches)
    2. Count of base files that were saved i.e. had changed.
- No tape record would be written for a Base file that matches, in the
  same way that no tape record is written for Incremental jobs where
  the file is not saved because it is unchanged.
- On a restore, all the Base file records must explicitly be found
  from the BaseFile table. I.e. for each Full save that is marked to
  have one or more Base Jobs, search the BaseFile for all occurrences
  of JobId.
- An optimization might be to make the BaseFile have:
      JobId
      BaseId
      FileId
    plus FileIndex
  This would avoid the need to explicitly fetch each File record for
  the Base job.
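The FD-side comparison described above can be sketched as follows. All names here (the dictionary layout, decide_file) are hypothetical illustrations of the idea, not Bacula's actual data structures; the real File entry is a stat packet plus an MD5 digest.

```python
# Hypothetical sketch: the Director sends the FD a catalog of Base
# files at job start; for each file found during the save, the FD
# decides whether it matches the Base (report ids, write no tape
# record) or must be saved.

# filename -> (FileId, FileIndex, digest of stat+MD5)
base_files = {
    "/etc/hosts": (101, 1, "d41d8cd9"),
    "/etc/motd":  (102, 2, "aa11bb22"),
}

def decide_file(filename, current_digest):
    """Return ('match', (FileId, FileIndex)) if the file is unchanged
    relative to the Base job, else ('save', None)."""
    entry = base_files.get(filename)
    if entry and entry[2] == current_digest:
        file_id, file_index, _ = entry
        return ("match", (file_id, file_index))
    return ("save", None)
```

Keeping the digests in an FD-side map is what makes the per-file decision rapid, at the cost of sending the File entries across the network once at job start.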
The Base Job record will be fetched to get the VolSessionId and
VolSessionTime.
=========================================================

=============================================================
Request For Comments For File Backup Options
10 November 2002
Subject: File Backup Options

Problem:
  A few days ago, a Bacula user who is backing up to file volumes and
  using compression asked if it was possible to suppress compressing
  all .gz files since it was a waste of CPU time. Although Bacula
  currently permits using different options (compression, ...) on a
  directory by directory basis, it cannot do it on a file by file
  basis, which is clearly what was desired.

Proposed Implementation:
  To solve this problem, I propose the following:
  - Add a new Director resource type called Options.
  - The Options resource will have records for all options that can
    currently be specified on the Include record (in a FileSet).
    Examples below.
  - The Options resource will permit an exclude option as well as a
    number of additional options.
  - The heart of the Options resource is the ability to supply any
    number of Match records which specify POSIX regular expressions.
    These Match regular expressions are applied to the fully qualified
    filename (path and all). If one matches, then the Options will be
    used.
  - When a Match specification matches an included file, the options
    specified in the Options resource will override the default
    options specified on the Include record.
  - Include records will be modified to permit referencing one or more
    Options resources. The Options will be used in the order listed on
    the Include record and the first one that matches will be applied.
  - Options (or specifications) currently supplied on the Include
    record will be deprecated (i.e. removed in a later version a year
    or so from now).
  - The Exclude record will be deprecated as the same functionality
    can be obtained by using an Exclude = yes in the Options.
Options records:
  The following records can appear in the Options resource. An
  asterisk preceding the name indicates a feature not currently
  implemented.

  - Regexp "xxx" - Match regular expression
  - Wild "xxx"   - Do a wild card match

  For Backup Jobs:
  - Compression= (GZIP, ...)
  - Signature= (MD5, SHA1, ...)
  - *Encryption=
  - OneFs= (yes/no)     - remain on one filesystem
  - Recurse= (yes/no)   - recurse into subdirectories
  - Sparse= (yes/no)    - do sparse file backup
  - *Exclude= (yes/no)  - exclude file from being saved
  - *Reader= (filename) - external read (backup) program
  - *Plugin= (filename) - read/write plugin module
  - Include= (yes/no)   - Include the file matched; no additional
                          patterns are applied.

  For Verify Jobs:
  - verify= (ipnougsamc5) - verify options

  For Restore Jobs:
  - replace= (always/ifnewer/ifolder/never) - replace options
    currently implemented in 1.31
  - *Writer= (filename) - external write (restore) program

Implementation:
  Currently options specifying compression, MD5 signatures,
  recursion, ... of a FileSet are supplied on the Include record.
  These will now all be collected into an Options resource, which will
  be specified in the Include in place of the options. Multiple
  Options may be specified. Since the Options may contain regular
  expressions that are applied to the full filename, this will give
  the ability to specify backup options on a file by file basis to
  whatever level of detail you wish.

Example:

  Today:

    FileSet {
      Name = "FullSet"
      Include = compression=GZIP signature=MD5 {
        /
      }
    }

  Proposal:

    FileSet {
      Name = "FullSet"
      Include {
        Compression = GZIP;
        Signature = MD5
        Wild = /*.?*/   # matches all files.
        File = /
      }
    }

  That's a lot more to do the same thing, but it gives the ability to
  apply options on a file by file basis. For example, suppose you want
  to compress all files but not any file with extensions .gz or .Z. In
  that case, you will need to group two sets of options using the
  Options resource. Files may be anywhere except in an option set???
  All OptionSets apply to all files in the order the OptionSets were
  specified. To have files included with different option sets without
  using wild-cards, use two or more Includes -- each one is handled in
  turn using only the files and optionsets specified in the include.

    FileSet {
      Name = "FullSet"
      Include {
        OptionSet {
          Signature = MD5
          # Note, multiple Matches are ORed
          Wild = "*.gz"   # matches .gz files
          Wild = "*.Z"    # matches .Z files
        }
        OptionSet {
          Compression = GZIP
          Signature = MD5
          Wild = "*.?*"   # matches all files
        }
        File = /
      }
    }

  Now, since no Compression option is specified in the first group of
  Options, *.gz and *.Z files will have an MD5 signature computed, but
  will not be compressed. For all other files, the *.gz and *.Z
  wild-cards will not match, so the second group of options will be
  used, which will include GZIP compression.

Questions:
- Is it necessary to provide some means of ANDing regular expressions
  and negation? (not currently planned) e.g.
    Wild = /*.gz/ && !/big.gz/
- I see that Networker has a "null" module which, if specified, does
  not backup the file, but does make a record of the file in the
  catalog so that the catalog will reflect an exact picture of the
  filesystem. The result is that the file can be "seen" when
  "browsing" the save sets, but it cannot be restored. Is this really
  useful? Should it be implemented in Bacula?

Results:
  After implementing the above, the user will be able to specify on a
  file by file basis (using regular expressions) what options are
  applied for the backup.
=============================================

==========================================================
Unsaved File design
For each Incremental job that is run, there may be files that were
found but not saved because they were locked (this applies only to
Windows). Such a system could send back to the Director a list of
Unsaved files.
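The Unsaved File idea above amounts to widening the next Incremental's selection test: a file is chosen if it changed since the last backup *or* if the previous job recorded it as unsaved. A minimal sketch, with purely illustrative names and record shapes (not Bacula's actual tables or code):

```python
# Hypothetical sketch of the next-Incremental selection once the
# Director feeds the FD the UnSavedFiles list from the previous job.

def select_for_incremental(files, since, unsaved):
    """files: {filename: mtime}; since: time of last backup;
    unsaved: set of filenames left unsaved by the previous job.
    Returns the sorted list of files to back up."""
    return sorted(name for name, mtime in files.items()
                  if mtime > since or name in unsaved)

# pagefile.sys was locked last time, so it is chosen now even though
# its mtime predates the last backup:
picked = select_for_incremental(
    {"a.txt": 200, "b.txt": 50, "pagefile.sys": 10},
    since=100,
    unsaved={"pagefile.sys"})
```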
Need:
- New UnSavedFiles table that contains:
    JobId
    PathId
    FilenameId
- Then in the next Incremental job, the list of Unsaved Files will be
  fed to the FD, who will ensure that they are explicitly chosen even
  if the standard date/time check would not have selected them.
=============================================================

Done: (see kernsdone for more)

=== after 1.32c
- John's Full save failed with 1.32c FD and 1.31 Dir: no FD status,
  and no error message.
- Add fd and st as Console keywords.
- Recycling a volume with a Slot requires an operator intervention:
    rufus-dir: Start Backup JobId 18, Job=kernsave.2003-11-01_21.23.52
    rufus-dir: Pruned 1 Job on Volume Vol01 from catalog.
    rufus-dir: There are no Jobs associated with Volume Vol01.
               Marking it purged.
    rufus-dir: Recycled volume "Vol01"
    rufus-sd: Please mount Volume "Vol01" on Storage Device "DDS-4"
              for Job kernsave.2003-11-01_21.23.52
    Use "mount" command to release Job.
- Implement Dan's bacula script (email of 26 Oct).
- Add JobName= to VerifyToCatalog so that all verifies can be done at
  the end.
- Edit the Client/Storage name into authentication failure messages.
- Fix packet too big problem. This is most likely a Windows TCP stack
  problem.
- Implement ClientRunBeforeJob and ClientRunAfterJob.
- Implement forward spacing block/file: position_device(bsr) -- just
  before read_block_from_device();

=====
Multiple drive autochanger data: see Alan Brown

  mtx -f xxx unload
    Storage Element 1 is Already Full (drive 0 was empty)
  Unloading Data Transfer Element into Storage Element 1...
    source Element Address 480 is Empty
  (drive 0 was empty and so was slot 1)

  > mtx -f xxx load 15 0
  no response, just returns to the command prompt when complete.
  > mtx -f xxx status
    Storage Changer /dev/changer:2 Drives, 60 Slots ( 2 Import/Export )
  Data Transfer Element 0:Full (Storage Element 15 Loaded):VolumeTag = HX001
  Data Transfer Element 1:Empty
        Storage Element 1:Empty
        Storage Element 2:Full :VolumeTag=HX002
        Storage Element 3:Full :VolumeTag=HX003
        Storage Element 4:Full :VolumeTag=HX004
        Storage Element 5:Full :VolumeTag=HX005
        Storage Element 6:Full :VolumeTag=HX006
        Storage Element 7:Full :VolumeTag=HX007
        Storage Element 8:Full :VolumeTag=HX008
        Storage Element 9:Full :VolumeTag=HX009
        Storage Element 10:Full :VolumeTag=HX010
        Storage Element 11:Empty
        Storage Element 12:Empty
        Storage Element 13:Empty
        Storage Element 14:Empty
        Storage Element 15:Empty
        Storage Element 16:Empty ....
        Storage Element 28:Empty
        Storage Element 29:Full :VolumeTag=CLNU01L1
        Storage Element 30:Empty ....
        Storage Element 57:Empty
        Storage Element 58:Full :VolumeTag=NEX261L2
        Storage Element 59 IMPORT/EXPORT:Empty
        Storage Element 60 IMPORT/EXPORT:Empty

  $ mtx -f xxx unload
  Unloading Data Transfer Element into Storage Element 15...done

  (just to verify it remembers where it came from; however, it can be
  overridden with mtx unload {slotnumber} to go to any storage slot.)

Configuration wise:
  There needs to be a table of drive # to devices somewhere - if there
  are multiple changers or drives there may not be a 1:1
  correspondence between changer drive number and system device
  name - and depending on the way the drives are hooked up to scsi
  busses, they may not be linearly numbered from an offset point
  either. Something like:

    Autochanger drives = 2
    Autochanger drive 0 = /dev/nst1
    Autochanger drive 1 = /dev/nst2

  IMHO, it would be _safest_ to use explicit mtx unload commands at
  all times, not just for multidrive changers. For a 1 drive changer,
  that's just:

    mtx load xx 0
    mtx unload xx 0

  MTX's manpage (1.2.15):

    unload [<slotnum>] [ <drivenum> ]
        Unloads media from drive <drivenum> into slot <slotnum>. If
        <drivenum> is omitted, defaults to drive 0 (as do all
        commands).
        If <slotnum> is omitted, defaults to the slot that the drive
        was loaded from. Note that there's currently no way to say
        'unload drive 1's media to the slot it came from', other than
        to explicitly use that slot number as the destination. AB
====

====
SCSI info:
  FreeBSD:

  undef# camcontrol devlist
      at scbus0 target 2 lun 0 (pass0,sa0)
      at scbus0 target 4 lun 0 (pass1,sa1)
      at scbus0 target 4 lun 1 (pass2)

  tapeinfo -f /dev/sg0 with a bad tape in drive 1:

    [kern@rufus mtx-1.2.17kes]$ ./tapeinfo -f /dev/sg0
    Product Type: Tape Drive
    Vendor ID: 'HP '
    Product ID: 'C5713A '
    Revision: 'H107'
    Attached Changer: No
    TapeAlert[3]:  Hard Error: Uncorrectable read/write error.
    TapeAlert[20]: Clean Now: The tape drive needs cleaning NOW.
    MinBlock:1
    MaxBlock:16777215
    SCSI ID: 5
    SCSI LUN: 0
    Ready: yes
    BufferedMode: yes
    Medium Type: Not Loaded
    Density Code: 0x26
    BlockSize: 0
    DataCompEnabled: yes
    DataCompCapable: yes
    DataDeCompEnabled: yes
    CompType: 0x20
    DeCompType: 0x0
    Block Position: 0
=====

====
Handling removable disks
From: Karl Cunningham

  My backups are only to hard disk these days, in removable bays. This
  is my idea of how a backup to hard disk would work more smoothly.
  Some of these things Bacula does already, but I mention them for
  completeness. If others have better ways to do this, I'd like to
  hear about it.

  1. Accommodate several disks, rotated similarly to how tapes are,
     identified by partition volume ID or perhaps by the name of a
     subdirectory.
  2. Abort & notify the admin if the wrong disk is in the bay.
  3. Write backups to different subdirectories for each machine to be
     backed up.
  4. Volumes (files) get created as needed in the proper subdirectory,
     one for each backup.
  5. When a disk is recycled, remove or zero all old backup files.
     This is important as the disk being recycled may be close to
     full. This may be better done manually since the backup files for
     many machines may be scattered in many subdirectories.
====

=== Done in 1.33
- Change console to bconsole.
- Change smtp to bsmtp.
- Fix time difference problem between Bacula and Client so that
  everything is in GMT.
- Fix TimeZone problem!
- Mount a tape that is not right for the job (wrong # files on tape).
  Bacula asks for another tape, fix problems with first tape and say
  "mount". All works OK, but status shows:
    Device /dev/nst0 open but no Bacula volume is mounted.
        Total Bytes=1,153,820,213 Blocks=17,888 Bytes/block=64,502
        Positioned at File=9 Block=3,951
    Full Backup job Rufus.2003-10-26_16.45.31 using Volume
        "DLT-24Oct03" on device /dev/nst0
        Files=21,003 Bytes=253,954,408 Bytes/sec=2,919,016
        FDReadSeqNo=192,134 in_msg=129830 out_msg=5 fd=7
- Upgrade to cygwin 1.5
- Optimize fsf not to read.
- Use ioctl() fsf if it exists. Figure out where we are from the
  mt_status command. Use slow fsf only if the other does not work.
- Enhance "update slots" to include a "scan" feature:
  scan 1; scan 1-5; scan 1,2,4 ... to update the catalog
- Allow a slot or range of slots on the label barcodes command.
- Finish implementation of Verify=DiskToCatalog
- Make sure that Volumes are recycled based on "Least recently used"
  rather than lowest MediaId.
- Add flag to write only one EOF mark on the tape.
- Implement autochanger testing in btape "test" command.
- Implement lmark to list every file marked.
- Make mark/unmark report how many files marked/unmarked.
- Keep last 5 or 10 completed jobs and show them in a similar list.
- Make a Running Jobs: output similar to current Scheduled Jobs:
- Change create_media_record in bscan to use Archive instead of Full.
- Have some way to estimate the restore size or have it printed.
- Volume problems occur if you have a valid volume, written, then it
  is truncated. You get:
    12-Nov-2003 11:48 rufus-sd: kernsave.2003-11-12_11.48.09 Warning:
    mount.c:228 Volume on /tmp is not a Bacula labeled Volume,
    because: block.c:640 Read zero bytes on device /tmp.
- Make sure that 64 bit I/O packets are used on Cygwin.
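The "scan" slot syntax proposed above ("scan 1; scan 1-5; scan 1,2,4") needs a small range parser. A sketch, assuming the syntax shown in the note (the function name is illustrative, not Bacula's):

```python
# Hypothetical parser for the proposed "update slots scan" argument:
# a comma-separated list of slot numbers and inclusive ranges.

def parse_slot_list(spec):
    """Expand a spec like "1", "1-5", or "1,2,4" into slot numbers."""
    slots = []
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            slots.extend(range(int(lo), int(hi) + 1))
        else:
            slots.append(int(part))
    return slots
```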
- Add to supported autochangers:
    OS           : FreeBSD-4.9
    Auto-Changer : QUALSTAR TLS-4210
    Manufacturer : Qualstar
    Tapes        : 12 (AIT1: 36GB, AIT2: 50GB all uncompressed)
    Drives       : 2xAIT2 (installed in the Qualstar: SONY SDX-500C AIT2)
- Document estimate command in tree.
- Document lsmark command in tree.
- Setup a standard job that builds a bootstrap file and saves it with
  the catalog database.
- See if a restore job can add a file to the tape (prohibit this).
- Restrict characters permitted in a name.
- In restore, provide option for limiting to a particular Pool.
- In restore, list FileSets that only have different base names --
  i.e. any FileSet with the same name should be treated as the same.
- Make Scheduler sort jobs by StartTime, Priority.
- Make sure smtp and any other useful program is executable by the
  world in case Bacula is not running as root.
- Look at Dan's field width problems in PostgreSQL.
- Look at effect of removing GROUP BYs.
- In restore take all filesets with same base name.
- From Alan Brown: BTW, there's a make install bug in 1.33 - with
  --enable-gnome, gnome-console is built, but the binary and .conf are
  not being installed.
- Permit Bacula and apcupsd donations (not done for apcupsd).
- Fix Ctl-C crashing the Console (readline?).
- Look at code in recycle_oldes_purged_volume() in recycle.c. Why not
  let SQL do ORDER BY LastWritten ASC?
- Look at find_next_volume() algorithm. Currently, it selects:
    +---------+------------+---------------------+-----------+
    | MediaId | VolumeName | LastWritten         | VolBytes  |
    +---------+------------+---------------------+-----------+
    |       3 | Test13     | 0000-00-00 00:00:00 |         1 |
    |       4 | Test14     | 0000-00-00 00:00:00 |         1 |
    |       1 | test11     | 2003-12-03 18:39:55 | 4,004,926 |
    |       2 | test12     | 2004-01-04 15:25:56 | 2,078,691 |
    +---------+------------+---------------------+-----------+
  but perhaps it should fill already used Volumes first, and use
  Append volumes before Purged, or Recycled, ...
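The alternative find_next_volume() ordering suggested above (fill already-used Volumes first, prefer Append over Purged or Recycled) can be expressed as a sort key. A sketch under those assumptions; the record shape and function name are illustrative, not Bacula's schema:

```python
# Hypothetical sketch of the suggested find_next_volume() ordering:
# Append before Purged/Recycled, already-used before never-used,
# then least recently written first.

STATUS_RANK = {"Append": 0, "Recycle": 1, "Purged": 2}

def find_next_volume(volumes):
    """volumes: list of (VolumeName, VolStatus, LastWritten, VolBytes).
    A LastWritten of "0000-00-00 00:00:00" means never used."""
    def key(vol):
        name, status, last_written, vol_bytes = vol
        never_used = last_written.startswith("0000")
        return (STATUS_RANK.get(status, 99), never_used, last_written)
    return min(volumes, key=key)[0]
```

With the four volumes in the table above (all Append), this picks test11: it is already used and has the oldest LastWritten, rather than the lowest MediaId.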
- Possibly remove the "|| ap == NULL" on lines 123 and 207 of
  lib/var.c, which creates compile problems on alpha systems:
    var.c:123: no match for `va_list & == long int'
- Check "restore" 3 (JobId), then it asks for Storage resource. Does
  it verify that the correct volume is chosen?
- Make Bacula "poll a drive".
- Notes for final checking of Nic's code: Could I get you to double
  check the switch () statements in the job_check_maxwaittime and
  job_check_maxruntime functions in src/dird/job.c?
- Define week of year for scheduler: W01, W02, ...
  Week 01 of a year is per definition the first week that has the
  Thursday in this year, which is equivalent to the week that contains
  the fourth day of January. In other words, the first week of a new
  year is the week that has the majority of its days in the new year.
  Week 01 might also contain days from the previous year, and the week
  before week 01 of a year is the last week (52 or 53) of the previous
  year even if it contains days from the new year. A week starts with
  Monday (day 1) and ends with Sunday (day 7).
  For example, the first week of the year 1997 lasts from 1996-12-30
  to 1997-01-05 and can be written in standard notation as 1997-W01 or
  1997W01.
  The week notation can also be extended by a number indicating the
  day of the week. For example, the day 1996-12-31, which is the
  Tuesday (day 2) of the first week of 1997, can also be written as
  1997-W01-2 or 1997W012.
- Either restrict the characters in a name, or fix the problem
  emailing with names containing / (smtp command line breaks).
- Implement .consolerc for Console
- Implement scan: for every slot it finds, zero the slot of any other
  volume having that slot.
- Make restore job check if all the files are actually restored.
- Look at 2Gb limit for SQLite.
- Fix get_storage_from_media_type (ua_restore) to use command line
  storage=
- Don't print "Warning: Wrong Volume mounted ..." if mounting second
  volume.
- Write a mini-readline with history and editing.
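The week-of-year definition above is ISO 8601 week numbering, and the worked dates can be checked with Python's datetime.date.isocalendar(), which returns (ISO year, week number, ISO weekday):

```python
# Verify the ISO 8601 week examples given above using the standard
# library (isocalendar() returns (ISO year, week, weekday 1=Monday)).
from datetime import date

# 1996-12-30 (a Monday) starts week 01 of 1997:
assert date(1996, 12, 30).isocalendar()[:3] == (1997, 1, 1)
# 1996-12-31 is the Tuesday of 1997-W01, i.e. 1997-W01-2:
assert date(1996, 12, 31).isocalendar()[:3] == (1997, 1, 2)
# 1997-01-05 (a Sunday) is the last day of that week:
assert date(1997, 1, 5).isocalendar()[:3] == (1997, 1, 7)
```

A scheduler implementing W01, W02, ... could therefore derive the week number directly from the job's scheduled date rather than maintaining its own week table.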
- Take a careful look at the basic recycling algorithm. When Bacula
  chooses, the order should be:
  - Look for Append
  - Look for Recycle or Purged
  - Prune volumes
  - Look for purged
  Instead of using the lowest MediaId, find the least recently used
  volume. When the tape is mounted and Bacula requests the status, do
  everything possible to use it. Define an "available" status, which
  is the currently mounted Volume and all volumes that are currently
  in the autochanger.
- Is a pool specification really needed for a restore? Yes, and you
  may want to exclude archive Pools.
- Implement a PostgreSQL driver.
- Fix restore to list errors if an invalid block is found, and if the
  # files restored does not match the # expected.
- Something is not right in the last block of the fill command.
- Add FileSet to command line arguments for restore.
- Enhance time and size scanning routines.
- Add Console user permissions -- do by adding filters for jobs,
  clients, storage, ...
- Put max network buffer size on a directive.
- Why does "mark cygwin" take so long!!!!!!!!
- Implement alist processing for ACLs from Console.
- When a file is set for restore, walk back up the chain of
  directories, setting them to be restored.
- Figure out a way to set restore on a directory without recursively
  descending (recurse off?).
- Fix restore to only pull in last Differential and later
  Incrementals.
- Implement 3 Pools for a Job:
    Job {
      Name = ...
      Full Backup Pool = xxx
      Incremental Backup Pool = yyy
      Differential Backup Pool = zzz
    }
- Look at ASSERT() at line 384 of src/lib/bnet.c
- Dates are wrong in restore list from Win32 FD.
- Dates are wrong in catalog from Win32 FD.
- Remove h_errno from bnet.c by including the proper header.
- For "list jobs" order by EndTime.
- Make two tape fill test work.
- Add atime preservation.
- Do not err job if bootstrap file could not be written.
- Save and restore last_job across executions.
- Have each daemon save the last_jobs structure when exiting and read
  it back in when starting up.
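The recycling search order proposed above (Append first, then Recycle or Purged, then prune and look again for Purged, always taking the least recently used volume) can be sketched as follows. All names and record shapes are hypothetical illustrations, not Bacula's code:

```python
# Hypothetical sketch of the proposed recycling search order.

def choose_volume(volumes, prune):
    """volumes: list of dicts with VolumeName, VolStatus, LastWritten.
    prune: callable that prunes the catalog and may mark more volumes
    Purged. Returns the chosen volume dict, or None."""
    def lru(statuses):
        # Least recently used volume among the given statuses.
        cands = [v for v in volumes if v["VolStatus"] in statuses]
        return min(cands, key=lambda v: v["LastWritten"]) if cands else None

    vol = lru({"Append"}) or lru({"Recycle", "Purged"})
    if vol is None:
        prune(volumes)          # pruning may purge further volumes
        vol = lru({"Purged"})
    return vol
```

Only when the first two passes fail does pruning run, which keeps pruning (the expensive step) off the common path.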
- "restore jobid=1 select" calls get_storage_xxx, which prints "JobId 1 is not running." - Make column listing for running jobs JobId Level Type Started Name Status - Why does Bacula need the drive open to do "autochanger list" ? - Add data compare on write/read in btape "test". - Rescue builds incorrect script files on Rufus. - Release SQLite 2.8.9 - During install, copy any console.conf to bconsole.conf. - Check: Run = Level=Differential feb-dec 1 at 1:05 to see if wday is empty. - Look at installation file permissions with Scott so that make install and the rpms agree. - Finish code passing files=nnn to restore start. - Add ctl-c to console to stop current command and discard buffered output. - Estimate to Tibs never returns. - Symbolic link a directory to another one, then backup the symbolic link. - Check and possibly fix problems with hard links. - Fix query buffer clobber ua_query.c - Allow "delete job jobid=xx jobid=xxx".