                        Kern's ToDo List
                          14 June 2003

Documentation to do: (any release a little bit at a time)
- Document running a test version.
- Document query file format.
- Document static linking.
- Document problems with Verify and pruning.
- Document how to use multiple databases.
- Document that for FreeBSD the tape device is typically /dev/nrsa0 and
  the mtx control device is /dev/pass1.
- VXA drives have a "cleaning required" indicator, but Exabyte recommends
  preventive cleaning after every 75 hours of operation.
- Look up HP cleaning recommendations.
- Look up HP tape replacement recommendations (see troubleshooting
  autochanger).
- Document FInclude ...
- Document the need to add "-u root" to most of the MySQL script calls
  (./create_mys... ./make_my...).

Testing to do: (painful)
- Test that ALL console command line options work and are always
  implemented.
- Test the blocksize recognition code.
- Test multiple simultaneous Volumes.
- Test if rewind at end of tape waits for the tape to rewind.
- Test cancel at EOM.
- Test not zeroing the Autochanger slot when it is wrong.
- Test multiple simultaneous Volumes.
- Test that restoring a hard link that already exists works correctly.
  Same for a soft link.
- Test that the last block is correct in JobMedia when splitting a file
  over two volumes.
- Figure out how to use ssh or stunnel to protect Bacula communications.

For 1.31 release:
- Add the client name to the cram-md5 challenge so the Director can
  immediately verify if it is the correct client.
- Implement a record that suppresses errors if the Client is not
  available.
- If RunBeforeJob is used to unload, then reload a previously used
  volume, the next job run gets an error reading the drive.
- Implement non-blocking writes and bsock->terminate in the heartbeat
  thread, or set it in status.c cancel.
- Add restore to a specific date.
- lstat() is not going to work on Win32 for testing the date.
- Implement a Recycle command.
- Something is not right in the last block of the fill command.
- Implement List Volume Job=xxx or List scheduled volumes or Status
  Director.
- Instrument use_count on DEVICE packets and ensure that the device is
  being close()ed at the appropriate time.
- Check if Incremental is working correctly when it looks for the
  previous Job (Phil's problem).
- Add the next Volume to be used to the status output.
- Make the bootstrap filename unique.
- Sort JobIds entered into the recover tree.
- The bsr for Dan's job has file indexes covering the whole range rather
  than only the range contained on the volume. Constrain FileIndex to be
  within range for the Volume.
- Test a second language, e.g. French.
- Start working on Base jobs.
- Make "make binary-release" work from any directory.
- Document c:/working directory better than /working directory.
- Unsaved Flag in Job record.
- Base Flag in Job record.
- Implement UnsavedFiles DB record.
- Implement argc/argv for daemon command line scanning using the table
  driven stuff below.
- Implement a table driven single argc/argv scanner to pick up all
  arguments, much like the xxx_conf.c scan table: keyword,
  handler(store_routine), store_address, code, flags, default
  (a rough sketch appears at the end of the "After 1.31" list below).
- Make | and < work on the FD side.
- Pass prefix_links to the FD.
- Implement a M_SECURITY message class.
- From Phil Stracchino: It would probably be a per-client option, and
  would be called something like, say, "Automatically purge obsoleted
  jobs". What it would do is, when you successfully complete a
  Differential backup of a client, it would automatically purge all
  Incremental backups for that client that are rendered redundant by
  that Differential.
  Likewise, when a Full backup on a client completed, it would
  automatically purge all Differential and Incremental jobs obsoleted by
  that Full backup. This would let people minimize the number of tapes
  they're keeping on hand without having to master the art of retention
  times.
- Prohibit backing up the archive device (findlib/find_one.c:128).
- Make Restore report an error if the FD or SD term codes are not OK.
- Add JobLevel in FD status (but make sure it is defined).
- Make the Pool resource handle Counter resources.
- Restrict characters permitted in a Resource name, and don't permit
  duplicate names.
- Implement new serialize subroutines
     send(socket, "string", &Vol, "uint32", &i, NULL)
  (a rough sketch appears after the disk spooling item below).
- Audit all UA commands to ensure that we always prompt where possible.
- Scratch Pool where the volumes can be re-assigned to any Pool.

After 1.31:
- When doing a Backup, send all attributes back to the Director, who
  would then figure out what files have been deleted.
- Currently in mount.c:236 the SD simply creates a Volume. It should
  have explicit permission to do so. It should also mark the tape in
  error if there is an error.
- Make sure all restore counters are working correctly in the FD.
- SD Bytes Read is wrong.
- Configure mtx-changer to have the correct path to mtx.
- Look at ALL higher level routines that call block.c to be sure they
  don't expect something in errmsg.
- Investigate doing a RAW backup of a Win32 partition.
- Add JobName= to VerifyToCatalog so that all verifies can be done at
  the end.
- Add thread specific data to hold the jcr -- send error messages from
  low level routines by accessing it and using Jmsg().
- Cancel waiting for Client connect in the SD if the FD goes away.
- While testing Tibs, a job erred and hung the director on the Storage
  resource. This was because there were a whole pile of jobs hanging
  around in the SD waiting for a connection from the FD that was never
  coming.
- Possibly update all client records at startup.
- Add a Progress command that periodically reports the progress of a
  job or all jobs.
- Implement "Reschedule OnError=yes interval=nnn times=xxx".
- One block was orphaned in the SD, probably after a cancel.
- Add all command line arguments to "update", e.g. slot=nn
  volStatus=append, ...
- Examine the Bare Metal restore problem (an FD crash exists
  somewhere ...).
- Implement a timeout in response() when it should come quickly.
- Implement a console @echo command.
- Implement a Slot priority (loaded/not loaded).
- Implement "vacation" Incremental only saves.
- Implement single pane restore (much like the Gftp panes).
- Implement Automatic Mount even in operator wait.
- Implement create "FileSet"?
- Implement Release Device in the Job resource to unmount a drive.
- Implement Acquire Device in the Job resource to mount a drive; be sure
  this works with admin jobs so that the user can get prompted to insert
  the correct tape. Possibly some way to say to run the job but don't
  save the files.
- Implement all command line args on run.
- Implement command line "restore" args.
- Implement "restore current select=no".
- Fix the watchdog pthread crash on Win32 (this is a pthread_kill()
  Cygwin bug).
- Implement a "scratch pool" where tapes are defined and can be taken by
  any pool that needs them.
- Implement restore "current system", but take all files without doing a
  selection tree -- so that jobs without File records can be restored.
- Implement disk spooling. Two parts: 1. Spool to disk then immediately
  to tape to speed up tape operations. 2. Spool to disk only when the
  tape is full, then when a tape is hung move it to tape.
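As an aside, here is a minimal sketch of how the proposed varargs
serialize routine above might look. The function name serialize_send,
the pack-into-a-buffer approach, and the supported type keywords are
illustrative assumptions only, not the existing Bacula bnet API; a real
implementation would presumably write into the BSOCK directly.

   #include <stdarg.h>
   #include <stdint.h>
   #include <string.h>
   #include <arpa/inet.h>    /* htonl() */

   /* Hypothetical sketch: walk (type-keyword, address) pairs until a NULL
    * keyword, packing each value into buf.  Returns the number of bytes
    * packed, or -1 on overflow.  The caller would then transmit buf.
    */
   static int serialize_send(char *buf, int bufsize, ...)
   {
      va_list ap;
      char *p = buf;
      const char *type;

      va_start(ap, bufsize);
      while ((type = va_arg(ap, const char *)) != NULL) {
         if (strcmp(type, "string") == 0) {
            const char *s = va_arg(ap, const char *);
            int len = strlen(s) + 1;              /* include trailing 0 */
            if (p + len > buf + bufsize) { va_end(ap); return -1; }
            memcpy(p, s, len);
            p += len;
         } else if (strcmp(type, "uint32") == 0) {
            uint32_t net = htonl(*va_arg(ap, uint32_t *)); /* network order */
            if (p + sizeof(net) > buf + bufsize) { va_end(ap); return -1; }
            memcpy(p, &net, sizeof(net));
            p += sizeof(net);
         }
         /* ... "int64", "bytes", etc. would be handled here ... */
      }
      va_end(ap);
      return p - buf;
   }

   /* Illustrative use:
    *    char buf[256]; uint32_t i = 42;
    *    int n = serialize_send(buf, sizeof(buf),
    *                           "string", "Vol0001", "uint32", &i, NULL);
    */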
- Implement a relocatable bacula.spec.
- Allow multiple Storage specifications (or multiple names on a single
  Storage specification) in the Job record. Thus a job can be backed up
  to a number of storage devices.
- Implement dump/print label to the UA.
- Add prefixlinks to where or not where absolute links to FD.
- Issue a message to mount a new tape before the rewind.
- Simplified client job initiation for portables.
- If the SD cannot open a drive, make it periodically retry.
- Implement LabelTemplate (at least a first cut).
- Add more of the config info to the tape label.
- Implement a Mount Command and an Unmount Command where the users could
  specify a system command to be performed to do the mount, after which
  Bacula could attempt to read the device. This is for Removable media
  such as a CDROM.
  - Most likely, this mount command would be invoked explicitly by the
    user using the current Console "mount" and "unmount" commands -- the
    Storage Daemon would do the right thing depending on the exact
    nature of the device.
  - As with tape drives, when Bacula wanted a new removable disk
    mounted, it would unmount the old one, and send a message to the
    user, who would then use "mount" as described above once he had
    actually inserted the disk.
- Make some way so that if a machine is skipped because it is not up,
  Bacula will continue retrying for a specified period of time --
  periodically.
- If a tape is marked read-only, then try opening it read-only rather
  than failing, and remember that it cannot be written.
- Refine SD waiting output:
    Device is being positioned
    >     Device is being positioned for append
    >     Device is being positioned to file x
    >
- Figure out some way to estimate output size and to avoid splitting a
  backup across two Volumes -- this could be useful for writing CDROMs
  where you really prefer not to have it split -- not serious.
- Add RunBeforeJob and RunAfterJob to the Client program.
- Have the SD compute MD5 or SHA1 and compare to what the FD computes.
- Make VolumeToCatalog calculate an MD5 or SHA1 from the actual data on
  the Volume and compare it.
- Implement FileOptions (see end of this document).
- Implement Bacula plugins -- design the API.
- Make bcopy read through bad tape records.
- Fix read_record to handle multiple sessions.
- Program files (i.e. execute a program to read/write files). Pass the
  read date of the last backup and the size of the file last time.
- Add Signature type to the File DB record.
- CD into the subdirectory when open()ing files for backup to speed
  things up. Test with testfind().
- Priority job to go to top of list.
- Why are save/restore of a device different sizes (sparse?) Yup! Fix it.
- Implement some way for the Console to dynamically create a job.
- Restore to a particular time -- e.g. before date, after date.
- Solaris -I on tar for include list.
- Need a verbose mode in restore, perhaps to the bsr.
- bscan without -v is too quiet -- perhaps show jobs.
- Add code to reject whole blocks if not wanted on restore.
- Check if we can increase the Bacula FD priority on Win2000.
- Make sure MaxVolFiles is fully implemented in the SD.
- Check if both CatalogFiles and UseCatalog are set to the SD.
- Need return status on read_cb() from read_records(). Need multiple
  records -- one per Job, maybe a JCR or some other structure with a
  block and a record.
- Figure out how to do a bare metal Windows restore.
- Possibly add email to the Watchdog if a drive is unmounted too long
  and a job is waiting on the drive.
- Use read_record.c in the SD code.
- A restore that errors in the SD due to no tape incorrectly reports OK
  in the output.
- After unmount, if a restore job started, ask to mount.
- Convert all %x substitution variables, which are hard to remember and
  read, to %(variable-name). Idea from TMDA.
- Remove NextId for SQLite. Optimize.
- Move all SQL statements into a single location.
- Add UA rc and history files.
- Put termcap (used by the console) in ./configure and allow
  --with-termcap-dir.
- Enhance time and size scanning routines.
- Fix Autoprune for Volumes to respect the need for a full save.
- Fix the Win32 config file definition name on /install.
- Compare tape to Client files (attributes, or attributes and data).
- Make all database Ids 64 bit.
- Write an applet for Linux.
- Add estimate to Console commands.
- Implement a new daemon communications protocol.
- Allow console commands to detach or run in the background.
- Fix status delay on the storage daemon during rewind.
- Add SD message variables to control operator wait time:
  - Maximum Operator Wait
  - Minimum Message Interval
  - Maximum Message Interval
- Send an Operator message when the tape label cannot be read.
- Verify level=Volume (scan only), level=Data (compare of data to file).
  Verify level=Catalog, level=InitCatalog.
- Events file.
- Add keyword search to the show command in the Console.
- Events: tape has more than xxx bytes.
- Complete the code in Bacula Resources -- this will permit reading a
  new config file at any time.
- Handle ctl-c in the Console.
- Implement script driven addition of a File daemon to config files.
- Think about how to make Bacula work better with File (non-tape)
  archives.
- Write a Unix emulator for Windows.
- Put memory utilization in the Status output of each daemon if full
  status is requested or if some level of debug is on.
- Make the database type selectable by .conf files, i.e. at runtime.
- Set a flag for uname -a. Add it to the Volume label.
- Implement a throttled work queue.
- Restore files modified after a date.
- Restore files modified before a date.
- Restore -- do nothing but show what would happen.
- SET LD_RUN_PATH=$HOME/mysql/lib/mysql
- Implement Restore FileSet=.
- Create a protocol.h and protocol.c where all protocol messages are
  concentrated.
- Remove duplicate fields from the jcr (e.g. jcr.level and
  jcr.jr.Level, ...).
- Timeout a job or terminate if the link goes down, or reopen the link
  and query.
- Concept of precious tapes (cannot be reused).
- Make bcopy copy with a single tape drive.
- Permit changing ownership during restore.
- From Phil:
  > My suggestion: Add a feature on the systray menu-icon menu to request
  > an immediate backup now. This would be useful for laptop users who may
  > not be on the network when the regular scheduled backup is run.
  >
  > My wife's suggestion: Add a setting to the win32 client to allow it to
  > shut down the machine after backup is complete (after, of course,
  > displaying a "System will shut down in one minute, click here to cancel"
  > warning dialog). This would be useful for sites that want user
  > workstations to be shut down overnight to save power.
  >
- Autolabel should be specified by the DIR instead of the SD.
- Storage daemon:
  - Add media capacity
  - AutoScan (check checksum of tape)
  - Format command = "format /dev/nst0"
  - MaxRewindTime
  - MinRewindTime
  - MaxBufferSize
  - Seek resolution (usually corresponds to buffer size)
  - EODErrorCode=ENOSPC or code
  - Partial Read error code
  - Partial write error code
  - Nonformatted read error
  - Nonformatted write error
  - WriteProtected error
  - IOTimeout
  - OpenRetries
  - OpenTimeout
  - IgnoreCloseErrors=yes
  - Tape=yes
  - NoRewind=yes
- Pool:
  - Maxwrites
  - Recycle period
- Job:
  - MaxWarnings
  - MaxErrors (job?)
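Before the next section, here is a rough sketch of the table driven
argc/argv scanner mentioned in the "For 1.31 release" list above. The
struct layout follows the keyword/handler/store_address/code/flags/default
fields named there, but every identifier below (cmd_item, store_str,
scan_args, the sample options and their defaults) is an illustrative
assumption, not existing Bacula code.

   #include <stdio.h>
   #include <stdlib.h>
   #include <string.h>

   struct cmd_item;
   typedef void (CMD_HANDLER)(struct cmd_item *item, const char *value);

   /* One entry per command line keyword, in the spirit of the conf-file
    * scan tables: keyword, handler (store routine), store address, code,
    * flags, default.
    */
   struct cmd_item {
      const char  *keyword;     /* e.g. "-c" */
      CMD_HANDLER *handler;     /* store routine */
      void        *store;       /* where to put the value */
      int          code;        /* handler-specific code */
      int          flags;       /* e.g. option takes a value */
      const char  *deflt;       /* default value or NULL */
   };

   #define TAKES_VALUE 0x01

   static void store_str(struct cmd_item *item, const char *value)
   {
      *(const char **)item->store = value;
   }

   static void store_true(struct cmd_item *item, const char *value)
   {
      (void)value;
      *(int *)item->store = 1;
   }

   /* Sample table -- the options shown are made up for illustration */
   static const char *configfile = NULL;
   static int foreground = 0;
   static struct cmd_item cmd_table[] = {
      { "-c", store_str,  &configfile, 0, TAKES_VALUE, "bacula-dir.conf" },
      { "-f", store_true, &foreground, 0, 0,           NULL },
      { NULL, NULL, NULL, 0, 0, NULL }
   };

   /* Walk argv once, dispatching each keyword through the table */
   static void scan_args(int argc, char *argv[])
   {
      for (int i = 1; i < argc; i++) {
         struct cmd_item *item;
         for (item = cmd_table; item->keyword; item++) {
            if (strcmp(argv[i], item->keyword) == 0) {
               const char *val = NULL;
               if (item->flags & TAKES_VALUE) {
                  val = (i + 1 < argc) ? argv[++i] : item->deflt;
               }
               item->handler(item, val);
               break;
            }
         }
         if (!item->keyword) {
            fprintf(stderr, "Unknown option: %s\n", argv[i]);
            exit(1);
         }
      }
   }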
=====
- The FD sends the unsaved file list to the Director at the end of the
  job (see RFC below).
- The File daemon should build a list of files skipped, and then at the
  end of the save retry them and report any errors.
- Write a Storage daemon that uses pipes and standard Unix programs to
  write to the tape. See afbackup.
- Need something that monitors the JCR queue and times out jobs by
  asking the daemons where they are.
- Enhance the Jmsg code to permit buffering and saving to disk.
- device driver = "xxxx" for drives.
- Verify from Volume.
- Ensure that /dev/null works.
- Need a report class for messages. Perhaps a report resource where
  report=group of messages.
- Enhance scan_attrib and rename scan_jobtype, and fill in code for the
  "since" option.
- The Director needs a time after which the report status is sent
  anyway -- or better yet, a retry time for the job.
- Don't reschedule a job if the previous incarnation is still running.
- Some way to automatically backup everything is needed????
- Need a structure for pending actions:
  - buffered messages
  - termination status (part of buffered msgs?)
- Drive management: Read, Write, Clean, Delete.
- Login to Bacula; Bacula users with different permissions: owner,
  group, user, quotas.
- Store info on each file system type (probably in the job header on
  tape). This could be the output of df, or perhaps some sort of
  /etc/mtab record.

Longer term to do:
- Design a hierarchical storage for Bacula. Migration and Clone.
- Implement FSM (File System Modules).
- Audit M_ error codes to ensure they are correct and consistent.
- Add variable break characters to the lex analyzer. Either a bit mask
  or a string of chars so that the caller can change the break
  characters.
- Make a single T_BREAK to replace T_COMMA, etc.
- Ensure that the File daemon and Storage daemon can continue a save if
  the Director goes down (this is NOT currently the case). Must detect
  the socket error and buffer messages for later.
- Enhance time/duration input to allow multiple qualifiers, e.g. 3d2h.
- Add the ability to backup to two Storage devices (two SD sessions) at
  the same time -- e.g. onsite, offsite.
- Add the ability to consolidate old backup sets (basically do a restore
  to tape and appropriately update the catalog). Compress Volume sets.
  Might need to spool via file if only one drive is available.
- Compress or consolidate Volumes of old possibly deleted files. Perhaps
  some way to do so with every volume that has less than x% valid files.

  Migration: Move a backup from one Volume to another.
  Clone:     Copy a backup -- two Volumes.
  Bacula Migration is based on Jobs (apparently Networker is file by
  file). Migration triggered by:
    Number of Jobs
    Number of Volumes
    Age of Jobs
    Highwater mark (keep total size)
    Lowwater mark

Projects:
                     Bacula Projects Roadmap
                         17 August 2002
                      last update 8 May 2003

Item 1:   Multiple simultaneous Jobs. (done)
          Done -- the Restore part needs a better implementation to work
          correctly, and it needs considerable testing.

  What:   Permit multiple simultaneous jobs in Bacula.

  Why:    An enterprise level solution needs to go fast without the need
          for the system administrator to carefully tweak timing. Based
          on the benchmarks, during a full backup NetWorker typically hit
          10 times the bandwidth to the tape compared to Bacula, largely
          (probably) due to running parallel jobs and multi-threaded
          filling of buffers and writing them to tape. This should also
          make things work better when you have a mix of fast and slow
          machines backing up at the same time.

  Notes:  Bacula was designed to run multiple simultaneous jobs.
          Thus implementing this is a matter of some small cleanups and
          careful testing.

Item 2:   Make the Storage daemon use intermediate file storage to
          buffer data.
          Deferred -- not necessary yet.

  What:   If data is coming into the SD too fast, buffer it to disk if
          the user has configured this option.

  Why:    This would be nice, especially if it more or less falls out
          when implementing (1) above. If not, it probably should not be
          given a high priority because fundamentally the backup time is
          limited by the tape bandwidth. Even though you may finish a
          client job quicker by spilling to disk, you still have to
          eventually get it onto tape. If intermediate disk buffering
          allows us to improve write bandwidth to tape, it may make
          sense.

  Notes:  Whether or not this is implemented will depend upon
          performance testing after item 1 is implemented.

Item 3:   Write the bscan program -- also write a bcopy program.
          Done

  What:   Write a program that reads a Bacula tape and puts all the
          appropriate data into the catalog. This allows recovery from a
          tape that is no longer in the database, or it allows
          re-creation of a database if lost.

  Why:    This is a fundamental robustness and disaster recovery tool
          which will increase the comfort level of a sysadmin
          considering adopting Bacula.

  Notes:  A skeleton of this program already exists, but much work needs
          to be done. Implementing this will also make apparent any
          deficiencies in the current Bacula tape format.

Item 4:   Implement Base jobs.

  What:   A base job is sort of like a Full save except that you will
          want the FileSet to contain only files that are unlikely to
          change in the future (i.e. a snapshot of most of your system
          after installing it). After the base job has been run, when
          you are doing a Full save, you can specify to exclude all
          files saved by the base job that have not been modified.

  Why:    This is something none of the competition does, as far as we
          know (except BackupPC, which is a Perl program that saves to
          disk only). It is a big win for the user; it makes Bacula
          stand out as offering a unique optimization that immediately
          saves time and money.

  Notes:  Big savings in tape usage. Will require more resources because
          the DIR must send the FD a list of files/attribs, and the FD
          must search the list and compare it for each file to be saved.

Item 5:   Implement Label templates.

  What:   This is a mechanism whereby Bacula can automatically create a
          tape label for new tapes according to a detailed specification
          provided by the user.

  Why:    It is a major convenience item for folks who use automated
          label creation.

  Notes:  Bacula already has a working form of automatic tape label
          creation, but it is very crude. The design for the complete
          tape labeling project is already documented in the manual.

Item 6:   Write a regression script.
          Done -- continue to expand its testing.

  What:   This is an automatic script that runs and tests as many
          features of Bacula as possible. The output is compared to
          previous versions of Bacula and any differences are reported.

  Why:    This is an enormous help in preventing introduction of new
          errors in parts of the program that already work correctly.

  Notes:  This probably should be ranked higher; it's something the
          typical user doesn't see. Depending on how it's implemented,
          it may make sense to defer it until the archival tape format
          and user interface mature.

Item 7:   GUI for interactive restore
Item 8:   GUI for interactive backup

  What:   The current interactive restore is implemented with a tty
          interface.
          It would be much nicer to be able to "see" the list of files
          backed up in a typical GUI tree format. The same mechanism
          could also be used for creating ad-hoc backup FileSets
          (item 8).

  Why:    Ease of use -- especially for the end user.

  Notes:  Rather than implementing in Gtk, we probably should go
          directly for a Browser implementation, even if doing so meant
          the capability wouldn't be available until much later. Not
          only is there the question of Windows sites; most
          Solaris/HP/IRIX, etc., shops can't currently run Gtk programs
          without installing lots of stuff admins are very wary about.
          Real sysadmins will always use the command line anyway, and
          the user who's doing an interactive restore or backup of his
          own files will in most cases be on a Windows machine running
          Exploder.

Item 9:   Add SSL to daemon communications.
          In progress as of version 1.31.

  What:   This provides for secure communications between the daemons.

  Why:    This would allow doing backup across the Internet without
          privacy concerns (or with much less concern).

  Notes:  The vast majority of near term potential users will be backing
          up a single site over a LAN and, correctly or not, they
          probably won't be concerned with security, at least not enough
          to go to the trouble to set up keys, etc. to screw things
          down. We suspect that many users genuinely interested in
          multi-site backup already run some form of VPN software in
          their internetwork connections, and are willing to delegate
          security to that layer.

Item 10:  Define the definitive tape format.
          Done (version 1.27)

  What:   Define the definitive tape format that will not change for the
          next millennium.

  Why:    Stability, security.

  Notes:  See notes for item 11 below.

Item 11:  New daemon communication protocol.

  What:   The current daemon to daemon protocol is basically an ASCII
          printf() and sending the buffer. On the receiving end, the
          buffer is sscanf()ed to unpack it. The new scheme would be a
          binary format that allows quick packing and unpacking of any
          data type with named fields.

  Why:    Using binary packing would be faster. Named fields will permit
          error checking to ensure that what is sent is what the
          receiver really wants.

  Notes:  These are internal improvements in the interest of the
          long-term stability and evolution of the program. On the one
          hand, the sooner they're done, the less code we have to rip up
          when the time comes to install them. On the other hand, they
          don't bring an immediately perceptible benefit to potential
          users. Item 10 and possibly item 11 should be deferred until
          Bacula is well established with a growing user community more
          or less happy with the feature set. At that time, it will make
          a good "next generation" upgrade in the interest of data
          immortality.

======================================================
Base Jobs design
It is somewhat like a Full save that becomes an incremental relative to
the Base job (or jobs): only non-base files plus base files that have
changed are saved.
Need:
- A new BaseFile table that contains:
     JobId, BaseJobId, FileId (from Base).
  I.e. for each base file that exists but is not saved because it has
  not changed, the File daemon sends the JobId, BaseId, and FileId back
  to the Director, who creates the DB entry.
- To initiate a Base save, the Director sends the FD the FileId and the
  full filename for each file in the Base.
- When the FD finds a Base file, he requests the Director to send him
  the full File entry (stat packet plus MD5), or conversely, the FD
  sends it to the Director and the Director says yes or no. This can be
  quite rapid if the FileId is kept by the FD for each Base Filename.
- It is probably better to have the comparison done by the FD despite
  the fact that the File entry must be sent across the network.
- An alternative would be to send the FD the whole File entry from the
  start. The disadvantage is that it requires a lot of space. The
  advantage is that it requires less communications during the save.
- The Job record must be updated to indicate that one or more Bases
  were used.
- At the end of the Job, the FD returns:
     1. Count of base files/bytes not written to tape (i.e. matches)
     2. Count of base files that were saved, i.e. had changed.
- No tape record would be written for a Base file that matches, in the
  same way that no tape record is written for Incremental jobs where the
  file is not saved because it is unchanged.
- On a restore, all the Base file records must explicitly be found from
  the BaseFile table. I.e. for each Full save that is marked to have one
  or more Base Jobs, search the BaseFile table for all occurrences of
  the JobId.
- An optimization might be to make the BaseFile record have:
     JobId
     BaseId
     FileId
  plus the FileIndex. This would avoid the need to explicitly fetch each
  File record for the Base job. The Base Job record will be fetched to
  get the VolSessionId and VolSessionTime.

=========================================================

=============================================================
          Request For Comments For File Backup Options
                       10 November 2002

Subject: File Backup Options

Problem:
  A few days ago, a Bacula user who is backing up to file volumes and
  using compression asked if it was possible to suppress compressing all
  .gz files since it was a waste of CPU time. Although Bacula currently
  permits using different options (compression, ...) on a directory by
  directory basis, it cannot do it on a file by file basis, which is
  clearly what was desired.

Proposed Implementation:
  To solve this problem, I propose the following:
  - Add a new Director resource type called FileOptions.
  - The FileOptions resource will have records for all options that can
    currently be specified on the Include record (in a FileSet).
    Examples below.
  - The FileOptions resource will permit an exclude option as well as a
    number of additional options.
  - The heart of the FileOptions resource is the ability to supply any
    number of ApplyTo records which specify POSIX regular expressions.
    These ApplyTo regular expressions are applied to the fully qualified
    filename (path and all). If one matches, then the FileOptions will
    be used.
  - When an ApplyTo specification matches an included file, the options
    specified in the FileOptions resource will override the default
    options specified on the Include record.
  - Include records will be modified to permit referencing one or more
    FileOptions resources. The FileOptions will be used in the order
    listed on the Include record and the first one that matches will be
    applied.
  - Options (or specifications) currently supplied on the Include record
    will be deprecated (i.e. removed in a later version a year or so
    from now).
  - The Exclude record will be deprecated as the same functionality can
    be obtained by using an Exclude = yes in the FileOptions.

FileOptions records:
  The following records can appear in the FileOptions resource. An
  asterisk preceding the name indicates a feature not currently
  implemented.

  For Backup Jobs:
  - Compression= (GZIP, ...)
  - Signature= (MD5, SHA1, ...)
  - *Encryption=
  - OneFs= (yes/no)      - remain on one filesystem
  - Recurse= (yes/no)    - recurse into subdirectories
  - Sparse= (yes/no)     - do sparse file backup
  - *Exclude= (yes/no)   - exclude file from being saved
  - *Reader= (filename)  - external read (backup) program
  - *Plugin= (filename)  - read/write plugin module

  For Verify Jobs:
  - verify= (ipnougsamc5) - verify options

  For Restore Jobs:
  - replace= (always/ifnewer/ifolder/never) - replace options currently
    implemented in 1.27
  - *Writer= (filename)  - external write (restore) program

Implementation:
  Currently options specifying compression, MD5 signatures, recursion,
  ... of a FileSet are supplied on the Include record. These will now
  all be collected into a FileOptions resource, which will be specified
  on the Include in place of the options. Multiple FileOptions may be
  specified. Since the FileOptions contain regular expressions that are
  applied to the full filename, this will give the ability to specify
  backup options on a file by file basis to whatever level of detail you
  wish (a minimal matching sketch appears after the Unsaved File design
  section below).

Example:

  Today:

     FileSet {
        Name = "FullSet"
        Include = compression=GZIP signature=MD5 {
           /
        }
     }

  Proposal:

     FileSet {
        Name = "FullSet"
        Include = FileOptions=Opts {
           /
        }
     }
     FileOptions {
        Name = Opts
        Compression = GZIP
        Signature = MD5
        ApplyTo = /*.?*/
     }

  That's a lot more to do the same thing, but it gives the ability to
  apply options on a file by file basis. For example, suppose you want
  to compress all files but not any file with extensions .gz or .Z.
  You could do so as follows:

     FileSet {
        Name = "FullSet"
        Include = FileOptions=NoCompress FileOptions=Opts {
           /
        }
     }
     FileOptions {
        Name = Opts
        Compression = GZIP
        Signature = MD5
        ApplyTo = /*.?*/     # matches all files
     }
     FileOptions {
        Name = NoCompress
        Signature = MD5
        # Note multiple ApplyTos are ORed
        ApplyTo = /*.gz/     # matches .gz files
        ApplyTo = /*.Z/      # matches .Z files
     }

  Now, since the NoCompress FileOptions is specified first on the
  Include line, any *.gz or *.Z file will have an MD5 signature
  computed, but will not be compressed. For all other files, the
  NoCompress will not match, so the Opts options will be used, which
  will include GZIP compression.

Questions:
  - Is it necessary to provide some means of ANDing regular expressions
    and negation? (not currently planned) e.g.
       ApplyTo = /*.gz/ && !/big.gz/
  - I see that Networker has a "null" module which, if specified, does
    not back up the file, but does make a record of the file in the
    catalog so that the catalog will reflect an exact picture of the
    filesystem. The result is that the file can be "seen" when
    "browsing" the save sets, but it cannot be restored. Is this really
    useful? Should it be implemented in Bacula?

Results:
  After implementing the above, the user will be able to specify on a
  file by file basis (using regular expressions) what options are
  applied for the backup.

=============================================

==========================================================
Unsaved File design
For each Incremental job that is run, there may be files that were found
but not saved because they were locked (this applies only to Windows).
Such a system could send back to the Director a list of Unsaved files.
Need:
- A new UnSavedFiles table that contains:
     JobId
     PathId
     FilenameId
- Then in the next Incremental job, the list of Unsaved Files will be
  fed to the FD, who will ensure that they are explicitly chosen even if
  the standard date/time check would not have selected them.
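Returning to the FileOptions RFC above, here is a minimal sketch of the
ordered, first-match-wins ApplyTo evaluation it describes, using POSIX
regex. The struct and function names (file_options, select_options) are
assumptions for illustration, not proposed resource code; the patterns
are assumed to have been compiled with regcomp() when the configuration
was parsed.

   #include <sys/types.h>
   #include <regex.h>
   #include <stddef.h>

   /* Illustrative stand-in for a parsed FileOptions resource */
   struct file_options {
      regex_t *apply_to;       /* compiled ApplyTo patterns (ORed) */
      int      num_apply_to;
      /* Compression, Signature, Exclude, ... records would live here */
   };

   /* Return the first FileOptions whose ApplyTo list matches the fully
    * qualified filename, honoring the order given on the Include record.
    * NULL means "fall back to the Include record's default options".
    */
   static struct file_options *select_options(struct file_options **opts,
                                              int num_opts, const char *fname)
   {
      for (int i = 0; i < num_opts; i++) {                 /* Include order */
         for (int j = 0; j < opts[i]->num_apply_to; j++) { /* ApplyTos are ORed */
            if (regexec(&opts[i]->apply_to[j], fname, 0, NULL, 0) == 0) {
               return opts[i];
            }
         }
      }
      return NULL;
   }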
=============================================================

Done: (see kernsdone for more)
- Heartbeat between daemons.
- Fix Dir heartbeat in restore and verify vol. Be sure to make
  bnet_recv() ignore BNET_HEARTBEAT.
- Implement HEART_BEAT while the SD is waiting for tapes.
- Include RunBeforeJob and RunAfterJob output in the message stream.
- Change M_INFO to M_RESTORED for all restored files.
- Fix the command prompt in gnome-console by checking on Ready.
- Merge the SQLite, MySQL, and Rel spec into a single file.
- Fix config of "console".
- Check if cancel works with the FD (fixed).
- Properly configure console and gconsole (currently for source, not
  configured for installation).
- Error labeling a tape from the console gets a Jmsg error because of
  no Job.
- Test and implement get_pint and get_yesno.
- Implement a global with the DB name and add it to btraceback.gdb.
- Remove subsysdir from the conf files (used only in autostart scripts).
- Fix the following:
    rufus-dir: Max configured use duration exceeded. Marking Volume "MatouBackup" as Used.
    rufus-sd: Volume "" previously written, moving to end of data.
    rufus-sd: Matou.2003-05-10_10.39.18 Error: I canot write on this volume because:
       The number of files mismatch! Volume=1 Catalog=0
    rufus-sd: Matou.2003-05-10_10.39.18 Error: askdir.c:155 NULL Volume name. This shouldn't happen!!!
- Shell character expansion is failing occasionally.
- Add a section to the doc on Manual cycling of Volumes.
- Check if Job/File retentions apply to multivolume jobs.
- Fix missing cassette in autoloader during read:
    14-May-2003 14:41 undef-sd: RestoreFiles.2003-05-14_14.41.00 Warning: acquire.c:106 Volume name mismatch. Wanted TestVolume0005 got TestVolume0010
    14-May-2003 14:41 undef-sd: 3301 Issuing autochanger "loaded" command.
    14-May-2003 14:41 undef-sd: 3302 Issuing autochanger "unload" command.
    14-May-2003 14:42 undef-sd: 3303 Issuing autochanger "load slot 1" command.
    14-May-2003 14:42 undef-sd: 3304 Autochanger "load slot 1" status is OK.
    14-May-2003 14:42 undef-sd: RestoreFiles.2003-05-14_14.41.00 Warning: acquire.c:106 Volume name mismatch. Wanted TestVolume0005 got TestVolume0009
    14-May-2003 14:42 undef-sd: 3301 Issuing autochanger "loaded" command.
    14-May-2003 14:42 undef-sd: RestoreFiles.2003-05-14_14.41.00 Warning: acquire.c:106 Volume name mismatch. Wanted TestVolume0005 got TestVolume0009
    14-May-2003 14:42 undef-sd: 3301 Issuing autochanger "loaded" command.
    14-May-2003 14:42 undef-sd: RestoreFiles.2003-05-14_14.41.00 Warning: acquire.c:106 Volume name mismatch. Wanted TestVolume0005 got TestVolume0009
    14-May-2003 14:42 undef-sd: 3301 Issuing autochanger "loaded" command.
    14-May-2003 14:42 undef-sd: RestoreFiles.2003-05-14_14.41.00 Warning: acquire.c:106 Volume name mismatch. Wanted TestVolume0005 got TestVolume0009
    14-May-2003 14:42 undef-sd: 3301 Issuing autochanger "loaded" command.
    14-May-2003 14:42 undef-sd: RestoreFiles.2003-05-14_14.41.00 Fatal error: acquire.c:129 Too many errors trying to mount device "/dev/nrsa0".
    14-May-2003 14:42 undef-dir: Bacula 1.31 (12May03): 14-May-2003 14:42
- Fix problem reported by Christopher McCurdy:
    xeon-fd: Could not stat c:/Documents and Settings/All Users/Application Data/Humc:\Documents and Settings\All User98_AIX.kbf: ERR=No such file or directory
  Cannot reproduce.
- The following:
    Re-read last block at EOT failed. ERR=block.c:523 Read zero bytes on device /dev/nrsa0.
    undef-sd: block.c:523 Read zero bytes on device /dev/nrsa0.
  apparently masks the standard EOM message.
- BSD (probably) does not have strtoll().
- BSD does not have ioctl() MTEOM.
- BSD defines a number of MT_xxx variables which conflict with those
  defined by Bacula.
- Make the default duration days if no qualifier (e.g. s) is specified.
- BSDI: fix finding the gcc version.
- When the FD errs (e.g. disk full), have a more graceful shutdown.
- Make sure Bacula prunes/purges canceled and failed jobs too, and all
  jobs with zero JobFiles.
- Implement Volume name checking.
- Document what characters can go into Volume names.
- Getting the following on all directories on Win32:
    19-May-2003 01:14 tibs-fd: Could not access c:/cygwin/home/kern/rxvt: ERR=Permission denied
- Cancellation caused JobMedia error:
    babylon5-dir: Last FULL backup time not found. Doing FULL backup.
    babylon5-dir: Start Backup JobId 416, Job=Zocalo_Save.2003-05-19_02.15.06
    babylon5-sd: End of media on Volume VXA-V17-Inc-001 Bytes=31,982,900,672 Blocks=495,781.
    babylon5-sd: Job Zocalo_Save.2003-05-19_02.15.06 waiting. Cannot find any appendable volumes.
    babylon5-sd: Someone woke me up, but I cannot find any appendable volumes for Job=Zocalo_Save.2003-05-19_02.15.06.
    babylon5-sd: Zocalo_Save.2003-05-19_02.15.06 Fatal error: Job Zocalo_Save.2003-05-19_02.15.06 canceled while waiting for mount on Storage Device "Ecrix_VXA-1".
    babylon5-sd: Zocalo_Save.2003-05-19_02.15.06 Fatal error: Cannot fixup device error. Job Zocalo_Save.2003-05-19_02.15.06 canceled while waiting for mount on Storage Device "Ecrix_VXA-1".
    babylon5-dir: Zocalo_Save.2003-05-19_02.15.06 Error: Catalog error creating JobMedia record. sql_create.c:125 Create JobMedia failed. Record already exists.
    babylon5-sd: Zocalo_Save.2003-05-19_02.15.06 Error: Error creating JobMedia record: 1991 Update JobMedia error
    babylon5-sd: Zocalo_Save.2003-05-19_02.15.06 Error: askdir.c:158 NULL Volume name. This shouldn't happen!!!
    zocalo-fd: Zocalo_Save.2003-05-19_02.15.06 Error: bnet.c:310 Write error sending to Storage daemon:babylon5:9103: ERR=Broken pipe
- Volume names with spaces get jammed into the catalog with 0x1, i.e.
  the SD bashes the Volume name but it is not un-bashed by the Dir:
    jerom-dir: MonthlySave.2003-05-10_17.12.01 Error: Unable to get Media record for Volume Tape^A1: ERR=sql_get.c:788 Media record for Volume "Tape^A1" not found.
    jerom-sd: MonthlySave.2003-05-10_17.12.01 Error: Error updating Volume Info: 1991 Catalog Request failed: sql_get.c:788 Media record for Volume "Tape^A1" not found.
- ChangeServiceConfig2A does not exist on WinNT (ADVAPI32.DLL).
- Fix "access not allowed" for backup of files on WinXP.
- Check for existence of all new Win32 APIs. See LoadLibrary in
  winservice.cpp.
- Count errors during restore and print them in the Job report.
- Bug: fix access problems on files restored on WinXP.
- Put the system type returned by the FD into the catalog.
- Finish the WIN32_DATA stream code (bextract, check if it can handle
  the stream).
- Make the SD keep track of Files and Bytes during restore.
- If you enter the userid by hand for restore, you get:
    Enter JobId(s), comma separated, to restore: 74
    You have selected the following JobId: 74
    Building directory tree for JobId 74 ...
    134645140 items inserted into the tree and marked for extraction.
- Add SDWriteSeqNo to the SD, and probably Read on the FD side.
- If the bootstrap is non-zero for restore, do not show the JobId in the
  "OK to run? (yes/mod/no):" list.
- When all cassettes in the magazine are used, got:
    22-May-2003 18:24 undef-sd: 3304 Autochanger "load slot 1" status is OK.
    22-May-2003 18:24 undef-sd: NightlySave.2003-05-22_14.08.16 Warning: mount.c:245 Director wanted Volume "TestVolume0009". Current Volume "TestVolume0005" not acceptable because: 1998 Volume "TestVolume0005" not Append or Recycle.
    22-May-2003 18:24 undef-sd: NightlySave.2003-05-22_14.08.16 Error: Autochanger Volume "TestVolume0009" not found in slot 1. Setting slot to zero in catalog.
    22-May-2003 18:24 undef-sd: Please mount Volume "TestVolume0009" on Storage Device "ARCHIVE 4586" for Job NightlySave.2003-05-22_14.08.16 Use "mount" command to release Job.
    22-May-2003 19:24 undef-sd: Please mount Volume "TestVolume0009" on Storage Device "ARCHIVE 4586" for Job NightlySave.2003-05-22_14.08.16 Use "mount" command to release Job.
- Don't zero the Slot when the wrong volume is found -- simply ask the
  operator.
- Implement MTIOCERRSTAT on FreeBSD to clear tape error conditions.
- Shell expansion fails for working_directory in the SD from time to
  time.
- Fix the "Automatically selected: xxx" message to say "Automatically
  selected Pool: xxx".
- The default duration with no qualifier is seconds; it should be 1 day.
- Zap sd_auth_key in the SD after the FD connection.
- Find a solution for the multiple FileSet problem (when it is changed).
  Add a date?
- Look at Python for a Bacula scripting language -- www.python.org
- When marking a file in Restore that is a hard link, also mark the link
  so that the data will be reloaded.
- Emergency restore info:
  - Backup Bacula
  - Backup working directory
  - Backup Catalog
- Why don't we get an error message from the Win32 FD when the bootstrap
  file cannot be created for the restore command?
- Fix the Win2000 error with no messages during startup.
- Make restore more robust in counting errors and not immediately
  bailing out. Also print the error message once, but try to continue.
- Add code to check that blocks are sequential on restore.
- Remove "rufus" and such references from regress.
- No READLINE_SRC if found in an alternate directory.
- If ./btape is called without /dev, assume the argument is a Storage
  resource name.
- Find a general solution for sscanf size problems (as well as sprintf).
  Do it at run time?
- Bytes restored is wrong.
- The "List last 20 Jobs run" doesn't work correctly in restore. It
  doesn't show the last 20 jobs, but some older ones.
- Fix Verify VolumeToCatalog to use BSRs -- it is broken.
- Implement Release Storage=xxx.
- Fix restore on Win95/98.
- Remove the Jmsg() in sql_find.c:102 or only print it on a hard error.
- Implement FileSet VolIndex -- done, but must update old records.
- Check this below from Phil. This was SD reported data rather than FD
  data!
  > When the job was done, Bacula reported 11084 files restored:
  >
  >   JobId:                  527
  >   Job:                    Zocalo_Restore.2003-06-05_16.42.01
  >   Client:                 Zocalo
  >   Start time:             05-Jun-2003 16:42
  >   End time:               06-Jun-2003 01:21
  >   Files Restored:         11,084
  >   Bytes Restored:         65,474,772
  >   Rate:                   2.1 KB/s
  >   FD termination status:  OK
  >   Termination:            Restore OK
  >
  > when it should probably have reported 11084 files scanned, 250
  > restored. The bytes restored count looks about right.
  >
- Should Bacula mark an Append tape as Purged when purging?
- Use switch() in backup.c and restore.c in the FD instead of a giant
  if statement.
- If during a restore, a hard linked file already exists (on option),
  delete the file and re-link it. This is to avoid the possibility that
  the user had re-linked the file between the backup and the restore.
  Do lstat() to see if it is already properly linked (see the sketch
  below). Same for a symlinked file. Make sure ifnewer, ifolder,
  never, ... apply correctly.
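A minimal sketch of the lstat() check described in the last item above.
The helper name already_linked is an assumption for illustration, not FD
code; the FD would call something like this before deciding whether to
delete and re-link an existing hard linked file.

   #include <sys/types.h>
   #include <sys/stat.h>
   #include <unistd.h>
   #include <errno.h>

   /* Return 1 if 'path' exists and is already the same inode as 'target'
    * (i.e. properly hard linked), 0 if it is missing or points elsewhere
    * and must be (re-)linked, -1 on any other error.
    */
   static int already_linked(const char *path, const char *target)
   {
      struct stat pst, tst;

      if (lstat(path, &pst) != 0) {
         return errno == ENOENT ? 0 : -1;   /* not there: must link it */
      }
      if (lstat(target, &tst) != 0) {
         return -1;
      }
      return (pst.st_dev == tst.st_dev && pst.st_ino == tst.st_ino) ? 1 : 0;
   }

   /* If this returns 0 for an existing path, the restore would unlink(path)
    * and then link(target, path) to re-create the hard link.
    */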