X-Git-Url: https://git.sur5r.net/?a=blobdiff_plain;f=bacula%2Fkernstodo;h=fe4d5703501cba30b30372920506725eb86c3dae;hb=b5176d7560168c760634c017c29eff45deccf61a;hp=c2ca15ec94cc5f0b225616e607386ec09ff1f84d;hpb=caddf82933674d2d117a71bcebc96d459ae5176c;p=bacula%2Fbacula diff --git a/bacula/kernstodo b/bacula/kernstodo index c2ca15ec94..fe4d570350 100644 --- a/bacula/kernstodo +++ b/bacula/kernstodo @@ -1,8 +1,17 @@ Kern's ToDo List - 14 December 2007 + 17 July 2009 + +Rescue: +Add to USB key: + gftp sshfs kile kate lsssci m4 mtx nfs-common nfs-server + patch squashfs-tools strace sg3-utils screen scsiadd + system-tools-backend telnet dpkg traceroute urar usbutils + whois apt-file autofs busybox chkrootkit clamav dmidecode + manpages-dev manpages-posix manpages-posix-dev Document: +- package sg3-utils, program sg_map - !!! Cannot restore two jobs a the same time that were written simultaneously unless they were totally spooled. - Document cleaning up the spool files: @@ -39,6 +48,12 @@ Document: for disaster recovery. Professional Needs: +- Nexenta (zfs + hardy + iscsi + nas + smf support) +- NDMP + - For NAS OpenNAS + - ndmfs -- File Server extention in NDMPv4. + - ndmjob -- NDMP backup/restore NDMPv2, NDMPv3, and NDMPv4 +- Base jobs - Migration from other vendors - Date change - Path change @@ -48,14 +63,11 @@ Professional Needs: - Detect state change of system (verify) - Synthetic Full, Diff, Inc (Virtual, Reconstructed) - SD to SD -- Modules for Databases, Exchange, ... - Novell NSS backup http://www.novell.com/coolsolutions/tools/18952.html - Compliance norms that compare restored code hash code. - When glibc crash, get address with info symbol 0x809780c - How to sync remote offices. -- Exchange backup: - http://www.microsoft.com/technet/itshowcase/content/exchbkup.mspx - David's priorities Copypools Extract capability (#25) @@ -66,20 +78,88 @@ Professional Needs: Complete rework of the scheduling system (not in list) Performance and usage instrumentation (not in list) See email of 21Aug2007 for details. -- Implement Diff,Inc Retention Periods - Look at: http://tech.groups.yahoo.com/group/cfg2html and http://www.openeyet.nl/scc/ for managing customer changes Priority: -- Complete Catalog in Pool -- Implement Bacula plugins -- design API -- Scripts +================ + +- Why no error message if restore has no permission on the where + directory? +- Possibly allow manual "purge" to purge a Volume that has not + yet been written (even if FirstWritten time is zero) see ua_purge.c + is_volume_purged(). +- Add disk block detection bsr code (make it work). +- Remove done bsrs. +- User options for plugins. +- Pool Storage override precedence over command line. +- Autolabel only if Volume catalog information indicates tape not + written. This will avoid overwriting a tape that gets an I/O + error on reading the volume label. +- I/O error, SD thinks it is not the right Volume, should check slot + then disable volume, but Asks for mount. +- Can be posible modify package to create and use configuration files in + the Debian manner? + + For example: + + /etc/bacula/bacula-dir.conf + /etc/bacula/conf.d/pools.conf + /etc/bacula/conf.d/clients.conf + /etc/bacula/conf.d/storages.conf + + and into bacula-dir.conf file include + + @/etc/bacula/conf.d/pools.conf + @/etc/bacula/conf.d/clients.conf + @/etc/bacula/conf.d/storages.conf +- Possibly add an Inconsistent state when a Volume is in error + for non I/O reasons. +- Fix #ifdefing so that smartalloc can be disabled. Check manual + -- the default is enabled. 
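=== smartalloc #ifdef (sketch) ===
The smartalloc item above is about making the build work with the
wrapper compiled out: every allocation call site should collapse to the
plain libc calls when SMARTALLOC is not defined.  A minimal C sketch of
the intended pattern; the names (SMARTALLOC, sm_malloc, b_alloc, ...)
are illustrative placeholders, not necessarily the exact symbols used
in the source:

  #ifdef SMARTALLOC
  extern void *sm_malloc(const char *file, int line, unsigned int nbytes);
  extern void  sm_free(const char *file, int line, void *ptr);
  #define b_alloc(n)   sm_malloc(__FILE__, __LINE__, (n))
  #define b_release(p) sm_free(__FILE__, __LINE__, (p))
  #else
  #include <stdlib.h>
  #define b_alloc(n)   malloc(n)
  #define b_release(p) free(p)
  #endif
===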
+- Change calling sequence to delete_job_id_range() in ua_cmds.c + the preceding strtok() is done inside the subroutine only once. +- Dangling softlinks are not restored properly. For example, take a + soft link such as src/testprogs/install-sh, which points to /usr/share/autoconf... + move the directory to another machine where the file /usr/share/autoconf does + not exist, back it up, then try a full restore. It fails. +- Softlinks that point to non-existent file are not restored in restore all, + but are restored if the file is individually selected. BUG! - Prune by Job -- Prune by Job Level -- True automatic pruning -- Duplicate Jobs - Run, Fail, Skip, Higher, Promote, CancelLowerLevel - Proximity +- Prune by Job Level (Full, Differential, Incremental) +- Strict automatic pruning +- Use "./config no-idea no-mdc2 no-rc5" on building OpenSSL for + Win32 to avoid patent problems. +- Implement multiple jobid specification for the cancel command, + similar to what is permitted on the update slots command. + - Better yet allow wild-cards or regexes. +- Add Group resource for grouping Jobs so they can all be + run at the same time or canceled at the same time. +- modify pruning to keep a fixed number of versions of a file, + if requested. +- the cd-command should allow complete paths + i.e. cd /foo/bar/foo/bar + -> if a customer mails me the path to a certain file, + its faster to enter the specified directory +- Make tree walk routines like cd, ls, ... more user friendly + by handling spaces better. +- When doing a restore, if the user does an "update slots" + after the job started in order to add a restore volume, the + values prior to the update slots will be put into the catalog. + Must retrieve catalog record merge it then write it back at the + end of the restore job, if we want to do this right. +=== rate design + jcr->last_rate + jcr->last_runtime + MA = (last_MA * 3 + rate) / 4 + rate = (bytes - last_bytes) / (runtime - last_runtime) +- Add a recursive mark command (rmark) to restore. +- "Minimum Job Interval = nnn" sets minimum interval between Jobs + of the same level and does not permit multiple simultaneous + running of that Job (i.e. lets any previous invocation finish + before doing Interval testing). +- Look at simplifying File exclusions. +- Scripts - Auto update of slot: rufus-dir: ua_run.c:456-10 JobId=10 NewJobId=10 using pool Full priority=10 02-Nov 12:58 rufus-dir JobId 10: Start Backup JobId 10, Job=kernsave.2007-11-02_12.58.03 @@ -90,13 +170,10 @@ Priority: 02-Nov 12:58 rufus-sd JobId 10: Wrote label to prelabeled Volume "Vol001" on device "DDS-4" (/dev/nst0) 02-Nov 12:58 rufus-sd JobId 10: Alert: TapeAlert[7]: Media Life: The tape has reached the end of its useful life. 02-Nov 12:58 rufus-dir JobId 10: Bacula rufus-dir 2.3.6 (26Oct07): 02-Nov-2007 12:58:51 -- Eliminate: /var is a different filesystem. Will not descend from / into /var - Separate Files and Directories in catalog - Create FileVersions table - Look at rsysnc for incremental updates and dedupping - Add MD5 or SHA1 check in SD for data validation -- modify pruning to keep a fixed number of versions of a file, - if requested. - finish implementation of fdcalled -- see ua_run.c:105 - Fix problem in postgresql.c in my_postgresql_query, where the generation of the error message doesn't differentiate result==NULL @@ -108,22 +185,16 @@ Priority: - Implement continue spooling while despooling. - Remove all install temp files in Win32 PLUGINSDIR. - Audit retention periods to make sure everything is 64 bit. 
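=== rate design (sketch) ===
The "=== rate design" note above amounts to a 3:1 exponential moving
average of the instantaneous transfer rate.  A self-contained C sketch;
the struct and field names mirror the note but are illustrative, not
the actual JCR layout:

  #include <stdint.h>
  #include <time.h>

  typedef struct {
     uint64_t last_bytes;     /* bytes written at the previous sample */
     time_t   last_runtime;   /* job runtime (seconds) at that sample */
     double   last_rate;      /* previous moving average, bytes/sec */
  } rate_state;

  static double update_rate(rate_state *st, uint64_t bytes, time_t runtime)
  {
     time_t interval = runtime - st->last_runtime;
     if (interval <= 0) {
        return st->last_rate;                /* nothing new to measure */
     }
     double rate = (double)(bytes - st->last_bytes) / (double)interval;
     double ma   = (st->last_rate * 3 + rate) / 4;  /* MA = (last_MA*3 + rate)/4 */
     st->last_bytes   = bytes;
     st->last_runtime = runtime;
     st->last_rate    = ma;
     return ma;
  }
===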
-- Use E'xxx' to escape PostgreSQL strings. - No where in restore causes kaboom. - Performance: multiple spool files for a single job. - Performance: despool attributes when despooling data (problem multiplexing Dir connection). - Make restore use the in-use volume reservation algorithm. -- Look at mincore: http://insights.oetiker.ch/linux/fadvise.html -- Unicode input http://en.wikipedia.org/wiki/Byte_Order_Mark -- Add TLS to bat (should be done). - When Pool specifies Storage command override does not work. - Implement wait_for_sysop() message display in wait_for_device(), which now prints warnings too often. - Ensure that each device in an Autochanger has a different Device Index. -- Add Catalog = to Pool resource so that pools will exist - in only one catalog -- currently Pools are "global". - Look at sg_logs -a /dev/sg0 for getting soft errors. - btape "test" command with Offline on Unmount = yes @@ -146,7 +217,6 @@ Priority: > configuration string value to a CRYPTO_CIPHER_* value, if anyone is > interested in implementing this functionality. -- Why doesn't @"xxx abc" work in a conf file? - Figure out some way to "automatically" backup conf changes. - Add the OS version back to the Win32 client info. - Restarted jobs have a NULL in the from field. @@ -156,8 +226,6 @@ Priority: and possibly changing the blobs into varchar. - Ensure that the SD re-reads the Media record if the JobFiles does not match -- it may have been updated by another job. -- Look at moving the Storage directive from the Job to the - Pool in the default conf files. - Doc items - Test Volume compatibility between machine architectures - Encryption documentation @@ -207,24 +275,16 @@ Projects: each data chunk -- according to James Harper 9Jan07. - Features - Better scheduling - - Full at least once a month, ... - - Cancel Inc if Diff/Full running - More intelligent re-run - - New/deleted file backup - FD plugins - Incremental backup -- rsync, Stow - For next release: - Try to fix bscan not working with multiple DVD volumes bug #912. - Look at mondo/mindi -- Don't restore Solaris Door files: - #define S_IFDOOR in st_mode. - see: http://docs.sun.com/app/docs/doc/816-5173/6mbb8ae23?a=view#indexterm-360 - Make Bacula by default not backup tmpfs, procfs, sysfs, ... - Fix hardlinked immutable files when linking a second file, the immutable flag must be removed prior to trying to link it. -- Implement Python event for backing up/restoring a file. - Change dbcheck to tell users to use native tools for fixing broken databases, and to ensure they have the proper indexes. - add udev rules for Bacula devices. @@ -277,21 +337,6 @@ Low priority: http://linuxwiki.de/Bacula (in German) - Possibly allow SD to spool even if a tape is not mounted. -- It appears to me that you have run into some sort of race - condition where two threads want to use the same Volume and they - were both given access. Normally that is no problem. However, - one thread wanted the particular Volume in drive 0, but it was - loaded into drive 1 so it decided to unload it from drive 1 and - then loaded it into drive 0, while the second thread went on - thinking that the Volume could be used in drive 1 not realizing - that in between time, it was loaded in drive 0. - I'll look at the code to see if there is some way we can avoid - this kind of problem. Probably the best solution is to make the - first thread simply start using the Volume in drive 1 rather than - transferring it to drive 0. 
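=== hardlinked immutable files (sketch) ===
For the "hardlinked immutable files" item above, the restore-side fix is
essentially: clear the immutable flag on the already-restored link
target, create the new link, then put the flag back.  A Linux-only C
sketch using the FS_IOC_GETFLAGS/FS_IOC_SETFLAGS ioctls (requires
CAP_LINUX_IMMUTABLE; error handling abbreviated):

  #include <fcntl.h>
  #include <unistd.h>
  #include <sys/ioctl.h>
  #include <linux/fs.h>

  static int link_to_immutable(const char *existing, const char *newname)
  {
     int fd = open(existing, O_RDONLY);
     if (fd < 0) {
        return -1;
     }
     int flags = 0;
     ioctl(fd, FS_IOC_GETFLAGS, &flags);
     if (flags & FS_IMMUTABLE_FL) {
        int tmp = flags & ~FS_IMMUTABLE_FL;
        ioctl(fd, FS_IOC_SETFLAGS, &tmp);       /* drop immutable */
     }
     int rc = link(existing, newname);
     if (flags & FS_IMMUTABLE_FL) {
        ioctl(fd, FS_IOC_SETFLAGS, &flags);     /* restore immutable */
     }
     close(fd);
     return rc;
  }
===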
-- Fix re-read of last block to check if job has actually written - a block, and check if block was written by a different job - (i.e. multiple simultaneous jobs writing). - Figure out how to configure query.sql. Suggestion to use m4: == changequote.m4 === changequote(`[',`]')dnl @@ -318,32 +363,6 @@ Low priority: The problem is that it requires m4, which is not present on all machines at ./configure time. -- Given all the problems with FIFOs, I think the solution is to do something a - little different, though I will look at the code and see if there is not some - simple solution (i.e. some bug that was introduced). What might be a better - solution would be to use a FIFO as a sort of "key" to tell Bacula to read and - write data to a program rather than the FIFO. For example, suppose you - create a FIFO named: - - /home/kern/my-fifo - - Then, I could imagine if you backup and restore this file with a direct - reference as is currently done for fifos, instead, during backup Bacula will - execute: - - /home/kern/my-fifo.backup - - and read the data that my-fifo.backup writes to stdout. For restore, Bacula - will execute: - - /home/kern/my-fifo.restore - - and send the data backed up to stdout. These programs can either be an - executable or a shell script and they need only read/write to stdin/stdout. - - I think this would give a lot of flexibility to the user without making any - significant changes to Bacula. - ==== SQL # get null file @@ -359,70 +378,6 @@ select Path.Path from Path,File where File.JobId=nnn and - Look into replacing autotools with cmake http://www.cmake.org/HTML/Index.html -=== Migration from David === -What I'd like to see: - -Job { - Name = "-migrate" - Type = Migrate - Messages = Standard - Pool = Default - Migration Selection Type = LowestUtil | OldestVol | PoolOccupancy | -Client | PoolResidence | Volume | JobName | SQLquery - Migration Selection Pattern = "regexp" - Next Pool = -} - -There should be no need for a Level (migration is always Full, since you -don't calculate differential/incremental differences for migration), -Storage should be determined by the volume types in the pool, and Client -is really a selection issue. Migration should always occur to the -NextPool defined in the pool definition. If no nextpool is defined, the -job should end with a reason of "no place to go". If Next Pool statement -is present, we override the check in the pool definition and use the -pool specified. - -Here's how I'd define Migration Selection Types: - -With Regexes: -Client -- Migrate data from selected client only. Migration Selection -Pattern regexp provides pattern to select client names, eg ^FS00* makes -all client names starting with FS00 eligible for migration. - -Jobname -- Migration all jobs matching name. Migration Selection Pattern -regexp provides pattern to select jobnames existing in pool. - -Volume -- Migrate all data on specified volumes. Migration Selection -Pattern regexp provides selection criteria for volumes to be migrated. -Volumes must exist in pool to be eligible for migration. - - -With Regex optional: -LowestUtil -- Identify the volume in the pool with the least data on it -and empty it. No Migration Selection Pattern required. - -OldestVol -- Identify the LRU volume with data written, and empty it. No -Migration Selection Pattern required. - -PoolOccupancy -- if pool occupancy exceeds , migrate volumes -(starting with most full volumes) until pool occupancy drops below -. 
Pool highmig and lowmig values are in pool definition, no -Migration Selection Pattern required. - - -No regex: -SQLQuery -- Migrate all jobuids returned by the supplied SQL query. -Migration Selection Pattern contains SQL query to execute; should return -a list of 1 or more jobuids to migrate. - -PoolResidence -- Migrate data sitting in pool for longer than -PoolResidence value in pool definition. Migration Selection Pattern -optional; if specified, override value in pool definition (value in -minutes). - - -[ possibly a Python event -- kes ] -=== - Mount on an Autochanger with no tape in the drive causes: Automatically selected Storage: LTO-changer Enter autochanger drive[0]: 0 @@ -571,20 +526,12 @@ minutes). ("D","Diff"), ("I","Inc"); - Show files/second in client status output. -- Add a recursive mark command (rmark) to restore. -- "Minimum Job Interval = nnn" sets minimum interval between Jobs - of the same level and does not permit multiple simultaneous - running of that Job (i.e. lets any previous invocation finish - before doing Interval testing). -- Look at simplifying File exclusions. -- New directive "Delete purged Volumes" - new pool XXX with ScratchPoolId = MyScratchPool's PoolId and let it fill itself, and RecyclePoolId = XXX's PoolId so I can see if it become stable and I just have to supervise MyScratchPool - If I want to remove this pool, I set RecyclePoolId = MyScratchPool's PoolId, and when it is empty remove it. -- Figure out how to recycle Scratch volumes back to the Scratch Pool. - Add Volume=SCRTCH - Allow Check Labels to be used with Bacula labels. - "Resuming" a failed backup (lost line for example) by using the @@ -616,8 +563,6 @@ minutes). backups of the same client and if we again try to start a full backup of client backup abc bacula won't complain. That should be fixed. -- Fix bpipe.c so that it does not modify results pointer. - ***FIXME*** calling sequence should be changed. - For Windows disaster recovery see http://unattended.sf.net/ - regardless of the retention period, Bacula will not prune the last Full, Diff, or Inc File data until a month after the @@ -647,11 +592,8 @@ minutes). - In restore don't compare byte count on a raw device -- directory entry does not contain bytes. -=== rate design - jcr->last_rate - jcr->last_runtime - MA = (last_MA * 3 + rate) / 4 - rate = (bytes - last_bytes) / (runtime - last_runtime) + + - Max Vols limit in Pool off by one? - Implement Files/Bytes,... stats for restore job. - Implement Total Bytes Written, ... for restore job. @@ -695,76 +637,6 @@ minutes). - Bug: if a job is manually scheduled to run later, it does not appear in any status report and cannot be cancelled. -==== Keeping track of deleted/new files ==== -- To mark files as deleted, run essentially a Verify to disk, and - when a file is found missing (MarkId != JobId), then create - a new File record with FileIndex == -1. This could be done - by the FD at the same time as the backup. - - My "trick" for keeping track of deletions is the following. - Assuming the user turns on this option, after all the files - have been backed up, but before the job has terminated, the - FD will make a pass through all the files and send their - names to the DIR (*exactly* the same as what a Verify job - currently does). This will probably be done at the same - time the files are being sent to the SD avoiding a second - pass. The DIR will then compare that to what is stored in - the catalog. 
Any files in the catalog but not in what the - FD sent will receive a catalog File entry that indicates - that at that point in time the file was deleted. This - either transmitted to the FD or simultaneously computed in - the FD, so that the FD can put a record on the tape that - indicates that the file has been deleted at this point. - A delete file entry could potentially be one with a FileIndex - of 0 or perhaps -1 (need to check if FileIndex is used for - some other thing as many of the Bacula fields are "overloaded" - in the SD). - - During a restore, any file initially picked up by some - backup (Full, ...) then subsequently having a File entry - marked "delete" will be removed from the tree, so will not - be restored. If a file with the same name is later OK it - will be inserted in the tree -- this already happens. All - will be consistent except for possible changes during the - running of the FD. - - Since I'm on the subject, some of you may be wondering what - the utility of the in memory tree is if you are going to - restore everything (at least it comes up from time to time - on the list). Well, it is still *very* useful because it - allows only the last item found for a particular filename - (full path) to be entered into the tree, and thus if a file - is backed up 10 times, only the last copy will be restored. - I recently (last Friday) restored a complete directory, and - the Full and all the Differential and Incremental backups - spanned 3 Volumes. The first Volume was not even mounted - because all the files had been updated and hence backed up - since the Full backup was made. In this case, the tree - saved me a *lot* of time. - - Make sure this information is stored on the tape too so - that it can be restored directly from the tape. - - All the code (with the exception of formally generating and - saving the delete file entries) already exists in the Verify - Catalog command. It explicitly recognizes added/deleted files since - the last InitCatalog. It is more or less a "simple" matter of - taking that code and adapting it slightly to work for backups. - - Comments from Martin Simmons (I think they are all covered): - Ok, that should cover the basics. There are few issues though: - - - Restore will depend on the catalog. I think it is better to include the - extra data in the backup as well, so it can be seen by bscan and bextract. - - - I'm not sure if it will preserve multiple hard links to the same inode. Or - maybe adding or removing links will cause the data to be dumped again? - - - I'm not sure if it will handle renamed directories. Possibly it will work - by dumping the whole tree under a renamed directory? - - - It remains to be seen how the backup performance of the DIR's will be - affected when comparing the catalog for a large filesystem. ==== From David: @@ -924,23 +796,8 @@ Why: format string. Then I have the tape labeled automatically with weekday name in the correct language. ========== -- Yes, that is surely the case. I probably should turn those into Warning - errors. In addition, you just made me think that it might not be bad to - add an option to check the file size after backing up the file and - report if it changes. This would be done as an option because it would - add extra overhead. - - Kern, good idea. If you do do that, mention in the output: file - shrunk, or file expanded, just to make it obvious to the user - (without having to the refer to file size), just how the file size - changed. 
- - Would this option be for all file, or just one file? Or a fileset? - Make output from status use html table tags for nicely presenting in a browser. -- Can one write tapes faster with 8192 byte block sizes? -- Document security problems with the same password for everyone in - rpm and Win32 releases. - Browse generations of files. - I've seen an error when my catalog's File table fills up. I then have to recreate the File table with a larger maximum row @@ -1020,8 +877,6 @@ Documentation to do: (any release a little bit at a time) - Use gather write() for network I/O. - Autorestart on crash. - Add bandwidth limiting. -- Add acks every once and a while from the SD to keep - the line from timing out. - When an error in input occurs and conio beeps, you can back up through the prompt. - Detect fixed tape block mode during positioning by looking at @@ -1038,7 +893,6 @@ Documentation to do: (any release a little bit at a time) - Allow the user to select JobType for manual pruning/purging. - bscan does not put first of two volumes back with all info in bscan-test. -- Implement the FreeBSD nodump flag in chflags. - Figure out how to make named console messages go only to that console and to the non-restricted console (new console class?). - Make restricted console prompt for password if *ask* is set or @@ -1055,10 +909,6 @@ Documentation to do: (any release a little bit at a time) -> maybe its more easy to maintain this, if the descriptions of that commands are outsourced to a ceratin-file -- the cd-command should allow complete paths - i.e. cd /foo/bar/foo/bar - -> if a customer mails me the path to a certain file, - its faster to enter the specified directory - if the password is not configured in bconsole.conf you should be asked for it. -> sometimes you like to do restore on a customer-machine @@ -1112,13 +962,10 @@ Documentation to do: (any release a little bit at a time) - Setup lrrd graphs: (http://www.linpro.no/projects/lrrd/) Mike Acar. - Revisit the question of multiple Volumes (disk) on a single device. - Add a block copy option to bcopy. -- Finish work on Gnome restore GUI. - Fix "llist jobid=xx" where no fileset or client exists. - For each job type (Admin, Restore, ...) require only the really necessary fields.- Pass Director resource name as an option to the Console. - Add a "batch" mode to the Console (no unsolicited queries, ...). -- Add a .list all files in the restore tree (probably also a list all files) - Do both a long and short form. - Allow browsing the catalog to see all versions of a file (with stat data on each file). - Restore attributes of directory if replace=never set but directory @@ -1140,32 +987,22 @@ Documentation to do: (any release a little bit at a time) - Check new HAVE_WIN32 open bits. - Check if the tape has moved before writing. - Handling removable disks -- see below: -- Keep track of tape use time, and report when cleaning is necessary. - Add FromClient and ToClient keywords on restore command (or BackupClient RestoreClient). - Implement a JobSet, which groups any number of jobs. If the JobSet is started, all the jobs are started together. Allow Pool, Level, and Schedule overrides. -- Enhance cancel to timeout BSOCK packets after a specific delay. -- Do scheduling by UTC using gmtime_r() in run_conf, scheduler, and - ua_status.!!! Thanks to Alan Brown for this tip. - Look at updating Volume Jobs so that Max Volume Jobs = 1 will work correctly for multiple simultaneous jobs. 
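=== gather write (sketch) ===
The "gather write() for network I/O" item above means sending a packet
header and its payload with one writev() call instead of two write()s.
A minimal C sketch of the pattern for a length-prefixed message (names
are illustrative; a real implementation must loop on short writes):

  #include <stdint.h>
  #include <sys/types.h>
  #include <sys/uio.h>
  #include <arpa/inet.h>

  static ssize_t send_msg(int fd, const void *payload, uint32_t len)
  {
     uint32_t netlen = htonl(len);          /* 4-byte length header */
     struct iovec iov[2];
     iov[0].iov_base = &netlen;
     iov[0].iov_len  = sizeof(netlen);
     iov[1].iov_base = (void *)payload;
     iov[1].iov_len  = len;
     return writev(fd, iov, 2);             /* header + data, one syscall */
  }
===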
-- Correct code so that FileSet MD5 is calculated for < and | filename - generation. - Implement the Media record flag that indicates that the Volume does disk addressing. - Implement VolAddr, which is used when Volume is addressed like a disk, and form it from VolFile and VolBlock. -- Make multiple restore jobs for multiple media types specifying - the proper storage type. - Fix fast block rejection (stored/read_record.c:118). It passes a null pointer (rec) to try_repositioning(). -- Look at extracting Win data from BackupRead. - Implement RestoreJobRetention? Maybe better "JobRetention" in a Job, which would take precidence over the Catalog "JobRetention". - Implement Label Format in Add and Label console commands. -- Possibly up network buffers to 65K. Put on variable. - Put email tape request delays on one or more variables. User wants to cancel the job after a certain time interval. Maximum Mount Wait? - Job, Client, Device, Pool, or Volume? @@ -1229,8 +1066,6 @@ Documentation to do: (any release a little bit at a time) support for Oracle database ?? === - Look at adding SQL server and Exchange support for Windows. -- Make dev->file and dev->block_num signed integers so that -1 can - be an invalid value which happens with BSR. - Create VolAddr for disk files in place of VolFile and VolBlock. This is needed to properly specify ranges. - Add progress of files/bytes to SD and FD. @@ -1260,9 +1095,6 @@ Documentation to do: (any release a little bit at a time) - Implement some way for the File daemon to contact the Director to start a job or pass its DHCP obtained IP number. - Implement a query tape prompt/replace feature for a console -- Copy console @ code to gnome2-console -- Make tree walk routines like cd, ls, ... more user friendly - by handling spaces better. - Make sure that Bacula rechecks the tape after the 20 min wait. - Set IO_NOWAIT on Bacula TCP/IP packets. - Try doing a raw partition backup and restore by mounting a @@ -1279,12 +1111,9 @@ Documentation to do: (any release a little bit at a time) - What to do about "list files job=xxx". - Look at how fuser works and /proc/PID/fd that is how Nic found the file descriptor leak in Bacula. -- Implement WrapCounters in Counters. -- Add heartbeat from FD to SD if hb interval expires. - Can we dynamically change FileSets? - If pool specified to label command and Label Format is specified, automatically generate the Volume name. -- Why can't SQL do the filename sort for restore? - Add ExhautiveRestoreSearch - Look at the possibility of loading only the necessary data into the restore tree (i.e. do it one directory at a @@ -1299,10 +1128,8 @@ Documentation to do: (any release a little bit at a time) run the job but don't save the files. - Make things like list where a file is saved case independent for Windows. -- Use autochanger to handle multiple devices. - Implement a Recycle command - Start working on Base jobs. -- Implement UnsavedFiles DB record. - From Phil Stracchino: It would probably be a per-client option, and would be called something like, say, "Automatically purge obsoleted jobs". What it @@ -1355,7 +1182,6 @@ Documentation to do: (any release a little bit at a time) - bscan without -v is too quiet -- perhaps show jobs. - Add code to reject whole blocks if not wanted on restore. - Check if we can increase Bacula FD priorty in Win2000 -- Make sure the MaxVolFiles is fully implemented in SD - Check if both CatalogFiles and UseCatalog are set to SD. 
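=== VolAddr (sketch) ===
For the VolAddr items above, one plausible encoding is a single 64-bit
address with VolFile in the high 32 bits and VolBlock in the low 32
bits: addresses then sort naturally and a block range becomes two
integer comparisons.  The packing scheme below is an assumption for
illustration, not a statement of the actual on-media format:

  #include <stdint.h>

  static inline uint64_t make_vol_addr(uint32_t vol_file, uint32_t vol_block)
  {
     return ((uint64_t)vol_file << 32) | vol_block;
  }

  static inline uint32_t vol_addr_file(uint64_t addr)
  {
     return (uint32_t)(addr >> 32);
  }

  static inline uint32_t vol_addr_block(uint64_t addr)
  {
     return (uint32_t)(addr & 0xFFFFFFFFu);
  }
===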
- Possibly add email to Watchdog if drive is unmounted too long and a job is waiting on the drive. @@ -1383,8 +1209,6 @@ Documentation to do: (any release a little bit at a time) - Implement script driven addition of File daemon to config files. - Think about how to make Bacula work better with File (non-tape) archives. - Write Unix emulator for Windows. -- Put memory utilization in Status output of each daemon - if full status requested or if some level of debug on. - Make database type selectable by .conf files i.e. at runtime - Set flag for uname -a. Add to Volume label. - Restore files modified after date @@ -1435,19 +1259,13 @@ Documentation to do: (any release a little bit at a time) - MaxWarnings - MaxErrors (job?) ===== -- FD sends unsaved file list to Director at end of job (see - RFC below). -- File daemon should build list of files skipped, and then - at end of save retry and report any errors. - Write a Storage daemon that uses pipes and standard Unix programs to write to the tape. See afbackup. - Need something that monitors the JCR queue and times out jobs by asking the deamons where they are. - Enhance Jmsg code to permit buffering and saving to disk. -- device driver = "xxxx" for drives. - Verify from Volume -- Ensure that /dev/null works - Need report class for messages. Perhaps report resource where report=group of messages - enhance scan_attrib and rename scan_jobtype, and @@ -1542,35 +1360,12 @@ mounting. Nobody is dying for them, but when you see what it does, you will die without it. -3. Restoring deleted files: Since I think my comments in (2) above -have low probability of implementation, I'll also suggest that you -could approach the issue of deleted files by a mechanism of having the -fd report to the dir, a list of all files on the client for every -backup job. The dir could note in the database entry for each file -the date that the file was seen. Then if a restore as of date X takes -place, only files that exist from before X until after X would be -restored. Probably the major cost here is the extra date container in -each row of the files table. - -Thanks for "listening". I hope some of this helps. If you want to -contact me, please send me an email - I read some but not all of the -mailing list traffic and might miss a reply there. - -Please accept my compliments for bacula. It is doing a great job for -me!! I sympathize with you in the need to wrestle with excelence in -execution vs. excelence in feature inclusion. - Regards, Jerry Schieffer ============================== Longer term to do: -- Implement wait on multiple objects - - Multiple max times - - pthread signal - - socket input ready -- Design at hierarchial storage for Bacula. Migration and Clone. - Implement FSM (File System Modules). - Audit M_ error codes to ensure they are correct and consistent. - Add variable break characters to lex analyzer. @@ -1581,17 +1376,8 @@ Longer term to do: continue a save if the Director goes down (this is NOT currently the case). Must detect socket error, buffer messages for later. -- Enhance time/duration input to allow multiple qualifiers e.g. 3d2h - Add ability to backup to two Storage devices (two SD sessions) at the same time -- e.g. onsite, offsite. -- Compress or consolidate Volumes of old possibly deleted files. Perhaps - someway to do so with every volume that has less than x% valid - files. 
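=== wait on multiple objects (sketch) ===
The "wait on multiple objects" item above (pthread signal, socket input
ready, and a maximum wait time at once) is commonly handled on Unix with
the self-pipe trick: the signalling thread writes one byte to a pipe and
the waiting thread polls that pipe alongside the socket with a timeout.
A C sketch, assuming plain poll() is acceptable:

  #include <poll.h>
  #include <unistd.h>

  enum wake_reason { WAKE_TIMEOUT, WAKE_SOCKET, WAKE_SIGNALLED, WAKE_ERROR };

  /* Another thread writes one byte to the pipe's write end instead of
   * (or in addition to) calling pthread_cond_signal(). */
  static enum wake_reason wait_multi(int sock_fd, int wake_pipe_rd, int timeout_ms)
  {
     struct pollfd pfd[2];
     pfd[0].fd = sock_fd;       pfd[0].events = POLLIN;
     pfd[1].fd = wake_pipe_rd;  pfd[1].events = POLLIN;

     int n = poll(pfd, 2, timeout_ms);
     if (n < 0)  return WAKE_ERROR;
     if (n == 0) return WAKE_TIMEOUT;
     if (pfd[1].revents & POLLIN) {
        char c;
        (void)read(wake_pipe_rd, &c, 1);    /* drain the wakeup byte */
        return WAKE_SIGNALLED;
     }
     return WAKE_SOCKET;
  }
===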
- - -Migration: Move a backup from one Volume to another -Clone: Copy a backup -- two Volumes - ====================================================== Base Jobs design @@ -1645,126 +1431,108 @@ Need: VolSessionId and VolSessionTime. ========================================================= +========================================================= + Preliminary design of Deletion of disk volumes + +tem 5: Deletion of disk Volumes when pruned + Date: Nov 25, 2005 + Origin: Ross Boylan (edited + by Kern) + Status: + + What: Provide a way for Bacula to automatically remove Volumes + from the filesystem, or optionally to truncate them. + Obviously, the Volume must be pruned prior removal. + + Why: This would allow users more control over their Volumes and + prevent disk based volumes from consuming too much space. + + Notes: The following two directives might do the trick: + + Volume Data Retention =