X-Git-Url: https://git.sur5r.net/?a=blobdiff_plain;f=bacula%2Fkernstodo;h=8a541ed63c471f0847cf1963d7a62a7d60ec6135;hb=ef440f6ff950294887e1e61e8304aba12de37996;hp=a88f2a6c68aabd51284cae17dd188212e13b6871;hpb=d4cdfcb0408560b741a6574e594962f65344e2e7;p=bacula%2Fbacula diff --git a/bacula/kernstodo b/bacula/kernstodo index a88f2a6c68..8a541ed63c 100644 --- a/bacula/kernstodo +++ b/bacula/kernstodo @@ -1,77 +1,409 @@ Kern's ToDo List - 13 August 2005 + 12 November 2006 Major development: Project Developer ======= ========= -Version 1.37 Kern (see below) -======================================================== - -Final items for 1.37 before release: -1. Fix bugs -- Look at fixing restore status stats in SD. -- Mount after manually unloading changer causes hang in SD -- Close STDOUT if debug_level == 0 -- Check if ANSI tape labeling works with drive in - read-only mode. - > > btape: label.c:299 write_volume_label() - > > btape: label.c:302 Label type=0 - > > btape: dev.c:648 rewind_dev fd=3 "VTS0" (/dev/tape0) - > > btape: label.c:530 Start create_volume_label() - > > - > > Volume Label: - > > Id : Bacula 1.0 immortal - > > VerNo : 11 - > > VolName : 450340 - > > PrevVolName : - > > VolFile : 0 - > > LabelType : PRE_LABEL - > > LabelSize : 0 - > > PoolName : Default - > > MediaType : VTS - > > PoolType : Backup - > > HostName : sysrmr.eia.doe.gov - > > btape: ansi_label.c:282 Write ANSI label type=2 - > > 15-Sep 13:12 btape: btape Fatal error: ansi_label.c:303 Could not - > > write ANSI VOL1 - > > label. ERR=Bad file descriptor - -- Check "update slots=7 scan storage=DLT drive=0" with - non-bacula tape in the drive. - -- --without-openssl breaks at least on Solaris. -- Figure out how to package gui, and rescue programs. -- Test TLS. -- Arno had to do -- to get update slots=x to work - UPDATE Media SET InChanger=0,Slot=0 WHERE InChanger>0 AND Slot>0; (MySQL) - -- Add recycle event. -- Add scratch pool event. -- Implement NeedVolume event -- Add Win32 FileSet definition somewhere - Document: -- Does ClientRunAfterJob fail the job on a bad return code? -- datadir for po files. -- AM_GNU_GETTEXT finds the library if you specify - --with-libintl-prefix - Document cleaning up the spool files: db, pid, state, bsr, mail, conmsg, spool - Document the multiple-drive-changer.txt script. - Pruning with Admin job. -- Restore of all files for a Job or set of jobs even if the file - records have been removed from the catalog. - Does WildFile match against full name? Doc. +- %d and %v only valid on Director, not for ClientRunBefore/After. +- During tests with the 260 char fix code, I found one problem: + if the system "sees" a long path once, it seems to forget it's + working drive (e.g. c:\), which will lead to a problem during + the next job (create bootstrap file will fail). Here is the + workaround: specify absolute working and pid directory in + bacula-fd.conf (e.g. c:\bacula\working instead of + \bacula\working). +- Document techniques for restoring large numbers of files. +- Document setting my.cnf to big file usage. +- Add example of proper index output to doc. show index from File; +- Correct the Include syntax in the m4.xxx files in examples/conf +- Document JobStatus and Termination codes. +- Fix the error with the "DVI file can't be opened" while + building the French PDF. 
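+- For the index-output item above, a possible sketch for the doc example
+  (assumes the indexes the standard make_mysql_tables script creates;
+  verify names/columns against the shipped script before publishing):
+    -- Check which indexes exist on the File table:
+    SHOW INDEX FROM File;
+    -- The File table should carry at least these two indexes;
+    -- add them if "show index" does not list them:
+    ALTER TABLE File ADD INDEX (JobId);
+    ALTER TABLE File ADD INDEX (JobId, PathId, FilenameId);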
+- Document more DVD stuff +- Doc + { "JobErrors", "i"}, + { "JobFiles", "i"}, + { "SDJobFiles", "i"}, + { "SDErrors", "i"}, + { "FDJobStatus","s"}, + { "SDJobStatus","s"}, +- Document all the little details of setting up certificates for + the Bacula data encryption code. +- Document more precisely how to use master keys -- especially + for disaster recovery. + + +Priority: +- Check if gnome-console works with TLS. +- Ensure that the SD re-reads the Media record if the JobFiles + does not match -- it may have been updated by another job. +- Look at moving the Storage directive from the Job to the + Pool in the default conf files. +- Migration Volume span bug +- Rescue release +- Bug reports +- Test FIFO backup/restore -- make regression +- Doc items +- Add encryption regression tests +- Test Volume compatibility between machine architectures +- Encryption documentation +- Wrong jobbytes with query 12 (todo) +- bacula-1.38.2-ssl.patch +- Bare-metal recovery Windows (todo) + + For 1.39: +- Fix hardlinked immutable files when linking a second file, the + immutable flag must be removed prior to trying to link it. +- Implement Python event for backing up/restoring a file. +- Change dbcheck to tell users to use native tools for fixing + broken databases, and to ensure they have the proper indexes. +- add udev rules for Bacula devices. +- If a job terminates, the DIR connection can close before the + Volume info is updated, leaving the File count wrong. +- Look at why SIGPIPE during connection can cause seg fault in + writing the daemon message, when Dir dropped to bacula:bacula +- Look at zlib 32 => 64 problems. +- Try turning on disk seek code. +- Possibly turn on St. Bernard code. +- Fix bextract to restore ACLs, or better yet, use common routines. +- Do we migrate appendable Volumes? +- Remove queue.c code. +- Some users claim that they must do two prune commands to get a + Volume marked as purged. +- Print warning message if LANG environment variable does not specify + UTF-8. +- New dot commands from Arno. + .show device=xxx lists information from one storage device, including + devices (I'm not even sure that information exists in the DIR...) + .move eject device=xxx mostly the same as 'unmount xxx' but perhaps with + better machine-readable output like "Ok" or "Error busy" + .move eject device=xxx toslot=yyy the same as above, but with a new + target slot. The catalog should be updated accordingly. + .move transfer device=xxx fromslot=yyy toslot=zzz + +Low priority: +- It appears to me that you have run into some sort of race + condition where two threads want to use the same Volume and they + were both given access. Normally that is no problem. However, + one thread wanted the particular Volume in drive 0, but it was + loaded into drive 1 so it decided to unload it from drive 1 and + then loaded it into drive 0, while the second thread went on + thinking that the Volume could be used in drive 1 not realizing + that in between time, it was loaded in drive 0. + I'll look at the code to see if there is some way we can avoid + this kind of problem. Probably the best solution is to make the + first thread simply start using the Volume in drive 1 rather than + transferring it to drive 0. +- After pruning, check to see if the Volume retention period has + expired. +- Check to see if jcr->stime is lost during rescheduling of + jobs in jobq.c +- Fix re-read of last block to check if job has actually written + a block, and check if block was written by a different job + (i.e. 
multiple simultaneous jobs writing). +- Figure out how to configure query.sql. Suggestion to use m4: + == changequote.m4 === + changequote(`[',`]')dnl + ==== query.sql.in === + :List next 20 volumes to expire + SELECT + Pool.Name AS PoolName, + Media.VolumeName, + Media.VolStatus, + Media.MediaType, + ifdef([MySQL], + [ FROM_UNIXTIME(UNIX_TIMESTAMP(Media.LastWritten) Media.VolRetention) AS Expire, ])dnl + ifdef([PostgreSQL], + [ media.lastwritten + interval '1 second' * media.volretention as expire, ])dnl + Media.LastWritten + FROM Pool + LEFT JOIN Media + ON Media.PoolId=Pool.PoolId + WHERE Media.LastWritten>0 + ORDER BY Expire + LIMIT 20; + ==== + Command: m4 -DmySQL changequote.m4 query.sql.in >query.sql + + The problem is that it requires m4, which is not present on all machines + at ./configure time. +- Get Perl replacement for bregex.c +- Given all the problems with FIFOs, I think the solution is to do something a + little different, though I will look at the code and see if there is not some + simple solution (i.e. some bug that was introduced). What might be a better + solution would be to use a FIFO as a sort of "key" to tell Bacula to read and + write data to a program rather than the FIFO. For example, suppose you + create a FIFO named: + + /home/kern/my-fifo + + Then, I could imagine if you backup and restore this file with a direct + reference as is currently done for fifos, instead, during backup Bacula will + execute: + + /home/kern/my-fifo.backup + + and read the data that my-fifo.backup writes to stdout. For restore, Bacula + will execute: + + /home/kern/my-fifo.restore + + and send the data backed up to stdout. These programs can either be an + executable or a shell script and they need only read/write to stdin/stdout. + + I think this would give a lot of flexibility to the user without making any + significant changes to Bacula. + + +==== SQL +# get null file +select FilenameId from Filename where Name=''; +# Get list of all directories referenced in a Backup. +select Path.Path from Path,File where File.JobId=nnn and + File.FilenameId=(FilenameId-from-above) and File.PathId=Path.PathId + order by Path.Path ASC; + +- Look into using Dart for testing + http://public.kitware.com/Dart/HTML/Index.shtml + +- Look into replacing autotools with cmake + http://www.cmake.org/HTML/Index.html + +=== Migration from David === +What I'd like to see: + +Job { + Name = "-migrate" + Type = Migrate + Messages = Standard + Pool = Default + Migration Selection Type = LowestUtil | OldestVol | PoolOccupancy | +Client | PoolResidence | Volume | JobName | SQLquery + Migration Selection Pattern = "regexp" + Next Pool = +} + +There should be no need for a Level (migration is always Full, since you +don't calculate differential/incremental differences for migration), +Storage should be determined by the volume types in the pool, and Client +is really a selection issue. Migration should always occur to the +NextPool defined in the pool definition. If no nextpool is defined, the +job should end with a reason of "no place to go". If Next Pool statement +is present, we override the check in the pool definition and use the +pool specified. + +Here's how I'd define Migration Selection Types: + +With Regexes: +Client -- Migrate data from selected client only. Migration Selection +Pattern regexp provides pattern to select client names, eg ^FS00* makes +all client names starting with FS00 eligible for migration. + +Jobname -- Migration all jobs matching name. 
Migration Selection Pattern +regexp provides pattern to select jobnames existing in pool. + +Volume -- Migrate all data on specified volumes. Migration Selection +Pattern regexp provides selection criteria for volumes to be migrated. +Volumes must exist in pool to be eligible for migration. + + +With Regex optional: +LowestUtil -- Identify the volume in the pool with the least data on it +and empty it. No Migration Selection Pattern required. + +OldestVol -- Identify the LRU volume with data written, and empty it. No +Migration Selection Pattern required. + +PoolOccupancy -- if pool occupancy exceeds , migrate volumes +(starting with most full volumes) until pool occupancy drops below +. Pool highmig and lowmig values are in pool definition, no +Migration Selection Pattern required. + + +No regex: +SQLQuery -- Migrate all jobuids returned by the supplied SQL query. +Migration Selection Pattern contains SQL query to execute; should return +a list of 1 or more jobuids to migrate. + +PoolResidence -- Migrate data sitting in pool for longer than +PoolResidence value in pool definition. Migration Selection Pattern +optional; if specified, override value in pool definition (value in +minutes). + + +[ possibly a Python event -- kes ] +=== +- Mount on an Autochanger with no tape in the drive causes: + Automatically selected Storage: LTO-changer + Enter autochanger drive[0]: 0 + 3301 Issuing autochanger "loaded drive 0" command. + 3302 Autochanger "loaded drive 0", result: nothing loaded. + 3301 Issuing autochanger "loaded drive 0" command. + 3302 Autochanger "loaded drive 0", result: nothing loaded. + 3902 Cannot mount Volume on Storage Device "LTO-Drive1" (/dev/nst0) because: + Couldn't rewind device "LTO-Drive1" (/dev/nst0): ERR=dev.c:678 Rewind error on "LTO-Drive1" (/dev/nst0). ERR=No medium found. + 3905 Device "LTO-Drive1" (/dev/nst0) open but no Bacula volume is mounted. + If this is not a blank tape, try unmounting and remounting the Volume. +- If Drive 0 is blocked, and drive 1 is set "Autoselect=no", drive 1 will + be used. +- Autochanger did not change volumes. + select * from Storage; + +-----------+-------------+-------------+ + | StorageId | Name | AutoChanger | + +-----------+-------------+-------------+ + | 1 | LTO-changer | 0 | + +-----------+-------------+-------------+ + 05-May 03:50 roxie-sd: 3302 Autochanger "loaded drive 0", result is Slot 11. + 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Warning: Director wanted Volume "LT + Current Volume "LT0-002" not acceptable because: + 1997 Volume "LT0-002" not in catalog. + 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Error: Autochanger Volume "LT0-002" + Setting InChanger to zero in catalog. + 05-May 03:50 roxie-dir: Tibs.2006-05-05_03.05.02 Error: Unable to get Media record + + 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: Error getting Volume i + 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: Job 530 canceled. 
+ 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: spool.c:249 Fatal appe + 05-May 03:49 Tibs: Tibs.2006-05-05_03.05.02 Fatal error: c:\cygwin\home\kern\bacula + , got + (missing) + llist volume=LTO-002 + MediaId: 6 + VolumeName: LTO-002 + Slot: 0 + PoolId: 1 + MediaType: LTO-2 + FirstWritten: 2006-05-05 03:11:54 + LastWritten: 2006-05-05 03:50:23 + LabelDate: 2005-12-26 16:52:40 + VolJobs: 1 + VolFiles: 0 + VolBlocks: 1 + VolMounts: 0 + VolBytes: 206 + VolErrors: 0 + VolWrites: 0 + VolCapacityBytes: 0 + VolStatus: + Recycle: 1 + VolRetention: 31,536,000 + VolUseDuration: 0 + MaxVolJobs: 0 + MaxVolFiles: 0 + MaxVolBytes: 0 + InChanger: 0 + EndFile: 0 + EndBlock: 0 + VolParts: 0 + LabelType: 0 + StorageId: 1 + + Note VolStatus is blank!!!!! + llist volume=LTO-003 + MediaId: 7 + VolumeName: LTO-003 + Slot: 12 + PoolId: 1 + MediaType: LTO-2 + FirstWritten: 0000-00-00 00:00:00 + LastWritten: 0000-00-00 00:00:00 + LabelDate: 2005-12-26 16:52:40 + VolJobs: 0 + VolFiles: 0 + VolBlocks: 0 + VolMounts: 0 + VolBytes: 1 + VolErrors: 0 + VolWrites: 0 + VolCapacityBytes: 0 + VolStatus: Append + Recycle: 1 + VolRetention: 31,536,000 + VolUseDuration: 0 + MaxVolJobs: 0 + MaxVolFiles: 0 + MaxVolBytes: 0 + InChanger: 0 + EndFile: 0 + EndBlock: 0 + VolParts: 0 + LabelType: 0 + StorageId: 1 +=== + mount + Automatically selected Storage: LTO-changer + Enter autochanger drive[0]: 0 + 3301 Issuing autochanger "loaded drive 0" command. + 3302 Autochanger "loaded drive 0", result: nothing loaded. + 3301 Issuing autochanger "loaded drive 0" command. + 3302 Autochanger "loaded drive 0", result: nothing loaded. + 3902 Cannot mount Volume on Storage Device "LTO-Drive1" (/dev/nst0) because: + Couldn't rewind device "LTO-Drive1" (/dev/nst0): ERR=dev.c:678 Rewind error on "LTO-Drive1" (/dev/nst0). ERR=No medium found. + + 3905 Device "LTO-Drive1" (/dev/nst0) open but no Bacula volume is mounted. + If this is not a blank tape, try unmounting and remounting the Volume. + +- Add VolumeState (enable, disable, archive) +- Add VolumeLock to prevent all but lock holder (SD) from updating + the Volume data (with the exception of VolumeState). +- The btape fill command does not seem to use the Autochanger +- Make Windows installer default to system disk drive. +- Look at using ioctl(FIOBMAP, ...) on Linux, and + DeviceIoControl(..., FSCTL_QUERY_ALLOCATED_RANGES, ...) on + Win32 for sparse files. + http://www.flexhex.com/docs/articles/sparse-files.phtml + http://www.informatik.uni-frankfurt.de/~loizides/reiserfs/fibmap.html +- Directive: at "command" +- Command: pycmd "command" generates "command" event. How to + attach to a specific job? +- Integrate Christopher's St. Bernard code. +- run_cmd() returns int should return JobId_t +- get_next_jobid_from_list() returns int should return JobId_t +- Document export LDFLAGS=-L/usr/lib64 +- Don't attempt to restore from "Disabled" Volumes. +- Network error on Win32 should set Win32 error code. +- What happens when you rename a Disk Volume? +- Job retention period in a Pool (and hence Volume). The job would + then be migrated. +- Detect resource deadlock in Migrate when same job wants to read + and write the same device. +- Queue warning/error messages during restore so that they + are reported at the end of the report rather than being + hidden in the file listing ... +- Look at -D_FORTIFY_SOURCE=2 +- Add Win32 FileSet definition somewhere +- Look at fixing restore status stats in SD. +- Make selection of Database used in restore correspond to + client. 
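+- Rough Linux-only sketch for the sparse file ioctl items above and below
+  (FIBMAP requires root/CAP_SYS_RAWIO; the Win32 side would use
+  DeviceIoControl with FSCTL_QUERY_ALLOCATED_RANGES as noted above):
+    #include <stdio.h>
+    #include <fcntl.h>
+    #include <unistd.h>
+    #include <sys/ioctl.h>
+    #include <sys/stat.h>
+    #include <linux/fs.h>             /* FIBMAP, FIGETBSZ */
+
+    int main(int argc, char *argv[])
+    {
+       int fd, bsz, blk;
+       struct stat st;
+       long i, nblocks;
+       if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0) {
+          perror("open"); return 1;
+       }
+       if (fstat(fd, &st) < 0 || ioctl(fd, FIGETBSZ, &bsz) < 0) {
+          perror("FIGETBSZ"); return 1;
+       }
+       nblocks = (st.st_size + bsz - 1) / bsz;
+       for (i = 0; i < nblocks; i++) {
+          blk = (int)i;                /* in: logical block, out: physical block */
+          if (ioctl(fd, FIBMAP, &blk) < 0) {
+             perror("FIBMAP"); return 1;
+          }
+          if (blk == 0) {              /* unallocated block => hole, can be skipped */
+             printf("block %ld is a hole\n", i);
+          }
+       }
+       close(fd);
+       return 0;
+    }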
+- Look at using ioctl(FIMAP) and FIGETBSZ for sparse files. + http://www.informatik.uni-frankfurt.de/~loizides/reiserfs/fibmap.html +- Implement a mode that says when a hard read error is + encountered, read many times (as it currently does), and if the + block cannot be read, skip to the next block, and try again. If + that fails, skip to the next file and try again, ... +- Add level table: + create table LevelType (LevelType binary(1), LevelTypeLong tinyblob); + insert into LevelType (LevelType,LevelTypeLong) values + ("F","Full"), + ("D","Diff"), + ("I","Inc"); +- Add ACL to restore only to original location. +- Show files/second in client status output. - Add a recursive mark command (rmark) to restore. - "Minimum Job Interval = nnn" sets minimum interval between Jobs of the same level and does not permit multiple simultaneous running of that Job (i.e. lets any previous invocation finish before doing Interval testing). - Look at simplifying File exclusions. -- Fix store_yesno to be store_bitmask. - New directive "Delete purged Volumes" - new pool XXX with ScratchPoolId = MyScratchPool's PoolId and let it fill itself, and RecyclePoolId = XXX's PoolId so I can @@ -79,8 +411,7 @@ For 1.39: MyScratchPool - If I want to remove this pool, I set RecyclePoolId = MyScratchPool's PoolId, and when it is empty remove it. -- Figure out how to recycle Scratch volumes back to the Scratch - Pool. +- Figure out how to recycle Scratch volumes back to the Scratch Pool. - Add Volume=SCRTCH - Allow Check Labels to be used with Bacula labels. - "Resuming" a failed backup (lost line for example) by using the @@ -90,17 +421,6 @@ For 1.39: days before it needs changing. - Command to show next tape that will be used for a job even if the job is not scheduled. ---- create_file.c.orig Fri Jul 8 12:13:05 2005 -+++ create_file.c Fri Jul 8 12:13:07 2005 -@@ -195,6 +195,8 @@ - attr->ofname, be.strerror()); - return CF_ERROR; - } -+ } else if(S_ISSOCK(attr->statp.st_mode)) { -+ Dmsg1(200, "Skipping socket: %s\n", attr->ofname); - } else { - Dmsg1(200, "Restore node: %s\n", attr->ofname); - if (mknod(attr->ofname, attr->statp.st_mode, attr->statp.st_rdev) != 0 && errno != EEXIST) { - From: Arunav Mandal 1. When jobs are running and bacula for some reason crashes or if I do a restart it remembers and jobs it was running before it crashed or restarted @@ -125,12 +445,6 @@ For 1.39: - Fix bpipe.c so that it does not modify results pointer. ***FIXME*** calling sequence should be changed. -1.xx Major Projects: -#3 Migration (Move, Copy, Archive Jobs) -#7 Single Job Writing to Multiple Storage Devices -- Reserve blocks other restore jobs when first cannot connect - to SD. -- Add true/false to conf same as yes/no - For Windows disaster recovery see http://unattended.sf.net/ - regardless of the retention period, Bacula will not prune the last Full, Diff, or Inc File data until a month after the @@ -160,10 +474,6 @@ For 1.39: - In restore don't compare byte count on a raw device -- directory entry does not contain bytes. -- To mark files as deleted, run essentially a Verify to disk, and - when a file is found missing (MarkId != JobId), then create - a new File record with FileIndex == -1. This could be done - by the FD at the same time as the backup. === rate design jcr->last_rate jcr->last_runtime @@ -212,7 +522,12 @@ For 1.39: - Bug: if a job is manually scheduled to run later, it does not appear in any status report and cannot be cancelled. 
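+- For the "=== rate design" note above, one possible smoothing (assumes the
+  intent is a running average of the job transfer rate; the field names
+  just follow the note, not the real jcr layout):
+    #include <stdint.h>
+    #include <time.h>
+
+    /* bytes/runtime are the current totals, last_* the values from the
+     * previous sample; returns a smoothed bytes-per-second figure. */
+    static double smoothed_rate(double last_rate,
+                                uint64_t bytes, uint64_t last_bytes,
+                                time_t runtime, time_t last_runtime)
+    {
+       double rate;
+       if (runtime <= last_runtime) {
+          return last_rate;               /* no time elapsed since last sample */
+       }
+       rate = (double)(bytes - last_bytes) / (double)(runtime - last_runtime);
+       return (3 * last_rate + rate) / 4; /* weight history 3:1 over new sample */
+    }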
-==== Keeping track of deleted files ==== +==== Keeping track of deleted/new files ==== +- To mark files as deleted, run essentially a Verify to disk, and + when a file is found missing (MarkId != JobId), then create + a new File record with FileIndex == -1. This could be done + by the FD at the same time as the backup. + My "trick" for keeping track of deletions is the following. Assuming the user turns on this option, after all the files have been backed up, but before the job has terminated, the @@ -223,7 +538,14 @@ For 1.39: pass. The DIR will then compare that to what is stored in the catalog. Any files in the catalog but not in what the FD sent will receive a catalog File entry that indicates - that at that point in time the file was deleted. + that at that point in time the file was deleted. This + either transmitted to the FD or simultaneously computed in + the FD, so that the FD can put a record on the tape that + indicates that the file has been deleted at this point. + A delete file entry could potentially be one with a FileIndex + of 0 or perhaps -1 (need to check if FileIndex is used for + some other thing as many of the Bacula fields are "overloaded" + in the SD). During a restore, any file initially picked up by some backup (Full, ...) then subsequently having a File entry @@ -250,6 +572,12 @@ For 1.39: Make sure this information is stored on the tape too so that it can be restored directly from the tape. + All the code (with the exception of formally generating and + saving the delete file entries) already exists in the Verify + Catalog command. It explicitly recognizes added/deleted files since + the last InitCatalog. It is more or less a "simple" matter of + taking that code and adapting it slightly to work for backups. + Comments from Martin Simmons (I think they are all covered): Ok, that should cover the basics. There are few issues though: @@ -265,6 +593,50 @@ For 1.39: - It remains to be seen how the backup performance of the DIR's will be affected when comparing the catalog for a large filesystem. +==== +From David: +How about introducing a Type = MgmtPolicy job type? That job type would +be responsible for scanning the Bacula environment looking for specific +conditions, and submitting the appropriate jobs for implementing said +policy, eg: + +Job { + Name = "Migration-Policy" + Type = MgmtPolicy + Policy Selection Job Type = Migrate + Scope = " " + Threshold = " " + Job Template = +} + +Where is any legal job keyword, is a comparison +operator (=,<,>,!=, logical operators AND/OR/NOT) and is a +appropriate regexp. I could see an argument for Scope and Threshold +being SQL queries if we want to support full flexibility. The +Migration-Policy job would then get scheduled as frequently as a site +felt necessary (suggested default: every 15 minutes). + +Example: + +Job { + Name = "Migration-Policy" + Type = MgmtPolicy + Policy Selection Job Type = Migration + Scope = "Pool=*" + Threshold = "Migration Selection Type = LowestUtil" + Job Template = "MigrationTemplate" +} + +would select all pools for examination and generate a job based on +MigrationTemplate to automatically select the volume with the lowest +usage and migrate it's contents to the nextpool defined for that pool. + +This policy abstraction would be really handy for adjusting the behavior +of Bacula according to site-selectable criteria (one thing that pops +into mind is Amanda's ability to automatically adjust backup levels +depending on various criteria). 
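+- Back to the deleted-file tracking above: a sketch of how restore file
+  selection could skip delete markers, assuming deletions are recorded as
+  File rows with FileIndex = 0 (the JobIds below are placeholders for the
+  Full + Incremental set being restored):
+    SELECT Path.Path, Filename.Name
+      FROM File
+      JOIN Path     ON Path.PathId = File.PathId
+      JOIN Filename ON Filename.FilenameId = File.FilenameId
+     WHERE File.JobId IN (101, 105, 107)
+       AND File.JobId = (SELECT MAX(f2.JobId) FROM File AS f2
+                          WHERE f2.PathId = File.PathId
+                            AND f2.FilenameId = File.FilenameId
+                            AND f2.JobId IN (101, 105, 107))
+       AND File.FileIndex > 0      -- newest record is not a delete marker
+     ORDER BY Path.Path, Filename.Name;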
+ + ===== Regression tests: @@ -1284,214 +1656,99 @@ Block Position: 0 === Done -- Save mount point for directories not traversed with onefs=yes. -- Add seconds to start and end times in the Job report output. -- if 2 concurrent backups are attempted on the same tape - drive (autoloader) into different tape pools, one of them will exit - fatally instead of halting until the drive is idle -- Update StartTime if job held in Job Queue. -- Look at www.nu2.nu/pebuilder as a helper for full windows - bare metal restore. (done by Scott) -- Fix orphanned buffers: - Orphaned buffer: 24 bytes allocated at line 808 of rufus-dir job.c - Orphaned buffer: 40 bytes allocated at line 45 of rufus-dir alist.c -- Implement Preben's suggestion to add - File System Types = ext2, ext3 - to FileSets, thus simplifying backup of *all* local partitions. -- Try to open a device on each Job if it was not opened - when the SD started. -- Add dump of VolSessionId/Time and FileIndex with bls. -- If Bacula does not find the right tape in the Autochanger, - then mark the tape in error and move on rather than asking - for operator intervention. -- Cancel command should include JobId in list of Jobs. -- Add performance testing hooks -- Bootstrap from JobMedia records. -- Implement WildFile and WildDir to solve problem of - saving only *.doc files. -- Fix - Please use the "label" command to create a new Volume for: - Storage: DDS-4-changer - Media type: - Pool: Default - label - The defined Storage resources are: -- Copy Changer Device and Changer Command from Autochanger - to Device resource in SD if none given in Device resource. -- 1. Automatic use of more than one drive in an autochanger (done) -- 2. Automatic selection of the correct drive for each Job (i.e. - selects a drive with an appropriate Volume for the Job) (done) -- 6. Allow multiple simultaneous Jobs referencing the same pool write - to several tapes (some new directive(s) are are probably needed for - this) (done) -- Locking (done) -- Key on Storage rather than Pool (done) -- Allow multiple drives to use same Pool (change jobq.c DIR) (done). -- Synchronize multiple drives so that not more - than one loads a tape and any time (done) -- 4. Use Changer Device and Changer Command specified in the - Autochanger resource, if none is found in the Device resource. - You can continue to specify them in the Device resource if you want - or need them to be different for each device. -- 5. Implement a new Device directive (perhaps "Autoselect = yes/no") - that can allow a Device be part of an Autochanger, and hence the changer - script protected, but if set to no, will prevent the Device from being - automatically selected from the changer. This allows the device to - be directly accessed through its Device name, but not through the - AutoChanger name. -#6 Select one from among Multiple Storage Devices for Job -#5 Events that call a Python program - (Implemented in Dir/SD) -- Make sure the Device name is in the Query packet returned. -- Don't start a second file job if one is already running. -- Implement EOF/EOV labels for ANSI labels -- Implement IBM labels. -- When Python creates a new label, the tape is immediately - recycled and no label created. This happens when using - autolabeling -- even when Python doesn't generate the name. -- Scratch Pool where the volumes can be re-assigned to any Pool. -- 28-Mar 23:19 rufus-sd: acquire.c:379 Device "DDS-4" (/dev/nst0) - is busy reading. Job 6 canceled. -- Remove separate thread for opening devices in SD. 
On the other - hand, don't block waiting for open() for devices. -- Fix code to either handle updating NumVol or to calculate it in - Dir next_vol.c -- Ensure that you cannot exclude a directory or a file explicitly - Included with File. -#4 Embedded Python Scripting - (Implemented in Dir/SD/FD) -- Add Python writable variable for changing the Priority, - Client, Storage, JobStatus (error), ... -- SD Python - - Solicit Events -- Add disk seeking on restore; turn off seek on tapes. - stored/match_bsr.c -- Look at dird_conf.c:1000: warning: `int size' - might be used uninitialized in this function -- Indicate when a Job is purged/pruned during restore. -- Implement some way to turn off automatic pruning in Jobs. -- Implement a way an Admin Job can prune, possibly multiple - clients -- Python script? -- Look at Preben's acl.c error handling code. -- SD crashes after a tape restore then doing a backup. -- If drive is opened read/write, close it and re-open - read-only if doing a restore, and vice-versa. -- Windows restore: - data-fd: RestoreFiles.2004-12-07_15.56.42 Error: - > ..\findlib\../../findlib/create_file.c:275 Could not open e:/: ERR=Der - > Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen - > Prozess verwendet wird. - Restore restores all files, but then fails at the end trying - to set the attributes of e: - from failed jobs.- Resolve the problem between Device name and Archive name, - and fix SD messages. -- Tell the "restore" user when browsing is no longer possible. -- Add a restore directory-x -- Write non-optimized bsrs from the JobMedia and Media records, - even after Files are pruned. -- Delete Stripe and Copy from VolParams to save space. -- Fix option 2 of restore -- list where file is backed up -- require Client, - then list last 20 backups. -- Finish implementation of passing all Storage and Device needs to - the SD. -- Move test for max wait time exceeded in job.c up -- Peter's idea. -## Consider moving docs to their own project. -## Move rescue to its own project. -- Add client version to the Client name line that prints in - the Job report. -- Fix the Rescue CDROM. -- By the way: on page http://www.bacula.org/?page=tapedrives , at the - bottom, the link to "Tape Testing Chapter" is broken. It goes to - /html-manual/... while the others point to /rel-manual/... -- Device resource needs the "name" of the SD. -- Specify a single directory to restore. -- Implement MediaType keyword in bsr? -- Add a date and time stamp at the beginning of every line in the - Job report (Volker Sauer). -- Add level to estimate command. -- Add "limit=n" for "list jobs" -- Make bootstrap filename unique. -- Make Dmsg look at global before calling subroutine. -- From Chris Hull: - it seems to be complaining about 12:00pm which should be a valid 12 - hour time. I changed the time to 11:59am and everything works fine. - Also 12:00am works fine. 0:00pm also works (which I don't think - should). None of the values 12:00pm - 12:59pm work for that matter. -- Require restore via the restore command or make a restore Job - get the bootstrap file. -- Implement Maximum Job Spool Size -- Fix 3993 error in SD. It forgets to look at autochanger - resource for device command, ... -- 3. Prevent two drives requesting the same Volume in any given - autochanger, by checking if a Volume is mounted on another drive - in an Autochanger. 
-- Upgrade to MySQL 4.1.12 See: - http://dev.mysql.com/doc/mysql/en/Server_SQL_mode.html -- Add # Job Level date to bsr file -- Implement "PreferMountedVolumes = yes|no" in Job resource. -## Integrate web-bacula into a new Bacula project with - bimagemgr. -- Cleaning tapes should have Status "Cleaning" rather than append. -- Make sure that Python has access to Client address/port so that - it can check if Clients are alive. -- Review all items in "restore". -- Fix PostgreSQL GROUP BY problems in restore. -- Fix PostgreSQL sql problems in bugs. -- After rename - 04-Jul 13:01 MainSD: Rufus.2005-07-04_01.05.02 Warning: Director wanted Volume - "DLT-13Feb04". - Current Volume "DLT-04Jul05" not acceptable because: - 1997 Volume "DLT-13Feb04" not in catalog. - 04-Jul 13:01 MainSD: Please mount Volume "DLT-04Jul05" on Storage Device - "HP DLT 80" (/dev/nst0) for Job Rufus.2005-07-04_01.05.02 -## Create a new GUI chapter explaining all the GUI programs. -- Make "update slots" when pointing to Autochanger, remove - all Volumes from other drives. "update slots all-drives"? - No, this is done by modifying mtx-changer to list what is - in the drives. -- Finish TLS implementation. -- Port limiting -m in iptables to prevent DoS attacks - could cause broken pipes on Bacula. -6. Build and test the Volume Shadow Copy (VSS) for Win32. -- Allow cancel of unknown Job -- State not saved when closing Win32 FD by icon -- bsr-opt-test fails. bsr deleted. Fix. -- Move Python daemon variables from Job to Bacula object. - WorkingDir, ConfigFile -- Document that Bootstrap files can be written with cataloging - turned off. -- Document details of ANSI/IBM labels -- OS linux 2.4 - 1) ADIC, DLT, FastStor 4000, 7*20GB -- Linux Sony LIB-D81, AIT-3 library works. -- Doc the following - to activate, check or disable the hardware compression feature on my - exb-8900 i use the exabyte "MammothTool" you can get it here: - http://www.exabyte.com/support/online/downloads/index.cfm - There is a solaris version of this tool. With option -C 0 or 1 you can - disable or activate compression. Start this tool without any options for - a small reference. -- Document Heartbeat Interval in the dealing with firewalls section. -- Document new CDROM directory. -- On Win32 working directory must have drive letter ???? -- On Win32 working directory must be writable by SYSTEM to - do restores. -- Document that ChangerDevice is used for Alert command. -- Add better documentation on how restores can be done -8. Take one more try at making DVD writing work (no go) -7. Write a bacula-web document -- Why isn't the DEVICE structure defined when doing - a reservation? -- Multi-drive changer seems to only use drive 0 - Multiple drives don't seem to be opened. -- My database is growing -- Call GetLastError() in the berrno constructor rather - than delaying until strerror. -- Tape xxx in drive 0, requested in drive 1 -- The mount command does not work with drives other than 0. -- A mount should cause the SD to re-examine what Slot is - loaded. -- The SD locks on to the first available drive then - wants a Volume that is released but in another drive -- - chaos. -- Run the regression scripts on Solaris and FreeBSD +- Make sure that all do_prompt() calls in Dir check for + -1 (error) and -2 (cancel) returns. +- Fix foreach_jcr() to have free_jcr() inside next(). + jcr=jcr_walk_start(); + for ( ; jcr; (jcr=jcr_walk_next(jcr)) ) + ... + jcr_walk_end(jcr); +- A Volume taken from Scratch should take on the retention period + of the new pool. 
+- Correct doc for Maximum Changer Wait (and others) accepting only + integers. +- Implement status that shows why a job is being held in reserve, or + rather why none of the drives are suitable. +- Implement a way to disable a drive (so you can use the second + drive of an autochanger, and the first one will not be used or + even defined). +- Make sure Maximum Volumes is respected in Pools when adding + Volumes (e.g. when pulling a Scratch volume). +- Keep same dcr when switching device ... +- Implement code that makes the Dir aware that a drive is an + autochanger (so the user doesn't need to use the Autochanger = yes + directive). +- Make catalog respect ACL. +- Add recycle count to Media record. +- Add initial write date to Media record. +- Fix store_yesno to be store_bitmask. +--- create_file.c.orig Fri Jul 8 12:13:05 2005 ++++ create_file.c Fri Jul 8 12:13:07 2005 +@@ -195,6 +195,8 @@ + attr->ofname, be.strerror()); + return CF_ERROR; + } ++ } else if(S_ISSOCK(attr->statp.st_mode)) { ++ Dmsg1(200, "Skipping socket: %s\n", attr->ofname); + } else { + Dmsg1(200, "Restore node: %s\n", attr->ofname); + if (mknod(attr->ofname, attr->statp.st_mode, attr->statp.st_rdev) != 0 && errno != EEXIST) { +- Add true/false to conf same as yes/no +- Reserve blocks other restore jobs when first cannot connect to SD. +- Fix Maximum Changer Wait, Maximum Open Wait, Maximum Rewind Wait to + accept time qualifiers. +- Does ClientRunAfterJob fail the job on a bad return code? +- Make hardlink code at line 240 of find_one.c use binary search. +- Add ACL error messages in src/filed/acl.c. +- Make authentication failures single threaded. +- Make Dir and SD authentication errors single threaded. +- Fix catreq.c digestbuf at line 411 in src/dird/catreq.c +- Make base64.c (bin_to_base64) take a buffer length + argument to avoid overruns. + and verify that other buffers cannot overrun. +- Implement VolumeState as discussed with Arno. +- Add LocationId to update volume +- Add LocationLog + LogId + Date + User text + MediaId + LocationId + NewState??? +- Add Comment to Media record +- Fix auth compatibility with 1.38 +- Update dbcheck to include Log table +- Update llist to include new fields. +- Make unmount unload autochanger. Make mount load slot. +- Fix bscan to report the JobType when restoring a job. +- Fix wx-console scanning problem with commas in names. +- Add manpages to the list of directories for make install. Notify + Scott +- Add bconsole option to use stdin/out instead of conio. +- Fix ClientRunBefore/AfterJob compatibility. +- Ensure that connection to daemon failure always indicates what + daemon it was trying to connect to. +- Freespace on DVD requested over and over even with no intervening + writes. +- .update volume [enabled|disabled|*see below] + > However, I could easily imagine an option to "update slots" that says + > "enable=yes|no" that would automatically enable or disable all the Volumes + > found in the autochanger. This will permit the user to optionally mark all + > the Volumes in the magazine disabled prior to taking them offsite, and mark + > them all enabled when bringing them back on site. Coupled with the options + > to the slots keyword, you can apply the enable/disable to any or all volumes. +- Restricted consoles start in the Default catalog even if it + is not permitted. +- When reading through parts on the DVD, the DVD is mounted and + unmounted for each part. +- Make sure that the restore options don't permit "seeing" other + Client's job data. 
+- Restore of a raw drive should not try to check the volume size.
+- Lock the tape drive door on open().
+- Make release unload any autochanger.
+- Arno's reservation deadlock.
+- Eric's SD patch
+- Make sure the new level=Full syntax is used in all
+  example conf files (especially in the manual).
+- Fix the program copyright in the SD and all other files.
+- Document the need for UTF-8 format.