X-Git-Url: https://git.sur5r.net/?a=blobdiff_plain;f=bacula%2Fkernstodo;h=bf8f31e598a1ff6c9154b216f7ec48b765c7c6a8;hb=200821d1ecedf9bc34de69a1ab33f937cebb4972;hp=4b44aa49317dddb95bf3bb7af4618e4b763fc723;hpb=6ad207a664aa21a14cf5a888b54f39291b5cd013;p=bacula%2Fbacula diff --git a/bacula/kernstodo b/bacula/kernstodo index 4b44aa4931..bf8f31e598 100644 --- a/bacula/kernstodo +++ b/bacula/kernstodo @@ -1,41 +1,399 @@ Kern's ToDo List - 04 July 2005 + 19 August 2006 Major development: Project Developer ======= ========= -TLS Landon Fuller -Unicode in Win32 Thorsten Engel (done) -VSS Thorsten Engel (in beta testing) -Version 1.37 Kern (see below) -======================================================== -1.37 Major Projects: -#3 Migration (Move, Copy, Archive Jobs) - (probably not this version) -#7 Single Job Writing to Multiple Storage Devices - (probably not this version) +Document: +- Document cleaning up the spool files: + db, pid, state, bsr, mail, conmsg, spool +- Document the multiple-drive-changer.txt script. +- Pruning with Admin job. +- Does WildFile match against full name? Doc. +- %d and %v only valid on Director, not for ClientRunBefore/After. +- During tests with the 260 char fix code, I found one problem: + if the system "sees" a long path once, it seems to forget it's + working drive (e.g. c:\), which will lead to a problem during + the next job (create bootstrap file will fail). Here is the + workaround: specify absolute working and pid directory in + bacula-fd.conf (e.g. c:\bacula\working instead of + \bacula\working). +- Document techniques for restoring large numbers of files. +- Document setting my.cnf to big file usage. +- Add example of proper index output to doc. + show index from File; +- Correct the Include syntax in the m4.xxx files in examples/conf +- Document JobStatus and Termination codes. +- Fix the error with the "DVI file can't be opened" while + building the French PDF. +- Document more DVD stuff -- particularly that recycling doesn't work, + and all the other things too. + +Priority: + +For 1.39: +- Fix wx-console scanning problem with commas in names. +- Change dbcheck to tell users to use native tools for fixing + broken databases, and to ensure they have the proper indexes. +- add udev rules for Bacula devices. +- Add manpages to the list of directories for make install. +- If a job terminates, the DIR connection can close before the + Volume info is updated, leaving the File count wrong. +- Look at why SIGPIPE during connection can cause seg fault in + writing the daemon message, when Dir dropped to bacula:bacula +- Look at zlib 32 => 64 problems. +- Ensure that connection to daemon failure always indicates what + daemon it was trying to connect to. +- Try turning on disk seek code. +- Possibly turn on St. Bernard code. +- Fix bextract to restore ACLs, or better yet, use common + routines. +- Do we migrate appendable Volumes? +- Remove queue.c code. +- Add bconsole option to use stdin/out instead of conio. +- Fix ClientRunBefore/AfterJob compatibility. +- Fix re-read of last block to check if job has actually written + a block, and check if block was written by a different job + (i.e. multiple simultaneous jobs writing). +- Some users claim that they must do two prune commands to get a + Volume marked as purged. +- Print warning message if LANG environment variable does not specify + UTF-8. +- New dot commands from Arno. 
+ .update volume [enabled|disabled|*see below] + > However, I could easily imagine an option to "update slots" that says + > "enable=yes|no" that would automatically enable or disable all the Volumes + > found in the autochanger. This will permit the user to optionally mark all + > the Volumes in the magazine disabled prior to taking them offsite, and mark + > them all enabled when bringing them back on site. Coupled with the options + > to the slots keyword, you can apply the enable/disable to any or all volumes. + .show device=xxx lists information from one storage device, including + devices (I'm not even sure that information exists in the DIR...) + .move eject device=xxx mostly the same as 'unmount xxx' but perhaps with + better machine-readable output like "Ok" or "Error busy" + .move eject device=xxx toslot=yyy the same as above, but with a new + target slot. The catalog should be updated accordingly. + .move transfer device=xxx fromslot=yyy toslot=zzz + +Low priority: +- Get Perl replacement for bregex.c +- Given all the problems with FIFOs, I think the solution is to do something a + little different, though I will look at the code and see if there is not some + simple solution (i.e. some bug that was introduced). What might be a better + solution would be to use a FIFO as a sort of "key" to tell Bacula to read and + write data to a program rather than the FIFO. For example, suppose you + create a FIFO named: + + /home/kern/my-fifo + + Then, I could imagine if you backup and restore this file with a direct + reference as is currently done for fifos, instead, during backup Bacula will + execute: + + /home/kern/my-fifo.backup + + and read the data that my-fifo.backup writes to stdout. For restore, Bacula + will execute: + + /home/kern/my-fifo.restore + + and send the data backed up to stdout. These programs can either be an + executable or a shell script and they need only read/write to stdin/stdout. + + I think this would give a lot of flexibility to the user without making any + significant changes to Bacula. + + +==== SQL +# get null file +select FilenameId from Filename where Name=''; +# Get list of all directories referenced in a Backup. +select Path.Path from Path,File where File.JobId=nnn and + File.FilenameId=(FilenameId-from-above) and File.PathId=Path.PathId + order by Path.Path ASC; + +- Look into using Dart for testing + http://public.kitware.com/Dart/HTML/Index.shtml + +- Look into replacing autotools with cmake + http://www.cmake.org/HTML/Index.html + +=== Migration from David === +What I'd like to see: + +Job { + Name = "-migrate" + Type = Migrate + Messages = Standard + Pool = Default + Migration Selection Type = LowestUtil | OldestVol | PoolOccupancy | +Client | PoolResidence | Volume | JobName | SQLquery + Migration Selection Pattern = "regexp" + Next Pool = +} + +There should be no need for a Level (migration is always Full, since you +don't calculate differential/incremental differences for migration), +Storage should be determined by the volume types in the pool, and Client +is really a selection issue. Migration should always occur to the +NextPool defined in the pool definition. If no nextpool is defined, the +job should end with a reason of "no place to go". If Next Pool statement +is present, we override the check in the pool definition and use the +pool specified. + +Here's how I'd define Migration Selection Types: + +With Regexes: +Client -- Migrate data from selected client only. 
Migration Selection
+Pattern regexp provides pattern to select client names, e.g. ^FS00* makes
+all client names starting with FS00 eligible for migration.
+
+Jobname -- Migrate all jobs matching name.  Migration Selection Pattern
+regexp provides pattern to select jobnames existing in pool.
+
+Volume -- Migrate all data on specified volumes.  Migration Selection
+Pattern regexp provides selection criteria for volumes to be migrated.
+Volumes must exist in pool to be eligible for migration.
+
+
+With Regex optional:
+LowestUtil -- Identify the volume in the pool with the least data on it
+and empty it.  No Migration Selection Pattern required.
+
+OldestVol -- Identify the LRU volume with data written, and empty it.  No
+Migration Selection Pattern required.
+
+PoolOccupancy -- If pool occupancy exceeds <highmig>, migrate volumes
+(starting with the most full volumes) until pool occupancy drops below
+<lowmig>.  The pool highmig and lowmig values are in the pool definition;
+no Migration Selection Pattern required.
+
 
-## Create a new GUI chapter explaining all the GUI programs.
+No regex:
+SQLQuery -- Migrate all jobuids returned by the supplied SQL query.
+Migration Selection Pattern contains the SQL query to execute; it should
+return a list of 1 or more jobuids to migrate.
 
-Autochangers:
-- Make "update slots" when pointing to Autochanger, remove
-  all Volumes from other drives.  "update slots all-drives"?
+PoolResidence -- Migrate data sitting in the pool for longer than the
+PoolResidence value in the pool definition.  Migration Selection Pattern
+optional; if specified, it overrides the value in the pool definition
+(value in minutes).
 
-For 1.37:
+
+[ possibly a Python event -- kes ]
+===
+- Mount on an Autochanger with no tape in the drive causes:
+   Automatically selected Storage: LTO-changer
+   Enter autochanger drive[0]: 0
+   3301 Issuing autochanger "loaded drive 0" command.
+   3302 Autochanger "loaded drive 0", result: nothing loaded.
+   3301 Issuing autochanger "loaded drive 0" command.
+   3302 Autochanger "loaded drive 0", result: nothing loaded.
+   3902 Cannot mount Volume on Storage Device "LTO-Drive1" (/dev/nst0) because:
+   Couldn't rewind device "LTO-Drive1" (/dev/nst0): ERR=dev.c:678 Rewind error on "LTO-Drive1" (/dev/nst0). ERR=No medium found.
+   3905 Device "LTO-Drive1" (/dev/nst0) open but no Bacula volume is mounted.
+   If this is not a blank tape, try unmounting and remounting the Volume.
+- If Drive 0 is blocked, and drive 1 is set "Autoselect=no", drive 1 will
+  be used.
+- Autochanger did not change volumes.
+   select * from Storage;
+   +-----------+-------------+-------------+
+   | StorageId | Name        | AutoChanger |
+   +-----------+-------------+-------------+
+   |         1 | LTO-changer |           0 |
+   +-----------+-------------+-------------+
+  05-May 03:50 roxie-sd: 3302 Autochanger "loaded drive 0", result is Slot 11.
+  05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Warning: Director wanted Volume "LT
+      Current Volume "LT0-002" not acceptable because:
+      1997 Volume "LT0-002" not in catalog.
+  05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Error: Autochanger Volume "LT0-002"
+      Setting InChanger to zero in catalog.
+  05-May 03:50 roxie-dir: Tibs.2006-05-05_03.05.02 Error: Unable to get Media record
+
+  05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: Error getting Volume i
+  05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: Job 530 canceled.
+ 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: spool.c:249 Fatal appe + 05-May 03:49 Tibs: Tibs.2006-05-05_03.05.02 Fatal error: c:\cygwin\home\kern\bacula + , got + (missing) + llist volume=LTO-002 + MediaId: 6 + VolumeName: LTO-002 + Slot: 0 + PoolId: 1 + MediaType: LTO-2 + FirstWritten: 2006-05-05 03:11:54 + LastWritten: 2006-05-05 03:50:23 + LabelDate: 2005-12-26 16:52:40 + VolJobs: 1 + VolFiles: 0 + VolBlocks: 1 + VolMounts: 0 + VolBytes: 206 + VolErrors: 0 + VolWrites: 0 + VolCapacityBytes: 0 + VolStatus: + Recycle: 1 + VolRetention: 31,536,000 + VolUseDuration: 0 + MaxVolJobs: 0 + MaxVolFiles: 0 + MaxVolBytes: 0 + InChanger: 0 + EndFile: 0 + EndBlock: 0 + VolParts: 0 + LabelType: 0 + StorageId: 1 + + Note VolStatus is blank!!!!! + llist volume=LTO-003 + MediaId: 7 + VolumeName: LTO-003 + Slot: 12 + PoolId: 1 + MediaType: LTO-2 + FirstWritten: 0000-00-00 00:00:00 + LastWritten: 0000-00-00 00:00:00 + LabelDate: 2005-12-26 16:52:40 + VolJobs: 0 + VolFiles: 0 + VolBlocks: 0 + VolMounts: 0 + VolBytes: 1 + VolErrors: 0 + VolWrites: 0 + VolCapacityBytes: 0 + VolStatus: Append + Recycle: 1 + VolRetention: 31,536,000 + VolUseDuration: 0 + MaxVolJobs: 0 + MaxVolFiles: 0 + MaxVolBytes: 0 + InChanger: 0 + EndFile: 0 + EndBlock: 0 + VolParts: 0 + LabelType: 0 + StorageId: 1 +=== + mount + Automatically selected Storage: LTO-changer + Enter autochanger drive[0]: 0 + 3301 Issuing autochanger "loaded drive 0" command. + 3302 Autochanger "loaded drive 0", result: nothing loaded. + 3301 Issuing autochanger "loaded drive 0" command. + 3302 Autochanger "loaded drive 0", result: nothing loaded. + 3902 Cannot mount Volume on Storage Device "LTO-Drive1" (/dev/nst0) because: + Couldn't rewind device "LTO-Drive1" (/dev/nst0): ERR=dev.c:678 Rewind error on "LTO-Drive1" (/dev/nst0). ERR=No medium found. + + 3905 Device "LTO-Drive1" (/dev/nst0) open but no Bacula volume is mounted. + If this is not a blank tape, try unmounting and remounting the Volume. + +- Add VolumeState (enable, disable, archive) +- Add VolumeLock to prevent all but lock holder (SD) from updating + the Volume data (with the exception of VolumeState). +- The btape fill command does not seem to use the Autochanger +- Make Windows installer default to system disk drive. +- Look at using ioctl(FIOBMAP, ...) on Linux, and + DeviceIoControl(..., FSCTL_QUERY_ALLOCATED_RANGES, ...) on + Win32 for sparse files. + http://www.flexhex.com/docs/articles/sparse-files.phtml + http://www.informatik.uni-frankfurt.de/~loizides/reiserfs/fibmap.html +- Directive: at "command" +- Command: pycmd "command" generates "command" event. How to + attach to a specific job? +- Integrate Christopher's St. Bernard code. +- run_cmd() returns int should return JobId_t +- get_next_jobid_from_list() returns int should return JobId_t +- Document export LDFLAGS=-L/usr/lib64 +- Don't attempt to restore from "Disabled" Volumes. +- Network error on Win32 should set Win32 error code. +- What happens when you rename a Disk Volume? +- Job retention period in a Pool (and hence Volume). The job would + then be migrated. +- Detect resource deadlock in Migrate when same job wants to read + and write the same device. +- Queue warning/error messages during restore so that they + are reported at the end of the report rather than being + hidden in the file listing ... +- Look at -D_FORTIFY_SOURCE=2 +- Add Win32 FileSet definition somewhere +- Look at fixing restore status stats in SD. +- Make selection of Database used in restore correspond to + client. 
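+- A minimal stand-alone sketch of probing a file's block map with the Linux
+  FIGETBSZ/FIBMAP ioctls referenced in the sparse-file items above and below
+  (the FIOBMAP spelling above presumably means FIBMAP).  Illustration only,
+  not Bacula code; FIBMAP normally requires root, and everything apart from
+  the two ioctl names is made up for the example:
+
+  #include <stdio.h>
+  #include <fcntl.h>
+  #include <unistd.h>
+  #include <sys/ioctl.h>
+  #include <sys/stat.h>
+  #include <linux/fs.h>          /* FIGETBSZ, FIBMAP */
+
+  /* Print which logical blocks of a file are holes (FIBMAP returns a
+   * physical block of 0 for unallocated blocks), i.e. what a sparse-aware
+   * backup could skip. */
+  int main(int argc, char **argv)
+  {
+     struct stat st;
+     int fd, bsz = 0;
+
+     if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0) {
+        return 1;
+     }
+     if (fstat(fd, &st) < 0 || ioctl(fd, FIGETBSZ, &bsz) < 0 || bsz <= 0) {
+        close(fd);
+        return 1;
+     }
+     for (long i = 0; i * (long)bsz < (long)st.st_size; i++) {
+        int blk = (int)i;         /* in: logical block, out: physical block */
+        if (ioctl(fd, FIBMAP, &blk) == 0 && blk == 0) {
+           printf("block %ld is a hole -- nothing to read there\n", i);
+        }
+     }
+     close(fd);
+     return 0;
+  }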
+- Look at using ioctl(FIBMAP) and FIGETBSZ for sparse files.
+  http://www.informatik.uni-frankfurt.de/~loizides/reiserfs/fibmap.html
+- Implement a mode that says when a hard read error is
+  encountered, read many times (as it currently does), and if the
+  block cannot be read, skip to the next block, and try again.  If
+  that fails, skip to the next file and try again, ...
+- Add level table:
+  create table LevelType (LevelType binary(1), LevelTypeLong tinyblob);
+  insert into LevelType (LevelType,LevelTypeLong) values
+  ("F","Full"),
+  ("D","Diff"),
+  ("I","Inc");
+- Add ACL to restore only to original location.
+- Show files/second in client status output.
+- Add a recursive mark command (rmark) to restore.
+- "Minimum Job Interval = nnn" sets minimum interval between Jobs
+  of the same level and does not permit multiple simultaneous
+  running of that Job (i.e. lets any previous invocation finish
+  before doing Interval testing).
+- Look at simplifying File exclusions.
+- New directive "Delete purged Volumes"
+- New pool XXX with ScratchPoolId = MyScratchPool's PoolId; let it
+  fill itself, and set RecyclePoolId = XXX's PoolId so I can see
+  whether it becomes stable and I just have to supervise
+  MyScratchPool.
+- If I want to remove this pool, I set RecyclePoolId = MyScratchPool's
+  PoolId, and when it is empty remove it.
+- Figure out how to recycle Scratch volumes back to the Scratch Pool.
+- Add Volume=SCRTCH
+- Allow Check Labels to be used with Bacula labels.
+- "Resuming" a failed backup (lost line for example) by using the
+  failed backup as a sort of "base" job.
+- Look at NDMP
+- Email the user x days before a tape will need changing.
+- Command to show the next tape that will be used for a job even
+  if the job is not scheduled.
+- From: Arunav Mandal
+  1. When jobs are running and Bacula crashes or is restarted, it should
+  remember the jobs it was running before the crash or restart; as of now
+  I lose all running jobs if I restart it.
+
+  2. If the client is disconnected midway through spooling (for instance, a
+  laptop), Bacula completely discards the spool.  It would be nice if it
+  could write that spool to tape so there is at least a partial backup for
+  that client, if not a complete one.
+
+  3. We have around 150 client machines; it would be nice to have an option
+  to upgrade the Bacula version on all the client machines automatically.
+
+  4. At least one connection should be reserved for bconsole, so that under
+  heavy load I can still connect to the Director via bconsole, which at
+  times I currently cannot.
+
+  5. Another important feature that is missing: say at 10am I manually start
+  a backup of client abc, and it is a Full backup since client abc has no
+  backup history; at 10:30am Bacula automatically starts another backup of
+  client abc because it is in the schedule.  Now we have two Full backups of
+  the same client, and if we try to start yet another Full backup of client
+  abc, Bacula won't complain.  That should be fixed.
+
+- Fix bpipe.c so that it does not modify results pointer.
+  ***FIXME*** calling sequence should be changed.
+- For Windows disaster recovery see http://unattended.sf.net/
+- Regardless of the retention period, Bacula will not prune the
+  last Full, Diff, or Inc File data until a month after the
+  retention period for the last Full backup that was done.
-   update volume=xxx --- add status=Full
-- After rename
-  04-Jul 13:01 MainSD: Rufus.2005-07-04_01.05.02 Warning: Director wanted Volume
-   "DLT-13Feb04".
- Current Volume "DLT-04Jul05" not acceptable because: - 1997 Volume "DLT-13Feb04" not in catalog. - 04-Jul 13:01 MainSD: Please mount Volume "DLT-04Jul05" on Storage Device - "HP DLT 80" (/dev/nst0) for Job Rufus.2005-07-04_01.05.02 - Remove old spool files on startup. - Exclude SD spool/working directory. -- Finish TLS implementation. - Refuse to prune last valid Full backup. Same goes for Catalog. -- --without-openssl breaks at least on Solaris. - Python: - Make a callback when Rerun failed levels is called. - Give Python program access to Scheduled jobs. @@ -54,35 +412,7 @@ For 1.37: resources were locked. - The last part is left in the spool dir. -Document: -- Port limiting -m in iptables to prevent DoS attacks - could cause broken pipes on Bacula. -- Document that Bootstrap files can be written with cataloging - turned off. -- Pruning with Admin job. -- Add better documentation on how restores can be done -- OS linux 2.4 - 1) ADIC, DLT, FastStor 4000, 7*20GB - 2) Sun, DDS, (Suns name unknown - Archive Python DDS drive), 1.2GB - 3) Wangtek, QIC, 6525ES, 525MB (fixed block size 1k, block size etc. - driver dependent - aic7xxx works, ncr53c8xx with problems) - 4) HP, DDS-2, C1553A, 6*4GB -- Doc the following - to activate, check or disable the hardware compression feature on my - exb-8900 i use the exabyte "MammothTool" you can get it here: - http://www.exabyte.com/support/online/downloads/index.cfm - There is a solaris version of this tool. With option -C 0 or 1 you can - disable or activate compression. Start this tool without any options for - a small reference. -- Linux Sony LIB-D81, AIT-3 library works. -- Document PostgreSQL performance problems bug 131. -- Document testing -- Document that ChangerDevice is used for Alert command. -- Document new CDROM directory. -- Document Heartbeat Interval in the dealing with firewalls section. -- Document the multiple-drive-changer.txt script. -Maybe in 1.37: - In restore don't compare byte count on a raw device -- directory entry does not contain bytes. - To mark files as deleted, run essentially a Verify to disk, and @@ -137,6 +467,105 @@ Maybe in 1.37: - Bug: if a job is manually scheduled to run later, it does not appear in any status report and cannot be cancelled. +==== Keeping track of deleted files ==== + My "trick" for keeping track of deletions is the following. + Assuming the user turns on this option, after all the files + have been backed up, but before the job has terminated, the + FD will make a pass through all the files and send their + names to the DIR (*exactly* the same as what a Verify job + currently does). This will probably be done at the same + time the files are being sent to the SD avoiding a second + pass. The DIR will then compare that to what is stored in + the catalog. Any files in the catalog but not in what the + FD sent will receive a catalog File entry that indicates + that at that point in time the file was deleted. + + During a restore, any file initially picked up by some + backup (Full, ...) then subsequently having a File entry + marked "delete" will be removed from the tree, so will not + be restored. If a file with the same name is later OK it + will be inserted in the tree -- this already happens. All + will be consistent except for possible changes during the + running of the FD. + + Since I'm on the subject, some of you may be wondering what + the utility of the in memory tree is if you are going to + restore everything (at least it comes up from time to time + on the list). 
Well, it is still *very* useful because it
+  allows only the last item found for a particular filename
+  (full path) to be entered into the tree, and thus if a file
+  is backed up 10 times, only the last copy will be restored.
+  I recently (last Friday) restored a complete directory, and
+  the Full and all the Differential and Incremental backups
+  spanned 3 Volumes.  The first Volume was not even mounted
+  because all the files had been updated and hence backed up
+  since the Full backup was made.  In this case, the tree
+  saved me a *lot* of time.
+
+  Make sure this information is stored on the tape too so
+  that it can be restored directly from the tape.
+
+  Comments from Martin Simmons (I think they are all covered):
+  Ok, that should cover the basics.  There are a few issues though:
+
+  - Restore will depend on the catalog.  I think it is better to include the
+  extra data in the backup as well, so it can be seen by bscan and bextract.
+
+  - I'm not sure if it will preserve multiple hard links to the same inode.  Or
+  maybe adding or removing links will cause the data to be dumped again?
+
+  - I'm not sure if it will handle renamed directories.  Possibly it will work
+  by dumping the whole tree under a renamed directory?
+
+  - It remains to be seen how the backup performance of the DIR will be
+  affected when comparing the catalog for a large filesystem.
+
+====
+From David:
+How about introducing a Type = MgmtPolicy job type?  That job type would
+be responsible for scanning the Bacula environment looking for specific
+conditions, and submitting the appropriate jobs for implementing said
+policy, e.g.:
+
+Job {
+   Name = "Migration-Policy"
+   Type = MgmtPolicy
+   Policy Selection Job Type = Migrate
+   Scope = "<keyword> <operator> <regexp>"
+   Threshold = "<keyword> <operator> <regexp>"
+   Job Template = <name of template job>
+}
+
+Where <keyword> is any legal job keyword, <operator> is a comparison
+operator (=,<,>,!=, logical operators AND/OR/NOT) and <regexp> is an
+appropriate regexp.  I could see an argument for Scope and Threshold
+being SQL queries if we want to support full flexibility.  The
+Migration-Policy job would then get scheduled as frequently as a site
+felt necessary (suggested default: every 15 minutes).
+
+Example:
+
+Job {
+   Name = "Migration-Policy"
+   Type = MgmtPolicy
+   Policy Selection Job Type = Migration
+   Scope = "Pool=*"
+   Threshold = "Migration Selection Type = LowestUtil"
+   Job Template = "MigrationTemplate"
+}
+
+would select all pools for examination and generate a job based on
+MigrationTemplate to automatically select the volume with the lowest
+usage and migrate its contents to the nextpool defined for that pool.
+
+This policy abstraction would be really handy for adjusting the behavior
+of Bacula according to site-selectable criteria (one thing that pops
+into mind is Amanda's ability to automatically adjust backup levels
+depending on various criteria).
+
+
+=====
+
 Regression tests:
 - Add Pool/Storage override regression test.
 - Add delete JobId to regression.
@@ -1154,156 +1583,69 @@ Block Position: 0
 === Done
-- Save mount point for directories not traversed with onefs=yes.
-- Add seconds to start and end times in the Job report output.
-- if 2 concurrent backups are attempted on the same tape
-  drive (autoloader) into different tape pools, one of them will exit
-  fatally instead of halting until the drive is idle
-- Update StartTime if job held in Job Queue.
-- Look at www.nu2.nu/pebuilder as a helper for full windows
-  bare metal restore.
(done by Scott) -- Fix orphanned buffers: - Orphaned buffer: 24 bytes allocated at line 808 of rufus-dir job.c - Orphaned buffer: 40 bytes allocated at line 45 of rufus-dir alist.c -- Implement Preben's suggestion to add - File System Types = ext2, ext3 - to FileSets, thus simplifying backup of *all* local partitions. -- Try to open a device on each Job if it was not opened - when the SD started. -- Add dump of VolSessionId/Time and FileIndex with bls. -- If Bacula does not find the right tape in the Autochanger, - then mark the tape in error and move on rather than asking - for operator intervention. -- Cancel command should include JobId in list of Jobs. -- Add performance testing hooks -- Bootstrap from JobMedia records. -- Implement WildFile and WildDir to solve problem of - saving only *.doc files. -- Fix - Please use the "label" command to create a new Volume for: - Storage: DDS-4-changer - Media type: - Pool: Default - label - The defined Storage resources are: -- Copy Changer Device and Changer Command from Autochanger - to Device resource in SD if none given in Device resource. -- 1. Automatic use of more than one drive in an autochanger (done) -- 2. Automatic selection of the correct drive for each Job (i.e. - selects a drive with an appropriate Volume for the Job) (done) -- 6. Allow multiple simultaneous Jobs referencing the same pool write - to several tapes (some new directive(s) are are probably needed for - this) (done) -- Locking (done) -- Key on Storage rather than Pool (done) -- Allow multiple drives to use same Pool (change jobq.c DIR) (done). -- Synchronize multiple drives so that not more - than one loads a tape and any time (done) -- 4. Use Changer Device and Changer Command specified in the - Autochanger resource, if none is found in the Device resource. - You can continue to specify them in the Device resource if you want - or need them to be different for each device. -- 5. Implement a new Device directive (perhaps "Autoselect = yes/no") - that can allow a Device be part of an Autochanger, and hence the changer - script protected, but if set to no, will prevent the Device from being - automatically selected from the changer. This allows the device to - be directly accessed through its Device name, but not through the - AutoChanger name. -#6 Select one from among Multiple Storage Devices for Job -#5 Events that call a Python program - (Implemented in Dir/SD) -- Make sure the Device name is in the Query packet returned. -- Don't start a second file job if one is already running. -- Implement EOF/EOV labels for ANSI labels -- Implement IBM labels. -- When Python creates a new label, the tape is immediately - recycled and no label created. This happens when using - autolabeling -- even when Python doesn't generate the name. -- Scratch Pool where the volumes can be re-assigned to any Pool. -- 28-Mar 23:19 rufus-sd: acquire.c:379 Device "DDS-4" (/dev/nst0) - is busy reading. Job 6 canceled. -- Remove separate thread for opening devices in SD. On the other - hand, don't block waiting for open() for devices. -- Fix code to either handle updating NumVol or to calculate it in - Dir next_vol.c -- Ensure that you cannot exclude a directory or a file explicitly - Included with File. -#4 Embedded Python Scripting - (Implemented in Dir/SD/FD) -- Add Python writable variable for changing the Priority, - Client, Storage, JobStatus (error), ... -- SD Python - - Solicit Events -- Add disk seeking on restore; turn off seek on tapes. 
- stored/match_bsr.c -- Look at dird_conf.c:1000: warning: `int size' - might be used uninitialized in this function -- Indicate when a Job is purged/pruned during restore. -- Implement some way to turn off automatic pruning in Jobs. -- Implement a way an Admin Job can prune, possibly multiple - clients -- Python script? -- Look at Preben's acl.c error handling code. -- SD crashes after a tape restore then doing a backup. -- If drive is opened read/write, close it and re-open - read-only if doing a restore, and vice-versa. -- Windows restore: - data-fd: RestoreFiles.2004-12-07_15.56.42 Error: - > ..\findlib\../../findlib/create_file.c:275 Could not open e:/: ERR=Der - > Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen - > Prozess verwendet wird. - Restore restores all files, but then fails at the end trying - to set the attributes of e: - from failed jobs.- Resolve the problem between Device name and Archive name, - and fix SD messages. -- Tell the "restore" user when browsing is no longer possible. -- Add a restore directory-x -- Write non-optimized bsrs from the JobMedia and Media records, - even after Files are pruned. -- Delete Stripe and Copy from VolParams to save space. -- Fix option 2 of restore -- list where file is backed up -- require Client, - then list last 20 backups. -- Finish implementation of passing all Storage and Device needs to - the SD. -- Move test for max wait time exceeded in job.c up -- Peter's idea. -## Consider moving docs to their own project. -## Move rescue to its own project. -- Add client version to the Client name line that prints in - the Job report. -- Fix the Rescue CDROM. -- By the way: on page http://www.bacula.org/?page=tapedrives , at the - bottom, the link to "Tape Testing Chapter" is broken. It goes to - /html-manual/... while the others point to /rel-manual/... -- Device resource needs the "name" of the SD. -- Specify a single directory to restore. -- Implement MediaType keyword in bsr? -- Add a date and time stamp at the beginning of every line in the - Job report (Volker Sauer). -- Add level to estimate command. -- Add "limit=n" for "list jobs" -- Make bootstrap filename unique. -- Make Dmsg look at global before calling subroutine. -- From Chris Hull: - it seems to be complaining about 12:00pm which should be a valid 12 - hour time. I changed the time to 11:59am and everything works fine. - Also 12:00am works fine. 0:00pm also works (which I don't think - should). None of the values 12:00pm - 12:59pm work for that matter. -- Require restore via the restore command or make a restore Job - get the bootstrap file. -- Implement Maximum Job Spool Size -- Fix 3993 error in SD. It forgets to look at autochanger - resource for device command, ... -- 3. Prevent two drives requesting the same Volume in any given - autochanger, by checking if a Volume is mounted on another drive - in an Autochanger. -- Upgrade to MySQL 4.1.12 See: - http://dev.mysql.com/doc/mysql/en/Server_SQL_mode.html -- Add # Job Level date to bsr file -- Implement "PreferMountedVolumes = yes|no" in Job resource. -## Integrate web-bacula into a new Bacula project with - bimagemgr. -- Cleaning tapes should have Status "Cleaning" rather than append. -- Make sure that Python has access to Client address/port so that - it can check if Clients are alive. -- Review all items in "restore". -- Fix PostgreSQL GROUP BY problems in restore. -- Fix PostgreSQL sql problems in bugs. +- Make sure that all do_prompt() calls in Dir check for + -1 (error) and -2 (cancel) returns. 
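+  A stand-alone sketch of that calling convention -- do_prompt_stub() below
+  is only a stand-in for the real Director do_prompt() (its arguments are
+  omitted here); it just shows how callers are expected to branch on -1/-2:
+
+  #include <stdio.h>
+
+  /* Stand-in: returns -1 on error, -2 on cancel, or the zero-based index
+   * of the item the user selected. */
+  static int do_prompt_stub(int simulated_result)
+  {
+     return simulated_result;
+  }
+
+  static void select_something(int simulated_result)
+  {
+     int item = do_prompt_stub(simulated_result);
+     if (item == -1) {            /* error, e.g. lost console connection */
+        printf("prompt error\n");
+        return;
+     }
+     if (item == -2) {            /* user cancelled the selection */
+        printf("selection cancelled\n");
+        return;
+     }
+     printf("selected item %d\n", item);   /* >= 0 is a valid choice */
+  }
+
+  int main(void)
+  {
+     select_something(-1);
+     select_something(-2);
+     select_something(3);
+     return 0;
+  }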
+- Fix foreach_jcr() to have free_jcr() inside next(). + jcr=jcr_walk_start(); + for ( ; jcr; (jcr=jcr_walk_next(jcr)) ) + ... + jcr_walk_end(jcr); +- A Volume taken from Scratch should take on the retention period + of the new pool. +- Correct doc for Maximum Changer Wait (and others) accepting only + integers. +- Implement status that shows why a job is being held in reserve, or + rather why none of the drives are suitable. +- Implement a way to disable a drive (so you can use the second + drive of an autochanger, and the first one will not be used or + even defined). +- Make sure Maximum Volumes is respected in Pools when adding + Volumes (e.g. when pulling a Scratch volume). +- Keep same dcr when switching device ... +- Implement code that makes the Dir aware that a drive is an + autochanger (so the user doesn't need to use the Autochanger = yes + directive). +- Make catalog respect ACL. +- Add recycle count to Media record. +- Add initial write date to Media record. +- Fix store_yesno to be store_bitmask. +--- create_file.c.orig Fri Jul 8 12:13:05 2005 ++++ create_file.c Fri Jul 8 12:13:07 2005 +@@ -195,6 +195,8 @@ + attr->ofname, be.strerror()); + return CF_ERROR; + } ++ } else if(S_ISSOCK(attr->statp.st_mode)) { ++ Dmsg1(200, "Skipping socket: %s\n", attr->ofname); + } else { + Dmsg1(200, "Restore node: %s\n", attr->ofname); + if (mknod(attr->ofname, attr->statp.st_mode, attr->statp.st_rdev) != 0 && errno != EEXIST) { +- Add true/false to conf same as yes/no +- Reserve blocks other restore jobs when first cannot connect to SD. +- Fix Maximum Changer Wait, Maximum Open Wait, Maximum Rewind Wait to + accept time qualifiers. +- Does ClientRunAfterJob fail the job on a bad return code? +- Make hardlink code at line 240 of find_one.c use binary search. +- Add ACL error messages in src/filed/acl.c. +- Make authentication failures single threaded. +- Make Dir and SD authentication errors single threaded. +- Install man pages +- Fix catreq.c digestbuf at line 411 in src/dird/catreq.c +- Make base64.c (bin_to_base64) take a buffer length + argument to avoid overruns. + and verify that other buffers cannot overrun. +- Implement VolumeState as discussed with Arno. +- Add LocationId to update volume +- Add LocationLog + LogId + Date + User text + MediaId + LocationId + NewState??? +- Add Comment to Media record +- Fix auth compatibility with 1.38 +- Update dbcheck to include Log table +- Update llist to include new fields. +- Make unmount unload autochanger. Make mount load slot. +- Fix bscan to report the JobType when restoring a job.
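+- For the bin_to_base64() buffer-length item above, a sketch of what a
+  length-guarded signature could look like.  Note this sketch uses standard
+  base64 (Bacula's own routine is not standard) and bin_to_base64_n() is a
+  made-up name; it only illustrates the overrun check:
+
+  #include <stdio.h>
+  #include <string.h>
+
+  static const char b64_digits[] =
+     "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";
+
+  /* Encode binlen bytes of bin into buf, never writing more than buflen
+   * bytes (including the terminating NUL).  Returns the number of bytes
+   * written, or -1 if the output buffer is too small. */
+  static int bin_to_base64_n(char *buf, int buflen,
+                             const unsigned char *bin, int binlen)
+  {
+     int i, j = 0;
+     int needed = 4 * ((binlen + 2) / 3) + 1;
+
+     if (buflen < needed) {
+        return -1;                         /* would overrun -- refuse */
+     }
+     for (i = 0; i < binlen; i += 3) {
+        unsigned int v = (unsigned int)bin[i] << 16;
+        if (i + 1 < binlen) v |= (unsigned int)bin[i + 1] << 8;
+        if (i + 2 < binlen) v |= bin[i + 2];
+        buf[j++] = b64_digits[(v >> 18) & 0x3f];
+        buf[j++] = b64_digits[(v >> 12) & 0x3f];
+        buf[j++] = (i + 1 < binlen) ? b64_digits[(v >> 6) & 0x3f] : '=';
+        buf[j++] = (i + 2 < binlen) ? b64_digits[v & 0x3f] : '=';
+     }
+     buf[j] = '\0';
+     return j;
+  }
+
+  int main(void)
+  {
+     char out[16];
+     const char *msg = "Bacula";
+     if (bin_to_base64_n(out, sizeof(out), (const unsigned char *)msg,
+                         (int)strlen(msg)) > 0) {
+        printf("%s\n", out);               /* QmFjdWxh */
+     }
+     return 0;
+  }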