X-Git-Url: https://git.sur5r.net/?a=blobdiff_plain;f=bacula%2Fkernstodo;h=bf8f31e598a1ff6c9154b216f7ec48b765c7c6a8;hb=200821d1ecedf9bc34de69a1ab33f937cebb4972;hp=b8da8d6a10a0ef6ffd8c5b17ab62c025075e4833;hpb=f0a7af9afe0811cf10d4efde51d337c05a2bba10;p=bacula%2Fbacula diff --git a/bacula/kernstodo b/bacula/kernstodo index b8da8d6a10..bf8f31e598 100644 --- a/bacula/kernstodo +++ b/bacula/kernstodo @@ -1,93 +1,391 @@ Kern's ToDo List - 24 July 2005 + 19 August 2006 Major development: Project Developer ======= ========= -Version 1.37 Kern (see below) -======================================================== - -Final items for 1.37 before release: -1. Fix bugs -- Tape xxx in drive 0, requested in drive 1 -- Multi-drive changer seems to only use drive 0 - Multiple drives don't seem to be opened. -- Why isn't the DEVICE structure defined when doing - a reservation? -- The mount command does not work with drives other than 0. - -- --without-openssl breaks at least on Solaris. -9. Run the regression scripts on Solaris and FreeBSD -- Figure out how to package gui, and rescue programs. -- Test TLS. -- Arno had to do -- to get update slots=x to work - UPDATE Media SET InChanger=0,Slot=0 WHERE InChanger>0 AND Slot>0; (MySQL) - Document: - Document cleaning up the spool files: db, pid, state, bsr, mail, conmsg, spool - Document the multiple-drive-changer.txt script. - Pruning with Admin job. -- Restore of all files for a Job or set of jobs even if the file - records have been removed from the catalog. -========= probably not in 1.38 ============= - - MaximumPartSize = bytes (SD, Device resource) - Defines the maximum part size. - - Requires Mount = Yes/No (SD, Device resource) - Defines if the device require to be mounted to be read, and if it - must be written in a special way. If it set, the following directives - must be defined in the same Device resource: - + Mount Point = directory - Directory where the device must be mounted. - + Mount Command = name-string - Command that must be executed to mount the device. Before the command - is executed, %a is replaced with the Archive Device, and %m with the - Mount Point. - + Unmount Command = name-string - Command that must be executed to unmount the device. Before the - command is executed, %a is replaced with the Archive Device, and - %m with the Mount Point. - + Write Part Command = name-string - Command that must be executed to write a part to the device. Before - the command is executed, %a is replaced with the Archive Device, %m - with the Mount Point, %n with the current part number (0-based), - and %v with the current part filename. - + Free Space Command = name-string - Command that must be executed to check how much free space is left - on the device. Before the command is executed, %a is replaced with - the Archive Device, %m with the Mount Point, %n with the current part - number (0-based), and %v with the current part filename. - - Write Part After Job = Yes/No (DIR, Job Resource, and Schedule Resource) - If this directive is set to yes (default no), a new part file will be - created after the job is finished. -======= +- Does WildFile match against full name? Doc. +- %d and %v only valid on Director, not for ClientRunBefore/After. +- During tests with the 260 char fix code, I found one problem: + if the system "sees" a long path once, it seems to forget it's + working drive (e.g. c:\), which will lead to a problem during + the next job (create bootstrap file will fail). 
Here is the + workaround: specify absolute working and pid directory in + bacula-fd.conf (e.g. c:\bacula\working instead of + \bacula\working). +- Document techniques for restoring large numbers of files. +- Document setting my.cnf to big file usage. +- Add example of proper index output to doc. + show index from File; +- Correct the Include syntax in the m4.xxx files in examples/conf +- Document JobStatus and Termination codes. +- Fix the error with the "DVI file can't be opened" while + building the French PDF. +- Document more DVD stuff -- particularly that recycling doesn't work, + and all the other things too. + +Priority: For 1.39: +- Fix wx-console scanning problem with commas in names. +- Change dbcheck to tell users to use native tools for fixing + broken databases, and to ensure they have the proper indexes. +- add udev rules for Bacula devices. +- Add manpages to the list of directories for make install. +- If a job terminates, the DIR connection can close before the + Volume info is updated, leaving the File count wrong. +- Look at why SIGPIPE during connection can cause seg fault in + writing the daemon message, when Dir dropped to bacula:bacula +- Look at zlib 32 => 64 problems. +- Ensure that connection to daemon failure always indicates what + daemon it was trying to connect to. +- Try turning on disk seek code. +- Possibly turn on St. Bernard code. +- Fix bextract to restore ACLs, or better yet, use common + routines. +- Do we migrate appendable Volumes? +- Remove queue.c code. +- Add bconsole option to use stdin/out instead of conio. +- Fix ClientRunBefore/AfterJob compatibility. +- Fix re-read of last block to check if job has actually written + a block, and check if block was written by a different job + (i.e. multiple simultaneous jobs writing). +- Some users claim that they must do two prune commands to get a + Volume marked as purged. +- Print warning message if LANG environment variable does not specify + UTF-8. +- New dot commands from Arno. + .update volume [enabled|disabled|*see below] + > However, I could easily imagine an option to "update slots" that says + > "enable=yes|no" that would automatically enable or disable all the Volumes + > found in the autochanger. This will permit the user to optionally mark all + > the Volumes in the magazine disabled prior to taking them offsite, and mark + > them all enabled when bringing them back on site. Coupled with the options + > to the slots keyword, you can apply the enable/disable to any or all volumes. + .show device=xxx lists information from one storage device, including + devices (I'm not even sure that information exists in the DIR...) + .move eject device=xxx mostly the same as 'unmount xxx' but perhaps with + better machine-readable output like "Ok" or "Error busy" + .move eject device=xxx toslot=yyy the same as above, but with a new + target slot. The catalog should be updated accordingly. + .move transfer device=xxx fromslot=yyy toslot=zzz + +Low priority: +- Get Perl replacement for bregex.c +- Given all the problems with FIFOs, I think the solution is to do something a + little different, though I will look at the code and see if there is not some + simple solution (i.e. some bug that was introduced). What might be a better + solution would be to use a FIFO as a sort of "key" to tell Bacula to read and + write data to a program rather than the FIFO. 
For example, suppose you
+  create a FIFO named:
+
+     /home/kern/my-fifo
+
+  Then, instead of backing up and restoring this file by direct
+  reference as is currently done for fifos, during backup Bacula would
+  execute:
+
+     /home/kern/my-fifo.backup
+
+  and read the data that my-fifo.backup writes to stdout. For restore, Bacula
+  would execute:
+
+     /home/kern/my-fifo.restore
+
+  and send the backed-up data to its stdin. These programs can either be an
+  executable or a shell script and they need only read/write to stdin/stdout.
+
+  I think this would give a lot of flexibility to the user without making any
+  significant changes to Bacula.
+
+
+==== SQL
+# get null file
+select FilenameId from Filename where Name='';
+# Get list of all directories referenced in a Backup.
+select Path.Path from Path,File where File.JobId=nnn and
+  File.FilenameId=(FilenameId-from-above) and File.PathId=Path.PathId
+  order by Path.Path ASC;
+
+- Look into using Dart for testing
+  http://public.kitware.com/Dart/HTML/Index.shtml
+
+- Look into replacing autotools with cmake
+  http://www.cmake.org/HTML/Index.html
+
+=== Migration from David ===
+What I'd like to see:
+
+Job {
+   Name = "-migrate"
+   Type = Migrate
+   Messages = Standard
+   Pool = Default
+   Migration Selection Type = LowestUtil | OldestVol | PoolOccupancy |
+Client | PoolResidence | Volume | JobName | SQLquery
+   Migration Selection Pattern = "regexp"
+   Next Pool =
+}
+
+There should be no need for a Level (migration is always Full, since you
+don't calculate differential/incremental differences for migration),
+Storage should be determined by the volume types in the pool, and Client
+is really a selection issue. Migration should always occur to the
+NextPool defined in the pool definition. If no NextPool is defined, the
+job should end with a reason of "no place to go". If a Next Pool statement
+is present, we override the check in the pool definition and use the
+pool specified.
+
+Here's how I'd define Migration Selection Types:
+
+With Regexes:
+Client -- Migrate data from the selected client only. Migration Selection
+Pattern regexp provides the pattern to select client names, e.g. ^FS00* makes
+all client names starting with FS00 eligible for migration.
+
+Jobname -- Migrate all jobs matching the name. Migration Selection Pattern
+regexp provides the pattern to select jobnames existing in the pool.
+
+Volume -- Migrate all data on the specified volumes. Migration Selection
+Pattern regexp provides the selection criteria for volumes to be migrated.
+Volumes must exist in the pool to be eligible for migration.
+
+
+With Regex optional:
+LowestUtil -- Identify the volume in the pool with the least data on it
+and empty it. No Migration Selection Pattern required.
+
+OldestVol -- Identify the LRU volume with data written, and empty it. No
+Migration Selection Pattern required.
+
+PoolOccupancy -- If pool occupancy exceeds the highmig value, migrate volumes
+(starting with the most full volumes) until pool occupancy drops below the
+lowmig value. The highmig and lowmig values are in the pool definition;
+no Migration Selection Pattern is required.
+
+
+No regex:
+SQLQuery -- Migrate all jobuids returned by the supplied SQL query.
+Migration Selection Pattern contains the SQL query to execute; it should
+return a list of 1 or more jobuids to migrate.
+
+PoolResidence -- Migrate data sitting in the pool for longer than the
+PoolResidence value in the pool definition. Migration Selection Pattern
+optional; if specified, it overrides the value in the pool definition
+(value in minutes).
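+
+The selection types above can be pictured as simple catalog queries.
+The sketch below is illustrative only (not Bacula's implementation); it
+uses only Media columns that appear in the llist output later in this
+file (MediaId, VolumeName, PoolId, VolBytes), and the PoolId=1 value is
+made up for the example.
+
+# Illustrative LowestUtil sketch: the volume in the pool with the least
+# data on it becomes the migration candidate.
+select MediaId, VolumeName, VolBytes from Media
+  where PoolId=1 and VolBytes>0
+  order by VolBytes asc limit 1;
+# Illustrative PoolOccupancy sketch: total pool occupancy, to compare
+# against the highmig/lowmig thresholds.
+select sum(VolBytes) from Media where PoolId=1;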
+ + +[ possibly a Python event -- kes ] +=== +- Mount on an Autochanger with no tape in the drive causes: + Automatically selected Storage: LTO-changer + Enter autochanger drive[0]: 0 + 3301 Issuing autochanger "loaded drive 0" command. + 3302 Autochanger "loaded drive 0", result: nothing loaded. + 3301 Issuing autochanger "loaded drive 0" command. + 3302 Autochanger "loaded drive 0", result: nothing loaded. + 3902 Cannot mount Volume on Storage Device "LTO-Drive1" (/dev/nst0) because: + Couldn't rewind device "LTO-Drive1" (/dev/nst0): ERR=dev.c:678 Rewind error on "LTO-Drive1" (/dev/nst0). ERR=No medium found. + 3905 Device "LTO-Drive1" (/dev/nst0) open but no Bacula volume is mounted. + If this is not a blank tape, try unmounting and remounting the Volume. +- If Drive 0 is blocked, and drive 1 is set "Autoselect=no", drive 1 will + be used. +- Autochanger did not change volumes. + select * from Storage; + +-----------+-------------+-------------+ + | StorageId | Name | AutoChanger | + +-----------+-------------+-------------+ + | 1 | LTO-changer | 0 | + +-----------+-------------+-------------+ + 05-May 03:50 roxie-sd: 3302 Autochanger "loaded drive 0", result is Slot 11. + 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Warning: Director wanted Volume "LT + Current Volume "LT0-002" not acceptable because: + 1997 Volume "LT0-002" not in catalog. + 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Error: Autochanger Volume "LT0-002" + Setting InChanger to zero in catalog. + 05-May 03:50 roxie-dir: Tibs.2006-05-05_03.05.02 Error: Unable to get Media record + + 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: Error getting Volume i + 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: Job 530 canceled. + 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: spool.c:249 Fatal appe + 05-May 03:49 Tibs: Tibs.2006-05-05_03.05.02 Fatal error: c:\cygwin\home\kern\bacula + , got + (missing) + llist volume=LTO-002 + MediaId: 6 + VolumeName: LTO-002 + Slot: 0 + PoolId: 1 + MediaType: LTO-2 + FirstWritten: 2006-05-05 03:11:54 + LastWritten: 2006-05-05 03:50:23 + LabelDate: 2005-12-26 16:52:40 + VolJobs: 1 + VolFiles: 0 + VolBlocks: 1 + VolMounts: 0 + VolBytes: 206 + VolErrors: 0 + VolWrites: 0 + VolCapacityBytes: 0 + VolStatus: + Recycle: 1 + VolRetention: 31,536,000 + VolUseDuration: 0 + MaxVolJobs: 0 + MaxVolFiles: 0 + MaxVolBytes: 0 + InChanger: 0 + EndFile: 0 + EndBlock: 0 + VolParts: 0 + LabelType: 0 + StorageId: 1 + + Note VolStatus is blank!!!!! + llist volume=LTO-003 + MediaId: 7 + VolumeName: LTO-003 + Slot: 12 + PoolId: 1 + MediaType: LTO-2 + FirstWritten: 0000-00-00 00:00:00 + LastWritten: 0000-00-00 00:00:00 + LabelDate: 2005-12-26 16:52:40 + VolJobs: 0 + VolFiles: 0 + VolBlocks: 0 + VolMounts: 0 + VolBytes: 1 + VolErrors: 0 + VolWrites: 0 + VolCapacityBytes: 0 + VolStatus: Append + Recycle: 1 + VolRetention: 31,536,000 + VolUseDuration: 0 + MaxVolJobs: 0 + MaxVolFiles: 0 + MaxVolBytes: 0 + InChanger: 0 + EndFile: 0 + EndBlock: 0 + VolParts: 0 + LabelType: 0 + StorageId: 1 +=== + mount + Automatically selected Storage: LTO-changer + Enter autochanger drive[0]: 0 + 3301 Issuing autochanger "loaded drive 0" command. + 3302 Autochanger "loaded drive 0", result: nothing loaded. + 3301 Issuing autochanger "loaded drive 0" command. + 3302 Autochanger "loaded drive 0", result: nothing loaded. 
+ 3902 Cannot mount Volume on Storage Device "LTO-Drive1" (/dev/nst0) because: + Couldn't rewind device "LTO-Drive1" (/dev/nst0): ERR=dev.c:678 Rewind error on "LTO-Drive1" (/dev/nst0). ERR=No medium found. + + 3905 Device "LTO-Drive1" (/dev/nst0) open but no Bacula volume is mounted. + If this is not a blank tape, try unmounting and remounting the Volume. + +- Add VolumeState (enable, disable, archive) +- Add VolumeLock to prevent all but lock holder (SD) from updating + the Volume data (with the exception of VolumeState). +- The btape fill command does not seem to use the Autochanger +- Make Windows installer default to system disk drive. +- Look at using ioctl(FIOBMAP, ...) on Linux, and + DeviceIoControl(..., FSCTL_QUERY_ALLOCATED_RANGES, ...) on + Win32 for sparse files. + http://www.flexhex.com/docs/articles/sparse-files.phtml + http://www.informatik.uni-frankfurt.de/~loizides/reiserfs/fibmap.html +- Directive: at "command" +- Command: pycmd "command" generates "command" event. How to + attach to a specific job? +- Integrate Christopher's St. Bernard code. +- run_cmd() returns int should return JobId_t +- get_next_jobid_from_list() returns int should return JobId_t +- Document export LDFLAGS=-L/usr/lib64 +- Don't attempt to restore from "Disabled" Volumes. +- Network error on Win32 should set Win32 error code. +- What happens when you rename a Disk Volume? +- Job retention period in a Pool (and hence Volume). The job would + then be migrated. +- Detect resource deadlock in Migrate when same job wants to read + and write the same device. +- Queue warning/error messages during restore so that they + are reported at the end of the report rather than being + hidden in the file listing ... +- Look at -D_FORTIFY_SOURCE=2 +- Add Win32 FileSet definition somewhere +- Look at fixing restore status stats in SD. +- Make selection of Database used in restore correspond to + client. +- Look at using ioctl(FIMAP) and FIGETBSZ for sparse files. + http://www.informatik.uni-frankfurt.de/~loizides/reiserfs/fibmap.html +- Implement a mode that says when a hard read error is + encountered, read many times (as it currently does), and if the + block cannot be read, skip to the next block, and try again. If + that fails, skip to the next file and try again, ... +- Add level table: + create table LevelType (LevelType binary(1), LevelTypeLong tinyblob); + insert into LevelType (LevelType,LevelTypeLong) values + ("F","Full"), + ("D","Diff"), + ("I","Inc"); +- Add ACL to restore only to original location. +- Show files/second in client status output. +- Add a recursive mark command (rmark) to restore. +- "Minimum Job Interval = nnn" sets minimum interval between Jobs + of the same level and does not permit multiple simultaneous + running of that Job (i.e. lets any previous invocation finish + before doing Interval testing). +- Look at simplifying File exclusions. +- New directive "Delete purged Volumes" +- new pool XXX with ScratchPoolId = MyScratchPool's PoolId and + let it fill itself, and RecyclePoolId = XXX's PoolId so I can + see if it become stable and I just have to supervise + MyScratchPool +- If I want to remove this pool, I set RecyclePoolId = MyScratchPool's + PoolId, and when it is empty remove it. +- Figure out how to recycle Scratch volumes back to the Scratch Pool. +- Add Volume=SCRTCH +- Allow Check Labels to be used with Bacula labels. +- "Resuming" a failed backup (lost line for example) by using the + failed backup as a sort of "base" job. 
+- Look at NDMP
 - Email to the user when the tape is about to need changing x days
   before it needs changing.
 - Command to show next tape that will be used for a job even if the job
   is not scheduled.
---- create_file.c.orig	Fri Jul  8 12:13:05 2005
-+++ create_file.c	Fri Jul  8 12:13:07 2005
-@@ -195,6 +195,8 @@
-        attr->ofname, be.strerror());
-      return CF_ERROR;
-   }
-+   } else if(S_ISSOCK(attr->statp.st_mode)) {
-+       Dmsg1(200, "Skipping socket: %s\n", attr->ofname);
-   } else {
-     Dmsg1(200, "Restore node: %s\n", attr->ofname);
-     if (mknod(attr->ofname, attr->statp.st_mode, attr->statp.st_rdev) != 0 && errno != EEXIST) {
+- From: Arunav Mandal
+  1. When jobs are running and Bacula crashes for some reason, or if I do a
+  restart, it should remember the jobs it was running before it crashed or
+  restarted; as of now I lose all running jobs if I restart it.
+
+  2. When spooling, if the client is disconnected midway (for instance a
+  laptop), Bacula completely discards the spool. It would be nice if it could
+  write that spool to tape so there is some backup for that client, if not all.
+
+  3. We have around 150 client machines; it would be nice to have an option to
+  upgrade the Bacula version on all the client machines automatically.
+
+  4. At least one connection should be reserved for bconsole, so that under
+  heavy load I can still connect to the Director via bconsole, which at times
+  I currently can't.
+
+  5. Another important missing feature: say at 10am I manually started a
+  backup of client abc, and it was a Full backup since client abc has no
+  backup history, and at 10.30am Bacula again automatically started a backup
+  of client abc because that was in the schedule. Now we have two Full
+  backups of the same client, and if we again try to start a full backup of
+  client abc, Bacula won't complain. That should be fixed.
+
 - Fix bpipe.c so that it does not modify results pointer.
   ***FIXME*** calling sequence should be changed.
-1.xx Major Projects:
-#3   Migration (Move, Copy, Archive Jobs)
-#7   Single Job Writing to Multiple Storage Devices
-- Reserve blocks other restore jobs when first cannot connect
-  to SD.
-- Add true/false to conf same as yes/no
 - For Windows disaster recovery see http://unattended.sf.net/
 - regardless of the retention period, Bacula will not prune the
   last Full, Diff, or Inc File data until a month after the
@@ -222,6 +520,50 @@ For 1.39:
 - It remains to be seen how the backup performance of the DIR's
   will be affected when comparing the catalog for a large filesystem.
 
+====
+From David:
+How about introducing a Type = MgmtPolicy job type? That job type would
+be responsible for scanning the Bacula environment looking for specific
+conditions, and submitting the appropriate jobs for implementing said
+policy, e.g.:
+
+Job {
+   Name = "Migration-Policy"
+   Type = MgmtPolicy
+   Policy Selection Job Type = Migrate
+   Scope = " "
+   Threshold = " "
+   Job Template =
+}
+
+Where <keyword> is any legal job keyword, <operator> is a comparison
+operator (=,<,>,!=, logical operators AND/OR/NOT) and <regexp> is an
+appropriate regexp. I could see an argument for Scope and Threshold
+being SQL queries if we want to support full flexibility. The
+Migration-Policy job would then get scheduled as frequently as a site
+felt necessary (suggested default: every 15 minutes).
+ +Example: + +Job { + Name = "Migration-Policy" + Type = MgmtPolicy + Policy Selection Job Type = Migration + Scope = "Pool=*" + Threshold = "Migration Selection Type = LowestUtil" + Job Template = "MigrationTemplate" +} + +would select all pools for examination and generate a job based on +MigrationTemplate to automatically select the volume with the lowest +usage and migrate it's contents to the nextpool defined for that pool. + +This policy abstraction would be really handy for adjusting the behavior +of Bacula according to site-selectable criteria (one thing that pops +into mind is Amanda's ability to automatically adjust backup levels +depending on various criteria). + + ===== Regression tests: @@ -1241,199 +1583,69 @@ Block Position: 0 === Done -- Save mount point for directories not traversed with onefs=yes. -- Add seconds to start and end times in the Job report output. -- if 2 concurrent backups are attempted on the same tape - drive (autoloader) into different tape pools, one of them will exit - fatally instead of halting until the drive is idle -- Update StartTime if job held in Job Queue. -- Look at www.nu2.nu/pebuilder as a helper for full windows - bare metal restore. (done by Scott) -- Fix orphanned buffers: - Orphaned buffer: 24 bytes allocated at line 808 of rufus-dir job.c - Orphaned buffer: 40 bytes allocated at line 45 of rufus-dir alist.c -- Implement Preben's suggestion to add - File System Types = ext2, ext3 - to FileSets, thus simplifying backup of *all* local partitions. -- Try to open a device on each Job if it was not opened - when the SD started. -- Add dump of VolSessionId/Time and FileIndex with bls. -- If Bacula does not find the right tape in the Autochanger, - then mark the tape in error and move on rather than asking - for operator intervention. -- Cancel command should include JobId in list of Jobs. -- Add performance testing hooks -- Bootstrap from JobMedia records. -- Implement WildFile and WildDir to solve problem of - saving only *.doc files. -- Fix - Please use the "label" command to create a new Volume for: - Storage: DDS-4-changer - Media type: - Pool: Default - label - The defined Storage resources are: -- Copy Changer Device and Changer Command from Autochanger - to Device resource in SD if none given in Device resource. -- 1. Automatic use of more than one drive in an autochanger (done) -- 2. Automatic selection of the correct drive for each Job (i.e. - selects a drive with an appropriate Volume for the Job) (done) -- 6. Allow multiple simultaneous Jobs referencing the same pool write - to several tapes (some new directive(s) are are probably needed for - this) (done) -- Locking (done) -- Key on Storage rather than Pool (done) -- Allow multiple drives to use same Pool (change jobq.c DIR) (done). -- Synchronize multiple drives so that not more - than one loads a tape and any time (done) -- 4. Use Changer Device and Changer Command specified in the - Autochanger resource, if none is found in the Device resource. - You can continue to specify them in the Device resource if you want - or need them to be different for each device. -- 5. Implement a new Device directive (perhaps "Autoselect = yes/no") - that can allow a Device be part of an Autochanger, and hence the changer - script protected, but if set to no, will prevent the Device from being - automatically selected from the changer. This allows the device to - be directly accessed through its Device name, but not through the - AutoChanger name. 
-#6 Select one from among Multiple Storage Devices for Job -#5 Events that call a Python program - (Implemented in Dir/SD) -- Make sure the Device name is in the Query packet returned. -- Don't start a second file job if one is already running. -- Implement EOF/EOV labels for ANSI labels -- Implement IBM labels. -- When Python creates a new label, the tape is immediately - recycled and no label created. This happens when using - autolabeling -- even when Python doesn't generate the name. -- Scratch Pool where the volumes can be re-assigned to any Pool. -- 28-Mar 23:19 rufus-sd: acquire.c:379 Device "DDS-4" (/dev/nst0) - is busy reading. Job 6 canceled. -- Remove separate thread for opening devices in SD. On the other - hand, don't block waiting for open() for devices. -- Fix code to either handle updating NumVol or to calculate it in - Dir next_vol.c -- Ensure that you cannot exclude a directory or a file explicitly - Included with File. -#4 Embedded Python Scripting - (Implemented in Dir/SD/FD) -- Add Python writable variable for changing the Priority, - Client, Storage, JobStatus (error), ... -- SD Python - - Solicit Events -- Add disk seeking on restore; turn off seek on tapes. - stored/match_bsr.c -- Look at dird_conf.c:1000: warning: `int size' - might be used uninitialized in this function -- Indicate when a Job is purged/pruned during restore. -- Implement some way to turn off automatic pruning in Jobs. -- Implement a way an Admin Job can prune, possibly multiple - clients -- Python script? -- Look at Preben's acl.c error handling code. -- SD crashes after a tape restore then doing a backup. -- If drive is opened read/write, close it and re-open - read-only if doing a restore, and vice-versa. -- Windows restore: - data-fd: RestoreFiles.2004-12-07_15.56.42 Error: - > ..\findlib\../../findlib/create_file.c:275 Could not open e:/: ERR=Der - > Prozess kann nicht auf die Datei zugreifen, da sie von einem anderen - > Prozess verwendet wird. - Restore restores all files, but then fails at the end trying - to set the attributes of e: - from failed jobs.- Resolve the problem between Device name and Archive name, - and fix SD messages. -- Tell the "restore" user when browsing is no longer possible. -- Add a restore directory-x -- Write non-optimized bsrs from the JobMedia and Media records, - even after Files are pruned. -- Delete Stripe and Copy from VolParams to save space. -- Fix option 2 of restore -- list where file is backed up -- require Client, - then list last 20 backups. -- Finish implementation of passing all Storage and Device needs to - the SD. -- Move test for max wait time exceeded in job.c up -- Peter's idea. -## Consider moving docs to their own project. -## Move rescue to its own project. -- Add client version to the Client name line that prints in - the Job report. -- Fix the Rescue CDROM. -- By the way: on page http://www.bacula.org/?page=tapedrives , at the - bottom, the link to "Tape Testing Chapter" is broken. It goes to - /html-manual/... while the others point to /rel-manual/... -- Device resource needs the "name" of the SD. -- Specify a single directory to restore. -- Implement MediaType keyword in bsr? -- Add a date and time stamp at the beginning of every line in the - Job report (Volker Sauer). -- Add level to estimate command. -- Add "limit=n" for "list jobs" -- Make bootstrap filename unique. -- Make Dmsg look at global before calling subroutine. -- From Chris Hull: - it seems to be complaining about 12:00pm which should be a valid 12 - hour time. 
I changed the time to 11:59am and everything works fine. - Also 12:00am works fine. 0:00pm also works (which I don't think - should). None of the values 12:00pm - 12:59pm work for that matter. -- Require restore via the restore command or make a restore Job - get the bootstrap file. -- Implement Maximum Job Spool Size -- Fix 3993 error in SD. It forgets to look at autochanger - resource for device command, ... -- 3. Prevent two drives requesting the same Volume in any given - autochanger, by checking if a Volume is mounted on another drive - in an Autochanger. -- Upgrade to MySQL 4.1.12 See: - http://dev.mysql.com/doc/mysql/en/Server_SQL_mode.html -- Add # Job Level date to bsr file -- Implement "PreferMountedVolumes = yes|no" in Job resource. -## Integrate web-bacula into a new Bacula project with - bimagemgr. -- Cleaning tapes should have Status "Cleaning" rather than append. -- Make sure that Python has access to Client address/port so that - it can check if Clients are alive. -- Review all items in "restore". -- Fix PostgreSQL GROUP BY problems in restore. -- Fix PostgreSQL sql problems in bugs. -- After rename - 04-Jul 13:01 MainSD: Rufus.2005-07-04_01.05.02 Warning: Director wanted Volume - "DLT-13Feb04". - Current Volume "DLT-04Jul05" not acceptable because: - 1997 Volume "DLT-13Feb04" not in catalog. - 04-Jul 13:01 MainSD: Please mount Volume "DLT-04Jul05" on Storage Device - "HP DLT 80" (/dev/nst0) for Job Rufus.2005-07-04_01.05.02 -## Create a new GUI chapter explaining all the GUI programs. -- Make "update slots" when pointing to Autochanger, remove - all Volumes from other drives. "update slots all-drives"? - No, this is done by modifying mtx-changer to list what is - in the drives. -- Finish TLS implementation. -- Port limiting -m in iptables to prevent DoS attacks - could cause broken pipes on Bacula. -6. Build and test the Volume Shadow Copy (VSS) for Win32. -- Allow cancel of unknown Job -- State not saved when closing Win32 FD by icon -- bsr-opt-test fails. bsr deleted. Fix. -- Move Python daemon variables from Job to Bacula object. - WorkingDir, ConfigFile -- Document that Bootstrap files can be written with cataloging - turned off. -- Document details of ANSI/IBM labels -- OS linux 2.4 - 1) ADIC, DLT, FastStor 4000, 7*20GB -- Linux Sony LIB-D81, AIT-3 library works. -- Doc the following - to activate, check or disable the hardware compression feature on my - exb-8900 i use the exabyte "MammothTool" you can get it here: - http://www.exabyte.com/support/online/downloads/index.cfm - There is a solaris version of this tool. With option -C 0 or 1 you can - disable or activate compression. Start this tool without any options for - a small reference. -- Document Heartbeat Interval in the dealing with firewalls section. -- Document new CDROM directory. -- On Win32 working directory must have drive letter ???? -- On Win32 working directory must be writable by SYSTEM to - do restores. -- Document that ChangerDevice is used for Alert command. -- Add better documentation on how restores can be done -8. Take one more try at making DVD writing work (no go) -7. Write a bacula-web document +- Make sure that all do_prompt() calls in Dir check for + -1 (error) and -2 (cancel) returns. +- Fix foreach_jcr() to have free_jcr() inside next(). + jcr=jcr_walk_start(); + for ( ; jcr; (jcr=jcr_walk_next(jcr)) ) + ... + jcr_walk_end(jcr); +- A Volume taken from Scratch should take on the retention period + of the new pool. 
+- Correct doc for Maximum Changer Wait (and others) accepting only
+  integers.
+- Implement status that shows why a job is being held in reserve, or
+  rather why none of the drives are suitable.
+- Implement a way to disable a drive (so you can use the second
+  drive of an autochanger, and the first one will not be used or
+  even defined).
+- Make sure Maximum Volumes is respected in Pools when adding
+  Volumes (e.g. when pulling a Scratch volume).
+- Keep same dcr when switching device ...
+- Implement code that makes the Dir aware that a drive is an
+  autochanger (so the user doesn't need to use the Autochanger = yes
+  directive).
+- Make catalog respect ACL.
+- Add recycle count to Media record.
+- Add initial write date to Media record.
+- Fix store_yesno to be store_bitmask.
+--- create_file.c.orig	Fri Jul  8 12:13:05 2005
++++ create_file.c	Fri Jul  8 12:13:07 2005
+@@ -195,6 +195,8 @@
+        attr->ofname, be.strerror());
+      return CF_ERROR;
+   }
++   } else if(S_ISSOCK(attr->statp.st_mode)) {
++       Dmsg1(200, "Skipping socket: %s\n", attr->ofname);
+   } else {
+     Dmsg1(200, "Restore node: %s\n", attr->ofname);
+     if (mknod(attr->ofname, attr->statp.st_mode, attr->statp.st_rdev) != 0 && errno != EEXIST) {
+- Add true/false to conf same as yes/no
+- Reserve blocks other restore jobs when first cannot connect to SD.
+- Fix Maximum Changer Wait, Maximum Open Wait, Maximum Rewind Wait to
+  accept time qualifiers.
+- Does ClientRunAfterJob fail the job on a bad return code?
+- Make hardlink code at line 240 of find_one.c use binary search.
+- Add ACL error messages in src/filed/acl.c.
+- Make authentication failures single threaded.
+- Make Dir and SD authentication errors single threaded.
+- Install man pages.
+- Fix catreq.c digestbuf at line 411 in src/dird/catreq.c
+- Make base64.c (bin_to_base64) take a buffer length argument to avoid
+  overruns, and verify that other buffers cannot overrun.
+- Implement VolumeState as discussed with Arno.
+- Add LocationId to update volume
+- Add LocationLog
+   LogId
+   Date
+   User text
+   MediaId
+   LocationId
+   NewState???
+- Add Comment to Media record (see the SQL sketch of these catalog
+  changes at the end of this list).
+- Fix auth compatibility with 1.38
+- Update dbcheck to include Log table
+- Update llist to include new fields.
+- Make unmount unload autochanger.  Make mount load slot.
+- Fix bscan to report the JobType when restoring a job.
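+
+  A possible shape for the catalog items above ("Add recycle count to
+  Media record", "Add initial write date to Media record", "Add Comment
+  to Media record", and "Add LocationLog").  Column names and types are
+  guesses for discussion only, not the final schema:
+
+  # Sketch only -- extend the Media record.
+  alter table Media add column RecycleCount integer default 0;
+  alter table Media add column InitialWrite datetime;
+  alter table Media add column Comment blob;
+  # Sketch only -- one row per Volume location/state change.
+  create table LocationLog (
+     LogId integer not null auto_increment,
+     Date datetime,
+     Comment blob,            # the free-form "User text"
+     MediaId integer,
+     LocationId integer,
+     NewState char(10),       # NewState??? -- representation undecided
+     primary key (LogId)
+  );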