Kern's ToDo List
- 16 June 2005
+ 04 July 2005
Major development:
Project Developer
#7 Single Job Writing to Multiple Storage Devices
(probably not this version)
-## Integrate web-bacula into a new Bacula project with
- bimagemgr.
## Create a new GUI chapter explaining all the GUI programs.
Autochangers:
-- 7. Implement new Console commands to allow offlining/reserving drives,
- and possibly manipulating the autochanger (much asked for).
- Make "update slots", when pointing to an Autochanger, remove
  all Volumes from other drives. "update slots all-drives"?
+For 1.37:
+- update volume=xxx --- add status=Full
+- Remove old spool files on startup.
+- Exclude SD spool/working directory.
+- Finish TLS implementation.
+- Refuse to prune last valid Full backup. Same goes for Catalog.
+- --without-openssl breaks at least on Solaris.
+- Python:
+ - Make a callback when Rerun failed levels is called.
+ - Give Python program access to Scheduled jobs.
+ - Add setting Volume State via Python.
+ - Python script to save with Python, not save, save with Bacula.
+ - Python script to do backup.
+ - What events?
+ - Change the Priority, Client, Storage, JobStatus (error)
+ at the start of a job.
+- Why is SpoolDirectory = /home/bacula/spool; not reported
+ as an error when writing a DVD?
+- Make bootstrap file handle multiple MediaTypes (SD)
+- Remove all old Device resource code in Dir and code to pass it
+ back in SD -- better, rework it to pass back device statistics.
+- Check locking of resources -- be sure to lock devices where previously
+ resources were locked.
+- The last part is left in the spool dir.
+
Document:
+- Port limiting (-m in iptables) to prevent DoS attacks
+  could cause broken pipes in Bacula.
- Document that Bootstrap files can be written with cataloging
turned off.
- Pruning with Admin job.
- Document Heartbeat Interval in the dealing with firewalls section.
- Document the multiple-drive-changer.txt script.
-For 1.37:
-- Add # Job Level date to bsr file
-- Implement "PreferMountedVolumes = yes|no" in Job resource.
+Maybe in 1.37:
+- In restore don't compare byte count on a raw device -- directory
+ entry does not contain bytes.
+- To mark files as deleted, run essentially a Verify to disk, and
+ when a file is found missing (MarkId != JobId), then create
+ a new File record with FileIndex == -1. This could be done
+ by the FD at the same time as the backup.
=== rate design
jcr->last_rate
jcr->last_runtime
rate = (bytes - last_bytes) / (runtime - last_runtime)
MA = (last_MA * 3 + rate) / 4
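The rate design above amounts to an instantaneous byte rate smoothed with a 3:1 weighted moving average. A minimal sketch in Python (the function and argument names are assumptions for illustration; only the two formulas come from the notes above):

```python
def update_rate(job_bytes, runtime, last_bytes, last_runtime, last_ma):
    """Instantaneous transfer rate since the last sample, smoothed with
    the 3:1 weighted moving average from the rate design notes.
    Names are illustrative, not actual jcr fields."""
    # rate = (bytes - last_bytes) / (runtime - last_runtime)
    rate = (job_bytes - last_bytes) / (runtime - last_runtime)
    # MA = (last_MA * 3 + rate) / 4
    ma = (last_ma * 3 + rate) / 4
    return rate, ma

# 40 MB written over 10 s since the last sample, previous MA of 2 MB/s:
rate, ma = update_rate(40_000_000, 10.0, 0, 0.0, 2_000_000)
# rate is 4.0 MB/s; the smoothed MA moves only a quarter of the way, to 2.5 MB/s
```

Between samples, the job would need to cache the previous byte count, runtime, and moving average, which is what jcr->last_rate and jcr->last_runtime suggest.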
-- Despool attributes simultaneously with data in a separate
- thread, rejoined at end of data spooling.
+- Max Vols limit in Pool off by one?
- Implement Files/Bytes,... stats for restore job.
- Implement Total Bytes Written, ... for restore job.
-- Add setting Volume State via Python.
-- Max Vols limit in Pool off by one?
-- Make bootstrap file handle multiple MediaTypes (SD)
-- Test restoring into a user restricted directory on Win32 -- see
- bug report.
-- --without-openssl breaks at least on Solaris.
-- Python:
- - Make a callback when Rerun failed levels is called.
- - Give Python program access to Scheduled jobs.
- - Python script to save with Python, not save, save with Bacula.
- - Python script to do backup.
- - What events?
- - Change the Priority, Client, Storage, JobStatus (error)
- at the start of a job.
- - Make sure that Python has access to Client address/port so that
- it can check if Clients are alive.
-
-- Remove all old Device resource code in Dir and code to pass it
- back in SD -- better, rework it to pass back device statistics.
-- Check locking of resources -- be sure to lock devices where previously
- resources were locked.
-- Add global lock on all devices when creating a device structure.
-
-Maybe in 1.37:
+- Despool attributes simultaneously with data in a separate
+ thread, rejoined at end of data spooling.
+- 7. Implement new Console commands to allow offlining/reserving drives,
+ and possibly manipulating the autochanger (much asked for).
- Add start/end date editing in messages (%t %T, %e?) ...
- Add ClientDefs similar to JobDefs.
- Print more info when bextract -p accepts a bad block.
-- To mark files as deleted, run essentially a Verify to disk, and
- when a file is found missing (MarkId != JobId), then create
- a new File record with FileIndex == -1. This could be done
- by the FD at the same time as the backup.
- Fix FD JobType to be set before RunBeforeJob in FD.
- Look at adding full Volume and Pool information to a Volume
label so that bscan can get *all* the info.
- Bug: if a job is manually scheduled to run later, it does not appear
in any status report and cannot be cancelled.
+==== Keeping track of deleted files ====
+ My "trick" for keeping track of deletions is the following.
+ Assuming the user turns on this option, after all the files
+ have been backed up, but before the job has terminated, the
+ FD will make a pass through all the files and send their
+ names to the DIR (*exactly* the same as what a Verify job
+ currently does). This will probably be done at the same
+ time the files are being sent to the SD avoiding a second
+ pass. The DIR will then compare that to what is stored in
+ the catalog. Any files in the catalog but not in what the
+ FD sent will receive a catalog File entry that indicates
+ that at that point in time the file was deleted.
+
+ During a restore, any file initially picked up by some
+ backup (Full, ...) then subsequently having a File entry
+ marked "delete" will be removed from the tree, so will not
+ be restored. If a file with the same name is later backed up
+ OK, it will be inserted in the tree -- this already happens. All
+ will be consistent except for possible changes during the
+ running of the FD.
+
+ Since I'm on the subject, some of you may be wondering what
+ the utility of the in memory tree is if you are going to
+ restore everything (at least it comes up from time to time
+ on the list). Well, it is still *very* useful because it
+ allows only the last item found for a particular filename
+ (full path) to be entered into the tree, and thus if a file
+ is backed up 10 times, only the last copy will be restored.
+ I recently (last Friday) restored a complete directory, and
+ the Full and all the Differential and Incremental backups
+ spanned 3 Volumes. The first Volume was not even mounted
+ because all the files had been updated and hence backed up
+ since the Full backup was made. In this case, the tree
+ saved me a *lot* of time.
+
+ Make sure this information is stored on the tape too so
+ that it can be restored directly from the tape.
+=====
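The catalog-side step described above is essentially a set difference: any name present in the catalog but absent from what the FD sent gets a File record with FileIndex == -1. A minimal sketch in Python (the function name and record layout are hypothetical, not Bacula's actual catalog schema):

```python
def mark_deleted(catalog_files, fd_files, job_id):
    """For every filename in the catalog that the FD did not report,
    emit a File record with FileIndex == -1, recording that the file
    was deleted as of this job.  A sketch of the design above, not
    Bacula code; the dict layout is an assumption."""
    reported = set(fd_files)
    return [{"JobId": job_id, "FileIndex": -1, "Name": name}
            for name in catalog_files
            if name not in reported]

# /etc/old.conf existed in the catalog but was not sent by the FD:
deleted = mark_deleted(["/etc/bacula.conf", "/etc/old.conf"],
                       ["/etc/bacula.conf"], job_id=42)
# deleted holds one record: /etc/old.conf with FileIndex == -1
```

A restore would then drop from the in-memory tree any file whose most recent File entry is such a "delete" record, as the paragraph above describes.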
+
Regression tests:
- Add Pool/Storage override regression test.
- Add delete JobId to regression.
- Upgrade to MySQL 4.1.12 See:
http://dev.mysql.com/doc/mysql/en/Server_SQL_mode.html
+- Add # Job Level date to bsr file
+- Implement "PreferMountedVolumes = yes|no" in Job resource.
+## Integrate web-bacula into a new Bacula project with
+ bimagemgr.
+- Cleaning tapes should have Status "Cleaning" rather than "Append".
+- Make sure that Python has access to Client address/port so that
+ it can check if Clients are alive.
+- Review all items in "restore".
+- Fix PostgreSQL GROUP BY problems in restore.
+- Fix PostgreSQL sql problems in bugs.
+- After rename
+ 04-Jul 13:01 MainSD: Rufus.2005-07-04_01.05.02 Warning: Director wanted Volume
+ "DLT-13Feb04".
+ Current Volume "DLT-04Jul05" not acceptable because:
+ 1997 Volume "DLT-13Feb04" not in catalog.
+ 04-Jul 13:01 MainSD: Please mount Volume "DLT-04Jul05" on Storage Device
+ "HP DLT 80" (/dev/nst0) for Job Rufus.2005-07-04_01.05.02
+