-IPv6_2 Meno Abels
-Version 1.37 Kern (see below)
-========================================================
-
-1.37 Major Projects:
-#3 Migration (Move, Copy, Archive Jobs)
-#4 Embedded Python Scripting
- (Implemented in Dir/SD)
-#5 Events that call a Python program
- (Implemented in Dir/SD)
-#6 Select one from among Multiple Storage Devices for Job
-#7 Single Job Writing to Multiple Storage Devices
-
-## Integrate web-bacula into a new Bacula project with
- bimagemgr.
-## Consider moving docs to their own project.
-
-Suggestions for Preben:
-- Look at adding Client run command that will use the
- port opened by the client.
-- Optimized bootstrap.
-
-Autochangers:
-- Copy Changer Device and Changer Command from Autochanger
- to Device resource in SD if none given in Device resource.
-- Doc the following:
-  To activate, check, or disable the hardware compression feature on my
-  EXB-8900 I use the Exabyte "MammothTool"; you can get it here:
-  http://www.exabyte.com/support/online/downloads/index.cfm
-  There is a Solaris version of this tool. With option -C 0 or 1 you can
-  disable or activate compression. Start this tool without any options for
-  a brief usage reference.
-- 3. Prevent two drives from requesting the same Volume in any given
-  autochanger.
-- 4. Use Changer Device and Changer Command specified in the
- Autochanger resource, if none is found in the Device resource.
- You can continue to specify them in the Device resource if you want
- or need them to be different for each device.
-- 5. Implement a new Device directive (perhaps "Autoselect = yes/no")
-  that allows a Device to be part of an Autochanger, and hence the changer
-  script protected, but if set to no, prevents the Device from being
- automatically selected from the changer. This allows the device to
- be directly accessed through its Device name, but not through the
- AutoChanger name.
-- 7. Implement new Console commands to allow offlining/reserving drives,
- and possibly manipulating the autochanger (much asked for).
-- 8. Automatic updating of Drive status from SD to DIR when something
- changes (Volume, offline, append, read, ...).
-
-Autochangers Done:
-- 1. Automatic use of more than one drive in an autochanger (done)
-- 2. Automatic selection of the correct drive for each Job (i.e.
- selects a drive with an appropriate Volume for the Job) (done)
-- 6. Allow multiple simultaneous Jobs referencing the same pool to write
-  to several tapes (some new directive(s) are probably needed for
-  this) (done)
-- Locking (done)
-- Key on Storage rather than Pool (done)
-- Allow multiple drives to use same Pool (change jobq.c DIR) (done).
-- Synchronize multiple drives so that no more
-  than one loads a tape at any time (done)
-
-
-For 1.37:
-- Linux Sony LIB-D81, AIT-3 library works.
-- Device resource needs the "name" of the SD.
-- Add an option to check whether the file size changed
-  during backup.
-- Implement "update device" from SD so that DIR will
- always have current version of device.
-- Add disk seeking on restore.
-- Add Python writable variable for changing the Priority,
- Client, Storage, JobStatus (error), ...
-- SD Autochanger work
- - Lock all devices when using changer script.
- - Check if Volume is mounted on another device
- - Find a free drive if Changer name used.
-- SD Python
- - Solicit Events
-- FD Python
+
+Document:
+- Document cleaning up the spool files:
+ db, pid, state, bsr, mail, conmsg, spool
+- Document the multiple-drive-changer.txt script.
+- Pruning with Admin job.
+- Does WildFile match against full name? Doc.
+- %d and %v are only valid on the Director, not for ClientRunBefore/After.
+
+Priority:
+
+For 1.39:
+- Fix re-read of last block to check if job has actually written
+ a block, and check if block was written by a different job
+ (i.e. multiple simultaneous jobs writing).
+- JobStatus and Termination codes.
+- Some users claim that they must do two prune commands to get a
+ Volume marked as purged.
+- Print warning message if LANG environment variable does not specify
+ UTF-8.
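A minimal sketch of such a warning, assuming a hypothetical helper name (this is not existing Bacula code):

```python
import os

def lang_specifies_utf8(lang):
    """Return True if a LANG value such as 'en_US.UTF-8' names a UTF-8 codeset."""
    if not lang:
        return False
    # The codeset follows the '.' in locale names like en_US.UTF-8 or de_DE.utf8
    _, _, codeset = lang.partition(".")
    return codeset.replace("-", "").upper().startswith("UTF8")

def warn_if_not_utf8():
    if not lang_specifies_utf8(os.environ.get("LANG", "")):
        print("Warning: LANG does not specify UTF-8; "
              "non-ASCII filenames may be stored incorrectly.")
```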
+=== Migration from David ===
+What I'd like to see:
+
+Job {
+ Name = "<poolname>-migrate"
+ Type = Migrate
+ Messages = Standard
+ Pool = Default
+ Migration Selection Type = LowestUtil | OldestVol | PoolOccupancy |
+Client | PoolResidence | Volume | JobName | SQLquery
+ Migration Selection Pattern = "regexp"
+ Next Pool = <override>
+}
+
+There should be no need for a Level (migration is always Full, since you
+don't calculate differential/incremental differences for migration),
+Storage should be determined by the volume types in the pool, and Client
+is really a selection issue. Migration should always occur to the
+Next Pool defined in the pool definition. If no Next Pool is defined, the
+job should end with a reason of "no place to go". If a Next Pool statement
+is present, it overrides the check in the pool definition and the
+specified pool is used.
+
+Here's how I'd define Migration Selection Types:
+
+With Regexes:
+Client -- Migrate data from the selected clients only. The Migration
+Selection Pattern regexp provides the pattern to select client names,
+e.g. ^FS00 makes all client names starting with FS00 eligible for
+migration.
+
+Jobname -- Migrate all jobs matching the name. The Migration Selection
+Pattern regexp provides the pattern to select job names existing in the
+pool.
+
+Volume -- Migrate all data on specified volumes. Migration Selection
+Pattern regexp provides selection criteria for volumes to be migrated.
+Volumes must exist in pool to be eligible for migration.
+
+
+With Regex optional:
+LowestUtil -- Identify the volume in the pool with the least data on it
+and empty it. No Migration Selection Pattern required.
+
+OldestVol -- Identify the LRU volume with data written, and empty it. No
+Migration Selection Pattern required.
+
+PoolOccupancy -- If pool occupancy exceeds <highmig>, migrate volumes
+(starting with the most full volumes) until pool occupancy drops below
+<lowmig>. The pool highmig and lowmig values are in the pool definition;
+no Migration Selection Pattern is required.
+
+
+No regex:
+SQLQuery -- Migrate all jobuids returned by the supplied SQL query. The
+Migration Selection Pattern contains the SQL query to execute; it should
+return a list of one or more jobuids to migrate.
+
+PoolResidence -- Migrate data that has been sitting in the pool for
+longer than the PoolResidence value in the pool definition. The
+Migration Selection Pattern is optional; if specified, it overrides the
+value in the pool definition (value in minutes).
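The regexp-driven selection types above could be prototyped roughly as follows; `select_clients` and the sample client names are hypothetical illustrations, not part of Bacula:

```python
import re

def select_clients(clients, pattern):
    """Return the client names matching a Migration Selection Pattern regexp.

    The same filtering would apply to Jobname and Volume selection types,
    just over job names or volume names instead of client names.
    """
    rx = re.compile(pattern)
    return [c for c in clients if rx.search(c)]

# With the pattern ^FS00, clients FS001 and FS002 become eligible
# for migration, while WS01 does not.
eligible = select_clients(["FS001", "FS002", "WS01"], "^FS00")
```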
+
+
+[ possibly a Python event -- kes ]
+===
+- run_cmd() returns int; it should return JobId_t.
+- get_next_jobid_from_list() returns int; it should return JobId_t.
+- Document export LDFLAGS=-L/usr/lib64
+- Don't attempt to restore from "Disabled" Volumes.
+- Network error on Win32 should set Win32 error code.
+- What happens when you rename a Disk Volume?
+- Job retention period in a Pool (and hence Volume). The job would
+ then be migrated.
+- Detect resource deadlock in Migrate when same job wants to read
+ and write the same device.
+- Make hardlink code at line 240 of find_one.c use binary search.
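A rough sketch of the suggested binary-search lookup; the `hardlinks` table and function names are hypothetical stand-ins for the find_one.c internals:

```python
import bisect

# Stand-in for the hardlink table: a list of (st_dev, st_ino, saved_name)
# tuples kept sorted on (st_dev, st_ino) so lookups are O(log n)
# instead of a linear scan.
hardlinks = []

def insert_hardlink(dev, ino, name):
    bisect.insort(hardlinks, (dev, ino, name))

def find_hardlink(dev, ino):
    """Binary search for a previously seen (dev, ino); return its name or None."""
    # "" sorts before any real name, so this lands on the matching entry if any
    i = bisect.bisect_left(hardlinks, (dev, ino, ""))
    if i < len(hardlinks) and hardlinks[i][:2] == (dev, ino):
        return hardlinks[i][2]
    return None
```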
+- Queue warning/error messages during restore so that they
+ are reported at the end of the report rather than being
+ hidden in the file listing ...
+- Look at -D_FORTIFY_SOURCE=2
+- Add Win32 FileSet definition somewhere
+- Look at fixing restore status stats in SD.
+- Make selection of Database used in restore correspond to
+ client.
+- Implement a mode that says when a hard read error is
+ encountered, read many times (as it currently does), and if the
+ block cannot be read, skip to the next block, and try again. If
+ that fails, skip to the next file and try again, ...
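The escalating skip-on-error behaviour described above might look roughly like this; `read_volume` and its callables are hypothetical placeholders, not the SD's actual I/O routines:

```python
def read_volume(blocks, retries=3):
    """blocks: callables that each return block data or raise IOError.

    Retry each failing block up to `retries` times; if it still cannot
    be read, skip it and continue with the next block. (Skipping to the
    next file on repeated block failures would work the same way, one
    level up.)
    """
    recovered, skipped = [], []
    for i, read_block in enumerate(blocks):
        for _ in range(retries):
            try:
                recovered.append(read_block())
                break
            except IOError:
                continue
        else:
            skipped.append(i)  # hard error: give up on this block, move on
    return recovered, skipped
```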
+- Add level table:
+ create table LevelType (LevelType binary(1), LevelTypeLong tinyblob);
+ insert into LevelType (LevelType,LevelTypeLong) values
+ ("F","Full"),
+ ("D","Diff"),
+ ("I","Inc");
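As a quick sanity check, the proposed table can be exercised with SQLite (SQLite accepts the MySQL column types loosely; the `level_name` helper is just for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "create table LevelType (LevelType binary(1), LevelTypeLong tinyblob)"
)
conn.executemany(
    "insert into LevelType (LevelType, LevelTypeLong) values (?, ?)",
    [("F", "Full"), ("D", "Diff"), ("I", "Inc")],
)

def level_name(code):
    """Expand a one-letter level code to its long form, e.g. 'F' -> 'Full'."""
    row = conn.execute(
        "select LevelTypeLong from LevelType where LevelType = ?", (code,)
    ).fetchone()
    return row[0] if row else None
```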
+- Add ACL to restore only to original location.
+- Add a recursive mark command (rmark) to restore.
+- "Minimum Job Interval = nnn" sets minimum interval between Jobs
+ of the same level and does not permit multiple simultaneous
+ running of that Job (i.e. lets any previous invocation finish
+ before doing Interval testing).
+- Look at simplifying File exclusions.
+- New directive "Delete purged Volumes"
+- New pool XXX with ScratchPoolId = MyScratchPool's PoolId; let it fill
+  itself, and set RecyclePoolId = XXX's PoolId so I can see whether it
+  becomes stable and I only have to supervise MyScratchPool.
+- If I want to remove this pool, I set RecyclePoolId = MyScratchPool's
+  PoolId, and when it is empty I remove it.
+- Figure out how to recycle Scratch volumes back to the Scratch Pool.
+- Add Volume=SCRTCH
+- Allow Check Labels to be used with Bacula labels.
+- "Resuming" a failed backup (lost line for example) by using the
+ failed backup as a sort of "base" job.
+- Look at NDMP
+- Email the user x days before a tape will need changing.
+- Command to show next tape that will be used for a job even
+ if the job is not scheduled.
+- From: Arunav Mandal <amandal@trolltech.com>
+  1. When jobs are running and Bacula crashes or is restarted for some
+  reason, it should remember the jobs it was running before the crash or
+  restart; as of now I lose all running jobs if I restart it.
+
+  2. When spooling, if the client is disconnected midway (a laptop, for
+  instance), Bacula completely discards the spool. It would be nice if
+  it could write that spool to tape so there would be at least a partial
+  backup for that client.
+
+  3. We have around 150 client machines; it would be nice to have an
+  option to upgrade the Bacula version on all client machines
+  automatically.
+
+  4. At least one connection should be reserved for bconsole, so that
+  under heavy load I can still connect to the Director via bconsole,
+  which at times I can't.
+
+  5. Say at 10am I manually start a full backup of client abc (a full
+  backup since client abc has no backup history), and at 10:30am Bacula
+  automatically starts another backup of client abc because that was in
+  the schedule. Now we have two full backups of the same client, and if
+  we again try to start a full backup of client abc, Bacula won't
+  complain. That should be fixed.
+
+- Fix bpipe.c so that it does not modify results pointer.
+ ***FIXME*** calling sequence should be changed.
+- For Windows disaster recovery see http://unattended.sf.net/
+- Regardless of the retention period, Bacula will not prune the
+  last Full, Diff, or Inc File data until a month after the
+  retention period for the last Full backup that was done.
+- update volume=xxx --- add status=Full
+- Remove old spool files on startup.
+- Exclude SD spool/working directory.
+- Refuse to prune last valid Full backup. Same goes for Catalog.
+- Python:
+ - Make a callback when Rerun failed levels is called.
+ - Give Python program access to Scheduled jobs.
+ - Add setting Volume State via Python.