Kern's ToDo List
- 23 February 2008
+ 02 May 2008
Document:
+- This patch gives Bacula the option to specify, in a FileSet, a
+ filename which users can drop into any Included directory to
+ cause that directory not to be backed up.
+
+ For example, my FileSet contains:
+ # List of files to be backed up
+ FileSet {
+ Name = "Remote Specified1"
+ Include {
+ Options {
+ signature = MD5
+ }
+ File = "\\</etc/bacula-include"
+ IgnoreDir = .notthisone
+ }
+ Exclude {
+ File = "\\</etc/bacula-exclude"
+ }
+ }
+
+ And /etc/bacula-include contains:
+
+ /home
+
+ But /home contains hundreds of user directories, and some
+ people want to indicate that certain of their directories
+ should not be backed up:
+
+ /home/edwin/www/cache
+ /home/edwin/temp
+
+ So I could put them in /etc/bacula-exclude, but that is a system
+ file and not editable by mortal users. To let users tell the
+ system that certain directories don't need to be backed up, they
+ can now create a file called .notthisone:
+
+ /home/edwin/www/cache/.notthisone
+ /home/edwin/temp/.notthisone
+
+ so that the backup skips the rubbish in these two directories
+ without me, as administrator of the system, having to be
+ involved.
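A minimal sketch of the directory-side check this implies: when the traversal enters a directory, it stats for the configured IgnoreDir filename and skips the directory if the file exists. `have_ignoredir` is a hypothetical helper for illustration, not the actual FD code:

```c
#include <stdbool.h>
#include <stdio.h>
#include <sys/stat.h>

/* Return true if directory "dir" contains the FileSet's ignore file
 * (e.g. ".notthisone"); the traversal would then skip "dir" entirely.
 * Hypothetical helper -- the real FD-side check may differ. */
bool have_ignoredir(const char *dir, const char *ignore_file)
{
   char fname[4096];
   struct stat st;
   if (snprintf(fname, sizeof(fname), "%s/%s", dir, ignore_file)
       >= (int)sizeof(fname)) {
      return false;                 /* path too long: back it up anyway */
   }
   return stat(fname, &st) == 0;    /* ignore file exists => skip dir */
}
```

Note the failure mode chosen here: an over-long path is backed up rather than silently skipped.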
- !!! Cannot restore two jobs at the same time that were
written simultaneously unless they were totally spooled.
- Document cleaning up the spool files:
and http://www.openeyet.nl/scc/ for managing customer changes
Priority:
-- Look at in src/filed/backup.c
-> pm_strcpy(ff_pkt->fname, ff_pkt->fname_save);
-> pm_strcpy(ff_pkt->link, ff_pkt->link_save);
-- Add Catalog = to Pool resource so that pools will exist
- in only one catalog -- currently Pools are "global".
+================
+- Change the calling sequence to delete_job_id_range() in ua_cmds.c so
+ that the preceding strtok() is done inside the subroutine only once.
+- Dangling softlinks are not restored properly. For example, take a
+ soft link such as src/testprogs/install-sh, which points to /usr/share/autoconf...
+ move the directory to another machine where the file /usr/share/autoconf does
+ not exist, back it up, then try a full restore. It fails.
+- Check for FD compatibility -- eg .nobackup ...
+- Re-check new dcr->reserved_volume
+- Softlinks that point to a non-existent file are not restored in restore all,
+ but are restored if the file is individually selected. BUG!
+- Doc Duplicate Jobs.
- New directive "Delete purged Volumes"
- Prune by Job
- Prune by Job Level (Full, Differential, Incremental)
- Implement unmount of USB volumes.
- Use "./config no-idea no-mdc2 no-rc5" on building OpenSSL for
Win32 to avoid patent problems.
+- Implement multiple jobid specification for the cancel command,
+ similar to what is permitted on the update slots command.
- Implement Bacula plugins -- design API
- modify pruning to keep a fixed number of versions of a file,
if requested.
-=== Duplicate jobs ===
- hese apply only to backup jobs.
-
- 1. Allow Duplicate Jobs = Yes | No | Higher (Yes)
-
- 2. Duplicate Job Interval = <time-interval> (0)
-
- The defaults are in parenthesis and would produce the same behavior as today.
-
- If Allow Duplicate Jobs is set to No, then any job starting while a job of the
- same name is running will be canceled.
-
- If Allow Duplicate Jobs is set to Higher, then any job starting with the same
- or lower level will be canceled, but any job with a Higher level will start.
- The Levels are from High to Low: Full, Differential, Incremental
-
- Finally, if you have Duplicate Job Interval set to a non-zero value, any job
- of the same name which starts <time-interval> after a previous job of the
- same name would run, any one that starts within <time-interval> would be
- subject to the above rules. Another way of looking at it is that the Allow
- Duplicate Jobs directive will only apply after <time-interval> of when the
- previous job finished (i.e. it is the minimum interval between jobs).
-
- So in summary:
-
- Allow Duplicate Jobs = Yes | No | HigherLevel | CancelLowerLevel (Yes)
-
- Where HigherLevel cancels any waiting job but not any running job.
- Where CancelLowerLevel is same as HigherLevel but cancels any running job or
- waiting job.
-
- Duplicate Job Proximity = <time-interval> (0)
-
- Skip = Do not allow two or more jobs with the same name to run
- simultaneously within the proximity interval. The second and subsequent
- jobs are skipped without further processing (other than to note the job
- and exit immediately), and are not considered errors.
-
- Fail = The second and subsequent jobs that attempt to run during the
- proximity interval are cancelled and treated as error-terminated jobs.
-
- Promote = If a job is running, and a second/subsequent job of higher
- level attempts to start, the running job is promoted to the higher level
- of processing using the resources already allocated, and the subsequent
- job is treated as in Skip above.
-===
- the cd command should allow complete paths,
  i.e. cd /foo/bar/foo/bar
  -> if a customer mails me the path to a certain file,
  it's faster to enter the specified directory
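One way the tree walker could support this is to split the full path into components and descend one level at a time. `split_path` is a hypothetical helper for illustration, not the actual ua_tree.c code:

```c
#include <stdio.h>
#include <string.h>

/* Split a path such as "/foo/bar" into components so a tree walker can
 * descend one directory at a time.  Returns the number of components
 * written into parts[] (each up to 255 chars).  Hypothetical helper. */
int split_path(const char *path, char parts[][256], int max)
{
   int n = 0;
   char copy[1024];
   snprintf(copy, sizeof(copy), "%s", path);  /* strtok modifies input */
   for (char *tok = strtok(copy, "/"); tok && n < max;
        tok = strtok(NULL, "/")) {
      snprintf(parts[n++], 256, "%s", tok);
   }
   return n;
}
```

The cd command would then chdir through parts[0..n-1] in order, failing with the usual "invalid path" message at the first missing component.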
-- Fix bpipe.c so that it does not modify results pointer.
- ***FIXME*** calling sequence should be changed.
- Make tree walk routines like cd, ls, ... more user friendly
by handling spaces better.
=== rate design
- Performance: despool attributes when despooling data (problem
multiplexing Dir connection).
- Make restore use the in-use volume reservation algorithm.
-- Add TLS to bat (should be done).
- When Pool specifies Storage command override does not work.
- Implement wait_for_sysop() message display in wait_for_device(), which
now prints warnings too often.
- Unicode input http://en.wikipedia.org/wiki/Byte_Order_Mark
- Look at moving the Storage directive from the Job to the
Pool in the default conf files.
+- Look at in src/filed/backup.c
+> pm_strcpy(ff_pkt->fname, ff_pkt->fname_save);
+> pm_strcpy(ff_pkt->link, ff_pkt->link_save);
+- Add Catalog = to Pool resource so that pools will exist
+ in only one catalog -- currently Pools are "global".
+- Add TLS to bat (should be done).
+=== Duplicate jobs ===
+- Done, but implemented somewhat differently than described below!!!
+
+ These apply only to backup jobs.
+
+ 1. Allow Duplicate Jobs = Yes | No | Higher (Yes)
+
+ 2. Duplicate Job Interval = <time-interval> (0)
+
+ The defaults are in parenthesis and would produce the same behavior as today.
+
+ If Allow Duplicate Jobs is set to No, then any job starting while a job of the
+ same name is running will be canceled.
+
+ If Allow Duplicate Jobs is set to Higher, then any job starting with the same
+ or lower level will be canceled, but any job with a Higher level will start.
+ The Levels are from High to Low: Full, Differential, Incremental
+
+ Finally, if you have Duplicate Job Interval set to a non-zero value,
+ any job of the same name which starts <time-interval> after a previous
+ job of the same name would run; any one that starts within
+ <time-interval> would be subject to the above rules. Another way of
+ looking at it is that the Allow Duplicate Jobs directive will only
+ apply within <time-interval> of when the previous job finished (i.e.
+ it is the minimum interval between jobs).
+
+ So in summary:
+
+ Allow Duplicate Jobs = Yes | No | HigherLevel | CancelLowerLevel (Yes)
+
+ Where HigherLevel cancels any waiting job but not any running job.
+ Where CancelLowerLevel is the same as HigherLevel but cancels any running job or
+ waiting job.
+
+ Duplicate Job Proximity = <time-interval> (0)
+
+ My suggestion was to define it as the minimum guard time between
+ executions of a specific job -- ie, if a job was scheduled within Job
+ Proximity number of seconds, it would be considered a duplicate and
+ consolidated.
+
+ Skip = Do not allow two or more jobs with the same name to run
+ simultaneously within the proximity interval. The second and subsequent
+ jobs are skipped without further processing (other than to note the job
+ and exit immediately), and are not considered errors.
+
+ Fail = The second and subsequent jobs that attempt to run during the
+ proximity interval are cancelled and treated as error-terminated jobs.
+
+ Promote = If a job is running, and a second/subsequent job of higher
+ level attempts to start, the running job is promoted to the higher level
+ of processing using the resources already allocated, and the subsequent
+ job is treated as in Skip above.
+
+
+DuplicateJobs {
+ Name = "xxx"
+ Description = "xxx"
+ Allow = yes|no (no = default)
+
+ AllowHigherLevel = yes|no (no)
+
+ AllowLowerLevel = yes|no (no)
+
+ AllowSameLevel = yes|no
+
+ Cancel = Running | New (no)
+
+ CancelledStatus = Fail | Skip (fail)
+
+ Job Proximity = <time-interval> (0)
+ My suggestion was to define it as the minimum guard time between
+ executions of a specific job -- ie, if a job was scheduled within Job
+ Proximity number of seconds, it would be considered a duplicate and
+ consolidated.
+
+}
+
+===
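The level comparison at the heart of the Higher behavior can be sketched as follows. These are hypothetical names, and as noted above the feature was implemented somewhat differently from this design:

```c
#include <stdbool.h>

/* Levels ordered low to high so they compare numerically.
 * Hypothetical enum, not Bacula's actual level codes. */
typedef enum { LEV_INCREMENTAL = 1, LEV_DIFFERENTIAL = 2,
               LEV_FULL = 3 } joblevel;

typedef enum { DUP_START, DUP_CANCEL_NEW } dup_action;

/* Decide what to do when a new job arrives while a job of the same
 * name is running.  allow_higher corresponds to
 * "Allow Duplicate Jobs = Higher"; false corresponds to "No". */
dup_action duplicate_decision(bool allow_higher,
                              joblevel running, joblevel incoming)
{
   if (!allow_higher) {
      return DUP_CANCEL_NEW;        /* No: any duplicate is canceled */
   }
   /* Higher: same or lower level is canceled, higher level starts. */
   return (incoming > running) ? DUP_START : DUP_CANCEL_NEW;
}
```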
+- Fix bpipe.c so that it does not modify results pointer.
+ ***FIXME*** calling sequence should be changed.