X-Git-Url: https://git.sur5r.net/?a=blobdiff_plain;f=bacula%2Fkernstodo;h=58cbda77c745df74f812c626b6517e989056ba89;hb=d4bec3ebd408991f16f56bc31cb5139e92cfaa9f;hp=5a34b80e5bdde7424aadca8ffff22abd2f250130;hpb=8f35fc1ad87cd8d230d2bdaf8c161a2842ba33a4;p=bacula%2Fbacula

diff --git a/bacula/kernstodo b/bacula/kernstodo
index 5a34b80e5b..58cbda77c7 100644
--- a/bacula/kernstodo
+++ b/bacula/kernstodo
@@ -1,8 +1,51 @@
 Kern's ToDo List
-                  23 February 2008
+                     02 May 2008
 
 Document:
+- This patch will give Bacula the option to specify files in
+  FileSets which can be dropped into directories which are Included,
+  and which will cause that directory not to be backed up.
+
+  For example, my FileSet contains:
+  # List of files to be backed up
+  FileSet {
+    Name = "Remote Specified1"
+    Include {
+      Options {
+        signature = MD5
+      }
+      File = "\\
-- Look at in src/filed/backup.c
->    pm_strcpy(ff_pkt->fname, ff_pkt->fname_save);
->    pm_strcpy(ff_pkt->link, ff_pkt->link_save);
-- Add Catalog = to Pool resource so that pools will exist
-  in only one catalog -- currently Pools are "global".
+================
+- Dangling softlinks are not restored properly.  For example, take a
+  soft link such as src/testprogs/install-sh, which points to /usr/share/autoconf...
+  move the directory to another machine where the file /usr/share/autoconf does
+  not exist, back it up, then try a full restore.  It fails.
+- Check for FD compatibility -- e.g. .nobackup ...
+- Re-check new dcr->reserved_volume
+- Softlinks that point to non-existent files are not restored by "restore all",
+  but are restored if the file is individually selected.  BUG!
+- Doc Duplicate Jobs.
 - New directive "Delete purged Volumes"
 - Prune by Job
 - Prune by Job Level (Full, Differential, Incremental)
@@ -82,66 +130,15 @@ Priority:
 - Implement unmount of USB volumes.
 - Use "./config no-idea no-mdc2 no-rc5" on building OpenSSL for Win32 to
   avoid patent problems.
+- Implement multiple jobid specification for the cancel command,
+  similar to what is permitted on the update slots command.
 - Implement Bacula plugins -- design API
 - modify pruning to keep a fixed number of versions of a file, if requested.
-=== Duplicate jobs ===
-   These apply only to backup jobs.
-
-   1. Allow Duplicate Jobs = Yes | No | Higher   (Yes)
-
-   2. Duplicate Job Interval = (0)
-
-   The defaults are in parentheses and would produce the same behavior as today.
-
-   If Allow Duplicate Jobs is set to No, then any job starting while a job of the
-   same name is running will be canceled.
-
-   If Allow Duplicate Jobs is set to Higher, then any job starting with the same
-   or lower level will be canceled, but any job with a higher level will start.
-   The levels are, from high to low: Full, Differential, Incremental.
-
-   Finally, if you have Duplicate Job Interval set to a non-zero value, any job
-   of the same name which starts more than that interval after a previous job of
-   the same name would run; any one that starts within the interval would be
-   subject to the above rules.  Another way of looking at it is that the Allow
-   Duplicate Jobs directive only applies during the interval after the
-   previous job finished (i.e. it is the minimum interval between jobs).
-
-   So in summary:
-
-   Allow Duplicate Jobs = Yes | No | HigherLevel | CancelLowerLevel  (Yes)
-
-   Where HigherLevel cancels any waiting job but not any running job.
-   Where CancelLowerLevel is the same as HigherLevel but cancels any running
-   job or waiting job.
-
-   - Duplicate Job Proximity = (0)
-
-     My suggestion was to define it as the minimum guard time between
-     executions of a specific job -- i.e., if a job was scheduled within Job
-     Proximity number of seconds, it would be considered a duplicate and
-     consolidated.
-
-     Skip = Do not allow two or more jobs with the same name to run
-     simultaneously within the proximity interval.  The second and subsequent
-     jobs are skipped without further processing (other than to note the job
-     and exit immediately), and are not considered errors.
-
-     Fail = The second and subsequent jobs that attempt to run during the
-     proximity interval are cancelled and treated as error-terminated jobs.
-
-     Promote = If a job is running, and a second/subsequent job of higher
-     level attempts to start, the running job is promoted to the higher level
-     of processing using the resources already allocated, and the subsequent
-     job is treated as in Skip above.
-===
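
  As an illustration, the two directives proposed above might be written in a
  Job resource roughly as follows.  The directive names come from the proposal
  text, not from the shipped feature (the note near the end of this file says
  it was implemented somewhat differently), and all values are examples only.

    Job {
      Name = "NightlySave"
      Type = Backup
      Client = rufus-fd
      FileSet = "Full Set"
      Storage = File
      Pool = Default
      Messages = Standard
      # Proposed directives -- names from the note above, not final syntax:
      Allow Duplicate Jobs = No        # a second NightlySave started while one runs is canceled
      Duplicate Job Interval = 1 hour  # minimum time between two runs of this job
    }
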
 - the cd-command should allow complete paths
   i.e. cd /foo/bar/foo/bar
   -> if a customer mails me the path to a certain file, it's faster to
      enter the specified directory
-- Fix bpipe.c so that it does not modify results pointer.
-  ***FIXME*** calling sequence should be changed.
 - Make tree walk routines like cd, ls, ... more user friendly by handling
   spaces better.
 === rate design
@@ -187,7 +184,6 @@ Priority:
 - Performance: despool attributes when despooling data (problem
   multiplexing Dir connection).
 - Make restore use the in-use volume reservation algorithm.
-- Add TLS to bat (should be done).
 - When Pool specifies Storage command override does not work.
 - Implement wait_for_sysop() message display in wait_for_device(), which
   now prints warnings too often.
@@ -1845,3 +1841,89 @@ Block Position: 0
 - Unicode input http://en.wikipedia.org/wiki/Byte_Order_Mark
 - Look at moving the Storage directive from the Job to the Pool
   in the default conf files.
+- Look at in src/filed/backup.c
+>    pm_strcpy(ff_pkt->fname, ff_pkt->fname_save);
+>    pm_strcpy(ff_pkt->link, ff_pkt->link_save);
+- Add Catalog = to Pool resource so that pools will exist
+  in only one catalog -- currently Pools are "global".
+- Add TLS to bat (should be done).
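
  For the "Add Catalog = to Pool resource" item above, the idea could look
  roughly like this.  The Catalog directive inside the Pool is only the
  proposed addition sketched here, not an existing directive; the Catalog
  resource itself is the standard one and the values are examples.

    Catalog {
      Name = MyCatalog
      dbname = "bacula"; dbuser = "bacula"; dbpassword = ""
    }

    Pool {
      Name = Monthly
      Pool Type = Backup
      Volume Retention = 365 days
      # Proposed: bind this Pool to a single catalog instead of keeping Pools global
      Catalog = MyCatalog
    }
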
+=== Duplicate jobs ===
+- Done, but implemented somewhat differently than described below!!!
+
+   These apply only to backup jobs.
+
+   1. Allow Duplicate Jobs = Yes | No | Higher   (Yes)
+
+   2. Duplicate Job Interval = (0)
+
+   The defaults are in parentheses and would produce the same behavior as today.
+
+   If Allow Duplicate Jobs is set to No, then any job starting while a job of the
+   same name is running will be canceled.
+
+   If Allow Duplicate Jobs is set to Higher, then any job starting with the same
+   or lower level will be canceled, but any job with a higher level will start.
+   The levels are, from high to low: Full, Differential, Incremental.
+
+   Finally, if you have Duplicate Job Interval set to a non-zero value, any job
+   of the same name which starts more than that interval after a previous job of
+   the same name would run; any one that starts within the interval would be
+   subject to the above rules.  Another way of looking at it is that the Allow
+   Duplicate Jobs directive only applies during the interval after the
+   previous job finished (i.e. it is the minimum interval between jobs).
+
+   So in summary:
+
+   Allow Duplicate Jobs = Yes | No | HigherLevel | CancelLowerLevel  (Yes)
+
+   Where HigherLevel cancels any waiting job but not any running job.
+   Where CancelLowerLevel is the same as HigherLevel but cancels any running
+   job or waiting job.
+
+   - Duplicate Job Proximity = (0)
+
+     My suggestion was to define it as the minimum guard time between
+     executions of a specific job -- i.e., if a job was scheduled within Job
+     Proximity number of seconds, it would be considered a duplicate and
+     consolidated.
+
+     Skip = Do not allow two or more jobs with the same name to run
+     simultaneously within the proximity interval.  The second and subsequent
+     jobs are skipped without further processing (other than to note the job
+     and exit immediately), and are not considered errors.
+
+     Fail = The second and subsequent jobs that attempt to run during the
+     proximity interval are cancelled and treated as error-terminated jobs.
+
+     Promote = If a job is running, and a second/subsequent job of higher
+     level attempts to start, the running job is promoted to the higher level
+     of processing using the resources already allocated, and the subsequent
+     job is treated as in Skip above.
+
+
+DuplicateJobs {
+  Name = "xxx"
+  Description = "xxx"
+  Allow = yes|no (no = default)
+
+  AllowHigherLevel = yes|no (no)
+
+  AllowLowerLevel = yes|no (no)
+
+  AllowSameLevel = yes|no
+
+  Cancel = Running | New (no)
+
+  CancelledStatus = Fail | Skip (fail)
+
+  Job Proximity = (0)
+    My suggestion was to define it as the minimum guard time between
+    executions of a specific job -- i.e., if a job was scheduled within Job
+    Proximity number of seconds, it would be considered a duplicate and
+    consolidated.
+
+}
+
+===
+- Fix bpipe.c so that it does not modify results pointer.
+  ***FIXME*** calling sequence should be changed.
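
  Referring back to the DuplicateJobs resource sketched in the section above,
  a hypothetical filled-in instance might read as follows.  Every directive
  name comes from that sketch, the values are invented for illustration, and
  the note above already says the feature that actually shipped differs from
  this design.

    DuplicateJobs {
      Name = "NightlySave"            # the Job this policy applies to
      Description = "No duplicates of NightlySave"
      Allow = no
      AllowHigherLevel = yes          # let a Full start even while an Incremental runs
      AllowLowerLevel = no
      AllowSameLevel = no
      Cancel = New                    # cancel the newly submitted job, not the running one
      CancelledStatus = Skip          # treat the cancelled duplicate as skipped, not failed
      Job Proximity = 1 hour          # guard time between runs of the same job
    }
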