+- Abort if min_block_size > max_block_size
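+  A minimal sketch of the intended check, assuming it runs when the
+  configuration is parsed (variable names are hypothetical):
+
+    /* Refuse to start if the configured block sizes are inconsistent. */
+    if (min_block_size > max_block_size) {
+       fprintf(stderr, "Config error: min_block_size (%u) > max_block_size (%u)\n",
+               min_block_size, max_block_size);
+       exit(1);
+    }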
+- Add the ability to consolidate old backup sets (basically do a restore
+ to tape and appropriately update the catalog). Compress Volume sets.
+ Might need to spool via file if only one drive is available.
+- Why doesn't @"xxx abc" work in a conf file?
+- Don't restore Solaris Door files:
+ #define S_IFDOOR in st_mode.
+ see: http://docs.sun.com/app/docs/doc/816-5173/6mbb8ae23?a=view#indexterm-360
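+ A sketch of how door files could be skipped, assuming Solaris's
+ <sys/stat.h> provides the S_ISDOOR test macro (statp is illustrative):
+
+   #include <sys/stat.h>
+
+   #ifdef S_ISDOOR
+   if (S_ISDOOR(statp.st_mode)) {
+      return;      /* do not restore Solaris door files */
+   }
+   #endif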
+- Figure out how to recycle Scratch volumes back to the Scratch Pool.
+- Implement Despooling data status.
+- Use E'xxx' to escape PostgreSQL strings.
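+  A sketch of a helper that builds such a literal, doubling backslashes
+  and escaping single quotes per the E'' rules (the function name is
+  hypothetical; dest must hold at least 2*strlen(src)+4 bytes):
+
+    static void pg_escape(char *dest, const char *src)
+    {
+       *dest++ = 'E'; *dest++ = '\'';
+       for ( ; *src; src++) {
+          if (*src == '\\' || *src == '\'') {
+             *dest++ = '\\';          /* escape backslash and quote */
+          }
+          *dest++ = *src;
+       }
+       *dest++ = '\''; *dest = 0;
+    }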
+- Look at mincore: http://insights.oetiker.ch/linux/fadvise.html
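+  The linked article uses posix_fadvise() to keep sequential backup reads
+  from polluting the page cache; a minimal sketch (fd is an open file
+  descriptor that has just been read to the end):
+
+    #include <fcntl.h>
+
+    /* Tell the kernel the file's cached pages are no longer needed;
+     * len == 0 means "to the end of the file". */
+    posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);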
+- Unicode input http://en.wikipedia.org/wiki/Byte_Order_Mark
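+  A sketch of stripping a UTF-8 byte order mark (EF BB BF) from input
+  before parsing (buf/len are illustrative):
+
+    if (len >= 3 && (unsigned char)buf[0] == 0xEF &&
+                    (unsigned char)buf[1] == 0xBB &&
+                    (unsigned char)buf[2] == 0xBF) {
+       buf += 3;                      /* skip the BOM */
+       len -= 3;
+    }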
+- Look at moving the Storage directive from the Job to the
+ Pool in the default conf files.
+- Look at the following in src/filed/backup.c:
+> pm_strcpy(ff_pkt->fname, ff_pkt->fname_save);
+> pm_strcpy(ff_pkt->link, ff_pkt->link_save);
+- Add Catalog = to Pool resource so that pools will exist
+ in only one catalog -- currently Pools are "global".
+- Add TLS to bat (should be done).
+=== Duplicate jobs ===
+- Done, but implemented somewhat differently than described below!!!
+
+ These apply only to backup jobs.
+
+ 1. Allow Duplicate Jobs = Yes | No | Higher (Yes)
+
+ 2. Duplicate Job Interval = <time-interval> (0)
+
+ The defaults are in parentheses and would produce the same behavior as today.
+
+ If Allow Duplicate Jobs is set to No, then any job starting while a job of the
+ same name is running will be canceled.
+
+ If Allow Duplicate Jobs is set to Higher, then any job starting with the same
+ or lower level will be canceled, but any job with a higher level will start.
+ The levels are, from high to low: Full, Differential, Incremental.
+
+ Finally, if you have Duplicate Job Interval set to a non-zero value, any job
+ of the same name which starts more than <time-interval> after a previous job
+ of the same name will run; any that starts within <time-interval> is subject
+ to the above rules. Another way of looking at it is that the Allow Duplicate
+ Jobs directive only applies until <time-interval> has elapsed since the
+ previous job finished (i.e. it is the minimum interval between jobs).
+
+ So in summary:
+
+ Allow Duplicate Jobs = Yes | No | HigherLevel | CancelLowerLevel (Yes)
+
+ Where HigherLevel cancels any waiting job but not any running job.
+ Where CancelLowerLevel is the same as HigherLevel but cancels any running
+ or waiting job.
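+
+ A sketch of the decision logic this summary implies (the enum and
+ function names are illustrative, not Bacula's actual internals):
+
+   typedef enum { ADJ_YES, ADJ_NO, ADJ_HIGHER_LEVEL,
+                  ADJ_CANCEL_LOWER_LEVEL } adj_t;
+   /* Ordered so that Full > Differential > Incremental. */
+   typedef enum { L_INCREMENTAL, L_DIFFERENTIAL, L_FULL } level_t;
+
+   /* Returns 1 if the new job may proceed (canceling the duplicate),
+    * 0 if the new job itself should be canceled. */
+   static int allow_duplicate(adj_t policy, level_t new_lvl,
+                              level_t dup_lvl, int dup_is_waiting)
+   {
+      switch (policy) {
+      case ADJ_YES:
+         return 1;                               /* duplicates allowed */
+      case ADJ_NO:
+         return 0;                               /* cancel the new job */
+      case ADJ_HIGHER_LEVEL:
+         /* Cancels a waiting duplicate only, never a running one. */
+         return new_lvl > dup_lvl && dup_is_waiting;
+      case ADJ_CANCEL_LOWER_LEVEL:
+         /* As above, but a running lower-level job is canceled too. */
+         return new_lvl > dup_lvl;
+      }
+      return 0;
+   }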
+
+ Duplicate Job Proximity = <time-interval> (0)
+
+ My suggestion was to define it as the minimum guard time between
+ executions of a specific job -- ie, if a job was scheduled within Job
+ Proximity number of seconds, it would be considered a duplicate and
+ consolidated.
+
+ Skip = Do not allow two or more jobs with the same name to run
+ simultaneously within the proximity interval. The second and subsequent
+ jobs are skipped without further processing (other than to note the job
+ and exit immediately), and are not considered errors.
+
+ Fail = The second and subsequent jobs that attempt to run during the
+ proximity interval are canceled and treated as error-terminated jobs.
+
+ Promote = If a job is running, and a second/subsequent job of higher
+ level attempts to start, the running job is promoted to the higher level
+ of processing using the resources already allocated, and the subsequent
+ job is treated as in Skip above.
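+
+ A sketch of dispatching the three policies when a duplicate arrives
+ within the proximity interval (types and names are illustrative):
+
+   typedef enum { PROX_SKIP, PROX_FAIL, PROX_PROMOTE } prox_t;
+   typedef enum { ST_OK, ST_ERROR } job_status_t;
+
+   /* new_lvl/run_lvl use the Full > Differential > Incremental ordering
+    * above; *run_lvl may be raised in place when promoting. */
+   static job_status_t handle_duplicate(prox_t policy,
+                                        int new_lvl, int *run_lvl)
+   {
+      switch (policy) {
+      case PROX_SKIP:
+         return ST_OK;                /* note the job and exit; no error */
+      case PROX_FAIL:
+         return ST_ERROR;             /* canceled, error-terminated */
+      case PROX_PROMOTE:
+         if (new_lvl > *run_lvl) {
+            *run_lvl = new_lvl;       /* promote the running job */
+         }
+         return ST_OK;                /* then handled as in Skip */
+      }
+      return ST_ERROR;
+   }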
+
+
+DuplicateJobs {
+ Name = "xxx"
+ Description = "xxx"
+ Allow = yes|no (no = default)
+
+ AllowHigherLevel = yes|no (no)
+
+ AllowLowerLevel = yes|no (no)
+
+ AllowSameLevel = yes|no
+
+ Cancel = Running | New (no)
+
+ CancelledStatus = Fail | Skip (fail)
+
+ Job Proximity = <time-interval> (0)
+    (Defined as above: the minimum guard time between executions of a
+    specific job.)
+
+}
+
+===
+- Fix bpipe.c so that it does not modify the results pointer.
+ ***FIXME*** The calling sequence should be changed.
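+  One possible shape for the changed calling sequence, sketched with
+  generic types rather than Bacula's actual bpipe API: the callee
+  allocates the output buffer and returns it via an out-parameter, so
+  the caller's own results pointer is never modified (caller frees *out):
+
+    #include <stdio.h>
+    #include <stdlib.h>
+    #include <string.h>
+
+    static int run_and_capture(const char *cmd, char **out)
+    {
+       char buf[1024];
+       size_t len = 0;
+       FILE *fp = popen(cmd, "r");
+       *out = NULL;
+       if (!fp) return -1;
+       while (fgets(buf, sizeof(buf), fp)) {
+          size_t n = strlen(buf);
+          char *p = realloc(*out, len + n + 1);
+          if (!p) { free(*out); *out = NULL; pclose(fp); return -1; }
+          *out = p;
+          memcpy(*out + len, buf, n + 1);   /* includes the NUL */
+          len += n;
+       }
+       return pclose(fp);
+    }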