+ 1. Allow Duplicate Jobs = Yes | No | Higher (Yes)
+
+ 2. Duplicate Job Interval = <time-interval> (0)
+
+ The defaults are shown in parentheses and produce the same behavior as today.
+
+ If Allow Duplicate Jobs is set to No, then any job starting while a job of the
+ same name is running will be canceled.
+
+ If Allow Duplicate Jobs is set to Higher, then any job starting with the same
+ or a lower level will be canceled, but any job with a higher level will start.
+ The levels, from highest to lowest, are: Full, Differential, Incremental.
+
+ Finally, if Duplicate Job Interval is set to a non-zero value, any job of
+ the same name that starts more than <time-interval> after a previous job of
+ the same name will run; any that starts within <time-interval> is subject
+ to the above rules. Another way of looking at it is that the Allow
+ Duplicate Jobs directive applies only within <time-interval> of when the
+ previous job finished (i.e. it is the minimum interval between jobs).
+
+ So in summary:
+
+ Allow Duplicate Jobs = Yes | No | HigherLevel | CancelLowerLevel (Yes)
+
+ Where HigherLevel cancels any waiting job but not any running job.
+ Where CancelLowerLevel is the same as HigherLevel but also cancels any
+ running or waiting job.
+
+ Duplicate Job Proximity = <time-interval> (0)
+
+ Skip = Do not allow two or more jobs with the same name to run
+ simultaneously within the proximity interval. The second and subsequent
+ jobs are skipped without further processing (other than to note the job
+ and exit immediately), and are not considered errors.
+
+ Fail = The second and subsequent jobs that attempt to run during the
+ proximity interval are canceled and treated as error-terminated jobs.
+
+ Promote = If a job is running, and a second/subsequent job of higher
+ level attempts to start, the running job is promoted to the higher level
+ of processing using the resources already allocated, and the subsequent
+ job is treated as in Skip above.
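The Skip/Fail/Promote rules above could be sketched like this. This is a minimal illustration only; the function name, the level table, and the return codes are assumptions for the sketch, not Bacula code or configuration.

```python
# Illustrative sketch of the duplicate-job rules (Skip / Fail / Promote)
# within the proximity interval. LEVELS and resolve_duplicate() are
# hypothetical names, not part of Bacula.

LEVELS = {"Incremental": 0, "Differential": 1, "Full": 2}

def resolve_duplicate(policy, running_level, new_level,
                      seconds_since_prev, proximity):
    """Decide what happens to a new job duplicating a running one."""
    if seconds_since_prev >= proximity:
        return "run"                      # outside the interval: no conflict
    if policy == "Skip":
        return "skip"                     # note the job and exit; not an error
    if policy == "Fail":
        return "fail"                     # treat as an error-terminated job
    if policy == "Promote":
        if LEVELS[new_level] > LEVELS[running_level]:
            return "promote-running"      # upgrade the running job's level
        return "skip"                     # lower/equal level: as in Skip
    raise ValueError("unknown policy: %s" % policy)
```

For example, an Incremental arriving while a Full runs under Skip is skipped, while a Full arriving during an Incremental under Promote upgrades the running job.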
+===
+- the cd command should allow complete paths,
+  i.e. cd /foo/bar/foo/bar
+  -> if a customer mails me the path to a certain file,
+     it is faster to enter the specified directory
+- Fix bpipe.c so that it does not modify results pointer.
+ ***FIXME*** calling sequence should be changed.
+- Make tree walk routines like cd, ls, ... more user friendly
+ by handling spaces better.
+=== rate design
+  jcr->last_rate
+  jcr->last_runtime
+  rate = (bytes - last_bytes) / (runtime - last_runtime)
+  MA = (last_MA * 3 + rate) / 4
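The rate design above can be expressed as a small function: compute the instantaneous rate since the last sample, then fold it into a weighted moving average (3 parts old, 1 part new). The parameter names mirror the jcr fields mentioned (last_rate, last_runtime), but the function itself is an illustrative sketch, not Bacula code.

```python
# Sketch of the smoothed-rate computation from the "rate design" notes.
# update_rate() is a hypothetical helper, not a Bacula function.

def update_rate(last_ma, last_bytes, last_runtime, bytes_done, runtime):
    """Return (instantaneous rate, updated moving average)."""
    rate = (bytes_done - last_bytes) / (runtime - last_runtime)
    ma = (last_ma * 3 + rate) / 4     # weighted: 3 parts old, 1 part new
    return rate, ma
```

The 3:1 weighting damps short spikes in throughput while still tracking sustained changes within a few samples.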
+- Add a recursive mark command (rmark) to restore.
+- "Minimum Job Interval = nnn" sets minimum interval between Jobs
+ of the same level and does not permit multiple simultaneous
+ running of that Job (i.e. lets any previous invocation finish
+ before doing Interval testing).
+- Look at simplifying File exclusions.
+- New directive "Delete purged Volumes"
+- It appears to me that you have run into some sort of race
+ condition where two threads want to use the same Volume and they
+ were both given access. Normally that is no problem. However,
+ one thread wanted the particular Volume in drive 0, but it was
+ loaded into drive 1 so it decided to unload it from drive 1 and
+ then loaded it into drive 0, while the second thread went on
+ thinking that the Volume could be used in drive 1 not realizing
+ that in between time, it was loaded in drive 0.
+ I'll look at the code to see if there is some way we can avoid
+ this kind of problem. Probably the best solution is to make the
+ first thread simply start using the Volume in drive 1 rather than
+ transferring it to drive 0.
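The fix proposed above (use the Volume in whichever drive already holds it, under a lock so two threads cannot both claim it) could be sketched as follows. All names here (reserve_volume, the drives map) are illustrative assumptions, not Bacula's actual reservation code.

```python
# Sketch of the proposed fix for the two-thread / two-drive race:
# if the wanted Volume is already loaded somewhere, use that drive
# instead of unloading and moving it. A lock serializes the
# check-and-reserve step. Hypothetical names, not Bacula code.

import threading

_reservation_lock = threading.Lock()

def reserve_volume(volume, preferred_drive, drives):
    """drives maps drive index -> currently loaded volume (or None)."""
    with _reservation_lock:
        for idx, loaded in drives.items():
            if loaded == volume:
                return idx                    # use the Volume where it is
        drives[preferred_drive] = volume      # otherwise load it as requested
        return preferred_drive
```

With this rule, the first thread in the scenario above would simply start using the Volume in drive 1 rather than transferring it to drive 0.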
+- Complete Catalog in Pool
+- Implement Bacula plugins -- design API
+- Scripts
+- Prune by Job
+- Prune by Job Level
+- True automatic pruning
+- Duplicate Jobs
+ Run, Fail, Skip, Higher, Promote, CancelLowerLevel
+ Proximity
+ New directive.
+- Auto update of slot:
+ rufus-dir: ua_run.c:456-10 JobId=10 NewJobId=10 using pool Full priority=10
+ 02-Nov 12:58 rufus-dir JobId 10: Start Backup JobId 10, Job=kernsave.2007-11-02_12.58.03
+ 02-Nov 12:58 rufus-dir JobId 10: Using Device "DDS-4"
+ 02-Nov 12:58 rufus-sd JobId 10: Invalid slot=0 defined in catalog for Volume "Vol001" on "DDS-4" (/dev/nst0). Manual load my be required.
+ 02-Nov 12:58 rufus-sd JobId 10: 3301 Issuing autochanger "loaded? drive 0" command.
+ 02-Nov 12:58 rufus-sd JobId 10: 3302 Autochanger "loaded? drive 0", result is Slot 2.
+ 02-Nov 12:58 rufus-sd JobId 10: Wrote label to prelabeled Volume "Vol001" on device "DDS-4" (/dev/nst0)
+ 02-Nov 12:58 rufus-sd JobId 10: Alert: TapeAlert[7]: Media Life: The tape has reached the end of its useful life.
+ 02-Nov 12:58 rufus-dir JobId 10: Bacula rufus-dir 2.3.6 (26Oct07): 02-Nov-2007 12:58:51
+- Eliminate: /var is a different filesystem. Will not descend from / into /var
+- Separate Files and Directories in catalog
+- Create FileVersions table
+- Look at rsync for incremental updates and deduplication
+- Add MD5 or SHA1 check in SD for data validation
+- modify pruning to keep a fixed number of versions of a file,
+ if requested.
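The version-based pruning item above amounts to a selection rule: keep the newest N versions of each file and prune the rest. In Bacula this would be a catalog query; the in-memory sketch below only illustrates the rule, and prune_candidates is a hypothetical name.

```python
# Sketch of "keep a fixed number of versions of a file" pruning:
# for each filename, keep the newest `keep` versions and return the
# rest as prune candidates. Illustrative only, not Bacula code.

from collections import defaultdict

def prune_candidates(file_versions, keep=3):
    """file_versions: list of (filename, job_time) tuples."""
    by_name = defaultdict(list)
    for name, jtime in file_versions:
        by_name[name].append(jtime)
    pruned = []
    for name, times in by_name.items():
        times.sort(reverse=True)              # newest first
        pruned += [(name, t) for t in times[keep:]]
    return pruned
```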
+- finish implementation of fdcalled -- see ua_run.c:105
+- Fix problem in postgresql.c in my_postgresql_query, where the
+ generation of the error message doesn't differentiate result==NULL
+ and a bad status from that result. Not only that, the result is
+ cleared on a bail_out without having generated the error message.
+- KIWI
+- Implement SDErrors (must return from SD)
+- Implement USB keyboard support in rescue CD.
+- Implement continue spooling while despooling.
+- Remove all install temp files in Win32 PLUGINSDIR.
+- Audit retention periods to make sure everything is 64 bit.
+- Use E'xxx' to escape PostgreSQL strings.
+- Specifying no "where" in restore causes kaboom (a crash).
+- Performance: multiple spool files for a single job.
+- Performance: despool attributes when despooling data (problem
+ multiplexing Dir connection).
+- Make restore use the in-use volume reservation algorithm.
+- Look at mincore: http://insights.oetiker.ch/linux/fadvise.html
+- Unicode input http://en.wikipedia.org/wiki/Byte_Order_Mark
+- Add TLS to bat (should be done).
+- When Pool specifies Storage command override does not work.
+- Implement wait_for_sysop() message display in wait_for_device(), which
+ now prints warnings too often.
+- Ensure that each device in an Autochanger has a different
+ Device Index.
+- Add Catalog = to Pool resource so that pools will exist
+ in only one catalog -- currently Pools are "global".
+- Look at sg_logs -a /dev/sg0 for getting soft errors.
+- btape "test" command with Offline on Unmount = yes
+
+ This test is essential to Bacula.
+
+ I'm going to write one record in file 0,
+ two records in file 1,
+ and three records in file 2
+
+ 02-Feb 11:00 btape: ABORTING due to ERROR in dev.c:715
+ dev.c:714 Bad call to rewind. Device "LTO" (/dev/nst0) not open
+ 02-Feb 11:00 btape: Fatal Error because: Bacula interrupted by signal 11: Segmentation violation
+ Kaboom! btape, btape got signal 11. Attempting traceback.
+
+- Encryption -- email from Landon
+ > The backup encryption algorithm is currently not configurable, and is
+ > set to AES_128_CBC in src/filed/backup.c. The encryption code
+ > supports a number of different ciphers (as well as adding arbitrary
+ > new ones) -- only a small bit of code would be required to map a
+ > configuration string value to a CRYPTO_CIPHER_* value, if anyone is
+ > interested in implementing this functionality.
+
+- Figure out some way to "automatically" backup conf changes.
+- Add the OS version back to the Win32 client info.
+- Restarted jobs have a NULL in the from field.
+- Modify SD status command to indicate when the SD is writing
+ to a DVD (the device is not open -- see bug #732).
+- Look at the possibility of adding "SET NAMES UTF8" for MySQL,
+ and possibly changing the blobs into varchar.
+- Ensure that the SD re-reads the Media record if the JobFiles
+ does not match -- it may have been updated by another job.
+- Look at moving the Storage directive from the Job to the
+ Pool in the default conf files.
+- Doc items
+- Test Volume compatibility between machine architectures
+- Encryption documentation
+- Wrong jobbytes with query 12 (todo)
+- Bare-metal recovery Windows (todo)
+
+
+Projects:
+- Pool enhancements
+ - Access Mode = Read-Only, Read-Write, Unavailable, Destroyed, Offsite
+ - Pool Type = Copy
+ - Maximum number of scratch volumes
+ - Maximum File size
+ - Next Pool (already have)
+ - Reclamation threshold
+ - Reclamation Pool
+  - Reuse delay (time after all files are purged from a volume before it can be reused)
+ - Copy Pool = xx, yyy (or multiple lines).
+ - Catalog = xxx
+ - Allow pool selection during restore.
+
+- Average tape size from Eric
+  SELECT COALESCE(media_avg_size.volavg,0) * count(Media.MediaId) AS volmax,
+ count(Media.MediaId) AS volnum,
+ sum(Media.VolBytes) AS voltotal,
+ Media.PoolId AS PoolId,
+ Media.MediaType AS MediaType
+ FROM Media
+ LEFT JOIN (SELECT avg(Media.VolBytes) AS volavg,
+ Media.MediaType AS MediaType
+ FROM Media
+ WHERE Media.VolStatus = 'Full'
+ GROUP BY Media.MediaType
+ ) AS media_avg_size ON (Media.MediaType = media_avg_size.MediaType)
+ GROUP BY Media.MediaType, Media.PoolId, media_avg_size.volavg
+- GUI
+ - Admin
+ - Management reports
+ - Add doc for bweb -- especially Installation
+ - Look at Webmin
+ http://www.orangecrate.com/modules.php?name=News&file=article&sid=501
+- Performance
+ - Despool attributes in separate thread
+ - Database speedups
+ - Embedded MySQL
+ - Check why restore repeatedly sends Rechdrs between
+ each data chunk -- according to James Harper 9Jan07.
+- Features
+ - Better scheduling
+ - Full at least once a month, ...
+ - Cancel Inc if Diff/Full running
+ - More intelligent re-run
+ - New/deleted file backup
+ - FD plugins
+ - Incremental backup -- rsync, Stow
+
+
+For next release:
+- Try to fix bscan not working with multiple DVD volumes bug #912.
+- Look at mondo/mindi
+- Make Bacula by default not backup tmpfs, procfs, sysfs, ...
+- Fix hardlinked immutable files: when linking a second file, the
+  immutable flag must be removed prior to trying to link it.