-- Make sure that all do_prompt() calls in Dir check for
- -1 (error) and -2 (cancel) returns.
-- Fix foreach_jcr() to have free_jcr() inside next().
- jcr = jcr_walk_start();
- for ( ; jcr; jcr = jcr_walk_next(jcr)) {
-    ...
- }
- jcr_walk_end(jcr);
-- A Volume taken from Scratch should take on the retention period
- of the new pool.
-- Correct doc for Maximum Changer Wait (and others) accepting only
- integers.
-- Implement status that shows why a job is being held in reserve, or
- rather why none of the drives are suitable.
-- Implement a way to disable a drive (so you can use the second
- drive of an autochanger, and the first one will not be used or
- even defined).
-- Make sure Maximum Volumes is respected in Pools when adding
- Volumes (e.g. when pulling a Scratch volume).
-- Keep same dcr when switching device ...
-- Implement code that makes the Dir aware that a drive is an
- autochanger (so the user doesn't need to use the Autochanger = yes
- directive).
-- Make catalog respect ACL.
-- Add recycle count to Media record.
-- Add initial write date to Media record.
-- Fix store_yesno to be store_bitmask.
---- create_file.c.orig Fri Jul 8 12:13:05 2005
-+++ create_file.c Fri Jul 8 12:13:07 2005
-@@ -195,6 +195,8 @@
- attr->ofname, be.strerror());
- return CF_ERROR;
- }
-+ } else if(S_ISSOCK(attr->statp.st_mode)) {
-+ Dmsg1(200, "Skipping socket: %s\n", attr->ofname);
- } else {
- Dmsg1(200, "Restore node: %s\n", attr->ofname);
- if (mknod(attr->ofname, attr->statp.st_mode, attr->statp.st_rdev) != 0 && errno != EEXIST) {
-- Add true/false to conf same as yes/no
-- Reserve blocks other restore jobs when first cannot connect to SD.
-- Fix Maximum Changer Wait, Maximum Open Wait, Maximum Rewind Wait to
- accept time qualifiers.
-- Does ClientRunAfterJob fail the job on a bad return code?
-- Make hardlink code at line 240 of find_one.c use binary search.
-- Add ACL error messages in src/filed/acl.c.
-- Make authentication failures single threaded.
-- Make Dir and SD authentication errors single threaded.
-- Fix catreq.c digestbuf at line 411 in src/dird/catreq.c
-- Make base64.c (bin_to_base64) take a buffer length
- argument to avoid overruns, and verify that other buffers
- cannot overrun.
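A length-checked variant might look like the following sketch. The real bin_to_base64() in lib/base64.c has a different signature; the name and interface here are illustrative only.

```c
#include <stddef.h>

static const char b64tab[] =
   "ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/";

/* Sketch of the idea only -- the real bin_to_base64() signature in
 * lib/base64.c differs.  Encodes bin[0..bin_len) into buf, refusing
 * to write past buf_len; returns bytes written, or -1 if the output
 * buffer is too small. */
int bin_to_base64_n(char *buf, int buf_len, const unsigned char *bin, int bin_len)
{
   int need = 4 * ((bin_len + 2) / 3) + 1;   /* encoded size plus NUL */
   if (buf_len < need) {
      return -1;                             /* would overrun: refuse */
   }
   int n = 0;
   for (int i = 0; i < bin_len; i += 3) {
      unsigned v = (unsigned)bin[i] << 16;
      if (i + 1 < bin_len) v |= (unsigned)bin[i + 1] << 8;
      if (i + 2 < bin_len) v |= bin[i + 2];
      buf[n++] = b64tab[(v >> 18) & 0x3F];
      buf[n++] = b64tab[(v >> 12) & 0x3F];
      buf[n++] = (i + 1 < bin_len) ? b64tab[(v >> 6) & 0x3F] : '=';
      buf[n++] = (i + 2 < bin_len) ? b64tab[v & 0x3F] : '=';
   }
   buf[n] = '\0';
   return n;
}
```

Callers that pass the true destination size can then never overrun, and a -1 return makes truncation detectable instead of silent.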
-- Implement VolumeState as discussed with Arno.
-- Add LocationId to update volume
-- Add LocationLog
- LogId
- Date
- User text
- MediaId
- LocationId
- NewState???
-- Add Comment to Media record
-- Fix auth compatibility with 1.38
-- Update dbcheck to include Log table
-- Update llist to include new fields.
-- Make unmount unload autochanger. Make mount load slot.
-- Fix bscan to report the JobType when restoring a job.
-- Fix wx-console scanning problem with commas in names.
-- Add manpages to the list of directories for make install. Notify
- Scott
-- Add bconsole option to use stdin/out instead of conio.
-- Fix ClientRunBefore/AfterJob compatibility.
-- Ensure that connection to daemon failure always indicates what
- daemon it was trying to connect to.
-- Freespace on DVD requested over and over even with no intervening
- writes.
-- .update volume [enabled|disabled|*see below]
- > However, I could easily imagine an option to "update slots" that says
- > "enable=yes|no" that would automatically enable or disable all the Volumes
- > found in the autochanger. This will permit the user to optionally mark all
- > the Volumes in the magazine disabled prior to taking them offsite, and mark
- > them all enabled when bringing them back on site. Coupled with the options
- > to the slots keyword, you can apply the enable/disable to any or all volumes.
-- Restricted consoles start in the Default catalog even if it
- is not permitted.
-- When reading through parts on the DVD, the DVD is mounted and
- unmounted for each part.
-- Make sure that the restore options don't permit "seeing" other
- Client's job data.
-- Restore of a raw drive should not try to check the volume size.
-- Lock tape drive door when open()
-- Make release unload any autochanger.
-- Arno's reservation deadlock.
-- Eric's SD patch
-- Make sure the new level=Full syntax is used in all
- example conf files (especially in the manual).
-- Fix prog copyright (SD) all other files.
-- Document need for UTF-8 format
-- Try turning on disk seek code.
-- Some users claim that they must do two prune commands to get a
- Volume marked as purged.
-- Document fact that CatalogACL now needed for Tray monitor (fixed).
-- If you have two Catalogs, it will take the first one.
-- Migration Volume span bug
-- Rescue release
-- Bug reports
+- Why the heck doesn't Bacula drop root privileges before connecting to
+ the DB?
+- Look at using posix_fadvise(2) for backups -- see bug #751.
+ Possibly add the code at findlib/bfile.c:795
+/* TCP socket options */
+#define TCP_KEEPIDLE 4 /* Start keepalives after this period */
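For reference, using that option might look like this. A generic sketch, not Bacula's bnet code; TCP_KEEPIDLE is Linux-specific, so the fallback define is only there to let the check compile elsewhere.

```c
#include <netinet/in.h>
#include <netinet/tcp.h>
#include <sys/socket.h>

#ifndef TCP_KEEPIDLE
#define TCP_KEEPIDLE 4     /* Linux value; start keepalives after this period */
#endif

/* Generic sketch: enable keepalives on a daemon connection and, on
 * Linux, start probing after 'idle_secs' of inactivity instead of the
 * kernel default (typically two hours). */
int set_keepalive(int fd, int idle_secs)
{
   int on = 1;
   if (setsockopt(fd, SOL_SOCKET, SO_KEEPALIVE, &on, sizeof(on)) != 0) {
      return -1;
   }
   if (setsockopt(fd, IPPROTO_TCP, TCP_KEEPIDLE, &idle_secs,
                  sizeof(idle_secs)) != 0) {
      return -1;
   }
   return 0;
}
```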
+- Fix bnet_connect() code to set a timer and to use wall-clock time
+ to measure how long it has been waiting.
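The idea could be sketched like this; everything here is a hypothetical stand-in, not the real bnet_connect(), and fake_connect merely simulates a connect attempt.

```c
#include <stdbool.h>
#include <time.h>
#include <unistd.h>

/* Hypothetical sketch, not the real bnet_connect(): bound the retry
 * loop by elapsed wall-clock time instead of a retry count, so slow
 * connect() attempts are charged against the limit too. */
typedef bool (*connect_fn)(void *ctx);

bool connect_with_deadline(connect_fn try_connect, void *ctx,
                           int max_wait_secs, unsigned retry_sleep_secs)
{
   time_t start = time(NULL);
   for (;;) {
      if (try_connect(ctx)) {
         return true;                 /* connected */
      }
      if (time(NULL) - start >= max_wait_secs) {
         return false;                /* give up: measured by the clock */
      }
      sleep(retry_sleep_secs);        /* pause before the next attempt */
   }
}

/* Stand-in for a real connect attempt: fails once, then succeeds. */
static int g_calls = 0;
static bool fake_connect(void *ctx)
{
   (void)ctx;
   return ++g_calls >= 2;
}
```

Counting elapsed seconds rather than attempts means the timeout holds even when each connect() itself blocks for a long time.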
+- Implement 4th argument to make_catalog_backup that passes hostname.
+- Test FIFO backup/restore -- make regression
+- Please mount volume "xxx" on Storage device ... should also list
+ Pool and MediaType in case user needs to create a new volume.
+- On restore add Restore Client, Original Client.
+01-Apr 00:42 rufus-dir: Start Backup JobId 55, Job=kernsave.2007-04-01_00.42.48
+01-Apr 00:42 rufus-sd: Python SD JobStart: JobId=55 Client=Rufus
+01-Apr 00:42 rufus-dir: Created new Volume "Full0001" in catalog.
+01-Apr 00:42 rufus-dir: Using Device "File"
+01-Apr 00:42 rufus-sd: kernsave.2007-04-01_00.42.48 Warning: Device "File" (/tmp) not configured to autolabel Volumes.
+01-Apr 00:42 rufus-sd: kernsave.2007-04-01_00.42.48 Warning: Device "File" (/tmp) not configured to autolabel Volumes.
+01-Apr 00:42 rufus-sd: Please mount Volume "Full0001" on Storage Device "File" (/tmp) for Job kernsave.2007-04-01_00.42.48
+01-Apr 00:44 rufus-sd: Wrote label to prelabeled Volume "Full0001" on device "File" (/tmp)
+- Check if gnome-console works with TLS.
+- The director seg faulted when I omitted the Pool directive from a
+ job resource. I was experimenting and thought it redundant to have
+ specified Pool, Full Backup Pool, and Differential Backup Pool, but
+ apparently not. This happened when I removed the Pool directive and
+ started the director.
+- Add Where: client:/.... to restore job report.
+- Ensure that moving a purged Volume in ua_purge.c to the RecyclePool
+ does the right thing.
+- FD-SD quick disconnect
+- Building the in memory restore tree is slow.
+- Abort if min_block_size > max_block_size
+- Add the ability to consolidate old backup sets (basically do a restore
+ to tape and appropriately update the catalog). Compress Volume sets.
+ Might need to spool via file if only one drive is available.
+- Why doesn't @"xxx abc" work in a conf file?
+- Don't restore Solaris Door files:
+ #define S_IFDOOR in st_mode.
+ see: http://docs.sun.com/app/docs/doc/816-5173/6mbb8ae23?a=view#indexterm-360
+- Figure out how to recycle Scratch volumes back to the Scratch Pool.
+- Implement Despooling data status.
+- Use E'xxx' to escape PostgreSQL strings.
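Sketching the idea by hand; real code should use libpq's PQescapeStringConn(), and this helper is only an illustration of what the E'...' form buys.

```c
#include <stddef.h>

/* Hand-rolled illustration only -- real code should use libpq's
 * PQescapeStringConn().  Builds a PostgreSQL E'...' literal, escaping
 * single quotes and backslashes, so backslash escapes are interpreted
 * the same way regardless of standard_conforming_strings.  'dst' must
 * hold at least 2 * strlen(src) + 4 bytes. */
size_t pg_escape(char *dst, const char *src)
{
   size_t n = 0;
   dst[n++] = 'E';
   dst[n++] = '\'';
   for (; *src; src++) {
      if (*src == '\'' || *src == '\\') {
         dst[n++] = '\\';           /* E'' strings honor backslash escapes */
      }
      dst[n++] = *src;
   }
   dst[n++] = '\'';
   dst[n] = '\0';
   return n;
}
```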
+- Look at mincore: http://insights.oetiker.ch/linux/fadvise.html
+- Unicode input http://en.wikipedia.org/wiki/Byte_Order_Mark
+- Look at moving the Storage directive from the Job to the
+ Pool in the default conf files.
+- Look at this code in src/filed/backup.c:
+> pm_strcpy(ff_pkt->fname, ff_pkt->fname_save);
+> pm_strcpy(ff_pkt->link, ff_pkt->link_save);
+- Add Catalog = to Pool resource so that pools will exist
+ in only one catalog -- currently Pools are "global".
+- Add TLS to bat (should be done).
+=== Duplicate jobs ===
+- Done, but implemented somewhat differently than described below!!!
+
+ These apply only to backup jobs.
+
+ 1. Allow Duplicate Jobs = Yes | No | Higher (Yes)
+
+ 2. Duplicate Job Interval = <time-interval> (0)
+
+ The defaults are in parentheses and would produce the same behavior as today.
+
+ If Allow Duplicate Jobs is set to No, then any job starting while a job of the
+ same name is running will be canceled.
+
+ If Allow Duplicate Jobs is set to Higher, then any job starting with the
+ same or a lower level will be canceled, but any job with a higher level
+ will start. The levels, from high to low, are: Full, Differential,
+ Incremental.
+
+ Finally, if Duplicate Job Interval is set to a non-zero value, any job of
+ the same name that starts more than <time-interval> after a previous job
+ of the same name will run; any that starts within <time-interval> is
+ subject to the above rules. Another way of looking at it is that the
+ Allow Duplicate Jobs directive only applies within <time-interval> of
+ when the previous job finished (i.e. it is the minimum interval between
+ jobs).
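As a sketch, the two directives proposed above might appear in a Job resource like this (proposed syntax only, not necessarily what was implemented):

```conf
Job {
  Name = "NightlySave"
  # cancel any second NightlySave that starts while one is running
  Allow Duplicate Jobs = no
  # the rule above is enforced only within an hour of the previous run
  Duplicate Job Interval = 1 hour
}
```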
+
+ So in summary:
+
+ Allow Duplicate Jobs = Yes | No | HigherLevel | CancelLowerLevel (Yes)
+
+ Where HigherLevel cancels any waiting job but not any running job.
+ Where CancelLowerLevel is same as HigherLevel but cancels any running job or
+ waiting job.
+
+ Duplicate Job Proximity = <time-interval> (0)
+
+ My suggestion was to define it as the minimum guard time between
+ executions of a specific job -- ie, if a job was scheduled within Job
+ Proximity number of seconds, it would be considered a duplicate and
+ consolidated.
+
+ Skip = Do not allow two or more jobs with the same name to run
+ simultaneously within the proximity interval. The second and subsequent
+ jobs are skipped without further processing (other than to note the job
+ and exit immediately), and are not considered errors.
+
+ Fail = The second and subsequent jobs that attempt to run during the
+ proximity interval are cancelled and treated as error-terminated jobs.
+
+ Promote = If a job is running, and a second/subsequent job of higher
+ level attempts to start, the running job is promoted to the higher level
+ of processing using the resources already allocated, and the subsequent
+ job is treated as in Skip above.
+
+
+DuplicateJobs {
+ Name = "xxx"
+ Description = "xxx"
+ Allow = yes|no (no = default)
+
+ AllowHigherLevel = yes|no (no)
+
+ AllowLowerLevel = yes|no (no)
+
+ AllowSameLevel = yes|no
+
+ Cancel = Running | New (no)
+
+ CancelledStatus = Fail | Skip (fail)
+
+ Job Proximity = <time-interval> (0)
+
+}
+
+===
+- Fix bpipe.c so that it does not modify results pointer.
+ ***FIXME*** calling sequence should be changed.