+- Why doesn't @"xxx abc" work in a conf file?
+- Figure out some way to "automatically" backup conf changes.
+- Look at using posix_fadvise(2) for backups -- see bug #751.
+ Possibly add the code at findlib/bfile.c:795
+- Add the OS version back to the Win32 client info.
+- Restarted jobs have a NULL in the from field.
+- Modify SD status command to indicate when the SD is writing
+ to a DVD (the device is not open -- see bug #732).
+- Look at the possibility of adding "SET NAMES UTF8" for MySQL,
+ and possibly changing the blobs into varchar.
+- Check if gnome-console works with TLS.
+- Ensure that the SD re-reads the Media record if the JobFiles
+ does not match -- it may have been updated by another job.
+- Look at moving the Storage directive from the Job to the
+ Pool in the default conf files.
+- Test FIFO backup/restore -- make regression
+- Doc items
+- Test Volume compatibility between machine architectures
+- Encryption documentation
+- Wrong jobbytes with query 12 (todo)
+- bacula-1.38.2-ssl.patch
+- Bare-metal recovery Windows (todo)
+
+
+Projects:
+- GUI
+ - Admin
+ - Management reports
+ - Add doc for bweb -- especially Installation
+ - Look at Webmin
+ http://www.orangecrate.com/modules.php?name=News&file=article&sid=501
+- Performance
+ - FD-SD quick disconnect
+ - Despool attributes in separate thread
+ - Database speedups
+ - Embedded MySQL
+ - Check why restore repeatedly sends Rechdrs between
+ each data chunk -- according to James Harper 9Jan07.
+    - Building the in-memory restore tree is slow.
+- Features
+ - Better scheduling
+ - Full at least once a month, ...
+ - Cancel Inc if Diff/Full running
+ - More intelligent re-run
+ - New/deleted file backup
+ - FD plugins
+ - Incremental backup -- rsync, Stow
+
+
+For next release:
+- Look at mondo/mindi
+- Don't restore Solaris Door files:
+ #define S_IFDOOR in st_mode.
+ see: http://docs.sun.com/app/docs/doc/816-5173/6mbb8ae23?a=view#indexterm-360
+- Make Bacula by default not backup tmpfs, procfs, sysfs, ...
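One way to recognize such filesystems on Linux (sketch; the magic numbers come from <linux/magic.h>, and is_pseudo_fs() is an illustrative name, not existing Bacula code):

```c
/* Linux-only sketch: decide whether a path lives on a pseudo-filesystem
 * (tmpfs, procfs, sysfs) that should be excluded from backup by default.
 * Returns 1 for a pseudo-filesystem, 0 for a real one, -1 on error. */
#include <sys/vfs.h>
#include <linux/magic.h>

static int is_pseudo_fs(const char *path)
{
   struct statfs st;
   if (statfs(path, &st) != 0) {
      return -1;
   }
   switch (st.f_type) {
   case TMPFS_MAGIC:
   case PROC_SUPER_MAGIC:
   case SYSFS_MAGIC:
      return 1;
   default:
      return 0;
   }
}
```

The list of magic numbers would need to grow (devpts, debugfs, ...); this only shows the mechanism.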
+- Fix hardlinked immutable files: when linking a second file, the
+  immutable flag must be removed prior to trying to link it.
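On Linux this would go through the chattr-style file flags (sketch; link_immutable() is an illustrative name, and actually clearing FS_IMMUTABLE_FL requires CAP_LINUX_IMMUTABLE):

```c
/* Sketch: before making a second hard link to an immutable file, clear
 * FS_IMMUTABLE_FL via FS_IOC_GETFLAGS/FS_IOC_SETFLAGS, make the link,
 * then restore the original flags.  Returns 0 on success, -1 on error. */
#include <sys/ioctl.h>
#include <linux/fs.h>
#include <fcntl.h>
#include <unistd.h>

static int link_immutable(const char *oldpath, const char *newpath)
{
   int flags = 0, ret;
   int fd = open(oldpath, O_RDONLY);
   if (fd < 0) {
      return -1;
   }
   /* Clear the immutable flag if it is set (needs CAP_LINUX_IMMUTABLE).
    * On filesystems without flag support the ioctl fails and we just
    * attempt the link as-is. */
   if (ioctl(fd, FS_IOC_GETFLAGS, &flags) == 0 && (flags & FS_IMMUTABLE_FL)) {
      int tmp = flags & ~FS_IMMUTABLE_FL;
      ioctl(fd, FS_IOC_SETFLAGS, &tmp);
   }
   ret = link(oldpath, newpath);
   if (flags & FS_IMMUTABLE_FL) {
      ioctl(fd, FS_IOC_SETFLAGS, &flags);   /* restore original flags */
   }
   close(fd);
   return ret;
}
```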
+- Implement Python event for backing up/restoring a file.
+- Change dbcheck to tell users to use native tools for fixing
+ broken databases, and to ensure they have the proper indexes.
+- Add udev rules for Bacula devices.
+- If a job terminates, the DIR connection can close before the
+ Volume info is updated, leaving the File count wrong.
+- Look at why SIGPIPE during connection can cause a seg fault in
+  writing the daemon message when the Dir has dropped to bacula:bacula.
+- Look at zlib 32 => 64 problems.
+- Possibly turn on St. Bernard code.
+- Fix bextract to restore ACLs, or better yet, use common routines.
+- Do we migrate appendable Volumes?
+- Remove queue.c code.
+- Print warning message if LANG environment variable does not specify
+ UTF-8.
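The check itself is small (sketch; locale_is_utf8() is an illustrative name): after initializing the locale from the environment, inspect the effective codeset with nl_langinfo(3).

```c
/* Sketch of the LANG check: initialize the locale from the environment
 * and report whether the effective codeset is UTF-8.  The caller would
 * print a warning when this returns 0. */
#include <locale.h>
#include <langinfo.h>
#include <string.h>

static int locale_is_utf8(void)
{
   const char *cs;
   setlocale(LC_ALL, "");           /* honor LANG / LC_* */
   cs = nl_langinfo(CODESET);
   return cs && (strcmp(cs, "UTF-8") == 0 || strcmp(cs, "UTF8") == 0);
}
```

Usage would be a one-liner at daemon startup: if (!locale_is_utf8()) print the warning.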
+- New dot commands from Arno.
+ .show device=xxx lists information from one storage device, including
+ devices (I'm not even sure that information exists in the DIR...)
+ .move eject device=xxx mostly the same as 'unmount xxx' but perhaps with
+ better machine-readable output like "Ok" or "Error busy"
+ .move eject device=xxx toslot=yyy the same as above, but with a new
+ target slot. The catalog should be updated accordingly.
+ .move transfer device=xxx fromslot=yyy toslot=zzz
+
+Low priority:
+- Article: http://www.heise.de/open/news/meldung/83231
+- Article: http://www.golem.de/0701/49756.html
+- Article: http://lwn.net/Articles/209809/
+- Article: http://www.onlamp.com/pub/a/onlamp/2004/01/09/bacula.html
+- Article: http://www.linuxdevcenter.com/pub/a/linux/2005/04/07/bacula.html
+- Article: http://www.osreviews.net/reviews/admin/bacula
+- Article: http://www.debianhelp.co.uk/baculaweb.htm
+- It appears to me that you have run into some sort of race
+  condition where two threads want to use the same Volume and they
+  were both given access. Normally that is no problem. However,
+  one thread wanted the particular Volume in drive 0, but it was
+  loaded into drive 1, so it decided to unload it from drive 1 and
+  then loaded it into drive 0, while the second thread went on
+  thinking that the Volume could be used in drive 1, not realizing
+  that in the meantime it had been moved to drive 0.
+  I'll look at the code to see if there is some way we can avoid
+  this kind of problem. Probably the best solution is to make the
+  first thread simply start using the Volume in drive 1 rather than
+  transferring it to drive 0.
+- Fix re-read of last block to check if job has actually written
+ a block, and check if block was written by a different job
+ (i.e. multiple simultaneous jobs writing).
+- Figure out how to configure query.sql. Suggestion to use m4:
+ == changequote.m4 ===
+ changequote(`[',`]')dnl
+ ==== query.sql.in ===
+ :List next 20 volumes to expire
+ SELECT
+ Pool.Name AS PoolName,
+ Media.VolumeName,
+ Media.VolStatus,
+ Media.MediaType,
+ ifdef([MySQL],
+   [ FROM_UNIXTIME(UNIX_TIMESTAMP(Media.LastWritten) + Media.VolRetention) AS Expire, ])dnl
+ ifdef([PostgreSQL],
+ [ media.lastwritten + interval '1 second' * media.volretention as expire, ])dnl
+ Media.LastWritten
+ FROM Pool
+ LEFT JOIN Media
+ ON Media.PoolId=Pool.PoolId
+ WHERE Media.LastWritten>0
+ ORDER BY Expire
+ LIMIT 20;
+ ====
+  Command: m4 -DMySQL changequote.m4 query.sql.in >query.sql
+
+ The problem is that it requires m4, which is not present on all machines
+ at ./configure time.
+- Given all the problems with FIFOs, I think the solution is to do something a
+ little different, though I will look at the code and see if there is not some
+ simple solution (i.e. some bug that was introduced). What might be a better
+ solution would be to use a FIFO as a sort of "key" to tell Bacula to read and
+ write data to a program rather than the FIFO. For example, suppose you
+ create a FIFO named:
+
+ /home/kern/my-fifo
+
+  Then, instead of backing up and restoring this file with a direct
+  reference as is currently done for FIFOs, during backup Bacula
+  would execute:
+
+ /home/kern/my-fifo.backup
+
+ and read the data that my-fifo.backup writes to stdout. For restore, Bacula
+ will execute:
+
+ /home/kern/my-fifo.restore
+
+  and write the backed-up data to its stdin. These programs can either
+  be an executable or a shell script and need only read/write
+  stdin/stdout.
+
+ I think this would give a lot of flexibility to the user without making any
+ significant changes to Bacula.
+
+
+==== SQL
+# get null file
+select FilenameId from Filename where Name='';
+# Get list of all directories referenced in a Backup.
+select Path.Path from Path,File where File.JobId=nnn and
+ File.FilenameId=(FilenameId-from-above) and File.PathId=Path.PathId
+ order by Path.Path ASC;
+
+- Look into using Dart for testing
+ http://public.kitware.com/Dart/HTML/Index.shtml
+
+- Look into replacing autotools with cmake
+ http://www.cmake.org/HTML/Index.html
+
+=== Migration from David ===
+What I'd like to see:
+
+Job {
+ Name = "<poolname>-migrate"
+ Type = Migrate
+ Messages = Standard
+ Pool = Default
+  Migration Selection Type = LowestUtil | OldestVol | PoolOccupancy |
+      Client | PoolResidence | Volume | JobName | SQLquery
+ Migration Selection Pattern = "regexp"
+ Next Pool = <override>
+}
+
+There should be no need for a Level (migration is always Full, since you
+don't calculate differential/incremental differences for migration),
+Storage should be determined by the volume types in the pool, and Client
+is really a selection issue. Migration should always occur to the
+NextPool defined in the pool definition. If no nextpool is defined, the
+job should end with a reason of "no place to go". If a Next Pool
+statement is present, we override the check in the pool definition
+and use the pool specified.
+
+Here's how I'd define Migration Selection Types:
+
+With Regexes:
+Client -- Migrate data from selected client only. Migration Selection
+Pattern regexp provides pattern to select client names, e.g. ^FS00* makes
+all client names starting with FS00 eligible for migration.
+
+Jobname -- Migrate all jobs matching name. Migration Selection Pattern
+regexp provides pattern to select jobnames existing in pool.
+
+Volume -- Migrate all data on specified volumes. Migration Selection
+Pattern regexp provides selection criteria for volumes to be migrated.
+Volumes must exist in pool to be eligible for migration.
+
+
+With Regex optional:
+LowestUtil -- Identify the volume in the pool with the least data on it
+and empty it. No Migration Selection Pattern required.
+
+OldestVol -- Identify the LRU volume with data written, and empty it. No
+Migration Selection Pattern required.
+
+PoolOccupancy -- if pool occupancy exceeds <highmig>, migrate volumes
+(starting with most full volumes) until pool occupancy drops below
+<lowmig>. Pool highmig and lowmig values are in pool definition, no
+Migration Selection Pattern required.