Kern's ToDo List
- 27 November 2002
+ 28 April 2003
-Documentation to do: (a little bit at a time)
+Documentation to do: (any release a little bit at a time)
- Document running a test version.
-- Make sure restore options are documented
- Document query file format.
+- Document static linking
+- Document problems with Verify and pruning.
+- Document how to use multiple databases.
+- Add a section to the doc on Manual cycling of Volumes.
+
Testing to do: (painful)
-- that restore options work in FD.
-- that mod of restore options works.
-- that console command line options work
+- that ALL console command line options work and are always implemented
- blocksize recognition code.
+- multiple simultaneous Volumes
+
+For 1.30a release:
+- Examine Bare Metal restore problem.
+- Test multiple simultaneous Volumes
+- Document FInclude ...
+
+- Figure out how to use ssh or stunnel to protect Bacula communications.
+
+After 1.30:
+- Fix command prompt in gnome-console by checking on Ready.
+- Implement HEART_BEAT while SD waiting for tapes.
+- Include RunBeforeJob and RunAfterJob output in the message
+ stream.
+- Check if Job/File retentions apply to multivolume jobs.
+- Change M_INFO to M_RESTORED for all restored files.
+- Remove subsysdir from conf files (used only in autostart scripts).
+- Implement console @echo command.
+- Implement global with DB name and add to btraceback.gdb
+- Bug: fix access problems on files restored on WinXP.
+- Implement a Slot priority (loaded/not loaded).
+- Implement "vacation" Incremental only saves.
+- Implement single pane restore (much like the Gftp panes).
+- Implement Automatic Mount even in operator wait.
+- Implement create "FileSet"?
+- Implement Release Device in the Job resource to unmount a drive.
+- Implement Acquire Device in the Job resource to mount a drive,
+ be sure this works with admin jobs so that the user can get
+  prompted to insert the correct tape. Possibly provide some way
+  to run the job without saving the files.
+- Implement all command line args on run.
+- Implement command line "restore" args.
+- Implement "restore current select=no"
+- Fix watchdog pthread crash on Win32
+- Fix "access not allowed" for backup of files on WinXP.
+- Implement "scratch pool" where tapes are defined and can be
+ taken by any pool that needs them.
+- Implement restore "current system", but take all files without
+ doing selection tree -- so that jobs without File records can
+ be restored.
+- Make | and < work on FD side.
+- Pass prefix_links to FD.
+- Implement a M_SECURITY message class.
+- Implement disk spooling. Two parts: 1. Spool to disk, then
+  immediately to tape, to speed up tape operations. 2. Spool to
+  disk only when the tape is full; then, once a new tape is
+  mounted, move the spooled data to tape.
+- From Phil Stracchino:
+ It would probably be a per-client option, and would be called
+ something like, say, "Automatically purge obsoleted jobs". What it
+ would do is, when you successfully complete a Differential backup of a
+ client, it would automatically purge all Incremental backups for that
+ client that are rendered redundant by that Differential. Likewise,
+ when a Full backup on a client completed, it would automatically purge
+ all Differential and Incremental jobs obsoleted by that Full backup.
+ This would let people minimize the number of tapes they're keeping on
+ hand without having to master the art of retention times.
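Phil's purge rule amounts to a single pass over the job history, newest first. A minimal C sketch (hypothetical types and names, not Bacula code; a real implementation would work from catalog records):

```c
#include <assert.h>

/* Hypothetical job record: level is 'F' (Full), 'D' (Differential),
 * or 'I' (Incremental); jobs[] is ordered oldest to newest. */
struct job { char level; int purge; };

/* Mark jobs made redundant by a later, higher-level backup:
 * a Full obsoletes all earlier Differentials and Incrementals,
 * a Differential obsoletes all earlier Incrementals. */
static void mark_obsolete(struct job *jobs, int n)
{
    int full_seen = 0, diff_seen = 0;
    for (int i = n - 1; i >= 0; i--) {      /* walk newest first */
        if (full_seen && (jobs[i].level == 'D' || jobs[i].level == 'I'))
            jobs[i].purge = 1;
        else if (diff_seen && jobs[i].level == 'I')
            jobs[i].purge = 1;
        if (jobs[i].level == 'F') full_seen = 1;
        if (jobs[i].level == 'D') diff_seen = 1;
    }
}
```

Per the suggestion, this would run per-client when a Differential or Full completes successfully.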
-For 1.28 release:
-- Think about how to make Bacula work better with File archives.
-- Start working on Base jobs.
+- Allow multiple Storage specifications (or multiple names on
+ a single Storage specification) in the Job record. Thus a job
+ can be backed up to a number of storage devices.
+- Implement dump label to UA
+- Add prefix_links (whether or not to apply the "where" prefix
+  to absolute links) to the FD.
+- Look at Python for a Bacula scripting language -- www.python.org
+- Issue message to mount a new tape before the rewind.
+- Simplified client job initiation for portables.
+- If SD cannot open a drive, make it periodically retry.
+- Implement LabelTemplate (at least first cut).
+- Add more of the config info to the tape label.
+- Implement a Mount Command and an Unmount Command where
+ the users could specify a system command to be performed
+ to do the mount, after which Bacula could attempt to
+  read the device. This is for removable media such as a CDROM.
+ - Most likely, this mount command would be invoked explicitly
+ by the user using the current Console "mount" and "unmount"
+ commands -- the Storage Daemon would do the right thing
+ depending on the exact nature of the device.
+ - As with tape drives, when Bacula wanted a new removable
+ disk mounted, it would unmount the old one, and send a message
+ to the user, who would then use "mount" as described above
+ once he had actually inserted the disk.
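One way the proposed Mount Command and Unmount Command might look in a Device resource (purely hypothetical syntax; the directive names and semantics are assumptions, nothing here is implemented):

```
Device {
  Name = CDROM-0
  Archive Device = /dev/cdrom
  # Hypothetical directives: run before Bacula opens the device
  # and after it releases it.
  Mount Command = "/bin/mount -t iso9660 /dev/cdrom /mnt/cdrom"
  Unmount Command = "/bin/umount /mnt/cdrom"
  Removable Media = yes
}
```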
+
+- If a machine is skipped because it is not up, have Bacula
+  periodically retry it for a specified period of time.
+- If tape is marked read-only, then try opening it read-only rather than
+ failing, and remember that it cannot be written.
+- Refine SD waiting output:
+ Device is being positioned
+ > Device is being positioned for append
+ > Device is being positioned to file x
+ >
+- Figure out some way to estimate output size and to avoid splitting
+ a backup across two Volumes -- this could be useful for writing CDROMs
+ where you really prefer not to have it split -- not serious.
+- Add RunBeforeJob and RunAfterJob to the Client program.
+- Have SD compute MD5 or SHA1 and compare to what FD computes.
+- Make VolumeToCatalog calculate an MD5 or SHA1 from the
+ actual data on the Volume and compare it.
- Implement FileOptions (see end of this document)
-- Test a second language e.g. french.
-- Replace popen() and pclose() -- fail safe and timeout, no SIG dep.
-- Enhance schedule to have 1stSat, ...
-- Ensure that restore of differential jobs works (check SQL).
+- Implement Bacula plugins -- design API
+- Make bcopy read through bad tape records.
+- Fix read_record to handle multiple sessions.
+- Program files (i.e. execute a program to read/write files).
+ Pass read date of last backup, size of file last time.
+- Add Signature type to File DB record.
+- Make Restore report an error if FD or SD term codes are not OK.
+- CD into subdirectory when open()ing files for backup to
+ speed up things. Test with testfind().
+- Priority job to go to top of list.
+- Find out why Full saves run slower and slower (hashing?)
+- Why are save/restore of device different sizes (sparse?) Yup! Fix it.
+- Implement some way for the Console to dynamically create a job.
+- Restore to a particular time -- e.g. before date, after date.
+- Solaris -I on tar for include list
+- Prohibit backing up archive device (findlib/find_one.c:128)
+- Need a verbose mode in restore, perhaps to bsr.
+- bscan without -v is too quiet -- perhaps show jobs.
+- Add code to reject whole blocks if not wanted on restore.
+- Start working on Base jobs.
+- Check if we can increase Bacula FD priority in Win2000
- Make sure the MaxVolFiles is fully implemented in SD
-- Make Job err if WriteBootstrap fails.
-- Flush all the daemon messages at the end of every job.
- Check if both CatalogFiles and UseCatalog are set to SD.
-- Check if we can bump Bacula FD priorty in Win2000
-- Make bcopy read through bad tape records.
- Need return status on read_cb() from read_records(). Need multiple
records -- one per Job, maybe a JCR or some other structure with
a block and a record.
-- Continue improving the restore process (handling
- of tapes, efficiency improvements e.g. use FSF to
- position the tape, ...)
-- Work more on how to to a Bacula restore beginning with
- just a Bacula tape and a boot floppy (bare metal recovery).
-- Try bare metal Windows restore
-- Fix read_record to handle multiple sessions.
-- Program files (i.e. execute a program to read/write files).
- Pass read date of last backup, size of file last time.
+- Figure out how to do a bare metal Windows restore
- Put system type returned by FD into catalog.
-- Add code to fast seek to proper place on tape/file
- when doing Restore. If it doesn't work, try linear
- search as before.
-- Add code to reject whole blocks if not wanted on restore.
- Possibly add email to Watchdog if drive is unmounted too
long and a job is waiting on the drive.
-- Strip trailing slashes from Include directory names in the FD.
- Use read_record.c in SD code.
- Why don't we get an error message from Win32 FD when bootstrap
file cannot be created for restore command?
mark the link so that the data will be reloaded.
- Restore program that errors in SD due to no tape reports
OK incorrectly in output.
-- Make BSR accept count (total files to be restored).
-- Make BSR return next_block when it knows record is not
- in block, done when count is reached, and possibly other
- optimizations. I.e. add a state word.
-
-- Running multiple simultaneous jobs has a problem when one Job
- must label the volume -- the others apparently do not wait.
- >
- > 13-Nov-2002 02:06 dump01-dir: Start Backup JobId 20, Job=save-rho.2002-11-13_02.06.00
- > 13-Nov-2002 02:06 dump01-dir: Start Backup JobId 21, Job=save-beta.2002-11-13_02.06.01
- > 13-Nov-2002 02:06 dump01-dir: Start Backup JobId 22, Job=save-dump01.2002-11-13_02.06.02
- > 13-Nov-2002 02:06 dump01-sd: Created Volume label File0013 on device /dump7/bacula.
- > 13-Nov-2002 02:06 dump01-sd: Wrote label to prelabeled Volume File0013 on device /dump7/bacula
- > 13-Nov-2002 02:06 dump01-sd: save-beta.2002-11-13_02.06.01 Fatal error: Device /dump7/bacula is
- > busy writing with another Volume.
- > 13-Nov-2002 02:06 dump01-sd: save-dump01.2002-11-13_02.06.02 Fatal error: Device /dump7/bacula is
- > busy writing with another Volume.
- > 13-Nov-2002 02:06 beta-fd: save-beta.2002-11-13_02.06.01 Error: bnet.c:292 Write error sending to
- > Storage daemon:dump01:9103: ERR=Broken pipe
- > 13-Nov-2002 02:06 dump01-fd: save-dump01.2002-11-13_02.06.02 Error: bnet.c:319 Write error sending
- > to Storage daemon:dump01:9103: ERR=Broken pipe
- > 13-Nov-2002 02:08 dump01-dir: 13-Nov-2002 02:08
- >
-
-- Figure out how compress everything except .gz,... files.
-- Make bcopy copy with a single tape drive.
-- Make sure catalog doesn't keep growing.
-- Permit changing ownership during restore.
- After unmount, if restore job started, ask to mount.
-- Fix db_get_fileset in cats/sql_get.c for multiple records.
-- Fix start/end blocks for File devices
-- Add new code to scheduler.c and run_conf.c
- schedule options (1-sat, 2-sat, ...).
-- Fix catalog filename truncation in sql_get and sql_create. Use
- only a single filename split routine.
-- Add command to reset VolFiles to a larger value (don't allow
- a smaller number or print big warning).
-- Make Restore report an error if FD or SD term codes are not OK.
 - Convert all %x substitution variables, which are hard to remember
   and read, to %(variable-name). Idea from TMDA.
- Add JobLevel in FD status (but make sure it is defined).
- Make Pool resource handle Counter resources.
- Remove NextId for SQLite. Optimize.
-- Fix gethostbyname() to use gethostbyname_r()
-- Implement ./configure --with-client-only
-- Strip trailing / from Include
- Move all SQL statements into a single location.
-- Cleanup db_update_media and db_update_pool
- Add UA rc and history files.
 - Put termcap (used by the console) in ./configure and
   allow --with-termcap-dir.
- Enhance time and size scanning routines.
- Fix Autoprune for Volumes to respect need for full save.
-- DateWritten field on tape may be wrong.
- Fix Win32 config file definition name on /install
- No READLINE_SRC if found in alternate directory.
-- Add Client FS/OS id (Linux, Win95/98, ...).
+- Test a second language e.g. french.
+- Compare tape to Client files (attributes, or attributes and data)
+- Make all database Ids 64 bit.
+- Write an applet for Linux.
+- Add estimate to Console commands
+- Find solution to blank filename (i.e. path only) problem.
+- Implement new daemon communications protocol.
+- Remove PoolId from Job table, it exists in Media.
+- Allow console commands to detach or run in background.
+- Fix status delay on storage daemon during rewind.
+- Add SD message variables to control operator wait time
+ - Maximum Operator Wait
+ - Minimum Message Interval
+ - Maximum Message Interval
+- Send Operator message when cannot read tape label.
+- Verify level=Volume (scan only), level=Data (compare of data to file).
+ Verify level=Catalog, level=InitCatalog
+- Events file
+- Add keyword search to show command in Console.
+- Fix Win2000 error with no messages during startup.
+- Events : tape has more than xxx bytes.
+- Restrict characters permitted in a Resource name.
+- Complete code in Bacula Resources -- this will permit
+ reading a new config file at any time.
+- Handle ctl-c in Console
+- Implement script driven addition of File daemon to config files.
+- Think about how to make Bacula work better with File (non-tape) archives.
+- Write Unix emulator for Windows.
+- Implement new serialize subroutines
+ send(socket, "string", &Vol, "uint32", &i, NULL)
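The proposed serialize call could be built on stdarg. A rough sketch that packs into a buffer rather than a socket (names and wire format are illustrative only; note the NULL sentinel must be cast when passed through varargs):

```c
#include <stdarg.h>
#include <stdint.h>
#include <string.h>

/* Sketch of the proposed tagged serializer: each value is preceded
 * by a type string, and the argument list is NULL-terminated.
 * Returns bytes written, or -1 on overflow. */
static int serialize(char *buf, size_t bufsiz, ...)
{
    va_list ap;
    size_t used = 0;
    const char *type;

    va_start(ap, bufsiz);
    while ((type = va_arg(ap, const char *)) != NULL) {
        if (strcmp(type, "string") == 0) {
            const char *s = va_arg(ap, const char *);
            size_t len = strlen(s) + 1;     /* include the NUL */
            if (used + len > bufsiz) { va_end(ap); return -1; }
            memcpy(buf + used, s, len);
            used += len;
        } else if (strcmp(type, "uint32") == 0) {
            uint32_t *v = va_arg(ap, uint32_t *);
            if (used + 4 > bufsiz) { va_end(ap); return -1; }
            memcpy(buf + used, v, 4);
            used += 4;
        }
    }
    va_end(ap);
    return (int)used;
}
```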
+- Audit all UA commands to ensure that we always prompt where possible.
+- If ./btape is called without /dev, assume argument is a Storage resource name.
+- Put memory utilization in Status output of each daemon
+ if full status requested or if some level of debug on.
+- Make database type selectable by .conf files i.e. at runtime
+- Set flag for uname -a. Add to Volume label.
+- Implement throttled work queue.
+- Check for EOT at ENOSPC or EIO or ENXIO (unix Pc)
+- Restore files modified after date
+- Restore file modified before date
+- Emergency restore info:
+ - Backup Bacula
+ - Backup working directory
+ - Backup Catalog
+- Restore -- do nothing but show what would happen
+- SET LD_RUN_PATH=$HOME/mysql/lib/mysql
+- Implement Restore FileSet=
+- Create a protocol.h and protocol.c where all protocol messages
+ are concentrated.
+- Remove duplicate fields from jcr (e.g. jcr.level and jcr.jr.Level, ...).
+- Timeout a job or terminate if link goes down, or reopen link and query.
+- Find general solution for sscanf size problems (as well
+  as sprintf). Do at run time?
+- Concept of precious tapes (cannot be reused).
+- Make bcopy copy with a single tape drive.
+- Permit changing ownership during restore.
+
+- Autolabel should be specified by DIR instead of SD.
+- Find out how to get the system tape block limits, e.g.:
+ Apr 22 21:22:10 polymatou kernel: st1: Block limits 1 - 245760 bytes.
+ Apr 22 21:22:10 polymatou kernel: st0: Block limits 2 - 16777214 bytes.
+- Storage daemon
+ - Add media capacity
+ - AutoScan (check checksum of tape)
+ - Format command = "format /dev/nst0"
+ - MaxRewindTime
+ - MinRewindTime
+ - MaxBufferSize
+ - Seek resolution (usually corresponds to buffer size)
+ - EODErrorCode=ENOSPC or code
+ - Partial Read error code
+ - Partial write error code
+ - Nonformatted read error
+ - Nonformatted write error
+ - WriteProtected error
+ - IOTimeout
+ - OpenRetries
+ - OpenTimeout
+ - IgnoreCloseErrors=yes
+ - Tape=yes
+ - NoRewind=yes
+- Pool
+ - Maxwrites
+ - Recycle period
+- Job
+ - MaxWarnings
+ - MaxErrors (job?)
+=====
+- FD sends unsaved file list to Director at end of job (see
+ RFC below).
+- File daemon should build list of files skipped, and then
+ at end of save retry and report any errors.
+- Write a Storage daemon that uses pipes and
+ standard Unix programs to write to the tape.
+ See afbackup.
+- Need something that monitors the JCR queue and
+  times out jobs by asking the daemons where they are.
+- Enhance Jmsg code to permit buffering and saving to disk.
+- device driver = "xxxx" for drives.
+- restart modes:
+    paranoid: read label, fsf to EOM, read append block, and go
+    super-paranoid: read label, read all files in between,
+      read append block, and go
+    verify: backspace, read append block, and go
+    permissive: same as verify but frees the drive
+      if the tape is not valid.
+- Verify from Volume
+- Ensure that /dev/null works
+- Need report class for messages. Perhaps
+ report resource where report=group of messages
+- enhance scan_attrib and rename scan_jobtype, and
+ fill in code for "since" option
+- Director needs a time after which the report status is sent
+ anyway -- or better yet, a retry time for the job.
+ Don't reschedule a job if previous incarnation is still running.
+- Some way to automatically backup everything is needed????
+- Need a structure for pending actions:
+ - buffered messages
+ - termination status (part of buffered msgs?)
+- Drive management
+ Read, Write, Clean, Delete
+- Login to Bacula; Bacula users with different permissions:
+ owner, group, user, quotas
+- Store info on each file system type (probably in the job header on tape).
+  This could be the output of df; or perhaps some sort of /etc/mtab record.
+
+Longer term to do:
+- Design a hierarchical storage scheme for Bacula: Migration and Clone.
+- Implement FSM (File System Modules).
+- Identify unchanged or "system" files and save them to a
+ special tape thus removing them from the standard
+ backup FileSet -- BASE backup.
+- Heartbeat between daemons.
+- Audit M_ error codes to ensure they are correct and consistent.
+- Add variable break characters to lex analyzer.
+ Either a bit mask or a string of chars so that
+ the caller can change the break characters.
+- Make a single T_BREAK to replace T_COMMA, etc.
+- Ensure that File daemon and Storage daemon can
+ continue a save if the Director goes down (this
+ is NOT currently the case). Must detect socket error,
+ buffer messages for later.
+- Enhance time/duration input to allow multiple qualifiers e.g. 3d2h
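For the multi-qualifier duration input, a minimal parsing sketch (the qualifier letters and function name are assumptions for illustration):

```c
#include <stdlib.h>

/* Parse a duration like "3d2h" into seconds, allowing multiple
 * number+qualifier pairs. Qualifiers: s, m, h, d, w.
 * Returns -1 on malformed input. */
static long parse_duration(const char *str)
{
    long total = 0;
    while (*str) {
        char *end;
        long n = strtol(str, &end, 10);
        if (end == str) return -1;          /* no digits found */
        switch (*end) {
        case 's': n *= 1; break;
        case 'm': n *= 60; break;
        case 'h': n *= 3600; break;
        case 'd': n *= 86400; break;
        case 'w': n *= 604800; break;
        default: return -1;                 /* unknown qualifier */
        }
        total += n;
        str = end + 1;                      /* continue after qualifier */
    }
    return total;
}
```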
+- Add ability to backup to two Storage devices (two SD sessions) at
+ the same time -- e.g. onsite, offsite.
+- Add the ability to consolidate old backup sets (basically do a restore
+  to tape and appropriately update the catalog). Compress Volume sets.
+  Might need to spool via file if only one drive is available.
+- Compress or consolidate Volumes of old, possibly deleted files. Perhaps
+  some way to do so with every volume that has less than x% valid
+  files.
Projects:
Bacula Projects Roadmap
17 August 2002
- last update 27 November 2002
+ last update 5 January 2003
Item 1: Multiple simultaneous Jobs. (done)
-Done
+Done -- Restore part needs better implementation to work correctly
What: Permit multiple simultaneous jobs in Bacula.
Item 6: Write a regression script.
-Started
+Done -- Continue to expand its testing.
What: This is an automatic script that runs and tests as many features
of Bacula as possible. The output is compared to previous
-I haven't put these in any particular order.
-
-Small projects:
-- Compare tape to Client files (attributes, or attributes and data)
-- Restore options (overwrite, overwrite if older,
- overwrite if newer, never overwrite, ...)
-- Restore to a particular time -- e.g. before date, after date.
-- Make all database Ids 64 bit.
-- Write an applet for Linux.
-- Add estimate to Console commands
-- Find solution to blank filename (i.e. path only) problem.
-- Implement new daemon communications protocol.
-
-To be done:
-- Remove PoolId from Job table, it exists in Media.
-- Allow console commands to detach or run in background.
-- Fix status delay on storage daemon during rewind.
-- Add SD message variables to control operator wait time
- - Maximum Operator Wait
- - Minimum Message Interval
- - Maximum Message Interval
-- Send Operator message when cannot read tape label.
-- Think about how to handle I/O error on MTEOM.
-- Verify level=Volume (scan only), level=Data (compare of data to file).
- Verify level=Catalog, level=InitCatalog
-- Events file
-- Add keyword search to show command in Console.
-- Fix Win2000 error with no messages during startup.
-- Events : tape has more than xxx bytes.
-- Write general list maintenance subroutines.
-- Restrict characters permitted in a Resource name.
-- Complete code in Bacula Resources -- this will permit
- reading a new config file at any time.
-- Handle ctl-c in Console
-- Implement LabelTemplate (at least first cut).
-- Implement script driven addition of File daemon to config files.
-
-- see setgroup and user for Bacula p4-5 of stunnel.c
-- Implement new serialize subroutines
- send(socket, "string", &Vol, "uint32", &i, NULL)
-- On I/O error, write EOF, then try to write again ????
-- Audit all UA commands to ensure that we always prompt where possible.
-- If ./btape is called without /dev, assume argument is a Storage resource name.
-- Put memory utilization in Status output of each daemon
- if full status requested or if some level of debug on.
-- Make database type selectable by .conf files i.e. at runtime
-- gethostbyname failure in bnet_connect() continues
- generating errors -- should stop.
-- Add HOST to Volume label.
-- Set flag for uname -a. Add to Volume label.
-- Implement throttled work queue.
-- Check for EOT at ENOSPC or EIO or ENXIO (unix Pc)
-- Allow multiple Storage specifications (or multiple names on
- a single Storage specification) in the Job record. Thus a job
- can be backed up to a number of storage devices.
-- Implement full MediaLabel code.
-- Implement dump label to UA
-- Copy volume using single drive.
-- Concept of VolumeSet during restore which is a list
- of Volume names needed.
-- Restore files modified after date
-- Restore file modified before date
-- Emergency restore info:
- - Backup Bacula
- - Backup working directory
- - Backup Catalog
-- Restore -- do nothing but show what would happen
-- SET LD_RUN_PATH=$HOME/mysql/lib/mysql
-- Implement Restore FileSet=
-- Create a protocol.h and protocol.c where all protocol messages
- are concentrated.
-- If SD cannot open a drive, make it periodically retry.
-- Put Bacula version somewhere in Job stream, probably Start Session
- Labels.
-- Remove duplicate fields from jcr (e.g. jcr.level and jcr.jr.Level, ...).
-- Timout a job or terminate if link goes down, or reopen link and query.
-- Fill all fields in Vol/Job Header -- ensure that everything
- needed is written to tape. Think about restore to Catalog
- from tape. Client record needs improving.
-- Find general solution for sscanf size problems (as well
- as sprintf. Do at run time?
-- Concept of precious tapes (cannot be reused).
-
-- Restore should get Device and Pool information from
- job record rather than from config.
-- Autolabel should be specified by DR instead of SD.
-- Find out how to get the system tape block limits, e.g.:
- Apr 22 21:22:10 polymatou kernel: st1: Block limits 1 - 245760 bytes.
- Apr 22 21:22:10 polymatou kernel: st0: Block limits 2 - 16777214 bytes.
-- Storage daemon
- - Add media capacity
- - AutoScan (check checksum of tape)
- - Format command = "format /dev/nst0"
- - MaxRewindTime
- - MinRewindTime
- - MaxBufferSize
- - Seek resolution (usually corresponds to buffer size)
- - EODErrorCode=ENOSPC or code
- - Partial Read error code
- - Partial write error code
- - Nonformatted read error
- - Nonformatted write error
- - WriteProtected error
- - IOTimeout
- - OpenRetries
- - OpenTimeout
- - IgnoreCloseErrors=yes
- - Tape=yes
- - NoRewind=yes
-- Pool
- - Maxwrites
- - Recycle period
-- Job
- - MaxWarnings
- - MaxErrors (job?)
-=====
-- FD sends unsaved file list to Director at end of job.
-- Write a Storage daemon that uses pipes and
- standard Unix programs to write to the tape.
- See afbackup.
-- Need something that monitors the JCR queue and
- times out jobs by asking the deamons where they are.
-
-- Enhance Jmsg code to permit buffering and saving to disk.
-- device driver = "xxxx" for drives.
-- restart: paranoid: read label fsf to
- eom read append block, and go
- super-paranoid: read label, read all files
- in between, read append block, and go
- verify: backspace, read append block, and go
- permissive: same as above but frees drive
- if tape is not valid.
-- Verify from Volume
-- Ensure that /dev/null works
-- File daemon should build list of files skipped, and then
- at end of save retry and report any errors.
-- Need report class for messages. Perhaps
- report resource where report=group of messages
-- enhance scan_attrib and rename scan_jobtype, and
- fill in code for "since" option
-- Need to save contents of FileSet to tape?
-- Director needs a time after which the report status is sent
- anyway -- or better yet, a retry time for the job.
- Don't reschedule a job if previous incarnation is still running.
-- Figure out how to save the catalog (possibly a special FileSet).
-- Figure out how to restore the catalog.
-- Some way to automatically backup everything is needed????
-- Need a structure for pending actions:
- - buffered messages
- - termination status (part of buffered msgs?)
-- Concept of grouping Storage devices and job can use
- any of a number of devices
-- Drive management
- Read, Write, Clean, Delete
-- Login to Bacula; Bacula users with different permissions:
- owner, group, user, quotas
-- Store info on each file system type (probably in the job header on tape.
- This could be the output of df; or perhaps some sort of /etc/mtab record.
-
-Longer term to do:
-- Implement FSM (File System Modules).
-- Identify unchanged or "system" files and save them to a
- special tape thus removing them from the standard
- backup FileSet -- BASE backup.
-- Turn virutally all sprintfs into snprintfs.
-- Heartbeat between daemons.
-- Audit M_ error codes to ensure they are correct and consistent.
-- Add variable break characters to lex analyzer.
- Either a bit mask or a string of chars so that
- the caller can change the break characters.
-- Make a single T_BREAK to replace T_COMMA, etc.
-- Ensure that File daemon and Storage daemon can
- continue a save if the Director goes down (this
- is NOT currently the case). Must detect socket error,
- buffer messages for later.
-- Enhance time/duration input to allow multiple qualifiers e.g. 3d2h
-
-
-Done: (see kernsdone for more)
-- Add EOM records? No, not at this time. The current system works and
- above all is simple.
-- Add VolumeUseDuration and MaximumVolumeJobs to Pool db record and
- to Media db record.
-- Add VOLUME_CAT_INFO to the EOS tape record (as
- well as to the EOD record). -- No, not at this time.
-- Put MaximumVolumeSize in Director (MaximumVolumeJobs, MaximumVolumeFiles,
- MaximumFileSize).
+======================================================
+ Base Jobs design
+The idea is that a Full save becomes somewhat like an Incremental:
+only files not in the Base job (or jobs) are saved, and a restore
+combines the Base with the non-base files.
+Need:
+- New BaseFile table that contains:
+ JobId, BaseJobId, FileId (from Base).
+ i.e. for each base file that exists but is not saved because
+ it has not changed, the File daemon sends the JobId, BaseId,
+ and FileId back to the Director who creates the DB entry.
+- To initiate a Base save, the Director sends the FD
+ the FileId, and full filename for each file in the Base.
+- When the FD finds a Base file, he requests the Director to
+ send him the full File entry (stat packet plus MD5), or
+ conversely, the FD sends it to the Director and the Director
+ says yes or no. This can be quite rapid if the FileId is kept
+ by the FD for each Base Filename.
+- It is probably better to have the comparison done by the FD
+ despite the fact that the File entry must be sent across the
+ network.
+- An alternative would be to send the FD the whole File entry
+ from the start. The disadvantage is that it requires a lot of
+ space. The advantage is that it requires less communications
+ during the save.
+- The Job record must be updated to indicate that one or more
+ Bases were used.
+- At end of Job, FD returns:
+ 1. Count of base files/bytes not written to tape (i.e. matches)
+  2. Count of base files that were saved, i.e. had changed.
+- No tape record would be written for a Base file that matches, in the
+ same way that no tape record is written for Incremental jobs where
+ the file is not saved because it is unchanged.
+- On a restore, all the Base file records must explicitly be
+  found from the BaseFile table. I.e. for each Full save that is marked
+ to have one or more Base Jobs, search the BaseFile for all occurrences
+ of JobId.
+- An optimization might be to make the BaseFile have:
+ JobId
+ BaseId
+ FileId
+ plus
+ FileIndex
+ This would avoid the need to explicitly fetch each File record for
+ the Base job. The Base Job record will be fetched to get the
+ VolSessionId and VolSessionTime.
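The FD-side match test described above can be sketched as follows (illustrative C with hypothetical struct names; the real stat packet and MD5 exchange would go through the FD/DIR protocol):

```c
#include <string.h>
#include <sys/stat.h>

/* Hypothetical Base entry as sent by the Director: the stat data
 * from the Base job plus the hex MD5 of the file's data. */
struct base_entry {
    struct stat statp;        /* stat packet from the Base job   */
    char md5[33];             /* hex MD5 of the Base file's data */
};

/* A file is "unchanged relative to Base" when both the stat
 * data and the MD5 match; only then is no tape record written. */
static int matches_base(const struct base_entry *base,
                        const struct stat *cur, const char *cur_md5)
{
    return cur->st_size  == base->statp.st_size  &&
           cur->st_mtime == base->statp.st_mtime &&
           cur->st_mode  == base->statp.st_mode  &&
           strcmp(cur_md5, base->md5) == 0;
}
```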
+=========================================================
+
+==========================================================
+ Unsaved File design
+For each Incremental job that is run, there may be files that
+were found but not saved because they were locked (this applies
+only to Windows). Such a system could send back to the Director
+a list of Unsaved files.
+Need:
+- New UnSavedFiles table that contains:
+ JobId
+ PathId
+ FilenameId
+- Then in the next Incremental job, the list of Unsaved Files will be
+  fed to the FD, which will ensure that they are explicitly chosen even
+  if the standard date/time check would not have selected them.
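The selection rule in the last point might look like this (a C sketch with hypothetical names; a real implementation would draw the list from the UnSavedFiles table, not a string array):

```c
#include <string.h>
#include <time.h>

/* Check whether a path appears on the UnSavedFiles list from the
 * previous job (here just a string array for illustration). */
static int on_unsaved_list(const char *path, const char **list, int n)
{
    for (int i = 0; i < n; i++)
        if (strcmp(list[i], path) == 0)
            return 1;
    return 0;
}

/* A file is backed up in the next Incremental either because the
 * usual date check selects it, or because it was previously unsaved. */
static int select_for_incremental(const char *path, time_t mtime,
                                  time_t since, const char **unsaved, int n)
{
    return mtime > since || on_unsaved_list(path, unsaved, n);
}
```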
+=============================================================
+
-====================================
+=============================================================
- Request For Comments
+ Request For Comments For File Backup Options
10 November 2002
Subject: File Backup Options
- Sparse= (yes/no) - do sparse file backup
- *Exclude= (yes/no) - exclude file from being saved
- *Reader= (filename) - external read (backup) program
+ - *Plugin= (filename) - read/write plugin module
For Verify Jobs:
- verify= (ipnougsamc5) - verify options
After implementing the above, the user will be able to specify
on a file by file basis (using regular expressions) what options are
applied for the backup.
+====================================
+
+=========================================
+Proposal by Bill Sellers
+
+Return-Path: <w.a.sellers@larc.nasa.gov>
+Received: from post.larc.nasa.gov (post.larc.nasa.gov [128.155.4.45]) by matou.sibbald.com (8.11.6/8.11.6) with ESMTP id h0ELUIm07622 for <kern@sibbald.com>; Tue, 14 Jan 2003 22:30:18 +0100
+Received: from Baron.larc.nasa.gov (baron.larc.nasa.gov [128.155.40.132]) by post.larc.nasa.gov (pohub4.6) with ESMTP id QAA09768 for <kern@sibbald.com>; Tue, 14 Jan 2003 16:30:14 -0500 (EST)
+Message-Id: <5.1.0.14.2.20030114153452.028dbae8@pop.larc.nasa.gov>
+X-Sender: w.a.sellers@pop.larc.nasa.gov
+X-Mailer: QUALCOMM Windows Eudora Version 5.1
+Date: Tue, 14 Jan 2003 16:30:18 -0500
+To: Kern Sibbald <kern@sibbald.com>
+From: Bill Sellers <w.a.sellers@larc.nasa.gov>
+Subject: Re: [Bacula-users] Bacula remote storage?
+In-Reply-To: <1042565382.1845.177.camel@rufus>
+References: <5.1.0.14.2.20030114113004.0293a210@pop.larc.nasa.gov> <5.1.0.14.2.20030113170650.028dad88@pop.larc.nasa.gov> <5.1.0.14.2.20030113170650.028dad88@pop.larc.nasa.gov> <5.1.0.14.2.20030114113004.0293a210@pop.larc.nasa.gov>
+Mime-Version: 1.0
+Content-Type: text/plain; charset="us-ascii"; format=flowed
+X-Annoyance-Filter-Junk-Probability: 0
+X-Annoyance-Filter-Classification: Mail
+At 06:29 PM 1/14/2003 +0100, you wrote:
+>Hello Bill,
+>
+>Well, if you cannot put a Bacula client on the machine,
+>then it is a big problem. If you know of some software
+>that can do what you want, let me know, because I
+>really just don't know how to do it -- at least not
+>directly.
+
+
+Hi Kern,
+
+We have been able to get Amanda to use the HSM as a storage
+device. Someone here wrote a driver for Amanda. BUT, Amanda doesn't
+handle Windows systems very well (or at all without Samba). So I am
+looking for a backup system that has a Windows client. I really like the
+Windows integration of Bacula.
+
+From the command line, its rather trivial to move the data around. We use
+something like-
+
+tar cf - ./files | gzip -c | rsh hsm dd of=path/file.tgz
+
+or if you use GNU tar:
+
+tar czf hsm:path/file.tgz ./files
+
+One idea for you to consider: Sendmail offers pipes in the aliases file
+(mailpipe: "|/usr/bin/vacation root"), and Perl supports pipes in the
+"open" statement (open FILE, "|/bin/nroff -man";). Could you make a
+pipe available as a storage device? Then we could use any command that
+handles stdin as a storage destination.
+
+Something like:
+
+Storage {
+ Name = HSM-RSH
+ Address = hsm
+ #Password is not used in rsh, but might be used in ftp.
+ Device = "| gzip -c | rsh hsm dd of=path/file.tgz"
+ MediaType = Pipe
+}
+
+Storage {
+ Name = HSM-FTP
+ Address = hsm
+ Password = "foobar&-"
+ Device = "| ncftpput -c hsm /path/file.bacula"
+ MediaType = Pipe
+}
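+The pipe-device idea can be prototyped outside Bacula with a named fifo
+standing in for the Device. A sketch only, with throwaway paths (the rsh
+to the HSM is replaced by a local file so it runs anywhere):

```shell
#!/bin/sh
# Sketch: emulate a pipe-style storage "device" with a fifo.
# All paths are illustrative; a real setup would ship the stream
# to the HSM with rsh/ftp instead of writing a local file.
set -e
tmp=$(mktemp -d)
mkfifo "$tmp/bacula-pipe"

# Reader end of the "device": compress whatever arrives on the fifo.
gzip -c < "$tmp/bacula-pipe" > "$tmp/volume.tgz" &

# Writer end: plays the role of the storage daemon writing a volume.
printf 'hello from bacula\n' > "$tmp/bacula-pipe"
wait

# The data round-trips through the pipe:
result=$(gunzip -c "$tmp/volume.tgz")
echo "$result"
rm -rf "$tmp"
```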
+
+>If you have some local storage available, you could
+>use Bacula to backup to disk volumes, then use some
+>other software (ftp, scp) to move them to the HSM
+>machine. However, this is a bit kludgy.
+
+
+It is, but maybe worth a try. Is there some function in Bacula to put
+variables in filenames? e.g. backup.2003-01-15.root
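+A name of that form can always be generated outside Bacula with
+date(1); the backup.<date>.root pattern here just follows the example
+above and is illustrative only:

```shell
# Sketch: build a date-stamped volume name with date(1).
name="backup.$(date +%Y-%m-%d).root"
echo "$name"
```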
+
+Thanks!
+Bill
+
+---
+Bill Sellers
+w.a.sellers@larc.nasa.gov
+
+==============================================
+ The Project for the above
+
+I finally realized that this is not at all
+the same as reader/writer programs or plugins,
+which are alternate ways of accessing the
+files to be backed up. Rather, it is an alternate
+form of storage device, and I have always planned
+that Bacula should be able to handle all sorts
+of storage devices.
+
+So, I propose the following phases:
+
+1. OK from you to invest some time in testing
+ this as I implement it (requires that you
+ know how to download from the SourceForge
+ cvs -- which I imagine is a piece of cake
+ for you).
+
+2. Dumb implementation by allowing a device to
+ be a fifo for write only.
+ Reason: easy to implement, proof of concept.
+
+3. Try reading from fifo but with fixed block
+ sizes.
+ Reason: proof of concept, easy to implement.
+
+4. Extend reading from fifo (restores) to handle
+ variable blocks.
+ Reason: requires some delicate low level coding
+ which could destabilize all of Bacula.
+
+5. Implementation of above but to a program. E.g.
+ Device = "|program" (not full pipeline).
+ Reason: routines already exist, and program can
+ be a shell script which contains anything.
+
+6. Add full pipeline as a possibility. E.g.
+ Device = "| gzip -c | rsh hsm dd of=path/file.tgz"
+ Reason: needs additional coding to implement full
+ pipeline (must fire off either a shell or all
+ programs and connect their pipes).
+
+There are a good number of details in each step
+that I have left out, but I will specify them at
+every stage, and there may be a few changes as things
+evolve. I expect that to get to stage 5 will take a
+few weeks, and at that point, you will have
+everything you need (just inside a script).
+Stage 6 will probably take longer, but if this
+project pleases you, what we do for 5 should
+be adequate for some time.
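+Phase 5 already covers most of phase 6 in practice: a single "|program"
+device can host an arbitrary pipeline by delegating to a shell script.
+A minimal sketch, with made-up names and a local file in place of the
+rsh/ftp leg:

```shell
#!/bin/sh
# Sketch: a "program" device (phase 5) that carries a full pipeline
# (phase 6) inside a shell script. All names are made up.
set -e
tmp=$(mktemp -d)

# device.sh stands in for Device = "|program": it receives the
# volume stream on stdin and may contain any pipeline internally.
cat > "$tmp/device.sh" <<'EOF'
#!/bin/sh
# $1 is the destination; a real script might rsh/ftp instead.
gzip -c > "$1"
EOF
chmod +x "$tmp/device.sh"

# Storage-daemon side: pipe the volume stream into the one program.
printf 'volume stream\n' | "$tmp/device.sh" "$tmp/vol.gz"

result=$(gunzip -c "$tmp/vol.gz")
echo "$result"
rm -rf "$tmp"
```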
+
+
+
+=============================================
+
+Done: (see kernsdone for more)
+- Look into Pruning/purging problems or why there seem to
+ be so many files listed each night.
+- Fix cancel in find_one -- need jcr.
+- Cancel does not work for restore in FD.
+- Write SetJobStatus() function so cancel status not lost.
+- Add include list to end of chain in findlib
+- Zap sd_auth_key after use
+- Add Bar code reading capabilities (new mtx-changer)
+- Figure out some way to automatically backup all local partitions
+- Make hash table for linked files in findlib/find_one.c:161
+ (not necessary)
+- Rewrite find_one.c to use only pool_memory instead of
+ alloca and malloc (probably not necessary).
+- Make sure btraceback goes into /sbin not sysconf directory.
+- InitVerify is getting pruned and it shouldn't (document it)
+- Make 1.28c release ??? NO do 1.29 directly
+- Set timeout on opening fifo for save or restore (findlib)
+- Document FIFO storage device.
+- Document fifo and | and <
+====== 1.30 =======
+- Implement SHA1
+- Get correct error status from run_program or open_bpipe().
+- Restrict permissions on File Volumes (now 0640).
+- Umasked 022 daemons
+- Fix restore of hard linked file.
+- Figure out how to allow multiple simultaneous file Volumes on a single device.
+- Implement multiple simultaneous file Volumes on a single device.
+- Cleanup db_update_media and db_update_pool
+- Flush all the daemon messages at the end of every job.
+- Change stat1=fgets()!=NULL to stat1=fgets()==NULL in
+ run_program -- bpipe.c
+- Apparently cancel does not work for jobs waiting to be
+ scheduled.
+- Implement TCP/IP connection for MySQL
+- Pull a canceled job from the Scheduling queue.
+- Implement max_file_size in block.c (already done, just tweaked).
+- Look at purge jobs volume (at least document it, and see if it is
+ logical).
+- Add list volumes does all pools. list volumes pool=xxx now works.
+- Add pool= to "list media" in ua_output.c
+- Strip trailing slashes from Include directory names in the FD.
+- Fix Error: bnet.c:408 gethostbyname() for lpmatou failed:
+ ERR=Operation not permitted loop.
+- Add code if there is no mtio.h (cannot do -- too many ioctl defines needed)
+- Produce better error messages when there is an error/EOF writing a block.
+- Cancelling of a queued job does NOT work!!!!!!
+- Bug: these messages appear twice:
+rufus-dir: Volume used once. Marking Volume "File0003" as Used.
+rufus-sd: Recycled volume File0003 on device /home/kern/bacula/working, all previous data lost.
+rufus-dir: Volume used once. Marking Volume "File0003" as Used.
+- Ability to backup to a file then later transfer to a tape -- Migration.
+ Migration based on MaxJobs(MinJobs),MaxVols(MinVols),AgeJobs,MaxBytes(MinBytes)
+ (i.e. HighwaterMark, LowwaterMark).
+- Eugeny Fisher <efischer@vip-rus.com> wants to cycle through a
+ set of volumes recycling the oldest volume when it is needed.
+- gethostbyname failure in bnet_connect() continues
+ generating errors -- should stop.
+- Add chflags() code for FreeBSD file flags
+- Bevan Anderson suggests having a run queue for each device
+ so that multiple simultaneous jobs can run but each writing
+ to a different Volume.
+- Look at handling <> in smtp -- it doesn't work with exim.
+- Need to specify MaximumConcurrentJobs in the Job resource.
+- ***test GetFileAttributesEx, and remove MessageBox at 335 of winservice.cpp ****
+- Implement finer multiprocessing options.
+- Implement | and < in Exclude statements.
+- Figure out some way to specify a retention period for files
+ that no longer exist on the machine -- so that we maintain
+ say backups for 30 days, but if the file is deleted, we maintain
+ the last copy for 1 year. -- answer Volume retention.
+- Make non-zero status from RunJobBefore/After error the job.
+- Need define of int_least16_t in sha1.h for SuSE.
+- Implement bar code reader for autochangers
+- Document new MaximumConcurrentJob records (Job, Client, Storage)
+- Write up how to use/manage disk Volume Storage. ******
+- Remove kern and kelvin from mysql_grant...
+- Install grant_mysql...
+- Strip trailing / from Include
+- add #define ENABLE_NLS for Gnome compile on SuSE.
+- Add Client FS/OS id (Linux, Win95/98, ...).
+- Concept of VolumeSet during restore which is a list
+ of Volume names needed.
+- Turn virtually all sprintfs into snprintfs.
+- Update volume=Test01 requests pool, then lists volumes.
+ **** Test select_pool_and_media ...
+- Document relabel
+- Add IP address to authentication failures.
+- Add a default File storage so that new users can do backup
+ and restores right away.
+- Forbid sbindir and with-subsys-dir from being the same (otherwise
+ the binary gets deleted when the daemon is stopped in the
+ rc.d/init.d directory).
+- Do not ignore SIGCHLD
+- Add Cleaning to list of volume statuses
+- Implement run at "xxx"
+- Document new transparent Console commands and wait command.
+- Document . and @ commands
+- Document run when.
+- Document Lutz Kittler's trick of using "Run Before Job" to
+ abort a job on a particular day.
+- Document Ludovic Strappazon's Win32 raw device save/restore.
+- Document not to restore .journal .autofsck
+- Document labeling a whole magazine using "cat"
+- Document how to automatically backup all local partitions
+- Document logrotate
+- Document OPTIMIZE TABLE in MySQL
+- Document new immediate File save configuration (walk user
+ through first save to file Volume with automatic Volume labeling?).
+- Implement scheduling of one time "run" jobs (i.e. instead of
+ starting immediately start at some specified time).
+- Bug: up arrow prints garbage in command line on gnome-console!