6 - !!! Cannot restore two jobs at the same time that were
7 written simultaneously unless they were totally spooled.
8 - Document cleaning up the spool files:
9 db, pid, state, bsr, mail, conmsg, spool
10 - Document the multiple-drive-changer.txt script.
11 - Pruning with Admin job.
12 - Does WildFile match against full name? Doc.
13 - %d and %v only valid on Director, not for ClientRunBefore/After.
14 - During tests with the 260 char fix code, I found one problem:
15 if the system "sees" a long path once, it seems to forget its
16 working drive (e.g. c:\), which will lead to a problem during
17 the next job (create bootstrap file will fail). Here is the
18 workaround: specify absolute working and pid directory in
19 bacula-fd.conf (e.g. c:\bacula\working instead of
21 - Document techniques for restoring large numbers of files.
22 - Document setting my.cnf to big file usage.
23 - Add example of proper index output to doc. show index from File;
24 - Correct the Include syntax in the m4.xxx files in examples/conf
25 - Document JobStatus and Termination codes.
26 - Fix the error with the "DVI file can't be opened" while
27 building the French PDF.
28 - Document more DVD stuff
36 - Document all the little details of setting up certificates for
37 the Bacula data encryption code.
38 - Document more precisely how to use master keys -- especially
39 for disaster recovery.
42 - Migration from other vendors
46 - Backup conf/exe (all daemons)
47 - Back up system state
48 - Detect state change of system (verify)
49 - Synthetic Full, Diff, Inc (Virtual, Reconstructed)
51 - Modules for Databases, Exchange, ...
52 - Novell NSS backup http://www.novell.com/coolsolutions/tools/18952.html
53 - Compliance norms that compare hash codes of restored data.
54 - When glibc crashes, get the address with
56 - How to sync remote offices.
58 http://www.microsoft.com/technet/itshowcase/content/exchbkup.mspx
61 Extract capability (#25)
62 Continued enhancement of bweb
63 Threshold triggered migration jobs (not currently in list, but will be
65 Client triggered backups
66 Complete rework of the scheduling system (not in list)
67 Performance and usage instrumentation (not in list)
68 See email of 21Aug2007 for details.
69 - Look at: http://tech.groups.yahoo.com/group/cfg2html
70 and http://www.openeyet.nl/scc/ for managing customer changes
74 Tom Ivar Helbekkmo <tih@hamartun.priv.no>
75 > There's definitely something fishy in the recording of start and
76 > end blocks in the JOBMEDIA table.
77 > - If several jobs start spooling at the same time, they will all get the
78 > current tape position noted as the StartFile/StartBlock for the job.
79 > If they end up despooling to the file that was current when they
80 > started spooling, this is what will end up in the JOBMEDIA table. If
81 > there is a file change before they despool, the setting of NewFile in
82 > the dcr structure will fix this up later, but the "start of session"
83 > label is already in the spool file, of course, so it holds the wrong
86 > - If the job is longer than the maximum spool size, it will get its
87 > first spool session despooled, and then start spooling again after the
88 > first despooling is over. The last blocks despooled to tape from the
89 > first session will not have been recorded, but they will be flushed
90 > later, when the next session despools. However, if another job has
91 > been despooling while this one is spooling its second round, the
92 > session label written to the spool file at its close will cause the
93 > EndFile/Endblock to be set to wherever the tape is at that time. When
94 > the dangling record is flushed to JOBMEDIA, it gets this wrong
95 > information. Both session labels in the spool file will be wrong,
96 > too, of course, because they reflect the state of the tape during
97 > spooling, not during despooling.
99 > I would have to study the code much more closely to work out what's the
100 > proper fix -- but it seems clear that it should involve creating the
101 > session labels only when something is actually written to the archive
102 > device, not during spooling. I'm tempted to try making do_append_data()
103 > not create session labels if we're spooling, and add the making of them
104 > to despool_data() in stored/spool.c. Sound reasonable?
108 - Re-check new dcr->reserved_volume
109 - Softlinks that point to non-existent files are not restored in restore all,
110 but are restored if the file is individually selected. BUG!
111 - Doc Duplicate Jobs.
112 - New directive "Delete purged Volumes"
114 - Prune by Job Level (Full, Differential, Incremental)
115 - Strict automatic pruning
116 - Implement unmount of USB volumes.
117 - Use "./config no-idea no-mdc2 no-rc5" on building OpenSSL for
118 Win32 to avoid patent problems.
119 - Implement Bacula plugins -- design API
120 - modify pruning to keep a fixed number of versions of a file,
122 - the cd-command should allow complete paths
123 i.e. cd /foo/bar/foo/bar
124 -> if a customer mails me the path to a certain file,
125 it's faster to enter the specified directory
126 - Fix bpipe.c so that it does not modify results pointer.
127 ***FIXME*** calling sequence should be changed.
128 - Make tree walk routines like cd, ls, ... more user friendly
129 by handling spaces better.
133 MA = (last_MA * 3 + rate) / 4
134 rate = (bytes - last_bytes) / (runtime - last_runtime)
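A minimal sketch in C of the smoothed-rate formula above (the function and variable names are illustrative, not actual Bacula identifiers):

```c
#include <assert.h>

/* Smoothed transfer rate: weight the running average 3:1 over the
 * newest sample, per the MA/rate formulas above. */
static double update_moving_average(double last_MA,
                                    double bytes, double last_bytes,
                                    double runtime, double last_runtime)
{
   double rate = (bytes - last_bytes) / (runtime - last_runtime);
   return (last_MA * 3 + rate) / 4;
}
```

The 3:1 weighting damps short spikes while still tracking a sustained change in throughput within a few samples.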
135 - Add a recursive mark command (rmark) to restore.
136 - "Minimum Job Interval = nnn" sets minimum interval between Jobs
137 of the same level and does not permit multiple simultaneous
138 running of that Job (i.e. lets any previous invocation finish
139 before doing Interval testing).
140 - Look at simplifying File exclusions.
142 - Auto update of slot:
143 rufus-dir: ua_run.c:456-10 JobId=10 NewJobId=10 using pool Full priority=10
144 02-Nov 12:58 rufus-dir JobId 10: Start Backup JobId 10, Job=kernsave.2007-11-02_12.58.03
145 02-Nov 12:58 rufus-dir JobId 10: Using Device "DDS-4"
146 02-Nov 12:58 rufus-sd JobId 10: Invalid slot=0 defined in catalog for Volume "Vol001" on "DDS-4" (/dev/nst0). Manual load my be required.
147 02-Nov 12:58 rufus-sd JobId 10: 3301 Issuing autochanger "loaded? drive 0" command.
148 02-Nov 12:58 rufus-sd JobId 10: 3302 Autochanger "loaded? drive 0", result is Slot 2.
149 02-Nov 12:58 rufus-sd JobId 10: Wrote label to prelabeled Volume "Vol001" on device "DDS-4" (/dev/nst0)
150 02-Nov 12:58 rufus-sd JobId 10: Alert: TapeAlert[7]: Media Life: The tape has reached the end of its useful life.
151 02-Nov 12:58 rufus-dir JobId 10: Bacula rufus-dir 2.3.6 (26Oct07): 02-Nov-2007 12:58:51
152 - Eliminate: /var is a different filesystem. Will not descend from / into /var
153 - Separate Files and Directories in catalog
154 - Create FileVersions table
155 - Look at rsync for incremental updates and deduplication
156 - Add MD5 or SHA1 check in SD for data validation
157 - finish implementation of fdcalled -- see ua_run.c:105
158 - Fix problem in postgresql.c in my_postgresql_query, where the
159 generation of the error message doesn't differentiate result==NULL
160 and a bad status from that result. Not only that, the result is
161 cleared on a bail_out without having generated the error message.
163 - Implement SDErrors (must return from SD)
164 - Implement USB keyboard support in rescue CD.
165 - Implement continue spooling while despooling.
166 - Remove all install temp files in Win32 PLUGINSDIR.
167 - Audit retention periods to make sure everything is 64 bit.
168 - Answering "no" in restore causes kaboom.
169 - Performance: multiple spool files for a single job.
170 - Performance: despool attributes when despooling data (problem
171 multiplexing Dir connection).
172 - Make restore use the in-use volume reservation algorithm.
173 - When Pool specifies Storage, the command override does not work.
174 - Implement wait_for_sysop() message display in wait_for_device(), which
175 now prints warnings too often.
176 - Ensure that each device in an Autochanger has a different
178 - Look at sg_logs -a /dev/sg0 for getting soft errors.
179 - btape "test" command with Offline on Unmount = yes
181 This test is essential to Bacula.
183 I'm going to write one record in file 0,
184 two records in file 1,
185 and three records in file 2
187 02-Feb 11:00 btape: ABORTING due to ERROR in dev.c:715
188 dev.c:714 Bad call to rewind. Device "LTO" (/dev/nst0) not open
189 02-Feb 11:00 btape: Fatal Error because: Bacula interrupted by signal 11: Segmentation violation
190 Kaboom! btape, btape got signal 11. Attempting traceback.
192 - Encryption -- email from Landon
193 > The backup encryption algorithm is currently not configurable, and is
194 > set to AES_128_CBC in src/filed/backup.c. The encryption code
195 > supports a number of different ciphers (as well as adding arbitrary
196 > new ones) -- only a small bit of code would be required to map a
197 > configuration string value to a CRYPTO_CIPHER_* value, if anyone is
198 > interested in implementing this functionality.
200 - Figure out some way to "automatically" backup conf changes.
201 - Add the OS version back to the Win32 client info.
202 - Restarted jobs have a NULL in the from field.
203 - Modify SD status command to indicate when the SD is writing
204 to a DVD (the device is not open -- see bug #732).
205 - Look at the possibility of adding "SET NAMES UTF8" for MySQL,
206 and possibly changing the blobs into varchar.
207 - Ensure that the SD re-reads the Media record if the JobFiles
208 does not match -- it may have been updated by another job.
210 - Test Volume compatibility between machine architectures
211 - Encryption documentation
212 - Wrong jobbytes with query 12 (todo)
213 - Bare-metal recovery Windows (todo)
218 - Access Mode = Read-Only, Read-Write, Unavailable, Destroyed, Offsite
220 - Maximum number of scratch volumes
222 - Next Pool (already have)
223 - Reclamation threshold
225 - Reuse delay (after all files purged from volume before it can be used)
226 - Copy Pool = xx, yyy (or multiple lines).
228 - Allow pool selection during restore.
230 - Average tape size from Eric
231 SELECT COALESCE(media_avg_size.volavg,0) * count(Media.MediaId) AS volmax,
232        count(Media.MediaId)  AS volnum,
233        sum(Media.VolBytes)   AS voltotal,
234        Media.PoolId          AS PoolId,
235        Media.MediaType       AS MediaType
236   FROM Media
237   LEFT JOIN (SELECT avg(Media.VolBytes) AS volavg,
238              Media.MediaType AS MediaType
239              FROM Media
240              WHERE Media.VolStatus = 'Full'
241              GROUP BY Media.MediaType
242             ) AS media_avg_size ON (Media.MediaType = media_avg_size.MediaType)
243  GROUP BY Media.MediaType, Media.PoolId, media_avg_size.volavg
247 - Add doc for bweb -- especially Installation
249 http://www.orangecrate.com/modules.php?name=News&file=article&sid=501
251 - Despool attributes in separate thread
254 - Check why restore repeatedly sends Rechdrs between
255 each data chunk -- according to James Harper 9Jan07.
258 - Full at least once a month, ...
259 - Cancel Inc if Diff/Full running
260 - More intelligent re-run
261 - New/deleted file backup
263 - Incremental backup -- rsync, Stow
267 - Try to fix bscan not working with multiple DVD volumes bug #912.
268 - Look at mondo/mindi
269 - Make Bacula by default not backup tmpfs, procfs, sysfs, ...
270 - Fix hardlinked immutable files: when linking a second file, the
271 immutable flag must be removed prior to trying to link it.
272 - Implement Python event for backing up/restoring a file.
273 - Change dbcheck to tell users to use native tools for fixing
274 broken databases, and to ensure they have the proper indexes.
275 - add udev rules for Bacula devices.
276 - If a job terminates, the DIR connection can close before the
277 Volume info is updated, leaving the File count wrong.
278 - Look at why SIGPIPE during connection can cause seg fault in
279 writing the daemon message, when Dir dropped to bacula:bacula
280 - Look at zlib 32 => 64 problems.
281 - Possibly turn on St. Bernard code.
282 - Fix bextract to restore ACLs, or better yet, use common routines.
283 - Do we migrate appendable Volumes?
284 - Remove queue.c code.
285 - Print warning message if LANG environment variable does not specify
287 - New dot commands from Arno.
288 .show device=xxx lists information from one storage device, including
289 devices (I'm not even sure that information exists in the DIR...)
290 .move eject device=xxx mostly the same as 'unmount xxx' but perhaps with
291 better machine-readable output like "Ok" or "Error busy"
292 .move eject device=xxx toslot=yyy the same as above, but with a new
293 target slot. The catalog should be updated accordingly.
294 .move transfer device=xxx fromslot=yyy toslot=zzz
297 - Article: http://www.heise.de/open/news/meldung/83231
298 - Article: http://www.golem.de/0701/49756.html
299 - Article: http://lwn.net/Articles/209809/
300 - Article: http://www.onlamp.com/pub/a/onlamp/2004/01/09/bacula.html
301 - Article: http://www.linuxdevcenter.com/pub/a/linux/2005/04/07/bacula.html
302 - Article: http://www.osreviews.net/reviews/admin/bacula
303 - Article: http://www.debianhelp.co.uk/baculaweb.htm
305 - Wikis mentioning Bacula
306 http://wiki.finkproject.org/index.php/Admin:Backups
307 http://wiki.linuxquestions.org/wiki/Bacula
308 http://www.openpkg.org/product/packages/?package=bacula
309 http://www.iterating.com/products/Bacula
310 http://net-snmp.sourceforge.net/wiki/index.php/Net-snmp_extensions
311 http://www.section6.net/wiki/index.php/Using_Bacula_for_Tape_Backups
312 http://bacula.darwinports.com/
313 http://wiki.mandriva.com/en/Releases/Corporate/Server_4/Notes#Bacula
314 http://en.wikipedia.org/wiki/Bacula
317 http://www.devco.net/pubwiki/Bacula/
318 http://paramount.ind.wpi.edu/wiki/doku.php
319 http://gentoo-wiki.com/HOWTO_Backup
320 http://www.georglutz.de/wiki/Bacula
321 http://www.clarkconnect.com/wiki/index.php?title=Modules_-_LAN_Backup/Recovery
322 http://linuxwiki.de/Bacula (in German)
324 - Possibly allow SD to spool even if a tape is not mounted.
325 - Fix re-read of last block to check if job has actually written
326 a block, and check if block was written by a different job
327 (i.e. multiple simultaneous jobs writing).
328 - Figure out how to configure query.sql. Suggestion to use m4:
329 == changequote.m4 ===
330 changequote(`[',`]')dnl
331 ==== query.sql.in ===
332 :List next 20 volumes to expire
334 Pool.Name AS PoolName,
339 [ FROM_UNIXTIME(UNIX_TIMESTAMP(Media.LastWritten) + Media.VolRetention) AS Expire, ])dnl
341 [ media.lastwritten + interval '1 second' * media.volretention as expire, ])dnl
345 ON Media.PoolId=Pool.PoolId
346 WHERE Media.LastWritten>0
350 Command: m4 -DmySQL changequote.m4 query.sql.in >query.sql
352 The problem is that it requires m4, which is not present on all machines.
354 - Given all the problems with FIFOs, I think the solution is to do something a
355 little different, though I will look at the code and see if there is not some
356 simple solution (i.e. some bug that was introduced). What might be a better
357 solution would be to use a FIFO as a sort of "key" to tell Bacula to read and
358 write data to a program rather than the FIFO. For example, suppose you
363 Then, I could imagine if you backup and restore this file with a direct
364 reference as is currently done for fifos, instead, during backup Bacula will
367 /home/kern/my-fifo.backup
369 and read the data that my-fifo.backup writes to stdout. For restore, Bacula
372 /home/kern/my-fifo.restore
374 and send the data backed up to stdout. These programs can either be an
375 executable or a shell script and they need only read/write to stdin/stdout.
377 I think this would give a lot of flexibility to the user without making any
378 significant changes to Bacula.
383 select FilenameId from Filename where Name='';
384 # Get list of all directories referenced in a Backup.
385 select Path.Path from Path,File where File.JobId=nnn and
386 File.FilenameId=(FilenameId-from-above) and File.PathId=Path.PathId
387 order by Path.Path ASC;
389 - Look into using Dart for testing
390 http://public.kitware.com/Dart/HTML/Index.shtml
392 - Look into replacing autotools with cmake
393 http://www.cmake.org/HTML/Index.html
395 - Mount on an Autochanger with no tape in the drive causes:
396 Automatically selected Storage: LTO-changer
397 Enter autochanger drive[0]: 0
398 3301 Issuing autochanger "loaded drive 0" command.
399 3302 Autochanger "loaded drive 0", result: nothing loaded.
400 3301 Issuing autochanger "loaded drive 0" command.
401 3302 Autochanger "loaded drive 0", result: nothing loaded.
402 3902 Cannot mount Volume on Storage Device "LTO-Drive1" (/dev/nst0) because:
403 Couldn't rewind device "LTO-Drive1" (/dev/nst0): ERR=dev.c:678 Rewind error on "LTO-Drive1" (/dev/nst0). ERR=No medium found.
404 3905 Device "LTO-Drive1" (/dev/nst0) open but no Bacula volume is mounted.
405 If this is not a blank tape, try unmounting and remounting the Volume.
406 - If Drive 0 is blocked, and drive 1 is set "Autoselect=no", drive 1 will
408 - Autochanger did not change volumes.
409 select * from Storage;
410 +-----------+-------------+-------------+
411 | StorageId | Name | AutoChanger |
412 +-----------+-------------+-------------+
413 | 1 | LTO-changer | 0 |
414 +-----------+-------------+-------------+
415 05-May 03:50 roxie-sd: 3302 Autochanger "loaded drive 0", result is Slot 11.
416 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Warning: Director wanted Volume "LT
417 Current Volume "LT0-002" not acceptable because:
418 1997 Volume "LT0-002" not in catalog.
419 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Error: Autochanger Volume "LT0-002"
420 Setting InChanger to zero in catalog.
421 05-May 03:50 roxie-dir: Tibs.2006-05-05_03.05.02 Error: Unable to get Media record
423 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: Error getting Volume i
424 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: Job 530 canceled.
425 05-May 03:50 roxie-sd: Tibs.2006-05-05_03.05.02 Fatal error: spool.c:249 Fatal appe
426 05-May 03:49 Tibs: Tibs.2006-05-05_03.05.02 Fatal error: c:\cygwin\home\kern\bacula
435 FirstWritten: 2006-05-05 03:11:54
436 LastWritten: 2006-05-05 03:50:23
437 LabelDate: 2005-12-26 16:52:40
448 VolRetention: 31,536,000
460 Note VolStatus is blank!!!!!
467 FirstWritten: 0000-00-00 00:00:00
468 LastWritten: 0000-00-00 00:00:00
469 LabelDate: 2005-12-26 16:52:40
480 VolRetention: 31,536,000
493 Automatically selected Storage: LTO-changer
494 Enter autochanger drive[0]: 0
495 3301 Issuing autochanger "loaded drive 0" command.
496 3302 Autochanger "loaded drive 0", result: nothing loaded.
497 3301 Issuing autochanger "loaded drive 0" command.
498 3302 Autochanger "loaded drive 0", result: nothing loaded.
499 3902 Cannot mount Volume on Storage Device "LTO-Drive1" (/dev/nst0) because:
500 Couldn't rewind device "LTO-Drive1" (/dev/nst0): ERR=dev.c:678 Rewind error on "LTO-Drive1" (/dev/nst0). ERR=No medium found.
502 3905 Device "LTO-Drive1" (/dev/nst0) open but no Bacula volume is mounted.
503 If this is not a blank tape, try unmounting and remounting the Volume.
505 - http://www.dwheeler.com/essays/commercial-floss.html
506 - Add VolumeLock to prevent all but lock holder (SD) from updating
507 the Volume data (with the exception of VolumeState).
508 - The btape fill command does not seem to use the Autochanger
509 - Make Windows installer default to system disk drive.
510 - Look at using ioctl(FIOBMAP, ...) on Linux, and
511 DeviceIoControl(..., FSCTL_QUERY_ALLOCATED_RANGES, ...) on
512 Win32 for sparse files.
513 http://www.flexhex.com/docs/articles/sparse-files.phtml
514 http://www.informatik.uni-frankfurt.de/~loizides/reiserfs/fibmap.html
515 - Directive: at <event> "command"
516 - Command: pycmd "command" generates "command" event. How to
517 attach to a specific job?
518 - Integrate Christopher's St. Bernard code.
519 - run_cmd() returns int; it should return JobId_t
520 - get_next_jobid_from_list() returns int; it should return JobId_t
521 - Document export LDFLAGS=-L/usr/lib64
522 - Don't attempt to restore from "Disabled" Volumes.
523 - Network error on Win32 should set Win32 error code.
524 - What happens when you rename a Disk Volume?
525 - Job retention period in a Pool (and hence Volume). The job would
527 - Look at -D_FORTIFY_SOURCE=2
528 - Add Win32 FileSet definition somewhere
529 - Look at fixing restore status stats in SD.
530 - Look at using ioctl(FIMAP) and FIGETBSZ for sparse files.
531 http://www.informatik.uni-frankfurt.de/~loizides/reiserfs/fibmap.html
532 - Implement a mode that says when a hard read error is
533 encountered, read many times (as it currently does), and if the
534 block cannot be read, skip to the next block, and try again. If
535 that fails, skip to the next file and try again, ...
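A rough sketch of that retry-then-skip read policy (the function name is hypothetical, not an existing Bacula routine; the attempts array just simulates successive device reads):

```c
#include <stdbool.h>
#include <stddef.h>

/* Retry a block up to max_retries times; "attempts" simulates the
 * outcome of each successive device read (true = read OK).  On
 * persistent failure the caller skips to the next block, and if that
 * also fails, to the next file. */
static bool read_block_with_retries(const bool *attempts, size_t n_attempts,
                                    int max_retries)
{
   for (int i = 0; i < max_retries && (size_t)i < n_attempts; i++) {
      if (attempts[i]) {
         return true;   /* read succeeded within the retry budget */
      }
   }
   return false;        /* hard error: caller skips to next block/file */
}
```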
537 create table LevelType (LevelType binary(1), LevelTypeLong tinyblob);
538 insert into LevelType (LevelType,LevelTypeLong) values
542 - Show files/second in client status output.
543 - new pool XXX with ScratchPoolId = MyScratchPool's PoolId and
544 let it fill itself, and RecyclePoolId = XXX's PoolId so I can
545 see if it becomes stable and I just have to supervise
547 - If I want to remove this pool, I set RecyclePoolId = MyScratchPool's
548 PoolId, and when it is empty remove it.
550 - Allow Check Labels to be used with Bacula labels.
551 - "Resuming" a failed backup (lost line for example) by using the
552 failed backup as a sort of "base" job.
554 - Email the user x days before the tape will need changing.
556 - Command to show next tape that will be used for a job even
557 if the job is not scheduled.
558 - From: Arunav Mandal <amandal@trolltech.com>
559 1. When jobs are running and Bacula for some reason crashes, or if I do a
560 restart, it should remember the jobs it was running before it crashed or
561 restarted; as of now I lose all jobs if I restart it.
563 2. When a client is disconnected midway through spooling (a laptop, for
564 instance), Bacula completely discards the spool. It would be nice if it could
565 write that spool to tape so there would be some backups for that client, if not all.
567 3. We have around 150 client machines; it would be nice to have an option to
568 upgrade the Bacula version on all the client machines automatically.
570 4. At least one connection should be reserved for bconsole, so that under heavy
571 load I can still connect to the Director via bconsole, which at times I can't
573 5. Another important feature that is missing: say at 10am I manually
574 start a backup of client abc, and it is a full backup since client abc has
575 no backup history; at 10:30am Bacula again automatically starts a backup of
576 client abc because that was in the schedule. So now we have two full
577 backups of the same client, and if we again try to start a full backup of
578 client abc, Bacula won't complain. That should be fixed.
580 - For Windows disaster recovery see http://unattended.sf.net/
581 - regardless of the retention period, Bacula will not prune the
582 last Full, Diff, or Inc File data until a month after the
583 retention period for the last Full backup that was done.
584 - update volume=xxx --- add status=Full
585 - Remove old spool files on startup.
586 - Exclude SD spool/working directory.
587 - Refuse to prune last valid Full backup. Same goes for Catalog.
589 - Make a callback when Rerun failed levels is called.
590 - Give Python program access to Scheduled jobs.
591 - Add setting Volume State via Python.
592 - Python script to save with Python, not save, save with Bacula.
593 - Python script to do backup.
595 - Change the Priority, Client, Storage, JobStatus (error)
596 at the start of a job.
597 - Why is SpoolDirectory = /home/bacula/spool; not reported
598 as an error when writing a DVD?
599 - Make bootstrap file handle multiple MediaTypes (SD)
600 - Remove all old Device resource code in Dir and code to pass it
601 back in SD -- better, rework it to pass back device statistics.
602 - Check locking of resources -- be sure to lock devices where previously
603 resources were locked.
604 - The last part is left in the spool dir.
607 - In restore don't compare byte count on a raw device -- directory
608 entry does not contain bytes.
611 - Max Vols limit in Pool off by one?
612 - Implement Files/Bytes,... stats for restore job.
613 - Implement Total Bytes Written, ... for restore job.
614 - Despool attributes simultaneously with data in a separate
615 thread, rejoined at end of data spooling.
616 - Implement new Console commands to allow offlining/reserving drives,
617 and possibly manipulating the autochanger (much asked for).
618 - Add start/end date editing in messages (%t %T, %e?) ...
619 - Add ClientDefs similar to JobDefs.
620 - Print more info when bextract -p accepts a bad block.
621 - Fix FD JobType to be set before RunBeforeJob in FD.
622 - Look at adding full Volume and Pool information to a Volume
623 label so that bscan can get *all* the info.
624 - If the user puts "Purge Oldest Volume = yes" or "Recycle Oldest Volume = yes"
625 and there is only one volume in the pool, refuse to do it -- otherwise
626 he fills the Volume, then immediately starts reusing it.
627 - Implement copies and stripes.
628 - Add history file to console.
629 - Each file on tape creates a JobMedia record. Peter has 4 million
630 files spread over 10000 tape files and four tapes. A restore takes
631 16 hours to build the restore list.
632 - Add an option to check if the file size changed during backup.
633 - Make sure SD deletes spool files on error exit.
634 - Delete old spool files when SD starts.
635 - When labeling tapes, if you enter 000026, Bacula uses
636 the tape index rather than the Volume name 000026.
637 - Add offline tape command to Bacula console.
639 Enter MediaId or Volume name: 32
640 Enter new Volume name: DLT-20Dec04
641 Automatically selected Pool: Default
642 Connecting to Storage daemon DLTDrive at 192.168.68.104:9103 ...
643 Sending relabel command from "DLT-28Jun03" to "DLT-20Dec04" ...
644 block.c:552 Write error at 0:0 on device /dev/nst0. ERR=Bad file descriptor.
645 Error writing final EOF to tape. This tape may not be readable.
646 dev.c:1207 ioctl MTWEOF error on /dev/nst0. ERR=Permission denied.
647 askdir.c:219 NULL Volume name. This shouldn't happen!!!
648 3912 Failed to label Volume: ERR=dev.c:1207 ioctl MTWEOF error on /dev/nst0. ERR=Permission denied.
649 Label command failed for Volume DLT-20Dec04.
650 Do not forget to mount the drive!!!
651 - Bug: if a job is manually scheduled to run later, it does not appear
652 in any status report and cannot be cancelled.
654 ==== Keeping track of deleted/new files ====
655 - To mark files as deleted, run essentially a Verify to disk, and
656 when a file is found missing (MarkId != JobId), then create
657 a new File record with FileIndex == -1. This could be done
658 by the FD at the same time as the backup.
660 My "trick" for keeping track of deletions is the following.
661 Assuming the user turns on this option, after all the files
662 have been backed up, but before the job has terminated, the
663 FD will make a pass through all the files and send their
664 names to the DIR (*exactly* the same as what a Verify job
665 currently does). This will probably be done at the same
666 time the files are being sent to the SD, avoiding a second
667 pass. The DIR will then compare that to what is stored in
668 the catalog. Any files in the catalog but not in what the
669 FD sent will receive a catalog File entry that indicates
670 that at that point in time the file was deleted. This
671 is either transmitted to the FD or simultaneously computed in
672 the FD, so that the FD can put a record on the tape that
673 indicates that the file has been deleted at this point.
674 A delete file entry could potentially be one with a FileIndex
675 of 0 or perhaps -1 (need to check if FileIndex is used for
676 some other thing as many of the Bacula fields are "overloaded"
679 During a restore, any file initially picked up by some
680 backup (Full, ...) then subsequently having a File entry
681 marked "delete" will be removed from the tree, so will not
682 be restored. If a file with the same name is later backed up OK, it
683 will be inserted in the tree -- this already happens. All
684 will be consistent except for possible changes during the
687 Since I'm on the subject, some of you may be wondering what
688 the utility of the in memory tree is if you are going to
689 restore everything (at least it comes up from time to time
690 on the list). Well, it is still *very* useful because it
691 allows only the last item found for a particular filename
692 (full path) to be entered into the tree, and thus if a file
693 is backed up 10 times, only the last copy will be restored.
694 I recently (last Friday) restored a complete directory, and
695 the Full and all the Differential and Incremental backups
696 spanned 3 Volumes. The first Volume was not even mounted
697 because all the files had been updated and hence backed up
698 since the Full backup was made. In this case, the tree
699 saved me a *lot* of time.
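The "last version wins" behavior described above can be modeled in a few lines; a toy sketch (a fixed array stands in for the real restore tree, and all names are illustrative):

```c
#include <string.h>

/* Toy model of the restore tree: inserting the same full path again
 * just replaces the stored JobId, so only the newest copy of a file
 * that was backed up many times ends up being restored. */
struct tree_entry {
   char path[256];
   int  jobid;
};

/* Insert or update "path"; returns the new entry count. */
static int tree_upsert(struct tree_entry *tree, int count, int max,
                       const char *path, int jobid)
{
   for (int i = 0; i < count; i++) {
      if (strcmp(tree[i].path, path) == 0) {
         tree[i].jobid = jobid;        /* newer backup replaces older */
         return count;
      }
   }
   if (count < max) {
      strncpy(tree[count].path, path, sizeof(tree[count].path) - 1);
      tree[count].path[sizeof(tree[count].path) - 1] = '\0';
      tree[count].jobid = jobid;
      count++;
   }
   return count;
}
```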
701 Make sure this information is stored on the tape too so
702 that it can be restored directly from the tape.
704 All the code (with the exception of formally generating and
705 saving the delete file entries) already exists in the Verify
706 Catalog command. It explicitly recognizes added/deleted files since
707 the last InitCatalog. It is more or less a "simple" matter of
708 taking that code and adapting it slightly to work for backups.
710 Comments from Martin Simmons (I think they are all covered):
711 Ok, that should cover the basics. There are a few issues though:
713 - Restore will depend on the catalog. I think it is better to include the
714 extra data in the backup as well, so it can be seen by bscan and bextract.
716 - I'm not sure if it will preserve multiple hard links to the same inode. Or
717 maybe adding or removing links will cause the data to be dumped again?
719 - I'm not sure if it will handle renamed directories. Possibly it will work
720 by dumping the whole tree under a renamed directory?
722 - It remains to be seen how the backup performance of the DIR will be
723 affected when comparing the catalog for a large filesystem.
725 1. Use the current Director in-memory tree code (very fast), but currently in
726 memory. It probably could be paged.
728 2. Use some DB such as Berkeley DB or SQLite. SQLite is already compiled and
729 built for Win32, and it is something we could compile into the program.
731 3. Implement our own custom DB code.
733 Note, by appropriate use of Directives in the Director, we can dynamically
734 decide if the work is done in the Director or in the FD, and we can even
735 allow the user to choose.
737 === most recent accurate file backup/restore ===
738 Here is a sketch (i.e. more details must be filled in later) that I recently
739 made of an algorithm for doing Accurate Backup.
741 1. Dir informs FD that it is doing an Accurate backup and lookup done by
744 2. FD passes through the file system doing a normal backup based on normal
745 conditions, recording the names of all files and their attributes, and
746 indicating which files were backed up. This is very similar to what Verify
749 3. The Director receives the two lists of files at the end of the FD backup.
750 One of files backed up, and one of files not backed up. It then looks up all the
751 files not backed up (using Verify style code).
753 4. The Dir sends the FD a list of:
754 a. Additional files to back up (based on user-specified criteria: name, size,
755 inode, date, hash, ...).
758 5. Dir deletes the list of files not backed up.
760 6. FD backs up the additional files, generates a list of those backed up, and sends
761 it to the Director, which adds it to the list of files backed up. The list
762 is now complete and current.
764 7. The FD generates delete records for all the files that were deleted and
767 8. The Dir deletes the previous CurrentBackup list, and then does a
768 transaction insert of the new list that it has.
770 9. The rest works as before ...
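The list exchange in the steps above can be modeled with a small Python
sketch. This is only a toy model of the data flow; none of these function
or variable names exist in Bacula.

```python
# Toy model of the Accurate Backup list exchange (steps 1-9 above).
# "filesystem" and "current_backup" map file name -> attributes.

def fd_backup(filesystem, should_backup):
    """Step 2: walk the FS, back up what normal conditions select,
    and record which files were and were not backed up."""
    backed_up, not_backed_up = [], []
    for name, attrs in filesystem.items():
        (backed_up if should_backup(attrs) else not_backed_up).append(name)
    return backed_up, not_backed_up

def dir_select_additional(not_backed_up, current_backup, criteria):
    """Steps 3-4: Director looks up the not-backed-up files (Verify-style)
    and returns the additional files meeting user-specified criteria."""
    return [n for n in not_backed_up if criteria(current_backup.get(n))]

def accurate_backup(filesystem, current_backup, should_backup, criteria):
    backed_up, not_backed_up = fd_backup(filesystem, should_backup)
    # Step 6: FD backs up the additional files the Dir asked for.
    backed_up += dir_select_additional(not_backed_up, current_backup, criteria)
    # Step 7: delete records for files that vanished since last backup.
    deleted = [n for n in current_backup if n not in filesystem]
    # Step 8: the Dir replaces the previous CurrentBackup list wholesale.
    new_current = dict(filesystem)
    return backed_up, deleted, new_current
```

For example, with a criteria of "back up anything the Director has never
seen", a file present in the catalog but gone from disk comes back only as
a delete record.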
Two new tables needed.

1. CurrentBackupId table that contains Client, JobName, FileSet, and a unique
   BackupId. This is created during a Full save, and the BackupId can be set
   to the JobId of the Full save. It will remain the same until another Full
   backup is done. That is, when new records are added during a Differential
   or Incremental, they must use the same BackupId.

2. CurrentBackup table that contains essentially a File record (less a number
   of fields, but with a few extra fields) -- e.g. a flag that the File was
   backed up by a Full save (this permits doing a Differential). The unique
   BackupId allows us to look up the CurrentBackup for a particular Client,
   JobName, FileSet using that unique BackupId as the key, so this table must
   be indexed by the BackupId.

Note, any time a file is saved by the FD other than during a Full save, the
Full save flag is cleared. When doing a Differential backup, if a file has
the Full save flag set, it is skipped; otherwise it is backed up. For an
Incremental backup, we check to see if the file has changed since the last
time we backed it up.

Deleted files should have FileIndex == 0.
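A minimal SQLite sketch of the two proposed tables and the Full-save-flag
rule. The column set is guessed from the description above; it is not the
real Bacula catalog schema.

```python
import sqlite3

# Hypothetical schema for the two proposed tables (names guessed above).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE CurrentBackupId (
    BackupId  INTEGER PRIMARY KEY,   -- set to the JobId of the Full save
    Client    TEXT,
    JobName   TEXT,
    FileSet   TEXT
);
CREATE TABLE CurrentBackup (
    BackupId  INTEGER,               -- key back to CurrentBackupId
    FileIndex INTEGER,               -- 0 marks a deleted file
    Name      TEXT,
    FullSave  INTEGER                -- 1 if backed up by the Full save
);
CREATE INDEX cb_backupid ON CurrentBackup (BackupId);
""")

# During a Full save: create the BackupId row, flag every file FullSave=1.
conn.execute("INSERT INTO CurrentBackupId VALUES (1234, 'client1', 'Nightly', 'FullSet')")
conn.execute("INSERT INTO CurrentBackup VALUES (1234, 1, '/etc/passwd', 1)")

# Any later non-Full save of the file clears the FullSave flag ...
conn.execute("UPDATE CurrentBackup SET FullSave = 0"
             " WHERE BackupId = 1234 AND Name = '/etc/passwd'")

# ... so a Differential backs up exactly the rows with FullSave = 0.
rows = conn.execute("SELECT Name FROM CurrentBackup"
                    " WHERE BackupId = 1234 AND FullSave = 0").fetchall()
```

The single index on BackupId matches the lookup pattern described: one key
retrieves the whole CurrentBackup list for a Client/JobName/FileSet.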
How about introducing a Type = MgmtPolicy job type? That job type would
be responsible for scanning the Bacula environment looking for specific
conditions, and submitting the appropriate jobs for implementing said

  Name = "Migration-Policy"
  Policy Selection Job Type = Migrate
  Scope = "<keyword> <operator> <regexp>"
  Threshold = "<keyword> <operator> <regexp>"
  Job Template = <template-name>

Where <keyword> is any legal job keyword, <operator> is a comparison
operator (=, <, >, !=, logical operators AND/OR/NOT) and <regexp> is an
appropriate regexp. I could see an argument for Scope and Threshold
being SQL queries if we want to support full flexibility. The
Migration-Policy job would then get scheduled as frequently as a site
felt necessary (suggested default: every 15 minutes).

  Name = "Migration-Policy"
  Policy Selection Job Type = Migration
  Threshold = "Migration Selection Type = LowestUtil"
  Job Template = "MigrationTemplate"

This would select all pools for examination and generate a job based on
MigrationTemplate to automatically select the volume with the lowest
usage and migrate its contents to the NextPool defined for that pool.

This policy abstraction would be really handy for adjusting the behavior
of Bacula according to site-selectable criteria (one thing that pops
into mind is Amanda's ability to automatically adjust backup levels
depending on various criteria).
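A toy evaluator for the proposed "<keyword> <operator> <regexp>" clauses
might behave like the following. MgmtPolicy is only a proposal, so
everything here is hypothetical, including the clause grammar.

```python
import re

def eval_clause(clause, resource):
    """Evaluate one '<keyword> <operator> <regexp>' clause, e.g.
    'Pool = Def.*', against a dict describing one resource."""
    keyword, op, regexp = clause.split(None, 2)
    value = str(resource.get(keyword, ""))
    matched = re.fullmatch(regexp, value) is not None
    if op == "=":
        return matched
    if op == "!=":
        return not matched
    raise ValueError("unsupported operator: " + op)

def select_resources(scope, resources):
    """Return the resources a Migration-Policy job would examine,
    i.e. those whose Scope clause evaluates true."""
    return [r for r in resources if eval_clause(scope, r)]
```

With Scope = "Pool = .*" every pool is selected, matching the "select all
pools for examination" behavior of the example above; a narrower regexp
restricts the policy to matching pools only.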
- Add Pool/Storage override regression test.
- Add delete JobId to regression.
- Add a regression test for dbcheck.
- New test to add bscan to four-concurrent-jobs regression,
  i.e. after the four-concurrent jobs zap the
  database as is done in the bscan-test, then use bscan to
  restore the database, do a restore and compare with the
- Add restore of specific JobId to regression (item 3
  on the restore prompt).
- Add IPv6 to regression.
- Add database test to regression. Test each function like delete,
- AntiVir can slow down backups on Win32 systems.
- Win32 systems with FAT32 can be much slower than NTFS for
  more than 1000 files per directory.

- A HOLD command to stop all jobs from starting.
- A PAUSE command to pause all running jobs ==> release the
- Media Type = LTO,LTO-2,LTO-3
  Media Type Read = LTO,LTO-2,LTO-3
  Media Type Write = LTO-2,LTO-3
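The proposed split between readable and writable media types could behave
like this sketch (the Read/Write directives above are proposals, not
existing Bacula directives):

```python
# Hypothetical capability sets for one drive, mirroring the example
# directives above: older LTO generations are readable but not writable.
DRIVE = {
    "read":  {"LTO", "LTO-2", "LTO-3"},
    "write": {"LTO-2", "LTO-3"},
}

def can_use(drive, media_type, for_write):
    """True if the drive may be used for a Volume of this media type,
    distinguishing read (restore) from write (append) access."""
    caps = drive["write"] if for_write else drive["read"]
    return media_type in caps
```

Such a split would let a newer drive restore from legacy volumes while
never being selected to append to them.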
=== From Carsten Menke <bootsy52@gmx.net>

Following is a list of things that, in the situations I'm faced with,
could be useful enhancements to Bacula, and which I'm certain other users
would benefit from as well.

1. NextJob/NextJobs Directive within a Job Resource in the form of
   NextJobs = job1,job2.

   I currently solve the problem of running multiple jobs one after
   another by setting the Max Wait Time for a job to 8 hours and giving
   the jobs different Priorities. However, there are scenarios where
   one job directly depends on another job, so if the former job fails,
   the job after it need not be run,
   while maybe other jobs should run despite that.

   A Backup job and a Verify job: if the backup job fails there is no need
   to run the verify job, as the backup job already failed. However, one
   may still like to back up the Catalog to disk even though the main
   backup job failed.

   I see that this is related to the Event Handlers which are on the ToDo
   list; it may also be a good idea to check the return value and
   execute different actions based on the return value.

3. offline capability for bconsole

   Currently I use a script which I execute within the last Job via the
   RunAfterJob Directive, to release and eject the tape.
   So I have to call bconsole "release=Storage-Name" and afterwards
   mt -f /dev/nst0 eject to get the tape out.

   If I have multiple Storage Devices, then these may not be /dev/nst0 and
   I have to modify the script or call it with parameters etc.
   This would actually not be needed, as everything is already defined
   in bacula-sd.conf, and if I can invoke bconsole with the
   storage name via $1 in the script then I'm done and information is

4. %s for Storage Name added to the chars being substituted in "RunAfterJob"

   For the reason mentioned in 3., to have the ability to call a
   script with /scripts/foobar %s and in the script use $1
   to pass the Storage Name to bconsole.

5. Setting Volume State within a Job Resource

   Instead of using "Maximum Volume Jobs" in the Pool Resource,
   I would have the possibility to define
   in a Job Resource that after this certain job is run, the Volume State
   should be set to "Volume State = Used"; this gives more flexibility (IMHO).

6. Localization of Bacula Messages

   Unfortunately, many, many people I work with don't speak English very
   well. So if at least the reporting messages were localized, they
   would understand that they have to change the tape, etc.

   I volunteer to do the German translations, and if I can convince my wife,
   also French and Morre (a West African language).

7. OK, this is evil, probably bound to security risks and maybe not possible
   due to the design of Bacula.

   Implementation of backticks ( `command` ) for shell command execution in
   the "Label Format" Directive.

   Currently I have defined BACULA_DAY_OF_WEEK="day1|day2..." resulting in
   Label Format = "HolyBackup-${BACULA_DAY_OF_WEEK[${WeekDay}]}". If I could
   use backticks, then I could use Label Format = "HolyBackup-`date +%A`" to
   have the localized name for the day of the week appended to the
   format string. Then I have the tape labeled automatically with the
   weekday name in the correct language.
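Item 7's backtick expansion could be sketched as below. This is
hypothetical: Bacula's Label Format does not support backticks, and the
security concern Carsten raises (arbitrary shell execution from a config
file) applies to exactly this kind of code.

```python
import re
import subprocess

def expand_backticks(fmt):
    """Replace each `command` in fmt with the command's stdout,
    mimicking shell backtick substitution for a label format string."""
    def run(match):
        out = subprocess.run(match.group(1), shell=True,
                             capture_output=True, text=True)
        return out.stdout.strip()
    return re.sub(r"`([^`]*)`", run, fmt)
```

So expand_backticks("HolyBackup-`date +%A`") would yield the localized
weekday name appended to the format string, as requested above.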
- Make output from status use html table tags for nicely
  presenting in a browser.
- Can one write tapes faster with 8192 byte block sizes?
- Document security problems with the same password for everyone in
  rpm and Win32 releases.
- Browse generations of files.
- I've seen an error when my catalog's File table fills up. I
  then have to recreate the File table with a larger maximum row
  size. Relevant information is at
  http://dev.mysql.com/doc/mysql/en/Full_table.html ; I think the
  "Installing and Configuring MySQL" chapter should talk a bit
  about this potential problem, and recommend a solution.
- For Solaris, must use POSIX awk.
- Want speed of writing to tape while despooling.
- Supported autochanger:
  Wangtek 6525ES (SCSI-1 QIC drive, 525MB), under Linux 2.4.something,
  bacula 1.36.0/1 works with blocksize 16k INSIDE bacula-sd.conf.
- Add regex from http://www.pcre.org to Bacula for Win32.
- Use only shell tools, no make, in the CDROM package.
- Include within Include -- does it work?
- Implement a Pool of type Cleaning?
- Implement VolReadTime and VolWriteTime in SD.
- Modify Backing up Your Database to include a bootstrap file.
- Think about making certain database errors fatal.
- Look at correcting the time jump in the scheduler for daylight
  savings time changes.
- Add a "real" timer to network connections.
- Promote to Full = Time period.
- Check dates entered by user for correctness (month/day/... ranges).
- Compress restore Volume listing by date and first file.
- Look at patches/bacula_db.b2z postgresql that loops during restore.
- Perhaps add read/write programs and/or plugins to FileSets.
- How to handle backing up portables ...
- Add some sort of guaranteed Interval for upgrading jobs.
- Can we write the state file after every job terminates? On Win32
  the system crashes and the state file is not updated.
Documentation to do: (any release, a little bit at a time)
- Doc to do unmount before removing magazine.
- Alternative to static linking: "ldd prog", save all binaries listed,
  restore them and point LD_LIBRARY_PATH to them.
- Document adding "</dev/null >/dev/null 2>&1" to the bacula-fd command line.
- Document query file format.
- Add more documentation for bsr files.
- Document problems with Verify and pruning.
- Document how to use multiple databases.
- VXA drives have a "cleaning required"
  indicator, but Exabyte recommends preventive cleaning after every 75

  In this context, it should be noted that Exabyte has a command-line
  vxatool utility available for free download. (The current version is
  vxatool-3.72.) It can get diagnostic info, read, write and erase tapes,
  test the drive, unload tapes, change drive settings, flash new firmware,

  Of particular interest in this context is that vxatool <device> -i will
  report, among other details, the time since last cleaning in tape motion
  minutes. This information can be retrieved (and settings changed, for
  that matter) through the generic-SCSI device even when Bacula has the
  regular tape device locked. (Needless to say, I don't recommend
  changing tape settings while a job is running.)
- Lookup HP cleaning recommendations.
- Lookup HP tape replacement recommendations (see troubleshooting autochanger).
- Document doing table repair.
===================================
- Add macro expansions in JobDefs.
  Run Before Job = "SomeFile %{Level} %{Client}"
  Write Bootstrap = "/some/dir/%{JobName}_%{Client}.bsr"
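The %{Name} macro expansion proposed above might behave like this sketch
(the expansion helper is hypothetical; only the %{...} syntax comes from
the example):

```python
import re

def expand_macros(text, values):
    """Replace each %{Name} with values['Name']; unknown macros are
    left untouched rather than expanded to an empty string."""
    return re.sub(r"%\{(\w+)\}",
                  lambda m: values.get(m.group(1), m.group(0)),
                  text)
```

For example, expanding the Write Bootstrap line with JobName and Client
taken from the running job yields a per-job, per-client .bsr path.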
- Use non-blocking network I/O, but if no data is available, use
- Use gather write() for network I/O.
- Autorestart on crash.
- Add bandwidth limiting.
- Add acks every once in a while from the SD to keep
  the line from timing out.
- When an error in input occurs and conio beeps, you can back
  up through the prompt.
- Detect fixed tape block mode during positioning by looking at
  block numbers in btape "test". Possibly adjust in Bacula.
- Fix list volumes to output volume retention in some other
  units, perhaps via a directive.
- Allow Simultaneous Priorities = yes => run up to Max concurrent jobs even
  with multiple priorities.
- If you use restore replace=never, the directory attributes for
  non-existent directories will not be restored properly.

- see lzma401.zip in others directory for new compression
- Allow the user to select JobType for manual pruning/purging.
- bscan does not put first of two volumes back with all info in
- Implement the FreeBSD nodump flag in chflags.
- Figure out how to make named console messages go only to that
  console and to the non-restricted console (new console class?).
- Make restricted console prompt for password if *ask* is set or
  perhaps if password is undefined.
- Implement "from ISO-date/time every x hours/days/weeks/months" in
==== from Marc Schoechlin
- the help command should be more verbose
  (it should explain the parameters of the different
  -> it's time-consuming to consult the manual any time
     you need a special parameter
  -> maybe it's easier to maintain this if the
     descriptions of those commands are outsourced to
- if the password is not configured in bconsole.conf
  you should be asked for it.
  -> sometimes you would like to do a restore on a customer machine
     which shouldn't know the password for bacula.
  -> adding the password to the file makes it easy for admins
     to forget to remove the password after usage
  the protection of that file is less important
- long listed output of commands should be scrollable
  like the unix more/less command does
  -> if someone runs 200 and more machines, the lists could
     be a little long and complex
- command output should be shown column by column
  to reduce scrolling and to increase clarity
- lsmark should list the selected files with full
- wildcards for selecting files and directories would be nice
- any action should be interruptible with Ctrl+C
- command expansion would be pretty cool
- When the replace Never option is set, new directory permissions
  are not restored. See bug 213. To fix this requires creating a
  list of newly restored directories so that those directory
  permissions *can* be restored.
- Add prune all command.
- Document the fact that purge can destroy a part of a restore by purging
  one volume while others remain valid -- perhaps mark Jobs.
- Add multiple-media-types.txt
- look at mxt-changer.html
- Make ? do a help command (no return needed).
- Implement restore directory.
- Document streams and how to implement them.
- Try not to re-backup a file if a new hard link is added.
- Add feature to backup hard links only, but not the data.
- Fix stream handling to be simpler.
- Add Priority and Bootstrap to Run a Job.
- Eliminate Restore "Run Restore Job" prompt by allowing new "run command
- Remove View FileSet button from Run a Job dialog.
- Handle prompt for restore job at end of Restore command.
- Add display of total selected files to Restore window.
- Add tree pane to left of window.
- Add progress meter.
- Max wait time or max run time causes seg fault -- see runtime-bug.txt
- Add message to user to check for fixed block size when the forward
  space test fails in btape.
- When unmarking a directory, check if all files below are unmarked and
  then remove the + flag -- in the restore tree.
- Possibly implement: Action = Unmount Device="TapeDrive1" in Admin jobs.
- Setup lrrd graphs: (http://www.linpro.no/projects/lrrd/) Mike Acar.
- Revisit the question of multiple Volumes (disk) on a single device.
- Add a block copy option to bcopy.
- Finish work on Gnome restore GUI.
- Fix "llist jobid=xx" where no fileset or client exists.
- For each job type (Admin, Restore, ...) require only the really necessary
  fields.
- Pass Director resource name as an option to the Console.
- Add a "batch" mode to the Console (no unsolicited queries, ...).
- Add a .list all files in the restore tree (probably also a list all files)
  Do both a long and short form.
- Allow browsing the catalog to see all versions of a file (with
  stat data on each file).
- Restore attributes of directory if replace=never set but directory
- Use SHA1 on authentication if possible.
- See comtest-xxx.zip for Windows code to talk to USB.
- Add John's appended files:
  Appended = { /files/server/logs/http/*log }
  and such files would be treated as follows. On a FULL backup, they would
  be backed up like any other file. On an INCREMENTAL backup, where a
  previous INCREMENTAL or FULL was already in the catalogue and the length
  of the file was greater than the length of the last backup, only the data
  added since the last backup will be dumped. On an INCREMENTAL backup, if
  the length of the file is less than the length of the file with the same
  name last backed up, the complete file is dumped. On Windows systems, with
  creation date of files, we can be even smarter about this and not count
  entirely upon the length. On a restore, the full and all incrementals
  since it will be applied in sequence to restore the file.
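The appended-file rules above reduce to a small decision function. A sketch
(hypothetical, not Bacula code), comparing the current length against the
length recorded at the last backup:

```python
def appended_action(prev_len, cur_len):
    """Decide what an INCREMENTAL should do with an 'Appended' file.
    Returns (action, offset): dump everything from offset, or skip."""
    if prev_len is None:
        return ("full", 0)            # never backed up: dump the whole file
    if cur_len > prev_len:
        return ("append", prev_len)   # dump only the bytes added since
    if cur_len < prev_len:
        return ("full", 0)            # file shrank: dump the complete file
    return ("skip", None)             # length unchanged
```

On restore, as the text says, the full dump and the append fragments are
applied in sequence to rebuild the file.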
- Check new HAVE_WIN32 open bits.
- Check if the tape has moved before writing.
- Handling removable disks -- see below:
- Keep track of tape use time, and report when cleaning is necessary.
- Add FromClient and ToClient keywords on restore command (or
  BackupClient RestoreClient).
- Implement a JobSet, which groups any number of jobs. If the
  JobSet is started, all the jobs are started together.
  Allow Pool, Level, and Schedule overrides.
- Enhance cancel to timeout BSOCK packets after a specific delay.
- Do scheduling by UTC using gmtime_r() in run_conf, scheduler, and
  ua_status. !!! Thanks to Alan Brown for this tip.
- Look at updating Volume Jobs so that Max Volume Jobs = 1 will work
  correctly for multiple simultaneous jobs.
- Correct code so that FileSet MD5 is calculated for < and | filename
- Implement the Media record flag that indicates that the Volume does disk
- Implement VolAddr, which is used when Volume is addressed like a disk,
  and form it from VolFile and VolBlock.
- Make multiple restore jobs for multiple media types specifying
  the proper storage type.
- Fix fast block rejection (stored/read_record.c:118). It passes a null
  pointer (rec) to try_repositioning().
- Look at extracting Win data from BackupRead.
- Implement RestoreJobRetention? Maybe better "JobRetention" in a Job,
  which would take precedence over the Catalog "JobRetention".
- Implement Label Format in Add and Label console commands.
- Possibly up network buffers to 65K. Put on variable.
- Put email tape request delays on one or more variables. User wants
  to cancel the job after a certain time interval. Maximum Mount Wait?
- Job, Client, Device, Pool, or Volume?
  Is it possible to make this a directive which is *optional* in multiple
  resources, like Level? If so, I think I'd make it an optional directive
  in Job, Client, and Pool, with precedence such that Job overrides Client
  which in turn overrides Pool.

- New Storage specifications:
  - Want to write to multiple storage devices simultaneously
  - Want to write to multiple storage devices sequentially (in one job)
  - Want to read/write simultaneously
  - Key is MediaType -- it must match

  Passed to SD as a sort of BSR record called Storage Specification
    MediaType -> Next MediaType
    Device -> Next Device
  Allow multiple Storage specifications
  Allow multiple Pool specifications (note, Pool currently
  Allow multiple MediaType specifications in Dir conf
  Allow multiple Device specifications in Dir conf
  Perhaps keep this in a single SSR
  Tie a Volume to a specific device by using a MediaType that
  is contained in only one device.
  In SD allow Device to have multiple MediaTypes
- Ideas from Jerry Scharf:
  First let's point out some big pluses that bacula has for this
    more importantly it's active. Thank you so much for that
    even more important, it's not flaky
    it has an open access catalog, opening many possibilities
    it's pushing toward heterogeneous systems capability
  Macintosh file client
    macs are an interesting niche, but I fear a server is a rathole
  working bare iron recovery for windows
  the option for inc/diff backups not reset on fileset revision
    a) use both change and inode update time against base time
    b) do the full catalog check (expensive but accurate)
  sizing guide (how much system is needed to back up N systems/files)
  consultants on using bacula in building a disaster recovery system
  an integration guide
    or how to get at fancy things that one could do with bacula
  logwatch code for bacula logs (or similar)
  linux distro inclusion of bacula (brings good and bad, but necessary)
  win2k/XP server capability (icky but you asked)
  support for Oracle database ??
- Look at adding SQL server and Exchange support for Windows.
- Make dev->file and dev->block_num signed integers so that -1 can
  be an invalid value, which happens with BSR.
- Create VolAddr for disk files in place of VolFile and VolBlock. This
  is needed to properly specify ranges.
- Add progress of files/bytes to SD and FD.
- Print warning message if FileId > 4 billion.
- Do a "messages" before the first prompt in Console.
- Client does not show busy during Estimate command.
- Implement Console mtx commands.
- Implement a Mount Command and an Unmount Command where
  the users could specify a system command to be performed
  to do the mount, after which Bacula could attempt to
  read the device. This is for removable media such as a CDROM.
  - Most likely, this mount command would be invoked explicitly
    by the user using the current Console "mount" and "unmount"
    commands -- the Storage Daemon would do the right thing
    depending on the exact nature of the device.
  - As with tape drives, when Bacula wanted a new removable
    disk mounted, it would unmount the old one, and send a message
    to the user, who would then use "mount" as described above
    once he had actually inserted the disk.
- Implement dump/print label to UA.
- Spool to disk only when the tape is full, then when a tape is hung move
- bextract is sending everything to the log file ****FIXME****
- Allow multiple Storage specifications (or multiple names on
  a single Storage specification) in the Job record. Thus a job
  can be backed up to a number of storage devices.
- Implement some way for the File daemon to contact the Director
  to start a job or pass its DHCP obtained IP number.
- Implement a query tape prompt/replace feature for a console.
- Copy console @ code to gnome2-console.
- Make sure that Bacula rechecks the tape after the 20 min wait.
- Set IO_NOWAIT on Bacula TCP/IP packets.
- Try doing a raw partition backup and restore by mounting a
- From Lars Kellers:
  Yes, it would allow highly automatic requests for new tapes. If a
  tape is empty, bacula reads the barcodes (native or simulated), and if
  an unused tape is found, it runs the label command with all the
  necessary parameters.

  By the way, can bacula automatically "move" an empty/purged volume, say
  in the "short" pool, to the "long" pool if this pool runs out of volume
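Lars' automatic-labeling idea amounts to labeling every barcode that has
no catalog entry yet. A sketch; the generated bconsole command line is
illustrative only:

```python
def volumes_to_label(barcodes, catalog_volumes):
    """Barcodes with no matching Volume in the catalog are the
    'unused tapes' that are candidates for automatic labeling."""
    return [bc for bc in barcodes if bc not in catalog_volumes]

def label_commands(barcodes, catalog_volumes, pool):
    """Build one label command per unused tape (command syntax is
    illustrative, not a guaranteed bconsole invocation)."""
    return ["label volume=%s pool=%s" % (bc, pool)
            for bc in volumes_to_label(barcodes, catalog_volumes)]
```

A Director-side job could run this whenever a pool runs short of
appendable volumes, which is the trigger Lars describes.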
- What to do about "list files job=xxx".
- Look at how fuser works and /proc/PID/fd; that is how Nic found the
  file descriptor leak in Bacula.
- Implement WrapCounters in Counters.
- Add heartbeat from FD to SD if hb interval expires.
- Can we dynamically change FileSets?
- If pool specified to label command and Label Format is specified,
  automatically generate the Volume name.
- Why can't SQL do the filename sort for restore?
- Add ExhaustiveRestoreSearch.
- Look at the possibility of loading only the necessary
  data into the restore tree (i.e. do it one directory at a
  time as the user walks through the tree).
- Possibly use the hash code if the user selects all for a restore command.
- Fix "restore all" to bypass building the tree.
- Prohibit backing up archive device (findlib/find_one.c:128)
- Implement Release Device in the Job resource to unmount a drive.
- Implement Acquire Device in the Job resource to mount a drive;
  be sure this works with admin jobs so that the user can get
  prompted to insert the correct tape. Possibly some way to say to
  run the job but don't save the files.
- Make things like list where a file is saved case independent for
- Use autochanger to handle multiple devices.
- Implement a Recycle command.
- Start working on Base jobs.
- Implement UnsavedFiles DB record.
- From Phil Stracchino:
  It would probably be a per-client option, and would be called
  something like, say, "Automatically purge obsoleted jobs". What it
  would do is, when you successfully complete a Differential backup of a
  client, it would automatically purge all Incremental backups for that
  client that are rendered redundant by that Differential. Likewise,
  when a Full backup on a client completed, it would automatically purge
  all Differential and Incremental jobs obsoleted by that Full backup.
  This would let people minimize the number of tapes they're keeping on
  hand without having to master the art of retention times.
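Phil's rule can be sketched as: a newly completed job obsoletes every
earlier job of the same client at a strictly less comprehensive level.
This is a simplification (it ignores Full-cycle boundaries), and all
names here are hypothetical:

```python
# Jobs are (jobid, level, end_time) tuples for one client.
LEVELS = {"F": 0, "D": 1, "I": 2}  # Full < Differential < Incremental

def obsoleted_by(new_job, history):
    """Jobs made redundant by the newly completed job: earlier jobs
    whose level is strictly less comprehensive than the new job's."""
    _, new_level, new_time = new_job
    return [j for j in history
            if j[2] < new_time and LEVELS[j[1]] > LEVELS[new_level]]
```

So a completed Differential purges the earlier Incrementals, and a
completed Full purges the earlier Differentials and Incrementals, matching
the behavior described above.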
- When doing a Backup, send all attributes back to the Director, who
  would then figure out what files have been deleted.
- Currently in mount.c:236 the SD simply creates a Volume. It should have
  explicit permission to do so. It should also mark the tape in error
  if there is an error.
- Cancel waiting for Client connect in SD if FD goes away.

- Implement timeout in response() when it should come quickly.
- Implement a Slot priority (loaded/not loaded).
- Implement "vacation" Incremental only saves.
- Implement create "FileSet"?
- Add prefixlinks to where or not where absolute links to FD.
- Issue message to mount a new tape before the rewind.
- Simplified client job initiation for portables.
- If SD cannot open a drive, make it periodically retry.
- Add more of the config info to the tape label.

- Refine SD waiting output:
    Device is being positioned
    > Device is being positioned for append
    > Device is being positioned to file x
- Figure out some way to estimate output size and to avoid splitting
  a backup across two Volumes -- this could be useful for writing CDROMs
  where you really prefer not to have it split -- not serious.
- Have SD compute MD5 or SHA1 and compare to what FD computes.
- Make VolumeToCatalog calculate an MD5 or SHA1 from the
  actual data on the Volume and compare it.
- Make bcopy read through bad tape records.
- Program files (i.e. execute a program to read/write files).
  Pass read date of last backup, size of file last time.
- Add Signature type to File DB record.
- CD into subdirectory when open()ing files for backup to
  speed up things. Test with testfind().
- Priority job to go to top of list.
- Why are save/restore of device different sizes (sparse?) Yup! Fix it.
- Implement some way for the Console to dynamically create a job.
- Solaris -I on tar for include list.
- Need a verbose mode in restore, perhaps to bsr.
- bscan without -v is too quiet -- perhaps show jobs.
- Add code to reject whole blocks if not wanted on restore.
- Check if we can increase Bacula FD priority in Win2000.
- Make sure the MaxVolFiles is fully implemented in SD.
- Check if both CatalogFiles and UseCatalog are set to SD.
- Possibly add email to Watchdog if drive is unmounted too
  long and a job is waiting on the drive.
- After unmount, if restore job started, ask to mount.
- Add UA rc and history files.
- Put termcap (used by console) in ./configure and
  allow --with-termcap-dir.
- Fix Autoprune for Volumes to respect need for full save.
- Compare tape to Client files (attributes, or attributes and data).
- Make all database Ids 64 bit.
- Allow console commands to detach or run in background.
- Add SD message variables to control operator wait time
  - Maximum Operator Wait
  - Minimum Message Interval
  - Maximum Message Interval
- Send Operator message when cannot read tape label.
- Verify level=Volume (scan only), level=Data (compare of data to file).
  Verify level=Catalog, level=InitCatalog
- Add keyword search to show command in Console.
- Events: tape has more than xxx bytes.
- Complete code in Bacula Resources -- this will permit
  reading a new config file at any time.
- Handle ctl-c in Console.
- Implement script driven addition of File daemon to config files.
- Think about how to make Bacula work better with File (non-tape) archives.
- Write Unix emulator for Windows.
- Put memory utilization in Status output of each daemon
  if full status requested or if some level of debug on.
- Make database type selectable by .conf files, i.e. at runtime.
- Set flag for uname -a. Add to Volume label.
- Restore files modified after date.
- SET LD_RUN_PATH=$HOME/mysql/lib/mysql
- Remove duplicate fields from jcr (e.g. jcr.level and jcr.jr.Level, ...).
- Timeout a job or terminate if link goes down, or reopen link and query.
- Concept of precious tapes (cannot be reused).
- Make bcopy copy with a single tape drive.
- Permit changing ownership during restore.
> My suggestion: Add a feature on the systray menu-icon menu to request
> an immediate backup now. This would be useful for laptop users who may
> not be on the network when the regular scheduled backup is run.

> My wife's suggestion: Add a setting to the win32 client to allow it to
> shut down the machine after backup is complete (after, of course,
> displaying a "System will shut down in one minute, click here to cancel"
> warning dialog). This would be useful for sites that want user
> workstations to be shut down overnight to save power.

- Autolabel should be specified by DIR instead of SD.
- Add media capacity
- AutoScan (check checksum of tape)
- Format command = "format /dev/nst0"
- Seek resolution (usually corresponds to buffer size)
- EODErrorCode=ENOSPC or code
- Partial Read error code
- Partial write error code
- Nonformatted read error
- Nonformatted write error
- WriteProtected error
- IgnoreCloseErrors=yes
- FD sends unsaved file list to Director at end of job (see
- File daemon should build a list of files skipped, and then
  at end of save retry and report any errors.
- Write a Storage daemon that uses pipes and
  standard Unix programs to write to the tape.
- Need something that monitors the JCR queue and
  times out jobs by asking the daemons where they are.
- Enhance Jmsg code to permit buffering and saving to disk.
- device driver = "xxxx" for drives.
- Verify from Volume.
- Ensure that /dev/null works.
- Need report class for messages. Perhaps
  report resource where report=group of messages.
- Enhance scan_attrib and rename scan_jobtype, and
  fill in code for "since" option.
- Director needs a time after which the report status is sent
  anyway -- or better yet, a retry time for the job.
- Don't reschedule a job if the previous incarnation is still running.
- Some way to automatically backup everything is needed????
- Need a structure for pending actions:
  - termination status (part of buffered msgs?)
  Read, Write, Clean, Delete
- Login to Bacula; Bacula users with different permissions:
  owner, group, user, quotas
- Store info on each file system type (probably in the job header on tape).
  This could be the output of df; or perhaps some sort of /etc/mtab record.
1478 ========= ideas ===============
1479 From: "Jerry K. Schieffer" <jerry@skylinetechnology.com>
1480 To: <kern@sibbald.com>
1481 Subject: RE: [Bacula-users] future large programming jobs
1482 Date: Thu, 26 Feb 2004 11:34:54 -0600
1484 I noticed the subject thread and thought I would offer the following
1485 merely as sources of ideas, i.e. something to think about, not even as
1486 strong as a request. In my former life (before retiring) I often
1487 dealt with backups and storage management issues/products as a
1488 developer and as a consultant. I am currently migrating my personal
1489 network from amanda to bacula specifically because of the ability to
1490 cross media boundaries during storing backups.
1491 Are you familiar with the commercial product called ADSM (I think IBM
1492 now sells it under the Tivoli label)? It has a couple of interesting
1493 ideas that may apply to the following topics.
1495 1. Migration: Consider that when you need to restore a system, there
1496 may be pressure to hurry. If all the information for a single client
1497 can eventually end up on the same media (and in chronological order),
1498 the restore is facilitated by not having to search past information
1499 from other clients. ADSM has the concept of "client affinity" that
1500 may be associated with its storage pools. It seems to me that this
1501 concept (as an optional feature) might fit in your architecture for
1504 ADSM also has the concept of defining one or more storage pools as
1505 "copy pools" (almost mirrors, but only in the sense of contents).
1506 These pools provide the ability to have duplicate data stored both
1507 onsite and offsite. The copy process can be scheduled to be handled
1508 by their storage manager during periods when there is no backup
1509 activity. Again, the migration process might be a place to consider
1510 implementing something like this.
1513 > It strikes me that it would be very nice to be able to do things
1515 > have the Job(s) backing up the machines run, and once they have all
1516 > completed, start a migration job to copy the data from disks Volumes
1518 > a tape library and then to offsite storage. Maybe this can already
1520 > done with some careful scheduling and Job prioritization; the events
1521 > mechanism described below would probably make it very easy.
1523 This is the goal. In the first step (before events), you simply
1525 the Migration to tape later.
1527 2. Base jobs: In ADSM, each copy of each stored file is tracked in
1528 the database. Once a file (unique by path and metadata such as dates,
1529 size, ownership, etc.) is in a copy pool, no more copies are made. In
1530 other words, when you start ADSM, it begins like your concept of a
1531 base job. After that it is in the "incremental" mode. You can
1532 configure the number of "generations" of files to be retained, plus a
1533 retention date after which even old generations are purged. The
1534 database tracks the contents of media and projects the percentage of
1535 each volume that is valid. When the valid content of a volume drops
1536 below a configured percentage, the valid data are migrated to another
1537 volume and the old volume is marked as empty. Note, this requires
1538 ADSM to have an idea of the contents of a client, i.e. marking the
1539 database when an existing file was deleted, but this would solve your
1540 issue of restoring a client without restoring deleted files.
1542 This is pretty far from what bacula now does, but if you are going to
1543 rip things up for Base jobs,.....
1544 Also, the benefits of this are huge for very large shops, especially
1545 with media robots, but are a pain for shops with manual media
1549 > Base jobs sound pretty useful, but I'm not dying for them.
1551 Nobody is dying for them, but when you see what it does, you will die
1554 3. Restoring deleted files: Since I think my comments in (2) above
1555 have low probability of implementation, I'll also suggest that you
1556 could approach the issue of deleted files by a mechanism of having the
1557 fd report to the dir, a list of all files on the client for every
1558 backup job. The dir could note in the database entry for each file
1559 the date that the file was seen. Then if a restore as of date X takes
1560 place, only files that exist from before X until after X would be
1561 restored. Probably the major cost here is the extra date container in
1562 each row of the files table.
1564 Thanks for "listening". I hope some of this helps. If you want to
1565 contact me, please send me an email - I read some but not all of the
1566 mailing list traffic and might miss a reply there.
1568 Please accept my compliments for bacula. It is doing a great job for
1569 me!! I sympathize with you in the need to wrestle with excellence in
1570 execution vs. excellence in feature inclusion.
1575 ==============================
1578 - Design a hierarchical storage for Bacula. Migration and Clone.
1579 - Implement FSM (File System Modules).
1580 - Audit M_ error codes to ensure they are correct and consistent.
1581 - Add variable break characters to lex analyzer.
1582 Either a bit mask or a string of chars so that
1583 the caller can change the break characters.
1584 - Make a single T_BREAK to replace T_COMMA, etc.
1585 - Ensure that File daemon and Storage daemon can
1586 continue a save if the Director goes down (this
1587 is NOT currently the case). Must detect socket error,
1588 buffer messages for later.
1589 - Enhance time/duration input to allow multiple qualifiers e.g. 3d2h
1590 - Add ability to backup to two Storage devices (two SD sessions) at
1591 the same time -- e.g. onsite, offsite.
1592 - Compress or consolidate Volumes of old possibly deleted files. Perhaps
1593 some way to do so with every volume that has less than x% valid
1597 Migration: Move a backup from one Volume to another
1598 Clone: Copy a backup -- two Volumes
1601 ======================================================
1603 It is somewhat like a Full save that becomes an incremental, since
1604 it consists of the Base job (or jobs) plus other non-base files.
1606 - A Base backup is same as Full backup, just different type.
1607 - New BaseFiles table that contains:
1609 BaseJobId - Base JobId referenced for this FileId (needed ???)
1610 JobId - JobId currently running
1611 FileId - File not backed up, exists in Base Job
1612 FileIndex - FileIndex from Base Job.
1613 i.e. for each base file that exists but is not saved because
1614 it has not changed, the File daemon sends the JobId, BaseId,
1615 FileId, FileIndex back to the Director who creates the DB entry.
1616 - To initiate a Base save, the Director sends the FD
1617 the FileId, and full filename for each file in the Base.
1618 - When the FD finds a Base file, he requests the Director to
1619 send him the full File entry (stat packet plus MD5), or
1620 conversely, the FD sends it to the Director and the Director
1621 says yes or no. This can be quite rapid if the FileId is kept
1622 by the FD for each Base Filename.
1623 - It is probably better to have the comparison done by the FD
1624 despite the fact that the File entry must be sent across the
1626 - An alternative would be to send the FD the whole File entry
1627 from the start. The disadvantage is that it requires a lot of
1628 space. The advantage is that it requires less communications
1630 - The Job record must be updated to indicate that one or more
1632 - At end of Job, FD returns:
1633 1. Count of base files/bytes not written to tape (i.e. matches)
1634 2. Count of base files that were saved, i.e. had changed.
1635 - No tape record would be written for a Base file that matches, in the
1636 same way that no tape record is written for Incremental jobs where
1637 the file is not saved because it is unchanged.
1638 - On a restore, all the Base file records must explicitly be
1639 found from the BaseFile table. I.e. for each Full save that is marked
1640 to have one or more Base Jobs, search the BaseFile for all occurrences
1642 - An optimization might be to make the BaseFile have:
1648 This would avoid the need to explicitly fetch each File record for
1649 the Base job. The Base Job record will be fetched to get the
1650 VolSessionId and VolSessionTime.
1651 =========================================================
1656 Multiple drive autochanger data: see Alan Brown
1657 > mtx -f xxx unload
Storage Element 1 is Already Full (drive 0 was empty)
1658 Unloading Data Transfer Element into Storage Element 1...source Element
1659 Address 480 is Empty
1661 (drive 0 was empty and so was slot 1)
1662 > mtx -f xxx load 15 0
1663 no response, just returns to the command prompt when complete.
1664 > mtx -f xxx status
Storage Changer /dev/changer:2 Drives, 60 Slots ( 2 Import/Export )
1665 Data Transfer Element 0:Full (Storage Element 15 Loaded):VolumeTag = HX001
1666 Data Transfer Element 1:Empty
1667 Storage Element 1:Empty
1668 Storage Element 2:Full :VolumeTag=HX002
1669 Storage Element 3:Full :VolumeTag=HX003
1670 Storage Element 4:Full :VolumeTag=HX004
1671 Storage Element 5:Full :VolumeTag=HX005
1672 Storage Element 6:Full :VolumeTag=HX006
1673 Storage Element 7:Full :VolumeTag=HX007
1674 Storage Element 8:Full :VolumeTag=HX008
1675 Storage Element 9:Full :VolumeTag=HX009
1676 Storage Element 10:Full :VolumeTag=HX010
1677 Storage Element 11:Empty
1678 Storage Element 12:Empty
1679 Storage Element 13:Empty
1680 Storage Element 14:Empty
1681 Storage Element 15:Empty
1682 Storage Element 16:Empty....
1683 Storage Element 28:Empty
1684 Storage Element 29:Full :VolumeTag=CLNU01L1
1685 Storage Element 30:Empty....
1686 Storage Element 57:Empty
1687 Storage Element 58:Full :VolumeTag=NEX261L2
1688 Storage Element 59 IMPORT/EXPORT:Empty
1689 Storage Element 60 IMPORT/EXPORT:Empty
1691 Unloading Data Transfer Element into Storage Element 15...done
1693 (just to verify it remembers where it came from, however it can be
1694 overridden with mtx unload {slotnumber} to go to any storage slot.)
1696 There needs to be a table of drive # to devices somewhere - If there are
1697 multiple changers or drives there may not be a 1:1 correspondence between
1698 changer drive number and system device name - and depending on the way the
1699 drives are hooked up to scsi busses, they may not be linearly numbered
1700 from an offset point either. Something like:
1702 Autochanger drives = 2
1703 Autochanger drive 0 = /dev/nst1
1704 Autochanger drive 1 = /dev/nst2
1705 IMHO, it would be _safest_ to use explicit mtx unload commands at all
1706 times, not just for multidrive changers. For a 1 drive changer, that's
1712 MTX's manpage (1.2.15):
1713 unload [<slotnum>] [ <drivenum> ]
1714 Unloads media from drive <drivenum> into slot
1715 <slotnum>. If <drivenum> is omitted, defaults to
1716 drive 0 (as do all commands). If <slotnum> is
1717 omitted, defaults to the slot that the drive was
1718 loaded from. Note that there's currently no way
1719 to say 'unload drive 1's media to the slot it
1720 came from', other than to explicitly use that
1721 slot number as the destination.AB
1727 # camcontrol devlist
1728 <WANGTEK 51000 SCSI M74H 12B3> at scbus0 target 2 lun 0 (pass0,sa0)
1729 <ARCHIVE 4586XX 28887-XXX 4BGD> at scbus0 target 4 lun 0 (pass1,sa1)
1730 <ARCHIVE 4586XX 28887-XXX 4BGD> at scbus0 target 4 lun 1 (pass2)
1732 tapeinfo -f /dev/sg0 with a bad tape in drive 1:
1733 [kern@rufus mtx-1.2.17kes]$ ./tapeinfo -f /dev/sg0
1734 Product Type: Tape Drive
1736 Product ID: 'C5713A '
1738 Attached Changer: No
1739 TapeAlert[3]: Hard Error: Uncorrectable read/write error.
1740 TapeAlert[20]: Clean Now: The tape drive needs cleaning NOW.
1747 Medium Type: Not Loaded
1750 DataCompEnabled: yes
1751 DataCompCapable: yes
1752 DataDeCompEnabled: yes
1759 Handling removable disks
1761 From: Karl Cunningham <karlc@keckec.com>
1763 My backups are only to hard disk these days, in removable bays. This is my
1764 idea of how a backup to hard disk would work more smoothly. Some of these
1765 things Bacula does already, but I mention them for completeness. If others
1766 have better ways to do this, I'd like to hear about it.
1768 1. Accommodate several disks, rotated similar to how tapes are. Identified
1769 by partition volume ID or perhaps by the name of a subdirectory.
1770 2. Abort & notify the admin if the wrong disk is in the bay.
1771 3. Write backups to different subdirectories for each machine to be backed
1773 4. Volumes (files) get created as needed in the proper subdirectory, one
1775 5. When a disk is recycled, remove or zero all old backup files. This is
1776 important as the disk being recycled may be close to full. This may be
1777 better done manually since the backup files for many machines may be
1778 scattered in many subdirectories.
1783 - Why the heck doesn't bacula drop root privileges before connecting to
1785 - Look at using posix_fadvise(2) for backups -- see bug #751.
1786 Possibly add the code at findlib/bfile.c:795
1787 /* TCP socket options */
1788 #define TCP_KEEPIDLE 4 /* Start keeplives after this period */
1789 - Fix bnet_connect() code to set a timer and to use time to
1791 - Implement 4th argument to make_catalog_backup that passes hostname.
1792 - Test FIFO backup/restore -- make regression
1793 - Please mount volume "xxx" on Storage device ... should also list
1794 Pool and MediaType in case user needs to create a new volume.
1795 - On restore add Restore Client, Original Client.
1796 01-Apr 00:42 rufus-dir: Start Backup JobId 55, Job=kernsave.2007-04-01_00.42.48
1797 01-Apr 00:42 rufus-sd: Python SD JobStart: JobId=55 Client=Rufus
1798 01-Apr 00:42 rufus-dir: Created new Volume "Full0001" in catalog.
1799 01-Apr 00:42 rufus-dir: Using Device "File"
1800 01-Apr 00:42 rufus-sd: kernsave.2007-04-01_00.42.48 Warning: Device "File" (/tmp) not configured to autolabel Volumes.
1801 01-Apr 00:42 rufus-sd: kernsave.2007-04-01_00.42.48 Warning: Device "File" (/tmp) not configured to autolabel Volumes.
1802 01-Apr 00:42 rufus-sd: Please mount Volume "Full0001" on Storage Device "File" (/tmp) for Job kernsave.2007-04-01_00.42.48
1803 01-Apr 00:44 rufus-sd: Wrote label to prelabeled Volume "Full0001" on device "File" (/tmp)
1804 - Check if gnome-console works with TLS.
1805 - the director seg faulted when I omitted the pool directive from a
1806 job resource. I was experimenting and thought it redundant that I had
1807 specified Pool, Full Backup Pool, and Differential Backup Pool, but
1808 apparently not. This happened when I removed the pool directive and
1809 started the director.
1810 - Add Where: client:/.... to restore job report.
1811 - Ensure that moving a purged Volume in ua_purge.c to the RecyclePool
1812 does the right thing.
1813 - FD-SD quick disconnect
1814 - Building the in memory restore tree is slow.
1815 - Abort if min_block_size > max_block_size
1816 - Add the ability to consolidate old backup sets (basically do a restore
1817 to tape and appropriately update the catalog). Compress Volume sets.
1818 Might need to spool via file if only one drive is available.
1819 - Why doesn't @"xxx abc" work in a conf file?
1820 - Don't restore Solaris Door files:
1821 #define S_IFDOOR in st_mode.
1822 see: http://docs.sun.com/app/docs/doc/816-5173/6mbb8ae23?a=view#indexterm-360
1823 - Figure out how to recycle Scratch volumes back to the Scratch Pool.
1824 - Implement Despooling data status.
1825 - Use E'xxx' to escape PostgreSQL strings.
1826 - Look at mincore: http://insights.oetiker.ch/linux/fadvise.html
1827 - Unicode input http://en.wikipedia.org/wiki/Byte_Order_Mark
1828 - Look at moving the Storage directive from the Job to the
1829 Pool in the default conf files.
1830 - Look at the following in src/filed/backup.c:
1831 > pm_strcpy(ff_pkt->fname, ff_pkt->fname_save);
1832 > pm_strcpy(ff_pkt->link, ff_pkt->link_save);
1833 - Add Catalog = to Pool resource so that pools will exist
1834 in only one catalog -- currently Pools are "global".
1835 - Add TLS to bat (should be done).
1836 === Duplicate jobs ===
1837 These apply only to backup jobs.
1839 1. Allow Duplicate Jobs = Yes | No | Higher (Yes)
1841 2. Duplicate Job Interval = <time-interval> (0)
1843 The defaults are in parentheses and would produce the same behavior as today.
1845 If Allow Duplicate Jobs is set to No, then any job starting while a job of the
1846 same name is running will be canceled.
1848 If Allow Duplicate Jobs is set to Higher, then any job starting with the same
1849 or lower level will be canceled, but any job with a Higher level will start.
1850 The Levels are from High to Low: Full, Differential, Incremental
1852 Finally, if you have Duplicate Job Interval set to a non-zero value, any job
1853 of the same name that starts more than <time-interval> after a previous job of
1854 the same name will run; any that starts within <time-interval> is
1855 subject to the above rules. Another way of looking at it is that the Allow
1856 Duplicate Jobs directive applies only within <time-interval> of when the
1857 previous job finished (i.e. it is the minimum interval between jobs).
1861 Allow Duplicate Jobs = Yes | No | HigherLevel | CancelLowerLevel (Yes)
1863 Where HigherLevel cancels any waiting job but not any running job.
1864 Where CancelLowerLevel is same as HigherLevel but cancels any running job or
1867 Duplicate Job Proximity = <time-interval> (0)
1869 My suggestion was to define it as the minimum guard time between
1870 executions of a specific job -- ie, if a job was scheduled within Job
1871 Proximity number of seconds, it would be considered a duplicate and
1874 Skip = Do not allow two or more jobs with the same name to run
1875 simultaneously within the proximity interval. The second and subsequent
1876 jobs are skipped without further processing (other than to note the job
1877 and exit immediately), and are not considered errors.
1879 Fail = The second and subsequent jobs that attempt to run during the
1880 proximity interval are cancelled and treated as error-terminated jobs.
1882 Promote = If a job is running, and a second/subsequent job of higher
1883 level attempts to start, the running job is promoted to the higher level
1884 of processing using the resources already allocated, and the subsequent
1885 job is treated as in Skip above.
1891 Allow = yes|no (no = default)
1893 AllowHigherLevel = yes|no (no)
1895 AllowLowerLevel = yes|no (no)
1897 AllowSameLevel = yes|no
1899 Cancel = Running | New (no)
1901 CancelledStatus = Fail | Skip (fail)
1903 Job Proximity = <time-interval> (0)