3 Bacula Projects Roadmap
4 Status updated 26 August 2008
9 Item 1: Accurate restoration of renamed/deleted files
10 Item 2: Allow FD to initiate a backup
11 Item 3: Merge multiple backups (Synthetic Backup or Consolidation)
12 Item 4: Implement Catalog directive for Pool resource in Director
13 Item 5: Add an item to the restore option where you can select a Pool
14 Item 6: Deletion of disk Volumes when pruned
15 Item 7: Implement Base jobs
16 Item 8: Implement Copy pools
17 Item 9: Scheduling syntax that permits more flexibility and options
18 Item 10: Message mailing based on backup types
19 Item 11: Cause daemons to use a specific IP address to source communications
20 Item 12: Add Plug-ins to the FileSet Include statements.
21 Item 13: Restore only file attributes (permissions, ACL, owner, group...)
22 Item 14: Add an override in Schedule for Pools based on backup types
23 Item 15: Implement more Python events and functions
24 Item 16: Allow inclusion/exclusion of files in a fileset by creation/mod times
25 Item 17: Automatic promotion of backup levels based on backup size
26 Item 18: Better control over Job execution
27 Item 19: Automatic disabling of devices
28 Item 20: An option to operate on all pools with update vol parameters
29 Item 21: Include timestamp of job launch in "stat clients" output
30 Item 22: Implement Storage daemon compression
31 Item 23: Improve Bacula's tape and drive usage and cleaning management
32 Item 24: Multiple threads in file daemon for the same job
33 Item 25: Archival (removal) of User Files to Tape
36 Item 1: Accurate restoration of renamed/deleted files
37 Date: 28 November 2005
38 Origin: Martin Simmons (martin at lispworks dot com)
41 What: When restoring a fileset for a specified date (including "most
42 recent"), Bacula should give you exactly the files and directories
43 that existed at the time of the last backup prior to that date.
45 Currently this only works if the last backup was a Full backup.
46 When the last backup was Incremental/Differential, files and
47 directories that have been renamed or deleted since the last Full
48 backup are not currently restored correctly. Ditto for files with
49 extra/fewer hard links than at the time of the last Full backup.
51 Why: Incremental/Differential would be much more useful if this worked.
53 Notes: Merging of multiple backups into a single one seems to
54 rely on this working, otherwise the merged backups will not be
55 truly equivalent to a Full backup.
57 Note: Kern: notes shortened. This can be done without the need for
58 inodes. It is essentially the same as the current Verify job,
59 but one additional database record must be written, which does
60 not need any database change.
62 Notes: Kern: see if we can correct restoration of directories if
63 replace=ifnewer is set. Currently, if the directory does not
64 exist, a "dummy" directory is created, then when all the files
are updated, the dummy directory is newer so the real values
do not get restored.
68 Item 2: Allow FD to initiate a backup
69 Origin: Frank Volf (frank at deze dot org)
70 Date: 17 November 2005
73 What: Provide some means, possibly by a restricted console that
74 allows a FD to initiate a backup, and that uses the connection
75 established by the FD to the Director for the backup so that
76 a Director that is firewalled can do the backup.
78 Why: Makes backup of laptops much easier.
81 Item 3: Merge multiple backups (Synthetic Backup or Consolidation)
82 Origin: Marc Cousin and Eric Bollengier
83 Date: 15 November 2005
86 What: A merged backup is a backup made without connecting to the Client.
87 It would be a Merge of existing backups into a single backup.
88 In effect, it is like a restore but to the backup medium.
90 For instance, say that last Sunday we made a full backup. Then
91 all week long, we created incremental backups, in order to do
92 them fast. Now comes Sunday again, and we need another full.
93 The merged backup makes it possible to do instead an incremental
94 backup (during the night for instance), and then create a merged
95 backup during the day, by using the full and incrementals from
96 the week. The merged backup will be exactly like a full made
97 Sunday night on the tape, but the production interruption on the
Client will be minimal, as the Client will only have to send
an incremental, just as usual.
101 In fact, if it's done correctly, you could merge all the
Incrementals into a single Incremental, or all the Incrementals
103 and the last Differential into a new Differential, or the Full,
104 last differential and all the Incrementals into a new Full
105 backup. And there is no need to involve the Client.
Why: The benefit is that:
- the Client just does an incremental;
- the merged backup on tape is just like a single full backup,
and can be restored very fast.

This is also a way of reducing the backup data since the old
data can then be pruned (or not) from the catalog, possibly
allowing older volumes to be recycled.
116 Item 4: Implement Catalog directive for Pool resource in Director
117 Origin: Alan Davis adavis@ruckus.com
121 What: The current behavior is for the director to create all pools
122 found in the configuration file in all catalogs. Add a
123 Catalog directive to the Pool resource to specify which
124 catalog to use for each pool definition.
126 Why: This allows different catalogs to have different pool
127 attributes and eliminates the side-effect of adding
128 pools to catalogs that don't need/use them.
130 Notes: Kern: I think this is relatively easy to do, and it is really
131 a pre-requisite to a number of the Copy pool, ... projects
132 that are listed here.
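A minimal sketch of how this might look (the "Catalog" directive inside the Pool
resource is the proposed, not yet existing, syntax; all names are illustrative):

   Catalog {
     Name = SiteA-Catalog
     dbname = bacula_a; user = bacula; password = ""
   }

   Pool {
     Name = SiteA-Disk
     Pool Type = Backup
     Catalog = SiteA-Catalog   # proposed: define/use this Pool only in SiteA-Catalog
   }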
134 Item 5: Add an item to the restore option where you can select a Pool
135 Origin: kshatriyak at gmail dot com
139 What: In the restore option (Select the most recent backup for a
140 client) it would be useful to add an option where you can limit
141 the selection to a certain pool.
143 Why: When using cloned jobs, most of the time you have 2 pools - a
144 disk pool and a tape pool. People who have 2 pools would like to
145 select the most recent backup from disk, not from tape (tape
146 would be only needed in emergency). However, the most recent
147 backup (which may just differ a second from the disk backup) may
148 be on tape and would be selected. The problem becomes bigger if
149 you have a full and differential - the most "recent" full backup
150 may be on disk, while the most recent differential may be on tape
151 (though the differential on disk may differ even only a second or
152 so). Bacula will complain that the backups reside on different
media then. For now the only solution when restoring things
when you have 2 pools is to manually search for the right
job-ids and enter them by hand, which is a bit error-prone.
157 Notes: Kern: This is a nice idea. It could also be the way to support
Jobs that have been Copied (similar to migration, but not yet
implemented).
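A hypothetical console interaction illustrating the idea (the "pool=" restriction
shown here is the requested behavior, not an existing option):

   * restore client=myclient pool=DiskPool select current
     (only Jobs written to Volumes of pool "DiskPool" would be offered)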
163 Item 6: Deletion of disk Volumes when pruned
Origin: Ross Boylan <RossBoylan at stanfordalumni dot org> (edited
by Kern)
169 What: Provide a way for Bacula to automatically remove Volumes
170 from the filesystem, or optionally to truncate them.
Obviously, the Volume must be pruned prior to removal.
173 Why: This would allow users more control over their Volumes and
174 prevent disk based volumes from consuming too much space.
176 Notes: The following two directives might do the trick:
178 Volume Data Retention = <time period>
179 Remove Volume After = <time period>
181 The migration project should also remove a Volume that is
182 migrated. This might also work for tape Volumes.
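A sketch of how the two proposed directives might appear in a disk Pool (both
directives are hypothetical; the names and time periods are only illustrative):

   Pool {
     Name = FilePool
     Pool Type = Backup
     Volume Retention = 30 days
     Volume Data Retention = 60 days   # proposed: truncate the Volume file after 60 days
     Remove Volume After = 180 days    # proposed: delete the Volume file from disk after 180 days
   }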
184 Item 7: Implement Base jobs
185 Date: 28 October 2005
189 What: A base job is sort of like a Full save except that you
190 will want the FileSet to contain only files that are
191 unlikely to change in the future (i.e. a snapshot of
192 most of your system after installing it). After the
193 base job has been run, when you are doing a Full save,
194 you specify one or more Base jobs to be used. All
195 files that have been backed up in the Base job/jobs but
196 not modified will then be excluded from the backup.
197 During a restore, the Base jobs will be automatically
198 pulled in where necessary.
200 Why: This is something none of the competition does, as far as
201 we know (except perhaps BackupPC, which is a Perl program that
saves to disk only). It is a big win for the user; it
makes Bacula stand out as offering a unique
optimization that immediately saves time and money.
Basically, imagine that you have 100 nearly identical
Windows or Linux machines containing the OS and user
207 files. Now for the OS part, a Base job will be backed
208 up once, and rather than making 100 copies of the OS,
209 there will be only one. If one or more of the systems
210 have some files updated, no problem, they will be
211 automatically restored.
213 Notes: Huge savings in tape usage even for a single machine.
214 Will require more resources because the DIR must send
215 FD a list of files/attribs, and the FD must search the
216 list and compare it for each file to be saved.
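A rough sketch of how this might be configured (the "Base" level and the "Base ="
directive are the proposed syntax; all resource names are illustrative):

   Job {
     Name = "os-base"
     Type = Backup
     Level = Base              # proposed level: snapshot of files unlikely to change
     Client = ws-template-fd
     FileSet = "OS-Files"
     Pool = BasePool
     Storage = Tape
     Messages = Standard
   }

   Job {
     Name = "ws42-full"
     Type = Backup
     Level = Full
     Base = "os-base"          # proposed: exclude unmodified files already saved by os-base
     Client = ws42-fd
     FileSet = "Full-Set"
     Pool = FullPool
     Storage = Tape
     Messages = Standard
   }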
219 Item 8: Implement Copy pools
220 Date: 27 November 2005
221 Origin: David Boyes (dboyes at sinenomine dot net)
224 What: I would like Bacula to have the capability to write copies
225 of backed-up data on multiple physical volumes selected
226 from different pools without transferring the data
227 multiple times, and to accept any of the copy volumes
228 as valid for restore.
230 Why: In many cases, businesses are required to keep offsite
231 copies of backup volumes, or just wish for simple
232 protection against a human operator dropping a storage
233 volume and damaging it. The ability to generate multiple
234 volumes in the course of a single backup job allows
customers to simply check out one copy and send it
236 offsite, marking it as out of changer or otherwise
237 unavailable. Currently, the library and magazine
management capability in Bacula does not make this process
straightforward.
241 Restores would use the copy of the data on the first
242 available volume, in order of Copy pool chain definition.
244 This is also a major scalability issue -- as the number of
245 clients increases beyond several thousand, and the volume
246 of data increases, transferring the data multiple times to
247 produce additional copies of the backups will become
248 physically impossible due to transfer speed
249 issues. Generating multiple copies at server side will
250 become the only practical option.
252 How: I suspect that this will require adding a multiplexing
253 SD that appears to be a SD to a specific FD, but 1-n FDs
254 to the specific back end SDs managing the primary and copy
255 pools. Storage pools will also need to acquire parameters
256 to define the pools to be used for copies.
258 Notes: I would commit some of my developers' time if we can agree
259 on the design and behavior.
261 Notes: Additional notes from David:
262 I think there's two areas where new configuration would be needed.
1) Identify a "SD mux" SD (specify it in the config just like a
normal SD). The SD configuration would need something like a
"Daemon Type = Normal/Mux" keyword to identify it as a
multiplexor. (The director code would need modification to add
the ability to do the multiple session setup, but the impact of
the change would be new code that was invoked only when a SDmux is
in use.)
272 2) Additional keywords in the Pool definition to identify the need
273 to create copies. Each pool would acquire a Copypool= attribute
274 (may be repeated to generate more than one copy. 3 is about the
275 practical limit, but no point in hardcoding that).
For example:

   Pool {
     Name = Primary
     Pool Type = Backup
     Copypool = Copy1
     Copypool = OffsiteCopy2
   }
285 where Copy1 and OffsiteCopy2 are valid pools.
287 In terms of function (shorthand): Backup job X is defined
288 normally, specifying pool Primary as the pool to use. Job gets
289 scheduled, and Bacula starts scheduling resources. Scheduler
290 looks at pool definition for Primary, sees that there are a
291 non-zero number of copypool keywords. The director then connects
292 to an available SDmux, passes it the pool ids for Primary, Copy1,
293 and OffsiteCopy2 and waits. SDmux then goes out and reserves
294 devices and volumes in the normal SDs that serve Primary, Copy1
295 and OffsiteCopy2. When all are ready, the SDmux signals ready
296 back to the director, and the FD is given the address of the SDmux
297 as the SD to communicate with. Backup proceeds normally, with the
298 SDmux duplicating blocks to each connected normal SD, and
299 returning ready when all defined copies have been written. At
300 EOJ, FD shuts down connection with SDmux, which closes down the
301 normal SD connections and goes back to an idle state. SDmux does
302 not update database; normal SDs do (noting that file is present on
303 each volume it has been written to).
305 On restore, director looks for the volume containing the file in
306 pool Primary first, then Copy1, then OffsiteCopy2. If the volume
holding the file in pool Primary is missing or busy (being written
in another job, etc.), or if one of the volumes from the copypool list
that has the file in question is already mounted and ready for
some reason, use that copy to do the restore; otherwise mount one of the
copypool volumes and proceed.
314 Item 9: Scheduling syntax that permits more flexibility and options
315 Date: 15 December 2006
316 Origin: Gregory Brauer (greg at wildbrain dot com) and
317 Florian Schnabel <florian.schnabel at docufy dot de>
320 What: Currently, Bacula only understands how to deal with weeks of the
321 month or weeks of the year in schedules. This makes it impossible
322 to do a true weekly rotation of tapes. There will always be a
323 discontinuity that will require disruptive manual intervention at
324 least monthly or yearly because week boundaries never align with
325 month or year boundaries.
A solution would be to add a new syntax that defines (at least)
a start timestamp and a repetition period.

Also requested: an easy option to skip a certain job on a certain date.
333 Why: Rotated backups done at weekly intervals are useful, and Bacula
334 cannot currently do them without extensive hacking.
336 You could then easily skip tape backups on holidays. Especially
if you have no autochanger and can only fit one backup on a tape,
that would be really handy; other jobs could proceed normally
and you won't get errors that way.
342 Notes: Here is an example syntax showing a 3-week rotation where full
343 Backups would be performed every week on Saturday, and an
344 incremental would be performed every week on Tuesday. Each
345 set of tapes could be removed from the loader for the following
346 two cycles before coming back and being reused on the third
347 week. Since the execution times are determined by intervals
348 from a given point in time, there will never be any issues with
349 having to adjust to any sort of arbitrary time boundary. In
350 the example provided, I even define the starting schedule
as crossing both a year and a month boundary, but the run times
would be based on the "Repeat" value and would therefore happen
regardless of those boundaries.
Schedule {
    Name = "Week 1 Rotation"
    #Saturday.  Would run Dec 30, Jan 20, Feb 10, etc.
    Run {
        Options {
            Type   = Full
            Start  = 2006-12-30 01:00
            Repeat = 3w
        }
    }
    #Tuesday.  Would run Jan 2, Jan 23, Feb 13, etc.
    Run {
        Options {
            Type   = Incremental
            Start  = 2007-01-02 01:00
            Repeat = 3w
        }
    }
}

Schedule {
    Name = "Week 2 Rotation"
    #Saturday.  Would run Jan 6, Jan 27, Feb 17, etc.
    Run {
        Options {
            Type   = Full
            Start  = 2007-01-06 01:00
            Repeat = 3w
        }
    }
    #Tuesday.  Would run Jan 9, Jan 30, Feb 20, etc.
    Run {
        Options {
            Type   = Incremental
            Start  = 2007-01-09 01:00
            Repeat = 3w
        }
    }
}

Schedule {
    Name = "Week 3 Rotation"
    #Saturday.  Would run Jan 13, Feb 3, Feb 24, etc.
    Run {
        Options {
            Type   = Full
            Start  = 2007-01-13 01:00
            Repeat = 3w
        }
    }
    #Tuesday.  Would run Jan 16, Feb 6, Feb 27, etc.
    Run {
        Options {
            Type   = Incremental
            Start  = 2007-01-16 01:00
            Repeat = 3w
        }
    }
}
416 Notes: Kern: I have merged the previously separate project of skipping
417 jobs (via Schedule syntax) into this.
420 Item 10: Message mailing based on backup types
421 Origin: Evan Kaufman <evan.kaufman@gmail.com>
422 Date: January 6, 2006
425 What: In the "Messages" resource definitions, allowing messages
426 to be mailed based on the type (backup, restore, etc.) and level
427 (full, differential, etc) of job that created the originating
430 Why: It would, for example, allow someone's boss to be emailed
431 automatically only when a Full Backup job runs, so he can
432 retrieve the tapes for offsite storage, even if the IT dept.
433 doesn't (or can't) explicitly notify him. At the same time, his
mailbox wouldn't be filled by notifications of Verifies, Restores,
or Incremental/Differential Backups (which would likely be kept
onsite).
438 Notes: One way this could be done is through additional message types, for example:
   Messages {
     # email the boss only on full system backups
     Mail = boss@mycompany.com = full, !incremental, !differential, !restore,
            !verify, !admin
     # email us only when something breaks
     MailOnError = itdept@mycompany.com = all
   }
448 Notes: Kern: This should be rather trivial to implement.
451 Item 11: Cause daemons to use a specific IP address to source communications
452 Origin: Bill Moran <wmoran@collaborativefusion.com>
455 What: Cause Bacula daemons (dir, fd, sd) to always use the ip address
456 specified in the [DIR|DF|SD]Addr directive as the source IP
457 for initiating communication.
458 Why: On complex networks, as well as extremely secure networks, it's
459 not unusual to have multiple possible routes through the network.
460 Often, each of these routes is secured by different policies
461 (effectively, firewalls allow or deny different traffic depending
on the source address).
463 Unfortunately, it can sometimes be difficult or impossible to
464 represent this in a system routing table, as the result is
465 excessive subnetting that quickly exhausts available IP space.
466 The best available workaround is to provide multiple IPs to
467 a single machine that are all on the same subnet. In order
468 for this to work properly, applications must support the ability
469 to bind outgoing connections to a specified address, otherwise
470 the operating system will always choose the first IP that
471 matches the required route.
472 Notes: Many other programs support this. For example, the following
473 can be configured in BIND:
474 query-source address 10.0.0.1;
475 transfer-source 10.0.0.2;
476 Which means queries from this server will always come from
10.0.0.1 and zone transfers will always originate from 10.0.0.2.
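For Bacula, the analogous configuration might simply reuse the existing address
directives; under this proposal the Director resource below would also bind its
outgoing connections to the listed address (today the directive only controls the
listening address), and the FD/SD equivalents would behave the same way:

   Director {
     Name = backup-dir
     DirPort = 9101
     DirAddress = 10.0.0.1      # proposed: also the source address for DIR -> FD/SD connections
     QueryFile = "/etc/bacula/query.sql"
     WorkingDirectory = "/var/bacula"
     PidDirectory = "/var/run"
     Password = "dir-password"
     Messages = Daemon
   }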
481 Item 12: Add Plug-ins to the FileSet Include statements.
482 Date: 28 October 2005
484 Status: Partially coded in 1.37 -- much more to do.
486 What: Allow users to specify wild-card and/or regular
487 expressions to be matched in both the Include and
488 Exclude directives in a FileSet. At the same time,
489 allow users to define plug-ins to be called (based on
490 regular expression/wild-card matching).
492 Why: This would give the users the ultimate ability to control
493 how files are backed up/restored. A user could write a
plug-in that knows how to back up his Oracle database without
495 stopping/starting it, for example.
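A hedged sketch of what such a FileSet might look like; the "Plugin" line and its
argument string are purely hypothetical here, while the wild-card matching follows
the existing Options syntax:

   FileSet {
     Name = "OracleHost"
     Include {
       Options {
         WildFile = "*.dbf"              # existing wild-card matching
         signature = MD5
       }
       File = /etc
       File = /u01/app/oracle
       # hypothetical plug-in invocation selected for matching files:
       Plugin = "oracle-hot-backup:SID=PROD"
     }
   }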
498 Item 13: Restore only file attributes (permissions, ACL, owner, group...)
499 Origin: Eric Bollengier
501 Status: Implemented by Eric, see project-restore-attributes-only.patch
What: The goal of this project is to be able to restore only the rights
and attributes of files without overwriting their data.

Why: Who has never had to repair a chmod -R 777, or a reckless recursive
permissions change under Windows? At this time, you must have
enough space to restore the data, dump the attributes (easy with ACLs,
more complex with Unix/Windows rights) and apply them to your
broken tree. With this option, it will also be very easy to compare
rights or ACLs over time.

Notes: If the file is present, we skip the data restore and only change the rights.
If the file isn't present, we can create an empty one and apply the
rights, or do nothing.

This will not work with the Win32 stream, because it seems that we
can't split the BackupWrite stream to get only the ACL and ownership.
520 Item 14: Add an override in Schedule for Pools based on backup types
522 Origin: Chad Slater <chad.slater@clickfox.com>
525 What: Adding a FullStorage=BigTapeLibrary in the Schedule resource
526 would help those of us who use different storage devices for different
527 backup levels cope with the "auto-upgrade" of a backup.
Why: Assume I add several new devices to be backed up, i.e. several
hosts with 1TB RAID. To avoid tape switching hassles, incrementals are
stored in a disk set on a 2TB RAID. If you add these devices in the
middle of the month, the incrementals are upgraded to "full" backups,
but they try to use the same storage device as requested in the
incremental job, filling up the RAID holding the incrementals. If we
could override the Storage parameter for full and/or differential
backups, then the Full job would use the proper Storage device, which
has more capacity (i.e. an 8TB tape library).
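One possible shape of the requested override, sketched as hypothetical Schedule
syntax (the per-level FullStorage keyword does not exist today; the Level, Pool and
Storage overrides shown are existing Run options):

   Schedule {
     Name = "NightlyWithStorageOverride"
     Run = Level=Full FullStorage=BigTapeLibrary Pool=MonthlyTape 1st sat at 03:05   # proposed keyword
     Run = Level=Incremental Storage=DiskRAID Pool=DiskPool mon-fri at 03:05
   }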
540 Item 15: Implement more Python events and functions
541 Date: 28 October 2005
545 What: Allow Python scripts to be called at more places
within Bacula and provide additional access to Bacula
internal variables.
549 Implement an interface for Python scripts to access the
550 catalog through Bacula.
Why: This will permit users to customize Bacula through
Python scripts.
Notes: Also add a way to get a listing of currently running
jobs (possibly also scheduled jobs), which a script could use to decide when
to start the appropriate job.
567 Item 16: Allow inclusion/exclusion of files in a fileset by creation/mod times
568 Origin: Evan Kaufman <evan.kaufman@gmail.com>
569 Date: January 11, 2006
572 What: In the vein of the Wild and Regex directives in a Fileset's
573 Options, it would be helpful to allow a user to include or exclude
574 files and directories by creation or modification times.
576 You could factor the Exclude=yes|no option in much the same way it
577 affects the Wild and Regex directives. For example, you could exclude
578 all files modified before a certain date:
582 Modified Before = ####
585 Or you could exclude all files created/modified since a certain date:
589 Created Modified Since = ####
592 The format of the time/date could be done several ways, say the number
593 of seconds since the epoch:
594 1137008553 = Jan 11 2006, 1:42:33PM # result of `date +%s`
596 Or a human readable date in a cryptic form:
597 20060111134233 = Jan 11 2006, 1:42:33PM # YYYYMMDDhhmmss
599 Why: I imagine a feature like this could have many uses. It would
600 allow a user to do a full backup while excluding the base operating
601 system files, so if I installed a Linux snapshot from a CD yesterday,
602 I'll *exclude* all files modified *before* today. If I need to
603 recover the system, I use the CD I already have, plus the tape backup.
604 Or if, say, a Windows client is hit by a particularly corrosive
605 virus, and I need to *exclude* any files created/modified *since* the
608 Notes: Of course, this feature would work in concert with other
in/exclude rules, and wouldn't override them (or each other).
611 Notes: The directives I'd imagine would be along the lines of
612 "[Created] [Modified] [Before|Since] = <date>".
So one could compare against 'ctime' and/or 'mtime', but only 'Before'
or 'Since'.
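A sketch of how the proposed directive could sit in a FileSet, following the
existing Options/Exclude pattern (the "Modified Before" directive is hypothetical):

   FileSet {
     Name = "User-Data-Only"
     Include {
       Options {
         Exclude = yes
         Modified Before = 1137008553   # proposed: exclude files last modified before this epoch time
       }
       File = /
     }
   }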
617 Item 17: Automatic promotion of backup levels based on backup size
618 Date: 19 January 2006
619 Origin: Adam Thornton <athornton@sinenomine.net>
622 What: Amanda has a feature whereby it estimates the space that a
623 differential, incremental, and full backup would take. If the
624 difference in space required between the scheduled level and the next
625 level up is beneath some user-defined critical threshold, the backup
626 level is bumped to the next type. Doing this minimizes the number of
volumes necessary during a restore, with a fairly minimal cost in
additional backup space.
630 Why: I know at least one (quite sophisticated and smart) user
631 for whom the absence of this feature is a deal-breaker in terms of
using Bacula; if we had it, it would eliminate the one cool thing
633 Amanda can do and we can't (at least, the one cool thing I know of).
636 Item 18: Better control over Job execution
641 What: Bacula needs a few extra features for better Job execution:
642 1. A way to prevent multiple Jobs of the same name from
being scheduled at the same time (usually happens when
644 a job is missed because a client is down).
645 2. Directives that permit easier upgrading of Job types
646 based on a period of time. I.e. "do a Full at least
647 once every 2 weeks", or "do a differential at least
once a week". If a lower level job is scheduled, then when
it begins to run it will be upgraded depending on
650 the specified criteria.
655 Item 19: Automatic disabling of devices
657 Origin: Peter Eriksson <peter at ifm.liu dot se>
660 What: After a configurable amount of fatal errors with a tape drive
661 Bacula should automatically disable further use of a certain
tape drive. There should also be "disable"/"enable" commands in
the console.
665 Why: On a multi-drive jukebox there is a possibility of tape drives
666 going bad during large backups (needing a cleaning tape run,
667 tapes getting stuck). It would be advantageous if Bacula would
668 automatically disable further use of a problematic tape drive
669 after a configurable amount of errors has occurred.
671 An example: I have a multi-drive jukebox (6 drives, 380+ slots)
672 where tapes occasionally get stuck inside the drive. Bacula will
673 notice that the "mtx-changer" command will fail and then fail
674 any backup jobs trying to use that drive. However, it will still
675 keep on trying to run new jobs using that drive and fail -
forever, thus failing lots and lots of jobs... Since we have
many drives, Bacula could have just automatically disabled
further use of that drive and used one of the other ones instead.
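A sketch of the kind of configuration and console commands being requested (the
"Maximum Fatal Errors" directive and the drive-level enable/disable commands are
hypothetical):

   Device {
     Name = LTO-Drive-3
     Media Type = LTO-3
     Archive Device = /dev/nst2
     Autochanger = yes
     Maximum Fatal Errors = 3    # proposed: stop using this drive after 3 fatal errors
   }

   * disable storage=LTO-Library drive=3    # proposed console commands
   * enable  storage=LTO-Library drive=3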
681 Item 20: An option to operate on all pools with update vol parameters
682 Origin: Dmitriy Pinchukov <absh@bossdev.kiev.ua>
684 Status: Patch made by Nigel Stepp
686 What: When I do update -> Volume parameters -> All Volumes
687 from Pool, then I have to select pools one by one. I'd like
console to have an option like "0: All Pools" in the list of
defined pools.
Why: I have many pools and am therefore unhappy with manually
692 updating each of them using update -> Volume parameters -> All
693 Volumes from Pool -> pool #.
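A sketch of the requested console behavior (the "0: All Pools" entry is the proposed
addition; the pool names are only illustrative):

   * update
     ...
     2: Volume parameters
     ...
   Defined Pools:
        0: All Pools        # proposed new entry
        1: Default
        2: Full-Pool
        3: Inc-Pool
   Select the Pool (0-3): 0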
696 Item 21: Include timestamp of job launch in "stat clients" output
697 Origin: Mark Bergman <mark.bergman@uphs.upenn.edu>
698 Date: Tue Aug 22 17:13:39 EDT 2006
701 What: The "stat clients" command doesn't include any detail on when
702 the active backup jobs were launched.
704 Why: Including the timestamp would make it much easier to decide whether
705 a job is running properly.
707 Notes: It may be helpful to have the output from "stat clients" formatted
708 more like that from "stat dir" (and other commands), in a column
709 format. The per-client information that's currently shown (level,
710 client name, JobId, Volume, pool, device, Files, etc.) is good, but
711 somewhat hard to parse (both programmatically and visually),
712 particularly when there are many active clients.
716 Item 22: Implement Storage daemon compression
717 Date: 18 December 2006
Origin: Vadim A. Umanski, e-mail umanski@ext.ru
720 What: The ability to compress backup data on the SD receiving data
721 instead of doing that on client sending data.
Why: The need is practical. I've got some machines that can send
data to the network 4 or 5 times faster than they can compress
it (I've measured that). They're using fast enough SCSI/FC
disk subsystems but rather slow CPUs (e.g. UltraSPARC II).
And the backup server has quite fast CPUs (e.g. dual P4
Xeons) and quite a low load. When you have 20, 50 or 100 GB
of raw data, running a job 4 to 5 times faster really
matters. On the other hand, the data can be
compressed 50% or better, so using twice the disk space for
disk backup is not good at all. And the network is all mine
(I have a dedicated management/provisioning network) and I
can get as much bandwidth as I need - 100Mbps, 1000Mbps...
That's why the server-side compression feature is needed!
737 Item 23: Improve Bacula's tape and drive usage and cleaning management
738 Date: 8 November 2005, November 11, 2005
739 Origin: Adam Thornton <athornton at sinenomine dot net>,
740 Arno Lehmann <al at its-lehmann dot de>
743 What: Make Bacula manage tape life cycle information, tape reuse
744 times and drive cleaning cycles.
Why: All three parts of this project are important when operating
with a larger number of tapes and drives.
748 We need to know which tapes need replacement, and we need to
749 make sure the drives are cleaned when necessary. While many
750 tape libraries and even autoloaders can handle all this
751 automatically, support by Bacula can be helpful for smaller
752 (older) libraries and single drives. Limiting the number of
753 times a tape is used might prevent tape errors when using
tapes until the drives can't read them any more. Also, checking
755 drive status during operation can prevent some failures (as I
756 [Arno] had to learn the hard way...)
758 Notes: First, Bacula could (and even does, to some limited extent)
759 record tape and drive usage. For tapes, the number of mounts,
760 the amount of data, and the time the tape has actually been
761 running could be recorded. Data fields for Read and Write
762 time and Number of mounts already exist in the catalog (I'm
763 not sure if VolBytes is the sum of all bytes ever written to
764 that volume by Bacula). This information can be important
765 when determining which media to replace. The ability to mark
766 Volumes as "used up" after a given number of write cycles
767 should also be implemented so that a tape is never actually
768 worn out. For the tape drives known to Bacula, similar
769 information is interesting to determine the device status and
770 expected life time: Time it's been Reading and Writing, number
771 of tape Loads / Unloads / Errors. This information is not yet
772 recorded as far as I [Arno] know. A new volume status would
773 be necessary for the new state, like "Used up" or "Worn out".
774 Volumes with this state could be used for restores, but not
775 for writing. These volumes should be migrated first (assuming
776 migration is implemented) and, once they are no longer needed,
777 could be moved to a Trash pool.
779 The next step would be to implement a drive cleaning setup.
780 Bacula already has knowledge about cleaning tapes. Once it
781 has some information about cleaning cycles (measured in drive
run time, number of tapes used, or calendar days, for example)
783 it can automatically execute tape cleaning (with an
784 autochanger, obviously) or ask for operator assistance loading
787 The final step would be to implement TAPEALERT checks not only
788 when changing tapes and only sending the information to the
789 administrator, but rather checking after each tape error,
790 checking on a regular basis (for example after each tape
791 file), and also before unloading and after loading a new tape.
Then, depending on the drive's TAPEALERT state and the known
793 drive cleaning state Bacula could automatically schedule later
794 cleaning, clean immediately, or inform the operator.
796 Implementing this would perhaps require another catalog change
797 and perhaps major changes in SD code and the DIR-SD protocol,
798 so I'd only consider this worth implementing if it would
799 actually be used or even needed by many people.
801 Implementation of these projects could happen in three distinct
802 sub-projects: Measuring Tape and Drive usage, retiring
803 volumes, and handling drive cleaning and TAPEALERTs.
805 Item 24: Multiple threads in file daemon for the same job
806 Date: 27 November 2005
807 Origin: Ove Risberg (Ove.Risberg at octocode dot com)
810 What: I want the file daemon to start multiple threads for a backup
811 job so the fastest possible backup can be made.
813 The file daemon could parse the FileSet information and start
one thread for each File entry located on a separate
filesystem or disk.

A configuration option in the job section should be used to
enable or disable this feature. The configuration option could
specify the maximum number of threads in the file daemon.

If the threads could spool the data to separate spool files,
822 the restore process will not be much slower.
824 Why: Multiple concurrent backups of a large fileserver with many
825 disks and controllers will be much faster.
827 Item 25: Archival (removal) of User Files to Tape
Origin: Ray Pengelly [ray at biomed dot queensu dot ca]
832 What: The ability to archive data to storage based on certain parameters
833 such as age, size, or location. Once the data has been written to
834 storage and logged it is then pruned from the originating
filesystem. Note! We are talking about user's files and not
Bacula Volumes.
838 Why: This would allow fully automatic storage management which becomes
839 useful for large datastores. It would also allow for auto-staging
840 from one media type to another.
842 Example 1) Medical imaging needs to store large amounts of data.
843 They decide to keep data on their servers for 6 months and then put
844 it away for long term storage. The server then finds all files
older than 6 months and writes them to tape. The files are then removed
from disk.
848 Example 2) All data that hasn't been accessed in 2 months could be
849 moved from high-cost, fibre-channel disk storage to a low-cost
large-capacity SATA disk storage pool which doesn't have as quick an
851 access time. Then after another 6 months (or possibly as one
852 storage pool gets full) data is migrated to Tape.
858 ========= Added since the last vote =================
860 Item 1: Implement an interface between Bacula and Amazon's S3.
862 Origin: Soren Hansen <soren@ubuntu.com>
What: Enable the storage daemon to store backup data on Amazon's
S3 service.
867 Why: Amazon's S3 is a cheap way to store data off-site. Current
868 ways to integrate Bacula and S3 involve storing all the data
869 locally and syncing them to S3, and manually fetching them
870 again when they're needed. This is very cumbersome.
872 Item: Store and restore extended attributes, especially selinux file contexts
873 Date: 28 December 2007
874 Origin: Frank Sweetser <fs@wpi.edu>
875 What: The ability to store and restore extended attributes on
876 filesystems that support them, such as ext3.
878 Why: Security Enhanced Linux (SELinux) enabled systems make extensive
879 use of extended attributes. In addition to the standard user,
880 group, and permission, each file has an associated SELinux context
881 stored as an extended attribute. This context is used to define
882 which operations a given program is permitted to perform on that
883 file. Storing contexts on an SELinux system is as critical as
884 storing ownership and permissions. In the case of a full system
885 restore, the system will not even be able to boot until all
886 critical system files have been properly relabeled.
888 Notes: Fedora ships with a version of tar that has been patched to handle
889 extended attributes. The patch has not been integrated upstream
890 yet, so could serve as a good starting point.
892 http://linux.die.net/man/2/getxattr
893 http://linux.die.net/man/2/setxattr
894 http://linux.die.net/man/2/listxattr
896 http://linux.die.net/man/3/getfilecon
897 http://linux.die.net/man/3/setfilecon
899 Item 1: enable/disable compression depending on storage device (disk/tape)
900 Origin: Ralf Gross ralf-lists@ralfgross.de
902 Status: Initial Request
904 What: Add a new option to the storage resource of the director. Depending
905 on this option, compression will be enabled/disabled for a device.
907 Why: If different devices (disks/tapes) are used for full/diff/incr
908 backups, software compression will be enabled for all backups
909 because of the FileSet compression option. For backup to tapes
which are able to do hardware compression, this is not desired.
914 http://news.gmane.org/gmane.comp.sysutils.backup.bacula.devel/cutoff=11124
It must be clear to the user that the FileSet compression option
must still be enabled to use compression for a backup job at all.
Thus a name for the new option in the director must be
chosen carefully.

Notes: KES: I think the Storage definition should probably override what
is in the Job definition or vice-versa, but in any case, it must
be clearly defined which one wins.
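One possible shape of the proposed option, sketched in the Director's Storage
resource (the "Allow Compression" name is only a suggestion, not an existing
directive at the time of writing):

   Storage {
     Name = LTO-Library
     Address = sd.example.com
     SD Port = 9103
     Password = "sd-password"
     Device = LTO-Drive
     Media Type = LTO-3
     Allow Compression = no    # proposed: ignore the FileSet's software compression for this device
   }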
925 Item 1: Backup and Restore of Windows Encrypted Files through raw encryption
928 Origin: Michael Mohr, SAG Mohr.External@infineon.com
930 Date: 22 February 2008
What: Make it possible to back up and restore Encrypted Files from and to
Windows systems without the need to decrypt them, by using the raw
encryption functions API (see:
http://msdn2.microsoft.com/en-us/library/aa363783.aspx)
that is provided for that reason by Microsoft.
Whether a file is encrypted can be examined by evaluating the
FILE_ATTRIBUTE_ENCRYPTED flag returned by GetFileAttributes().
Why: Without the usage of this interface the File daemon running
under the system account can't read encrypted files, because
it lacks the key needed for decryption. As a result,
encrypted files are currently not backed up
by Bacula, and no error is shown for these skipped files.
Item 1: Possibility to schedule Jobs on the last Friday of the month
953 Origin: Carsten Menke <bootsy52 at gmx dot net>
What: Currently if you want to run your monthly Backups on the last
Friday of each month, this is only possible with workarounds (e.g.
scripting), as some months have four Fridays and some have five.
The same is true if you plan to run your yearly Backups on the
last Friday of the year. It would be nice to have the ability to
use the built-in scheduler for this.
964 Why: In many companies the last working day of the week is Friday (or
965 Saturday), so to get the most data of the month onto the monthly
966 tape, the employees are advised to insert the tape for the
monthly backups on the last Friday of the month.
969 Notes: To give this a complete functionality it would be nice if the
970 "first" and "last" Keywords could be implemented in the
scheduler, so it is also possible to run monthly backups on the
first Friday of the month, and much more. So if the syntax
were expanded to {first|last} {Month|Week|Day|Mo-Fri} of the
{Year|Month|Week}, you would be able to run really flexible jobs.

To get a certain Job to run on the last Friday of the Month, for example, one could
write:
979 Run = pool=Monthly last Fri of the Month at 23:50
983 Run = pool=Yearly last Fri of the Year at 23:50
985 ## Certain Jobs the last Week of a Month
987 Run = pool=LastWeek last Week of the Month at 23:50
989 ## Monthly Backup on the last day of the month
991 Run = pool=Monthly last Day of the Month at 23:50
Item 1: Add a minimum spool size directive
Origin: Frank Sweetser <fs@wpi.edu>
997 What: Add a new SD directive, "minimum spool size" (or similar). This
998 directive would specify a minimum level of free space available for
999 spooling. If the unused spool space is less than this level, any
1000 new spooling requests would be blocked as if the "maximum spool
size" threshold had been reached. Already-spooling jobs would be
1002 unaffected by this directive.
1004 Why: I've been bitten by this scenario a couple of times:
1006 Assume a maximum spool size of 100M. Two concurrent jobs, A and B,
1007 are both running. Due to timing quirks and previously running jobs,
1008 job A has used 99.9M of space in the spool directory. While A is
busy despooling to the volume, B is happily using the remaining 0.1M of
1010 spool space. This ends up in a spool/despool sequence every 0.1M of
1011 data. In addition to fragmenting the data on the volume far more
1012 than was necessary, in larger data sets (ie, tens or hundreds of
1013 gigabytes) it can easily produce multi-megabyte report emails!
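A sketch of the proposed SD directive next to the existing maximum (the "Minimum
Spool Size" name is hypothetical; the other directives shown already exist):

   Device {
     Name = FileStorage
     Media Type = File
     Archive Device = /backup/volumes
     Spool Directory = /spool/bacula
     Maximum Spool Size = 100 GB
     Minimum Spool Size = 5 GB    # proposed: block new spooling requests when free spool space drops below this
   }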
Item n: Expand the Verify Job capability to verify Jobs older than the
last one (for VolumeToCatalog Jobs)
Date: 17 January 2008
1018 Origin: portrix.net Hamburg, Germany.
1019 Contact: Christian Sabelmann
1020 Status: 70% of the required Code is part of the Verify function since v. 2.x
What: The ability to tell Bacula which Job should be verified, instead of
automatically verifying just the last one.

Why: It is sad that such a powerful feature like Verify Jobs
(VolumeToCatalog) is restricted to being used only with the last backup Job
of a client. Users who have to do daily Backups are currently forced to
also do daily Verify Jobs in order to take advantage of this useful
feature. This daily verify-after-backup routine is not always desired,
and Verify Jobs sometimes have to be scheduled separately (not necessarily
scheduled in Bacula). With this feature, Admins could verify Jobs once a
week or even less often, selecting the Jobs they want to verify. This
feature is also not too difficult to implement, taking into account older bug
reports about this feature and the selection of the Job to be verified.
Notes: For the verify Job, the user could select the Job to be verified
from a list of the latest Jobs of a client. It would also be possible to
verify a certain volume. All of these would naturally apply only to
Jobs whose file information is still in the catalog.
1043 Item X: Add EFS support on Windows
1044 Origin: Alex Ehrlich (Alex.Ehrlich-at-mail.ee)
1045 Date: 05 August 2008
1048 What: For each file backed up or restored by FD on Windows, check if
1049 the file is encrypted; if so then use OpenEncryptedFileRaw,
1050 ReadEncryptedFileRaw, WriteEncryptedFileRaw,
CloseEncryptedFileRaw instead of BackupRead and BackupWrite.
Why: Many laptop users utilize the EFS functionality today; so do
some non-laptop ones, too.
Currently files encrypted by means of EFS cannot be backed up.
It means a Windows shop cannot rely on Bacula as its
backup solution, at least when using Windows 2000, XP Pro,
Vista or later on workstations, unless EFS is
forbidden by policies.
The current situation might result in a "false sense of
security" among the end-users.
Notes: Using the xxxEncryptedFileRaw API would allow backing up and
restoring EFS-encrypted files without decrypting their data.
Note that such files cannot be restored "portably" (at least,
easily), but they would be restorable to a different (or
reinstalled) Win32 machine; the restore would require setup
of an EFS recovery agent in advance, of course, and this shall
be clearly reflected in the documentation, but this is the
normal Windows SysAdmin's business.
1072 When "portable" backup is requested the EFS-encrypted files
1073 shall be clearly reported as errors.
1074 See MSDN on the "Backup and Restore of Encrypted Files" topic:
1075 http://msdn.microsoft.com/en-us/library/aa363783.aspx
Maybe the EFS support requires a new flag in the database for
each file.
1078 Unfortunately, the implementation is not as straightforward as
1079 1-to-1 replacement of BackupRead with ReadEncryptedFileRaw,
1080 requiring some FD code rewrite to work with
1081 encrypted-file-related callback functions.
1084 ========== Already implemented ================================
1086 Item n: make changing "spooldata=yes|no" possible for
1087 manual/interactive jobs
1088 Origin: Marc Schiffbauer <marc@schiffbauer.net>
Date: 12 April 2007
1092 What: Make it possible to modify the spooldata option
1093 for a job when being run from within the console.
1094 Currently it is possible to modify the backup level
1095 and the spooldata setting in a Schedule resource.
1096 It is also possible to modify the backup level when using
1097 the "run" command in the console.
But it is currently not possible to do the same
1099 with "spooldata=yes|no" like:
1101 run job=MyJob level=incremental spooldata=yes
1103 Why: In some situations it would be handy to be able to switch
1104 spooldata on or off for interactive/manual jobs based on
1105 which data the admin expects or how fast the LAN/WAN
1106 connection currently is.
1110 Item 1: Implement an option to modify the last written date for volumes
1111 Date: 16 September 2008
1112 Origin: Franck (xeoslaenor at gmail dot com)
1114 What: The ability to modify the last written date for a volume
Why: It's sometimes necessary to skip a volume when you have a pool of volumes
which recycles the oldest volume at each backup.
Sometimes you need to cancel a set of backups (one day's
backups, completely) and you want to avoid Bacula
choosing the volume (which was not written to at all) from
the cancelled backup; it has to jump to the next volume.
In this case, we just need to update the written date
manually to avoid the "oldest volume" purge.
Notes: An option could be added to the "update volume" command (like a "written date"
option).
1126 ============= Empty Feature Request form ===========
1127 Item n: One line summary ...
1128 Date: Date submitted
1129 Origin: Name and email of originator.
1132 What: More detailed explanation ...
1134 Why: Why it is important ...
1136 Notes: Additional notes or features (omit if not used)
1137 ============== End Feature Request form ==============
========== Items put on hold by Kern ============================
1141 Item h1: Split documentation
1142 Origin: Maxx <maxxatworkat gmail dot com>
1143 Date: 27th July 2006
1144 Status: Approved, awaiting implementation
1146 What: Split documentation in several books
Why: The Bacula manual now has more than 600 pages, and looking for
implementation details is getting complicated. I think
it would be good to split the single volume into two or
three parts:

1) Introduction, requirements and tutorial, typically
useful only until first installation time

2) Basic installation and configuration, with all the
gory details about the directives supported

3) Advanced Bacula: testing, troubleshooting, GUI and
ancillary programs, security management, scripting,
etc.
1162 Notes: This is a project that needs to be done, and will be implemented,
1163 but it is really a developer issue of timing, and does not
need to be included in the voting.
1167 Item h2: Implement support for stacking arbitrary stream filters, sinks.
1168 Date: 23 November 2006
1169 Origin: Landon Fuller <landonf@threerings.net>
1170 Status: Planning. Assigned to landonf.
1172 What: Implement support for the following:
1173 - Stacking arbitrary stream filters (eg, encryption, compression,
sparse data handling)
1175 - Attaching file sinks to terminate stream filters (ie, write out
1176 the resultant data to a file)
1177 - Refactor the restoration state machine accordingly
Why: The existing stream implementation suffers from the following:
- All state (compression, encryption, stream restoration) is
global across the entire restore process, for all streams. There are
multiple entry and exit points in the restoration state machine, and
thus multiple places where state must be allocated, deallocated,
initialized, or reinitialized. This results in exceptional complexity
for the author of a stream filter.
- The developer must enumerate all possible combinations of filters
and stream types (ie, win32 data with encryption, without encryption,
with encryption AND compression, etc).
1190 Notes: This feature request only covers implementing the stream filters/
1191 sinks, and refactoring the file daemon's restoration
1192 implementation accordingly. If I have extra time, I will also
1193 rewrite the backup implementation. My intent in implementing the
1194 restoration first is to solve pressing bugs in the restoration
1195 handling, and to ensure that the new restore implementation
1196 handles existing backups correctly.
1198 I do not plan on changing the network or tape data structures to
1199 support defining arbitrary stream filters, but supporting that
1200 functionality is the ultimate goal.
1202 Assistance with either code or testing would be fantastic.
1204 Notes: Kern: this project has a lot of merit, and we need to do it, but
1205 it is really an issue for developers rather than a new feature
for users, so I have removed it from the voting list but kept it
here; at some point, it will be implemented.
1209 Item h3: Filesystem watch triggered backup.
1210 Date: 31 August 2006
1211 Origin: Jesper Krogh <jesper@krogh.cc>
What: With inotify and similar filesystem-triggered notification
systems, it is possible to have the file daemon monitor
filesystem changes and initiate a backup.

Why: There are 2 situations where this is nice to have.
1) It is possible to get a much finer-grained backup than
the fixed schedules used now. A file created and deleted
a few hours later can automatically be caught.

2) The introduced load on the system will probably be
distributed more evenly on the system.
Notes: This can be combined with configuration that specifies
something like: "at most every 15 minutes or when changes
exceed a given threshold".

Kern Notes: I would rather see this implemented by an external program
that monitors the Filesystem changes, then uses the console
to start the appropriate job.
1234 Item h4: Directive/mode to backup only file changes, not entire file
1235 Date: 11 November 2005
1236 Origin: Joshua Kugler <joshua dot kugler at uaf dot edu>
1237 Marek Bajon <mbajon at bimsplus dot com dot pl>
1240 What: Currently when a file changes, the entire file will be backed up in
1241 the next incremental or full backup. To save space on the tapes
1242 it would be nice to have a mode whereby only the changes to the
1243 file would be backed up when it is changed.
1245 Why: This would save lots of space when backing up large files such as
1246 logs, mbox files, Outlook PST files and the like.
1248 Notes: This would require the usage of disk-based volumes as comparing
1249 files would not be feasible using a tape drive.
1251 Notes: Kern: I don't know how to implement this. Put on hold until someone
1252 provides a detailed implementation plan.
1255 Item h5: Implement multiple numeric backup levels as supported by dump
1257 Origin: Daniel Rich <drich@employees.org>
1259 What: Dump allows specification of backup levels numerically instead of just
1260 "full", "incr", and "diff". In this system, at any given level,
all files are backed up that were modified since the last
1262 backup of a higher level (with 0 being the highest and 9 being the
1263 lowest). A level 0 is therefore equivalent to a full, level 9 an
1264 incremental, and the levels 1 through 8 are varying levels of
1265 differentials. For bacula's sake, these could be represented as
1266 "full", "incr", and "diff1", "diff2", etc.
1268 Why: Support of multiple backup levels would provide for more advanced
1269 backup rotation schemes such as "Towers of Hanoi". This would
1270 allow better flexibility in performing backups, and can lead to
shorter recovery times.
Notes: Legato Networker supports a similar system with full, incr, and 1-9 as
levels.
1276 Notes: Kern: I don't see the utility of this, and it would be a *huge*
1277 modification to existing code.
1279 Item h6: Implement NDMP protocol support
1284 What: Network Data Management Protocol is implemented by a number of
NAS filer vendors to enable backups using third-party
software.

Why: This would allow NAS filer backups in Bacula without incurring
the overhead of NFS or SMB/CIFS.
1291 Notes: Further information is available:
1293 http://www.ndmp.org/wp/wp.shtml
1294 http://www.traakan.com/ndmjob/index.html
1296 There are currently no viable open-source NDMP
1297 implementations. There is a reference SDK and example
1298 app available from ndmp.org but it has problems
compiling on recent Linux and Solaris OSes. The ndmjob
1300 reference implementation from Traakan is known to
1301 compile on Solaris 10.
1303 Notes: Kern: I am not at all in favor of this until NDMP becomes
1304 an Open Standard or until there are Open Source libraries
1305 that interface to it.
1307 Item h7: Commercial database support
1308 Origin: Russell Howe <russell_howe dot wreckage dot org>
1312 What: It would be nice for the database backend to support more databases.
1313 I'm thinking of SQL Server at the moment, but I guess Oracle, DB2,
1314 MaxDB, etc are all candidates. SQL Server would presumably be
1315 implemented using FreeTDS or maybe an ODBC library?
Why: We only really have one database server, which is MS SQL Server 2000.
Maintaining a second one just for the backup software is undesirable
(we grew out of SQLite, which I liked, but which didn't work so well
with our database size). We don't really have a machine with the resources
to run postgres, and would rather only maintain a single DBMS.
1322 We're stuck with SQL Server because pretty much all the company's
1323 custom applications (written by consultants) are locked into SQL
1324 Server 2000. I can imagine this scenario is fairly common, and it
1325 would be nice to use the existing properly specced database server
for storing Bacula's catalog, rather than having to run a second
DBMS.
1329 Notes: This might be nice, but someone other than me will probably need to
1330 implement it, and at the moment, proprietary code cannot legally
1331 be mixed with Bacula GPLed code. This would be possible only
providing the vendors provide GPLed (or Open Source) interface
code.
1335 Item h8: Incorporation of XACML2/SAML2 parsing
1336 Date: 19 January 2006
1337 Origin: Adam Thornton <athornton@sinenomine.net>
What: XACML is the "eXtensible Access Control Markup Language" and SAML is
the "Security Assertion Markup Language" -- an XML standard for
making statements about identity and authorization. Having these
would give us a framework to approach ACLs in a generic manner,
and in a way flexible enough to support the four major sorts of
ACLs I see as a concern to Bacula at this point, as well as
(probably) to deal with new sorts of ACLs that may appear in the
future.
1349 Why: Bacula is beginning to need to back up systems with ACLs that do not
1350 map cleanly onto traditional Unix permissions. I see four sets of
1351 ACLs--in general, mutually incompatible with one another--that
1352 we're going to need to deal with. These are: NTFS ACLs, POSIX
ACLs, NFSv4 ACLs, and AFS ACLs. (Some may question the relevance
1354 of AFS; AFS is one of Sine Nomine's core consulting businesses,
1355 and having a reputable file-level backup and restore technology
1356 for it (as Tivoli is probably going to drop AFS support soon since
1357 IBM no longer supports AFS) would be of huge benefit to our
1358 customers; we'd most likely create the AFS support at Sine Nomine
1359 for inclusion into the Bacula (and perhaps some changes to the
1360 OpenAFS volserver) core code.)
1362 Now, obviously, Bacula already handles NTFS just fine. However, I
1363 think there's a lot of value in implementing a generic ACL model,
1364 so that it's easy to support whatever particular instances of ACLs
1365 come down the pike: POSIX ACLS (think SELinux) and NFSv4 are the
1366 obvious things arriving in the Linux world in a big way in the
1367 near future. XACML, although overcomplicated for our needs,
1368 provides this framework, and we should be able to leverage other
1369 people's implementations to minimize the amount of work *we* have
1370 to do to get a generic ACL framework. Basically, the costs of
1371 implementation are high, but they're largely both external to
1372 Bacula and already sunk.
1374 Notes: As you indicate this is a bit of "blue sky" or in other words,
1375 at the moment, it is a bit esoteric to consider for Bacula.
1377 Item h9: Archive data
1379 Origin: calvin streeting calvin at absentdream dot com
What: The ability to archive to media (dvd/cd) in an uncompressed format
for dead filing (archiving, not backing up).

Why: At work, when jobs are finished they are moved off of the main
file servers (raid based systems) onto a simple Linux
file server (ide based system) so users can find old
information without contacting the IT dept.

So this data doesn't really change, it only gets added to,
but it also needs backing up. At the moment it takes
about 8 hours to back up our servers (working data), so
rather than add more time to existing backups I am trying
to implement a system where we back up the archive data to
cd/dvd. These disks would only need to be appended to
(burn only new/changed files to new disks for off site
storage). Basically, understand the difference between
archive data and live data.
Notes: Scan the data and email me when it needs burning. Divide it
into predefined chunks. Keep a record of what is on what
disk. Make me a label (simple php->mysql=>pdf stuff; I
could do this bit). Ability to save data uncompressed so
it can be read on any other system (future proof data).
Save the catalog with the disk as some kind of menu
system.
1408 Notes: Kern: I don't understand this item, and in any case, if it
1409 is specific to DVD/CDs, which we do not recommend using,
it is unlikely to be implemented except as a user-contributed
project.
1414 Item h10: Clustered file-daemons
1415 Origin: Alan Brown ajb2 at mssl dot ucl dot ac dot uk
1418 What: A "virtual" filedaemon, which is actually a cluster of real ones.
1420 Why: In the case of clustered filesystems (SAN setups, GFS, or OCFS2, etc)
1421 multiple machines may have access to the same set of filesystems
For performance reasons, one may wish to initiate backups from
1424 several of these machines simultaneously, instead of just using
1425 one backup source for the common clustered filesystem.
1427 For obvious reasons, normally backups of $A-FD/$PATH and
$B-FD/$PATH are treated as different backup sets. In this case
1429 they are the same communal set.
1431 Likewise when restoring, it would be easier to just specify
1432 one of the cluster machines and let bacula decide which to use.
1434 This can be faked to some extent using DNS round robin entries
1435 and a virtual IP address, however it means "status client" will
1436 always give bogus answers. Additionally there is no way of
1437 spreading the load evenly among the servers.
1439 What is required is something similar to the storage daemon
1440 autochanger directives, so that Bacula can keep track of
operating backups/restores and direct new jobs to a "free"
file daemon.
1444 Notes: Kern: I don't understand the request enough to be able to
1445 implement it. A lot more design detail should be presented
1446 before voting on this project.