From: Kern Sibbald Date: Wed, 3 Jan 2007 09:24:35 +0000 (+0000) Subject: Fix dbcheck, update ReleaseNotes, finalize projects X-Git-Tag: Release-2.0.0~10 X-Git-Url: https://git.sur5r.net/?a=commitdiff_plain;h=ffd6101f6e7bd9cb471a3ffcdb6c2c4633bbd5ef;p=bacula%2Fbacula Fix dbcheck, update ReleaseNotes, finalize projects git-svn-id: https://bacula.svn.sourceforge.net/svnroot/bacula/trunk@3907 91ce42f0-d328-0410-95d8-f526ca767f89 --- diff --git a/bacula/ChangeLog b/bacula/ChangeLog index 0f889a2a87..b5ac73863b 100644 --- a/bacula/ChangeLog +++ b/bacula/ChangeLog @@ -1,7 +1,11 @@ Technical notes on version 1.39 General: -Version 1.39.34 released: +Version 2.0.0 released: 4 January 2007 +03Jan07 +kes Fix an incorrect dbcheck reference to Id. + +Version 1.39.34 released: 28Dec06 kes Convert dbcheck to use 64 bit DB IDs. kes Update projects diff --git a/bacula/ReleaseNotes b/bacula/ReleaseNotes index ba729bb2db..4a9f72cbd1 100644 --- a/bacula/ReleaseNotes +++ b/bacula/ReleaseNotes @@ -104,8 +104,10 @@ This will turn off all seek requests during restores and avoid this problem. - VSS for Windows clients is now enabled by default. +- Do not unload autochanger when doing "update slots" +- Implement mount command for autochanger, see manual. -New Features in 1.40.0: +New Features in 2.0.0: - Turn on disk seek code for restores. - Bacula now supports Migration jobs, which are documented in a new Migration chapter in the manual @@ -168,7 +170,8 @@ New Features in 1.40.0: the specified job from being scheduled. Even when disabled, the job can be manually started from the console. - The database Id records should be 32/64 bit independent now. 64 bits - can be enabled by changing one define, but this has never been tested. + can be enabled by changing one define and changing the appropriate + table variable. Normally, you need 64 bits only for FileId. - Relative path specifications (i.e. ../xxx) are now permitted in the restore cd command.
- When running multiple simultaneous jobs, most jobs that use spooling @@ -183,6 +186,8 @@ New Features in 1.40.0: - Lots of DVD fixes -- writing DVDs is now reported to work. - Fix opening of database in a restricted console to respect any Catalog ACL. +- Much better automatic handling of multiple database catalogs in + the restore command. - Permit multiple console/director resources in bconsole.conf. patch from Carsten Paeth calle@calle.in-berlin.de - Character substitution in Job/JobDefs WriteBootStrap. @@ -201,7 +206,7 @@ New Features in 1.40.0: - Add Media.Enabled flag to client backups for dotcmds.c - Enforce Media.Enabled=1 for a current restore to work - Require restore case 3 to have sqlquery permission to work. -- Add -n option to bconsole to turn off conio. +- Add -n option to bconsole to turn off conio -- used in bweb. - The bytes field in the terminated jobs part of the status command now reports in KB, MB, ... units. - When not descending into a directory, print the File= name that @@ -231,7 +236,8 @@ New Features in 1.40.0: lists volumes possibly needing replacement (error, ...). - Implement new code for changing userid and group at startup. This should get Bacula into the correct groups. -- Implement support for removable filesystems. +- Implement support for removable filesystems -- device type directive + and mount, unmount directives. - Transfer rates are now presented in a more readable format thanks to a user submission. - SD is now aware of what volumes are mounted. More information is printed diff --git a/bacula/kernstodo b/bacula/kernstodo index 347f3ff0c4..858e312a2d 100644 --- a/bacula/kernstodo +++ b/bacula/kernstodo @@ -41,6 +41,8 @@ Document: Priority: +- Look at the possibility of adding "SET NAMES UTF8" for MySQL, + and possibly changing the blobs into varchar. - Check if gnome-console works with TLS. - Ensure that the SD re-reads the Media record if the JobFiles does not match -- it may have been updated by another job. 
@@ -59,6 +61,7 @@ Projects: - GUI - Admin - Management reports + - Add doc for bweb -- especially Installation - Look at Webmin http://www.orangecrate.com/modules.php?name=News&file=article&sid=501 - Performance diff --git a/bacula/projects b/bacula/projects index f647ee59f7..8be12a4433 100644 --- a/bacula/projects +++ b/bacula/projects @@ -1,11 +1,10 @@ Projects: Bacula Projects Roadmap - Status updated 15 December 2006 - + Status updated 3 January 2007 Summary: -Item 1: Accurate restoration of renamed/deleted files +Item 1: Accurate restoration of renamed/deleted files Item 2: Implement a Bacula GUI/management tool. Item 3: Implement Base jobs. Item 4: Implement from-client and to-client on restore command line. @@ -40,7 +39,11 @@ Item 33: Commercial database support Item 34: Archive data Item 35: Filesystem watch triggered backup. Item 36: Implement multiple numeric backup levels as supported by dump - +Item 37: Implement a server-side compression feature +Item 38: Cause daemons to use a specific IP address to source communications +Item 39: Multiple threads in file daemon for the same job +Item 40: Restore only file attributes (permissions, ACL, owner, group...) +Item 41: Add an item to the restore option where you can select a pool Below, you will find more information on future projects: @@ -134,26 +137,26 @@ Item 3: Implement Base jobs. FD a list of files/attribs, and the FD must search the list and compare it for each file to be saved. -Item 4: Implement from-client and to-client on restore command line. - Date: 11 December 2006 - Origin: Discussion on Bacula-users entitled 'Scripted restores to - different clients', December 2006 - Status: New feature request +Item 4: Implement from-client and to-client on restore command line. 
+ Date: 11 December 2006 + Origin: Discussion on Bacula-users entitled 'Scripted restores to + different clients', December 2006 + Status: New feature request - What: While using bconsole interactively, you can specify the client - that a backup job is to be restored for, and then you can - specify later a different client to send the restored files - back to. However, using the 'restore' command with all options - on the command line, this cannot be done, due to the ambiguous - 'client' parameter. Additionally, this parameter means different - things depending on if it's specified on the command line or - afterwards, in the Modify Job screens. + What: While using bconsole interactively, you can specify the client + that a backup job is to be restored for, and then you can + specify later a different client to send the restored files + back to. However, using the 'restore' command with all options + on the command line, this cannot be done, due to the ambiguous + 'client' parameter. Additionally, this parameter means different + things depending on if it's specified on the command line or + afterwards, in the Modify Job screens. - Why: This feature would enable restore jobs to be more completely - automated, for example by a web or GUI front-end. + Why: This feature would enable restore jobs to be more completely + automated, for example by a web or GUI front-end. Notes: client can also be implied by specifying the jobid on the command - line + line Item 5: Implement creation and maintenance of copy pools Date: 27 November 2005 @@ -392,11 +395,11 @@ Item 15: Allow skipping execution of Jobs Origin: Florian Schnabel Status: - What: An easy option to skip a certain job on a certain date. - Why: You could then easily skip tape backups on holidays. Especially - if you got no autochanger and can only fit one backup on a tape - that would be really handy, other jobs could proceed normally - and you won't get errors that way. 
+ What: An easy option to skip a certain job on a certain date. + Why: You could then easily skip tape backups on holidays. Especially + if you got no autochanger and can only fit one backup on a tape + that would be really handy, other jobs could proceed normally + and you won't get errors that way. Item 16: Tray monitor window cleanups @@ -434,22 +437,22 @@ Item 17: Split documentation Item 18: Automatic promotion of backup levels - Date: 19 January 2006 - Origin: Adam Thornton - Status: Blue sky + Date: 19 January 2006 + Origin: Adam Thornton + Status: - What: Amanda has a feature whereby it estimates the space that a - differential, incremental, and full backup would take. If the - difference in space required between the scheduled level and the next - level up is beneath some user-defined critical threshold, the backup - level is bumped to the next type. Doing this minimizes the number of - volumes necessary during a restore, with a fairly minimal cost in - backup media space. + What: Amanda has a feature whereby it estimates the space that a + differential, incremental, and full backup would take. If the + difference in space required between the scheduled level and the next + level up is beneath some user-defined critical threshold, the backup + level is bumped to the next type. Doing this minimizes the number of + volumes necessary during a restore, with a fairly minimal cost in + backup media space. - Why: I know at least one (quite sophisticated and smart) user - for whom the absence of this feature is a deal-breaker in terms of - using Bacula; if we had it it would eliminate the one cool thing - Amanda can do and we can't (at least, the one cool thing I know of). + Why: I know at least one (quite sophisticated and smart) user + for whom the absence of this feature is a deal-breaker in terms of + using Bacula; if we had it it would eliminate the one cool thing + Amanda can do and we can't (at least, the one cool thing I know of). 
Item 19: Add an override in Schedule for Pools based on backup types. @@ -472,9 +475,9 @@ Status: has more capacity (i.e. an 8TB tape library). Item 20: An option to operate on all pools with update vol parameters - Origin: Dmitriy Pinchukov - Date: 16 August 2006 - Status: + Origin: Dmitriy Pinchukov + Date: 16 August 2006 + Status: What: When I do update -> Volume parameters -> All Volumes from Pool, then I have to select pools one by one. I'd like @@ -521,24 +524,24 @@ Item 22: Include timestamp of job launch in "stat clients" output Item 23: Message mailing based on backup types -Origin: Evan Kaufman - Date: January 6, 2006 -Status: + Origin: Evan Kaufman + Date: January 6, 2006 + Status: - What: In the "Messages" resource definitions, allowing messages - to be mailed based on the type (backup, restore, etc.) and level - (full, differential, etc.) of job that created the originating - message(s). + What: In the "Messages" resource definitions, allowing messages + to be mailed based on the type (backup, restore, etc.) and level + (full, differential, etc.) of job that created the originating + message(s). -Why: It would, for example, allow someone's boss to be emailed - automatically only when a Full Backup job runs, so he can - retrieve the tapes for offsite storage, even if the IT dept. - doesn't (or can't) explicitly notify him. At the same time, his - mailbox wouldn't be filled by notifications of Verifies, Restores, - or Incremental/Differential Backups (which would likely be kept - onsite). + Why: It would, for example, allow someone's boss to be emailed + automatically only when a Full Backup job runs, so he can + retrieve the tapes for offsite storage, even if the IT dept. + doesn't (or can't) explicitly notify him. At the same time, his + mailbox wouldn't be filled by notifications of Verifies, Restores, + or Incremental/Differential Backups (which would likely be kept + onsite).
-Notes: One way this could be done is through additional message types, for example: + Notes: One way this could be done is through additional message types, for example: Messages { # email the boss only on full system backups @@ -765,39 +768,36 @@ Date: 23 November 2006 Origin: Landon Fuller Status: Planning. Assigned to landonf. -What: - Implement support for the following: - - Stacking arbitrary stream filters (eg, encryption, compression, - sparse data handling) - - Attaching file sinks to terminate stream filters (ie, write out - the resultant data to a file) - - Refactor the restoration state machine accordingly - -Why: - The existing stream implementation suffers from the following: - - All state (compression, encryption, stream restoration) is - global across the entire restore process, for all streams. There are - multiple entry and exit points in the restoration state machine, and - thus multiple places where state must be allocated, deallocated, - initialized, or reinitialized. This results in exceptional complexity - for the author of a stream filter. - - The developer must enumerate all possible combinations of filters - and stream types (ie, win32 data with encryption, without encryption, - with encryption AND compression, etc). - -Notes: - This feature request only covers implementing the stream filters/ - sinks, and refactoring the file daemon's restoration implementation - accordingly. If I have extra time, I will also rewrite the backup - implementation. My intent in implementing the restoration first is to - solve pressing bugs in the restoration handling, and to ensure that - the new restore implementation handles existing backups correctly. - - I do not plan on changing the network or tape data structures to - support defining arbitrary stream filters, but supporting that - functionality is the ultimate goal. - - Assistance with either code or testing would be fantastic. + What: Implement support for the following: + - Stacking arbitrary stream filters (eg, encryption, compression, + sparse data handling) + - Attaching file sinks to terminate stream filters (ie, write out + the resultant data to a file) + - Refactor the restoration state machine accordingly + + Why: The existing stream implementation suffers from the following: + - All state (compression, encryption, stream restoration) is + global across the entire restore process, for all streams. There are + multiple entry and exit points in the restoration state machine, and + thus multiple places where state must be allocated, deallocated, + initialized, or reinitialized. This results in exceptional complexity + for the author of a stream filter. + - The developer must enumerate all possible combinations of filters + and stream types (ie, win32 data with encryption, without encryption, + with encryption AND compression, etc). + + Notes: This feature request only covers implementing the stream filters/ + sinks, and refactoring the file daemon's restoration implementation + accordingly. If I have extra time, I will also rewrite the backup + implementation. My intent in implementing the restoration first is to + solve pressing bugs in the restoration handling, and to ensure that + the new restore implementation handles existing backups correctly. + + I do not plan on changing the network or tape data structures to + support defining arbitrary stream filters, but supporting that + functionality is the ultimate goal. + + Assistance with either code or testing would be fantastic. Item 28: Allow FD to initiate a backup Origin: Frank Volf (frank at deze dot org) @@ -829,9 +829,9 @@ Item 29: Directive/mode to backup only file changes, not entire file files would not be feasible using a tape drive.
Item 30: Automatic disabling of devices - Date: 2005-11-11 - Origin: Peter Eriksson - Status: + Date: 2005-11-11 + Origin: Peter Eriksson + Status: What: After a configurable amount of fatal errors with a tape drive Bacula should automatically disable further use of a certain @@ -954,28 +954,28 @@ Item 34: Archive data What: The ability to archive to media (dvd/cd) in an uncompressed format for dead filing (archiving not backing up) - Why: At my workplace, when jobs are finished they are moved off of the main file - servers (raid based systems) onto a simple linux file server (ide based - system) so users can find old information without contacting the IT - dept. - - So this data doesn't really change, it only gets added to, - but it also needs backing up. At the moment it takes - about 8 hours to back up our servers (working data) so - rather than add more time to existing backups I am trying - to implement a system where we back up the archive data to - cd/dvd; these disks would only need to be appended to - (burn only new/changed files to new disks for off site - storage). Basically, understand the difference between - archive data and live data. - - Notes: Scan the data and email me when it needs burning; divide - into predefined chunks; keep a record of what is on what - disk; make me a label (simple php->mysql=>pdf stuff) -- I - could do this bit; ability to save data uncompressed so - it can be read in any other system (future proof data); - save the catalog with the disk as some kind of menu - system + Why: At my workplace, when jobs are finished they are moved off of the main file + servers (raid based systems) onto a simple linux file server (ide based + system) so users can find old information without contacting the IT + dept. + + So this data doesn't really change, it only gets added to, + but it also needs backing up. At the moment it takes + about 8 hours to back up our servers (working data) so + rather than add more time to existing backups I am trying + to implement a system where we back up the archive data to + cd/dvd; these disks would only need to be appended to + (burn only new/changed files to new disks for off site + storage). Basically, understand the difference between + archive data and live data. + + Notes: Scan the data and email me when it needs burning; divide + into predefined chunks; keep a record of what is on what + disk; make me a label (simple php->mysql=>pdf stuff) -- I + could do this bit; ability to save data uncompressed so + it can be read in any other system (future proof data); + save the catalog with the disk as some kind of menu + system Item 35: Filesystem watch triggered backup. Date: 31 August 2006 @@ -1022,7 +1022,8 @@ Why: Support of multiple backup levels would provide for more advanced back Notes: Legato Networker supports a similar system with full, incr, and 1-9 as levels. -Item 1: Implement a server-side compression feature + +Item 37: Implement a server-side compression feature Date: 18 December 2006 Origin: Vadim A. Umanski, e-mail umanski@ext.ru Status: @@ -1043,36 +1044,36 @@ Item 1: Implement a server-side compression feature That's why the server-side compression feature is needed! Notes: -Item 1: Cause daemons to use a specific IP address to source communications - Origin: Bill Moran - Date: 18 Dec 2006 +Item 38: Cause daemons to use a specific IP address to source communications + Origin: Bill Moran + Date: 18 Dec 2006 Status: - What: Cause Bacula daemons (dir, fd, sd) to always use the ip address - specified in the [DIR|DF|SD]Addr directive as the source IP - for initiating communication. - Why: On complex networks, as well as extremely secure networks, it's - not unusual to have multiple possible routes through the network.
- Often, each of these routes is secured by different policies - (effectively, firewalls allow or deny different traffic depending - on the source address) - Unfortunately, it can sometimes be difficult or impossible to - represent this in a system routing table, as the result is - excessive subnetting that quickly exhausts available IP space. - The best available workaround is to provide multiple IPs to - a single machine that are all on the same subnet. In order - for this to work properly, applications must support the ability - to bind outgoing connections to a specified address, otherwise - the operating system will always choose the first IP that - matches the required route. - Notes: Many other programs support this. For example, the following - can be configured in BIND: - query-source address 10.0.0.1; - transfer-source 10.0.0.2; - Which means queries from this server will always come from - 10.0.0.1 and zone transfers will always originate from - 10.0.0.2. - -Item n: Multiple threads in file daemon for the same job + What: Cause Bacula daemons (dir, fd, sd) to always use the ip address + specified in the [DIR|DF|SD]Addr directive as the source IP + for initiating communication. + Why: On complex networks, as well as extremely secure networks, it's + not unusual to have multiple possible routes through the network. + Often, each of these routes is secured by different policies + (effectively, firewalls allow or deny different traffic depending + on the source address) + Unfortunately, it can sometimes be difficult or impossible to + represent this in a system routing table, as the result is + excessive subnetting that quickly exhausts available IP space. + The best available workaround is to provide multiple IPs to + a single machine that are all on the same subnet. 
In order + for this to work properly, applications must support the ability + to bind outgoing connections to a specified address, otherwise + the operating system will always choose the first IP that + matches the required route. + Notes: Many other programs support this. For example, the following + can be configured in BIND: + query-source address 10.0.0.1; + transfer-source 10.0.0.2; + Which means queries from this server will always come from + 10.0.0.1 and zone transfers will always originate from + 10.0.0.2. + +Item 39: Multiple threads in file daemon for the same job Date: 27 November 2005 Origin: Ove Risberg (Ove.Risberg at octocode dot com) Status: @@ -1094,7 +1095,7 @@ Item n: Multiple threads in file daemon for the same job Why: Multiple concurrent backups of a large fileserver with many disks and controllers will be much faster. -Item n: Restore only file attributes (permissions, ACL, owner, group...) +Item 40: Restore only file attributes (permissions, ACL, owner, group...) Origin: Eric Bollengier Date: 30/12/2006 Status: @@ -1113,6 +1114,29 @@ Item n: Restore only file attributes (permissions, ACL, owner, group...) If the file isn't here, we can create an empty one and apply rights or do nothing. +Item 41: Add an item to the restore option where you can select a pool + Origin: kshatriyak at gmail dot com + Date: 1/1/2006 + Status: + + What: In the restore option (Select the most recent backup for a + client) it would be useful to add an option where you can limit + the selection to a certain pool. + + Why: When using cloned jobs, most of the time you have 2 pools - a + disk pool and a tape pool. People who have 2 pools would like to + select the most recent backup from disk, not from tape (tape + would be only needed in emergency). However, the most recent + backup (which may just differ a second from the disk backup) may + be on tape and would be selected. 
The problem becomes bigger if + you have a full and differential - the most "recent" full backup + may be on disk, while the most recent differential may be on tape + (though the differential on disk may differ even only a second or + so). Bacula will then complain that the backups reside on different + media. For now, the only solution when restoring when you have + 2 pools is to manually search for the right job-ids and enter + them by hand, which is a bit error-prone. + ============= Empty Feature Request form =========== Item n: One line summary ... Date: Date submitted diff --git a/bacula/src/tools/dbcheck.c b/bacula/src/tools/dbcheck.c index 10daeabc6d..04cadc8251 100644 --- a/bacula/src/tools/dbcheck.c +++ b/bacula/src/tools/dbcheck.c @@ -513,8 +513,9 @@ static int make_id_list(const char *query, ID_LIST *id_list) */ static int delete_id_list(const char *query, ID_LIST *id_list) { + char ed1[50]; for (int i=0; i < id_list->num_ids; i++) { - bsnprintf(buf, sizeof(buf), query, id_list->Id[i]); + bsnprintf(buf, sizeof(buf), query, edit_int64(id_list->Id[i], ed1)); if (verbose) { printf(_("Deleting: %s\n"), buf); } diff --git a/bacula/technotes-1.39 b/bacula/technotes-1.39 index 2060751fc4..aab030f6cd 100644 --- a/bacula/technotes-1.39 +++ b/bacula/technotes-1.39 @@ -1,6 +1,10 @@ Technical notes on version 1.39 General: +Version 2.0.0 released: 4 January 2007 +03Jan07 +kes Fix an incorrect dbcheck reference to Id. + Version 1.39.34 released: 28Dec06 kes Convert dbcheck to use 64 bit DB IDs.