From ee7264d0897b1720b53779062938a6abc404007b Mon Sep 17 00:00:00 2001 From: Kern Sibbald Date: Wed, 8 Mar 2006 21:17:07 +0000 Subject: [PATCH] - Rename mac.c migrate.c - Add user friendly display of VolBytes in job report. - Rename target... to previous... to make it a bit easier to understand. - Add selection type and selection pattern to Migration (idea given by David Boyes) git-svn-id: https://bacula.svn.sourceforge.net/svnroot/bacula/trunk@2826 91ce42f0-d328-0410-95d8-f526ca767f89 --- bacula/ReleaseNotes | 191 +- bacula/examples/recover.pl | 2886 +++++++++++++++++ bacula/kernstodo | 109 +- bacula/kes-1.38 | 68 + bacula/kes-1.39 | 31 +- bacula/src/cats/create_postgresql_database.in | 11 +- bacula/src/cats/mysql.c | 16 +- bacula/src/dird/Makefile.in | 4 +- bacula/src/dird/backup.c | 6 +- bacula/src/dird/catreq.c | 8 +- bacula/src/dird/dird_conf.c | 59 +- bacula/src/dird/dird_conf.h | 6 +- bacula/src/dird/job.c | 14 +- bacula/src/dird/{mac.c => migrate.c} | 292 +- bacula/src/dird/protos.h | 8 +- bacula/src/dird/recycle.c | 2 +- bacula/src/dird/sql_cmds.c | 8 +- bacula/src/dird/ua_cmds.c | 8 +- bacula/src/dird/ua_dotcmds.c | 38 +- bacula/src/dird/ua_output.c | 12 +- bacula/src/dird/ua_restore.c | 11 +- bacula/src/dird/ua_run.c | 33 +- bacula/src/dird/ua_select.c | 6 +- bacula/src/dird/verify.c | 26 +- bacula/src/filed/job.c | 34 +- bacula/src/filed/pythonfd.c | 2 +- bacula/src/findlib/bfile.c | 3 +- bacula/src/findlib/bfile.h | 2 +- bacula/src/findlib/create_file.c | 14 +- bacula/src/jcr.h | 27 +- bacula/src/lib/bpipe.c | 9 +- bacula/src/lib/util.c | 2 +- bacula/src/version.h | 4 +- bacula/src/win32/README.win32 | 10 +- 34 files changed, 3705 insertions(+), 255 deletions(-) create mode 100755 bacula/examples/recover.pl rename bacula/src/dird/{mac.c => migrate.c} (65%) diff --git a/bacula/ReleaseNotes b/bacula/ReleaseNotes index e4346a291a..58461edee3 100644 --- a/bacula/ReleaseNotes +++ b/bacula/ReleaseNotes @@ -1,10 +1,171 @@ - Release Notes for Bacula 1.38.3 + 
Release Notes for Bacula 1.38.6-beta3
- Bacula code: Total files = 424 Total lines = 140,955 (*.h *.c *.in)
+ Bacula code: Total files = 418 Total lines = 136,328 (*.h *.c *.in)
 20,440 additional lines of code since version 1.36.3
-Changes to 1.38.3:
+New features:
+- For autochangers, get a Scratch tape from the changer if
+  no appendable Volumes are available.
+- New virtual disk autochanger. See scripts/disk-changer for
+  documentation.
+- New optional Device resource directive in SD. 'Device Type =',
+  which may have types: File, DVD, Tape, or FIFO. This can
+  be useful for writing DVDs on FreeBSD where Bacula cannot
+  correctly detect the DVD.
+- Faster restore tree building that uses less memory.
+- The command line keyword job (or jobname) now refers to the
+  name of the job specified in the Job resource; jobid refers
+  as before to the non-unique numeric jobid; and ujobid refers
+  to the unique job identification that Bacula creates for each
+  job.
+- The job report for Backups has a few more user friendly ways
+  of displaying the information, submitted by John Kodis.
+- The wait command can now be made to wait for jobids.
+- New command line keywords are permitted in update volume. They
+  are Inchanger=yes/no, slot=nn.
+
+Major bug fixes:
+- Fix race condition in multiple-drive autochangers where
+  both drives want the same Volume.
+- Do not allow opening the default catalog for a restricted console
+  if it is not in the ACL.
+- Writable FIFOs now work for restore.
+- ACLs are now checked in all dot commands.
+- Multiple drive autochangers and multiple different autochangers
+  should now work correctly (no race conditions for Volume names,
+  update slots uses the correct StorageId).
+- Fix bug where the drive was always reserved if a restore job failed
+  while in the reservation process.
+
+
+Minor bug fixes:
+- See below.
+
+
+Release 1.38.6 beta3 4Mar06
+04Mar06
+- The po files should now be current.
+- Fix new sql_use_result() code to properly release the
+  buffers in all cases.
+- Convert to using new Python class definitions with (object).
+- Use the keyword ujobid to mean the unique job id; job or jobname
+  to mean the Job name given on the Name directive, and jobid to
+  be the numeric (non-unique) job id.
+- Allow listing by any of the above.
+- Add the user friendly job report code for reporting job elapsed time
+  and rates with suffixes, submitted by John Kodis.
+- Add Priority and JobLevel as Python settable items.
+- Use TEMPORARY table creation where the table is created by
+  Bacula.
+- Add new code submitted by Eric for waiting on a specific jobid.
+- Add ACL checking for the dot commands.
+- Fix restore of writable FIFOs.
+- Fix a bug in bpipe where the string was freed too early.
+
+26Feb06
+- Fix bug reported by Arno in listing blocks with bls.
+- Update the po files at Eric's request.
+
+Release 1.38.6-beta2 25Feb06
+25Feb06
+- Add sql_use_result() define.
+
+Changes to 1.38.6-beta1
+- Don't open the default catalog if it is not in the ACL.
+- Add virtual disk autochanger code.
+- Add user supplied bug fix to make two autochangers work
+  correctly using StorageId with InChanger checks.
+- Correct new/old_jcr confusion in copy_storage().
+- Remove & from Job during scan in msgchan.c -- it probably
+  trashed the stack.
+- When getting the next Volume, if no Volume in Append mode
+  exists and we are dealing with an Autochanger, search
+  for a Scratch Volume.
+- Check for missing value in dot commands -- bug fix.
+- Fix bug in update barcodes command line scanning.
+- Make sure Pool Max Vols is respected.
+- Check that the user supplied a value before referencing
+  it in restore -- pointed out by Karl Hakimian.
+- Add Karl Hakimian's table insert code.
+- Don't ask the user to select a specific Volume when
+  updating all volumes in a Pool.
+- Remove reservation if set for read when removing dcr.
+- Lock code that requests the next appendable volume so that
+  two jobs do not get the same Volume at the same time.
+- Add new Device Type = xxx code. Values are file, tape,
+  dvd, and fifo.
+- Preserve certain modes (ST_LABEL|ST_APPEND|ST_READ) across
+  a re-open to change read/write permission on a device.
+- Correct a misplaced double quote in certain autochanger
+  scripts.
+- Make make_catalog_backup.in a bit more portable.
+- Implement Karl Hakimian's sql_use_result(), which speeds
+  up restore tree building and reduces the memory load.
+- Correct a number of minor bugs in getting a Volume from
+  the Scratch Pool.
+- Implement additional command line options for update Volume.
+- Don't require the user to enter a Volume name when updating
+  all Volumes in a pool.
+
+Release 1.38.5 released 19Jan06:
+- Apply label barcodes fix supplied by Rudolf Cejka.
+- Modify standard rpm installation to set the SD group to disk
+  so that the SD will by default have access to tape drives.
+- Allow users to specify user/group and start options
+  for each daemon in the /etc/sysconf/bacula file.
+
+Changes to 1.38.4 released 17Jan06:
+- The main changes are to the Director and the Storage daemon,
+  thus there is no need to update your File daemons. Just the
+  same, I do recommend running with the release 1.38.3 Win32
+  FD or later.
+- Add two new queries to query.sql provided by Arno. One
+  lists volumes known to the Storage device, and the other
+  lists volumes possibly needing replacement (error, ...).
+- Add periodic (every 24 hours) garbage collection of the memory
+  pool by releasing free buffers.
+- Correct bug counting sizes (for display only) in smartall.c.
+- Print FD mempool stats if debug > 0 rather than 5.
+- Correct bug in alist.c that re-allocated the list if the
+  number of items goes to zero.
+- Move the reservation system thread locking to the top level
+  so that one job at a time tries all possible drives before
+  waiting.
+- Implement a reservation 'fail' message queue that is built
+  and destroyed on each pass through the reservation system.
+  These messages are displayed in a 'Jobs waiting to reserve
+  a drive' list during a 'status storage='. Note, multiple
+  messages will generally print for each JobId because they
+  represent the different problems with either the same drive
+  or different drives. If this output proves too confusing
+  or voluminous, I will display it only when debug level 1
+  or greater is enabled in the SD.
+- Add enable/disable job=. This command prevents
+  the specified job from being scheduled. Even when disabled,
+  the job can be manually started from the console.
+- During 'update slots' clear all InChanger flags where the
+  StorageId is zero (old Media records).
+- Fix autochanger code to strip leading spaces from returned
+  slot numbers. Remove bc from chio-changer.
+- Back port a bit of 1.39 crypto code to reduce diffs.
+- Fix first call to autochanger that missed close()ing the
+  drive. Put close() just before each run_program(). Fixes
+  Arno's changer bug.
+- Add PoolId to the Job record when updating it at job start time.
+- Pull in more code from 1.39 so that there are fewer file
+  differences (the new ua_dotcmds.c, base64.h, crypto.h,
+  hmac.c, jcr.c (dird and lib), lib.h, md5.h, parse_conf.c,
+  util.c). Aside from ua_dotcmds.c these are mostly crypto
+  upgrades.
+- Implement a new method of walking the jcr chain. The
+  incr/dec of the use_count is done within the walking
+  routines. This should prevent a jcr from being freed
+  from under the walk routines.
+
+
+Changes to 1.38.3 released 05Jan06:
 - This is mainly a bug fix release. In addition, the multiple
   drive reservation algorithm has been rewritten.
 - In addition, the method of handling waiting for tapes to be
@@ -13,7 +174,7 @@ Changes to 1.38.3:
 - Simplify code in askdir.c that waits for creating an appendable
   volume so that it can handle multiple returns from the wait
   code.
- Modify the wait code to permit multiple returns. -- Return a zero when "autochanger drives" is called and +- Return a zero when 'autochanger drives' is called and it is not an autochanger. - Make rewind_dev() a method taking a DCR as an argument. This permits closing and reopening the drive if the @@ -88,9 +249,9 @@ Changes to 1.38.3: at the same time. - Apply days keyword patch from Alexander.Bergolth at wu-wien.ac.at If this patch is applied, the number of days can be specified with - "list nextvol days=xx" + 'list nextvol days=xx' or - "status dir days=xx" + 'status dir days=xx' My use case is to be able to preview the next scheduled job (and the next tape to be used) on fridays if there are no scheduled jobs during the weekend. @@ -183,7 +344,7 @@ Major Changes in 1.38: - Volume Shadow Copy support for Win32 thus the capability to backup exclusively opened files (thanks to Thorsten Engel). A VSS enabled Win32 FD is available. You must explicitly - turn on VSS with "Enable VSS = yes" in your FileSet resource. + turn on VSS with 'Enable VSS = yes' in your FileSet resource. - New manual format with an index (thanks to Karl Cunningham). - New Web site format (thanks to Michael Scherer). - SQLite3 support. @@ -194,13 +355,13 @@ Major Changes in 1.38: in native languages. Thanks to Nicolas Boichat. New Directives: -- New Job directive "Prefer Mounted Volumes = yes|no" causes the +- New Job directive 'Prefer Mounted Volumes = yes|no' causes the SD to select either an Autochanger or a drive with a valid Volume already mounted in preference. If none is available, it will select the first available drive. - New Run directive in Job resource of DIR. It permits cloning of jobs. To clone a copy of the current job, use - Run = "job-name level=%l since=\"%s\"" + Run = 'job-name level=%l since=\'%s\'' Note, job-name is normally the same name as the job that is running but there is no restriction on what you put. 
If you want to start the job by hand and use job overrides such as @@ -288,7 +449,7 @@ New Directives: of the manual. New Commands: -- "python restart" restarts the Python interpreter. Rather brutal, make +- 'python restart' restarts the Python interpreter. Rather brutal, make sure no Python scripts are running. This permits you to change a Python script and get Bacula to use the new script. @@ -302,11 +463,11 @@ Items to note!!! - The Storage daemon now keeps track of what tapes it is using (was not the case in 1.36.x). This means that you must be much more careful when removing tapes and putting up a new one. In - general, you should always do a "unmount" prior to removing a - tape, and a "mount" after putting a new one into the drive. + general, you should always do a 'unmount' prior to removing a + tape, and a 'mount' after putting a new one into the drive. - If you use an Autochanger, you MUST update your SD conf file to use the new Autochanger resource. Otherwise, certain commands - such as "update slots" may not work. + such as 'update slots' may not work. - You must add --with-python=[DIR] to the configure command line if you want Python support. Python 2.2, 2.3 and 2.4 should be automatically detected if in the standard place. @@ -340,7 +501,7 @@ Items to note!!! compiling. -Other Items: +Other Items Fixed: - Security fixes for temp files created in mtx-changer, during ./configure, and during making of Rescue disk. - A new script, dvd-handler, in the scripts directory, @@ -351,7 +512,7 @@ Other Items: /patches/dvd+rw-tools-5.21.4.10.8.bacula.patch You must have Python installed to run the scripts. - Part files support: File volumes can now be split into multiple - files, called "parts". + files, called 'parts'. - For the details of the Python scripting support, please see the new Python Scripting chapter in the manual. 
- The default user/group for the Director and Storage daemon installed diff --git a/bacula/examples/recover.pl b/bacula/examples/recover.pl new file mode 100755 index 0000000000..fe6e946743 --- /dev/null +++ b/bacula/examples/recover.pl @@ -0,0 +1,2886 @@ +#!/usr/bin/perl -w + +=head1 NAME + +recover.pl - a script to provide an interface for restore files similar +to Legatto Networker's recover program. + +=cut + +use strict; +use Getopt::Std; +use DBI; +use Term::ReadKey; +use Term::ReadLine; +use Fcntl ':mode'; +use Time::ParseDate; +use Date::Format; +use Text::ParseWords; + +# Location of config file. +my $CONF_FILE = "$ENV{HOME}/.recoverrc"; +my $HIST_FILE = "$ENV{HOME}/.recover.hist"; + +######################################################################## +### Queries needed to gather files from directory. +######################################################################## + +my %queries = ( + 'postgres' => { + 'dir' => + "( + select + distinct on (name) + Filename.name, + Path.path, + File.lstat, + File.fileid, + File.fileindex, + Job.jobtdate - ? as visible, + Job.jobid + from + Path, + File, + Filename, + Job + where + clientid = ? and + Job.name = ? and + Job.jobtdate <= ? and + Path.path = ? and + File.pathid = Path.pathid and + Filename.filenameid = File.filenameid and + Filename.name != '' and + File.jobid = Job.jobid + order by + name, + jobid desc + ) + union + ( + select + distinct on (name) + substring(Path.path from ? + 1) as name, + substring(Path.path from 1 for ?) as path, + File.lstat, + File.fileid, + File.fileindex, + Job.jobtdate - ? as visible, + Job.jobid + from + Path, + File, + Filename, + Job + where + clientid = ? and + Job.name = ? and + Job.jobtdate <= ? and + File.jobid = Job.jobid and + Filename.name = '' and + Filename.filenameid = File.filenameid and + File.pathid = Path.pathid and + Path.path ~ ('^' || ? 
|| '[^/]*/\$') + order by + name, + jobid desc + ) + order by + name + ", + 'sel' => + "( + select + distinct on (name) + Path.path || Filename.name as name, + File.fileid, + File.lstat, + File.fileindex, + Job.jobid + from + Path, + File, + Filename, + Job + where + clientid = ? and + Job.name = ? and + Job.jobtdate <= ? and + Job.jobtdate >= ? and + Path.path like ? || '%' and + File.pathid = Path.pathid and + Filename.filenameid = File.filenameid and + Filename.name != '' and + File.jobid = Job.jobid + order by + name, jobid desc + ) + union + ( + select + distinct on (name) + Path.path as name, + File.fileid, + File.lstat, + File.fileindex, + Job.jobid + from + Path, + File, + Filename, + Job + where + clientid = ? and + Job.name = ? and + Job.jobtdate <= ? and + Job.jobtdate >= ? and + File.jobid = Job.jobid and + Filename.name = '' and + Filename.filenameid = File.filenameid and + File.pathid = Path.pathid and + Path.path like ? || '%' + order by + name, jobid desc + ) + ", + 'cache' => + "select + distinct on (path, name) + Path.path, + Filename.name, + File.fileid, + File.lstat, + File.fileindex, + Job.jobtdate - ? as visible, + Job.jobid + from + Path, + File, + Filename, + Job + where + clientid = ? and + Job.name = ? and + Job.jobtdate <= ? and + Job.jobtdate >= ? and + File.pathid = Path.pathid and + File.filenameid = Filename.filenameid and + File.jobid = Job.jobid + order by + path, name, jobid desc + ", + 'ver' => + "select + Path.path, + Filename.name, + File.fileid, + File.fileindex, + File.lstat, + Job.jobtdate, + Job.jobid, + Job.jobtdate - ? as visible, + Media.volumename + from + Job, Path, Filename, File, JobMedia, Media + where + File.pathid = Path.pathid and + File.filenameid = Filename.filenameid and + File.jobid = Job.jobid and + File.Jobid = JobMedia.jobid and + File.fileindex >= JobMedia.firstindex and + File.fileindex <= JobMedia.lastindex and + Job.jobtdate <= ? and + JobMedia.mediaid = Media.mediaid and + Path.path = ? 
and + Filename.name = ? and + Job.clientid = ? and + Job.name = ? + order by job + " + }, + 'mysql' => { + 'dir' => + " + ( + select + distinct(Filename.name), + Path.path, + File.lstat, + File.fileid, + File.fileindex, + Job.jobtdate - ? as visible, + Job.jobid + from + Path, + File, + Filename, + Job + where + clientid = ? and + Job.name = ? and + Job.jobtdate <= ? and + Path.path = ? and + File.pathid = Path.pathid and + Filename.filenameid = File.filenameid and + Filename.name != '' and + File.jobid = Job.jobid + group by + name + order by + name, + jobid desc + ) + union + ( + select + distinct(substring(Path.path from ? + 1)) as name, + substring(Path.path from 1 for ?) as path, + File.lstat, + File.fileid, + File.fileindex, + Job.jobtdate - ? as visible, + Job.jobid + from + Path, + File, + Filename, + Job + where + clientid = ? and + Job.name = ? and + Job.jobtdate <= ? and + File.jobid = Job.jobid and + Filename.name = '' and + Filename.filenameid = File.filenameid and + File.pathid = Path.pathid and + Path.path rlike concat('^', ?, '[^/]*/\$') + group by + name + order by + name, + jobid desc + ) + order by + name + ", + 'sel' => + " + ( + select + distinct(concat(Path.path, Filename.name)) as name, + File.fileid, + File.lstat, + File.fileindex, + Job.jobid + from + Path, + File, + Filename, + Job + where + Job.clientid = ? and + Job.name = ? and + Job.jobtdate <= ? and + Job.jobtdate >= ? and + Path.path like concat(?, '%') and + File.pathid = Path.pathid and + Filename.filenameid = File.filenameid and + Filename.name != '' and + File.jobid = Job.jobid + group by + path, name + order by + name, + jobid desc + ) + union + ( + select + distinct(Path.path) as name, + File.fileid, + File.lstat, + File.fileindex, + Job.jobid + from + Path, + File, + Filename, + Job + where + Job.clientid = ? and + Job.name = ? and + Job.jobtdate <= ? and + Job.jobtdate >= ? 
and + File.jobid = Job.jobid and + Filename.name = '' and + Filename.filenameid = File.filenameid and + File.pathid = Path.pathid and + Path.path like concat(?, '%') + group by + path + order by + name, + jobid desc + ) + ", + 'cache' => + "select + distinct path, + Filename.name, + File.fileid, + File.lstat, + File.fileindex, + Job.jobtdate - ? as visible, + Job.jobid + from + Path, + File, + Filename, + Job + where + clientid = ? and + Job.name = ? and + Job.jobtdate <= ? and + Job.jobtdate >= ? and + File.pathid = Path.pathid and + File.filenameid = Filename.filenameid and + File.jobid = Job.jobid + group by + path, name + order by + path, name, jobid desc + ", + 'ver' => + "select + Path.path, + Filename.name, + File.fileid, + File.fileindex, + File.lstat, + Job.jobtdate, + Job.jobid, + Job.jobtdate - ? as visible, + Media.volumename + from + Job, Path, Filename, File, JobMedia, Media + where + File.pathid = Path.pathid and + File.filenameid = Filename.filenameid and + File.jobid = Job.jobid and + File.Jobid = JobMedia.jobid and + File.fileindex >= JobMedia.firstindex and + File.fileindex <= JobMedia.lastindex and + Job.jobtdate <= ? and + JobMedia.mediaid = Media.mediaid and + Path.path = ? and + Filename.name = ? and + Job.clientid = ? and + Job.name = ? 
+ order by job + " + } +); + +############################################################################ +### Command lists for help and file completion +############################################################################ + +my %COMMANDS = ( + 'add' => '(add files) - Add files recursively to restore list', + 'bootstrap' => 'print bootstrap file', + 'cd' => '(cd dir) - Change working directory', + 'changetime', '(changetime date/time) - Change database view to date', + 'client' => '(client client-name) - change client to view', + 'debug' => 'toggle debug flag', + 'delete' => 'Remove files from restore list.', + 'help' => 'Display this list', + 'history', 'Print command history', + 'info', '(info files) - Print stat and tape information about files', + 'ls' => '(ls [opts] files) - List files in current directory', + 'pwd' => 'Print current working directory', + 'quit' => 'Exit program', + 'recover', 'Create table for bconsole to use in recover', + 'relocate', '(relocate dir) - specify new location for recovered files', + 'show', '(show item) - Display information about item', + 'verbose' => 'toggle verbose flag', + 'versions', '(versions files) - Show all versions of file on tape', + 'volumes', 'Show volumes needed for restore.' +); + +my %SHOW = ( + 'cache' => 'Display cached directories', + 'catalog' => 'Display name of current catalog from config file', + 'client' => 'Display current client', + 'clients' => 'Display clients available in this catalog', + 'restore' => 'Display information about pending restore', + 'volumes' => 'Show volumes needed for restore.' +); + +############################################################################## +### Read config and command line. 
+##############################################################################
+
+my %catalogs;
+my $catalog; # Current catalog
+
+## Globals
+
+my %restore;
+my $rnum = 0;
+my $rbytes = 0;
+my $debug = 0;
+my $verbose = 0;
+my $rtime;
+my $cwd;
+my $lwd;
+my $files;
+my $restore_to = '/';
+my $start_dir;
+my $preload;
+my $dircache = {};
+my $usecache = 1;
+
+=head1 SYNTAX
+
+B<recover.pl> [B<-b> I<connect-string>] [B<-c> I<client> B<-j> I<jobname>]
+[B<-i> I<dir>] [B<-p>] [B<-t> I<time>]
+
+B<recover.pl> [B<-h>]
+
+Most of the command line arguments can be specified in the init file
+B<$HOME/.recoverrc> (see CONFIG FILE FORMAT below). The command
+line arguments will override the options in the init file. If no
+I<catalog> is specified, the first one found in the init file will
+be used.
+
+=head1 DESCRIPTION
+
+B<recover.pl> will read the specified catalog and provide a shell-like
+environment from which a time based view of the specified client/jobname
+can be examined and selected for restoration.
+
+The command line option B<-b> specifies the DBI compatible connect
+string to use when connecting to the catalog database. The B<-c> and
+B<-j> options specify the client and jobname respectively to view from
+the catalog database. The B<-i> option will set the initial directory
+you are viewing to the specified directory. If B<-i> is not specified,
+it will default to /. You can set the initial time to view the catalog
+from using the B<-t> option.
+
+The B<-p> option will pre-load the entire catalog into memory. This
+could take a lot of memory, so use it with caution.
+
+The B<-d> option turns on debugging and the B<-v> option turns on
+verbose output.
+
+By specifying a I<catalog>, the default options for connecting to
+the catalog database will be taken from the section of the init file
+specified by that name.
+
+The B<-h> option will display this document.
+
+In order for this program to have a chance of not being painfully slow,
+the following indexes should be added to your database.
+
+B
+
+B
+
+=cut
+
+my $vars = {};
+getopts("c:b:hi:j:pt:vd", $vars) || die "Usage: bad arguments\n";
+
+if ($vars->{'h'}) {
+    system("perldoc $0");
+    exit;
+}
+
+$preload = $vars->{'p'} if ($vars->{'p'});
+$debug = $vars->{'d'} if ($vars->{'d'});
+$verbose = $vars->{'v'} if ($vars->{'v'});
+
+# Set initial time to view the catalog
+
+if ($vars->{'t'}) {
+    $rtime = parsedate($vars->{'t'}, FUZZY => 1, PREFER_PAST => 1);
+}
+else {
+    $rtime = time();
+}
+
+my $dbconnect;
+my $username = "";
+my $password = "";
+my $db;
+my $client;
+my $jobname;
+my $jobs;
+my $ftime;
+
+my $cstr;
+
+# Read config file (if available).
+
+&read_config($CONF_FILE);
+
+# Set defaults
+
+$catalog = $ARGV[0] if (@ARGV);
+
+if ($catalog) {
+    $cstr = ${catalogs{$catalog}}->{'client'}
+        if (${catalogs{$catalog}}->{'client'});
+
+    $jobname = $catalogs{$catalog}->{'jobname'}
+        if ($catalogs{$catalog}->{'jobname'});
+
+    $dbconnect = $catalogs{$catalog}->{'dbconnect'}
+        if ($catalogs{$catalog}->{'dbconnect'});
+
+    $username = $catalogs{$catalog}->{'username'}
+        if ($catalogs{$catalog}->{'username'});
+
+    $password = $catalogs{$catalog}->{'password'}
+        if ($catalogs{$catalog}->{'password'});
+
+    $start_dir = $catalogs{$catalog}->{'cd'}
+        if ($catalogs{$catalog}->{'cd'});
+
+    $preload = $catalogs{$catalog}->{'preload'}
+        if ($catalogs{$catalog}->{'preload'} && !defined($vars->{'p'}));
+
+    $verbose = $catalogs{$catalog}->{'verbose'}
+        if ($catalogs{$catalog}->{'verbose'} && !defined($vars->{'v'}));
+
+    $debug = $catalogs{$catalog}->{'debug'}
+        if ($catalogs{$catalog}->{'debug'} && !defined($vars->{'d'}));
+}
+
+#### Command line overrides config file
+
+$start_dir = $vars->{'i'} if ($vars->{'i'});
+$start_dir = '/' if (!$start_dir);
+
+$start_dir .= '/' if (substr($start_dir, length($start_dir) - 1, 1) ne '/');
+
+if ($vars->{'b'}) {
+    $dbconnect = $vars->{'b'};
+}
+
+die "You must supply a db connect string.\n" if (!defined($dbconnect));
+
+if ($dbconnect =~ /^dbi:Pg/) {
+    $db = 'postgres';
+}
+elsif ($dbconnect =~ /^dbi:mysql/) { + $db = 'mysql'; +} +else { + die "Unknown database type specified in $dbconnect\n"; +} + +# Initialize database connection + +print STDERR "DBG: Connect using: $dbconnect\n" if ($debug); + +my $dbh = DBI->connect($dbconnect, $username, $password) || + die "Can't open bacula database\nDatabase connect string '$dbconnect'"; + +die "Client id required.\n" if (!($cstr || $vars->{'c'})); + +$cstr = $vars->{'c'} if ($vars->{'c'}); +$client = &lookup_client($cstr); + +# Set job information +$jobname = $vars->{'j'} if ($vars->{'j'}); + +die "You need to specify a job name.\n" if (!$jobname); + +&setjob; + +die "Failed to set client\n" if (!$client); + +# Prepare our query +my $dir_sth = $dbh->prepare($queries{$db}->{'dir'}) + || die "Can't prepare $queries{$db}->{'dir'}\n"; + +my $sel_sth = $dbh->prepare($queries{$db}->{'sel'}) + || die "Can't prepare $queries{$db}->{'sel'}\n"; + +my $ver_sth = $dbh->prepare($queries{$db}->{'ver'}) + || die "Can't prepare $queries{$db}->{'ver'}\n"; + +my $clients; + +# Initialize readline. +my $term = new Term::ReadLine('Bacula Recover'); +$term->ornaments(0); + +my $readline = $term->ReadLine; +my $tty_attribs = $term->Attribs; + +# Needed for base64 decode + +my @base64_digits = ( + 'A', 'B', 'C', 'D', 'E', 'F', 'G', 'H', 'I', 'J', 'K', 'L', 'M', + 'N', 'O', 'P', 'Q', 'R', 'S', 'T', 'U', 'V', 'W', 'X', 'Y', 'Z', + 'a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'i', 'j', 'k', 'l', 'm', + 'n', 'o', 'p', 'q', 'r', 's', 't', 'u', 'v', 'w', 'x', 'y', 'z', + '0', '1', '2', '3', '4', '5', '6', '7', '8', '9', '+', '/' +); +my @base64_map = (0) x 128; + +for (my $i=0; $i<64; $i++) { + $base64_map[ord($base64_digits[$i])] = $i; +} + +############################################################################## +### Support routines +############################################################################## + +=head1 FILES + +B<$HOME/.recoverrc> Configuration file for B. 
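The `@base64_digits` table and `@base64_map` built above implement Bacula's variable-length base-64 encoding of the space-separated lstat fields that `create_file_entry` later decodes. As an illustration only (not part of this patch), the same decoding can be sketched in Python; the helper names and the leading-'-' handling for negative fields are assumptions on my part:

```python
# Illustrative re-implementation of the script's base-64 lstat decoding.
# Bacula encodes each lstat field as a variable-length base-64 number
# using this 64-character alphabet; fields are separated by spaces.
BASE64_DIGITS = (
    "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
    "abcdefghijklmnopqrstuvwxyz"
    "0123456789+/"
)
BASE64_MAP = {c: i for i, c in enumerate(BASE64_DIGITS)}

def from_base64(field):
    """Decode one encoded field to an integer (assuming an optional
    leading '-' marks a negative value)."""
    negative = field.startswith("-")
    if negative:
        field = field[1:]
    value = 0
    for ch in field:
        value = value * 64 + BASE64_MAP[ch]
    return -value if negative else value

def decode_lstat(lstat):
    """Split an encoded lstat string into its numeric fields, in the
    same order the Perl code maps them (st_dev, st_ino, st_mode, ...)."""
    return [from_base64(f) for f in lstat.split()]
```

For example, `decode_lstat("B BA /")` yields `[1, 64, 63]`; the Perl `create_file_entry` routine performs the same per-field decode and then labels the sixteen fields from `st_dev` through `data_stream`.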
+
+=head1 CONFIG FILE FORMAT
+
+The config file will allow you to specify the defaults for your
+catalog(s). Each catalog definition starts with B<[>I<catname>B<]>.
+Blank lines and lines starting with # are ignored.
+
+The first catalog specified will be used as the default catalog.
+
+All values are specified in I<keyword> B<=> I<value> format. You can
+specify the following I<keyword>s for each catalog.
+
+=cut
+
+sub read_config {
+    my $conf_file = shift;
+    my $c;
+
+    # Do nothing if config file can't be read.
+
+    if (-r $conf_file) {
+        open(CONF, "<$conf_file") || die "$!: Can't open $conf_file\n";
+
+        while (<CONF>) {
+            chomp;
+            # Skip comments and blank lines
+            next if (/^\s*#/);
+            next if (/^\s*$/);
+
+            if (/^\[(\w+)\]$/) {
+                $c = $1;
+                $catalog = $c if (!$catalog);
+
+                if ($catalogs{$c}) {
+                    die "Duplicate catalog definition in $conf_file\n";
+                }
+
+                $catalogs{$c} = {};
+            }
+            elsif (!$c) {
+                die "Conf file must start with catalog definition [catname]\n";
+            }
+            else {
+
+                if (/^(\w+)\s*=\s*(.*)/) {
+                    my $item = $1;
+                    my $value = $2;
+
+=head2 client
+
+The name of the default client to view when connecting to this
+catalog. This can be changed later with the B command.
+
+=cut
+
+                    if ($item eq 'client') {
+                        $catalogs{$c}->{'client'} = $value;
+                    }
+
+=head2 dbconnect
+
+The DBI compatible database string to use to connect to this catalog.
+
+=over 4
+
+=item B
+
+dbi:Pg:dbname=bacula;host=backuphost
+
+=back
+
+=cut
+                    elsif ($item eq 'dbconnect') {
+                        $catalogs{$c}->{'dbconnect'} = $value;
+                    }
+
+=head2 jobname
+
+The name of the default job to view when connecting to the catalog. This
+can be changed later with the B command.
+
+=cut
+                    elsif ($item eq 'jobname') {
+                        $catalogs{$c}->{'jobname'} = $value;
+                    }
+
+=head2 password
+
+The password to use when connecting to the catalog database.
+
+=cut
+                    elsif ($item eq 'password') {
+                        $catalogs{$c}->{'password'} = $value;
+                    }
+
+=head2 preload
+
+Set the preload flag. A preload flag of 1 or on will load the entire
+catalog when recover.pl is started.
This is a memory hog, so use with
+caution.
+
+=cut
+                    elsif ($item eq 'preload') {
+
+                        if ($value =~ /^(1|on)$/i) {
+                            $catalogs{$c}->{'preload'} = 1;
+                        }
+                        elsif ($value =~ /^(0|off)$/i) {
+                            $catalogs{$c}->{'preload'} = 0;
+                        }
+                        else {
+                            die "$value: Unknown value for preload.\n";
+                        }
+
+                    }
+
+=head2 username
+
+The username to use when connecting to the catalog database.
+
+=cut
+                    elsif ($item eq 'username') {
+                        $catalogs{$c}->{'username'} = $value;
+                    }
+                    else {
+                        die "Unknown option $item in $conf_file.\n";
+                    }
+
+                }
+                else {
+                    die "Bad line $_ in $conf_file.\n";
+                }
+
+            }
+
+        }
+
+        close(CONF);
+    }
+
+}
+
+sub create_file_entry {
+    my $name = shift;
+    my $fileid = shift;
+    my $fileindex = shift;
+    my $jobid = shift;
+    my $visible = shift;
+    my $lstat = shift;
+
+    print STDERR "DBG: name = $name\n" if ($debug);
+    print STDERR "DBG: fileid = $fileid\n" if ($debug);
+    print STDERR "DBG: fileindex = $fileindex\n" if ($debug);
+    print STDERR "DBG: jobid = $jobid\n" if ($debug);
+    print STDERR "DBG: visible = $visible\n" if ($debug);
+    print STDERR "DBG: lstat = $lstat\n" if ($debug);
+
+    my $data = {
+        fileid    => $fileid,
+        fileindex => $fileindex,
+        jobid     => $jobid,
+        visible   => ($visible >= 0) ? 1 : 0
+    };
+
+    # decode file stat
+    my @stat = ();
+
+    foreach my $s (split(' ', $lstat)) {
+        print STDERR "DBG: Add $s to stat array.\n" if ($debug);
+        push(@stat, from_base64($s));
+    }
+
+    $data->{'lstat'} = {
+        'st_dev'      => $stat[0],
+        'st_ino'      => $stat[1],
+        'st_mode'     => $stat[2],
+        'st_nlink'    => $stat[3],
+        'st_uid'      => $stat[4],
+        'st_gid'      => $stat[5],
+        'st_rdev'     => $stat[6],
+        'st_size'     => $stat[7],
+        'st_blksize'  => $stat[8],
+        'st_blocks'   => $stat[9],
+        'st_atime'    => $stat[10],
+        'st_mtime'    => $stat[11],
+        'st_ctime'    => $stat[12],
+        'LinkFI'      => $stat[13],
+        'st_flags'    => $stat[14],
+        'data_stream' => $stat[15]
+    };
+
+    # Create mode string.
+ my $sstr = &mode2str($stat[2]); + $data->{'lstat'}->{'statstr'} = $sstr; + return $data; +} +# Read directory data, return hash reference. + +sub fetch_dir { + my $dir = shift; + + return $dircache->{$dir} if ($dircache->{$dir}); + + print "$dir not cached, fetching from database.\n" if ($verbose); + my $data = {}; + my $fmax = 0; + + my $dl = length($dir); + + print STDERR "? - 1: ftime = $ftime\n" if ($debug); + print STDERR "? - 2: client = $client\n" if ($debug); + print STDERR "? - 3: jobname = $jobname\n" if ($debug); + print STDERR "? - 4: rtime = $rtime\n" if ($debug); + print STDERR "? - 5: dir = $dir\n" if ($debug); + print STDERR "? - 6, 7: dl = $dl, $dl\n" if ($debug); + print STDERR "? - 8: ftime = $ftime\n" if ($debug); + print STDERR "? - 9: client = $client\n" if ($debug); + print STDERR "? - 10: jobname = $jobname\n" if ($debug); + print STDERR "? - 11: rtime = $rtime\n" if ($debug); + print STDERR "? - 12: dir = $dir\n" if ($debug); + + print STDERR "DBG: Execute - $queries{$db}->{'dir'}\n" if ($debug); + $dir_sth->execute( + $ftime, + $client, + $jobname, + $rtime, + $dir, + $dl, $dl, + $ftime, + $client, + $jobname, + $rtime, + $dir + ) || die "Can't execute $queries{$db}->{'dir'}\n"; + + while (my $ref = $dir_sth->fetchrow_hashref) { + my $file = $$ref{name}; + print STDERR "DBG: File $file found in database.\n" if ($debug); + my $l = length($file); + $fmax = $l if ($l > $fmax); + + $data->{$file} = &create_file_entry( + $file, + $ref->{'fileid'}, + $ref->{'fileindex'}, + $ref->{'jobid'}, + $ref->{'visible'}, + $ref->{'lstat'} + ); + } + + return undef if (!$fmax); + + $dircache->{$dir} = $data if ($usecache); + return $data; +} + +sub cache_catalog { + print "Loading entire catalog, please wait...\n"; + my $sth = $dbh->prepare($queries{$db}->{'cache'}) + || die "Can't prepare $queries{$db}->{'cache'}\n"; + print STDERR "DBG: Execute - $queries{$db}->{'cache'}\n" if ($debug); + $sth->execute($ftime, $client, $jobname, $rtime, $ftime) + || die 
"Can't execute $queries{$db}->{'cache'}\n"; + + print "Query complete, building catalog cache...\n" if ($verbose); + + while (my $ref = $sth->fetchrow_hashref) { + my $dir = $ref->{path}; + my $file = $ref->{name}; + print STDERR "DBG: File $dir$file found in database.\n" if ($debug); + + next if ($dir eq '/' and $file eq ''); # Skip data for / + + # Rearrange directory + + if ($file eq '' and $dir =~ m|(.*/)([^/]+/)$|) { + $dir = $1; + $file = $2; + } + + my $data = &create_file_entry( + $file, + $ref->{'fileid'}, + $ref->{'fileindex'}, + $ref->{'jobid'}, + $ref->{'visible'}, + $ref->{'lstat'} + ); + + $dircache->{$dir} = {} if (!$dircache->{$dir}); + $dircache->{$dir}->{$file} = $data; + } + + $sth->finish(); +} + +# Break a path up into dir and file. + +sub path_parts { + my $path = shift; + my $fqdir; + my $dir; + my $file; + + if (substr($path, 0, 1) eq '/') { + + # Find dir vs. file + if ($path =~ m|^(/.*/)([^/]*$)|) { + $fqdir = $dir = $1; + $file = $2; + } + else { # Must be in / + $fqdir = $dir = '/'; + $file = substr($path, 1); + } + + print STDERR "DBG: / Dir - $dir; file = $file\n" if ($debug); + } + # relative path + elsif ($path =~ m|^(.*/)([^/]*)$|) { + $fqdir = "$cwd$1"; + $dir = $1; + $file = $2; + print STDERR "DBG: Dir - $dir; file = $file\n" if ($debug); + } + # File is in our current directory. 
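+  # Worked examples for the branches above (with a hypothetical
+  # $cwd of '/home/user/'):
+  #   path_parts('/etc/passwd')  -> ('/etc/', '/etc/', 'passwd')
+  #   path_parts('sub/file.txt') -> ('/home/user/sub/', 'sub/', 'file.txt')
+  #   path_parts('file.txt')     -> ('/home/user/', '', 'file.txt')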
+ else { + $fqdir = $cwd; + $dir = ''; + $file = $path; + print STDERR "DBG: Set dir to $dir\n" if ($debug); + } + + return ($fqdir, $dir, $file); +} + +sub lookup_client { + my $c = shift; + + if (!$clients) { + $clients = {}; + my $query = "select clientid, name from Client"; + my $sth = $dbh->prepare($query) || die "Can't prepare $query\n"; + $sth->execute || die "Can't execute $query\n"; + + while (my $ref = $sth->fetchrow_hashref) { + $clients->{$ref->{'name'}} = $ref->{'clientid'}; + } + + $sth->finish; + } + + if ($c !~ /^\d+$/) { + + if ($clients->{$c}) { + $c = $clients->{$c}; + } + else { + warn "Could not find client $c\n"; + $c = $client; + } + + } + + return $c; +} + +sub setjob { + + if (!$jobs) { + $jobs = {}; + my $query = "select distinct name from Job order by name"; + my $sth = $dbh->prepare($query) || die "Can't prepare $query\n"; + $sth->execute || die "Can't execute $query\n"; + + while (my $ref = $sth->fetchrow_hashref) { + $jobs->{$$ref{'name'}} = $$ref{'name'}; + } + + $sth->finish; + } + + my $query = "select + jobtdate + from + Job + where + jobtdate <= $rtime and + name = '$jobname' and + level = 'F' + order by jobtdate desc + limit 1 + "; + + my $sth = $dbh->prepare($query) || die "Can't prepare $query\n"; + $sth->execute || die "Can't execute $query\n"; + + if ($sth->rows == 1) { + my $ref = $sth->fetchrow_hashref; + $ftime = $$ref{jobtdate}; + } + else { + warn "Could not find full backup. Setting full time to 0.\n"; + $ftime = 0; + } + + $sth->finish; +} + +sub select_files { + my $mark = shift; + my $opts = shift; + my $dir = shift; + my @flist = @_; + + if (!@flist) { + + if ($cwd eq '/') { + my $finfo = &fetch_dir('/'); + @flist = keys %$finfo; + } + else { + @flist = ($cwd); + } + + } + + foreach my $f (@flist) { + $f =~ s|/+$||; + my $path = (substr($f, 0, 1) eq '/') ? 
$f : "$dir$f"; + my ($fqdir, $dir, $file) = &path_parts($path); + my $finfo = &fetch_dir($fqdir); + + if (!$finfo->{$file}) { + + if (!$finfo->{"$file/"}) { + warn "$f: File not found.\n"; + next; + } + + $file .= '/'; + } + + my $info = $finfo->{$file}; + + my $fid = $info->{'fileid'}; + my $fidx = $info->{'fileindex'}; + my $jid = $info->{'jobid'}; + my $size = $info->{'lstat'}->{'st_size'}; + + if ($opts->{'all'} || $info->{'visible'}) { + print STDERR "DBG: $file - $size bytes\n" + if ($debug); + + if ($mark) { + + if (!$restore{$fid}) { + print "Adding $fqdir$file\n" if (!$opts->{'quiet'}); + $restore{$fid} = [$jid, $fidx]; + $rnum++; + $rbytes += $size; + } + + } + else { + + if ($restore{$fid}) { + print "Removing $fqdir$file\n" if (!$opts->{'quiet'}); + delete $restore{$fid}; + $rnum--; + $rbytes -= $size; + } + + } + + if ($file =~ m|/$|) { + + # Use preloaded files if we already retrieved them. + if ($preload) { + my $newdir = "$dir$file"; + my $finfo = &fetch_dir($newdir); + &select_files($mark, $opts, $newdir, keys %$finfo); + next; + } + else { + my $newdir = "$fqdir$file"; + my $begin = ($opts->{'all'}) ? 
0 : $ftime; + + print STDERR "DBG: Execute - $queries{$db}->{'sel'}\n" + if ($debug); + + $sel_sth->execute( + $client, + $jobname, + $rtime, + $begin, + $newdir, + $client, + $jobname, + $rtime, + $begin, + $newdir + ) || die "Can't execute $queries{$db}->{'sel'}\n"; + + while (my $ref = $sel_sth->fetchrow_hashref) { + my $file = $$ref{'name'}; + my $fid = $$ref{'fileid'}; + my $fidx = $$ref{'fileindex'}; + my $jid = $$ref{'jobid'}; + my @stat_enc = split(' ', $$ref{'lstat'}); + my $size = &from_base64($stat_enc[7]); + + if ($mark) { + + if (!$restore{$fid}) { + print "Adding $file\n" if (!$opts->{'quiet'}); + $restore{$fid} = [$jid, $fidx]; + $rnum++; + $rbytes += $size; + } + + } + else { + + if ($restore{$fid}) { + print "Removing $file\n" if (!$opts->{'quiet'}); + delete $restore{$fid}; + $rnum--; + $rbytes -= $size; + } + + } + + } + + } + + } + + } + + } + +} + +# Expand shell wildcards + +sub expand_files { + my $path = shift; + my ($fqdir, $dir, $file) = &path_parts($path); + my $finfo = &fetch_dir($fqdir); + return ($path) if (!$finfo); + + my $pat = "^$file\$"; + + # Add / for dir match + my $dpat = $file; + $dpat =~ s|/+$||; + $dpat = "^$dpat/\$"; + + my @match; + + $pat =~ s/\./\\./g; + $dpat =~ s/\./\\./g; + $pat =~ s/\?/./g; + $dpat =~ s/\?/./g; + $pat =~ s/\*/.*/g; + $dpat =~ s/\*/.*/g; + + foreach my $f (sort keys %$finfo) { + + if ($f =~ /$pat/) { + push (@match, ($fqdir eq $cwd) ? $f : "$fqdir$f"); + } + elsif ($f =~ /$dpat/) { + push (@match, ($fqdir eq $cwd) ? 
$f : "$fqdir$f"); + } + + } + + return ($path) if (!@match); + return @match; +} + +sub expand_dirs { + my $path = shift; + my ($fqdir, $dir, $file) = &path_parts($path, 1); + + print STDERR "Expand $path\n" if ($debug); + + my $finfo = &fetch_dir($fqdir); + return ($path) if (!$finfo); + + $file =~ s|/+$||; + + my $pat = "^$file/\$"; + my @match; + + $pat =~ s/\./\\./g; + $pat =~ s/\?/./g; + $pat =~ s/\*/.*/g; + + foreach my $f (sort keys %$finfo) { + print STDERR "Match $f to $pat\n" if ($debug); + push (@match, ($fqdir eq $cwd) ? $f : "$fqdir$f") if ($f =~ /$pat/); + } + + return ($path) if (!@match); + return @match; +} + +sub mode2str { + my $mode = shift; + my $sstr = ''; + + if (S_ISDIR($mode)) { + $sstr = 'd'; + } + elsif (S_ISCHR($mode)) { + $sstr = 'c'; + } + elsif (S_ISBLK($mode)) { + $sstr = 'b'; + } + elsif (S_ISREG($mode)) { + $sstr = '-'; + } + elsif (S_ISFIFO($mode)) { + $sstr = 'f'; + } + elsif (S_ISLNK($mode)) { + $sstr = 'l'; + } + elsif (S_ISSOCK($mode)) { + $sstr = 's'; + } + else { + $sstr = '?'; + } + + $sstr .= ($mode&S_IRUSR) ? 'r' : '-'; + $sstr .= ($mode&S_IWUSR) ? 'w' : '-'; + $sstr .= ($mode&S_IXUSR) ? + (($mode&S_ISUID) ? 's' : 'x') : + (($mode&S_ISUID) ? 'S' : '-'); + $sstr .= ($mode&S_IRGRP) ? 'r' : '-'; + $sstr .= ($mode&S_IWGRP) ? 'w' : '-'; + $sstr .= ($mode&S_IXGRP) ? + (($mode&S_ISGID) ? 's' : 'x') : + (($mode&S_ISGID) ? 'S' : '-'); + $sstr .= ($mode&S_IROTH) ? 'r' : '-'; + $sstr .= ($mode&S_IWOTH) ? 'w' : '-'; + $sstr .= ($mode&S_IXOTH) ? + (($mode&S_ISVTX) ? 't' : 'x') : + (($mode&S_ISVTX) ? 'T' : '-'); + + return $sstr; +} + +# Base 64 decoder +# Algorithm copied from bacula source + +sub from_base64 { + my $where = shift; + my $val = 0; + my $i = 0; + my $neg = 0; + + if (substr($where, 0, 1) eq '-') { + $neg = 1; + $where = substr($where, 1); + } + + while ($where ne '') { + $val <<= 6; + my $d = substr($where, 0, 1); + #print STDERR "\n$d - " . ord($d) . " - " . $base64_map[ord($d)] . 
"\n"; + $val += $base64_map[ord(substr($where, 0, 1))]; + $where = substr($where, 1); + } + + return $val; +} + +### Command completion code + +sub get_match { + my @m = @_; + my $r = ''; + + for (my $i = 0, my $matched = 1; $i < length($m[0]) && $matched; $i++) { + my $c = substr($m[0], $i, 1); + + for (my $j = 1; $j < @m; $j++) { + + if ($c ne substr($m[$j], $i, 1)) { + $matched = 0; + last; + } + + } + + $r .= $c if ($matched); + } + + return $r; +} + +sub complete { + my $text = shift; + my $line = shift; + my $start = shift; + my $end = shift; + + $tty_attribs->{'completion_append_character'} = ' '; + $tty_attribs->{completion_entry_function} = \&nocomplete; + print STDERR "\nDBG: text - $text; line - $line; start - $start; end = $end\n" + if ($debug); + + # Complete command if we are at start of line. + + if ($start == 0 || substr($line, 0, $start) =~ /^\s*$/) { + my @list = grep (/^$text/, sort keys %COMMANDS); + return () if (!@list); + my $match = (@list > 1) ? &get_match(@list) : ''; + return $match, @list; + } + else { + # Count arguments + my $cstr = $line; + $cstr =~ s/^\s+//; # Remove leading spaces + + my ($cmd, @args) = shellwords($cstr); + return () if (!defined($cmd)); + + # Complete dirs for cd + if ($cmd eq 'cd') { + return () if (@args > 1); + return &complete_files($text, 1); + } + # Complete files/dirs for info and ls + elsif ($cmd =~ /^(add|delete|info|ls|mark|unmark|versions)$/) { + return &complete_files($text, 0); + } + # Complete clients for client + elsif ($cmd eq 'client') { + return () if (@args > 2); + my $pat = $text; + $pat =~ s/\./\\./g; + my @flist; + + print STDERR "DBG: " . (@args) . " arguments found.\n" if ($debug); + + if (@args < 1 || (@args == 1 and $line =~ /[^\s]$/)) { + @flist = grep (/^$pat/, sort keys %$clients); + } + else { + @flist = grep (/^$pat/, sort keys %$jobs); + } + + return () if (!@flist); + my $match = (@flist > 1) ? 
&get_match(@flist) : ''; + + #return $match, map {s/ /\\ /g; $_} @flist; + return $match, @flist; + } + # Complete show options for show + elsif ($cmd eq 'show') { + return () if (@args > 1); + # attempt to suggest match. + my @list = grep (/^$text/, sort keys %SHOW); + return () if (!@list); + my $match = (@list > 1) ? &get_match(@list) : ''; + return $match, @list; + } + elsif ($cmd =~ /^(bsr|bootstrap|relocate)$/) { + $tty_attribs->{completion_entry_function} = + $tty_attribs->{filename_completion_function}; + } + + } + + return (); +} + +sub complete_files { + my $path = shift; + my $dironly = shift; + my $finfo; + my @flist; + + my ($fqdir, $dir, $pat) = &path_parts($path, 1); + + $pat =~ s/([.\[\]\\])/\\$1/g; + # First check for absolute name. + + $finfo = &fetch_dir($fqdir); + print STDERR "DBG: " . join(', ', keys %$finfo) . "\n" if ($debug); + return () if (!$finfo); # Nothing if dir not found. + + if ($dironly) { + @flist = grep (m|^$pat.*/$|, sort keys %$finfo); + } + else { + @flist = grep (/^$pat/, sort keys %$finfo); + } + + return undef if (!@flist); + + print STDERR "DBG: Files found\n" if ($debug); + + if (@flist == 1 && $flist[0] =~ m|/$|) { + $tty_attribs->{'completion_append_character'} = ''; + } + + @flist = map {s/ /\\ /g; ($fqdir eq $cwd) ? $_ : "$dir$_"} @flist; + my $match = (@flist > 1) ? &get_match(@flist) : ''; + + print STDERR "DBG: Dir - $dir; cwd - $cwd\n" if ($debug); + # Fill in dir if necessary. 
+  return $match, @flist;
+}
+
+sub nocomplete {
+  return ();
+}
+
+# subroutine to create printf format for long listing of ls
+
+sub long_fmt {
+  my $flist = shift;
+  my $fmax = 0;
+  my $lmax = 0;
+  my $umax = 0;
+  my $gmax = 0;
+  my $smax = 0;
+
+  foreach my $f (@$flist) {
+    my $file = $f->[0];
+    my $info = $f->[1];
+    my $lstat = $info->{'lstat'};
+
+    my $l = length($file);
+    $fmax = $l if ($l > $fmax);
+
+    $l = length($lstat->{'st_nlink'});
+    $lmax = $l if ($l > $lmax);
+    $l = length($lstat->{'st_uid'});
+    $umax = $l if ($l > $umax);
+    $l = length($lstat->{'st_gid'});
+    $gmax = $l if ($l > $gmax);
+    $l = length($lstat->{'st_size'});
+    $smax = $l if ($l > $smax);
+  }
+
+  return "%s %${lmax}d %${umax}d %${gmax}d %${smax}d %s %s\n";
+}
+
+sub print_by_cols {
+  my @list = @_;
+  my $l = @list;
+  my $w = $term->get_screen_size;
+  my @wds = (1);
+  my $m = $w/3 + 1;
+  my $max_cols = ($m < @list) ? $w : @list;
+  my $fpc = 1;
+  my $cols = 1;
+
+  print STDERR "Need to print $l files\n" if ($debug);
+
+  while ($max_cols > 1) {
+    my $used = 0;
+
+    # Initialize array of column widths (list repetition, not string repetition)
+    @wds = (0) x $max_cols;
+
+    for ($cols = 0; $cols < $max_cols && $used < $w; $cols++) {
+      my $cw = 0;
+
+      for (my $j = $cols*$fpc; $j < ($cols + 1)*$fpc && $j < $l; $j++ ) {
+        my $fl = length($list[$j]->[0]);
+        $cw = $fl if ($fl > $cw);
+      }
+
+      $wds[$cols] = $cw;
+      $used += $cw;
+      print STDERR "DBG: Total so far is $used\n" if ($debug);
+
+      if ($used >= $w) {
+        $cols++;
+        last;
+      }
+
+      $used += 3;
+    }
+
+    print STDERR "DBG: $cols of $max_cols columns uses $used space.\n"
+      if ($debug);
+
+    print STDERR "DBG: Print $fpc files per column\n"
+      if ($debug);
+
+    last if ($used <= $w && $cols == $max_cols);
+    $fpc = int($l/$cols);
+    $fpc++ if ($l % $cols);
+    $max_cols = $cols - 1;
+  }
+
+  if ($max_cols == 1) {
+    $cols = 1;
+    $fpc = $l;
+  }
+
+  print STDERR "Print out $fpc rows with $cols columns\n"
+    if ($debug);
+
+  for (my $i = 0; $i < $fpc; $i++) {
+
+    for (my $j = $i; $j < $fpc*$cols; $j += $fpc) {
+      my
$cw = $wds[($j - $i)/$fpc];
+      my $fmt = "%s%-${cw}s";
+      my $file;
+      my $r;
+
+      if ($j < @list) {
+        $file = $list[$j]->[0];
+        my $fdata = $list[$j]->[1];
+        $r = ($restore{$fdata->{'fileid'}}) ? '+' : ' ';
+      }
+      else {
+        $file = '';
+        $r = ' ';
+      }
+
+      print ' ' if ($i != $j);
+      printf $fmt, $r, $file;
+    }
+
+    print "\n";
+  }
+
+}
+
+sub ls_date {
+  my $seconds = shift;
+  my $date;
+
+  if (abs(time() - $seconds) > 15724800) {
+    $date = time2str('%b %e %Y', $seconds);
+  }
+  else {
+    $date = time2str('%b %e %R', $seconds);
+  }
+
+  return $date;
+}
+
+=head1 SHELL
+
+Once running, B<recover.pl> will present the user with a shell-like
+environment where files can be examined and selected for recovery. The
+shell provides command history and editing, and if you have the
+GNU readline module installed on your system, it will also provide
+command completion. When interacting with files, wildcards should work
+as expected.
+
+The following commands are understood.
+
+=cut
+
+sub parse_command {
+  my $cstr = shift;
+  my @command;
+  my $cmd;
+  my @args;
+
+  # Nop on blank or commented lines
+  return ('nop') if ($cstr =~ /^\s*$/);
+  return ('nop') if ($cstr =~ /^\s*#/);
+
+  # Get rid of leading white space to make shellwords work better
+  $cstr =~ s/^\s*//;
+
+  ($cmd, @args) = shellwords($cstr);
+
+  if (!defined($cmd)) {
+    warn "Could not parse $cstr\n";
+    return ('nop');
+  }
+
+=head2 add [I<filespec> ...]
+
+Mark I<filespec> for recovery. If I<filespec> is not specified, mark all
+files in the current directory. B<mark> is an alias for this command.
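+
+For example, to mark all *.conf files in the current directory:
+
+    add *.conf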
+ +=cut + elsif ($cmd eq 'add' || $cmd eq 'mark') { + my $options = {}; + @ARGV = @args; + + # Parse ls options + my $vars = {}; + getopts("aq", $vars) || return ('error', 'Add: Usage add [-q|-a] files'); + $options->{'all'} = $vars->{'a'}; + $options->{'quiet'} =$vars->{'q'}; + + + @command = ('add', $options); + + foreach my $a (@ARGV) { + push(@command, &expand_files($a)); + } + + } + +=head2 bootstrap I + +Create a bootstrap file suitable for use with the bacula B +command. B is an alias for this command. + +=cut + elsif ($cmd eq 'bootstrap' || $cmd eq 'bsr') { + return ('error', 'bootstrap takes single argument (file to write to)') + if (@args != 1); + @command = ('bootstrap', $args[0]); + } + +=head2 cd I + +Allows you to set your current directory. This command understands . for +the current directory and .. for the parent. Also, cd - will change you +back to the previous directory you were in. + +=cut + elsif ($cmd eq 'cd') { + # Cd with no args goes to / + @args = ('/') if (!@args); + + if (@args != 1) { + return ('error', 'Bad cd. cd requires 1 and only 1 argument.'); + } + + my $todir = $args[0]; + + # cd - should cd to previous directory. It is handled later. + return ('cd', '-') if ($todir eq '-'); + + # Expand wilecards + my @e = expand_dirs($todir); + + if (@e > 1) { + return ('error', 'Bad cd. Wildcard expands to more than 1 dir.'); + } + + $todir = $e[0]; + + print STDERR "Initial target is $todir\n" if ($debug); + + # remove prepended . + + while ($todir =~ m|^\./(.*)|) { + $todir = $1; + $todir = '.' if (!$todir); + } + + # If only . is left, replace with current directory. + $todir = $cwd if ($todir eq '.'); + print STDERR "target after . processing is $todir\n" if ($debug); + + # Now deal with .. 
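+  # Worked example (hypothetical paths): with $cwd = '/home/user/',
+  # 'cd ../docs' leaves $todir = 'docs' and strips one component from
+  # the prefix, giving '/home/' and a final target of '/home/docs/'.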
+ my $prefix = $cwd; + + while ($todir =~ m|^\.\./(.*)|) { + $todir = $1; + print STDERR "DBG: ../ found, new todir - $todir\n" if ($debug); + $prefix =~ s|/[^/]*/$|/|; + } + + if ($todir eq '..') { + $prefix =~ s|/[^/]*/$|/|; + $todir = ''; + } + + print STDERR "target after .. processing is $todir\n" if ($debug); + print STDERR "DBG: Final prefix - $prefix\n" if ($debug); + + $todir = "$prefix$todir" if ($prefix ne $cwd); + + print STDERR "DBG: todir after .. handling - $todir\n" if ($debug); + + # Turn relative directories into absolute directories. + + if (substr($todir, 0, 1) ne '/') { + print STDERR "DBG: $todir has no leading /, prepend $cwd\n" if ($debug); + $todir = "$cwd$todir"; + } + + # Make sure we have a trailing / + + if (substr($todir, length($todir) - 1) ne '/') { + print STDERR "DBG: No trailing /, append /\n" if ($debug); + $todir .= '/'; + } + + @command = ('cd', $todir); + } + +=head2 changetime I + +This command changes the time used in generating the view of the +filesystem. Files that were backed up before the specified time +(optionally until the next full backup) will be the only files seen. + +The time can be specifed in almost any reasonable way. Here are a few +examples: + +=over 4 + +=item 1/1/2006 + +=item yesterday + +=item sunday + +=item 5 days ago + +=item last month + +=back + +=cut + elsif ($cmd eq 'changetime') { + @command = ($cmd, join(' ', @args)); + } + +=head2 client I I + +Specify the client and jobname to view. + +=cut + elsif ($cmd eq 'client') { + + if (@args != 2) { + return ('error', 'client takes a two arguments client-name job-name'); + } + + @command = ('client', @args); + } + +=head2 debug + +Toggle debug flag. + +=cut + elsif ($cmd eq 'debug') { + @command = ('debug'); + } + +=head2 delete [I] + +Un-mark file that were previous marked for recovery. If I is +not specified, mark all files in the current directory. B is an +alias for this command. 
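+
+For example:
+
+    delete etc/*.conf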
+ +=cut + elsif ($cmd eq 'delete' || $cmd eq 'unmark') { + @command = ('delete'); + + foreach my $a (@args) { + push(@command, &expand_files($a)); + } + + } + +=head2 help + +Show list of command with brief description of what they do. + +=cut + elsif ($cmd eq 'help') { + @command = ('help'); + } + +=head2 history + +Display command line history. B is an alias for this command. + +=cut + elsif ($cmd eq 'h' || $cmd eq 'history') { + @command = ('history'); + } + +=head2 info [I] + +Display information about the specified files. The format of the +information provided is reminiscent of the bootstrap file. + +=cut + elsif ($cmd eq 'info') { + push(@command, 'info'); + + foreach my $a (@args) { + push(@command, &expand_files($a)); + } + + } + +=head2 ls [I] + +This command will list the specified files (defaults to all files in +the current directory). Files are sorted alphabetically be default. It +understand the following options. + +=over 4 + +=item -a + +Causes ls to list files even if they are only on backups preceding the +closest full backup to the currently selected date/time. + +=item -l + +List files in long format (like unix ls command). + +=item -r + +reverse direction of sort. + +=item -S + +Sort files by size. + +=item -t + +Sort files by time + +=back + +=cut + elsif ($cmd eq 'ls' || $cmd eq 'dir' || $cmd eq 'll') { + my $options = {}; + @ARGV = @args; + + # Parse ls options + my $vars = {}; + getopts("altSr", $vars) || return ('error', 'Bad ls usage.'); + $options->{'all'} = $vars->{'a'}; + $options->{'long'} = $vars->{'l'}; + $options->{'long'} = 1 if ($cmd eq 'dir' || $cmd eq 'll'); + + $options->{'sort'} = 'time' if ($vars->{'t'}); + + return ('error', 'Only one sort at a time allowed.') + if ($options->{'sort'} && ($vars->{'S'})); + + $options->{'sort'} = 'size' if ($vars->{'S'}); + $options->{'sort'} = 'alpha' if (!$options->{'sort'}); + + $options->{'sort'} = 'r' . 
$options->{'sort'} if ($vars->{'r'});
+
+    @command = ('ls', $options);
+
+    foreach my $a (@ARGV) {
+      push(@command, &expand_files($a));
+    }
+
+  }
+
+=head2 pwd
+
+Show current directory.
+
+=cut
+  elsif ($cmd eq 'pwd') {
+    @command = ('pwd');
+  }
+
+=head2 quit
+
+Exit program.
+
+B<q>, B<exit> and B<x> are all aliases for this command.
+
+=cut
+  elsif ($cmd eq 'quit' || $cmd eq 'q' || $cmd eq 'exit' || $cmd eq 'x') {
+    @command = ('quit');
+  }
+
+=head2 recover
+
+This command creates a table in the bacula catalog that can be used to
+restore the selected files. It will also display the command to enter
+into bconsole to start the restore.
+
+=cut
+  elsif ($cmd eq 'recover') {
+    @command = ('recover');
+  }
+
+=head2 relocate I<directory>
+
+Specify the directory to restore files to. Defaults to /.
+
+=cut
+  elsif ($cmd eq 'relocate') {
+    return ('error', 'relocate requires a single directory to relocate to')
+      if (@args != 1);
+
+    my $todir = $args[0];
+
+    if (substr($todir, 0, 1) ne '/') {
+      # Prepend the working directory; chomp the trailing newline from `pwd`.
+      chomp(my $here = `pwd`);
+      $todir = "$here/$todir";
+    }
+
+    @command = ('relocate', $todir);
+  }
+
+=head2 show I<item>
+
+Show various information about B<recover.pl>. The following items can
+be specified.
+
+=over 4
+
+=item cache
+
+Displays a list of cached directories.
+
+=item catalog
+
+Displays the name of the catalog we are talking to.
+
+=item client
+
+Displays the current client and job name that are being viewed.
+
+=item restore
+
+Displays the number of files and size to be restored.
+
+=item volumes
+
+Displays the volumes that will be required to perform a restore on the
+selected files.
+
+=back
+
+=cut
+  elsif ($cmd eq 'show') {
+    return ('error', 'show takes a single argument') if (@args != 1);
+    @command = ('show', $args[0]);
+  }
+
+=head2 verbose
+
+Toggle verbose flag.
+
+=cut
+  elsif ($cmd eq 'verbose') {
+    @command = ('verbose');
+  }
+
+=head2 versions [I<filespec> ...]
+
+View all versions of the specified files available from the current
+time. B<ver> is an alias for this command.
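+
+For example:
+
+    versions etc/passwd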
+ +=cut + elsif ($cmd eq 'versions' || $cmd eq 'ver') { + push(@command, 'versions'); + + foreach my $a (@args) { + push(@command, &expand_files($a)); + } + + } + +=head2 volumes + +Display the volumes that will be required to perform a restore on the +selected files. + +=cut + elsif ($cmd eq 'volumes') { + @command = ('volumes'); + } + else { + @command = ('error', "$cmd: Unknown command"); + } + + return @command; +} + +############################################################################## +### Command processing +############################################################################## + +# Add files to restore list. + +sub cmd_add { + my $opts = shift; + my @flist = @_; + + my $save_rnum = $rnum; + &select_files(1, $opts, $cwd, @flist); + print "" . ($rnum - $save_rnum) . " files marked for restore\n"; +} + +sub cmd_bootstrap { + my $bsrfile = shift; + my %jobs; + my @media; + my %bootstrap; + + # Get list of job ids to restore from. + + foreach my $fid (keys %restore) { + $jobs{$restore{$fid}->[0]} = 1; + } + + my $jlist = join(', ', sort keys %jobs); + + if (!$jlist) { + print "Nothing to restore.\n"; + return; + } + + # Read in media info + + my $query = "select + Job.jobid, + volumename, + mediatype, + volsessionid, + volsessiontime, + firstindex, + lastindex, + startfile as volfile, + JobMedia.startblock, + JobMedia.endblock, + volindex + from + Job, + Media, + JobMedia + where + Job.jobid in ($jlist) and + Job.jobid = JobMedia.jobid and + JobMedia.mediaid = Media.mediaid + order by + volumename, + volsessionid, + volindex + "; + + my $sth = $dbh->prepare($query) || die "Can't prepare $query\n"; + $sth->execute || die "Can't execute $query\n"; + + while (my $ref = $sth->fetchrow_hashref) { + push(@media, { + 'jobid' => $ref->{'jobid'}, + 'volumename' => $ref->{'volumename'}, + 'mediatype' => $ref->{'mediatype'}, + 'volsessionid' => $ref->{'volsessionid'}, + 'volsessiontime' => $ref->{'volsessiontime'}, + 'firstindex' => $ref->{'firstindex'}, + 
'lastindex' => $ref->{'lastindex'}, + 'volfile' => $ref->{'volfile'}, + 'startblock' => $ref->{'startblock'}, + 'endblock' => $ref->{'endblock'}, + 'volindex' => $ref->{'volindex'} + }); + } + +# Gather bootstrap info +# +# key - jobid.volumename.volumesession.volindex +# job +# name +# type +# session +# time +# file +# startblock +# endblock +# array of file indexes. + + for my $info (values %restore) { + my $jobid = $info->[0]; + my $fidx = $info->[1]; + + foreach my $m (@media) { + + if ($jobid == $m->{'jobid'} && $fidx >= $m->{'firstindex'} && $fidx <= $m->{'lastindex'}) { + my $key = "$jobid."; + $key .= "$m->{volumename}.$m->{volsessionid}.$m->{volindex}"; + + $bootstrap{$key} = { + 'job' => $jobid, + 'name' => $m->{'volumename'}, + 'type' => $m->{'mediatype'}, + 'session' => $m->{'volsessionid'}, + 'index' => $m->{'volindex'}, + 'time' => $m->{'volsessiontime'}, + 'file' => $m->{'volfile'}, + 'startblock' => $m->{'startblock'}, + 'endblock' => $m->{'endblock'} + } + if (!$bootstrap{$key}); + + $bootstrap{$key}->{'files'} = [] + if (!$bootstrap{$key}->{'files'}); + push(@{$bootstrap{$key}->{'files'}}, $fidx); + } + + } + + } + + # print bootstrap + + print STDERR "DBG: Keys = " . join(', ', keys %bootstrap) . 
"\n"
+    if ($debug);
+
+  my @keys = sort {
+    return $bootstrap{$a}->{'time'} <=> $bootstrap{$b}->{'time'}
+      if ($bootstrap{$a}->{'time'} != $bootstrap{$b}->{'time'});
+    return $bootstrap{$a}->{'name'} cmp $bootstrap{$b}->{'name'}
+      if ($bootstrap{$a}->{'name'} ne $bootstrap{$b}->{'name'});
+    return $bootstrap{$a}->{'session'} <=> $bootstrap{$b}->{'session'}
+      if ($bootstrap{$a}->{'session'} != $bootstrap{$b}->{'session'});
+    return $bootstrap{$a}->{'index'} <=> $bootstrap{$b}->{'index'};
+  } keys %bootstrap;
+
+  if (!open(BSR, ">$bsrfile")) {
+    warn "$bsrfile: $!\n";
+    return;
+  }
+
+  foreach my $key (@keys) {
+    my $info = $bootstrap{$key};
+    print BSR "Volume=\"$info->{name}\"\n";
+    print BSR "MediaType=\"$info->{type}\"\n";
+    print BSR "VolSessionId=$info->{session}\n";
+    print BSR "VolSessionTime=$info->{time}\n";
+    print BSR "VolFile=$info->{file}\n";
+    print BSR "VolBlock=$info->{startblock}-$info->{endblock}\n";
+
+    my @fids = sort { $a <=> $b } @{$bootstrap{$key}->{'files'}};
+    my $first;
+    my $prev;
+
+    for (my $i = 0; $i < @fids; $i++) {
+      $first = $fids[$i] if (!$first);
+
+      if ($prev) {
+
+        if ($fids[$i] != $prev + 1) {
+          print BSR "FileIndex=$first";
+          print BSR "-$prev" if ($first != $prev);
+          print BSR "\n";
+          $first = $fids[$i];
+        }
+
+      }
+
+      $prev = $fids[$i];
+    }
+
+    print BSR "FileIndex=$first";
+    print BSR "-$prev" if ($first != $prev);
+    print BSR "\n";
+    print BSR "Count=" . (@fids) . "\n";
+  }
+
+  close(BSR);
+}
+
+# Change directory
+
+sub cmd_cd {
+  my $dir = shift;
+
+  my $save = $files;
+
+  $dir = $lwd if ($dir eq '-' && defined($lwd));
+
+  if ($dir ne '-') {
+    $files = &fetch_dir($dir);
+  }
+  else {
+    warn "Previous directory not defined.\n";
+  }
+
+  if ($files) {
+    $lwd = $cwd;
+    $cwd = $dir;
+  }
+  else {
+    print STDERR "Could not locate directory $dir\n";
+    $files = $save;
+  }
+
+  $cwd = '/' if (!$cwd);
+}
+
+sub cmd_changetime {
+  my $tstr = shift;
+
+  if (!$tstr) {
+    print "Time currently set to " . localtime($rtime) .
"\n"; + return; + } + + my $newtime = parsedate($tstr, FUZZY => 1, PREFER_PAST => 1); + + if (defined($newtime)) { + print STDERR "Time evaluated to $newtime\n" if ($debug); + $rtime = $newtime; + print "Setting date/time to " . localtime($rtime) . "\n"; + &setjob; + + # Clean cache. + $dircache = {}; + &cache_catalog if ($preload); + + # Get directory based on new time. + $files = &fetch_dir($cwd); + } + else { + print STDERR "Could not parse $tstr as date/time\n"; + } + +} + +# Change client + +sub cmd_client { + my $c = shift; + $jobname = shift; # Set global job name + + # Lookup client id. + $client = &lookup_client($c); + + # Clear cache, we changed machines/jobs + $dircache = {}; + &cache_catalog if ($preload); + + # Find last full backup time. + &setjob; + + # Get current directory on new client. + $files = &fetch_dir($cwd); + + # Clear restore info + $rnum = 0; + $rbytes = 0; + %restore = (); +} + +sub cmd_debug { + $debug = 1 - $debug; +} + +sub cmd_delete { + my @flist = @_; + my $opts = {quiet=>1}; + + my $save_rnum = $rnum; + &select_files(0, $opts, $cwd, @flist); + print "" . ($save_rnum - $rnum) . 
" files un-marked for restore\n"; +} + +sub cmd_help { + + foreach my $h (sort keys %COMMANDS) { + printf "%-12s %s\n", $h, $COMMANDS{$h}; + } + +} + +sub cmd_history { + + foreach my $h ($term->GetHistory) { + print "$h\n"; + } + +} + +# Print catalog/tape info about files + +sub cmd_info { + my @flist = @_; + @flist = ($cwd) if (!@flist); + + foreach my $f (@flist) { + $f =~ s|/+$||; + my ($fqdir, $dir, $file) = &path_parts($f); + my $finfo = &fetch_dir($fqdir); + + if (!$finfo->{$file}) { + + if (!$finfo->{"$file/"}) { + warn "$f: File not found.\n"; + next; + } + + $file .= '/'; + } + + my $fileid = $finfo->{$file}->{fileid}; + my $fileindex = $finfo->{$file}->{fileindex}; + my $jobid = $finfo->{$file}->{jobid}; + + print "#$f -\n"; + print "#FileID : $finfo->{$file}->{fileid}\n"; + print "#JobID : $jobid\n"; + print "#Visible : $finfo->{$file}->{visible}\n"; + + my $query = "select + volumename, + mediatype, + volsessionid, + volsessiontime, + startfile, + JobMedia.startblock, + JobMedia.endblock + from + Job, + Media, + JobMedia + where + Job.jobid = $jobid and + Job.jobid = JobMedia.jobid and + $fileindex >= firstindex and + $fileindex <= lastindex and + JobMedia.mediaid = Media.mediaid + "; + + my $sth = $dbh->prepare($query) || die "Can't prepare $query\n"; + $sth->execute || die "Can't execute $query\n"; + + while (my $ref = $sth->fetchrow_hashref) { + print "Volume=\"$ref->{volumename}\"\n"; + print "MediaType=\"$ref->{mediatype}\"\n"; + print "VolSessionId=$ref->{volsessionid}\n"; + print "VolSessionTime=$ref->{volsessiontime}\n"; + print "VolFile=$ref->{startfile}\n"; + print "VolBlock=$ref->{startblock}-$ref->{endblock}\n"; + print "FileIndex=$finfo->{$file}->{fileindex}\n"; + print "Count=1\n"; + } + + $sth->finish; + } + +} + +# List files. + +sub cmd_ls { + my $opts = shift; + my @flist = @_; + my @keys; + + print STDERR "DBG: " . (@flist) . 
" files to list.\n" if ($debug); + + if (!@flist) { + @flist = keys %$files; + } + + # Sort files as specified. + + if ($opts->{sort} eq 'alpha') { + print STDERR "DBG: Sort by alpha\n" if ($debug); + @keys = sort @flist; + } + elsif ($opts->{sort} eq 'ralpha') { + print STDERR "DBG: Sort by reverse alpha\n" if ($debug); + @keys = sort {$b cmp $a} @flist; + } + elsif ($opts->{sort} eq 'time') { + print STDERR "DBG: Sort by time\n" if ($debug); + @keys = sort { + return $a cmp $b + if ($files->{$b}->{'lstat'}->{'st_mtime'} == + $files->{$a}->{'lstat'}->{'st_mtime'}); + $files->{$b}->{'lstat'}->{'st_mtime'} <=> + $files->{$a}->{'lstat'}->{'st_mtime'} + } @flist; + } + elsif ($opts->{sort} eq 'rtime') { + print STDERR "DBG: Sort by reverse time\n" if ($debug); + @keys = sort { + return $b cmp $a + if ($files->{$a}->{'lstat'}->{'st_mtime'} == + $files->{$b}->{'lstat'}->{'st_mtime'}); + $files->{$a}->{'lstat'}->{'st_mtime'} <=> + $files->{$b}->{'lstat'}->{'st_mtime'} + } @flist; + } + elsif ($opts->{sort} eq 'size') { + print STDERR "DBG: Sort by size\n" if ($debug); + @keys = sort { + return $a cmp $b + if ($files->{$a}->{'lstat'}->{'st_size'} == + $files->{$b}->{'lstat'}->{'st_size'}); + $files->{$b}->{'lstat'}->{'st_size'} <=> + $files->{$a}->{'lstat'}->{'st_size'} + } @flist; + } + elsif ($opts->{sort} eq 'rsize') { + print STDERR "DBG: Sort by reverse size\n" if ($debug); + @keys = sort { + return $b cmp $a + if ($files->{$a}->{'lstat'}->{'st_size'} == + $files->{$b}->{'lstat'}->{'st_size'}); + $files->{$a}->{'lstat'}->{'st_size'} <=> + $files->{$b}->{'lstat'}->{'st_size'} + } @flist; + } + else { + print STDERR "DBG: $opts->{sort}, no sort\n" if ($debug); + @keys = @flist; + } + + @flist = (); + + foreach my $f (@keys) { + print STDERR "DBG: list $f\n" if ($debug); + $f =~ s|/+$||; + my ($fqdir, $dir, $file) = &path_parts($f); + my $finfo = &fetch_dir($fqdir); + + if (!$finfo->{$file}) { + + if (!$finfo->{"$file/"}) { + warn "$f: File not found.\n"; + next; + } + 
+ $file .= '/'; + } + + my $fdata = $finfo->{$file}; + + if ($opts->{'all'} || $fdata->{'visible'}) { + push(@flist, ["$dir$file", $fdata]); + } + + } + + if ($opts->{'long'}) { + my $lfmt = &long_fmt(\@flist) if ($opts->{'long'}); + + foreach my $f (@flist) { + my $file = $f->[0]; + my $fdata = $f->[1]; + my $r = ($restore{$fdata->{'fileid'}}) ? '+' : ' '; + my $lstat = $fdata->{'lstat'}; + + printf $lfmt, $lstat->{'statstr'}, $lstat->{'st_nlink'}, + $lstat->{'st_uid'}, $lstat->{'st_gid'}, $lstat->{'st_size'}, + ls_date($lstat->{'st_mtime'}), "$r$file"; + } + } + else { + &print_by_cols(@flist); + } + +} + +sub cmd_pwd { + print "$cwd\n"; +} + +# Create restore data for bconsole + +sub cmd_recover { + my $query = "create table recover (jobid int, fileindex int)"; + + $dbh->do($query) + || warn "Could not create recover table. Hope it's already there.\n"; + + if ($db eq 'postgres') { + $query = "COPY recover FROM STDIN"; + + $dbh->do($query) || die "Can't execute $query\n"; + + foreach my $finfo (values %restore) { + $dbh->pg_putline("$finfo->[0]\t$finfo->[1]\n"); + } + + $dbh->pg_endcopy; + } + else { + + foreach my $finfo (values %restore) { + $query = "insert into recover ( + 'jobid', 'fileindex' + ) + values ( + $finfo->[0], $finfo->[1] + )"; + $dbh->do($query) || die "Can't execute $query\n"; + } + + } + + $query = "GRANT all on recover to bacula"; + $dbh->do($query) || die "Can't execute $query\n"; + + $query = "select name from Client where clientid = $client"; + my $sth = $dbh->prepare($query) || die "Can't prepare $query\n"; + $sth->execute || die "Can't execute $query\n"; + + my $ref = $sth->fetchrow_hashref; + print "Restore prepared. 
Run bconsole and enter the following command\n"; + print "restore client=$$ref{name} where=$restore_to file=\?recover\n"; + $sth->finish; +} + +sub cmd_relocate { + $restore_to = shift; +} + +# Display information about recover's state + +sub cmd_show { + my $what = shift; + + if ($what eq 'clients') { + + foreach my $c (sort keys %$clients) { + print "$c\n"; + } + + } + elsif ($what eq 'catalog') { + print "$catalog\n"; + } + elsif ($what eq 'client') { + my $query = "select name from Client where clientid = $client"; + my $sth = $dbh->prepare($query) || die "Can't prepare $query\n"; + $sth->execute || die "Can't execute $query\n"; + + my $ref = $sth->fetchrow_hashref; + print "$$ref{name}; $jobname\n"; + $sth->finish; + } + elsif ($what eq 'cache') { + print "The following directories are cached\n"; + + foreach my $d (sort keys %$dircache) { + print "$d\n"; + } + + } + elsif ($what eq 'restore') { + print "There are $rnum files marked for restore.\n"; + + print STDERR "DBG: Bytes = $rbytes\n" if ($debug); + + if ($rbytes < 1024) { + print "The restore will require $rbytes bytes.\n"; + } + elsif ($rbytes < 1024*1024) { + my $rk = $rbytes/1024; + printf "The restore will require %.2f KB.\n", $rk; + } + elsif ($rbytes < 1024*1024*1024) { + my $rm = $rbytes/1024/1024; + printf "The restore will require %.2f MB.\n", $rm; + } + else { + my $rg = $rbytes/1024/1024/1024; + printf "The restore will require %.2f GB.\n", $rg; + } + + print "Restores will be placed in $restore_to\n"; + } + elsif ($what eq 'volumes') { + &cmd_volumes; + } + elsif ($what eq 'qinfo') { + my $dl = length($cwd); + print "? - 1: ftime = $ftime\n"; + print "? - 2: client = $client\n"; + print "? - 3: jobname = $jobname\n"; + print "? - 4: rtime = $rtime\n"; + print "? - 5: dir = $cwd\n"; + print "? - 6, 7: dl = $dl\n"; + print "? - 8: ftime = $ftime\n"; + print "? - 9: client = $client\n"; + print "? - 10: jobname = $jobname\n"; + print "? - 11: rtime = $rtime\n"; + print "? 
- 12: dir = $cwd\n"; + } + else { + warn "Don't know how to show $what\n"; + } + +} + +sub cmd_verbose { + $verbose = 1 - $verbose; +} + +sub cmd_versions { + my @flist = @_; + + @flist = ($cwd) if (!@flist); + + foreach my $f (@flist) { + my $path; + my $data = {}; + + print STDERR "DBG: Get versions for $f\n" if ($debug); + + $f =~ s|/+$||; + my ($fqdir, $dir, $file) = &path_parts($f); + my $finfo = &fetch_dir($fqdir); + + if (!$finfo->{$file}) { + + if (!$finfo->{"$file/"}) { + warn "$f: File not found.\n"; + next; + } + + $file .= '/'; + } + + if ($file =~ m|/$|) { + $path = "$fqdir$file"; + $file = ''; + } + else { + $path = $fqdir; + } + + print STDERR "DBG: Use $ftime, $path, $file, $client, $jobname\n" + if ($debug); + + $ver_sth->execute($ftime, $rtime, $path, $file, $client, $jobname) + || die "Can't execute $queries{$db}->{'ver'}\n"; + + # Gather stats + + while (my $ref = $ver_sth->fetchrow_hashref) { + my $f = "$ref->{name};$ref->{jobtdate}"; + $data->{$f} = &create_file_entry( + $f, + $ref->{'fileid'}, + $ref->{'fileindex'}, + $ref->{'jobid'}, + $ref->{'visible'}, + $ref->{'lstat'} + ); + + $data->{$f}->{'jobtdate'} = $ref->{'jobtdate'}; + $data->{$f}->{'volume'} = $ref->{'volumename'}; + } + + my @keys = sort { + $data->{$a}->{'jobtdate'} <=> + $data->{$b}->{'jobtdate'} + } keys %$data; + + my @list = (); + + foreach my $f (@keys) { + push(@list, [$file, $data->{$f}]); + } + + my $lfmt = &long_fmt(\@list); + print "\nVersions of \`$path$file' earlier than "; + print localtime($rtime) . ":\n\n"; + + foreach my $f (@keys) { + my $lstat = $data->{$f}->{'lstat'}; + printf $lfmt, $lstat->{'statstr'}, $lstat->{'st_nlink'}, + $lstat->{'st_uid'}, $lstat->{'st_gid'}, $lstat->{'st_size'}, + time2str('%c', $lstat->{'st_mtime'}), $file; + print "save time: " . localtime($data->{$f}->{'jobtdate'}) . "\n"; + print " location: $data->{$f}->{volume}\n\n"; + } + + } + +} + +# List volumes needed for restore. 
+ +sub cmd_volumes { + my %media; + my @jobmedia; + my %volumes; + + # Get media. + my $query = "select mediaid, volumename from Media"; + my $sth = $dbh->prepare($query) || die "Can't prepare $query\n"; + + $sth->execute || die "Can't execute $query\n"; + + while (my $ref = $sth->fetchrow_hashref) { + $media{$$ref{'mediaid'}} = $$ref{'volumename'}; + } + + $sth->finish(); + + # Get media usage. + $query = "select mediaid, jobid, firstindex, lastindex from JobMedia"; + $sth = $dbh->prepare($query) || die "Can't prepare $query\n"; + + $sth->execute || die "Can't execute $query\n"; + + while (my $ref = $sth->fetchrow_hashref) { + push(@jobmedia, { + 'mediaid' => $$ref{'mediaid'}, + 'jobid' => $$ref{'jobid'}, + 'firstindex' => $$ref{'firstindex'}, + 'lastindex' => $$ref{'lastindex'} + }); + } + + $sth->finish(); + + # Find needed volumes + + foreach my $fileid (keys %restore) { + my ($jobid, $idx) = @{$restore{$fileid}}; + + foreach my $jm (@jobmedia) { + next if ($jm->{'jobid'}) != $jobid; + + if ($idx >= $jm->{'firstindex'} && $idx <= $jm->{'lastindex'}) { + $volumes{$media{$jm->{'mediaid'}}} = 1; + } + + } + + } + + print "The following volumes are needed for restore.\n"; + + foreach my $v (sort keys %volumes) { + print "$v\n"; + } + +} + +sub cmd_error { + my $msg = shift; + print STDERR "$msg\n"; +} + +############################################################################## +### Start of program +############################################################################## + +&cache_catalog if ($preload); + +print "Using $readline for command processing\n" if ($verbose); + +# Initialize command completion + +# Add binding for Perl readline. Issue warning. 
+if ($readline eq 'Term::ReadLine::Gnu') { + $term->ReadHistory($HIST_FILE); + print STDERR "DBG: FCD - $tty_attribs->{filename_completion_desired}\n" + if ($debug); + $tty_attribs->{attempted_completion_function} = \&complete; + $tty_attribs->{attempted_completion_function} = \&complete; + print STDERR "DBG: Quote chars = '$tty_attribs->{filename_quote_characters}'\n" if ($debug); +} +elsif ($readline eq 'Term::ReadLine::Perl') { + readline::rl_bind('TAB', 'ViComplete'); + warn "Command completion disabled. $readline is seriously broken\n"; +} +else { + warn "Can't deal with $readline, Command completion disabled.\n"; +} + +&cmd_cd($start_dir); + +while (defined($cstr = $term->readline('recover> '))) { + print "\n" if ($readline eq 'Term::ReadLine::Perl'); + my @command = parse_command($cstr); + last if ($command[0] eq 'quit'); + next if ($command[0] eq 'nop'); + + print STDERR "Execute $command[0] command.\n" if ($debug); + + my $cmd = \&{"cmd_$command[0]"}; + + # The following line will call the subroutine named cmd_ prepended to + # the name of the command returned by parse_command. + + &$cmd(@command[1..$#command]); +}; + +$dir_sth->finish(); +$sel_sth->finish(); +$ver_sth->finish(); +$dbh->disconnect(); + +print "\n" if (!defined($cstr)); + +$term->WriteHistory($HIST_FILE) if ($readline eq 'Term::ReadLine::Gnu'); + +=head1 DEPENDENCIES + +The following CPAN modules are required to run this program. + +DBI, Term::ReadKey, Time::ParseDate, Date::Format, Text::ParseWords + +Additionally, you will only get command line completion if you also have + +Term::ReadLine::Gnu + +=head1 AUTHOR + +Karl Hakimian + +=head1 LICENSE + +Copyright (C) 2006 Karl Hakimian + +This program is free software; you can redistribute it and/or modify +it under the terms of the GNU General Public License as published by +the Free Software Foundation; either version 2 of the License, or +(at your option) any later version. 
+ +This program is distributed in the hope that it will be useful, +but WITHOUT ANY WARRANTY; without even the implied warranty of +MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the +GNU General Public License for more details. + +You should have received a copy of the GNU General Public License +along with this program; if not, write to the Free Software +Foundation, Inc., 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA + +=cut diff --git a/bacula/kernstodo b/bacula/kernstodo index 0493eebda2..071337cd97 100644 --- a/bacula/kernstodo +++ b/bacula/kernstodo @@ -6,7 +6,6 @@ Project Developer ======= ========= Document: -- Does ClientRunAfterJob fail the job on a bad return code? - Document cleaning up the spool files: db, pid, state, bsr, mail, conmsg, spool - Document the multiple-drive-changer.txt script. @@ -17,6 +16,68 @@ Document: Priority: For 1.39: +- Print warning message if LANG environment variable does not specify + UTF-8. +=== Migration from David === +What I'd like to see: + +Job { + Name = "-migrate" + Type = Migrate + Messages = Standard + Pool = Default + Migration Selection Type = LowestUtil | OldestVol | PoolOccupancy | +Client | PoolResidence | Volume | JobName | SQLquery + Migration Selection Pattern = "regexp" + Next Pool = +} + +There should be no need for a Level (migration is always Full, since you +don't calculate differential/incremental differences for migration), +Storage should be determined by the volume types in the pool, and Client +is really a selection issue. Migration should always occur to the +NextPool defined in the pool definition. If no nextpool is defined, the +job should end with a reason of "no place to go". If Next Pool statement +is present, we override the check in the pool definition and use the +pool specified. + +Here's how I'd define Migration Selection Types: + +LowestUtil -- Identify the volume in the pool with the least data on it +and empty it. No Migration Selection Pattern required. 
+
+OldestVol -- Identify the LRU volume with data written, and empty it. No
+Migration Selection Pattern required.
+
+PoolOccupancy -- if pool occupancy exceeds , migrate volumes
+(starting with most full volumes) until pool occupancy drops below
+. Pool highmig and lowmig values are in pool definition, no
+Migration Selection Pattern required.
+
+Client -- Migrate data from selected client only. Migration Selection
+Pattern regexp provides pattern to select client names, eg ^FS00* makes
+all client names starting with FS00 eligible for migration.
+
+PoolResidence -- Migrate data sitting in pool for longer than
+PoolResidence value in pool definition. Migration Selection Pattern
+optional; if specified, override value in pool definition (value in
+minutes).
+
+Volume -- Migrate all data on specified volumes. Migration Selection
+Pattern regexp provides selection criteria for volumes to be migrated.
+Volumes must exist in pool to be eligible for migration.
+
+Jobname -- Migrate all jobs matching name. Migration Selection Pattern
+regexp provides pattern to select jobnames existing in pool.
+
+SQLQuery -- Migrate all jobuids returned by the supplied SQL query.
+Migration Selection Pattern contains SQL query to execute; should return
+a list of 1 or more jobuids to migrate.
+
+[ possibly a Python event -- kes ]
+===
+- Network error on Win32 should set Win32 error code.
+- What happens when you rename a Disk Volume?
- Job retention period in a Pool (and hence Volume). The job would
  then be migrated.
- Detect resource deadlock in Migrate when same job wants to read
@@ -222,6 +283,50 @@ For 1.39:
- It remains to be seen how the backup performance of the DIR's will
  be affected when comparing the catalog for a large filesystem.

+====
+From David:
+How about introducing a Type = MgmtPolicy job type?
That job type would
+be responsible for scanning the Bacula environment looking for specific
+conditions, and submitting the appropriate jobs for implementing said
+policy, eg:
+
+Job {
+   Name = "Migration-Policy"
+   Type = MgmtPolicy
+   Policy Selection Job Type = Migrate
+   Scope = " "
+   Threshold = " "
+   Job Template =
+}
+
+Where is any legal job keyword, is a comparison
+operator (=,<,>,!=, logical operators AND/OR/NOT) and is an
+appropriate regexp. I could see an argument for Scope and Threshold
+being SQL queries if we want to support full flexibility. The
+Migration-Policy job would then get scheduled as frequently as a site
+felt necessary (suggested default: every 15 minutes).
+
+Example:
+
+Job {
+   Name = "Migration-Policy"
+   Type = MgmtPolicy
+   Policy Selection Job Type = Migration
+   Scope = "Pool=*"
+   Threshold = "Migration Selection Type = LowestUtil"
+   Job Template = "MigrationTemplate"
+}
+
+would select all pools for examination and generate a job based on
+MigrationTemplate to automatically select the volume with the lowest
+usage and migrate its contents to the nextpool defined for that pool.
+
+This policy abstraction would be really handy for adjusting the behavior
+of Bacula according to site-selectable criteria (one thing that pops
+into mind is Amanda's ability to automatically adjust backup levels
+depending on various criteria).
+
+
=====
Regression tests:
@@ -1282,4 +1387,4 @@ Block Position: 0
- Reserve blocks other restore jobs when first cannot connect to SD.
- Fix Maximum Changer Wait, Maximum Open Wait, Maximum Rewind Wait
  to accept time qualifiers.
-
+- Does ClientRunAfterJob fail the job on a bad return code?
diff --git a/bacula/kes-1.38 b/bacula/kes-1.38
index 2de8cb1522..11b8044742 100644
--- a/bacula/kes-1.38
+++ b/bacula/kes-1.38
@@ -3,6 +3,74 @@
General:
+Release 1.38.6 beta3 4Mar06
+04Mar06
+- The po files should now be current.
+- Fix new sql_use_result() code to properly release the
+  buffers in all cases.
+- Convert to using new Python class definitions with (object).
+- Use the keyword ujobid to mean the unique job id; job or jobname
+  to mean the Job name given on the Name directive, and jobid to
+  be the numeric (non-unique) job id.
+- Allow listing by any of the above.
+- Add the user friendly job report code for reporting job elapsed time
+  and rates with suffixes from John Kodis .
+- Add Priority and JobLevel as Python settable items.
+- Use TEMPORARY table creation where the table is created by
+  Bacula.
+- Add new code submitted by Eric for waiting on specific jobid.
+- Add ACL checking for the dot commands.
+- Fix restore of writable FIFOs.
+- Fix a bug in bpipe where the string was freed too early.
+
+26Feb06
+- Fix bug reported by Arno listing blocks with bls
+- Update the po files at Eric's request.
+
+Release 1.38.6-beta2 25Feb06
+25Feb06
+- Add sql_use_result() define.
+
+Release 1.38.6 beta1 24Feb06
+24Feb06
+- Don't open default catalog if not in ACL.
+
+22Feb06
+- Add virtual disk autochanger code.
+- Add user supplied bug fix to make two autochangers work
+  correctly using StorageId with InChanger checks.
+- Correct new/old_jcr confusion in copy_storage().
+- Remove & from Job during scan in msgchan.c -- probably
+  trashed the stack.
+- When getting the next Volume if no Volume in Append mode
+  exists and we are dealing with an Autochanger, search
+  for a Scratch Volume.
+- Check for missing value in dot commands -- bug fix.
+- Fix bug in update barcodes command line scanning.
+- Make sure Pool Max Vols is respected.
+- Check that user supplied a value before referencing
+  it in restore -- pointed out by Karl Hakimian.
+- Add Karl Hakimian's table insert code.
+- Don't ask user to select a specific Volume when
+  updating all volumes in a Pool.
+- Remove reservation if set for read when removing dcr.
+- Lock code that requests next appendable volume so that
+  two jobs cannot get the same Volume at the same time.
+- Add new Device Type = xxx code.
Values are file, tape,
+  dvd, and fifo.
+- Preserve certain modes (ST_LABEL|ST_APPEND|ST_READ) across
+  a re-open to change read/write permission on a device.
+- Correct a misplaced double quote in certain autochanger
+  scripts.
+- Make make_catalog_backup.in a bit more portable.
+- Implement Karl Hakimian's sql_use_result(), which speeds
+  up restore tree building and reduces the memory load.
+- Correct a number of minor bugs in getting a Volume from
+  the Scratch Pool.
+- Implement additional command line options for update Volume.
+- Don't require user to enter a Volume name when updating
+  all Volumes in a pool.
+
Release 1.38.5 released 19Jan06:
19Jan06
- Apply label barcodes fix supplied by Rudolf Cejka.
diff --git a/bacula/kes-1.39 b/bacula/kes-1.39
index 5f3add3503..b2c99eb8bb 100644
--- a/bacula/kes-1.39
+++ b/bacula/kes-1.39
@@ -2,10 +2,34 @@ Kern Sibbald
General:
+08Mar06
+- Rename mac.c to migrate.c
+- Add user friendly display of VolBytes in job report.
+- Rename target... to previous... to make it a bit easier to
+  understand.
+- Add selection type and selection pattern to Migration (idea
+  given by David Boyes).
+04Mar06
+- The po files should now be current.
+- Fix new sql_use_result() code to properly release the
+  buffers in all cases.
+- Use the keyword ujobid to mean the unique job id; job or jobname
+  to mean the Job name given on the Name directive, and jobid to
+  be the numeric (non-unique) job id.
+- Allow listing by any of the above.
+- Add the user friendly job report code for reporting job elapsed time
+  and rates with suffixes from John Kodis .
+- Add Priority and JobLevel as Python settable items.
+- Use TEMPORARY table creation where the table is created by
+  Bacula.
+- Add new code submitted by Eric for waiting on specific jobid.
+- Add ACL checking for the dot commands.
+- Fix restore of writable FIFOs.
+- Fix a bug in bpipe where the string was freed too early.
27Feb06 - Modify the Python class examples to inherit object -- new way - of defining classes. Patch from Felix Schwartz. + of defining classes. Patch from Felix Schwarz. - Implement jobuid to replace old usage of job in keywords as suggested by Eric Bollengier. - Apply patch for enhancing wait from Eric Bollengier. On can now: @@ -15,11 +39,9 @@ General: wait job=job-name - Implement write variables for Python to set Priority (anytime), and Job Level, only during JobInit event. - 26Feb06 - Fix the block listing bug pointed out by Arno. - Update the po files at Eric's request. - 24Feb06 - Fix Maximum Changer Wait, Maximum Open Wait, Maximum Rewind Wait to accept time qualifiers. @@ -101,7 +123,8 @@ Changes to 1.39.5 - Move updating bootstrap code in backup.c to subroutine update_bootstrap_file(). - Add new job status elapsed time and bytes written user - friendly job report output patch sent by a user. + friendly job report output patch sent by John Kodis + . - Implement a storage list in Pools. - Separate out setup_job() code from run_job(). - Get migration working -- lots of changes in mac.c in both diff --git a/bacula/src/cats/create_postgresql_database.in b/bacula/src/cats/create_postgresql_database.in index 1cbe61e9e6..e40ef541c6 100644 --- a/bacula/src/cats/create_postgresql_database.in +++ b/bacula/src/cats/create_postgresql_database.in @@ -5,8 +5,17 @@ bindir=@SQL_BINDIR@ +# use SQL_ASCII to be able to put any filename into +# the database even those created with unusual character sets +ENCODING="ENCODING 'SQL_ASCII'" +# use UTF8 if you are using standard Unix/Linux LANG specifications +# that use UTF8 -- this is normally the default and *should* be +# your standard. Bacula consoles work correctly *only* with UTF8. 
+#ENCODING="ENCODING 'UTF8'" + + if $bindir/psql -f - -d template1 $* <result = sql_use_result(mdb)) != NULL) { - int num_fields = sql_num_fields(mdb); + int num_fields = 0; + /* We *must* fetch all rows */ while ((row = sql_fetch_row(mdb)) != NULL) { - if (result_handler(ctx, num_fields, row)) - break; + if (send) { + /* the result handler returns 1 when it has + * seen all the data it wants. However, we + * loop to the end of the data. + */ + num_fields++; + if (result_handler(ctx, num_fields, row)) { + send = false; + } + } } sql_free_result(mdb); diff --git a/bacula/src/dird/Makefile.in b/bacula/src/dird/Makefile.in index 316ad3f50f..068deac2f4 100644 --- a/bacula/src/dird/Makefile.in +++ b/bacula/src/dird/Makefile.in @@ -34,7 +34,7 @@ SVRSRCS = dird.c admin.c authenticate.c \ autoprune.c backup.c bsr.c \ catreq.c dird_conf.c expand.c \ fd_cmds.c getmsg.c inc_conf.c job.c \ - jobq.c mac.c \ + jobq.c migrate.c \ mountreq.c msgchan.c next_vol.c newvol.c \ pythondir.c \ recycle.c restore.c run_conf.c \ @@ -49,7 +49,7 @@ SVROBJS = dird.o admin.o authenticate.o \ autoprune.o backup.o bsr.o \ catreq.o dird_conf.o expand.o \ fd_cmds.o getmsg.o inc_conf.o job.o \ - jobq.o mac.o \ + jobq.o migrate.o \ mountreq.o msgchan.o next_vol.o newvol.o \ pythondir.o \ recycle.o restore.o run_conf.o \ diff --git a/bacula/src/dird/backup.c b/bacula/src/dird/backup.c index 496e14d6d8..79e041602e 100644 --- a/bacula/src/dird/backup.c +++ b/bacula/src/dird/backup.c @@ -309,7 +309,7 @@ void backup_cleanup(JCR *jcr, int TermCode) { char sdt[50], edt[50], schedt[50]; char ec1[30], ec2[30], ec3[30], ec4[30], ec5[30], compress[50]; - char ec6[30], ec7[30], elapsed[50]; + char ec6[30], ec7[30], ec8[30], elapsed[50]; char term_code[100], fd_term_msg[100], sd_term_msg[100]; const char *term_msg; int msg_type; @@ -347,7 +347,6 @@ void backup_cleanup(JCR *jcr, int TermCode) update_bootstrap_file(jcr); - msg_type = M_INFO; /* by default INFO message */ switch (jcr->JobStatus) { case 
JS_Terminated: @@ -441,7 +440,7 @@ void backup_cleanup(JCR *jcr, int TermCode) " Volume name(s): %s\n" " Volume Session Id: %d\n" " Volume Session Time: %d\n" -" Last Volume Bytes: %s\n" +" Last Volume Bytes: %s (%sB)\n" " Non-fatal FD errors: %d\n" " SD Errors: %d\n" " FD termination status: %s\n" @@ -474,6 +473,7 @@ void backup_cleanup(JCR *jcr, int TermCode) jcr->VolSessionId, jcr->VolSessionTime, edit_uint64_with_commas(mr.VolBytes, ec7), + edit_uint64_with_suffix(mr.VolBytes, ec8), jcr->Errors, jcr->SDErrors, fd_term_msg, diff --git a/bacula/src/dird/catreq.c b/bacula/src/dird/catreq.c index e599bd95f2..95deae2086 100644 --- a/bacula/src/dird/catreq.c +++ b/bacula/src/dird/catreq.c @@ -279,8 +279,8 @@ void catalog_request(JCR *jcr, BSOCK *bs) &jm.FirstIndex, &jm.LastIndex, &jm.StartFile, &jm.EndFile, &jm.StartBlock, &jm.EndBlock, &jm.Copy, &jm.Stripe) == 9) { - if (jcr->target_jcr) { - jm.JobId = jcr->target_jcr->JobId; + if (jcr->previous_jcr) { + jm.JobId = jcr->previous_jcr->JobId; jm.MediaId = jcr->MediaId; } else { jm.JobId = jcr->JobId; @@ -394,8 +394,8 @@ void catalog_update(JCR *jcr, BSOCK *bs) ar->FileIndex = FileIndex; ar->Stream = Stream; ar->link = NULL; - if (jcr->target_jcr) { - ar->JobId = jcr->target_jcr->JobId; + if (jcr->previous_jcr) { + ar->JobId = jcr->previous_jcr->JobId; } else { ar->JobId = jcr->JobId; } diff --git a/bacula/src/dird/dird_conf.c b/bacula/src/dird/dird_conf.c index 0ebace8e60..1a79856679 100644 --- a/bacula/src/dird/dird_conf.c +++ b/bacula/src/dird/dird_conf.c @@ -61,6 +61,7 @@ void store_level(LEX *lc, RES_ITEM *item, int index, int pass); void store_replace(LEX *lc, RES_ITEM *item, int index, int pass); void store_acl(LEX *lc, RES_ITEM *item, int index, int pass); static void store_device(LEX *lc, RES_ITEM *item, int index, int pass); +static void store_migtype(LEX *lc, RES_ITEM *item, int index, int pass); /* We build the current resource here as we are @@ -237,8 +238,9 @@ RES_ITEM job_items[] = { {"fileset", 
store_res, ITEM(res_job.fileset), R_FILESET, ITEM_REQUIRED, 0}, {"schedule", store_res, ITEM(res_job.schedule), R_SCHEDULE, 0, 0}, {"verifyjob", store_res, ITEM(res_job.verify_job), R_JOB, 0, 0}, - {"migrationjob", store_res, ITEM(res_job.migration_job), R_JOB, 0, 0}, + {"jobtoverify", store_res, ITEM(res_job.verify_job), R_JOB, 0, 0}, {"jobdefs", store_res, ITEM(res_job.jobdefs), R_JOBDEFS, 0, 0}, + {"nextpool", store_res, ITEM(res_job.next_pool), R_POOL, 0, 0}, {"run", store_alist_str, ITEM(res_job.run_cmds), 0, 0, 0}, /* Root of where to restore files */ {"where", store_dir, ITEM(res_job.RestoreWhere), 0, 0, 0}, @@ -275,6 +277,8 @@ RES_ITEM job_items[] = { {"rescheduletimes", store_pint, ITEM(res_job.RescheduleTimes), 0, 0, 0}, {"priority", store_pint, ITEM(res_job.Priority), 0, ITEM_DEFAULT, 10}, {"writepartafterjob", store_bool, ITEM(res_job.write_part_after_job), 0, ITEM_DEFAULT, false}, + {"selectionpattern", store_str, ITEM(res_job.selection_pattern), 0, 0, 0}, + {"selectiontype", store_migtype, ITEM(res_job.selection_type), 0, 0, 0}, {NULL, NULL, NULL, 0, 0, 0} }; @@ -412,12 +416,29 @@ struct s_jt jobtypes[] = { {"admin", JT_ADMIN}, {"verify", JT_VERIFY}, {"restore", JT_RESTORE}, - {"copy", JT_COPY}, {"migrate", JT_MIGRATE}, {NULL, 0} }; +/* Keywords (RHS) permitted in Selection type records + * + * type_name job_type + */ +struct s_jt migtypes[] = { + {"smallestvolume", MT_SMALLEST_VOL}, + {"oldestvolume", MT_OLDEST_VOL}, + {"pooloccupancy", MT_POOL_OCCUPANCY}, + {"pooltime", MT_POOL_TIME}, + {"client", MT_CLIENT}, + {"volume", MT_VOLUME}, + {"job", MT_JOB}, + {"sqlquery", MT_SQLQUERY}, + {NULL, 0} +}; + + + /* Options permitted in Restore replace= */ struct s_kw ReplaceOptions[] = { {"always", REPLACE_ALWAYS}, @@ -547,6 +568,9 @@ void dump_resource(int type, RES *reshdr, void sendit(void *sock, const char *fm res->res_job.RescheduleOnError, res->res_job.RescheduleTimes, edit_uint64_with_commas(res->res_job.RescheduleInterval, ed1), 
res->res_job.spool_data, res->res_job.write_part_after_job);
+   if (res->res_job.JobType == JT_MIGRATE) {
+      sendit(sock, _("  SelectionType=%d\n"), res->res_job.selection_type);
+   }
   if (res->res_job.client) {
      sendit(sock, _("  --> "));
      dump_resource(-R_CLIENT, (RES *)res->res_job.client, sendit, sock);
@@ -610,6 +634,9 @@ void dump_resource(int type, RES *reshdr, void sendit(void *sock, const char *fm
         sendit(sock, _("  --> Run=%s\n"), runcmd);
      }
   }
+   if (res->res_job.selection_pattern) {
+      sendit(sock, _("  --> SelectionPattern=%s\n"), NPRT(res->res_job.selection_pattern));
+   }
   if (res->res_job.messages) {
      sendit(sock, _("  --> "));
      dump_resource(-R_MSGS, (RES *)res->res_job.messages, sendit, sock);
@@ -1084,6 +1111,9 @@ void free_resource(RES *sres, int type)
      if (res->res_job.ClientRunAfterJob) {
         free(res->res_job.ClientRunAfterJob);
      }
+      if (res->res_job.selection_pattern) {
+         free(res->res_job.selection_pattern);
+      }
      if (res->res_job.run_cmds) {
         delete res->res_job.run_cmds;
      }
@@ -1394,6 +1424,31 @@ static void store_device(LEX *lc, RES_ITEM *item, int index, int pass)
   }
}

+/*
+ * Store Migration selection type (smallestvolume, oldestvolume, ...)
+ *
+ */
+static void store_migtype(LEX *lc, RES_ITEM *item, int index, int pass)
+{
+   int token, i;
+
+   token = lex_get_token(lc, T_NAME);
+   /* Store the type both pass 1 and pass 2 */
+   for (i=0; migtypes[i].type_name; i++) {
+      if (strcasecmp(lc->str, migtypes[i].type_name) == 0) {
+         *(int *)(item->value) = migtypes[i].job_type;
+         i = 0;
+         break;
+      }
+   }
+   if (i != 0) {
+      scan_err1(lc, _("Expected a Migration Job Type keyword, got: %s"), lc->str);
+   }
+   scan_to_eol(lc);
+   set_bit(index, res_all.hdr.item_present);
+}
+
+
/*
 * Store JobType (backup, verify, restore)
diff --git a/bacula/src/dird/dird_conf.h b/bacula/src/dird/dird_conf.h
index 6886983ee4..7117c38fe1 100644
--- a/bacula/src/dird/dird_conf.h
+++ b/bacula/src/dird/dird_conf.h
@@ -298,7 +298,7 @@ public:
   utime_t MaxStartDelay;             /* max start delay in seconds */
   utime_t RescheduleInterval;        /*
Reschedule interval */ utime_t JobRetention; /* job retention period in seconds */ - uint32_t MaxConcurrentJobs; /* Maximume concurrent jobs */ + uint32_t MaxConcurrentJobs; /* Maximum concurrent jobs */ int RescheduleTimes; /* Number of times to reschedule job */ bool RescheduleOnError; /* Set to reschedule on error */ bool PrefixLinks; /* prefix soft links with Where path */ @@ -321,9 +321,11 @@ public: POOL *full_pool; /* Pool for Full backups */ POOL *inc_pool; /* Pool for Incremental backups */ POOL *dif_pool; /* Pool for Differental backups */ + POOL *next_pool; /* Next Pool for Migration */ + char *selection_pattern; + int selection_type; union { JOB *verify_job; /* Job name to verify */ - JOB *migration_job; /* Job name to migrate */ }; JOB *jobdefs; /* Job defaults */ alist *run_cmds; /* Run commands */ diff --git a/bacula/src/dird/job.c b/bacula/src/dird/job.c index f23976861c..607835c7c5 100644 --- a/bacula/src/dird/job.c +++ b/bacula/src/dird/job.c @@ -225,10 +225,8 @@ static void *job_thread(void *arg) } break; case JT_MIGRATE: - case JT_COPY: - case JT_ARCHIVE: - if (!do_mac_init(jcr)) { /* migration, archive, copy */ - mac_cleanup(jcr, JS_ErrorTerminated); + if (!do_migration_init(jcr)) { + migration_cleanup(jcr, JS_ErrorTerminated); } break; default: @@ -303,10 +301,10 @@ static void *job_thread(void *arg) case JT_MIGRATE: case JT_COPY: case JT_ARCHIVE: - if (do_mac(jcr)) { /* migration, archive, copy */ + if (do_migration(jcr)) { do_autoprune(jcr); } else { - mac_cleanup(jcr, JS_ErrorTerminated); + migration_cleanup(jcr, JS_ErrorTerminated); } break; default: @@ -958,12 +956,12 @@ bool create_restore_bootstrap_file(JCR *jcr) memset(&rx, 0, sizeof(rx)); rx.bsr = new_bsr(); rx.JobIds = ""; - rx.bsr->JobId = jcr->target_jr.JobId; + rx.bsr->JobId = jcr->previous_jr.JobId; ua = new_ua_context(jcr); complete_bsr(ua, rx.bsr); rx.bsr->fi = new_findex(); rx.bsr->fi->findex = 1; - rx.bsr->fi->findex2 = jcr->target_jr.JobFiles; + rx.bsr->fi->findex2 = 
jcr->previous_jr.JobFiles; jcr->ExpectedFiles = write_bsr_file(ua, rx); if (jcr->ExpectedFiles == 0) { free_ua_context(ua); diff --git a/bacula/src/dird/mac.c b/bacula/src/dird/migrate.c similarity index 65% rename from bacula/src/dird/mac.c rename to bacula/src/dird/migrate.c index d042552377..b818adc37b 100644 --- a/bacula/src/dird/mac.c +++ b/bacula/src/dird/migrate.c @@ -1,7 +1,7 @@ /* * - * Bacula Director -- mac.c -- responsible for doing - * migration, archive, and copy jobs. + * Bacula Director -- migrate.c -- responsible for doing + * migration jobs. * * Kern Sibbald, September MMIV * @@ -34,68 +34,32 @@ #include "ua.h" static char OKbootstrap[] = "3000 OK bootstrap\n"; +static bool get_job_to_migrate(JCR *jcr); /* * Called here before the job is run to do the job * specific setup. */ -bool do_mac_init(JCR *jcr) +bool do_migration_init(JCR *jcr) { POOL_DBR pr; - char *Name; - const char *Type; - switch(jcr->JobType) { - case JT_MIGRATE: - Type = "Migration"; - break; - case JT_ARCHIVE: - Type = "Archive"; - break; - case JT_COPY: - Type = "Copy"; - break; - default: - Type = "Unknown"; - break; - } - - if (!get_or_create_fileset_record(jcr)) { + if (!get_job_to_migrate(jcr)) { return false; } - /* - * Find JobId of last job that ran. - */ - Name = jcr->job->migration_job->hdr.name; - Dmsg1(100, "find last jobid for: %s\n", NPRT(Name)); - jcr->target_jr.JobType = JT_BACKUP; - if (!db_find_last_jobid(jcr, jcr->db, Name, &jcr->target_jr)) { - Jmsg(jcr, M_FATAL, 0, - _("Previous job \"%s\" not found. ERR=%s\n"), Name, - db_strerror(jcr->db)); - return false; + if (jcr->previous_jr.JobId == 0) { + return true; /* no work */ } - Dmsg1(100, "Last jobid=%d\n", jcr->target_jr.JobId); - if (!db_get_job_record(jcr, jcr->db, &jcr->target_jr)) { - Jmsg(jcr, M_FATAL, 0, _("Could not get job record for previous Job. 
ERR=%s"), - db_strerror(jcr->db)); - return false; - } - if (jcr->target_jr.JobStatus != 'T') { - Jmsg(jcr, M_FATAL, 0, _("Last Job %d did not terminate normally. JobStatus=%c\n"), - jcr->target_jr.JobId, jcr->target_jr.JobStatus); + if (!get_or_create_fileset_record(jcr)) { return false; } - Jmsg(jcr, M_INFO, 0, _("%s using JobId=%d Job=%s\n"), - Type, jcr->target_jr.JobId, jcr->target_jr.Job); - /* * Get the Pool record -- first apply any level defined pools */ - switch (jcr->target_jr.JobLevel) { + switch (jcr->previous_jr.JobLevel) { case L_FULL: if (jcr->full_pool) { jcr->pool = jcr->full_pool; @@ -142,40 +106,28 @@ bool do_mac_init(JCR *jcr) } /* - * Do a Migration, Archive, or Copy of a previous job + * Do a Migration of a previous job * * Returns: false on failure * true on success */ -bool do_mac(JCR *jcr) +bool do_migration(JCR *jcr) { POOL_DBR pr; POOL *pool; - const char *Type; char ed1[100]; BSOCK *sd; JOB *job, *tjob; JCR *tjcr; - switch(jcr->JobType) { - case JT_MIGRATE: - Type = "Migration"; - break; - case JT_ARCHIVE: - Type = "Archive"; - break; - case JT_COPY: - Type = "Copy"; - break; - default: - Type = "Unknown"; - break; + if (jcr->previous_jr.JobId == 0) { + jcr->JobStatus = JS_Terminated; + migration_cleanup(jcr, jcr->JobStatus); + return true; /* no work */ } - - Dmsg4(100, "Target: Name=%s JobId=%d Type=%c Level=%c\n", - jcr->target_jr.Name, jcr->target_jr.JobId, - jcr->target_jr.JobType, jcr->target_jr.JobLevel); + jcr->previous_jr.Name, jcr->previous_jr.JobId, + jcr->previous_jr.JobType, jcr->previous_jr.JobLevel); Dmsg4(100, "Current: Name=%s JobId=%d Type=%c Level=%c\n", jcr->jr.Name, jcr->jr.JobId, @@ -183,7 +135,7 @@ bool do_mac(JCR *jcr) LockRes(); job = (JOB *)GetResWithName(R_JOB, jcr->jr.Name); - tjob = (JOB *)GetResWithName(R_JOB, jcr->target_jr.Name); + tjob = (JOB *)GetResWithName(R_JOB, jcr->previous_jr.Name); UnlockRes(); if (!job || !tjob) { return false; @@ -196,8 +148,8 @@ bool do_mac(JCR *jcr) * the original backup job. 
Most operations on the current * migration jcr are also done on the target jcr. */ - tjcr = jcr->target_jcr = new_jcr(sizeof(JCR), dird_free_jcr); - memcpy(&tjcr->target_jr, &jcr->target_jr, sizeof(tjcr->target_jr)); + tjcr = jcr->previous_jcr = new_jcr(sizeof(JCR), dird_free_jcr); + memcpy(&tjcr->previous_jr, &jcr->previous_jr, sizeof(tjcr->previous_jr)); /* Turn the tjcr into a "real" job */ set_jcr_defaults(tjcr, tjob); @@ -213,9 +165,8 @@ bool do_mac(JCR *jcr) * find the pool name from the database record. */ memset(&pr, 0, sizeof(pr)); - pr.PoolId = tjcr->target_jr.PoolId; + pr.PoolId = tjcr->previous_jr.PoolId; if (!db_get_pool_record(jcr, jcr->db, &pr)) { - char ed1[50]; Jmsg(jcr, M_FATAL, 0, _("Pool for JobId %s not in database. ERR=%s\n"), edit_int64(pr.PoolId, ed1), db_strerror(jcr->db)); return false; @@ -262,8 +213,8 @@ bool do_mac(JCR *jcr) copy_storage(jcr, jcr->pool->storage); /* Print Job Start message */ - Jmsg(jcr, M_INFO, 0, _("Start %s JobId %s, Job=%s\n"), - Type, edit_uint64(jcr->JobId, ed1), jcr->Job); + Jmsg(jcr, M_INFO, 0, _("Start Migration JobId %s, Job=%s\n"), + edit_uint64(jcr->JobId, ed1), jcr->Job); set_jcr_job_status(jcr, JS_Running); set_jcr_job_status(jcr, JS_Running); @@ -332,45 +283,175 @@ bool do_mac(JCR *jcr) jcr->JobStatus = jcr->SDJobStatus; if (jcr->JobStatus == JS_Terminated) { - mac_cleanup(jcr, jcr->JobStatus); + migration_cleanup(jcr, jcr->JobStatus); return true; } return false; } +/* + * Callback handler to build a comma-separated list of JobIds + */ +static int jobid_handler(void *ctx, int num_fields, char **row) +{ + POOLMEM *JobIds = (POOLMEM *)ctx; + + if (JobIds[0] != 0) { + pm_strcat(JobIds, ","); + } + pm_strcat(JobIds, row[0]); + return 0; +} + +const char *sql_smallest_vol = + "SELECT MediaId FROM Media,Pool WHERE" + " VolStatus in ('Full','Used') AND" + " Media.PoolId=Pool.PoolId AND Pool.Name='%s'" + " ORDER BY VolBytes ASC LIMIT 1"; + +const char *sql_oldest_vol = + "SELECT MediaId FROM Media,Pool WHERE" + " VolStatus in 
('Full','Used') AND" + " Media.PoolId=Pool.PoolId AND Pool.Name='%s'" + " ORDER BY LastWritten ASC LIMIT 1"; + +const char *sql_jobids_from_mediaid = + "SELECT DISTINCT Job.JobId FROM JobMedia,Job" + " WHERE JobMedia.JobId=Job.JobId AND JobMedia.MediaId=%s" + " ORDER by Job.StartTime"; + + + +/* + * Returns: false on error + * true if OK and jcr->previous_jr filled in + */ +static bool get_job_to_migrate(JCR *jcr) +{ + char ed1[30]; + POOL_MEM query(PM_MESSAGE); + POOLMEM *JobIds = get_pool_memory(PM_MESSAGE); + + if (jcr->MigrateJobId != 0) { + jcr->previous_jr.JobId = jcr->MigrateJobId; + } else { + switch (jcr->job->selection_type) { + case MT_SMALLEST_VOL: + Mmsg(query, sql_smallest_vol, jcr->pool->hdr.name); + /* JobIds already allocated above */ + JobIds[0] = 0; + if (!db_sql_query(jcr->db, query.c_str(), jobid_handler, (void *)JobIds)) { + Jmsg(jcr, M_FATAL, 0, + _("SQL to get Volume failed. ERR=%s\n"), db_strerror(jcr->db)); + goto bail_out; + } + if (JobIds[0] == 0) { + Jmsg(jcr, M_INFO, 0, _("No Volumes found to migrate.\n")); + goto ok_out; + } + Mmsg(query, sql_jobids_from_mediaid, JobIds); + JobIds[0] = 0; + if (!db_sql_query(jcr->db, query.c_str(), jobid_handler, (void *)JobIds)) { + Jmsg(jcr, M_FATAL, 0, + _("SQL to get JobIds failed. ERR=%s\n"), db_strerror(jcr->db)); + goto bail_out; + } + Dmsg1(000, "Jobids=%s\n", JobIds); + goto ok_out; + break; + case MT_OLDEST_VOL: + Mmsg(query, sql_oldest_vol, jcr->pool->hdr.name); + /* JobIds already allocated above */ + JobIds[0] = 0; + if (!db_sql_query(jcr->db, query.c_str(), jobid_handler, (void *)JobIds)) { + Jmsg(jcr, M_FATAL, 0, + _("SQL to get Volume failed. 
ERR=%s\n"), db_strerror(jcr->db)); + goto bail_out; + } + if (JobIds[0] == 0) { + Jmsg(jcr, M_INFO, 0, _("No Volumes found to migrate.\n")); + goto ok_out; + } + Mmsg(query, sql_jobids_from_mediaid, JobIds); + JobIds[0] = 0; + if (!db_sql_query(jcr->db, query.c_str(), jobid_handler, (void *)JobIds)) { + Jmsg(jcr, M_FATAL, 0, + _("SQL to get JobIds failed. ERR=%s\n"), db_strerror(jcr->db)); + goto bail_out; + } + Dmsg1(000, "Jobids=%s\n", JobIds); + goto ok_out; + break; + case MT_POOL_OCCUPANCY: + break; + case MT_POOL_TIME: + break; + case MT_CLIENT: + break; + case MT_VOLUME: + break; + case MT_JOB: + break; + case MT_SQLQUERY: + JobIds[0] = 0; + if (!jcr->job->selection_pattern) { + Jmsg(jcr, M_FATAL, 0, _("No selection pattern specified.\n")); + goto bail_out; + } + if (!db_sql_query(jcr->db, jcr->job->selection_pattern, jobid_handler, (void *)JobIds)) { + Jmsg(jcr, M_FATAL, 0, + _("SQL query failed. ERR=%s\n"), db_strerror(jcr->db)); + goto bail_out; + } + if (JobIds[0] == 0) { + Jmsg(jcr, M_INFO, 0, _("No jobs found to migrate.\n")); + goto ok_out; + } + Dmsg1(000, "Jobids=%s\n", JobIds); + goto ok_out; + break; + default: + Jmsg(jcr, M_FATAL, 0, _("Unknown Migration Selection Type.\n")); + goto bail_out; + } + } + Dmsg1(100, "Last jobid=%d\n", jcr->previous_jr.JobId); + + if (!db_get_job_record(jcr, jcr->db, &jcr->previous_jr)) { + Jmsg(jcr, M_FATAL, 0, _("Could not get job record for JobId %s to migrate. ERR=%s"), + edit_int64(jcr->previous_jr.JobId, ed1), + db_strerror(jcr->db)); + goto bail_out; + } + Jmsg(jcr, M_INFO, 0, _("Migration using JobId=%d Job=%s\n"), + jcr->previous_jr.JobId, jcr->previous_jr.Job); + +ok_out: + free_pool_memory(JobIds); + return true; + +bail_out: + free_pool_memory(JobIds); + return false; +} + /* * Release resources allocated during backup. 
 */ -void mac_cleanup(JCR *jcr, int TermCode) +void migration_cleanup(JCR *jcr, int TermCode) { char sdt[MAX_TIME_LENGTH], edt[MAX_TIME_LENGTH]; - char ec1[30], ec2[30], ec3[30], ec4[30], elapsed[50]; + char ec1[30], ec2[30], ec3[30], ec4[30], ec5[30], elapsed[50]; char term_code[100], sd_term_msg[100]; const char *term_msg; int msg_type; MEDIA_DBR mr; double kbps; utime_t RunTime; - const char *Type; - JCR *tjcr = jcr->target_jcr; + JCR *tjcr = jcr->previous_jcr; POOL_MEM query(PM_MESSAGE); - switch(jcr->JobType) { - case JT_MIGRATE: - Type = "Migration"; - break; - case JT_ARCHIVE: - Type = "Archive"; - break; - case JT_COPY: - Type = "Copy"; - break; - default: - Type = "Unknown"; - break; - } - /* Ensure target is defined to avoid a lot of testing */ if (!tjcr) { tjcr = jcr; @@ -380,7 +461,7 @@ void mac_cleanup(JCR *jcr, int TermCode) tjcr->VolSessionId = jcr->VolSessionId; tjcr->VolSessionTime = jcr->VolSessionTime; - Dmsg2(100, "Enter mac_cleanup %d %c\n", TermCode, TermCode); + Dmsg2(100, "Enter migration_cleanup %d %c\n", TermCode, TermCode); dequeue_messages(jcr); /* display any queued messages */ memset(&mr, 0, sizeof(mr)); set_jcr_job_status(jcr, TermCode); @@ -392,8 +473,8 @@ void mac_cleanup(JCR *jcr, int TermCode) Mmsg(query, "UPDATE Job SET StartTime='%s',EndTime='%s'," "JobTDate=%s WHERE JobId=%s", - jcr->target_jr.cStartTime, jcr->target_jr.cEndTime, - edit_uint64(jcr->target_jr.JobTDate, ec1), + jcr->previous_jr.cStartTime, jcr->previous_jr.cEndTime, + edit_uint64(jcr->previous_jr.JobTDate, ec1), edit_uint64(tjcr->jr.JobId, ec2)); db_sql_query(tjcr->db, query.c_str(), NULL, NULL); @@ -445,7 +526,7 @@ void mac_cleanup(JCR *jcr, int TermCode) term_msg = _("Inappropriate %s term code"); break; } - bsnprintf(term_code, sizeof(term_code), term_msg, Type); + bsnprintf(term_code, sizeof(term_code), term_msg, "Migration"); bstrftimes(sdt, sizeof(sdt), jcr->jr.StartTime); bstrftimes(edt, sizeof(edt), jcr->jr.EndTime); RunTime = jcr->jr.EndTime - 
jcr->jr.StartTime; @@ -490,14 +571,14 @@ void mac_cleanup(JCR *jcr, int TermCode) " Volume name(s): %s\n" " Volume Session Id: %d\n" " Volume Session Time: %d\n" -" Last Volume Bytes: %s\n" +" Last Volume Bytes: %s (%sB)\n" " SD Errors: %d\n" " SD termination status: %s\n" " Termination: %s\n\n"), VERSION, LSMDATE, edt, - jcr->target_jr.JobId, + jcr->previous_jr.JobId, tjcr->jr.JobId, jcr->jr.JobId, jcr->jr.Job, @@ -509,20 +590,21 @@ void mac_cleanup(JCR *jcr, int TermCode) edt, edit_utime(RunTime, elapsed, sizeof(elapsed)), jcr->JobPriority, - edit_uint64_with_commas(jcr->SDJobFiles, ec2), - edit_uint64_with_commas(jcr->SDJobBytes, ec3), - edit_uint64_with_suffix(jcr->jr.JobBytes, ec4), + edit_uint64_with_commas(jcr->SDJobFiles, ec1), + edit_uint64_with_commas(jcr->SDJobBytes, ec2), + edit_uint64_with_suffix(jcr->jr.JobBytes, ec3), (float)kbps, tjcr->VolumeName, jcr->VolSessionId, jcr->VolSessionTime, - edit_uint64_with_commas(mr.VolBytes, ec1), + edit_uint64_with_commas(mr.VolBytes, ec4), + edit_uint64_with_suffix(mr.VolBytes, ec5), jcr->SDErrors, sd_term_msg, term_code); - Dmsg1(100, "Leave mac_cleanup() target_jcr=0x%x\n", jcr->target_jcr); - if (jcr->target_jcr) { - free_jcr(jcr->target_jcr); + Dmsg1(100, "Leave migration_cleanup() previous_jcr=0x%x\n", jcr->previous_jcr); + if (jcr->previous_jcr) { + free_jcr(jcr->previous_jcr); } } diff --git a/bacula/src/dird/protos.h b/bacula/src/dird/protos.h index 83e8817cae..db1d32f6d2 100644 --- a/bacula/src/dird/protos.h +++ b/bacula/src/dird/protos.h @@ -103,10 +103,10 @@ extern bool setup_job(JCR *jcr); extern void create_clones(JCR *jcr); extern bool create_restore_bootstrap_file(JCR *jcr); -/* mac.c */ -extern bool do_mac(JCR *jcr); -extern bool do_mac_init(JCR *jcr); -extern void mac_cleanup(JCR *jcr, int TermCode); +/* migrate.c */ +extern bool do_migration(JCR *jcr); +extern bool do_migration_init(JCR *jcr); +extern void migration_cleanup(JCR *jcr, int TermCode); /* mountreq.c */ diff --git 
a/bacula/src/dird/recycle.c b/bacula/src/dird/recycle.c index dddff5be55..f034fae717 100644 --- a/bacula/src/dird/recycle.c +++ b/bacula/src/dird/recycle.c @@ -9,7 +9,7 @@ */ /* - Copyright (C) 2002-2005 Kern Sibbald + Copyright (C) 2002-2006 Kern Sibbald This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License diff --git a/bacula/src/dird/sql_cmds.c b/bacula/src/dird/sql_cmds.c index 065aeba85b..7090753be9 100644 --- a/bacula/src/dird/sql_cmds.c +++ b/bacula/src/dird/sql_cmds.c @@ -7,7 +7,7 @@ * Version $Id$ */ /* - Copyright (C) 2002-2005 Kern Sibbald + Copyright (C) 2002-2006 Kern Sibbald This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License @@ -68,7 +68,7 @@ const char *drop_deltabs[] = { /* List of SQL commands to create temp table and indicies */ const char *create_deltabs[] = { - "CREATE TABLE DelCandidates (" + "CREATE TEMPORARY TABLE DelCandidates (" #ifdef HAVE_MYSQL "JobId INTEGER UNSIGNED NOT NULL, " "PurgedFiles TINYINT, " @@ -211,7 +211,7 @@ const char *uar_del_temp = "DROP TABLE temp"; const char *uar_del_temp1 = "DROP TABLE temp1"; const char *uar_create_temp = - "CREATE TABLE temp (" + "CREATE TEMPORARY TABLE temp (" #ifdef HAVE_POSTGRESQL "JobId INTEGER NOT NULL," "JobTDate BIGINT," @@ -239,7 +239,7 @@ const char *uar_create_temp = #endif const char *uar_create_temp1 = - "CREATE TABLE temp1 (" + "CREATE TEMPORARY TABLE temp1 (" #ifdef HAVE_POSTGRESQL "JobId INTEGER NOT NULL," "JobTDate BIGINT)"; diff --git a/bacula/src/dird/ua_cmds.c b/bacula/src/dird/ua_cmds.c index a5d48084ed..254810fe00 100644 --- a/bacula/src/dird/ua_cmds.c +++ b/bacula/src/dird/ua_cmds.c @@ -127,7 +127,7 @@ static struct cmdstruct commands[] = { { N_("use"), use_cmd, _("use catalog xxx")}, { N_("var"), var_cmd, _("does variable expansion")}, { N_("version"), version_cmd, _("print Director version")}, - { N_("wait"), wait_cmd, _("wait until no 
jobs are running [<jobname=name> | <jobid=nnn> | <jobuid=xxx>]")}, + { N_("wait"), wait_cmd, _("wait until no jobs are running [<jobname=name> | <jobid=nnn> | <ujobid=xxx>]")}, }; #define comsize (sizeof(commands)/sizeof(struct cmdstruct)) @@ -396,7 +396,7 @@ static int cancel_cmd(UAContext *ua, const char *cmd) bstrncpy(jcr->Job, ua->argv[i], sizeof(jcr->Job)); } break; - } else if (strcasecmp(ua->argk[i], _("jobuid")) == 0) { + } else if (strcasecmp(ua->argk[i], _("ujobid")) == 0) { if (!ua->argv[i]) { break; } @@ -1444,7 +1444,7 @@ int wait_cmd(UAContext *ua, const char *cmd) return 1; } - /* we have jobid, jobname or jobuid argument */ + /* we have jobid, jobname or ujobid argument */ uint32_t jobid = 0 ; @@ -1471,7 +1471,7 @@ int wait_cmd(UAContext *ua, const char *cmd) free_jcr(jcr); } break; - } else if (strcasecmp(ua->argk[i], "jobuid") == 0) { + } else if (strcasecmp(ua->argk[i], "ujobid") == 0) { if (!ua->argv[i]) { break; } diff --git a/bacula/src/dird/ua_dotcmds.c b/bacula/src/dird/ua_dotcmds.c index e426ed8969..688ceb8764 100644 --- a/bacula/src/dird/ua_dotcmds.c +++ b/bacula/src/dird/ua_dotcmds.c @@ -139,7 +139,9 @@ static int jobscmd(UAContext *ua, const char *cmd) JOB *job = NULL; LockRes(); while ( (job = (JOB *)GetNextRes(R_JOB, (RES *)job)) ) { - bsendmsg(ua, "%s\n", job->hdr.name); + if (acl_access_ok(ua, Job_ACL, job->hdr.name)) { + bsendmsg(ua, "%s\n", job->hdr.name); + } } UnlockRes(); return 1; @@ -150,7 +152,9 @@ static int filesetscmd(UAContext *ua, const char *cmd) FILESET *fs = NULL; LockRes(); while ( (fs = (FILESET *)GetNextRes(R_FILESET, (RES *)fs)) ) { - bsendmsg(ua, "%s\n", fs->hdr.name); + if (acl_access_ok(ua, FileSet_ACL, fs->hdr.name)) { + bsendmsg(ua, "%s\n", fs->hdr.name); + } } UnlockRes(); return 1; @@ -161,7 +165,9 @@ static int clientscmd(UAContext *ua, const char *cmd) CLIENT *client = NULL; LockRes(); while ( (client = (CLIENT *)GetNextRes(R_CLIENT, (RES *)client)) ) { - bsendmsg(ua, "%s\n", client->hdr.name); + if (acl_access_ok(ua, Client_ACL, client->hdr.name)) { + bsendmsg(ua, "%s\n", 
client->hdr.name); + } } UnlockRes(); return 1; @@ -183,7 +189,9 @@ static int poolscmd(UAContext *ua, const char *cmd) POOL *pool = NULL; LockRes(); while ( (pool = (POOL *)GetNextRes(R_POOL, (RES *)pool)) ) { - bsendmsg(ua, "%s\n", pool->hdr.name); + if (acl_access_ok(ua, Pool_ACL, pool->hdr.name)) { + bsendmsg(ua, "%s\n", pool->hdr.name); + } } UnlockRes(); return 1; @@ -194,7 +202,9 @@ static int storagecmd(UAContext *ua, const char *cmd) STORE *store = NULL; LockRes(); while ( (store = (STORE *)GetNextRes(R_STORAGE, (RES *)store)) ) { - bsendmsg(ua, "%s\n", store->hdr.name); + if (acl_access_ok(ua, Storage_ACL, store->hdr.name)) { + bsendmsg(ua, "%s\n", store->hdr.name); + } } UnlockRes(); return 1; @@ -226,6 +236,10 @@ static int backupscmd(UAContext *ua, const char *cmd) if (ua->argc != 3 || strcmp(ua->argk[1], "client") != 0 || strcmp(ua->argk[2], "fileset") != 0) { return 1; } + if (!acl_access_ok(ua, Client_ACL, ua->argv[1]) || + !acl_access_ok(ua, FileSet_ACL, ua->argv[2])) { + return 1; + } Mmsg(ua->cmd, client_backups, ua->argv[1], ua->argv[2]); if (!db_sql_query(ua->db, ua->cmd, client_backups_handler, (void *)ua)) { bsendmsg(ua, _("Query failed: %s. 
ERR=%s\n"), ua->cmd, db_strerror(ua->db)); @@ -246,8 +260,6 @@ static int levelscmd(UAContext *ua, const char *cmd) return 1; } - - /* * Return default values for a job */ @@ -264,6 +276,9 @@ static int defaultscmd(UAContext *ua, const char *cmd) /* Job defaults */ if (strcmp(ua->argk[1], "job") == 0) { + if (!acl_access_ok(ua, Job_ACL, ua->argv[1])) { + return 1; + } job = (JOB *)GetResWithName(R_JOB, ua->argv[1]); if (job) { STORE *store; @@ -282,6 +297,9 @@ static int defaultscmd(UAContext *ua, const char *cmd) } /* Client defaults */ else if (strcmp(ua->argk[1], "client") == 0) { + if (!acl_access_ok(ua, Client_ACL, ua->argv[1])) { + return 1; + } client = (CLIENT *)GetResWithName(R_CLIENT, ua->argv[1]); if (client) { bsendmsg(ua, "client=%s", client->hdr.name); @@ -294,6 +312,9 @@ static int defaultscmd(UAContext *ua, const char *cmd) } /* Storage defaults */ else if (strcmp(ua->argk[1], "storage") == 0) { + if (!acl_access_ok(ua, Storage_ACL, ua->argv[1])) { + return 1; + } storage = (STORE *)GetResWithName(R_STORAGE, ua->argv[1]); DEVICE *device; if (storage) { @@ -312,6 +333,9 @@ static int defaultscmd(UAContext *ua, const char *cmd) } /* Pool defaults */ else if (strcmp(ua->argk[1], "pool") == 0) { + if (!acl_access_ok(ua, Pool_ACL, ua->argv[1])) { + return 1; + } pool = (POOL *)GetResWithName(R_POOL, ua->argv[1]); if (pool) { bsendmsg(ua, "pool=%s", pool->hdr.name); diff --git a/bacula/src/dird/ua_output.c b/bacula/src/dird/ua_output.c index 039a1db9c0..36ae41ec82 100644 --- a/bacula/src/dird/ua_output.c +++ b/bacula/src/dird/ua_output.c @@ -202,7 +202,7 @@ bail_out: * * list jobs - lists all jobs run * list jobid=nnn - list job data for jobid - * list jobuid=uname - list job data for unique jobid + * list ujobid=uname - list job data for unique jobid * list job=name - list all jobs with "name" * list jobname=name - same as above * list jobmedia jobid= @@ -285,8 +285,8 @@ static int do_list_cmd(UAContext *ua, const char *cmd, e_list_type llist) jr.JobId = 
0; db_list_job_records(ua->jcr, ua->db, &jr, prtit, ua, llist); - /* List JOBUID=xxx */ - } else if (strcasecmp(ua->argk[i], N_("jobuid")) == 0 && ua->argv[i]) { + /* List UJOBID=xxx */ + } else if (strcasecmp(ua->argk[i], N_("ujobid")) == 0 && ua->argv[i]) { bstrncpy(jr.Job, ua->argv[i], MAX_NAME_LENGTH); jr.JobId = 0; db_list_job_records(ua->jcr, ua->db, &jr, prtit, ua, llist); @@ -295,7 +295,7 @@ static int do_list_cmd(UAContext *ua, const char *cmd, e_list_type llist) } else if (strcasecmp(ua->argk[i], N_("files")) == 0) { for (j=i+1; j<ua->argc; j++) { - if (strcasecmp(ua->argk[j], N_("jobuid")) == 0 && ua->argv[j]) { + if (strcasecmp(ua->argk[j], N_("ujobid")) == 0 && ua->argv[j]) { bstrncpy(jr.Job, ua->argv[j], MAX_NAME_LENGTH); jr.JobId = 0; db_get_job_record(ua->jcr, ua->db, &jr); @@ -314,7 +314,7 @@ static int do_list_cmd(UAContext *ua, const char *cmd, e_list_type llist) } else if (strcasecmp(ua->argk[i], N_("jobmedia")) == 0) { int done = FALSE; for (j=i+1; j<ua->argc; j++) { - if (strcasecmp(ua->argk[j], N_("jobuid")) == 0 && ua->argv[j]) { + if (strcasecmp(ua->argk[j], N_("ujobid")) == 0 && ua->argv[j]) { bstrncpy(jr.Job, ua->argv[j], MAX_NAME_LENGTH); jr.JobId = 0; db_get_job_record(ua->jcr, ua->db, &jr); @@ -352,7 +352,7 @@ static int do_list_cmd(UAContext *ua, const char *cmd, e_list_type llist) strcasecmp(ua->argk[i], N_("volumes")) == 0) { bool done = false; for (j=i+1; j<ua->argc; j++) { - if (strcasecmp(ua->argk[j], N_("jobuid")) == 0 && ua->argv[j]) { + if (strcasecmp(ua->argk[j], N_("ujobid")) == 0 && ua->argv[j]) { bstrncpy(jr.Job, ua->argv[j], MAX_NAME_LENGTH); jr.JobId = 0; db_get_job_record(ua->jcr, ua->db, &jr); diff --git a/bacula/src/dird/ua_restore.c b/bacula/src/dird/ua_restore.c index 9fd21a4e72..2d29849083 100644 --- a/bacula/src/dird/ua_restore.c +++ b/bacula/src/dird/ua_restore.c @@ -13,7 +13,7 @@ * Version $Id$ */ /* - Copyright (C) 2002-2005 Kern Sibbald + Copyright (C) 2002-2006 Kern Sibbald This program is free software; you can 
redistribute it and/or modify it under the terms of the GNU General Public License @@ -1153,7 +1153,14 @@ bail_out: } -/* Return next JobId from comma separated list */ +/* + * Return next JobId from comma separated list + * + * Returns: + * 1 if next JobId returned + * 0 if no more JobIds are in list + * -1 there is an error + */ int get_next_jobid_from_list(char **p, JobId_t *JobId) { char jobid[30]; diff --git a/bacula/src/dird/ua_run.c b/bacula/src/dird/ua_run.c index 470cd5bd96..e486bf817a 100644 --- a/bacula/src/dird/ua_run.c +++ b/bacula/src/dird/ua_run.c @@ -7,7 +7,7 @@ * Version $Id$ */ /* - Copyright (C) 2001-2005 Kern Sibbald + Copyright (C) 2001-2006 Kern Sibbald This program is free software; you can redistribute it and/or modify it under the terms of the GNU General Public License @@ -47,7 +47,7 @@ int run_cmd(UAContext *ua, const char *cmd) char *where, *fileset_name, *client_name, *bootstrap; const char *replace; char *when, *verify_job_name, *catalog_name; - char *migration_job_name; + char *previous_job_name; char *since = NULL; char *verify_list; bool cloned = false; @@ -56,7 +56,7 @@ int run_cmd(UAContext *ua, const char *cmd) bool kw_ok; JOB *job = NULL; JOB *verify_job = NULL; - JOB *migration_job = NULL; + JOB *previous_job = NULL; STORE *store = NULL; CLIENT *client = NULL; FILESET *fileset = NULL; @@ -104,7 +104,7 @@ int run_cmd(UAContext *ua, const char *cmd) bootstrap = NULL; replace = NULL; verify_job_name = NULL; - migration_job_name = NULL; + previous_job_name = NULL; catalog_name = NULL; verify_list = NULL; @@ -114,7 +114,7 @@ int run_cmd(UAContext *ua, const char *cmd) /* Keep looking until we find a good keyword */ for (j=0; !kw_ok && kw[j]; j++) { if (strcasecmp(ua->argk[i], _(kw[j])) == 0) { - /* Note, yes and run have no value, so do not err */ + /* Note, yes and run have no value, so do not fail */ if (!ua->argv[i] && j != YES_POS /*yes*/) { bsendmsg(ua, _("Value missing for keyword %s\n"), ua->argk[i]); return 1; @@ -259,11 
+259,11 @@ int run_cmd(UAContext *ua, const char *cmd) kw_ok = true; break; case 21: /* Migration Job */ - if (migration_job_name) { + if (previous_job_name) { bsendmsg(ua, _("Migration Job specified twice.\n")); return 0; } - migration_job_name = ua->argv[i]; + previous_job_name = ua->argv[i]; kw_ok = true; break; @@ -414,14 +414,14 @@ int run_cmd(UAContext *ua, const char *cmd) verify_job = job->verify_job; } - if (migration_job_name) { - migration_job = (JOB *)GetResWithName(R_JOB, migration_job_name); - if (!migration_job) { - bsendmsg(ua, _("Migration Job \"%s\" not found.\n"), migration_job_name); - migration_job = select_job_resource(ua); + if (previous_job_name) { + previous_job = (JOB *)GetResWithName(R_JOB, previous_job_name); + if (!previous_job) { + bsendmsg(ua, _("Migration Job \"%s\" not found.\n"), previous_job_name); + previous_job = select_job_resource(ua); } } else { - migration_job = job->verify_job; + previous_job = job->verify_job; } @@ -433,7 +433,7 @@ int run_cmd(UAContext *ua, const char *cmd) set_jcr_defaults(jcr, job); jcr->verify_job = verify_job; - jcr->migration_job = migration_job; + jcr->previous_job = previous_job; set_storage(jcr, store); jcr->client = client; jcr->fileset = fileset; @@ -513,6 +513,7 @@ try_again: } } if (jid) { + /* Note, this is also MigrateJobId */ jcr->RestoreJobId = str_to_int64(jid); } @@ -673,7 +674,7 @@ try_again: "FileSet: %s\n" "Client: %s\n" "Storage: %s\n" - "Migration Job: %s\n" + "JobId: %s\n" "When: %s\n" "Catalog: %s\n" "Priority: %d\n"), @@ -684,7 +685,7 @@ try_again: jcr->fileset->hdr.name, jcr->client->hdr.name, jcr->store->hdr.name, - jcr->migration_job->hdr.name, + jcr->MigrateJobId==0?"*None*":edit_uint64(jcr->MigrateJobId, ec1), bstrutime(dt, sizeof(dt), jcr->sched_time), jcr->catalog->hdr.name, jcr->JobPriority); diff --git a/bacula/src/dird/ua_select.c b/bacula/src/dird/ua_select.c index 29c9efd997..451e9ec8c3 100644 --- a/bacula/src/dird/ua_select.c +++ b/bacula/src/dird/ua_select.c @@ 
-626,7 +626,7 @@ int get_job_dbr(UAContext *ua, JOB_DBR *jr) int i; for (i=1; i<ua->argc; i++) { - if (strcasecmp(ua->argk[i], N_("jobuid")) == 0 && ua->argv[i]) { + if (strcasecmp(ua->argk[i], N_("ujobid")) == 0 && ua->argv[i]) { jr->JobId = 0; bstrncpy(jr->Job, ua->argv[i], sizeof(jr->Job)); } else if (strcasecmp(ua->argk[i], N_("jobid")) == 0 && ua->argv[i]) { @@ -845,9 +845,9 @@ STORE *get_storage_resource(UAContext *ua, bool use_default) store = jcr->store; free_jcr(jcr); break; - } else if (strcasecmp(ua->argk[i], N_("jobuid")) == 0) { + } else if (strcasecmp(ua->argk[i], N_("ujobid")) == 0) { if (!ua->argv[i]) { - bsendmsg(ua, _("Expecting jobuid=xxx, got: %s.\n"), ua->argk[i]); + bsendmsg(ua, _("Expecting ujobid=xxx, got: %s.\n"), ua->argk[i]); return NULL; } if (!(jcr=get_jcr_by_full_name(ua->argv[i]))) { diff --git a/bacula/src/dird/verify.c b/bacula/src/dird/verify.c index 45b0246a58..df5518664b 100644 --- a/bacula/src/dird/verify.c +++ b/bacula/src/dird/verify.c @@ -60,7 +60,7 @@ bool do_verify_init(JCR *jcr) JobId_t verify_jobid = 0; const char *Name; - memset(&jcr->target_jr, 0, sizeof(jcr->target_jr)); + memset(&jcr->previous_jr, 0, sizeof(jcr->previous_jr)); Dmsg1(9, "bdird: created client %s record\n", jcr->client->hdr.name); @@ -104,19 +104,19 @@ bool do_verify_init(JCR *jcr) if (jcr->JobLevel == L_VERIFY_CATALOG || jcr->JobLevel == L_VERIFY_VOLUME_TO_CATALOG || jcr->JobLevel == L_VERIFY_DISK_TO_CATALOG) { - jcr->target_jr.JobId = verify_jobid; - if (!db_get_job_record(jcr, jcr->db, &jcr->target_jr)) { + jcr->previous_jr.JobId = verify_jobid; + if (!db_get_job_record(jcr, jcr->db, &jcr->previous_jr)) { Jmsg(jcr, M_FATAL, 0, _("Could not get job record for previous Job. ERR=%s"), db_strerror(jcr->db)); return false; } - if (jcr->target_jr.JobStatus != 'T') { + if (jcr->previous_jr.JobStatus != 'T') { Jmsg(jcr, M_FATAL, 0, _("Last Job %d did not terminate normally. 
          JobStatus=%c\n"),
-         verify_jobid, jcr->target_jr.JobStatus);
+         verify_jobid, jcr->previous_jr.JobStatus);
       return false;
    }
    Jmsg(jcr, M_INFO, 0, _("Verifying against JobId=%d Job=%s\n"),
-      jcr->target_jr.JobId, jcr->target_jr.Job);
+      jcr->previous_jr.JobId, jcr->previous_jr.Job);
 }
 
 /*
@@ -136,7 +136,7 @@ bool do_verify_init(JCR *jcr)
    if (jcr->JobLevel == L_VERIFY_DISK_TO_CATALOG && jcr->verify_job) {
       jcr->fileset = jcr->verify_job->fileset;
    }
-   Dmsg2(100, "ClientId=%u JobLevel=%c\n", jcr->target_jr.ClientId, jcr->JobLevel);
+   Dmsg2(100, "ClientId=%u JobLevel=%c\n", jcr->previous_jr.ClientId, jcr->JobLevel);
    return true;
 }
 
@@ -285,19 +285,19 @@ bool do_verify(JCR *jcr)
       Dmsg0(10, "Verify level=catalog\n");
       jcr->sd_msg_thread_done = true;   /* no SD msg thread, so it is done */
       jcr->SDJobStatus = JS_Terminated;
-      get_attributes_and_compare_to_catalog(jcr, jcr->target_jr.JobId);
+      get_attributes_and_compare_to_catalog(jcr, jcr->previous_jr.JobId);
       break;
 
    case L_VERIFY_VOLUME_TO_CATALOG:
       Dmsg0(10, "Verify level=volume\n");
-      get_attributes_and_compare_to_catalog(jcr, jcr->target_jr.JobId);
+      get_attributes_and_compare_to_catalog(jcr, jcr->previous_jr.JobId);
       break;
 
    case L_VERIFY_DISK_TO_CATALOG:
       Dmsg0(10, "Verify level=disk_to_catalog\n");
       jcr->sd_msg_thread_done = true;   /* no SD msg thread, so it is done */
       jcr->SDJobStatus = JS_Terminated;
-      get_attributes_and_compare_to_catalog(jcr, jcr->target_jr.JobId);
+      get_attributes_and_compare_to_catalog(jcr, jcr->previous_jr.JobId);
       break;
 
    case L_VERIFY_INIT:
@@ -421,7 +421,7 @@ void verify_cleanup(JCR *jcr, int TermCode)
          jcr->fileset->hdr.name,
          level_to_str(jcr->JobLevel),
          jcr->client->hdr.name,
-         jcr->target_jr.JobId,
+         jcr->previous_jr.JobId,
          Name,
          sdt,
          edt,
@@ -454,7 +454,7 @@ void verify_cleanup(JCR *jcr, int TermCode)
          jcr->fileset->hdr.name,
          level_to_str(jcr->JobLevel),
          jcr->client->hdr.name,
-         jcr->target_jr.JobId,
+         jcr->previous_jr.JobId,
          Name,
          sdt,
          edt,
@@ -551,7 +551,7 @@ int get_attributes_and_compare_to_catalog(JCR *jcr, JobId_t JobId)
     */
    fdbr.FileId = 0;
    if (!db_get_file_attributes_record(jcr, jcr->db, jcr->fname,
-        &jcr->target_jr, &fdbr)) {
+        &jcr->previous_jr, &fdbr)) {
       Jmsg(jcr, M_INFO, 0, _("New file: %s\n"), jcr->fname);
       Dmsg1(020, _("File not in catalog: %s\n"), jcr->fname);
       stat = JS_Differences;
diff --git a/bacula/src/filed/job.c b/bacula/src/filed/job.c
index 56e43efcbf..fffe62bf20 100644
--- a/bacula/src/filed/job.c
+++ b/bacula/src/filed/job.c
@@ -7,7 +7,7 @@
  *
  */
 /*
-   Copyright (C) 2000-2005 Kern Sibbald
+   Copyright (C) 2000-2006 Kern Sibbald
 
    This program is free software; you can redistribute it and/or
    modify it under the terms of the GNU General Public License
@@ -870,32 +870,32 @@ static void set_options(findFOPTS *fo, const char *opts)
          fo->flags |= FO_READFIFO;
          break;
       case 'S':
-        switch(*(p + 1)) {
+         switch(*(p + 1)) {
          case ' ':
             /* Old director did not specify SHA variant */
             fo->flags |= FO_SHA1;
             break;
-        case '1':
-           fo->flags |= FO_SHA1;
+         case '1':
+            fo->flags |= FO_SHA1;
             p++;
-           break;
+            break;
 #ifdef HAVE_SHA2
-        case '2':
-           fo->flags |= FO_SHA256;
+         case '2':
+            fo->flags |= FO_SHA256;
             p++;
-           break;
-        case '3':
-           fo->flags |= FO_SHA512;
+            break;
+         case '3':
+            fo->flags |= FO_SHA512;
             p++;
-           break;
+            break;
 #endif
-        default:
-           /* Automatically downgrade to SHA-1 if an unsupported
-            * SHA variant is specified */
-           fo->flags |= FO_SHA1;
+         default:
+            /* Automatically downgrade to SHA-1 if an unsupported
+             * SHA variant is specified */
+            fo->flags |= FO_SHA1;
             p++;
-           break;
-        }
+            break;
+         }
          break;
       case 's':
          fo->flags |= FO_SPARSE;
diff --git a/bacula/src/filed/pythonfd.c b/bacula/src/filed/pythonfd.c
index 4ba9c9c5ad..22008eebac 100644
--- a/bacula/src/filed/pythonfd.c
+++ b/bacula/src/filed/pythonfd.c
@@ -9,7 +9,7 @@
  */
 
 /*
-   Copyright (C) 2005 Kern Sibbald
+   Copyright (C) 2005-2006 Kern Sibbald
 
    This program is free software; you can redistribute it and/or
    modify it under the terms of the GNU General Public License as
diff --git a/bacula/src/findlib/bfile.c b/bacula/src/findlib/bfile.c
index 8f1321727a..ec0f3beeb6 100644
--- a/bacula/src/findlib/bfile.c
+++ b/bacula/src/findlib/bfile.c
@@ -9,7 +9,7 @@
  *
  */
 /*
-   Copyright (C) 2003-2005 Kern Sibbald
+   Copyright (C) 2003-2006 Kern Sibbald
 
    This program is free software; you can redistribute it and/or
    modify it under the terms of the GNU General Public License
@@ -761,6 +761,7 @@ int bopen(BFILE *bfd, const char *fname, int flags, mode_t mode)
    }
 
    /* Normal file open */
+   Dmsg1(400, "open file %s\n", fname);
    bfd->fid = open(fname, flags, mode);
    bfd->berrno = errno;
    Dmsg1(400, "Open file %d\n", bfd->fid);
diff --git a/bacula/src/findlib/bfile.h b/bacula/src/findlib/bfile.h
index dc724fa7d4..46b58cc1c7 100644
--- a/bacula/src/findlib/bfile.h
+++ b/bacula/src/findlib/bfile.h
@@ -6,7 +6,7 @@
  *   Kern Sibbald May MMIII
  */
 /*
-   Copyright (C) 2003-2005 Kern Sibbald
+   Copyright (C) 2003-2006 Kern Sibbald
 
    This program is free software; you can redistribute it and/or
    modify it under the terms of the GNU General Public License
diff --git a/bacula/src/findlib/create_file.c b/bacula/src/findlib/create_file.c
index 0cea7be666..4e4312f713 100644
--- a/bacula/src/findlib/create_file.c
+++ b/bacula/src/findlib/create_file.c
@@ -7,7 +7,7 @@
  *
  */
 /*
-   Copyright (C) 2000-2005 Kern Sibbald
+   Copyright (C) 2000-2006 Kern Sibbald
 
    This program is free software; you can redistribute it and/or
    modify it under the terms of the GNU General Public License
@@ -72,7 +72,7 @@ int create_file(JCR *jcr, ATTR *attr, BFILE *bfd, int replace)
    }
 
    new_mode = attr->statp.st_mode;
-   Dmsg2(300, "newmode=%x file=%s\n", new_mode, attr->ofname);
+   Dmsg3(200, "type=%d newmode=%x file=%s\n", attr->type, new_mode, attr->ofname);
    parent_mode = S_IWUSR | S_IXUSR | new_mode;
    gid = attr->statp.st_gid;
    uid = attr->statp.st_uid;
@@ -104,11 +104,11 @@ int create_file(JCR *jcr, ATTR *attr, BFILE *bfd, int replace)
       }
    }
    switch (attr->type) {
+   case FT_RAW:                       /* raw device to be written */
+   case FT_FIFO:                      /* FIFO to be written to */
    case FT_LNKSAVED:                  /* Hard linked, file already saved */
    case FT_LNK:
-   case FT_RAW:
-   case FT_FIFO:
-   case FT_SPEC:
+   case FT_SPEC:                      /* fifo, ... to be backed up */
    case FT_REGE:                      /* empty file */
    case FT_REG:                       /* regular file */
       /*
@@ -117,7 +117,7 @@ int create_file(JCR *jcr, ATTR *attr, BFILE *bfd, int replace)
        * we may blow away a FIFO that is being used to read the
        * restore data, or we may blow away a partition definition.
        */
-      if (exists && attr->type != FT_RAW) {
+      if (exists && attr->type != FT_RAW && attr->type != FT_FIFO) {
          /* Get rid of old copy */
          if (unlink(attr->ofname) == -1) {
            berrno be;
@@ -284,6 +284,7 @@ int create_file(JCR *jcr, ATTR *attr, BFILE *bfd, int replace)
       mode = O_WRONLY | O_BINARY;
       /* Timeout open() in 60 seconds */
       if (attr->type == FT_FIFO) {
+         Dmsg0(200, "Set FIFO timer\n");
          tid = start_thread_timer(pthread_self(), 60);
       } else {
          tid = NULL;
@@ -291,6 +292,7 @@ int create_file(JCR *jcr, ATTR *attr, BFILE *bfd, int replace)
       if (is_bopen(bfd)) {
          Qmsg1(jcr, M_ERROR, 0, _("bpkt already open fid=%d\n"), bfd->fid);
       }
+      Dmsg2(200, "open %s mode=0x%x\n", attr->ofname, mode);
       if ((bopen(bfd, attr->ofname, mode, 0)) < 0) {
          berrno be;
          be.set_errno(bfd->berrno);
diff --git a/bacula/src/jcr.h b/bacula/src/jcr.h
index a0e51aaaf2..096661c7d6 100644
--- a/bacula/src/jcr.h
+++ b/bacula/src/jcr.h
@@ -74,6 +74,18 @@
 #define JS_WaitStartTime         't'  /* Waiting for start time */
 #define JS_WaitPriority          'p'  /* Waiting for higher priority jobs to finish */
 
+/* Migration selection types */
+enum {
+   MT_SMALLEST_VOL = 1,
+   MT_OLDEST_VOL,
+   MT_POOL_OCCUPANCY,
+   MT_POOL_TIME,
+   MT_CLIENT,
+   MT_VOLUME,
+   MT_JOB,
+   MT_SQLQUERY
+};
+
 #define job_canceled(jcr) \
   (jcr->JobStatus == JS_Canceled || \
    jcr->JobStatus == JS_ErrorTerminated || \
@@ -164,10 +176,7 @@ public:
    volatile bool sd_msg_thread_done;  /* Set when Storage message thread terms */
    BSOCK *ua;                         /* User agent */
    JOB *job;                          /* Job resource */
-   union {
-      JOB *verify_job;                /* Job resource of verify target job */
-      JOB *migration_job;             /* Job resource of migration target job */
-   };
+   JOB *verify_job;                   /* Job resource of verify previous job */
    alist *storage;                    /* Storage possibilities */
    STORE *store;                      /* Storage daemon selected */
    CLIENT *client;                    /* Client resource */
@@ -189,11 +198,15 @@ public:
    uint32_t FileIndex;                /* Last FileIndex processed */
    POOLMEM *fname;                    /* name to put into catalog */
    JOB_DBR jr;                        /* Job DB record for current job */
-   JOB_DBR target_jr;                 /* target job */
-   JCR *target_jcr;                   /* target job control record */
+   JOB_DBR previous_jr;               /* previous job database record */
+   JOB *previous_job;                 /* Job resource of migration previous job */
+   JCR *previous_jcr;                 /* previous job control record */
    char FSCreateTime[MAX_TIME_LENGTH]; /* FileSet CreateTime as returned from DB */
    char since[MAX_TIME_LENGTH];       /* since time */
-   uint32_t RestoreJobId;             /* Id specified by UA */
+   union {
+      JobId_t RestoreJobId;           /* Id specified by UA */
+      JobId_t MigrateJobId;
+   };
    POOLMEM *client_uname;             /* client uname */
    int replace;                       /* Replace option */
    int NumVols;                       /* Number of Volume used in pool */
diff --git a/bacula/src/lib/bpipe.c b/bacula/src/lib/bpipe.c
index ca82913663..b87f106ab5 100644
--- a/bacula/src/lib/bpipe.c
+++ b/bacula/src/lib/bpipe.c
@@ -6,7 +6,7 @@
  *   Version $Id$
  */
 /*
-   Copyright (C) 2002-2005 Kern Sibbald
+   Copyright (C) 2002-2006 Kern Sibbald
 
    This program is free software; you can redistribute it and/or
    modify it under the terms of the GNU General Public License
@@ -62,13 +62,13 @@ BPIPE *open_bpipe(char *prog, int wait, const char *mode)
       printf("argc=%d argv=%s:\n", i, bargv[i]);
    }
 #endif
-   free_pool_memory(tprog);
 
    /* Each pipe is one way, write one end, read the other, so we need two */
    if (mode_write && pipe(writep) == -1) {
       save_errno = errno;
       free(bpipe);
       errno = save_errno;
+      free_pool_memory(tprog);
       return NULL;
    }
    if (mode_read && pipe(readp) == -1) {
@@ -79,6 +79,7 @@ BPIPE *open_bpipe(char *prog, int wait, const char *mode)
       }
       free(bpipe);
       errno = save_errno;
+      free_pool_memory(tprog);
       return NULL;
    }
    /* Start worker process */
@@ -95,6 +96,7 @@ BPIPE *open_bpipe(char *prog, int wait, const char *mode)
       }
       free(bpipe);
       errno = save_errno;
+      free_pool_memory(tprog);
      return NULL;
 
    case 0:                            /* child */
@@ -120,11 +122,10 @@ BPIPE *open_bpipe(char *prog, int wait, const char *mode)
       }
       exit(255);                      /* unknown errno */
 
-
-
    default:                           /* parent */
       break;
    }
+   free_pool_memory(tprog);
    if (mode_read) {
       close(readp[1]);                /* close unused parent fds */
       bpipe->rfd = fdopen(readp[0], "r"); /* open file descriptor */
diff --git a/bacula/src/lib/util.c b/bacula/src/lib/util.c
index 90cf6044a0..95799f2498 100644
--- a/bacula/src/lib/util.c
+++ b/bacula/src/lib/util.c
@@ -510,7 +510,7 @@ void make_session_key(char *key, char *seed, int mode)
  *  %d = Director's name
  *  %e = Job Exit code
  *  %i = JobId
- *  %j = Unique Job name
+ *  %j = Unique Job id
  *  %l = job level
  *  %n = Unadorned Job name
  *  %s = Since time
diff --git a/bacula/src/version.h b/bacula/src/version.h
index 6b8ec0a68a..3f38819a6b 100644
--- a/bacula/src/version.h
+++ b/bacula/src/version.h
@@ -4,8 +4,8 @@
 #undef  VERSION
 #define VERSION "1.39.6"
-#define BDATE   "27 February 2006"
-#define LSMDATE "27Feb06"
+#define BDATE   "08 March 2006"
+#define LSMDATE "08Mar06"
 
 /* Debug flags */
 #undef  DEBUG
diff --git a/bacula/src/win32/README.win32 b/bacula/src/win32/README.win32
index c58c04ee73..44b68673af 100644
--- a/bacula/src/win32/README.win32
+++ b/bacula/src/win32/README.win32
@@ -4,7 +4,7 @@ environment for building the native Win32 Bacula File daemon,
 the native Win32 bconsole program and the wx-console GUI console
 program.
 
-The directory structure is:
+The directory structure that you need to have is:
 
    bacula/src/win32        Makefiles and scripts
       baculafd             Visual Studio Files
      Release               Release objects, and bacula-fd.exe
@@ -34,7 +34,6 @@ Win32 Bacula.  It can be found in the Source Forge Bacula
 project release section. Docs is released as a separate tar
 file, which is created from the bacula CVS docs project (module).
 
-
 Instructions if you want to build bacula-fd with VSS
 (Volume Shadow Copy Service) support. Note, the non-VSS
 build is no longer supported though you may be able to
 get it to work by
@@ -59,10 +58,13 @@ below:
 below.
 
 To build it:
+- We are using Microsoft Visual Studio .NET 2003 as the compiler,
+  but the build is done via scripting using the latest Cygwin
+  environment.
 - For this version of Bacula, you must have msvcr71.dll
-  installed in c:/windows/system32.  The winbacula.nsi.in
+  installed in c:/windows/system32 (i.e. Windows VC++ 2003)
+  The winbacula.nsi.in
   and pebuilder Makefile.in files have this hard coded in.
-- We are using Microsoft Visual Studio .NET 2003.
 - Make sure nmake is on your PATH.
 - Make sure your COMSPEC is properly setup (see full dump
   of my cygwin environment below).
-- 
2.39.5