2003-11-xx Version 1.33 xxNov03
+24Nov03
+- Sort FileSet selection list by CreateTime.
+- Add "lsmark" and "estimate" to the tree routines.
+- Doing a mark or unmark now prints how many entries were changed.
+- Add command argument parsing to btape.c
+- Enhance EOT to print file:block in the message.
+- Add repeat counts on btape bsf, fsf, bsr, fsr, and weof commands.
+- Enhance btape's fill command to be much clearer and more reliable.
+- Add state file to btape so that unfill command can be done any time
+ after a fill command.
+- Use reposition_dev() to position for read back of last block.
+22Nov03
+- Zap InChanger flag only if setting Slot to non-zero.
+- Added new SD directive TwoEOF, default off, that tells Bacula whether
+ to write one or two EOFs for EOM. For the OnStream driver it must
+ be off, otherwise an empty file will be created.
+- Cleaned up the btape "fill" command to compare the last block written
+ and read rather than just printing them.
+21Nov03
+- Implement btape test for autochanger.
+- Implement btape test for Fast Forward Space File.
+- Moved up to cygwin 1.5.5-1
+- Implemented Fast Forward Space File
+20Nov03
+- Add support for selecting volumes from InChanger list first, then
+ selecting from all available volumes.
+- Ensure that Volumes are selected by oldest LastWritten date/time.
+- A couple of bug fixes ensuring the proper ordering of volumes.
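The selection order described above can be sketched as a simple comparator: prefer Volumes flagged InChanger, and within a group take the oldest LastWritten. This is an illustrative sketch, not Bacula's actual selection code; the struct and field names are hypothetical stand-ins for the catalog fields.

```c
/* Sketch (not Bacula's actual code): pick the next Volume by preferring
 * Volumes flagged InChanger, then the oldest LastWritten time. The Vol
 * struct and its fields are illustrative, not the real catalog schema. */
#include <time.h>

typedef struct {
    const char *name;
    int in_changer;        /* 1 if the Volume is in the autochanger */
    time_t last_written;   /* 0 means never written */
} Vol;

/* Return the index of the preferred Volume, or -1 if the list is empty. */
int pick_next_volume(const Vol *vols, int n)
{
    int best = -1;
    for (int i = 0; i < n; i++) {
        if (best < 0) { best = i; continue; }
        /* InChanger Volumes are always preferred over non-changer ones */
        if (vols[i].in_changer != vols[best].in_changer) {
            if (vols[i].in_changer) best = i;
            continue;
        }
        /* Within the same group, take the oldest LastWritten date/time */
        if (vols[i].last_written < vols[best].last_written) best = i;
    }
    return best;
}
```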
+19Nov03
+- Return oldest LastWritten for find_next_volume.
+- Remove ASSERT from stored/acquire.c that could trip when it shouldn't.
+- Enhance SD status if debug_level > 1 to show details of dev status.
18Nov03
+- Create update_bacula_tables, ... scripts and modify configure and Makefiles
+- Eliminate is_num() and use is_an_integer().
+- Add user slot selection code "slots=1,2-3,5,10, ..."
- Start daemons at level 90 rather than 20 so that MySQL will already
be started.
- Write alter_mysql_tables.in and alter_sqlite_tables.in
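The "slots=1,2-3,5,10,..." syntax mentioned above takes single slots and inclusive ranges separated by commas. A minimal parser for that shape could look like the sketch below; it is not the actual Bacula parser, and MAX_SLOT is an arbitrary illustrative bound.

```c
/* Sketch of parsing a user slot list such as "1,2-3,5" into a set of
 * selected slots. Mirrors the "slots=" syntax described above, but is
 * not Bacula's real parser. */
#include <stdlib.h>
#include <string.h>

#define MAX_SLOT 100   /* arbitrary bound for this sketch */

/* Mark sel[s] = 1 for each selected slot; return 0 on parse error. */
int parse_slots(const char *spec, char sel[MAX_SLOT + 1])
{
    memset(sel, 0, MAX_SLOT + 1);
    const char *p = spec;
    while (*p) {
        char *end;
        long lo = strtol(p, &end, 10);
        if (end == p || lo < 1 || lo > MAX_SLOT) return 0;
        long hi = lo;
        if (*end == '-') {                /* a range such as 2-3 */
            p = end + 1;
            hi = strtol(p, &end, 10);
            if (end == p || hi < lo || hi > MAX_SLOT) return 0;
        }
        for (long s = lo; s <= hi; s++) sel[s] = 1;
        if (*end == ',') end++;
        else if (*end != '\0') return 0;
        p = end;
    }
    return 1;
}
```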
09Nov03
- Implement new code that ensures that a non-zero Slot is unique within
a given Pool. When setting a non-zero Slot, the Slot of all other
- Volumes with the same Slot is set to zero.
+ Volumes with the same Slot is set to zero. Redone later to add
+ InChanger flag.
07Nov03
- Fix bug reported by Lars where an incorrect Volume name was printed
by the "status dir" command.
Release Notes for Bacula 1.33
- Bacula code: Total files = 259 Total lines = 78,087 (*.h *.c *.in)
+ Bacula code: Total files = 259 Total lines = 78,302 (*.h *.c *.in)
Most Significant Changes since 1.32d
- Implement "update slots scan" that reads the volume label.
-
-
+- Turn off the InChanger flag for Volumes that are no longer in the changer.
+- Enhance "fill" command of btape.
+- Added an autochanger test to the btape test command.
+- New "estimate" and "lsmark" in restore command. Estimate gives
+ a byte estimate for the restore, and lsmark does an ls listing
+ of marked files.
+- Implement Fast Forward Space File.
+- Upgraded to Cygwin version 1.5.5.
+- Select the oldest LastWritten volume during recycling.
+- Modify SD to update the catalog database when it is shut down,
+ even if the job is canceled.
+
+Other Changes since 1.32d
+- The console program will run all commands it finds in ~/.bconsolerc
+ at startup.
+- Add Dan Langille's changes to the bacula start/stop script that
+ permit dropping root permissions just after startup.
Items to note: !!!!!
- The daemon protocol has changed; you must update everything at once.
- The database format has changed. You can either initialize a new database,
  which will delete ALL prior catalog information, or you can
  update your database with:
- ./alter_bacula_tables
+ ./update_bacula_tables
+- smtp has now become bsmtp.
+- console has now become bconsole.
+- console.conf is now bconsole.conf.
Kern's ToDo List
- 05 November 2003
+ 23 November 2003
Documentation to do: (any release a little bit at a time)
- Document running a test version.
- VXA drives have a "cleaning required"
indicator, but Exabyte recommends preventive cleaning after every 75
hours of operation.
+ From Phil:
+ In this context, it should be noted that Exabyte has a command-line
+ vxatool utility available for free download. (The current version is
+ vxatool-3.72.) It can get diagnostic info, read, write and erase tapes,
+ test the drive, unload tapes, change drive settings, flash new firmware,
+ etc.
+ Of particular interest in this context is that vxatool <device> -i will
+ report, among other details, the time since last cleaning in tape motion
+ minutes. This information can be retrieved (and settings changed, for
+ that matter) through the generic-SCSI device even when Bacula has the
+ regular tape device locked. (Needless to say, I don't recommend
+ changing tape settings while a job is running.)
- Lookup HP cleaning recommendations.
- Lookup HP tape replacement recommendations (see trouble shooting autochanger)
- Create a man page for each binary (Debian package requirement).
For 1.33
-- Add flag to write only one EOF mark on the tape.
-- Implement autochanger testing in btape "test" command.
- Implement RestoreJobRetention? Maybe better "JobRetention" in a Job,
  which would take precedence over the Catalog "JobRetention".
- Implement Label Format in Add and Label console commands.
- Make a Running Jobs: output similar to current Scheduled Jobs:
After 1.33:
+- Make dev->file and dev->block_num signed integers so that -1 can
+ be an invalid value.
- Create VolAddr for disk files in place of VolFile and VolBlock. This
is needed to properly specify ranges.
- Print bsmtp output to job report so that problems will be seen.
-- Make mark/unmark report how many files marked/unmarked.
- Have some way to estimate the restore size or have it printed.
- Pass the number of files to be restored to the FD for reporting
- Add progress of files/bytes to SD and FD.
-- Implement lmark to list every file marked.
- Don't continue Restore if no files selected.
- Print warning message if FileId > 4 billion
- do a "messages" before the first prompt in Console
asterisk preceding the name indicates a feature not currently
implemented.
+ - Match "xxx" - Match regular expression
+ - Wild "xxx" - Do a wild card match
+
For Backup Jobs:
- Compression= (GZIP, ...)
- Signature= (MD5, SHA1, ...)
- *Reader= (filename) - external read (backup) program
- *Plugin= (filename) - read/write plugin module
+ - Include= (yes/no) - Include the file if matched; no additional
+   patterns are applied.
+
For Verify Jobs:
- verify= (ipnougsamc5) - verify options
Options {
Signature = MD5
# Note multiple Matches are ORed
- Match = /*.gz/ # matches .gz files */
- Match = /*.Z/ # matches .Z files */
+ Match = "*.gz" # matches .gz files
+ Match = "*.Z" # matches .Z files
}
Options {
Compression = GZIP
Signature = MD5
- Match = /*.?*/ # matches all files
+ Match = "*.?*" # matches all files
}
File = /
}
- Finish implementation of Verify=DiskToCatalog
- Make sure that Volumes are recycled based on "Least recently used"
rather than lowest MediaId.
-
+- Add flag to write only one EOF mark on the tape.
+- Implement autochanger testing in btape "test" command.
+- Implement lmark to list every file marked.
+- Make mark/unmark report how many files marked/unmarked.
+# basic defines for every build
%define depkgs ../depkgs
+%define depkgs_version 24Jul03
+%define tomsrtbt tomsrtbt-2.0.103
#
-# You must build the package with at least one define
+# You must build the package with at least one define parameter
# e.g. rpmbuild -ba --define "build_rh7 1" bacula.spec
#
# If you want the MySQL version, use:
Group: System Environment/Daemons
Copyright: GPL v2
Source0:http://www.prdownloads.sourceforge.net/bacula/%{name}-%{version}.tar.gz
-Source1:http://www.prdownloads.sourceforge.net/bacula/depkgs-24Jul03.tar.gz
+Source1:http://www.prdownloads.sourceforge.net/bacula/depkgs-%{depkgs_version}.tar.gz
+Source2:http://www.tux.org/pub/distributions/tinylinux/tomsrtbt/%{tomsrtbt}.tar.gz
BuildRoot: %{_tmppath}/%{name}-root
URL: http://www.bacula.org/
Vendor: The Bacula Team
Distribution: The Bacula Team
Packager: D. Scott Barninger <barninger@fairfieldcomputers.com>
-Requires: gnome-libs >= 1.4
-Requires: readline
-BuildRequires: gnome-libs-devel >= 1.4
BuildRequires: readline-devel
+%if %{rh7}
+BuildRequires: gtk+-devel >= 1.2
+BuildRequires: gnome-libs-devel >= 1.4
+%else
+BuildRequires: gtk2-devel >= 2.0
+BuildRequires: libgnomeui-devel >= 2.0
+%endif
%if %{mysql}
-Requires: mysql >= 3.23
-Requires: mysql-server >= 3.23
BuildRequires: mysql-devel >= 3.23
%endif
-
%description
Bacula - It comes by night and sucks the vital essence from your computers.
Summary: Bacula - The Network Backup Solution
Group: System Environment/Daemons
+Requires: readline
+%if %{rh7}
+Requires: gtk+ >= 1.2
+Requires: gnome-libs >= 1.4
+%else
+Requires: gtk2 >= 2.0
+Requires: libgnomeui >= 2.0
+%endif
+%if %{mysql}
+Requires: mysql >= 3.23
+Requires: mysql-server >= 3.23
+%endif
%if %{mysql}
%description mysql-%{rh_version}
%package client-%{rh_version}
Summary: Bacula - The Network Backup Solution
Group: System Environment/Daemons
+Requires: readline
+%if %{rh7}
+Requires: gtk+ >= 1.2
+Requires: gnome-libs >= 1.4
+%else
+Requires: gtk2 >= 2.0
+Requires: libgnomeui >= 2.0
+%endif
+
%description client-%{rh_version}
Bacula - It comes by night and sucks the vital essence from your computers.
This is the File daemon (Client) only package.
+%package rescue
+
+Summary: Bacula - The Network Backup Solution
+Group: System Environment/Daemons
+Requires: coreutils, util-linux, libc5
+
+%description rescue
+Bacula - It comes by night and sucks the vital essence from your computers.
+
+Bacula is a set of computer programs that permit you (or the system
+administrator) to manage backup, recovery, and verification of computer
+data across a network of computers of different kinds. In technical terms,
+it is a network client/server based backup program. Bacula is relatively
+easy to use and efficient, while offering many advanced storage management
+features that make it easy to find and recover lost or damaged files.
+Bacula source code has been released under the GPL version 2 license.
+
+This package installs scripts for disaster recovery and builds rescue
+floppy disks for bare metal recovery. This package includes tomsrtbt
+(http://www.toms.net/rb/, by Tom Oehser, Tom@Toms.NET) to provide a tool
+to build a boot floppy disk.
+
+You need to have the bacula-sqlite, bacula-mysql or bacula-client package for
+your platform installed and configured before installing this package.
%prep
%setup -b 1
+%setup -b 2
%build
--with-scriptdir=/etc/bacula \
--enable-smartalloc \
--enable-gnome \
+ --enable-static-fd \
%if %{mysql}
--with-mysql \
%else
--with-subsys-dir=/var/lock/subsys
make
+cd src/filed
+strip static-bacula-fd
+cd ../../
+
%install
cwd=${PWD}
mkdir -p $RPM_BUILD_ROOT/usr/share/pixmaps
mkdir -p $RPM_BUILD_ROOT/usr/share/gnome/apps/System
mkdir -p $RPM_BUILD_ROOT/usr/share/applications
+mkdir -p $RPM_BUILD_ROOT/etc/bacula/rescue
+mkdir -p $RPM_BUILD_ROOT/etc/bacula/rescue/tomsrtbt
%if ! %{mysql}
mkdir -p $RPM_BUILD_ROOT/usr/lib/sqlite
# install the logrotate file
cp scripts/logrotate $RPM_BUILD_ROOT/etc/logrotate.d/bacula
+# install the rescue stuff
+# these are the rescue scripts
+cp rescue/linux/backup.etc.list $RPM_BUILD_ROOT/etc/bacula/rescue/
+cp rescue/linux/format_floppy $RPM_BUILD_ROOT/etc/bacula/rescue/
+cp rescue/linux/getdiskinfo $RPM_BUILD_ROOT/etc/bacula/rescue/
+cp rescue/linux/make_rescue_disk $RPM_BUILD_ROOT/etc/bacula/rescue/
+cp rescue/linux/make_static_bacula $RPM_BUILD_ROOT/etc/bacula/rescue/
+cp rescue/linux/restore_bacula $RPM_BUILD_ROOT/etc/bacula/rescue/
+cp rescue/linux/restore_etc $RPM_BUILD_ROOT/etc/bacula/rescue/
+cp rescue/linux/run_grub $RPM_BUILD_ROOT/etc/bacula/rescue/
+cp rescue/linux/run_lilo $RPM_BUILD_ROOT/etc/bacula/rescue/
+cp rescue/linux/sfdisk.bz2 $RPM_BUILD_ROOT/etc/bacula/rescue/
+
+# this is the static file daemon
+cp src/filed/static-bacula-fd $RPM_BUILD_ROOT/etc/bacula/rescue/bacula-fd
+
+# this is Tom's root boot disk
+cp ../%{tomsrtbt}/* $RPM_BUILD_ROOT/etc/bacula/rescue/tomsrtbt/
+
%clean
[ "$RPM_BUILD_ROOT" != "/" ] && rm -rf "$RPM_BUILD_ROOT"
# delete our links
/sbin/chkconfig --del bacula-fd
+%files rescue
+%defattr(-,root,root)
+%attr(0644,root,root) /etc/bacula/rescue/backup.etc.list
+%attr(0754,root,root) /etc/bacula/rescue/format_floppy
+%attr(0754,root,root) /etc/bacula/rescue/getdiskinfo
+%attr(0754,root,root) /etc/bacula/rescue/make_rescue_disk
+%attr(0754,root,root) /etc/bacula/rescue/make_static_bacula
+%attr(0754,root,root) /etc/bacula/rescue/restore_bacula
+%attr(0754,root,root) /etc/bacula/rescue/restore_etc
+%attr(0754,root,root) /etc/bacula/rescue/run_grub
+%attr(0754,root,root) /etc/bacula/rescue/run_lilo
+%attr(0644,root,root) /etc/bacula/rescue/sfdisk.bz2
+%attr(0754,root,root) /etc/bacula/rescue/bacula-fd
+/etc/bacula/rescue/tomsrtbt/*
+
+%post rescue
+# link our current installed conf file to the rescue directory
+ln -s /etc/bacula-fd.conf /etc/bacula/rescue/bacula-fd.conf
+
+echo
+echo "Ready to create the rescue files for this system."
+echo "Press <enter> to continue..."
+read A
+echo
+
+# run getdiskinfo
+echo "Running getdiskinfo..."
+cd /etc/bacula/rescue
+./getdiskinfo
+
+echo
+echo "Finished."
+echo "To create a boot disk run \"./install.s\" from the /etc/bacula/rescue/tomsrtbt/"
+echo "directory. To make the bacula rescue disk run"
+echo "\"./make_rescue_disk --copy-static-bacula --copy-etc-files\" "
+echo "from the /etc/bacula/rescue directory. To recreate the rescue"
+echo "information for this system run ./getdiskinfo again."
+echo
+
+%preun rescue
+# remove the files created after the initial rpm installation
+rm -f /etc/bacula/rescue/bacula-fd.conf
+rm -f /etc/bacula/rescue/partition.*
+rm -f /etc/bacula/rescue/format.*
+rm -f /etc/bacula/rescue/mount_drives
+rm -f /etc/bacula/rescue/start_network
+rm -f /etc/bacula/rescue/sfdisk
+rm -rf /etc/bacula/rescue/diskinfo/*
+
%changelog
+* Sun Nov 23 2003 D. Scott Barninger <barninger at fairfieldcomputers.com>
+- Added define at top of file for depkgs version
+- Added rescue sub-package
+- Moved requires statements into proper sub-package locations
+* Mon Oct 27 2003 D. Scott Barninger <barninger at fairfieldcomputers.com>
+- Corrected Requires for Gnome 1.4/2.0 builds
* Fri Oct 24 2003 D. Scott Barninger <barninger at fairfieldcomputers.com>
- Added separate Source declaration for depkgs
- added patch for make_catalog_backup script
CREATE TABLE BaseFiles (
BaseId INTEGER UNSIGNED AUTO_INCREMENT,
+ BaseJobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
FileId INTEGER UNSIGNED NOT NULL REFERENCES File,
FileIndex INTEGER UNSIGNED,
CREATE TABLE BaseFiles (
BaseId INTEGER UNSIGNED AUTOINCREMENT,
+ BaseJobId INTEGER UNSIGNED REFERENCES Job NOT NULL,
JobId INTEGER UNSIGNED REFERENCES Job NOT NULL,
FileId INTEGER UNSIGNED REFERENCES File NOT NULL,
FileIndex INTEGER UNSIGNED,
PRIMARY KEY (UnsavedId)
);
+DROP TABLE BaseFiles;
+
+CREATE TABLE BaseFiles (
+ BaseId INTEGER UNSIGNED AUTO_INCREMENT,
+ BaseJobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
+ JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
+ FileId INTEGER UNSIGNED NOT NULL REFERENCES File,
+ FileIndex INTEGER UNSIGNED,
+ PRIMARY KEY(BaseId)
+ );
+
UPDATE Version SET VersionId=7;
END-OF-DATA
CREATE INDEX inx8 ON Media (PoolId);
+DROP TABLE BaseFiles;
+
+CREATE TABLE BaseFiles (
+ BaseId INTEGER UNSIGNED AUTOINCREMENT,
+ BaseJobId INTEGER UNSIGNED REFERENCES Job NOT NULL,
+ JobId INTEGER UNSIGNED REFERENCES Job NOT NULL,
+ FileId INTEGER UNSIGNED REFERENCES File NOT NULL,
+ FileIndex INTEGER UNSIGNED,
+ PRIMARY KEY(BaseId)
+ );
+
COMMIT;
UPDATE Version SET VersionId=7;
* 1. The generic lexical scanner in lib/lex.c and lib/lex.h
*
* 2. The generic config scanner in lib/parse_config.c and
- * lib/parse_config.h.
- * These files contain the parser code, some utility
- * routines, and the common store routines (name, int,
- * string).
+ * lib/parse_config.h.
+ * These files contain the parser code, some utility
+ * routines, and the common store routines (name, int,
+ * string).
*
* 3. The daemon specific file, which contains the Resource
- * definitions as well as any specific store routines
- * for the resource records.
+ * definitions as well as any specific store routines
+ * for the resource records.
*
* Kern Sibbald, January MM
*
/*
* Director Resource
*
- * name handler value code flags default_value
+ * name handler value code flags default_value
*/
static struct res_items dir_items[] = {
{"name", store_name, ITEM(res_dir.hdr.name), 0, ITEM_REQUIRED, 0},
/*
* Console Resource
*
- * name handler value code flags default_value
+ * name handler value code flags default_value
*/
static struct res_items con_items[] = {
{"name", store_name, ITEM(res_con.hdr.name), 0, ITEM_REQUIRED, 0},
/*
* Client or File daemon resource
*
- * name handler value code flags default_value
+ * name handler value code flags default_value
*/
static struct res_items cli_items[] = {
/* Storage daemon resource
*
- * name handler value code flags default_value
+ * name handler value code flags default_value
*/
static struct res_items store_items[] = {
{"name", store_name, ITEM(res_store.hdr.name), 0, ITEM_REQUIRED, 0},
/*
* Catalog Resource Directives
*
- * name handler value code flags default_value
+ * name handler value code flags default_value
*/
static struct res_items cat_items[] = {
{"name", store_name, ITEM(res_cat.hdr.name), 0, ITEM_REQUIRED, 0},
/*
* Job Resource Directives
*
- * name handler value code flags default_value
+ * name handler value code flags default_value
*/
static struct res_items job_items[] = {
{"name", store_name, ITEM(res_job.hdr.name), 0, ITEM_REQUIRED, 0},
{"rescheduleinterval", store_time, ITEM(res_job.RescheduleInterval), 0, ITEM_DEFAULT, 60 * 30},
{"rescheduletimes", store_pint, ITEM(res_job.RescheduleTimes), 0, 0, 0},
{"priority", store_pint, ITEM(res_job.Priority), 0, ITEM_DEFAULT, 10},
+ {"jobretention", store_time, ITEM(res_job.JobRetention), 0, 0, 0},
{NULL, NULL, NULL, 0, 0, 0}
};
/* FileSet resource
*
- * name handler value code flags default_value
+ * name handler value code flags default_value
*/
static struct res_items fs_items[] = {
{"name", store_name, ITEM(res_fs.hdr.name), 0, ITEM_REQUIRED, 0},
{"description", store_str, ITEM(res_fs.hdr.desc), 0, 0, 0},
{"include", store_inc, NULL, 0, ITEM_NO_EQUALS, 0},
{"exclude", store_inc, NULL, 1, ITEM_NO_EQUALS, 0},
- {NULL, NULL, NULL, 0, 0, 0}
+ {NULL, NULL, NULL, 0, 0, 0}
};
/* Schedule -- see run_conf.c */
/* Schedule
*
- * name handler value code flags default_value
+ * name handler value code flags default_value
*/
static struct res_items sch_items[] = {
{"name", store_name, ITEM(res_sch.hdr.name), 0, ITEM_REQUIRED, 0},
/* Group resource -- not implemented
*
- * name handler value code flags default_value
+ * name handler value code flags default_value
*/
static struct res_items group_items[] = {
{"name", store_name, ITEM(res_group.hdr.name), 0, ITEM_REQUIRED, 0},
/* Pool resource
*
- * name handler value code flags default_value
+ * name handler value code flags default_value
*/
static struct res_items pool_items[] = {
{"name", store_name, ITEM(res_pool.hdr.name), 0, ITEM_REQUIRED, 0},
/*
* Counter Resource
- * name handler value code flags default_value
+ * name handler value code flags default_value
*/
static struct res_items counter_items[] = {
{"name", store_name, ITEM(res_counter.hdr.name), 0, ITEM_REQUIRED, 0},
* NOTE!!! keep it in the same order as the R_codes
* or eliminate all resources[rindex].name
*
- * name items rcode res_head
+ * name items rcode res_head
*/
struct s_res resources[] = {
{"director", dir_items, R_DIRECTOR, NULL},
{"messages", msgs_items, R_MSGS, NULL},
{"counter", counter_items, R_COUNTER, NULL},
{"console", con_items, R_CONSOLE, NULL},
- {NULL, NULL, 0, NULL}
+ {NULL, NULL, 0, NULL}
};
/* Keywords (RHS) permitted in Job Level records
*
- * level_name level job_type
+ * level_name level job_type
*/
struct s_jl joblevels[] = {
{"Full", L_FULL, JT_BACKUP},
{"Data", L_VERIFY_DATA, JT_VERIFY},
{" ", L_NONE, JT_ADMIN},
{" ", L_NONE, JT_RESTORE},
- {NULL, 0}
+ {NULL, 0}
};
/* Keywords (RHS) permitted in Job type records
*
- * type_name job_type
+ * type_name job_type
*/
struct s_jt jobtypes[] = {
{"backup", JT_BACKUP},
{"admin", JT_ADMIN},
{"verify", JT_VERIFY},
{"restore", JT_RESTORE},
- {NULL, 0}
+ {NULL, 0}
};
{"client", 'C'},
{"fileset", 'F'},
{"level", 'L'},
- {NULL, 0}
+ {NULL, 0}
};
/* Keywords (RHS) permitted in Restore records */
{"where", 'W'}, /* root of restore */
{"replace", 'R'}, /* replacement options */
{"bootstrap", 'B'}, /* bootstrap file */
- {NULL, 0}
+ {NULL, 0}
};
/* Options permitted in Restore replace= */
{"ifnewer", REPLACE_IFNEWER},
{"ifolder", REPLACE_IFOLDER},
{"never", REPLACE_NEVER},
- {NULL, 0}
+ {NULL, 0}
};
char *level_to_str(int level)
sprintf(level_no, "%d", level); /* default if not found */
for (i=0; joblevels[i].level_name; i++) {
if (level == joblevels[i].level) {
- str = joblevels[i].level_name;
- break;
+ str = joblevels[i].level_name;
+ break;
}
}
return str;
sendit(sock, "No %s resource defined\n", res_to_str(type));
return;
}
- if (type < 0) { /* no recursion */
+ if (type < 0) { /* no recursion */
type = - type;
recurse = false;
}
switch (type) {
case R_DIRECTOR:
sendit(sock, "Director: name=%s MaxJobs=%d FDtimeout=%s SDtimeout=%s\n",
- reshdr->name, res->res_dir.MaxConcurrentJobs,
- edit_uint64(res->res_dir.FDConnectTimeout, ed1),
- edit_uint64(res->res_dir.SDConnectTimeout, ed2));
+ reshdr->name, res->res_dir.MaxConcurrentJobs,
+ edit_uint64(res->res_dir.FDConnectTimeout, ed1),
+ edit_uint64(res->res_dir.SDConnectTimeout, ed2));
if (res->res_dir.query_file) {
sendit(sock, " query_file=%s\n", res->res_dir.query_file);
}
if (res->res_dir.messages) {
sendit(sock, " --> ");
- dump_resource(-R_MSGS, (RES *)res->res_dir.messages, sendit, sock);
+ dump_resource(-R_MSGS, (RES *)res->res_dir.messages, sendit, sock);
}
break;
case R_CONSOLE:
sendit(sock, "Console: name=%s SSL=%d\n",
- res->res_con.hdr.name, res->res_con.enable_ssl);
+ res->res_con.hdr.name, res->res_con.enable_ssl);
break;
case R_COUNTER:
if (res->res_counter.WrapCounter) {
sendit(sock, "Counter: name=%s min=%d max=%d cur=%d wrapcntr=%s\n",
- res->res_counter.hdr.name, res->res_counter.MinValue,
- res->res_counter.MaxValue, res->res_counter.CurrentValue,
- res->res_counter.WrapCounter->hdr.name);
+ res->res_counter.hdr.name, res->res_counter.MinValue,
+ res->res_counter.MaxValue, res->res_counter.CurrentValue,
+ res->res_counter.WrapCounter->hdr.name);
} else {
sendit(sock, "Counter: name=%s min=%d max=%d\n",
- res->res_counter.hdr.name, res->res_counter.MinValue,
- res->res_counter.MaxValue);
+ res->res_counter.hdr.name, res->res_counter.MinValue,
+ res->res_counter.MaxValue);
}
if (res->res_counter.Catalog) {
sendit(sock, " --> ");
- dump_resource(-R_CATALOG, (RES *)res->res_counter.Catalog, sendit, sock);
+ dump_resource(-R_CATALOG, (RES *)res->res_counter.Catalog, sendit, sock);
}
break;
case R_CLIENT:
sendit(sock, "Client: name=%s address=%s FDport=%d MaxJobs=%u\n",
- res->res_client.hdr.name, res->res_client.address, res->res_client.FDport,
- res->res_client.MaxConcurrentJobs);
+ res->res_client.hdr.name, res->res_client.address, res->res_client.FDport,
+ res->res_client.MaxConcurrentJobs);
sendit(sock, " JobRetention=%s FileRetention=%s AutoPrune=%d\n",
- edit_utime(res->res_client.JobRetention, ed1),
- edit_utime(res->res_client.FileRetention, ed2),
- res->res_client.AutoPrune);
+ edit_utime(res->res_client.JobRetention, ed1),
+ edit_utime(res->res_client.FileRetention, ed2),
+ res->res_client.AutoPrune);
if (res->res_client.catalog) {
sendit(sock, " --> ");
- dump_resource(-R_CATALOG, (RES *)res->res_client.catalog, sendit, sock);
+ dump_resource(-R_CATALOG, (RES *)res->res_client.catalog, sendit, sock);
}
break;
case R_STORAGE:
sendit(sock, "Storage: name=%s address=%s SDport=%d MaxJobs=%u\n\
DeviceName=%s MediaType=%s\n",
- res->res_store.hdr.name, res->res_store.address, res->res_store.SDport,
- res->res_store.MaxConcurrentJobs,
- res->res_store.dev_name, res->res_store.media_type);
+ res->res_store.hdr.name, res->res_store.address, res->res_store.SDport,
+ res->res_store.MaxConcurrentJobs,
+ res->res_store.dev_name, res->res_store.media_type);
break;
case R_CATALOG:
sendit(sock, "Catalog: name=%s address=%s DBport=%d db_name=%s\n\
db_user=%s\n",
- res->res_cat.hdr.name, NPRT(res->res_cat.db_address),
- res->res_cat.db_port, res->res_cat.db_name, NPRT(res->res_cat.db_user));
+ res->res_cat.hdr.name, NPRT(res->res_cat.db_address),
+ res->res_cat.db_port, res->res_cat.db_name, NPRT(res->res_cat.db_user));
break;
case R_JOB:
sendit(sock, "Job: name=%s JobType=%d level=%s Priority=%d MaxJobs=%u\n",
- res->res_job.hdr.name, res->res_job.JobType,
- level_to_str(res->res_job.level), res->res_job.Priority,
- res->res_job.MaxConcurrentJobs);
+ res->res_job.hdr.name, res->res_job.JobType,
+ level_to_str(res->res_job.level), res->res_job.Priority,
+ res->res_job.MaxConcurrentJobs);
sendit(sock, " Resched=%d Times=%d Interval=%s\n",
- res->res_job.RescheduleOnError, res->res_job.RescheduleTimes,
- edit_uint64_with_commas(res->res_job.RescheduleInterval, ed1));
+ res->res_job.RescheduleOnError, res->res_job.RescheduleTimes,
+ edit_uint64_with_commas(res->res_job.RescheduleInterval, ed1));
if (res->res_job.client) {
sendit(sock, " --> ");
- dump_resource(-R_CLIENT, (RES *)res->res_job.client, sendit, sock);
+ dump_resource(-R_CLIENT, (RES *)res->res_job.client, sendit, sock);
}
if (res->res_job.fileset) {
sendit(sock, " --> ");
- dump_resource(-R_FILESET, (RES *)res->res_job.fileset, sendit, sock);
+ dump_resource(-R_FILESET, (RES *)res->res_job.fileset, sendit, sock);
}
if (res->res_job.schedule) {
sendit(sock, " --> ");
- dump_resource(-R_SCHEDULE, (RES *)res->res_job.schedule, sendit, sock);
+ dump_resource(-R_SCHEDULE, (RES *)res->res_job.schedule, sendit, sock);
}
if (res->res_job.RestoreWhere) {
sendit(sock, " --> Where=%s\n", NPRT(res->res_job.RestoreWhere));
}
if (res->res_job.storage) {
sendit(sock, " --> ");
- dump_resource(-R_STORAGE, (RES *)res->res_job.storage, sendit, sock);
+ dump_resource(-R_STORAGE, (RES *)res->res_job.storage, sendit, sock);
}
if (res->res_job.pool) {
sendit(sock, " --> ");
- dump_resource(-R_POOL, (RES *)res->res_job.pool, sendit, sock);
+ dump_resource(-R_POOL, (RES *)res->res_job.pool, sendit, sock);
} else {
sendit(sock, "!!! No Pool resource\n");
}
if (res->res_job.verify_job) {
sendit(sock, " --> ");
- dump_resource(-R_JOB, (RES *)res->res_job.verify_job, sendit, sock);
+ dump_resource(-R_JOB, (RES *)res->res_job.verify_job, sendit, sock);
}
break;
if (res->res_job.messages) {
sendit(sock, " --> ");
- dump_resource(-R_MSGS, (RES *)res->res_job.messages, sendit, sock);
+ dump_resource(-R_MSGS, (RES *)res->res_job.messages, sendit, sock);
}
break;
case R_FILESET:
sendit(sock, "FileSet: name=%s\n", res->res_fs.hdr.name);
for (int i=0; i<res->res_fs.num_includes; i++) {
- INCEXE *incexe = res->res_fs.include_items[i];
- for (int j=0; j<incexe->name_list.size(); j++) {
+ INCEXE *incexe = res->res_fs.include_items[i];
+ for (int j=0; j<incexe->name_list.size(); j++) {
sendit(sock, " Inc: %s\n", incexe->name_list.get(j));
- }
+ }
}
for (int i=0; i<res->res_fs.num_excludes; i++) {
- INCEXE *incexe = res->res_fs.exclude_items[i];
- for (int j=0; j<incexe->name_list.size(); j++) {
+ INCEXE *incexe = res->res_fs.exclude_items[i];
+ for (int j=0; j<incexe->name_list.size(); j++) {
sendit(sock, " Exc: %s\n", incexe->name_list.get(j));
- }
+ }
}
break;
case R_SCHEDULE:
if (res->res_sch.run) {
- int i;
- RUN *run = res->res_sch.run;
- char buf[1000], num[10];
+ int i;
+ RUN *run = res->res_sch.run;
+ char buf[1000], num[10];
sendit(sock, "Schedule: name=%s\n", res->res_sch.hdr.name);
- if (!run) {
- break;
- }
+ if (!run) {
+ break;
+ }
next_run:
sendit(sock, " --> Run Level=%s\n", level_to_str(run->level));
bstrncpy(buf, " hour=", sizeof(buf));
- for (i=0; i<24; i++) {
- if (bit_is_set(i, run->hour)) {
+ for (i=0; i<24; i++) {
+ if (bit_is_set(i, run->hour)) {
sprintf(num, "%d ", i);
- bstrncat(buf, num, sizeof(buf));
- }
- }
+ bstrncat(buf, num, sizeof(buf));
+ }
+ }
strcat(buf, "\n");
- sendit(sock, buf);
+ sendit(sock, buf);
strcpy(buf, " mday=");
- for (i=0; i<31; i++) {
- if (bit_is_set(i, run->mday)) {
+ for (i=0; i<31; i++) {
+ if (bit_is_set(i, run->mday)) {
sprintf(num, "%d ", i+1);
- strcat(buf, num);
- }
- }
+ strcat(buf, num);
+ }
+ }
strcat(buf, "\n");
- sendit(sock, buf);
+ sendit(sock, buf);
strcpy(buf, " month=");
- for (i=0; i<12; i++) {
- if (bit_is_set(i, run->month)) {
+ for (i=0; i<12; i++) {
+ if (bit_is_set(i, run->month)) {
sprintf(num, "%d ", i+1);
- strcat(buf, num);
- }
- }
+ strcat(buf, num);
+ }
+ }
strcat(buf, "\n");
- sendit(sock, buf);
+ sendit(sock, buf);
strcpy(buf, " wday=");
- for (i=0; i<7; i++) {
- if (bit_is_set(i, run->wday)) {
+ for (i=0; i<7; i++) {
+ if (bit_is_set(i, run->wday)) {
sprintf(num, "%d ", i+1);
- strcat(buf, num);
- }
- }
+ strcat(buf, num);
+ }
+ }
strcat(buf, "\n");
- sendit(sock, buf);
+ sendit(sock, buf);
strcpy(buf, " wpos=");
- for (i=0; i<5; i++) {
- if (bit_is_set(i, run->wpos)) {
+ for (i=0; i<5; i++) {
+ if (bit_is_set(i, run->wpos)) {
sprintf(num, "%d ", i+1);
- strcat(buf, num);
- }
- }
+ strcat(buf, num);
+ }
+ }
strcat(buf, "\n");
- sendit(sock, buf);
+ sendit(sock, buf);
sendit(sock, " mins=%d\n", run->minute);
- if (run->pool) {
+ if (run->pool) {
sendit(sock, " --> ");
- dump_resource(-R_POOL, (RES *)run->pool, sendit, sock);
- }
- if (run->storage) {
+ dump_resource(-R_POOL, (RES *)run->pool, sendit, sock);
+ }
+ if (run->storage) {
sendit(sock, " --> ");
- dump_resource(-R_STORAGE, (RES *)run->storage, sendit, sock);
- }
- if (run->msgs) {
+ dump_resource(-R_STORAGE, (RES *)run->storage, sendit, sock);
+ }
+ if (run->msgs) {
sendit(sock, " --> ");
- dump_resource(-R_MSGS, (RES *)run->msgs, sendit, sock);
- }
- /* If another Run record is chained in, go print it */
- if (run->next) {
- run = run->next;
- goto next_run;
- }
+ dump_resource(-R_MSGS, (RES *)run->msgs, sendit, sock);
+ }
+ /* If another Run record is chained in, go print it */
+ if (run->next) {
+ run = run->next;
+ goto next_run;
+ }
} else {
sendit(sock, "Schedule: name=%s\n", res->res_sch.hdr.name);
}
break;
case R_POOL:
sendit(sock, "Pool: name=%s PoolType=%s\n", res->res_pool.hdr.name,
- res->res_pool.pool_type);
+ res->res_pool.pool_type);
sendit(sock, " use_cat=%d use_once=%d acpt_any=%d cat_files=%d\n",
- res->res_pool.use_catalog, res->res_pool.use_volume_once,
- res->res_pool.accept_any_volume, res->res_pool.catalog_files);
+ res->res_pool.use_catalog, res->res_pool.use_volume_once,
+ res->res_pool.accept_any_volume, res->res_pool.catalog_files);
sendit(sock, " max_vols=%d auto_prune=%d VolRetention=%s\n",
- res->res_pool.max_volumes, res->res_pool.AutoPrune,
- edit_utime(res->res_pool.VolRetention, ed1));
+ res->res_pool.max_volumes, res->res_pool.AutoPrune,
+ edit_utime(res->res_pool.VolRetention, ed1));
sendit(sock, " VolUse=%s recycle=%d LabelFormat=%s\n",
- edit_utime(res->res_pool.VolUseDuration, ed1),
- res->res_pool.Recycle,
- NPRT(res->res_pool.label_format));
+ edit_utime(res->res_pool.VolUseDuration, ed1),
+ res->res_pool.Recycle,
+ NPRT(res->res_pool.label_format));
sendit(sock, " CleaningPrefix=%s\n",
- NPRT(res->res_pool.cleaning_prefix));
+ NPRT(res->res_pool.cleaning_prefix));
sendit(sock, " recyleOldest=%d MaxVolJobs=%d MaxVolFiles=%d\n",
- res->res_pool.purge_oldest_volume,
- res->res_pool.MaxVolJobs, res->res_pool.MaxVolFiles);
+ res->res_pool.purge_oldest_volume,
+ res->res_pool.MaxVolJobs, res->res_pool.MaxVolFiles);
break;
case R_MSGS:
sendit(sock, "Messages: name=%s\n", res->res_msgs.hdr.name);
switch (type) {
case R_DIRECTOR:
if (res->res_dir.working_directory) {
- free(res->res_dir.working_directory);
+ free(res->res_dir.working_directory);
}
if (res->res_dir.pid_directory) {
- free(res->res_dir.pid_directory);
+ free(res->res_dir.pid_directory);
}
if (res->res_dir.subsys_directory) {
- free(res->res_dir.subsys_directory);
+ free(res->res_dir.subsys_directory);
}
if (res->res_dir.password) {
- free(res->res_dir.password);
+ free(res->res_dir.password);
}
if (res->res_dir.query_file) {
- free(res->res_dir.query_file);
+ free(res->res_dir.query_file);
}
if (res->res_dir.DIRaddr) {
- free(res->res_dir.DIRaddr);
+ free(res->res_dir.DIRaddr);
}
break;
case R_COUNTER:
break;
case R_CONSOLE:
if (res->res_con.password) {
- free(res->res_con.password);
+ free(res->res_con.password);
}
break;
case R_CLIENT:
if (res->res_client.address) {
- free(res->res_client.address);
+ free(res->res_client.address);
}
if (res->res_client.password) {
- free(res->res_client.password);
+ free(res->res_client.password);
}
break;
case R_STORAGE:
if (res->res_store.address) {
- free(res->res_store.address);
+ free(res->res_store.address);
}
if (res->res_store.password) {
- free(res->res_store.password);
+ free(res->res_store.password);
}
if (res->res_store.media_type) {
- free(res->res_store.media_type);
+ free(res->res_store.media_type);
}
if (res->res_store.dev_name) {
- free(res->res_store.dev_name);
+ free(res->res_store.dev_name);
}
break;
case R_CATALOG:
if (res->res_cat.db_address) {
- free(res->res_cat.db_address);
+ free(res->res_cat.db_address);
}
if (res->res_cat.db_socket) {
- free(res->res_cat.db_socket);
+ free(res->res_cat.db_socket);
}
if (res->res_cat.db_user) {
- free(res->res_cat.db_user);
+ free(res->res_cat.db_user);
}
if (res->res_cat.db_name) {
- free(res->res_cat.db_name);
+ free(res->res_cat.db_name);
}
if (res->res_cat.db_password) {
- free(res->res_cat.db_password);
+ free(res->res_cat.db_password);
}
break;
case R_FILESET:
if ((num=res->res_fs.num_includes)) {
- while (--num >= 0) {
- free_incexe(res->res_fs.include_items[num]);
- }
- free(res->res_fs.include_items);
+ while (--num >= 0) {
+ free_incexe(res->res_fs.include_items[num]);
+ }
+ free(res->res_fs.include_items);
}
res->res_fs.num_includes = 0;
if ((num=res->res_fs.num_excludes)) {
- while (--num >= 0) {
- free_incexe(res->res_fs.exclude_items[num]);
- }
- free(res->res_fs.exclude_items);
+ while (--num >= 0) {
+ free_incexe(res->res_fs.exclude_items[num]);
+ }
+ free(res->res_fs.exclude_items);
}
res->res_fs.num_excludes = 0;
break;
case R_POOL:
if (res->res_pool.pool_type) {
- free(res->res_pool.pool_type);
+ free(res->res_pool.pool_type);
}
if (res->res_pool.label_format) {
- free(res->res_pool.label_format);
+ free(res->res_pool.label_format);
}
if (res->res_pool.cleaning_prefix) {
- free(res->res_pool.cleaning_prefix);
+ free(res->res_pool.cleaning_prefix);
}
break;
case R_SCHEDULE:
if (res->res_sch.run) {
- RUN *nrun, *next;
- nrun = res->res_sch.run;
- while (nrun) {
- next = nrun->next;
- free(nrun);
- nrun = next;
- }
+ RUN *nrun, *next;
+ nrun = res->res_sch.run;
+ while (nrun) {
+ next = nrun->next;
+ free(nrun);
+ nrun = next;
+ }
}
break;
case R_JOB:
if (res->res_job.RestoreWhere) {
- free(res->res_job.RestoreWhere);
+ free(res->res_job.RestoreWhere);
}
if (res->res_job.RestoreBootstrap) {
- free(res->res_job.RestoreBootstrap);
+ free(res->res_job.RestoreBootstrap);
}
if (res->res_job.WriteBootstrap) {
- free(res->res_job.WriteBootstrap);
+ free(res->res_job.WriteBootstrap);
}
if (res->res_job.RunBeforeJob) {
- free(res->res_job.RunBeforeJob);
+ free(res->res_job.RunBeforeJob);
}
if (res->res_job.RunAfterJob) {
- free(res->res_job.RunAfterJob);
+ free(res->res_job.RunAfterJob);
}
if (res->res_job.RunAfterFailedJob) {
- free(res->res_job.RunAfterFailedJob);
+ free(res->res_job.RunAfterFailedJob);
}
if (res->res_job.ClientRunBeforeJob) {
- free(res->res_job.ClientRunBeforeJob);
+ free(res->res_job.ClientRunBeforeJob);
}
if (res->res_job.ClientRunAfterJob) {
- free(res->res_job.ClientRunAfterJob);
+ free(res->res_job.ClientRunAfterJob);
}
break;
case R_MSGS:
if (res->res_msgs.mail_cmd) {
- free(res->res_msgs.mail_cmd);
+ free(res->res_msgs.mail_cmd);
}
if (res->res_msgs.operator_cmd) {
- free(res->res_msgs.operator_cmd);
+ free(res->res_msgs.operator_cmd);
}
free_msgs_res((MSGS *)res); /* free message resource */
res = NULL;
*/
for (i=0; items[i].name; i++) {
if (items[i].flags & ITEM_REQUIRED) {
- if (!bit_is_set(i, res_all.res_dir.hdr.item_present)) {
+ if (!bit_is_set(i, res_all.res_dir.hdr.item_present)) {
Emsg2(M_ERROR_TERM, 0, "%s item is required in %s resource, but not found.\n",
- items[i].name, resources[rindex]);
- }
+ items[i].name, resources[rindex]);
+ }
}
/* If this triggers, take a look at lib/parse_conf.h */
if (i >= MAX_RES_ITEMS) {
case R_POOL:
case R_MSGS:
case R_FILESET:
- break;
+ break;
/* Resources containing another resource */
case R_DIRECTOR:
- if ((res = (URES *)GetResWithName(R_DIRECTOR, res_all.res_dir.hdr.name)) == NULL) {
+ if ((res = (URES *)GetResWithName(R_DIRECTOR, res_all.res_dir.hdr.name)) == NULL) {
Emsg1(M_ERROR_TERM, 0, "Cannot find Director resource %s\n", res_all.res_dir.hdr.name);
- }
- res->res_dir.messages = res_all.res_dir.messages;
- break;
+ }
+ res->res_dir.messages = res_all.res_dir.messages;
+ break;
case R_JOB:
- if ((res = (URES *)GetResWithName(R_JOB, res_all.res_dir.hdr.name)) == NULL) {
+ if ((res = (URES *)GetResWithName(R_JOB, res_all.res_dir.hdr.name)) == NULL) {
Emsg1(M_ERROR_TERM, 0, "Cannot find Job resource %s\n", res_all.res_dir.hdr.name);
- }
- res->res_job.messages = res_all.res_job.messages;
- res->res_job.schedule = res_all.res_job.schedule;
- res->res_job.client = res_all.res_job.client;
- res->res_job.fileset = res_all.res_job.fileset;
- res->res_job.storage = res_all.res_job.storage;
- res->res_job.pool = res_all.res_job.pool;
- res->res_job.verify_job = res_all.res_job.verify_job;
- if (res->res_job.JobType == 0) {
+ }
+ res->res_job.messages = res_all.res_job.messages;
+ res->res_job.schedule = res_all.res_job.schedule;
+ res->res_job.client = res_all.res_job.client;
+ res->res_job.fileset = res_all.res_job.fileset;
+ res->res_job.storage = res_all.res_job.storage;
+ res->res_job.pool = res_all.res_job.pool;
+ res->res_job.verify_job = res_all.res_job.verify_job;
+ if (res->res_job.JobType == 0) {
Emsg1(M_ERROR_TERM, 0, "Job Type not defined for Job resource %s\n", res_all.res_dir.hdr.name);
- }
- if (res->res_job.level != 0) {
- int i;
- for (i=0; joblevels[i].level_name; i++) {
- if (joblevels[i].level == res->res_job.level &&
- joblevels[i].job_type == res->res_job.JobType) {
- i = 0;
- break;
- }
- }
- if (i != 0) {
+ }
+ if (res->res_job.level != 0) {
+ int i;
+ for (i=0; joblevels[i].level_name; i++) {
+ if (joblevels[i].level == res->res_job.level &&
+ joblevels[i].job_type == res->res_job.JobType) {
+ i = 0;
+ break;
+ }
+ }
+ if (i != 0) {
Emsg1(M_ERROR_TERM, 0, "Inappropriate level specified in Job resource %s\n",
- res_all.res_dir.hdr.name);
- }
- }
- break;
+ res_all.res_dir.hdr.name);
+ }
+ }
+ break;
case R_COUNTER:
- if ((res = (URES *)GetResWithName(R_COUNTER, res_all.res_counter.hdr.name)) == NULL) {
+ if ((res = (URES *)GetResWithName(R_COUNTER, res_all.res_counter.hdr.name)) == NULL) {
Emsg1(M_ERROR_TERM, 0, "Cannot find Counter resource %s\n", res_all.res_counter.hdr.name);
- }
- res->res_counter.Catalog = res_all.res_counter.Catalog;
- res->res_counter.WrapCounter = res_all.res_counter.WrapCounter;
- break;
+ }
+ res->res_counter.Catalog = res_all.res_counter.Catalog;
+ res->res_counter.WrapCounter = res_all.res_counter.WrapCounter;
+ break;
case R_CLIENT:
- if ((res = (URES *)GetResWithName(R_CLIENT, res_all.res_client.hdr.name)) == NULL) {
+ if ((res = (URES *)GetResWithName(R_CLIENT, res_all.res_client.hdr.name)) == NULL) {
Emsg1(M_ERROR_TERM, 0, "Cannot find Client resource %s\n", res_all.res_client.hdr.name);
- }
- res->res_client.catalog = res_all.res_client.catalog;
- break;
+ }
+ res->res_client.catalog = res_all.res_client.catalog;
+ break;
case R_SCHEDULE:
- /* Schedule is a bit different in that it contains a RUN record
+ /* Schedule is a bit different in that it contains a RUN record
* chain which isn't a "named" resource. This chain was linked
- * in by run_conf.c during pass 2, so here we jam the pointer
- * into the Schedule resource.
- */
- if ((res = (URES *)GetResWithName(R_SCHEDULE, res_all.res_client.hdr.name)) == NULL) {
+ * in by run_conf.c during pass 2, so here we jam the pointer
+ * into the Schedule resource.
+ */
+ if ((res = (URES *)GetResWithName(R_SCHEDULE, res_all.res_client.hdr.name)) == NULL) {
Emsg1(M_ERROR_TERM, 0, "Cannot find Schedule resource %s\n", res_all.res_client.hdr.name);
- }
- res->res_sch.run = res_all.res_sch.run;
- break;
+ }
+ res->res_sch.run = res_all.res_sch.run;
+ break;
default:
Emsg1(M_ERROR, 0, "Unknown resource type %d in save_resource.\n", type);
- error = 1;
- break;
+ error = 1;
+ break;
}
/* Note, the resource name was already saved during pass 1,
* so here, we can just release it.
*/
if (res_all.res_dir.hdr.name) {
- free(res_all.res_dir.hdr.name);
- res_all.res_dir.hdr.name = NULL;
+ free(res_all.res_dir.hdr.name);
+ res_all.res_dir.hdr.name = NULL;
}
if (res_all.res_dir.hdr.desc) {
- free(res_all.res_dir.hdr.desc);
- res_all.res_dir.hdr.desc = NULL;
+ free(res_all.res_dir.hdr.desc);
+ res_all.res_dir.hdr.desc = NULL;
}
return;
}
res = (URES *)malloc(size);
memcpy(res, &res_all, size);
if (!resources[rindex].res_head) {
- resources[rindex].res_head = (RES *)res; /* store first entry */
+ resources[rindex].res_head = (RES *)res; /* store first entry */
Dmsg3(200, "Inserting first %s res: %s index=%d\n", res_to_str(type),
- res->res_dir.hdr.name, rindex);
+ res->res_dir.hdr.name, rindex);
} else {
- RES *next;
- /* Add new res to end of chain */
- for (next=resources[rindex].res_head; next->next; next=next->next) {
- if (strcmp(next->name, res->res_dir.hdr.name) == 0) {
- Emsg2(M_ERROR_TERM, 0,
+ RES *next;
+ /* Add new res to end of chain */
+ for (next=resources[rindex].res_head; next->next; next=next->next) {
+ if (strcmp(next->name, res->res_dir.hdr.name) == 0) {
+ Emsg2(M_ERROR_TERM, 0,
_("Attempt to define second %s resource named \"%s\" is not permitted.\n"),
- resources[rindex].name, res->res_dir.hdr.name);
- }
- }
- next->next = (RES *)res;
+ resources[rindex].name, res->res_dir.hdr.name);
+ }
+ }
+ next->next = (RES *)res;
Dmsg4(200, "Inserting %s res: %s index=%d pass=%d\n", res_to_str(type),
- res->res_dir.hdr.name, rindex, pass);
+ res->res_dir.hdr.name, rindex, pass);
}
}
}
/* Store the type both pass 1 and pass 2 */
for (i=0; jobtypes[i].type_name; i++) {
if (strcasecmp(lc->str, jobtypes[i].type_name) == 0) {
- ((JOB *)(item->value))->JobType = jobtypes[i].job_type;
- i = 0;
- break;
+ ((JOB *)(item->value))->JobType = jobtypes[i].job_type;
+ i = 0;
+ break;
}
}
if (i != 0) {
/* Store the level pass 2 so that type is defined */
for (i=0; joblevels[i].level_name; i++) {
if (strcasecmp(lc->str, joblevels[i].level_name) == 0) {
- ((JOB *)(item->value))->level = joblevels[i].level;
- i = 0;
- break;
+ ((JOB *)(item->value))->level = joblevels[i].level;
+ i = 0;
+ break;
}
}
if (i != 0) {
/* Scan Replacement options */
for (i=0; ReplaceOptions[i].name; i++) {
if (strcasecmp(lc->str, ReplaceOptions[i].name) == 0) {
- *(int *)(item->value) = ReplaceOptions[i].token;
- i = 0;
- break;
+ *(int *)(item->value) = ReplaceOptions[i].token;
+ i = 0;
+ break;
}
}
if (i != 0) {
}
Dmsg1(190, "Got keyword: %s\n", lc->str);
for (i=0; BakVerFields[i].name; i++) {
- if (strcasecmp(lc->str, BakVerFields[i].name) == 0) {
- found = true;
- if (lex_get_token(lc, T_ALL) != T_EQUALS) {
+ if (strcasecmp(lc->str, BakVerFields[i].name) == 0) {
+ found = true;
+ if (lex_get_token(lc, T_ALL) != T_EQUALS) {
scan_err1(lc, "Expected an equals, got: %s", lc->str);
- }
- token = lex_get_token(lc, T_NAME);
+ }
+ token = lex_get_token(lc, T_NAME);
Dmsg1(190, "Got value: %s\n", lc->str);
- switch (BakVerFields[i].token) {
+ switch (BakVerFields[i].token) {
case 'C':
- /* Find Client Resource */
- if (pass == 2) {
- res = GetResWithName(R_CLIENT, lc->str);
- if (res == NULL) {
+ /* Find Client Resource */
+ if (pass == 2) {
+ res = GetResWithName(R_CLIENT, lc->str);
+ if (res == NULL) {
scan_err1(lc, "Could not find specified Client Resource: %s",
- lc->str);
- }
- res_all.res_job.client = (CLIENT *)res;
- }
- break;
+ lc->str);
+ }
+ res_all.res_job.client = (CLIENT *)res;
+ }
+ break;
case 'F':
- /* Find FileSet Resource */
- if (pass == 2) {
- res = GetResWithName(R_FILESET, lc->str);
- if (res == NULL) {
+ /* Find FileSet Resource */
+ if (pass == 2) {
+ res = GetResWithName(R_FILESET, lc->str);
+ if (res == NULL) {
scan_err1(lc, "Could not find specified FileSet Resource: %s\n",
- lc->str);
- }
- res_all.res_job.fileset = (FILESET *)res;
- }
- break;
+ lc->str);
+ }
+ res_all.res_job.fileset = (FILESET *)res;
+ }
+ break;
case 'L':
- /* Get level */
- for (i=0; joblevels[i].level_name; i++) {
- if (joblevels[i].job_type == item->code &&
- strcasecmp(lc->str, joblevels[i].level_name) == 0) {
- ((JOB *)(item->value))->level = joblevels[i].level;
- i = 0;
- break;
- }
- }
- if (i != 0) {
+ /* Get level */
+ for (i=0; joblevels[i].level_name; i++) {
+ if (joblevels[i].job_type == item->code &&
+ strcasecmp(lc->str, joblevels[i].level_name) == 0) {
+ ((JOB *)(item->value))->level = joblevels[i].level;
+ i = 0;
+ break;
+ }
+ }
+ if (i != 0) {
scan_err1(lc, "Expected a Job Level keyword, got: %s", lc->str);
- }
- break;
- } /* end switch */
- break;
- } /* end if strcmp() */
+ }
+ break;
+ } /* end switch */
+ break;
+ } /* end if strcmp() */
} /* end for */
if (!found) {
scan_err1(lc, "%s not a valid Backup/verify keyword", lc->str);
}
} /* end while */
- lc->options = options; /* reset original options */
+ lc->options = options; /* reset original options */
set_bit(index, res_all.hdr.item_present);
}
}
for (i=0; RestoreFields[i].name; i++) {
Dmsg1(190, "Restore kw=%s\n", lc->str);
- if (strcasecmp(lc->str, RestoreFields[i].name) == 0) {
- found = true;
- if (lex_get_token(lc, T_ALL) != T_EQUALS) {
+ if (strcasecmp(lc->str, RestoreFields[i].name) == 0) {
+ found = true;
+ if (lex_get_token(lc, T_ALL) != T_EQUALS) {
scan_err1(lc, "Expected an equals, got: %s", lc->str);
- }
- token = lex_get_token(lc, T_ALL);
+ }
+ token = lex_get_token(lc, T_ALL);
Dmsg1(190, "Restore value=%s\n", lc->str);
- switch (RestoreFields[i].token) {
+ switch (RestoreFields[i].token) {
case 'B':
- /* Bootstrap */
- if (token != T_IDENTIFIER && token != T_UNQUOTED_STRING && token != T_QUOTED_STRING) {
+ /* Bootstrap */
+ if (token != T_IDENTIFIER && token != T_UNQUOTED_STRING && token != T_QUOTED_STRING) {
scan_err1(lc, "Expected a Restore bootstrap file, got: %s", lc->str);
- }
- if (pass == 1) {
- res_all.res_job.RestoreBootstrap = bstrdup(lc->str);
- }
- break;
+ }
+ if (pass == 1) {
+ res_all.res_job.RestoreBootstrap = bstrdup(lc->str);
+ }
+ break;
case 'C':
- /* Find Client Resource */
- if (pass == 2) {
- res = GetResWithName(R_CLIENT, lc->str);
- if (res == NULL) {
+ /* Find Client Resource */
+ if (pass == 2) {
+ res = GetResWithName(R_CLIENT, lc->str);
+ if (res == NULL) {
scan_err1(lc, "Could not find specified Client Resource: %s",
- lc->str);
- }
- res_all.res_job.client = (CLIENT *)res;
- }
- break;
+ lc->str);
+ }
+ res_all.res_job.client = (CLIENT *)res;
+ }
+ break;
case 'F':
- /* Find FileSet Resource */
- if (pass == 2) {
- res = GetResWithName(R_FILESET, lc->str);
- if (res == NULL) {
+ /* Find FileSet Resource */
+ if (pass == 2) {
+ res = GetResWithName(R_FILESET, lc->str);
+ if (res == NULL) {
scan_err1(lc, "Could not find specified FileSet Resource: %s\n",
- lc->str);
- }
- res_all.res_job.fileset = (FILESET *)res;
- }
- break;
+ lc->str);
+ }
+ res_all.res_job.fileset = (FILESET *)res;
+ }
+ break;
case 'J':
- /* JobId */
- if (token != T_NUMBER) {
+ /* JobId */
+ if (token != T_NUMBER) {
scan_err1(lc, "expected an integer number, got: %s", lc->str);
- }
- errno = 0;
- res_all.res_job.RestoreJobId = strtol(lc->str, NULL, 0);
+ }
+ errno = 0;
+ res_all.res_job.RestoreJobId = strtol(lc->str, NULL, 0);
Dmsg1(190, "RestorJobId=%d\n", res_all.res_job.RestoreJobId);
- if (errno != 0) {
+ if (errno != 0) {
scan_err1(lc, "expected an integer number, got: %s", lc->str);
- }
- break;
+ }
+ break;
case 'W':
- /* Where */
- if (token != T_IDENTIFIER && token != T_UNQUOTED_STRING && token != T_QUOTED_STRING) {
+ /* Where */
+ if (token != T_IDENTIFIER && token != T_UNQUOTED_STRING && token != T_QUOTED_STRING) {
scan_err1(lc, "Expected a Restore root directory, got: %s", lc->str);
- }
- if (pass == 1) {
- res_all.res_job.RestoreWhere = bstrdup(lc->str);
- }
- break;
+ }
+ if (pass == 1) {
+ res_all.res_job.RestoreWhere = bstrdup(lc->str);
+ }
+ break;
case 'R':
- /* Replacement options */
- if (token != T_IDENTIFIER && token != T_UNQUOTED_STRING && token != T_QUOTED_STRING) {
+ /* Replacement options */
+ if (token != T_IDENTIFIER && token != T_UNQUOTED_STRING && token != T_QUOTED_STRING) {
scan_err1(lc, "Expected a keyword name, got: %s", lc->str);
- }
- /* Fix to scan Replacement options */
- for (i=0; ReplaceOptions[i].name; i++) {
- if (strcasecmp(lc->str, ReplaceOptions[i].name) == 0) {
- ((JOB *)(item->value))->replace = ReplaceOptions[i].token;
- i = 0;
- break;
- }
- }
- if (i != 0) {
+ }
+ /* Fix to scan Replacement options */
+ for (i=0; ReplaceOptions[i].name; i++) {
+ if (strcasecmp(lc->str, ReplaceOptions[i].name) == 0) {
+ ((JOB *)(item->value))->replace = ReplaceOptions[i].token;
+ i = 0;
+ break;
+ }
+ }
+ if (i != 0) {
scan_err1(lc, "Expected a Restore replacement option, got: %s", lc->str);
- }
- break;
- } /* end switch */
- break;
- } /* end if strcmp() */
+ }
+ break;
+ } /* end switch */
+ break;
+ } /* end if strcmp() */
} /* end for */
if (!found) {
scan_err1(lc, "%s not a valid Restore keyword", lc->str);
}
} /* end while */
- lc->options = options; /* reset original options */
+ lc->options = options; /* reset original options */
set_bit(index, res_all.hdr.item_present);
}
/*
* Resource codes -- they must be sequential for indexing
*/
-#define R_FIRST 1001
-
-#define R_DIRECTOR 1001
-#define R_CLIENT 1002
-#define R_JOB 1003
-#define R_STORAGE 1004
-#define R_CATALOG 1005
-#define R_SCHEDULE 1006
-#define R_FILESET 1007
-#define R_GROUP 1008
-#define R_POOL 1009
-#define R_MSGS 1010
-#define R_COUNTER 1011
-#define R_CONSOLE 1012
-
-#define R_LAST R_CONSOLE
+#define R_FIRST 1001
+
+#define R_DIRECTOR 1001
+#define R_CLIENT 1002
+#define R_JOB 1003
+#define R_STORAGE 1004
+#define R_CATALOG 1005
+#define R_SCHEDULE 1006
+#define R_FILESET 1007
+#define R_GROUP 1008
+#define R_POOL 1009
+#define R_MSGS 1010
+#define R_COUNTER 1011
+#define R_CONSOLE 1012
+
+#define R_LAST R_CONSOLE
/*
* Some resource attributes
*/
-#define R_NAME 1020
-#define R_ADDRESS 1021
-#define R_PASSWORD 1022
-#define R_TYPE 1023
-#define R_BACKUP 1024
+#define R_NAME 1020
+#define R_ADDRESS 1021
+#define R_PASSWORD 1022
+#define R_TYPE 1023
+#define R_BACKUP 1024
/* Used for certain KeyWord tables */
-struct s_kw {
+struct s_kw {
char *name;
- int token;
+ int token;
};
/* Job Level keyword structure */
struct s_jl {
- char *level_name; /* level keyword */
- int level; /* level */
- int job_type; /* JobType permitting this level */
+ char *level_name; /* level keyword */
+ int level; /* level */
+ int job_type; /* JobType permitting this level */
};
/* Job Type keyword structure */
struct RUN;
/*
- * Director Resource
+ * Director Resource
*
*/
struct DIRRES {
- RES hdr;
- int DIRport; /* where we listen -- UA port server port */
- char *DIRaddr; /* bind address */
- char *password; /* Password for UA access */
- int enable_ssl; /* Use SSL for UA */
- char *query_file; /* SQL query file */
- char *working_directory; /* WorkingDirectory */
- char *pid_directory; /* PidDirectory */
- char *subsys_directory; /* SubsysDirectory */
- int require_ssl; /* Require SSL for all connections */
- MSGS *messages; /* Daemon message handler */
- uint32_t MaxConcurrentJobs; /* Max concurrent jobs for whole director */
- utime_t FDConnectTimeout; /* timeout for connect in seconds */
- utime_t SDConnectTimeout; /* timeout in seconds */
+ RES hdr;
+ int DIRport; /* where we listen -- UA port server port */
+ char *DIRaddr; /* bind address */
+ char *password; /* Password for UA access */
+ int enable_ssl; /* Use SSL for UA */
+ char *query_file; /* SQL query file */
+ char *working_directory; /* WorkingDirectory */
+ char *pid_directory; /* PidDirectory */
+ char *subsys_directory; /* SubsysDirectory */
+ int require_ssl; /* Require SSL for all connections */
+ MSGS *messages; /* Daemon message handler */
+ uint32_t MaxConcurrentJobs; /* Max concurrent jobs for whole director */
+ utime_t FDConnectTimeout; /* timeout for connect in seconds */
+ utime_t SDConnectTimeout; /* timeout in seconds */
};
/*
* Console Resource
*/
struct CONRES {
- RES hdr;
- char *password; /* UA server password */
- int enable_ssl; /* Use SSL */
+ RES hdr;
+ char *password; /* UA server password */
+ int enable_ssl; /* Use SSL */
};
*
*/
struct CAT {
- RES hdr;
+ RES hdr;
- int db_port; /* Port -- not yet implemented */
- char *db_address; /* host name for remote access */
- char *db_socket; /* Socket for local access */
+ int db_port; /* Port -- not yet implemented */
+ char *db_address; /* host name for remote access */
+ char *db_socket; /* Socket for local access */
char *db_password;
char *db_user;
char *db_name;
*
*/
struct CLIENT {
- RES hdr;
+ RES hdr;
- int FDport; /* Where File daemon listens */
- int AutoPrune; /* Do automatic pruning? */
- utime_t FileRetention; /* file retention period in seconds */
- utime_t JobRetention; /* job retention period in seconds */
+ int FDport; /* Where File daemon listens */
+ int AutoPrune; /* Do automatic pruning? */
+ utime_t FileRetention; /* file retention period in seconds */
+ utime_t JobRetention; /* job retention period in seconds */
char *address;
char *password;
- CAT *catalog; /* Catalog resource */
- uint32_t MaxConcurrentJobs; /* Maximume concurrent jobs */
- uint32_t NumConcurrentJobs; /* number of concurrent jobs running */
- int enable_ssl; /* Use SSL */
+ CAT *catalog; /* Catalog resource */
+   uint32_t MaxConcurrentJobs;        /* Maximum concurrent jobs */
+ uint32_t NumConcurrentJobs; /* number of concurrent jobs running */
+ int enable_ssl; /* Use SSL */
};
/*
*
*/
struct STORE {
- RES hdr;
+ RES hdr;
- int SDport; /* port where Directors connect */
- int SDDport; /* data port for File daemon */
+ int SDport; /* port where Directors connect */
+ int SDDport; /* data port for File daemon */
char *address;
char *password;
char *media_type;
char *dev_name;
- int autochanger; /* set if autochanger */
- uint32_t MaxConcurrentJobs; /* Maximume concurrent jobs */
- uint32_t NumConcurrentJobs; /* number of concurrent jobs running */
- int enable_ssl; /* Use SSL */
+ int autochanger; /* set if autochanger */
+   uint32_t MaxConcurrentJobs;        /* Maximum concurrent jobs */
+ uint32_t NumConcurrentJobs; /* number of concurrent jobs running */
+ int enable_ssl; /* Use SSL */
};
*
*/
struct JOB {
- RES hdr;
-
- int JobType; /* job type (backup, verify, restore */
- int level; /* default backup/verify level */
- int Priority; /* Job priority */
- int RestoreJobId; /* What -- JobId to restore */
- char *RestoreWhere; /* Where on disk to restore -- directory */
- char *RestoreBootstrap; /* Bootstrap file */
- char *RunBeforeJob; /* Run program before Job */
- char *RunAfterJob; /* Run program after Job */
- char *RunAfterFailedJob; /* Run program after Job that errs */
- char *ClientRunBeforeJob; /* Run client program before Job */
- char *ClientRunAfterJob; /* Run client program after Job */
- char *WriteBootstrap; /* Where to write bootstrap Job updates */
- int replace; /* How (overwrite, ..) */
- utime_t MaxRunTime; /* max run time in seconds */
- utime_t MaxStartDelay; /* max start delay in seconds */
- int PrefixLinks; /* prefix soft links with Where path */
- int PruneJobs; /* Force pruning of Jobs */
- int PruneFiles; /* Force pruning of Files */
- int PruneVolumes; /* Force pruning of Volumes */
- int SpoolAttributes; /* Set to spool attributes in SD */
- uint32_t MaxConcurrentJobs; /* Maximume concurrent jobs */
- int RescheduleOnError; /* Set to reschedule on error */
- int RescheduleTimes; /* Number of times to reschedule job */
- utime_t RescheduleInterval; /* Reschedule interval */
+ RES hdr;
+
+   int JobType;                       /* job type (backup, verify, restore) */
+ int level; /* default backup/verify level */
+ int Priority; /* Job priority */
+ int RestoreJobId; /* What -- JobId to restore */
+ char *RestoreWhere; /* Where on disk to restore -- directory */
+ char *RestoreBootstrap; /* Bootstrap file */
+ char *RunBeforeJob; /* Run program before Job */
+ char *RunAfterJob; /* Run program after Job */
+ char *RunAfterFailedJob; /* Run program after Job that errs */
+ char *ClientRunBeforeJob; /* Run client program before Job */
+ char *ClientRunAfterJob; /* Run client program after Job */
+ char *WriteBootstrap; /* Where to write bootstrap Job updates */
+ int replace; /* How (overwrite, ..) */
+ utime_t MaxRunTime; /* max run time in seconds */
+ utime_t MaxStartDelay; /* max start delay in seconds */
+ int PrefixLinks; /* prefix soft links with Where path */
+ int PruneJobs; /* Force pruning of Jobs */
+ int PruneFiles; /* Force pruning of Files */
+ int PruneVolumes; /* Force pruning of Volumes */
+ int SpoolAttributes; /* Set to spool attributes in SD */
+   uint32_t MaxConcurrentJobs;        /* Maximum concurrent jobs */
+ int RescheduleOnError; /* Set to reschedule on error */
+ int RescheduleTimes; /* Number of times to reschedule job */
+ utime_t RescheduleInterval; /* Reschedule interval */
+ utime_t JobRetention; /* job retention period in seconds */
- MSGS *messages; /* How and where to send messages */
- SCHED *schedule; /* When -- Automatic schedule */
- CLIENT *client; /* Who to backup */
- FILESET *fileset; /* What to backup -- Fileset */
- STORE *storage; /* Where is device -- Storage daemon */
- POOL *pool; /* Where is media -- Media Pool */
- JOB *verify_job; /* Job name to verify */
- uint32_t NumConcurrentJobs; /* number of concurrent jobs running */
+ MSGS *messages; /* How and where to send messages */
+ SCHED *schedule; /* When -- Automatic schedule */
+ CLIENT *client; /* Who to backup */
+ FILESET *fileset; /* What to backup -- Fileset */
+ STORE *storage; /* Where is device -- Storage daemon */
+ POOL *pool; /* Where is media -- Media Pool */
+ JOB *verify_job; /* Job name to verify */
+ uint32_t NumConcurrentJobs; /* number of concurrent jobs running */
};
#define MAX_FOPTS 30
/* File options structure */
struct FOPTS {
- char opts[MAX_FOPTS]; /* options string */
- alist match; /* match string(s) */
- alist base_list; /* list of base names */
+ char opts[MAX_FOPTS]; /* options string */
+ alist match; /* match string(s) */
+ alist base_list; /* list of base names */
};
/* This is either an include item or an exclude item */
struct INCEXE {
- FOPTS *current_opts; /* points to current options structure */
- FOPTS **opts_list; /* options list */
- int num_opts; /* number of options items */
- alist name_list; /* filename list -- holds char * */
+ FOPTS *current_opts; /* points to current options structure */
+ FOPTS **opts_list; /* options list */
+ int num_opts; /* number of options items */
+ alist name_list; /* filename list -- holds char * */
};
/*
*
*/
struct FILESET {
- RES hdr;
+ RES hdr;
- int new_include; /* Set if new include used */
- INCEXE **include_items; /* array of incexe structures */
- int num_includes; /* number in array */
+ int new_include; /* Set if new include used */
+ INCEXE **include_items; /* array of incexe structures */
+ int num_includes; /* number in array */
INCEXE **exclude_items;
int num_excludes;
- int have_MD5; /* set if MD5 initialized */
- struct MD5Context md5c; /* MD5 of include/exclude */
- char MD5[30]; /* base 64 representation of MD5 */
+ int have_MD5; /* set if MD5 initialized */
+ struct MD5Context md5c; /* MD5 of include/exclude */
+ char MD5[30]; /* base 64 representation of MD5 */
};
*
*/
struct SCHED {
- RES hdr;
+ RES hdr;
RUN *run;
};
*
*/
struct GROUP {
- RES hdr;
+ RES hdr;
};
/*
* Counter Resource
*/
struct COUNTER {
- RES hdr;
-
- int32_t MinValue; /* Minimum value */
- int32_t MaxValue; /* Maximum value */
- int32_t CurrentValue; /* Current value */
- COUNTER *WrapCounter; /* Wrap counter name */
- CAT *Catalog; /* Where to store */
- bool created; /* Created in DB */
+ RES hdr;
+
+ int32_t MinValue; /* Minimum value */
+ int32_t MaxValue; /* Maximum value */
+ int32_t CurrentValue; /* Current value */
+ COUNTER *WrapCounter; /* Wrap counter name */
+ CAT *Catalog; /* Where to store */
+ bool created; /* Created in DB */
};
/*
*
*/
struct POOL {
- RES hdr;
-
- char *pool_type; /* Pool type */
- char *label_format; /* Label format string */
- char *cleaning_prefix; /* Cleaning label prefix */
- int use_catalog; /* maintain catalog for media */
- int catalog_files; /* maintain file entries in catalog */
- int use_volume_once; /* write on volume only once */
- int accept_any_volume; /* accept any volume */
- int purge_oldest_volume; /* purge oldest volume */
- int recycle_oldest_volume; /* attempt to recycle oldest volume */
- int recycle_current_volume; /* attempt recycle of current volume */
- uint32_t max_volumes; /* max number of volumes */
- utime_t VolRetention; /* volume retention period in seconds */
- utime_t VolUseDuration; /* duration volume can be used */
- uint32_t MaxVolJobs; /* Maximum jobs on the Volume */
- uint32_t MaxVolFiles; /* Maximum files on the Volume */
- uint64_t MaxVolBytes; /* Maximum bytes on the Volume */
- int AutoPrune; /* default for pool auto prune */
- int Recycle; /* default for media recycle yes/no */
+ RES hdr;
+
+ char *pool_type; /* Pool type */
+ char *label_format; /* Label format string */
+ char *cleaning_prefix; /* Cleaning label prefix */
+ int use_catalog; /* maintain catalog for media */
+ int catalog_files; /* maintain file entries in catalog */
+ int use_volume_once; /* write on volume only once */
+ int accept_any_volume; /* accept any volume */
+ int purge_oldest_volume; /* purge oldest volume */
+ int recycle_oldest_volume; /* attempt to recycle oldest volume */
+ int recycle_current_volume; /* attempt recycle of current volume */
+ uint32_t max_volumes; /* max number of volumes */
+ utime_t VolRetention; /* volume retention period in seconds */
+ utime_t VolUseDuration; /* duration volume can be used */
+ uint32_t MaxVolJobs; /* Maximum jobs on the Volume */
+ uint32_t MaxVolFiles; /* Maximum files on the Volume */
+ uint64_t MaxVolBytes; /* Maximum bytes on the Volume */
+ int AutoPrune; /* default for pool auto prune */
+ int Recycle; /* default for media recycle yes/no */
};
CONRES res_con;
CLIENT res_client;
STORE res_store;
- CAT res_cat;
- JOB res_job;
+ CAT res_cat;
+ JOB res_job;
FILESET res_fs;
SCHED res_sch;
GROUP res_group;
POOL res_pool;
MSGS res_msgs;
COUNTER res_counter;
- RES hdr;
+ RES hdr;
};
/* Run structure contained in Schedule Resource */
struct RUN {
- RUN *next; /* points to next run record */
- int level; /* level override */
- int Priority; /* priority override */
+ RUN *next; /* points to next run record */
+ int level; /* level override */
+ int Priority; /* priority override */
int job_type;
- POOL *pool; /* Pool override */
- STORE *storage; /* Storage override */
- MSGS *msgs; /* Messages override */
+ POOL *pool; /* Pool override */
+ STORE *storage; /* Storage override */
+ MSGS *msgs; /* Messages override */
char *since;
int level_no;
- int minute; /* minute to run job */
- time_t last_run; /* last time run */
- time_t next_run; /* next time to run */
+ int minute; /* minute to run job */
+ time_t last_run; /* last time run */
+ time_t next_run; /* next time to run */
char hour[nbytes_for_bits(24)]; /* bit set for each hour */
char mday[nbytes_for_bits(31)]; /* bit set for each day of month */
char month[nbytes_for_bits(12)]; /* bit set for each month */
"SELECT FileSet.FileSetId,FileSet.FileSet,FileSet.CreateTime FROM Job,"
"Client,FileSet WHERE Job.FileSetId=FileSet.FileSetId "
"AND Job.ClientId=%u AND Client.ClientId=%u "
- "GROUP BY FileSet.FileSetId ORDER BY FileSet.FileSetId";
+ "GROUP BY FileSet.FileSetId ORDER BY FileSet.CreateTime";
/* Find MediaType used by this Job */
char *uar_mediatype =
/*
*
* Bacula Director -- User Agent Database File tree for Restore
- * command.
+ * command. This file interacts with the user to implement
+ * the UA tree commands.
*
* Kern Sibbald, July MMII
*
static int countcmd(UAContext *ua, TREE_CTX *tree);
static int findcmd(UAContext *ua, TREE_CTX *tree);
static int lscmd(UAContext *ua, TREE_CTX *tree);
+static int lsmark(UAContext *ua, TREE_CTX *tree);
static int dircmd(UAContext *ua, TREE_CTX *tree);
+static int estimatecmd(UAContext *ua, TREE_CTX *tree);
static int helpcmd(UAContext *ua, TREE_CTX *tree);
static int cdcmd(UAContext *ua, TREE_CTX *tree);
static int pwdcmd(UAContext *ua, TREE_CTX *tree);
struct cmdstruct { char *key; int (*func)(UAContext *ua, TREE_CTX *tree); char *help; };
static struct cmdstruct commands[] = {
- { N_("mark"), markcmd, _("mark file for restoration")},
- { N_("unmark"), unmarkcmd, _("unmark file for restoration")},
{ N_("cd"), cdcmd, _("change current directory")},
- { N_("pwd"), pwdcmd, _("print current working directory")},
- { N_("ls"), lscmd, _("list current directory")},
- { N_("dir"), dircmd, _("list current directory")},
{ N_("count"), countcmd, _("count marked files")},
- { N_("find"), findcmd, _("find files")},
+ { N_("dir"), dircmd, _("list current directory")},
{ N_("done"), quitcmd, _("leave file selection mode")},
+ { N_("estimate"), estimatecmd, _("estimate restore size")},
{ N_("exit"), quitcmd, _("exit = done")},
+ { N_("find"), findcmd, _("find files")},
{ N_("help"), helpcmd, _("print help")},
+ { N_("lsmark"), lsmark, _("list the marked files")},
+ { N_("ls"), lscmd, _("list current directory")},
+ { N_("mark"), markcmd, _("mark file for restoration")},
+ { N_("pwd"), pwdcmd, _("print current working directory")},
+ { N_("unmark"), unmarkcmd, _("unmark file for restoration")},
{ N_("?"), helpcmd, _("print help")},
};
#define comsize (sizeof(commands)/sizeof(struct cmdstruct))
* down the tree setting all children if the
* node is a directory.
*/
-static void set_extract(UAContext *ua, TREE_NODE *node, TREE_CTX *tree, bool extract)
+static int set_extract(UAContext *ua, TREE_NODE *node, TREE_CTX *tree, bool extract)
{
TREE_NODE *n;
FILE_DBR fdbr;
struct stat statp;
+ int count = 0;
node->extract = extract;
+ if (node->type != TN_NEWDIR) {
+ count++;
+ }
/* For a non-file (i.e. directory), we see all the children */
if (node->type != TN_FILE) {
for (n=node->child; n; n=n->sibling) {
- set_extract(ua, n, tree, extract);
+ count += set_extract(ua, n, tree, extract);
}
} else if (extract) {
char cwd[2000];
}
}
}
+ return count;
}
static int markcmd(UAContext *ua, TREE_CTX *tree)
{
TREE_NODE *node;
+ int count = 0;
- if (ua->argc < 2)
- return 1;
- if (!tree->node->child) {
+ if (ua->argc < 2 || !tree->node->child) {
+ bsendmsg(ua, _("No files marked.\n"));
return 1;
}
for (node = tree->node->child; node; node=node->sibling) {
if (fnmatch(ua->argk[1], node->fname, 0) == 0) {
- set_extract(ua, node, tree, true);
+ count += set_extract(ua, node, tree, true);
}
}
+ if (count == 0) {
+ bsendmsg(ua, _("No files marked.\n"));
+ } else {
+ bsendmsg(ua, _("%d file%s marked.\n"), count, count==1?"":"s");
+ }
return 1;
}
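The mark and unmark commands above match each tree entry against a shell-style pattern with fnmatch(3). As a minimal, self-contained illustration of that matching (count_matches is a hypothetical helper for this sketch, not Bacula code):

```c
#include <fnmatch.h>

/* Hypothetical helper: count how many names match a shell-style
 * pattern -- the same fnmatch(3) test markcmd applies per tree node. */
static int count_matches(const char *pattern, const char *names[], int n)
{
    int count = 0;
    for (int i = 0; i < n; i++) {
        if (fnmatch(pattern, names[i], 0) == 0) {
            count++;                  /* pattern matched this name */
        }
    }
    return count;
}
```

This is why `mark *.c` can report "2 files marked" while `mark *.o` reports none.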
return 1;
}
+/*
+ * Ls command that lists only the marked files
+ */
+static int lsmark(UAContext *ua, TREE_CTX *tree)
+{
+ TREE_NODE *node;
+
+ if (!tree->node->child) {
+ return 1;
+ }
+ for (node = tree->node->child; node; node=node->sibling) {
+ if (node->extract &&
+ (ua->argc == 1 || fnmatch(ua->argk[1], node->fname, 0) == 0)) {
+ bsendmsg(ua, "%s%s%s\n", node->extract?"*":"", node->fname,
+ (node->type==TN_DIR||node->type==TN_NEWDIR)?"/":"");
+ }
+ }
+ return 1;
+}
+
+
extern char *getuser(uid_t uid);
extern char *getgroup(gid_t gid);
}
+static int estimatecmd(UAContext *ua, TREE_CTX *tree)
+{
+ int total, num_extract;
+ uint64_t total_bytes = 0;
+ FILE_DBR fdbr;
+ struct stat statp;
+ char cwd[1100];
+ char ec1[50];
+
+ total = num_extract = 0;
+ for (TREE_NODE *node=first_tree_node(tree->root); node; node=next_tree_node(node)) {
+ if (node->type != TN_NEWDIR) {
+ total++;
+ /* If regular file, get size */
+ if (node->extract && node->type == TN_FILE) {
+ num_extract++;
+ tree_getpath(node, cwd, sizeof(cwd));
+ fdbr.FileId = 0;
+ fdbr.JobId = node->JobId;
+ if (db_get_file_attributes_record(ua->jcr, ua->db, cwd, NULL, &fdbr)) {
+ int32_t LinkFI;
+ decode_stat(fdbr.LStat, &statp, &LinkFI); /* decode stat pkt */
+ if (S_ISREG(statp.st_mode) && statp.st_size > 0) {
+ total_bytes += statp.st_size;
+ }
+ }
+ /* Directory, count only */
+ } else if (node->extract) {
+ num_extract++;
+ }
+ }
+ }
+ bsendmsg(ua, _("%d total files; %d marked for restoration; %s bytes.\n"),
+ total, num_extract, edit_uint64_with_commas(total_bytes, ec1));
+ return 1;
+}
+
+
+
static int helpcmd(UAContext *ua, TREE_CTX *tree)
{
unsigned int i;
static int unmarkcmd(UAContext *ua, TREE_CTX *tree)
{
TREE_NODE *node;
+ int count = 0;
- if (ua->argc < 2)
- return 1;
- if (!tree->node->child) {
+ if (ua->argc < 2 || !tree->node->child) {
+ bsendmsg(ua, _("No files unmarked.\n"));
return 1;
}
for (node = tree->node->child; node; node=node->sibling) {
if (fnmatch(ua->argk[1], node->fname, 0) == 0) {
- set_extract(ua, node, tree, false);
+ count += set_extract(ua, node, tree, false);
}
}
+ if (count == 0) {
+ bsendmsg(ua, _("No files unmarked.\n"));
+ } else {
+ bsendmsg(ua, _("%d file%s unmarked.\n"), count, count==1?"":"s");
+ }
return 1;
}
errno = 0;
nread = read(bsock->fd, ptr, nleft);
if (bsock->timed_out || bsock->terminated) {
- Dmsg1(400, "timed_out = %d\n", bsock->timed_out);
return nread;
}
} while (nread == -1 && (errno == EINTR || errno == EAGAIN));
va_list arg_ptr;
int maxlen;
+ if (bs->errors || bs->terminated) {
+ return 0;
+ }
/* This probably won't work, but we vsnprintf, then if we
* get a negative length or a length greater than our buffer
* (depending on which library is used), the printf was truncated, so
bs->msg = realloc_pool_memory(bs->msg, maxlen + 200);
goto again;
}
- return bnet_send(bs) < 0 ? 0 : 1;
+ return bnet_send(bs);
}
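The truncation comment above refers to a well-known vsnprintf portability wrinkle: old glibc returns -1 when output does not fit, while C99 returns the length that would have been written. A standalone sketch of the resulting grow-and-retry loop (format_message is a hypothetical helper, not the bnet code; error checks are omitted for brevity):

```c
#include <stdarg.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical helper showing the retry idiom: if vsnprintf reports
 * truncation (negative, or length >= buffer size), grow and retry. */
static char *format_message(const char *fmt, ...)
{
    size_t size = 32;                 /* deliberately small start */
    char *buf = malloc(size);
    for (;;) {
        va_list ap;
        int len;
        va_start(ap, fmt);
        len = vsnprintf(buf, size, fmt, ap);
        va_end(ap);
        if (len >= 0 && (size_t)len < size) {
            return buf;               /* output fit completely */
        }
        /* truncated: use the reported length if available, else double */
        size = (len >= 0) ? (size_t)len + 1 : size * 2;
        buf = realloc(buf, size);
    }
}
```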
/*
case MD_DIRECTOR:
Dmsg1(800, "DIRECTOR for following msg: %s", msg);
if (jcr && jcr->dir_bsock && !jcr->dir_bsock->errors) {
-
- jcr->dir_bsock->msglen = Mmsg(&(jcr->dir_bsock->msg),
- "Jmsg Job=%s type=%d level=%d %s", jcr->Job,
- type, level, msg) + 1;
- bnet_send(jcr->dir_bsock);
+ bnet_fsend(jcr->dir_bsock, "Jmsg Job=%s type=%d level=%d %s",
+ jcr->Job, type, level, msg);
}
break;
case MD_STDOUT:
/* scan.c */
void strip_trailing_junk (char *str);
void strip_trailing_slashes (char *dir);
-int skip_spaces (char **msg);
-int skip_nonspaces (char **msg);
+bool skip_spaces (char **msg);
+bool skip_nonspaces (char **msg);
int fstrsch (char *a, char *b);
int parse_args(POOLMEM *cmd, POOLMEM **args, int *argc,
char **argk, char **argv, int max_args);
- * 1 on success
+ * true on success
* new address in passed parameter
*/
-int skip_spaces(char **msg)
+bool skip_spaces(char **msg)
{
char *p = *msg;
if (!p) {
- return 0;
+ return false;
}
while (*p && B_ISSPACE(*p)) {
p++;
}
*msg = p;
- return *p ? 1 : 0;
+ return *p ? true : false;
}
/*
- * 1 on success
+ * true on success
* new address in passed parameter
*/
-int skip_nonspaces(char **msg)
+bool skip_nonspaces(char **msg)
{
char *p = *msg;
if (!p) {
- return 0;
+ return false;
}
while (*p && !B_ISSPACE(*p)) {
p++;
}
*msg = p;
- return *p ? 1 : 0;
+ return *p ? true : false;
}
/* folded search for string - case insensitive */
if (dev->dev_errno == 0) {
dev->dev_errno = ENOSPC; /* out of space */
}
- Jmsg(jcr, M_ERROR, 0, _("Write error on device %s. ERR=%s.\n"),
- dev->dev_name, strerror(dev->dev_errno));
+ Jmsg(jcr, M_ERROR, 0, _("Write error at %u:%u on device %s. ERR=%s.\n"),
+ dev->file, dev->block_num, dev->dev_name, strerror(dev->dev_errno));
} else {
dev->dev_errno = ENOSPC; /* out of space */
- Jmsg3(jcr, M_INFO, 0, _("End of medium on device %s. Write of %u bytes got %d.\n"),
- dev->dev_name, wlen, stat);
+ Jmsg(jcr, M_INFO, 0, _("End of medium at %u:%u on device %s. Write of %u bytes got %d.\n"),
+ dev->file, dev->block_num, dev->dev_name, wlen, stat);
}
Dmsg6(100, "=== Write error. size=%u rtn=%d dev_blk=%d blk_blk=%d errno=%d: ERR=%s\n",
int bsize = TAPE_BSIZE;
char VolName[MAX_NAME_LENGTH];
+/*
+ * If you change the format of the state file,
+ * increment this value
+ */
+static uint32_t btape_state_level = 1;
+
DEVICE *dev = NULL;
DEVRES *device = NULL;
static bool open_the_device();
static char *edit_device_codes(JCR *jcr, char *omsg, char *imsg, char *cmd);
static void autochangercmd();
+static void do_unfill();
/* Static variables */
#define CONFIG_FILE "bacula-sd.conf"
char *configfile;
+#define MAX_CMD_ARGS 30
+static POOLMEM *cmd;
+static POOLMEM *args;
+static char *argk[MAX_CMD_ARGS];
+static char *argv[MAX_CMD_ARGS];
+static int argc;
+
static BSR *bsr = NULL;
-static char cmd[1000];
static int signals = TRUE;
static int ok;
static int stop;
* Main Bacula Pool Creation Program
*
*/
-int main(int argc, char *argv[])
+int main(int margc, char *margv[])
{
int ch;
printf("Tape block granularity is %d bytes.\n", TAPE_BSIZE);
working_directory = "/tmp";
- my_name_is(argc, argv, "btape");
+ my_name_is(margc, margv, "btape");
init_msg(NULL, NULL);
- while ((ch = getopt(argc, argv, "b:c:d:sv?")) != -1) {
+ while ((ch = getopt(margc, margv, "b:c:d:sv?")) != -1) {
switch (ch) {
case 'b': /* bootstrap file */
bsr = parse_bsr(NULL, optarg);
}
}
- argc -= optind;
- argv += optind;
-
+ margc -= optind;
+ margv += optind;
+ cmd = get_pool_memory(PM_FNAME);
+ args = get_pool_memory(PM_FNAME);
if (signals) {
init_signals(terminate_btape);
/* See if we can open a device */
- if (argc == 0) {
+ if (margc == 0) {
Pmsg0(000, "No archive name specified.\n");
usage();
exit(1);
- } else if (argc != 1) {
+ } else if (margc != 1) {
Pmsg0(000, "Improper number of arguments specified.\n");
usage();
exit(1);
}
- jcr = setup_jcr("btape", argv[0], bsr, NULL);
+ jcr = setup_jcr("btape", margv[0], bsr, NULL);
dev = setup_to_access_device(jcr, 0); /* acquire for write */
if (!dev) {
exit(1);
}
+ dev->max_volume_size = 0;
if (!open_the_device()) {
goto terminate;
}
free(configfile);
}
free_config_resources();
+ if (args) {
+ free_pool_memory(args);
+ args = NULL;
+ }
+ if (cmd) {
+ free_pool_memory(cmd);
+ cmd = NULL;
+ }
if (dev) {
term_dev(dev);
static void labelcmd()
{
if (VolumeName) {
- bstrncpy(cmd, VolumeName, sizeof(cmd));
+ pm_strcpy(&cmd, VolumeName);
} else {
if (!get_cmd("Enter Volume Name: ")) {
return;
static void weofcmd()
{
int stat;
+ int num = 1;
+ if (argc > 1) {
+ num = atoi(argk[1]);
+ }
+ if (num <= 0) {
+ num = 1;
+ }
- if ((stat = weof_dev(dev, 1)) < 0) {
+ if ((stat = weof_dev(dev, num)) < 0) {
Pmsg2(0, "Bad status from weof %d. ERR=%s\n", stat, strerror_dev(dev));
return;
} else {
- Pmsg1(0, "Wrote EOF to %s\n", dev_name(dev));
+ Pmsg3(0, "Wrote %d EOF%s to %s\n", num, num==1?"":"s", dev_name(dev));
}
}
*/
static void bsfcmd()
{
+ int num = 1;
+ if (argc > 1) {
+ num = atoi(argk[1]);
+ }
+ if (num <= 0) {
+ num = 1;
+ }
- if (!bsf_dev(dev, 1)) {
+ if (!bsf_dev(dev, num)) {
Pmsg1(0, _("Bad status from bsf. ERR=%s\n"), strerror_dev(dev));
} else {
- Pmsg0(0, _("Backspaced one file.\n"));
+ Pmsg2(0, _("Backspaced %d file%s.\n"), num, num==1?"":"s");
}
}
*/
static void bsrcmd()
{
- if (!bsr_dev(dev, 1)) {
+ int num = 1;
+ if (argc > 1) {
+ num = atoi(argk[1]);
+ }
+ if (num <= 0) {
+ num = 1;
+ }
+ if (!bsr_dev(dev, num)) {
Pmsg1(0, _("Bad status from bsr. ERR=%s\n"), strerror_dev(dev));
} else {
- Pmsg0(0, _("Backspaced one record.\n"));
+ Pmsg2(0, _("Backspaced %d record%s.\n"), num, num==1?"":"s");
}
}
"I'm going to write one record in file 0,\n"
" two records in file 1,\n"
" and three records in file 2\n\n"));
+ argc = 1; /* no repeat count for weof, fsf, ... */
rewindcmd();
wrcmd();
weofcmd(); /* end file 0 */
Pmsg0(-1, _("\n\n=== Forward space files test ===\n\n"
"This test is essential to Bacula.\n\n"
"I'm going to write five files then test forward spacing\n\n"));
+ argc = 1; /* no repeat count for weof, fsf, ... */
rewindcmd();
wrcmd();
weofcmd(); /* end file 0 */
/* Forward space a file */
static void fsfcmd()
{
- if (!fsf_dev(dev, 1)) {
+ int num = 1;
+ if (argc > 1) {
+ num = atoi(argk[1]);
+ }
+ if (num <= 0) {
+ num = 1;
+ }
+ if (!fsf_dev(dev, num)) {
Pmsg1(0, "Bad status from fsf. ERR=%s\n", strerror_dev(dev));
return;
}
- Pmsg0(0, "Forward spaced one file.\n");
+ Pmsg2(0, "Forward spaced %d file%s.\n", num, num==1?"":"s");
}
/* Forward space a record */
static void fsrcmd()
{
- if (!fsr_dev(dev, 1)) {
+ int num = 1;
+ if (argc > 1) {
+ num = atoi(argk[1]);
+ }
+ if (num <= 0) {
+ num = 1;
+ }
+ if (!fsr_dev(dev, num)) {
Pmsg1(0, "Bad status from fsr. ERR=%s\n", strerror_dev(dev));
return;
}
- Pmsg0(0, "Forward spaced one record.\n");
+ Pmsg2(0, "Forward spaced %d record%s.\n", num, num==1?"":"s");
}
DEV_RECORD rec;
DEV_BLOCK *block;
char ec1[50];
+ int fd;
+ uint32_t i;
+ uint32_t min_block_size;
ok = TRUE;
stop = 0;
vol_num = 0;
+ last_file = 0;
+ last_block_num = 0;
+ BlockNumber = 0;
Pmsg0(-1, "\n\
This command simulates Bacula writing to a tape.\n\
Dmsg1(20, "Begin append device=%s\n", dev_name(dev));
+
+ /* Use fixed block size to simplify read back */
+ min_block_size = dev->min_block_size;
+ dev->min_block_size = dev->max_block_size;
block = new_block(dev);
/*
#define REC_SIZE 32768
rec.data_len = REC_SIZE;
+ /*
+ * Put some random data in the record
+ */
+ fd = open("/dev/urandom", O_RDONLY);
+ if (fd >= 0) {
+ read(fd, rec.data, rec.data_len);
+ close(fd);
+ } else {
+ uint32_t *p = (uint32_t *)rec.data;
+ srandom(time(NULL));
+ for (i=0; i<rec.data_len/sizeof(uint32_t); i++) {
+ p[i] = random();
+ }
+ }
+
/*
* Generate data as if from File daemon, write to device
*/
jcr->VolFirstIndex = 0;
time(&jcr->run_time); /* start counting time for rates */
- Pmsg0(-1, "Begin writing Bacula records to first tape ...\n");
- Pmsg1(-1, "Block num = %d\n", dev->block_num);
+ if (simple) {
+ Pmsg0(-1, "Begin writing Bacula records to tape ...\n");
+ } else {
+ Pmsg0(-1, "Begin writing Bacula records to first tape ...\n");
+ }
for (file_index = 0; ok && !job_canceled(jcr); ) {
rec.VolSessionId = jcr->VolSessionId;
rec.VolSessionTime = jcr->VolSessionTime;
rec.FileIndex = ++file_index;
rec.Stream = STREAM_FILE_DATA;
- /*
- * Fill the buffer with the file_index negated. Negation ensures that
- * more bits are turned on.
- */
- uint64_t *lp = (uint64_t *)rec.data;
- for (uint32_t i=0; i < (rec.data_len-sizeof(uint64_t))/sizeof(uint64_t); i++) {
- *lp++ = ~file_index;
+ /* Mix up the data just a bit */
+ uint32_t *lp = (uint32_t *)rec.data;
+ lp[0] += lp[13];
+ for (i=1; i < (rec.data_len-sizeof(uint32_t))/sizeof(uint32_t)-1; i++) {
+ lp[i] += lp[i-1];
}
- Dmsg4(250, "before writ_rec FI=%d SessId=%d Strm=%s len=%d\n",
+ Dmsg4(250, "before write_rec FI=%d SessId=%d Strm=%s len=%d\n",
rec.FileIndex, rec.VolSessionId, stream_to_ascii(rec.Stream, rec.FileIndex),
rec.data_len);
now = time(NULL);
now -= jcr->run_time;
if (now <= 0) {
- now = 1;
+ now = 1; /* prevent divide error */
}
kbs = (double)dev->VolCatInfo.VolCatBytes / (1000.0 * (double)now);
- Pmsg4(-1, "Wrote block=%u, blk_num=%d VolBytes=%s rate=%.1f KB/s\n", block->BlockNumber,
- dev->block_num,
+ Pmsg4(-1, "Wrote blk_block=%u, dev_blk_num=%u VolBytes=%s rate=%.1f KB/s\n",
+ block->BlockNumber, dev->block_num,
edit_uint64_with_commas(dev->VolCatInfo.VolCatBytes, ec1), (float)kbs);
}
/* Every 15000 blocks (approx 1GB) write an EOF.
Pmsg0(-1, _("Set ok=FALSE after write_block_to_device.\n"));
ok = FALSE;
}
- Pmsg0(-1, "Wrote End Of Session label.\n");
+ Pmsg0(-1, _("Wrote End Of Session label.\n"));
+ }
+
+ sprintf(buf, "%s/btape.state", working_directory);
+ fd = open(buf, O_CREAT|O_TRUNC|O_WRONLY, 0640);
+ if (fd >= 0) {
+ write(fd, &btape_state_level, sizeof(btape_state_level));
+ write(fd, &last_block_num, sizeof(last_block_num));
+ write(fd, &last_file, sizeof(last_file));
+ write(fd, last_block->buf, last_block->buf_len);
+ close(fd);
+ Pmsg0(-1, "Wrote state file.\n");
+ } else {
+ Pmsg2(-1, _("Could not create state file: %s ERR=%s\n"), buf,
+ strerror(errno));
}
/* Release the device */
ok = FALSE;
}
- free_block(block);
- free_memory(rec.data);
-
- dump_block(last_block, _("Last block written to tape.\n"));
+ if (verbose) {
+ Pmsg0(-1, "\n");
+ dump_block(last_block, _("Last block written to tape.\n"));
+ }
Pmsg0(-1, _("\n\nDone filling tape. Now beginning re-read of tape ...\n"));
- unfillcmd();
+ if (simple) {
+ do_unfill();
+ } else {
+ /* Multiple Volume tape */
+ dumped = 0;
+ VolBytes = 0;
+ LastBlock = 0;
+ block = new_block(dev);
+
+ dev->capabilities |= CAP_ANONVOLS; /* allow reading any volume */
+ dev->capabilities &= ~CAP_LABEL; /* don't label anything here */
+
+ end_of_tape = 0;
+
+
+ time(&jcr->run_time); /* start counting time for rates */
+ stop = 0;
+ file_index = 0;
+ /* Close device so user can use autochanger if desired */
+ if (dev_cap(dev, CAP_OFFLINEUNMOUNT)) {
+ offline_dev(dev);
+ }
+ force_close_dev(dev);
+ get_cmd(_("Mount first tape. Press enter when ready: "));
+
+ free_vol_list(jcr);
+ set_volume_name("TestVolume1", 1);
+ jcr->bsr = NULL;
+ create_vol_list(jcr);
+ close_dev(dev);
+ dev->state &= ~ST_READ;
+ if (!acquire_device_for_read(jcr, dev, block)) {
+ Pmsg1(-1, "%s", dev->errmsg);
+ goto bail_out;
+ }
+ /* Read all records and then second tape */
+ read_records(jcr, dev, record_cb, my_mount_next_read_volume);
+ }
+bail_out:
+ dev->min_block_size = min_block_size;
+ free_block(block);
+ free_memory(rec.data);
}
/*
* verify that it is correct.
*/
static void unfillcmd()
+{
+ int fd;
+
+ if (!last_block) {
+ last_block = new_block(dev);
+ }
+ sprintf(buf, "%s/btape.state", working_directory);
+ fd = open(buf, O_RDONLY);
+ if (fd >= 0) {
+ uint32_t state_level;
+ read(fd, &state_level, sizeof(state_level));
+ read(fd, &last_block_num, sizeof(last_block_num));
+ read(fd, &last_file, sizeof(last_file));
+ read(fd, last_block->buf, last_block->buf_len);
+ close(fd);
+ if (state_level != btape_state_level) {
+ Pmsg0(-1, "\nThe state file level has changed. You must redo\n"
+ "the fill command.\n");
+ return;
+ }
+ } else {
+ Pmsg2(-1, "\nCould not find the state file: %s ERR=%s\n"
+ "You must redo the fill command.\n", buf, strerror(errno));
+ return;
+ }
+
+ do_unfill();
+}
+
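The btape_state_level guard above protects against a stale state file: the level is written first and checked on read, so a fill done with an older format forces a redo rather than a bogus comparison. A minimal stdio sketch of the same idea (save_state/load_state and the path are hypothetical names for this illustration, not btape functions):

```c
#include <stdint.h>
#include <stdio.h>

static const uint32_t STATE_LEVEL = 1;  /* bump when the layout changes */

/* Write the level first, then the payload fields. */
static int save_state(const char *path, uint32_t file_num, uint32_t block_num)
{
    FILE *f = fopen(path, "wb");
    if (!f) return 0;
    fwrite(&STATE_LEVEL, sizeof(STATE_LEVEL), 1, f);
    fwrite(&file_num, sizeof(file_num), 1, f);
    fwrite(&block_num, sizeof(block_num), 1, f);
    fclose(f);
    return 1;
}

/* Refuse the file outright if the stored level doesn't match ours. */
static int load_state(const char *path, uint32_t *file_num, uint32_t *block_num)
{
    uint32_t level;
    FILE *f = fopen(path, "rb");
    if (!f) return 0;
    if (fread(&level, sizeof(level), 1, f) != 1 || level != STATE_LEVEL) {
        fclose(f);
        return 0;                     /* stale format: caller must redo fill */
    }
    fread(file_num, sizeof(*file_num), 1, f);
    fread(block_num, sizeof(*block_num), 1, f);
    fclose(f);
    return 1;
}
```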
+static void do_unfill()
{
DEV_BLOCK *block;
- uint32_t i;
dumped = 0;
VolBytes = 0;
end_of_tape = 0;
- if (!simple) {
- /* Close device so user can use autochanger if desired */
- if (dev_cap(dev, CAP_OFFLINEUNMOUNT)) {
- offline_dev(dev);
- }
- force_close_dev(dev);
- get_cmd(_("Mount first tape. Press enter when ready: "));
-
- free_vol_list(jcr);
- set_volume_name("TestVolume1", 1);
- jcr->bsr = NULL;
- create_vol_list(jcr);
- close_dev(dev);
- dev->state &= ~ST_READ;
- if (!acquire_device_for_read(jcr, dev, block)) {
- Pmsg1(-1, "%s", dev->errmsg);
- return;
- }
- }
time(&jcr->run_time); /* start counting time for rates */
stop = 0;
file_index = 0;
- if (!simple) {
- /* Read all records and then second tape */
- read_records(jcr, dev, record_cb, my_mount_next_read_volume);
- } else {
- /*
- * Simplified test, we simply fsf to file, then read the
- * last block and make sure it is the same as the saved block.
- */
- Pmsg0(000, "Rewinding tape ...\n");
- if (!rewind_dev(dev)) {
- Pmsg1(-1, _("Error rewinding: ERR=%s\n"), strerror_dev(dev));
- goto bail_out;
- }
- if (last_file > 0) {
- Pmsg1(000, "Forward spacing to last file=%u\n", last_file);
- if (!fsf_dev(dev, last_file)) {
- Pmsg1(-1, _("Error in FSF: ERR=%s\n"), strerror_dev(dev));
- goto bail_out;
- }
- }
- Pmsg1(-1, _("Forward space to file %u complete. Reading blocks ...\n"),
- last_file);
- Pmsg1(-1, _("Now reading to block %u.\n"), last_block_num);
- for (i=0; i <= last_block_num; i++) {
- if (!read_block_from_device(jcr, dev, block, NO_BLOCK_NUMBER_CHECK)) {
- Pmsg1(-1, _("Error reading blocks: ERR=%s\n"), strerror_dev(dev));
- Pmsg2(-1, _("Wanted block %u error at block %u\n"), last_block_num, i);
- goto bail_out;
- }
- if (i > 0 && i % 1000 == 0) {
- Pmsg1(-1, _("At block %u\n"), i);
+
+ /*
+ * Note, re-reading last block may have caused us to
+ * lose track of where we are (block number unknown).
+ */
+ rewind_dev(dev); /* get to a known place on tape */
+ Pmsg4(-1, _("Reposition from %u:%u to %u:%u\n"), dev->file, dev->block_num,
+ last_file, last_block_num);
+ if (!reposition_dev(dev, last_file, last_block_num)) {
+ Pmsg1(-1, "Reposition error. ERR=%s\n", strerror_dev(dev));
+ }
+ Pmsg1(-1, _("Reading block %u.\n"), last_block_num);
+ if (!read_block_from_device(jcr, dev, block, NO_BLOCK_NUMBER_CHECK)) {
+ Pmsg1(-1, _("Error reading block: ERR=%s\n"), strerror_dev(dev));
+ goto bail_out;
+ }
+ if (last_block) {
+ char *p, *q;
+ uint32_t CheckSum, block_len;
+ ser_declare;
+ p = last_block->buf;
+ q = block->buf;
+ unser_begin(q, BLKHDR2_LENGTH);
+ unser_uint32(CheckSum);
+ unser_uint32(block_len);
+ while (q < (block->buf+block_len)) {
+ if (*p == *q) {
+ p++;
+ q++;
+ continue;
}
+ Pmsg0(-1, "\n");
+ dump_block(last_block, _("Last block written"));
+ Pmsg0(-1, "\n");
+ dump_block(block, _("Block read back"));
+ Pmsg1(-1, "\n\nThe blocks differ at byte %u\n", (uint32_t)(p - last_block->buf));
+ Pmsg0(-1, "\n\n!!!! The last block written and the block\n"
+ "that was read back differ. The test FAILED !!!!\n"
+ "This must be corrected before you use Bacula\n"
+ "to write multi-tape Volumes !!!!\n");
+ goto bail_out;
}
- if (last_block) {
- char *p, *q;
- uint32_t CheckSum, block_len;
- ser_declare;
- p = last_block->buf;
- q = block->buf;
- unser_begin(q, BLKHDR1_LENGTH);
- unser_uint32(CheckSum);
- unser_uint32(block_len);
- while (q < (block->buf+block_len+BLKHDR2_LENGTH)) {
- if (*p++ == *q++) {
- continue;
- }
- Pmsg0(-1, "\n");
- dump_block(last_block, _("Last block written"));
- dump_block(block, _("Block read back"));
- Pmsg0(-1, "\n\n!!!! The last block written and the block\n"
- "that was read back differ. The test FAILED !!!!\n"
- "This must be corrected before you use Bacula\n"
- "to write multi-tape Volumes.!!!!\n");
- goto bail_out;
- }
- Pmsg0(-1, _("\nThe blocks are identical. Test succeeded.\n"));
- if (verbose) {
- dump_block(last_block, _("Last block written"));
- dump_block(block, _("Block read back"));
- }
+ Pmsg0(-1, _("\nThe blocks are identical. Test succeeded.\n\n"));
+ if (verbose) {
+ dump_block(last_block, _("Last block written"));
+ dump_block(block, _("Block read back"));
}
}
bail_out:
free_block(block);
-
- Pmsg0(000, _("Done with reread of fill data.\n"));
}
/*
if (stop > 1 && !dumped) { /* on second tape */
dumped = 1;
- dump_block(block, "First block on second tape");
+ if (verbose) {
+ dump_block(block, "First block on second tape");
+ }
Pmsg4(-1, "Blk: FileIndex=%d: block=%u size=%d vol=%s\n",
rec->FileIndex, block->BlockNumber, block->block_len, dev->VolHdr.VolName);
Pmsg6(-1, " Rec: VId=%d VT=%d FI=%s Strm=%s len=%d state=%x\n",
this_file = dev->file;
this_block_num = dev->block_num;
if (!write_block_to_dev(jcr, dev, block)) {
- Pmsg3(000, "Block not written: FileIndex=%u Block=%u Size=%u\n",
- (unsigned)file_index, block->BlockNumber, block->block_len);
- Pmsg2(000, "last_block_num=%u this_block_num=%d\n", last_block_num,
- this_block_num);
- if (dump) {
- dump_block(block, "Block not written");
+ Pmsg3(000, "Last block at: %u:%u this_dev_block_num=%d\n",
+ last_file, last_block_num, this_block_num);
+ if (verbose) {
+ Pmsg3(000, "Block not written: FileIndex=%u blk_block=%u Size=%u\n",
+ (unsigned)file_index, block->BlockNumber, block->block_len);
+ if (dump) {
+ dump_block(block, "Block not written");
+ }
}
if (stop == 0) {
eot_block = block->BlockNumber;
now = time(NULL);
now -= jcr->run_time;
if (now <= 0) {
- now = 1;
+ now = 1; /* don't divide by zero */
}
kbs = (double)dev->VolCatInfo.VolCatBytes / (1000 * now);
vol_size = dev->VolCatInfo.VolCatBytes;
uint32_t block_num = 0;
uint32_t *p;
int my_errno;
+ uint32_t i;
block = new_block(dev);
fd = open("/dev/urandom", O_RDONLY);
- if (fd) {
+ if (fd >= 0) {
read(fd, block->buf, block->buf_len);
+ close(fd);
} else {
- Pmsg0(0, "Cannot open /dev/urandom.\n");
- free_block(block);
- return;
+ uint32_t *p = (uint32_t *)block->buf;
+ srandom(time(NULL));
+ for (i=0; i<block->buf_len/sizeof(uint32_t); i++) {
+ p[i] = random();
+ }
}
p = (uint32_t *)block->buf;
Pmsg1(0, "Begin writing raw blocks of %u bytes.\n", block->buf_len);
printf("+");
fflush(stdout);
}
+ p[0] += p[13];
+ for (i=1; i<(block->buf_len-sizeof(uint32_t))/sizeof(uint32_t)-1; i++) {
+ p[i] += p[i-1];
+ }
continue;
}
break;
uint32_t block_num = 0;
uint32_t *p;
int my_errno;
- int fd;
+ int fd;
+ uint32_t i;
block = new_block(dev);
fd = open("/dev/urandom", O_RDONLY);
- if (fd) {
+ if (fd >= 0) {
read(fd, block->buf, block->buf_len);
+ close(fd);
} else {
- Pmsg0(0, "Cannot open /dev/urandom.\n");
- free_block(block);
- return;
+ uint32_t *p = (uint32_t *)block->buf;
+ srandom(time(NULL));
+ for (i=0; i<block->buf_len/sizeof(uint32_t); i++) {
+ p[i] = random();
+ }
}
p = (uint32_t *)block->buf;
Pmsg1(0, "Begin writing Bacula blocks of %u bytes.\n", block->buf_len);
printf("+");
fflush(stdout);
}
+ p[0] += p[13];
+ for (i=1; i<(block->buf_len/sizeof(uint32_t)-1); i++) {
+ p[i] += p[i-1];
+ }
}
my_errno = errno;
printf("\n");
do_tape_cmds()
{
unsigned int i;
- int found;
+ bool found;
while (get_cmd("*")) {
sm_check(__FILE__, __LINE__, False);
- found = 0;
+ found = false;
+ parse_args(cmd, &args, &argc, argk, argv, MAX_CMD_ARGS);
for (i=0; i<comsize; i++) /* search for command */
- if (fstrsch(cmd, commands[i].key)) {
+ if (argc > 0 && fstrsch(argk[0], commands[i].key)) {
(*commands[i].func)(); /* go execute command */
- found = 1;
+ found = true;
break;
}
if (!found)
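parse_args is Bacula's own helper; as a rough, self-contained illustration of the split it performs here (a simplified sketch that only handles whitespace and omits the keyword=value handling of the real function):

```c
#include <ctype.h>

/* Simplified, hypothetical parse_args-style tokenizer: split cmd in
 * place on whitespace, store up to max_args pointers in argk[], and
 * return the resulting argc. */
static int split_args(char *cmd, char *argk[], int max_args)
{
    int argc = 0;
    char *p = cmd;
    while (*p && argc < max_args) {
        while (*p && isspace((unsigned char)*p)) {
            p++;                      /* skip leading blanks */
        }
        if (!*p) {
            break;
        }
        argk[argc++] = p;             /* start of token */
        while (*p && !isspace((unsigned char)*p)) {
            p++;
        }
        if (*p) {
            *p++ = '\0';              /* terminate token */
        }
    }
    return argc;
}
```

With input "weof 3", argk[0] is the command keyword looked up via fstrsch and argk[1] supplies the repeat count used by weofcmd, bsfcmd, and friends.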
while (p >= jcr->dev_name && *p != '/')
p--;
if (*p == '/') {
- strcpy(jcr->VolumeName, p+1);
+ pm_strcpy(&jcr->VolumeName, p+1);
*p = 0;
}
}
jcr->pool_type = get_pool_memory(PM_FNAME);
strcpy(jcr->pool_type, "Backup");
jcr->job_name = get_pool_memory(PM_FNAME);
- strcpy(jcr->job_name, "Dummy.Job.Name");
+ pm_strcpy(&jcr->job_name, "Dummy.Job.Name");
jcr->client_name = get_pool_memory(PM_FNAME);
- strcpy(jcr->client_name, "Dummy.Client.Name");
- strcpy(jcr->Job, name);
+ pm_strcpy(&jcr->client_name, "Dummy.Client.Name");
+ bstrncpy(jcr->Job, name, sizeof(jcr->Job));
jcr->fileset_name = get_pool_memory(PM_FNAME);
- strcpy(jcr->fileset_name, "Dummy.fileset.name");
+ pm_strcpy(&jcr->fileset_name, "Dummy.fileset.name");
jcr->fileset_md5 = get_pool_memory(PM_FNAME);
- strcpy(jcr->fileset_md5, "Dummy.fileset.md5");
+ pm_strcpy(&jcr->fileset_md5, "Dummy.fileset.md5");
jcr->JobId = 1;
jcr->JobType = JT_BACKUP;
jcr->JobLevel = L_FULL;
#undef VERSION
#define VERSION "1.33"
#define VSTRING "1"
-#define BDATE "22 Nov 2003"
-#define LSMDATE "22Nov03"
+#define BDATE "24 Nov 2003"
+#define LSMDATE "24Nov03"
/* Debug flags */
#undef DEBUG