2002-mm-dd Release 1.23
+ From kes18Jul02
+- The following two changes were prompted by questions/suggestions
+ from A Morgan.
+- If you have "AlwaysOpen = no" in your SD Device
+ resource, Bacula will free() the drive when it
+ is finished with the Job.
+- If you have "Offline On Unmount = yes" in your
+ SD Device resource, Bacula will offline (or eject)
+ the tape prior to freeing it.
+- Added Maximum Open Wait to allow open() to wait if drive is busy.
+- Added RunBeforeJob and RunAfterJob records to the Job resource.
+ This permits running an external program with the following editing
+ codes:
+ %% = %
+ %c = Client's name
+ %d = Director's name
+ %i = JobId
+ %e = Job Exit
+ %j = Job
+ %l = Job Level
+ %n = Job name
+ %t = Job type
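+  As an illustration of how such %-code editing can work (a hypothetical
+  sketch, not Bacula's actual edit routine; edit_codes() and its
+  parameters are invented names, and only a few codes are shown):
+
```c
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch of %-code expansion for RunBeforeJob/RunAfterJob.
 * Only %%, %c (Client's name) and %i (JobId) are handled; the full
 * list of codes is given above. */
static void edit_codes(const char *cmd, const char *client, int jobid,
                       char *out, size_t outlen)
{
   size_t n = 0;

   for (const char *p = cmd; *p && n + 32 < outlen; p++) {
      if (*p != '%' || p[1] == 0) {      /* ordinary character */
         out[n++] = *p;
         continue;
      }
      switch (*++p) {
      case '%':                          /* %% = % */
         out[n++] = '%';
         break;
      case 'c':                          /* %c = Client's name */
         n += snprintf(out + n, outlen - n, "%s", client);
         break;
      case 'i':                          /* %i = JobId */
         n += snprintf(out + n, outlen - n, "%d", jobid);
         break;
      default:                           /* unknown code passes through */
         out[n++] = '%';
         out[n++] = *p;
         break;
      }
   }
   out[n] = 0;
}
```
+  A call like edit_codes("notify --client=%c --job=%i", "rufus-fd", 42,
+  buf, sizeof(buf)) would yield "notify --client=rufus-fd --job=42".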
+
+ From kes17Jul02
+- Added autochanger support to devices.c
+- Allow user to change Slot in the Volume record.
+- Implemented code in lib to run an external program
+ (tape changer)
+- Implemented a changer script for mtx.
+- Currently the changer commands used are:
+ loaded -- returns number of slot loaded or 0
+ load -- loads a specified slot
+    unload -- unloads the device (returns cassette to slot)
+- Other changer commands defined but not yet used:
+ list -- returns list of slots containing a cassette
+ slots -- returns total number of slots
+- Implemented ChangerCommand, specified in the SD Device
+  resource, which permits editing of:
+ %% = %
+ %a = archive device name
+ %c = changer device name
+ %f = Client's name
+ %j = Job name
+ %o = command
+ %s = Slot base 0
+ %S = Slot base 1
+ %v = Volume name
+- Implemented MaximumChangerWait (default 120 seconds). It is
+  specified in the SD Device resource.
+
+ From kes15Jul02
+- Moved techlogs from main directory to be subdirectory of doc
+- Added code for strerror_r, and detection of gethostbyname_r().
+- The protocol between the Director and the SD has changed.
+- Major rework of SD tape mounting to prepare for Changer commands.
+- Separated Update Media record and Create JobMedia record. These
+ are done from the SD by calling the Director. Need separate Create
+ JobMedia so that when a Job spans a volume, all other Jobs writing
+ the same volume will also have JobMedia records created.
+- Added message to user indicating selection aborted if he enters '.'
+  in response to a Console selection request.
+- Create a jobstatus_to_ascii() routine for use in status commands.
+ This makes a single routine from three separate pieces of code.
+ Updated the code to properly handle more (all) termination statuses.
+- Tried to fix the gnome-console to handle history a bit better. There
+ are still some problems with focus not being properly set to the edit
+ box after history replacement.
+- Removed the shutdown() from bnet_close() hoping to fix Console termination
  errors that are occasionally seen -- no luck.
+- Moved add_str() to lib/util and renamed it add_str_to_pool_mem() so that
+ it can be used to edit Job termination codes and Changer command codes.
+- Reworked how the SD mounts tapes (in device.c) so that control passes through
+ only a single routine. The logic is much simpler than previously, and now
  adding AutoChanger code is straightforward.
+- Made SD tape mounting much more fault tolerant -- more cases retry instead
+ of terminating the Job.
+- Wrote code to edit_device_codes() for Changer commands. Not yet fully
+ implemented.
+- Added a ChangerDevice directive to the Device resource. Still need to add
+ ChangerCommand.
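+  The jobstatus_to_ascii() consolidation mentioned above can be pictured
+  as a single lookup routine; the codes shown here are an illustrative
+  subset, not Bacula's actual table of termination statuses:
+
```c
/* Illustrative sketch of a jobstatus_to_ascii()-style routine:
 * map a one-character job status code to a printable string.
 * The set of codes shown is a hypothetical subset. */
static const char *jobstatus_to_ascii(int status)
{
   switch (status) {
   case 'C': return "Created, not yet running";
   case 'R': return "Running";
   case 'T': return "Terminated normally";
   case 'E': return "Terminated in error";
   case 'f': return "Fatal error";
   case 'A': return "Canceled by user";
   default:  return "Unknown termination status";
   }
}
```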
+
From kes07Jul02
- This documents what I did while on vacation.
- A fair amount of documentation.
Archive Device (defines device name)
-
2000-03-10 Release 0.3 Kern Sibbald
- Implemented new base64 encoding for attributes.
This eliminates some of the error messages in the
-This is the first release of Bacula, and as such it
-is perhaps a bit rough around the edges. As you will
-note, I don't follow the standard GNU release numbering
-conventions, but rather one that I started in 1970.
-My internal releases were 0.nn, the first release to
-another user was 1.0, each modified source code release
-then gets a new minor release (1.1, ...) as well as
-a date. Each major change in the software -- e.g. new
-tape format will have the major release number incremented.
-
-Your best bet for getting Bacula up and running
-is to read the manual, which can be found in
+As you will note, I don't follow the standard GNU release
+numbering conventions, but rather one that I started in
+1970. My internal releases were 0.nn, the first release to
+another user was 1.0, each modified source code release then
+gets a new minor release (1.1, ...) as well as a date. Each
+major change in the software -- e.g. a new tape format --
+will have the major release number incremented.
+
+Your best bet for getting Bacula up and running is to read
+the manual, which can be found in
<bacula-main-directory>/doc/html-manual, or in
<bacula-main-directory>/doc/bacula.pdf.
in time.c or dev.c. There may also be problems in
lib/signal.c as I currently pull in all Linux signals,
some of which may not be available on your system.
-
# ------------------------------------------
# Where to send dump email
# ------------------------------------------
-dump_email=root
+dump_email=root@localhost
AC_ARG_WITH(dump-email,
[ --with-dump-email=Dump email address],
[
# ------------------------------------------
# Where to send job email
# ------------------------------------------
-job_email=root
+job_email=root@localhost
AC_ARG_WITH(job-email,
[ --with-job-email=Job output email address],
[
# ------------------------------------------
# Where to send dump email
# ------------------------------------------
-dump_email=root
+dump_email=root@localhost
# Check whether --with-dump-email or --without-dump-email was given.
if test "${with_dump_email+set}" = set; then
withval="$with_dump_email"
# ------------------------------------------
# Where to send job email
# ------------------------------------------
-job_email=root
+job_email=root@localhost
# Check whether --with-job-email or --without-job-email was given.
if test "${with_job_email+set}" = set; then
withval="$with_job_email"
--- /dev/null
+ Kern's ToDo List
+ 18 July 2002
+
+To do:
+- Document passwords.
+- Document running multiple Jobs
+- Document that two Verifys at same time on same client do not work.
+- Document how to recycle a tape in 7 days even if the backup takes a long time.
+- Document default config file locations.
+- Document better includes (does it cross file systems?).
+- Document specifically how to add new File daemon to config files.
+- Document forcing a new tape to be used.
+
+- Pass "Catalog Files = no" to storage daemon to eliminate
+ network traffic.
+- Implement alter_sqlite_tables
+- Fix scheduler -- see "Hourly cycle". It doesn't do both each
  hour; rather, it alternates between 0:05 and 0:35.
+- Create Counter DB records.
+- Make Pool resource handle Counter resources.
+- Remove NextId for SQLite. Optimize.
+- Termination status in FD for Verify = C -- incorrect.
+- Fix strerror() to use strerror_r()
+- Fix gethostbyname() to use gethostbyname_r()
+- Cleanup path/filename separation in sql_get.c and sql_create.c
+- Implement ./configure --with-client-only
+- Strip trailing / from Include
+- Move all SQL statements into a single location.
+- Cleanup db_update_media and db_update_pool
+- Add UA rc and history files.
+- Put termcap (used by console) in ./configure and
+  allow --with-termcap-dir.
+- Remove JobMediaId; it is not used.
+- Enhance time and size scanning routines.
+- Fix Autoprune for Volumes to respect need for full save.
+- DateWritten may be wrong.
+- Fix Win32 config file definition name on /install
+- When we are at EOM, we must ask each job to write JobMedia
+ record (update_volume_info).
+- No READLINE_SRC if found in alternate directory.
+- Add Client FS/OS id (Linux, Win95/98, ...).
+- Put Windows files in Windows stream?
+
+====== 31 May 2002 ========
+Now that Bacula 1.20 is released, virtually all the
+basic features are implemented (some are still quite
+primitive though). Over the next month or two, I'm
+planning to focus on the following items:
+
+Minor details:
+- Fix any bugs I find or you report.
+- Finish the implementation of automatic pruning
+ (add pruning of Restore and Verify jobs).
+- Make sure pruning of Volumes won't prune the
+ only backup of a FileSet
+
+Major Project:
+- Improve the Restore capabilities of Bacula
+ * Restore to most recent system state (i.e.
+ figure out what tapes need to be mounted and
+ in what order).
+ * Restore to a particular time (perhaps several
+ variations -- e.g. before date, after date).
+ * Interactive Restore where you get to select
+ what files are to be restored (much like the Unix
+ "restore" program permits). Now that we have a
+ catalog of all files saved, it would be nice to
+ be able to use it.
+ * Restore options (overwrite, overwrite if older,
+ overwrite if newer, never overwrite, ...)
+ * Improve the standalone programs (bls and bextract)
+ to have pattern matching capabilities (e.g. restore
+ by FileSet, Job, JobType, JobLevel, ...).
+ * Ideally after each Job, Bacula could write out a
  set of commands to a file that, if later fed to
  bextract, would restore your system to the current
+ state (at least for the saved FileSet). This would
+ provide a simple disaster recovery that could be
+ initiated from a "floppy" and one simple ASCII control
+ file. I'm not exactly sure how to do this, but it
+ shouldn't be too hard and I'll
+ be trying to go in this direction.
+
+Smaller Projects:
+- Implement tape verification to ensure that the data
+ written for a particular Job can really be read.
+- Compare tape File attributes to Catalog.
+ (File attributes are size, dates, MD5, but not
+ data).
+- Compare tape to Client files (attributes, or
+ attributes and data)
+
+Playing around:
+- With the current Bacula 1.21 (not yet in the CVS) I
+ expect there is about 95% chance that running multiple
+ simultaneous Jobs will actually work without stepping
+ on each other. I'm planning to try this sometime soon.
+===========
+
+Projects:
+- Add Base job.
+- Rework Storage daemon with new rwl_lock routines.
+- Implement Label templates
+- Pass JCR to database routines permitting better error printing.
+- Improve Restore
+- Verify tape data
+- Verify against Full.
+
+Dump:
+ mysqldump -f --opt bacula >bacula
+
+
+To be done:
+- Probably add End of Data tape records (this would make
+ the tape format incompatible with the previous version).
+- I'll most likely enhance the current tape format
+ in the way that I previously described, which will make
+ some of the labels incompatible, but the change will
+ not affect the current restore code since it does not
+ look at the details of the labels.
+- I may add a few more waiting conditions in the Storage
+  daemon where it currently aborts a Job immediately if
+  the necessary resources are not available (e.g.
+  tape is being written and a read request arrives).
+- Write an applet for Linux.
+
+- Remove PoolId from Job table, it exists in Media.
+- Allow commands to detach or run in background.
+- Write better dump of Messages resource.
+- Fix status delay on storage daemon during rewind.
+- Add VerNo to each Session label record.
+- Add Job to Session records.
+- Add VOLUME_CAT_INFO to the EOS tape record (as
+ well as to the EOD record).
+- Add SD message variables to control operator wait time
+ - Maximum Operator Wait
+ - Minimum Message Interval
+ - Maximum Message Interval
+- Add EOM handling variables
+ - Write EOD records
+ - Require EOD records
+- Send Operator message when cannot read tape label.
+- Think about how to handle I/O error on MTEOM.
+- If Storage daemon aborts a job, ensure that this
+ is printed in the error message.
+- Verify level=Volume (scan only), level=Data (compare of data to file).
+ Verify level=Catalog, level=InitCatalog
+- Scan tape contents into database.
+- Dump of Catalog
+- Cold start full restore (restore catalog then
+ user selects what to restore). Write summary file containing only
+ Job, Media, and Catalog information. Store on another machine.
+- Dump/Restore database
+- File system type
+- Events file
+- Implement first cut of Catalog Retention period (remove old
+ entries from database).
+- Add SessionTime/Id filters to bextract.
+- Write bscan
+- Ensure that Start/End File/Block are correct.
+- Add keyword search to show command in Console.
+- If MySQL database is not running, job terminates with
  weird type and weird error code.
+- Write a regression script
+- Report bad status from smtp or mail program.
+- Fix Win2000 error with no messages during startup.
+- Add estimate to Console
+- Events : tape has more than xxx bytes.
+- In Storage daemon, status should include job cancelled.
+- Write general list maintenance subroutines.
+- Implement immortal format with EDOs.
+- Restrict characters permitted in a Resource name.
+- Restore file xx or files xx, yy to their most recent values.
+- Provide definitive identification of type in backup.
+- Complete code in Bacula Resources -- this will permit
+ reading a new config file at any time.
+- Document new Console
+- Handle ctl-c in Console
+- Test restore of Windows backup
+- Implement LabelTemplate (at least first cut).
+- Implement script driven addition of File daemon to
+ config files.
+
+- Bug: anonymous Volumes requires mount in some cases.
+- see setgroup and user for Bacula p4-5 of stunnel.c
+- Implement new serialize subroutines
+ send(socket, "string", &Vol, "uint32", &i, NULL)
+- Add save type to Session label.
+- Correct date on Session label.
+- On I/O error, write EOF, then try to write again.
+- Audit all UA commands to ensure that we always prompt where
+ possible.
+- If ./btape is called without /dev, assume argument is
+ a Storage resource name.
+- Put memory utilization in Status output of each daemon
+ if full status requested or if some level of debug on.
+- Make database type selectable by .conf files i.e. at runtime
+- gethostbyname failure in bnet_connect() continues
+ generating errors -- should stop.
+- Don't create a volume that is already written. I.e. create only once.
+- If error at end of tape, implement some way to kill waiting processes.
+- Get correct block/file information in Catalog, pay attention
+ to change of media.
+- Add HOST to Volume label.
+- Set flag for uname -a. Add to Volume label.
+- Implement throttled work queue.
+- Write bscan program that will synchronize the DB Media record with
+ the contents of the Volume -- for use after a crash.
+- Check for EOT at ENOSPC or EIO or ENXIO (unix Pc)
+- Allow multiple Storage specifications (or multiple names on
+ a single Storage specification) in the Job record. Thus a job
+ can be backed up to a number of storage devices.
+- Implement full MediaLabel code.
+- Implement dump label to UA
+- Copy volume using single drive.
+- Copy volume with multiple drives (same or different block size).
+- Add block size (min, max) to Vol label.
+- Concept of VolumeSet during restore which is a list
+ of Volume names needed.
+- Restore files modified after date
+- Restore file modified before date
+- Emergency restore info:
+ - Backup Bacula
+ - Backup working directory
+ - Backup Catalog
+- Restore options (do not overwrite)
+- Restore -- do nothing but show what would happen
+- Authentication between SD and FD
+- SET LD_RUN_PATH=$HOME/mysql/lib/mysql
+- Send Volumes needed during restore to Console
+- Put Job statistics in End Session Label (files saved,
+ total bytes, start time, ...).
+- Put FileSet name in the SOS label.
+- Implement Restore FileSet=
+- Write a scanner for the UA (keyword, scan-routine, result, prompt).
+- Create a protocol.h and protocol.c where all protocol messages
+ are concentrated.
+- If SD cannot open a drive, make it periodically retry.
+- Put Bacula version somewhere in Job stream, probably Start Session
+ Labels.
+- Remove duplicate fields from jcr (e.g. jcr.level and
+ jcr.jr.Level, ...).
+- Timeout a job or terminate if link goes down, or reopen link and query.
+- Define how we handle times to avoid problem with Unix dates (2049 ?).
+- The daemons should know when one is already
+ running and refuse to run a second copy.
+- Fill all fields in Vol/Job Header -- ensure that everything
+ needed is written to tape. Think about restore to Catalog
+ from tape. Client record needs improving.
+- Find general solution for sscanf size problems (as well
  as sprintf). Do at run time?
+
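+  The typed serialize subroutine proposed above, i.e.
+  send(socket, "string", &Vol, "uint32", &i, NULL), could be sketched
+  as a varargs packer; this version only packs into a buffer rather
+  than a socket, and the names are hypothetical:
+
```c
#include <stdarg.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical sketch of the proposed typed serialize call.
 * The argument list alternates a type tag with a value and ends
 * with NULL; here we pack into a buffer instead of sending.
 * Returns the number of bytes packed. */
static size_t serialize(char *buf, size_t buflen, ...)
{
   va_list ap;
   size_t n = 0;
   const char *type;

   va_start(ap, buflen);
   while ((type = va_arg(ap, const char *)) != NULL) {
      if (strcmp(type, "string") == 0) {
         const char *s = va_arg(ap, const char *);
         size_t len = strlen(s) + 1;           /* include terminator */
         if (n + len > buflen)
            break;
         memcpy(buf + n, s, len);
         n += len;
      } else if (strcmp(type, "uint32") == 0) {
         uint32_t *v = va_arg(ap, uint32_t *);
         if (n + sizeof(uint32_t) > buflen)
            break;
         memcpy(buf + n, v, sizeof(uint32_t)); /* real code: htonl() first */
         n += sizeof(uint32_t);
      }
   }
   va_end(ap);
   return n;
}
```
+  The NULL sentinel terminates the argument scan, so new field types
+  can be added without changing existing callers.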
+- Concept of precious tapes (cannot be reused).
+- Allow FD to run from inetd ???
+- Preprocessing command per file.
+- Postprocessing command per file (when restoring).
+
+- Restore should get Device and Pool information from
+ job record rather than from config.
+- Make SD send attribute stream to DR but first
+  buffer it to a file, then send only when the
+  files are written to tape.
+- Autolabel should be specified by DR instead of SD.
+- Ability to recreate the catalog from a tape.
+- Find out how to get the system tape block limits, e.g.:
+ Apr 22 21:22:10 polymatou kernel: st1: Block limits 1 - 245760 bytes.
+ Apr 22 21:22:10 polymatou kernel: st0: Block limits 2 - 16777214 bytes.
+- Storage daemon
+ - Add media capacity
+ - AutoScan (check checksum of tape)
+ - Format command = "format /dev/nst0"
+ - MaxRewindTime
+ - MinRewindTime
+ - MaxBufferSize
+ - Seek resolution (usually corresponds to buffer size)
+ - EODErrorCode=ENOSPC or code
+ - Partial Read error code
+ - Partial write error code
+ - Nonformatted read error
+ - Nonformatted write error
+ - WriteProtected error
+ - IOTimeout
+ - OpenRetries
+ - OpenTimeout
+ - IgnoreCloseErrors=yes
+ - Tape=yes
+ - NoRewind=yes
+- Pool
+ - Maxwrites
+ - Recycle period
+- Job
+ - MaxWarnings
+ - MaxErrors (job?)
+=====
+- Eliminate duplicate File records to shrink database.
+- FD sends unsaved file list to Director at end of job.
+- Implement InsertUniqueDB.
+- Write a Storage daemon that uses pipes and
+ standard Unix programs to write to the tape.
+ See afbackup.
+- Need something that monitors the JCR queue and
  times out jobs by asking the daemons where they are.
+- Add daemon JCR JobId=0 to have a daemon context
+- Pool resource
+ - Auto label
+ - Auto media verify
+ - Client (list of clients to force client)
+ - Devices (list of devices to force device)
+ - enable/disable
+ - Groups
+ - Levels
+ - Type: Backup, ...
+ - Recycle from other pools: Yes, No
+ - Recycle to other pools: Yes, no
+ - FileSets
+ - MaxBytes?
+ - Optional MediaType to force media?
+ - Maintain Catalog
+ - Label Template
+ - Retention Period
+ ============
+ - Name
+ - NumVols
   - MaxVols
+ - CurrentVol
+
+=====
  if (connect(sockfd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
+ close(sockfd);
+ return(-6);
+ }
+
+ linger.l_onoff = 1;
+ linger.l_linger = 60;
+ i = setsockopt(sockfd, SOL_SOCKET, SO_LINGER, (char *) &linger,
+ sizeof (linger));
+
+ fl = fcntl(sockfd, F_GETFL);
+ fcntl(sockfd, F_SETFL, fl & (~ O_NONBLOCK) & (~ O_NDELAY));
+====
+- Add a maximum field width ("%nnns") to all sscanf %s fields
+  to prevent field overflow.
+- Restore:
+ What: jobid or file list
+ From: tape, file, ...
+ Where: original location, another path
+ How: Always replace, Replace if newer, Never replace
+ Report: files restored; files not restored; errors; warnings
+ summary.
+- Enhance Jmsg code to permit buffering and saving to disk.
+- Probably create a jcr with JobId=0 as a master
+ catchall if jcr not found or if operation involves
+ global operation.
+- device driver = "xxxx" for drives.
+- restart options:
+    paranoid: read label, fsf to eom, read append block, and go
+    super-paranoid: read label, read all files in between,
+      read append block, and go
+    verify: backspace, read append block, and go
+    permissive: same as verify but frees drive if tape is not valid.
+- Verify from Volume
+- Ensure that /dev/null works
+- File daemon should build list of files skipped, and then
+ at end of save retry and report any errors.
+- Need report class for messages. Perhaps
+ report resource where report=group of messages
+- Extract what=(session_id|file_list); where
+- Verify from Tape
+- enhance scan_attrib and rename scan_jobtype, and
+ fill in code for "since" option
+- dir_config: get rid of all printfs
+- To buffer messages, we need associated jobid and Director name.
+- Need to save contents of FileSet to tape?
+- Director needs a time after which the report status is sent
+ anyway -- or better yet, a retry time for the job.
+ Don't reschedule a job if previous incarnation is still running.
+- Figure out how to do a "full" restore from catalog
+- Figure out how to save the catalog (possibly a special FileSet).
+- Figure out how to restore the catalog.
+- Figure out how to put a Volume into the catalog (from the tape)
+- Figure out how to do a restore from a Volume
+- Some way to automatically backup everything is needed????
+- Need a structure for pending actions:
+ - buffered messages
+ - termination status (part of buffered msgs?)
+- Concept of grouping Storage devices and job can use
+ any of a number of devices
+- Drive management
+ Read, Write, Clean, Delete
+- Login to Bacula; Bacula users with different permissions:
+ owner, group, user
+- Tape recycle destination
+- Job Schedule Status
+ - Automatic
+ - Manual
+ - Running
+- File daemon should pass Director the operating system info
+ to be stored in the Client Record (or verified that it has
+ not changed).
+- Store info on each file system type (probably in the job header on tape).
+  This could be the output of df, or perhaps some sort of /etc/mtab record.
+
+Longer term to do:
+- Use media 1 time (so that we can do 6 days of incremental
+ backups before switching to another tape) (already)
+ specify # times (jobs)
+ specify bytes (already)
+ specify time (seconds, hours, days)
+- Implement FSM (File System Modules).
+- Identify unchanged or "system" files and save them to a
+ special tape thus removing them from the standard
+ backup FileSet -- BASE backup.
+- Turn virtually all sprintfs into snprintfs.
+- Heartbeat between daemons.
+- Audit M_ error codes to ensure they are correct and
+ consistent.
+- Add variable break characters to lex analyzer.
+ Either a bit mask or a string of chars so that
+ the caller can change the break characters.
+- Make a single T_BREAK to replace T_COMMA, etc.
+- Ensure that File daemon and Storage daemon can
+ continue a save if the Director goes down (this
+ is NOT currently the case). Must detect socket error,
+ buffer messages for later.
+
+
+Done: (see kernsdone for more)
POOLMEM *cmd; /* SQL command string */
POOLMEM *cached_path;
uint32_t cached_path_id;
+ int transaction; /* transaction started */
+ int changes; /* changes during transaction */
} B_DB;
POOLMEM *cmd; /* SQL command string */
POOLMEM *cached_path;
uint32_t cached_path_id;
+ int changes; /* changes made to db */
} B_DB;
Type CHAR NOT NULL,
Level CHAR NOT NULL,
ClientId INTEGER REFERENCES Client DEFAULT 0,
- JobStatus CHAR,
+ JobStatus CHAR NOT NULL,
SchedTime DATETIME NOT NULL,
StartTime DATETIME DEFAULT 0,
EndTime DATETIME DEFAULT 0,
PRIMARY KEY (Counter)
);
-PRAGMA default_synchronous=OFF;
+PRAGMA default_synchronous = OFF;
+PRAGMA default_cache_size = 10000;
END-OF-DATA
exit 0
e_msg(file, line, M_FATAL, 0, mdb->errmsg); /* ***FIXME*** remove me */
return 0;
}
+ mdb->changes++;
return 1;
}
e_msg(file, line, M_ERROR, 0, "%s\n", cmd);
return 0;
}
+ mdb->changes++;
return 1;
}
e_msg(file, line, M_ERROR, 0, mdb->errmsg);
return -1;
}
+ mdb->changes++;
return sql_affected_rows(mdb);
}
}
/* Must create it */
Mmsg(&mdb->cmd,
-"INSERT INTO Job (JobId, Job, Name, Type, Level, SchedTime, JobTDate) VALUES \
-(%s, \"%s\", \"%s\", \"%c\", \"%c\", \"%s\", %s)",
- JobId, jr->Job, jr->Name, (char)(jr->Type), (char)(jr->Level), dt,
- edit_uint64(JobTDate, ed1));
+"INSERT INTO Job (JobId,Job,Name,Type,Level,JobStatus,SchedTime,JobTDate) VALUES \
+(%s,\"%s\",\"%s\",\"%c\",\"%c\",\"%c\",\"%s\",%s)",
+ JobId, jr->Job, jr->Name, (char)(jr->Type), (char)(jr->Level),
+ (char)(jr->JobStatus), dt, edit_uint64(JobTDate, ed1));
if (!INSERT_DB(mdb, mdb->cmd)) {
Mmsg2(&mdb->errmsg, _("Create DB Job record %s failed. ERR=%s\n"),
Dmsg0(50, "db_create_file_record\n");
Dmsg3(100, "Path=%s File=%s FilenameId=%d\n", spath, file, ar->FilenameId);
-
+#ifdef HAVE_SQLITE
+ if (mdb->transaction && mdb->changes > 10000) {
+ my_sqlite_query(mdb, "COMMIT"); /* end transaction */
+ my_sqlite_query(mdb, "BEGIN"); /* start new transaction */
+ mdb->changes = 0;
+ }
+#endif
return 1;
}
#ifdef HAVE_SQLITE
/******FIXME***** do this machine independently */
my_sqlite_query(mdb, "BEGIN"); /* begin transaction */
+ mdb->transaction = 1;
#endif
+ mdb->changes = 0;
return stat;
}
stat = UPDATE_DB(mdb, mdb->cmd);
#ifdef HAVE_SQLITE
my_sqlite_query(mdb, "COMMIT"); /* end transaction */
+ mdb->transaction = 0;
#endif
db_unlock(mdb);
return stat;
jcr->jr.StartTime = jcr->start_time;
jcr->jr.Type = jcr->JobType;
jcr->jr.Level = jcr->JobLevel;
+ jcr->jr.JobStatus = jcr->JobStatus;
strcpy(jcr->jr.Name, jcr->job->hdr.name);
strcpy(jcr->jr.Job, jcr->Job);
static int save_file(FF_PKT *ff_pkt, void *ijcr)
{
char attribs[MAXSTRING];
- int fid, stat, stream;
+ int fid, stat, stream, len;
struct MD5Context md5c;
int gotMD5 = 0;
unsigned char signature[16];
encode_stat(attribs, &ff_pkt->statp);
jcr->JobFiles++; /* increment number of files sent */
- jcr->last_fname = (char *) check_pool_memory_size(jcr->last_fname, strlen(ff_pkt->fname) + 1);
+ len = strlen(ff_pkt->fname);
+ jcr->last_fname = check_pool_memory_size(jcr->last_fname, len + 1);
+ jcr->last_fname[len] = 0; /* terminate properly before copy */
strcpy(jcr->last_fname, ff_pkt->fname);
/*
if (njcr->JobId == 0) {
len = Mmsg(&msg, _("Director connected at: %s\n"), dt);
} else {
- len = Mmsg(&msg, _("JobId %d Job %s is running. Started: %s\n"),
+      len = Mmsg(&msg, _("%s %s JobId %d Job %s is running. Started: %s\n"),
+                 job_type_to_str(njcr->JobType), job_level_to_str(njcr->JobLevel),
njcr->JobId, njcr->Job, dt);
}
sendit(msg, len, arg);
{
char attribs[MAXSTRING];
int32_t n;
- int fid, stat;
+ int fid, stat, len;
struct MD5Context md5c;
unsigned char signature[16];
BSOCK *sd, *dir;
sd = jcr->store_bsock;
dir = jcr->dir_bsock;
+ jcr->num_files_examined++; /* bump total file count */
switch (ff_pkt->type) {
case FT_LNKSAVED: /* Hard linked, file already saved */
encode_stat(attribs, &ff_pkt->statp);
jcr->JobFiles++; /* increment number of files sent */
+ len = strlen(ff_pkt->fname);
+ jcr->last_fname = check_pool_memory_size(jcr->last_fname, len + 1);
+ jcr->last_fname[len] = 0; /* terminate properly before copy */
+ strcpy(jcr->last_fname, ff_pkt->fname);
if (ff_pkt->VerifyOpts[0] == 0) {
ff_pkt->VerifyOpts[0] = 'V';
time_t end_time; /* job end time */
POOLMEM *VolumeName; /* Volume name desired -- pool_memory */
POOLMEM *client_name; /* client name */
- char *RestoreBootstrap; /* Bootstrap file to restore */
+ POOLMEM *RestoreBootstrap; /* Bootstrap file to restore */
char *sd_auth_key; /* SD auth key */
MSGS *msgs; /* Message resource */
#ifdef FILE_DAEMON
/* File Daemon specific part of JCR */
uint32_t num_files_examined; /* files examined this job */
- char *last_fname; /* last file saved */
+ POOLMEM *last_fname; /* last file saved/verified */
/*********FIXME********* add missing files and files to be retried */
int incremental; /* set if incremental for SINCE */
time_t mtime; /* begin time for SINCE */
long NumVolumes; /* number of volumes used */
long CurVolume; /* current volume number */
int mode; /* manual/auto run */
+ int spool_attributes; /* set if spooling attributes */
+ int no_attributes; /* set if no attributes wanted */
int label_status; /* device volume label status */
int label_errors; /* count of label errors */
int session_opened;
{
int32_t nleft, nwritten;
+ if (bsock->spool) {
+ nwritten = fwrite(ptr, 1, nbytes, bsock->spool_fd);
+ if (nwritten != nbytes) {
+ Emsg1(M_ERROR, 0, _("Spool write error. ERR=%s\n"), strerror(errno));
+ Dmsg2(400, "nwritten=%d nbytes=%d.\n", nwritten, nbytes);
+ return -1;
+ }
+ return nbytes;
+ }
nleft = nbytes;
while (nleft > 0) {
do {
return nbytes; /* return actual length of message */
}
+int bnet_despool(BSOCK *bsock)
+{
+ int32_t pktsiz;
+ size_t nbytes;
+
+ rewind(bsock->spool_fd);
+ while (fread((char *)&pktsiz, 1, sizeof(int32_t), bsock->spool_fd) == sizeof(int32_t)) {
+ bsock->msglen = ntohl(pktsiz);
+ if (bsock->msglen > 0) {
+ if (bsock->msglen > (int32_t)sizeof_pool_memory(bsock->msg)) {
+ bsock->msg = realloc_pool_memory(bsock->msg, bsock->msglen);
+ }
+ nbytes = fread(bsock->msg, 1, bsock->msglen, bsock->spool_fd);
+ if (nbytes != (size_t)bsock->msglen) {
+ Dmsg2(400, "nbytes=%d msglen=%d\n", nbytes, bsock->msglen);
+ Emsg1(M_ERROR, 0, _("fread error. ERR=%s\n"), strerror(errno));
+ return 0;
+ }
+ }
+ bnet_send(bsock);
+ }
+ if (ferror(bsock->spool_fd)) {
+ Emsg1(M_ERROR, 0, _("fread error. ERR=%s\n"), strerror(errno));
+ return 0;
+ }
+ return 1;
+}
+
/*
* Send a message over the network. The send consists of
* two network packets. The first is sends a 32 bit integer containing
POOLMEM *errmsg; /* edited error message (to be implemented) */
RES *res; /* Resource to which we are connected */
struct s_bsock *next; /* next BSOCK if duped */
+ int spool; /* set for spooling */
+ FILE *spool_fd; /* spooling file */
} BSOCK;
/* Signal definitions for use in bnet_sig() */
#define BP_BYTES 7 /* Binary bytes */
#define BP_FLOAT32 8 /* 32 bit floating point */
#define BP_FLOAT64 9 /* 64 bit floating point */
-
return omsg;
}
+static void make_unique_spool_filename(JCR *jcr, POOLMEM **name, int fd)
+{
+ Mmsg(name, "%s/%s.spool.%s.%d", working_directory, my_name,
+ jcr->Job, fd);
+}
+
+int open_spool_file(void *vjcr, BSOCK *bs)
+{
+ POOLMEM *name = get_pool_memory(PM_MESSAGE);
+ JCR *jcr = (JCR *)vjcr;
+
+ make_unique_spool_filename(jcr, &name, bs->fd);
+ bs->spool_fd = fopen(name, "w+");
+ if (!bs->spool_fd) {
+ Jmsg(jcr, M_ERROR, 0, "fopen spool file %s failed: ERR=%s\n", name, strerror(errno));
+ free_pool_memory(name);
+ return 0;
+ }
+ free_pool_memory(name);
+ return 1;
+}
+
+int close_spool_file(void *vjcr, BSOCK *bs)
+{
+ POOLMEM *name = get_pool_memory(PM_MESSAGE);
+ JCR *jcr = (JCR *)vjcr;
+
+ make_unique_spool_filename(jcr, &name, bs->fd);
+ fclose(bs->spool_fd);
+ unlink(name);
+ free_pool_memory(name);
+ bs->spool_fd = NULL;
+ bs->spool = 0;
+ return 1;
+}
+
+
/*
* Create a unique filename for the mail command
*/
-static void make_unique_mail_filename(JCR *jcr, char **name, DEST *d)
+static void make_unique_mail_filename(JCR *jcr, POOLMEM **name, DEST *d)
{
if (jcr) {
Mmsg(name, "%s/%s.mail.%s.%d", working_directory, my_name,
DEST *d;
FILE *pfd;
POOLMEM *cmd, *line;
- int len;
+ int len, stat;
Dmsg1(050, "Close_msg jcr=0x%x\n", jcr);
while (fgets(line, len, d->fd)) {
fputs(line, pfd);
}
- pclose(pfd); /* close pipe, sending mail */
+ stat = pclose(pfd); /* close pipe, sending mail */
+ /*
+ * Since we are closing all messages, before "recursing"
+ * make sure we are not closing the daemon messages, otherwise
+ * kaboom.
+ */
+ if (stat < 0 && msgs != daemon_msgs) {
+ Emsg0(M_ERROR, 0, _("Mail program terminated in error.\n"));
+ }
free_memory(line);
rem_temp_file:
/* Remove temp file */
d->fd = open_mail_pipe(jcr, &mcmd, d);
free_pool_memory(mcmd);
if (d->fd) {
+ int stat;
fputs(msg, d->fd);
/* Messages to the operator go one at a time */
- pclose(d->fd);
+ stat = pclose(d->fd);
d->fd = NULL;
+ if (stat < 0) {
+ Emsg0(M_ERROR, 0, _("Operator mail program terminated in error.\n"));
+ }
}
break;
case MD_MAIL:
char *buf;
va_list arg_ptr;
int i, len;
- JCR *jcr = (JCR *) vjcr;
+ JCR *jcr = (JCR *)vjcr;
MSGS *msgs;
char *job;
char * bnet_strerror (BSOCK *bsock);
char * bnet_sig_to_ascii (BSOCK *bsock);
int bnet_wait_data (BSOCK *bsock, int sec);
+int bnet_despool (BSOCK *bsock);
/* cram-md5.c */
void dispatch_message (void *jcr, int type, int level, char *buf);
void init_console_msg (char *wd);
void free_msgs_res (MSGS *msgs);
+int open_spool_file (void *jcr, BSOCK *bs);
+int close_spool_file (void *vjcr, BSOCK *bs);
/* bnet_server.c */
sm_check(__FILE__, __LINE__, False);
+ if (!jcr->no_attributes && jcr->spool_attributes) {
+ open_spool_file(jcr, jcr->dir_bsock);
+ }
+
ds = fd_sock;
if (!bnet_set_buffer_size(ds, MAX_NETWORK_BUFFER_SIZE, BNET_SETBUF_WRITE)) {
}
sm_check(__FILE__, __LINE__, False);
if (!ok) {
+ Dmsg0(400, "Not OK\n");
break;
}
jcr->JobBytes += rec.data_len; /* increment bytes this job */
stream_to_ascii(rec.Stream), rec.data_len);
/* Send attributes and MD5 to Director for Catalog */
if (stream == STREAM_UNIX_ATTRIBUTES || stream == STREAM_MD5_SIGNATURE) {
- if (!dir_update_file_attributes(jcr, &rec)) {
- ok = FALSE;
- break;
+ if (!jcr->no_attributes) {
+ if (jcr->spool_attributes && jcr->dir_bsock->spool_fd) {
+ jcr->dir_bsock->spool = 1;
+ }
+ if (!dir_update_file_attributes(jcr, &rec)) {
+ Jmsg(jcr, M_FATAL, 0, _("Error updating file attributes. ERR=%s\n"),
+ bnet_strerror(jcr->dir_bsock));
+ ok = FALSE;
+ jcr->dir_bsock->spool = 0;
+ break;
+ }
+ jcr->dir_bsock->spool = 0;
}
}
sm_check(__FILE__, __LINE__, False);
}
/* Write out final block of this session */
if (!write_block_to_device(jcr, dev, block)) {
- Pmsg0(0, "Set ok=FALSE after write_block_to_device.\n");
+ Pmsg0(000, "Set ok=FALSE after write_block_to_device.\n");
ok = FALSE;
}
/* Release the device */
if (!release_device(jcr, dev, block)) {
- Pmsg0(0, "Error in release_device\n");
+ Pmsg0(000, "Error in release_device\n");
ok = FALSE;
}
free_block(block);
+
+ if (jcr->spool_attributes && jcr->dir_bsock->spool_fd) {
+ bnet_despool(jcr->dir_bsock);
+ close_spool_file(jcr, jcr->dir_bsock);
+ }
+
Dmsg0(90, "return from do_append_data()\n");
return ok ? 1 : 0;
}
dev->block_num = dev->file = 0;
dev->file_bytes = 0;
+#ifdef MTUNLOCK
+ mt_com.mt_op = MTUNLOCK;
+ mt_com.mt_count = 1;
+ ioctl(dev->fd, MTIOCTOP, (char *)&mt_com);
+#endif
mt_com.mt_op = MTOFFL;
mt_com.mt_count = 1;
if (ioctl(dev->fd, MTIOCTOP, (char *)&mt_com) < 0) {
static void do_close(DEVICE *dev)
{
+
Dmsg0(29, "really close_dev\n");
close(dev->fd);
/* Clean up device packet so it can be reused */
dev->LastBlockNumWritten = 0;
memset(&dev->VolCatInfo, 0, sizeof(dev->VolCatInfo));
memset(&dev->VolHdr, 0, sizeof(dev->VolHdr));
+ dev->use_count--;
}
/*
do_close(dev);
} else {
Dmsg0(29, "close_dev but in use so leave open.\n");
+ dev->use_count--;
}
- dev->use_count--;
}
/*
}
Dmsg0(29, "really close_dev\n");
do_close(dev);
- dev->use_count--;
}
int truncate_dev(DEVICE *dev)
/* NOTE, we reuse a calling argument jcr. Be warned! */
for (jcr=NULL; (jcr=get_next_jcr(jcr)); ) {
if (jcr->JobStatus == JS_WaitFD) {
- bnet_fsend(user, _("Job %s is waiting for the Client connection.\n"),
- jcr->Job);
+ bnet_fsend(user, _("%s Job %s waiting for Client connection.\n"),
+ job_type_to_str(jcr->JobType), jcr->Job);
}
if (jcr->device) {
- bnet_fsend(user, _("Job %s is using device %s\n"),
+ bnet_fsend(user, _("%s %s job %s is using device %s\n"),
+ job_level_to_str(jcr->JobLevel),
+ job_type_to_str(jcr->JobType),
jcr->Job, jcr->device->device_name);
sec = time(NULL) - jcr->run_time;
if (sec <= 0) {
{"autochanger", store_yesno, ITEM(res_dev.cap_bits), CAP_AUTOCHANGER, ITEM_DEFAULT, 0},
{"changerdevice", store_strname,ITEM(res_dev.changer_name), 0, 0, 0},
{"changercommand", store_strname,ITEM(res_dev.changer_command), 0, 0, 0},
- {"maximumchangerwait", store_pint, ITEM(res_dev.max_changer_wait), 0, ITEM_DEFAULT, 60},
+ {"maximumchangerwait", store_pint, ITEM(res_dev.max_changer_wait), 0, ITEM_DEFAULT, 2 * 60},
{"maximumopenwait", store_pint, ITEM(res_dev.max_open_wait), 0, ITEM_DEFAULT, 5 * 60},
{"offlineonunmount", store_yesno, ITEM(res_dev.cap_bits), CAP_OFFLINEUNMOUNT, ITEM_DEFAULT, 1},
{"maximumrewindwait", store_pint, ITEM(res_dev.max_rewind_wait), 0, ITEM_DEFAULT, 5 * 60},
{"storage", store_items, R_STORAGE, NULL},
{"device", dev_items, R_DEVICE, NULL},
{"messages", msgs_items, R_MSGS, NULL},
- {NULL, NULL, 0, NULL}
+ {NULL, NULL, 0, NULL}
};
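
For reference, an SD Device resource exercising the new keywords registered in the items table above might look like this. It is a sketch only: the device paths, the changer script location, and the editing codes passed to it are illustrative, not a tested configuration.

```
# Sketch of an SD Device resource using the new autochanger keywords.
# Paths and values are illustrative.
Device {
  Name = "DDS-4 changer"
  Archive Device = /dev/nst0
  Media Type = DDS-4
  Autochanger = yes
  Changer Device = /dev/sg0
  Changer Command = "/etc/bacula/mtx-changer %c %o %S %a"
  Maximum Changer Wait = 120     # seconds (the new default, 2 * 60)
  Maximum Open Wait = 300        # seconds
  Offline On Unmount = yes       # eject the tape before freeing the drive
  Always Open = no               # free the drive when the job finishes
}
```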
return;
}
sendit(sock, "dump_resource type=%d\n", type);
- if (type < 0) { /* no recursion */
+ if (type < 0) { /* no recursion */
type = - type;
recurse = 0;
}
switch (type) {
case R_DIRECTOR:
sendit(sock, "Director: name=%s\n", res->res_dir.hdr.name);
- break;
+ break;
case R_STORAGE:
sendit(sock, "Storage: name=%s address=%s SDport=%d SDDport=%d\n",
- res->res_store.hdr.name, res->res_store.address,
- res->res_store.SDport, res->res_store.SDDport);
- break;
+ res->res_store.hdr.name, res->res_store.address,
+ res->res_store.SDport, res->res_store.SDDport);
+ break;
case R_DEVICE:
sendit(sock, "Device: name=%s MediaType=%s Device=%s\n",
- res->res_dev.hdr.name,
- res->res_dev.media_type, res->res_dev.device_name);
+ res->res_dev.hdr.name,
+ res->res_dev.media_type, res->res_dev.device_name);
sendit(sock, " rew_wait=%d min_bs=%d max_bs=%d\n",
- res->res_dev.max_rewind_wait, res->res_dev.min_block_size,
- res->res_dev.max_block_size);
+ res->res_dev.max_rewind_wait, res->res_dev.min_block_size,
+ res->res_dev.max_block_size);
sendit(sock, " max_jobs=%d max_files=%" lld " max_size=%" lld "\n",
- res->res_dev.max_volume_jobs, res->res_dev.max_volume_files,
- res->res_dev.max_volume_size);
+ res->res_dev.max_volume_jobs, res->res_dev.max_volume_files,
+ res->res_dev.max_volume_size);
sendit(sock, " max_file_size=%" lld " capacity=%" lld "\n",
- res->res_dev.max_file_size, res->res_dev.volume_capacity);
+ res->res_dev.max_file_size, res->res_dev.volume_capacity);
strcpy(buf, " ");
- if (res->res_dev.cap_bits & CAP_EOF) {
+ if (res->res_dev.cap_bits & CAP_EOF) {
strcat(buf, "CAP_EOF ");
- }
- if (res->res_dev.cap_bits & CAP_BSR) {
+ }
+ if (res->res_dev.cap_bits & CAP_BSR) {
strcat(buf, "CAP_BSR ");
- }
- if (res->res_dev.cap_bits & CAP_BSF) {
+ }
+ if (res->res_dev.cap_bits & CAP_BSF) {
strcat(buf, "CAP_BSF ");
- }
- if (res->res_dev.cap_bits & CAP_FSR) {
+ }
+ if (res->res_dev.cap_bits & CAP_FSR) {
strcat(buf, "CAP_FSR ");
- }
- if (res->res_dev.cap_bits & CAP_FSF) {
+ }
+ if (res->res_dev.cap_bits & CAP_FSF) {
strcat(buf, "CAP_FSF ");
- }
- if (res->res_dev.cap_bits & CAP_EOM) {
+ }
+ if (res->res_dev.cap_bits & CAP_EOM) {
strcat(buf, "CAP_EOM ");
- }
- if (res->res_dev.cap_bits & CAP_REM) {
+ }
+ if (res->res_dev.cap_bits & CAP_REM) {
strcat(buf, "CAP_REM ");
- }
- if (res->res_dev.cap_bits & CAP_RACCESS) {
+ }
+ if (res->res_dev.cap_bits & CAP_RACCESS) {
strcat(buf, "CAP_RACCESS ");
- }
- if (res->res_dev.cap_bits & CAP_AUTOMOUNT) {
+ }
+ if (res->res_dev.cap_bits & CAP_AUTOMOUNT) {
strcat(buf, "CAP_AUTOMOUNT ");
- }
- if (res->res_dev.cap_bits & CAP_LABEL) {
+ }
+ if (res->res_dev.cap_bits & CAP_LABEL) {
strcat(buf, "CAP_LABEL ");
- }
- if (res->res_dev.cap_bits & CAP_ANONVOLS) {
+ }
+ if (res->res_dev.cap_bits & CAP_ANONVOLS) {
strcat(buf, "CAP_ANONVOLS ");
- }
- if (res->res_dev.cap_bits & CAP_ALWAYSOPEN) {
+ }
+ if (res->res_dev.cap_bits & CAP_ALWAYSOPEN) {
strcat(buf, "CAP_ALWAYSOPEN ");
- }
+ }
strcat(buf, "\n");
- sendit(sock, buf);
- break;
+ sendit(sock, buf);
+ break;
case R_MSGS:
sendit(sock, "Messages: name=%s\n", res->res_msgs.hdr.name);
- if (res->res_msgs.mail_cmd)
+ if (res->res_msgs.mail_cmd)
sendit(sock, " mailcmd=%s\n", res->res_msgs.mail_cmd);
- if (res->res_msgs.operator_cmd)
+ if (res->res_msgs.operator_cmd)
sendit(sock, " opcmd=%s\n", res->res_msgs.operator_cmd);
- break;
+ break;
default:
sendit(sock, _("Warning: unknown resource type %d\n"), type);
- break;
+ break;
}
if (recurse && res->res_dir.hdr.next)
dump_resource(type, (RES *)res->res_dir.hdr.next, sendit, sock);
switch (type) {
case R_DIRECTOR:
- if (res->res_dir.password)
- free(res->res_dir.password);
- if (res->res_dir.address)
- free(res->res_dir.address);
- break;
+ if (res->res_dir.password)
+ free(res->res_dir.password);
+ if (res->res_dir.address)
+ free(res->res_dir.address);
+ break;
case R_STORAGE:
- if (res->res_store.address)
- free(res->res_store.address);
- if (res->res_store.working_directory)
- free(res->res_store.working_directory);
- if (res->res_store.pid_directory)
- free(res->res_store.pid_directory);
- if (res->res_store.subsys_directory)
- free(res->res_store.subsys_directory);
- break;
+ if (res->res_store.address)
+ free(res->res_store.address);
+ if (res->res_store.working_directory)
+ free(res->res_store.working_directory);
+ if (res->res_store.pid_directory)
+ free(res->res_store.pid_directory);
+ if (res->res_store.subsys_directory)
+ free(res->res_store.subsys_directory);
+ break;
case R_DEVICE:
- if (res->res_dev.media_type)
- free(res->res_dev.media_type);
- if (res->res_dev.device_name)
- free(res->res_dev.device_name);
- if (res->res_dev.changer_name)
- free(res->res_dev.changer_name);
- if (res->res_dev.changer_command)
- free(res->res_dev.changer_command);
- break;
+ if (res->res_dev.media_type)
+ free(res->res_dev.media_type);
+ if (res->res_dev.device_name)
+ free(res->res_dev.device_name);
+ if (res->res_dev.changer_name)
+ free(res->res_dev.changer_name);
+ if (res->res_dev.changer_command)
+ free(res->res_dev.changer_command);
+ break;
case R_MSGS:
- if (res->res_msgs.mail_cmd)
- free(res->res_msgs.mail_cmd);
- if (res->res_msgs.operator_cmd)
- free(res->res_msgs.operator_cmd);
- free_msgs_res((MSGS *)res); /* free message resource */
- res = NULL;
- break;
+ if (res->res_msgs.mail_cmd)
+ free(res->res_msgs.mail_cmd);
+ if (res->res_msgs.operator_cmd)
+ free(res->res_msgs.operator_cmd);
+ free_msgs_res((MSGS *)res); /* free message resource */
+ res = NULL;
+ break;
default:
Dmsg1(0, "Unknown resource type %d\n", type);
- break;
+ break;
}
/* Common stuff again -- free the resource, recurse to next one */
if (res) {
*/
for (i=0; items[i].name; i++) {
if (items[i].flags & ITEM_REQUIRED) {
- if (!bit_is_set(i, res_all.res_dir.hdr.item_present)) {
+ if (!bit_is_set(i, res_all.res_dir.hdr.item_present)) {
Emsg2(M_ABORT, 0, _("%s item is required in %s resource, but not found.\n"),
- items[i].name, resources[rindex]);
- }
+ items[i].name, resources[rindex]);
+ }
}
/* If this triggers, take a look at lib/parse_conf.h */
if (i >= MAX_RES_ITEMS) {
*/
if (pass == 2) {
switch (type) {
- /* Resources not containing a resource */
- case R_DIRECTOR:
- case R_DEVICE:
- case R_MSGS:
- break;
-
- /* Resources containing a resource */
- case R_STORAGE:
- if ((res = (URES *)GetResWithName(R_STORAGE, res_all.res_dir.hdr.name)) == NULL) {
+ /* Resources not containing a resource */
+ case R_DIRECTOR:
+ case R_DEVICE:
+ case R_MSGS:
+ break;
+
+ /* Resources containing a resource */
+ case R_STORAGE:
+ if ((res = (URES *)GetResWithName(R_STORAGE, res_all.res_dir.hdr.name)) == NULL) {
Emsg1(M_ABORT, 0, "Cannot find Storage resource %s\n", res_all.res_dir.hdr.name);
- }
- res->res_store.messages = res_all.res_store.messages;
- break;
- default:
+ }
+ res->res_store.messages = res_all.res_store.messages;
+ break;
+ default:
printf("Unknown resource type %d\n", type);
- error = 1;
- break;
+ error = 1;
+ break;
}
if (res_all.res_dir.hdr.name) {
- free(res_all.res_dir.hdr.name);
- res_all.res_dir.hdr.name = NULL;
+ free(res_all.res_dir.hdr.name);
+ res_all.res_dir.hdr.name = NULL;
}
if (res_all.res_dir.hdr.desc) {
- free(res_all.res_dir.hdr.desc);
- res_all.res_dir.hdr.desc = NULL;
+ free(res_all.res_dir.hdr.desc);
+ res_all.res_dir.hdr.desc = NULL;
}
return;
}
/* The following code is only executed on pass 1 */
switch (type) {
case R_DIRECTOR:
- size = sizeof(DIRRES);
- break;
+ size = sizeof(DIRRES);
+ break;
case R_STORAGE:
- size = sizeof(STORES);
- break;
+ size = sizeof(STORES);
+ break;
case R_DEVICE:
- size = sizeof(DEVRES);
- break;
+ size = sizeof(DEVRES);
+ break;
case R_MSGS:
- size = sizeof(MSGS);
- break;
+ size = sizeof(MSGS);
+ break;
default:
printf("Unknown resource type %d\n", type);
- error = 1;
- break;
+ error = 1;
+ break;
}
/* Common */
if (!error) {
res = (URES *)malloc(size);
memcpy(res, &res_all, size);
if (!resources[rindex].res_head) {
- resources[rindex].res_head = (RES *)res; /* store first entry */
+ resources[rindex].res_head = (RES *)res; /* store first entry */
} else {
- RES *next;
- /* Add new res to end of chain */
- for (next=resources[rindex].res_head; next->next; next=next->next)
- { }
- next->next = (RES *)res;
+ RES *next;
+ /* Add new res to end of chain */
+ for (next=resources[rindex].res_head; next->next; next=next->next)
+ { }
+ next->next = (RES *)res;
Dmsg2(90, "Inserting %s res: %s\n", res_to_str(type),
- res->res_dir.hdr.name);
+ res->res_dir.hdr.name);
}
}
}
/* */
#define VERSION "1.23"
#define VSTRING "1"
-#define DATE "18 July 2002"
-#define LSMDATE "18Jul02"
+#define DATE "20 July 2002"
+#define LSMDATE "20Jul02"
/* Debug flags */
#define DEBUG 1