Technical notes on version 2.1
General:
+Release 2.1.24 beta
+30Jun07
+kes Integrate patch from Sergey Svishchev <svs@ropnet.ru> that fixes
+ bug in migration code where a job that spanned two volumes
+ was migrated twice.
+29Jun07
+kes Implement new BST_DESPOOLING blocked state. Change from locking
+ to blocking during despooling in the SD. This means that other
+ threads can work with the device structure, in particular the
+ reservations system, while despooling.
+28Jun07
+kes Fix return in reservation message queue that missed clearing
+ the jcr lock (implemented 26Jun07 below).
+kes Rename a number of dev methods to make locking function names
+ a bit clearer.
+kes Document locking in lock.c. Move lock structures to new file
+ lock.h.
+26Jun07
+kes Move the reservations message lock to lock only the jcr; this
+ fixes bug #861.
+kes Move main SD locking code into lock.c (new file).
+kes Update Win32 build to include lock.c
+
Release 2.1.22 beta
26Jun07
kes Dirk committed the qwt library code for drawing graphs in bat.
- Release Notes for Bacula 2.1.22
+ Release Notes for Bacula 2.1.24
- Bacula code: Total files = 458 Total lines = 198,659 (*.h *.c *.in)
+ Bacula code: Total files = 516 Total lines = 244,807 (*.h *.c *.in)
This Director and Storage daemon must be upgraded at the same time,
but they should be compatible with all 2.0.x File daemons, unless you
points.
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
+Changes since Beta release 2.1.22
+- Different locking in the reservations and despooling systems:
+  more micro-locking and less macro-locking. This should give much
+  more concurrency at the expense of slightly (<0.1%) more
+  locking/unlocking overhead, so concurrent jobs should run much
+  faster.
+- Patch from Sergey Svishchev <svs@ropnet.ru> that fixes bug in
+ migration code where a job that spanned two volumes was
+ migrated twice.
+
+
Changes since Beta release 2.1.20
- New graphs in bat
- Due to a typo, I had inadvertently turned off batch insert mode.
--- /dev/null
+This patch should resolve some problems with handling of am/pm
+in schedules as reported by bug #808.
+
+According to NIST (the US National Institute of Standards and Technology),
+12am and 12pm are ambiguous and can be defined either way. However,
+12:01am is the same as 00:01 and 12:01pm is the same as 12:01, so Bacula
+defines 12am as 00:00 (midnight) and 12pm as 12:00 (noon). You can avoid
+this ambiguity (confusion) by using 24-hour time specifications (i.e. no
+am/pm). This is the definition in Bacula version 2.0.3 and later.
+
+Apply it to version 2.0.3 with:
+
+ cd <bacula-source>
+ patch -p0 <2.0.3-ampm.patch
+ make
+ ...
+ make install
+
+Index: src/dird/run_conf.c
+===================================================================
+--- src/dird/run_conf.c (revision 4349)
++++ src/dird/run_conf.c (working copy)
+@@ -339,6 +339,7 @@
+ for ( ; token != T_EOL; (token = lex_get_token(lc, T_ALL))) {
+ int len;
+ bool pm = false;
++ bool am = false;
+ switch (token) {
+ case T_NUMBER:
+ state = s_mday;
+@@ -434,6 +435,7 @@
+ if (!have_hour) {
+ clear_bits(0, 23, lrun.hour);
+ }
++// Dmsg1(000, "s_time=%s\n", lc->str);
+ p = strchr(lc->str, ':');
+ if (!p) {
+ scan_err0(lc, _("Time logic error.\n"));
+@@ -441,20 +443,19 @@
+ }
+ *p++ = 0; /* separate two halves */
+ code = atoi(lc->str); /* pick up hour */
++ code2 = atoi(p); /* pick up minutes */
+ len = strlen(p);
+- if (len > 2 && p[len-1] == 'm') {
+- if (p[len-2] == 'a') {
+- pm = false;
+- } else if (p[len-2] == 'p') {
+- pm = true;
+- } else {
+- scan_err0(lc, _("Bad time specification."));
+- /* NOT REACHED */
+- }
+- } else {
+- pm = false;
++ if (len >= 2) {
++ p += 2;
+ }
+- code2 = atoi(p); /* pick up minutes */
++ if (strcasecmp(p, "pm") == 0) {
++ pm = true;
++ } else if (strcasecmp(p, "am") == 0) {
++ am = true;
++ } else if (len != 2) {
++ scan_err0(lc, _("Bad time specification."));
++ /* NOT REACHED */
++ }
+ /*
+ * Note, according to NIST, 12am and 12pm are ambiguous and
+ * can be defined to anything. However, 12:01am is the same
+@@ -467,13 +468,14 @@
+ code += 12;
+ }
+ /* am */
+- } else if (code == 12) {
++ } else if (am && code == 12) {
+ code -= 12;
+ }
+ if (code < 0 || code > 23 || code2 < 0 || code2 > 59) {
+ scan_err0(lc, _("Bad time specification."));
+ /* NOT REACHED */
+ }
++// Dmsg2(000, "hour=%d min=%d\n", code, code2);
+ set_bit(code, lrun.hour);
+ lrun.minute = code2;
+ have_hour = true;
--- /dev/null
+
+ This patch adds MaxVolBytes to the output of a "show pools" command.
+ It fixes bug #814. Apply it to Bacula version 2.0.3 with:
+
+ cd <bacula-source>
+ patch -p0 <2.0.3-maxbyteslist.patch
+ make
+ ...
+ make install
+
+
+Index: src/dird/dird_conf.c
+===================================================================
+--- src/dird/dird_conf.c (revision 4349)
++++ src/dird/dird_conf.c (working copy)
+@@ -844,10 +844,13 @@
+ NPRT(res->res_pool.label_format));
+ sendit(sock, _(" CleaningPrefix=%s LabelType=%d\n"),
+ NPRT(res->res_pool.cleaning_prefix), res->res_pool.LabelType);
+- sendit(sock, _(" RecyleOldest=%d PurgeOldest=%d MaxVolJobs=%d MaxVolFiles=%d\n"),
++ sendit(sock, _(" RecyleOldest=%d PurgeOldest=%d\n"),
+ res->res_pool.recycle_oldest_volume,
+- res->res_pool.purge_oldest_volume,
+- res->res_pool.MaxVolJobs, res->res_pool.MaxVolFiles);
++ res->res_pool.purge_oldest_volume);
++ sendit(sock, _(" MaxVolJobs=%d MaxVolFiles=%d MaxVolBytes=%s\n"),
++ res->res_pool.MaxVolJobs,
++ res->res_pool.MaxVolFiles,
++ edit_uint64(res->res_pool.MaxVolBytes, ed1));
+ sendit(sock, _(" MigTime=%s MigHiBytes=%s MigLoBytes=%s\n"),
+ edit_utime(res->res_pool.MigrationTime, ed1, sizeof(ed1)),
+ edit_uint64(res->res_pool.MigrationHighBytes, ed2),
--- /dev/null
+
+This patch should fix the logic error in checking for the MaxWaitTime of
+a job in src/dird/job.c. It fixes bug #802.
+
+Apply it to Bacula version 2.0.3 with:
+
+ cd <bacula-source>
+ patch -p0 <2.0.3-maxwaittime.patch
+ make
+ ...
+ make install
+
+
+
+Index: src/dird/job.c
+===================================================================
+--- src/dird/job.c (revision 4349)
++++ src/dird/job.c (working copy)
+@@ -481,7 +481,6 @@
+ static bool job_check_maxwaittime(JCR *control_jcr, JCR *jcr)
+ {
+ bool cancel = false;
+- bool ok_to_cancel = false;
+ JOB *job = jcr->job;
+
+ if (job_canceled(jcr)) {
+@@ -493,69 +492,18 @@
+ }
+ if (jcr->JobLevel == L_FULL && job->FullMaxWaitTime != 0 &&
+ (watchdog_time - jcr->start_time) >= job->FullMaxWaitTime) {
+- ok_to_cancel = true;
++ cancel = true;
+ } else if (jcr->JobLevel == L_DIFFERENTIAL && job->DiffMaxWaitTime != 0 &&
+ (watchdog_time - jcr->start_time) >= job->DiffMaxWaitTime) {
+- ok_to_cancel = true;
++ cancel = true;
+ } else if (jcr->JobLevel == L_INCREMENTAL && job->IncMaxWaitTime != 0 &&
+ (watchdog_time - jcr->start_time) >= job->IncMaxWaitTime) {
+- ok_to_cancel = true;
++ cancel = true;
+ } else if (job->MaxWaitTime != 0 &&
+ (watchdog_time - jcr->start_time) >= job->MaxWaitTime) {
+- ok_to_cancel = true;
+- }
+- if (!ok_to_cancel) {
+- return false;
+- }
+-
+-/*
+- * I don't see the need for all this -- kes 17Dec06
+- */
+-#ifdef xxx
+- Dmsg3(800, "Job %d (%s): MaxWaitTime of %d seconds exceeded, "
+- "checking status\n",
+- jcr->JobId, jcr->Job, job->MaxWaitTime);
+- switch (jcr->JobStatus) {
+- case JS_Created:
+- case JS_Blocked:
+- case JS_WaitFD:
+- case JS_WaitSD:
+- case JS_WaitStoreRes:
+- case JS_WaitClientRes:
+- case JS_WaitJobRes:
+- case JS_WaitPriority:
+- case JS_WaitMaxJobs:
+- case JS_WaitStartTime:
+ cancel = true;
+- Dmsg0(200, "JCR blocked in #1\n");
+- break;
+- case JS_Running:
+- Dmsg0(800, "JCR running, checking SD status\n");
+- switch (jcr->SDJobStatus) {
+- case JS_WaitMount:
+- case JS_WaitMedia:
+- case JS_WaitFD:
+- cancel = true;
+- Dmsg0(800, "JCR blocked in #2\n");
+- break;
+- default:
+- Dmsg0(800, "JCR not blocked in #2\n");
+- break;
+- }
+- break;
+- case JS_Terminated:
+- case JS_ErrorTerminated:
+- case JS_Canceled:
+- case JS_FatalError:
+- Dmsg0(800, "JCR already dead in #3\n");
+- break;
+- default:
+- Jmsg1(jcr, M_ERROR, 0, _("Unhandled job status code %d\n"),
+- jcr->JobStatus);
+ }
+- Dmsg3(800, "MaxWaitTime result: %scancel JCR %p (%s)\n",
+- cancel ? "" : "do not ", jcr, jcr->Job);
+-#endif
++
+ return cancel;
+ }
+
+@@ -574,36 +522,6 @@
+ return false;
+ }
+
+-#ifdef xxx
+- switch (jcr->JobStatus) {
+- case JS_Created:
+- case JS_Running:
+- case JS_Blocked:
+- case JS_WaitFD:
+- case JS_WaitSD:
+- case JS_WaitStoreRes:
+- case JS_WaitClientRes:
+- case JS_WaitJobRes:
+- case JS_WaitPriority:
+- case JS_WaitMaxJobs:
+- case JS_WaitStartTime:
+- case JS_Differences:
+- cancel = true;
+- break;
+- case JS_Terminated:
+- case JS_ErrorTerminated:
+- case JS_Canceled:
+- case JS_FatalError:
+- cancel = false;
+- break;
+- default:
+- Jmsg1(jcr, M_ERROR, 0, _("Unhandled job status code %d\n"),
+- jcr->JobStatus);
+- }
+-
+- Dmsg3(200, "MaxRunTime result: %scancel JCR %p (%s)\n",
+- cancel ? "" : "do not ", jcr, jcr->Job);
+-#endif
+ return true;
+ }
+
--- /dev/null
+
+ This patch should fix bug #812 where the DST time shift was
+ incorrectly handled. This patch was submitted by Martin Simmons.
+ Apply it to Bacula version 2.0.3 with:
+
+ cd <bacula-source>
+ patch -p0 <2.0.3-scheduler-next-hour.patch
+ make
+ ...
+ make install
+
+Index: src/dird/scheduler.c
+===================================================================
+--- src/dird/scheduler.c (revision 4445)
++++ src/dird/scheduler.c (working copy)
+@@ -175,11 +175,11 @@
+ }
+ /* Recheck at least once per minute */
+ bmicrosleep((next_check_secs < twait)?next_check_secs:twait, 0);
+- /* Attempt to handle clock shift from/to daylight savings time
++ /* Attempt to handle clock shift (but not daylight savings time changes)
+ * we allow a skew of 10 seconds before invalidating everything.
+ */
+ now = time(NULL);
+- if (now < prev+10 || now > (prev+next_check_secs+10)) {
++ if (now < prev-10 || now > (prev+next_check_secs+10)) {
+ schedules_invalidated = true;
+ }
+ }
+@@ -284,6 +284,9 @@
+ wom = mday / 7;
+ woy = tm_woy(now); /* get week of year */
+
++ Dmsg7(dbglvl, "now = %x: h=%d m=%d md=%d wd=%d wom=%d woy=%d\n",
++ now, hour, month, mday, wday, wom, woy);
++
+ /*
+ * Compute values for next hour from now.
+ * We do this to be sure we don't miss a job while
+@@ -299,6 +302,9 @@
+ nh_wom = nh_mday / 7;
+ nh_woy = tm_woy(now); /* get week of year */
+
++ Dmsg7(dbglvl, "nh = %x: h=%d m=%d md=%d wd=%d wom=%d woy=%d\n",
++ next_hour, nh_hour, nh_month, nh_mday, nh_wday, nh_wom, nh_woy);
++
+ /* Loop through all jobs */
+ LockRes();
+ foreach_res(job, R_JOB) {
+@@ -351,24 +357,20 @@
+
+ Dmsg3(dbglvl, "run@%p: run_now=%d run_nh=%d\n", run, run_now, run_nh);
+
+- /* find time (time_t) job is to be run */
+- (void)localtime_r(&now, &tm); /* reset tm structure */
+- tm.tm_min = run->minute; /* set run minute */
+- tm.tm_sec = 0; /* zero secs */
+- if (run_now) {
+- runtime = mktime(&tm);
+- add_job(job, run, now, runtime);
+- }
+- /* If job is to be run in the next hour schedule it */
+- if (run_nh) {
+- /* Set correct values */
+- tm.tm_hour = nh_hour;
+- tm.tm_mday = nh_mday + 1; /* fixup because we biased for tests above */
+- tm.tm_mon = nh_month;
+- tm.tm_year = nh_year;
+- runtime = mktime(&tm);
+- add_job(job, run, now, runtime);
+- }
++ if (run_now || run_nh) {
++ /* find time (time_t) job is to be run */
++ (void)localtime_r(&now, &tm); /* reset tm structure */
++ tm.tm_min = run->minute; /* set run minute */
++ tm.tm_sec = 0; /* zero secs */
++ runtime = mktime(&tm);
++ if (run_now) {
++ add_job(job, run, now, runtime);
++ }
++ /* If job is to be run in the next hour schedule it */
++ if (run_nh) {
++ add_job(job, run, now, runtime + 3600);
++ }
++ }
+ }
+ }
+ UnlockRes();
--- /dev/null
+
+This patch should fix the spurious connection drops that fail jobs
+as reported in bug #888.
+Apply it to version 2.0.3 (possibly earlier versions of 2.0) with:
+
+ cd <bacula-source>
+ patch -p0 <2.0.3-tls-disconnect.patch
+ make
+ ...
+ make install
+
+Index: src/lib/tls.c
+===================================================================
+--- src/lib/tls.c (revision 4668)
++++ src/lib/tls.c (working copy)
+@@ -540,14 +540,6 @@
+ * The first time to initiate the shutdown handshake, and the second to
+ * receive the peer's reply.
+ *
+- * However, it is valid to close the SSL connection after the initial
+- * shutdown notification is sent to the peer, without waiting for the
+- * peer's reply, as long as you do not plan to re-use that particular
+- * SSL connection object.
+- *
+- * Because we do not re-use SSL connection objects, I do not bother
+- * calling SSL_shutdown a second time.
+- *
+ * In addition, if the underlying socket is blocking, SSL_shutdown()
+ * will not return until the current stage of the shutdown process has
+ * completed or an error has occured. By setting the socket blocking
+@@ -560,6 +552,10 @@
+ flags = bnet_set_blocking(bsock);
+
+ err = SSL_shutdown(bsock->tls->openssl);
++ if (err == 0) {
++ /* Finish up the closing */
++ err = SSL_shutdown(bsock->tls->openssl);
++ }
+
+ switch (SSL_get_error(bsock->tls->openssl, err)) {
+ case SSL_ERROR_NONE:
+@@ -574,8 +570,6 @@
+ break;
+ }
+
+- /* Restore saved flags */
+- bnet_restore_blocking(bsock, flags);
+ }
+
+ /* Does all the manual labor for tls_bsock_readn() and tls_bsock_writen() */
--- /dev/null
+This patch should fix the problem reported in bug #803 where a Verify
+job selects the JobId to be verified at schedule time rather than at runtime.
+This makes it difficult or impossible to schedule a verify just after
+a backup.
+
+Apply this patch to Bacula version 2.0.3 (probably 2.0.2 as well) with:
+
+ cd <bacula-source>
+ patch -p0 <2.0.3-verify.patch
+ make
+ ...
+ make install
+
+Index: src/dird/verify.c
+===================================================================
+--- src/dird/verify.c (revision 4353)
++++ src/dird/verify.c (working copy)
+@@ -1,22 +1,7 @@
+ /*
+- *
+- * Bacula Director -- verify.c -- responsible for running file verification
+- *
+- * Kern Sibbald, October MM
+- *
+- * Basic tasks done here:
+- * Open DB
+- * Open connection with File daemon and pass him commands
+- * to do the verify.
+- * When the File daemon sends the attributes, compare them to
+- * what is in the DB.
+- *
+- * Version $Id$
+- */
+-/*
+ Bacula® - The Network Backup Solution
+
+- Copyright (C) 2000-2006 Free Software Foundation Europe e.V.
++ Copyright (C) 2000-2007 Free Software Foundation Europe e.V.
+
+ The main author of Bacula is Kern Sibbald, with contributions from
+ many others, a complete list can be found in the file AUTHORS.
+@@ -40,6 +25,21 @@
+ (FSFE), Fiduciary Program, Sumatrastrasse 25, 8006 Zürich,
+ Switzerland, email:ftf@fsfeurope.org.
+ */
++/*
++ *
++ * Bacula Director -- verify.c -- responsible for running file verification
++ *
++ * Kern Sibbald, October MM
++ *
++ * Basic tasks done here:
++ * Open DB
++ * Open connection with File daemon and pass him commands
++ * to do the verify.
++ * When the File daemon sends the attributes, compare them to
++ * what is in the DB.
++ *
++ * Version $Id$
++ */
+
+
+ #include "bacula.h"
+@@ -66,6 +66,22 @@
+ */
+ bool do_verify_init(JCR *jcr)
+ {
++ return true;
++}
++
++
++/*
++ * Do a verification of the specified files against the Catalog
++ *
++ * Returns: false on failure
++ * true on success
++ */
++bool do_verify(JCR *jcr)
++{
++ const char *level;
++ BSOCK *fd;
++ int stat;
++ char ed1[100];
+ JOB_DBR jr;
+ JobId_t verify_jobid = 0;
+ const char *Name;
+@@ -74,12 +90,16 @@
+
+ memset(&jcr->previous_jr, 0, sizeof(jcr->previous_jr));
+
+- Dmsg1(9, "bdird: created client %s record\n", jcr->client->hdr.name);
+-
+ /*
+- * Find JobId of last job that ran. E.g.
+- * for VERIFY_CATALOG we want the JobId of the last INIT.
+- * for VERIFY_VOLUME_TO_CATALOG, we want the JobId of the
++ * Find JobId of last job that ran. Note, we do this when
++ * the job actually starts running, not at schedule time,
++ * so that we find the last job that terminated before
++ * this job runs rather than before it is scheduled. This
++ * permits scheduling a Backup and Verify at the same time,
++ * but with the Verify at a lower priority.
++ *
++ * For VERIFY_CATALOG we want the JobId of the last INIT.
++ * For VERIFY_VOLUME_TO_CATALOG, we want the JobId of the
+ * last backup Job.
+ */
+ if (jcr->JobLevel == L_VERIFY_CATALOG ||
+@@ -89,7 +109,7 @@
+ if (jcr->verify_job &&
+ (jcr->JobLevel == L_VERIFY_VOLUME_TO_CATALOG ||
+ jcr->JobLevel == L_VERIFY_DISK_TO_CATALOG)) {
+- Name = jcr->verify_job->hdr.name;
++ Name = jcr->verify_job->name();
+ } else {
+ Name = NULL;
+ }
+@@ -149,23 +169,7 @@
+ jcr->fileset = jcr->verify_job->fileset;
+ }
+ Dmsg2(100, "ClientId=%u JobLevel=%c\n", jcr->previous_jr.ClientId, jcr->JobLevel);
+- return true;
+-}
+
+-
+-/*
+- * Do a verification of the specified files against the Catlaog
+- *
+- * Returns: false on failure
+- * true on success
+- */
+-bool do_verify(JCR *jcr)
+-{
+- const char *level;
+- BSOCK *fd;
+- int stat;
+- char ed1[100];
+-
+ if (!db_update_job_start_record(jcr, jcr->db, &jcr->jr)) {
+ Jmsg(jcr, M_FATAL, 0, "%s", db_strerror(jcr->db));
+ return false;
Projects:
Bacula Projects Roadmap
- Status updated 14 April 2007
+ Status updated 7 July 2007
After re-ordering in vote priority
Items Completed:
+Item: 2 Implement a Bacula GUI/management tool.
Item: 18 Quick release of FD-SD connection after backup.
-Item: 40 Include JobID in spool file name
+Item: 23 Implement from-client and to-client on restore command line.
Item: 25 Implement huge exclude list support using dlist
Item: 41 Enable to relocate files and directories when restoring
+Item: 42 Batch attribute inserts (ten times faster)
+Item: 43 More concurrency in SD using micro-locking
+Item: 44 Performance enhancements (POSIX/Win32 OS file access hints).
+Item: 40 Include JobID in spool file name
Summary:
Item: 1 Accurate restoration of renamed/deleted files
-Item: 2 Implement a Bacula GUI/management tool.
+Item: 2* Implement a Bacula GUI/management tool.
Item: 3 Allow FD to initiate a backup
Item: 4 Merge multiple backups (Synthetic Backup or Consolidation).
Item: 5 Deletion of Disk-Based Bacula Volumes
Item: 20 Archive data
Item: 21 Split documentation
Item: 22 Implement support for stacking arbitrary stream filters, sinks.
-Item: 23 Implement from-client and to-client on restore command line.
+Item: 23* Implement from-client and to-client on restore command line.
Item: 24 Add an override in Schedule for Pools based on backup types.
Item: 25* Implement huge exclude list support using hashing.
Item: 26 Implement more Python events in Bacula.
Item: 39 Message mailing based on backup types
Item: 40* Include JobID in spool file name
Item: 41* Enable to relocate files and directories when restoring
+Item: 42* Batch attribute inserts (ten times faster)
+Item: 43* More concurrency in SD using micro-locking
+Item: 44* Performance enhancements (POSIX/Win32 OS file access hints).
Item 1: Accurate restoration of renamed/deleted files
Date: 28 November 2005
#endif /* __SQL_C */
+/* ==============================================================
+ *
+ * What follows are definitions that are used "globally" for all
+ * the different SQL engines and both inside and external to the
+ * cats directory.
+ */
+
extern uint32_t bacula_db_version;
-/* ***FIXME*** FileId_t should *really* be uint64_t
- * but at the current time, this breaks MySQL.
+/*
+ * These are the sizes of the current definitions of database
+ * Ids. In general, FileId_t can be set to uint64_t and it
+ * *should* work. Users have reported back that it does work
+ * for PostgreSQL. For the other types, all places in Bacula
+ * have been converted, but no one has actually tested it.
+ * In principle, the only field that really should need to be
+ * 64 bits is the FileId_t
*/
typedef uint32_t FileId_t;
typedef uint32_t DBId_t; /* general DB id type */
#define faddr_t long
+/*
+ * Structure used when calling db_get_query_dbids();
+ * it allows the subroutine to return a list of ids.
+ */
+class dbid_list : public SMARTALLOC {
+public:
+ DBId_t *DBId; /* array of DBIds */
+ char *PurgedFiles; /* Array of PurgedFile flags */
+ int num_ids; /* num of ids actually stored */
+ int max_ids; /* size of id array */
+ int num_seen; /* number of ids processed */
+ int tot_ids; /* total to process */
+
+ dbid_list(); /* in sql.c */
+ ~dbid_list(); /* in sql.c */
+};
+
+
+
/* Job information passed to create job record and update
* job record at end of job. Note, although this record
int db_int64_handler(void *ctx, int num_fields, char **row);
void db_thread_cleanup();
-/* create.c */
+/* sql_create.c */
bool db_create_file_attributes_record(JCR *jcr, B_DB *mdb, ATTR_DBR *ar);
bool db_create_job_record(JCR *jcr, B_DB *db, JOB_DBR *jr);
int db_create_media_record(JCR *jcr, B_DB *db, MEDIA_DBR *media_dbr);
bool my_batch_end(JCR *jcr, B_DB *mdb, const char *error);
bool my_batch_insert(JCR *jcr, B_DB *mdb, ATTR_DBR *ar);
-/* delete.c */
+/* sql_delete.c */
int db_delete_pool_record(JCR *jcr, B_DB *db, POOL_DBR *pool_dbr);
int db_delete_media_record(JCR *jcr, B_DB *mdb, MEDIA_DBR *mr);
-/* find.c */
+/* sql_find.c */
bool db_find_job_start_time(JCR *jcr, B_DB *mdb, JOB_DBR *jr, POOLMEM **stime);
bool db_find_last_jobid(JCR *jcr, B_DB *mdb, const char *Name, JOB_DBR *jr);
int db_find_next_volume(JCR *jcr, B_DB *mdb, int index, bool InChanger, MEDIA_DBR *mr);
bool db_find_failed_job_since(JCR *jcr, B_DB *mdb, JOB_DBR *jr, POOLMEM *stime, int &JobLevel);
-/* get.c */
+/* sql_get.c */
bool db_get_pool_record(JCR *jcr, B_DB *db, POOL_DBR *pdbr);
int db_get_client_record(JCR *jcr, B_DB *mdb, CLIENT_DBR *cr);
bool db_get_job_record(JCR *jcr, B_DB *mdb, JOB_DBR *jr);
int db_get_job_volume_parameters(JCR *jcr, B_DB *mdb, JobId_t JobId, VOL_PARAMS **VolParams);
int db_get_client_record(JCR *jcr, B_DB *mdb, CLIENT_DBR *cdbr);
int db_get_counter_record(JCR *jcr, B_DB *mdb, COUNTER_DBR *cr);
+bool db_get_query_dbids(JCR *jcr, B_DB *mdb, POOL_MEM &query, dbid_list &ids);
-/* list.c */
+/* sql_list.c */
enum e_list_type {
HORZ_LIST,
VERT_LIST
int db_list_sql_query(JCR *jcr, B_DB *mdb, const char *query, DB_LIST_HANDLER *sendit, void *ctx, int verbose, e_list_type type);
void db_list_client_records(JCR *jcr, B_DB *mdb, DB_LIST_HANDLER *sendit, void *ctx, e_list_type type);
-/* update.c */
+/* sql_update.c */
bool db_update_job_start_record(JCR *jcr, B_DB *db, JOB_DBR *jr);
int db_update_job_end_record(JCR *jcr, B_DB *db, JOB_DBR *jr);
int db_update_client_record(JCR *jcr, B_DB *mdb, CLIENT_DBR *cr);
void print_dashes(B_DB *mdb);
void print_result(B_DB *mdb);
+dbid_list::dbid_list()
+{
+ memset(this, 0, sizeof(dbid_list));
+ max_ids = 1000;
+ DBId = (DBId_t *)malloc(max_ids * sizeof(DBId_t));
+ num_ids = num_seen = tot_ids = 0;
+ PurgedFiles = NULL;
+}
+
+dbid_list::~dbid_list()
+{
+ free(DBId);
+}
+
+
/*
* Called here to retrieve an integer from the database
*/
}
+/*
+ * This function returns a list of all the DBIds that are returned
+ * for the query.
+ *
+ * Returns false: on failure
+ * true: on success
+ */
+bool db_get_query_dbids(JCR *jcr, B_DB *mdb, POOL_MEM &query, dbid_list &ids)
+{
+ SQL_ROW row;
+ int i = 0;
+ bool ok = false;
+
+ db_lock(mdb);
+ ids.num_ids = 0;
+ if (QUERY_DB(jcr, mdb, query.c_str())) {
+ ids.num_ids = sql_num_rows(mdb);
+ if (ids.num_ids > 0) {
+ if (ids.max_ids < ids.num_ids) {
+ free(ids.DBId);
+ ids.DBId = (DBId_t *)malloc(ids.num_ids * sizeof(DBId_t));
+ }
+ while ((row = sql_fetch_row(mdb)) != NULL) {
+ ids.DBId[i++] = str_to_uint64(row[0]);
+ }
+ }
+ sql_free_result(mdb);
+ ok = true;
+ } else {
+ Mmsg(mdb->errmsg, _("query dbids failed: ERR=%s\n"), sql_strerror(mdb));
+ Jmsg(jcr, M_ERROR, 0, "%s", mdb->errmsg);
+ ok = false;
+ }
+ db_unlock(mdb);
+ return ok;
+}
+
+
+
/* Get Media Record
*
* Returns: false: on failure
win32 win32/compat findlib lib wx-console stored tools \
win32/wx-console win32/console win32/baculafd win32/filed \
win32/dird win32/libwin32 win32/stored win32/stored/baculasd \
- tray-monitor; do
+ tray-monitor qt-console qt-console/clients qt-console/console \
+ qt-console/fileset qt-console/help qt-console/jobgraphs \
+ qt-console/joblist qt-console/joblog qt-console/jobs qt-console/label \
+ qt-console/mediaedit qt-console/medialist qt-console/mount \
+ qt-console/relabel qt-console/restore qt-console/run qt-console/select \
+ qt-console/storage; do
ls -1 $i/*.c $i/*.cpp $i/*.h $i/*.in 2>/dev/null >>1
done
cat 1 | $HOME/bin/lines
}
/*
- * Prune at least on Volume in current Pool. This is called from
- * catreq.c when the Storage daemon is asking for another
+ * Prune at least one Volume in current Pool. This is called from
+ * catreq.c => next_vol.c when the Storage daemon is asking for another
* volume and no appendable volumes are available.
*
* Return: false if nothing pruned
return 0;
}
memset(&del, 0, sizeof(del));
- del.max_ids = 1000;
+ del.max_ids = 10000;
del.JobId = (JobId_t *)malloc(sizeof(JobId_t) * del.max_ids);
ua = new_ua_context(jcr);
}
memset(&del, 0, sizeof(del));
- del.max_ids = 1000;
+ del.max_ids = 10000;
del.JobId = (JobId_t *)malloc(sizeof(JobId_t) * del.max_ids);
db_lock(ua->db);
bsock->set_blocking();
err = SSL_shutdown(bsock->tls->openssl);
- if (err = 0) {
+ if (err == 0) {
/* Complete shutdown */
err = SSL_shutdown(bsock->tls->openssl);
}
*/
#undef VERSION
-#define VERSION "2.1.23"
-#define BDATE "30 June 2007"
-#define LSMDATE "30Jun07"
+#define VERSION "2.1.24"
+#define BDATE "07 July 2007"
+#define LSMDATE "07Jul07"
#define PROG_COPYRIGHT "Copyright (C) %d-2007 Free Software Foundation Europe e.V.\n"
#define BYEAR "2007" /* year for copyright messages in progs */
Technical notes on version 2.1
General:
+07Jul07
+kes Start work on a new, more efficient DBId subroutine. First use
+ will be for recycling volumes to Scratch inchanger.
+kes Increase the number of JobIds in pruning from 1000 to 10000.
+ This is to be replaced by the above routine.
+kes Begin implementation of building Qt4 on Win32.
+kes Correct typo in fix I added for bad TLS shutdown.
+kes Pull 2.0.3 patches into patches directory.
+kes Update Release notes. Include qt-console in line count.
+kes Update Projects file.
30Jun07
kes Integrate patch from Sergey Svishchev <svs@ropnet.ru> that fixes
bug in migration code where a job that spanned two volumes