automatically and you may no longer exercise any of the rights
granted to you by this license as of the date you commence an
action, including a cross-claim or counterclaim, against any
-licensor of GPL software alleging that the software infringes a
-copyright, an intellectual property right, or a patent.
+licensor of GPL software alleging that the software infringes
+an intellectual property right or a patent. A special dispensation
+from, or a delay in, the execution of this clause may be obtained
+by applying directly to the license owner of this software.
+Such a dispensation or delay is valid only if made in writing and
+signed by one or more of the license holders.
Code falling under the above conditions will be marked as follows:
AC_PATH_PROG(MTX, mtx, mtx)
AC_PATH_PROG(PKGCONFIG, pkg-config, pkg-config)
AC_PATH_PROG(WXCONFIG, wx-config, wx-config)
+AC_PATH_PROG(CDRECORD, cdrecord)
test -n "$ARFLAG" || ARFLAGS="cr"
AC_SUBST(ARFLAGS)
# End of readline/conio stuff
# -----------------------------------------------------------------------
+# -------------------------------------------
+# check for cdrecord writer location
+# get scsibus,target,lun
+# -------------------------------------------
+CDSTL="3,0,0"
+if test ! x$CDRECORD = x ; then
+ CDSTL=`${CDRECORD} -scanbus 2>/dev/null | grep CD-RW | ${AWK} '{print $1}'`
+ if test x${CDSTL} = x ; then
+ CDSTL=`${CDRECORD} -scanbus 2>/dev/null | grep CD+RW | ${AWK} '{print $1}'`
+ fi
+ if test x${CDSTL} = x ; then
+ CDSTL="3,0,0"
+ fi
+fi
+AC_SUBST(CDSTL)
+
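For reference, the device lines printed by cdrecord -scanbus begin with the
scsibus,target,lun triple, so the grep/awk pipeline above keeps that triple
from every line whose description mentions CD-RW (or CD+RW). Below is a
minimal standalone sketch of the same logic, assuming cdrecord and awk are on
the PATH; the "head -1" is an addition here (not in the configure code) to
cope with machines that have more than one writer:

  #!/bin/sh
  # Sketch only: report the scsibus,target,lun of the first CD-RW/CD+RW
  # writer found by cdrecord -scanbus, falling back to 3,0,0.
  CDSTL=`cdrecord -scanbus 2>/dev/null | grep CD-RW | awk '{print $1}' | head -1`
  if test "x$CDSTL" = "x" ; then
    CDSTL=`cdrecord -scanbus 2>/dev/null | grep CD+RW | awk '{print $1}' | head -1`
  fi
  test -n "$CDSTL" || CDSTL="3,0,0"
  echo "scsibus,target,lun = $CDSTL"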
# ---------------------------------------------------
# Check for GMP support/directory
# include <unistd.h>
#endif"
-ac_subst_vars='SHELL PATH_SEPARATOR PACKAGE_NAME PACKAGE_TARNAME PACKAGE_VERSION PACKAGE_STRING PACKAGE_BUGREPORT exec_prefix prefix program_transform_name bindir sbindir libexecdir datadir sysconfdir sharedstatedir localstatedir libdir includedir oldincludedir infodir mandir build_alias host_alias target_alias DEFS ECHO_C ECHO_N ECHO_T LIBS BUILD_DIR TRUEPRG FALSEPRG VERSION DATE LSMDATE CC CFLAGS LDFLAGS CPPFLAGS ac_ct_CC EXEEXT OBJEXT CXX CXXFLAGS ac_ct_CXX CPP EGREP INSTALL_PROGRAM INSTALL_SCRIPT INSTALL_DATA RANLIB ac_ct_RANLIB MV RM CP SED AWK ECHO CMP TBL AR OPENSSL MTX PKGCONFIG WXCONFIG ARFLAGS MAKE_SHELL LOCAL_LIBS LOCAL_CFLAGS LOCAL_LDFLAGS LOCAL_DEFS build build_cpu build_vendor build_os host host_cpu host_vendor host_os HAVE_SUN_OS_TRUE HAVE_SUN_OS_FALSE HAVE_OSF1_OS_TRUE HAVE_OSF1_OS_FALSE HAVE_AIX_OS_TRUE HAVE_AIX_OS_FALSE HAVE_HPUX_OS_TRUE HAVE_HPUX_OS_FALSE HAVE_LINUX_OS_TRUE HAVE_LINUX_OS_FALSE HAVE_FREEBSD_OS_TRUE HAVE_FREEBSD_OS_FALSE HAVE_NETBSD_OS_TRUE HAVE_NETBSD_OS_FALSE HAVE_OPENBSD_OS_TRUE HAVE_OPENBSD_OS_FALSE HAVE_BSDI_OS_TRUE HAVE_BSDI_OS_FALSE HAVE_SGI_OS_TRUE HAVE_SGI_OS_FALSE HAVE_IRIX_OS_TRUE HAVE_IRIX_OS_FALSE HAVE_DARWIN_OS_TRUE HAVE_DARWIN_OS_FALSE INSIDE_GNOME_COMMON_TRUE INSIDE_GNOME_COMMON_FALSE MSGFMT GNOME_INCLUDEDIR GNOMEUI_LIBS GNOME_LIBDIR GNOME_LIBS GNOMEGNORBA_LIBS GTKXMHTML_LIBS ZVT_LIBS GNOME_CONFIG ORBIT_CONFIG ORBIT_IDL HAVE_ORBIT_TRUE HAVE_ORBIT_FALSE ORBIT_CFLAGS ORBIT_LIBS HAVE_GNORBA_TRUE HAVE_GNORBA_FALSE GNORBA_CFLAGS GNORBA_LIBS GNOME_APPLETS_LIBS GNOME_DOCKLETS_LIBS GNOME_CAPPLET_LIBS GNOME_DIR WXCONS_CPPFLAGS WXCONS_LDFLAGS WX_DIR TRAY_MONITOR_CPPFLAGS TRAY_MONITOR_LDFLAGS TRAY_MONITOR_DIR TTOOL_LDFLAGS STATIC_FD STATIC_SD STATIC_DIR STATIC_CONS STATIC_GNOME_CONS STATIC_WX_CONS ALL_DIRS CONS_INC CONS_OBJ CONS_SRC CONS_LIBS CONS_LDFLAGS READLINE_SRC working_dir scriptdir dump_email job_email smtp_host piddir subsysdir baseport dir_port fd_port sd_port dir_password fd_password sd_password mon_dir_password mon_fd_password mon_sd_password dir_user dir_group sd_user sd_group fd_user fd_group SBINPERM SQL_LFLAGS SQL_INCLUDE SQL_BINDIR cats DB_NAME GETCONF ac_ct_GETCONF X_CFLAGS X_PRE_LIBS X_LIBS X_EXTRA_LIBS LIBOBJS ALLOCA FDLIBS DEBUG DINCLUDE DLIB DB_LIBS WCFLAGS WLDFLAGS OBJLIST hostname TAPEDRIVE PSCMD WIN32 MACOSX DISTNAME DISTVER LTLIBOBJS'
+ac_subst_vars='SHELL PATH_SEPARATOR PACKAGE_NAME PACKAGE_TARNAME PACKAGE_VERSION PACKAGE_STRING PACKAGE_BUGREPORT exec_prefix prefix program_transform_name bindir sbindir libexecdir datadir sysconfdir sharedstatedir localstatedir libdir includedir oldincludedir infodir mandir build_alias host_alias target_alias DEFS ECHO_C ECHO_N ECHO_T LIBS BUILD_DIR TRUEPRG FALSEPRG VERSION DATE LSMDATE CC CFLAGS LDFLAGS CPPFLAGS ac_ct_CC EXEEXT OBJEXT CXX CXXFLAGS ac_ct_CXX CPP EGREP INSTALL_PROGRAM INSTALL_SCRIPT INSTALL_DATA RANLIB ac_ct_RANLIB MV RM CP SED AWK ECHO CMP TBL AR OPENSSL MTX PKGCONFIG WXCONFIG CDRECORD ARFLAGS MAKE_SHELL LOCAL_LIBS LOCAL_CFLAGS LOCAL_LDFLAGS LOCAL_DEFS build build_cpu build_vendor build_os host host_cpu host_vendor host_os HAVE_SUN_OS_TRUE HAVE_SUN_OS_FALSE HAVE_OSF1_OS_TRUE HAVE_OSF1_OS_FALSE HAVE_AIX_OS_TRUE HAVE_AIX_OS_FALSE HAVE_HPUX_OS_TRUE HAVE_HPUX_OS_FALSE HAVE_LINUX_OS_TRUE HAVE_LINUX_OS_FALSE HAVE_FREEBSD_OS_TRUE HAVE_FREEBSD_OS_FALSE HAVE_NETBSD_OS_TRUE HAVE_NETBSD_OS_FALSE HAVE_OPENBSD_OS_TRUE HAVE_OPENBSD_OS_FALSE HAVE_BSDI_OS_TRUE HAVE_BSDI_OS_FALSE HAVE_SGI_OS_TRUE HAVE_SGI_OS_FALSE HAVE_IRIX_OS_TRUE HAVE_IRIX_OS_FALSE HAVE_DARWIN_OS_TRUE HAVE_DARWIN_OS_FALSE INSIDE_GNOME_COMMON_TRUE INSIDE_GNOME_COMMON_FALSE MSGFMT GNOME_INCLUDEDIR GNOMEUI_LIBS GNOME_LIBDIR GNOME_LIBS GNOMEGNORBA_LIBS GTKXMHTML_LIBS ZVT_LIBS GNOME_CONFIG ORBIT_CONFIG ORBIT_IDL HAVE_ORBIT_TRUE HAVE_ORBIT_FALSE ORBIT_CFLAGS ORBIT_LIBS HAVE_GNORBA_TRUE HAVE_GNORBA_FALSE GNORBA_CFLAGS GNORBA_LIBS GNOME_APPLETS_LIBS GNOME_DOCKLETS_LIBS GNOME_CAPPLET_LIBS GNOME_DIR WXCONS_CPPFLAGS WXCONS_LDFLAGS WX_DIR TRAY_MONITOR_CPPFLAGS TRAY_MONITOR_LDFLAGS TRAY_MONITOR_DIR TTOOL_LDFLAGS STATIC_FD STATIC_SD STATIC_DIR STATIC_CONS STATIC_GNOME_CONS STATIC_WX_CONS ALL_DIRS CONS_INC CONS_OBJ CONS_SRC CONS_LIBS CONS_LDFLAGS READLINE_SRC CDSTL working_dir scriptdir dump_email job_email smtp_host piddir subsysdir baseport dir_port fd_port sd_port dir_password fd_password sd_password mon_dir_password mon_fd_password mon_sd_password dir_user dir_group sd_user sd_group fd_user fd_group SBINPERM SQL_LFLAGS SQL_INCLUDE SQL_BINDIR cats DB_NAME GETCONF ac_ct_GETCONF X_CFLAGS X_PRE_LIBS X_LIBS X_EXTRA_LIBS LIBOBJS ALLOCA FDLIBS DEBUG DINCLUDE DLIB DB_LIBS WCFLAGS WLDFLAGS OBJLIST hostname TAPEDRIVE PSCMD WIN32 MACOSX DISTNAME DISTVER LTLIBOBJS'
ac_subst_files='MCOMMON'
# Initialize some variables set by options.
echo "${ECHO_T}no" >&6
fi
+# Extract the first word of "cdrecord", so it can be a program name with args.
+set dummy cdrecord; ac_word=$2
+echo "$as_me:$LINENO: checking for $ac_word" >&5
+echo $ECHO_N "checking for $ac_word... $ECHO_C" >&6
+if test "${ac_cv_path_CDRECORD+set}" = set; then
+ echo $ECHO_N "(cached) $ECHO_C" >&6
+else
+ case $CDRECORD in
+ [\\/]* | ?:[\\/]*)
+ ac_cv_path_CDRECORD="$CDRECORD" # Let the user override the test with a path.
+ ;;
+ *)
+ as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+ IFS=$as_save_IFS
+ test -z "$as_dir" && as_dir=.
+ for ac_exec_ext in '' $ac_executable_extensions; do
+ if $as_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+ ac_cv_path_CDRECORD="$as_dir/$ac_word$ac_exec_ext"
+ echo "$as_me:$LINENO: found $as_dir/$ac_word$ac_exec_ext" >&5
+ break 2
+ fi
+done
+done
+
+ ;;
+esac
+fi
+CDRECORD=$ac_cv_path_CDRECORD
+
+if test -n "$CDRECORD"; then
+ echo "$as_me:$LINENO: result: $CDRECORD" >&5
+echo "${ECHO_T}$CDRECORD" >&6
+else
+ echo "$as_me:$LINENO: result: no" >&5
+echo "${ECHO_T}no" >&6
+fi
+
test -n "$ARFLAG" || ARFLAGS="cr"
# End of readline/conio stuff
# -----------------------------------------------------------------------
+# -------------------------------------------
+# check for cdrecord writer location
+# get scsibus,target,lun
+# -------------------------------------------
+CDSTL="3,0,0"
+if test ! x$CDRECORD = x ; then
+ CDSTL=`${CDRECORD} -scanbus 2>/dev/null | grep CD-RW | ${AWK} '{print $1}'`
+ if test x${CDSTL} = x ; then
+ CDSTL=`${CDRECORD} -scanbus 2>/dev/null | grep CD+RW | ${AWK} '{print $1}'`
+ fi
+ if test x${CDSTL} = x ; then
+ CDSTL="3,0,0"
+ fi
+fi
+
+
# ---------------------------------------------------
# Check for GMP support/directory
s,@MTX@,$MTX,;t t
s,@PKGCONFIG@,$PKGCONFIG,;t t
s,@WXCONFIG@,$WXCONFIG,;t t
+s,@CDRECORD@,$CDRECORD,;t t
s,@ARFLAGS@,$ARFLAGS,;t t
s,@MAKE_SHELL@,$MAKE_SHELL,;t t
s,@LOCAL_LIBS@,$LOCAL_LIBS,;t t
s,@CONS_LIBS@,$CONS_LIBS,;t t
s,@CONS_LDFLAGS@,$CONS_LDFLAGS,;t t
s,@READLINE_SRC@,$READLINE_SRC,;t t
+s,@CDSTL@,$CDSTL,;t t
s,@working_dir@,$working_dir,;t t
s,@scriptdir@,$scriptdir,;t t
s,@dump_email@,$dump_email,;t t
--- /dev/null
+
+define(`CLIENT',`
+Job {
+ Name = "$1"
+ JobDefs = "$3"
+ Client = "$1"
+ FileSet = "$1"
+ Write Bootstrap = "/var/bacula/working/$1.bsr"
+}
+
+Client {
+ Name = "$1"
+ Address = "$2"
+ FDPort = 9102
+ Catalog = MyCatalog
+ Password = "ilF0PZoICjQ60R3E3dks08Rq36KK8cDGJUAaW" # password for FileDaemon
+ File Retention = 30 days # 30 days
+ Job Retention = 6 months # six months
+ AutoPrune = yes # Prune expired Jobs/Files
+}
+')
+
+
+define(`STORAGE', `
+Storage {
+ Name = "$1"
+ Address = "$3"
+ SDPort = 9103
+ Password = "KLUwcp1ZTeIc0x265UPrpWW28t7d7cRXmhOqyHxRr"
+ Device = "$1" # must be same as Device in Storage daemon
+ Media Type = "$2" # must be same as MediaType in Storage daemon
+}')
+
+
+define(`POOL', `
+Pool {
+ Name = "$1"
+ Pool Type = Backup
+ Recycle = yes # Bacula can automatically recycle Volumes
+ AutoPrune = yes # Prune expired volumes
+ Volume Retention = 365 days # one year
+ Accept Any Volume = yes # write on any volume in the pool
+ Cleaning Prefix = "CLN"
+}')
+
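To make the parameter mapping concrete, the first CLIENT invocation used in
the bacula-dir.conf below, CLIENT(Baccus, baccus.ifm.liu.se, DefaultJob),
expands roughly as follows ($1 = Name, $2 = Address, $3 = JobDefs; the tail
of the Client resource is elided here):

  Job {
    Name = "Baccus"
    JobDefs = "DefaultJob"
    Client = "Baccus"
    FileSet = "Baccus"
    Write Bootstrap = "/var/bacula/working/Baccus.bsr"
  }

  Client {
    Name = "Baccus"
    Address = "baccus.ifm.liu.se"
    FDPort = 9102
    # ... Catalog, Password and retention settings as in the macro body above
  }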
--- /dev/null
+ETCDIR=/opt/bacula/etc
+M4=/usr/ccs/bin/m4
+DIR=/opt/bacula/sbin/bacula-dir
+FD=/opt/bacula/sbin/bacula-fd
+SD=/opt/bacula/sbin/bacula-sd
+BCON=/opt/bacula/sbin/bconsole
+
+all: $(ETCDIR)/bacula-dir.conf $(ETCDIR)/bacula-sd.conf \
+ $(ETCDIR)/bacula-fd.conf $(ETCDIR)/bconsole.conf
+
+$(ETCDIR)/bacula-dir.conf: bacula-dir.conf bacula-defs.m4
+ $(M4) bacula-dir.conf >$(ETCDIR)/bacula-dir.tmp && \
+ $(DIR) -t -c $(ETCDIR)/bacula-dir.tmp && \
+ mv $(ETCDIR)/bacula-dir.tmp $(ETCDIR)/bacula-dir.conf
+
+$(ETCDIR)/bacula-sd.conf: bacula-sd.conf bacula-defs.m4
+ $(M4) bacula-sd.conf >$(ETCDIR)/bacula-sd.tmp && \
+ $(SD) -t -c $(ETCDIR)/bacula-sd.tmp && \
+ mv $(ETCDIR)/bacula-sd.tmp $(ETCDIR)/bacula-sd.conf
+
+$(ETCDIR)/bacula-fd.conf: bacula-fd.conf bacula-defs.m4
+ $(M4) bacula-fd.conf >$(ETCDIR)/bacula-fd.tmp && \
+ $(FD) -t -c $(ETCDIR)/bacula-fd.tmp && \
+ mv $(ETCDIR)/bacula-fd.tmp $(ETCDIR)/bacula-fd.conf
+
+$(ETCDIR)/bconsole.conf: bconsole.conf bacula-defs.m4
+ $(M4) bconsole.conf >$(ETCDIR)/bconsole.tmp && \
+ $(BCON) -t -c $(ETCDIR)/bconsole.tmp && \
+ mv $(ETCDIR)/bconsole.tmp $(ETCDIR)/bconsole.conf
--- /dev/null
+From: Marc Schoechlin <ms@LF.net>
+To: Peter Eriksson <peter@ifm.liu.se>
+Cc: bacula-users@lists.sourceforge.net
+Subject: Re: [Bacula-users] RE: Feature Request : includes for config-files
+
+Hi !
+
+On Fri, May 21, 2004 at 11:24:13AM +0200, Peter Eriksson wrote:
+
+> > I think that is the 99%-solution for this problem -
+> > but I think many users would be happy with a 90%-solution, which
+> > allows configuration data to be stored in distributed files.
+>
+> Or you could do as I just did - generate the configuration
+> files using a Makefile and the m4 macro processor... That way you
+> don't have to reinvent the wheel again inside Bacula but can delegate
+> the tasks to external programs.
+>
+> [See the attached files for details. They can be expanded
+> a lot though, it's just a beginning]
+
+Many thanks for the files!
+
+I have adopted this approach now - and it works with good results :-)
+
+The different client definitions can now be placed in distributed
+locations.
+
+Look at the make target below:
+--
+$(ETCDIR)/bacula-dir.conf: bacula-dir.conf bacula-defs.m4
+ cat bacula-dir.conf > $(ETCDIR)/bacula-dir.conf.tmp && \
+ $(FIND) $(CUSTOMERS) -name "*.cfg" -exec cat {} >> $(ETCDIR)/bacula-dir.conf.tmp \; && \
+ $(M4) $(ETCDIR)/bacula-dir.conf.tmp >$(ETCDIR)/bacula-dir.tmp && \
+ $(DIR) -t -c $(ETCDIR)/bacula-dir.tmp && \
+ mv $(ETCDIR)/bacula-dir.tmp $(ETCDIR)/bacula-dir.conf
+--
+
+
+Best regards
+
+Marc Schoechlin
+
--- /dev/null
+# bacula-dir.conf
+#
+# Default Bacula Director Configuration file
+#
+# WARNING:
+# This file is generated from /opt/lysator/etc/bacula/bacula-dir.conf
+# Edit the source file and then run 'make'.
+#
+
+include(bacula-defs.m4)
+
+Director { # define myself
+ Name = Baccus
+ DIRport = 9101 # where we listen for UA connections
+ QueryFile = "/opt/bacula/etc/query.sql"
+ WorkingDirectory = "/var/bacula/working"
+ PidDirectory = "/var/run"
+ Maximum Concurrent Jobs = 10
+ Password = "djUGGqG0ckdbbTp0J0cAnK6FqZC5YX5i6" # Console password
+ Messages = Standard
+}
+
+
+# Generic catalog service
+Catalog {
+ Name = MyCatalog
+ dbname = bacula; user = bacula; password = ""
+}
+
+
+
+JobDefs {
+ Name = "DefaultJob"
+ Type = Backup
+ Level = Incremental
+ Schedule = "WeeklyCycle"
+ Storage = "DLT-0"
+ Messages = Standard
+ Spool Data = yes
+ Pool = Default
+ Max Start Delay = 20h
+ Priority = 10
+}
+
+
+
+JobDefs {
+ Name = "InservitusJob"
+ Type = Backup
+ Level = Incremental
+ Schedule = "WeeklyCycle"
+ Storage = "DLT-1"
+ Messages = Standard
+ Spool Data = yes
+ Pool = Inservitus
+ Max Start Delay = 20h
+ Priority = 10
+}
+
+JobDefs {
+ Name = "LysdiskJob"
+ Type = Backup
+ Level = Incremental
+ Schedule = "WeeklyCycle"
+ Storage = "DLT-2"
+ Messages = Standard
+ Spool Data = yes
+ Pool = Lysdisk
+ Max Start Delay = 20h
+ Priority = 10
+}
+
+JobDefs {
+ Name = "ShermanJob"
+ Type = Backup
+ Level = Incremental
+ Schedule = "WeeklyCycle"
+ Storage = "DLT-3"
+ Messages = Standard
+ Spool Data = yes
+ Pool = Sherman
+ Max Start Delay = 20h
+ Priority = 10
+}
+
+# Backup the catalog database (after the nightly save)
+Job {
+ Name = "BackupCatalog"
+ Client = Baccus
+ JobDefs = "DefaultJob"
+ Level = Full
+ FileSet="Catalog"
+ Schedule = "WeeklyCycleAfterBackup"
+ # This creates an ASCII copy of the catalog
+ RunBeforeJob = "/opt/bacula/etc/make_catalog_backup -u bacula"
+ # This deletes the copy of the catalog
+ RunAfterJob = "/opt/bacula/etc/delete_catalog_backup"
+ Write Bootstrap = "/var/bacula/working/BackupCatalog.bsr"
+ Priority = 11 # run after main backup
+}
+
+# Standard Restore template, to be changed by Console program
+Job {
+ Name = "Restore"
+ Type = Restore
+ Client = Baccus
+ FileSet="Baccus"
+ Storage = "DLT-0"
+ Pool = Default
+ Messages = Standard
+ Where = /tmp/bacula-restores
+}
+
+
+# Clients to backup --------------------------------------------------
+
+#stalingrad
+#hanna
+#venom
+#klorin
+#britney
+#sherman
+#inservitus
+#tokaimura
+#u137
+
+#elfwood
+#hal
+#sten
+#sirius (skip? the Networker server...)
+
+
+CLIENT(Baccus, baccus.ifm.liu.se, DefaultJob)
+FileSet {
+ Name = "Baccus"
+ Include = signature=MD5 {
+ /
+ /usr
+ /var
+ /opt
+ }
+
+ Exclude = {
+ /proc /tmp /var/tmp /devices /etc/mnttab /dev/fd /var/run
+ /export
+ }
+}
+
+
+
+CLIENT(Stalingrad, stalingrad.lysator.liu.se, DefaultJob)
+FileSet {
+ Name = "Stalingrad"
+ Include = signature=MD5 {
+ /
+ /cvsroot
+ }
+ Exclude = {
+ /proc /tmp /var/tmp /etc/mnttab /dev/fd /var/run /dev/shm
+ }
+}
+
+
+
+CLIENT(Hanna, hanna.lysator.liu.se, DefaultJob)
+FileSet {
+ Name = "Hanna"
+ Include = signature=MD5 {
+ /
+ /var
+ /local
+ /export/hanna
+ }
+ Exclude = {
+ /proc /tmp /var/tmp /devices /etc/mnttab /dev/fd /var/run
+ /export/hanna/mirror
+ /export/hanna/ftp/mirror
+ }
+}
+
+
+CLIENT(Venom, venom.lysator.liu.se, DefaultJob)
+FileSet {
+ Name = "Venom"
+ Include = signature=MD5 {
+ /
+ /clone/dsk1
+ /clone/dsk2
+ /export/dsk1
+ /export/dsk2
+ }
+ Exclude = {
+ /proc /tmp /var/tmp /devices /etc/mnttab /dev/fd /var/run
+ }
+}
+
+
+CLIENT(Klorin, klorin.lysator.liu.se, DefaultJob)
+FileSet {
+ Name = "Klorin"
+ Include = signature=MD5 {
+ /
+ /export/mdsk1
+ }
+ Exclude = {
+ /proc /tmp /var/tmp /devices /etc/mnttab /dev/fd /var/run
+ }
+}
+
+
+CLIENT(Britney, britney.lysator.liu.se, DefaultJob)
+FileSet {
+ Name = "Britney"
+ Include = signature=MD5 {
+ /
+ /export/dsk1
+ /export/oldroot
+ /export/lysdisk1
+ /export/lysdisk3
+ /export/lysdisk4
+ /export/lysdisk6
+ /export/lysdisk7
+ /export/lysdisk8
+ /export/lysdisk9
+ /export/lysdisk11
+ }
+ Exclude = {
+ /proc /tmp /var/tmp /devices /etc/mnttab /dev/fd /var/run
+ }
+}
+
+
+
+CLIENT(Sherman, sherman.lysator.liu.se, DefaultJob)
+FileSet {
+ Name = "Sherman"
+ Include = signature=MD5 {
+ /
+ /web
+ /boot
+ /var/opt/mysql
+ }
+ Exclude = {
+ /proc /tmp /var/tmp /etc/mnttab /dev/fd /var/run
+ }
+}
+
+
+
+CLIENT(U137, u137.lysator.liu.se, DefaultJob)
+FileSet {
+ Name = "U137"
+ Include = signature=MD5 {
+ /
+ /export/dsk1
+ /export/dsk2
+ }
+ Exclude = {
+ /proc /tmp /var/tmp /etc/mnttab /dev/fd /var/run
+ }
+}
+
+
+CLIENT(Tokaimura, tokaimura.lysator.liu.se, DefaultJob)
+FileSet {
+ Name = "Tokaimura"
+ Include = signature=MD5 {
+ /
+ /usr
+ /var
+ /opt
+ /export/mdsk
+ }
+ Exclude = {
+ /proc /tmp /var/tmp /devices /etc/mnttab /dev/fd /var/run
+ }
+}
+
+
+CLIENT(Inservitus, inservitus.lysator.liu.se, InservitusJob)
+FileSet {
+ Name = "Inservitus"
+ Include = signature=MD5 {
+ /
+ /var
+ /export
+ /export/d1
+ /export/d2
+ /export/d3
+ /export/home
+ }
+ Exclude = {
+ /proc /tmp /var/tmp /devices /etc/mnttab /dev/fd /var/run
+ /export/snapshot
+ /snapshot
+ }
+}
+
+
+
+#
+# When to do the backups, full backup on first sunday of the month,
+# differential (i.e. incremental since full) every other sunday,
+# and incremental backups other days
+Schedule {
+ Name = "WeeklyCycle"
+ Run = Full 1st sun at 1:05
+ Run = Differential 2nd-5th sun at 1:05
+ Run = Incremental mon-sat at 1:05
+}
+
+# This schedule does the catalog. It starts after the WeeklyCycle
+Schedule {
+ Name = "WeeklyCycleAfterBackup"
+ Run = Full sun-sat at 1:10
+}
+
+# This is the backup of the catalog
+FileSet {
+ Name = "Catalog"
+ Include = signature=MD5 {
+ /var/bacula/working/bacula.sql
+ }
+}
+
+
+STORAGE(File-0, File, baccus.ifm.liu.se)
+STORAGE(DLT-0, DLT7000, baccus.ifm.liu.se)
+STORAGE(DLT-1, DLT7000, baccus.ifm.liu.se)
+STORAGE(DLT-2, DLT7000, baccus.ifm.liu.se)
+STORAGE(DLT-3, DLT7000, baccus.ifm.liu.se)
+STORAGE(DLT-4, DLT7000, baccus.ifm.liu.se)
+STORAGE(DLT-5, DLT7000, baccus.ifm.liu.se)
+
+
+
+# Reasonable message delivery -- send most everything to email address
+# and to the console
+Messages {
+ Name = Standard
+#
+# NOTE! If you send to two or more email addresses, you will need
+# to replace the %r in the from field (-f part) with a single valid
+# email address in both the mailcommand and the operatorcommand.
+#
+ mailcommand = "/opt/bacula/sbin/bsmtp -h ifm.liu.se -f \"\(Bacula\) bacula@ifm.liu.se\" -s \"Bacula: %t %e of %c %l\" %r"
+ operatorcommand = "/opt/bacula/sbin/bsmtp -h ifm.liu.se -f \"\(Bacula\) bacula@ifm.liu.se\" -s \"Bacula: Intervention needed for %j\" %r"
+ mail = peter@ifm.liu.se,backup-admin@lysator.liu.se = all, !skipped
+ operator = peter@ifm.liu.se,backup-admin@lysator.liu.se = mount
+ console = all, !skipped, !saved
+#
+# WARNING! the following will create a file that you must cycle from
+# time to time as it will grow indefinitely. However, it will
+# also keep all your messages if they scroll off the console.
+#
+ append = "/var/bacula/working/log" = all, !skipped
+}
+
+
+# Define Pools --------------------------------------
+POOL(Default)
+POOL(Inservitus)
+POOL(Sherman)
+POOL(Lysdisk)
--- /dev/null
+# bacula-fd.conf
+#
+# Default Bacula File Daemon Configuration file
+#
+# WARNING:
+# This file is generated from /opt/lysator/etc/bacula/bacula-fd.conf
+# Edit the source file and then run 'make'.
+
+#
+# List Directors who are permitted to contact this File daemon
+#
+Director {
+ Name = Baccus
+ Password = "ilF0PZoICjQ60R3E3dks08Rq36KK8cDGJUAaW"
+}
+
+#
+# "Global" File daemon configuration specifications
+#
+FileDaemon { # this is me
+ Name = Baccus
+ FDport = 9102 # where we listen for the director
+ WorkingDirectory = /var/bacula/working
+ Pid Directory = /var/run
+}
+
+# Send all messages except skipped files back to Director
+Messages {
+ Name = Standard
+ director = Baccus = all, !skipped
+}
--- /dev/null
+# bacula-sd.conf
+#
+# Default Bacula Storage Daemon Configuration file
+#
+# WARNING:
+# This file is generated from /opt/lysator/etc/bacula/bacula-sd.conf
+# Edit the source file and then run 'make'.
+#
+
+Storage { # definition of myself
+ Name = Baccus
+ SDPort = 9103 # Director's port
+ WorkingDirectory = "/var/bacula/working"
+ Pid Directory = "/var/run"
+ Maximum Concurrent Jobs = 20
+}
+
+#
+# List Directors who are permitted to contact Storage daemon
+#
+Director {
+ Name = Baccus
+ Password = "KLUwcp1ZTeIc0x265UPrpWW28t7d7cRXmhOqyHxRr"
+}
+
+#
+# Devices supported by this Storage daemon
+# To connect, the Director's bacula-dir.conf must have the
+# same Name and MediaType.
+#
+
+Device {
+ Name = File-0
+ Media Type = File
+ Archive Device = /var/bacula/storage/file-0
+ LabelMedia = yes; # lets Bacula label unlabeled media
+ Random Access = Yes;
+ AutomaticMount = yes; # when device opened, read it
+ RemovableMedia = no;
+ AlwaysOpen = no;
+}
+
+Device {
+ Name = DLT-0
+ Media Type = DLT7000
+ Archive Device = /dev/rmt/0cbn
+ AutomaticMount = yes; # when device opened, read it
+ AlwaysOpen = yes;
+ RemovableMedia = yes;
+ RandomAccess = no;
+ Autochanger = yes;
+ Changer Device = /dev/scsi/changer/c1t0d0
+ Changer Command = "/opt/bacula/etc/mtx-changer %c %o %S %a %d"
+ Drive Index = 0
+ Maximum Spool Size = 4gb
+ Maximum Job Spool Size = 1gb
+ Spool Directory = /var/bacula/spool/dlt-0
+}
+
+Device {
+ Name = DLT-1
+ Media Type = DLT7000
+ Archive Device = /dev/rmt/1cbn
+ AutomaticMount = yes; # when device opened, read it
+ AlwaysOpen = yes;
+ RemovableMedia = yes;
+ RandomAccess = no;
+ Autochanger = yes;
+ Changer Device = /dev/scsi/changer/c1t0d0
+ Changer Command = "/opt/bacula/etc/mtx-changer %c %o %S %a %d"
+ Drive Index = 1
+ Maximum Spool Size = 2gb
+ Maximum Job Spool Size = 1gb
+ Spool Directory = /var/bacula/spool/dlt-1
+}
+
+Device {
+ Name = DLT-2
+ Media Type = DLT7000
+ Archive Device = /dev/rmt/2cbn
+ AutomaticMount = yes; # when device opened, read it
+ AlwaysOpen = yes;
+ RemovableMedia = yes;
+ RandomAccess = no;
+ Autochanger = yes;
+ Changer Device = /dev/scsi/changer/c1t0d0
+ Changer Command = "/opt/bacula/etc/mtx-changer %c %o %S %a %d"
+ Drive Index = 2
+ Maximum Spool Size = 2gb
+ Maximum Job Spool Size = 1gb
+ Spool Directory = /var/bacula/spool/dlt-2
+}
+
+Device {
+ Name = DLT-3
+ Media Type = DLT7000
+ Archive Device = /dev/rmt/3cbn
+ AutomaticMount = yes; # when device opened, read it
+ AlwaysOpen = yes;
+ RemovableMedia = yes;
+ RandomAccess = no;
+ Autochanger = yes;
+ Changer Device = /dev/scsi/changer/c1t0d0
+ Changer Command = "/opt/bacula/etc/mtx-changer %c %o %S %a %d"
+ Drive Index = 3
+ Maximum Spool Size = 2gb
+ Maximum Job Spool Size = 1gb
+ Spool Directory = /var/bacula/spool/dlt-3
+}
+
+Device {
+ Name = DLT-4
+ Media Type = DLT7000
+ Archive Device = /dev/rmt/4cbn
+ AutomaticMount = yes; # when device opened, read it
+ AlwaysOpen = yes;
+ RemovableMedia = yes;
+ RandomAccess = no;
+ Autochanger = yes;
+ Changer Device = /dev/scsi/changer/c1t0d0
+ Changer Command = "/opt/bacula/etc/mtx-changer %c %o %S %a %d"
+ Drive Index = 4
+ Maximum Spool Size = 2gb
+ Maximum Job Spool Size = 1gb
+ Spool Directory = /var/bacula/spool/dlt-4
+}
+
+Device {
+ Name = DLT-5
+ Media Type = DLT7000
+ Archive Device = /dev/rmt/5cbn
+ AutomaticMount = yes; # when device opened, read it
+ AlwaysOpen = yes;
+ RemovableMedia = yes;
+ RandomAccess = no;
+ Autochanger = yes;
+ Changer Device = /dev/scsi/changer/c1t0d0
+ Changer Command = "/opt/bacula/etc/mtx-changer %c %o %S %a %d"
+ Drive Index = 5
+ Maximum Spool Size = 2gb
+ Maximum Job Spool Size = 1gb
+ Spool Directory = /var/bacula/spool/dlt-5
+}
+
+#
+# Send all messages to the Director,
+# mount messages also are sent to the email address
+#
+Messages {
+ Name = Standard
+ director = Baccus = all
+}
--- /dev/null
+From: Peter Eriksson <peter@ifm.liu.se>
+Reply-To: Peter Eriksson <peter@ifm.liu.se>
+Subject: Re: [Bacula-users] RE: Feature Request : includes for config-files
+To: bacula-users@lists.sourceforge.net
+
+Marc Schoechlin <ms@LF.net> writes:
+
+> I think that is the 99%-solution for this problem -
+> but I think many users would be happy with a 90%-solution, which
+> allows configuration data to be stored in distributed files.
+
+Or you could do as I just did - generate the configuration
+files using a Makefile and the m4 macro processor... That way you
+don't have to reinvent the wheel again inside Bacula but can delegate
+the tasks to external programs.
+
+[See the attached files for details. They can be expanded
+a lot though, it's just a beginning]
+
+--
+Peter Eriksson <peter@ifm.liu.se> Phone: +46 13 28 2786
+Computer Systems Manager/BOFH Cell/GSM: +46 705 18 2786
+Physics Department, Linköping University Room: Building F, F203
+SE-581 83 Linköping, Sweden http://www.ifm.liu.se/~peter
+
+See the files bacula-defs.m4, m4.bacula-dir.conf, m4.bacula-fd.conf, and
+m4.bacula-sd.conf in this directory for the attachments to this
+email.
========================================================
1.35 Items to do for release:
+- Improve error message if old/new FileSet syntax mixed.
- Restore c: with a prefix into /prefix/c/ to prevent c: and d:
  files with the same name from overwriting each other.
- Add new DCR calling sequences everywhere in SD. This will permit
- Doc -p option in stored
- Doc Phil's new delete job jobid scanning code.
- Document that console commands can be abbreviated.
+- Document adding "</dev/null >/dev/null 2>&1" to the bacula-fd command line.
- New IP address specification is used as follows (an illustrative example
  follows this list):
[sdaddresses|diraddresses|fdaddresses] = { [[ip|ipv4|ipv6] = {
[[addr|port] = [^ ]+[\n;]+] }] }
- Store info on each file system type (probably in the job header on tape).
  This could be the output of df, or perhaps some sort of /etc/mtab record.
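To make the address grammar above concrete, one valid instance (a sketch
only; the addresses and ports below are invented, not values from any
configuration in this file) would be a Storage daemon directive such as:

  SDAddresses = {
    ip   = { addr = 192.168.1.10; port = 9103 }
    ipv6 = { addr = ::1; port = 9103 }
  }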
+========= ideas ===============
+From: "Jerry K. Schieffer" <jerry@skylinetechnology.com>
+To: <kern@sibbald.com>
+Subject: RE: [Bacula-users] future large programming jobs
+Date: Thu, 26 Feb 2004 11:34:54 -0600
+
+I noticed the subject thread and thought I would offer the following
+merely as sources of ideas, i.e. something to think about, not even as
+strong as a request. In my former life (before retiring) I often
+dealt with backups and storage management issues/products as a
+developer and as a consultant. I am currently migrating my personal
+network from amanda to bacula specifically because of the ability to
+cross media boundaries when storing backups.
+Are you familiar with the commercial product called ADSM (I think IBM
+now sells it under the Tivoli label)? It has a couple of interesting
+ideas that may apply to the following topics.
+
+1. Migration: Consider that when you need to restore a system, there
+may be pressure to hurry. If all the information for a single client
+can eventually end up on the same media (and in chronological order),
+the restore is facillitated by not having to search past information
+from other clients. ADSM has the concept of "client affinity" that
+may be associated with it's storage pools. It seems to me that this
+concept (as an optional feature) might fit in your architecture for
+migration.
+
+ADSM also has the concept of defining one or more storage pools as
+"copy pools" (almost mirrors, but only in the sense of contents).
+These pools provide the ability to have duplicate data stored both
+onsite and offsite. The copy process can be scheduled to be handled
+by their storage manager during periods when there is no backup
+activity. Again, the migration process might be a place to consider
+implementing something like this.
+
+>
+> It strikes me that it would be very nice to be able to do things like
+> have the Job(s) backing up the machines run, and once they have all
+> completed, start a migration job to copy the data from disk Volumes to
+> a tape library and then to offsite storage. Maybe this can already be
+> done with some careful scheduling and Job prioritization; the events
+> mechanism described below would probably make it very easy.
+
+This is the goal. In the first step (before events), you simply schedule
+the Migration to tape later.
+
+2. Base jobs: In ADSM, each copy of each stored file is tracked in
+the database. Once a file (unique by path and metadata such as dates,
+size, ownership, etc.) is in a copy pool, no more copies are made. In
+other words, when you start ADSM, it begins like your concept of a
+base job. After that it is in the "incremental" mode. You can
+configure the number of "generations" of files to be retained, plus a
+retention date after which even old generations are purged. The
+database tracks the contents of media and projects the percentage of
+each volume that is valid. When the valid content of a volume drops
+below a configured percentage, the valid data are migrated to another
+volume and the old volume is marked as empty. Note, this requires
+ADSM to have an idea of the contents of a client, i.e. marking the
+database when an existing file was deleted, but this would solve your
+issue of restoring a client without restoring deleted files.
+
+This is pretty far from what bacula now does, but if you are going to
+rip things up for Base jobs,.....
+Also, the benefits of this are huge for very large shops, especially
+with media robots, but are a pain for shops with manual media
+mounting.
+
+>
+> Base jobs sound pretty useful, but I'm not dying for them.
+
+Nobody is dying for them, but when you see what it does, you will die
+without it.
+
+3. Restoring deleted files: Since I think my comments in (2) above
+have low probability of implementation, I'll also suggest that you
+could approach the issue of deleted files by a mechanism of having the
+fd report to the dir, a list of all files on the client for every
+backup job. The dir could note in the database entry for each file
+the date that the file was seen. Then if a restore as of date X takes
+place, only files that exist from before X until after X would be
+restored. Probably the major cost here is the extra date container in
+each row of the files table.
+
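As an illustration of the date test that idea implies, a restore-as-of-X
selection could look roughly like the SQL sketch below; FirstSeen and
LastSeen are the hypothetical extra date columns the mail proposes, not
columns in the current catalog schema, and X stands for the restore date:

  -- Illustrative only: FirstSeen/LastSeen are the proposed per-file dates.
  SELECT PathId, FilenameId
    FROM File
   WHERE FirstSeen <= X    -- file existed on or before the restore date
     AND LastSeen  >= X;   -- and was still reported by the client after it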
+Thanks for "listening". I hope some of this helps. If you want to
+contact me, please send me an email - I read some but not all of the
+mailing list traffic and might miss a reply there.
+
+Please accept my compliments for bacula. It is doing a great job for
+me!! I sympathize with you in the need to wrestle with excellence in
+execution vs. excellence in feature inclusion.
+
+Regards,
+Jerry Schieffer
+
+==============================
+
Longer term to do:
- Design hierarchical storage for Bacula. Migration and Clone.
- Implement FSM (File System Modules).
/* */
#undef VERSION
#define VERSION "1.35.3"
-#define BDATE "05 September 2004"
-#define LSMDATE "05Sep04"
+#define BDATE "06 September 2004"
+#define LSMDATE "06Sep04"
/* Debug flags */
#undef DEBUG