\usepackage[utf8]{inputenc}
\usepackage[toc,title,header,page]{appendix}
\usepackage[T1]{fontenc}
-\usepackage{longtable,graphicx,fancyhdr,lastpage,eurosym,dcolumn,ltxtable,textcomp,varioref,lscape,pdfpages,ifthen,setspace,colortbl,diagbox}
+\usepackage{longtable,graphicx,fancyhdr,lastpage,dcolumn,ltxtable,textcomp,varioref,lscape,pdfpages,ifthen,setspace,colortbl,diagbox}
\usepackage{lmodern}
\usepackage{MnSymbol}
\usepackage{bbding,multirow}
\section{General}
\index[general]{General }
-\addcontentsline{toc}{subsection}{General}
This chapter is intended to be a technical discussion of the Catalog services
and as such is not targeted at end users but rather at developers and system
\subsection{Filenames and Maximum Filename Length}
\index[general]{Filenames and Maximum Filename Length }
\index[general]{Length!Filenames and Maximum Filename }
-\addcontentsline{toc}{subsubsection}{Filenames and Maximum Filename Length}
In general, MySQL, PostgreSQL, and SQLite all permit storing arbitrarily long
path names and file names in the catalog database. In practice, there still
\subsection{Installing and Configuring MySQL}
\index[general]{MySQL!Installing and Configuring }
\index[general]{Installing and Configuring MySQL }
-%\addcontentsline{toc}{subsubsection}{Installing and Configuring MySQL}
For the details of installing and configuring MySQL, please see the
\bsysxrlink{Installing and Configuring MySQL}{MySqlChapter}{main}{chapter} of
\subsection{Installing and Configuring PostgreSQL}
\index[general]{PostgreSQL!Installing and Configuring }
\index[general]{Installing and Configuring PostgreSQL }
-%\addcontentsline{toc}{subsubsection}{Installing and Configuring PostgreSQL}
For the details of installing and configuring PostgreSQL, please see the
\bsysxrlink{Installing and Configuring PostgreSQL}{PostgreSqlChapter}{main}{chapter}
\subsection{Installing and Configuring SQLite}
\index[general]{Installing and Configuring SQLite }
\index[general]{SQLite!Installing and Configuring }
-%\addcontentsline{toc}{subsubsection}{Installing and Configuring SQLite}
For the details of installing and configuring SQLite, please see the
\bsysxrlink{Installing and Configuring SQLite}{SqlLiteChapter}{main}{chapter} of
\subsection{Internal Bacula Catalog}
\index[general]{Catalog!Internal Bacula }
\index[general]{Internal Bacula Catalog }
-%\addcontentsline{toc}{subsubsection}{Internal Bacula Catalog}
Please see the \bsysxrlink{Internal Bacula Database}
{chap:InternalBaculaDatabase}{misc}{chapter} of the \miscman{} for more details.
\subsection{Database Table Design}
\index[general]{Design!Database Table }
\index[general]{Database Table Design }
-%\addcontentsline{toc}{subsubsection}{Database Table Design}
All discussions that follow pertain to the MySQL database. The details for the
PostgreSQL and SQLite databases are essentially identical except that all
\section{Sequence of Creation of Records for a Save Job}
\index[general]{Sequence of Creation of Records for a Save Job }
\index[general]{Job!Sequence of Creation of Records for a Save }
-\addcontentsline{toc}{subsection}{Sequence of Creation of Records for a Save
-Job}
Start with StartDate, ClientName, Filename, Path, Attributes, MediaName,
MediaCoordinates. (PartNumber, NumParts). In the steps below, ``Create new''
\section{Database Tables}
\index[general]{Database Tables }
\index[general]{Tables!Database }
-%\addcontentsline{toc}{subsection}{Database Tables}
-%\addcontentsline{lot}{table}{Filename Table Layout}
\LTXtable{\linewidth}{table_dbfilename}
The {\bf Filename} table \bsysref{table:dbfilename} contains the name of each file backed up
with the path removed. If different directories or machines contain the same
filename, only one copy will be saved in this table.
-%\addcontentsline{lot}{table}{Path Table Layout}
\LTXtable{\linewidth}{table_dbpath}
The {\bf Path} table \bsysref{table:dbpath} contains the path or directory names of all
creates a 7.35 MB database.
-%\addcontentsline{lot}{table}
\LTXtable{\linewidth}{table_dbfile}
The {\bf File} table \bsysref{table:dbfile} contains one entry for each file backed up by
periodically reduce the number of File records using the {\bf retention}
command in the Console program.
-%\addcontentsline{lot}{table}{Job Table Layout}
\LTXtable{\linewidth}{table_dbjob}
The {\bf Job} table \bsysref{table:dbjob} contains one record for each Job run by Bacula. Thus
The Job Type (or simply Type) can have one of the following values:
-%\addcontentsline{lot}{table}{Job Types}
\LTXtable{\linewidth}{table_dbjobtypes}
Note: the Job Type values in table \bsysref{table:dbjobtypes} are not kept in an SQL table.
\LTXtable{\linewidth}{table_dbjobstatuses}
-%\addcontentsline{lot}{table}{File Sets Table Layout}
\LTXtable{\linewidth}{table_dbfileset}
The {\bf FileSet} table \bsysref{table:dbfileset} contains one entry for each FileSet that is used. The
next incremental.
-%\addcontentsline{lot}{table}
\LTXtable{\linewidth}{table_dbjobmedia}
The {\bf JobMedia} table \bsysref{table:dbjobmedia} contains one entry at the following: start of
-%\addcontentsline{lot}{table}{Media Table Layout}
\LTXtable{\linewidth}{table_dbmedia}
The {\bf Volume} table\footnote{Internally referred to as the Media table} \bsysref{table:dbmedia} contains
or file on which information is or was backed up. There is one Volume record
created for each of the NumVols specified in the Pool resource record.
-%\addcontentsline{lot}{table}{Pool Table Layout}
\LTXtable{\linewidth}{table_dbpool}
The {\bf Pool} table \bsysref{table:dbpool} contains one entry for each media pool controlled by
number of the Media record for the current volume.
-%\addcontentsline{lot}{table}{Client Table Layout}
\LTXtable{\linewidth}{table_dbclient}
The {\bf Client} table \bsysref{table:dbclient} contains one entry for each machine backed up by Bacula
in this database. Normally the Name is a fully qualified domain name.
-%\addcontentsline{lot}{table}{Storage Table Layout}
\LTXtable{\linewidth}{table_dbstorage}
The {\bf Storage} table \bsysref{table:dbstorage} contains one entry for each Storage used.
-%\addcontentsline{lot}{table}{Counter Table Layout}
\LTXtable{\linewidth}{table_dbcounter}
The {\bf Counter} table \bsysref{table:dbcounter} contains one entry for each permanent counter defined
by the user.
-%\addcontentsline{lot}{table}{Job History Table Layout}
\LTXtable{\linewidth}{table_dbjobhistory}
The {\bf JobHisto} table \bsysref{table:dbjobhistory} is the same as the Job table, but it keeps
long term statistics (i.e. it is not pruned with the Job).
-%\addcontentsline{lot}{table}{Log Table Layout}
\LTXtable{\linewidth}{table_dblog}
The {\bf Log} table \bsysref{table:dblog} contains a log of all Job output.
-%\addcontentsline{lot}{table}{Location Table Layout}
\LTXtable{\linewidth}{table_dblocation}
The {\bf Location} table \bsysref{table:dblocation} defines where a Volume is physically located.
-%\addcontentsline{lot}{table}{Location Log Table Layout}
\LTXtable{\linewidth}{table_dblocationlog}
The {\bf Location Log} table \bsysref{table:dblocationlog} contains a log of the changes made to a Volume's location.
-%\addcontentsline{lot}{table}{Version Table Layout}
\LTXtable{\linewidth}{table_dbversion}
The {\bf Version} table \bsysref{table:dbversion} defines the Bacula database version number. Bacula
with the Bacula binary file.
-%\addcontentsline{lot}{table}{Base Files Table Layout}
\LTXtable{\linewidth}{table_dbbasefiles}
The {\bf BaseFiles} table \bsysref{table:dbbasefiles} contains all the File references for a particular
\subsection{MySQL Table Definition}
\index[general]{MySQL Table Definition }
\index[general]{Definition!MySQL Table }
-\addcontentsline{toc}{subsubsection}{MySQL Table Definition}
The commands used to create the MySQL tables are as follows:
\section{General}
\index{General }
-\addcontentsline{toc}{subsection}{General}
This document describes the protocols used between the various daemons. As
Bacula has developed, this document has become quite out of date. The general idea still
holds true, but the details of the fields for each command, and indeed the
-commands themselves have changed considerably.
+commands themselves have changed considerably.
It is intended to be a technical discussion of the general daemon protocols
and as such is not targeted at end users but rather at developers and system
administrators that want or need to know more of the working details of {\bf
-Bacula}.
+Bacula}.
\section{Low Level Network Protocol}
\index{Protocol!Low Level Network }
\index{Low Level Network Protocol }
-\addcontentsline{toc}{subsection}{Low Level Network Protocol}
At the lowest level, the network protocol is handled by {\bf BSOCK} packets
which contain a lot of information about the status of the network connection:
time. It is advised that multiple threads do not read/write the same socket.
If you must do this, you must provide some sort of locking mechanism. It would
not be appropriate for efficiency reasons to make every call to the BSOCK
-routines lock and unlock the packet.
+routines lock and unlock the packet.
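
If a socket really must be shared between threads, the kind of locking meant is
sketched below: one mutex per connection, held for the duration of a complete
packet write. This is a minimal illustration only (not the actual BSOCK code),
and the 4-byte length prefix is an assumption made for the example.
\footnotesize
\begin{lstlisting}
/* Illustrative sketch only -- not the actual BSOCK implementation.
 * Assumes a 4-byte signed length prefix in network byte order.    */
#include <arpa/inet.h>
#include <pthread.h>
#include <stdint.h>
#include <unistd.h>

struct example_sock {             /* hypothetical stand-in for BSOCK */
   int fd;                        /* connected TCP socket            */
   pthread_mutex_t mutex;         /* one lock per connection         */
};

/* Send one packet as <int32 length><payload>.  Holding the lock for
 * the whole packet keeps two threads from interleaving their data.  */
int example_send(struct example_sock *s, const char *buf, int32_t len)
{
   int32_t pktlen = htonl(len);
   int rc = 0;

   pthread_mutex_lock(&s->mutex);
   if (write(s->fd, &pktlen, sizeof(pktlen)) != (ssize_t)sizeof(pktlen) ||
       write(s->fd, buf, len) != (ssize_t)len) {
      rc = -1;                     /* short write or error            */
   }
   pthread_mutex_unlock(&s->mutex);
   return rc;
}
\end{lstlisting}
\normalsize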
\section{General Daemon Protocol}
\index{General Daemon Protocol }
\index{Protocol!General Daemon }
-\addcontentsline{toc}{subsection}{General Daemon Protocol}
In general, all the daemons follow the following global rules. There may be
exceptions depending on the specific case. Normally, one daemon will be
sending commands to another daemon (specifically, the Director to the Storage
-daemon and the Director to the File daemon).
+daemon and the Director to the File daemon).
\begin{bsysitemize}
\item Commands are always ASCII commands that are upper/lower case dependent
- as well as space sensitive.
+ as well as space sensitive.
\item All binary data is converted into ASCII (either with printf statements
- or using base64 encoding).
+ or using base64 encoding).
\item All responses to commands sent are always prefixed with a return
numeric code where codes in the 1000's are reserved for the Director, the
2000's are reserved for the File daemon, and the 3000's are reserved for the
-Storage daemon.
+Storage daemon.
\item Any response that is not prefixed with a numeric code is a command (or
subcommand if you like) coming from the other end. For example, while the
Director is corresponding with the Storage daemon, the Storage daemon can
request Catalog services from the Director. This convention permits each side
to send commands to the other daemon while simultaneously responding to
-commands.
+commands.
\item Any response that is of zero length, depending on the context, either
terminates the data stream being sent or terminates command mode prior to
- closing the connection.
+ closing the connection.
\item Any response that is of negative length is a special sign that normally
requires a response. For example, during data transfer from the File daemon
to the Storage daemon, normally the File daemon sends continuously without
Storage daemon should respond to the packet with an OK, ABORT JOB, PAUSE,
etc. This permits the File daemon to efficiently send data while at the same
time occasionally ``polling'' the Storage daemon for his status or any
-special requests.
+special requests.
Currently, these negative lengths are specific to the daemon, but shortly,
the range 0 to -999 will be standard daemon wide signals, while -1000 to
-1999 will be for the Director, -2000 to -2999 for the File daemon, and
--3000 to -3999 for the Storage daemon.
+-3000 to -3999 for the Storage daemon (a small classification sketch in C follows this list).
\end{bsysitemize}
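
Purely as a reading aid, the reply-code prefixes and signal ranges described in
the list above can be summarized in a few lines of C; the function names and
strings below are invented for this sketch and are not Bacula identifiers.
\footnotesize
\begin{lstlisting}
#include <stdio.h>

/* Positive reply codes: 1000s Director, 2000s File daemon, 3000s Storage daemon. */
static const char *classify_reply(int code)
{
   if (code >= 1000 && code <= 1999) return "Director reply";
   if (code >= 2000 && code <= 2999) return "File daemon reply";
   if (code >= 3000 && code <= 3999) return "Storage daemon reply";
   return "no numeric prefix (a command from the other end)";
}

/* Negative packet lengths are signals; the planned ranges are:
 * 0 to -999 daemon-wide, -1000s Director, -2000s File daemon,
 * -3000s Storage daemon.                                          */
static const char *classify_signal(int len)
{
   if (len <= 0     && len >= -999)  return "daemon-wide signal";
   if (len <= -1000 && len >= -1999) return "Director signal";
   if (len <= -2000 && len >= -2999) return "File daemon signal";
   if (len <= -3000 && len >= -3999) return "Storage daemon signal";
   return "data packet";
}

int main(void)
{
   printf("3000 -> %s\n", classify_reply(3000));
   printf("  -1 -> %s\n", classify_signal(-1));
   return 0;
}
\end{lstlisting}
\normalsize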
\section{The Protocol Used Between the Director and the Storage Daemon}
\index{Daemon!Protocol Used Between the Director and the Storage }
\index{Protocol Used Between the Director and the Storage Daemon }
-\addcontentsline{toc}{subsection}{Protocol Used Between the Director and the
-Storage Daemon}
Before sending commands to the File daemon, the Director opens a Message
channel with the Storage daemon, identifies itself and presents its password.
The Storage daemon will then pass back to the Director an enabling key for this
JobId that must be presented by the File daemon when opening the job. Until
this process is complete, the Storage daemon is not available for use by File
-daemons.
+daemons.
\footnotesize
\begin{lstlisting}
For the Director to be authorized, the \lt{}Director-name\gt{} and the
\lt{}password\gt{} must match the values in one of the Storage daemon's
Director resources (there may be several Directors that can access a single
-Storage daemon).
+Storage daemon).
\section{The Protocol Used Between the Director and the File Daemon}
\index{Daemon!Protocol Used Between the Director and the File }
\index{Protocol Used Between the Director and the File Daemon }
-\addcontentsline{toc}{subsection}{Protocol Used Between the Director and the
-File Daemon}
-A typical conversation might look like the following:
+A typical conversation might look like the following:
\footnotesize
\begin{lstlisting}
\section{The Save Protocol Between the File Daemon and the Storage Daemon}
\index{Save Protocol Between the File Daemon and the Storage Daemon }
\index{Daemon!Save Protocol Between the File Daemon and the Storage }
-\addcontentsline{toc}{subsection}{Save Protocol Between the File Daemon and
-the Storage Daemon}
Once the Director has sent a {\bf save} command to the File daemon, the File
-daemon will contact the Storage daemon to begin the save.
+daemon will contact the Storage daemon to begin the save.
In what follows: FD: refers to information sent via the network from the File
daemon to the Storage daemon, and SD: refers to information sent from the
-Storage daemon to the File daemon.
+Storage daemon to the File daemon.
\subsection{Command and Control Information}
\index{Information!Command and Control }
\index{Command and Control Information }
-\addcontentsline{toc}{subsubsection}{Command and Control Information}
Command and control information is exchanged in human readable ASCII commands.
\subsection{Data Information}
\index{Information!Data }
\index{Data Information }
-\addcontentsline{toc}{subsubsection}{Data Information}
The Data information consists of the file attributes and data to the Storage
daemon. For the most part, the data information is sent one way: from the File
daemon to the Storage daemon. This allows the File daemon to transfer
information as fast as possible without a lot of handshaking and network
-overhead.
+overhead.
However, from time to time, the File daemon needs to do a sort of checkpoint
of the situation to ensure that everything is going well with the Storage
daemon. To do so, the File daemon sends a packet with a negative length
indicating that he wishes the Storage daemon to respond by sending a packet of
information to the File daemon. The File daemon then waits to receive a packet
-from the Storage daemon before continuing.
+from the Storage daemon before continuing.
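
A compact sketch of that checkpoint follows, using invented helper names and
the same assumed 4-byte length prefix as the earlier sketch; the real wire
format and signal values live in the BSOCK code.
\footnotesize
\begin{lstlisting}
/* Sketch of the File daemon "poll" checkpoint -- invented helpers,
 * assumed framing; not the real Bacula code.                      */
#include <arpa/inet.h>
#include <stdint.h>
#include <unistd.h>

/* A signal is a bare negative length with no payload.             */
static int send_signal(int fd, int32_t sig)
{
   int32_t netlen = htonl((uint32_t)sig);
   return write(fd, &netlen, sizeof(netlen)) == (ssize_t)sizeof(netlen) ? 0 : -1;
}

/* Read one length-prefixed reply (for example "3000 OK").         */
static int recv_reply(int fd, char *buf, int32_t bufsize)
{
   int32_t netlen, len;
   if (read(fd, &netlen, sizeof(netlen)) != (ssize_t)sizeof(netlen)) return -1;
   len = (int32_t)ntohl(netlen);
   if (len < 0 || len >= bufsize) return -1;   /* signal, or too large */
   if (read(fd, buf, len) != (ssize_t)len) return -1;
   buf[len] = '\0';
   return len;
}

/* During a save the File daemon would occasionally do:
 *    send_signal(sd_fd, POLL_SIGNAL);             -- ask for status
 *    recv_reply(sd_fd, answer, sizeof(answer));   -- wait for the SD
 * and only then resume streaming data packets.                    */
\end{lstlisting}
\normalsize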
All data sent are in binary format except for the header packet, which is in
ASCII. There are two packet types used in data transfer mode: a header packet,
the contents of which are known to the Storage daemon, and a data packet, the
-contents of which are never examined by the Storage daemon.
+contents of which are never examined by the Storage daemon.
The first data packet to the Storage daemon will be an ASCII header packet
-consisting of the following data.
+consisting of the following data.
\lt{}File-Index\gt{} \lt{}Stream-Id\gt{} \lt{}Info\gt{} where {\bf
\lt{}File-Index\gt{}} is a sequential number beginning from one that
-increments with each file (or directory) sent.
+increments with each file (or directory) sent.
where {\bf \lt{}Stream-Id\gt{}} will be 1 for the Attributes record and 2 for
-uncompressed File data. 3 is reserved for the MD5 signature for the file.
+uncompressed File data. 3 is reserved for the MD5 signature for the file.
where {\bf \lt{}Info\gt{}} transmits information about the Stream to the
Storage Daemon. It is a character string field where each character has a
meaning. The only character currently defined is 0 (zero), which is simply a
place holder (a no op). In the future, there may be codes indicating
-compressed data, encrypted data, etc.
+compressed data, encrypted data, etc.
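
The following sketch builds and parses such a header line. The macro names and
the exact single-space layout are assumptions made for the example; only the
three fields themselves come from the description above.
\footnotesize
\begin{lstlisting}
/* Build/parse the ASCII header "<File-Index> <Stream-Id> <Info>".
 * Macro names and exact spacing are invented for this sketch.     */
#include <stdio.h>

#define EX_STREAM_ATTRIBUTES 1    /* 1 = attributes record          */
#define EX_STREAM_FILE_DATA  2    /* 2 = uncompressed file data     */
#define EX_STREAM_MD5        3    /* 3 = MD5 signature              */

static int build_header(char *buf, size_t bufsize,
                        int file_index, int stream_id, const char *info)
{
   /* info is a string of flag characters; "0" is the defined no-op */
   return snprintf(buf, bufsize, "%d %d %s", file_index, stream_id, info);
}

static int parse_header(const char *buf, int *file_index,
                        int *stream_id, char info[16])
{
   return sscanf(buf, "%d %d %15s", file_index, stream_id, info) == 3 ? 0 : -1;
}

int main(void)
{
   char hdr[64], info[16];
   int fi, sid;

   build_header(hdr, sizeof(hdr), 1, EX_STREAM_FILE_DATA, "0");
   if (parse_header(hdr, &fi, &sid, info) == 0) {
      printf("File-Index=%d Stream-Id=%d Info=%s\n", fi, sid, info);
   }
   return 0;
}
\end{lstlisting}
\normalsize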
Immediately following the header packet, the Storage daemon will expect any
number of data packets. The series of data packets is terminated by a zero
be another header packet. As previously mentioned, a negative length packet is
a request for the Storage daemon to temporarily enter command mode and send a
reply to the File daemon. Thus an actual conversation might contain the
-following exchanges:
+following exchanges:
\footnotesize
\begin{lstlisting}
\normalsize
The information returned to the File daemon by the Storage daemon in response
-to the {\bf append close session} is transmit in turn to the Director.
+to the {\bf append close session} is transmitted in turn to the Director.
\chapter{Director Services Daemon}
\label{_ChapterStart6}
-\index{Daemon!Director Services }
-\index{Director Services Daemon }
-\addcontentsline{toc}{section}{Director Services Daemon}
+\index{Daemon!Director Services}
+\index{Director Services Daemon}
This chapter is intended to be a technical discussion of the Director services
and as such is not targeted at end users but rather at developers and system
administrators that want or need to know more of the working details of {\bf
-Bacula}.
+Bacula}.
The {\bf Bacula Director} services consist of the program that supervises all
-the backup and restore operations.
+the backup and restore operations.
-To be written ...
+To be written \ldots{}
\label{_ChapterStart11}
\index{File Services Daemon }
\index{Daemon!File Services }
-\addcontentsline{toc}{section}{File Services Daemon}
Please note, this section is somewhat out of date as the code has evolved
-significantly. The basic idea has not changed though.
+significantly. The basic idea has not changed though.
This chapter is intended to be a technical discussion of the File daemon
services and as such is not targeted at end users but rather at developers and
system administrators that want or need to know more of the working details of
-{\bf Bacula}.
+{\bf Bacula}.
The {\bf Bacula File Services} consist of the programs that run on the system
to be backed up and provide the interface between the Host File system and
-Bacula -- in particular, the Director and the Storage services.
+Bacula -- in particular, the Director and the Storage services.
When time comes for a backup, the Director gets in touch with the File daemon
on the client machine and hands it a set of ``marching orders'' which, if
-written in English, might be something like the following:
+written in English, might be something like the following:
OK, {\bf File daemon}, it's time for your daily incremental backup. I want you
to get in touch with the Storage daemon on host archive.mysite.com and perform
backing up the file system. As this is an incremental backup, you should save
only files modified since the time you started your last backup which, as you
may recall, was 2000-11-19-06:43:38. Please let me know when you're done and
-how it went. Thank you.
+how it went. Thank you.
So, having been handed everything it needs to decide what to dump and where to
store it, the File daemon doesn't need to have any further contact with the
are errors, the error messages will be delivered immediately to the Director.
While the backup is proceeding, the File daemon will send the file coordinates
and data for each file being backed up to the Storage daemon, which will in
-turn pass the file coordinates to the Director to put in the catalog.
+turn pass the file coordinates to the Director to put in the catalog.
During a {\bf Verify} of the catalog, the situation is different, since the
File daemon will have an exchange with the Director for each file, and will
-not contact the Storage daemon.
+not contact the Storage daemon.
A {\bf Restore} operation will be very similar to the {\bf Backup} except that
during the {\bf Restore} the Storage daemon will not send storage coordinates
to the Director since the Director presumably already has them. On the other
hand, any error messages from either the Storage daemon or File daemon will
normally be sent directly to the Director (this, of course, depends on how
-the Message resource is defined).
+the Message resource is defined).
\section{Commands Received from the Director for a Backup}
\index{Backup!Commands Received from the Director for a }
\index{Commands Received from the Director for a Backup }
-\addcontentsline{toc}{subsection}{Commands Received from the Director for a
-Backup}
-To be written ...
+To be written \ldots{}
\section{Commands Received from the Director for a Restore}
\index{Commands Received from the Director for a Restore }
\index{Restore!Commands Received from the Director for a }
-\addcontentsline{toc}{subsection}{Commands Received from the Director for a
-Restore}
-To be written ...
+To be written \ldots{}
\label{_ChapterStart10}
\index{Bacula Developer Notes}
\index{Notes!Bacula Developer}
-\addcontentsline{toc}{section}{Bacula Developer Notes}
This document is intended mostly for developers and describes how you can
contribute to the Bacula project and the general framework for making
Bacula source changes.
-\subsection{Contributions}
+\section{Contributions}
\index{Contributions}
-\addcontentsline{toc}{subsubsection}{Contributions}
Contributions to the Bacula project come in many forms: ideas,
participation in helping people on the bacula-users email list, packaging
is getting your patch accepted, and two is dealing with copyright issues.
The following text describes some of the requirements for such code.
-\subsection{Patches}
+\section{Patches}
\index{Patches}
-\addcontentsline{toc}{subsubsection}{Patches}
Subject to the copyright assignment described below, your patches should be
-sent in {\bf git format-patch} format relative to the current contents of the
+sent in {\bf git format-patch} format relative to the current contents of the
master branch of the Source Forge Git repository. Please attach the
output file or files generated by the {\bf git format-patch} to the email
rather than include them directly, to avoid wrapping of the lines
directly to the Git repository. To do so, you will need a userid on Source
Forge.
-\subsection{Copyrights}
+\section{Copyrights}
\index{Copyrights}
-\addcontentsline{toc}{subsubsection}{Copyrights}
To avoid future problems concerning changing licensing or
copyrights, all code contributions of more than a handful of lines
must be in the Public Domain or have the copyright transferred to
the Free Software Foundation Europe e.V. with a Fiduciary License
-Agreement (FLA) as the case for all the current code.
+Agreement (FLA), as is the case for all the current code.
Prior to November 2004, all the code was copyrighted by Kern Sibbald and
John Walker. After November 2004, the code was copyrighted by Kern
http://www.mozilla.org/MPL/missing.html}
{http://www.mozilla.org/MPL/missing.html}. The other important issue is to
avoid copyright, patent, or intellectual property violations as was
-(May 2003) claimed by SCO against IBM.
+(May 2003) claimed by SCO against IBM.
Although the copyright will be held by the Free Software
Foundation Europe e.V., each developer is expected to indicate
this.
If you have any doubts about this, please don't hesitate to ask. The
-objective is to assure the long term survival of the Bacula project.
+objective is to assure the long term survival of the Bacula project.
Items not needing a copyright assignment are: most small changes,
-enhancements, or bug fixes of 5-10 lines of code, which amount to
+enhancements, or bug fixes of 5-10 lines of code, which amount to
less than 20\% of any particular file.
-\subsection{Copyright Assignment -- Fiduciary License Agreement}
+\section{Copyright Assignment -- Fiduciary License Agreement}
\index{Copyright Assignment}
\index{Assignment!Copyright}
-\addcontentsline{toc}{subsubsection}{Copyright Assignment -- Fiduciary License Agreement}
Since this is not a commercial enterprise, and we prefer to believe in
everyone's good faith, previously developers could assign the copyright by
Please note that the above address is different from the officially
registered office mentioned in the document. When you send in such a
-complete document, please notify me: kern at sibbald dot com, and
+complete document, please notify me: kern at sibbald dot com, and
please add your email address to the FLA so that I can contact you
to confirm reception of the signed FLA.
\section{The Development Cycle}
\index{Development Cycle}
\index{Cycle!Development}
-\addcontentsline{toc}{subsubsection}{Development Cycle}
As discussed on the email lists, the number of contributions is
increasing significantly. We expect this positive trend
{\bf Feature Requests:} \\
In addition, we have ``formalized'' the feature requests a bit.
-Instead of me maintaining an informal list of everything I run into
-(kernstodo), we now maintain a "formal" list of projects. This
-means that all new feature requests, including those recently discussed on
-the email lists, must be formally submitted and approved.
+Instead of me maintaining an informal list of everything I run into
+(kernstodo), we now maintain a "formal" list of projects. This
+means that all new feature requests, including those recently discussed on
+the email lists, must be formally submitted and approved.
Formal submission of feature requests will take two forms: \\
1. non-mandatory, but highly recommended is to discuss proposed new features
the time), send it to the email list asking for opinions, or reject it
(very few cases).
-If it is accepted, it will go in the "projects" file (a simple ASCII file)
+If it is accepted, it will go in the "projects" file (a simple ASCII file)
maintained in the main Bacula source directory.
{\bf Implementation of Feature Requests:}\\
\section{Bacula Code Submissions and Projects}
\index{Submissions and Projects}
-\addcontentsline{toc}{subsection}{Code Submissions and Projects}
Getting code implemented in Bacula works roughly as follows:
\section{Patches for Released Versions}
\index{Patches for Released Versions}
-\addcontentsline{toc}{subsection}{Patches for Released Versions}
If you fix a bug in a released version, you should, unless it is
an absolutely trivial bug, create and release a patch file for the
bug. The procedure is as follows:
Fix the bug in the released branch and in the development master branch.
-Make a patch file for the branch and add the branch patch to
+Make a patch file for the branch and add the branch patch to
the patches directory in both the branch and the trunk.
-The name should be 2.2.4-xxx.patch where xxx is unique, in this case it can
+The name should be 2.2.4-xxx.patch where xxx is unique, in this case it can
be "restore", e.g. 2.2.4-restore.patch. Add to the top of the
-file a brief description and instructions for applying it -- see for example
+file a brief description and instructions for applying it -- see for example
2.2.4-poll-mount.patch. The best way to create the patch file is as
follows:
it should have the patch for that bug only).
If there is not a bug report on the problem, create one, then add the
-patch to the bug report.
+patch to the bug report.
Then upload it to the 2.2.x release of bacula-patches.
\section{Developing Bacula}
\index{Developing Bacula}
\index{Bacula!Developing}
-\addcontentsline{toc}{subsubsection}{Developing Bacula}
Typically the simplest way to develop Bacula is to open one xterm window
pointing to the source directory you wish to update; a second xterm window at
the top source directory level, and a third xterm window at the bacula
directory \lt{}top\gt{}/src/bacula. After making source changes in one of the
directories, in the top source directory xterm, build the source, and start
-the daemons by entering:
+the daemons by entering:
-make and
+make and
-./startit then in the enter:
+./startit then, in the xterm window at the bacula directory, enter:
-./console or
+./console or
./gnome-console to start the Console program. Enter any commands for testing.
-For example: run kernsverify full.
+For example: run kernsverify full.
Note, the instructions here to use {\bf ./startit} are different from using a
production system where the administrator starts Bacula by entering {\bf
to be run on a computer at the same time that a production system is running.
The {\bf ./startit} script starts {\bf Bacula} using a different set of
configuration files, and thus permits avoiding conflicts with any production
-system.
+system.
To make additional source changes, exit from the Console program, and in the
-top source directory, stop the daemons by entering:
+top source directory, stop the daemons by entering:
-./stopit then repeat the process.
+./stopit then repeat the process.
-\subsection{Debugging}
+\section{Debugging}
\index{Debugging}
-\addcontentsline{toc}{subsubsection}{Debugging}
-Probably the first thing to do is to turn on debug output.
+Probably the first thing to do is to turn on debug output.
A good place to start is with a debug level of 20 as in {\bf ./startit -d20}.
The startit command starts all the daemons with the same debug level.
Alternatively, you can start the appropriate daemon with the debug level you
want. If you really need more info, a debug level of 60 is not bad, and for
-just about everything a level of 200.
+just about everything a level of 200.
-\subsection{Using a Debugger}
+\section{Using a Debugger}
\index{Using a Debugger}
\index{Debugger!Using a}
-\addcontentsline{toc}{subsubsection}{Using a Debugger}
If you have a serious problem such as a segmentation fault, it can usually be
found quickly using a good multiple thread debugger such as {\bf gdb}. For
example, suppose you get a segmentation violation in {\bf bacula-dir}. You
-might use the following to find the problem:
+might use the following to find the problem:
\lt{}start the Storage and File daemons\gt{}
cd dird
The {\bf -f} option is specified on the {\bf run} command to inhibit {\bf
dird} from going into the background. You may also want to add the {\bf -s}
option to the run command to disable signals which can potentially interfere
-with the debugging.
+with the debugging.
As an alternative to using the debugger, each {\bf Bacula} daemon has a built
in back trace feature when a serious error is encountered. It calls the
debugger on itself, produces a back trace, and emails the report to the
developer. For more details on this, please see the chapter in the main Bacula
-manual entitled ``What To Do When Bacula Crashes (Kaboom)''.
+manual entitled ``What To Do When Bacula Crashes (Kaboom)''.
-\subsection{Memory Leaks}
+\section{Memory Leaks}
\index{Leaks!Memory}
\index{Memory Leaks}
-\addcontentsline{toc}{subsubsection}{Memory Leaks}
Because Bacula runs routinely and unattended on client and server machines, it
may run for a long time. As a consequence, from the very beginning, Bacula
routine that can be called at termination time that releases the memory. In
this way, we will be able to detect memory leaks. Be sure to immediately
correct any and all memory leaks that are printed at the termination of the
-daemons.
+daemons.
-\subsection{Special Files}
+\section{Special Files}
\index{Files!Special}
\index{Special Files}
-\addcontentsline{toc}{subsubsection}{Special Files}
Kern uses files named 1, 2, ... 9 with any extension as scratch files. Thus
-any files with these names are subject to being rudely deleted at any time.
+any files with these names are subject to being rudely deleted at any time.
-\subsection{When Implementing Incomplete Code}
+\section{When Implementing Incomplete Code}
\index{Code!When Implementing Incomplete}
\index{When Implementing Incomplete Code}
-\addcontentsline{toc}{subsubsection}{When Implementing Incomplete Code}
-Please identify all incomplete code with a comment that contains
+Please identify all incomplete code with a comment that contains
\begin{lstlisting}
***FIXME***
-\end{lstlisting}
+\end{lstlisting}
where there are three asterisks (*) before and after the word
FIXME (in capitals) and no intervening spaces. This is important as it allows
-new programmers to easily recognize where things are partially implemented.
+new programmers to easily recognize where things are partially implemented.
-\subsection{Bacula Source File Structure}
+\section{Bacula Source File Structure}
\index{Structure!Bacula Source File}
\index{Bacula Source File Structure}
-\addcontentsline{toc}{subsubsection}{Bacula Source File Structure}
The distribution generally comes as a tar file of the form {\bf
bacula.x.y.z.tar.gz} where x, y, and z are the version, release, and update
-numbers respectively.
+numbers respectively.
-Once you detar this file, you will have a directory structure as follows:
+Once you detar this file, you will have a directory structure as follows:
\footnotesize
\begin{lstlisting}
|- developers (Developer's guide)
|- home-page (Bacula's home page source)
|- manual (html document directory)
- |- manual-fr (French translation)
- |- manual-de (German translation)
+ |- manual-fr (French translation)
+ |- manual-de (German translation)
|- techlogs (Technical development notes);
Project rescue:
|- linux (Linux rescue CDROM)
|- cdrom (Linux rescue CDROM code)
...
- |- solaris (Solaris rescue -- incomplete)
+ |- solaris (Solaris rescue -- incomplete)
|- freebsd (FreeBSD rescue -- incomplete)
Project gui:
\end{lstlisting}
\normalsize
-\subsection{Header Files}
+\section{Header Files}
\index{Header Files}
\index{Files!Header}
-\addcontentsline{toc}{subsubsection}{Header Files}
Please carefully follow the scheme defined below as it permits in general only
two header file includes per C file, and thus vastly simplifies programming.
With a large complex project like Bacula, it isn't always easy to ensure that
the right headers are invoked in the right order (there are a few kludges to
make this happen -- i.e. in a few include files because of the chicken and egg
-problem, certain references to typedefs had to be replaced with {\bf void} ).
+problem, certain references to typedefs had to be replaced with {\bf void}).
Every file should include {\bf bacula.h}. It pulls in just about everything,
with very few exceptions. If you have system dependent ifdefing, please do it
-in {\bf baconfig.h}. The version number and date are kept in {\bf version.h}.
+in {\bf baconfig.h}. The version number and date are kept in {\bf version.h}.
Each of the subdirectories (console, cats, dird, filed, findlib, lib, stored,
...) contains a single directory dependent include file generally the name of
bacula.h}. This file (for example, for the dird directory, it is {\bf dird.h})
contains either definitions of things generally needed in this directory, or
it includes the appropriate header files. It always includes {\bf protos.h}.
-See below.
+See below.
Each subdirectory contains a header file named {\bf protos.h}, which contains
the prototypes for subroutines exported by files in that directory. {\bf
-protos.h} is always included by the main directory dependent include file.
+protos.h} is always included by the main directory dependent include file.
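
In practice this scheme reduces to a two-line preamble in most source files.
For a file in src/dird, for example, it would look roughly like this:
\footnotesize
\begin{lstlisting}
/* Typical top of a Director source file under this scheme.        */
/* bacula.h pulls in nearly everything; dird.h is the directory-   */
/* dependent header and includes that directory's protos.h.        */
/* System-dependent ifdefing belongs in baconfig.h, not here.      */
#include "bacula.h"
#include "dird.h"
\end{lstlisting}
\normalsize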
-\subsection{Programming Standards}
+\section{Programming Standards}
\index{Standards!Programming}
\index{Programming Standards}
-\addcontentsline{toc}{subsubsection}{Programming Standards}
For the most part, all code should be written in C unless there is a burning
reason to use C++, and then only the simplest C++ constructs will be used.
-Note, Bacula is slowly evolving to use more and more C++.
+Note, Bacula is slowly evolving to use more and more C++.
Code should have some documentation -- not a lot, but enough so that I can
understand it. Look at the current code, and you will see that I document more
-than most, but am definitely not a fanatic.
+than most, but am definitely not a fanatic.
We prefer simple linear code where possible. Gotos are strongly discouraged
except for handling an error to either bail out or to retry some code, and
-such use of gotos can vastly simplify the program.
+such use of gotos can vastly simplify the program.
Remember this is a C program that is migrating to a {\bf tiny} subset of C++,
-so be conservative in your use of C++ features.
+so be conservative in your use of C++ features.
-\subsection{Do Not Use}
+\section{Do Not Use}
\index{Use!Do Not}
\index{Do Not Use}
-\addcontentsline{toc}{subsubsection}{Do Not Use}
\begin{bsysitemize}
- \item STL -- it is totally incomprehensible.
+ \item STL -- it is totally incomprehensible.
\end{bsysitemize}
-\subsection{Avoid if Possible}
+\section{Avoid if Possible}
\index{Possible!Avoid if}
\index{Avoid if Possible}
-\addcontentsline{toc}{subsubsection}{Avoid if Possible}
\begin{bsysitemize}
\item Using {\bf void *} because this generally means that one must
  use casting, and in C++ casting is rather ugly. It is OK to use
- void * to pass structure address where the structure is not known
+ void * to pass structure address where the structure is not known
to the routines accepting the packet (typically callback routines).
However, declaring "void *buf" is a bad idea. Please use the
correct types whenever possible.
\item Using undefined storage specifications such as (short, int, long,
long long, size\_t ...). The problem with all these is that the number of bytes
they allocate depends on the compiler and the system. Instead use
- Bacula's types (int8\_t, uint8\_t, int32\_t, uint32\_t, int64\_t, and
+ Bacula's types (int8\_t, uint8\_t, int32\_t, uint32\_t, int64\_t, and
uint64\_t). This guarantees that the variables are given exactly the
  size you want. Please try, if at all possible, to avoid using size\_t or ssize\_t
  and the like. They are very system dependent. However, some system
  routines may need them, so their use is often unavoidable (see the sketch after this list).
\item Returning a malloc'ed buffer from a subroutine -- someone will forget
- to release it.
+ to release it.
\item Heap allocation (malloc) unless needed -- it is expensive. Use
POOL\_MEM instead.
-\item Templates -- they can create portability problems.
+\item Templates -- they can create portability problems.
\item Fancy or tricky C or C++ code, unless you give a good explanation of
- why you used it.
+ why you used it.
\item Too much inheritance -- it can complicate the code, and make reading it
- difficult (unless you are in love with colons)
+ difficult (unless you are in love with colons)
\end{bsysitemize}
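
As a trivial illustration of the sized-types point above, shown here with the
standard stdint.h spellings of the same type names:
\footnotesize
\begin{lstlisting}
/* Fixed-width types keep record sizes identical on every platform;
 * plain int/long/size_t would vary with the compiler and OS.      */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
   uint32_t num_files   = 0;      /* always exactly 32 bits         */
   uint64_t total_bytes = 0;      /* always exactly 64 bits         */

   num_files++;
   total_bytes += 4096;
   printf("%" PRIu32 " file, %" PRIu64 " bytes\n", num_files, total_bytes);
   return 0;
}
\end{lstlisting}
\normalsize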
-\subsection{Do Use Whenever Possible}
+\section{Do Use Whenever Possible}
\index{Possible!Do Use Whenever}
\index{Do Use Whenever Possible}
-\addcontentsline{toc}{subsubsection}{Do Use Whenever Possible}
\begin{bsysitemize}
-\item Locking and unlocking within a single subroutine.
+\item Locking and unlocking within a single subroutine.
-\item A single point of exit from all subroutines. A goto is
+\item A single point of exit from all subroutines. A goto is
perfectly OK to use to get out early, but only to a label
named bail\_out, and possibly an ok\_out. See current code
  examples and the short sketch following this list.
-\item malloc and free within a single subroutine.
+\item malloc and free within a single subroutine.
-\item Comments and global explanations on what your code or algorithm does.
+\item Comments and global explanations on what your code or algorithm does.
\item When committing a fix for a bug, make the comment of the
following form:
\begin{lstlisting}
-Reason for bug fix or other message. Fixes bug #1234
+Reason for bug fix or other message. Fixes bug #1234
\end{lstlisting}
It is important to write the {\bf bug \#1234} like
\end{lstlisting}
\item Use the following keywords at the beginning of
-a git commit message
+a git commit message
\end{bsysitemize}
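
The single-exit and bail\_out conventions from the list above look roughly like
this in practice; the function and its error handling are invented for the
illustration.
\footnotesize
\begin{lstlisting}
/* Single point of exit: early failures jump to bail_out, which
 * releases whatever was acquired so far.  malloc and free stay
 * within the same subroutine.                                     */
#include <stdio.h>
#include <stdlib.h>

static int process_file(const char *path)
{
   int ok = 0;
   char *buf = NULL;
   FILE *fp = fopen(path, "rb");

   if (!fp) {
      goto bail_out;               /* nothing acquired yet           */
   }
   buf = (char *)malloc(4096);
   if (!buf) {
      goto bail_out;
   }
   if (fread(buf, 1, 4096, fp) == 0) {
      goto bail_out;
   }
   ok = 1;                         /* success falls through          */

bail_out:
   if (buf) {
      free(buf);
   }
   if (fp) {
      fclose(fp);
   }
   return ok;
}

int main(int argc, char **argv)
{
   return (argc > 1 && process_file(argv[1])) ? 0 : 1;
}
\end{lstlisting}
\normalsize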
-\subsection{Indenting Standards}
+\section{Indenting Standards}
\index{Standards!Indenting}
\index{Indenting Standards}
-\addcontentsline{toc}{subsubsection}{Indenting Standards}
-We find it very hard to read code indented 8 columns at a time.
+We find it very hard to read code indented 8 columns at a time.
Even 4 at a time uses a lot of space, so we have adopted indenting
3 spaces at every level. Note, indention is the visual appearance of the
source on the page, while tabbing is replacing a series of up to 8 spaces with
-a tab character.
+a tab character.
The closest set of parameters for the Linux {\bf indent} program that will
-produce reasonably indented code are:
+produce reasonably indented code are:
\footnotesize
\begin{lstlisting}
You can put the above in your .indent.pro file, and then just invoke indent on
your file. However, be warned. This does not produce perfect indenting, and it
-will mess up C++ class statements pretty badly.
+will mess up C++ class statements pretty badly.
Braces are required in all if statements (missing in some very old code). To
avoid generating too many lines, the first brace appears on the first line
-(e.g. of an if), and the closing brace is on a line by itself. E.g.
+(e.g. of an if), and the closing brace is on a line by itself. E.g.
\footnotesize
\begin{lstlisting}
\end{lstlisting}
\normalsize
-Just follow the convention in the code. For example we I prefer non-indented cases.
+Just follow the convention in the code. For example, we prefer non-indented cases.
\footnotesize
\begin{lstlisting}
Avoid using // style comments except for temporary code or turning off debug
code. Standard C comments are preferred (this also keeps the code closer to
-C).
+C).
Attempt to keep all lines less than 85 characters long so that the whole line
-of code is readable at one time. This is not a rigid requirement.
+of code is readable at one time. This is not a rigid requirement.
Always put a brief description at the top of any new file created describing
what it does and including your name and the date it was first written. Please
don't forget any Copyrights and acknowledgments if it isn't 100\% your code.
-Also, include the Bacula copyright notice that is in {\bf src/c}.
+Also, include the Bacula copyright notice that is in {\bf src/c}.
In general you should have two includes at the top of the file: {\bf bacula.h} and
the include for the particular directory the code is in. Further includes are
sometimes needed, but this should be rare.
+be rare.
In general (except for self-contained packages), prototypes should all be put
-in {\bf protos.h} in each directory.
+in {\bf protos.h} in each directory.
-Always put space around assignment and comparison operators.
+Always put space around assignment and comparison operators.
\footnotesize
\begin{lstlisting}
\end{lstlisting}
\normalsize
-but your can compress things in a {\bf for} statement:
+but you can compress things in a {\bf for} statement:
\footnotesize
\begin{lstlisting}
\normalsize
Don't overuse the inline if (?:). A full {\bf if} is preferred, except in a
-print statement, e.g.:
+print statement, e.g.:
\footnotesize
\begin{lstlisting}
Leave a certain amount of debug code (Dmsg) in code you submit, so that future
problems can be identified. This is particularly true for complicated code
likely to break. However, try to keep the debug code to a minimum to avoid
-bloating the program and above all to keep the code readable.
+bloating the program and above all to keep the code readable.
Please keep the same style in all new code you develop. If you include code
previously written, you have the option of leaving it with the old indenting
or re-indenting it. If the old code is indented with 8 spaces, then please
-re-indent it to Bacula standards.
+re-indent it to Bacula standards.
If you are using {\bf vim}, simply set your tabstop to 8 and your shiftwidth
-to 3.
+to 3.
-\subsection{Tabbing}
+\section{Tabbing}
\index{Tabbing}
-\addcontentsline{toc}{subsubsection}{Tabbing}
Tabbing (inserting the tab character in place of spaces) is as normal on all
Unix systems -- a tab is converted to spaces up to the next column multiple of 8.
My editor converts strings of spaces to tabs automatically -- this results in
significant compression of the files. Thus, you can remove tabs by replacing
them with spaces if you wish. Please don't confuse tabbing (use of tab
-characters) with indenting (visual alignment of the code).
+characters) with indenting (visual alignment of the code).
-\subsection{Don'ts}
+\section{Don'ts}
\index{Don'ts}
-\addcontentsline{toc}{subsubsection}{Don'ts}
-Please don't use:
+Please don't use:
\footnotesize
\begin{lstlisting}
\normalsize
They are system dependent and un-safe. These should be replaced by the Bacula
-safe equivalents:
+safe equivalents:
\footnotesize
\begin{lstlisting}
\end{lstlisting}
\normalsize
-See src/lib/bsys.c for more details on these routines.
+See src/lib/bsys.c for more details on these routines.
Don't use the {\bf \%lld} or the {\bf \%q} printf format editing types to edit
64 bit integers -- they are not portable. Instead, use {\bf \%s} with {\bf
-edit\_uint64()}. For example:
+edit\_uint64()}. For example:
\footnotesize
\begin{lstlisting}
John Walker. The {\bf lld} that appears in the editing routine is actually
{\bf \#define}d to what is needed on your OS (usually ``lld'' or ``q'') and
is defined in autoconf/configure.in for each OS. C string concatenation causes
-the appropriate string to be concatenated to the ``\%''.
+the appropriate string to be concatenated to the ``\%''.
-Also please don't use the STL or Templates or any complicated C++ code.
+Also please don't use the STL or Templates or any complicated C++ code.
-\subsection{Message Classes}
+\section{Message Classes}
\index{Classes!Message}
\index{Message Classes}
-\addcontentsline{toc}{subsubsection}{Message Classes}
-Currently, there are five classes of messages: Debug, Error, Job, Memory,
+Currently, there are five classes of messages: Debug, Error, Job, Memory,
and Queued.
-\subsection{Debug Messages}
+\section{Debug Messages}
\index{Messages!Debug}
\index{Debug Messages}
-\addcontentsline{toc}{subsubsection}{Debug Messages}
Debug messages are designed to be turned on at a specified debug level and are
always sent to STDOUT. They are designed to be used only in the development
-debug process. They are coded as:
+debug process. They are coded as:
DmsgN(level, message, arg1, ...) where the N is a number indicating how many
arguments are to be substituted into the message (i.e. it is a count of the
percent signs (\%)). {\bf level} is the debug level at which you wish the message to
be printed. message is the debug message to be printed, and arg1, ... are the
arguments to be substituted. Since not all compilers support \#defines with
-varargs, you must explicitly specify how many arguments you have.
+varargs, you must explicitly specify how many arguments you have.
When the debug message is printed, it will automatically be prefixed by the
name of the daemon which is running, the filename where the Dmsg is, and the
-line number within the file.
+line number within the file.
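
The prefixing works because DmsgN is a macro that forwards \_\_FILE\_\_ and
\_\_LINE\_\_ to the output routine together with your arguments. The following
is a simplified sketch of that idea, not the actual Bacula macros or output
routine.
\footnotesize
\begin{lstlisting}
/* Simplified sketch of the DmsgN idea -- not the real Bacula macros. */
#include <stdarg.h>
#include <stdio.h>
#include <string.h>

static int debug_level = 20;        /* e.g. set from the -d option      */

static void example_d_msg(const char *file, int line, int level,
                          const char *fmt, ...)
{
   va_list ap;

   if (level > debug_level) {
      return;                        /* message filtered out             */
   }
   fprintf(stdout, "example-dir: %s:%d ", file, line);
   va_start(ap, fmt);
   vfprintf(stdout, fmt, ap);
   va_end(ap);
}

/* The explicit count exists because not every compiler supports
 * varargs in #defines; each macro simply forwards a fixed number.  */
#define Dmsg1(level, msg, a1)     example_d_msg(__FILE__, __LINE__, level, msg, a1)
#define Dmsg2(level, msg, a1, a2) example_d_msg(__FILE__, __LINE__, level, msg, a1, a2)

int main(void)
{
   const char *buf = "d41d8cd98f00b204e9800998ecf8427e";
   Dmsg2(20, "MD5len=%d MD5=%s\n", (int)strlen(buf), buf);
   return 0;
}
\end{lstlisting}
\normalsize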
-Some actual examples are:
+Some actual examples are:
-Dmsg2(20, ``MD5len=\%d MD5=\%s\textbackslash{}n'', strlen(buf), buf);
+Dmsg2(20, ``MD5len=\%d MD5=\%s\textbackslash{}n'', strlen(buf), buf);
-Dmsg1(9, ``Created client \%s record\textbackslash{}n'', client->hdr.name);
+Dmsg1(9, ``Created client \%s record\textbackslash{}n'', client->hdr.name);
-\subsection{Error Messages}
+\section{Error Messages}
\index{Messages!Error}
\index{Error Messages}
-\addcontentsline{toc}{subsubsection}{Error Messages}
Error messages are messages that are related to the daemon as a whole rather
than a particular job. For example, an out of memory condition may generate an
error message. They should be very rarely needed. In general, you should be
-using Job and Job Queued messages (Jmsg and Qmsg). They are coded as:
-
-EmsgN(error-code, level, message, arg1, ...) As with debug messages, you must
-explicitly code the of arguments to be substituted in the message. error-code
-indicates the severity or class of error, and it may be one of the following:
-
-\addcontentsline{lot}{table}{Message Error Code Classes}
-\begin{longtable}{lp{3in}}
-{{\bf M\_ABORT} } & {Causes the daemon to immediately abort. This should be
-used only in extreme cases. It attempts to produce a traceback. } \\
-{{\bf M\_ERROR\_TERM} } & {Causes the daemon to immediately terminate. This
-should be used only in extreme cases. It does not produce a traceback. } \\
-{{\bf M\_FATAL} } & {Causes the daemon to terminate the current job, but the
-daemon keeps running } \\
-{{\bf M\_ERROR} } & {Reports the error. The daemon and the job continue
-running } \\
-{{\bf M\_WARNING} } & {Reports an warning message. The daemon and the job
-continue running } \\
-{{\bf M\_INFO} } & {Reports an informational message.}
-
-\end{longtable}
+using Job and Job Queued messages (Jmsg and Qmsg). They are coded as:
+
+EmsgN(error-code, level, message, arg1, \ldots{}). As with debug messages, you
+ must explicitly code the number of arguments to be substituted in the message.
+ Error-code indicates the severity or class of error, and it may be one of the
+ following (see Table~\vref{tabdev:errorcodes}):
+\LTXtable{\linewidth}{table_errorcodes}
There are other error message classes, but they are in a state of being
redesigned or deprecated, so please do not use them. Some actual examples are:
+\begin{lstlisting}
+Emsg1(M_ABORT, 0, "Cannot create message thread: %s\n", strerror(status));
+Emsg3(M_WARNING, 0, "Connect to File daemon %s at %s:%d failed. Retrying ...\n",
+      client->hdr.name, client->address, client->port);
-Emsg1(M\_ABORT, 0, ``Cannot create message thread: \%s\textbackslash{}n'',
-strerror(status));
-
-Emsg3(M\_WARNING, 0, ``Connect to File daemon \%s at \%s:\%d failed. Retrying
-...\textbackslash{}n'', client-\gt{}hdr.name, client-\gt{}address,
-client-\gt{}port);
-
-Emsg3(M\_FATAL, 0, ``bdird\lt{}filed: bad response from Filed to \%s command:
-\%d \%s\textbackslash{}n'', cmd, n, strerror(errno));
+Emsg3(M_FATAL, 0, "bdird<filed: bad response from Filed to %s command: %d %s\n", cmd, n, strerror(errno));
+\end{lstlisting}
-\subsection{Job Messages}
+\section{Job Messages}
\index{Job Messages}
\index{Messages!Job}
-\addcontentsline{toc}{subsubsection}{Job Messages}
Job messages are messages that pertain to a particular job such as a file that
could not be saved, or the number of files and bytes that were saved. They
have any number of arguments substituted in a printf-like format.
Output from the Jmsg() will go to the Job report.
-If the Jmsg is followed with a number such as Jmsg1(...), the number
+If the Jmsg is followed with a number such as Jmsg1(\ldots{}), the number
indicates the number of arguments to be substituted (varargs is not
standard for \#defines), and what is more important is that the file and
line number will be prefixed to the message. This permits a sort of debug
from user's output.
-\subsection{Queued Job Messages}
+\section{Queued Job Messages}
\index{Queued Job Messages}
\index{Messages!Job}
-\addcontentsline{toc}{subsubsection}{Queued Job Messages}
Queued Job messages are similar to Jmsg()s except that the message is
Queued rather than immediately dispatched. This is necessary within the
network subroutines and in the message editing routines. This is to prevent
event of a network error.
-\subsection{Memory Messages}
+\section{Memory Messages}
\index{Messages!Memory}
\index{Memory Messages}
-\addcontentsline{toc}{subsubsection}{Memory Messages}
Memory messages are messages that are edited into a memory buffer. Generally
they are used in low level routines such as the low level device file dev.c in
the Storage daemon or in the low level Catalog routines. These routines do not
generally have access to the Job Control Record and so they return error
-essages reformatted in a memory buffer. Mmsg() is the way to do this.
+messages reformatted in a memory buffer. Mmsg() is the way to do this.
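
The idea can be sketched as follows; the helper below is a stand-in written for
this example and is not the actual Mmsg(), which works with Bacula's own memory
buffers.
\footnotesize
\begin{lstlisting}
/* Stand-in sketch of the Mmsg() idea: low-level code that has no
 * JCR formats its error text into a caller-supplied buffer and
 * simply returns; the caller decides how to report it.            */
#include <stdarg.h>
#include <stdio.h>

static int example_mmsg(char *buf, size_t buflen, const char *fmt, ...)
{
   va_list ap;
   int len;

   va_start(ap, fmt);
   len = vsnprintf(buf, buflen, fmt, ap);
   va_end(ap);
   return len;
}

int main(void)
{
   char errmsg[256];

   /* e.g. a low-level device routine reporting an open failure     */
   example_mmsg(errmsg, sizeof(errmsg),
                "Unable to open device %s: ERR=%s\n", "/dev/nst0", "I/O error");
   fputs(errmsg, stdout);
   return 0;
}
\end{lstlisting}
\normalsize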
-\subsection{Bugs Database}
+\section{Bugs Database}
\index{Database!Bugs}
\index{Bugs Database}
-\addcontentsline{toc}{subsubsection}{Bugs Database}
We have a bugs database which is at:
\elink{http://bugs.bacula.org}{http://bugs.bacula.org}, and as
-a developer you will need to respond to bugs, perhaps bugs in general
+a developer you will need to respond to bugs, perhaps bugs in general
if you have time, otherwise just bugs that correspond to code that
you wrote.
we have run tests and can see that for us the problem is fixed. However,
in doing so, it avoids misunderstandings if you leave a note while you are
closing the bug that says something to the following effect:
-We are closing this bug because ... If for some reason, it does not fix
+"We are closing this bug because \ldots{} If for some reason, it does not fix
your problem, please feel free to reopen it, or to open a new bug report
describing the problem".
We do not recommend that you attempt to edit any of the bug notes that have
been submitted, nor to delete them or make them private. In fact, if
someone accidentally makes a bug note private, you should ask the reason
-and if at all possible (with his agreement) make the bug note public.
+and if at all possible (with his agreement) make the bug note public.
If the user has not properly filled in most of the important fields
-(platorm, OS, Product Version, ...) please do not hesitate to politely ask
+(platform, OS, Product Version, \ldots{}), please do not hesitate to politely ask
him. Also, if the bug report is a request for a new feature, please
politely send the user to the Feature Request menu item on www.bacula.org.
The same applies to a support request (we answer only bugs), you might give
\label{_GitChapterStart}
\index{Git}
\index{Git!Repo}
-\addcontentsline{toc}{section}{Bacula Bit Usage}
This chapter is intended to help you use the Git source code
repositories to obtain, modify, and submit Bacula source code.
\section{Bacula Git repositories}
\index{Git}
-\addcontentsline{toc}{subsection}{Git repositories}
As of September 2009, the Bacula source code has been split into
three Git repositories. One is a repository that holds the
main Bacula source code with directories {\bf bacula}, {\bf gui},
\section{Git Usage}
\index{Git Usage}
-\addcontentsline{toc}{subsection}{Git Usage}
Please note that if you are familiar with SVN, Git is similar,
(and better), but there can be a few surprising differences that
\label{_ChapterStart}
\index[general]{Interface!Implementing a Bacula GUI }
\index[general]{Implementing a Bacula GUI Interface }
-\addcontentsline{toc}{section}{Implementing a Bacula GUI Interface}
\section{General}
\index[general]{General }
-\addcontentsline{toc}{subsection}{General}
This document is intended mostly for developers who wish to develop a new GUI
-interface to {\bf Bacula}.
+interface to {\bf Bacula}.
\subsection{Minimal Code in Console Program}
\index[general]{Program!Minimal Code in Console }
\index[general]{Minimal Code in Console Program }
-\addcontentsline{toc}{subsubsection}{Minimal Code in Console Program}
Until now, I have kept all the Catalog code in the Director (with the
exception of dbcheck and bscan). This is because at some point I would like to
in a GUI this will be more difficult. The other advantage is that any code you
add to the Director is automatically available to both the tty console program
and the WX program. The major disadvantage is it increases the size of the
-code -- however, compared to Networker the Bacula Director is really tiny.
+code -- however, compared to Networker the Bacula Director is really tiny.
\subsection{GUI Interface is Difficult}
\index[general]{GUI Interface is Difficult }
\index[general]{Difficult!GUI Interface is }
-\addcontentsline{toc}{subsubsection}{GUI Interface is Difficult}
Interfacing to an interactive program such as Bacula can be very difficult
because the interfacing program must interpret all the prompts that may come.
This can be next to impossible. There are a number of ways that Bacula is
-designed to facilitate this:
+designed to facilitate this:
\begin{bsysitemize}
\item The Bacula network protocol is packet based, and thus pieces of
-information sent can be ASCII or binary.
-\item The packet interface permits knowing where the end of a list is.
+information sent can be ASCII or binary.
+\item The packet interface permits knowing where the end of a list is.
\item The packet interface permits special ``signals'' to be passed rather
-than data.
+than data.
\item The Director has a number of commands that are non-interactive. They
-all begin with a period, and provide things such as the list of all Jobs,
+all begin with a period, and provide things such as the list of all Jobs,
list of all Clients, list of all Pools, list of all Storage, ... Thus the GUI
interface can get to virtually all information that the Director has in a
-deterministic way. See \lt{}bacula-source\gt{}/src/dird/ua\_dotcmds.c for
-more details on this.
+deterministic way. See \lt{}bacula-source\gt{}/src/dird/ua\_dotcmds.c for
+more details on this.
\item Most console commands allow all the arguments to be specified on the
-command line: e.g. {\bf run job=NightlyBackup level=Full}
+command line: e.g. {\bf run job=NightlyBackup level=Full}
\end{bsysitemize}
One of the first things to overcome is to be able to establish a conversation
with the Director. Although you can write all your own code, it is probably
easier to use the Bacula subroutines. The following code is used by the
-Console program to begin a conversation.
+Console program to begin a conversation.
\footnotesize
\begin{lstlisting}
\end{lstlisting}
\normalsize
-Then the read\_and\_process\_input routine looks like the following:
+Then the read\_and\_process\_input routine looks like the following:
\footnotesize
\begin{lstlisting}
\chapter{Bacula MD5 Algorithm}
\label{MD5Chapter}
-\addcontentsline{toc}{section}{}
\section{Command Line Message Digest Utility }
\index{Utility!Command Line Message Digest }
\index{Command Line Message Digest Utility }
-\addcontentsline{toc}{subsection}{Command Line Message Digest Utility}
This page describes {\bf md5}, a command line utility usable on either Unix or
MS-DOS/Windows, which generates and verifies message digests (digital
signatures) using the MD5 algorithm. This program can be useful when
developing shell scripts or Perl programs for software installation, file
-comparison, and detection of file corruption and tampering.
+comparison, and detection of file corruption and tampering.
\subsection{Name}
\index{Name}
-\addcontentsline{toc}{subsubsection}{Name}
-{\bf md5} - generate / check MD5 message digest
+{\bf md5} - generate / check MD5 message digest
\subsection{Synopsis}
\index{Synopsis }
-\addcontentsline{toc}{subsubsection}{Synopsis}
{\bf md5} [ {\bf -c}{\it signature} ] [ {\bf -u} ] [ {\bf -d}{\it input\_text}
-| {\it infile} ] [ {\it outfile} ]
+| {\it infile} ] [ {\it outfile} ]
\subsection{Description}
\index{Description }
-\addcontentsline{toc}{subsubsection}{Description}
A {\it message digest} is a compact digital signature for an arbitrarily long
stream of binary data. An ideal message digest algorithm would never generate
preparation of input text with a given signature computationally infeasible.
Message digest algorithms have much in common with techniques used in
encryption, but to a different end: verification that data have not been
-altered since the signature was published.
+altered since the signature was published.
Many older programs requiring digital signatures employed 16 or 32 bit {\it
cyclical redundancy codes} (CRC) originally developed to verify correct
transmission in data communication protocols, but these short codes, while
adequate to detect the kind of transmission errors for which they were
intended, are insufficiently secure for applications such as electronic
-commerce and verification of security related software distributions.
+commerce and verification of security related software distributions.
The most commonly used present-day message digest algorithm is the 128 bit MD5
-algorithm, developed by Ron Rivest of the
-\elink{MIT}{http://web.mit.edu/}
-\elink{Laboratory for Computer Science}{http://www.lcs.mit.edu/} and
+algorithm, developed by Ron Rivest of the
+\elink{MIT}{http://web.mit.edu/}
+\elink{Laboratory for Computer Science}{http://www.lcs.mit.edu/} and
\elink{RSA Data Security, Inc.}{http://www.rsa.com/} The algorithm, with a
-reference implementation, was published as Internet
+reference implementation, was published as Internet
\elink{RFC 1321}{http://www.fourmilab.ch/md5/rfc1321.html} in April 1992, and
was placed into the public domain at that time. Message digest algorithms such
as MD5 are not deemed ``encryption technology'' and are not subject to the
export controls some governments impose on other data security products.
(Obviously, the responsibility for obeying the laws in the jurisdiction in
which you reside is entirely your own, but many common Web and Mail utilities
-use MD5, and I am unaware of any restrictions on their distribution and use.)
+use MD5, and I am unaware of any restrictions on their distribution and use.)
The MD5 algorithm has been implemented in numerous computer languages
-including C,
-\elink{Perl}{http://www.perl.org/}, and
+including C,
+\elink{Perl}{http://www.perl.org/}, and
\elink{Java}{http://www.javasoft.com/}; if you're writing a program in such a
language, track down a suitable subroutine and incorporate it into your
program. The program described on this page is a {\it command line}
md5} program was originally developed as part of a suite of tools intended to
monitor large collections of files (for example, the contents of a Web site)
to detect corruption of files and inadvertent (or perhaps malicious) changes.
-That task is now best accomplished with more comprehensive packages such as
+That task is now best accomplished with more comprehensive packages such as
\elink{Tripwire}{ftp://coast.cs.purdue.edu/pub/COAST/Tripwire/}, but the
command line {\bf md5} component continues to prove useful for verifying
correct delivery and installation of software packages, comparing the contents
-of two different systems, and checking for changes in specific files.
+of two different systems, and checking for changes in specific files.
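As noted above, the MD5 algorithm is available as a library routine in many
languages. For illustration only, the following minimal C sketch computes a
digest using the RFC 1321 reference implementation shipped in the archive
below (the names {\tt MD5\_CTX}, {\tt MD5Init()}, {\tt MD5Update()} and
{\tt MD5Final()} are those of the reference code; verify them against the copy
you download):
\footnotesize
\begin{lstlisting}
#include <stdio.h>
#include <string.h>
#include "md5.h"               /* RFC 1321 reference implementation header */

int main(void)
{
   const char *text = "hello world";
   unsigned char digest[16];   /* MD5 produces a 128 bit (16 byte) digest */
   MD5_CTX ctx;
   int i;

   MD5Init(&ctx);
   MD5Update(&ctx, (unsigned char *)text, (unsigned int)strlen(text));
   MD5Final(digest, &ctx);

   /* Print the digest as 32 hexadecimal digits, the form accepted by md5 -c */
   for (i = 0; i < 16; i++) {
      printf("%02x", digest[i]);
   }
   printf("\n");
   return 0;
}
\end{lstlisting}
\normalsize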
\subsection{Options}
\index{Options }
-\addcontentsline{toc}{subsubsection}{Options}
\begin{description}
If the two signatures match, the exit status will be zero; otherwise, the exit
status will be 1. No signature is written to {\it outfile} or standard
output; only the exit status is set. The signature to be checked must be
-specified as 32 hexadecimal digits.
+specified as 32 hexadecimal digits.
\item [{\bf -d}{\it input\_text} ]
\index{-dinput\_text }
A signature is computed for the given {\it input\_text} (which must be quoted
if it contains white space characters) instead of input from {\it infile} or
standard input. If input is specified with the {\bf -d} option, no {\it
-infile} should be specified.
+infile} should be specified.
\item [{\bf -u} ]
- Print how-to-call information.
+ Print how-to-call information.
\end{description}
\subsection{Files}
\index{Files }
-\addcontentsline{toc}{subsubsection}{Files}
If no {\it infile} or {\bf -d} option is specified or {\it infile} is a single
``-'', {\bf md5} reads from standard input; if no {\it outfile} is given, or
{\it outfile} is a single ``-'', output is sent to standard output. Input and
output are processed strictly serially; consequently {\bf md5} may be used in
-pipelines.
+pipelines.
\subsection{Bugs}
\index{Bugs }
-\addcontentsline{toc}{subsubsection}{Bugs}
The mechanism used to set standard input to binary mode may be specific to
Microsoft C; if you rebuild the DOS/Windows version of the program from source
using another compiler, be sure to verify binary files work properly when read
-via redirection or a pipe.
+via redirection or a pipe.
This program has not been tested on a machine on which {\tt int} and/or {\tt
-long} are longer than 32 bits.
+long} are longer than 32 bits.
\section{
\elink{Download md5.zip}{http://www.fourmilab.ch/md5/md5.zip} (Zipped
archive)}
\index{Archive!Download md5.zip Zipped }
\index{Download md5.zip (Zipped archive) }
-\addcontentsline{toc}{subsection}{Download md5.zip (Zipped archive)}
-The program is provided as
-\elink{md5.zip}{http://www.fourmilab.ch/md5/md5.zip}, a
+The program is provided as
+\elink{md5.zip}{http://www.fourmilab.ch/md5/md5.zip}, a
\elink{Zipped}{http://www.pkware.com/} archive containing a ready-to-run
Win32 command-line executable program, {\tt md5.exe} (compiled using Microsoft
Visual C++ 5.0), and in source code form along with a {\tt Makefile} to build
-the program under Unix.
+the program under Unix.
\subsection{See Also}
\index{ALSO!SEE }
\index{See Also }
-\addcontentsline{toc}{subsubsection}{SEE ALSO}
-{\bf sum}(1)
+{\bf sum}(1)
\subsection{Exit Status}
\index{Status!Exit }
\index{Exit Status }
-\addcontentsline{toc}{subsubsection}{Exit Status}
{\bf md5} returns status 0 if processing was completed without errors, 1 if
the {\bf -c} option was specified and the given signature does not match that
of the input, and 2 if processing could not be performed at all due, for
-example, to a nonexistent input file.
+example, to a nonexistent input file.
\subsection{Copying}
\index{Copying }
-\addcontentsline{toc}{subsubsection}{Copying}
\begin{quote}
This software is in the public domain. Permission to use, copy, modify, and
distribute this software and its documentation for any purpose and without
fee is hereby granted, without any conditions or restrictions. This software
-is provided ``as is'' without express or implied warranty.
+is provided ``as is'' without express or implied warranty.
\end{quote}
\subsection{Acknowledgements}
\index{Acknowledgements }
-\addcontentsline{toc}{subsubsection}{Acknowledgements}
The MD5 algorithm was developed by Ron Rivest. The public domain C language
-implementation used in this program was written by Colin Plumb in 1993.
-{\it
+implementation used in this program was written by Colin Plumb in 1993.
+{\it
\elink{by John Walker}{http://www.fourmilab.ch/}
-January 6th, MIM }
+January 6th, MIM }
The File-attributes consist of the following:
-%\addcontentsline{lot}{table}{File Attributes}
\LTXtable{\linewidth}{table_fileattributes}
\section{Old Deprecated Tape Format}
\label{_ChapterStart7}
\index{Management!Bacula Memory}
\index{Bacula Memory Management}
-\addcontentsline{toc}{section}{Bacula Memory Management}
\section{General}
\index{General}
-\addcontentsline{toc}{subsection}{General}
This document describes the memory management routines that are used in Bacula
and is meant to be a technical discussion for developers rather than part of
-the user manual.
+the user manual.
Since Bacula may be called upon to handle filenames of varying and more or
less arbitrary length, special attention is needed in the code to
ensure that memory buffers are sufficiently large. There are four
possibilities for memory usage within {\bf Bacula}. Each will be described in
-turn. They are:
+turn. They are:
\begin{bsysitemize}
-\item Statically allocated memory.
-\item Dynamically allocated memory using malloc() and free().
-\item Non-pooled memory.
-\item Pooled memory.
+\item Statically allocated memory.
+\item Dynamically allocated memory using malloc() and free().
+\item Non-pooled memory.
+\item Pooled memory.
\end{bsysitemize}
\subsection{Statically Allocated Memory}
\index{Statically Allocated Memory}
\index{Memory!Statically Allocated}
-\addcontentsline{toc}{subsubsection}{Statically Allocated Memory}
-Statically allocated memory is of the form:
+Statically allocated memory is of the form:
\footnotesize
\begin{lstlisting}
this is appropriate is for {\bf Bacula} resource names, which are currently
limited to 127 characters (MAX\_NAME\_LENGTH). Although this maximum size may
change, particularly to accommodate Unicode, it will remain a relatively small
-value.
+value.
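For example (an illustrative fragment only; {\tt MAX\_NAME\_LENGTH} is the
limit mentioned above):
\footnotesize
\begin{lstlisting}
/* Fixed-size buffer: acceptable only because Bacula resource names
 * are bounded by MAX_NAME_LENGTH (currently 127 characters). */
char resource_name[MAX_NAME_LENGTH];

/* bstrncpy() is assumed here as Bacula's bounded string copy. */
bstrncpy(resource_name, "Default", sizeof(resource_name));
\end{lstlisting}
\normalsize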
\subsection{Dynamically Allocated Memory}
\index{Dynamically Allocated Memory}
\index{Memory!Dynamically Allocated}
-\addcontentsline{toc}{subsubsection}{Dynamically Allocated Memory}
Dynamically allocated memory is obtained using the standard malloc() routines.
-As in:
+As in:
\footnotesize
\begin{lstlisting}
\end{lstlisting}
\normalsize
-This kind of memory can be released with:
+This kind of memory can be released with:
\footnotesize
\begin{lstlisting}
writing files. When {\bf SmartAlloc} is enabled, the memory obtained by
malloc() will automatically be checked for buffer overwrite (overflow) during
the free() call, and all malloc'ed memory that is not released prior to
-termination of the program will be reported as Orphaned memory.
+termination of the program will be reported as Orphaned memory.
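A brief illustration of this pattern (plain C library calls; when SmartAlloc
is enabled they are transparently redefined, as described in the Smart Memory
Allocation chapter):
\footnotesize
\begin{lstlisting}
#include <stdlib.h>
#include <string.h>

void example(void)
{
   char *buf = (char *)malloc(256);   /* dynamically allocated buffer */
   if (!buf) {
      return;                         /* allocation failed */
   }
   strcpy(buf, "some data");
   /* ... use the buffer ... */
   free(buf);                         /* must be released; otherwise SmartAlloc
                                       * reports it as orphaned at termination */
}
\end{lstlisting}
\normalsize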
\subsection{Pooled and Non-pooled Memory}
\index{Memory!Pooled and Non-pooled}
\index{Pooled and Non-pooled Memory}
-\addcontentsline{toc}{subsubsection}{Pooled and Non-pooled Memory}
In order to facilitate the handling of arbitrary length filenames and to
efficiently handle a high volume of dynamic memory usage, we have implemented
block allowing for easy checking if the buffer is of sufficient size. This
kind of memory would normally be used in high volume situations (lots of
malloc()s and free()s) where the buffer length may have to frequently change
-to adapt to varying filename lengths.
+to adapt to varying filename lengths.
The non-pooled memory is handled by routines similar to those used for pooled
memory, allowing for easy size checking. However, non-pooled memory is
returned to the system rather than being saved in the Bacula pool. This kind
of memory would normally be used in low volume situations (few malloc()s and
free()s), but where the size of the buffer might have to be adjusted
-frequently.
+frequently.
\paragraph*{Types of Memory Pool:}
-Currently there are three memory pool types:
+Currently there are four memory pool types:
\begin{bsysitemize}
-\item PM\_NOPOOL -- non-pooled memory.
-\item PM\_FNAME -- a filename pool.
-\item PM\_MESSAGE -- a message buffer pool.
-\item PM\_EMSG -- error message buffer pool.
+\item PM\_NOPOOL -- non-pooled memory.
+\item PM\_FNAME -- a filename pool.
+\item PM\_MESSAGE -- a message buffer pool.
+\item PM\_EMSG -- error message buffer pool.
\end{bsysitemize}
\paragraph*{Getting Memory:}
-To get memory, one uses:
+To get memory, one uses:
\footnotesize
\begin{lstlisting}
where {\bf pool} is one of the above mentioned pool names. The size of the
memory returned will be determined by the system to be most appropriate for
-the application.
+the application.
-If you wish non-pooled memory, you may alternatively call:
+If you wish non-pooled memory, you may alternatively call:
\footnotesize
\begin{lstlisting}
\normalsize
The buffer length will be set to the size specified, and it will be assigned
-to the PM\_NOPOOL pool (no pooling).
+to the PM\_NOPOOL pool (no pooling).
\paragraph*{Releasing Memory:}
-To free memory acquired by either of the above two calls, use:
+To free memory acquired by either of the above two calls, use:
\footnotesize
\begin{lstlisting}
where buffer is the memory buffer returned when the memory was acquired. If
the memory was originally allocated as type PM\_NOPOOL, it will be released to
the system, otherwise, it will be placed on the appropriate Bacula memory pool
-free chain to be used in a subsequent call for memory from that pool.
+free chain to be used in a subsequent call for memory from that pool.
\paragraph*{Determining the Memory Size:}
-To determine the memory buffer size, use:
+To determine the memory buffer size, use:
\footnotesize
\begin{lstlisting}
\paragraph*{Resizing Pool Memory:}
-To resize pool memory, use:
+To resize pool memory, use:
\footnotesize
\begin{lstlisting}
\normalsize
The buffer will be reallocated, and the contents of the original buffer will
-be preserved, but the address of the buffer may change.
+be preserved, but the address of the buffer may change.
\paragraph*{Automatic Size Adjustment:}
To have the system check and, if necessary, adjust the size of your pooled
-memory buffer, use:
+memory buffer, use:
\footnotesize
\begin{lstlisting}
occur. However, if a buffer size change is needed, the original contents of
the buffer will be preserved, but the buffer address may change. Many of the
low level Bacula subroutines expect to be passed a pool memory buffer and use
-this call to ensure the buffer they use is sufficiently large.
+this call to ensure the buffer they use is sufficiently large.
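The calls described above fit together roughly as in the following sketch. It
assumes the routine and type names from {\tt src/lib/mem\_pool.h}
({\tt POOLMEM}, {\tt get\_pool\_memory()}, {\tt check\_pool\_memory\_size()},
{\tt free\_pool\_memory()}); check them against the current source before
relying on them:
\footnotesize
\begin{lstlisting}
#include "bacula.h"     /* assumed umbrella header pulling in mem_pool.h */

static void build_path(const char *dir, const char *file)
{
   /* Obtain a buffer from the filename pool; the pool chooses a
    * suitable initial size. */
   POOLMEM *path = get_pool_memory(PM_FNAME);

   /* Ensure the buffer can hold "dir/file"; the contents are preserved
    * but the address may change, so reassign the pointer. */
   path = check_pool_memory_size(path, strlen(dir) + strlen(file) + 2);

   strcpy(path, dir);
   strcat(path, "/");
   strcat(path, file);

   /* ... use path ... */

   /* Return the buffer to its pool (PM_NOPOOL memory would instead be
    * released to the system). */
   free_pool_memory(path);
}
\end{lstlisting}
\normalsize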
\paragraph*{Releasing All Pooled Memory:}
In order to avoid orphaned buffer error messages when terminating the program,
-use:
+use:
\footnotesize
\begin{lstlisting}
to free all unused memory retained in the Bacula memory pool. Note, any memory
not returned to the pool via free\_pool\_memory() will not be released by this
-call.
+call.
\paragraph*{Pooled Memory Statistics:}
For debugging purposes and performance tuning, the following call will print
-the current memory pool statistics:
+the current memory pool statistics:
\footnotesize
\begin{lstlisting}
\end{lstlisting}
\normalsize
-an example output is:
+An example of the output is:
\footnotesize
\begin{lstlisting}
\label{_ChapterStart5}
\index{TCP/IP Network Protocol}
\index{Protocol!TCP/IP Network}
-\addcontentsline{toc}{section}{TCP/IP Network Protocol}
\section{General}
\index{General}
-\addcontentsline{toc}{subsection}{General}
This document describes the TCP/IP protocol used by Bacula to communicate
between the various daemons and services. The definitive definition of the
protocol can be found in src/lib/bsock.h, src/lib/bnet.c and
-src/lib/bnet\_server.c.
+src/lib/bnet\_server.c.
Bacula's network protocol is basically a ``packet oriented'' protocol built on
standard TCP/IP streams. At the lowest level, all packet transfers are done
with read() and write() requests on system sockets. Pipes are not used as they
are considered unreliable for large serial data transfers between various
-hosts.
+hosts.
Using the routines described below (bnet\_open, bnet\_write, bnet\_recv, and
bnet\_close) guarantees that the number of bytes you write into the socket
will be received as a single record on the other end regardless of how many
low level write() and read() calls are needed. All data transferred are
-considered to be binary data.
+considered to be binary data.
\section{bnet and Threads}
\index{Threads!bnet and}
\index{Bnet and Threads}
-\addcontentsline{toc}{subsection}{bnet and Threads}
These bnet routines work fine in a threaded environment. However, they assume
that there is only one reader or writer on the socket at any time. It is
highly recommended that only a single thread access any BSOCK packet. The
exception to this rule is when the socket is first opened and it is waiting
for a job to start. The wait in the Storage daemon is done in one thread and
-then passed to another thread for subsequent handling.
+then passed to another thread for subsequent handling.
If you envision having two threads using the same BSOCK, think twice; if you do, you
must implement some locking mechanism. However, it probably would not be
-appropriate to put locks inside the bnet subroutines for efficiency reasons.
+appropriate to put locks inside the bnet subroutines for efficiency reasons.
\section{bnet\_open}
\index{Bnet\_open}
-\addcontentsline{toc}{subsection}{bnet\_open}
-To establish a connection to a server, use the subroutine:
+To establish a connection to a server, use the subroutine:
BSOCK *bnet\_open(void *jcr, char *host, char *service, int port, int *fatal)
bnet\_open(), if successful, returns the Bacula sock descriptor pointer to be
used in subsequent bnet\_send() and bnet\_read() requests. If not successful,
bnet\_open() returns a NULL. If fatal is set on return, it means that a fatal
error occurred and that you should not repeatedly call bnet\_open(). Any error
-message will generally be sent to the JCR.
+message will generally be sent to the JCR.
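Given the signature above, a connection attempt might look like the following
sketch (the host name, service name, port and {\tt my\_jcr} are placeholders;
error handling is reduced to the essentials, and {\tt bnet\_close()} is
described below):
\footnotesize
\begin{lstlisting}
static void connect_to_director(void *my_jcr)
{
   int fatal = 0;
   BSOCK *sock;

   /* "director.example.com" and port 9101 are illustrative values only. */
   sock = bnet_open(my_jcr, (char *)"director.example.com",
                    (char *)"bacula-dir", 9101, &fatal);
   if (!sock) {
      if (fatal) {
         return;        /* fatal error: do not call bnet_open() again */
      }
      return;           /* transient failure: may be retried later */
   }

   /* ... exchange packets with bnet_send()/bnet_recv() ... */

   bnet_close(sock);    /* terminate the connection */
}
\end{lstlisting}
\normalsize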
\section{bnet\_send}
\index{Bnet\_send}
-\addcontentsline{toc}{subsection}{bnet\_send}
-To send a packet, one uses the subroutine:
+To send a packet, one uses the subroutine:
int bnet\_send(BSOCK *sock) This routine is equivalent to a write() except
that it handles the low level details. The data to be sent is expected to be
in sock-\gt{}msg and be sock-\gt{}msglen bytes. To send a packet, bnet\_send()
first writes four bytes in network byte order that indicate the size of the
-following data packet. It returns:
+following data packet. It returns:
-\footnotesize
\begin{lstlisting}
Returns 0 on failure
Returns 1 on success
\end{lstlisting}
-\normalsize
In the case of a failure, an error message will be sent to the JCR contained
-within the bsock packet.
+within the bsock packet.
\section{bnet\_fsend}
\index{Bnet\_fsend}
-\addcontentsline{toc}{subsection}{bnet\_fsend}
-This form uses:
+This form uses:
int bnet\_fsend(BSOCK *sock, char *format, ...) and it allows you to send
formatted messages somewhat like fprintf(). The return status is the same as
-bnet\_send.
+bnet\_send.
\section{Additional Error information}
\index{Information!Additional Error}
\index{Additional Error information}
-\addcontentsline{toc}{subsection}{Additional Error information}
-Fro additional error information, you can call {\bf is\_bnet\_error(BSOCK
+For additional error information, you can call {\bf is\_bnet\_error(BSOCK
*bsock)} which will return 0 if there is no error or non-zero if there is an
error on the last transmission. The {\bf is\_bnet\_stop(BSOCK *bsock)}
function will return 0 if there are no errors and you can continue sending. It
will return non-zero if there are errors or the line is closed (no more
-transmissions should be sent).
+transmissions should be sent).
\section{bnet\_recv}
\index{Bnet\_recv}
-\addcontentsline{toc}{subsection}{bnet\_recv}
-To read a packet, one uses the subroutine:
+To read a packet, one uses the subroutine:
int bnet\_recv(BSOCK *sock) This routine is similar to a read() except that it
handles the low level details. bnet\_recv() first reads the packet length, which
follows as four bytes in network byte order. The data is read into
sock-\gt{}msg and is sock-\gt{}msglen bytes. If sock-\gt{}msg is not large
enough, bnet\_recv() will realloc() the buffer. It will return an error (-2) if
-maxbytes is less than the record size sent. It returns:
+maxbytes is less than the record size sent. It returns:
-\footnotesize
\begin{lstlisting}
* Returns number of bytes read
* Returns 0 on end of file
* Returns -1 on hard end of file (i.e. network connection close)
* Returns -2 on error
\end{lstlisting}
-\normalsize
-It should be noted that bnet\_recv() is a blocking read.
+It should be noted that bnet\_recv() is a blocking read.
\section{bnet\_sig}
\index{Bnet\_sig}
-\addcontentsline{toc}{subsection}{bnet\_sig}
-To send a ``signal'' from one daemon to another, one uses the subroutine:
+To send a ``signal'' from one daemon to another, one uses the subroutine:
-int bnet\_sig(BSOCK *sock, SIGNAL) where SIGNAL is one of the following:
+int bnet\_sig(BSOCK *sock, SIGNAL) where SIGNAL is one of the following:
\begin{enumerate}
-\item BNET\_EOF - deprecated use BNET\_EOD
-\item BNET\_EOD - End of data stream, new data may follow
-\item BNET\_EOD\_POLL - End of data and poll all in one
-\item BNET\_STATUS - Request full status
-\item BNET\_TERMINATE - Conversation terminated, doing close()
-\item BNET\_POLL - Poll request, I'm hanging on a read
-\item BNET\_HEARTBEAT - Heartbeat Response requested
-\item BNET\_HB\_RESPONSE - Only response permitted to HB
-\item BNET\_PROMPT - Prompt for UA
+\item BNET\_EOF - deprecated, use BNET\_EOD
+\item BNET\_EOD - End of data stream, new data may follow
+\item BNET\_EOD\_POLL - End of data and poll all in one
+\item BNET\_STATUS - Request full status
+\item BNET\_TERMINATE - Conversation terminated, doing close()
+\item BNET\_POLL - Poll request, I'm hanging on a read
+\item BNET\_HEARTBEAT - Heartbeat Response requested
+\item BNET\_HB\_RESPONSE - Only response permitted to HB
+\item BNET\_PROMPT - Prompt for UA
\end{enumerate}
\section{bnet\_strerror}
\index{Bnet\_strerror}
-\addcontentsline{toc}{subsection}{bnet\_strerror}
-Returns a formated string corresponding to the last error that occurred.
+Returns a formatted string corresponding to the last error that occurred.
\section{bnet\_close}
\index{Bnet\_close}
-\addcontentsline{toc}{subsection}{bnet\_close}
-The connection with the server remains open until closed by the subroutine:
+The connection with the server remains open until closed by the subroutine:
-void bnet\_close(BSOCK *sock)
+void bnet\_close(BSOCK *sock)
\section{Becoming a Server}
\index{Server!Becoming a}
\index{Becoming a Server}
-\addcontentsline{toc}{subsection}{Becoming a Server}
The bnet\_open() and bnet\_close() routines described above are used on the
client side to establish a connection and terminate a connection with the
server. To become a server (i.e. wait for a connection from a client), use the
routine {\bf bnet\_thread\_server}. The calling sequence is a bit complicated;
please refer to the code in bnet\_server.c and the code at the beginning of
-each daemon as examples of how to call it.
+each daemon as examples of how to call it.
\section{Higher Level Conventions}
\index{Conventions!Higher Level}
\index{Higher Level Conventions}
-\addcontentsline{toc}{subsection}{Higher Level Conventions}
Within Bacula, we have established the convention that any time a single
record is passed, it is sent with bnet\_send() and read with bnet\_recv().
-Thus the normal exchange between the server (S) and the client (C) are:
+Thus the normal exchange between the server (S) and the client (C) is:
-\footnotesize
\begin{lstlisting}
S: wait for connection C: attempt connection
S: accept connection C: bnet_send() send request
S: act on request
S: bnet_send() send ack C: bnet_recv() wait for ack
\end{lstlisting}
-\normalsize
Thus a single command is sent, acted upon by the server, and then
-acknowledged.
+acknowledged.
In certain cases, such as the transfer of the data for a file, all the
information or data cannot be sent in a single packet. In this case, the
convention is that the client will send a command to the server, who knows
that more than one packet will be returned. In this case, the server will
-enter a loop:
+enter a loop:
-\footnotesize
\begin{lstlisting}
while ((n = bnet_recv(bsock)) > 0) {
   /* act on request */
}
if (n < 0) {
   /* error */
}
\end{lstlisting}
-\normalsize
-The client will perform the following:
+The client will perform the following:
-\footnotesize
\begin{lstlisting}
bnet_send(bsock);
bnet_send(bsock);
...
bnet_sig(bsock, BNET_EOD);
\end{lstlisting}
-\normalsize
Thus the client will send multiple packets and signal to the server when all
-the packets have been sent by sending a zero length record.
+the packets have been sent by sending a zero length record.
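Putting these conventions together, the client side of such a multi-packet
transfer might look like the following sketch (the routine names and return
conventions are those documented above; buffer handling and error reporting
are simplified):
\footnotesize
\begin{lstlisting}
static int send_records(BSOCK *bsock, const char **records, int nrecs)
{
   int i;

   for (i = 0; i < nrecs; i++) {
      /* bnet_fsend() formats the data into bsock->msg, sets bsock->msglen
       * and transmits it as a single packet. */
      if (!bnet_fsend(bsock, "%s", records[i])) {
         return 0;                     /* send error */
      }
   }

   /* Signal that no more data packets follow (zero length record). */
   bnet_sig(bsock, BNET_EOD);

   /* Per the convention above, wait for the server's acknowledgement. */
   if (bnet_recv(bsock) <= 0) {
      return 0;
   }
   return 1;
}
\end{lstlisting}
\normalsize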
\label{_PlatformChapter}
\index{Support!Platform}
\index{Platform Support}
-\addcontentsline{toc}{section}{Platform Support}
\section{General}
\index{General }
-\addcontentsline{toc}{subsection}{General}
-This chapter describes the requirements for having a
+This chapter describes the requirements for having a
supported platform (Operating System). In general, Bacula is
quite portable. It supports 32 and 64 bit architectures as well
-as bigendian and littleendian machines. For full
-support, the platform (Operating System) must implement POSIX Unix
+as big endian and little endian machines. For full
+support, the platform (Operating System) must implement POSIX Unix
system calls. However, for File daemon support only, a small
-compatibility library can be written to support almost any
+compatibility library can be written to support almost any
architecture.
Currently Linux, FreeBSD, and Solaris are fully supported
platforms, which means that the code has been tested on those
machines and passes a full set of regression tests.
-In addition, the Windows File daemon is supported on most versions
-of Windows, and finally, there are a number of other platforms
-where the File daemon (client) is known to run: NetBSD, OpenBSD,
+In addition, the Windows File daemon is supported on most versions
+of Windows, and finally, there are a number of other platforms
+where the File daemon (client) is known to run: NetBSD, OpenBSD,
Mac OSX, SGI, ...
\section{Requirements to become a Supported Platform}
\index{Requirements!Platform}
\index{Platform Requirements}
-\addcontentsline{toc}{subsection}{Platform Requirements}
As mentioned above, in order to become a fully supported platform, it
must support POSIX Unix system calls. In addition, the following
a system administrator for the machine that is available. This
person need not be a developer/programmer but must be familiar
with system administration of the platform.
-\item There must be at least one person designated who will
+\item There must be at least one person designated who will
run regression tests prior to each release. Releases occur
approximately once every 6 months, but can be more frequent.
It takes at most a day's effort to set up the regression scripts
\item Ideally there are one or more persons who will package
each Bacula release.
\item Ideally there are one or more developers who can respond to
- and fix platform specific bugs.
+ and fix platform specific bugs.
\end{bsysitemize}
Ideal requirements for a test machine:
%%
%%
-\chapter{Bacula Porting Notes}
-\label{_ChapterStart1}
+\chapter{Bacula Porting Notes}\label{PortingChapter}
+%%%\label{_portingChapter}\label{_ChapterStart1}
\index{Notes!Bacula Porting}
\index{Bacula Porting Notes}
-\addcontentsline{toc}{section}{Bacula Porting Notes}
This document is intended mostly for developers who wish to port Bacula to a
-system that is not {\bf officially} supported.
+system that is not {\bf officially} supported.
It is hoped that Bacula clients will eventually run on every imaginable system
that needs backing up (perhaps even a Palm). It is also hoped that the Bacula
Director and Storage daemons will run on every system capable of supporting
-them.
+them.
\section{Porting Requirements}
\index{Requirements!Porting}
\index{Porting Requirements}
-\addcontentsline{toc}{section}{Porting Requirements}
-In General, the following holds true:
+In general, the following holds true:
\begin{bsysitemize}
\item {\bf Bacula} has been compiled and run on Linux RedHat, FreeBSD, and
- Solaris systems.
-\item In addition, clients exist on Win32, and Irix
-\item It requires GNU C++ to compile. You can try with other compilers, but
+ Solaris systems.
+\item In addition, clients exist on Win32 and Irix.
+\item It requires GNU C++ to compile. You can try with other compilers, but
you are on your own. The Irix client is built with the Irix compiler, but, in
- general, you will need GNU.
-\item Your compiler must provide support for 64 bit signed and unsigned
- integers.
+ general, you will need GNU.
+\item Your compiler must provide support for 64 bit signed and unsigned
+ integers.
\item You will need a recent copy of the {\bf autoconf} tools loaded on your
system (version 2.13 or later). The {\bf autoconf} tools are used to build
the configuration program, but are not part of the Bacula source
-distribution.
+distribution.
\item There are certain third party packages that Bacula needs. Except for
- MySQL, they can all be found in the {\bf depkgs} and {\bf depkgs1} releases.
+ MySQL, they can all be found in the {\bf depkgs} and {\bf depkgs1} releases.
\item To build the Win32 binaries, we use Microsoft VC++ standard
2003. Please see the instructions in
bacula-source/src/win32/README.win32 for more details. If you
your own version of the Win32 FD, so you are pretty much on
your own. You can ask the bacula-devel list for help, but
please don't expect much.
-\item {\bf Bacula} requires a good implementation of pthreads to work.
+\item {\bf Bacula} requires a good implementation of pthreads to work.
\item The source code has been written with portability in mind and is mostly
POSIX compatible. Thus porting to any POSIX compatible operating system
- should be relatively easy.
+ should be relatively easy.
\end{bsysitemize}
\section{Steps to Take for Porting}
\index{Porting!Steps to Take for}
\index{Steps to Take for Porting}
-\addcontentsline{toc}{section}{Steps to Take for Porting}
\begin{bsysitemize}
\item The first step is to ensure that you have version 2.13 or later of the
{\bf autoconf} tools loaded. You can skip this step, but making changes to
- the configuration program will be difficult or impossible.
+ the configuration program will be difficult or impossible.
\item Then run a {\bf ./configure} command in the main source directory and
- examine the output. It should look something like the following:
+ examine the output. It should look something like the following:
\footnotesize
\begin{lstlisting}
properly identified your host on the {\bf Host:} line. The first part (added
in version 1.27) is the GNU four part identification of your system. The part
after the -- is your system and the system version. Generally, if your system
-is not yet supported, you must correct these.
+is not yet supported, you must correct these.
\item If the {\bf ./configure} does not function properly, you must determine
the cause and fix it. Generally, it will be because some required system
- routine is not available on your machine.
-\item To correct problems with detection of your system type or with routines
+ routine is not available on your machine.
+\item To correct problems with detection of your system type or with routines
and libraries, you must edit the file {\bf
\lt{}bacula-src\gt{}/autoconf/configure.in}. This is the ``source'' from
which {\bf configure} is built. In general, most of the changes for your
{\bf unknown} you will need to make changes. Then as mentioned above, you
will need to set a number of system dependent items in {\bf configure.in} in
the {\bf case} statement at approximately line 1050 (depending on the Bacula
-release).
+release).
\item The items in the case statement that correspond to your system are
- the following:
+ the following:
\begin{bsysitemize}
\item DISTVER -- set to the version of your operating system. Typically some
- form of {\bf uname} obtains it.
+ form of {\bf uname} obtains it.
\item TAPEDRIVE -- the default tape drive. Not too important as the user can
- set it as an option.
+ set it as an option.
\item PSCMD -- set to the {\bf ps} command that will provide the PID in the
first field and the program name in the second field. If this is not set
properly, the {\bf bacula stop} script will most likely not be able to stop
-Bacula in all cases.
+Bacula in all cases.
\item hostname -- command to return the base host name (non-qualified) of
your system. This is generally the machine name. Not too important as the
- user can correct this in his configuration file.
+ user can correct this in his configuration file.
\item CFLAGS -- set any special compiler flags needed. Many systems need a
- special flag to make pthreads work. See cygwin for an example.
-\item LDFLAGS -- set any special loader flags. See cygwin for an example.
-\item PTHREAD\_LIB -- set for any special pthreads flags needed during
- linking. See freebsd as an example.
+ special flag to make pthreads work. See cygwin for an example.
+\item LDFLAGS -- set any special loader flags. See cygwin for an example.
+\item PTHREAD\_LIB -- set for any special pthreads flags needed during
+ linking. See freebsd as an example.
\item lld -- set so that a ``long long int'' will be properly edited in a
- printf() call.
-\item llu -- set so that a ``long long unsigned'' will be properly edited in
- a printf() call.
-\item PFILES -- set to add any files that you may define is your platform
+ printf() call.
+\item llu -- set so that a ``long long unsigned'' will be properly edited in
+ a printf() call.
+\item PFILES -- set to add any files that you may define in your platform
subdirectory. These files are used for installation of automatic system
- startup of Bacula daemons.
+ startup of Bacula daemons.
\end{bsysitemize}
\item To rebuild a new version of {\bf configure} from a changed {\bf
autoconf/configure.in} you enter {\bf make configure} in the top level Bacula
source directory. You must have done a ./configure prior to trying to rebuild
- the configure script or it will get into an infinite loop.
+ the configure script or it will get into an infinite loop.
\item If the {\bf make configure} gets into an infinite loop, ctl-c it, then
do {\bf ./configure} (no options are necessary) and retry the {\bf make
- configure}, which should now work.
-\item To rebuild {\bf configure} you will need to have {\bf autoconf} version
- 2.57-3 or higher loaded. Older versions of autoconf will complain about
- unknown or bad options, and won't work.
+ configure}, which should now work.
+\item To rebuild {\bf configure} you will need to have {\bf autoconf} version
+ 2.57-3 or higher loaded. Older versions of autoconf will complain about
+ unknown or bad options, and won't work.
\item After you have a working {\bf configure} script, you may need to make a
few system dependent changes to the way Bacula works. Generally, these are
done in {\bf src/baconfig.h}. You can find a few examples of system dependent
no definition for {\bf socklen\_t}, so it is made in this file. If your
system has structure alignment requirements, check the definition of BALIGN
in this file. Currently, all Bacula allocated memory is aligned on a {\bf
-double} boundary.
-\item If you are having problems with Bacula's type definitions, you might
- look at {\bf src/bc\_types.h} where all the types such as {\bf uint32\_t},
- {\bf uint64\_t}, etc. that Bacula uses are defined.
+double} boundary.
+\item If you are having problems with Bacula's type definitions, you might
+ look at {\bf src/bc\_types.h} where all the types such as {\bf uint32\_t},
+ {\bf uint64\_t}, etc. that Bacula uses are defined.
\end{bsysitemize}
\label{_ChapterStart8}
\index{Testing!Bacula Regression}
\index{Bacula Regression Testing}
-\addcontentsline{toc}{section}{Bacula Regression Testing}
\section{Setting up Regression Testing}
\index{Setting up Regression Testing}
-\addcontentsline{toc}{section}{Setting up Regression Testing}
This document is intended mostly for developers who wish to ensure that their
changes to Bacula don't introduce bugs in the base code. However, you
-don't need to be a developer to run the regression scripts, and we
+don't need to be a developer to run the regression scripts, and we
recommend running them before putting your system into production and before each
upgrade, especially if you build from source code. They are
simply shell scripts that drive Bacula through bconsole and then typically
\normalsize
If you want to test with SQLite and it is not installed on your system,
-you will need to download the latest depkgs release from Source Forge and
-unpack it into {\bf depkgs}, then simply:
+you will need to download the latest depkgs release from Source Forge and
+unpack it into {\bf depkgs}, then simply:
\footnotesize
\begin{lstlisting}
There are two different aspects of regression testing that this document will
-discuss: 1. Running the Regression Script, 2. Writing a Regression test.
+discuss: 1. Running the Regression Script, 2. Writing a Regression test.
\section{Running the Regression Script}
\index{Running the Regression Script}
\index{Script!Running the Regression}
-\addcontentsline{toc}{section}{Running the Regression Script}
There are a number of different tests that may be run, such as: the standard
set that uses disk Volumes and runs under any userid; a small set of tests
\subsection{Setting the Configuration Parameters}
\index{Setting the Configuration Parameters}
\index{Parameters!Setting the Configuration}
-\addcontentsline{toc}{subsection}{Setting the Configuration Parameters}
-There is nothing you need to change in the source directory.
-
+There is nothing you need to change in the source directory.
+
To begin:
\footnotesize
\normalsize
-The
+The
very first time you are going to run the regression scripts, you will
-need to create a custom config file for your system.
+need to create a custom config file for your system.
We suggest that you start by:
\footnotesize
\footnotesize
\begin{lstlisting}
-
+
# Where to get the source to be tested
BACULA_SOURCE="${HOME}/bacula/bacula"
TAPE_DRIVE="/dev/nst0"
# if you don't have an autochanger set AUTOCHANGER to /dev/null
AUTOCHANGER="/dev/sg0"
-# For two drive tests -- set to /dev/null if you do not have it
+# For two drive tests -- set to /dev/null if you do not have it
TAPE_DRIVE1="/dev/null"
# This must be the path to the autochanger including its name
OPENSSL="--with-openssl"
# You may put your real host name here, but localhost or 127.0.0.1
-# is valid also and it has the advantage that it works on a
+# is valid also and it has the advantage that it works on a
# non-networked machine
HOST="localhost"
-
+
\end{lstlisting}
\normalsize
\begin{bsysitemize}
-\item {\bf BACULA\_SOURCE} should be the full path to the Bacula source code
+\item {\bf BACULA\_SOURCE} should be the full path to the Bacula source code
that you wish to test. It will be loaded, configured, compiled, and
installed with the ``make setup'' command, which needs to be done only
once each time you change the source code.
be built before running a Bacula regression, if you are using SQLite. This
variable is ignored if you are using MySQL or PostgreSQL. To use PostgreSQL,
edit the Makefile and change (or add) WHICHDB?=``\lstinline+--with-postgresql+''. For
- MySQL use ``WHICHDB=''\lstinline+--with-mysql+``.
-
+ MySQL use WHICHDB=``\lstinline+--with-mysql+''.
+
The advantage of using SQLite is that it is totally independent of any
installation you may have running on your system, and there is no
special configuration or authorization that must be done to run it.
With both MySQL and PostgreSQL, you must pre-install the packages,
- initialize them and ensure that you have authorization to access the
+ initialize them and ensure that you have authorization to access the
database and create and delete tables.
\item {\bf TAPE\_DRIVE} is the full path to your tape drive. The base set of
second tape drive.
\item {\bf AUTOCHANGER} is the name of your autochanger control device. Set this to
- /dev/null if you do not have an autochanger.
+ /dev/null if you do not have an autochanger.
\item {\bf AUTOCHANGER\_PATH} is the full path including the program name for
your autochanger program (normally {\bf mtx}). Leave the default value if you
- do not have one.
+ do not have one.
\item {\bf TCPWRAPPERS} defines whether or not you want the ./configure
to be performed with tcpwrappers enabled.
\subsection{Building the Test Bacula}
\index{Building the Test Bacula}
\index{Bacula!Building the Test}
-\addcontentsline{toc}{subsection}{Building the Test Bacula}
Once the above variables are set, you can build the setup by entering:
\subsection{Setting up your SQL engine}
\index{Setting up your SQL engine}
-\addcontentsline{toc}{subsection}{Setting up your SQL engine}
-If you are using SQLite or SQLite3, there is nothing more to do; you can
+If you are using SQLite or SQLite3, there is nothing more to do; you can
simply run the tests as described in the next section.
If you are using MySQL or PostgreSQL, you will need to establish an
create\_mysql\_database, create\_postgresql\_database, grant\_mysql\_privileges,
and grant\_postgresql\_privileges may be of help to you.
-Generally, to do the above, you will need to run under root to
+Generally, to do the above, you will need to run as root to
be able to create databases and modify permissions within MySQL and
PostgreSQL.
\subsection{Running the Disk Only Regression}
\index{Regression!Running the Disk Only}
\index{Running the Disk Only Regression}
-\addcontentsline{toc}{subsection}{Running the Disk Only Regression}
The simplest way to copy the source code, configure it, compile it, link
it, and run the tests is to use a helper script:
\footnotesize
\begin{lstlisting}
Test results
-
+
===== Bacula tape test OK =====
===== Small File Size test OK =====
===== restore-by-file-tape test OK =====
Each separate test is self-contained in that it initializes to run Bacula from
scratch (i.e. newly created database). It will also kill any Bacula session
that is currently running. In addition, it uses ports 8101, 8102, and 8103 so
-that it does not intefere with a production system.
+that it does not interfere with a production system.
Alternatively, you can do the ./do\_disk work by hand with:
Once Bacula is built, you can run the basic disk only non-root regression test
-by entering:
+by entering:
\footnotesize
\begin{lstlisting}
\subsection{Other Tests}
\index{Other Tests}
\index{Tests!Other}
-\addcontentsline{toc}{subsection}{Other Tests}
There are a number of other tests that can be run as well. All the tests are
simple shell scripts kept in the regress directory. For example, the ``make
\index{all\_non-root-tests}
All non-tape tests not requiring root. This is the standard set of tests,
that, in general, back up some data, then restore it, and finally compare the
-restored data with the original data.
+restored data with the original data.
\item [all-root-tests]
\index{all-root-tests}
All non-tape tests requiring root permission. These are a relatively small
number of tests that require running as root. The amount of data backed up
can be quite large. For example, one test backs up /usr, another backs up
-/etc. One or more of these tests reports an error -- I'll fix it one day.
+/etc. One or more of these tests reports an error -- I'll fix it one day.
\item [all-non-root-tape-tests]
\index{all-non-root-tape-tests}
All tape tests not requiring root. There are currently three tests, all run
without being root, that back up to a tape. The first two tests use one volume,
and the third test requires an autochanger, and uses two volumes. If you
-don't have an autochanger, then this script will probably produce an error.
+don't have an autochanger, then this script will probably produce an error.
\item [all-tape-and-file-tests]
\index{all-tape-and-file-tests}
All tape and file tests not requiring root. This includes just about
-everything, and I don't run it very often.
+everything, and I don't run it very often.
\end{description}
\subsection{If a Test Fails}
\index{Fails!If a Test}
\index{If a Test Fails}
-\addcontentsline{toc}{subsection}{If a Test Fails}
-If you one or more tests fail, the line output will be similar to:
+If one or more tests fail, the line output will be similar to:
\footnotesize
\begin{lstlisting}
...
\end{lstlisting}
-All regression scripts must be run by hand or by calling the test scripts.
+All regression scripts must be run by hand or by calling the test scripts.
These are principally scripts that begin with {\bf all\_...} such as {\bf all\_disk\_tests},
{\bf ./all\_test} ...
-None of the
+None of the
{\bf ./do\_disk}, {\bf ./do\_all}, {\bf ./nightly...} scripts will work.
If you want to switch back to running the regression scripts from source, first
\section{Running a Single Test}
\index{Running a Single Test}
-\addcontentsline{toc}{section}{Running a Single Test}
If you wish to run a single test, you can simply:
\section{Writing a Regression Test}
\index{Test!Writing a Regression}
\index{Writing a Regression Test}
-\addcontentsline{toc}{section}{Writing a Regression Test}
Any developer, who implements a major new feature, should write a regression
test that exercises and validates the new feature. Each regression test is a
complete test by itself. It terminates any running Bacula, initializes the
-database, starts Bacula, then runs the test by using the console program.
+database, starts Bacula, then runs the test by using the console program.
\subsection{Running the Tests by Hand}
\index{Hand!Running the Tests by}
\index{Running the Tests by Hand}
-\addcontentsline{toc}{subsection}{Running the Tests by Hand}
You can run any individual test by hand by cd'ing to the {\bf regress}
-directory and entering:
+directory and entering:
\footnotesize
\begin{lstlisting}
\subsection{Directory Structure}
\index{Structure!Directory}
\index{Directory Structure}
-\addcontentsline{toc}{subsection}{Directory Structure}
-The directory structure of the regression tests is:
+The directory structure of the regression tests is:
\footnotesize
\begin{lstlisting}
\subsection{Adding a New Test}
\index{Adding a New Test}
\index{Test!Adding a New}
-\addcontentsline{toc}{subsection}{Adding a New Test}
If you want to write a new regression test, it is best to start with one of
-the existing test scripts, and modify it to do the new test.
+the existing test scripts, and modify it to do the new test.
When adding a new test, be extremely careful about adding anything to any of
the daemons' configuration files. The reason is that it may change the prompts
that are sent to the console. For example, adding a Pool means that the
current scripts, which assume that Bacula automatically selects a Pool, will
now be presented with a new prompt, so the test will fail. If you need to
-enhance the configuration files, consider making your own versions.
+enhance the configuration files, consider making your own versions.
\subsection{Running a Test Under The Debugger}
\index{Debugger}
-\addcontentsline{toc}{subsection}{Running a Test Under The Debugger}
You can run a test under the debugger (actually run a Bacula daemon
under the debugger) by first setting the environment variable
{\bf REGRESS\_WAIT} with commands such as:
\begin{lstlisting}
cd .../regress/bin
-gdb bacula-sd
+gdb bacula-sd
(possibly set breakpoints, ...)
run -s -f
\end{lstlisting}
-Then enter any character in the window with the above message.
+Then enter any character in the window with the above message.
An error message will appear saying that the daemon you are debugging
is already running, which is the case. You can simply ignore the
-error message.
+error message.
-%%
-%%
-
-%\addcontentsline{lof}{figure}{Smart Memory Allocation with Orphaned Buffer
-
\chapter{Smart Memory Allocation}
\label{_ChapterStart4}
\index{Detection!Smart Memory Allocation With Orphaned Buffer }
\index{Smart Memory Allocation With Orphaned Buffer Detection }
-\addcontentsline{toc}{section}{Smart Memory Allocation With Orphaned Buffer
-Detection}
\bsysimageH{smartall}{Smart Memory Allocation with Orphaned Buffer Detection}{}
Few things are as embarrassing as a program that leaks, yet few errors are so
problem is identified much later in the testing cycle (or even worse, when the
code is in the hands of a customer). When program testing is complete, simply
recompiling with different flags removes SMARTALLOC from your program,
-permitting it to run without speed or storage penalties.
+permitting it to run without speed or storage penalties.
In addition to detecting orphaned buffers, SMARTALLOC also helps to find other
common problems in management of dynamic storage including storing before the
some systems. SMARTALLOC does not conflict with malloc\_debug and both may be
used together, if you wish. SMARTALLOC makes no assumptions regarding the
internal structure of the heap and thus should be compatible with any C
-language implementation of the standard memory allocation functions.
+language implementation of the standard memory allocation functions.
\subsection{ Installing SMARTALLOC}
\index{SMARTALLOC!Installing }
\index{Installing SMARTALLOC }
-\addcontentsline{toc}{subsection}{Installing SMARTALLOC}
-SMARTALLOC is provided as a Zipped archive,
+SMARTALLOC is provided as a Zipped archive,
\elink{smartall.zip}{http://www.fourmilab.ch/smartall/smartall.zip}; see the
-download instructions below.
+download instructions below.
-To install SMARTALLOC in your program, simply add the statement:
+To install SMARTALLOC in your program, simply add the statement:
to every C program file which calls any of the memory allocation functions
({\tt malloc}, {\tt calloc}, {\tt free}, etc.). The SMARTALLOC symbol must be defined for
compilation before the inclusion of smartall.h. I usually do this by having my
Makefile add the ``{\tt -DSMARTALLOC}'' option to the C compiler for
non-production builds. You can define the symbol manually, if you prefer, by
-adding the statement:
+adding the statement:
-{\tt \#define SMARTALLOC}
+{\tt \#define SMARTALLOC}
At the point where your program is all done and ready to relinquish control to
-the operating system, add the call:
+the operating system, add the call:
-{\tt \ \ \ \ \ \ \ \ sm\_dump(}{\it datadump}{\tt );}
+{\tt \ \ \ \ \ \ \ \ sm\_dump(}{\it datadump}{\tt );}
where {\it datadump} specifies whether the contents of orphaned buffers are to
be dumped in addition to printing their size and place of allocation. The data
be identified from the information this prints about it, replace the statement
with ``{\tt sm\_dump(1)};''. Usually the dump of the buffer's data will
furnish the additional clues you need to excavate and extirpate the elusive
-error that left the buffer allocated.
+error that left the buffer allocated.
Finally, add the files ``smartall.h'' and ``smartall.c'' from this release to
your source directory, make dependencies, and linker input. You needn't make
defined it generates no code, so you may always include it knowing it will
waste no storage in production builds. Now when you run your program, if it
leaves any buffers around when it's done, each will be reported by {\tt
-sm\_dump()} on stderr as follows:
+sm\_dump()} on stderr as follows:
-\footnotesize
\begin{lstlisting}
Orphaned buffer: 120 bytes allocated at line 50 of gutshot.c
\end{lstlisting}
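A complete toy program illustrating the installation steps above might look
like this sketch (compile with {\tt -DSMARTALLOC} and link with
{\tt smartall.c}; the deliberately unreleased buffer produces an ``Orphaned
buffer'' report like the one shown):
\footnotesize
\begin{lstlisting}
#include <stdio.h>
#include <string.h>
#include <stdlib.h>
#include "smartall.h"      /* redefines malloc(), free(), etc. when
                            * SMARTALLOC is defined at compile time */

int main(void)
{
   char *kept = (char *)malloc(120);  /* deliberately never freed */
   char *temp = (char *)malloc(64);

   strcpy(kept, "this buffer is never released");
   free(temp);             /* released correctly -- not reported */

   /* Report any orphaned buffers; the argument 0 suppresses dumping
    * of the buffers' contents. */
   sm_dump(0);
   return 0;
}
\end{lstlisting}
\normalsize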
\subsection{ Squelching a SMARTALLOC}
\index{SMARTALLOC!Squelching a }
\index{Squelching a SMARTALLOC }
-\addcontentsline{toc}{subsection}{Squelching a SMARTALLOC}
Usually, when you first install SMARTALLOC in an existing program you'll find
it nattering about lots of orphaned buffers. Some of these turn out to be
about these buffers by adding code to release them, but by doing so you're
adding unnecessary complexity and code size to your program just to silence
the nattering of a SMARTALLOC, so an escape hatch is provided to eliminate the
-need to release these buffers.
+need to release these buffers.
Normally all storage allocated with the functions {\tt malloc()}, {\tt
calloc()}, and {\tt realloc()} is monitored by SMARTALLOC. If you make the
-function call:
+function call:
-\footnotesize
\begin{lstlisting}
sm_static(1);
\end{lstlisting}
be allocated when {\tt sm\_dump()} is called. I use a call on ``{\tt
sm\_static(1);}'' before I allocate things like program configuration tables
so I don't have to add code to release them at end of program time. After
-allocating unmonitored data this way, be sure to add a call to:
+allocating unmonitored data this way, be sure to add a call to:
-\footnotesize
\begin{lstlisting}
sm_static(0);
\end{lstlisting}
to resume normal monitoring of buffer allocations. Buffers allocated while
{\tt sm\_static(1}) is in effect are not checked for having been orphaned but
all the other safeguards provided by SMARTALLOC remain in effect. You may
-release such buffers, if you like; but you don't have to.
+release such buffers, if you like; but you don't have to.
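For example, a configuration table that is meant to live for the life of the
program can be excluded from orphan detection as follows (a sketch of the
pattern described above):
\footnotesize
\begin{lstlisting}
#include <stdlib.h>
#include "smartall.h"

struct config_entry { int value; };      /* illustrative type only */
static struct config_entry *config_table;

void load_config(int nentries)
{
   sm_static(1);      /* suspend orphaned-buffer tracking */
   config_table = (struct config_entry *)
                  calloc(nentries, sizeof(struct config_entry));
   sm_static(0);      /* resume normal monitoring */

   /* config_table is intentionally never freed; sm_dump() will not
    * report it as an orphan. */
}
\end{lstlisting}
\normalsize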
\subsection{ Living with Libraries}
\index{Libraries!Living with }
\index{Living with Libraries }
-\addcontentsline{toc}{subsection}{Living with Libraries}
Some library functions for which source code is unavailable may gratuitously
allocate and return buffers that contain their results, or require you to pass
particularly since this kind of ill-structured dynamic storage management is
the source of so many storage leaks. Without source code, however, there's no
option but to provide a way to bypass SMARTALLOC for the buffers the library
-allocates and/or releases with the standard system functions.
+allocates and/or releases with the standard system functions.
For each function {\it xxx} redefined by SMARTALLOC, a corresponding routine
named ``{\tt actually}{\it xxx}'' is furnished which provides direct access to
-the underlying system function, as follows:
+the underlying system function, as follows:
\begin{quote}
-
-\begin{longtable}{ll}
-\multicolumn{1}{l }{\bf Standard function } & \multicolumn{1}{l }{\bf Direct
-access function } \\
-{{\tt malloc(}{\it size}{\tt )} } & {{\tt actuallymalloc(}{\it size}{\tt )}
-} \\
-{{\tt calloc(}{\it nelem}{\tt ,} {\it elsize}{\tt )} } & {{\tt
-actuallycalloc(}{\it nelem}, {\it elsize}{\tt )} } \\
-{{\tt realloc(}{\it ptr}{\tt ,} {\it size}{\tt )} } & {{\tt
-actuallyrealloc(}{\it ptr}, {\it size}{\tt )} } \\
-{{\tt free(}{\it ptr}{\tt )} } & {{\tt actuallyfree(}{\it ptr}{\tt )} }
-
-\end{longtable}
-
+\LTXtable{\linewidth}{table_systemsfunctions}
\end{quote}
For example, suppose there exists a system library function named ``{\tt
with {\tt malloc()}, you can't use SMARTALLOC's {\tt free()}, as that call
expects information placed in the buffer by SMARTALLOC's special version of
{\tt malloc()}, and hence would report an error. To release the buffer you
-should call {\tt actuallyfree()}, as in this code fragment:
+should call {\tt actuallyfree()}, as in this code fragment:
-\footnotesize
\begin{lstlisting}
struct image *ibuf = getimage("ratpack.img");
display_on_screen(ibuf);
buffer allocated by SMARTALLOC's allocation routines, as it contains special
information that the system {\tt free()} doesn't expect to be there. The
following code uses {\tt actuallymalloc()} to obtain the buffer passed to such
-a routine.
+a routine.
-\footnotesize
\begin{lstlisting}
struct image *obuf =
(struct image *) actuallymalloc(sizeof(struct image));
subvert the error checking of SMARTALLOC; if you want to disable orphaned
buffer detection, use the {\tt sm\_static(1)} mechanism described above. That
way you don't forfeit all the other advantages of SMARTALLOC as you do when
-using {\tt actuallymalloc()} and {\tt actuallyfree()}.
+using {\tt actuallymalloc()} and {\tt actuallyfree()}.
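Putting both directions together, here is a sketch using two hypothetical
library routines, {\tt libgetbuf()} and {\tt libtakebuf()}, that allocate and
release buffers with the standard system functions:
\begin{lstlisting}
#include <stdlib.h>
#include "smartall.h"

extern char *libgetbuf(void);        /* returns a buffer it malloc()ed  */
extern void  libtakebuf(char *buf);  /* free()s the buffer it is given  */

void example(void)
{
   char *in = libgetbuf();           /* allocated by the library        */
   /* ... use the buffer ... */
   actuallyfree(in);                 /* not SMARTALLOC's free()         */

   char *out = (char *) actuallymalloc(64);  /* no SMARTALLOC header    */
   /* ... fill the buffer ... */
   libtakebuf(out);                  /* library releases it with free() */
}
\end{lstlisting}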
\subsection{ SMARTALLOC Details}
\index{SMARTALLOC Details }
\index{Details!SMARTALLOC }
-\addcontentsline{toc}{subsection}{SMARTALLOC Details}
When you include ``smartall.h'' and define SMARTALLOC, the following standard
system library functions are redefined with the \#define mechanism to call
corresponding functions within smartall.c instead. (For details of the
-redefinitions, please refer to smartall.h.)
+redefinitions, please refer to smartall.h.)
-\footnotesize
\begin{lstlisting}
void *malloc(size_t size)
void *calloc(size_t nelem, size_t elsize)
\end{lstlisting}
\normalsize
-{\tt cfree()} is a historical artifact identical to {\tt free()}.
+{\tt cfree()} is a historical artifact identical to {\tt free()}.
In addition to allocating storage in the same way as the standard library
functions, the SMARTALLOC versions expand the buffers they allocate to include
chain of allocated buffers, to find all orphaned buffers. Buffers allocated
while {\tt sm\_static(1)} is in effect are specially flagged so that, despite
appearing on the allocated buffer chain, {\tt sm\_dump()} will not deem them
-orphans.
+orphans.
When a buffer is allocated by {\tt malloc()} or expanded with {\tt realloc()},
all bytes of newly allocated storage are set to the hexadecimal value 0x55
nonzero pattern is intended to catch code that erroneously assumes newly
allocated buffers are cleared to zero; in fact their contents are random. The
{\tt calloc()} function, defined as returning a buffer cleared to zero,
-continues to zero its buffers under SMARTALLOC.
+continues to zero its buffers under SMARTALLOC.
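A small sketch of the kind of error this exposes: under SMARTALLOC the {\tt
malloc()} buffer is filled with 0x55 bytes, so code that assumes it is zeroed
fails immediately, while the {\tt calloc()} buffer really is zeroed:
\begin{lstlisting}
#include <assert.h>
#include <stdlib.h>
#include "smartall.h"

void demo(void)
{
   int *a = (int *) malloc(10 * sizeof(int));
   int *b = (int *) calloc(10, sizeof(int));

   /* Wrong with any allocator, and caught at once under SMARTALLOC,
      because a[0] now holds 0x55555555 rather than 0:                */
   /* assert(a[0] == 0); */

   assert(b[0] == 0);                /* calloc() buffers remain zeroed */

   free(a);
   free(b);
}
\end{lstlisting}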
Buffers obtained with the SMARTALLOC functions contain a special sentinel byte
at the end of the user data area. This byte is set to a special key value
function will fail. This catches incorrect program code that stores beyond the
storage allocated for the buffer. At {\tt free()} time the queue links are
also validated and an assertion failure will occur if the program has
-destroyed them by storing before the start of the allocated storage.
+destroyed them by storing before the start of the allocated storage.
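For example, an off-by-one store such as the following sketch (not Bacula
code) clobbers the sentinel byte, and the damage is reported at {\tt free()}
time:
\begin{lstlisting}
#include <stdlib.h>
#include <string.h>
#include "smartall.h"

void overrun(void)
{
   char *p = (char *) malloc(8);
   memset(p, 'x', 9);                /* writes one byte past the buffer,
                                        clobbering the sentinel byte    */
   free(p);                          /* SMARTALLOC detects the overrun
                                        here and raises an assertion    */
}
\end{lstlisting}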
In addition, when a buffer is released with {\tt free()}, its contents are
immediately destroyed by overwriting them with the hexadecimal pattern 0xAA
this is {\it legal} in the standard Unix memory allocation package, which
permits programs to free() buffers, then raise them from the grave with {\tt
realloc()}. Such program ``logic'' should be fixed, not accommodated, and
-SMARTALLOC brooks no such Lazarus buffer`` nonsense.
+SMARTALLOC brooks no such ``Lazarus buffer'' nonsense.
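The forbidden pattern looks like the following sketch; because {\tt free()}
has already destroyed the buffer's contents and bookkeeping, the attempted
resurrection fails under SMARTALLOC:
\begin{lstlisting}
#include <stdlib.h>
#include "smartall.h"

void lazarus(void)
{
   char *p = (char *) malloc(32);
   free(p);                          /* contents overwritten with 0xAA  */
   p = (char *) realloc(p, 64);      /* tolerated by some allocators,
                                        rejected under SMARTALLOC       */
}
\end{lstlisting}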
Some C libraries allow a zero size argument in calls to {\tt malloc()}. Since
this is far more likely to indicate a program error than a defensible
-programming stratagem, SMARTALLOC disallows it with an assertion.
+programming stratagem, SMARTALLOC disallows it with an assertion.
When the standard library {\tt realloc()} function is called to expand a
buffer, it attempts to expand the buffer in place if possible, moving it only
buffers, trading error detection for performance. Although not specified in
the System V Interface Definition, many C library implementations of {\tt
realloc()} permit an old buffer argument of NULL, causing {\tt realloc()} to
-allocate a new buffer. The SMARTALLOC version permits this.
+allocate a new buffer. The SMARTALLOC version permits this.
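A trivial sketch of the accepted NULL old-buffer form:
\begin{lstlisting}
#include <stdlib.h>
#include "smartall.h"

void grow(void)
{
   char *p = NULL;
   p = (char *) realloc(p, 128);     /* NULL old buffer: simply allocates
                                        a new 128-byte buffer            */
   free(p);
}
\end{lstlisting}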
\subsection{ When SMARTALLOC is Disabled}
\index{When SMARTALLOC is Disabled }
\index{Disabled!When SMARTALLOC is }
-\addcontentsline{toc}{subsection}{When SMARTALLOC is Disabled}
When SMARTALLOC is disabled by compiling a program with the symbol SMARTALLOC
not defined, calls on the functions otherwise redefined by SMARTALLOC go
{\tt sm\_dump()} and {\tt sm\_static()}, are defined to generate no code
(hence the null statement). Finally, if SMARTALLOC is not defined, compilation
of the file smartall.c generates no code or data at all, effectively removing
-it from the program even if named in the link instructions.
+it from the program even if named in the link instructions.
Thus, except for unusual circumstances, a program that works with SMARTALLOC
defined for testing should require no changes when built without it for
-production release.
+production release.
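A sketch of the usual arrangement (file names, compiler flags, and the
one-argument {\tt sm\_dump()} call are illustrative, following the
description above rather than Bacula sources): the source always includes
smartall.h, and only the test build defines SMARTALLOC:
\begin{lstlisting}
/* myprog.c -- identical source for test and production builds.
 *
 *   test build:        cc -DSMARTALLOC -o myprog myprog.c smartall.c
 *   production build:  cc -o myprog myprog.c smartall.c
 *                      (smartall.c then generates no code or data)
 */
#include <stdlib.h>
#include "smartall.h"                /* harmless when SMARTALLOC is undefined */

int main(void)
{
   char *p = (char *) malloc(100);   /* plain malloc() in production,
                                        monitored malloc() under SMARTALLOC */
   free(p);
   sm_dump(0);                       /* reports orphaned buffers; expands
                                        to a null statement without
                                        SMARTALLOC                          */
   return 0;
}
\end{lstlisting}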
\subsection{ The {\tt alloc()} Function}
\index{Function!alloc }
\index{Alloc() Function }
-\addcontentsline{toc}{subsection}{alloc() Function}
Many programs I've worked on use very few direct calls to {\tt malloc()},
using the identically declared {\tt alloc()} function instead. Alloc detects
out-of-memory conditions and aborts, removing the need for error checking on
every call of {\tt malloc()} (and the temptation to skip checking for
-out-of-memory).
+out-of-memory).
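A sketch of the calling style this enables (the list type is illustrative,
and {\tt alloc()} is assumed to be declared identically to {\tt malloc()}, as
stated above); no return-value check is needed because {\tt alloc()} aborts
on an out-of-memory condition:
\begin{lstlisting}
#include <stdlib.h>
#include "alloc.h"                   /* declares alloc()                */

struct node { struct node *next; int value; };

struct node *new_node(int value)
{
   /* No NULL check: alloc() aborts on failure, and under SMARTALLOC it
      reports the file and line of the failing call.                   */
   struct node *n = (struct node *) alloc(sizeof(struct node));
   n->next  = NULL;
   n->value = value;
   return n;
}
\end{lstlisting}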
As a convenience, SMARTALLOC supplies a compatible version of {\tt alloc()} in
the file alloc.c, with its definition in the file alloc.h. This version of
SMARTALLOC's orphaned buffer detection. In addition, when SMARTALLOC is
defined and {\tt alloc()} detects an out of memory condition, it takes
advantage of the SMARTALLOC diagnostic information to identify the file and
-line number of the call on {\tt alloc()} that failed.
+line number of the call on {\tt alloc()} that failed.
\subsection{ Overlays and Underhandedness}
\index{Underhandedness!Overlays and }
\index{Overlays and Underhandedness }
-\addcontentsline{toc}{subsection}{Overlays and Underhandedness}
String constants in the C language are considered to be static arrays of
characters accessed through a pointer constant. The arrays are potentially
not overlay its data among modules. If data are overlayed, the area of memory
which contained the file name at the time it was saved in the buffer may
contain something else entirely when {\tt sm\_dump()} gets around to using the
-pointer to edit the file name which allocated the buffer.
+pointer to edit the file name which allocated the buffer.
If you want to use SMARTALLOC in a program with overlayed data, you'll have to
modify smartall.c to either copy the file name to a fixed-length field added
prove a problem. Note that conventional overlaying of code, by far the most
common form of overlaying, poses no problems for SMARTALLOC; you need only be
concerned if you're using exotic tools for data overlaying on MS-DOS or other
-address-space-challenged systems.
+address-space-challenged systems.
Since a C language ``constant'' string can actually be written into, most C
compilers generate a unique copy of each string used in a module, even if the
multiple occurrences of constant strings, enabling this mode will eliminate
the overhead for these strings. Of course, it's up to you to make sure
choosing this compiler mode won't wreak havoc on some other part of your
-program.
+program.
\subsection{ Test and Demonstration Program}
\index{Test and Demonstration Program }
\index{Program!Test and Demonstration }
-\addcontentsline{toc}{subsection}{Test and Demonstration Program}
A test and demonstration program, smtest.c, is supplied with SMARTALLOC. You
can build this program with the Makefile included. Please refer to the
comments in smtest.c and the Makefile for information on this program. If
you're attempting to use SMARTALLOC on a new machine or with a new compiler or
-operating system, it's a wise first step to check it out with smtest first.
+operating system, it's a wise first step to check it out with smtest.
\subsection{ Invitation to the Hack}
\index{Hack!Invitation to the }
\index{Invitation to the Hack }
-\addcontentsline{toc}{subsection}{Invitation to the Hack}
SMARTALLOC is not intended to be a panacea for storage management problems,
nor is it universally applicable or effective; it's another weapon in the
which has been used in several commercial software products which have,
collectively, sold more than a third of a million copies in the retail market,
and can be expected to continue to develop through time as it is applied to
-ever more demanding projects.
+ever more demanding projects.
The version of SMARTALLOC here has been tested on a Sun SPARCStation, Silicon
Graphics Indigo2, and on MS-DOS using both Borland and Microsoft C. Moving
about prototyping of functions, whether the type returned by buffer allocation
is {\tt char\ *} or {\tt void\ *}, and so forth, but following those changes
it works in a variety of environments. I hope you'll find SMARTALLOC as useful
-for your projects as I've found it in mine.
+for your projects as I've found it in mine.
\section{
-\elink{}{http://www.fourmilab.ch/smartall/smartall.zip}
+\elink{}{http://www.fourmilab.ch/smartall/smartall.zip}
\elink{Download smartall.zip}{http://www.fourmilab.ch/smartall/smartall.zip}
(Zipped archive)}
\index{Archive! Download smartall.zip Zipped }
\index{ Download smartall.zip (Zipped archive) }
-\addcontentsline{toc}{section}{ Download smartall.zip (Zipped archive)}
-SMARTALLOC is provided as
-\elink{smartall.zip}{http://www.fourmilab.ch/smartall/smartall.zip}, a
+SMARTALLOC is provided as
+\elink{smartall.zip}{http://www.fourmilab.ch/smartall/smartall.zip}, a
\elink{Zipped}{http://www.pkware.com/} archive containing source code,
-documentation, and a {\tt Makefile} to build the software under Unix.
+documentation, and a {\tt Makefile} to build the software under Unix.
\subsection{ Copying}
\index{Copying }
-\addcontentsline{toc}{subsection}{Copying}
\begin{quote}
SMARTALLOC is in the public domain. Permission to use, copy, modify, and
distribute this software and its documentation for any purpose and without fee
is hereby granted, without any conditions or restrictions. This software is
-provided ''as is`` without express or implied warranty.
+provided ``as is'' without express or implied warranty.
\end{quote}
-{\it
+{\it
\elink{by John Walker}{http://www.fourmilab.ch}
-October 30th, 1998 }
+October 30th, 1998 }
\label{_ChapterStart3}
\index{Storage Daemon Design }
\index{Design!Storage Daemon }
-\addcontentsline{toc}{section}{Storage Daemon Design}
This chapter is intended to be a technical discussion of the Storage daemon
services and as such is not targeted at end users but rather at developers and
\section{SD Design Introduction}
\index{Introduction!SD Design }
\index{SD Design Introduction }
-\addcontentsline{toc}{section}{SD Design Introduction}
The Bacula Storage daemon provides storage resources to a Bacula installation.
An individual Storage daemon is associated with a physical permanent storage
\section{SD Development Outline}
\index{Outline!SD Development }
\index{SD Development Outline }
-\addcontentsline{toc}{section}{SD Development Outline}
In order to provide a high performance backup and restore solution that scales
to very large capacity devices and networks, the storage daemon must be able
\section{SD Connections and Sessions}
\index{Sessions!SD Connections and }
\index{SD Connections and Sessions }
-\addcontentsline{toc}{section}{SD Connections and Sessions}
A client connects to a storage server by initiating a conventional TCP
connection. The storage server accepts the connection unless its maximum
\subsection{SD Append Requests}
\index{Requests!SD Append }
\index{SD Append Requests }
-\addcontentsline{toc}{subsection}{SD Append Requests}
\begin{description}
\subsection{SD Read Requests}
\index{SD Read Requests }
\index{Requests!SD Read }
-\addcontentsline{toc}{subsection}{SD Read Requests}
\begin{description}
\end{description}
{\it by
-\elink{John Walker}{http://www.fourmilab.ch/}
-January 30th, MM }
+\elink{John Walker}{http://www.fourmilab.ch/} January 30th, MM }
\section{SD Data Structures}
\index{SD Data Structures}
-\addcontentsline{toc}{section}{SD Data Structures}
In the Storage daemon, there is a Device resource (i.e., defined in the conf file)
that describes each physical device. When the physical device is used it
the device. However, multiple Jobs (defined by a JCR structure in src/jcr.h)
can be writing a physical DEVICE at the same time (of course they are
sequenced by locking the DEVICE structure). There are a lot of job
-dependent "device" variables that may be different for each Job such as
+dependent ``device'' variables that may be different for each Job such as
spooling (one job may spool and another may not, and when a job is
spooling, it must have an I/O packet open; each job has its own record and
block structures, ...), so there is a device control record or DCR that is
\section{Introduction to TLS}
\index{TLS Introduction}
\index{Introduction!TLS}
-\addcontentsline{toc}{section}{TLS Introduction}
This patch includes all the back-end code necessary to add complete TLS
data encryption support to Bacula. In addition, support for TLS in
Adding support for the remaining daemons will be straightforward.
Supported features of this patchset include:
-\begin{bsysitemize}
-\item Client/Server TLS Requirement Negotiation
+\begin{bsysitemize}
+\item Client/Server TLS Requirement Negotiation
\item TLSv1 Connections with Server and Client Certificate
-Validation
-\item Forward Secrecy Support via Diffie-Hellman Ephemeral Keying
+Validation
+\item Forward Secrecy Support via Diffie-Hellman Ephemeral Keying
\end{bsysitemize}
This document will refer to both ``server'' and ``client'' contexts. These
\section{New Configuration Directives}
\index{TLS Configuration Directives}
\index{Directives!TLS Configuration}
-\addcontentsline{toc}{section}{New Configuration Directives}
Additional configuration directives have been added to both the Console and
Director resources. These new directives are defined as follows:
secrecy of communications. This directive is only valid within a server
context; a configuration sketch follows this list. To generate the parameter
file, you may use openssl:
\footnotesize
-\begin{lstlisting}
-openssl dhparam -out dh1024.pem -5 1024
+\begin{lstlisting}
+openssl dhparam -out dh1024.pem -5 1024
\end{lstlisting}
\normalsize
\end{bsysitemize}
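As an illustrative sketch of how the Diffie-Hellman parameter file generated
above might be referenced, assuming the new directives follow the usual
{\tt TLS ...} naming (resource names and paths are placeholders, and only
TLS-related directives are shown):
\footnotesize
\begin{lstlisting}
# TLS-related directives only; other required directives omitted.
Director {
  Name = backup-dir
  TLS Enable = yes
  TLS Require = yes
  TLS Certificate = /etc/bacula/tls/server-cert.pem
  TLS Key = /etc/bacula/tls/server-key.pem
  TLS CA Certificate File = /etc/bacula/tls/ca-cert.pem
  TLS DH File = /etc/bacula/tls/dh1024.pem
}
\end{lstlisting}
\normalsize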
\section{TLS API Implementation}
\index{TLS API Implementation}
\index{API Implementation!TLS}
-\addcontentsline{toc}{section}{TLS API Implementation}
To facilitate the use of additional TLS libraries, all OpenSSL-specific
code has been implemented within \emph{src/lib/tls.c}. In turn, a generic
\subsection{Library Initialization and Cleanup}
\index{Library Initialization and Cleanup}
\index{Initialization and Cleanup!Library}
-\addcontentsline{toc}{subsection}{Library Initialization and Cleanup}
\footnotesize
\begin{lstlisting}
\subsection{Manipulating TLS Contexts}
\index{TLS Context Manipulation}
\index{Contexts!Manipulating TLS}
-\addcontentsline{toc}{subsection}{Manipulating TLS Contexts}
\footnotesize
\begin{lstlisting}
\subsection{Performing Post-Connection Verification}
\index{TLS Post-Connection Verification}
\index{Verification!TLS Post-Connection}
-\addcontentsline{toc}{subsection}{Performing Post-Connection Verification}
\footnotesize
\begin{lstlisting}
\subsection{Manipulating TLS Connections}
\index{TLS Connection Manipulation}
\index{Connections!Manipulating TLS}
-\addcontentsline{toc}{subsection}{Manipulating TLS Connections}
\footnotesize
\begin{lstlisting}
\section{Bnet API Changes}
\index{Bnet API Changes}
\index{API Changes!Bnet}
-\addcontentsline{toc}{section}{Bnet API Changes}
A minimal number of changes were required in the Bnet socket API. The BSOCK
structure was expanded to include an associated TLS\_CONNECTION structure,
\subsection{Negotiating a TLS Connection}
\index{Negotiating a TLS Connection}
\index{TLS Connection!Negotiating}
-\addcontentsline{toc}{subsection}{Negotiating a TLS Connection}
\emph{bnet\_tls\_server()} and \emph{bnet\_tls\_client()} were both
implemented using the new TLS API as follows:
\index{Manipulating Socket Blocking State}
\index{Socket Blocking State!Manipulating}
\index{Blocking State!Socket!Manipulating}
-\addcontentsline{toc}{subsection}{Manipulating Socket Blocking State}
Three functions were added for manipulating the blocking state of a socket
on both Win32 and Unix-like systems. The Win32 code was written according
\section{Authentication Negotiation}
\index{Authentication Negotiation}
\index{Negotiation!TLS Authentication}
-\addcontentsline{toc}{section}{Authentication Negotiation}
Backwards compatibility with the existing SSL negotiation hooks implemented
in src/lib/cram-md5.c has been maintained. The
the installation process, but you will need to modify them to correspond to
your system. An overall view of the resources can be seen in the following:
-%% \addcontentsline{lof}{figure}{Bacula Objects}
-%% \includegraphics{\idir bacula-objects}
\bsysimageH{bacula-objects}{Bacula Objects}{figconfig:baculaobjects}
\label{ResFormat}
files will already contain at least one example of each permitted resource, so
you need not worry about creating all these kinds of resources from scratch.
-%\addcontentsline{lot}{table}{Resource Types}
\LTXtable{0.95\linewidth}{table_resources}
\section{Names, Passwords and Authorization}
\label{Names}
The following picture shows which names and passwords in which files/Resources
must match up:
-%\includegraphics{\idir Conf-Diagram}
\bsysimageH{Conf-Diagram}{Configuration diagram}{figconfig:configdiagram}
In the left column, you will find the Director, Storage, and Client resources,
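As a minimal sketch of one such pairing (daemon names, addresses, and
passwords are placeholders), the password in the Client resource of the
Director's configuration must match the one in the Director resource of that
Client's own configuration file:
\footnotesize
\begin{lstlisting}
# bacula-dir.conf (Director side)
Client {
  Name = rufus-fd
  Address = rufus.example.com
  Password = "ClientSecret"     # must match the File daemon's entry below
}

# bacula-fd.conf (File daemon side)
Director {
  Name = backup-dir
  Password = "ClientSecret"     # same as the Director's Client resource
}
\end{lstlisting}
\normalsize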
scheduled). This directive works as expected since Bacula 2.3.18.
\bsysimageH{different_time}{Job time control directives}{fig:differenttime}
-%% \begin{figure}[htbp]
-%% \centering
-%% \includegraphics[width=13cm]{\idir different_time}
-%% \label{fig:differenttime}
-%% \caption{Job time control directives}
-%% \end{figure}
\label{Director:Job:MaximumBandwidth}
\item [Maximum Bandwidth = \lt{}speed\gt{}]
\item [Allow Duplicate Jobs = \lt{}yes\vb{}no\gt{}]
\index[general]{Allow Duplicate Jobs}
-%\begin{figure}[htbp]
-% \centering
-% \includegraphics[width=13cm]{\idir duplicate-real}
-% \label{fig:allowduplicatejobs}
-% \caption{Allow Duplicate Jobs usage}
-%\end{figure}
\bsysimageH{duplicate-real}{Allow Duplicate Jobs usage}{fig:allowduplicatejobs}
A duplicate job in the sense we use it here means a second or subsequent job
% Client resolver is not possible. Note that using this directive will not allow
% to use multiple Storage Daemon for Backup/Restore jobs.
%
-% \begin{figure}[htbp]
-% \centering
-% \includegraphics[width=10cm]{\idir BackupOverWan1}
-% \caption{Backup over WAN using FD Storage Address}
-% \end{figure}
\label{Director:Client:Priority}
\item [Priority = \lt{}number\gt{}]
Client resolver is not possible.
\bsysimageH{BackupOverWan1}{Backup over WAN using FD Storage Address}{figdirdconf:backupwan}
-%% \begin{figure}[htbp]
-%% \centering
-%% \includegraphics[width=10cm]{\idir BackupOverWan1}
-%% \caption{Backup over WAN using FD Storage Address}
-%% \end{figure}
\label{Director:Storage:SdPort}
\item [SD Port = \lt{}port\gt{}]
Director, Console, File, Storage, and Monitor services.
-%\addcontentsline{lof}{figure}{Bacula Applications}
\bsysimageH{bacula-applications}{Bacula Applications}{figgeneral:bacula-applications}
(thanks to Aristedes Maniatis for this graphic and the one below)
up and how, you must create a number of configuration files containing
resources (or objects). The following presents an overall picture of this:
-%\addcontentsline{lof}{figure}{Bacula Objects}
\bsysimageH{bacula-objects}{Bacula Objects}{figgeneral:baculaojects}
-%\includegraphics{\idir bacula-objects}
\section{Conventions Used in this Document}
\index[general]{Conventions used in this document}
(normally a daemon). In general, the Director oversees the flow of
information. It also maintains the Catalog.
-%\addcontentsline{lof}{figure}{Interactions between Bacula Services}
\bsysimageH{flow}{Interactions between Bacula Services}{figgeneral:interactions}
-%includegraphics{\idir flow}
Although the exact composition of the dependency packages may change from time
to time, the current makeup is the following:
-%\addcontentsline{lot}{table}{Dependency Packages}
\LTXtable{0.95\linewidth}{table_dependencies}
Note: some of these packages are quite large, so building them can be a bit
time-consuming. The above instructions will build all the packages
% pull in the index
%\clearpage
%\backmatter
-\part*{Indexes}
+%\part*{Indexes}
\printindex[general]
\printindex[dir]
\printindex[fd]
By clicking on ``Media'', you can see the list of all your volumes. You will be
able to filter by Pool, Media Type, Location,\dots and sort the result directly
in the table. The old ``Media'' view is now known as ``Pool''.
-%\begin{figure}[htbp]
-% \centering
-% \includegraphics[width=13cm]{\idir
\bsysimageH{bat-mediaview}{List volumes with BAT}{figbs4:mediaview}
-% \label{fig:mediaview}
-%\end{figure}
\subsubsection{Media Information View}
By double-clicking on a volume (on the Media list, in the Autochanger content
or in the Job information panel), you can access a detailed overview of your
Volume. (cf. figure \bsysref{figbs4:mediainfo}.)
-%\begin{figure}[htbp]
-% \centering
-% \includegraphics[width=13cm]{\idir
+
\bsysimageH{bat11}{Media information}{figbs4:mediainfo}
-% \caption{Media information}
-% \label{fig:mediainfo}
-%\end{figure}
+
\subsubsection{Job Information View}
By double-clicking on a Job record (on the Job run list or in the Media
information panel), you can access a detailed overview of your Job. (cf. figure
\bsysref{figbs4:jobinfo}.)
-%\begin{figure}[htbp]
-% \centering
-% \includegraphics[width=13cm]{\idir
+
\bsysimageH{bat12}{Job information}{figbs4:jobinfo}
-% \caption{Job information}
-% \label{fig:jobinfo}
-%\end{figure}
\subsubsection{Autochanger Content View}
By double-clicking on a Storage record (on the Storage list panel), you can
access a detailed overview of your Autochanger. (cf. figure \bsysref{figbs4:achcontent}.)
-%\begin{figure}[htbp]
-% \centering
-% \includegraphics[width=13cm]{\idir
+
\bsysimageH{bat13}{Autochanger content}{figbs4:achcontent}
-% \caption{Autochanger content}
-% \label{fig:achcontent}
-%\end{figure}
To use this feature, you need to use the latest mtx-changer script
version (with the new \texttt{listall} and \texttt{transfer} commands).
Using \textbf{Full/Diff/Incr Max Run Time}, it's now possible to specify the
maximum allowed time that a job can run depending on the level.
-%\addcontentsline{lof}{figure}{Job time control directives}
-\bsysimageH{different_time}{Job time control directives}{}
-%\includegraphics{\idir different_time}
+\bsysimageH{different_time}{Job time control directives}{figbs4:jobtimecontroldirectives}
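A sketch of how these limits might appear in a Job resource, assuming the
directives are spelled {\bf Full Max Run Time}, {\bf Differential Max Run
Time}, and {\bf Incremental Max Run Time} (values are placeholders and other
required Job directives are omitted):
\footnotesize
\begin{lstlisting}
Job {
  Name = "nightly-backup"
  Full Max Run Time = 20 hours          # limit for Full backups
  Differential Max Run Time = 10 hours  # limit for Differential backups
  Incremental Max Run Time = 2 hours    # limit for Incremental backups
}
\end{lstlisting}
\normalsize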
\subsubsection{Statistics Enhancements}
\index[general]{Statistics enhancements}
this new version allows you to run Backups from
the tray monitor menu.
-%\begin{figure}[htbp]
-% \centering
-% \includegraphics[width=10cm]{\idir
\bsysimageH{tray-monitor}{New tray monitor}{figbs6:traymonitor}
-% \label{fig:traymonitor}
-% \caption{New tray monitor}
-%\end{figure}
-%\begin{figure}[htbp]
-% \centering
\bsysimageH{tray-monitor1}{Run a Job through the new tray monitor}{figbs6:traymonitor1}
-% \includegraphics[width=10cm]{\idir tray-monitor1}
-% \label{fig:traymonitor1}
-% \caption{Run a Job through the new tray monitor}
-%\end{figure}
+
To be able to run a job from the tray monitor, you need to
this new version allows you to run Backups from
the tray monitor menu.
-%\begin{figure}[htbp]
-% \centering
-% \includegraphics[width=10cm]{\idir tray-monitor}
\bsysimageH{tray-monitor}{New tray monitor}{figcom:traymonitor}
-% \label{figcom:traymonitor}
-% \caption{New tray monitor}
-%\end{figure}
-
-%% \begin{figure}[htbp]
-%% \centering
-%% \includegraphics[width=10cm]{\idir tray-monitor1}
-%% \label{figcom:traymonitor1}
-%% \caption{Run a Job through the new tray monitor}
-%% \end{figure}
+
\bsysimageH{tray-monitor1}{Run a Job through the new tray monitor}{figcom:traymonitor1}
To be able to run a job from the tray monitor, you need to
Bat now has a bRestore panel that uses Bvfs to display files and
directories.
-%% \begin{figure}[htbp]
-%% \centering
-%% \includegraphics[width=12cm]{\idir
-%% \label{figcom:batbrestore}
-%% \caption{Bat Brestore Panel}
-%% \end{figure}
\bsysimageH{bat-brestore}{Bat Brestore Panel}{figcom:batbrestore}
the Bvfs module works correctly with BaseJobs, Copy and Migration jobs.
By clicking on ``Media'', you can see the list of all your volumes. You will be
able to filter by Pool, Media Type, Location,\dots and sort the result directly
in the table. The old ``Media'' view is now known as ``Pool''.
-%% \begin{figure}[htbp]
-%% \centering
-%% \includegraphics[width=13cm]{\idir bat-mediaview}
-%% \label{figcom:mediaview}
-%% \end{figure}
+
\bsysimageH{bat-mediaview}{List of all Volumes}{figcom:mediaview}
\subsubsection{Media Information View}
By double-clicking on a volume (on the Media list, in the Autochanger content
or in the Job information panel), you can access a detailed overview of your
Volume. (cf. figure \bsysref{figcom:mediainfo}.)
-%% \begin{figure}[htbp]
-%% \centering
-%% \includegraphics[width=13cm]{\idir bat11}
-%% \caption{Media information}
-%% \label{figcom:mediainfo}
-%% \end{figure}
+
\bsysimageH{bat11}{Media information}{figcom:mediainfo}
\subsubsection{Job Information View}
By double-clicking on a Job record (on the Job run list or in the Media
information panel), you can access a detailed overview of your Job. (cf. figure
\bsysref{figcom:jobinfo}.)
-%% \begin{figure}[htbp]
-%% \centering
-%% \includegraphics[width=13cm]{\idir bat12}
-%% \caption{Job information}
-%% \label{figcom:jobinfo}
-%% \end{figure}
+
\bsysimageH{bat12}{Job information}{figcom:jobinfo}
\subsubsection{Autochanger Content View}
By double-clicking on a Storage record (on the Storage list panel), you can
access a detailed overview of your Autochanger. (cf. figure \bsysref{figcom:achcontent}.)
-%% \begin{figure}[htbp]
-%% \centering
-%% \includegraphics[width=13cm]{\idir bat13}
-%% \caption{Autochanger content}
-%% \label{figcom:achcontent}
-%% \end{figure}
+
\bsysimageH{bat13}{Autochanger content}{figcom:achcontent}
To use this feature, you need to use the latest mtx-changer script
Using \textbf{Full/Diff/Incr Max Run Time}, it's now possible to specify the
maximum allowed time that a job can run depending on the level.
-%\addcontentsline{lof}{figure}{Job time control directives}
-%\includegraphics{\idir different_time}
\bsysimageH{different_time}{Job time control directives}{figcom:different_time}
\subsubsection{Statistics Enhancements}
status information about the Director or the backup status on the local
workstation or any other Bacula daemon that is configured.
-%% \addcontentsline{lof}{figure}{Bacula Tray Monitor}
-%% \includegraphics{\idir Bacula-tray-monitor}
\bsysimageH{Bacula-tray-monitor}{Bacula Tray Monitor}{figstart:baculatray}
% TODO: image may be too wide for 6" wide printed page.
\elink{http://mtx.opensource-sw.net/}{http://mtx.opensource-sw.net/}.
-%\addcontentsline{lot}{table}{Autochangers Known to Work with Bacula}
\begin{landscape}
\LTXtable{\linewidth}{table_supportedchangers}
\end{landscape}
unknown:
\LTXtable{0.95\linewidth}{table_tapedrives}
-%\addcontentsline{lot}{table}{Supported Tape Drives}
There is a list of \ilink{supported autochangers}{Models} in the Supported
Autochangers chapter of this document, where you will find other tape drives
\item For MacOSX, see \elink{http://fink.sourceforge.net/}{http://fink.sourceforge.net/} for obtaining the packages
\end{bsysitemize}
-See the \bsysxrlinkdocument{Porting}{_PortingChapter}{developers}{chapter} of the \devman{} for
+See the \bsysxrlink{Porting}{PortingChapter}{developers}{chapter} of the \devman{} for
information on porting to other systems.
If you have an older Red Hat Linux system running the 2.4.x kernel and you have
\item Once launched, the installer wizard will ask you if you want to install
Bacula.
-%\addcontentsline{lof}{figure}{Win32 Client Setup Wizard}
-%\includegraphics{\idir win32-welcome}
\bsysimageH{win32-welcome}{Win32 Client Setup Wizard}{fig:win32clientsetupwizard}
\item Next you will be asked to select the installation type.
-%\addcontentsline{lof}{figure}{Win32 Installation Type}
-%\includegraphics{\idir win32-installation-type}
\bsysimageH{win32-installation-type}{Win32 Installation Type}{fig:win32installationtype}
\item If you proceed, you will be asked to select the components to be
location that you choose later. The components dialog looks like the
following:
-%\addcontentsline{lof}{figure}{Win32 Component Selection Dialog}
-%\includegraphics{\idir win32-pkg}
\bsysimageH{win32-pkg}{Win32 Component Selection Dialog}{fig:win32componentselectiondialog}
\index[general]{Upgrading}
not be displayed.
-%\addcontentsline{lof}{figure}{Win32 Configure}
-%\includegraphics{\idir win32-config}
\bsysimageH{win32-config}{Win32 Configure}{fig:win32configure}
\item While the various files are being loaded, you will see the following
dialog:
-% \addcontentsline{lof}{figure}{Win32 Install Progress}
-% \includegraphics{\idir win32-installing}
\bsysimageH{win32-installing}{Win32 Install Progress}{fig:win32installing}
\item Finally, the finish dialog will appear:
-% \addcontentsline{lof}{figure}{Win32 Client Setup Completed}
-% \includegraphics{\idir win32-finish}
\bsysimageH{win32-finish}{Win32 Client Setup Completed}{fig:win32setupcompleted}
\end{bsysitemize}
ready to serve files, an icon \raisebox{-1ex}{\includegraphics{k7-idle}} representing a
cassette (or tape) will appear in the system tray
\raisebox{-2ex}{\includegraphics{tray-icon}}; right-click on it and a menu will appear.\\
-\bsysimageN{menu}{Menu on right click}{}\\
+\bsysimageN{menu}{Menu on right click}{fig:win32menuonrightclick}\\
The {\bf Events} item is currently unimplemented. By selecting the {\bf
Status} item, you can verify whether any jobs are running.
The following matrix will give you an idea of what you can expect. Thanks to
Marc Brueckner for doing the tests:
-\addcontentsline{lot}{table}{WinNT/2K/XP Restore Portability Status}
-\begin{longtable}{|l|l|p{2.8in}|}
- \hline
-\multicolumn{1}{|c|}{\bf Backup OS} & \multicolumn{1}{c|}{\bf Restore OS}
-& \multicolumn{1}{c|}{\bf Results } \\
- \hline {WinMe} & {WinMe} & {Works } \\
- \hline {WinMe} & {WinNT} & {Works (SYSTEM permissions) } \\
- \hline {WinMe} & {WinXP} & {Works (SYSTEM permissions) } \\
- \hline {WinMe} & {Linux} & {Works (SYSTEM permissions) } \\
- \hline {\ } & {\ } & {\ } \\
- \hline {WinXP} & {WinXP} & {Works } \\
- \hline {WinXP} & {WinNT} & {Works (all files OK, but got "The data is invalid"
-message) } \\
- \hline {WinXP} & {WinMe} & {Error: Win32 data stream not supported. } \\
- \hline {WinXP} & {WinMe} & {Works if {\bf Portable=yes} specified during backup.} \\
- \hline {WinXP} & {Linux} & {Error: Win32 data stream not supported. } \\
- \hline {WinXP} & {Linux} & {Works if {\bf Portable=yes} specified during backup.}\\
- \hline {\ } & {\ } & {\ } \\
- \hline {WinNT} & {WinNT} & {Works } \\
- \hline {WinNT} & {WinXP} & {Works } \\
- \hline {WinNT} & {WinMe} & {Error: Win32 data stream not supported. } \\
- \hline {WinNT} & {WinMe} & {Works if {\bf Portable=yes} specified during backup.}\\
- \hline {WinNT} & {Linux} & {Error: Win32 data stream not supported. } \\
- \hline {WinNT} & {Linux} & {Works if {\bf Portable=yes} specified during backup. }\\
- \hline {\ } & {\ } & {\ } \\
- \hline {Linux} & {Linux} & {Works } \\
- \hline {Linux} & {WinNT} & {Works (SYSTEM permissions) } \\
- \hline {Linux} & {WinMe} & {Works } \\
- \hline {Linux} & {WinXP} & {Works (SYSTEM permissions)}
-\\ \hline
-\end{longtable}
-
+\LTXtable{\linewidth}{table_restoreportabilitystatus}
Note: with Bacula versions 1.39.x and later, non-portable Windows data can
be restored to any machine.
OK}.
\bsysimageH{view-only}{Message to ignore}{fig:messagetoignore}
-%\includegraphics{\idir view-only}
You should see something like this:
\bsysimageH{properties-security}{Properties security}{fig:propertiessecurity}
-%\includegraphics{\idir properties-security}
\item click on Advanced
\item click on the Owner tab
\item Change the owner to something other than the current owner (which is
{\bf SYSTEM} in this example as shown below).
\bsysimageH{properties-security-advanced-owner}{Properties security advanced owner}{fig:propertiessecurityadvancedowner}
-%\includegraphics{\idir properties-security-advanced-owner}
\item ensure the ``Replace owner on subcontainers and objects'' box is
checked
\item click on OK
on Yes.
\bsysimageH{confirm}{Confirm granting permissions}{fig:confirmgrantingpermissions}
-%\includegraphics{\idir confirm}
\item Click on OK to close the Properties tab
\end{enumerate}
The form of the mailcommand is a bit complicated, but it allows you to
distinguish whether the Job terminated in error or terminated normally. Please
see the
-\bsysxrlink{Mail}{mailcommand}{utility}{command} in the \utilityman{} for the
+\bsysxrlink{Mail}{mailcommand}{main}{command} in the \mainman{} for the
details of the substitution characters used above.
Once you are totally comfortable with Bacula as I am, or if you have a large
If you find yourself using {\bf bextract}, you probably have done
something wrong. For example, if you are trying to recover a file
-but are having problems, please see the \ilink {Restoring When Things Go
+but are having problems, please see the \ilink{Restoring When Things Go
Wrong}{database_restore} section of the Restore chapter of this manual.
Normally, you will restore files by running a {\bf Restore} Job from the {\bf
binary directory, and you replace {\bf mail.domain.com} with the fully
qualified name of your bsmtp (email) server, which normally listens on port
25. For more details on the substitution characters (e.g. \%r) used in the
-above line, please see the documentation of the
-\ilink{ MailCommand in the Messages Resource}{mailcommand}
-chapter of this manual.
+above line, please see the documentation of the \bsysxrlink{Mail Command in
+ the Messages Resource}{mailcommand}{main}{chapter} of the \mainman{}.
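For reference, here is a sketch of such a line (the path to {\bf bsmtp} and
the mail host are placeholders; \%r, \%t, \%e, \%c and \%l are substitution
characters described in that chapter):
\footnotesize
\begin{lstlisting}
# Sketch only: adjust the bsmtp path and mail host for your installation.
mailcommand = "/opt/bacula/bin/bsmtp -h mail.domain.com -f \"\(Bacula\) %r\" -s \"Bacula: %t %e of %c %l\" %r"
\end{lstlisting}
\normalsize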
It is HIGHLY recommended that you test one or two cases by hand to make sure
that the {\bf mailhost} that you specified is correct and that it will accept