--- /dev/null
+%%
+%%
+
+\chapter{Catalog Services}
+\label{_ChapterStart30}
+\index[general]{Services!Catalog }
+\index[general]{Catalog Services }
+
+\section{General}
+\index[general]{General }
+\addcontentsline{toc}{subsection}{General}
+
+This chapter is intended to be a technical discussion of the Catalog services
+and as such is not targeted at end users but rather at developers and system
+administrators that want or need to know more of the working details of {\bf
+Bacula}.
+
+The {\bf Bacula Catalog} services consist of the programs that provide the SQL
+database engine for storage and retrieval of all information concerning files
+that were backed up and their locations on the storage media.
+
+We have investigated the possibility of using the following SQL engines for
+Bacula: Beagle, mSQL, GNU SQL, PostgreSQL, SQLite, Oracle, and MySQL. Each
+presents certain problems with either licensing or maturity. At present, we
+have chosen for development purposes to use MySQL, PostgreSQL and SQLite.
+MySQL was chosen because it is fast, proven to be reliable, widely used, and
+actively being developed. MySQL is released under the GNU GPL license.
+PostgreSQL was chosen because it is a full-featured, very mature database, and
+because Dan Langille did the Bacula driver for it. PostgreSQL is distributed
+under the BSD license. SQLite was chosen because it is small, efficient, and
+can be directly embedded in {\bf Bacula} thus requiring much less effort from
+the system administrator or person building {\bf Bacula}. In our testing
+SQLite has performed very well, and for the functions that we use, it has
+never encountered any errors except that it does not appear to handle
+databases larger than 2GBytes. That said, we would not recommend it for
+serious production use.
+
+The Bacula SQL code has been written in a manner that will allow it to be
+easily modified to support any of the current SQL database systems on the
+market (for example: mSQL, iODBC, unixODBC, Solid, OpenLink ODBC, EasySoft
+ODBC, InterBase, Oracle8, Oracle7, and DB2).
+
+If you do not specify one of {\bf \verb{--{with-mysql}, {\bf \verb{--{with-postgresql}, or
+{\bf \verb{--{with-sqlite} on the ./configure line, Bacula will use its minimalist
+internal database. This database is kept for build reasons but is no longer
+supported. Bacula {\bf requires} one of the three databases (MySQL,
+PostgreSQL, or SQLite) to run.
+
+\subsection{Filenames and Maximum Filename Length}
+\index[general]{Filenames and Maximum Filename Length }
+\index[general]{Length!Filenames and Maximum Filename }
+\addcontentsline{toc}{subsubsection}{Filenames and Maximum Filename Length}
+
+In general, MySQL, PostgreSQL and SQLite all permit storing arbitrarily long
+path names and file names in the catalog database. In practice, there still
+may be one or two places in the Catalog interface code that restrict the
+maximum path length to 512 characters and the maximum file name length to 512
+characters. These restrictions are believed to have been removed. Please note,
+these restrictions apply only to the Catalog database and thus to your ability
+to list online the files saved during any job. All information received and
+stored by the Storage daemon (normally on tape) allows and handles arbitrarily
+long path and filenames.
+
+\subsection{Installing and Configuring MySQL}
+\index[general]{MySQL!Installing and Configuring }
+\index[general]{Installing and Configuring MySQL }
+\addcontentsline{toc}{subsubsection}{Installing and Configuring MySQL}
+
+For the details of installing and configuring MySQL, please see the
+\ilink{Installing and Configuring MySQL}{_ChapterStart} chapter of
+this manual.
+
+\subsection{Installing and Configuring PostgreSQL}
+\index[general]{PostgreSQL!Installing and Configuring }
+\index[general]{Installing and Configuring PostgreSQL }
+\addcontentsline{toc}{subsubsection}{Installing and Configuring PostgreSQL}
+
+For the details of installing and configuring PostgreSQL, please see the
+\ilink{Installing and Configuring PostgreSQL}{_ChapterStart10}
+chapter of this manual.
+
+\subsection{Installing and Configuring SQLite}
+\index[general]{Installing and Configuring SQLite }
+\index[general]{SQLite!Installing and Configuring }
+\addcontentsline{toc}{subsubsection}{Installing and Configuring SQLite}
+
+For the details of installing and configuring SQLite, please see the
+\ilink{Installing and Configuring SQLite}{_ChapterStart33} chapter of
+this manual.
+
+\subsection{Internal Bacula Catalog}
+\index[general]{Catalog!Internal Bacula }
+\index[general]{Internal Bacula Catalog }
+\addcontentsline{toc}{subsubsection}{Internal Bacula Catalog}
+
+Please see the
+\ilink{Internal Bacula Database}{_ChapterStart42} chapter of this
+manual for more details.
+
+\subsection{Database Table Design}
+\index[general]{Design!Database Table }
+\index[general]{Database Table Design }
+\addcontentsline{toc}{subsubsection}{Database Table Design}
+
+All discussions that follow pertain to the MySQL database. The details for the
+PostgreSQL and SQLite databases are essentially identical, except that all
+fields in the SQLite database are stored as ASCII text and some of the
+database creation statements are a bit different. The details of the internal
+Bacula catalog are not discussed here.
+
+Because the Catalog database may contain very large amounts of data for large
+sites, we have made a modest attempt to normalize the data tables to reduce
+redundant information. While normalization significantly reduces the size of
+the database, it does, unfortunately, add some complications to the structures.
+
+In simple terms, the Catalog database must contain a record of all Jobs run by
+Bacula, and for each Job, it must maintain a list of all files saved, with
+their File Attributes (permissions, create date, ...), and the location and
+Media on which the file is stored. This is seemingly a simple task, but it
+represents a huge amount of interlinked data. Note: the list of files and their
+attributes is not maintained when using the internal Bacula database. The data
+stored in the File records, which allows the user or administrator to obtain a
+list of all files backed up during a job, is by far the largest volume of
+information put into the Catalog database.
+
+Although the Catalog database has been designed to handle backup data for
+multiple clients, some users may want to maintain multiple databases, one for
+each machine to be backed up. This reduces the risk of accidentally
+restoring a file to the wrong machine, as well as reducing the amount of data
+in a single database, thus increasing efficiency and reducing the impact of a
+lost or damaged database.
+
+\section{Sequence of Creation of Records for a Save Job}
+\index[general]{Sequence of Creation of Records for a Save Job }
+\index[general]{Job!Sequence of Creation of Records for a Save }
+\addcontentsline{toc}{subsection}{Sequence of Creation of Records for a Save
+Job}
+
+Start with StartDate, ClientName, Filename, Path, Attributes, MediaName,
+MediaCoordinates. (PartNumber, NumParts). In the steps below, ``Create new''
+means to create a new record whether or not it is unique. ``Create unique''
+means each record in the database should be unique. Thus, one must first
+search to see if the record exists, and only if it does not should a new one
+be created; otherwise the existing RecordId should be used (see the sketch
+after the list below).
+
+\begin{enumerate}
+\item Create new Job record with StartDate; save JobId
+\item Create unique Media record; save MediaId
+\item Create unique Client record; save ClientId
+\item Create unique Filename record; save FilenameId
+\item Create unique Path record; save PathId
+\item Create unique Attribute record; save AttributeId
+ store ClientId, FilenameId, PathId, and Attributes
+\item Create new File record
+ store JobId, AttributeId, MediaCoordinates, etc
+\item Repeat steps 4 through 7 for each file
+\item Create a JobMedia record; save MediaId
+\item Update Job record filling in EndDate and other Job statistics
+ \end{enumerate}
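+
+As a sketch of this ``Create unique'' logic, applied to the Filename
+table with a hypothetical file name, the Catalog code first searches
+for the record and inserts a new one only when the search comes up
+empty:
+
+\footnotesize
+\begin{verbatim}
+SELECT FilenameId FROM Filename WHERE Name = 'passwd';
+-- If no row is returned, create the record and use the new
+-- FilenameId; otherwise reuse the FilenameId that was found.
+INSERT INTO Filename (Name) VALUES ('passwd');
+\end{verbatim}
+\normalsize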
+
+\section{Database Tables}
+\index[general]{Database Tables }
+\index[general]{Tables!Database }
+\addcontentsline{toc}{subsection}{Database Tables}
+
+\addcontentsline{lot}{table}{Filename Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Filename } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{l| }{\bf Data Type }
+& \multicolumn{1}{l| }{\bf Remark } \\
+ \hline
+{FilenameId } & {integer } & {Primary Key } \\
+ \hline
+{Name } & {Blob } & {Filename }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Filename} table shown above contains the name of each file backed up
+with the path removed. If different directories or machines contain the same
+filename, only one copy will be saved in this table.
+
+\
+
+\addcontentsline{lot}{table}{Path Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Path } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{PathId } & {integer } & {Primary Key } \\
+ \hline
+{Path } & {Blob } & {Full Path }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Path} table shown above contains the path or directory names of all
+directories on the system or systems. The filename and any MSDOS disk name are
+stripped off. As with the filename, only one copy of each directory name is
+kept regardless of how many machines or drives have the same directory. These
+path names should be stored in Unix path name format.
+
+Some simple testing on a Linux file system indicates that separating the
+filename and the path may add more complication than the space savings
+warrant. For example, this system has a total of 89,097 files, 60,467 of which
+have unique filenames, and there are 4,374 unique paths.
+
+Finding all those files and doing two stats() per file takes an average wall
+clock time of 1 min 35 seconds on a 400MHz machine running RedHat 6.1 Linux.
+
+Finding all those files and putting them directly into a MySQL database with
+the path and filename defined as TEXT, which is variable length up to 65,535
+characters, takes 19 mins 31 seconds and creates a 27.6 MByte database.
+
+Doing the same thing, but inserting them into Blob fields with the filename
+indexed on the first 30 characters and the path name indexed on the first
+255 (max) characters takes 5 mins 18 seconds and creates a 5.24 MB database. Rerunning
+the job (with the database already created) takes about 2 mins 50 seconds.
+
+Running the same test as the last one (Path and Filename Blob), but with the
+Filename indexed on the first 30 characters and the Path on the first 50
+characters (linear search done thereafter) takes 5 mins on average and creates
+a 3.4 MB database. Rerunning with the data already in the DB takes 3 mins 35
+seconds.
+
+Finally, saving only the full path name rather than splitting the path and the
+file, and indexing it on the first 50 characters takes 6 mins 43 seconds and
+creates a 7.35 MB database.
+
+\
+
+\addcontentsline{lot}{table}{File Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf File } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{FileId } & {integer } & {Primary Key } \\
+ \hline
+{FileIndex } & {integer } & {The sequential file number in the Job } \\
+ \hline
+{JobId } & {integer } & {Link to Job Record } \\
+ \hline
+{PathId } & {integer } & {Link to Path Record } \\
+ \hline
+{FilenameId } & {integer } & {Link to Filename Record } \\
+ \hline
+{MarkId } & {integer } & {Used to mark files during Verify Jobs } \\
+ \hline
+{LStat } & {tinyblob } & {File attributes in base64 encoding } \\
+ \hline
+{MD5 } & {tinyblob } & {MD5/SHA1 signature in base64 encoding }
+\\ \hline
+
+\end{longtable}
+
+The {\bf File} table shown above contains one entry for each file backed up by
+Bacula. Thus a file that is backed up multiple times (as is normal) will have
+multiple entries in the File table. This will probably be the table with the
+largest number of records. Consequently, it is essential to keep the size of this
+record to an absolute minimum. At the same time, this table must contain all
+the information (or pointers to the information) about the file and where it
+is backed up. Since a file may be backed up many times without having changed,
+the path and filename are stored in separate tables.
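+
+For example, a hypothetical query to list the full names of all files
+saved by a given Job (here JobId 123) joins the File, Path, and
+Filename tables back together:
+
+\footnotesize
+\begin{verbatim}
+SELECT Path.Path, Filename.Name
+  FROM File, Path, Filename
+ WHERE File.JobId = 123
+   AND Path.PathId = File.PathId
+   AND Filename.FilenameId = File.FilenameId;
+\end{verbatim}
+\normalsize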
+
+This table contains by far the largest amount of information in the Catalog
+database, both from the standpoint of the number of records and the standpoint
+of total database size. As a consequence, the user must take care to
+periodically reduce the number of File records using the {\bf retention}
+command in the Console program.
+
+\
+
+\addcontentsline{lot}{table}{Job Table Layout}
+\begin{longtable}{|l|l|p{2.5in}|}
+ \hline
+\multicolumn{3}{|l| }{\bf Job } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{JobId } & {integer } & {Primary Key } \\
+ \hline
+{Job } & {tinyblob } & {Unique Job Name } \\
+ \hline
+{Name } & {tinyblob } & {Job Name } \\
+ \hline
+{Type } & {binary(1) } & {Job Type: Backup, Copy, Clone, Archive, Migration
+} \\
+ \hline
+{Level } & {binary(1) } & {Job Level } \\
+ \hline
+{ClientId } & {integer } & {Client index } \\
+ \hline
+{JobStatus } & {binary(1) } & {Job Termination Status } \\
+ \hline
+{SchedTime } & {datetime } & {Time/date when Job scheduled } \\
+ \hline
+{StartTime } & {datetime } & {Time/date when Job started } \\
+ \hline
+{EndTime } & {datetime } & {Time/date when Job ended } \\
+ \hline
+{RealEndTime } & {datetime } & {Time/date when original Job ended } \\
+ \hline
+{JobTDate } & {bigint } & {Start day in Unix format but 64 bits; used for
+Retention period. } \\
+ \hline
+{VolSessionId } & {integer } & {Unique Volume Session ID } \\
+ \hline
+{VolSessionTime } & {integer } & {Unique Volume Session Time } \\
+ \hline
+{JobFiles } & {integer } & {Number of files saved in Job } \\
+ \hline
+{JobBytes } & {bigint } & {Number of bytes saved in Job } \\
+ \hline
+{JobErrors } & {integer } & {Number of errors during Job } \\
+ \hline
+{JobMissingFiles } & {integer } & {Number of files not saved (not yet used) }
+\\
+ \hline
+{PoolId } & {integer } & {Link to Pool Record } \\
+ \hline
+{FileSetId } & {integer } & {Link to FileSet Record } \\
+ \hline
+{PriorJobId } & {integer } & {Link to prior Job Record when migrated } \\
+ \hline
+{PurgedFiles } & {tiny integer } & {Set when all File records purged } \\
+ \hline
+{HasBase } & {tiny integer } & {Set when Base Job run }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Job} table contains one record for each Job run by Bacula. Thus
+normally, there will be one per day per machine added to the database. Note,
+the JobId is used to index Job records in the database, and it often is shown
+to the user in the Console program. However, care must be taken with its use
+as it is not unique from database to database. For example, the user may have
+a database for Client data saved on machine Rufus and another database for
+Client data saved on machine Roxie. In this case, the two databases will each
+have JobIds that match those in the other database. For a unique reference to a
+Job, see the Job field below.
+
+The Name field of the Job record corresponds to the Name resource record given
+in the Director's configuration file. Thus it is a generic name, and it will
+be normal to find many Jobs (or even all Jobs) with the same Name.
+
+The Job field contains a combination of the Name and the schedule time of the
+Job set by the Director. Thus for a given Director, even with multiple Catalog
+databases, the Job field will contain a unique name that represents the Job.
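+
+For example, a hypothetical lookup of a specific run by its unique Job
+name (the name shown is an invented example of the Name plus schedule
+time format) might be:
+
+\footnotesize
+\begin{verbatim}
+SELECT JobId, Name, StartTime, JobStatus
+  FROM Job
+ WHERE Job = 'NightlySave.2010-01-01_23.05.00';
+\end{verbatim}
+\normalsize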
+
+For a given Storage daemon, the VolSessionId and VolSessionTime form a unique
+identification of the Job. This will be the case even if multiple Directors
+are using the same Storage daemon.
+
+The Job Type (or simply Type) can have one of the following values:
+
+\addcontentsline{lot}{table}{Job Types}
+\begin{longtable}{|l|l|}
+ \hline
+\multicolumn{1}{|c| }{\bf Value } & \multicolumn{1}{c| }{\bf Meaning } \\
+ \hline
+{B } & {Backup Job } \\
+ \hline
+{M } & {Migrated Job } \\
+ \hline
+{V } & {Verify Job } \\
+ \hline
+{R } & {Restore Job } \\
+ \hline
+{C } & {Console program (not in database) } \\
+ \hline
+{I } & {Internal or system Job } \\
+ \hline
+{D } & {Admin Job } \\
+ \hline
+{A } & {Archive Job (not implemented) }
+\\ \hline
+{C } & {Copy Job } \\
+ \hline
+{M } & {Migration Job } \\
+ \hline
+
+\end{longtable}
+Note, the Job Type values noted above are not kept in an SQL table.
+
+
+The JobStatus field specifies how the job terminated, and can be one of the
+following:
+
+\addcontentsline{lot}{table}{Job Statuses}
+\begin{longtable}{|l|l|}
+ \hline
+\multicolumn{1}{|c| }{\bf Value } & \multicolumn{1}{c| }{\bf Meaning } \\
+ \hline
+{C } & {Created but not yet running } \\
+ \hline
+{R } & {Running } \\
+ \hline
+{B } & {Blocked } \\
+ \hline
+{T } & {Terminated normally } \\
+ \hline
+{W } & {Terminated normally with warnings }
+\\ \hline
+{E } & {Terminated in Error } \\
+ \hline
+{e } & {Non-fatal error } \\
+ \hline
+{f } & {Fatal error } \\
+ \hline
+{D } & {Verify Differences } \\
+ \hline
+{A } & {Canceled by the user } \\
+ \hline
+{I } & {Incomplete Job }
+\\ \hline
+{F } & {Waiting on the File daemon } \\
+ \hline
+{S } & {Waiting on the Storage daemon } \\
+ \hline
+{m } & {Waiting for a new Volume to be mounted } \\
+ \hline
+{M } & {Waiting for a Mount } \\
+ \hline
+{s } & {Waiting for Storage resource } \\
+ \hline
+{j } & {Waiting for Job resource } \\
+ \hline
+{c } & {Waiting for Client resource } \\
+ \hline
+{d } & {Waiting for Maximum jobs } \\
+ \hline
+{t } & {Waiting for Start Time } \\
+ \hline
+{p } & {Waiting for higher priority job to finish }
+\\ \hline
+{i } & {Doing batch insert file records }
+\\ \hline
+{a } & {SD despooling attributes }
+\\ \hline
+{l } & {Doing data despooling }
+\\ \hline
+{L } & {Committing data (last despool) }
+\\ \hline
+
+
+
+\end{longtable}
+
+\addcontentsline{lot}{table}{File Sets Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf FileSet } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type\
+\ \ } & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{FileSetId } & {integer } & {Primary Key } \\
+ \hline
+{FileSet } & {tinyblob } & {FileSet name } \\
+ \hline
+{MD5 } & {tinyblob } & {MD5 checksum of FileSet } \\
+ \hline
+{CreateTime } & {datetime } & {Time and date Fileset created }
+\\ \hline
+
+\end{longtable}
+
+The {\bf FileSet} table contains one entry for each FileSet that is used. The
+MD5 signature is kept to ensure that if the user changes anything inside the
+FileSet, it will be detected and the new FileSet will be used. This is
+particularly important when doing an incremental update. If the user deletes a
+file or adds a file, we need to ensure that a Full backup is done prior to the
+next incremental.
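+
+A sketch of how this detection can work: look up the FileSet by name
+and MD5 (both values here are hypothetical placeholders); if no row
+comes back, the FileSet definition has changed and a new FileSet
+record must be created.
+
+\footnotesize
+\begin{verbatim}
+SELECT FileSetId FROM FileSet
+ WHERE FileSet = 'Full Set'
+   AND MD5 = '<md5-of-current-definition>';
+\end{verbatim}
+\normalsize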
+
+
+\addcontentsline{lot}{table}{JobMedia Table Layout}
+\begin{longtable}{|l|l|p{2.5in}|}
+ \hline
+\multicolumn{3}{|l| }{\bf JobMedia } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type\
+\ \ } & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{JobMediaId } & {integer } & {Primary Key } \\
+ \hline
+{JobId } & {integer } & {Link to Job Record } \\
+ \hline
+{MediaId } & {integer } & {Link to Media Record } \\
+ \hline
+{FirstIndex } & {integer } & {The index (sequence number) of the first file
+written for this Job to the Media } \\
+ \hline
+{LastIndex } & {integer } & {The index of the last file written for this
+Job to the Media } \\
+ \hline
+{StartFile } & {integer } & {The physical media (tape) file number of the
+first block written for this Job } \\
+ \hline
+{EndFile } & {integer } & {The physical media (tape) file number of the
+last block written for this Job } \\
+ \hline
+{StartBlock } & {integer } & {The number of the first block written for
+this Job } \\
+ \hline
+{EndBlock } & {integer } & {The number of the last block written for this
+Job } \\
+ \hline
+{VolIndex } & {integer } & {The Volume use sequence number within the Job }
+\\ \hline
+
+\end{longtable}
+
+The {\bf JobMedia} table contains one entry for each of the following: the
+start of the job, the start of each new tape file, the start of each new tape,
+and the end of the job. Since by default a new tape file is written every 2GB,
+in general you will have more than 2 JobMedia records per Job. The number can
+be varied by changing the ``Maximum File Size'' specified in the Device
+resource. This record allows Bacula to efficiently position close to
+(within 2GB) any given file in a backup. For restoring a full Job,
+these records are not very important, but if you want to retrieve
+a single file that was written near the end of a 100GB backup, the
+JobMedia records can speed it up by orders of magnitude by permitting
+forward spacing files and blocks rather than reading the whole 100GB
+backup.
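+
+For illustration, a hypothetical query that uses these records to find
+the Volumes, tape files, and blocks holding a given Job (here JobId 123)
+might look like this:
+
+\footnotesize
+\begin{verbatim}
+SELECT Media.VolumeName, JobMedia.StartFile, JobMedia.EndFile,
+       JobMedia.StartBlock, JobMedia.EndBlock
+  FROM JobMedia, Media
+ WHERE JobMedia.JobId = 123
+   AND Media.MediaId = JobMedia.MediaId
+ ORDER BY JobMedia.VolIndex;
+\end{verbatim}
+\normalsize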
+
+
+
+
+\addcontentsline{lot}{table}{Media Table Layout}
+\begin{longtable}{|l|l|p{2.4in}|}
+ \hline
+\multicolumn{3}{|l| }{\bf Media } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type\
+\ \ } & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{MediaId } & {integer } & {Primary Key } \\
+ \hline
+{VolumeName } & {tinyblob } & {Volume name } \\
+ \hline
+{Slot } & {integer } & {Autochanger Slot number or zero } \\
+ \hline
+{PoolId } & {integer } & {Link to Pool Record } \\
+ \hline
+{MediaType } & {tinyblob } & {The MediaType supplied by the user } \\
+ \hline
+{MediaTypeId } & {integer } & {The MediaTypeId } \\
+ \hline
+{LabelType } & {tinyint } & {The type of label on the Volume } \\
+ \hline
+{FirstWritten } & {datetime } & {Time/date when first written } \\
+ \hline
+{LastWritten } & {datetime } & {Time/date when last written } \\
+ \hline
+{LabelDate } & {datetime } & {Time/date when tape labeled } \\
+ \hline
+{VolJobs } & {integer } & {Number of jobs written to this media } \\
+ \hline
+{VolFiles } & {integer } & {Number of files written to this media } \\
+ \hline
+{VolBlocks } & {integer } & {Number of blocks written to this media } \\
+ \hline
+{VolMounts } & {integer } & {Number of times media mounted } \\
+ \hline
+{VolBytes } & {bigint } & {Number of bytes saved in Job } \\
+ \hline
+{VolParts } & {integer } & {The number of parts for a Volume (DVD) } \\
+ \hline
+{VolErrors } & {integer } & {Number of errors during Job } \\
+ \hline
+{VolWrites } & {integer } & {Number of writes to media } \\
+ \hline
+{MaxVolBytes } & {bigint } & {Maximum bytes to put on this media } \\
+ \hline
+{VolCapacityBytes } & {bigint } & {Capacity estimate for this volume } \\
+ \hline
+{VolStatus } & {enum } & {Status of media: Full, Archive, Append, Recycle,
+Purged, Read-Only, Disabled, Error, Busy, Used, Cleaning } \\
+ \hline
+{Enabled } & {tinyint } & {Whether or not Volume can be written } \\
+ \hline
+{Recycle } & {tinyint } & {Whether or not Bacula can recycle the Volumes:
+Yes, No } \\
+ \hline
+{ActionOnPurge } & {tinyint } & {What happens to a Volume after purging } \\
+ \hline
+{VolRetention } & {bigint } & {64 bit seconds until expiration } \\
+ \hline
+{VolUseDuration } & {bigint } & {64 bit seconds volume can be used } \\
+ \hline
+{MaxVolJobs } & {integer } & {maximum jobs to put on Volume } \\
+ \hline
+{MaxVolFiles } & {integer } & {maximum EOF marks to put on Volume }
+\\ \hline
+{InChanger } & {tinyint } & {Whether or not Volume in autochanger } \\
+ \hline
+{StorageId } & {integer } & {Storage record ID } \\
+ \hline
+{DeviceId } & {integer } & {Device record ID } \\
+ \hline
+{MediaAddressing } & {integer } & {Method of addressing media } \\
+ \hline
+{VolReadTime } & {bigint } & {Time Reading Volume } \\
+ \hline
+{VolWriteTime } & {bigint } & {Time Writing Volume } \\
+ \hline
+{EndFile } & {integer } & {End File number of Volume } \\
+ \hline
+{EndBlock } & {integer } & {End block number of Volume } \\
+ \hline
+{LocationId } & {integer } & {Location record ID } \\
+ \hline
+{RecycleCount } & {integer } & {Number of times recycled } \\
+ \hline
+{InitialWrite } & {datetime } & {When Volume first written } \\
+ \hline
+{ScratchPoolId } & {integer } & {Id of Scratch Pool } \\
+ \hline
+{RecyclePoolId } & {integer } & {Pool ID where to recycle Volume } \\
+ \hline
+{Comment } & {blob } & {User text field } \\
+ \hline
+
+
+\end{longtable}
+
+The {\bf Volume} table (internally referred to as the Media table) contains
+one entry for each volume, that is, each tape, cassette (8mm, DLT, DAT, ...),
+or file on which information is or was backed up. There is one Volume record
+created for each of the NumVols specified in the Pool resource record.
+
+\
+
+\addcontentsline{lot}{table}{Pool Table Layout}
+\begin{longtable}{|l|l|p{2.4in}|}
+ \hline
+\multicolumn{3}{|l| }{\bf Pool } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{PoolId } & {integer } & {Primary Key } \\
+ \hline
+{Name } & {Tinyblob } & {Pool Name } \\
+ \hline
+{NumVols } & {Integer } & {Number of Volumes in the Pool } \\
+ \hline
+{MaxVols } & {Integer } & {Maximum Volumes in the Pool } \\
+ \hline
+{UseOnce } & {tinyint } & {Use volume once } \\
+ \hline
+{UseCatalog } & {tinyint } & {Set to use catalog } \\
+ \hline
+{AcceptAnyVolume } & {tinyint } & {Accept any volume from Pool } \\
+ \hline
+{VolRetention } & {bigint } & {64 bit seconds to retain volume } \\
+ \hline
+{VolUseDuration } & {bigint } & {64 bit seconds volume can be used } \\
+ \hline
+{MaxVolJobs } & {integer } & {max jobs on volume } \\
+ \hline
+{MaxVolFiles } & {integer } & {max EOF marks to put on Volume } \\
+ \hline
+{MaxVolBytes } & {bigint } & {max bytes to write on Volume } \\
+ \hline
+{AutoPrune } & {tinyint } & {yes|no for autopruning } \\
+ \hline
+{Recycle } & {tinyint } & {yes|no for allowing auto recycling of Volume } \\
+ \hline
+{ActionOnPurge } & {tinyint } & {Default Volume ActionOnPurge } \\
+ \hline
+{PoolType } & {enum } & {Backup, Copy, Cloned, Archive, Migration } \\
+ \hline
+{LabelType } & {tinyint } & {Type of label ANSI/Bacula } \\
+ \hline
+{LabelFormat } & {Tinyblob } & {Label format }
+\\ \hline
+{Enabled } & {tinyint } & {Whether or not Volume can be written } \\
+ \hline
+{ScratchPoolId } & {integer } & {Id of Scratch Pool } \\
+ \hline
+{RecyclePoolId } & {integer } & {Pool ID where to recycle Volume } \\
+ \hline
+{NextPoolId } & {integer } & {Pool ID of next Pool } \\
+ \hline
+{MigrationHighBytes } & {bigint } & {High water mark for migration } \\
+ \hline
+{MigrationLowBytes } & {bigint } & {Low water mark for migration } \\
+ \hline
+{MigrationTime } & {bigint } & {Time before migration } \\
+ \hline
+
+
+
+\end{longtable}
+
+The {\bf Pool} table contains one entry for each media pool controlled by
+Bacula in this database. One media record exists for each of the NumVols
+contained in the Pool. The PoolType is a Bacula defined keyword. The MediaType
+is defined by the administrator, and corresponds to the MediaType specified in
+the Director's Storage definition record. The CurrentVol is the sequence
+number of the Media record for the current volume.
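+
+As an illustration, a hypothetical query comparing the NumVols kept in
+each Pool record with the actual number of Media records that point to
+it:
+
+\footnotesize
+\begin{verbatim}
+SELECT Pool.Name, Pool.NumVols, COUNT(Media.MediaId) AS Volumes
+  FROM Pool LEFT JOIN Media ON Media.PoolId = Pool.PoolId
+ GROUP BY Pool.PoolId;
+\end{verbatim}
+\normalsize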
+
+\
+
+\addcontentsline{lot}{table}{Client Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Client } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{ClientId } & {integer } & {Primary Key } \\
+ \hline
+{Name } & {TinyBlob } & {File Services Name } \\
+ \hline
+{UName } & {TinyBlob } & {uname -a from Client (not yet used) } \\
+ \hline
+{AutoPrune } & {tinyint } & {yes|no for autopruning } \\
+ \hline
+{FileRetention } & {bigint } & {64 bit seconds to retain Files } \\
+ \hline
+{JobRetention } & {bigint } & {64 bit seconds to retain Job }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Client} table contains one entry for each machine backed up by Bacula
+in this database. Normally the Name is a fully qualified domain name.
+
+
+\addcontentsline{lot}{table}{Storage Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Storage } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{StorageId } & {integer } & {Unique Id } \\
+ \hline
+{Name } & {tinyblob } & {Resource name of Storage device } \\
+ \hline
+{AutoChanger } & {tinyint } & {Set if it is an autochanger } \\
+ \hline
+
+\end{longtable}
+
+The {\bf Storage} table contains one entry for each Storage used.
+
+
+\addcontentsline{lot}{table}{Counter Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Counter } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{Counter } & {tinyblob } & {Counter name } \\
+ \hline
+{MinValue } & {integer } & {Start/Min value for counter } \\
+ \hline
+{MaxValue } & {integer } & {Max value for counter } \\
+ \hline
+{CurrentValue } & {integer } & {Current counter value } \\
+ \hline
+{WrapCounter } & {tinyblob } & {Name of another counter }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Counter} table contains one entry for each permanent counter defined
+by the user.
+
+\addcontentsline{lot}{table}{Job History Table Layout}
+\begin{longtable}{|l|l|p{2.5in}|}
+ \hline
+\multicolumn{3}{|l| }{\bf JobHisto } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{JobId } & {integer } & {Primary Key } \\
+ \hline
+{Job } & {tinyblob } & {Unique Job Name } \\
+ \hline
+{Name } & {tinyblob } & {Job Name } \\
+ \hline
+{Type } & {binary(1) } & {Job Type: Backup, Copy, Clone, Archive, Migration
+} \\
+ \hline
+{Level } & {binary(1) } & {Job Level } \\
+ \hline
+{ClientId } & {integer } & {Client index } \\
+ \hline
+{JobStatus } & {binary(1) } & {Job Termination Status } \\
+ \hline
+{SchedTime } & {datetime } & {Time/date when Job scheduled } \\
+ \hline
+{StartTime } & {datetime } & {Time/date when Job started } \\
+ \hline
+{EndTime } & {datetime } & {Time/date when Job ended } \\
+ \hline
+{RealEndTime } & {datetime } & {Time/date when original Job ended } \\
+ \hline
+{JobTDate } & {bigint } & {Start day in Unix format but 64 bits; used for
+Retention period. } \\
+ \hline
+{VolSessionId } & {integer } & {Unique Volume Session ID } \\
+ \hline
+{VolSessionTime } & {integer } & {Unique Volume Session Time } \\
+ \hline
+{JobFiles } & {integer } & {Number of files saved in Job } \\
+ \hline
+{JobBytes } & {bigint } & {Number of bytes saved in Job } \\
+ \hline
+{JobErrors } & {integer } & {Number of errors during Job } \\
+ \hline
+{JobMissingFiles } & {integer } & {Number of files not saved (not yet used) }
+\\
+ \hline
+{PoolId } & {integer } & {Link to Pool Record } \\
+ \hline
+{FileSetId } & {integer } & {Link to FileSet Record } \\
+ \hline
+{PriorJobId } & {integer } & {Link to prior Job Record when migrated } \\
+ \hline
+{PurgedFiles } & {tiny integer } & {Set when all File records purged } \\
+ \hline
+{HasBase } & {tiny integer } & {Set when Base Job run }
+\\ \hline
+
+\end{longtable}
+
+The {\bf JobHisto} table is the same as the Job table, but it keeps
+long-term statistics (i.e. it is not pruned with the Job).
+
+
+\addcontentsline{lot}{table}{Log Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Log } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{LogId } & {integer } & {Primary Key }
+\\ \hline
+{JobId } & {integer } & {Points to Job record }
+\\ \hline
+{Time } & {datetime } & {Time/date log record created }
+\\ \hline
+{LogText } & {blob } & {Log text }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Log} table contains a log of all Job output.
+
+\addcontentsline{lot}{table}{Location Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Location } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{LocationId } & {integer } & {Primary Key }
+\\ \hline
+{Location } & {tinyblob } & {Text defining location }
+\\ \hline
+{Cost } & {integer } & {Relative cost of obtaining Volume }
+\\ \hline
+{Enabled } & {tinyint } & {Whether or not Volume is enabled }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Location} table defines where a Volume is physically located.
+
+
+\addcontentsline{lot}{table}{Location Log Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf LocationLog } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{LocLogId } & {integer } & {Primary Key }
+\\ \hline
+{Date } & {datetime } & {Time/date log record created }
+\\ \hline
+{MediaId } & {integer } & {Points to Media record }
+\\ \hline
+{LocationId } & {integer } & {Points to Location record }
+\\ \hline
+{NewVolStatus } & {integer } & {enum: Full, Archive, Append, Recycle, Purged,
+ Read-Only, Disabled, Error, Busy, Used, Cleaning }
+\\ \hline
+{Enabled } & {tinyint } & {Whether or not Volume is enabled }
+\\ \hline
+
+
+\end{longtable}
+
+The {\bf LocationLog} table contains a log of all changes of a Volume's
+location and status.
+
+
+\addcontentsline{lot}{table}{Version Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Version } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{VersionId } & {integer } & {Primary Key }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Version} table defines the Bacula database version number. Bacula
+checks this number before reading the database to ensure that it is compatible
+with the Bacula binary file.
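+
+A sketch of the corresponding check: read the single row of the Version
+table and compare it with the database version number compiled into the
+binary (the value 7 matches the INSERT shown in the table definitions
+below).
+
+\footnotesize
+\begin{verbatim}
+SELECT VersionId FROM Version;
+\end{verbatim}
+\normalsize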
+
+
+\addcontentsline{lot}{table}{Base Files Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf BaseFiles } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{BaseId } & {integer } & {Primary Key } \\
+ \hline
+{BaseJobId } & {integer } & {JobId of Base Job } \\
+ \hline
+{JobId } & {integer } & {Reference to Job } \\
+ \hline
+{FileId } & {integer } & {Reference to File } \\
+ \hline
+{FileIndex } & {integer } & {File Index number }
+\\ \hline
+
+\end{longtable}
+
+The {\bf BaseFiles} table contains all the File references for a particular
+JobId that point to a Base file -- i.e. they were previously saved and hence
+were not saved in the current JobId but in BaseJobId under FileId. FileIndex
+is the index of the file, and is used for optimization of Restore jobs to
+prevent the need to read the FileId record when creating the in-memory tree.
+This record is not yet implemented.
+
+\
+
+\subsection{MySQL Table Definition}
+\index[general]{MySQL Table Definition }
+\index[general]{Definition!MySQL Table }
+\addcontentsline{toc}{subsubsection}{MySQL Table Definition}
+
+The commands used to create the MySQL tables are as follows:
+
+\footnotesize
+\begin{verbatim}
+USE bacula;
+CREATE TABLE Filename (
+ FilenameId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ Name BLOB NOT NULL,
+ PRIMARY KEY(FilenameId),
+ INDEX (Name(30))
+ );
+CREATE TABLE Path (
+ PathId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ Path BLOB NOT NULL,
+ PRIMARY KEY(PathId),
+ INDEX (Path(50))
+ );
+CREATE TABLE File (
+ FileId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ FileIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
+ PathId INTEGER UNSIGNED NOT NULL REFERENCES Path,
+ FilenameId INTEGER UNSIGNED NOT NULL REFERENCES Filename,
+ MarkId INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ LStat TINYBLOB NOT NULL,
+ MD5 TINYBLOB NOT NULL,
+ PRIMARY KEY(FileId),
+ INDEX (JobId),
+ INDEX (PathId),
+ INDEX (FilenameId)
+ );
+CREATE TABLE Job (
+ JobId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ Job TINYBLOB NOT NULL,
+ Name TINYBLOB NOT NULL,
+ Type BINARY(1) NOT NULL,
+ Level BINARY(1) NOT NULL,
+ ClientId INTEGER NOT NULL REFERENCES Client,
+ JobStatus BINARY(1) NOT NULL,
+ SchedTime DATETIME NOT NULL,
+ StartTime DATETIME NOT NULL,
+ EndTime DATETIME NOT NULL,
+ JobTDate BIGINT UNSIGNED NOT NULL,
+ VolSessionId INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolSessionTime INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ JobFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ JobBytes BIGINT UNSIGNED NOT NULL,
+ JobErrors INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ JobMissingFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ PoolId INTEGER UNSIGNED NOT NULL REFERENCES Pool,
+ FileSetId INTEGER UNSIGNED NOT NULL REFERENCES FileSet,
+ PurgedFiles TINYINT NOT NULL DEFAULT 0,
+ HasBase TINYINT NOT NULL DEFAULT 0,
+ PRIMARY KEY(JobId),
+ INDEX (Name(128))
+ );
+CREATE TABLE FileSet (
+ FileSetId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ FileSet TINYBLOB NOT NULL,
+ MD5 TINYBLOB NOT NULL,
+ CreateTime DATETIME NOT NULL,
+ PRIMARY KEY(FileSetId)
+ );
+CREATE TABLE JobMedia (
+ JobMediaId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
+ MediaId INTEGER UNSIGNED NOT NULL REFERENCES Media,
+ FirstIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ LastIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ StartFile INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ EndFile INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ StartBlock INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ EndBlock INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ PRIMARY KEY(JobMediaId),
+ INDEX (JobId, MediaId)
+ );
+CREATE TABLE Media (
+ MediaId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ VolumeName TINYBLOB NOT NULL,
+ Slot INTEGER NOT NULL DEFAULT 0,
+ PoolId INTEGER UNSIGNED NOT NULL REFERENCES Pool,
+ MediaType TINYBLOB NOT NULL,
+ FirstWritten DATETIME NOT NULL,
+ LastWritten DATETIME NOT NULL,
+ LabelDate DATETIME NOT NULL,
+ VolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolBlocks INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolMounts INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolBytes BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ VolErrors INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolWrites INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolCapacityBytes BIGINT UNSIGNED NOT NULL,
+ VolStatus ENUM('Full', 'Archive', 'Append', 'Recycle', 'Purged',
+ 'Read-Only', 'Disabled', 'Error', 'Busy', 'Used', 'Cleaning') NOT NULL,
+ Recycle TINYINT NOT NULL DEFAULT 0,
+ VolRetention BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ VolUseDuration BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ MaxVolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ MaxVolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ MaxVolBytes BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ InChanger TINYINT NOT NULL DEFAULT 0,
+ MediaAddressing TINYINT NOT NULL DEFAULT 0,
+ VolReadTime BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ VolWriteTime BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ PRIMARY KEY(MediaId),
+ INDEX (PoolId)
+ );
+CREATE TABLE Pool (
+ PoolId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ Name TINYBLOB NOT NULL,
+ NumVols INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ MaxVols INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ UseOnce TINYINT NOT NULL,
+ UseCatalog TINYINT NOT NULL,
+ AcceptAnyVolume TINYINT DEFAULT 0,
+ VolRetention BIGINT UNSIGNED NOT NULL,
+ VolUseDuration BIGINT UNSIGNED NOT NULL,
+ MaxVolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ MaxVolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ MaxVolBytes BIGINT UNSIGNED NOT NULL,
+ AutoPrune TINYINT DEFAULT 0,
+ Recycle TINYINT DEFAULT 0,
+ PoolType ENUM('Backup', 'Copy', 'Cloned', 'Archive', 'Migration', 'Scratch') NOT NULL,
+ LabelFormat TINYBLOB,
+ Enabled TINYINT DEFAULT 1,
+ ScratchPoolId INTEGER UNSIGNED DEFAULT 0 REFERENCES Pool,
+ RecyclePoolId INTEGER UNSIGNED DEFAULT 0 REFERENCES Pool,
+ UNIQUE (Name(128)),
+ PRIMARY KEY (PoolId)
+ );
+CREATE TABLE Client (
+ ClientId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ Name TINYBLOB NOT NULL,
+ Uname TINYBLOB NOT NULL, /* full uname -a of client */
+ AutoPrune TINYINT DEFAULT 0,
+ FileRetention BIGINT UNSIGNED NOT NULL,
+ JobRetention BIGINT UNSIGNED NOT NULL,
+ UNIQUE (Name(128)),
+ PRIMARY KEY(ClientId)
+ );
+CREATE TABLE BaseFiles (
+ BaseId INTEGER UNSIGNED AUTO_INCREMENT,
+ BaseJobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
+ JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
+ FileId INTEGER UNSIGNED NOT NULL REFERENCES File,
+ FileIndex INTEGER UNSIGNED,
+ PRIMARY KEY(BaseId)
+ );
+CREATE TABLE UnsavedFiles (
+ UnsavedId INTEGER UNSIGNED AUTO_INCREMENT,
+ JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
+ PathId INTEGER UNSIGNED NOT NULL REFERENCES Path,
+ FilenameId INTEGER UNSIGNED NOT NULL REFERENCES Filename,
+ PRIMARY KEY (UnsavedId)
+ );
+CREATE TABLE Version (
+ VersionId INTEGER UNSIGNED NOT NULL
+ );
+-- Initialize Version
+INSERT INTO Version (VersionId) VALUES (7);
+CREATE TABLE Counters (
+ Counter TINYBLOB NOT NULL,
+ MinValue INTEGER,
+ MaxValue INTEGER,
+ CurrentValue INTEGER,
+ WrapCounter TINYBLOB NOT NULL,
+ PRIMARY KEY (Counter(128))
+ );
+\end{verbatim}
+\normalsize
--- /dev/null
+\newfont{\bighead}{cmr17 at 36pt}
+\parskip 10pt
+\parindent 0pt
+
+\title{\includegraphics{\idir bacula-logo.eps} \\ \bigskip
+ \Huge{Bacula}$^{\normalsize \textregistered}$ \Huge{Developer's Guide}
+ \begin{center}
+ \large{It comes in the night and sucks
+ the essence from your computers. }
+ \end{center}
+}
+
+
+\author{Kern Sibbald}
+\date{\vspace{1.0in}\today \\
+ This manual documents Bacula version \input{version} \\
+ \vspace{0.2in}
+ Copyright {\copyright} 1999-2010, Free Software Foundation Europe
+ e.V. \\
+ Bacula {\textregistered} is a registered trademark of Kern Sibbald.\\
+ \vspace{0.2in}
+ Permission is granted to copy, distribute and/or modify this document under the terms of the
+ GNU Free Documentation License, Version 1.2 published by the Free Software Foundation;
+ with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
+ A copy of the license is included in the section entitled "GNU Free Documentation License".
+}
+
+\maketitle
--- /dev/null
+%%
+%%
+
+\chapter{Daemon Protocol}
+\label{_ChapterStart2}
+\index{Protocol!Daemon }
+\index{Daemon Protocol }
+
+\section{General}
+\index{General }
+\addcontentsline{toc}{subsection}{General}
+
+This document describes the protocols used between the various daemons. As
+Bacula has developed, this document has become quite out of date. The general
+idea still holds true, but the details of the fields for each command, and
+indeed the commands themselves, have changed considerably.
+
+It is intended to be a technical discussion of the general daemon protocols
+and as such is not targeted at end users but rather at developers and system
+administrators that want or need to know more of the working details of {\bf
+Bacula}.
+
+\section{Low Level Network Protocol}
+\index{Protocol!Low Level Network }
+\index{Low Level Network Protocol }
+\addcontentsline{toc}{subsection}{Low Level Network Protocol}
+
+At the lowest level, the network protocol is handled by {\bf BSOCK} packets
+which contain a lot of information about the status of the network connection:
+who is at the other end, etc. Each basic {\bf Bacula} network read or write
+actually consists of two low level network read/writes. The first write always
+sends four bytes of data in machine independent byte order. If data is to
+follow, the first four bytes are a positive non-zero integer indicating the
+length of the data that follow in the subsequent write. If the four byte
+integer is zero or negative, it indicates a special request, a sort of network
+signaling capability. In this case, no data packet will follow. The low level
+BSOCK routines expect that only a single thread is accessing the socket at a
+time. It is advised that multiple threads do not read/write the same socket.
+If you must do this, you must provide some sort of locking mechanism. It would
+not be appropriate for efficiency reasons to make every call to the BSOCK
+routines lock and unlock the packet.
+
+\section{General Daemon Protocol}
+\index{General Daemon Protocol }
+\index{Protocol!General Daemon }
+\addcontentsline{toc}{subsection}{General Daemon Protocol}
+
+In general, all the daemons follow the following global rules. There may be
+exceptions depending on the specific case. Normally, one daemon will be
+sending commands to another daemon (specifically, the Director to the Storage
+daemon and the Director to the File daemon).
+
+\begin{itemize}
+\item Commands are always ASCII commands that are case sensitive as well as
+  space sensitive.
+\item All binary data is converted into ASCII (either with printf statements
+ or using base64 encoding).
+\item All responses to commands sent are always prefixed with a return
+ numeric code where codes in the 1000's are reserved for the Director, the
+ 2000's are reserved for the File daemon, and the 3000's are reserved for the
+Storage daemon.
+\item Any response that is not prefixed with a numeric code is a command (or
+ subcommand if you like) coming from the other end. For example, while the
+ Director is corresponding with the Storage daemon, the Storage daemon can
+request Catalog services from the Director. This convention permits each side
+to send commands to the other daemon while simultaneously responding to
+commands.
+\item Any response that is of zero length, depending on the context, either
+ terminates the data stream being sent or terminates command mode prior to
+ closing the connection.
+\item Any response that is of negative length is a special sign that normally
+ requires a response. For example, during data transfer from the File daemon
+ to the Storage daemon, normally the File daemon sends continuously without
+intervening reads. However, periodically, the File daemon will send a packet
+of length -1 indicating that the current data stream is complete and that the
+Storage daemon should respond to the packet with an OK, ABORT JOB, PAUSE,
+etc. This permits the File daemon to efficiently send data while at the same
+time occasionally ``polling'' the Storage daemon for its status or any
+special requests.
+
+Currently, these negative lengths are specific to the daemon, but shortly,
+the range 0 to -999 will be standard daemon-wide signals, while -1000 to
+-1999 will be for Director use, -2000 to -2999 for the File daemon, and
+-3000 to -3999 for the Storage daemon.
+\end{itemize}
+
+\section{The Protocol Used Between the Director and the Storage Daemon}
+\index{Daemon!Protocol Used Between the Director and the Storage }
+\index{Protocol Used Between the Director and the Storage Daemon }
+\addcontentsline{toc}{subsection}{Protocol Used Between the Director and the
+Storage Daemon}
+
+Before sending commands to the File daemon, the Director opens a Message
+channel with the Storage daemon, identifies itself and presents its password.
+If the password check is OK, the Storage daemon accepts the Director. The
+Director then passes the Storage daemon the JobId to be run as well as the
+File daemon authorization (append, read all, or read for a specific session).
+The Storage daemon will then pass back to the Director an enabling key for this
+JobId that must be presented by the File daemon when opening the job. Until
+this process is complete, the Storage daemon is not available for use by File
+daemons.
+
+\footnotesize
+\begin{verbatim}
+SD: listens
+DR: makes connection
+DR: Hello <Director-name> calling <password>
+SD: 3000 OK Hello
+DR: JobId=nnn Allow=(append, read) Session=(*, SessionId)
+ (Session not implemented yet)
+SD: 3000 OK Job Authorization=<password>
+DR: use device=<device-name> media_type=<media-type>
+ pool_name=<pool-name> pool_type=<pool_type>
+SD: 3000 OK use device
+\end{verbatim}
+\normalsize
+
+For the Director to be authorized, the \lt{}Director-name\gt{} and the
+\lt{}password\gt{} must match the values in one of the Storage daemon's
+Director resources (there may be several Directors that can access a single
+Storage daemon).
+
+\section{The Protocol Used Between the Director and the File Daemon}
+\index{Daemon!Protocol Used Between the Director and the File }
+\index{Protocol Used Between the Director and the File Daemon }
+\addcontentsline{toc}{subsection}{Protocol Used Between the Director and the
+File Daemon}
+
+A typical conversation might look like the following:
+
+\footnotesize
+\begin{verbatim}
+FD: listens
+DR: makes connection
+DR: Hello <Director-name> calling <password>
+FD: 2000 OK Hello
+DR: JobId=nnn Authorization=<password>
+FD: 2000 OK Job
+DR: storage address = <Storage daemon address> port = <port-number>
+ name = <DeviceName> mediatype = <MediaType>
+FD: 2000 OK storage
+DR: include
+DR: <directory1>
+DR: <directory2>
+ ...
+DR: Null packet
+FD: 2000 OK include
+DR: exclude
+DR: <directory1>
+DR: <directory2>
+ ...
+DR: Null packet
+FD: 2000 OK exclude
+DR: full
+FD: 2000 OK full
+DR: save
+FD: 2000 OK save
+FD: Attribute record for each file as sent to the
+ Storage daemon (described above).
+FD: Null packet
+FD: <append close responses from Storage daemon>
+ e.g.
+ 3000 OK Volumes = <number of volumes>
+ 3001 Volume = <volume-id> <start file> <start block>
+ <end file> <end block> <volume session-id>
+ 3002 Volume data = <date/time of last write> <Number bytes written>
+ <number errors>
+ ... additional Volume / Volume data pairs for volumes 2 .. n
+FD: Null packet
+FD: close socket
+\end{verbatim}
+\normalsize
+
+\section{The Save Protocol Between the File Daemon and the Storage Daemon}
+\index{Save Protocol Between the File Daemon and the Storage Daemon }
+\index{Daemon!Save Protocol Between the File Daemon and the Storage }
+\addcontentsline{toc}{subsection}{Save Protocol Between the File Daemon and
+the Storage Daemon}
+
+Once the Director has sent a {\bf save} command to the File daemon, the File
+daemon will contact the Storage daemon to begin the save.
+
+In what follows: FD: refers to information sent over the network from the File
+daemon to the Storage daemon, and SD: refers to information sent from the
+Storage daemon to the File daemon.
+
+\subsection{Command and Control Information}
+\index{Information!Command and Control }
+\index{Command and Control Information }
+\addcontentsline{toc}{subsubsection}{Command and Control Information}
+
+Command and control information is exchanged in human readable ASCII commands.
+
+
+\footnotesize
+\begin{verbatim}
+FD: listens
+SD: makes connection
+FD: append open session = <JobId> [<password>]
+SD: 3000 OK ticket = <number>
+FD: append data <ticket-number>
+SD: 3000 OK data address = <IPaddress> port = <port>
+\end{verbatim}
+\normalsize
+
+\subsection{Data Information}
+\index{Information!Data }
+\index{Data Information }
+\addcontentsline{toc}{subsubsection}{Data Information}
+
+The Data information consists of the file attributes and data sent to the
+Storage daemon. For the most part, the data information is sent one way: from the File
+daemon to the Storage daemon. This allows the File daemon to transfer
+information as fast as possible without a lot of handshaking and network
+overhead.
+
+However, from time to time, the File daemon needs to do a sort of checkpoint
+of the situation to ensure that everything is going well with the Storage
+daemon. To do so, the File daemon sends a packet with a negative length
+indicating that it wishes the Storage daemon to respond by sending a packet of
+information to the File daemon. The File daemon then waits to receive a packet
+from the Storage daemon before continuing.
+
+All data sent are in binary format except for the header packet, which is in
+ASCII. There are two packet types used in data transfer mode: a header packet,
+the contents of which are known to the Storage daemon, and a data packet, the
+contents of which are never examined by the Storage daemon.
+
+The first data packet to the Storage daemon will be an ASCII header packet
+consisting of the following data:
+
+\lt{}File-Index\gt{} \lt{}Stream-Id\gt{} \lt{}Info\gt{}
+
+where {\bf \lt{}File-Index\gt{}} is a sequential number beginning from one that
+increments with each file (or directory) sent,
+
+where {\bf \lt{}Stream-Id\gt{}} will be 1 for the Attributes record and 2 for
+uncompressed File data (3 is reserved for the MD5 signature for the file),
+
+and where {\bf \lt{}Info\gt{}} transmits information about the Stream to the
+Storage daemon. It is a character string field where each character has a
+meaning. The only character currently defined is 0 (zero), which is simply a
+place holder (a no op). In the future, there may be codes indicating
+compressed data, encrypted data, etc.
+
+Immediately following the header packet, the Storage daemon will expect any
+number of data packets. The series of data packets is terminated by a zero
+length packet, which indicates to the Storage daemon that the next packet will
+be another header packet. As previously mentioned, a negative length packet is
+a request for the Storage daemon to temporarily enter command mode and send a
+reply to the File daemon. Thus an actual conversation might contain the
+following exchanges:
+
+\footnotesize
+\begin{verbatim}
+FD: <1 1 0> (header packet)
+FD: <data packet containing file-attributes>
+FD: Null packet
+FD: <1 2 0>
+FD: <multiple data packets containing the file data>
+FD: Packet length = -1
+SD: 3000 OK
+FD: <2 1 0>
+FD: <data packet containing file-attributes>
+FD: Null packet
+FD: <2 2 0>
+FD: <multiple data packets containing the file data>
+FD: Null packet
+FD: Null packet
+FD: append end session <ticket-number>
+SD: 3000 OK end
+FD: append close session <ticket-number>
+SD: 3000 OK Volumes = <number of volumes>
+SD: 3001 Volume = <volumeid> <start file> <start block>
+ <end file> <end block> <volume session-id>
+SD: 3002 Volume data = <date/time of last write> <Number bytes written>
+ <number errors>
+SD: ... additional Volume / Volume data pairs for
+ volumes 2 .. n
+FD: close socket
+\end{verbatim}
+\normalsize
+
+The information returned to the File daemon by the Storage daemon in response
+to the {\bf append close session} is transmitted in turn to the Director.
--- /dev/null
+%%
+%%
+%% The following characters must be preceded by a backslash
+%% to be entered as printable characters:
+%%
+%% # $ % & ~ _ ^ \ { }
+%%
+
+\documentclass[10pt,a4paper]{book}
+
+\topmargin -0.5in
+\oddsidemargin 0.0in
+\evensidemargin 0.0in
+\textheight 10in
+\textwidth 6.5in
+
+
+\usepackage{html}
+\usepackage{float}
+\usepackage{graphicx}
+\usepackage{bacula}
+\usepackage{longtable}
+\usepackage{makeidx}
+\usepackage{index}
+\usepackage{setspace}
+\usepackage{hyperref}
+% \usepackage[linkcolor=black,colorlinks=true]{hyperref}
+\usepackage{url}
+
+\makeindex
+\newindex{dir}{ddx}{dnd}{Director Index}
+\newindex{fd}{fdx}{fnd}{File Daemon Index}
+\newindex{sd}{sdx}{snd}{Storage Daemon Index}
+\newindex{console}{cdx}{cnd}{Console Index}
+\newindex{general}{idx}{ind}{General Index}
+
+\sloppy
+
+\begin{document}
+\sloppy
+
+\include{coverpage}
+
+\clearpage
+\pagenumbering{roman}
+\tableofcontents
+\clearpage
+
+\pagestyle{myheadings}
+\markboth{Bacula Version \version}{Bacula Version \version}
+\pagenumbering{arabic}
+\include{generaldevel}
+\include{git}
+\include{pluginAPI}
+\include{platformsupport}
+\include{daemonprotocol}
+\include{director}
+\include{file}
+\include{storage}
+\include{catalog}
+\include{mediaformat}
+\include{porting}
+\include{gui-interface}
+\include{tls-techdoc}
+\include{regression}
+\include{md5}
+\include{mempool}
+\include{netprotocol}
+\include{smartall}
+\include{fdl}
+
+
+% The following line tells link_resolver.pl to not include these files:
+% nolinks developersi baculai-dir baculai-fd baculai-sd baculai-console baculai-main
+
+% pull in the index
+\clearpage
+\printindex
+
+\end{document}
--- /dev/null
+%%
+%%
+
+\chapter{Director Services Daemon}
+\label{_ChapterStart6}
+\index{Daemon!Director Services }
+\index{Director Services Daemon }
+\addcontentsline{toc}{section}{Director Services Daemon}
+
+This chapter is intended to be a technical discussion of the Director services
+and as such is not targeted at end users but rather at developers and system
+administrators that want or need to know more of the working details of {\bf
+Bacula}.
+
+The {\bf Bacula Director} services consist of the program that supervises all
+the backup and restore operations.
+
+To be written ...
--- /dev/null
+%---------The file header---------------------------------------------
+
+%% \usepackage[english]{babel} %language selection
+%% \usepackage[T1]{fontenc}
+
+%%\pagenumbering{arabic}
+
+%% \usepackage{hyperref}
+%% \hypersetup{colorlinks,
+%% citecolor=black,
+%% filecolor=black,
+%% linkcolor=black,
+%% urlcolor=black,
+%% pdftex}
+
+
+%---------------------------------------------------------------------
+\chapter{GNU Free Documentation License}
+\index[general]{GNU Free Documentation License}
+\index[general]{License!GNU Free Documentation}
+\addcontentsline{toc}{section}{GNU Free Documentation License}
+
+%\label{label_fdl}
+
+ \begin{center}
+
+ Version 1.2, November 2002
+
+
+ Copyright \copyright 2000,2001,2002 Free Software Foundation, Inc.
+
+ \bigskip
+
+ 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+
+ \bigskip
+
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+\end{center}
+
+
+\begin{center}
+{\bf\large Preamble}
+\end{center}
+
+The purpose of this License is to make a manual, textbook, or other
+functional and useful document "free" in the sense of freedom: to
+assure everyone the effective freedom to copy and redistribute it,
+with or without modifying it, either commercially or noncommercially.
+Secondarily, this License preserves for the author and publisher a way
+to get credit for their work, while not being considered responsible
+for modifications made by others.
+
+This License is a kind of "copyleft", which means that derivative
+works of the document must themselves be free in the same sense. It
+complements the GNU General Public License, which is a copyleft
+license designed for free software.
+
+We have designed this License in order to use it for manuals for free
+software, because free software needs free documentation: a free
+program should come with manuals providing the same freedoms that the
+software does. But this License is not limited to software manuals;
+it can be used for any textual work, regardless of subject matter or
+whether it is published as a printed book. We recommend this License
+principally for works whose purpose is instruction or reference.
+
+
+\begin{center}
+{\Large\bf 1. APPLICABILITY AND DEFINITIONS}
+\addcontentsline{toc}{section}{1. APPLICABILITY AND DEFINITIONS}
+\end{center}
+
+This License applies to any manual or other work, in any medium, that
+contains a notice placed by the copyright holder saying it can be
+distributed under the terms of this License. Such a notice grants a
+world-wide, royalty-free license, unlimited in duration, to use that
+work under the conditions stated herein. The \textbf{"Document"}, below,
+refers to any such manual or work. Any member of the public is a
+licensee, and is addressed as \textbf{"you"}. You accept the license if you
+copy, modify or distribute the work in a way requiring permission
+under copyright law.
+
+A \textbf{"Modified Version"} of the Document means any work containing the
+Document or a portion of it, either copied verbatim, or with
+modifications and/or translated into another language.
+
+A \textbf{"Secondary Section"} is a named appendix or a front-matter section of
+the Document that deals exclusively with the relationship of the
+publishers or authors of the Document to the Document's overall subject
+(or to related matters) and contains nothing that could fall directly
+within that overall subject. (Thus, if the Document is in part a
+textbook of mathematics, a Secondary Section may not explain any
+mathematics.) The relationship could be a matter of historical
+connection with the subject or with related matters, or of legal,
+commercial, philosophical, ethical or political position regarding
+them.
+
+The \textbf{"Invariant Sections"} are certain Secondary Sections whose titles
+are designated, as being those of Invariant Sections, in the notice
+that says that the Document is released under this License. If a
+section does not fit the above definition of Secondary then it is not
+allowed to be designated as Invariant. The Document may contain zero
+Invariant Sections. If the Document does not identify any Invariant
+Sections then there are none.
+
+The \textbf{"Cover Texts"} are certain short passages of text that are listed,
+as Front-Cover Texts or Back-Cover Texts, in the notice that says that
+the Document is released under this License. A Front-Cover Text may
+be at most 5 words, and a Back-Cover Text may be at most 25 words.
+
+A \textbf{"Transparent"} copy of the Document means a machine-readable copy,
+represented in a format whose specification is available to the
+general public, that is suitable for revising the document
+straightforwardly with generic text editors or (for images composed of
+pixels) generic paint programs or (for drawings) some widely available
+drawing editor, and that is suitable for input to text formatters or
+for automatic translation to a variety of formats suitable for input
+to text formatters. A copy made in an otherwise Transparent file
+format whose markup, or absence of markup, has been arranged to thwart
+or discourage subsequent modification by readers is not Transparent.
+An image format is not Transparent if used for any substantial amount
+of text. A copy that is not "Transparent" is called \textbf{"Opaque"}.
+
+Examples of suitable formats for Transparent copies include plain
+ASCII without markup, Texinfo input format, LaTeX input format, SGML
+or XML using a publicly available DTD, and standard-conforming simple
+HTML, PostScript or PDF designed for human modification. Examples of
+transparent image formats include PNG, XCF and JPG. Opaque formats
+include proprietary formats that can be read and edited only by
+proprietary word processors, SGML or XML for which the DTD and/or
+processing tools are not generally available, and the
+machine-generated HTML, PostScript or PDF produced by some word
+processors for output purposes only.
+
+The \textbf{"Title Page"} means, for a printed book, the title page itself,
+plus such following pages as are needed to hold, legibly, the material
+this License requires to appear in the title page. For works in
+formats which do not have any title page as such, "Title Page" means
+the text near the most prominent appearance of the work's title,
+preceding the beginning of the body of the text.
+
+A section \textbf{"Entitled XYZ"} means a named subunit of the Document whose
+title either is precisely XYZ or contains XYZ in parentheses following
+text that translates XYZ in another language. (Here XYZ stands for a
+specific section name mentioned below, such as \textbf{"Acknowledgements"},
+\textbf{"Dedications"}, \textbf{"Endorsements"}, or \textbf{"History"}.)
+To \textbf{"Preserve the Title"}
+of such a section when you modify the Document means that it remains a
+section "Entitled XYZ" according to this definition.
+
+The Document may include Warranty Disclaimers next to the notice which
+states that this License applies to the Document. These Warranty
+Disclaimers are considered to be included by reference in this
+License, but only as regards disclaiming warranties: any other
+implication that these Warranty Disclaimers may have is void and has
+no effect on the meaning of this License.
+
+
+\begin{center}
+{\Large\bf 2. VERBATIM COPYING}
+\addcontentsline{toc}{section}{2. VERBATIM COPYING}
+\end{center}
+
+You may copy and distribute the Document in any medium, either
+commercially or noncommercially, provided that this License, the
+copyright notices, and the license notice saying this License applies
+to the Document are reproduced in all copies, and that you add no other
+conditions whatsoever to those of this License. You may not use
+technical measures to obstruct or control the reading or further
+copying of the copies you make or distribute. However, you may accept
+compensation in exchange for copies. If you distribute a large enough
+number of copies you must also follow the conditions in section 3.
+
+You may also lend copies, under the same conditions stated above, and
+you may publicly display copies.
+
+
+\begin{center}
+{\Large\bf 3. COPYING IN QUANTITY}
+\addcontentsline{toc}{section}{3. COPYING IN QUANTITY}
+\end{center}
+
+
+If you publish printed copies (or copies in media that commonly have
+printed covers) of the Document, numbering more than 100, and the
+Document's license notice requires Cover Texts, you must enclose the
+copies in covers that carry, clearly and legibly, all these Cover
+Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on
+the back cover. Both covers must also clearly and legibly identify
+you as the publisher of these copies. The front cover must present
+the full title with all words of the title equally prominent and
+visible. You may add other material on the covers in addition.
+Copying with changes limited to the covers, as long as they preserve
+the title of the Document and satisfy these conditions, can be treated
+as verbatim copying in other respects.
+
+If the required texts for either cover are too voluminous to fit
+legibly, you should put the first ones listed (as many as fit
+reasonably) on the actual cover, and continue the rest onto adjacent
+pages.
+
+If you publish or distribute Opaque copies of the Document numbering
+more than 100, you must either include a machine-readable Transparent
+copy along with each Opaque copy, or state in or with each Opaque copy
+a computer-network location from which the general network-using
+public has access to download using public-standard network protocols
+a complete Transparent copy of the Document, free of added material.
+If you use the latter option, you must take reasonably prudent steps,
+when you begin distribution of Opaque copies in quantity, to ensure
+that this Transparent copy will remain thus accessible at the stated
+location until at least one year after the last time you distribute an
+Opaque copy (directly or through your agents or retailers) of that
+edition to the public.
+
+It is requested, but not required, that you contact the authors of the
+Document well before redistributing any large number of copies, to give
+them a chance to provide you with an updated version of the Document.
+
+
+\begin{center}
+{\Large\bf 4. MODIFICATIONS}
+\addcontentsline{toc}{section}{4. MODIFICATIONS}
+\end{center}
+
+You may copy and distribute a Modified Version of the Document under
+the conditions of sections 2 and 3 above, provided that you release
+the Modified Version under precisely this License, with the Modified
+Version filling the role of the Document, thus licensing distribution
+and modification of the Modified Version to whoever possesses a copy
+of it. In addition, you must do these things in the Modified Version:
+
+\begin{itemize}
+\item[A.]
+ Use in the Title Page (and on the covers, if any) a title distinct
+ from that of the Document, and from those of previous versions
+ (which should, if there were any, be listed in the History section
+ of the Document). You may use the same title as a previous version
+ if the original publisher of that version gives permission.
+
+\item[B.]
+ List on the Title Page, as authors, one or more persons or entities
+ responsible for authorship of the modifications in the Modified
+ Version, together with at least five of the principal authors of the
+ Document (all of its principal authors, if it has fewer than five),
+ unless they release you from this requirement.
+
+\item[C.]
+ State on the Title page the name of the publisher of the
+ Modified Version, as the publisher.
+
+\item[D.]
+ Preserve all the copyright notices of the Document.
+
+\item[E.]
+ Add an appropriate copyright notice for your modifications
+ adjacent to the other copyright notices.
+
+\item[F.]
+ Include, immediately after the copyright notices, a license notice
+ giving the public permission to use the Modified Version under the
+ terms of this License, in the form shown in the Addendum below.
+
+\item[G.]
+ Preserve in that license notice the full lists of Invariant Sections
+ and required Cover Texts given in the Document's license notice.
+
+\item[H.]
+ Include an unaltered copy of this License.
+
+\item[I.]
+ Preserve the section Entitled "History", Preserve its Title, and add
+ to it an item stating at least the title, year, new authors, and
+ publisher of the Modified Version as given on the Title Page. If
+ there is no section Entitled "History" in the Document, create one
+ stating the title, year, authors, and publisher of the Document as
+ given on its Title Page, then add an item describing the Modified
+ Version as stated in the previous sentence.
+
+\item[J.]
+ Preserve the network location, if any, given in the Document for
+ public access to a Transparent copy of the Document, and likewise
+ the network locations given in the Document for previous versions
+ it was based on. These may be placed in the "History" section.
+ You may omit a network location for a work that was published at
+ least four years before the Document itself, or if the original
+ publisher of the version it refers to gives permission.
+
+\item[K.]
+ For any section Entitled "Acknowledgements" or "Dedications",
+ Preserve the Title of the section, and preserve in the section all
+ the substance and tone of each of the contributor acknowledgements
+ and/or dedications given therein.
+
+\item[L.]
+ Preserve all the Invariant Sections of the Document,
+ unaltered in their text and in their titles. Section numbers
+ or the equivalent are not considered part of the section titles.
+
+\item[M.]
+ Delete any section Entitled "Endorsements". Such a section
+ may not be included in the Modified Version.
+
+\item[N.]
+ Do not retitle any existing section to be Entitled "Endorsements"
+ or to conflict in title with any Invariant Section.
+
+\item[O.]
+ Preserve any Warranty Disclaimers.
+\end{itemize}
+
+If the Modified Version includes new front-matter sections or
+appendices that qualify as Secondary Sections and contain no material
+copied from the Document, you may at your option designate some or all
+of these sections as invariant. To do this, add their titles to the
+list of Invariant Sections in the Modified Version's license notice.
+These titles must be distinct from any other section titles.
+
+You may add a section Entitled "Endorsements", provided it contains
+nothing but endorsements of your Modified Version by various
+parties--for example, statements of peer review or that the text has
+been approved by an organization as the authoritative definition of a
+standard.
+
+You may add a passage of up to five words as a Front-Cover Text, and a
+passage of up to 25 words as a Back-Cover Text, to the end of the list
+of Cover Texts in the Modified Version. Only one passage of
+Front-Cover Text and one of Back-Cover Text may be added by (or
+through arrangements made by) any one entity. If the Document already
+includes a cover text for the same cover, previously added by you or
+by arrangement made by the same entity you are acting on behalf of,
+you may not add another; but you may replace the old one, on explicit
+permission from the previous publisher that added the old one.
+
+The author(s) and publisher(s) of the Document do not by this License
+give permission to use their names for publicity for or to assert or
+imply endorsement of any Modified Version.
+
+
+\begin{center}
+{\Large\bf 5. COMBINING DOCUMENTS}
+\addcontentsline{toc}{section}{5. COMBINING DOCUMENTS}
+\end{center}
+
+
+You may combine the Document with other documents released under this
+License, under the terms defined in section 4 above for modified
+versions, provided that you include in the combination all of the
+Invariant Sections of all of the original documents, unmodified, and
+list them all as Invariant Sections of your combined work in its
+license notice, and that you preserve all their Warranty Disclaimers.
+
+The combined work need only contain one copy of this License, and
+multiple identical Invariant Sections may be replaced with a single
+copy. If there are multiple Invariant Sections with the same name but
+different contents, make the title of each such section unique by
+adding at the end of it, in parentheses, the name of the original
+author or publisher of that section if known, or else a unique number.
+Make the same adjustment to the section titles in the list of
+Invariant Sections in the license notice of the combined work.
+
+In the combination, you must combine any sections Entitled "History"
+in the various original documents, forming one section Entitled
+"History"; likewise combine any sections Entitled "Acknowledgements",
+and any sections Entitled "Dedications". You must delete all sections
+Entitled "Endorsements".
+
+\begin{center}
+{\Large\bf 6. COLLECTIONS OF DOCUMENTS}
+\addcontentsline{toc}{section}{6. COLLECTIONS OF DOCUMENTS}
+\end{center}
+
+You may make a collection consisting of the Document and other documents
+released under this License, and replace the individual copies of this
+License in the various documents with a single copy that is included in
+the collection, provided that you follow the rules of this License for
+verbatim copying of each of the documents in all other respects.
+
+You may extract a single document from such a collection, and distribute
+it individually under this License, provided you insert a copy of this
+License into the extracted document, and follow this License in all
+other respects regarding verbatim copying of that document.
+
+
+\begin{center}
+{\Large\bf 7. AGGREGATION WITH INDEPENDENT WORKS}
+\addcontentsline{toc}{section}{7. AGGREGATION WITH INDEPENDENT WORKS}
+\end{center}
+
+
+A compilation of the Document or its derivatives with other separate
+and independent documents or works, in or on a volume of a storage or
+distribution medium, is called an "aggregate" if the copyright
+resulting from the compilation is not used to limit the legal rights
+of the compilation's users beyond what the individual works permit.
+When the Document is included in an aggregate, this License does not
+apply to the other works in the aggregate which are not themselves
+derivative works of the Document.
+
+If the Cover Text requirement of section 3 is applicable to these
+copies of the Document, then if the Document is less than one half of
+the entire aggregate, the Document's Cover Texts may be placed on
+covers that bracket the Document within the aggregate, or the
+electronic equivalent of covers if the Document is in electronic form.
+Otherwise they must appear on printed covers that bracket the whole
+aggregate.
+
+
+\begin{center}
+{\Large\bf 8. TRANSLATION}
+\addcontentsline{toc}{section}{8. TRANSLATION}
+\end{center}
+
+
+Translation is considered a kind of modification, so you may
+distribute translations of the Document under the terms of section 4.
+Replacing Invariant Sections with translations requires special
+permission from their copyright holders, but you may include
+translations of some or all Invariant Sections in addition to the
+original versions of these Invariant Sections. You may include a
+translation of this License, and all the license notices in the
+Document, and any Warranty Disclaimers, provided that you also include
+the original English version of this License and the original versions
+of those notices and disclaimers. In case of a disagreement between
+the translation and the original version of this License or a notice
+or disclaimer, the original version will prevail.
+
+If a section in the Document is Entitled "Acknowledgements",
+"Dedications", or "History", the requirement (section 4) to Preserve
+its Title (section 1) will typically require changing the actual
+title.
+
+
+\begin{center}
+{\Large\bf 9. TERMINATION}
+\addcontentsline{toc}{section}{9. TERMINATION}
+\end{center}
+
+
+You may not copy, modify, sublicense, or distribute the Document except
+as expressly provided for under this License. Any other attempt to
+copy, modify, sublicense or distribute the Document is void, and will
+automatically terminate your rights under this License. However,
+parties who have received copies, or rights, from you under this
+License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+
+\begin{center}
+{\Large\bf 10. FUTURE REVISIONS OF THIS LICENSE}
+\addcontentsline{toc}{section}{10. FUTURE REVISIONS OF THIS LICENSE}
+\end{center}
+
+
+The Free Software Foundation may publish new, revised versions
+of the GNU Free Documentation License from time to time. Such new
+versions will be similar in spirit to the present version, but may
+differ in detail to address new problems or concerns. See
+http://www.gnu.org/copyleft/.
+
+Each version of the License is given a distinguishing version number.
+If the Document specifies that a particular numbered version of this
+License "or any later version" applies to it, you have the option of
+following the terms and conditions either of that specified version or
+of any later version that has been published (not as a draft) by the
+Free Software Foundation. If the Document does not specify a version
+number of this License, you may choose any version ever published (not
+as a draft) by the Free Software Foundation.
+
+
+\begin{center}
+{\Large\bf ADDENDUM: How to use this License for your documents}
+\addcontentsline{toc}{section}{ADDENDUM: How to use this License for your documents}
+\end{center}
+
+To use this License in a document you have written, include a copy of
+the License in the document and put the following copyright and
+license notices just after the title page:
+
+\bigskip
+\begin{quote}
+ Copyright \copyright YEAR YOUR NAME.
+ Permission is granted to copy, distribute and/or modify this document
+ under the terms of the GNU Free Documentation License, Version 1.2
+ or any later version published by the Free Software Foundation;
+ with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
+ A copy of the license is included in the section entitled "GNU
+ Free Documentation License".
+\end{quote}
+\bigskip
+
+If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts,
+replace the "with...Texts." line with this:
+
+\bigskip
+\begin{quote}
+ with the Invariant Sections being LIST THEIR TITLES, with the
+ Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
+\end{quote}
+\bigskip
+
+If you have Invariant Sections without Cover Texts, or some other
+combination of the three, merge those two alternatives to suit the
+situation.
+
+If your document contains nontrivial examples of program code, we
+recommend releasing these examples in parallel under your choice of
+free software license, such as the GNU General Public License,
+to permit their use in free software.
+
+%---------------------------------------------------------------------
--- /dev/null
+%%
+%%
+
+\chapter{File Services Daemon}
+\label{_ChapterStart11}
+\index{File Services Daemon }
+\index{Daemon!File Services }
+\addcontentsline{toc}{section}{File Services Daemon}
+
+Please note, this section is somewhat out of date as the code has evolved
+significantly. The basic idea has not changed though.
+
+This chapter is intended to be a technical discussion of the File daemon
+services and as such is not targeted at end users but rather at developers and
+system administrators that want or need to know more of the working details of
+{\bf Bacula}.
+
+The {\bf Bacula File Services} consist of the programs that run on the system
+to be backed up and provide the interface between the Host File system and
+Bacula -- in particular, the Director and the Storage services.
+
+When the time comes for a backup, the Director gets in touch with the File daemon
+on the client machine and hands it a set of ``marching orders'' which, if
+written in English, might be something like the following:
+
+OK, {\bf File daemon}, it's time for your daily incremental backup. I want you
+to get in touch with the Storage daemon on host archive.mysite.com and perform
+the following save operations with the designated options. You'll note that
+I've attached include and exclude lists and patterns you should apply when
+backing up the file system. As this is an incremental backup, you should save
+only files modified since the time you started your last backup which, as you
+may recall, was 2000-11-19-06:43:38. Please let me know when you're done and
+how it went. Thank you.
+
+So, having been handed everything it needs to decide what to dump and where to
+store it, the File daemon doesn't need to have any further contact with the
+Director until the backup is complete, provided there are no errors. If there
+are errors, the error messages will be delivered immediately to the Director.
+While the backup is proceeding, the File daemon will send the file coordinates
+and data for each file being backed up to the Storage daemon, which will in
+turn pass the file coordinates to the Director to put in the catalog.
+
+During a {\bf Verify} of the catalog, the situation is different, since the
+File daemon will have an exchange with the Director for each file, and will
+not contact the Storage daemon.
+
+A {\bf Restore} operation will be very similar to the {\bf Backup} except that
+during the {\bf Restore} the Storage daemon will not send storage coordinates
+to the Director since the Director presumably already has them. On the other
+hand, any error messages from either the Storage daemon or File daemon will
+normally be sent directly to the Director (this, of course, depends on how
+the Message resource is defined).
+
+\section{Commands Received from the Director for a Backup}
+\index{Backup!Commands Received from the Director for a }
+\index{Commands Received from the Director for a Backup }
+\addcontentsline{toc}{subsection}{Commands Received from the Director for a
+Backup}
+
+To be written ...
+
+\section{Commands Received from the Director for a Restore}
+\index{Commands Received from the Director for a Restore }
+\index{Restore!Commands Received from the Director for a }
+\addcontentsline{toc}{subsection}{Commands Received from the Director for a
+Restore}
+
+To be written ...
--- /dev/null
+%%
+%%
+
+\chapter{Bacula Developer Notes}
+\label{_ChapterStart10}
+\index{Bacula Developer Notes}
+\index{Notes!Bacula Developer}
+\addcontentsline{toc}{section}{Bacula Developer Notes}
+
+This document is intended mostly for developers and describes how you can
+contribute to the Bacula project and the general framework for making
+Bacula source changes.
+
+\subsection{Contributions}
+\index{Contributions}
+\addcontentsline{toc}{subsubsection}{Contributions}
+
+Contributions to the Bacula project come in many forms: ideas,
+participation in helping people on the bacula-users email list, packaging
+Bacula binaries for the community, helping improve the documentation, and
+submitting code.
+
+Contributions in the form of submissions for inclusion in the project are
+broken into two groups. The first are contributions that are aids and not
+essential to Bacula. In general, these will be scripts or will go into the
+{\bf bacula/examples} directory. For these kinds of non-essential
+contributions there is no obligation to do a copyright assignment as
+described below. However, a copyright assignment would still be
+appreciated.
+
+The second class of contributions are those which will be integrated with
+Bacula and become an essential part (code, scripts, documentation, ...)
+Within this class of contributions, there are two hurdles to surmount. One
+is getting your patch accepted, and two is dealing with copyright issues.
+The following text describes some of the requirements for such code.
+
+\subsection{Patches}
+\index{Patches}
+\addcontentsline{toc}{subsubsection}{Patches}
+
+Subject to the copyright assignment described below, your patches should be
+sent in {\bf git format-patch} format relative to the current contents of the
+master branch of the Source Forge Git repository. Please attach the
+output file or files generated by {\bf git format-patch} to the email
+rather than including them inline, to avoid wrapping of the lines
+in the patch. Please be sure to use the Bacula
+indenting standard (see below) for source code. If you have checked out
+the source with Git, you can generate a patch using:
+
+\begin{verbatim}
+git pull
+git format-patch -M
+\end{verbatim}
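+
+For reference, a maintainer can later apply such a patch file (the file name
+below is just an example) with:
+
+\begin{verbatim}
+git am 0001-fix-restore-bug.patch
+\end{verbatim}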
+
+If you plan on doing significant development work over a period of time,
+after having your first patch reviewed and approved, you will be eligible
+for having developer Git write access so that you can commit your changes
+directly to the Git repository. To do so, you will need a userid on Source
+Forge.
+
+\subsection{Copyrights}
+\index{Copyrights}
+\addcontentsline{toc}{subsubsection}{Copyrights}
+
+To avoid future problems concerning changing licensing or
+copyrights, all code contributions of more than a handful of lines
+must be in the Public Domain or have the copyright transferred to
+the Free Software Foundation Europe e.V. with a Fiduciary License
+Agreement (FLA), as is the case for all the current code.
+
+Prior to November 2004, all the code was copyrighted by Kern Sibbald and
+John Walker. After November 2004, the code was copyrighted by Kern
+Sibbald, then on the 15th of November 2006, Kern transferred the copyright
+to the Free Software Foundation Europe e.V. In signing the FLA and
+transferring the copyright, you retain the right to use the code you have
+submitted as you want, and you ensure that Bacula will always remain Free
+and Open Source.
+
+Your name should be clearly indicated as the author of the code, and you
+must be extremely careful not to violate any copyrights or patents or use
+other people's code without acknowledging it. The purpose of this
+requirement is to avoid future copyright, patent, or intellectual property
+problems. Please read the LICENSE agreement in the main Bacula source code
+directory. When you sign the Fiduciary License Agreement (FLA) and send it
+in, you are agreeing to the terms of that LICENSE file.
+
+If you don't understand what we mean by future problems, please
+examine the difficulties Mozilla was having finding
+previous contributors at \elink{
+http://www.mozilla.org/MPL/missing.html}
+{http://www.mozilla.org/MPL/missing.html}. The other important issue is to
+avoid copyright, patent, or intellectual property violations as was
+(May 2003) claimed by SCO against IBM.
+
+Although the copyright will be held by the Free Software
+Foundation Europe e.V., each developer is expected to indicate
+that he wrote and/or modified a particular module (or file) and
+any other sources. The copyright assignment may seem a bit
+unusual, but in reality, it is not. Most large projects require
+this.
+
+If you have any doubts about this, please don't hesitate to ask. The
+objective is to assure the long term survival of the Bacula project.
+
+Items not needing a copyright assignment are: most small changes,
+enhancements, or bug fixes of 5-10 lines of code, which amount to
+less than 20\% of any particular file.
+
+\subsection{Copyright Assignment -- Fiduciary License Agreement}
+\index{Copyright Assignment}
+\index{Assignment!Copyright}
+\addcontentsline{toc}{subsubsection}{Copyright Assignment -- Fiduciary License Agreement}
+
+Since this is not a commercial enterprise, and we prefer to believe in
+everyone's good faith, previously developers could assign the copyright by
+explicitly acknowledging that they do so in their first submission. This
+was sufficient if the developer is independent, or an employee of a
+not-for-profit organization or a university. However, in an effort to
+ensure that the Bacula code is really clean, beginning in August 2006, all
+previous and future developers with SVN write access will be asked to submit a
+copyright assignment (or Fiduciary License Agreement -- FLA),
+which means you agree to the LICENSE in the main source
+directory. It also means that you receive back the right to use
+the code that you have submitted.
+
+Any developer who wants to contribute and is employed by a company should
+either list the employer as the owner of the code, or get explicit
+permission from him to sign the copyright assignment. This is because in
+many countries, all work that an employee does whether on company time or
+in the employee's free time is considered to be Intellectual Property of
+the company. Obtaining official approval or an FLA from the company will
+avoid misunderstandings between the employee, the company, and the Bacula
+project. A good number of companies have already followed this procedure.
+
+The Fiduciary License Agreement is posted on the Bacula web site at:
+\elink{http://www.bacula.org/en/FLA-bacula.en.pdf}{http://www.bacula.org/en/FLA-bacula.en.pdf}
+
+The instructions for filling out this agreement are also at:
+\elink{http://www.bacula.org/?page=fsfe}{http://www.bacula.org/?page=fsfe}
+
+It should be filled out, then sent to:
+
+\begin{verbatim}
+ Kern Sibbald
+ Cotes-de-Montmoiret 9
+ 1012 Lausanne
+ Switzerland
+\end{verbatim}
+
+Please note that the above address is different from the officially
+registered office mentioned in the document. When you send in such a
+complete document, please notify me: kern at sibbald dot com, and
+please add your email address to the FLA so that I can contact you
+to confirm reception of the signed FLA.
+
+
+\section{The Development Cycle}
+\index{Development Cycle}
+\index{Cycle!Development}
+\addcontentsline{toc}{subsubsection}{Development Cycle}
+
+As discussed on the email lists, the number of contributions is
+increasing significantly. We expect this positive trend
+will continue. As a consequence, we have modified how we do
+development, and instead of making a list of all the features that we will
+implement in the next version, each developer signs up for one (maybe
+two) projects at a time, and when they are complete, and the code
+is stable, we will release a new version. The release cycle will probably
+be roughly six months.
+
+The difference is that with a shorter release cycle and fewer released
+features, we will have more time to review the new code that is being
+contributed, and will be able to devote more time to a smaller number of
+projects (some prior versions had too many new features for us to handle
+correctly).
+
+Future release schedules will be much the same, and the
+number of new features will also be much the same providing that the
+contributions continue to come -- and they show no signs of letting up :-)
+
+\index{Feature Requests}
+{\bf Feature Requests:} \\
+In addition, we have "formalized" the feature requests a bit.
+
+Instead of me maintaining an informal list of everything I run into
+(kernstodo), we now maintain a "formal" list of projects. This
+means that all new feature requests, including those recently discussed on
+the email lists, must be formally submitted and approved.
+
+Formal submission of feature requests will take two forms: \\
+1. Non-mandatory, but highly recommended: discuss proposed new features
+on the mailing list.\\
+2. Formal submission of a Feature Request in a special format. We'll
+give an example of this below, but you can also find it on the web site
+under "Support -\gt{} Feature Requests". Since it takes a bit of time to
+properly fill out a Feature Request form, you probably should check on the
+email list first.
+
+Once the Feature Request is received by the keeper of the projects list, it
+will be sent to the Bacula project manager (Kern), and he will either
+accept it (90\% of the time), send it back asking for clarification (10\% of
+the time), send it to the email list asking for opinions, or reject it
+(very few cases).
+
+If it is accepted, it will go in the "projects" file (a simple ASCII file)
+maintained in the main Bacula source directory.
+
+{\bf Implementation of Feature Requests:}\\
+Any qualified developer can sign up for a project. The project must have
+an entry in the projects file, and the developer's name will appear in the
+Status field.
+
+{\bf How Feature Requests are accepted:}\\
+Acceptance of Feature Requests depends on several things: \\
+1. feedback from users. If it is negative, the Feature Request will probably not be
+accepted. \\
+2. the difficulty of the project. A project that is so
+difficult that we cannot imagine finding someone to implement it probably won't
+be accepted. Obviously if you know how to implement it, don't hesitate
+to put it in your Feature Request. \\
+3. whether or not the Feature Request fits within the current strategy of
+Bacula (for example, a Feature Request asking for the tape format to be
+changed to tar would probably not be accepted, ...).
+
+{\bf How Feature Requests are prioritized:}\\
+Once a Feature Request is accepted, it needs to be implemented. If you
+can find a developer for it, or one signs up for implementing it, then the
+Feature Request becomes top priority (at least for that developer).
+
+Between releases of Bacula, we will generally solicit Feature Request input
+for the next version, and by way of this email, we suggest that you
+discuss and send in your Feature Requests for the next release. Please
+verify that the Feature Request is not in the current list (attached to this email).
+
+Once users have had several weeks to submit Feature Requests, the keeper of
+the projects list will organize them, and request users to vote on them.
+This will allow prioritizing the Feature Requests. Having a
+priority is one thing, but getting it implemented is another thing -- we are
+hoping that the Bacula community will take more responsibility for assuring
+the implementation of accepted Feature Requests.
+
+Feature Request format:
+\begin{verbatim}
+============= Empty Feature Request form ===========
+Item n: One line summary ...
+ Date: Date submitted
+ Origin: Name and email of originator.
+ Status:
+
+ What: More detailed explanation ...
+
+ Why: Why it is important ...
+
+ Notes: Additional notes or features (omit if not used)
+============== End Feature Request form ==============
+\end{verbatim}
+
+\begin{verbatim}
+============= Example Completed Feature Request form ===========
+Item 1: Implement a Migration job type that will move the job
+ data from one device to another.
+ Origin: Sponsored by Riege Software International GmbH. Contact:
+ Daniel Holtkamp <holtkamp at riege dot com>
+ Date: 28 October 2005
+ Status: Partially coded in 1.37 -- much more to do. Assigned to
+ Kern.
+
+ What: The ability to copy, move, or archive data that is on a
+ device to another device is very important.
+
+ Why: An ISP might want to backup to disk, but after 30 days
+ migrate the data to tape backup and delete it from
+ disk. Bacula should be able to handle this
+ automatically. It needs to know what was put where,
+ and when, and what to migrate -- it is a bit like
+ retention periods. Doing so would allow space to be
+ freed up for current backups while maintaining older
+ data on tape drives.
+
+ Notes: Migration could be triggered by:
+ Number of Jobs
+ Number of Volumes
+ Age of Jobs
+ Highwater size (keep total size)
+ Lowwater mark
+=================================================
+\end{verbatim}
+
+
+\section{Bacula Code Submissions and Projects}
+\index{Submissions and Projects}
+\addcontentsline{toc}{subsection}{Code Submissions and Projects}
+
+Getting code implemented in Bacula works roughly as follows:
+
+\begin{itemize}
+
+\item Kern is the project manager, but prefers not to be a "gate keeper".
+ This means that the developers are expected to be self-motivated,
+ and once they have experience, submit directly to the Git
+ repositories. However,
+ it is a good idea to have your patches reviewed prior to submitting,
+ and it is a bad idea to submit monster patches because no one will
+ be able to properly review them. See below for more details on this.
+
+\item There are growing numbers of contributions (very good).
+
+\item Some contributions come in the form of relatively small patches,
+ which Kern reviews, integrates, documents, tests, and maintains.
+
+\item All Bacula developers take full
+ responsibility for writing the code, posting it as patches so that we can
+ review it as time permits, integrating it at an appropriate time,
+ responding to our requests for tweaking it (name changes, ...),
+ documenting it in the code, documenting it in the manual (even though
+ their mother tongue is not English), testing it, developing and committing
+ regression scripts, and answering in a timely fashion all bug reports --
+ even occasionally accepting additional bugs :-)
+
+ This is a sustainable way of going forward with Bacula, and the
+ direction that the project will be taking more and more. For
+ example, in the past, we have had some very dedicated programmers
+ who did major projects. However, some of these
+ programmers, due to outside obligations (changed job responsibilities,
+ school duties, ...), could not continue to maintain the code. In
+ those cases, the code suffers from lack of maintenance, sometimes we
+ patch it, sometimes not. In the end, if the code is not maintained, the
+ code gets dropped from the project (there are two such contributions
+ that are heading in that direction). Whenever possible, we would like
+ to avoid this, and ensure a continuation of the code and a sharing of
+ the development, debugging, documentation, and maintenance
+ responsibilities.
+\end{itemize}
+
+\section{Patches for Released Versions}
+\index{Patches for Released Versions}
+\addcontentsline{toc}{subsection}{Patches for Released Versions}
+If you fix a bug in a released version, you should, unless it is
+an absolutely trivial bug, create and release a patch file for the
+bug. The procedure is as follows:
+
+Fix the bug in the released branch and in the development master branch.
+
+Make a patch file for the branch and add the branch patch to
+the patches directory in both the branch and the trunk.
+The name should be 2.2.4-xxx.patch where xxx is unique, in this case it can
+be "restore", e.g. 2.2.4-restore.patch. Add to the top of the
+file a brief description and instructions for applying it -- see for example
+2.2.4-poll-mount.patch. The best way to create the patch file is as
+follows:
+
+\begin{verbatim}
+ (edit) 2.2.4-restore.patch
+ (input description)
+ (end edit)
+
+ git format-patch -M
+ mv 0001-xxx 2.2.4-restore.patch
+\end{verbatim}
+
+Check to make sure no extra junk got put into the patch file (i.e.
+it should have the patch for that bug only).
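+
+One quick way to verify this is to ask Git which files the patch touches:
+
+\begin{verbatim}
+git apply --stat 2.2.4-restore.patch
+\end{verbatim}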
+
+If there is not a bug report on the problem, create one, then add the
+patch to the bug report.
+
+Then upload it to the 2.2.x release of bacula-patches.
+
+So, in the end, the patch file is:
+\begin{itemize}
+\item Attached to the bug report
+
+\item In Branch-2.2/bacula/patches/...
+
+\item In the trunk
+
+\item Loaded on Source Forge bacula-patches 2.2.x release. When
+ you add it, click on the check box to send an Email so that all the
+ users that are monitoring SF patches get notified.
+\end{itemize}
+
+
+\section{Developing Bacula}
+\index{Developing Bacula}
+\index{Bacula!Developing}
+\addcontentsline{toc}{subsubsection}{Developing Bacula}
+
+Typically the simplest way to develop Bacula is to open one xterm window
+pointing to the source directory you wish to update, a second xterm window at
+the top source directory level, and a third xterm window at the bacula
+directory \lt{}top\gt{}/src/bacula. After making source changes in one of the
+directories, in the top source directory xterm, build the source and start
+the daemons by entering:
+
+{\bf make} and then {\bf ./startit}. Then, in the third xterm, enter
+{\bf ./console} or {\bf ./gnome-console} to start the Console program,
+and enter any commands for testing. For example: {\bf run kernsverify full}.
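+
+Putting this together, a typical edit-build-test cycle might look like the
+following (an illustrative transcript, not literal output):
+
+\footnotesize
+\begin{verbatim}
+cd <top source directory>
+make
+./startit
+./console
+run kernsverify full
+quit
+./stopit
+\end{verbatim}
+\normalsize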
+
+Note, the instructions here to use {\bf ./startit} are different from using a
+production system where the administrator starts Bacula by entering {\bf
+./bacula start}. This difference allows a development version of {\bf Bacula}
+to be run on a computer at the same time that a production system is running.
+The {\bf ./startit} script starts {\bf Bacula} using a different set of
+configuration files, and thus permits avoiding conflicts with any production
+system.
+
+To make additional source changes, exit from the Console program, and in the
+top source directory, stop the daemons by entering:
+
+{\bf ./stopit}, then repeat the process.
+
+\subsection{Debugging}
+\index{Debugging}
+\addcontentsline{toc}{subsubsection}{Debugging}
+
+Probably the first thing to do is to turn on debug output.
+
+A good place to start is with a debug level of 20 as in {\bf ./startit -d20}.
+The startit command starts all the daemons with the same debug level.
+Alternatively, you can start the appropriate daemon with the debug level you
+want. If you really need more info, a debug level of 60 is not bad, and for
+just about everything a level of 200.
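+
+For example (a sketch; the Director example uses the same paths as the
+debugger session shown below):
+
+\footnotesize
+\begin{verbatim}
+./startit -d20                      # all the daemons at debug level 20
+cd dird
+./bacula-dir -d60 -c ./dird.conf    # just the Director at level 60
+\end{verbatim}
+\normalsize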
+
+\subsection{Using a Debugger}
+\index{Using a Debugger}
+\index{Debugger!Using a}
+\addcontentsline{toc}{subsubsection}{Using a Debugger}
+
+If you have a serious problem such as a segmentation fault, it can usually be
+found quickly using a good multiple thread debugger such as {\bf gdb}. For
+example, suppose you get a segmentation violation in {\bf bacula-dir}. You
+might use the following to find the problem:
+
+\footnotesize
+\begin{verbatim}
+<start the Storage and File daemons>
+cd dird
+gdb ./bacula-dir
+run -f -s -c ./dird.conf
+<it dies with a segmentation fault>
+where
+\end{verbatim}
+\normalsize
+
+The {\bf -f} option is specified on the {\bf run} command to inhibit {\bf
+dird} from going into the background. You may also want to add the {\bf -s}
+option to the run command to disable signals which can potentially interfere
+with the debugging.
+
+As an alternative to using the debugger, each {\bf Bacula} daemon has a built
+in back trace feature when a serious error is encountered. It calls the
+debugger on itself, produces a back trace, and emails the report to the
+developer. For more details on this, please see the chapter in the main Bacula
+manual entitled ``What To Do When Bacula Crashes (Kaboom)''.
+
+\subsection{Memory Leaks}
+\index{Leaks!Memory}
+\index{Memory Leaks}
+\addcontentsline{toc}{subsubsection}{Memory Leaks}
+
+Because Bacula runs routinely and unattended on client and server machines, it
+may run for a long time. As a consequence, from the very beginning, Bacula
+uses SmartAlloc to ensure that there are no memory leaks. To make detection of
+memory leaks effective, all Bacula code that dynamically allocates memory MUST
+have a way to release it. In general when the memory is no longer needed, it
+should be immediately released, but in some cases, the memory will be held
+during the entire time that Bacula is executing. In that case, there MUST be a
+routine that can be called at termination time that releases the memory. In
+this way, we will be able to detect memory leaks. Be sure to immediately
+correct any and all memory leaks that are printed at the termination of the
+daemons.
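+
+The following is an illustrative sketch only (using plain {\bf malloc/free}
+rather than Bacula's actual SmartAlloc wrappers) of the rule just described:
+memory held for the life of the daemon gets a termination routine that
+releases it, so that anything reported at shutdown is a real leak.
+
+\footnotesize
+\begin{verbatim}
+#include <stdlib.h>
+#include <stdint.h>
+
+static char *job_buffer = NULL;   /* hypothetical long-lived allocation */
+
+void init_job_buffer(uint32_t size)
+{
+   job_buffer = (char *)malloc(size);
+}
+
+/* Called once at daemon termination so the allocation is released
+ * and leak detection stays meaningful. */
+void term_job_buffer(void)
+{
+   free(job_buffer);
+   job_buffer = NULL;
+}
+\end{verbatim}
+\normalsize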
+
+\subsection{Special Files}
+\index{Files!Special}
+\index{Special Files}
+\addcontentsline{toc}{subsubsection}{Special Files}
+
+Kern uses files named 1, 2, ... 9 with any extension as scratch files. Thus
+any files with these names are subject to being rudely deleted at any time.
+
+\subsection{When Implementing Incomplete Code}
+\index{Code!When Implementing Incomplete}
+\index{When Implementing Incomplete Code}
+\addcontentsline{toc}{subsubsection}{When Implementing Incomplete Code}
+
+Please identify all incomplete code with a comment that contains
+
+\begin{verbatim}
+***FIXME***
+\end{verbatim}
+
+where there are three asterisks (*) before and after the word
+FIXME (in capitals) and no intervening spaces. This is important as it allows
+new programmers to easily recognize where things are partially implemented.
+
+\subsection{Bacula Source File Structure}
+\index{Structure!Bacula Source File}
+\index{Bacula Source File Structure}
+\addcontentsline{toc}{subsubsection}{Bacula Source File Structure}
+
+The distribution generally comes as a tar file of the form {\bf
+bacula.x.y.z.tar.gz} where x, y, and z are the version, release, and update
+numbers respectively.
+
+Once you detar this file, you will have a directory structure as follows:
+
+\footnotesize
+\begin{verbatim}
+|
+Tar file:
+|- depkgs
+ |- mtx (autochanger control program + tape drive info)
+ |- sqlite (SQLite database program)
+
+Tar file:
+|- depkgs-win32
+ |- pthreads (Native win32 pthreads library -- dll)
+ |- zlib (Native win32 zlib library)
+ |- wx (wxWidgets source code)
+
+Project bacula:
+|- bacula (main source directory containing configuration
+ | and installation files)
+ |- autoconf (automatic configuration files, not normally used
+ | by users)
+ |- intl (programs used to translate)
+ |- platforms (OS specific installation files)
+ |- redhat (Red Hat installation)
+ |- solaris (Sun installation)
+ |- freebsd (FreeBSD installation)
+ |- irix (Irix installation -- not tested)
+ |- unknown (Default if system not identified)
+ |- po (translations of source strings)
+ |- src (source directory; contains global header files)
+ |- cats (SQL catalog database interface directory)
+ |- console (bacula user agent directory)
+ |- dird (Director daemon)
+ |- filed (Unix File daemon)
+ |- win32 (Win32 files to make bacula-fd be a service)
+ |- findlib (Unix file find library for File daemon)
+ |- gnome-console (GNOME version of console program)
+ |- lib (General Bacula library)
+ |- stored (Storage daemon)
+ |- tconsole (Tcl/tk console program -- not yet working)
+ |- testprogs (test programs -- normally only in Kern's tree)
+ |- tools (Various tool programs)
+ |- win32 (Native Win32 File daemon)
+ |- baculafd (Visual Studio project file)
+ |- compat (compatibility interface library)
+ |- filed (links to src/filed)
+ |- findlib (links to src/findlib)
+ |- lib (links to src/lib)
+ |- console (beginning of native console program)
+ |- wx-console (wxWidget console Win32 specific parts)
+ |- wx-console (wxWidgets console main source program)
+
+Project regress:
+|- regress (Regression scripts)
+ |- bin (temporary directory to hold Bacula installed binaries)
+ |- build (temporary directory to hold Bacula source)
+ |- scripts (scripts and .conf files)
+ |- tests (test scripts)
+ |- tmp (temporary directory for temp files)
+ |- working (temporary working directory for Bacula daemons)
+
+Project docs:
+|- docs (documentation directory)
+ |- developers (Developer's guide)
+ |- home-page (Bacula's home page source)
+ |- manual (html document directory)
+ |- manual-fr (French translation)
+ |- manual-de (German translation)
+ |- techlogs (Technical development notes);
+
+Project rescue:
+|- rescue (Bacula rescue CDROM)
+ |- linux (Linux rescue CDROM)
+ |- cdrom (Linux rescue CDROM code)
+ ...
+ |- solaris (Solaris rescue -- incomplete)
+ |- freebsd (FreeBSD rescue -- incomplete)
+
+Project gui:
+|- gui (Bacula GUI projects)
+ |- bacula-web (Bacula web php management code)
+ |- bimagemgr (Web application for burning CDROMs)
+
+
+\end{verbatim}
+\normalsize
+
+\subsection{Header Files}
+\index{Header Files}
+\index{Files!Header}
+\addcontentsline{toc}{subsubsection}{Header Files}
+
+Please carefully follow the scheme defined below as it permits in general only
+two header file includes per C file, and thus vastly simplifies programming.
+With a large complex project like Bacula, it isn't always easy to ensure that
+the right headers are invoked in the right order (there are a few kludges to
+make this happen -- i.e. in a few include files, because of the chicken and egg
+problem, certain references to typedefs had to be replaced with {\bf void}).
+
+Every file should include {\bf bacula.h}. It pulls in just about everything,
+with very few exceptions. If you have system dependent ifdefing, please do it
+in {\bf baconfig.h}. The version number and date are kept in {\bf version.h}.
+
+Each of the subdirectories (console, cats, dird, filed, findlib, lib, stored,
+...) contains a single directory dependent include file generally the name of
+the directory, which should be included just after the include of {\bf
+bacula.h}. This file (for example, for the dird directory, it is {\bf dird.h})
+contains either definitions of things generally needed in this directory, or
+it includes the appropriate header files. It always includes {\bf protos.h}.
+See below.
+
+Each subdirectory contains a header file named {\bf protos.h}, which contains
+the prototypes for subroutines exported by files in that directory. {\bf
+protos.h} is always included by the main directory dependent include file.
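+
+For example, a source file in the {\bf dird} directory would normally begin
+with just:
+
+\footnotesize
+\begin{verbatim}
+#include "bacula.h"   /* pulls in nearly everything */
+#include "dird.h"     /* directory dependent header; includes protos.h */
+\end{verbatim}
+\normalsize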
+
+\subsection{Programming Standards}
+\index{Standards!Programming}
+\index{Programming Standards}
+\addcontentsline{toc}{subsubsection}{Programming Standards}
+
+For the most part, all code should be written in C unless there is a burning
+reason to use C++, and then only the simplest C++ constructs will be used.
+Note, Bacula is slowly evolving to use more and more C++.
+
+Code should have some documentation -- not a lot, but enough so that I can
+understand it. Look at the current code, and you will see that I document more
+than most, but am definitely not a fanatic.
+
+We prefer simple linear code where possible. Gotos are strongly discouraged
+except for handling an error to either bail out or to retry some code, and
+such use of gotos can vastly simplify the program.
+
+Remember this is a C program that is migrating to a {\bf tiny} subset of C++,
+so be conservative in your use of C++ features.
+
+\subsection{Do Not Use}
+\index{Use!Do Not}
+\index{Do Not Use}
+\addcontentsline{toc}{subsubsection}{Do Not Use}
+
+\begin{itemize}
+ \item STL -- it is totally incomprehensible.
+\end{itemize}
+
+\subsection{Avoid if Possible}
+\index{Possible!Avoid if}
+\index{Avoid if Possible}
+\addcontentsline{toc}{subsubsection}{Avoid if Possible}
+
+\begin{itemize}
+\item Using {\bf void *} because this generally means that one must
+ use casting, and in C++ casting is rather ugly. It is OK to use
+ void * to pass a structure address where the structure is not known
+ to the routines accepting the packet (typically callback routines).
+ However, declaring "void *buf" is a bad idea. Please use the
+ correct types whenever possible.
+
+\item Using undefined storage specifications such as (short, int, long,
+ long long, size\_t ...). The problem with all of these is that the number of
+ bytes they allocate depends on the compiler and the system. Instead use
+ Bacula's types (int8\_t, uint8\_t, int32\_t, uint32\_t, int64\_t, and
+ uint64\_t). This guarantees that the variables are given exactly the
+ size you want. Please try if at all possible to avoid using size\_t,
+ ssize\_t, and the like; they are very system dependent. However, some
+ system routines may need them, so their use is often unavoidable.
+ (See the short sketch after this list.)
+
+\item Returning a malloc'ed buffer from a subroutine -- someone will forget
+ to release it.
+
+\item Heap allocation (malloc) unless needed -- it is expensive. Use
+ POOL\_MEM instead.
+
+\item Templates -- they can create portability problems.
+
+\item Fancy or tricky C or C++ code, unless you give a good explanation of
+ why you used it.
+
+\item Too much inheritance -- it can complicate the code, and make reading it
+ difficult (unless you are in love with colons)
+
+\end{itemize}
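+
+As a short sketch of the point about storage specifications (in Bacula itself
+these typedefs come in through {\bf bacula.h}):
+
+\footnotesize
+\begin{verbatim}
+#include <stdint.h>
+
+uint32_t num_files = 0;       /* always exactly 32 bits, unsigned */
+int64_t  bytes_written = 0;   /* always exactly 64 bits, signed */
+\end{verbatim}
+\normalsize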
+
+\subsection{Do Use Whenever Possible}
+\index{Possible!Do Use Whenever}
+\index{Do Use Whenever Possible}
+\addcontentsline{toc}{subsubsection}{Do Use Whenever Possible}
+
+\begin{itemize}
+\item Locking and unlocking within a single subroutine.
+
+\item A single point of exit from all subroutines. A goto is
+ perfectly OK to use to get out early, but only to a label
+ named bail\_out, and possibly an ok\_out. See current code
+ examples and the sketch following this list.
+
+\item Malloc and free within a single subroutine.
+
+\item Comments and global explanations on what your code or algorithm does.
+
+\end{itemize}
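+
+A minimal sketch of the single-exit convention (illustrative, not taken from
+the Bacula sources):
+
+\footnotesize
+\begin{verbatim}
+#include <stdbool.h>
+#include <stdlib.h>
+
+static bool do_something(void)
+{
+   bool ok = false;
+   char *buf = (char *)malloc(1024);
+
+   if (!buf) {
+      goto bail_out;            /* early exit on error */
+   }
+   /* ... the real work goes here ... */
+   ok = true;
+
+bail_out:
+   free(buf);                   /* single cleanup point; free(NULL) is safe */
+   return ok;
+}
+\end{verbatim}
+\normalsize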
+
+\subsection{Indenting Standards}
+\index{Standards!Indenting}
+\index{Indenting Standards}
+\addcontentsline{toc}{subsubsection}{Indenting Standards}
+
+We find it very hard to read code indented 8 columns at a time.
+Even 4 at a time uses a lot of space, so we have adopted indenting
+3 spaces at every level. Note, indentation is the visual appearance of the
+source on the page, while tabbing is replacing a series of up to 8 spaces with
+a tab character.
+
+The closest set of parameters for the Linux {\bf indent} program that will
+produce reasonably indented code are:
+
+\footnotesize
+\begin{verbatim}
+-nbad -bap -bbo -nbc -br -brs -c36 -cd36 -ncdb -ce -ci3 -cli0
+-cp36 -d0 -di1 -ndj -nfc1 -nfca -hnl -i3 -ip0 -l85 -lp -npcs
+-nprs -npsl -saf -sai -saw -nsob -nss -nbc -ncs -nbfda
+\end{verbatim}
+\normalsize
+
+You can put the above in your .indent.pro file, and then just invoke indent on
+your file. However, be warned. This does not produce perfect indenting, and it
+will mess up C++ class statements pretty badly.
+
+Braces are required in all if statements (missing in some very old code). To
+avoid generating too many lines, the first brace appears on the first line
+(e.g. of an if), and the closing brace is on a line by itself. E.g.
+
+\footnotesize
+\begin{verbatim}
+   if (abc) {
+      some_code;
+   }
+\end{verbatim}
+\normalsize
+
+Just follow the convention in the code. For example, we prefer non-indented cases.
+
+\footnotesize
+\begin{verbatim}
+   switch (code) {
+   case 'A':
+      do something
+      break;
+   case 'B':
+      again();
+      break;
+   default:
+      break;
+   }
+\end{verbatim}
+\normalsize
+
+Avoid using // style comments except for temporary code or turning off debug
+code. Standard C comments are preferred (this also keeps the code closer to
+C).
+
+Attempt to keep all lines less than 85 characters long so that the whole line
+of code is readable at one time. This is not a rigid requirement.
+
+Always put a brief description at the top of any new file created describing
+what it does and including your name and the date it was first written. Please
+don't forget any Copyrights and acknowledgments if it isn't 100\% your code.
+Also, include the Bacula copyright notice that is in {\bf src/c}.
+
+In general you should have two includes at the top of the file: the first
+is {\bf bacula.h}, and the second is the include file for the particular
+directory the code is in (e.g. {\bf stored.h}). Additional includes may
+occasionally be needed, but this should be rare.
+
+In general (except for self-contained packages), prototypes should all be put
+in {\bf protos.h} in each directory.
+
+Always put space around assignment and comparison operators.
+
+\footnotesize
+\begin{verbatim}
+ a = 1;
+ if (b >= 2) {
+ cleanup();
+ }
+\end{verbatim}
+\normalsize
+
+but you can compress things in a {\bf for} statement:
+
+\footnotesize
+\begin{verbatim}
+   for (i=0; i < del.num_ids; i++) {
+      ...
+\end{verbatim}
+\normalsize
+
+Don't overuse the inline if (?:). A full {\bf if} is preferred, except in a
+print statement, e.g.:
+
+\footnotesize
+\begin{verbatim}
+   if (ua->verbose && del.num_del != 0) {
+      bsendmsg(ua, _("Pruned %d %s on Volume %s from catalog.\n"), del.num_del,
+         del.num_del == 1 ? "Job" : "Jobs", mr->VolumeName);
+   }
+\end{verbatim}
+\normalsize
+
+Leave a certain amount of debug code (Dmsg) in code you submit, so that future
+problems can be identified. This is particularly true for complicated code
+likely to break. However, try to keep the debug code to a minimum to avoid
+bloating the program and above all to keep the code readable.
+
+Please keep the same style in all new code you develop. If you include code
+previously written, you have the option of leaving it with the old indenting
+or re-indenting it. If the old code is indented with 8 spaces, then please
+re-indent it to Bacula standards.
+
+If you are using {\bf vim}, simply set your tabstop to 8 and your shiftwidth
+to 3.
+
+\subsection{Tabbing}
+\index{Tabbing}
+\addcontentsline{toc}{subsubsection}{Tabbing}
+
+Tabbing (inserting the tab character in place of spaces) is as normal on all
+Unix systems -- a tab advances to the next column that is a multiple of 8.
+My editor converts strings of spaces to tabs automatically -- this results in
+significant compression of the files. Thus, you can remove tabs by replacing
+them with spaces if you wish. Please don't confuse tabbing (use of tab
+characters) with indenting (visual alignment of the code).
+
+\subsection{Don'ts}
+\index{Don'ts}
+\addcontentsline{toc}{subsubsection}{Don'ts}
+
+Please don't use:
+
+\footnotesize
+\begin{verbatim}
+strcpy()
+strcat()
+strncpy()
+strncat()
+sprintf()
+snprintf()
+\end{verbatim}
+\normalsize
+
+They are system dependent and unsafe. These should be replaced by the Bacula
+safe equivalents:
+
+\footnotesize
+\begin{verbatim}
+char *bstrncpy(char *dest, char *source, int dest_size);
+char *bstrncat(char *dest, char *source, int dest_size);
+int bsnprintf(char *buf, int32_t buf_len, const char *fmt, ...);
+int bvsnprintf(char *str, int32_t size, const char *format, va_list ap);
+\end{verbatim}
+\normalsize
+
+See src/lib/bsys.c for more details on these routines.
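+
+For example, a minimal sketch of their use (the buffer sizes and variable
+names are arbitrary, and source\_name is a hypothetical input):
+
+\footnotesize
+\begin{verbatim}
+   char vol[128];
+   char msg[256];
+
+   bstrncpy(vol, source_name, sizeof(vol));   /* always NUL terminated */
+   bsnprintf(msg, sizeof(msg), "Volume %s is in use\n", vol);
+\end{verbatim}
+\normalsize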
+
+Don't use the {\bf \%lld} or the {\bf \%q} printf format editing types to edit
+64 bit integers -- they are not portable. Instead, use {\bf \%s} with {\bf
+edit\_uint64()}. For example:
+
+\footnotesize
+\begin{verbatim}
+   char buf[100];
+   uint64_t num = something;
+   char ed1[50];
+   bsnprintf(buf, sizeof(buf), "Num=%s\n", edit_uint64(num, ed1));
+\end{verbatim}
+\normalsize
+
+Note: {\bf \%lld} is now permitted in Bacula code -- we have our
+own printf routines which handle it correctly. The edit\_uint64() subroutine
+can still be used if you wish, but over time, most of that old style will
+be removed.
+
+The edit buffer {\bf ed1} must be at least 27 bytes long to avoid overflow.
+See src/lib/edit.c for more details. If you look at the code, don't start
+screaming that I use {\bf lld}. I actually use a subtle trick taught to me by
+John Walker. The {\bf lld} that appears in the editing routine is actually a
+{\bf \#define} that expands to what is needed on your OS (usually ``lld'' or
+``q'') and is defined in autoconf/configure.in for each OS. C string
+concatenation causes the appropriate string to be concatenated to the ``\%''.
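+
+A minimal sketch of the trick (the define shown here is illustrative; the
+real one is generated by autoconf/configure.in):
+
+\footnotesize
+\begin{verbatim}
+#define lld "lld"        /* or "q", depending on the OS */
+   ...
+   int64_t val = something;
+   char ed1[50];
+   bsnprintf(ed1, sizeof(ed1), "%" lld, val);  /* "%" lld becomes "%lld" */
+\end{verbatim}
+\normalsize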
+
+Also please don't use the STL or Templates or any complicated C++ code.
+
+\subsection{Message Classes}
+\index{Classes!Message}
+\index{Message Classes}
+\addcontentsline{toc}{subsubsection}{Message Classes}
+
+Currently, there are five classes of messages: Debug, Error, Job, Memory,
+and Queued.
+
+\subsection{Debug Messages}
+\index{Messages!Debug}
+\index{Debug Messages}
+\addcontentsline{toc}{subsubsection}{Debug Messages}
+
+Debug messages are designed to be turned on at a specified debug level and are
+always sent to STDOUT. They are designed to be used only in the development
+debug process. They are coded as:
+
+DmsgN(level, message, arg1, ...) where the N is a number indicating how many
+arguments are to be substituted into the message (i.e. it is a count of the
+number of arguments you have in your message -- generally the number of percent
+signs (\%)). {\bf level} is the debug level at which you wish the message to
+be printed. message is the debug message to be printed, and arg1, ... are the
+arguments to be substituted. Since not all compilers support \#defines with
+varargs, you must explicitly specify how many arguments you have.
+
+When the debug message is printed, it will automatically be prefixed by the
+name of the daemon which is running, the filename where the Dmsg is, and the
+line number within the file.
+
+Some actual examples are:
+
+Dmsg2(20, ``MD5len=\%d MD5=\%s\textbackslash{}n'', strlen(buf), buf);
+
+Dmsg1(9, ``Created client \%s record\textbackslash{}n'', client->hdr.name);
+
+\subsection{Error Messages}
+\index{Messages!Error}
+\index{Error Messages}
+\addcontentsline{toc}{subsubsection}{Error Messages}
+
+Error messages are messages that are related to the daemon as a whole rather
+than a particular job. For example, an out of memory condition may generate an
+error message. They should be very rarely needed. In general, you should be
+using Job and Job Queued messages (Jmsg and Qmsg). They are coded as:
+
+EmsgN(error-code, level, message, arg1, ...) As with debug messages, you must
+explicitly code the number of arguments to be substituted in the message.
+error-code indicates the severity or class of error, and it may be one of the
+following:
+
+\addcontentsline{lot}{table}{Message Error Code Classes}
+\begin{longtable}{lp{3in}}
+{{\bf M\_ABORT} } & {Causes the daemon to immediately abort. This should be
+used only in extreme cases. It attempts to produce a traceback. } \\
+{{\bf M\_ERROR\_TERM} } & {Causes the daemon to immediately terminate. This
+should be used only in extreme cases. It does not produce a traceback. } \\
+{{\bf M\_FATAL} } & {Causes the daemon to terminate the current job, but the
+daemon keeps running. } \\
+{{\bf M\_ERROR} } & {Reports the error. The daemon and the job continue
+running. } \\
+{{\bf M\_WARNING} } & {Reports a warning message. The daemon and the job
+continue running. } \\
+{{\bf M\_INFO} } & {Reports an informational message.}
+
+\end{longtable}
+
+There are other error message classes, but they are in a state of being
+redesigned or deprecated, so please do not use them. Some actual examples are:
+
+
+Emsg1(M\_ABORT, 0, ``Cannot create message thread: \%s\textbackslash{}n'',
+strerror(status));
+
+Emsg3(M\_WARNING, 0, ``Connect to File daemon \%s at \%s:\%d failed. Retrying
+...\textbackslash{}n'', client-\gt{}hdr.name, client-\gt{}address,
+client-\gt{}port);
+
+Emsg3(M\_FATAL, 0, ``bdird\lt{}filed: bad response from Filed to \%s command:
+\%d \%s\textbackslash{}n'', cmd, n, strerror(errno));
+
+\subsection{Job Messages}
+\index{Job Messages}
+\index{Messages!Job}
+\addcontentsline{toc}{subsubsection}{Job Messages}
+
+Job messages are messages that pertain to a particular job such as a file that
+could not be saved, or the number of files and bytes that were saved. They
+are coded as:
+\begin{verbatim}
+Jmsg(jcr, M_FATAL, 0, "Text of message");
+\end{verbatim}
+A Jmsg with M\_FATAL will fail the job. The Jmsg() takes varargs so it can
+have any number of arguments to be substituted in a printf-like format.
+Output from the Jmsg() will go to the Job report.
+
+If the Jmsg is followed with a number such as Jmsg1(...), the number
+indicates the number of arguments to be substituted (varargs is not
+standard for \#defines), and what is more important is that the file and
+line number will be prefixed to the message. This permits a sort of debug
+from user's output.
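+
+For example, a hypothetical numbered variant (the file name variable is
+invented):
+
+\begin{verbatim}
+Jmsg1(jcr, M_WARNING, 0, _("Could not open file %s\n"), fname);
+\end{verbatim}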
+
+\subsection{Queued Job Messages}
+\index{Queued Job Messages}
+\index{Messages!Queued Job}
+\addcontentsline{toc}{subsubsection}{Queued Job Messages}
+Queued Job messages are similar to Jmsg()s except that the message is
+queued rather than immediately dispatched. This is necessary within the
+network subroutines and in the message editing routines. This is to prevent
+recursive loops, and to ensure that messages can be delivered even in the
+event of a network error.
+
+
+\subsection{Memory Messages}
+\index{Messages!Memory}
+\index{Memory Messages}
+\addcontentsline{toc}{subsubsection}{Memory Messages}
+
+Memory messages are messages that are edited into a memory buffer. Generally
+they are used in low level routines such as the low level device file dev.c in
+the Storage daemon or in the low level Catalog routines. These routines do not
+generally have access to the Job Control Record and so they return error
+messages reformatted in a memory buffer. Mmsg() is the way to do this.
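+
+A minimal sketch of its use (the variable names are hypothetical):
+
+\footnotesize
+\begin{verbatim}
+   POOLMEM *errmsg = get_pool_memory(PM_MESSAGE);
+   Mmsg(errmsg, "Unable to open device %s: ERR=%s\n",
+        dev_name, strerror(errno));
+\end{verbatim}
+\normalsize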
+
+\subsection{Bugs Database}
+\index{Database!Bugs}
+\index{Bugs Database}
+\addcontentsline{toc}{subsubsection}{Bugs Database}
+We have a bugs database which is at:
+\elink{http://bugs.bacula.org}{http://bugs.bacula.org}, and as
+a developer you will need to respond to bugs, perhaps bugs in general
+if you have time, otherwise just bugs that correspond to code that
+you wrote.
+
+If you need to answer bugs, please be sure to ask the Project Manager
+(currently Kern) to give you Developer access to the bugs database. This
+allows you to modify statuses and close bugs.
+
+The first thing, if you want to take over a bug rather than just make a
+note, is to assign the bug to yourself. This helps other developers
+know that you are the principal person to deal with the bug. You can do so
+by going into the bug and clicking on the {\bf Update Issue} button. Then
+you simply go to the {\bf Assigned To} box and select your name from the
+drop down box. To actually update it you must click on the {\bf Update
+Information} button a bit further down on the screen, but if you have other
+things to do such as add a Note, you might wait before clicking on the {\bf
+Update Information} button.
+
+Generally, we set the {\bf Status} field to either acknowledged, confirmed,
+or feedback when we first start working on the bug. Feedback is set when
+we expect that the user should give us more information.
+
+Normally, once you are reasonably sure that the bug is fixed, and a patch
+is made and attached to the bug report, and/or in the SVN, you can close
+the bug. If you want the user to test the patch, then leave the bug open,
+otherwise close it and set {\bf Resolution} to {\bf Fixed}. We generally
+close bug reports rather quickly, even without confirmation, especially if
+we have run tests and can see that for us the problem is fixed. However,
+it avoids misunderstandings if, while closing the bug, you leave a note
+that says something to the following effect:
+``We are closing this bug because ... If for some reason it does not fix
+your problem, please feel free to reopen it, or to open a new bug report
+describing the problem.''
+
+We do not recommend that you attempt to edit any of the bug notes that have
+been submitted, nor to delete them or make them private. In fact, if
+someone accidentally makes a bug note private, you should ask the reason
+and if at all possible (with his agreement) make the bug note public.
+
+If the user has not properly filled in most of the important fields
+(platform, OS, Product Version, ...) please do not hesitate to politely ask
+him. Also, if the bug report is a request for a new feature, please
+politely send the user to the Feature Request menu item on www.bacula.org.
+The same applies to support requests (we answer only bugs): you might give
+the user a tip, but please politely refer him to the manual and the
+Getting Support page of www.bacula.org.
--- /dev/null
+\chapter{Bacula Git Usage}
+\label{_GitChapterStart}
+\index{Git}
+\index{Git!Repo}
+\addcontentsline{toc}{section}{Bacula Git Usage}
+
+This chapter is intended to help you use the Git source code
+repositories to obtain, modify, and submit Bacula source code.
+
+
+\section{Bacula Git repositories}
+\index{Git}
+\addcontentsline{toc}{subsection}{Git repositories}
+As of September 2009, the Bacula source code has been split into
+three Git repositories. One is a repository that holds the
+main Bacula source code with directories {\bf bacula}, {\bf gui},
+and {\bf regress}. The second repository contains
+the {\bf docs} directory, and the third repository
+contains the {\bf rescue} directory. All three repositories are
+hosted on Source Forge.
+
+Previously everything was in a single SVN repository.
+We have split the SVN repository into three because Git
+offers significant advantages for ease of managing and integrating
+developer's changes. However, one of the disadvantages of Git is that you
+must work with the full repository, while SVN allows you to check out
+individual directories. If we put everything into a single Git
+repository it would be far bigger than most developers would want
+to check out, so we have separated the docs and rescue into their own
+repositories, and moved only the parts that are most actively
+worked on by the developers (bacula, gui, and regress) to the
+main Bacula Git repository.
+
+Bacula developers must now have a certain knowledge of Git.
+
+\section{Git Usage}
+\index{Git Usage}
+\addcontentsline{toc}{subsection}{Git Usage}
+
+Please note that if you are familiar with SVN, Git is similar
+(and better), but there can be a few surprising differences that
+can be very confusing (nothing worse than converting from CVS to SVN).
+
+The main Bacula Git repo contains the subdirectories {\bf bacula}, {\bf gui},
+and {\bf regress}. With Git it is not possible to pull only a
+single directory; because of the hash code nature of Git, you
+must take all or nothing.
+
+For developers, the most important thing to remember about Git and
+the Source Forge repository is not to "force" a {\bf push} to the
+repository. Doing so can rewrite
+the Git repository history and cause a lot of problems for the
+project.
+
+You can get a full copy of the Source Forge Bacula Git repository with the
+following command:
+
+\begin{verbatim}
+git clone git://bacula.git.sourceforge.net/gitroot/bacula/bacula trunk
+\end{verbatim}
+
+This will put a read-only copy into the directory {\bf trunk}
+in your current directory, and {\bf trunk} will contain
+the subdirectories: {\bf bacula}, {\bf gui}, and {\bf regress}.
+Obviously you can use any name and not just {\bf trunk}. In fact,
+once you have the repository in say {\bf trunk}, you can copy the
+whole directory to another place and have a fully functional
+git repository.
+
+If you have write permission to the Source Forge
+repository, you can get a copy of the Git repo with:
+
+\begin{verbatim}
+git clone ssh://<userid>@bacula.git.sourceforge.net/gitroot/bacula/bacula trunk
+\end{verbatim}
+
+where you replace \verb+<userid>+ with your Source Forge login
+userid, and you must have previously uploaded your public ssh key
+to Source Forge.
+
+The above command needs to be done only once. Thereafter, you can:
+
+\begin{verbatim}
+cd trunk
+git pull # refresh my repo with the latest code
+\end{verbatim}
+
+As of August 2009, the size of the repository ({\bf trunk} in the above
+example) will be approximately 55 Megabytes. However, if you build
+from source in this directory and do a lot of updates and regression
+testing, the directory could become several hundred megabytes.
+
+\subsection{Learning Git}
+\index{Learning Git}
+If you want to learn more about Git, we recommend that you visit:\\
+\elink{http://book.git-scm.com/}{http://book.git-scm.com/}.
+
+Some of the differences between Git and SVN are:
+\begin{itemize}
+\item Your main Git directory is a full Git repository to which you can
+ and must commit. In fact, we suggest you commit frequently.
+\item When you commit, the commit goes into your local Git
+ database. You must use another command to write it to the
+ master Source Forge repository (see below).
+\item The local Git database is kept in the directory {\bf .git} at the
+ top level of the directory.
+\item All the important Git configuration information is kept in the
+ file {\bf .git/config} in ASCII format that is easy to manually edit.
+\item When you do a {\bf commit} the changes are put in {\bf .git}
+ but not in the main Source Forge repository.
+\item You can push your changes to the external repository using
+ the command {\bf git push} providing you have write permission
+ on the repository.
+\item We restrict developers just learning Git to read-only
+ access until they feel comfortable with it, and only then
+ give them write access.
+\item You can download all the current changes in the external repository
+ and merge them into your {\bf master} branch using the command
+ {\bf git pull}.
+\item The command {\bf git add} is used to add a new file to the
+ repository AND to tell Git that you want a file that has changed
+ to be in the next commit. This has lots of advantages, because
+ a {\bf git commit} only commits those files that have been
+ explicitly added. Note with SVN {\bf add} is used only
+ to add new files to the repo.
+\item You can add and commit all modified files in one command
+ using {\bf git commit -a}.
+\item This extra use of {\bf add} allows you to make a number
+ of changes then add only a few of the files and commit them,
+ then add more files and commit them until you have committed
+ everything. This has the advantage of allowing you to more
+ easily group small changes and do individual commits on them
+ (see the short example after this list).
+ By keeping commits smaller, and separated into topics, it makes
+ it much easier to later select certain commits for backporting.
+\item If you {\bf git pull} from the main repository and make
+ some changes, and before you do a {\bf git push} someone
+ else pushes changes to the Git repository, your changes will
+ apply to an older version of the repository, and you will probably
+ get an error message such as:
+
+\begin{verbatim}
+ git push
+ To git@github.com:bacula/bacula.git
+ ! [rejected] master -> master (non-fast forward)
+ error: failed to push some refs to 'git@github.com:bacula/bacula.git'
+\end{verbatim}
+
+ which is Git's way of telling you that the main repository has changed
+ and that if you push your changes, they will not be integrated properly.
+ This is very similar to what happens when you do an "svn update" and
+ get merge conflicts.
+ As we have noted above, you should never ask Git to force the push.
+ See below for an explanation of why.
+\item To integrate (merge) your changes properly, you should always do
+ a {\bf git pull} just prior to doing a {\bf git push}.
+\item If Git is unable to merge your changes or finds a conflict it
+ will tell you and you must do conflict resolution, which is much
+ easier in Git than in SVN.
+\item Resolving conflicts is described below in the {\bf github} section.
+\end{itemize}
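+
+For example, a hypothetical sequence (the file names and comments are
+invented) that groups changes into two topic commits:
+
+\begin{verbatim}
+git add src/dird/ua_cmds.c
+git commit -m "Add new list command"
+git add src/lib/bsys.c src/lib/protos.h
+git commit -m "Fix string edge case"
+\end{verbatim}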
+
+\section{Step by Step Modifying Bacula Code}
+Suppose you want to download Bacula source code, build it, make
+a change, then submit your change to the Bacula developers. What
+would you do?
+
+\begin{itemize}
+\item Download the Source code:\\
+\begin{verbatim}
+git clone ssh://<userid>@bacula.git.sourceforge.net/gitroot/bacula/bacula trunk
+\end{verbatim}
+
+\item Configure and Build Bacula:\\
+\begin{verbatim}
+./configure (all-your-normal-options)
+make
+\end{verbatim}
+
+\item Create a branch to work on:
+\begin{verbatim}
+cd trunk/bacula
+git checkout -b bugfix master
+\end{verbatim}
+
+\item Edit, build, Test, ...\\
+\begin{verbatim}
+edit file jcr.h
+make
+test
+\end{verbatim}
+
+\item Commit your work:
+\begin{verbatim}
+git commit -am "Short comment on what I did"
+\end{verbatim}
+
+\item Possibly repeat the above two items
+
+\item Switch back to the master branch:\\
+\begin{verbatim}
+git checkout master
+\end{verbatim}
+
+\item Pull the latest changes:\\
+\begin{verbatim}
+git pull
+\end{verbatim}
+
+\item Get back on your bugfix branch:\\
+\begin{verbatim}
+git checkout bugfix
+\end{verbatim}
+
+\item Merge your changes and correct any conflicts:\\
+\begin{verbatim}
+git rebase master bugfix
+\end{verbatim}
+
+\item Fix any conflicts:\\
+You will be notified if there are conflicts. The first
+thing to do is:
+
+\begin{verbatim}
+git diff
+\end{verbatim}
+
+This will produce a diff of only the files having a conflict.
+Fix each file in turn. When it is fixed, the diff for that file
+will go away.
+
+For each file fixed, you must do the same as in SVN: inform Git with:
+
+\begin{verbatim}
+git add (name-of-file-no-longer-in-conflict)
+\end{verbatim}
+
+\item When all files are fixed do:
+\begin{verbatim}
+git rebase --continue
+\end{verbatim}
+
+\item When you are ready to send a patch, do the following:\\
+\begin{verbatim}
+git checkout bugfix
+git format-patch -M master
+\end{verbatim}
+Look at the files produced. They should be numbered 0001-xxx.patch
+where there is one file for each commit you did, numbered sequentially,
+and the xxx is what you put in the commit comment.
+
+\item If the patch files are good, send them by email to the developers
+as attachments.
+
+\end{itemize}
+
+
+
+\subsection{More Details}
+
+Normally, you will work by creating a branch of the master branch of your
+repository, make your modifications, then make sure it is up to date, and finally
+create format-patch patches or push it to the Source Forge repo. Assuming
+you call the Bacula repository {\bf trunk}, you might use the following
+commands:
+
+\begin{verbatim}
+cd trunk
+git checkout master
+git pull
+git checkout -b newbranch master
+(edit, ...)
+git add <file-edited>
+git commit -m "<comment about commit>"
+...
+\end{verbatim}
+
+When you have completed working on your branch, you will do:
+
+\begin{verbatim}
+cd trunk
+git checkout newbranch # ensure I am on my branch
+git pull # get latest source code
+git rebase master # merge my code
+\end{verbatim}
+
+If you have completed your edits before anyone has modified the repository,
+the {\bf git rebase master} will report that there was nothing to do. Otherwise,
+it will merge the changes that were made in the repository before your changes.
+If there are any conflicts, Git will tell you. Typically resolving conflicts with
+Git is relatively easy. You simply make a diff:
+
+\begin{verbatim}
+git diff
+\end{verbatim}
+
+Then edit each file that was listed in the {\bf git diff} to remove the
+conflict, which will be indicated by lines of:
+
+\begin{verbatim}
+<<<<<<< HEAD
+text
+=======
+other text
+>>>>>>> your-commit
+\end{verbatim}
+
+where {\bf text} is what is in the Bacula repository, and {\bf other text}
+is what you have changed.
+
+Once you have eliminated the conflict, the {\bf git diff} will show nothing,
+and you must do a:
+
+\begin{verbatim}
+git add <file-with-conflicts-fixed>
+\end{verbatim}
+
+Once you have fixed all the files with conflicts in the above manner, you enter:
+
+\begin{verbatim}
+git rebase --continue
+\end{verbatim}
+
+and your rebase will be complete.
+
+If for some reason, before doing the --continue, you want to abort the rebase and return to what you had, you enter:
+
+\begin{verbatim}
+git rebase --abort
+\end{verbatim}
+
+Finally, to make a set of patch files:
+
+\begin{verbatim}
+git format-patch -M master
+\end{verbatim}
+
+When you see your changes have been integrated and pushed to the
+main repo, you can delete your branch with:
+
+\begin{verbatim}
+git checkout master
+git branch -D newbranch
+\end{verbatim}
+
+
+\section{Forcing Changes}
+If you want to understand why it is not a good idea to force a
+push to the repository, look at the following picture:
+
+\includegraphics[width=0.85\textwidth]{\idir git-edit-commit.eps}
+
+The above graphic has three lines of circles. Each circle represents
+a commit, and time runs from the left to the right. The top line
+shows the repository just before you are going to do a push. Note that the
+point at which you pulled is the circle on the left; your changes are
+represented by the circle labeled {\bf Your mods}. It is shown below
+to indicate that the changes are only in your local repository. Finally,
+there are pushes A and B that came after the time at which you pulled.
+
+If you were to force your changes into the repository, Git would place them
+immediately after the point at which you pulled them, so they would
+go before the pushes A and B. However, doing so would rewrite the history
+of the repository and make it very difficult for other users to synchronize
+since they would have to somehow wedge their changes at some point before the
+current HEAD of the repository. This situation is shown by the second line of
+pushes.
+
+What you really want to do is to put your changes after Push B (the current HEAD).
+This is shown in the third line of pushes. The best way to accomplish this is to
+work in a branch, pull the repository so you have your master equal to HEAD (in first
+line), then to rebase your branch on the current master and then commit it. The
+exact commands to accomplish this are shown in the previous couple of sections.
--- /dev/null
+%%
+%%
+
+\chapter*{Implementing a GUI Interface}
+\label{_ChapterStart}
+\index[general]{Interface!Implementing a Bacula GUI }
+\index[general]{Implementing a Bacula GUI Interface }
+\addcontentsline{toc}{section}{Implementing a Bacula GUI Interface}
+
+\section{General}
+\index[general]{General }
+\addcontentsline{toc}{subsection}{General}
+
+This document is intended mostly for developers who wish to develop a new GUI
+interface to {\bf Bacula}.
+
+\subsection{Minimal Code in Console Program}
+\index[general]{Program!Minimal Code in Console }
+\index[general]{Minimal Code in Console Program }
+\addcontentsline{toc}{subsubsection}{Minimal Code in Console Program}
+
+Until now, I have kept all the Catalog code in the Director (with the
+exception of dbcheck and bscan). This is because at some point I would like to
+add user level security and access. If we have code spread everywhere such as
+in a GUI this will be more difficult. The other advantage is that any code you
+add to the Director is automatically available to both the tty console program
+and the WX program. The major disadvantage is it increases the size of the
+code -- however, compared to NetWorker the Bacula Director is really tiny.
+
+\subsection{GUI Interface is Difficult}
+\index[general]{GUI Interface is Difficult }
+\index[general]{Difficult!GUI Interface is }
+\addcontentsline{toc}{subsubsection}{GUI Interface is Difficult}
+
+Interfacing to an interactive program such as Bacula can be very difficult
+because the interfacing program must interpret all the prompts that may come.
+This can be next to impossible. There are a number of ways that Bacula is
+designed to facilitate this:
+
+\begin{itemize}
+\item The Bacula network protocol is packet based, and thus pieces of
+information sent can be ASCII or binary.
+\item The packet interface permits knowing where the end of a list is.
+\item The packet interface permits special ``signals'' to be passed rather
+than data.
+\item The Director has a number of commands that are non-interactive. They
+all begin with a period, and provide things such as the list of all Jobs,
+list of all Clients, list of all Pools, list of all Storage, ... Thus the GUI
+interface can get to virtually all information that the Director has in a
+deterministic way. See \lt{}bacula-source\gt{}/src/dird/ua\_dotcmds.c for
+more details on this; a short example appears after this list.
+\item Most console commands allow all the arguments to be specified on the
+command line: e.g. {\bf run job=NightlyBackup level=Full}
+\end{itemize}
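+
+For example, a hypothetical console exchange using two of these dot commands
+(the client and job names are invented; real output depends on your
+configuration):
+
+\footnotesize
+\begin{verbatim}
+*.clients
+client1-fd
+client2-fd
+*.jobs
+NightlyBackup
+\end{verbatim}
+\normalsize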
+
+One of the first things to overcome is to be able to establish a conversation
+with the Director. Although you can write all your own code, it is probably
+easier to use the Bacula subroutines. The following code is used by the
+Console program to begin a conversation.
+
+\footnotesize
+\begin{verbatim}
+static BSOCK *UA_sock = NULL;
+static JCR jcr;
+...
+   read-your-config-getting-address-and-password;
+   UA_sock = bnet_connect(NULL, 5, 15, "Director daemon", dir->address,
+                          NULL, dir->DIRport, 0);
+   if (UA_sock == NULL) {
+      terminate_console(0);
+      return 1;
+   }
+   jcr.dir_bsock = UA_sock;
+   if (!authenticate_director(&jcr, dir)) {
+      fprintf(stderr, "ERR=%s", UA_sock->msg);
+      terminate_console(0);
+      return 1;
+   }
+   read_and_process_input(stdin, UA_sock);
+   if (UA_sock) {
+      bnet_sig(UA_sock, BNET_TERMINATE); /* send EOF */
+      bnet_close(UA_sock);
+   }
+   exit(0);
+\end{verbatim}
+\normalsize
+
+Then the read\_and\_process\_input routine looks like the following:
+
+\footnotesize
+\begin{verbatim}
+ get-input-to-send-to-the-Director;
+ bnet_fsend(UA_sock, "%s", input);
+ stat = bnet_recv(UA_sock);
+ process-output-from-the-Director;
+\end{verbatim}
+\normalsize
+
+For a GUI program things will be a bit more complicated. Basically in the very
+inner loop, you will need to check and see if any output is available on the
+UA\_sock. For an example, please take a look at the WX GUI interface code
+in: \lt{}bacula-source\gt{}/src/wx-console
+
+\section{Bvfs API}
+\label{sec:bvfs}
+
+To help developers of restore GUI interfaces, we have added new \textsl{dot
+ commands} that permit browsing the catalog in a very simple way.
+
+\begin{itemize}
+\item \texttt{.bvfs\_update [jobid=x,y,z]} This command is required to update
+ the Bvfs cache in the catalog. You need to run it before any access to the
+ Bvfs layer.
+
+\item \texttt{.bvfs\_lsdirs jobid=x,y,z path=/path | pathid=101} This command
+ will list all directories in the specified \texttt{path} or
+ \texttt{pathid}. Using \texttt{pathid} avoids problems with character
+ encoding of path/filenames.
+
+\item \texttt{.bvfs\_lsfiles jobid=x,y,z path=/path | pathid=101} This command
+ will list all files in the specified \texttt{path} or \texttt{pathid}. Using
+ \texttt{pathid} avoids problems with character encoding.
+\end{itemize}
+
+You can use \texttt{limit=xxx} and \texttt{offset=yyy} to limit the amount of
+data that will be displayed.
+
+\begin{verbatim}
+* .bvfs_update jobid=1,2
+* .bvfs_update
+* .bvfs_lsdirs path=/ jobid=1,2
+\end{verbatim}
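+
+For example, using \texttt{limit} and \texttt{offset} (hypothetical values)
+to page through a large directory:
+
+\begin{verbatim}
+* .bvfs_lsfiles jobid=1,2 path=/etc/ limit=100 offset=0
+* .bvfs_lsfiles jobid=1,2 path=/etc/ limit=100 offset=100
+\end{verbatim}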
--- /dev/null
+%%
+%%
+
+\chapter{Bacula MD5 Algorithm}
+\label{MD5Chapter}
+\addcontentsline{toc}{section}{Bacula MD5 Algorithm}
+
+\section{Command Line Message Digest Utility }
+\index{Utility!Command Line Message Digest }
+\index{Command Line Message Digest Utility }
+\addcontentsline{toc}{subsection}{Command Line Message Digest Utility}
+
+
+This page describes {\bf md5}, a command line utility usable on either Unix or
+MS-DOS/Windows, which generates and verifies message digests (digital
+signatures) using the MD5 algorithm. This program can be useful when
+developing shell scripts or Perl programs for software installation, file
+comparison, and detection of file corruption and tampering.
+
+\subsection{Name}
+\index{Name}
+\addcontentsline{toc}{subsubsection}{Name}
+
+{\bf md5} - generate / check MD5 message digest
+
+\subsection{Synopsis}
+\index{Synopsis }
+\addcontentsline{toc}{subsubsection}{Synopsis}
+
+{\bf md5} [ {\bf -c}{\it signature} ] [ {\bf -u} ] [ {\bf -d}{\it input\_text}
+| {\it infile} ] [ {\it outfile} ]
+
+\subsection{Description}
+\index{Description }
+\addcontentsline{toc}{subsubsection}{Description}
+
+A {\it message digest} is a compact digital signature for an arbitrarily long
+stream of binary data. An ideal message digest algorithm would never generate
+the same signature for two different sets of input, but achieving such
+theoretical perfection would require a message digest as long as the input
+file. Practical message digest algorithms compromise in favour of a digital
+signature of modest size created with an algorithm designed to make
+preparation of input text with a given signature computationally infeasible.
+Message digest algorithms have much in common with techniques used in
+encryption, but to a different end: verification that data have not been
+altered since the signature was published.
+
+Many older programs requiring digital signatures employed 16 or 32 bit {\it
+cyclical redundancy codes} (CRC) originally developed to verify correct
+transmission in data communication protocols, but these short codes, while
+adequate to detect the kind of transmission errors for which they were
+intended, are insufficiently secure for applications such as electronic
+commerce and verification of security related software distributions.
+
+The most commonly used present-day message digest algorithm is the 128 bit MD5
+algorithm, developed by Ron Rivest of the
+\elink{MIT}{http://web.mit.edu/}
+\elink{Laboratory for Computer Science}{http://www.lcs.mit.edu/} and
+\elink{RSA Data Security, Inc.}{http://www.rsa.com/} The algorithm, with a
+reference implementation, was published as Internet
+\elink{RFC 1321}{http://www.fourmilab.ch/md5/rfc1321.html} in April 1992, and
+was placed into the public domain at that time. Message digest algorithms such
+as MD5 are not deemed ``encryption technology'' and are not subject to the
+export controls some governments impose on other data security products.
+(Obviously, the responsibility for obeying the laws in the jurisdiction in
+which you reside is entirely your own, but many common Web and Mail utilities
+use MD5, and I am unaware of any restrictions on their distribution and use.)
+
+The MD5 algorithm has been implemented in numerous computer languages
+including C,
+\elink{Perl}{http://www.perl.org/}, and
+\elink{Java}{http://www.javasoft.com/}; if you're writing a program in such a
+language, track down a suitable subroutine and incorporate it into your
+program. The program described on this page is a {\it command line}
+implementation of MD5, intended for use in shell scripts and Perl programs (it
+is much faster than computing an MD5 signature directly in Perl). This {\bf
+md5} program was originally developed as part of a suite of tools intended to
+monitor large collections of files (for example, the contents of a Web site)
+to detect corruption of files and inadvertent (or perhaps malicious) changes.
+That task is now best accomplished with more comprehensive packages such as
+\elink{Tripwire}{ftp://coast.cs.purdue.edu/pub/COAST/Tripwire/}, but the
+command line {\bf md5} component continues to prove useful for verifying
+correct delivery and installation of software packages, comparing the contents
+of two different systems, and checking for changes in specific files.
+
+\subsection{Options}
+\index{Options }
+\addcontentsline{toc}{subsubsection}{Options}
+
+\begin{description}
+
+\item [{\bf -c}{\it signature} ]
+ \index{-csignature }
+ Computes the signature of the specified {\it infile} or the string supplied
+by the {\bf -d} option and compares it against the specified {\it signature}.
+If the two signatures match, the exit status will be zero, otherwise the exit
+status will be 1. No signature is written to {\it outfile} or standard
+output; only the exit status is set. The signature to be checked must be
+specified as 32 hexadecimal digits.
+
+\item [{\bf -d}{\it input\_text} ]
+ \index{-dinput\_text }
+ A signature is computed for the given {\it input\_text} (which must be quoted
+if it contains white space characters) instead of input from {\it infile} or
+standard input. If input is specified with the {\bf -d} option, no {\it
+infile} should be specified.
+
+\item [{\bf -u} ]
+ Print how-to-call information.
+ \end{description}
+
+\subsection{Files}
+\index{Files }
+\addcontentsline{toc}{subsubsection}{Files}
+
+If no {\it infile} or {\bf -d} option is specified or {\it infile} is a single
+``-'', {\bf md5} reads from standard input; if no {\it outfile} is given, or
+{\it outfile} is a single ``-'', output is sent to standard output. Input and
+output are processed strictly serially; consequently {\bf md5} may be used in
+pipelines.
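+
+For example, some illustrative invocations (the file and directory names are
+hypothetical):
+
+\footnotesize
+\begin{verbatim}
+md5 somefile.tar.gz                    # digest of a file to standard output
+md5 -d"some text"                      # digest of a literal string
+tar cf - somedir | md5                 # md5 used in a pipeline
+md5 -c9e107d9d372bb6826bd81d3542a419d6 somefile   # verify: exit status 0 on match
+\end{verbatim}
+\normalsize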
+
+\subsection{Bugs}
+\index{Bugs }
+\addcontentsline{toc}{subsubsection}{Bugs}
+
+The mechanism used to set standard input to binary mode may be specific to
+Microsoft C; if you rebuild the DOS/Windows version of the program from source
+using another compiler, be sure to verify binary files work properly when read
+via redirection or a pipe.
+
+This program has not been tested on a machine on which {\tt int} and/or {\tt
+long} are longer than 32 bits.
+
+\section{
+\elink{Download md5.zip}{http://www.fourmilab.ch/md5/md5.zip} (Zipped
+archive)}
+\index{Archive!Download md5.zip Zipped }
+\index{Download md5.zip (Zipped archive) }
+\addcontentsline{toc}{subsection}{Download md5.zip (Zipped archive)}
+
+The program is provided as
+\elink{md5.zip}{http://www.fourmilab.ch/md5/md5.zip}, a
+\elink{Zipped}{http://www.pkware.com/} archive containing a ready-to-run
+Win32 command-line executable program, {\tt md5.exe} (compiled using Microsoft
+Visual C++ 5.0), and in source code form along with a {\tt Makefile} to build
+the program under Unix.
+
+\subsection{See Also}
+\index{Also!See }
+\index{See Also }
+\addcontentsline{toc}{subsubsection}{See Also}
+
+{\bf sum}(1)
+
+\subsection{Exit Status}
+\index{Status!Exit }
+\index{Exit Status }
+\addcontentsline{toc}{subsubsection}{Exit Status}
+
+{\bf md5} returns status 0 if processing was completed without errors, 1 if
+the {\bf -c} option was specified and the given signature does not match that
+of the input, and 2 if processing could not be performed at all due, for
+example, to a nonexistent input file.
+
+\subsection{Copying}
+\index{Copying }
+\addcontentsline{toc}{subsubsection}{Copying}
+
+\begin{quote}
+This software is in the public domain. Permission to use, copy, modify, and
+distribute this software and its documentation for any purpose and without
+fee is hereby granted, without any conditions or restrictions. This software
+is provided ``as is'' without express or implied warranty.
+\end{quote}
+
+\subsection{Acknowledgements}
+\index{Acknowledgements }
+\addcontentsline{toc}{subsubsection}{Acknowledgements}
+
+The MD5 algorithm was developed by Ron Rivest. The public domain C language
+implementation used in this program was written by Colin Plumb in 1993.
+{\it
+\elink{by John Walker}{http://www.fourmilab.ch/}
+January 6th, MIM }
--- /dev/null
+%%
+%%
+
+\chapter{Storage Media Output Format}
+\label{_ChapterStart9}
+\index{Format!Storage Media Output}
+\index{Storage Media Output Format}
+\addcontentsline{toc}{section}{Storage Media Output Format}
+
+\section{General}
+\index{General}
+\addcontentsline{toc}{subsection}{General}
+
+This document describes the media format written by the Storage daemon. The
+Storage daemon reads and writes in units of blocks. Blocks contain records.
+Each block has a block header followed by records, and each record has a
+record header followed by record data.
+
+This chapter is intended to be a technical discussion of the Media Format and
+as such is not targeted at end users but rather at developers and system
+administrators that want or need to know more of the working details of {\bf
+Bacula}.
+
+\section{Definitions}
+\index{Definitions}
+\addcontentsline{toc}{subsection}{Definitions}
+
+\begin{description}
+
+\item [Block]
+ \index{Block}
+ A block represents the primitive unit of information that the Storage daemon
+reads and writes to a physical device. Normally, for a tape device, it will
+be the same as a tape block. The Storage daemon always reads and writes
+blocks. A block consists of block header information followed by records.
+Clients of the Storage daemon (the File daemon) normally never see blocks.
+However, some of the Storage tools (bls, bscan, bextract, ...) may use the
+block header information. In older Bacula tape versions, a block could
+contain records (see record definition below) from multiple jobs. However,
+all blocks currently written by Bacula are block level BB02, and a given
+block contains records for only a single job. Different jobs simply have
+their own private blocks that are intermingled with the other blocks from
+other jobs on the Volume (previously the records were intermingled within
+the blocks). Having only records from a single job in any given block
+permitted moving the VolumeSessionId and VolumeSessionTime (see below) from
+each record header to the Block header. This has two advantages: 1. a block
+can be quickly rejected based on the contents of the header without reading
+all the records. 2. because there is on average more than one record per
+block, less data is written to the Volume for each job.
+
+\item [Record]
+ \index{Record}
+ A record consists of a Record Header, which is managed by the Storage daemon
+and Record Data, which is the data received from the Client. A record is the
+primitive unit of information sent to and from the Storage daemon by the
+Client (File daemon) programs. The details are described below.
+
+\item [JobId]
+ \index{JobId}
+ A number assigned by the Director daemon for a particular job. This number
+will be unique for that particular Director (Catalog). The daemons use this
+number to keep track of individual jobs. Within the Storage daemon, the JobId
+may not be unique if several Directors are accessing the Storage daemon
+simultaneously.
+
+\item [Session]
+ \index{Session}
+ A Session is a concept used in the Storage daemon that corresponds one to
+one with a Job, with the exception that each session is uniquely identified
+within the Storage daemon by a unique SessionId/SessionTime pair (see below).
+
+\item [VolSessionId]
+ \index{VolSessionId}
+ A unique number assigned by the Storage daemon to a particular session (Job)
+it is having with a File daemon. This number by itself is not unique to the
+given Volume, but with the VolSessionTime, it is unique.
+
+\item [VolSessionTime]
+ \index{VolSessionTime}
+ A unique number assigned by the Storage daemon to a particular Storage daemon
+execution. It is actually the Unix time\_t value of when the Storage daemon
+began execution cast to a 32 bit unsigned integer. The combination of the
+{\bf VolSessionId} and the {\bf VolSessionTime} for a given Storage daemon is
+guaranteed to be unique for each Job (or session).
+
+\item [FileIndex]
+ \index{FileIndex}
+ A sequential number beginning at one assigned by the File daemon to the files
+within a job that are sent to the Storage daemon for backup. The Storage
+daemon ensures that this number is greater than zero and sequential. Note,
+the Storage daemon uses negative FileIndexes to flag Session Start and End
+Labels as well as End of Volume Labels. Thus, the combination of
+VolSessionId, VolSessionTime, and FileIndex uniquely identifies the records
+for a single file written to a Volume.
+
+\item [Stream]
+ \index{Stream}
+ While writing the information for any particular file to the Volume, there
+can be any number of distinct pieces of information about that file, e.g. the
+attributes, the file data, ... The Stream indicates what piece of data it
+is, and it is an arbitrary number assigned by the File daemon to the parts
+(Unix attributes, Win32 attributes, data, compressed data,\ ...) of a file
+that are sent to the Storage daemon. The Storage daemon has no knowledge of
+the details of a Stream; it simply represents a numbered stream of bytes. The
+data for a given stream may be passed to the Storage daemon in single record,
+or in multiple records.
+
+\item [Block Header]
+ \index{Block Header}
+ A block header consists of a block identification (``BB02''), a block length
+in bytes (typically 64,512), a checksum, and a sequential block number. Each
+block starts with a Block Header and is followed by Records. Current block
+headers also contain the VolSessionId and VolSessionTime for the records
+written to that block.
+
+\item [Record Header]
+ \index{Record Header}
+ A record header contains the Volume Session Id, the Volume Session Time, the
+FileIndex, the Stream, and the size of the data record which follows. The
+Record Header is always immediately followed by a Data Record if the size
+given in the Header is greater than zero. Note, for Block headers of level
+BB02 (version 1.27 and later), the Record header as written to tape does not
+contain the Volume Session Id and the Volume Session Time as these two
+fields are stored in the BB02 Block header. The in-memory record header does
+have those fields for convenience.
+
+\item [Data Record]
+ \index{Data Record}
+ A data record consists of a binary stream of bytes and is always preceded by
+a Record Header. The details of the meaning of the binary stream of bytes are
+unknown to the Storage daemon, but the Client program (File daemon) defines
+and thus knows the details of each record type.
+
+\item [Volume Label]
+ \index{Volume Label}
+ A label placed by the Storage daemon at the beginning of each storage volume.
+It contains general information about the volume. It is written in Record
+format. The Storage daemon manages Volume Labels, and if the client wants, he
+may also read them.
+
+\item [Begin Session Label]
+ \index{Begin Session Label}
+ The Begin Session Label is a special record placed by the Storage daemon on
+the storage medium as the first record of an append session job with a File
+daemon. This record is useful for finding the beginning of a particular
+session (Job), since no records with the same VolSessionId and VolSessionTime
+will precede this record. This record is not normally visible outside of the
+Storage daemon. The Begin Session Label is similar to the Volume Label except
+that it contains additional information pertaining to the Session.
+
+\item [End Session Label]
+ \index{End Session Label}
+ The End Session Label is a special record placed by the Storage daemon on the
+storage medium as the last record of an append session job with a File
+daemon. The End Session Record is distinguished by a FileIndex with a value
+of minus two (-2). This record is useful for detecting the end of a
+particular session since no records with the same VolSessionId and
+VolSessionTime will follow this record. This record is not normally visible
+outside of the Storage daemon. The End Session Label is similar to the Volume
+Label except that it contains additional information pertaining to the
+Session.
+\end{description}
+
+\section{Storage Daemon File Output Format}
+\index{Format!Storage Daemon File Output}
+\index{Storage Daemon File Output Format}
+\addcontentsline{toc}{subsection}{Storage Daemon File Output Format}
+
+The file storage and tape storage formats are identical except that tape
+records are by default blocked into blocks of 64,512 bytes, except for the
+last block, which is the actual number of bytes written rounded up to a
+multiple of 1024, whereas the last record of file storage is not rounded up.
+The default block size of 64,512 bytes may be overridden by the user (some
+older tape drives only support block sizes of 32K). Each Session written to
+tape is terminated with an End of File mark (this will be removed later).
+Sessions written to file are simply appended to the end of the file.
+
+\section{Overall Format}
+\index{Format!Overall}
+\index{Overall Format}
+\addcontentsline{toc}{subsection}{Overall Format}
+
+A Bacula output file consists of Blocks of data. Each block contains a block
+header followed by records. Each record consists of a record header followed
+by the record data. The first record on a tape will always be the Volume Label
+Record.
+
+No Record Header will be split across Bacula blocks. However, Record Data may
+be split across any number of Bacula blocks. Obviously this will not be the
+case for the Volume Label which will always be smaller than the Bacula Block
+size.
+
+To simplify reading tapes, the Start of Session (SOS) and End of Session (EOS)
+records are never split across blocks. If this is about to happen, Bacula will
+write a short block before writing the session record (actually, the SOS
+record should always be the first record in a block, excepting perhaps the
+Volume label).
+
+Due to hardware limitations, the last block written to the tape may not be
+fully written. If your drive permits backspacing over records, Bacula will back
+up over the last record written on the tape, re-read it, and verify that it
+was correctly written.
+
+When a new tape is mounted Bacula will write the full contents of the
+partially written block to the new tape ensuring that there is no loss of
+data. When reading a tape, Bacula will discard any block that is not totally
+written, thus ensuring that there is no duplication of data. In addition,
+since Bacula blocks are sequentially numbered within a Job, it is easy to
+ensure that no block is missing or duplicated.
+
+\section{Serialization}
+\index{Serialization}
+\addcontentsline{toc}{subsection}{Serialization}
+
+All Block Headers, Record Headers, and Label Records are written using
+Bacula's serialization routines. These routines guarantee that the data is
+written to the output volume in a machine independent format.
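+
+As a sketch of what this looks like in practice (the macros come from
+src/lib/serial.h; the field and constant names shown here are illustrative
+and may differ slightly from the current source), writing a BB02 Block Header
+might be coded as:
+
+\footnotesize
+\begin{verbatim}
+   ser_declare;                      /* declare the serialization cursor */
+   ser_begin(block->buf, BLKHDR2_LENGTH);
+   ser_uint32(block->CheckSum);      /* 32 bit check sum (computed later) */
+   ser_uint32(block->block_len);     /* block byte size including header */
+   ser_uint32(block->BlockNumber);   /* sequential block number */
+   ser_bytes("BB02", 4);             /* identification and block level */
+   ser_uint32(block->VolSessionId);
+   ser_uint32(block->VolSessionTime);
+\end{verbatim}
+\normalsize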
+
+\section{Block Header}
+\index{Header!Block}
+\index{Block Header}
+\addcontentsline{toc}{subsection}{Block Header}
+
+The format of the Block Header (version 1.27 and later) is:
+
+\footnotesize
+\begin{verbatim}
+ uint32_t CheckSum; /* Block check sum */
+ uint32_t BlockSize; /* Block byte size including the header */
+ uint32_t BlockNumber; /* Block number */
+ char ID[4] = "BB02"; /* Identification and block level */
+ uint32_t VolSessionId; /* Session Id for Job */
+ uint32_t VolSessionTime; /* Session Time for Job */
+\end{verbatim}
+\normalsize
+
+The Block header is a fixed length and fixed format and is followed by Record
+Headers and Record Data. The CheckSum field is a 32 bit checksum of the block
+data and the block header but not including the CheckSum field. The Block
+Header is always immediately followed by a Record Header. If the tape is
+damaged, a Bacula utility will be able to recover as much information as
+possible from the tape by recovering blocks which are valid. The Block header
+is written using the Bacula serialization routines and thus is guaranteed to
+be in machine independent format. See below for version 2 of the block header.
+
+
+\section{Record Header}
+\index{Header!Record}
+\index{Record Header}
+\addcontentsline{toc}{subsection}{Record Header}
+
+Each binary data record is preceded by a Record Header. The Record Header is
+fixed length and fixed format, whereas the binary data record is of variable
+length. The Record Header is written using the Bacula serialization routines
+and thus is guaranteed to be in machine independent format.
+
+The format of the Record Header (version 1.27 or later) is:
+
+\footnotesize
+\begin{verbatim}
+ int32_t FileIndex; /* File index supplied by File daemon */
+ int32_t Stream; /* Stream number supplied by File daemon */
+ uint32_t DataSize; /* size of following data record in bytes */
+\end{verbatim}
+\normalsize
+
+This record is followed by the binary Stream data of DataSize bytes, followed
+by another Record Header record and the binary stream data. For the definitive
+definition of this record, see record.h in the src/stored directory.
+
+Additional notes on the above:
+
+\begin{description}
+
+\item [The {\bf VolSessionId} ]
+ \index{VolSessionId}
+ is a unique sequential number that is assigned by the Storage Daemon to a
+particular Job. This number is sequential since the start of execution of the
+daemon.
+
+\item [The {\bf VolSessionTime} ]
+ \index{VolSessionTime}
+ is the time/date that the current execution of the Storage Daemon started. It
+assures that the combination of VolSessionId and VolSessionTime is unique for
+every job written to the tape, even if there was a machine crash between two
+writes.
+
+\item [The {\bf FileIndex} ]
+ \index{FileIndex}
+ is a sequential file number within a job. The Storage daemon requires this
+index to be greater than zero and sequential. Note, however, that the File
+daemon may send multiple Streams for the same FileIndex. In addition, the
+Storage daemon uses negative FileIndices to hold the Begin Session Label, the
+End Session Label, and the End of Volume Label.
+
+\item [The {\bf Stream} ]
+ \index{Stream}
+ is defined by the File daemon and is used to identify separate parts of the
+data saved for each file (Unix attributes, Win32 attributes, file data,
+compressed file data, sparse file data, ...). The Storage Daemon has no idea
+of what a Stream is or what it contains except that the Stream is required to
+be a positive integer. Negative Stream numbers are used internally by the
+Storage daemon to indicate that the record is a continuation of the previous
+record (the previous record would not entirely fit in the block).
+
+For Start Session and End Session Labels (where the FileIndex is negative),
+the Storage daemon uses the Stream field to contain the JobId. The current
+stream definitions are:
+
+\footnotesize
+\begin{verbatim}
+#define STREAM_UNIX_ATTRIBUTES 1 /* Generic Unix attributes */
+#define STREAM_FILE_DATA 2 /* Standard uncompressed data */
+#define STREAM_MD5_SIGNATURE 3 /* MD5 signature for the file */
+#define STREAM_GZIP_DATA 4 /* GZip compressed file data */
+/* Extended Unix attributes with Win32 Extended data. Deprecated. */
+#define STREAM_UNIX_ATTRIBUTES_EX 5 /* Extended Unix attr for Win32 EX */
+#define STREAM_SPARSE_DATA 6 /* Sparse data stream */
+#define STREAM_SPARSE_GZIP_DATA 7
+#define STREAM_PROGRAM_NAMES 8 /* program names for program data */
+#define STREAM_PROGRAM_DATA 9 /* Data needing program */
+#define STREAM_SHA1_SIGNATURE 10 /* SHA1 signature for the file */
+#define STREAM_WIN32_DATA 11 /* Win32 BackupRead data */
+#define STREAM_WIN32_GZIP_DATA 12 /* Gzipped Win32 BackupRead data */
+#define STREAM_MACOS_FORK_DATA 13 /* Mac resource fork */
+#define STREAM_HFSPLUS_ATTRIBUTES 14 /* Mac OS extra attributes */
+#define STREAM_UNIX_ATTRIBUTES_ACCESS_ACL 15 /* Standard ACL attributes on UNIX */
+#define STREAM_UNIX_ATTRIBUTES_DEFAULT_ACL 16 /* Default ACL attributes on UNIX */
+\end{verbatim}
+\normalsize
+
+\item [The {\bf DataSize} ]
+ \index{DataSize}
+ is the size in bytes of the binary data record that follows the Session
+Record header. The Storage Daemon has no idea of the actual contents of the
+binary data record. For standard Unix files, the data record typically
+contains the file attributes or the file data. For a sparse file the first
+64 bits of the file data contains the storage address for the data block.
+\end{description}
+
+The Record Header is never split across two blocks. If there is not enough
+room in a block for the full Record Header, the block is padded to the end
+with zeros and the Record Header begins in the next block. The data record, on
+the other hand, may be split across multiple blocks and even multiple physical
+volumes. When a data record is split, the second (and possibly subsequent)
+piece of the data is preceded by a new Record Header. Thus each piece of data
+is always immediately preceded by a Record Header. When reading a record, if
+Bacula finds only part of the data in the first record, it will automatically
+read the next record and concatenate the data record to form a full data
+record.
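+
+The following sketch shows the shape of that reassembly logic. It is a sketch
+only: the helper routines here are hypothetical, not Bacula's actual code. A
+continuation piece is recognized by the negative Stream value in its Record
+Header, as described above.
+
+\footnotesize
+\begin{verbatim}
+/* Reassembly sketch only -- helper routines are hypothetical. */
+read_record_header(block, &rhdr);               /* first piece */
+append_record_data(rec, block, rhdr.DataSize);
+while (peek_next_stream(block, dev) < 0) {      /* continuation follows */
+   read_record_header(block, &rhdr);            /* its own Record Header */
+   append_record_data(rec, block, rhdr.DataSize);
+}
+/* rec now holds the complete data record */
+\end{verbatim}
+\normalsize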
+
+\section{Version BB02 Block Header}
+\index{Version BB02 Block Header}
+\index{Header!Version BB02 Block}
+\addcontentsline{toc}{subsection}{Version BB02 Block Header}
+
+Each session or Job has its own private block. As a consequence, the SessionId
+and SessionTime are written once in each Block Header and not in the Record
+Header. So, the second and current version of the Block Header BB02 is:
+
+\footnotesize
+\begin{verbatim}
+ uint32_t CheckSum; /* Block check sum */
+ uint32_t BlockSize; /* Block byte size including the header */
+ uint32_t BlockNumber; /* Block number */
+ char ID[4] = "BB02"; /* Identification and block level */
+ uint32_t VolSessionId; /* Applies to all records */
+ uint32_t VolSessionTime; /* contained in this block */
+\end{verbatim}
+\normalsize
+
+As with the previous version, the BB02 Block header is a fixed length and
+fixed format and is followed by Record Headers and Record Data. The CheckSum
+field is a 32 bit CRC checksum of the block data and the block header but not
+including the CheckSum field. The Block Header is always immediately followed
+by a Record Header. If the tape is damaged, a Bacula utility will be able to
+recover as much information as possible from the tape by recovering blocks
+which are valid. The Block header is written using the Bacula serialization
+routines and thus is guaranteed to be in machine independent format.
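+
+For example, a reader might validate a block along these lines (a sketch only;
+the crc32() and unserialize\_uint32() routine names are illustrative, not
+Bacula's own):
+
+\footnotesize
+\begin{verbatim}
+/* Validation sketch, assuming a generic crc32(buf, len) routine and an
+ * unserialize_uint32() helper (illustrative names).  The CheckSum
+ * covers everything except its own first four bytes. */
+uint32_t stored = unserialize_uint32(block_buf);       /* CheckSum field */
+uint32_t actual = crc32(block_buf + 4, block_size - 4);
+if (stored != actual) {
+   /* damaged block: discard it and try to resynchronize */
+}
+\end{verbatim}
+\normalsize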
+
+\section{Version 2 Record Header}
+\index{Version 2 Record Header}
+\index{Header!Version 2 Record}
+\addcontentsline{toc}{subsection}{Version 2 Record Header}
+
+Version 2 Record Header is written to the medium when using Version BB02 Block
+Headers. The memory representation of the record is identical to the old BB01
+Record Header, but on the storage medium, the first two fields, namely
+VolSessionId and VolSessionTime, are not written. The Block Header is filled
+with these values when the first user record (i.e. non-label record) is
+written, so that when the block is written, it will have the current and
+unique VolSessionId and VolSessionTime. On reading each record from the
+Block, the VolSessionId and VolSessionTime are filled into the Record Header
+from the Block Header.
+
+\section{Volume Label Format}
+\index{Volume Label Format}
+\index{Format!Volume Label}
+\addcontentsline{toc}{subsection}{Volume Label Format}
+
+Tape volume labels are created by the Storage daemon in response to a {\bf
+label} command given to the Console program, or alternatively by the {\bf
+btape} program. Each volume is labeled with the following information
+using the Bacula serialization routines, which guarantee machine byte order
+independence.
+
+For Bacula versions 1.27 and later, the Volume Label Format is:
+
+\footnotesize
+\begin{verbatim}
+ char Id[32]; /* Bacula 1.0 immortal\n */
+ uint32_t VerNum; /* Label version number */
+ /* VerNum 11 and greater Bacula 1.27 and later */
+ btime_t label_btime; /* Time/date tape labeled */
+ btime_t write_btime; /* Time/date tape first written */
+ /* The following are 0 in VerNum 11 and greater */
+ float64_t write_date; /* Date this label written */
+ float64_t write_time; /* Time this label written */
+ char VolName[128]; /* Volume name */
+ char PrevVolName[128]; /* Previous Volume Name */
+ char PoolName[128]; /* Pool name */
+ char PoolType[128]; /* Pool type */
+ char MediaType[128]; /* Type of this media */
+ char HostName[128]; /* Host name of writing computer */
+ char LabelProg[32]; /* Label program name */
+ char ProgVersion[32]; /* Program version */
+ char ProgDate[32]; /* Program build date/time */
+\end{verbatim}
+\normalsize
+
+Note, the LabelType (Volume Label, Volume PreLabel, Session Start Label, ...)
+is stored in the record FileIndex field of the Record Header and does not
+appear in the data part of the record.
+
+\section{Session Label}
+\index{Label!Session}
+\index{Session Label}
+\addcontentsline{toc}{subsection}{Session Label}
+
+The Session Label is written at the beginning and end of each session as well
+as the last record on the physical medium. It has the following binary format:
+
+
+\footnotesize
+\begin{verbatim}
+ char Id[32]; /* Bacula Immortal ... */
+ uint32_t VerNum; /* Label version number */
+ uint32_t JobId; /* Job id */
+ uint32_t VolumeIndex; /* sequence no of vol */
+ /* Prior to VerNum 11 */
+ float64_t write_date; /* Date this label written */
+ /* VerNum 11 and greater */
+ btime_t write_btime; /* time/date record written */
+ /* The following is zero VerNum 11 and greater */
+ float64_t write_time; /* Time this label written */
+ char PoolName[128]; /* Pool name */
+ char PoolType[128]; /* Pool type */
+ char JobName[128]; /* base Job name */
+ char ClientName[128];
+ /* Added in VerNum 10 */
+ char Job[128]; /* Unique Job name */
+ char FileSetName[128]; /* FileSet name */
+ uint32_t JobType;
+ uint32_t JobLevel;
+\end{verbatim}
+\normalsize
+
+In addition, the EOS label contains:
+
+\footnotesize
+\begin{verbatim}
+ /* The remainder are part of EOS label only */
+ uint32_t JobFiles;
+ uint64_t JobBytes;
+ uint32_t start_block;
+ uint32_t end_block;
+ uint32_t start_file;
+ uint32_t end_file;
+ uint32_t JobErrors;
+\end{verbatim}
+\normalsize
+
+For VerNum greater than 10, the EOS label also contains:
+
+\footnotesize
+\begin{verbatim}
+ uint32_t JobStatus /* Job termination code */
+\end{verbatim}
+\normalsize
+
+Note, the LabelType (Volume Label, Volume PreLabel, Session Start Label,
+...) is stored in the record FileIndex field and does not appear in the data
+part of the record. Also, the Stream field of the Record Header contains the
+JobId. This permits quick filtering without actually reading all the session
+data in many cases.
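+
+For example, a scanning program might skip other jobs' session labels along
+these lines (a sketch; the structure field access is illustrative):
+
+\footnotesize
+\begin{verbatim}
+/* Filtering sketch (illustrative field access): inside the record
+ * scanning loop, label records carry the JobId in the Stream field,
+ * so foreign sessions can be skipped without reading their data. */
+if (rec->FileIndex < 0 && (uint32_t)rec->Stream != wanted_jobid) {
+   continue;      /* some other job's session label */
+}
+\end{verbatim}
+\normalsize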
+
+\section{Overall Storage Format}
+\index{Format!Overall Storage}
+\index{Overall Storage Format}
+\addcontentsline{toc}{subsection}{Overall Storage Format}
+
+\footnotesize
+\begin{verbatim}
+ Current Bacula Tape Format
+ 6 June 2001
+ Version BB02 added 28 September 2002
+ Version BB01 is the old deprecated format.
+ A Bacula tape is composed of tape Blocks. Each block
+ has a Block header followed by the block data. Block
+ Data consists of Records. Records consist of Record
+ Headers followed by Record Data.
+ :=======================================================:
+ | |
+ | Block Header (24 bytes) |
+ | |
+ |-------------------------------------------------------|
+ | |
+ | Record Header (12 bytes) |
+ | |
+ |-------------------------------------------------------|
+ | |
+ | Record Data |
+ | |
+ |-------------------------------------------------------|
+ | |
+ | Record Header (12 bytes) |
+ | |
+ |-------------------------------------------------------|
+ | |
+ | ... |
+ Block Header: the first item in each block. The format is
+ shown below.
+ Partial Data block: occurs if the data from a previous
+ block spills over to this block (the normal case except
+ for the first block on a tape). However, this partial
+ data block is always preceded by a record header.
+ Record Header: identifies the Volume Session, the Stream
+ and the following Record Data size. See below for format.
+ Record data: arbitrary binary data.
+ Block Header Format BB02
+ :=======================================================:
+ | CheckSum (uint32_t) |
+ |-------------------------------------------------------|
+ | BlockSize (uint32_t) |
+ |-------------------------------------------------------|
+ | BlockNumber (uint32_t) |
+ |-------------------------------------------------------|
+ | "BB02" (char [4]) |
+ |-------------------------------------------------------|
+ | VolSessionId (uint32_t) |
+ |-------------------------------------------------------|
+ | VolSessionTime (uint32_t) |
+ :=======================================================:
+ BB02: Serves to identify the block as a
+ Bacula block and also serves as a block format identifier
+ should we ever need to change the format.
+ BlockSize: is the size in bytes of the block. When reading
+ back a block, if the BlockSize does not agree with the
+ actual size read, Bacula discards the block.
+ CheckSum: a checksum for the Block.
+ BlockNumber: is the sequential block number on the tape.
+ VolSessionId: a unique sequential number that is assigned
+ by the Storage Daemon to a particular Job.
+ This number is sequential since the start
+ of execution of the daemon.
+ VolSessionTime: the time/date that the current execution
+ of the Storage Daemon started. It assures
+ that the combination of VolSessionId and
+ VolSessionTime is unique for all jobs
+ written to the tape, even if there was a
+ machine crash between two writes.
+ Record Header Format BB02
+ :=======================================================:
+ | FileIndex (int32_t) |
+ |-------------------------------------------------------|
+ | Stream (int32_t) |
+ |-------------------------------------------------------|
+ | DataSize (uint32_t) |
+ :=======================================================:
+ FileIndex: a sequential file number within a job. The
+ Storage daemon enforces this index to be
+ greater than zero and sequential. Note,
+ however, that the File daemon may send
+ multiple Streams for the same FileIndex.
+ The Storage Daemon uses negative FileIndices
+ to identify Session Start and End labels
+ as well as the End of Volume labels.
+ Stream: defined by the File daemon and is intended to be
+ used to identify separate parts of the data
+ saved for each file (attributes, file data,
+ ...). The Storage Daemon has no idea of
+ what a Stream is or what it contains.
+ DataSize: the size in bytes of the binary data record
+ that follows the Session Record header.
+ The Storage Daemon has no idea of the
+ actual contents of the binary data record.
+ For standard Unix files, the data record
+ typically contains the file attributes or
+ the file data. For a sparse file
+ the first 64 bits of the data contains
+ the storage address for the data block.
+ Volume Label
+ :=======================================================:
+ | Id (32 bytes) |
+ |-------------------------------------------------------|
+ | VerNum (uint32_t) |
+ |-------------------------------------------------------|
+ | label_date (float64_t) |
+ | label_btime (btime_t) VerNum 11 |
+ |-------------------------------------------------------|
+ | label_time (float64_t) |
+ | write_btime (btime_t) VerNum 11 |
+ |-------------------------------------------------------|
+ | write_date (float64_t) |
+ | 0 (float64_t) VerNum 11 |
+ |-------------------------------------------------------|
+ | write_time (float64_t) |
+ | 0 (float64_t) VerNum 11 |
+ |-------------------------------------------------------|
+ | VolName (128 bytes) |
+ |-------------------------------------------------------|
+ | PrevVolName (128 bytes) |
+ |-------------------------------------------------------|
+ | PoolName (128 bytes) |
+ |-------------------------------------------------------|
+ | PoolType (128 bytes) |
+ |-------------------------------------------------------|
+ | MediaType (128 bytes) |
+ |-------------------------------------------------------|
+ | HostName (128 bytes) |
+ |-------------------------------------------------------|
+ | LabelProg (32 bytes) |
+ |-------------------------------------------------------|
+ | ProgVersion (32 bytes) |
+ |-------------------------------------------------------|
+ | ProgDate (32 bytes) |
+ |-------------------------------------------------------|
+ :=======================================================:
+
+ Id: 32 byte Bacula identifier "Bacula 1.0 immortal\n"
+ (old version also recognized:)
+ Id: 32 byte Bacula identifier "Bacula 0.9 mortal\n"
+ LabelType (Saved in the FileIndex of the Header record).
+ PRE_LABEL -1 Volume label on unwritten tape
+ VOL_LABEL -2 Volume label after tape written
+ EOM_LABEL -3 Label at EOM (not currently implemented)
+ SOS_LABEL -4 Start of Session label (format given below)
+ EOS_LABEL -5 End of Session label (format given below)
+ VerNum: 11
+ label_date: Julian day tape labeled
+ label_time: Julian time tape labeled
+ write_date: Julian date tape first used (data written)
+ write_time: Julian time tape first used (data written)
+ VolName: "Physical" Volume name
+ PrevVolName: The VolName of the previous tape (if this tape is
+ a continuation of the previous one).
+ PoolName: Pool Name
+ PoolType: Pool Type
+ MediaType: Media Type
+ HostName: Name of host that is first writing the tape
+ LabelProg: Name of the program that labeled the tape
+ ProgVersion: Version of the label program
+ ProgDate: Date Label program built
+ Session Label
+ :=======================================================:
+ | Id (32 bytes) |
+ |-------------------------------------------------------|
+ | VerNum (uint32_t) |
+ |-------------------------------------------------------|
+ | JobId (uint32_t) |
+ |-------------------------------------------------------|
+ | write_btime (btime_t) VerNum 11 |
+ |-------------------------------------------------------|
+ | 0 (float64_t) VerNum 11 |
+ |-------------------------------------------------------|
+ | PoolName (128 bytes) |
+ |-------------------------------------------------------|
+ | PoolType (128 bytes) |
+ |-------------------------------------------------------|
+ | JobName (128 bytes) |
+ |-------------------------------------------------------|
+ | ClientName (128 bytes) |
+ |-------------------------------------------------------|
+ | Job (128 bytes) |
+ |-------------------------------------------------------|
+ | FileSetName (128 bytes) |
+ |-------------------------------------------------------|
+ | JobType (uint32_t) |
+ |-------------------------------------------------------|
+ | JobLevel (uint32_t) |
+ |-------------------------------------------------------|
+ | FileSetMD5 (50 bytes) VerNum 11 |
+ |-------------------------------------------------------|
+ Additional fields in End Of Session Label
+ |-------------------------------------------------------|
+ | JobFiles (uint32_t) |
+ |-------------------------------------------------------|
+ | JobBytes (uint64_t) |
+ |-------------------------------------------------------|
+ | start_block (uint32_t) |
+ |-------------------------------------------------------|
+ | end_block (uint32_t) |
+ |-------------------------------------------------------|
+ | start_file (uint32_t) |
+ |-------------------------------------------------------|
+ | end_file (uint32_t) |
+ |-------------------------------------------------------|
+ | JobErrors (uint32_t) |
+ |-------------------------------------------------------|
+ | JobStatus (uint32_t) VerNum 11 |
+ :=======================================================:
+ Id: 32 byte Bacula Identifier "Bacula 1.0 immortal\n"
+ LabelType (in FileIndex field of Header):
+ EOM_LABEL -3 Label at EOM
+ SOS_LABEL -4 Start of Session label
+ EOS_LABEL -5 End of Session label
+ VerNum: 11
+ JobId: JobId
+ write_btime: Bacula time/date this tape record written
+ write_date: Julian date tape this record written - deprecated
+ write_time: Julian time tape this record written - deprecated.
+ PoolName: Pool Name
+ PoolType: Pool Type
+ MediaType: Media Type
+ ClientName: Name of File daemon or Client writing this session
+ Not used for EOM_LABEL.
+\end{verbatim}
+\normalsize
+
+\section{Unix File Attributes}
+\index{Unix File Attributes}
+\index{Attributes!Unix File}
+\addcontentsline{toc}{subsection}{Unix File Attributes}
+
+The Unix File Attributes packet consists of the following:
+
+\lt{}File-Index\gt{} \lt{}Type\gt{}
+\lt{}Filename\gt{}@\lt{}File-Attributes\gt{}@\lt{}Link\gt{}
+@\lt{}Extended-Attributes@\gt{} where
+
+\begin{description}
+
+\item [@]
+ represents a byte containing a binary zero.
+
+\item [FileIndex]
+ \index{FileIndex}
+ is the sequential file index starting from one assigned by the File daemon.
+
+\item [Type]
+ \index{Type}
+ is one of the following:
+
+\footnotesize
+\begin{verbatim}
+#define FT_LNKSAVED 1 /* hard link to file already saved */
+#define FT_REGE 2 /* Regular file but empty */
+#define FT_REG 3 /* Regular file */
+#define FT_LNK 4 /* Soft Link */
+#define FT_DIR 5 /* Directory */
+#define FT_SPEC 6 /* Special file -- chr, blk, fifo, sock */
+#define FT_NOACCESS 7 /* Not able to access */
+#define FT_NOFOLLOW 8 /* Could not follow link */
+#define FT_NOSTAT 9 /* Could not stat file */
+#define FT_NOCHG 10 /* Incremental option, file not changed */
+#define FT_DIRNOCHG 11 /* Incremental option, directory not changed */
+#define FT_ISARCH 12 /* Trying to save archive file */
+#define FT_NORECURSE 13 /* No recursion into directory */
+#define FT_NOFSCHG 14 /* Different file system, prohibited */
+#define FT_NOOPEN 15 /* Could not open directory */
+#define FT_RAW 16 /* Raw block device */
+#define FT_FIFO 17 /* Raw fifo device */
+\end{verbatim}
+\normalsize
+
+\item [Filename]
+ \index{Filename}
+ is the fully qualified filename.
+
+\item [File-Attributes]
+ \index{File-Attributes}
+ consists of the 13 fields of the stat() buffer in ASCII base64 format
+separated by spaces. These fields and their meanings are shown below. This
+stat() packet is in Unix format, and MUST be provided (constructed) for ALL
+systems.
+
+\item [Link]
+ \index{Link}
+ when the FT code is FT\_LNK or FT\_LNKSAVED, the item in question is a Unix
+link, and this field contains the fully qualified link name. When the FT code
+is not FT\_LNK or FT\_LNKSAVED, this field is null.
+
+\item [Extended-Attributes]
+ \index{Extended-Attributes}
+ The exact format of this field is operating system dependent. It contains
+additional or extended attributes of a system dependent nature. Currently,
+this field is used only on WIN32 systems where it contains an ASCII base64
+representation of the WIN32\_FILE\_ATTRIBUTE\_DATA structure as defined by
+Windows. The fields in the base64 representation of this structure are like
+the File-Attributes separated by spaces.
+\end{description}
+
+The File-attributes consist of the following:
+
+\addcontentsline{lot}{table}{File Attributes}
+\begin{longtable}{|p{0.6in}|p{0.7in}|p{1in}|p{1in}|p{1.4in}|}
+ \hline
+\multicolumn{1}{|c|}{\bf Field No. } & \multicolumn{1}{c|}{\bf Stat Name }
+& \multicolumn{1}{c|}{\bf Unix } & \multicolumn{1}{c|}{\bf Win98/NT } &
+\multicolumn{1}{c|}{\bf MacOS } \\
+ \hline
+\multicolumn{1}{|c|}{1 } & {st\_dev } & {Device number of filesystem } &
+{Drive number } & {vRefNum } \\
+ \hline
+\multicolumn{1}{|c|}{2 } & {st\_ino } & {Inode number } & {Always 0 } &
+{fileID/dirID } \\
+ \hline
+\multicolumn{1}{|c|}{3 } & {st\_mode } & {File mode } & {File mode } &
+{777 dirs/apps; 666 docs; 444 locked docs } \\
+ \hline
+\multicolumn{1}{|c|}{4 } & {st\_nlink } & {Number of links to the file } &
+{Number of link (only on NTFS) } & {Always 1 } \\
+ \hline
+\multicolumn{1}{|c|}{5 } & {st\_uid } & {Owner ID } & {Always 0 } &
+{Always 0 } \\
+ \hline
+\multicolumn{1}{|c|}{6 } & {st\_gid } & {Group ID } & {Always 0 } &
+{Always 0 } \\
+ \hline
+\multicolumn{1}{|c|}{7 } & {st\_rdev } & {Device ID for special files } &
+{Drive No. } & {Always 0 } \\
+ \hline
+\multicolumn{1}{|c|}{8 } & {st\_size } & {File size in bytes } & {File
+size in bytes } & {Data fork file size in bytes } \\
+ \hline
+\multicolumn{1}{|c|}{9 } & {st\_blksize } & {Preferred block size } &
+{Always 0 } & {Preferred block size } \\
+ \hline
+\multicolumn{1}{|c|}{10 } & {st\_blocks } & {Number of blocks allocated }
+& {Always 0 } & {Number of blocks allocated } \\
+ \hline
+\multicolumn{1}{|c|}{11 } & {st\_atime } & {Last access time since epoch }
+& {Last access time since epoch } & {Last access time -66 years } \\
+ \hline
+\multicolumn{1}{|c|}{12 } & {st\_mtime } & {Last modify time since epoch }
+& {Last modify time since epoch } & {Last modify time -66 years } \\
+ \hline
+\multicolumn{1}{|c|}{13 } & {st\_ctime } & {Inode change time since epoch
+} & {File create time since epoch } & {File create time -66 years}
+\\ \hline
+
+\end{longtable}
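+
+To make the encoding concrete, a sketch might look as follows ({\bf
+to\_base64()} is a hypothetical helper, not necessarily Bacula's own routine):
+
+\footnotesize
+\begin{verbatim}
+#include <stdio.h>
+#include <stdint.h>
+#include <sys/stat.h>
+
+/* Sketch of the encoding described above.  to_base64() is a
+ * hypothetical helper rendering one integer as the ASCII base64 text
+ * used in the packet. */
+extern const char *to_base64(int64_t val);
+
+void encode_attributes(char *buf, const struct stat *st)
+{
+   int64_t f[13] = { st->st_dev, st->st_ino, st->st_mode, st->st_nlink,
+                     st->st_uid, st->st_gid, st->st_rdev, st->st_size,
+                     st->st_blksize, st->st_blocks, st->st_atime,
+                     st->st_mtime, st->st_ctime };
+   char *p = buf;
+   int i;
+   for (i = 0; i < 13; i++) {          /* space separated, table order */
+      p += sprintf(p, i ? " %s" : "%s", to_base64(f[i]));
+   }
+}
+\end{verbatim}
+\normalsize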
+
+\section{Old Deprecated Tape Format}
+\index{Old Deprecated Tape Format}
+\index{Format!Old Deprecated Tape}
+\addcontentsline{toc}{subsection}{Old Deprecated Tape Format}
+
+The format of the Block Header (version 1.26 and earlier) is:
+
+\footnotesize
+\begin{verbatim}
+ uint32_t CheckSum; /* Block check sum */
+ uint32_t BlockSize; /* Block byte size including the header */
+ uint32_t BlockNumber; /* Block number */
+ char ID[4] = "BB01"; /* Identification and block level */
+\end{verbatim}
+\normalsize
+
+The format of the Record Header (version 1.26 or earlier) is:
+
+\footnotesize
+\begin{verbatim}
+ uint32_t VolSessionId; /* Unique ID for this session */
+ uint32_t VolSessionTime; /* Start time/date of session */
+ int32_t FileIndex; /* File index supplied by File daemon */
+ int32_t Stream; /* Stream number supplied by File daemon */
+ uint32_t DataSize; /* size of following data record in bytes */
+\end{verbatim}
+\normalsize
+
+\footnotesize
+\begin{verbatim}
+ Old Bacula Tape Format (deprecated)
+ 6 June 2001
+ Version BB01 is the old deprecated format.
+ A Bacula tape is composed of tape Blocks. Each block
+ has a Block header followed by the block data. Block
+ Data consists of Records. Records consist of Record
+ Headers followed by Record Data.
+ :=======================================================:
+ | |
+ | Block Header |
+ | (16 bytes version BB01) |
+ |-------------------------------------------------------|
+ | |
+ | Record Header |
+ | (20 bytes version BB01) |
+ |-------------------------------------------------------|
+ | |
+ | Record Data |
+ | |
+ |-------------------------------------------------------|
+ | |
+ | Record Header |
+ | (20 bytes version BB01) |
+ |-------------------------------------------------------|
+ | |
+ | ... |
+ Block Header: the first item in each block. The format is
+ shown below.
+ Partial Data block: occurs if the data from a previous
+ block spills over to this block (the normal case except
+ for the first block on a tape). However, this partial
+ data block is always preceded by a record header.
+ Record Header: identifies the Volume Session, the Stream
+ and the following Record Data size. See below for format.
+ Record data: arbitrary binary data.
+ Block Header Format BB01 (deprecated)
+ :=======================================================:
+ | CheckSum (uint32_t) |
+ |-------------------------------------------------------|
+ | BlockSize (uint32_t) |
+ |-------------------------------------------------------|
+ | BlockNumber (uint32_t) |
+ |-------------------------------------------------------|
+ | "BB01" (char [4]) |
+ :=======================================================:
+ BB01: Serves to identify the block as a
+ Bacula block and also serves as a block format identifier
+ should we ever need to change the format.
+ BlockSize: is the size in bytes of the block. When reading
+ back a block, if the BlockSize does not agree with the
+ actual size read, Bacula discards the block.
+ CheckSum: a checksum for the Block.
+ BlockNumber: is the sequential block number on the tape.
+ VolSessionId: a unique sequential number that is assigned
+ by the Storage Daemon to a particular Job.
+ This number is sequential since the start
+ of execution of the daemon.
+ VolSessionTime: the time/date that the current execution
+ of the Storage Daemon started. It assures
+ that the combination of VolSessionId and
+ VolSessionTime is unique for all jobs
+ written to the tape, even if there was a
+ machine crash between two writes.
+ Record Header Format BB01 (deprecated)
+ :=======================================================:
+ | VolSessionId (uint32_t) |
+ |-------------------------------------------------------|
+ | VolSessionTime (uint32_t) |
+ |-------------------------------------------------------|
+ | FileIndex (int32_t) |
+ |-------------------------------------------------------|
+ | Stream (int32_t) |
+ |-------------------------------------------------------|
+ | DataSize (uint32_t) |
+ :=======================================================:
+ VolSessionId: a unique sequential number that is assigned
+ by the Storage Daemon to a particular Job.
+ This number is sequential since the start
+ of execution of the daemon.
+ VolSessionTime: the time/date that the current execution
+ of the Storage Daemon started. It assures
+ that the combination of VolSessionId and
+ VolSessionTime is unique for all jobs
+ written to the tape, even if there was a
+ machine crash between two writes.
+ FileIndex: a sequential file number within a job. The
+ Storage daemon enforces this index to be
+ greater than zero and sequential. Note,
+ however, that the File daemon may send
+ multiple Streams for the same FileIndex.
+ The Storage Daemon uses negative FileIndices
+ to identify Session Start and End labels
+ as well as the End of Volume labels.
+ Stream: defined by the File daemon and is intended to be
+ used to identify separate parts of the data
+ saved for each file (attributes, file data,
+ ...). The Storage Daemon has no idea of
+ what a Stream is or what it contains.
+ DataSize: the size in bytes of the binary data record
+ that follows the Session Record header.
+ The Storage Daemon has no idea of the
+ actual contents of the binary data record.
+ For standard Unix files, the data record
+ typically contains the file attributes or
+ the file data. For a sparse file
+ the first 64 bits of the data contains
+ the storage address for the data block.
+ Volume Label
+ :=======================================================:
+ | Id (32 bytes) |
+ |-------------------------------------------------------|
+ | VerNum (uint32_t) |
+ |-------------------------------------------------------|
+ | label_date (float64_t) |
+ |-------------------------------------------------------|
+ | label_time (float64_t) |
+ |-------------------------------------------------------|
+ | write_date (float64_t) |
+ |-------------------------------------------------------|
+ | write_time (float64_t) |
+ |-------------------------------------------------------|
+ | VolName (128 bytes) |
+ |-------------------------------------------------------|
+ | PrevVolName (128 bytes) |
+ |-------------------------------------------------------|
+ | PoolName (128 bytes) |
+ |-------------------------------------------------------|
+ | PoolType (128 bytes) |
+ |-------------------------------------------------------|
+ | MediaType (128 bytes) |
+ |-------------------------------------------------------|
+ | HostName (128 bytes) |
+ |-------------------------------------------------------|
+ | LabelProg (32 bytes) |
+ |-------------------------------------------------------|
+ | ProgVersion (32 bytes) |
+ |-------------------------------------------------------|
+ | ProgDate (32 bytes) |
+ |-------------------------------------------------------|
+ :=======================================================:
+
+ Id: 32 byte Bacula identifier "Bacula 1.0 immortal\n"
+ (old version also recognized:)
+ Id: 32 byte Bacula identifier "Bacula 0.9 mortal\n"
+ LabelType (Saved in the FileIndex of the Header record).
+ PRE_LABEL -1 Volume label on unwritten tape
+ VOL_LABEL -2 Volume label after tape written
+ EOM_LABEL -3 Label at EOM (not currently implemented)
+ SOS_LABEL -4 Start of Session label (format given below)
+ EOS_LABEL -5 End of Session label (format given below)
+ label_date: Julian day tape labeled
+ label_time: Julian time tape labeled
+ write_date: Julian date tape first used (data written)
+ write_time: Julian time tape first used (data written)
+ VolName: "Physical" Volume name
+ PrevVolName: The VolName of the previous tape (if this tape is
+ a continuation of the previous one).
+ PoolName: Pool Name
+ PoolType: Pool Type
+ MediaType: Media Type
+ HostName: Name of host that is first writing the tape
+ LabelProg: Name of the program that labeled the tape
+ ProgVersion: Version of the label program
+ ProgDate: Date Label program built
+ Session Label
+ :=======================================================:
+ | Id (32 bytes) |
+ |-------------------------------------------------------|
+ | VerNum (uint32_t) |
+ |-------------------------------------------------------|
+ | JobId (uint32_t) |
+ |-------------------------------------------------------|
+ | *write_date (float64_t) VerNum 10 |
+ |-------------------------------------------------------|
+ | *write_time (float64_t) VerNum 10 |
+ |-------------------------------------------------------|
+ | PoolName (128 bytes) |
+ |-------------------------------------------------------|
+ | PoolType (128 bytes) |
+ |-------------------------------------------------------|
+ | JobName (128 bytes) |
+ |-------------------------------------------------------|
+ | ClientName (128 bytes) |
+ |-------------------------------------------------------|
+ | Job (128 bytes) |
+ |-------------------------------------------------------|
+ | FileSetName (128 bytes) |
+ |-------------------------------------------------------|
+ | JobType (uint32_t) |
+ |-------------------------------------------------------|
+ | JobLevel (uint32_t) |
+ |-------------------------------------------------------|
+ | FileSetMD5 (50 bytes) VerNum 11 |
+ |-------------------------------------------------------|
+ Additional fields in End Of Session Label
+ |-------------------------------------------------------|
+ | JobFiles (uint32_t) |
+ |-------------------------------------------------------|
+ | JobBytes (uint64_t) |
+ |-------------------------------------------------------|
+ | start_block (uint32_t) |
+ |-------------------------------------------------------|
+ | end_block (uint32_t) |
+ |-------------------------------------------------------|
+ | start_file (uint32_t) |
+ |-------------------------------------------------------|
+ | end_file (uint32_t) |
+ |-------------------------------------------------------|
+ | JobErrors (uint32_t) |
+ |-------------------------------------------------------|
+ | JobStatus (uint32_t) VerNum 11 |
+ :=======================================================:
+ * => fields deprecated
+ Id: 32 byte Bacula Identifier "Bacula 1.0 immortal\n"
+ LabelType (in FileIndex field of Header):
+ EOM_LABEL -3 Label at EOM
+ SOS_LABEL -4 Start of Session label
+ EOS_LABEL -5 End of Session label
+ VerNum: 11
+ JobId: JobId
+ write_btime: Bacula time/date this tape record written
+ write_date: Julian date tape this record written - deprecated
+ write_time: Julian time tape this record written - deprecated.
+ PoolName: Pool Name
+ PoolType: Pool Type
+ MediaType: Media Type
+ ClientName: Name of File daemon or Client writing this session
+ Not used for EOM_LABEL.
+\end{verbatim}
+\normalsize
--- /dev/null
+%%
+%%
+
+\chapter{Bacula Memory Management}
+\label{_ChapterStart7}
+\index{Management!Bacula Memory}
+\index{Bacula Memory Management}
+\addcontentsline{toc}{section}{Bacula Memory Management}
+
+\section{General}
+\index{General}
+\addcontentsline{toc}{subsection}{General}
+
+This document describes the memory management routines that are used in Bacula
+and is meant to be a technical discussion for developers rather than part of
+the user manual.
+
+Since Bacula may be called upon to handle filenames of varying and more or
+less arbitrary length, special care must be taken in the code to
+ensure that memory buffers are sufficiently large. There are four
+possibilities for memory usage within {\bf Bacula}. Each will be described in
+turn. They are:
+
+\begin{itemize}
+\item Statically allocated memory.
+\item Dynamically allocated memory using malloc() and free().
+\item Non-pooled memory.
+\item Pooled memory.
+ \end{itemize}
+
+\subsection{Statically Allocated Memory}
+\index{Statically Allocated Memory}
+\index{Memory!Statically Allocated}
+\addcontentsline{toc}{subsubsection}{Statically Allocated Memory}
+
+Statically allocated memory is of the form:
+
+\footnotesize
+\begin{verbatim}
+char buffer[MAXSTRING];
+\end{verbatim}
+\normalsize
+
+The use of this kind of memory is discouraged except when you are 100\% sure
+that the strings to be used will be of a fixed length. One example of where
+this is appropriate is for {\bf Bacula} resource names, which are currently
+limited to 127 characters (MAX\_NAME\_LENGTH). Although this maximum size may
+change, particularly to accommodate Unicode, it will remain a relatively small
+value.
+
+\subsection{Dynamically Allocated Memory}
+\index{Dynamically Allocated Memory}
+\index{Memory!Dynamically Allocated}
+\addcontentsline{toc}{subsubsection}{Dynamically Allocated Memory}
+
+Dynamically allocated memory is obtained using the standard malloc() routines.
+As in:
+
+\footnotesize
+\begin{verbatim}
+char *buf;
+buf = malloc(256);
+\end{verbatim}
+\normalsize
+
+This kind of memory can be released with:
+
+\footnotesize
+\begin{verbatim}
+free(buf);
+\end{verbatim}
+\normalsize
+
+It is recommended to use this kind of memory only when you are sure that you
+know the memory size needed and the memory will be used for short periods of
+time -- that is, where it would not be appropriate to use statically allocated
+memory. An example might be to obtain a large memory buffer for reading and
+writing files. When {\bf SmartAlloc} is enabled, the memory obtained by
+malloc() will automatically be checked for buffer overwrite (overflow) during
+the free() call, and all malloc'ed memory that is not released prior to
+termination of the program will be reported as Orphaned memory.
+
+\subsection{Pooled and Non-pooled Memory}
+\index{Memory!Pooled and Non-pooled}
+\index{Pooled and Non-pooled Memory}
+\addcontentsline{toc}{subsubsection}{Pooled and Non-pooled Memory}
+
+In order to facilitate the handling of arbitrary length filenames and to
+efficiently handle a high volume of dynamic memory usage, we have implemented
+routines between the C code and the malloc routines. The first is called
+``Pooled'' memory, and is memory that, once allocated and then released, is
+not returned to the system memory pool, but rather retained in a Bacula memory
+pool. The next request to acquire pooled memory will return any free memory
+block. In addition, each memory block has its current size associated with the
+block allowing for easy checking if the buffer is of sufficient size. This
+kind of memory would normally be used in high volume situations (lots of
+malloc()s and free()s) where the buffer length may have to frequently change
+to adapt to varying filename lengths.
+
+The non-pooled memory is handled by routines similar to those used for pooled
+memory, allowing for easy size checking. However, non-pooled memory is
+returned to the system rather than being saved in the Bacula pool. This kind
+of memory would normally be used in low volume situations (few malloc()s and
+free()s), but where the size of the buffer might have to be adjusted
+frequently.
+
+\paragraph*{Types of Memory Pool:}
+
+Currently there are four memory pool types:
+
+\begin{itemize}
+\item PM\_NOPOOL -- non-pooled memory.
+\item PM\_FNAME -- a filename pool.
+\item PM\_MESSAGE -- a message buffer pool.
+\item PM\_EMSG -- error message buffer pool.
+ \end{itemize}
+
+\paragraph*{Getting Memory:}
+
+To get memory, one uses:
+
+\footnotesize
+\begin{verbatim}
+void *get_pool_memory(pool);
+\end{verbatim}
+\normalsize
+
+where {\bf pool} is one of the above mentioned pool names. The size of the
+memory returned will be determined by the system to be most appropriate for
+the application.
+
+If you wish non-pooled memory, you may alternatively call:
+
+\footnotesize
+\begin{verbatim}
+void *get_memory(size_t size);
+\end{verbatim}
+\normalsize
+
+The buffer length will be set to the size specified, and it will be assigned
+to the PM\_NOPOOL pool (no pooling).
+
+\paragraph*{Releasing Memory:}
+
+To free memory acquired by either of the above two calls, use:
+
+\footnotesize
+\begin{verbatim}
+void free_pool_memory(void *buffer);
+\end{verbatim}
+\normalsize
+
+where buffer is the memory buffer returned when the memory was acquired. If
+the memory was originally allocated as type PM\_NOPOOL, it will be released to
+the system, otherwise, it will be placed on the appropriate Bacula memory pool
+free chain to be used in a subsequent call for memory from that pool.
+
+\paragraph*{Determining the Memory Size:}
+
+To determine the memory buffer size, use:
+
+\footnotesize
+\begin{verbatim}
+size_t sizeof_pool_memory(void *buffer);
+\end{verbatim}
+\normalsize
+
+\paragraph*{Resizing Pool Memory:}
+
+To resize pool memory, use:
+
+\footnotesize
+\begin{verbatim}
+void *realloc_pool_memory(void *buffer, size_t new-size);
+\end{verbatim}
+\normalsize
+
+The buffer will be reallocated to the size specified, and the contents of the
+original buffer will be preserved, but the address of the buffer may change.
+
+\paragraph*{Automatic Size Adjustment:}
+
+To have the system check and if necessary adjust the size of your pooled
+memory buffer, use:
+
+\footnotesize
+\begin{verbatim}
+void *check_pool_memory_size(void *buffer, size_t new-size);
+\end{verbatim}
+\normalsize
+
+where {\bf new-size} is the buffer length needed. Note, if the buffer is
+already equal to or larger than {\bf new-size} no buffer size change will
+occur. However, if a buffer size change is needed, the original contents of
+the buffer will be preserved, but the buffer address may change. Many of the
+low level Bacula subroutines expect to be passed a pool memory buffer and use
+this call to ensure the buffer they use is sufficiently large.
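+
+Putting these calls together, a typical usage pattern (a sketch based on the
+calls described above) looks like this:
+
+\footnotesize
+\begin{verbatim}
+#include <string.h>
+
+/* Usage sketch based on the calls described above (assumes Bacula's
+ * memory pool prototypes are in scope). */
+void save_name(const char *path)
+{
+   void *fname = get_pool_memory(PM_FNAME);
+   /* grow if needed; contents are kept, the address may change */
+   fname = check_pool_memory_size(fname, strlen(path) + 1);
+   strcpy(fname, path);
+   /* ... use fname ... */
+   free_pool_memory(fname);   /* back onto the PM_FNAME free chain */
+}
+\end{verbatim}
+\normalsize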
+
+\paragraph*{Releasing All Pooled Memory:}
+
+In order to avoid orphaned buffer error messages when terminating the program,
+use:
+
+\footnotesize
+\begin{verbatim}
+void close_memory_pool();
+\end{verbatim}
+\normalsize
+
+to free all unused memory retained in the Bacula memory pool. Note, any memory
+not returned to the pool via free\_pool\_memory() will not be released by this
+call.
+
+\paragraph*{Pooled Memory Statistics:}
+
+For debugging purposes and performance tuning, the following call will print
+the current memory pool statistics:
+
+\footnotesize
+\begin{verbatim}
+void print_memory_pool_stats();
+\end{verbatim}
+\normalsize
+
+An example output is:
+
+\footnotesize
+\begin{verbatim}
+Pool Maxsize Maxused Inuse
+ 0 256 0 0
+ 1 256 1 0
+ 2 256 1 0
+\end{verbatim}
+\normalsize
--- /dev/null
+%%
+%%
+
+\chapter{TCP/IP Network Protocol}
+\label{_ChapterStart5}
+\index{TCP/IP Network Protocol}
+\index{Protocol!TCP/IP Network}
+\addcontentsline{toc}{section}{TCP/IP Network Protocol}
+
+\section{General}
+\index{General}
+\addcontentsline{toc}{subsection}{General}
+
+This document describes the TCP/IP protocol used by Bacula to communicate
+between the various daemons and services. The definitive definition of the
+protocol can be found in src/lib/bsock.h, src/lib/bnet.c and
+src/lib/bnet\_server.c.
+
+Bacula's network protocol is basically a ``packet oriented'' protocol built on
+standard TCP/IP streams. At the lowest level all packet transfers are done
+with read() and write() requests on system sockets. Pipes are not used as they
+are considered unreliable for large serial data transfers between various
+hosts.
+
+Using the routines described below (bnet\_open, bnet\_write, bnet\_recv, and
+bnet\_close) guarantees that the number of bytes you write into the socket
+will be received as a single record on the other end regardless of how many
+low level write() and read() calls are needed. All data transferred are
+considered to be binary data.
+
+\section{bnet and Threads}
+\index{Threads!bnet and}
+\index{Bnet and Threads}
+\addcontentsline{toc}{subsection}{bnet and Threads}
+
+These bnet routines work fine in a threaded environment. However, they assume
+that there is only one reader or writer on the socket at any time. It is
+highly recommended that only a single thread access any BSOCK packet. The
+exception to this rule is when the socket is first opened and it is waiting
+for a job to start. The wait in the Storage daemon is done in one thread and
+then passed to another thread for subsequent handling.
+
+If you envision having two threads using the same BSOCK, think twice; if you
+really must, you will have to implement your own locking mechanism. However,
+it probably would not be appropriate to put locks inside the bnet subroutines
+for efficiency reasons.
+
+\section{bnet\_open}
+\index{Bnet\_open}
+\addcontentsline{toc}{subsection}{bnet\_open}
+
+To establish a connection to a server, use the subroutine:
+
+BSOCK *bnet\_open(void *jcr, char *host, char *service, int port, int *fatal)
+
+bnet\_open(), if successful, returns the Bacula sock descriptor pointer to be
+used in subsequent bnet\_send() and bnet\_recv() requests. If not successful,
+bnet\_open() returns NULL. If fatal is set on return, it means that a fatal
+error occurred and that you should not repeatedly call bnet\_open(). Any error
+message will generally be sent to the JCR.
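+
+For example (a sketch; the host, service, and port values are illustrative):
+
+\footnotesize
+\begin{verbatim}
+/* Connection sketch -- host/service/port values are illustrative. */
+int fatal = 0;
+BSOCK *sd = bnet_open(jcr, "server.example.com", "bacula-sd", 9103, &fatal);
+if (sd == NULL) {
+   if (fatal) {
+      /* fatal error: do not call bnet_open() again */
+   }
+   /* otherwise the connection may be retried later */
+}
+\end{verbatim}
+\normalsize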
+
+\section{bnet\_send}
+\index{Bnet\_send}
+\addcontentsline{toc}{subsection}{bnet\_send}
+
+To send a packet, one uses the subroutine:
+
+int bnet\_send(BSOCK *sock)
+
+This routine is equivalent to a write() except
+that it handles the low level details. The data to be sent is expected to be
+in sock-\gt{}msg and be sock-\gt{}msglen bytes. To send a packet, bnet\_send()
+first writes four bytes in network byte order that indicate the size of the
+following data packet. It returns:
+
+\footnotesize
+\begin{verbatim}
+ Returns 0 on failure
+ Returns 1 on success
+\end{verbatim}
+\normalsize
+
+In the case of a failure, an error message will be sent to the JCR contained
+within the bsock packet.
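+
+Conceptually, what bnet\_send() puts on the wire is equivalent to the
+following (a sketch only; the {\bf fd} member name is an assumption made for
+illustration):
+
+\footnotesize
+\begin{verbatim}
+#include <stdint.h>
+#include <unistd.h>
+#include <arpa/inet.h>
+
+/* Wire-format sketch only: the real bnet_send() also handles partial
+ * writes, signals, and error reporting.  The fd member naming is
+ * illustrative. */
+void send_packet_sketch(BSOCK *sock)
+{
+   int32_t pktlen = htonl(sock->msglen);       /* 4-byte length prefix */
+   write(sock->fd, &pktlen, sizeof(pktlen));   /* network byte order */
+   write(sock->fd, sock->msg, sock->msglen);   /* then the payload */
+}
+\end{verbatim}
+\normalsize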
+
+\section{bnet\_fsend}
+\index{Bnet\_fsend}
+\addcontentsline{toc}{subsection}{bnet\_fsend}
+
+This form uses:
+
+int bnet\_fsend(BSOCK *sock, char *format, ...)
+
+This call allows you to send a
+formatted message somewhat like fprintf(). The return status is the same as
+bnet\_send.
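+
+For example (the command string here is made up for illustration):
+
+\footnotesize
+\begin{verbatim}
+/* Illustrative only -- the command string is made up. */
+if (!bnet_fsend(sock, "status client=%s\n", client_name)) {
+   /* send failed; an error message has been queued for the JCR */
+}
+\end{verbatim}
+\normalsize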
+
+\section{Additional Error information}
+\index{Information!Additional Error}
+\index{Additional Error information}
+\addcontentsline{toc}{subsection}{Additional Error information}
+
+For additional error information, you can call {\bf is\_bnet\_error(BSOCK
+*bsock)} which will return 0 if there is no error or non-zero if there is an
+error on the last transmission. The {\bf is\_bnet\_stop(BSOCK *bsock)}
+function will return 0 if there are no errors and you can continue sending. It
+will return non-zero if there are errors or the line is closed (no more
+transmissions should be sent).
+
+\section{bnet\_recv}
+\index{Bnet\_recv}
+\addcontentsline{toc}{subsection}{bnet\_recv}
+
+To read a packet, one uses the subroutine:
+
+int bnet\_recv(BSOCK *sock)
+
+This routine is similar to a read() except that it
+handles the low level details. bnet\_recv() first reads the packet length,
+which is sent as four bytes in network byte order. The data is read into
+sock-\gt{}msg and is sock-\gt{}msglen bytes. If sock-\gt{}msg is not large
+enough, bnet\_recv() will realloc() the buffer. It will return an error (-2)
+if maxbytes is less than the record size sent. It returns:
+
+\footnotesize
+\begin{verbatim}
+ * Returns number of bytes read
+ * Returns 0 on end of file
+ * Returns -1 on hard end of file (i.e. network connection close)
+ * Returns -2 on error
+\end{verbatim}
+\normalsize
+
+It should be noted that bnet\_recv() is a blocking read.
+
+\section{bnet\_sig}
+\index{Bnet\_sig}
+\addcontentsline{toc}{subsection}{bnet\_sig}
+
+To send a ``signal'' from one daemon to another, one uses the subroutine:
+
+int bnet\_sig(BSOCK *sock, SIGNAL) where SIGNAL is one of the following:
+
+\begin{enumerate}
+\item BNET\_EOF - deprecated, use BNET\_EOD
+\item BNET\_EOD - End of data stream, new data may follow
+\item BNET\_EOD\_POLL - End of data and poll all in one
+\item BNET\_STATUS - Request full status
+\item BNET\_TERMINATE - Conversation terminated, doing close()
+\item BNET\_POLL - Poll request, I'm hanging on a read
+\item BNET\_HEARTBEAT - Heartbeat Response requested
+\item BNET\_HB\_RESPONSE - Only response permitted to HB
+\item BNET\_PROMPT - Prompt for UA
+ \end{enumerate}
+
+\section{bnet\_strerror}
+\index{Bnet\_strerror}
+\addcontentsline{toc}{subsection}{bnet\_strerror}
+
+Returns a formatted string corresponding to the last error that occurred.
+
+\section{bnet\_close}
+\index{Bnet\_close}
+\addcontentsline{toc}{subsection}{bnet\_close}
+
+The connection with the server remains open until closed by the subroutine:
+
+void bnet\_close(BSOCK *sock)
+
+\section{Becoming a Server}
+\index{Server!Becoming a}
+\index{Becoming a Server}
+\addcontentsline{toc}{subsection}{Becoming a Server}
+
+The bnet\_open() and bnet\_close() routines described above are used on the
+client side to establish a connection and terminate a connection with the
+server. To become a server (i.e. wait for a connection from a client), use the
+routine {\bf bnet\_thread\_server}. The calling sequence is a bit complicated;
+please refer to the code in bnet\_server.c and the code at the beginning of
+each daemon as examples of how to call it.
+
+\section{Higher Level Conventions}
+\index{Conventions!Higher Level}
+\index{Higher Level Conventions}
+\addcontentsline{toc}{subsection}{Higher Level Conventions}
+
+Within Bacula, we have established the convention that any time a single
+record is passed, it is sent with bnet\_send() and read with bnet\_recv().
+Thus the normal exchange between the server (S) and the client (C) is:
+
+\footnotesize
+\begin{verbatim}
+S: wait for connection C: attempt connection
+S: accept connection C: bnet_send() send request
+S: bnet_recv() wait for request
+S: act on request
+S: bnet_send() send ack C: bnet_recv() wait for ack
+\end{verbatim}
+\normalsize
+
+Thus a single command is sent, acted upon by the server, and then
+acknowledged.
+
+In certain cases, such as the transfer of the data for a file, all the
+information or data cannot be sent in a single packet. In this case, the
+convention is that the client will send a command to the server, who knows
+that more than one packet will be returned. In this case, the server will
+enter a loop:
+
+\footnotesize
+\begin{verbatim}
+int n;
+while ((n = bnet_recv(bsock)) > 0) {
+   /* act on the request in bsock->msg (bsock->msglen bytes) */
+}
+if (n < 0) {
+   /* error (-2) or hard end of file (-1): abort the exchange */
+}
+\end{verbatim}
+\normalsize
+
+The client will perform the following:
+
+\footnotesize
+\begin{verbatim}
+bnet_send(bsock);
+bnet_send(bsock);
+...
+bnet_sig(bsock, BNET_EOD);
+\end{verbatim}
+\normalsize
+
+Thus the client will send multiple packets and signal to the server when all
+the packets have been sent by sending a zero length record.
--- /dev/null
+%%
+%%
+
+\chapter{Platform Support}
+\label{_PlatformChapter}
+\index{Support!Platform}
+\index{Platform Support}
+\addcontentsline{toc}{section}{Platform Support}
+
+\section{General}
+\index{General }
+\addcontentsline{toc}{subsection}{General}
+
+This chapter describes the requirements for having a
+supported platform (Operating System). In general, Bacula is
+quite portable. It supports 32 and 64 bit architectures as well
+as big endian and little endian machines. For full
+support, the platform (Operating System) must implement POSIX Unix
+system calls. However, for File daemon support only, a small
+compatibility library can be written to support almost any
+architecture.
+
+Currently Linux, FreeBSD, and Solaris are fully supported
+platforms, which means that the code has been tested on those
+machines and passes a full set of regression tests.
+
+In addition, the Windows File daemon is supported on most versions
+of Windows, and finally, there are a number of other platforms
+where the File daemon (client) is known to run: NetBSD, OpenBSD,
+Mac OSX, SGI, ...
+
+\section{Requirements to become a Supported Platform}
+\index{Requirements!Platform}
+\index{Platform Requirements}
+\addcontentsline{toc}{subsection}{Platform Requirements}
+
+As mentioned above, in order to become a fully supported platform, it
+must support POSIX Unix system calls. In addition, the following
+requirements must be met:
+
+\begin{itemize}
+\item The principal developer (currently Kern) must have
+ non-root ssh access to a test machine running the platform.
+\item The ideal requirements and minimum requirements
+ for this machine are given below.
+\item There must be a defined platform champion who is normally
+ a system administrator for the machine that is available. This
+ person need not be a developer/programmer but must be familiar
+ with system administration of the platform.
+\item There must be at least one person designated who will
+ run regression tests prior to each release. Releases occur
+ approximately once every 6 months, but can be more frequent.
+ It takes at most a day's effort to set up the regression scripts
+ in the beginning, and after that, they can either be run daily
+ or on demand before a release. Running the regression scripts
+ involves only one or two command line commands and is fully
+ automated.
+\item Ideally there are one or more persons who will package
+ each Bacula release.
+\item Ideally there are one or more developers who can respond to
+ and fix platform specific bugs.
+\end{itemize}
+
+Ideal requirements for a test machine:
+\begin{itemize}
+\item The principal developer will have non-root ssh access to
+ the test machine at all times.
+\item The principal developer will have a root password.
+\item The test machine will provide approximately 200 MB of
+ disk space for continual use.
+\item The test machine will have approximately 500 MB of free
+ disk space for temporary use.
+\item The test machine will run the most common version of the OS.
+\item The test machine will have an autochanger of DDS-4 technology
+ or later having two or more tapes.
+\item The test machine will have MySQL and/or PostgreSQL database
+ access for account "bacula" available.
+\item The test machine will have sftp access.
+\item The test machine will provide an smtp server.
+\end{itemize}
+
+Minimum requirements for a test machine:
+\begin{itemize}
+\item The principal developer will have non-root ssh access to
+ the test machine when requested approximately once a month.
+\item The principal developer will not have root access.
+\item The test machine will provide approximately 80 MB of
+ disk space for continual use.
+\item The test machine will have approximately 300 MB of free
+ disk space for temporary use.
+\item The test machine will run the OS.
+\item The test machine will have a tape drive of DDS-4 technology
+ or later that can be scheduled for access.
+\item The test machine will not have MySQL and/or PostgreSQL database
+ access.
+\item The test machine will have no sftp access.
+\item The test machine will provide no email access.
+\end{itemize}
+
+Bare bones test machine requirements:
+\begin{itemize}
+\item The test machine is available only to a designated
+ test person (your own machine).
+\item The designated test person runs the regression
+ tests on demand.
+\item The test machine has a tape drive available.
+\end{itemize}
--- /dev/null
+%%
+%%
+
+\chapter{Bacula FD Plugin API}
+To write a Bacula plugin, you create a dynamic shared object program (or dll on
+Win32) with a particular name and two exported entry points, place it in the
+{\bf Plugins Directory}, which is defined in the {\bf bacula-fd.conf} file in
+the {\bf Client} resource, and when the FD starts, it will load all the plugins
+that end with {\bf -fd.so} (or {\bf -fd.dll} on Win32) found in that directory.
+
+\section{Normal vs Command Plugins}
+In general, there are two ways that plugins are called. The first way is
+event driven: when a particular event is detected in Bacula, it will transfer
+control in turn to each plugin that is loaded, informing the plugin of the
+event. This is very similar to how a {\bf RunScript} works, and the events
+are very similar. Once the plugin gets control, it can interact with Bacula
+by getting and setting Bacula variables. In this way, it behaves much like a
+RunScript. Currently very few Bacula variables are defined, but they will be
+implemented as the need arises, and the mechanism is very extensible.
+
+We plan to have plugins register to receive events that they normally would
+not receive, such as an event for each file examined for backup or restore.
+This feature is not yet implemented.
+
+The second type of plugin, which is more useful and fully implemented in the
+current version, is what we call a command plugin. As with all plugins, it gets
+notified of important events as noted above (details described below), but in
+addition, this kind of plugin can accept a command line, which is a:
+
+\begin{verbatim}
+ Plugin = <command-string>
+\end{verbatim}
+
+directive that is placed in the Include section of a FileSet and is very
+similar to the "File = " directive. When this Plugin directive is encountered
+by Bacula during backup, it passes the "command" part of the Plugin directive
+only to the plugin that is explicitly named in the first field of that command
+string. This allows that plugin to backup any file or files on the system that
+it wants. It can even create "virtual files" in the catalog that contain data
+to be restored but do not necessarily correspond to actual files on the
+filesystem.
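+
+For illustration only (the plugin name and its argument string below are
+hypothetical; each plugin defines its own argument syntax), such a directive
+might look like:
+
+\begin{verbatim}
+  FileSet {
+    Name = "PluginTest"
+    Include {
+      Options { signature = MD5 }
+      Plugin = "example-plugin:arg1:arg2"
+    }
+  }
+\end{verbatim}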
+
+The important features of the command plugin entry points are:
+\begin{itemize}
+ \item It is triggered by a "Plugin =" directive in the FileSet
+ \item Only a single plugin is called that is named on the "Plugin =" directive.
+ \item The full command string after the "Plugin =" is passed to the plugin
+ so that it can be told what to backup/restore.
+\end{itemize}
+
+
+\section{Loading Plugins}
+Once the File daemon loads the plugins, it asks the OS for the
+two entry points (loadPlugin and unloadPlugin) then calls the
+{\bf loadPlugin} entry point (see below).
+
+Bacula passes information to the plugin through this call and it gets
+back information that it needs to use the plugin. Later, Bacula
+ will call particular functions that are defined by the
+{\bf loadPlugin} interface.
+
+When Bacula is finished with the plugin
+(when Bacula is going to exit), it will call the {\bf unloadPlugin}
+entry point.
+
+The two entry points are:
+
+\begin{verbatim}
+bRC loadPlugin(bInfo *lbinfo, bFuncs *lbfuncs, pInfo **pinfo, pFuncs **pfuncs)
+
+and
+
+bRC unloadPlugin()
+\end{verbatim}
+
+Both of these external entry points to the shared object are defined as C
+entry points to avoid name mangling complications with C++. However, the
+shared object can actually be written in any language (preferably C or C++)
+providing that it follows C language calling conventions.
+
+The definitions for {\bf bRC} and the arguments are in {\bf
+src/filed/fd-plugins.h}, so this header file needs to be included in
+your plugin. It, along with {\bf src/lib/plugins.h}, defines basically the
+whole plugin interface. This header file in turn includes the following
+files:
+
+\begin{verbatim}
+#include <sys/types.h>
+#include "config.h"
+#include "bc_types.h"
+#include "lib/plugins.h"
+#include <sys/stat.h>
+\end{verbatim}
+
+Aside from the {\bf bc\_types.h} and {\bf config.h} headers, the plugin
+definition uses the minimum code from Bacula. The bc\_types.h file is required
+to ensure that the data type definitions in arguments correspond to the Bacula
+core code.
+
+The return codes are defined as:
+\begin{verbatim}
+typedef enum {
+ bRC_OK = 0, /* OK */
+ bRC_Stop = 1, /* Stop calling other plugins */
+ bRC_Error = 2, /* Some kind of error */
+ bRC_More = 3, /* More files to backup */
+} bRC;
+\end{verbatim}
+
+
+At a future point in time, we hope to make the Bacula libbac.a into a
+shared object so that the plugin can use much more of Bacula's
+infrastructure, but for this first cut, we have tried to minimize the
+dependence on Bacula.
+
+\section{loadPlugin}
+As previously mentioned, the {\bf loadPlugin} entry point in the plugin
+is called immediately after Bacula loads the plugin when the File daemon
+itself is first starting. This entry point is only called once during the
+execution of the File daemon. In calling the
+plugin, the first two arguments are information from Bacula that
+is passed to the plugin, and the last two arguments are information
+about the plugin that the plugin must return to Bacula. The call is:
+
+\begin{verbatim}
+bRC loadPlugin(bInfo *lbinfo, bFuncs *lbfuncs, pInfo **pinfo, pFuncs **pfuncs)
+\end{verbatim}
+
+and the arguments are:
+
+\begin{description}
+\item [lbinfo]
+This is information about Bacula in general. Currently, the only value
+defined in the bInfo structure is the version, which is the Bacula plugin
+interface version, currently defined as 1. The {\bf size} is set to the
+byte size of the structure. The exact definition of the bInfo structure
+as of this writing is:
+
+\begin{verbatim}
+typedef struct s_baculaInfo {
+ uint32_t size;
+ uint32_t version;
+} bInfo;
+\end{verbatim}
+
+\item [lbfuncs]
+The bFuncs structure defines the callback entry points within Bacula
that the plugin can use to register events, get Bacula values, set
+Bacula values, and send messages to the Job output or debug output.
+
+The exact definition as of this writing is:
+\begin{verbatim}
+typedef struct s_baculaFuncs {
+ uint32_t size;
+ uint32_t version;
+ bRC (*registerBaculaEvents)(bpContext *ctx, ...);
+ bRC (*getBaculaValue)(bpContext *ctx, bVariable var, void *value);
+ bRC (*setBaculaValue)(bpContext *ctx, bVariable var, void *value);
+ bRC (*JobMessage)(bpContext *ctx, const char *file, int line,
+ int type, utime_t mtime, const char *fmt, ...);
+ bRC (*DebugMessage)(bpContext *ctx, const char *file, int line,
+ int level, const char *fmt, ...);
+ void *(*baculaMalloc)(bpContext *ctx, const char *file, int line,
+ size_t size);
+ void (*baculaFree)(bpContext *ctx, const char *file, int line, void *mem);
+} bFuncs;
+\end{verbatim}
+
+We will discuss these entry points and how to use them a bit later when
+describing the plugin code.
+
+
+\item [pInfo]
+When the loadPlugin entry point is called, the plugin must initialize
+an information structure about the plugin and return a pointer to
+this structure to Bacula.
+
+The exact definition as of this writing is:
+
+\begin{verbatim}
+typedef struct s_pluginInfo {
+ uint32_t size;
+ uint32_t version;
+ const char *plugin_magic;
+ const char *plugin_license;
+ const char *plugin_author;
+ const char *plugin_date;
+ const char *plugin_version;
+ const char *plugin_description;
+} pInfo;
+\end{verbatim}
+
+Where:
+ \begin{description}
+ \item [version] is the current Bacula defined plugin interface version, currently
+ set to 1. If the interface version differs from the current version of
+ Bacula, the plugin will not be run (not yet implemented).
+ \item [plugin\_magic] is a pointer to the text string "*FDPluginData*", a
+ sort of sanity check. If this value is not specified, the plugin
+ will not be run (not yet implemented).
+ \item [plugin\_license] is a pointer to a text string that describes the
+ plugin license. Bacula will only accept compatible licenses (not yet
+ implemented).
+ \item [plugin\_author] is a pointer to the text name of the author of the program.
+ This string can be anything but is generally the author's name.
 \item [plugin\_date] is a pointer to a text string containing the date of the plugin.
+ This string can be anything but is generally some human readable form of
+ the date.
+ \item [plugin\_version] is a pointer to a text string containing the version of
+ the plugin. The contents are determined by the plugin writer.
+ \item [plugin\_description] is a pointer to a string describing what the
+ plugin does. The contents are determined by the plugin writer.
+ \end{description}
+
+The pInfo structure must be defined in static memory because Bacula does not
+copy it and may refer to the values at any time while the plugin is
+loaded. All values must be supplied or the plugin will not run (not yet
+implemented). All text strings must be either ASCII or UTF-8 strings that
+are terminated with a zero byte.
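
As a sketch, a statically defined pInfo might look like the following (the
license, author, date, and description strings here are purely illustrative;
see bpipe-fd.c for a real one):

\begin{verbatim}
static pInfo pluginInfo = {
   sizeof(pluginInfo),        /* size */
   1,                         /* plugin interface version */
   "*FDPluginData*",          /* plugin_magic */
   "GPLv2",                   /* plugin_license */
   "Your Name",               /* plugin_author */
   "January 2010",            /* plugin_date */
   "1.0",                     /* plugin_version */
   "Example FD plugin"        /* plugin_description */
};
\end{verbatim}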
+
+\item [pFuncs]
+When the loadPlugin entry point is called, the plugin must initialize
+an entry point structure about the plugin and return a pointer to
this structure to Bacula. This structure contains a pointer to each
+of the entry points that the plugin must provide for Bacula. When
+Bacula is actually running the plugin, it will call the defined
+entry points at particular times. All entry points must be defined.
+
+The pFuncs structure must be defined in static memory because Bacula does not
+copy it and may refer to the values at any time while the plugin is
+loaded.
+
+The exact definition as of this writing is:
+
+\begin{verbatim}
+typedef struct s_pluginFuncs {
+ uint32_t size;
+ uint32_t version;
+ bRC (*newPlugin)(bpContext *ctx);
+ bRC (*freePlugin)(bpContext *ctx);
+ bRC (*getPluginValue)(bpContext *ctx, pVariable var, void *value);
+ bRC (*setPluginValue)(bpContext *ctx, pVariable var, void *value);
+ bRC (*handlePluginEvent)(bpContext *ctx, bEvent *event, void *value);
+ bRC (*startBackupFile)(bpContext *ctx, struct save_pkt *sp);
+ bRC (*endBackupFile)(bpContext *ctx);
+ bRC (*startRestoreFile)(bpContext *ctx, const char *cmd);
+ bRC (*endRestoreFile)(bpContext *ctx);
+ bRC (*pluginIO)(bpContext *ctx, struct io_pkt *io);
+ bRC (*createFile)(bpContext *ctx, struct restore_pkt *rp);
+ bRC (*setFileAttributes)(bpContext *ctx, struct restore_pkt *rp);
+ bRC (*checkFile)(bpContext *ctx, char *fname);
+} pFuncs;
+\end{verbatim}
+
+The details of the entry points will be presented in
+separate sections below.
+
+Where:
+ \begin{description}
+ \item [size] is the byte size of the structure.
+ \item [version] is the plugin interface version currently set to 3.
+ \end{description}
+
+Sample code for loadPlugin:
+\begin{verbatim}
+ bfuncs = lbfuncs; /* set Bacula funct pointers */
+ binfo = lbinfo;
+ *pinfo = &pluginInfo; /* return pointer to our info */
+ *pfuncs = &pluginFuncs; /* return pointer to our functions */
+
+ return bRC_OK;
+\end{verbatim}
+
+where pluginInfo and pluginFuncs are statically defined structures.
+See bpipe-fd.c for details.
+
+
+
+\end{description}
+
+\section{Plugin Entry Points}
+This section will describe each of the entry points (subroutines) within
+the plugin that the plugin must provide for Bacula, when they are called
+and their arguments. As noted above, pointers to these subroutines are
+passed back to Bacula in the pFuncs structure when Bacula calls the
+loadPlugin() externally defined entry point.
+
+\subsection{newPlugin(bpContext *ctx)}
+ This is the entry point that Bacula will call
+ when a new "instance" of the plugin is created. This typically
+ happens at the beginning of a Job. If 10 Jobs are running
+ simultaneously, there will be at least 10 instances of the
+ plugin.
+
+ The bpContext structure will be passed to the plugin, and
+ during this call, if the plugin needs to have its private
+ working storage that is associated with the particular
+ instance of the plugin, it should create it from the heap
+ (malloc the memory) and store a pointer to
+ its private working storage in the {\bf pContext} variable.
 Note: since Bacula is a multi-threaded program, you must not
 keep any variable data in your plugin unless it is truly meant
 to apply globally to the whole plugin. In addition, you must
 be aware that except for the first and last calls to the plugin
 (loadPlugin and unloadPlugin), all the other calls will be
 made by threads that correspond to a Bacula job. The
 bpContext that will be passed for each thread will remain the
 same throughout the Job, thus you can keep your private Job specific
 data in it ({\bf pContext}).
+
+\begin{verbatim}
+typedef struct s_bpContext {
+ void *pContext; /* Plugin private context */
+ void *bContext; /* Bacula private context */
+} bpContext;
+
+\end{verbatim}
+
+ This context pointer will be passed as the first argument to all
+ the entry points that Bacula calls within the plugin. Needless
+ to say, the plugin should not change the bContext variable, which
+ is Bacula's private context pointer for this instance (Job) of this
+ plugin.
+
+\subsection{freePlugin(bpContext *ctx)}
This entry point is called when
this instance of the plugin is no longer needed (the Job is
+ending), and the plugin should release all memory it may
+have allocated for this particular instance (Job) i.e. the pContext.
+This is not the final termination
+of the plugin signaled by a call to {\bf unloadPlugin}.
+Any other instances (Job) will
+continue to run, and the entry point {\bf newPlugin} may be called
+again if other jobs start.
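
A minimal sketch of how these two entry points might manage the per-Job
context (the {\bf plugin\_ctx} structure and its fields are defined by the
plugin itself and are purely illustrative):

\begin{verbatim}
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

/* Plugin-private per-Job context; not part of the Bacula interface */
struct plugin_ctx {
   char *cmd;                 /* saved "Plugin =" command string */
   bool backup;               /* true if this Job is a backup */
   int fd;                    /* file descriptor used by pluginIO */
};

static bRC newPlugin(bpContext *ctx)
{
   struct plugin_ctx *p_ctx =
      (struct plugin_ctx *)malloc(sizeof(struct plugin_ctx));
   if (!p_ctx) {
      return bRC_Error;
   }
   memset(p_ctx, 0, sizeof(struct plugin_ctx));
   ctx->pContext = (void *)p_ctx;   /* set our Job private context */
   return bRC_OK;
}

static bRC freePlugin(bpContext *ctx)
{
   struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;
   if (p_ctx->cmd) {
      free(p_ctx->cmd);              /* release anything we allocated */
   }
   free(p_ctx);
   ctx->pContext = NULL;
   return bRC_OK;
}
\end{verbatim}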
+
+\subsection{getPluginValue(bpContext *ctx, pVariable var, void *value)}
+Bacula will call this entry point to get
+a value from the plugin. This entry point is currently not called.
+
+\subsection{setPluginValue(bpContext *ctx, pVariable var, void *value)}
+Bacula will call this entry point to set
+a value in the plugin. This entry point is currently not called.
+
+\subsection{handlePluginEvent(bpContext *ctx, bEvent *event, void *value)}
+This entry point is called when Bacula
+encounters certain events (discussed below). This is, in fact, the
+main way that most plugins get control when a Job runs and how
+they know what is happening in the job. It can be likened to the
+{\bf RunScript} feature that calls external programs and scripts,
+and is very similar to the Bacula Python interface.
+When the plugin is called, Bacula passes it the pointer to an event
+structure (bEvent), which currently has one item, the eventType:
+
+\begin{verbatim}
+typedef struct s_bEvent {
+ uint32_t eventType;
+} bEvent;
+\end{verbatim}
+
+ which defines what event has been triggered, and for each event,
+ Bacula will pass a pointer to a value associated with that event.
+ If no value is associated with a particular event, Bacula will
+ pass a NULL pointer, so the plugin must be careful to always check
 the value pointer prior to dereferencing it.
+
+ The current list of events are:
+
+\begin{verbatim}
+typedef enum {
+ bEventJobStart = 1,
+ bEventJobEnd = 2,
+ bEventStartBackupJob = 3,
+ bEventEndBackupJob = 4,
+ bEventStartRestoreJob = 5,
+ bEventEndRestoreJob = 6,
+ bEventStartVerifyJob = 7,
+ bEventEndVerifyJob = 8,
+ bEventBackupCommand = 9,
+ bEventRestoreCommand = 10,
+ bEventLevel = 11,
+ bEventSince = 12,
+} bEventType;
+
+\end{verbatim}
+
+Most of the above are self-explanatory.
+
+\begin{description}
 \item [bEventJobStart] is called whenever a Job starts. The value
   passed is a pointer to a string that contains "Jobid=nnn
   Job=job-name", where nnn will be replaced by the JobId and job-name
   will be replaced by the Job name. The string is temporary, so if you
   need the values, you must copy them.
+
+ \item [bEventJobEnd] is called whenever a Job ends. No value is passed.
+
+ \item [bEventStartBackupJob] is called when a Backup Job begins. No value
+ is passed.
+
+ \item [bEventEndBackupJob] is called when a Backup Job ends. No value is
+ passed.
+
+ \item [bEventStartRestoreJob] is called when a Restore Job starts. No value
+ is passed.
+
+ \item [bEventEndRestoreJob] is called when a Restore Job ends. No value is
+ passed.
+
+ \item [bEventStartVerifyJob] is called when a Verify Job starts. No value
+ is passed.
+
+ \item [bEventEndVerifyJob] is called when a Verify Job ends. No value
+ is passed.
+
 \item [bEventBackupCommand] is called prior to the bEventStartBackupJob and
   the plugin is passed the command string (everything after the equal sign
   in "Plugin =") as the value.

   Note, if you intend to back up a file, this is an important first point to
   write code that copies the command string passed into your pContext area
   so that you will know that a backup is being performed and you will know
   the full contents of the "Plugin =" command (i.e. what to back up and
   what virtual filename the user wants to call it).
+
 \item [bEventRestoreCommand] is called prior to the bEventStartRestoreJob and
   the plugin is passed the command string (everything after the equal sign
   in "Plugin =") as the value.
+
+ See the notes above concerning backup and the command string. This is the
+ point at which Bacula passes you the original command string that was
+ specified during the backup, so you will want to save it in your pContext
+ area for later use when Bacula calls the plugin again.
+
+ \item [bEventLevel] is called when the level is set for a new Job. The value
+ is a 32 bit integer stored in the void*, which represents the Job Level code.
+
+ \item [bEventSince] is called when the since time is set for a new Job. The
+ value is a time\_t time at which the last job was run.
+\end{description}
+
+During each of the above calls, the plugin receives either no specific value or
+only one value, which in some cases may not be sufficient. However, knowing
+the context of the event, the plugin can call back to the Bacula entry points
it was passed during the {\bf loadPlugin} call and access a number of Bacula
variables. (At the current time few Bacula variables are implemented, but the
list can easily be extended at a future time as needs require.)
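
As a sketch, using the illustrative {\bf plugin\_ctx} structure from the
newPlugin example above, a typical handlePluginEvent dispatches on the
event type:

\begin{verbatim}
#include <string.h>

static bRC handlePluginEvent(bpContext *ctx, bEvent *event, void *value)
{
   struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;

   switch (event->eventType) {
   case bEventJobStart:
      /* value is a temporary "Jobid=nnn Job=job-name" string;
       * copy it now if you will need it later */
      break;
   case bEventLevel:
      /* the Job level arrives as an integer in the void* */
      break;
   case bEventBackupCommand:
      if (value) {               /* always check before dereferencing */
         p_ctx->cmd = strdup((char *)value);
         p_ctx->backup = true;
      }
      break;
   case bEventRestoreCommand:
      if (value) {
         p_ctx->cmd = strdup((char *)value);
      }
      break;
   default:
      break;
   }
   return bRC_OK;
}
\end{verbatim}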
+
+\subsection{startBackupFile(bpContext *ctx, struct save\_pkt *sp)}
+This entry point is called only if your plugin is a command plugin, and
+it is called when Bacula encounters the "Plugin = " directive in
+the Include section of the FileSet.
+Called when beginning the backup of a file. Here Bacula provides you
+with a pointer to the {\bf save\_pkt} structure and you must fill in
+this packet with the "attribute" data of the file.
+
+\begin{verbatim}
+struct save_pkt {
+ int32_t pkt_size; /* size of this packet */
+ char *fname; /* Full path and filename */
+ char *link; /* Link name if any */
+ struct stat statp; /* System stat() packet for file */
+ int32_t type; /* FT_xx for this file */
+ uint32_t flags; /* Bacula internal flags */
+ bool portable; /* set if data format is portable */
+ char *cmd; /* command */
+ int32_t pkt_end; /* end packet sentinel */
+};
+\end{verbatim}
+
+The second argument is a pointer to the {\bf save\_pkt} structure for the file
+to be backed up. The plugin is responsible for filling in all the fields
+of the {\bf save\_pkt}. If you are backing up
+a real file, then generally, the statp structure can be filled in by doing
+a {\bf stat} system call on the file.
+
+If you are backing up a database or
+something that is more complex, you might want to create a virtual file.
+That is a file that does not actually exist on the filesystem, but represents
say, an object that you are backing up. In that case, you need to ensure
+that the {\bf fname} string that you pass back is unique so that it
+does not conflict with a real file on the system, and you need to
artificially create values in the statp packet.
+
+Example programs such as {\bf bpipe-fd.c} show how to set these fields. You
must take care not to store pointers to the stack in the pointer fields such as
fname and link, because when you return from your function, your stack entries
will be destroyed. The solution in that case is to malloc() the memory and
return a pointer to it. In order not to have memory leaks, you should store a pointer to
+all memory allocated in your pContext structure so that in subsequent calls or
+at termination, you can release it back to the system.
+
+Once the backup has begun, Bacula will call your plugin at the {\bf pluginIO}
+entry point to "read" the data to be backed up. Please see the {\bf bpipe-fd.c}
+plugin for how to do I/O.
+
+Example of filling in the save\_pkt as used in bpipe-fd.c:
+
+\begin{verbatim}
+ struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;
+ time_t now = time(NULL);
+ sp->fname = p_ctx->fname;
+ sp->statp.st_mode = 0700 | S_IFREG;
+ sp->statp.st_ctime = now;
+ sp->statp.st_mtime = now;
+ sp->statp.st_atime = now;
+ sp->statp.st_size = -1;
+ sp->statp.st_blksize = 4096;
+ sp->statp.st_blocks = 1;
+ p_ctx->backup = true;
+ return bRC_OK;
+\end{verbatim}
+
Note: the filename to be created has already been extracted from the
command string previously sent to the plugin; it is kept in the plugin
context (p\_ctx->fname) as a malloc()ed string. This example
creates a regular file (S\_IFREG), with the various statp fields filled in.
+
+In general, the sequence of commands issued from Bacula to the plugin
to do a backup while processing the "Plugin = " directive is:
+
+\begin{enumerate}
+ \item generate a bEventBackupCommand event to the specified plugin
+ and pass it the command string.
 \item make a startBackupFile call to the plugin, which
+ fills in the data needed in save\_pkt to save as the file
+ attributes and to put on the Volume and in the catalog.
+ \item call Bacula's internal save\_file() subroutine to save the specified
+ file. The plugin will then be called at pluginIO() to "open"
+ the file, and then to read the file data.
+ Note, if you are dealing with a virtual file, the "open" operation
+ is something the plugin does internally and it doesn't necessarily
 mean opening a file on the filesystem. For example, in the case of
 the bpipe-fd.c program, it initiates a pipe to the requested program.
 Finally, when the plugin signals to Bacula that all the data was read,
+ Bacula will call the plugin with the "close" pluginIO() function.
+\end{enumerate}
+
+
+\subsection{endBackupFile(bpContext *ctx)}
+Called at the end of backing up a file for a command plugin. If the plugin's
+work is done, it should return bRC\_OK. If the plugin wishes to create another
+file and back it up, then it must return bRC\_More (not yet implemented). This
+is probably a good time to release any malloc()ed memory you used to pass back
+filenames.
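
For a plugin that backs up a single (virtual) file per "Plugin =" directive,
a sketch can be as simple as:

\begin{verbatim}
static bRC endBackupFile(bpContext *ctx)
{
   /* We back up exactly one file per "Plugin =" directive, so we
    * are done; a multi-file plugin would return bRC_More here
    * (not yet implemented) */
   return bRC_OK;
}
\end{verbatim}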
+
+\subsection{startRestoreFile(bpContext *ctx, const char *cmd)}
+Called when the first record is read from the Volume that was
+previously written by the command plugin.
+
+\subsection{createFile(bpContext *ctx, struct restore\_pkt *rp)}
+Called for a command plugin to create a file during a Restore job before
+restoring the data.
+This entry point is called before any I/O is done on the file. After
+this call, Bacula will call pluginIO() to open the file for write.
+
+The data in the
+restore\_pkt is passed to the plugin and is based on the data that was
+originally given by the plugin during the backup and the current user
+restore settings (e.g. where, RegexWhere, replace). This allows the
+plugin to first create a file (if necessary) so that the data can
+be transmitted to it. The next call to the plugin will be a
+pluginIO command with a request to open the file write-only.
+
+This call must return one of the following values:
+
+\begin{verbatim}
+ enum {
+ CF_SKIP = 1, /* skip file (not newer or something) */
+ CF_ERROR, /* error creating file */
+ CF_EXTRACT, /* file created, data to extract */
+ CF_CREATED /* file created, no data to extract */
+};
+\end{verbatim}
+
+in the restore\_pkt value {\bf create\_status}. For a normal file,
+unless there is an error, you must return {\bf CF\_EXTRACT}.
+
+\begin{verbatim}
+
+struct restore_pkt {
+ int32_t pkt_size; /* size of this packet */
+ int32_t stream; /* attribute stream id */
+ int32_t data_stream; /* id of data stream to follow */
+ int32_t type; /* file type FT */
+ int32_t file_index; /* file index */
+ int32_t LinkFI; /* file index to data if hard link */
+ uid_t uid; /* userid */
+ struct stat statp; /* decoded stat packet */
+ const char *attrEx; /* extended attributes if any */
+ const char *ofname; /* output filename */
+ const char *olname; /* output link name */
+ const char *where; /* where */
+ const char *RegexWhere; /* regex where */
+ int replace; /* replace flag */
+ int create_status; /* status from createFile() */
+ int32_t pkt_end; /* end packet sentinel */
+
+};
+\end{verbatim}
+
+Typical code to create a regular file would be the following:
+
+\begin{verbatim}
+ struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;
+ time_t now = time(NULL);
+ sp->fname = p_ctx->fname; /* set the full path/filename I want to create */
+ sp->type = FT_REG;
+ sp->statp.st_mode = 0700 | S_IFREG;
+ sp->statp.st_ctime = now;
+ sp->statp.st_mtime = now;
+ sp->statp.st_atime = now;
+ sp->statp.st_size = -1;
+ sp->statp.st_blksize = 4096;
+ sp->statp.st_blocks = 1;
+ return bRC_OK;
+\end{verbatim}
+
+This will create a virtual file. If you are creating a file that actually
+exists, you will most likely want to fill the statp packet using the
+stat() system call.
+
+Creating a directory is similar, but requires a few extra steps:
+
+\begin{verbatim}
+ struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;
+ time_t now = time(NULL);
+ sp->fname = p_ctx->fname; /* set the full path I want to create */
 sp->link = xxx;            /* xxx is p_ctx->fname with a trailing forward slash */
 sp->type = FT_DIREND;
+ sp->statp.st_mode = 0700 | S_IFDIR;
+ sp->statp.st_ctime = now;
+ sp->statp.st_mtime = now;
+ sp->statp.st_atime = now;
+ sp->statp.st_size = -1;
+ sp->statp.st_blksize = 4096;
+ sp->statp.st_blocks = 1;
+ return bRC_OK;
+\end{verbatim}
+
The link field must be set with the full canonical path name, which always
+ends with a forward slash. If you do not terminate it with a forward slash,
+you will surely have problems later.
+
+As with the example that creates a file, if you are backing up a real
directory, you will want to do a stat() on the directory.
+
+Note, if you want the directory permissions and times to be correctly
restored, you must create the directory {\bf after} all the files in the
directory have been sent to Bacula. That allows the restore process to restore all the
+files in a directory using default directory options, then at the end, restore
+the directory permissions. If you do it the other way around, each time you
+restore a file, the OS will modify the time values for the directory entry.
+
+\subsection{setFileAttributes(bpContext *ctx, struct restore\_pkt *rp)}
This call is not yet implemented. It is called for a command plugin.

See the definition of {\bf restore\_pkt} in the above section.
+
+\subsection{endRestoreFile(bpContext *ctx)}
+Called when a command plugin is done restoring a file.
+
+\subsection{pluginIO(bpContext *ctx, struct io\_pkt *io)}
+Called to do the input (backup) or output (restore) of data from or to a file
+for a command plugin. These routines simulate the Unix read(), write(), open(),
+close(), and lseek() I/O calls, and the arguments are passed in the packet and
the return values are also placed in the packet. In addition, for Win32 systems,
+the plugin must return two additional values (described below).
+
+\begin{verbatim}
+ enum {
+ IO_OPEN = 1,
+ IO_READ = 2,
+ IO_WRITE = 3,
+ IO_CLOSE = 4,
+ IO_SEEK = 5
+};
+
+struct io_pkt {
+ int32_t pkt_size; /* Size of this packet */
+ int32_t func; /* Function code */
+ int32_t count; /* read/write count */
+ mode_t mode; /* permissions for created files */
+ int32_t flags; /* Open flags */
+ char *buf; /* read/write buffer */
+ const char *fname; /* open filename */
+ int32_t status; /* return status */
+ int32_t io_errno; /* errno code */
+ int32_t lerror; /* Win32 error code */
+ int32_t whence; /* lseek argument */
+ boffset_t offset; /* lseek argument */
+ bool win32; /* Win32 GetLastError returned */
+ int32_t pkt_end; /* end packet sentinel */
+};
+\end{verbatim}
+
+The particular Unix function being simulated is indicated by the {\bf func},
+which will have one of the IO\_OPEN, IO\_READ, ... codes listed above. The
+status code that would be returned from a Unix call is returned in {\bf status}
+for IO\_OPEN, IO\_CLOSE, IO\_READ, and IO\_WRITE. The return value for IO\_SEEK
+is returned in {\bf offset} which in general is a 64 bit value.
+
When there is an error on Unix systems, you must always set io\_errno, and
on a Win32 system, you must always set win32 and place the value returned from
the OS call GetLastError() in lerror.
+
+For all except IO\_SEEK, {\bf status} is the return result. In general it is
a positive integer unless there is an error, in which case it is -1.
+
+The following describes each call and what you get and what you
+should return:
+
+\begin{description}
+ \item [IO\_OPEN]
+ You will be passed fname, mode, and flags.
   You must set on return: status, and if there is a Unix error,
   io\_errno must be set to the errno value, and if there is a
   Win32 error, win32 and lerror must be set.
+
+ \item [IO\_READ]
+ You will be passed: count, and buf (buffer of size count).
+ You must set on return: status to the number of bytes
+ read into the buffer (buf) or -1 on an error,
+ and if there is a Unix error
+ io\_errno must be set to the errno value, and if there is a
+ Win32 error, win32 and lerror must be set.
+
+ \item [IO\_WRITE]
+ You will be passed: count, and buf (buffer of size count).
+ You must set on return: status to the number of bytes
+ written from the buffer (buf) or -1 on an error,
+ and if there is a Unix error
+ io\_errno must be set to the errno value, and if there is a
+ Win32 error, win32 and lerror must be set.
+
+ \item [IO\_CLOSE]
+ Nothing will be passed to you. On return you must set
+ status to 0 on success and -1 on failure. If there is a Unix error
+ io\_errno must be set to the errno value, and if there is a
+ Win32 error, win32 and lerror must be set.
+
 \item [IO\_SEEK]
   You will be passed: offset, and whence. offset is a 64 bit value
   and is the position to seek to relative to whence. whence is one
   of SEEK\_SET, SEEK\_CUR, or SEEK\_END, indicating whether to seek
   to an absolute position, relative to the current position, or
   relative to the end of the file.
   You must pass back in offset the absolute location to which you
   seeked. If there is an error, offset should be set to -1.
+ If there is a Unix error
+ io\_errno must be set to the errno value, and if there is a
+ Win32 error, win32 and lerror must be set.
+
+ Note: Bacula will call IO\_SEEK only when writing a sparse file.
+
+\end{description}
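
As a sketch, a pluginIO that is backed by a real file descriptor (kept in the
illustrative p\_ctx->fd field) might look like the following; bpipe-fd.c does
essentially the same thing with a pipe instead of a file:

\begin{verbatim}
#include <errno.h>
#include <fcntl.h>
#include <unistd.h>

static bRC pluginIO(bpContext *ctx, struct io_pkt *io)
{
   struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;

   io->status = 0;
   io->io_errno = 0;
   switch (io->func) {
   case IO_OPEN:
      p_ctx->fd = open(io->fname, io->flags, io->mode);
      io->status = p_ctx->fd;
      if (p_ctx->fd < 0) {
         io->io_errno = errno;     /* always set on a Unix error */
      }
      break;
   case IO_READ:
      io->status = read(p_ctx->fd, io->buf, io->count);
      if (io->status < 0) {
         io->io_errno = errno;
      }
      break;
   case IO_WRITE:
      io->status = write(p_ctx->fd, io->buf, io->count);
      if (io->status < 0) {
         io->io_errno = errno;
      }
      break;
   case IO_CLOSE:
      io->status = close(p_ctx->fd);
      if (io->status < 0) {
         io->io_errno = errno;
      }
      break;
   case IO_SEEK:
      io->offset = lseek(p_ctx->fd, io->offset, io->whence);
      if (io->offset < 0) {
         io->io_errno = errno;
      }
      break;
   }
   return bRC_OK;
}
\end{verbatim}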
+
+\subsection{bool checkFile(bpContext *ctx, char *fname)}
+If this entry point is set, Bacula will call it after backing up all file
+data during an Accurate backup. It will be passed the full filename for
+each file that Bacula is proposing to mark as deleted. Only files
+previously backed up but not backed up in the current session will be
marked to be deleted. If you return {\bf false}, the file will be
marked deleted. If you return {\bf true}, the file will not be marked
deleted. This permits a plugin to ensure that previously saved virtual
files or files controlled by your plugin that have not changed (not backed
up in the current job) are not marked to be deleted. This entry point will
only be called during Accurate Incremental and Differential backup jobs.
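
A sketch, following the description above and assuming the plugin created its
virtual files under an illustrative "/MYSQL/" prefix during backup:

\begin{verbatim}
#include <stdbool.h>
#include <string.h>

static bool checkFile(bpContext *ctx, char *fname)
{
   /* true  -> the file is still ours: do not mark it deleted
    * false -> Bacula may mark the file as deleted */
   return (strncmp(fname, "/MYSQL/", 7) == 0);
}
\end{verbatim}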
+
+
+\section{Bacula Plugin Entrypoints}
When Bacula calls one of your plugin entrypoints, you can call back to
the entrypoints in Bacula that were supplied during the {\bf loadPlugin} call
to get or set information within Bacula.
+
+\subsection{bRC registerBaculaEvents(bpContext *ctx, ...)}
+This Bacula entrypoint will allow you to register to receive events
that are not automatically passed to your plugin by default. This
+entrypoint currently is unimplemented.
+
+\subsection{bRC getBaculaValue(bpContext *ctx, bVariable var, void *value)}
+Calling this entrypoint, you can obtain specific values that are available
+in Bacula. The following Variables can be referenced:
+\begin{itemize}
+\item bVarJobId returns an int
+\item bVarFDName returns a char *
+\item bVarLevel returns an int
+\item bVarClient returns a char *
+\item bVarJobName returns a char *
+\item bVarJobStatus returns an int
+\item bVarSinceTime returns an int (time\_t)
+\item bVarAccurate returns an int
+\end{itemize}
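
For example, a sketch of fetching the JobId through the bFuncs pointer
(bfuncs) saved at loadPlugin time:

\begin{verbatim}
static int get_jobid(bpContext *ctx)
{
   int jobId = 0;
   /* bfuncs is the bFuncs pointer saved during loadPlugin */
   bfuncs->getBaculaValue(ctx, bVarJobId, (void *)&jobId);
   return jobId;
}
\end{verbatim}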
+
+\subsection{bRC setBaculaValue(bpContext *ctx, bVariable var, void *value)}
+Calling this entrypoint allows you to set particular values in
+Bacula. The only variable that can currently be set is
+{\bf bVarFileSeen} and the value passed is a char * that points
+to the full filename for a file that you are indicating has been
+seen and hence is not deleted.
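
A sketch of marking a file as seen during an Accurate backup:

\begin{verbatim}
/* fname is the full filename of a file the plugin knows is still
 * present although it was not backed up in the current Job */
static void mark_file_seen(bpContext *ctx, const char *fname)
{
   bfuncs->setBaculaValue(ctx, bVarFileSeen, (void *)fname);
}
\end{verbatim}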
+
+\subsection{bRC JobMessage(bpContext *ctx, const char *file, int line,
+ int type, utime\_t mtime, const char *fmt, ...)}
+This call permits you to put a message in the Job Report.
+
+
+\subsection{bRC DebugMessage(bpContext *ctx, const char *file, int line,
+ int level, const char *fmt, ...)}
+This call permits you to print a debug message.
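
A sketch of both message calls; the M\_INFO message type and the debug level
100 used here are illustrative choices:

\begin{verbatim}
static void report_start(bpContext *ctx, const char *fname, int status)
{
   /* put a line in the Job Report */
   bfuncs->JobMessage(ctx, __FILE__, __LINE__, M_INFO, 0,
                      "Plugin backup of %s started.\n", fname);
   /* print a debug message at debug level 100 */
   bfuncs->DebugMessage(ctx, __FILE__, __LINE__, 100,
                        "open status=%d\n", status);
}
\end{verbatim}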
+
+
+\subsection{void baculaMalloc(bpContext *ctx, const char *file, int line,
+ size\_t size)}
+This call permits you to obtain memory from Bacula's memory allocator.
+
+
+\subsection{void baculaFree(bpContext *ctx, const char *file, int line, void *mem)}
+This call permits you to free memory obtained from Bacula's memory allocator.
+
+\section{Building Bacula Plugins}
+There is currently one sample program {\bf example-plugin-fd.c} and
+one working plugin {\bf bpipe-fd.c} that can be found in the Bacula
+{\bf src/plugins/fd} directory. Both are built with the following:
+
+\begin{verbatim}
+ cd <bacula-source>
+ ./configure <your-options>
+ make
+ ...
+ cd src/plugins/fd
+ make
+ make test
+\end{verbatim}
+
+After building Bacula and changing into the src/plugins/fd directory,
+the {\bf make} command will build the {\bf bpipe-fd.so} plugin, which
+is a very useful and working program.
+
+The {\bf make test} command will build the {\bf example-plugin-fd.so}
plugin and a binary named {\bf main}, which is built from the source
+code located in {\bf src/filed/fd\_plugins.c}.
+
+If you execute {\bf ./main}, it will load and run the example-plugin-fd
+plugin simulating a small number of the calling sequences that Bacula uses
+in calling a real plugin. This allows you to do initial testing of
+your plugin prior to trying it with Bacula.
+
+You can get a good idea of how to write your own plugin by first
+studying the example-plugin-fd, and actually running it. Then
+it can also be instructive to read the bpipe-fd.c code as it is
+a real plugin, which is still rather simple and small.
+
+When actually writing your own plugin, you may use the example-plugin-fd.c
+code as a template for your code.
+
--- /dev/null
+%%
+%%
+
+\chapter{Bacula Porting Notes}
+\label{_ChapterStart1}
+\index{Notes!Bacula Porting}
+\index{Bacula Porting Notes}
+\addcontentsline{toc}{section}{Bacula Porting Notes}
+
+This document is intended mostly for developers who wish to port Bacula to a
+system that is not {\bf officially} supported.
+
+It is hoped that Bacula clients will eventually run on every imaginable system
+that needs backing up (perhaps even a Palm). It is also hoped that the Bacula
Director and Storage daemons will run on every system capable of supporting
+them.
+
+\section{Porting Requirements}
+\index{Requirements!Porting}
+\index{Porting Requirements}
+\addcontentsline{toc}{section}{Porting Requirements}
+
In general, the following holds true:
+
+\begin{itemize}
+\item {\bf Bacula} has been compiled and run on Linux RedHat, FreeBSD, and
+ Solaris systems.
\item In addition, clients exist on Win32 and Irix.
\item It requires GNU C++ to compile. You can try with other compilers, but
  you are on your own. The Irix client is built with the Irix compiler, but, in
  general, you will need GNU.
+\item Your compiler must provide support for 64 bit signed and unsigned
+ integers.
+\item You will need a recent copy of the {\bf autoconf} tools loaded on your
+ system (version 2.13 or later). The {\bf autoconf} tools are used to build
+ the configuration program, but are not part of the Bacula source
+distribution.
+\item There are certain third party packages that Bacula needs. Except for
+ MySQL, they can all be found in the {\bf depkgs} and {\bf depkgs1} releases.
+\item To build the Win32 binaries, we use Microsoft VC++ standard
+ 2003. Please see the instructions in
+ bacula-source/src/win32/README.win32 for more details. If you
+ want to use VC++ Express, please see README.vc8. Our build is
+ done under the most recent version of Cygwin, but Cygwin is
+ not used in the Bacula binaries that are produced.
+ Unfortunately, we do not have the resources to help you build
+ your own version of the Win32 FD, so you are pretty much on
+ your own. You can ask the bacula-devel list for help, but
+ please don't expect much.
+\item {\bf Bacula} requires a good implementation of pthreads to work.
+\item The source code has been written with portability in mind and is mostly
+ POSIX compatible. Thus porting to any POSIX compatible operating system
+ should be relatively easy.
+\end{itemize}
+
+\section{Steps to Take for Porting}
+\index{Porting!Steps to Take for}
+\index{Steps to Take for Porting}
+\addcontentsline{toc}{section}{Steps to Take for Porting}
+
+\begin{itemize}
+\item The first step is to ensure that you have version 2.13 or later of the
+ {\bf autoconf} tools loaded. You can skip this step, but making changes to
+ the configuration program will be difficult or impossible.
\item Then run a {\bf ./configure} command in the main source directory and
+ examine the output. It should look something like the following:
+
+\footnotesize
+\begin{verbatim}
+Configuration on Mon Oct 28 11:42:27 CET 2002:
+ Host: i686-pc-linux-gnu -- redhat 7.3
+ Bacula version: 1.27 (26 October 2002)
+ Source code location: .
+ Install binaries: /sbin
+ Install config files: /etc/bacula
+ C Compiler: gcc
+ C++ Compiler: c++
+ Compiler flags: -g -O2
+ Linker flags:
+ Libraries: -lpthread
+ Statically Linked Tools: no
+ Database found: no
+ Database type: Internal
+ Database lib:
+ Job Output Email: root@localhost
+ Traceback Email: root@localhost
+ SMTP Host Address: localhost
+ Director Port 9101
+ File daemon Port 9102
+ Storage daemon Port 9103
+ Working directory /etc/bacula/working
+ SQL binaries Directory
+ Large file support: yes
+ readline support: yes
+ cweb support: yes /home/kern/bacula/depkgs/cweb
+ TCP Wrappers support: no
+ ZLIB support: yes
+ enable-smartalloc: yes
+ enable-gnome: no
+ gmp support: yes
+\end{verbatim}
+\normalsize
+
+The details depend on your system. The first thing to check is that it
+properly identified your host on the {\bf Host:} line. The first part (added
+in version 1.27) is the GNU four part identification of your system. The part
+after the -- is your system and the system version. Generally, if your system
+is not yet supported, you must correct these.
+\item If the {\bf ./configure} does not function properly, you must determine
+ the cause and fix it. Generally, it will be because some required system
+ routine is not available on your machine.
+\item To correct problems with detection of your system type or with routines
+ and libraries, you must edit the file {\bf
+ \lt{}bacula-src\gt{}/autoconf/configure.in}. This is the ``source'' from
+which {\bf configure} is built. In general, most of the changes for your
+system will be made in {\bf autoconf/aclocal.m4} in the routine {\bf
+BA\_CHECK\_OPSYS} or in the routine {\bf BA\_CHECK\_OPSYS\_DISTNAME}. I have
+already added the necessary code for most systems, but if yours shows up as
+{\bf unknown} you will need to make changes. Then as mentioned above, you
+will need to set a number of system dependent items in {\bf configure.in} in
+the {\bf case} statement at approximately line 1050 (depending on the Bacula
+release).
\item The items to set in the case statement that corresponds to your system are
+ the following:
+
+\begin{itemize}
+\item DISTVER -- set to the version of your operating system. Typically some
+ form of {\bf uname} obtains it.
+\item TAPEDRIVE -- the default tape drive. Not too important as the user can
+ set it as an option.
+\item PSCMD -- set to the {\bf ps} command that will provide the PID in the
+ first field and the program name in the second field. If this is not set
+ properly, the {\bf bacula stop} script will most likely not be able to stop
+Bacula in all cases.
+\item hostname -- command to return the base host name (non-qualified) of
+ your system. This is generally the machine name. Not too important as the
+ user can correct this in his configuration file.
+\item CFLAGS -- set any special compiler flags needed. Many systems need a
+ special flag to make pthreads work. See cygwin for an example.
+\item LDFLAGS -- set any special loader flags. See cygwin for an example.
+\item PTHREAD\_LIB -- set for any special pthreads flags needed during
+ linking. See freebsd as an example.
\item lld -- set so that a ``long long int'' will be properly edited in a
  printf() call (see the sketch after this list).
\item llu -- set so that a ``long long unsigned'' will be properly edited in
  a printf() call.
\item PFILES -- set to add any files that you may define in your platform
  subdirectory. These files are used for installation of automatic system
  startup of Bacula daemons.
+\end{itemize}
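
For example, a minimal sketch of how the lld and llu settings end up being
used in a printf() call (the definitions and values here are illustrative):

\footnotesize
\begin{verbatim}
#include <stdio.h>

/* these definitions would come from the configure results */
#define lld "lld"
#define llu "llu"

int main(void)
{
   long long bytes = 123456789012345LL;
   unsigned long long files = 42;
   printf("bytes=%" lld " files=%" llu "\n", bytes, files);
   return 0;
}
\end{verbatim}
\normalsize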
+
+\item To rebuild a new version of {\bf configure} from a changed {\bf
+ autoconf/configure.in} you enter {\bf make configure} in the top level Bacula
+ source directory. You must have done a ./configure prior to trying to rebuild
+ the configure script or it will get into an infinite loop.
\item If the {\bf make configure} gets into an infinite loop, interrupt it with ctrl-c, then
+ do {\bf ./configure} (no options are necessary) and retry the {\bf make
+ configure}, which should now work.
+\item To rebuild {\bf configure} you will need to have {\bf autoconf} version
+ 2.57-3 or higher loaded. Older versions of autoconf will complain about
+ unknown or bad options, and won't work.
+\item After you have a working {\bf configure} script, you may need to make a
+ few system dependent changes to the way Bacula works. Generally, these are
+ done in {\bf src/baconfig.h}. You can find a few examples of system dependent
+changes toward the end of this file. For example, on Irix systems, there is
+no definition for {\bf socklen\_t}, so it is made in this file. If your
+system has structure alignment requirements, check the definition of BALIGN
+in this file. Currently, all Bacula allocated memory is aligned on a {\bf
+double} boundary.
+\item If you are having problems with Bacula's type definitions, you might
+ look at {\bf src/bc\_types.h} where all the types such as {\bf uint32\_t},
+ {\bf uint64\_t}, etc. that Bacula uses are defined.
+\end{itemize}
--- /dev/null
+%%
+%%
+
+\chapter{Bacula Regression Testing}
+\label{_ChapterStart8}
+\index{Testing!Bacula Regression}
+\index{Bacula Regression Testing}
+\addcontentsline{toc}{section}{Bacula Regression Testing}
+
+\section{General}
+\index{General}
+\addcontentsline{toc}{section}{General}
+
+This document is intended mostly for developers who wish to ensure that their
+changes to Bacula don't introduce bugs in the base code. However, you
+don't need to be a developer to run the regression scripts, and we
+recommend them before putting your system into production, and before each
+upgrade, especially if you build from source code. They are
+simply shell scripts that drive Bacula through bconsole and then typically
+compare the input and output with {\bf diff}.
+
+You can find the existing regression scripts in the Bacula developer's
+{\bf git} repository on SourceForge. We strongly recommend that you {\bf
clone} the repository because afterwards, you can easily pull the
updates that have been made.
+
+To get started, we recommend that you create a directory named {\bf
+bacula}, under which you will put the current source code and the current
+set of regression scripts. Below, we will describe how to set this up.
+
+The top level directory that we call {\bf bacula} can be named anything you
+want. Note, all the standard regression scripts run as non-root and can be
+run on the same machine as a production Bacula system (the developers run
+it this way).
+
+To create the directory structure for the current trunk and to
+clone the repository, do the following (note, we assume you
+are working in your home directory in a non-root account):
+
+\footnotesize
+\begin{verbatim}
+cd
+git clone git://bacula.git.sourceforge.net/gitroot/bacula bacula
+\end{verbatim}
+\normalsize
+
+This will create the directory {\bf bacula} and populate it with
+three directories: {\bf bacula}, {\bf gui}, and {\bf regress}.
+{\bf bacula} contains the Bacula source code; {\bf gui} contains
+certain gui programs that you will not need, and {\bf regress} contains
+all the regression scripts. The above should be needed only
+once. Thereafter to update to the latest code, you do:
+
+\footnotesize
+\begin{verbatim}
+cd bacula
+git pull
+\end{verbatim}
+\normalsize
+
+If you want to test with SQLite and it is not installed on your system,
+you will need to download the latest depkgs release from Source Forge and
+unpack it into {\bf depkgs}, then simply:
+
+\footnotesize
+\begin{verbatim}
+cd depkgs
+make
+\end{verbatim}
+\normalsize
+
+
+There are two different aspects of regression testing that this document will
+discuss: 1. Running the Regression Script, 2. Writing a Regression test.
+
+\section{Running the Regression Script}
+\index{Running the Regression Script}
+\index{Script!Running the Regression}
+\addcontentsline{toc}{section}{Running the Regression Script}
+
+There are a number of different tests that may be run, such as: the standard
+set that uses disk Volumes and runs under any userid; a small set of tests
+that write to tape; another set of tests where you must be root to run them.
+Normally, I run all my tests as non-root and very rarely run the root
+tests. The tests vary in length, and running the full tests including disk
+based testing, tape based testing, autochanger based testing, and multiple
+drive autochanger based testing can take 3 or 4 hours.
+
+\subsection{Setting the Configuration Parameters}
+\index{Setting the Configuration Parameters}
+\index{Parameters!Setting the Configuration}
+\addcontentsline{toc}{subsection}{Setting the Configuration Parameters}
+
+There is nothing you need to change in the source directory.
+
+To begin:
+
+\footnotesize
+\begin{verbatim}
+cd bacula/regress
+\end{verbatim}
+\normalsize
+
+
+The
+very first time you are going to run the regression scripts, you will
+need to create a custom config file for your system.
+We suggest that you start by:
+
+\footnotesize
+\begin{verbatim}
+cp prototype.conf config
+\end{verbatim}
+\normalsize
+
+Then you can edit the {\bf config} file directly.
+
+\footnotesize
+\begin{verbatim}
+
+# Where to get the source to be tested
+BACULA_SOURCE="${HOME}/bacula/bacula"
+
+# Where to send email !!!!! Change me !!!!!!!
+EMAIL=your-name@your-domain.com
+SMTP_HOST="localhost"
+
+# Full "default" path where to find sqlite (no quotes!)
+SQLITE3_DIR=${HOME}/depkgs/sqlite3
+SQLITE_DIR=${HOME}/depkgs/sqlite
+
+TAPE_DRIVE="/dev/nst0"
+# if you don't have an autochanger set AUTOCHANGER to /dev/null
+AUTOCHANGER="/dev/sg0"
+# For two drive tests -- set to /dev/null if you do not have it
+TAPE_DRIVE1="/dev/null"
+
+# This must be the path to the autochanger including its name
+AUTOCHANGER_PATH="/usr/sbin/mtx"
+
+# Set your database here
+#WHICHDB="--with-sqlite=${SQLITE_DIR}"
+#WHICHDB="--with-sqlite3=${SQLITE3_DIR}"
+#WHICHDB="--with-mysql"
+WHICHDB="--with-postgresql"
+
+# Set this to "--with-tcp-wrappers" or "--without-tcp-wrappers"
+TCPWRAPPERS="--with-tcp-wrappers"
+
+# Set this to "" to disable OpenSSL support, "--with-openssl=yes"
+# to enable it, or provide the path to the OpenSSL installation,
+# eg "--with-openssl=/usr/local"
+OPENSSL="--with-openssl"
+
+# You may put your real host name here, but localhost is valid also
# and it has the advantage that it works on a non-networked machine
+HOST="localhost"
+
+\end{verbatim}
+\normalsize
+
+\begin{itemize}
+\item {\bf BACULA\_SOURCE} should be the full path to the Bacula source code
  that you wish to test. It will be loaded, configured, compiled, and
+ installed with the "make setup" command, which needs to be done only
+ once each time you change the source code.
+
\item {\bf EMAIL} should be your email address. Please remember to change this
+ or I will get a flood of unwanted messages. You may or may not want to see
+ these emails. In my case, I don't need them so I direct it to the bit bucket.
+
+\item {\bf SMTP\_HOST} defines where your SMTP server is.
+
\item {\bf SQLITE\_DIR} should be the full path to the sqlite package, which
  must be built before running a Bacula regression test if you are using
  SQLite. This variable is ignored if you are using MySQL or PostgreSQL. To
  use PostgreSQL, set WHICHDB=``\verb{--{with-postgresql}'' in your {\bf config}
  file. For MySQL use WHICHDB=``\verb{--{with-mysql}''.
+
+ The advantage of using SQLite is that it is totally independent of any
+ installation you may have running on your system, and there is no
+ special configuration or authorization that must be done to run it.
+ With both MySQL and PostgreSQL, you must pre-install the packages,
+ initialize them and ensure that you have authorization to access the
+ database and create and delete tables.
+
+\item {\bf TAPE\_DRIVE} is the full path to your tape drive. The base set of
+ regression tests do not use a tape, so this is only important if you want to
+ run the full tests. Set this to /dev/null if you do not have a tape drive.
+
+\item {\bf TAPE\_DRIVE1} is the full path to your second tape drive, if
+ have one. The base set of
+ regression tests do not use a tape, so this is only important if you want to
+ run the full two drive tests. Set this to /dev/null if you do not have a
+ second tape drive.
+
+\item {\bf AUTOCHANGER} is the name of your autochanger control device. Set this to
+ /dev/null if you do not have an autochanger.
+
+\item {\bf AUTOCHANGER\_PATH} is the full path including the program name for
  your autochanger program (normally {\bf mtx}). Leave the default value if you
+ do not have one.
+
+\item {\bf TCPWRAPPERS} defines whether or not you want the ./configure
+ to be performed with tcpwrappers enabled.
+
+\item {\bf OPENSSL} used to enable/disable SSL support for Bacula
+ communications and data encryption.
+
\item {\bf HOST} is the hostname that will be used when building the
  scripts. The Bacula daemons will be named \lt{}HOST\gt{}-dir,
  \lt{}HOST\gt{}-fd, ... It is also the name of the machine used to connect
  to the daemons over the network. Hence the name should either be your real
  hostname (with an appropriate DNS or /etc/hosts entry) or {\bf
  localhost} as it is in the default file.
+
+\item {\bf bin} is the binary location.
+
\item {\bf scripts} is the Bacula scripts location (where the database
  creation scripts, the autochanger handler, etc. are found).
+
+\end{itemize}
+
+\subsection{Building the Test Bacula}
+\index{Building the Test Bacula}
+\index{Bacula!Building the Test}
+\addcontentsline{toc}{subsection}{Building the Test Bacula}
+
+Once the above variables are set, you can build the Makefile by entering:
+
+\footnotesize
+\begin{verbatim}
+./config xxx.conf
+\end{verbatim}
+\normalsize
+
+Where xxx.conf is the name of the conf file containing your system parameters.
+This will build a Makefile from Makefile.in, and you should not need to
do this again unless you want to change the database or other regression
configuration parameters.
+
+
+\subsection{Setting up your SQL engine}
+\index{Setting up your SQL engine}
+\addcontentsline{toc}{subsection}{Setting up your SQL engine}
+If you are using SQLite or SQLite3, there is nothing more to do; you can
+simply run the tests as described in the next section.
+
If you are using MySQL or PostgreSQL, you will need to establish an
account with your database engine for the user name {\bf regress}, and
you will need to manually create a database named {\bf regress} that can be
used by the regress user, which means you must give the regress user
sufficient permissions on the database named regress.
There is no password on the regress account.
+
+You have probably already done this procedure for the user name and
+database named bacula. If not, the manual describes roughly how to
+do it, and the scripts in bacula/regress/build/src/cats named
+create\_mysql\_database, create\_postgresql\_database, grant\_mysql\_privileges,
and grant\_postgresql\_privileges may be of help to you.
+
Generally, to do the above, you will need to run as root to
+be able to create databases and modify permissions within MySQL and
+PostgreSQL.
+
+
+\subsection{Running the Disk Only Regression}
+\index{Regression!Running the Disk Only}
+\index{Running the Disk Only Regression}
+\addcontentsline{toc}{subsection}{Running the Disk Only Regression}
+
+The simplest way to copy the source code, configure it, compile it, link
+it, and run the tests is to use a helper script:
+
+\footnotesize
+\begin{verbatim}
+./do_disk
+\end{verbatim}
+\normalsize
+
+
+
+
+This will run the base set of tests using disk Volumes.
+If you are testing on a
non-Linux machine, several of the tests may not be run. In any case,
+as we add new tests, the number will vary. It will take about 1 hour
+and you don't need to be root
+to run these tests (I run under my regular userid). The result should be
+something similar to:
+
+\footnotesize
+\begin{verbatim}
+Test results
+ ===== auto-label-test OK 12:31:33 =====
+ ===== backup-bacula-test OK 12:32:32 =====
+ ===== bextract-test OK 12:33:27 =====
+ ===== bscan-test OK 12:34:47 =====
+ ===== bsr-opt-test OK 12:35:46 =====
+ ===== compressed-test OK 12:36:52 =====
+ ===== compressed-encrypt-test OK 12:38:18 =====
+ ===== concurrent-jobs-test OK 12:39:49 =====
+ ===== data-encrypt-test OK 12:41:11 =====
+ ===== encrypt-bug-test OK 12:42:00 =====
+ ===== fifo-test OK 12:43:46 =====
+ ===== backup-bacula-fifo OK 12:44:54 =====
+ ===== differential-test OK 12:45:36 =====
+ ===== four-concurrent-jobs-test OK 12:47:39 =====
+ ===== four-jobs-test OK 12:49:22 =====
+ ===== incremental-test OK 12:50:38 =====
+ ===== query-test OK 12:51:37 =====
+ ===== recycle-test OK 12:53:52 =====
+ ===== restore2-by-file-test OK 12:54:53 =====
+ ===== restore-by-file-test OK 12:55:40 =====
+ ===== restore-disk-seek-test OK 12:56:29 =====
+ ===== six-vol-test OK 12:57:44 =====
+ ===== span-vol-test OK 12:58:52 =====
+ ===== sparse-compressed-test OK 13:00:00 =====
+ ===== sparse-test OK 13:01:04 =====
+ ===== two-jobs-test OK 13:02:39 =====
+ ===== two-vol-test OK 13:03:49 =====
+ ===== verify-vol-test OK 13:04:56 =====
+ ===== weird-files2-test OK 13:05:47 =====
+ ===== weird-files-test OK 13:06:33 =====
+ ===== migration-job-test OK 13:08:15 =====
+ ===== migration-jobspan-test OK 13:09:33 =====
+ ===== migration-volume-test OK 13:10:48 =====
+ ===== migration-time-test OK 13:12:59 =====
+ ===== hardlink-test OK 13:13:50 =====
+ ===== two-pool-test OK 13:18:17 =====
+ ===== fast-two-pool-test OK 13:24:02 =====
+ ===== two-volume-test OK 13:25:06 =====
+ ===== incremental-2disk OK 13:25:57 =====
+ ===== 2drive-incremental-2disk OK 13:26:53 =====
+ ===== scratch-pool-test OK 13:28:01 =====
+Total time = 0:57:55 or 3475 secs
+
+\end{verbatim}
+\normalsize
+
+and the working tape tests are run with
+
+\footnotesize
+\begin{verbatim}
+make full_test
+\end{verbatim}
+\normalsize
+
+
+\footnotesize
+\begin{verbatim}
+Test results
+
+ ===== Bacula tape test OK =====
+ ===== Small File Size test OK =====
+ ===== restore-by-file-tape test OK =====
+ ===== incremental-tape test OK =====
+ ===== four-concurrent-jobs-tape OK =====
+ ===== four-jobs-tape OK =====
+\end{verbatim}
+\normalsize
+
Each separate test is self-contained in that it initializes to run Bacula from
scratch (i.e. a newly created database). It will also kill any Bacula session
that is currently running. In addition, it uses ports 8101, 8102, and 8103 so
that it does not interfere with a production system.
+
+Alternatively, you can do the ./do\_disk work by hand with:
+
+\footnotesize
+\begin{verbatim}
+make setup
+\end{verbatim}
+\normalsize
+
+The above will then copy the source code within
+the regression tree (in directory regress/build), configure it, and build it.
+There should be no errors. If there are, please correct them before
+continuing. From this point on, as long as you don't change the Bacula
+source code, you should not need to repeat any of the above steps. If
+you pull down a new version of the source code, simply run {\bf make setup}
+again.
+
+
+Once Bacula is built, you can run the basic disk only non-root regression test
+by entering:
+
+\footnotesize
+\begin{verbatim}
+make test
+\end{verbatim}
+\normalsize
+
+
+\subsection{Other Tests}
+\index{Other Tests}
+\index{Tests!Other}
+\addcontentsline{toc}{subsection}{Other Tests}
+
There are a number of other tests that can be run as well. All the tests are
simple shell scripts kept in the regress directory. For example, ``make
test'' simply executes {\bf ./all-non-root-tests}. The other tests, which
are invoked by directly running the script, are:
+
+\begin{description}
+
\item [all-non-root-tests]
  \index{all-non-root-tests}
  All non-tape tests not requiring root. This is the standard set of tests,
which, in general, back up some data, then restore it, and finally compare the
restored data with the original data.
+
+\item [all-root-tests]
+ \index{all-root-tests}
+ All non-tape tests requiring root permission. These are a relatively small
+number of tests that require running as root. The amount of data backed up
+can be quite large. For example, one test backs up /usr, another backs up
+/etc. One or more of these tests reports an error -- I'll fix it one day.
+
+\item [all-non-root-tape-tests]
+ \index{all-non-root-tape-tests}
  All tape tests not requiring root. There are currently three tests, all run
without being root, that back up to a tape. The first two tests use one volume,
+and the third test requires an autochanger, and uses two volumes. If you
+don't have an autochanger, then this script will probably produce an error.
+
+\item [all-tape-and-file-tests]
+ \index{all-tape-and-file-tests}
+ All tape and file tests not requiring root. This includes just about
+everything, and I don't run it very often.
+\end{description}
+
+\subsection{If a Test Fails}
+\index{Fails!If a Test}
+\index{If a Test Fails}
+\addcontentsline{toc}{subsection}{If a Test Fails}
+
If one or more tests fail, the line output will be similar to:
+
+\footnotesize
+\begin{verbatim}
+ !!!!! concurrent-jobs-test failed!!! !!!!!
+\end{verbatim}
+\normalsize
+
+If you want to determine why the test failed, you will need to rerun the
+script with the debug output turned on. You do so by defining the
+environment variable {\bf REGRESS\_DEBUG} with commands such as:
+
+\begin{verbatim}
+REGRESS_DEBUG=1
+export REGRESS_DEBUG
+\end{verbatim}
+
+Then from the "regress" directory (all regression scripts assume that
+you have "regress" as the current directory), enter:
+
+\begin{verbatim}
+tests/test-name
+\end{verbatim}
+
+where test-name should be the name of a test script -- for example:
+{\bf tests/backup-bacula-test}.
+
+\section{Testing a Binary Installation}
+\index{Test!Testing a Binary Installation}
+
If you have installed your Bacula from a binary release (such as rpms or debs),
you can still run regression tests on it.
First, make sure that your regression {\bf config} file uses the same catalog backend as
your installed binaries. Then define the \texttt{bin} and \texttt{scripts} variables
in your config file.
+
+Example:
+\begin{verbatim}
+bin=/opt/bacula/bin
+scripts=/opt/bacula/scripts
+\end{verbatim}
+
The \texttt{./scripts/prepare-other-loc} script will tweak the regress scripts to use
+your binary location. You will need to run it manually once before you run any
+regression tests.
+
+\begin{verbatim}
+$ ./scripts/prepare-other-loc
+$ ./tests/backup-bacula-test
+...
+\end{verbatim}
+
In this mode, all regression tests must be run by hand or by calling the
driver scripts that begin with {\bf all\_...}, such as {\bf all\_disk\_tests}
or {\bf ./all\_test}. None of the
{\bf ./do\_disk}, {\bf ./do\_all}, or {\bf ./nightly...} scripts will work.
+
+If you want to switch back to running the regression scripts from source, first
+remove the {\bf bin} and {\bf scripts} variables from your {\bf config} file and
+rerun the {\bf make setup} step.
+
+\section{Running a Single Test}
+\index{Running a Single Test}
+\addcontentsline{toc}{section}{Running a Single Test}
+
+If you wish to run a single test, you can simply:
+
+\begin{verbatim}
+cd regress
+tests/<name-of-test>
+\end{verbatim}
+
+or, if the source code has been updated, you would do:
+
+\begin{verbatim}
+cd bacula
+git pull
+cd regress
+make setup
+tests/backup-to-null
+\end{verbatim}
+
+
+\section{Writing a Regression Test}
+\index{Test!Writing a Regression}
+\index{Writing a Regression Test}
+\addcontentsline{toc}{section}{Writing a Regression Test}
+
+Any developer, who implements a major new feature, should write a regression
+test that exercises and validates the new feature. Each regression test is a
+complete test by itself. It terminates any running Bacula, initializes the
+database, starts Bacula, then runs the test by using the console program.
+
+\subsection{Running the Tests by Hand}
+\index{Hand!Running the Tests by}
+\index{Running the Tests by Hand}
+\addcontentsline{toc}{subsection}{Running the Tests by Hand}
+
+You can run any individual test by hand by cd'ing to the {\bf regress}
+directory and entering:
+
+\footnotesize
+\begin{verbatim}
+tests/<test-name>
+\end{verbatim}
+\normalsize
+
+\subsection{Directory Structure}
+\index{Structure!Directory}
+\index{Directory Structure}
+\addcontentsline{toc}{subsection}{Directory Structure}
+
+The directory structure of the regression tests is:
+
+\footnotesize
+\begin{verbatim}
+ regress - Makefile, scripts to start tests
+ |------ scripts - Scripts and conf files
+ |-------tests - All test scripts are here
+ |
+ |------------------ -- All directories below this point are used
+ | for testing, but are created from the
+ | above directories and are removed with
+ | "make distclean"
+ |
+ |------ bin - This is the install directory for
+ | Bacula to be used testing
+ |------ build - Where the Bacula source build tree is
+ |------ tmp - Most temp files go here
+ |------ working - Bacula working directory
+ |------ weird-files - Weird files used in two of the tests.
+\end{verbatim}
+\normalsize
+
+\subsection{Adding a New Test}
+\index{Adding a New Test}
+\index{Test!Adding a New}
+\addcontentsline{toc}{subsection}{Adding a New Test}
+
+If you want to write a new regression test, it is best to start with one of
+the existing test scripts, and modify it to do the new test.
+
+When adding a new test, be extremely careful about adding anything to any of
+the daemons' configuration files. The reason is that it may change the prompts
+that are sent to the console. For example, adding a Pool means that the
+current scripts, which assume that Bacula automatically selects a Pool, will
+now be presented with a new prompt, so the test will fail. If you need to
+enhance the configuration files, consider making your own versions.
+
+\subsection{Running a Test Under The Debugger}
+\index{Debugger}
+\addcontentsline{toc}{subsection}{Running a Test Under The Debugger}
+You can run a test under the debugger (actually run a Bacula daemon
+under the debugger) by first setting the environment variable
+{\bf REGRESS\_WAIT} with commands such as:
+
+\begin{verbatim}
+REGRESS_WAIT=1
+export REGRESS_WAIT
+\end{verbatim}
+
+Then execute the script. When the script prints the following line:
+
+\begin{verbatim}
+Start Bacula under debugger and enter anything when ready ...
+\end{verbatim}
+
+You start the Bacula component you want to run under the debugger in a
+different shell window. For example:
+
+\begin{verbatim}
+cd .../regress/bin
+gdb bacula-sd
+(possibly set breakpoints, ...)
+run -s -f
+\end{verbatim}
+
+Then enter any character in the window with the above message.
+An error message will appear saying that the daemon you are debugging
+is already running, which is the case. You can simply ignore the
+error message.
--- /dev/null
+%%
+%%
+
+\addcontentsline{lof}{figure}{Smart Memory Allocation with Orphaned Buffer
+Detection}
+\includegraphics{\idir smartall.eps}
+
+\chapter{Smart Memory Allocation}
+\label{_ChapterStart4}
+\index{Detection!Smart Memory Allocation With Orphaned Buffer }
+\index{Smart Memory Allocation With Orphaned Buffer Detection }
+\addcontentsline{toc}{section}{Smart Memory Allocation With Orphaned Buffer
+Detection}
+
+Few things are as embarrassing as a program that leaks, yet few errors are so
+easy to commit or as difficult to track down in a large, complicated program
+as failure to release allocated memory. SMARTALLOC replaces the standard C
+library memory allocation functions with versions which keep track of buffer
+allocations and releases and report all orphaned buffers at the end of program
+execution. By including this package in your program during development and
+testing, you can identify code that loses buffers right when it's added and
+most easily fixed, rather than as part of a crisis debugging push when the
+problem is identified much later in the testing cycle (or even worse, when the
+code is in the hands of a customer). When program testing is complete, simply
+recompiling with different flags removes SMARTALLOC from your program,
+permitting it to run without speed or storage penalties.
+
+In addition to detecting orphaned buffers, SMARTALLOC also helps to find other
+common problems in management of dynamic storage including storing before the
+start or beyond the end of an allocated buffer, referencing data through a
+pointer to a previously released buffer, attempting to release a buffer twice
+or releasing storage not obtained from the allocator, and assuming the initial
+contents of storage allocated by functions that do not guarantee a known
+value. SMARTALLOC's checking does not usually add a large amount of overhead
+to a program (except for programs which use {\tt realloc()} extensively; see
+below). SMARTALLOC focuses on proper storage management rather than internal
+consistency of the heap as checked by the malloc\_debug facility available on
+some systems. SMARTALLOC does not conflict with malloc\_debug and both may be
+used together, if you wish. SMARTALLOC makes no assumptions regarding the
+internal structure of the heap and thus should be compatible with any C
+language implementation of the standard memory allocation functions.
+
+\subsection{ Installing SMARTALLOC}
+\index{SMARTALLOC!Installing }
+\index{Installing SMARTALLOC }
+\addcontentsline{toc}{subsection}{Installing SMARTALLOC}
+
+SMARTALLOC is provided as a Zipped archive,
+\elink{smartall.zip}{http://www.fourmilab.ch/smartall/smartall.zip}; see the
+download instructions below.
+
+To install SMARTALLOC in your program, simply add the statement:
+
+{\tt \#include "smartall.h"}
+
+to every C program file which calls any of the memory allocation functions
+({\tt malloc}, {\tt calloc}, {\tt free}, etc.). SMARTALLOC must be used for
+all memory allocation within a program, so add this include to the master
+include file for your entire program, if you have such a thing. Next, define
+the symbol SMARTALLOC in the compilation before the inclusion of smartall.h. I
+usually do this by having my Makefile add the ``{\tt -DSMARTALLOC}'' option to
+the C compiler for non-production builds. You can define the symbol manually,
+if you prefer, by adding the statement:
+
+{\tt \#define SMARTALLOC}
+
+At the point where your program is all done and ready to relinquish control to
+the operating system, add the call:
+
+{\tt \ \ \ \ \ \ \ \ sm\_dump(}{\it datadump}{\tt );}
+
+where {\it datadump} specifies whether the contents of orphaned buffers are to
+be dumped in addition to printing their size and place of allocation. The data
+are dumped only if {\it datadump} is nonzero, so most programs will normally
+use ``{\tt sm\_dump(0);}''. If a mysterious orphaned buffer appears that can't
+be identified from the information this prints about it, replace the statement
+with ``{\tt sm\_dump(1);}''. Usually the dump of the buffer's data will
+furnish the additional clues you need to excavate and extirpate the elusive
+error that left the buffer allocated.
+
+Finally, add the files ``smartall.h'' and ``smartall.c'' from this release to
+your source directory, make dependencies, and linker input. You needn't make
+inclusion of smartall.c in your link optional; if compiled with SMARTALLOC not
+defined it generates no code, so you may always include it knowing it will
+waste no storage in production builds. Now when you run your program, if it
+leaves any buffers around when it's done, each will be reported by {\tt
+sm\_dump()} on stderr as follows:
+
+\footnotesize
+\begin{verbatim}
+Orphaned buffer: 120 bytes allocated at line 50 of gutshot.c
+\end{verbatim}
+\normalsize
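+
+As a minimal sketch (the file name {\tt demo.c} and the allocations are
+hypothetical, invented for illustration), a program instrumented with
+SMARTALLOC might look like this:
+
+\footnotesize
+\begin{verbatim}
+/* demo.c -- hypothetical minimal SMARTALLOC example.
+   Build for testing with:  cc -DSMARTALLOC demo.c smartall.c -o demo  */
+#include <stdlib.h>
+#include "smartall.h"
+
+int main(void)
+{
+    char *kept = (char *) malloc(120);   /* never released: an orphan */
+    char *tidy = (char *) malloc(64);
+
+    kept[0] = 'x';                       /* use the buffer */
+    free(tidy);                          /* properly released */
+    sm_dump(0);                          /* report orphans, no data dump */
+    return 0;
+}
+\end{verbatim}
+\normalsize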
+
+\subsection{ Squelching a SMARTALLOC}
+\index{SMARTALLOC!Squelching a }
+\index{Squelching a SMARTALLOC }
+\addcontentsline{toc}{subsection}{Squelching a SMARTALLOC}
+
+Usually, when you first install SMARTALLOC in an existing program you'll find
+it nattering about lots of orphaned buffers. Some of these turn out to be
+legitimate errors, but some are storage allocated during program
+initialisation that, while dynamically allocated, is logically static storage
+not intended to be released. Of course, you can get rid of the complaints
+about these buffers by adding code to release them, but by doing so you're
+adding unnecessary complexity and code size to your program just to silence
+the nattering of a SMARTALLOC, so an escape hatch is provided to eliminate the
+need to release these buffers.
+
+Normally all storage allocated with the functions {\tt malloc()}, {\tt
+calloc()}, and {\tt realloc()} is monitored by SMARTALLOC. If you make the
+function call:
+
+\footnotesize
+\begin{verbatim}
+ sm_static(1);
+\end{verbatim}
+\normalsize
+
+you declare that subsequent storage allocated by {\tt malloc()}, {\tt
+calloc()}, and {\tt realloc()} should not be considered orphaned if found to
+be allocated when {\tt sm\_dump()} is called. I use a call on ``{\tt
+sm\_static(1);}'' before I allocate things like program configuration tables
+so I don't have to add code to release them at end of program time. After
+allocating unmonitored data this way, be sure to add a call to:
+
+\footnotesize
+\begin{verbatim}
+ sm_static(0);
+\end{verbatim}
+\normalsize
+
+to resume normal monitoring of buffer allocations. Buffers allocated while
+{\tt sm\_static(1)} is in effect are not checked for having been orphaned but
+all the other safeguards provided by SMARTALLOC remain in effect. You may
+release such buffers, if you like; but you don't have to.
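+
+For example (a hypothetical sketch; {\tt init\_config()} and its table are
+invented for illustration and are not part of SMARTALLOC):
+
+\footnotesize
+\begin{verbatim}
+#include <stdlib.h>
+#include "smartall.h"
+
+static char **config_table;
+
+void init_config(int nentries)
+{
+    sm_static(1);     /* logically static: exempt from orphan checks */
+    config_table = (char **) malloc(nentries * sizeof(char *));
+    sm_static(0);     /* resume normal orphan monitoring */
+}
+\end{verbatim}
+\normalsize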
+
+\subsection{ Living with Libraries}
+\index{Libraries!Living with }
+\index{Living with Libraries }
+\addcontentsline{toc}{subsection}{Living with Libraries}
+
+Some library functions for which source code is unavailable may gratuitously
+allocate and return buffers that contain their results, or require you to pass
+them buffers which they subsequently release. If you have source code for the
+library, by far the best approach is to simply install SMARTALLOC in it,
+particularly since this kind of ill-structured dynamic storage management is
+the source of so many storage leaks. Without source code, however, there's no
+option but to provide a way to bypass SMARTALLOC for the buffers the library
+allocates and/or releases with the standard system functions.
+
+For each function {\it xxx} redefined by SMARTALLOC, a corresponding routine
+named ``{\tt actually}{\it xxx}'' is furnished which provides direct access to
+the underlying system function, as follows:
+
+\begin{quote}
+
+\begin{longtable}{ll}
+\multicolumn{1}{l }{\bf Standard function } & \multicolumn{1}{l }{\bf Direct
+access function } \\
+{{\tt malloc(}{\it size}{\tt )} } & {{\tt actuallymalloc(}{\it size}{\tt )}
+} \\
+{{\tt calloc(}{\it nelem}{\tt ,} {\it elsize}{\tt )} } & {{\tt
+actuallycalloc(}{\it nelem}, {\it elsize}{\tt )} } \\
+{{\tt realloc(}{\it ptr}{\tt ,} {\it size}{\tt )} } & {{\tt
+actuallyrealloc(}{\it ptr}, {\it size}{\tt )} } \\
+{{\tt free(}{\it ptr}{\tt )} } & {{\tt actuallyfree(}{\it ptr}{\tt )} }
+
+\end{longtable}
+
+\end{quote}
+
+For example, suppose there exists a system library function named ``{\tt
+getimage()}'' which reads a raster image file and returns the address of a
+buffer containing it. Since the library routine allocates the image directly
+with {\tt malloc()}, you can't use SMARTALLOC's {\tt free()}, as that call
+expects information placed in the buffer by SMARTALLOC's special version of
+{\tt malloc()}, and hence would report an error. To release the buffer you
+should call {\tt actuallyfree()}, as in this code fragment:
+
+\footnotesize
+\begin{verbatim}
+ struct image *ibuf = getimage("ratpack.img");
+ display_on_screen(ibuf);
+ actuallyfree(ibuf);
+\end{verbatim}
+\normalsize
+
+Conversely, suppose we are to call a library function, ``{\tt putimage()}'',
+which writes an image buffer into a file and then releases the buffer with
+{\tt free()}. Since the system {\tt free()} is being called, we can't pass a
+buffer allocated by SMARTALLOC's allocation routines, as it contains special
+information that the system {\tt free()} doesn't expect to be there. The
+following code uses {\tt actuallymalloc()} to obtain the buffer passed to such
+a routine.
+
+\footnotesize
+\begin{verbatim}
+ struct image *obuf =
+ (struct image *) actuallymalloc(sizeof(struct image));
+ dump_screen_to_image(obuf);
+ putimage("scrdump.img", obuf); /* putimage() releases obuf */
+\end{verbatim}
+\normalsize
+
+It's unlikely you'll need any of the ``actually'' calls except under very odd
+circumstances (in four products and three years, I've only needed them once),
+but they're there for the rare occasions that demand them. Don't use them to
+subvert the error checking of SMARTALLOC; if you want to disable orphaned
+buffer detection, use the {\tt sm\_static(1)} mechanism described above. That
+way you don't forfeit all the other advantages of SMARTALLOC as you do when
+using {\tt actuallymalloc()} and {\tt actuallyfree()}.
+
+\subsection{ SMARTALLOC Details}
+\index{SMARTALLOC Details }
+\index{Details!SMARTALLOC }
+\addcontentsline{toc}{subsection}{SMARTALLOC Details}
+
+When you include ``smartall.h'' and define SMARTALLOC, the following standard
+system library functions are redefined with the \#define mechanism to call
+corresponding functions within smartall.c instead. (For details of the
+redefinitions, please refer to smartall.h.)
+
+\footnotesize
+\begin{verbatim}
+ void *malloc(size_t size)
+ void *calloc(size_t nelem, size_t elsize)
+ void *realloc(void *ptr, size_t size)
+ void free(void *ptr)
+ void cfree(void *ptr)
+\end{verbatim}
+\normalsize
+
+{\tt cfree()} is a historical artifact identical to {\tt free()}.
+
+In addition to allocating storage in the same way as the standard library
+functions, the SMARTALLOC versions expand the buffers they allocate to include
+information that identifies where each buffer was allocated and to chain all
+allocated buffers together. When a buffer is released, it is removed from the
+allocated buffer chain. A call on {\tt sm\_dump()} is able, by scanning the
+chain of allocated buffers, to find all orphaned buffers. Buffers allocated
+while {\tt sm\_static(1)} is in effect are specially flagged so that, despite
+appearing on the allocated buffer chain, {\tt sm\_dump()} will not deem them
+orphans.
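+
+The bookkeeping can be pictured roughly as follows. This is only a sketch:
+apart from {\tt abufhead} and {\tt abfname}, which smartall.c really uses,
+the field names are illustrative; see smartall.h for the actual layout.
+
+\footnotesize
+\begin{verbatim}
+/* Illustrative sketch of the header SMARTALLOC prepends to each buffer */
+struct abufhead {
+    struct abufhead *next, *prev;  /* chain of all allocated buffers */
+    size_t           ansize;       /* size the caller requested */
+    const char      *abfname;      /* file name of the allocating call */
+    int              abline;       /* line number of the allocating call */
+    /* user data follows; a sentinel byte trails the user data area */
+};
+\end{verbatim}
+\normalsize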
+
+When a buffer is allocated by {\tt malloc()} or expanded with {\tt realloc()},
+all bytes of newly allocated storage are set to the hexadecimal value 0x55
+(alternating one and zero bits). Note that for {\tt realloc()} this applies
+only to the bytes added at the end of buffer; the original contents of the
+buffer are not modified. Initializing allocated storage to a distinctive
+nonzero pattern is intended to catch code that erroneously assumes newly
+allocated buffers are cleared to zero; in fact their contents are random. The
+{\tt calloc()} function, defined as returning a buffer cleared to zero,
+continues to zero its buffers under SMARTALLOC.
+
+Buffers obtained with the SMARTALLOC functions contain a special sentinel byte
+at the end of the user data area. This byte is set to a special key value
+based upon the buffer's memory address. When the buffer is released, the key
+is tested and if it has been overwritten an assertion in the {\tt free}
+function will fail. This catches incorrect program code that stores beyond the
+storage allocated for the buffer. At {\tt free()} time the queue links are
+also validated and an assertion failure will occur if the program has
+destroyed them by storing before the start of the allocated storage.
+
+In addition, when a buffer is released with {\tt free()}, its contents are
+immediately destroyed by overwriting them with the hexadecimal pattern 0xAA
+(alternating bits, the one's complement of the initial value pattern). This
+will usually trip up code that keeps a pointer to a buffer that's been freed
+and later attempts to reference data within the released buffer. Incredibly,
+this is {\it legal} in the standard Unix memory allocation package, which
+permits programs to {\tt free()} buffers, then raise them from the grave with {\tt
+realloc()}. Such program ``logic'' should be fixed, not accommodated, and
+SMARTALLOC brooks no such ``Lazarus buffer'' nonsense.
+
+Some C libraries allow a zero size argument in calls to {\tt malloc()}. Since
+this is far more likely to indicate a program error than a defensible
+programming stratagem, SMARTALLOC disallows it with an assertion.
+
+When the standard library {\tt realloc()} function is called to expand a
+buffer, it attempts to expand the buffer in place if possible, moving it only
+if necessary. Because SMARTALLOC must place its own private storage in the
+buffer and also to aid in error detection, its version of {\tt realloc()}
+always moves and copies the buffer except in the trivial case where the size
+of the buffer is not being changed. By forcing the buffer to move on every
+call and destroying the contents of the old buffer when it is released,
+SMARTALLOC traps programs which keep pointers into a buffer across a call on
+{\tt realloc()} which may move it. This strategy may prove very costly to
+programs which make extensive use of {\tt realloc()}. If this proves to be a
+problem, such programs may wish to use {\tt actuallymalloc()}, {\tt
+actuallyrealloc()}, and {\tt actuallyfree()} for such frequently-adjusted
+buffers, trading error detection for performance. Although not specified in
+the System V Interface Definition, many C library implementations of {\tt
+realloc()} permit an old buffer argument of NULL, causing {\tt realloc()} to
+allocate a new buffer. The SMARTALLOC version permits this.
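+
+The following fragment illustrates the kind of latent bug this policy
+exposes (the sizes are arbitrary):
+
+\footnotesize
+\begin{verbatim}
+char *buf = (char *) malloc(100);
+char *p   = buf + 50;               /* pointer into the buffer */
+
+buf = (char *) realloc(buf, 200);   /* SMARTALLOC always moves the buffer */
+*p  = 'x';                          /* stores into the old, 0xAA-filled copy */
+\end{verbatim}
+\normalsize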
+
+\subsection{ When SMARTALLOC is Disabled}
+\index{When SMARTALLOC is Disabled }
+\index{Disabled!When SMARTALLOC is }
+\addcontentsline{toc}{subsection}{When SMARTALLOC is Disabled}
+
+When SMARTALLOC is disabled by compiling a program with the symbol SMARTALLOC
+not defined, calls on the functions otherwise redefined by SMARTALLOC go
+directly to the system functions. In addition, compile-time definitions
+translate calls on the ``{\tt actually}...{\tt ()}'' functions into the
+corresponding library calls; ``{\tt actuallymalloc(100)}'', for example,
+compiles into ``{\tt malloc(100)}''. The two special SMARTALLOC functions,
+{\tt sm\_dump()} and {\tt sm\_static()}, are defined to generate no code
+(hence the null statement). Finally, if SMARTALLOC is not defined, compilation
+of the file smartall.c generates no code or data at all, effectively removing
+it from the program even if named in the link instructions.
+
+Thus, except for unusual circumstances, a program that works with SMARTALLOC
+defined for testing should require no changes when built without it for
+production release.
+
+\subsection{ The {\tt alloc()} Function}
+\index{Function!alloc }
+\index{Alloc() Function }
+\addcontentsline{toc}{subsection}{alloc() Function}
+
+Many programs I've worked on use very few direct calls to {\tt malloc()},
+using the identically declared {\tt alloc()} function instead. Alloc detects
+out-of-memory conditions and aborts, removing the need for error checking on
+every call of {\tt malloc()} (and the temptation to skip checking for
+out-of-memory).
+
+As a convenience, SMARTALLOC supplies a compatible version of {\tt alloc()} in
+the file alloc.c, with its definition in the file alloc.h. This version of
+{\tt alloc()} is sensitive to the definition of SMARTALLOC and cooperates with
+SMARTALLOC's orphaned buffer detection. In addition, when SMARTALLOC is
+defined and {\tt alloc()} detects an out of memory condition, it takes
+advantage of the SMARTALLOC diagnostic information to identify the file and
+line number of the call on {\tt alloc()} that failed.
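+
+Call sites therefore need no error checks. A sketch (the {\tt widget}
+structure is invented for illustration):
+
+\footnotesize
+\begin{verbatim}
+#include "alloc.h"
+
+struct widget { int id; };
+
+struct widget *make_widget(void)
+{
+    /* alloc() aborts (reporting file and line under SMARTALLOC) on
+       out-of-memory, so the return value never needs a NULL check */
+    struct widget *w = (struct widget *) alloc(sizeof(struct widget));
+    w->id = 0;
+    return w;
+}
+\end{verbatim}
+\normalsize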
+
+\subsection{ Overlays and Underhandedness}
+\index{Underhandedness!Overlays and }
+\index{Overlays and Underhandedness }
+\addcontentsline{toc}{subsection}{Overlays and Underhandedness}
+
+String constants in the C language are considered to be static arrays of
+characters accessed through a pointer constant. The arrays are potentially
+writable even though their pointer is a constant. SMARTALLOC uses the
+{\tt \_\_FILE\_\_} compile-time definition to obtain the name of the file in
+which a call on buffer allocation was performed. Rather than reserve space in
+a buffer to save this information, SMARTALLOC simply stores the pointer to the
+compiled-in text of the file name. This works fine as long as the program does
+not overlay its data among modules. If data are overlayed, the area of memory
+which contained the file name at the time it was saved in the buffer may
+contain something else entirely when {\tt sm\_dump()} gets around to using the
+pointer to edit the file name which allocated the buffer.
+
+If you want to use SMARTALLOC in a program with overlayed data, you'll have to
+modify smartall.c to either copy the file name to a fixed-length field added
+to the {\tt abufhead} structure, or else allocate storage with {\tt malloc()},
+copy the file name there, and set the {\tt abfname} pointer to that buffer,
+then remember to release the buffer in {\tt sm\_free}. Either of these
+approaches is wasteful of storage and time, and should be considered only if
+there is no alternative. Since most initial debugging is done in non-overlayed
+environments, the restrictions on SMARTALLOC with data overlaying may never
+prove a problem. Note that conventional overlaying of code, by far the most
+common form of overlaying, poses no problems for SMARTALLOC; you need only be
+concerned if you're using exotic tools for data overlaying on MS-DOS or other
+address-space-challenged systems.
+
+Since a C language ``constant'' string can actually be written into, most C
+compilers generate a unique copy of each string used in a module, even if the
+same constant string appears many times. In modules that contain many calls on
+allocation functions, this results in substantial wasted storage for the
+strings that identify the file name. If your compiler permits optimization of
+multiple occurrences of constant strings, enabling this mode will eliminate
+the overhead for these strings. Of course, it's up to you to make sure
+choosing this compiler mode won't wreak havoc on some other part of your
+program.
+
+\subsection{ Test and Demonstration Program}
+\index{Test and Demonstration Program }
+\index{Program!Test and Demonstration }
+\addcontentsline{toc}{subsection}{Test and Demonstration Program}
+
+A test and demonstration program, smtest.c, is supplied with SMARTALLOC. You
+can build this program with the Makefile included. Please refer to the
+comments in smtest.c and the Makefile for information on this program. If
+you're attempting to use SMARTALLOC on a new machine or with a new compiler or
+operating system, it's a wise first step to check it out with smtest.
+
+\subsection{ Invitation to the Hack}
+\index{Hack!Invitation to the }
+\index{Invitation to the Hack }
+\addcontentsline{toc}{subsection}{Invitation to the Hack}
+
+SMARTALLOC is not intended to be a panacea for storage management problems,
+nor is it universally applicable or effective; it's another weapon in the
+arsenal of the defensive professional programmer attempting to create reliable
+products. It represents the current state of evolution of expedient debug code
+which has been used in several commercial software products which have,
+collectively, sold more than a third of a million copies in the retail market,
+and can be expected to continue to develop through time as it is applied to
+ever more demanding projects.
+
+The version of SMARTALLOC here has been tested on a Sun SPARCStation, Silicon
+Graphics Indigo2, and on MS-DOS using both Borland and Microsoft C. Moving
+from compiler to compiler requires the usual small changes to resolve disputes
+about prototyping of functions, whether the type returned by buffer allocation
+is {\tt char\ *} or {\tt void\ *}, and so forth, but following those changes
+it works in a variety of environments. I hope you'll find SMARTALLOC as useful
+for your projects as I've found it in mine.
+
+\section{\elink{Download smartall.zip}{http://www.fourmilab.ch/smartall/smartall.zip}
+(Zipped archive)}
+\index{Archive! Download smartall.zip Zipped }
+\index{ Download smartall.zip (Zipped archive) }
+\addcontentsline{toc}{section}{ Download smartall.zip (Zipped archive)}
+
+SMARTALLOC is provided as
+\elink{smartall.zip}{http://www.fourmilab.ch/smartall/smartall.zip}, a
+\elink{Zipped}{http://www.pkware.com/} archive containing source code,
+documentation, and a {\tt Makefile} to build the software under Unix.
+
+\subsection{ Copying}
+\index{Copying }
+\addcontentsline{toc}{subsection}{Copying}
+
+\begin{quote}
+SMARTALLOC is in the public domain. Permission to use, copy, modify, and
+distribute this software and its documentation for any purpose and without fee
+is hereby granted, without any conditions or restrictions. This software is
+provided ``as is'' without express or implied warranty.
+\end{quote}
+
+{\it
+\elink{by John Walker}{http://www.fourmilab.ch}
+October 30th, 1998 }
--- /dev/null
+%%
+%%
+
+\chapter{Storage Daemon Design}
+\label{_ChapterStart3}
+\index{Storage Daemon Design }
+\index{Design!Storage Daemon }
+\addcontentsline{toc}{section}{Storage Daemon Design}
+
+This chapter is intended to be a technical discussion of the Storage daemon
+services and as such is not targeted at end users but rather at developers and
+system administrators that want or need to know more of the working details of
+{\bf Bacula}.
+
+This document is somewhat out of date.
+
+\section{SD Design Introduction}
+\index{Introduction!SD Design }
+\index{SD Design Introduction }
+\addcontentsline{toc}{section}{SD Design Introduction}
+
+The Bacula Storage daemon provides storage resources to a Bacula installation.
+An individual Storage daemon is associated with a physical permanent storage
+device (for example, a tape drive, CD writer, tape changer or jukebox, etc.),
+and may employ auxiliary storage resources (such as space on a hard disk file
+system) to increase performance and/or optimize use of the permanent storage
+medium.
+
+Any number of storage daemons may be run on a given machine, each associated
+with an individual storage device connected to it, and Bacula operations may
+employ storage daemons on any number of hosts connected by a network, local or
+remote. The ability to employ remote storage daemons (with appropriate
+security measures) permits automatic off-site backup, possibly to publicly
+available backup repositories.
+
+\section{SD Development Outline}
+\index{Outline!SD Development }
+\index{SD Development Outline }
+\addcontentsline{toc}{section}{SD Development Outline}
+
+In order to provide a high performance backup and restore solution that scales
+to very large capacity devices and networks, the storage daemon must be able
+to extract as much performance as possible from the storage device and network
+with which it interacts. In order to accomplish this, storage daemons will eventually
+have to sacrifice simplicity and painless portability in favor of techniques
+which improve performance. My goal in designing the storage daemon protocol
+and developing the initial prototype storage daemon is to provide for these
+additions in the future, while implementing an initial storage daemon which is
+very simple and portable to almost any POSIX-like environment. This original
+storage daemon (and its evolved descendants) can serve as a portable solution
+for non-demanding backup requirements (such as single servers of modest size,
+individual machines, or small local networks), while serving as the starting
+point for development of higher performance configurable derivatives which use
+techniques such as POSIX threads, shared memory, asynchronous I/O, buffering
+to high-speed intermediate media, and support for tape changers and jukeboxes.
+
+
+\section{SD Connections and Sessions}
+\index{Sessions!SD Connections and }
+\index{SD Connections and Sessions }
+\addcontentsline{toc}{section}{SD Connections and Sessions}
+
+A client connects to a storage server by initiating a conventional TCP
+connection. The storage server accepts the connection unless its maximum
+number of connections has been reached or the specified host is not granted
+access to the storage server. Once a connection has been opened, the client
+may make any number of Query requests, and/or initiate (if permitted), one or
+more Append sessions (which transmit data to be stored by the storage daemon)
+and/or Read sessions (which retrieve data from the storage daemon).
+
+Most requests and replies sent across the connection are simple ASCII strings,
+with status replies prefixed by a four digit status code for easier parsing.
+Binary data appear in blocks stored and retrieved from the storage. Any
+request may result in a single-line status reply of ``{\tt 3201\ Notification\
+pending}'', which indicates the client must send a ``Query notification''
+request to retrieve one or more notifications posted to it. Once the
+notifications have been returned, the client may then resubmit the request
+which resulted in the 3201 status.
+
+The following descriptions omit common error codes, yet to be defined, which
+can occur from most or many requests due to events like media errors,
+restarting of the storage daemon, etc. These details will be filled in, along
+with a comprehensive list of status codes and the requests that can
+produce them, in an update to this document.
+
+\subsection{SD Append Requests}
+\index{Requests!SD Append }
+\index{SD Append Requests }
+\addcontentsline{toc}{subsection}{SD Append Requests}
+
+\begin{description}
+
+\item [{append open session = \lt{}JobId\gt{} [ \lt{}Password\gt{} ] }]
+ \index{SPAN class }
+ A data append session is opened with the Job ID given by {\it JobId} with
+client password (if required) given by {\it Password}. If the session is
+successfully opened, a status of {\tt 3000\ OK} is returned with a ``{\tt
+ticket\ =\ }{\it number}'' reply used to identify subsequent messages in the
+session. If too many sessions are open, or a conflicting session (for
+example, a read in progress when simultaneous read and append sessions are
+not permitted), a status of ``{\tt 3502\ Volume\ busy}'' is returned. If no
+volume is mounted, or the volume mounted cannot be appended to, a status of
+``{\tt 3503\ Volume\ not\ mounted}'' is returned.
+
+\item [append data = \lt{}ticket-number\gt{} ]
+ \index{SPAN class }
+ If the append data is accepted, a status of {\tt 3000\ OK data address =
+\lt{}IPaddress\gt{} port = \lt{}port\gt{}} is returned, where the {\tt
+IPaddress} and {\tt port} specify the IP address and port number of the data
+channel. Error status codes are {\tt 3504\ Invalid\ ticket\ number} and {\tt
+3505\ Session\ aborted}, the latter of which indicates the entire append
+session has failed due to a daemon or media error.
+
+Once the File daemon has established the connection to the data channel
+opened by the Storage daemon, it will transfer a header packet followed by
+any number of data packets. The header packet is of the form:
+
+{\tt \lt{}file-index\gt{} \lt{}stream-id\gt{} \lt{}info\gt{}}
+
+The details are specified in the
+\ilink{Daemon Protocol}{_ChapterStart2} section of this
+document.
+
+\item [*append abort session = \lt{}ticket-number\gt{} ]
+ \index{SPAN class }
+ The open append session with ticket {\it ticket-number} is aborted; any blocks
+not yet written to permanent media are discarded. Subsequent attempts to
+append data to the session will receive an error status of {\tt 3505\
+Session\ aborted}.
+
+\item [append end session = \lt{}ticket-number\gt{} ]
+ \index{SPAN class }
+ The open append session with ticket {\it ticket-number} is marked complete; no
+further blocks may be appended. The storage daemon will give priority to
+saving any buffered blocks from this session to permanent media as soon as
+possible.
+
+\item [append close session = \lt{}ticket-number\gt{} ]
+ \index{SPAN class }
+ The append session with ticket {\it ticket-number} is closed. This message does not
+receive a {\tt 3000\ OK} reply until all of the contents of the session are
+stored on permanent media, at which time said reply is given, followed by a
+list of volumes, from first to last, which contain blocks from the session,
+along with the first and last file and block on each containing session data
+and the volume session key identifying data from that session, in lines with
+the following format:
+
+{\tt Volume = \lt{}Volume-id\gt{} \lt{}start-file\gt{}
+\lt{}start-block\gt{} \lt{}end-file\gt{} \lt{}end-block\gt{}
+\lt{}volume-session-id\gt{}}
+
+where {\it Volume-id} is the volume label, {\it
+start-file} and {\it start-block} are the file and block containing the first
+data from that session on the volume, {\it end-file} and {\it end-block} are
+the file and block with the last data from the session on the volume, and {\it
+volume-session-id} is the volume session ID for blocks from the session
+stored on that volume. An illustrative exchange is sketched after this list.
+\end{description}
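+
+To make the message flow concrete, here is an illustrative append session;
+the JobId, password, ticket number, address, port, and volume coordinates
+are invented for the example:
+
+\footnotesize
+\begin{verbatim}
+-> append open session = 1023 mysecret
+<- 3000 OK ticket = 7
+-> append data = 7
+<- 3000 OK data address = 192.168.1.5 port = 9103
+   (header packet and data packets flow on the data channel)
+-> append end session = 7
+-> append close session = 7
+<- 3000 OK
+<- Volume = TestVol001 0 1 0 245 7
+\end{verbatim}
+\normalsize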
+
+\subsection{SD Read Requests}
+\index{SD Read Requests }
+\index{Requests!SD Read }
+\addcontentsline{toc}{subsection}{SD Read Requests}
+
+\begin{description}
+
+\item [Read open session = \lt{}JobId\gt{} \lt{}Volume-id\gt{}
+ \lt{}start-file\gt{} \lt{}start-block\gt{} \lt{}end-file\gt{}
+ \lt{}end-block\gt{} \lt{}volume-session-id\gt{} \lt{}password\gt{} ]
+\index{SPAN class }
+where {\it Volume-id} is the volume label, {\it start-file} and {\it
+start-block} are the file and block containing the first data from that
+session on the volume, {\it end-file} and {\it end-block} are the file and
+block with the last data from the session on the volume and {\it
+volume-session-id} is the volume session ID for blocks from the session
+stored on that volume.
+
+If the session is successfully opened, a status of
+
+{\tt 3100\ OK\ Ticket\ =\ }{\it number}
+
+is returned with a reply used to identify subsequent messages in the session.
+If too many sessions are open, or a conflicting session (for example, an
+append in progress when simultaneous read and append sessions are not
+permitted), a status of ``{\tt 3502\ Volume\ busy}'' is returned. If no
+volume is mounted, or the volume mounted cannot be read, a status of
+``{\tt 3503\ Volume\ not\ mounted}'' is returned. If no block with the given
+volume session ID and the correct client ID number appears in the given first
+file and block for the volume, a status of ``{\tt 3505\ Session\ not\
+found}'' is returned.
+
+\item [Read data = \lt{}Ticket\gt{} \lt{}Block\gt{} ]
+ \index{SPAN class }
+ The specified Block of data from the open read session with the specified Ticket
+number is returned, with a status of {\tt 3000\ OK} followed by a ``{\tt
+Length\ =\ }{\it size}'' line giving the length in bytes of the block data
+which immediately follows. Blocks must be retrieved in ascending order, but
+blocks may be skipped. If a block number greater than the largest stored on
+the volume is requested, a status of ``{\tt 3201\ End\ of\ volume}'' is
+returned. If a block number greater than the largest in the file is
+requested, a status of ``{\tt 3401\ End\ of\ file}'' is returned.
+
+\item [Read close session = \lt{}Ticket\gt{} ]
+ \index{SPAN class }
+ The read session with Ticket number is closed. A read session may be closed
+at any time; you needn't read all its blocks before closing it.
+\end{description}
+
+{\it by
+\elink{John Walker}{http://www.fourmilab.ch/}
+January 30th, MM }
+
+\section{SD Data Structures}
+\index{SD Data Structures}
+\addcontentsline{toc}{section}{SD Data Structures}
+
+In the Storage daemon, there is a Device resource (i.e. from the conf file)
+that describes each physical device. When the physical device is used, it
+is controlled by the DEVICE structure (defined in dev.h), and typically
+referred to as dev in the C++ code. Anyone writing or reading a physical
+device must ultimately get a lock on the DEVICE structure -- this controls
+the device. However, multiple Jobs (defined by a JCR structure in src/jcr.h)
+can be writing to a physical DEVICE at the same time (of course they are
+sequenced by locking the DEVICE structure). There are a lot of job-dependent
+``device'' variables that may be different for each Job, such as
+spooling (one job may spool and another may not; when a job is
+spooling, it must have an i/o packet open, and each job has its own record and
+block structures, ...), so there is a device control record, or DCR, that is
+the primary way of interfacing to the physical device. The DCR contains
+all the job-specific data as well as a pointer to the Device resource
+(DEVRES structure) and the physical DEVICE structure.
+
+Now if a job is writing to two devices (it could be writing two separate
+streams to the same device), it must have two DCRs. Today, the code only
+permits one. This won't be hard to change, but it is new code.
+
+Today: three jobs (threads) and two physical devices, where each job
+ writes to only one device:
+
+\begin{verbatim}
+ Job1 -> DCR1 -> DEVICE1
+ Job2 -> DCR2 -> DEVICE1
+ Job3 -> DCR3 -> DEVICE2
+\end{verbatim}
+
+To be implemented: three jobs and three physical devices, where
+ Job1 writes simultaneously to three devices:
+
+\begin{verbatim}
+ Job1 -> DCR1 -> DEVICE1
+ -> DCR4 -> DEVICE2
+ -> DCR5 -> DEVICE3
+ Job2 -> DCR2 -> DEVICE1
+ Job3 -> DCR3 -> DEVICE2
+
+ Job = job control record
+ DCR = Job control data for a specific device
+ DEVICE = Device only control data
+\end{verbatim}
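+
+The relationships can be sketched in C as follows. This is an illustration
+of the description above, not Bacula's actual declarations (see dev.h and
+src/jcr.h for those); the field names are invented:
+
+\footnotesize
+\begin{verbatim}
+struct DEVICE;              /* physical device state; lock before use */
+struct DEVRES;              /* Device resource from the conf file */
+struct JCR;                 /* job control record */
+
+struct DCR {                /* one per job/device pair */
+    struct JCR    *jcr;     /* the job this DCR belongs to */
+    struct DEVRES *device;  /* the Device resource */
+    struct DEVICE *dev;     /* the physical device */
+    int            spooling;/* job-specific: this job may spool, others not */
+    /* ... per-job record and block structures, spool i/o packet ... */
+};
+\end{verbatim}
+\normalsize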
+
--- /dev/null
+%%
+%%
+
+%\author{Landon Fuller}
+%\title{Bacula TLS Additions}
+
+\chapter{TLS}
+\label{_Chapter_TLS}
+\index{TLS}
+
+Written by Landon Fuller
+
+\section{Introduction to TLS}
+\index{TLS Introduction}
+\index{Introduction!TLS}
+\addcontentsline{toc}{section}{TLS Introduction}
+
+This patch includes all the back-end code necessary to add complete TLS
+data encryption support to Bacula. In addition, support for TLS in
+Console/Director communications has been added as a proof of concept.
+Adding support for the remaining daemons will be straightforward.
+Supported features of this patchset include:
+
+\begin{itemize}
+\item Client/Server TLS Requirement Negotiation
+\item TLSv1 Connections with Server and Client Certificate
+Validation
+\item Forward Secrecy Support via Diffie-Hellman Ephemeral Keying
+\end{itemize}
+
+This document will refer to both ``server'' and ``client'' contexts. These
+terms refer to the accepting and initiating peer, respectively.
+
+Diffie-Hellman anonymous ciphers are not supported by this patchset. The
+use of DH anonymous ciphers increases the code complexity and places
+explicit trust upon the two-way Cram-MD5 implementation. Cram-MD5 is
+subject to known plaintext attacks, and should be considered
+considerably less secure than PKI certificate-based authentication.
+
+Appropriate autoconf macros have been added to detect and use OpenSSL. Two
+additional preprocessor defines have been added: \emph{HAVE\_TLS} and
+\emph{HAVE\_OPENSSL}. All changes not specific to OpenSSL rely on
+\emph{HAVE\_TLS}. OpenSSL-specific code is constrained to
+\emph{src/lib/tls.c} to facilitate the support of alternative TLS
+implementations.
+
+\section{New Configuration Directives}
+\index{TLS Configuration Directives}
+\index{Directives!TLS Configuration}
+\addcontentsline{toc}{section}{New Configuration Directives}
+
+Additional configuration directives have been added to both the Console and
+Director resources. These new directives are defined as follows:
+
+\begin{itemize}
+\item \underline{TLS Enable} \emph{(yes/no)}
+Enable TLS support.
+
+\item \underline{TLS Require} \emph{(yes/no)}
+Require TLS connections.
+
+\item \underline{TLS Certificate} \emph{(path)}
+Path to PEM encoded TLS certificate. Used as either a client or server
+certificate.
+
+\item \underline{TLS Key} \emph{(path)}
+Path to PEM encoded TLS private key. Must correspond with the TLS
+certificate.
+
+\item \underline{TLS Verify Peer} \emph{(yes/no)}
+Verify peer certificate. Instructs server to request and verify the
+client's x509 certificate. Any client certificate signed by a known-CA
+will be accepted unless the TLS Allowed CN configuration directive is used.
+Not valid in a client context.
+
+\item \underline{TLS Allowed CN} \emph{(string list)}
+Common name attribute of allowed peer certificates. If this directive is
+specified, all client certificates will be verified against this list.
+This directive may be specified more than once. Not valid in a client
+context.
+
+\item \underline{TLS CA Certificate File} \emph{(path)}
+Path to PEM encoded TLS CA certificate(s). Multiple certificates are
+permitted in the file. One of \emph{TLS CA Certificate File} or \emph{TLS
+CA Certificate Dir} is required in a server context if \underline{TLS
+Verify Peer} is also specified, and one of them is always required in a client
+context.
+
+\item \underline{TLS CA Certificate Dir} \emph{(path)}
+Path to TLS CA certificate directory. In the current implementation,
+certificates must be stored PEM encoded with OpenSSL-compatible hashes.
+One of \emph{TLS CA Certificate File} or \emph{TLS CA Certificate Dir} is
+required in a server context if \emph{TLS Verify Peer} is also specified,
+and one of them is always required in a client context.
+
+\item \underline{TLS DH File} \emph{(path)}
+Path to PEM encoded Diffie-Hellman parameter file. If this directive is
+specified, DH ephemeral keying will be enabled, allowing for forward
+secrecy of communications. This directive is only valid within a server
+context. To generate the parameter file, you may use openssl:
+\footnotesize
+\begin{verbatim}
+openssl dhparam -out dh1024.pem -5 1024
+\end{verbatim}
+\normalsize
+\end{itemize}
+
+\section{TLS API Implementation}
+\index{TLS API Implementation}
+\index{API Implementation!TLS}
+\addcontentsline{toc}{section}{TLS API Implementation}
+
+To facilitate the use of additional TLS libraries, all OpenSSL-specific
+code has been implemented within \emph{src/lib/tls.c}. In turn, a generic
+TLS API is exported.
+
+\subsection{Library Initialization and Cleanup}
+\index{Library Initialization and Cleanup}
+\index{Initialization and Cleanup!Library}
+\addcontentsline{toc}{subsection}{Library Initialization and Cleanup}
+
+\footnotesize
+\begin{verbatim}
+int init_tls (void);
+\end{verbatim}
+\normalsize
+
+Performs TLS library initialization, including seeding of the PRNG. PRNG
+seeding has not yet been implemented for win32.
+
+\footnotesize
+\begin{verbatim}
+int cleanup_tls (void);
+\end{verbatim}
+\normalsize
+
+Performs TLS library cleanup.
+
+\subsection{Manipulating TLS Contexts}
+\index{TLS Context Manipulation}
+\index{Contexts!Manipulating TLS}
+\addcontentsline{toc}{subsection}{Manipulating TLS Contexts}
+
+\footnotesize
+\begin{verbatim}
+TLS_CONTEXT *new_tls_context (const char *ca_certfile,
+ const char *ca_certdir, const char *certfile,
+ const char *keyfile, const char *dhfile, bool verify_peer);
+\end{verbatim}
+\normalsize
+
+Allocates and initializes a new opaque \emph{TLS\_CONTEXT} structure. The
+\emph{TLS\_CONTEXT} structure maintains default TLS settings from which
+\emph{TLS\_CONNECTION} structures are instantiated. In the future the
+\emph{TLS\_CONTEXT} structure may be used to maintain the TLS session
+cache. \emph{ca\_certfile} and \emph{ca\_certdir} arguments are used to
+initialize the CA verification stores. The \emph{certfile} and
+\emph{keyfile} arguments are used to initialize the local certificate and
+private key. If \emph{dhfile} is non-NULL, it is used to initialize
+Diffie-Hellman ephemeral keying. If \emph{verify\_peer} is \emph{true},
+client certificate validation is enabled.
+
+\footnotesize
+\begin{verbatim}
+void free_tls_context (TLS_CONTEXT *ctx);
+\end{verbatim}
+\normalsize
+
+Deallocates a previously allocated \emph{TLS\_CONTEXT} structure.
+
+\subsection{Performing Post-Connection Verification}
+\index{TLS Post-Connection Verification}
+\index{Verification!TLS Post-Connection}
+\addcontentsline{toc}{subsection}{Performing Post-Connection Verification}
+
+\footnotesize
+\begin{verbatim}
+bool tls_postconnect_verify_host (TLS_CONNECTION *tls, const char *host);
+\end{verbatim}
+\normalsize
+
+Performs post-connection verification of the peer-supplied x509
+certificate. Checks whether the \emph{subjectAltName} and
+\emph{commonName} attributes match the supplied \emph{host} string.
+Returns \emph{true} if there is a match, \emph{false} otherwise.
+
+\footnotesize
+\begin{verbatim}
+bool tls_postconnect_verify_cn (TLS_CONNECTION *tls, alist *verify_list);
+\end{verbatim}
+\normalsize
+
+Performs post-connection verification of the peer-supplied x509
+certificate. Checks whether the \emph{commonName} attribute matches any
+strings supplied via the \emph{verify\_list} parameter. Returns
+\emph{true} if there is a match, \emph{false} otherwise.
+
+\subsection{Manipulating TLS Connections}
+\index{TLS Connection Manipulation}
+\index{Connections!Manipulating TLS}
+\addcontentsline{toc}{subsection}{Manipulating TLS Connections}
+
+\footnotesize
+\begin{verbatim}
+TLS_CONNECTION *new_tls_connection (TLS_CONTEXT *ctx, int fd);
+\end{verbatim}
+\normalsize
+
+Allocates and initializes a new \emph{TLS\_CONNECTION} structure with
+context \emph{ctx} and file descriptor \emph{fd}.
+
+\footnotesize
+\begin{verbatim}
+void free_tls_connection (TLS_CONNECTION *tls);
+\end{verbatim}
+\normalsize
+
+Deallocates memory associated with the \emph{tls} structure.
+
+\footnotesize
+\begin{verbatim}
+bool tls_bsock_connect (BSOCK *bsock);
+\end{verbatim}
+\normalsize
+
+Negotiates a TLS client connection via \emph{bsock}. Returns \emph{true}
+if successful, \emph{false} otherwise. Will fail if there is a TLS
+protocol error or an invalid certificate is presented.
+
+\footnotesize
+\begin{verbatim}
+bool tls_bsock_accept (BSOCK *bsock);
+\end{verbatim}
+\normalsize
+
+Accepts a TLS client connection via \emph{bsock}. Returns \emph{true} if
+successful, \emph{false} otherwise. Will fail if there is a TLS protocol
+error or an invalid certificate is presented.
+
+\footnotesize
+\begin{verbatim}
+bool tls_bsock_shutdown (BSOCK *bsock);
+\end{verbatim}
+\normalsize
+
+Issues a blocking TLS shutdown request to the peer via \emph{bsock}. This
+function may return without waiting for the peer's reply.
+
+\footnotesize
+\begin{verbatim}
+int tls_bsock_writen (BSOCK *bsock, char *ptr, int32_t nbytes);
+\end{verbatim}
+\normalsize
+
+Writes \emph{nbytes} from \emph{ptr} via the \emph{TLS\_CONNECTION}
+associated with \emph{bsock}. Due to OpenSSL's handling of \emph{EINTR},
+\emph{bsock} is set non-blocking at the start of the function, and restored
+to its original blocking state before the function returns. Less than
+\emph{nbytes} may be written if an error occurs. The actual number of
+bytes written will be returned.
+
+\footnotesize
+\begin{verbatim}
+int tls_bsock_readn (BSOCK *bsock, char *ptr, int32_t nbytes);
+\end{verbatim}
+\normalsize
+
+Reads \emph{nbytes} from the \emph{TLS\_CONNECTION} associated with
+\emph{bsock} and stores the result in \emph{ptr}. Due to OpenSSL's
+handling of \emph{EINTR}, \emph{bsock} is set non-blocking at the start of
+the function, and restored to its original blocking state before the
+function returns. Less than \emph{nbytes} may be read if an error occurs.
+The actual number of bytes read will be returned.
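+
+As an illustration only, client-side setup with this API might be assembled
+as below. The \emph{tls} and \emph{fd} members of BSOCK and the function
+name are assumptions made for the sketch, and error handling is minimal:
+
+\footnotesize
+\begin{verbatim}
+/* Hypothetical sketch of client-side TLS setup with the generic API */
+bool start_tls_client(BSOCK *bsock, const char *ca_certfile,
+                      const char *host)
+{
+   /* CA bundle only: no cert dir, client cert, key, or DH file,
+      and no peer verification (that flag is server-side) */
+   TLS_CONTEXT *ctx = new_tls_context(ca_certfile, NULL, NULL, NULL,
+                                      NULL, false);
+   if (!ctx) {
+      return false;
+   }
+   bsock->tls = new_tls_connection(ctx, bsock->fd);  /* members assumed */
+   if (!tls_bsock_connect(bsock)) {        /* perform the TLS handshake */
+      free_tls_connection(bsock->tls);
+      free_tls_context(ctx);
+      return false;
+   }
+   /* verify the certificate matches the host we intended to reach */
+   if (!tls_postconnect_verify_host(bsock->tls, host)) {
+      tls_bsock_shutdown(bsock);
+      return false;
+   }
+   return true;
+}
+\end{verbatim}
+\normalsize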
+
+\section{Bnet API Changes}
+\index{Bnet API Changes}
+\index{API Changes!Bnet}
+\addcontentsline{toc}{section}{Bnet API Changes}
+
+A minimal number of changes were required in the Bnet socket API. The BSOCK
+structure was expanded to include an associated TLS\_CONNECTION structure,
+as well as a flag to designate the current blocking state of the socket.
+The blocking state flag is required for win32, where it does not appear
+possible to discern the current blocking state of a socket.
+
+\subsection{Negotiating a TLS Connection}
+\index{Negotiating a TLS Connection}
+\index{TLS Connection!Negotiating}
+\addcontentsline{toc}{subsection}{Negotiating a TLS Connection}
+
+\emph{bnet\_tls\_server()} and \emph{bnet\_tls\_client()} were both
+implemented using the new TLS API as follows:
+
+\footnotesize
+\begin{verbatim}
+int bnet_tls_client(TLS_CONTEXT *ctx, BSOCK * bsock);
+\end{verbatim}
+\normalsize
+
+Negotiates a TLS session via \emph{bsock} using the settings from
+\emph{ctx}. Returns 1 if successful, 0 otherwise.
+
+\footnotesize
+\begin{verbatim}
+int bnet_tls_server(TLS_CONTEXT *ctx, BSOCK * bsock, alist *verify_list);
+\end{verbatim}
+\normalsize
+
+Accepts a TLS client session via \emph{bsock} using the settings from
+\emph{ctx}. If \emph{verify\_list} is non-NULL, it is passed to
+\emph{tls\_postconnect\_verify\_cn()} for client certificate verification.
+
+\subsection{Manipulating Socket Blocking State}
+\index{Manipulating Socket Blocking State}
+\index{Socket Blocking State!Manipulating}
+\index{Blocking State!Socket!Manipulating}
+\addcontentsline{toc}{subsection}{Manipulating Socket Blocking State}
+
+Three functions were added for manipulating the blocking state of a socket
+on both Win32 and Unix-like systems. The Win32 code was written according
+to the MSDN documentation, but has not been tested.
+
+These functions are prototyped as follows:
+
+\footnotesize
+\begin{verbatim}
+int bnet_set_nonblocking (BSOCK *bsock);
+\end{verbatim}
+\normalsize
+
+Enables non-blocking I/O on the socket associated with \emph{bsock}.
+Returns a copy of the socket flags prior to modification.
+
+\footnotesize
+\begin{verbatim}
+int bnet_set_blocking (BSOCK *bsock);
+\end{verbatim}
+\normalsize
+
+Enables blocking I/O on the socket associated with \emph{bsock}. Returns a
+copy of the socket flags prior to modification.
+
+\footnotesize
+\begin{verbatim}
+void bnet_restore_blocking (BSOCK *bsock, int flags);
+\end{verbatim}
+\normalsize
+
+Restores blocking or non-blocking IO setting on the socket associated with
+\emph{bsock}. The \emph{flags} argument must be the return value of either
+\emph{bnet\_set\_blocking()} or \emph{bnet\_set\_nonblocking()}.
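+
+For example, the save/modify/restore idiom used by the TLS I/O routines
+looks roughly like this sketch:
+
+\footnotesize
+\begin{verbatim}
+int flags = bnet_set_nonblocking(bsock);  /* remember the prior state */
+/* ... perform I/O that must tolerate EINTR without blocking ... */
+bnet_restore_blocking(bsock, flags);      /* restore the socket as found */
+\end{verbatim}
+\normalsize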
+
+\pagebreak
+
+\section{Authentication Negotiation}
+\index{Authentication Negotiation}
+\index{Negotiation!TLS Authentication}
+\addcontentsline{toc}{section}{Authentication Negotiation}
+
+Backwards compatibility with the existing SSL negotiation hooks implemented
+in src/lib/cram-md5.c has been maintained. The
+\emph{cram\_md5\_get\_auth()} function has been modified to accept an
+integer pointer argument, tls\_remote\_need. The TLS requirement
+advertised by the remote host is returned via this pointer.
+
+After exchanging cram-md5 authentication and TLS requirements, both the
+client and server independently decide whether to continue:
+
+\footnotesize
+\begin{verbatim}
+if (!cram_md5_get_auth(dir, password, &tls_remote_need) ||
+ !cram_md5_auth(dir, password, tls_local_need)) {
+[snip]
+/* Verify that the remote host is willing to meet our TLS requirements */
+if (tls_remote_need < tls_local_need && tls_local_need != BNET_TLS_OK &&
+ tls_remote_need != BNET_TLS_OK) {
+ sendit(_("Authorization problem:"
+ " Remote server did not advertise required TLS support.\n"));
+ auth_success = false;
+ goto auth_done;
+}
+
+/* Verify that we are willing to meet the remote host's requirements */
+if (tls_remote_need > tls_local_need && tls_local_need != BNET_TLS_OK &&
+ tls_remote_need != BNET_TLS_OK) {
+ sendit(_("Authorization problem:"
+ " Remote server requires TLS.\n"));
+ auth_success = false;
+ goto auth_done;
+}
+\end{verbatim}
+\normalsize
--- /dev/null
+%%
+%%
+
+\chapter{Catalog Services}
+\label{_ChapterStart30}
+\index[general]{Services!Catalog }
+\index[general]{Catalog Services }
+
+\section{General}
+\index[general]{General }
+\addcontentsline{toc}{subsection}{General}
+
+This chapter is intended to be a technical discussion of the Catalog services
+and as such is not targeted at end users but rather at developers and system
+administrators that want or need to know more of the working details of {\bf
+Bacula}.
+
+The {\bf Bacula Catalog} services consist of the programs that provide the SQL
+database engine for storage and retrieval of all information concerning files
+that were backed up and their locations on the storage media.
+
+We have investigated the possibility of using the following SQL engines for
+Bacula: Beagle, mSQL, GNU SQL, PostgreSQL, SQLite, Oracle, and MySQL. Each
+presents certain problems with either licensing or maturity. At present, we
+have chosen for development purposes to use MySQL, PostgreSQL and SQLite.
+MySQL was chosen because it is fast, proven to be reliable, widely used, and
+actively being developed. MySQL is released under the GNU GPL license.
+PostgreSQL was chosen because it is a full-featured, very mature database, and
+because Dan Langille did the Bacula driver for it. PostgreSQL is distributed
+under the BSD license. SQLite was chosen because it is small, efficient, and
+can be directly embedded in {\bf Bacula} thus requiring much less effort from
+the system administrator or person building {\bf Bacula}. In our testing
+SQLite has performed very well, and for the functions that we use, it has
+never encountered any errors except that it does not appear to handle
+databases larger than 2GBytes. That said, we would not recommend it for
+serious production use.
+
+The Bacula SQL code has been written in a manner that will allow it to be
+easily modified to support any of the current SQL database systems on the
+market (for example: mSQL, iODBC, unixODBC, Solid, OpenLink ODBC, EasySoft
+ODBC, InterBase, Oracle8, Oracle7, and DB2).
+
+If you do not specify either {\bf \verb+--with-mysql+} or {\bf \verb+--with-postgresql+} or
+{\bf \verb+--with-sqlite+} on the ./configure line, Bacula will use its minimalist
+internal database. This database is kept for build reasons but is no longer
+supported. Bacula {\bf requires} one of the three databases (MySQL,
+PostgreSQL, or SQLite) to run.
+
+\subsection{Filenames and Maximum Filename Length}
+\index[general]{Filenames and Maximum Filename Length }
+\index[general]{Length!Filenames and Maximum Filename }
+\addcontentsline{toc}{subsubsection}{Filenames and Maximum Filename Length}
+
+In general, either MySQL, PostgreSQL or SQLite permit storing arbitrary long
+path names and file names in the catalog database. In practice, there still
+may be one or two places in the Catalog interface code that restrict the
+maximum path length to 512 characters and the maximum file name length to 512
+characters. These restrictions are believed to have been removed. Please note,
+these restrictions apply only to the Catalog database and thus to your ability
+to list online the files saved during any job. All information received and
+stored by the Storage daemon (normally on tape) allows and handles arbitrarily
+long path and filenames.
+
+\subsection{Installing and Configuring MySQL}
+\index[general]{MySQL!Installing and Configuring }
+\index[general]{Installing and Configuring MySQL }
+\addcontentsline{toc}{subsubsection}{Installing and Configuring MySQL}
+
+For the details of installing and configuring MySQL, please see the
+\ilink{Installing and Configuring MySQL}{_ChapterStart} chapter of
+this manual.
+
+\subsection{Installing and Configuring PostgreSQL}
+\index[general]{PostgreSQL!Installing and Configuring }
+\index[general]{Installing and Configuring PostgreSQL }
+\addcontentsline{toc}{subsubsection}{Installing and Configuring PostgreSQL}
+
+For the details of installing and configuring PostgreSQL, please see the
+\ilink{Installing and Configuring PostgreSQL}{_ChapterStart10}
+chapter of this manual.
+
+\subsection{Installing and Configuring SQLite}
+\index[general]{Installing and Configuring SQLite }
+\index[general]{SQLite!Installing and Configuring }
+\addcontentsline{toc}{subsubsection}{Installing and Configuring SQLite}
+
+For the details of installing and configuring SQLite, please see the
+\ilink{Installing and Configuring SQLite}{_ChapterStart33} chapter of
+this manual.
+
+\subsection{Internal Bacula Catalog}
+\index[general]{Catalog!Internal Bacula }
+\index[general]{Internal Bacula Catalog }
+\addcontentsline{toc}{subsubsection}{Internal Bacula Catalog}
+
+Please see the
+\ilink{Internal Bacula Database}{_ChapterStart42} chapter of this
+manual for more details.
+
+\subsection{Database Table Design}
+\index[general]{Design!Database Table }
+\index[general]{Database Table Design }
+\addcontentsline{toc}{subsubsection}{Database Table Design}
+
+All discussions that follow pertain to the MySQL database. The details for the
+PostgreSQL and SQLite databases are essentially identical, except that all
+fields in the SQLite database are stored as ASCII text and some of the
+database creation statements are a bit different. The details of the internal
+Bacula catalog are not discussed here.
+
+Because the Catalog database may contain very large amounts of data for large
+sites, we have made a modest attempt to normalize the data tables to reduce
+redundant information. While this reduces the size of the database
+significantly, it does, unfortunately, add some complications to the structures.
+
+In simple terms, the Catalog database must contain a record of all Jobs run by
+Bacula, and for each Job, it must maintain a list of all files saved, with
+their File Attributes (permissions, create date, ...), and the location and
+Media on which the file is stored. This is seemingly a simple task, but it
+represents a huge amount of interlinked data. Note: the list of files and their
+attributes is not maintained when using the internal Bacula database. The data
+stored in the File records, which allows the user or administrator to obtain a
+list of all files backed up during a job, is by far the largest volume of
+information put into the Catalog database.
+
+Although the Catalog database has been designed to handle backup data for
+multiple clients, some users may want to maintain multiple databases, one for
+each machine to be backed up. This reduces the risk of accidentally
+restoring a file to the wrong machine, as well as reducing the amount of data
+in a single database, thus increasing efficiency and reducing the impact of a
+lost or damaged database.
+
+\section{Sequence of Creation of Records for a Save Job}
+\index[general]{Sequence of Creation of Records for a Save Job }
+\index[general]{Job!Sequence of Creation of Records for a Save }
+\addcontentsline{toc}{subsection}{Sequence of Creation of Records for a Save
+Job}
+
+Start with the following information: StartDate, ClientName, Filename, Path,
+Attributes, MediaName, and MediaCoordinates (PartNumber, NumParts). In the
+steps below, ``Create new''
+means to create a new record whether or not it is unique. ``Create unique''
+means each record in the database should be unique. Thus, one must first
+search to see if the record exists, and only if not should a new one be
+created; otherwise, the existing RecordId should be used (see the sketch
+following the list below).
+
+\begin{enumerate}
+\item Create new Job record with StartDate; save JobId
+\item Create unique Media record; save MediaId
+\item Create unique Client record; save ClientId
+\item Create unique Filename record; save FilenameId
+\item Create unique Path record; save PathId
+\item Create unique Attribute record; save AttributeId
+ store ClientId, FilenameId, PathId, and Attributes
+\item Create new File record
+ store JobId, AttributeId, MediaCoordinates, etc
+\item Repeat steps 4 through 7 for each file
+\item Create a JobMedia record; save MediaId
+\item Update Job record filling in EndDate and other Job statistics
+ \end{enumerate}
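+
+As an illustration only (not code taken from the Bacula sources), the
+``Create unique'' logic above corresponds roughly to the following SQL
+sequence, shown here for the Filename table defined later in this chapter:
+
+\footnotesize
+\begin{verbatim}
+-- Look for an existing record first ...
+SELECT FilenameId FROM Filename WHERE Name = 'passwd';
+-- ... and only if no row is returned, insert a new one.
+INSERT INTO Filename (Name) VALUES ('passwd');
+-- The FilenameId (found or newly created) is then stored
+-- in the corresponding File record.
+\end{verbatim}
+\normalsize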
+
+\section{Database Tables}
+\index[general]{Database Tables }
+\index[general]{Tables!Database }
+\addcontentsline{toc}{subsection}{Database Tables}
+
+\addcontentsline{lot}{table}{Filename Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Filename } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{l| }{\bf Data Type }
+& \multicolumn{1}{l| }{\bf Remark } \\
+ \hline
+{FilenameId } & {integer } & {Primary Key } \\
+ \hline
+{Name } & {Blob } & {Filename }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Filename} table shown above contains the name of each file backed up
+with the path removed. If different directories or machines contain the same
+filename, only one copy will be saved in this table.
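+
+For example, the space savings from this normalization can be observed with
+a simple illustrative query (not part of Bacula itself) comparing the number
+of File records with the number of distinct Filename records:
+
+\footnotesize
+\begin{verbatim}
+-- Every backed-up file instance has a File row, but identical
+-- file names share a single Filename row.
+SELECT (SELECT COUNT(*) FROM File)     AS FileRecords,
+       (SELECT COUNT(*) FROM Filename) AS UniqueNames;
+\end{verbatim}
+\normalsize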
+
+\
+
+\addcontentsline{lot}{table}{Path Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Path } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{PathId } & {integer } & {Primary Key } \\
+ \hline
+{Path } & {Blob } & {Full Path }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Path} table shown above contains the path or directory names of all
+directories on the system or systems. The filename and any MSDOS disk name are
+stripped off. As with the filename, only one copy of each directory name is
+kept regardless of how many machines or drives have the same directory. These
+path names should be stored in Unix path name format.
+
+Some simple testing on a Linux file system indicates that separating the
+filename and the path may add more complication than is warranted by the space
+savings. For example, this system has a total of 89,097 files, 60,467 of which
+have unique filenames, and there are 4,374 unique paths.
+
+Finding all those files and doing two stat() calls per file takes an average
+wall clock time of 1 min 35 seconds on a 400MHz machine running RedHat 6.1 Linux.
+
+Finding all those files and putting them directly into a MySQL database with
+the path and filename defined as TEXT (variable length up to 65,535
+characters) takes 19 mins 31 seconds and creates a 27.6 MByte database.
+
+Doing the same thing, but inserting them into Blob fields with the filename
+indexed on the first 30 characters and the path name indexed on the first 255
+(maximum) characters takes 5 mins 18 seconds and creates a 5.24 MB database. Rerunning
+the job (with the database already created) takes about 2 mins 50 seconds.
+
+Running the same as the last test (Path and Filename Blob), but with Filename
+indexed on the first 30 characters and the Path on the first 50 characters
+(linear search done thereafter) takes 5 mins on average and creates a 3.4
+MB database. Rerunning with the data already in the DB takes 3 mins 35
+seconds.
+
+Finally, saving only the full path name rather than splitting the path and the
+file, and indexing it on the first 50 characters takes 6 mins 43 seconds and
+creates a 7.35 MB database.
+
+\
+
+\addcontentsline{lot}{table}{File Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf File } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{FileId } & {integer } & {Primary Key } \\
+ \hline
+{FileIndex } & {integer } & {The sequential file number in the Job } \\
+ \hline
+{JobId } & {integer } & {Link to Job Record } \\
+ \hline
+{PathId } & {integer } & {Link to Path Record } \\
+ \hline
+{FilenameId } & {integer } & {Link to Filename Record } \\
+ \hline
+{MarkId } & {integer } & {Used to mark files during Verify Jobs } \\
+ \hline
+{LStat } & {tinyblob } & {File attributes in base64 encoding } \\
+ \hline
+{MD5 } & {tinyblob } & {MD5/SHA1 signature in base64 encoding }
+\\ \hline
+
+\end{longtable}
+
+The {\bf File} table shown above contains one entry for each file backed up by
+Bacula. Thus a file that is backed up multiple times (as is normal) will have
+multiple entries in the File table. This will probably be the table with the
+largest number of records. Consequently, it is essential to keep the size of this
+record to an absolute minimum. At the same time, this table must contain all
+the information (or pointers to the information) about the file and where it
+is backed up. Since a file may be backed up many times without having changed,
+the path and filename are stored in separate tables.
+
+This table contains by far the largest amount of information in the Catalog
+database, both from the standpoint of the number of records and the standpoint
+of total database size. As a consequence, the user must take care to
+periodically reduce the number of File records using the {\bf retention}
+command in the Console program.
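+
+Because the full name of a file is split across the Path, Filename, and
+File tables, listing the files saved by a particular Job requires a join.
+The following query is an illustration only (the JobId value 1 is an
+arbitrary example), not code taken from the Bacula sources:
+
+\footnotesize
+\begin{verbatim}
+-- Reconstruct full file names for one Job by joining the
+-- normalized Path and Filename tables back onto File.
+SELECT CONCAT(Path.Path, Filename.Name) AS FullName
+  FROM File
+  JOIN Path     ON Path.PathId         = File.PathId
+  JOIN Filename ON Filename.FilenameId = File.FilenameId
+ WHERE File.JobId = 1;
+\end{verbatim}
+\normalsize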
+
+\
+
+\addcontentsline{lot}{table}{Job Table Layout}
+\begin{longtable}{|l|l|p{2.5in}|}
+ \hline
+\multicolumn{3}{|l| }{\bf Job } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{JobId } & {integer } & {Primary Key } \\
+ \hline
+{Job } & {tinyblob } & {Unique Job Name } \\
+ \hline
+{Name } & {tinyblob } & {Job Name } \\
+ \hline
+{Type } & {binary(1) } & {Job Type: Backup, Copy, Clone, Archive, Migration
+} \\
+ \hline
+{Level } & {binary(1) } & {Job Level } \\
+ \hline
+{ClientId } & {integer } & {Client index } \\
+ \hline
+{JobStatus } & {binary(1) } & {Job Termination Status } \\
+ \hline
+{SchedTime } & {datetime } & {Time/date when Job scheduled } \\
+ \hline
+{StartTime } & {datetime } & {Time/date when Job started } \\
+ \hline
+{EndTime } & {datetime } & {Time/date when Job ended } \\
+ \hline
+{RealEndTime } & {datetime } & {Time/date when original Job ended } \\
+ \hline
+{JobTDate } & {bigint } & {Start day in Unix format but 64 bits; used for
+Retention period. } \\
+ \hline
+{VolSessionId } & {integer } & {Unique Volume Session ID } \\
+ \hline
+{VolSessionTime } & {integer } & {Unique Volume Session Time } \\
+ \hline
+{JobFiles } & {integer } & {Number of files saved in Job } \\
+ \hline
+{JobBytes } & {bigint } & {Number of bytes saved in Job } \\
+ \hline
+{JobErrors } & {integer } & {Number of errors during Job } \\
+ \hline
+{JobMissingFiles } & {integer } & {Number of files not saved (not yet used) }
+\\
+ \hline
+{PoolId } & {integer } & {Link to Pool Record } \\
+ \hline
+{FileSetId } & {integer } & {Link to FileSet Record } \\
+ \hline
+{PriorJobId } & {integer } & {Link to prior Job Record when migrated } \\
+ \hline
+{PurgedFiles } & {tinyint } & {Set when all File records purged } \\
+ \hline
+{HasBase } & {tinyint } & {Set when Base Job run }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Job} table contains one record for each Job run by Bacula. Thus
+normally, there will be one per day per machine added to the database. Note,
+the JobId is used to index Job records in the database, and it often is shown
+to the user in the Console program. However, care must be taken with its use
+as it is not unique from database to database. For example, the user may have
+a database for Client data saved on machine Rufus and another database for
+Client data saved on machine Roxie. In this case, the two databases will each
+have JobIds that match those in the other database. For a unique reference to a
+Job, see the Job field below.
+
+The Name field of the Job record corresponds to the Name resource record given
+in the Director's configuration file. Thus it is a generic name, and it will
+be normal to find many Jobs (or even all Jobs) with the same Name.
+
+The Job field contains a combination of the Name and the time the Job was
+scheduled by the Director. Thus for a given Director, even with multiple
+Catalog databases, the Job field will contain a unique name that represents
+the Job.
+
+For a given Storage daemon, the VolSessionId and VolSessionTime form a unique
+identification of the Job. This will be the case even if multiple Directors
+are using the same Storage daemon.
+
+The Job Type (or simply Type) can have one of the following values:
+
+\addcontentsline{lot}{table}{Job Types}
+\begin{longtable}{|l|l|}
+ \hline
+\multicolumn{1}{|c| }{\bf Value } & \multicolumn{1}{c| }{\bf Meaning } \\
+ \hline
+{B } & {Backup Job } \\
+ \hline
+{M } & {Migrated Job } \\
+ \hline
+{V } & {Verify Job } \\
+ \hline
+{R } & {Restore Job } \\
+ \hline
+{C } & {Console program (not in database) } \\
+ \hline
+{I } & {Internal or system Job } \\
+ \hline
+{D } & {Admin Job } \\
+ \hline
+{A } & {Archive Job (not implemented) }
+\\ \hline
+{c } & {Copy Job } \\
+ \hline
+{g } & {Migration Job } \\
+ \hline
+
+\end{longtable}
+Note: the Job Type values shown above are not kept in an SQL table.
+
+
+The JobStatus field records the current status of the job, or how it
+terminated, and can be one of the following:
+
+\addcontentsline{lot}{table}{Job Statuses}
+\begin{longtable}{|l|l|}
+ \hline
+\multicolumn{1}{|c| }{\bf Value } & \multicolumn{1}{c| }{\bf Meaning } \\
+ \hline
+{C } & {Created but not yet running } \\
+ \hline
+{R } & {Running } \\
+ \hline
+{B } & {Blocked } \\
+ \hline
+{T } & {Terminated normally } \\
+ \hline
+{W } & {Terminated normally with warnings }
+\\ \hline
+{E } & {Terminated in Error } \\
+ \hline
+{e } & {Non-fatal error } \\
+ \hline
+{f } & {Fatal error } \\
+ \hline
+{D } & {Verify Differences } \\
+ \hline
+{A } & {Canceled by the user } \\
+ \hline
+{I } & {Incomplete Job }
+\\ \hline
+{F } & {Waiting on the File daemon } \\
+ \hline
+{S } & {Waiting on the Storage daemon } \\
+ \hline
+{m } & {Waiting for a new Volume to be mounted } \\
+ \hline
+{M } & {Waiting for a Mount } \\
+ \hline
+{s } & {Waiting for Storage resource } \\
+ \hline
+{j } & {Waiting for Job resource } \\
+ \hline
+{c } & {Waiting for Client resource } \\
+ \hline
+{d } & {Waiting for maximum Jobs } \\
+ \hline
+{t } & {Waiting for Start Time } \\
+ \hline
+{p } & {Waiting for higher priority job to finish }
+\\ \hline
+{i } & {Doing batch insert of file records }
+\\ \hline
+{a } & {SD despooling attributes }
+\\ \hline
+{l } & {Doing data despooling }
+\\ \hline
+{L } & {Committing data (last despool) }
+\\ \hline
+
+
+
+\end{longtable}
+
+\addcontentsline{lot}{table}{File Sets Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf FileSet } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type\
+\ \ } & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{FileSetId } & {integer } & {Primary Key } \\
+ \hline
+{FileSet } & {tinyblob } & {FileSet name } \\
+ \hline
+{MD5 } & {tinyblob } & {MD5 checksum of FileSet } \\
+ \hline
+{CreateTime } & {datetime } & {Time and date Fileset created }
+\\ \hline
+
+\end{longtable}
+
+The {\bf FileSet} table contains one entry for each FileSet that is used. The
+MD5 signature is kept to ensure that if the user changes anything inside the
+FileSet, it will be detected and the new FileSet will be used. This is
+particularly important when doing an incremental update. If the user deletes a
+file or adds a file, we need to ensure that a Full backup is done prior to the
+next incremental.
+
+
+\addcontentsline{lot}{table}{JobMedia Table Layout}
+\begin{longtable}{|l|l|p{2.5in}|}
+ \hline
+\multicolumn{3}{|l| }{\bf JobMedia } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type\
+\ \ } & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{JobMediaId } & {integer } & {Primary Key } \\
+ \hline
+{JobId } & {integer } & {Link to Job Record } \\
+ \hline
+{MediaId } & {integer } & {Link to Media Record } \\
+ \hline
+{FirstIndex } & {integer } & {The index (sequence number) of the first file
+written for this Job to the Media } \\
+ \hline
+{LastIndex } & {integer } & {The index of the last file written for this
+Job to the Media } \\
+ \hline
+{StartFile } & {integer } & {The physical media (tape) file number of the
+first block written for this Job } \\
+ \hline
+{EndFile } & {integer } & {The physical media (tape) file number of the
+last block written for this Job } \\
+ \hline
+{StartBlock } & {integer } & {The number of the first block written for
+this Job } \\
+ \hline
+{EndBlock } & {integer } & {The number of the last block written for this
+Job } \\
+ \hline
+{VolIndex } & {integer } & {The Volume use sequence number within the Job }
+\\ \hline
+
+\end{longtable}
+
+The {\bf JobMedia} table contains one entry for each of the following
+events: the start of the job, the start of each new tape file, the start of
+each new tape, and the end of the job. Since by default a new tape file is
+written every 2GB, in general you will have more than 2 JobMedia records per
+Job. The number can be varied by changing the ``Maximum File Size''
+specified in the Device resource. This record allows Bacula to efficiently
+position close to (within 2GB) any given file in a backup. For restoring a
+full Job, these records are not very important, but if you want to retrieve
+a single file that was written near the end of a 100GB backup, the JobMedia
+records can speed it up by orders of magnitude by permitting forward spacing
+over files and blocks rather than reading the whole 100GB backup.
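+
+As an illustration of how these records are used (a sketch, not actual
+Bacula code), finding the media position from which to start reading a
+given file of a Job amounts to a range lookup on the file index, where the
+JobId and FileIndex values below are arbitrary examples:
+
+\footnotesize
+\begin{verbatim}
+-- Locate the tape file/block range covering FileIndex 1234 of
+-- Job 1, so the drive can be spaced forward rather than read
+-- sequentially from the beginning.
+SELECT StartFile, EndFile, StartBlock, EndBlock
+  FROM JobMedia
+ WHERE JobId = 1
+   AND FirstIndex <= 1234
+   AND LastIndex  >= 1234;
+\end{verbatim}
+\normalsize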
+
+
+
+
+\addcontentsline{lot}{table}{Media Table Layout}
+\begin{longtable}{|l|l|p{2.4in}|}
+ \hline
+\multicolumn{3}{|l| }{\bf Media } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type\
+\ \ } & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{MediaId } & {integer } & {Primary Key } \\
+ \hline
+{VolumeName } & {tinyblob } & {Volume name } \\
+ \hline
+{Slot } & {integer } & {Autochanger Slot number or zero } \\
+ \hline
+{PoolId } & {integer } & {Link to Pool Record } \\
+ \hline
+{MediaType } & {tinyblob } & {The MediaType supplied by the user } \\
+ \hline
+{MediaTypeId } & {integer } & {The MediaTypeId } \\
+ \hline
+{LabelType } & {tinyint } & {The type of label on the Volume } \\
+ \hline
+{FirstWritten } & {datetime } & {Time/date when first written } \\
+ \hline
+{LastWritten } & {datetime } & {Time/date when last written } \\
+ \hline
+{LabelDate } & {datetime } & {Time/date when tape labeled } \\
+ \hline
+{VolJobs } & {integer } & {Number of jobs written to this media } \\
+ \hline
+{VolFiles } & {integer } & {Number of files written to this media } \\
+ \hline
+{VolBlocks } & {integer } & {Number of blocks written to this media } \\
+ \hline
+{VolMounts } & {integer } & {Number of times media mounted } \\
+ \hline
+{VolBytes } & {bigint } & {Number of bytes written to this media } \\
+ \hline
+{VolParts } & {integer } & {The number of parts for a Volume (DVD) } \\
+ \hline
+{VolErrors } & {integer } & {Number of errors on this media } \\
+ \hline
+{VolWrites } & {integer } & {Number of writes to media } \\
+ \hline
+{MaxVolBytes } & {bigint } & {Maximum bytes to put on this media } \\
+ \hline
+{VolCapacityBytes } & {bigint } & {Capacity estimate for this volume } \\
+ \hline
+{VolStatus } & {enum } & {Status of media: Full, Archive, Append, Recycle,
+Read-Only, Disabled, Error, Busy } \\
+ \hline
+{Enabled } & {tinyint } & {Whether or not Volume can be written } \\
+ \hline
+{Recycle } & {tinyint } & {Whether or not Bacula can recycle the Volumes:
+Yes, No } \\
+ \hline
+{ActionOnPurge } & {tinyint } & {What happens to a Volume after purging } \\
+ \hline
+{VolRetention } & {bigint } & {64 bit seconds until expiration } \\
+ \hline
+{VolUseDuration } & {bigint } & {64 bit seconds volume can be used } \\
+ \hline
+{MaxVolJobs } & {integer } & {maximum jobs to put on Volume } \\
+ \hline
+{MaxVolFiles } & {integer } & {maximum EOF marks to put on Volume }
+\\ \hline
+{InChanger } & {tinyint } & {Whether or not Volume in autochanger } \\
+ \hline
+{StorageId } & {integer } & {Storage record ID } \\
+ \hline
+{DeviceId } & {integer } & {Device record ID } \\
+ \hline
+{MediaAddressing } & {integer } & {Method of addressing media } \\
+ \hline
+{VolReadTime } & {bigint } & {Time Reading Volume } \\
+ \hline
+{VolWriteTime } & {bigint } & {Time Writing Volume } \\
+ \hline
+{EndFile } & {integer } & {End File number of Volume } \\
+ \hline
+{EndBlock } & {integer } & {End block number of Volume } \\
+ \hline
+{LocationId } & {integer } & {Location record ID } \\
+ \hline
+{RecycleCount } & {integer } & {Number of times recycled } \\
+ \hline
+{InitialWrite } & {datetime } & {When Volume first written } \\
+ \hline
+{ScratchPoolId } & {integer } & {Id of Scratch Pool } \\
+ \hline
+{RecyclePoolId } & {integer } & {Pool ID where to recycle Volume } \\
+ \hline
+{Comment } & {blob } & {User text field } \\
+ \hline
+
+
+\end{longtable}
+
+The {\bf Volume} table (internally referred to as the Media table) contains
+one entry for each volume, that is, each tape, cassette (8mm, DLT, DAT, ...),
+or file on which information is or was backed up. There is one Volume record
+created for each of the NumVols specified in the Pool resource record.
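+
+For example (an illustrative query only, not taken from the Bacula
+sources), the Volumes of a Pool that can still be appended to can be found
+with a simple lookup on VolStatus:
+
+\footnotesize
+\begin{verbatim}
+-- List appendable Volumes in Pool 1, least recently written first.
+SELECT MediaId, VolumeName, VolBytes
+  FROM Media
+ WHERE PoolId = 1
+   AND VolStatus = 'Append'
+ ORDER BY LastWritten;
+\end{verbatim}
+\normalsize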
+
+\
+
+\addcontentsline{lot}{table}{Pool Table Layout}
+\begin{longtable}{|l|l|p{2.4in}|}
+ \hline
+\multicolumn{3}{|l| }{\bf Pool } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{PoolId } & {integer } & {Primary Key } \\
+ \hline
+{Name } & {Tinyblob } & {Pool Name } \\
+ \hline
+{NumVols } & {Integer } & {Number of Volumes in the Pool } \\
+ \hline
+{MaxVols } & {Integer } & {Maximum Volumes in the Pool } \\
+ \hline
+{UseOnce } & {tinyint } & {Use volume once } \\
+ \hline
+{UseCatalog } & {tinyint } & {Set to use catalog } \\
+ \hline
+{AcceptAnyVolume } & {tinyint } & {Accept any volume from Pool } \\
+ \hline
+{VolRetention } & {bigint } & {64 bit seconds to retain volume } \\
+ \hline
+{VolUseDuration } & {bigint } & {64 bit seconds volume can be used } \\
+ \hline
+{MaxVolJobs } & {integer } & {max jobs on volume } \\
+ \hline
+{MaxVolFiles } & {integer } & {max EOF marks to put on Volume } \\
+ \hline
+{MaxVolBytes } & {bigint } & {max bytes to write on Volume } \\
+ \hline
+{AutoPrune } & {tinyint } & {yes|no for autopruning } \\
+ \hline
+{Recycle } & {tinyint } & {yes|no for allowing auto recycling of Volume } \\
+ \hline
+{ActionOnPurge } & {tinyint } & {Default Volume ActionOnPurge } \\
+ \hline
+{PoolType } & {enum } & {Backup, Copy, Cloned, Archive, Migration } \\
+ \hline
+{LabelType } & {tinyint } & {Type of label ANSI/Bacula } \\
+ \hline
+{LabelFormat } & {Tinyblob } & {Label format }
+\\ \hline
+{Enabled } & {tinyint } & {Whether or not Volume can be written } \\
+ \hline
+{ScratchPoolId } & {integer } & {Id of Scratch Pool } \\
+ \hline
+{RecyclePoolId } & {integer } & {Pool ID where to recycle Volume } \\
+ \hline
+{NextPoolId } & {integer } & {Pool ID of next Pool } \\
+ \hline
+{MigrationHighBytes } & {bigint } & {High water mark for migration } \\
+ \hline
+{MigrationLowBytes } & {bigint } & {Low water mark for migration } \\
+ \hline
+{MigrationTime } & {bigint } & {Time before migration } \\
+ \hline
+
+
+
+\end{longtable}
+
+The {\bf Pool} table contains one entry for each media pool controlled by
+Bacula in this database. One media record exists for each of the NumVols
+contained in the Pool. The PoolType is a Bacula defined keyword. The MediaType
+is defined by the administrator, and corresponds to the MediaType specified in
+the Director's Storage definition record. The CurrentVol is the sequence
+number of the Media record for the current volume.
+
+\
+
+\addcontentsline{lot}{table}{Client Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Client } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{ClientId } & {integer } & {Primary Key } \\
+ \hline
+{Name } & {TinyBlob } & {File Services Name } \\
+ \hline
+{Uname } & {TinyBlob } & {uname -a from Client (not yet used) } \\
+ \hline
+{AutoPrune } & {tinyint } & {yes|no for autopruning } \\
+ \hline
+{FileRetention } & {bigint } & {64 bit seconds to retain Files } \\
+ \hline
+{JobRetention } & {bigint } & {64 bit seconds to retain Job }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Client} table contains one entry for each machine backed up by Bacula
+in this database. Normally the Name is a fully qualified domain name.
+
+
+\addcontentsline{lot}{table}{Storage Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Storage } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{StorageId } & {integer } & {Unique Id } \\
+ \hline
+{Name } & {tinyblob } & {Resource name of Storage device } \\
+ \hline
+{AutoChanger } & {tinyint } & {Set if it is an autochanger } \\
+ \hline
+
+\end{longtable}
+
+The {\bf Storage} table contains one entry for each Storage used.
+
+
+\addcontentsline{lot}{table}{Counter Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Counter } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{Counter } & {tinyblob } & {Counter name } \\
+ \hline
+{MinValue } & {integer } & {Start/Min value for counter } \\
+ \hline
+{MaxValue } & {integer } & {Max value for counter } \\
+ \hline
+{CurrentValue } & {integer } & {Current counter value } \\
+ \hline
+{WrapCounter } & {tinyblob } & {Name of another counter }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Counter} table contains one entry for each permanent counter defined
+by the user.
+
+\addcontentsline{lot}{table}{Job History Table Layout}
+\begin{longtable}{|l|l|p{2.5in}|}
+ \hline
+\multicolumn{3}{|l| }{\bf JobHisto } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{JobId } & {integer } & {Primary Key } \\
+ \hline
+{Job } & {tinyblob } & {Unique Job Name } \\
+ \hline
+{Name } & {tinyblob } & {Job Name } \\
+ \hline
+{Type } & {binary(1) } & {Job Type: Backup, Copy, Clone, Archive, Migration
+} \\
+ \hline
+{Level } & {binary(1) } & {Job Level } \\
+ \hline
+{ClientId } & {integer } & {Client index } \\
+ \hline
+{JobStatus } & {binary(1) } & {Job Termination Status } \\
+ \hline
+{SchedTime } & {datetime } & {Time/date when Job scheduled } \\
+ \hline
+{StartTime } & {datetime } & {Time/date when Job started } \\
+ \hline
+{EndTime } & {datetime } & {Time/date when Job ended } \\
+ \hline
+{RealEndTime } & {datetime } & {Time/date when original Job ended } \\
+ \hline
+{JobTDate } & {bigint } & {Start day in Unix format but 64 bits; used for
+Retention period. } \\
+ \hline
+{VolSessionId } & {integer } & {Unique Volume Session ID } \\
+ \hline
+{VolSessionTime } & {integer } & {Unique Volume Session Time } \\
+ \hline
+{JobFiles } & {integer } & {Number of files saved in Job } \\
+ \hline
+{JobBytes } & {bigint } & {Number of bytes saved in Job } \\
+ \hline
+{JobErrors } & {integer } & {Number of errors during Job } \\
+ \hline
+{JobMissingFiles } & {integer } & {Number of files not saved (not yet used) }
+\\
+ \hline
+{PoolId } & {integer } & {Link to Pool Record } \\
+ \hline
+{FileSetId } & {integer } & {Link to FileSet Record } \\
+ \hline
+{PriorJobId } & {integer } & {Link to prior Job Record when migrated } \\
+ \hline
+{PurgedFiles } & {tinyint } & {Set when all File records purged } \\
+ \hline
+{HasBase } & {tinyint } & {Set when Base Job run }
+\\ \hline
+
+\end{longtable}
+
+The {\bf JobHisto} table is the same as the Job table, but it keeps
+long-term statistics (i.e. it is not pruned with the Job).
+
+
+\addcontentsline{lot}{table}{Log Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Log } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{LogId } & {integer } & {Primary Key }
+\\ \hline
+{JobId } & {integer } & {Points to Job record }
+\\ \hline
+{Time } & {datetime } & {Time/date log record created }
+\\ \hline
+{LogText } & {blob } & {Log text }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Log} table contains a log of all Job output.
+
+\addcontentsline{lot}{table}{Location Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Location } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{LocationId } & {integer } & {Primary Key }
+\\ \hline
+{Location } & {tinyblob } & {Text defining location }
+\\ \hline
+{Cost } & {integer } & {Relative cost of obtaining Volume }
+\\ \hline
+{Enabled } & {tinyint } & {Whether or not Volume is enabled }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Location} table defines where a Volume is physically located.
+
+
+\addcontentsline{lot}{table}{Location Log Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf LocationLog } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{LocLogId } & {integer } & {Primary Key }
+\\ \hline
+{Date } & {datetime } & {Time/date log record created }
+\\ \hline
+{MediaId } & {integer } & {Points to Media record }
+\\ \hline
+{LocationId } & {integer } & {Points to Location record }
+\\ \hline
+{NewVolStatus } & {integer } & {enum: Full, Archive, Append, Recycle, Purged,
+ Read-Only, Disabled, Error, Busy, Used, Cleaning }
+\\ \hline
+{Enabled } & {tinyint } & {Whether or not Volume is enabled }
+\\ \hline
+
+
+\end{longtable}
+
+The {\bf LocationLog} table keeps a log of each change made to a Volume's
+Location or status.
+
+
+\addcontentsline{lot}{table}{Version Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Version } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{VersionId } & {integer } & {Primary Key }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Version} table defines the Bacula database version number. Bacula
+checks this number before reading the database to ensure that it is compatible
+with the Bacula binary file.
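+
+Conceptually, this check amounts to no more than the following query (a
+sketch; the actual comparison happens inside the catalog driver code),
+after which the daemon compares the result with the version number it was
+built for:
+
+\footnotesize
+\begin{verbatim}
+-- Read the schema version; Bacula refuses to run if it does
+-- not match the version compiled into the binary.
+SELECT VersionId FROM Version;
+\end{verbatim}
+\normalsize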
+
+
+\addcontentsline{lot}{table}{Base Files Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf BaseFiles } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{BaseId } & {integer } & {Primary Key } \\
+ \hline
+{BaseJobId } & {integer } & {JobId of Base Job } \\
+ \hline
+{JobId } & {integer } & {Reference to Job } \\
+ \hline
+{FileId } & {integer } & {Reference to File } \\
+ \hline
+{FileIndex } & {integer } & {File Index number }
+\\ \hline
+
+\end{longtable}
+
+The {\bf BaseFiles} table contains all the File references for a particular
+JobId that point to a Base file -- i.e. they were previously saved and hence
+were not saved in the current JobId but in BaseJobId under FileId. FileIndex
+is the index of the file, and is used for optimization of Restore jobs to
+prevent the need to read the FileId record when creating the in-memory tree.
+This record is not yet implemented.
+
+\
+
+\subsection{MySQL Table Definition}
+\index[general]{MySQL Table Definition }
+\index[general]{Definition!MySQL Table }
+\addcontentsline{toc}{subsubsection}{MySQL Table Definition}
+
+The commands used to create the MySQL tables are as follows:
+
+\footnotesize
+\begin{verbatim}
+USE bacula;
+CREATE TABLE Filename (
+ FilenameId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ Name BLOB NOT NULL,
+ PRIMARY KEY(FilenameId),
+ INDEX (Name(30))
+ );
+CREATE TABLE Path (
+ PathId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ Path BLOB NOT NULL,
+ PRIMARY KEY(PathId),
+ INDEX (Path(50))
+ );
+CREATE TABLE File (
+ FileId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ FileIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
+ PathId INTEGER UNSIGNED NOT NULL REFERENCES Path,
+ FilenameId INTEGER UNSIGNED NOT NULL REFERENCES Filename,
+ MarkId INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ LStat TINYBLOB NOT NULL,
+ MD5 TINYBLOB NOT NULL,
+ PRIMARY KEY(FileId),
+ INDEX (JobId),
+ INDEX (PathId),
+ INDEX (FilenameId)
+ );
+CREATE TABLE Job (
+ JobId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ Job TINYBLOB NOT NULL,
+ Name TINYBLOB NOT NULL,
+ Type BINARY(1) NOT NULL,
+ Level BINARY(1) NOT NULL,
+ ClientId INTEGER NOT NULL REFERENCES Client,
+ JobStatus BINARY(1) NOT NULL,
+ SchedTime DATETIME NOT NULL,
+ StartTime DATETIME NOT NULL,
+ EndTime DATETIME NOT NULL,
+ JobTDate BIGINT UNSIGNED NOT NULL,
+ VolSessionId INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolSessionTime INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ JobFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ JobBytes BIGINT UNSIGNED NOT NULL,
+ JobErrors INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ JobMissingFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ PoolId INTEGER UNSIGNED NOT NULL REFERENCES Pool,
+ FileSetId INTEGER UNSIGNED NOT NULL REFERENCES FileSet,
+ PurgedFiles TINYINT NOT NULL DEFAULT 0,
+ HasBase TINYINT NOT NULL DEFAULT 0,
+ PRIMARY KEY(JobId),
+ INDEX (Name(128))
+ );
+CREATE TABLE FileSet (
+ FileSetId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ FileSet TINYBLOB NOT NULL,
+ MD5 TINYBLOB NOT NULL,
+ CreateTime DATETIME NOT NULL,
+ PRIMARY KEY(FileSetId)
+ );
+CREATE TABLE JobMedia (
+ JobMediaId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
+ MediaId INTEGER UNSIGNED NOT NULL REFERENCES Media,
+ FirstIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ LastIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ StartFile INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ EndFile INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ StartBlock INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ EndBlock INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ PRIMARY KEY(JobMediaId),
+ INDEX (JobId, MediaId)
+ );
+CREATE TABLE Media (
+ MediaId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ VolumeName TINYBLOB NOT NULL,
+ Slot INTEGER NOT NULL DEFAULT 0,
+ PoolId INTEGER UNSIGNED NOT NULL REFERENCES Pool,
+ MediaType TINYBLOB NOT NULL,
+ FirstWritten DATETIME NOT NULL,
+ LastWritten DATETIME NOT NULL,
+ LabelDate DATETIME NOT NULL,
+ VolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolBlocks INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolMounts INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolBytes BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ VolErrors INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolWrites INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolCapacityBytes BIGINT UNSIGNED NOT NULL,
+ VolStatus ENUM('Full', 'Archive', 'Append', 'Recycle', 'Purged',
+ 'Read-Only', 'Disabled', 'Error', 'Busy', 'Used', 'Cleaning') NOT NULL,
+ Recycle TINYINT NOT NULL DEFAULT 0,
+ VolRetention BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ VolUseDuration BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ MaxVolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ MaxVolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ MaxVolBytes BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ InChanger TINYINT NOT NULL DEFAULT 0,
+ MediaAddressing TINYINT NOT NULL DEFAULT 0,
+ VolReadTime BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ VolWriteTime BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ PRIMARY KEY(MediaId),
+ INDEX (PoolId)
+ );
+CREATE TABLE Pool (
+ PoolId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ Name TINYBLOB NOT NULL,
+ NumVols INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ MaxVols INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ UseOnce TINYINT NOT NULL,
+ UseCatalog TINYINT NOT NULL,
+ AcceptAnyVolume TINYINT DEFAULT 0,
+ VolRetention BIGINT UNSIGNED NOT NULL,
+ VolUseDuration BIGINT UNSIGNED NOT NULL,
+ MaxVolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ MaxVolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ MaxVolBytes BIGINT UNSIGNED NOT NULL,
+ AutoPrune TINYINT DEFAULT 0,
+ Recycle TINYINT DEFAULT 0,
+ PoolType ENUM('Backup', 'Copy', 'Cloned', 'Archive', 'Migration', 'Scratch') NOT NULL,
+ LabelFormat TINYBLOB,
+ Enabled TINYINT DEFAULT 1,
+ ScratchPoolId INTEGER UNSIGNED DEFAULT 0 REFERENCES Pool,
+ RecyclePoolId INTEGER UNSIGNED DEFAULT 0 REFERENCES Pool,
+ UNIQUE (Name(128)),
+ PRIMARY KEY (PoolId)
+ );
+CREATE TABLE Client (
+ ClientId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ Name TINYBLOB NOT NULL,
+ Uname TINYBLOB NOT NULL, /* full uname -a of client */
+ AutoPrune TINYINT DEFAULT 0,
+ FileRetention BIGINT UNSIGNED NOT NULL,
+ JobRetention BIGINT UNSIGNED NOT NULL,
+ UNIQUE (Name(128)),
+ PRIMARY KEY(ClientId)
+ );
+CREATE TABLE BaseFiles (
+ BaseId INTEGER UNSIGNED AUTO_INCREMENT,
+ BaseJobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
+ JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
+ FileId INTEGER UNSIGNED NOT NULL REFERENCES File,
+ FileIndex INTEGER UNSIGNED,
+ PRIMARY KEY(BaseId)
+ );
+CREATE TABLE UnsavedFiles (
+ UnsavedId INTEGER UNSIGNED AUTO_INCREMENT,
+ JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
+ PathId INTEGER UNSIGNED NOT NULL REFERENCES Path,
+ FilenameId INTEGER UNSIGNED NOT NULL REFERENCES Filename,
+ PRIMARY KEY (UnsavedId)
+ );
+CREATE TABLE Version (
+ VersionId INTEGER UNSIGNED NOT NULL
+ );
+-- Initialize Version
+INSERT INTO Version (VersionId) VALUES (7);
+CREATE TABLE Counters (
+ Counter TINYBLOB NOT NULL,
+ MinValue INTEGER,
+ MaxValue INTEGER,
+ CurrentValue INTEGER,
+ WrapCounter TINYBLOB NOT NULL,
+ PRIMARY KEY (Counter(128))
+ );
+\end{verbatim}
+\normalsize
+++ /dev/null
-%%
-%%
-
-\chapter{Catalog Services}
-\label{_ChapterStart30}
-\index[general]{Services!Catalog }
-\index[general]{Catalog Services }
-
-\section{General}
-\index[general]{General }
-\addcontentsline{toc}{subsection}{General}
-
-This chapter is intended to be a technical discussion of the Catalog services
-and as such is not targeted at end users but rather at developers and system
-administrators that want or need to know more of the working details of {\bf
-Bacula}.
-
-The {\bf Bacula Catalog} services consist of the programs that provide the SQL
-database engine for storage and retrieval of all information concerning files
-that were backed up and their locations on the storage media.
-
-We have investigated the possibility of using the following SQL engines for
-Bacula: Beagle, mSQL, GNU SQL, PostgreSQL, SQLite, Oracle, and MySQL. Each
-presents certain problems with either licensing or maturity. At present, we
-have chosen for development purposes to use MySQL, PostgreSQL and SQLite.
-MySQL was chosen because it is fast, proven to be reliable, widely used, and
-actively being developed. MySQL is released under the GNU GPL license.
-PostgreSQL was chosen because it is a full-featured, very mature database, and
-because Dan Langille did the Bacula driver for it. PostgreSQL is distributed
-under the BSD license. SQLite was chosen because it is small, efficient, and
-can be directly embedded in {\bf Bacula} thus requiring much less effort from
-the system administrator or person building {\bf Bacula}. In our testing
-SQLite has performed very well, and for the functions that we use, it has
-never encountered any errors except that it does not appear to handle
-databases larger than 2GBytes. That said, we would not recommend it for
-serious production use.
-
-The Bacula SQL code has been written in a manner that will allow it to be
-easily modified to support any of the current SQL database systems on the
-market (for example: mSQL, iODBC, unixODBC, Solid, OpenLink ODBC, EasySoft
-ODBC, InterBase, Oracle8, Oracle7, and DB2).
-
-If you do not specify either {\bf \verb{--{with-mysql} or {\bf \verb{--{with-postgresql} or
-{\bf \verb{--{with-sqlite} on the ./configure line, Bacula will use its minimalist
-internal database. This database is kept for build reasons but is no longer
-supported. Bacula {\bf requires} one of the three databases (MySQL,
-PostgreSQL, or SQLite) to run.
-
-\subsection{Filenames and Maximum Filename Length}
-\index[general]{Filenames and Maximum Filename Length }
-\index[general]{Length!Filenames and Maximum Filename }
-\addcontentsline{toc}{subsubsection}{Filenames and Maximum Filename Length}
-
-In general, either MySQL, PostgreSQL or SQLite permit storing arbitrary long
-path names and file names in the catalog database. In practice, there still
-may be one or two places in the Catalog interface code that restrict the
-maximum path length to 512 characters and the maximum file name length to 512
-characters. These restrictions are believed to have been removed. Please note,
-these restrictions apply only to the Catalog database and thus to your ability
-to list online the files saved during any job. All information received and
-stored by the Storage daemon (normally on tape) allows and handles arbitrarily
-long path and filenames.
-
-\subsection{Installing and Configuring MySQL}
-\index[general]{MySQL!Installing and Configuring }
-\index[general]{Installing and Configuring MySQL }
-\addcontentsline{toc}{subsubsection}{Installing and Configuring MySQL}
-
-For the details of installing and configuring MySQL, please see the
-\ilink{Installing and Configuring MySQL}{_ChapterStart} chapter of
-this manual.
-
-\subsection{Installing and Configuring PostgreSQL}
-\index[general]{PostgreSQL!Installing and Configuring }
-\index[general]{Installing and Configuring PostgreSQL }
-\addcontentsline{toc}{subsubsection}{Installing and Configuring PostgreSQL}
-
-For the details of installing and configuring PostgreSQL, please see the
-\ilink{Installing and Configuring PostgreSQL}{_ChapterStart10}
-chapter of this manual.
-
-\subsection{Installing and Configuring SQLite}
-\index[general]{Installing and Configuring SQLite }
-\index[general]{SQLite!Installing and Configuring }
-\addcontentsline{toc}{subsubsection}{Installing and Configuring SQLite}
-
-For the details of installing and configuring SQLite, please see the
-\ilink{Installing and Configuring SQLite}{_ChapterStart33} chapter of
-this manual.
-
-\subsection{Internal Bacula Catalog}
-\index[general]{Catalog!Internal Bacula }
-\index[general]{Internal Bacula Catalog }
-\addcontentsline{toc}{subsubsection}{Internal Bacula Catalog}
-
-Please see the
-\ilink{Internal Bacula Database}{_ChapterStart42} chapter of this
-manual for more details.
-
-\subsection{Database Table Design}
-\index[general]{Design!Database Table }
-\index[general]{Database Table Design }
-\addcontentsline{toc}{subsubsection}{Database Table Design}
-
-All discussions that follow pertain to the MySQL database. The details for the
-PostgreSQL and SQLite databases are essentially identical except for that all
-fields in the SQLite database are stored as ASCII text and some of the
-database creation statements are a bit different. The details of the internal
-Bacula catalog are not discussed here.
-
-Because the Catalog database may contain very large amounts of data for large
-sites, we have made a modest attempt to normalize the data tables to reduce
-redundant information. While reducing the size of the database significantly,
-it does, unfortunately, add some complications to the structures.
-
-In simple terms, the Catalog database must contain a record of all Jobs run by
-Bacula, and for each Job, it must maintain a list of all files saved, with
-their File Attributes (permissions, create date, ...), and the location and
-Media on which the file is stored. This is seemingly a simple task, but it
-represents a huge amount interlinked data. Note: the list of files and their
-attributes is not maintained when using the internal Bacula database. The data
-stored in the File records, which allows the user or administrator to obtain a
-list of all files backed up during a job, is by far the largest volume of
-information put into the Catalog database.
-
-Although the Catalog database has been designed to handle backup data for
-multiple clients, some users may want to maintain multiple databases, one for
-each machine to be backed up. This reduces the risk of confusion of accidental
-restoring a file to the wrong machine as well as reducing the amount of data
-in a single database, thus increasing efficiency and reducing the impact of a
-lost or damaged database.
-
-\section{Sequence of Creation of Records for a Save Job}
-\index[general]{Sequence of Creation of Records for a Save Job }
-\index[general]{Job!Sequence of Creation of Records for a Save }
-\addcontentsline{toc}{subsection}{Sequence of Creation of Records for a Save
-Job}
-
-Start with StartDate, ClientName, Filename, Path, Attributes, MediaName,
-MediaCoordinates. (PartNumber, NumParts). In the steps below, ``Create new''
-means to create a new record whether or not it is unique. ``Create unique''
-means each record in the database should be unique. Thus, one must first
-search to see if the record exists, and only if not should a new one be
-created, otherwise the existing RecordId should be used.
-
-\begin{enumerate}
-\item Create new Job record with StartDate; save JobId
-\item Create unique Media record; save MediaId
-\item Create unique Client record; save ClientId
-\item Create unique Filename record; save FilenameId
-\item Create unique Path record; save PathId
-\item Create unique Attribute record; save AttributeId
- store ClientId, FilenameId, PathId, and Attributes
-\item Create new File record
- store JobId, AttributeId, MediaCoordinates, etc
-\item Repeat steps 4 through 8 for each file
-\item Create a JobMedia record; save MediaId
-\item Update Job record filling in EndDate and other Job statistics
- \end{enumerate}
-
-\section{Database Tables}
-\index[general]{Database Tables }
-\index[general]{Tables!Database }
-\addcontentsline{toc}{subsection}{Database Tables}
-
-\addcontentsline{lot}{table}{Filename Table Layout}
-\begin{longtable}{|l|l|l|}
- \hline
-\multicolumn{3}{|l| }{\bf Filename } \\
- \hline
-\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{l| }{\bf Data Type }
-& \multicolumn{1}{l| }{\bf Remark } \\
- \hline
-{FilenameId } & {integer } & {Primary Key } \\
- \hline
-{Name } & {Blob } & {Filename }
-\\ \hline
-
-\end{longtable}
-
-The {\bf Filename} table shown above contains the name of each file backed up
-with the path removed. If different directories or machines contain the same
-filename, only one copy will be saved in this table.
-
-\
-
-\addcontentsline{lot}{table}{Path Table Layout}
-\begin{longtable}{|l|l|l|}
- \hline
-\multicolumn{3}{|l| }{\bf Path } \\
- \hline
-\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
-} & \multicolumn{1}{c| }{\bf Remark } \\
- \hline
-{PathId } & {integer } & {Primary Key } \\
- \hline
-{Path } & {Blob } & {Full Path }
-\\ \hline
-
-\end{longtable}
-
-The {\bf Path} table contains shown above the path or directory names of all
-directories on the system or systems. The filename and any MSDOS disk name are
-stripped off. As with the filename, only one copy of each directory name is
-kept regardless of how many machines or drives have the same directory. These
-path names should be stored in Unix path name format.
-
-Some simple testing on a Linux file system indicates that separating the
-filename and the path may be more complication than is warranted by the space
-savings. For example, this system has a total of 89,097 files, 60,467 of which
-have unique filenames, and there are 4,374 unique paths.
-
-Finding all those files and doing two stats() per file takes an average wall
-clock time of 1 min 35 seconds on a 400MHz machine running RedHat 6.1 Linux.
-
-Finding all those files and putting them directly into a MySQL database with
-the path and filename defined as TEXT, which is variable length up to 65,535
-characters takes 19 mins 31 seconds and creates a 27.6 MByte database.
-
-Doing the same thing, but inserting them into Blob fields with the filename
-indexed on the first 30 characters and the path name indexed on the 255 (max)
-characters takes 5 mins 18 seconds and creates a 5.24 MB database. Rerunning
-the job (with the database already created) takes about 2 mins 50 seconds.
-
-Running the same as the last one (Path and Filename Blob), but Filename
-indexed on the first 30 characters and the Path on the first 50 characters
-(linear search done there after) takes 5 mins on the average and creates a 3.4
-MB database. Rerunning with the data already in the DB takes 3 mins 35
-seconds.
-
-Finally, saving only the full path name rather than splitting the path and the
-file, and indexing it on the first 50 characters takes 6 mins 43 seconds and
-creates a 7.35 MB database.
-
-\
-
-\addcontentsline{lot}{table}{File Table Layout}
-\begin{longtable}{|l|l|l|}
- \hline
-\multicolumn{3}{|l| }{\bf File } \\
- \hline
-\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
-} & \multicolumn{1}{c| }{\bf Remark } \\
- \hline
-{FileId } & {integer } & {Primary Key } \\
- \hline
-{FileIndex } & {integer } & {The sequential file number in the Job } \\
- \hline
-{JobId } & {integer } & {Link to Job Record } \\
- \hline
-{PathId } & {integer } & {Link to Path Record } \\
- \hline
-{FilenameId } & {integer } & {Link to Filename Record } \\
- \hline
-{MarkId } & {integer } & {Used to mark files during Verify Jobs } \\
- \hline
-{LStat } & {tinyblob } & {File attributes in base64 encoding } \\
- \hline
-{MD5 } & {tinyblob } & {MD5/SHA1 signature in base64 encoding }
-\\ \hline
-
-\end{longtable}
-
-The {\bf File} table shown above contains one entry for each file backed up by
-Bacula. Thus a file that is backed up multiple times (as is normal) will have
-multiple entries in the File table. This will probably be the table with the
-most number of records. Consequently, it is essential to keep the size of this
-record to an absolute minimum. At the same time, this table must contain all
-the information (or pointers to the information) about the file and where it
-is backed up. Since a file may be backed up many times without having changed,
-the path and filename are stored in separate tables.
-
-This table contains by far the largest amount of information in the Catalog
-database, both from the stand point of number of records, and the stand point
-of total database size. As a consequence, the user must take care to
-periodically reduce the number of File records using the {\bf retention}
-command in the Console program.
-
-\
-
-\addcontentsline{lot}{table}{Job Table Layout}
-\begin{longtable}{|l|l|p{2.5in}|}
- \hline
-\multicolumn{3}{|l| }{\bf Job } \\
- \hline
-\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
-} & \multicolumn{1}{c| }{\bf Remark } \\
- \hline
-{JobId } & {integer } & {Primary Key } \\
- \hline
-{Job } & {tinyblob } & {Unique Job Name } \\
- \hline
-{Name } & {tinyblob } & {Job Name } \\
- \hline
-{PurgedFiles } & {tinyint } & {Used by Bacula for purging/retention periods
-} \\
- \hline
-{Type } & {binary(1) } & {Job Type: Backup, Copy, Clone, Archive, Migration
-} \\
- \hline
-{Level } & {binary(1) } & {Job Level } \\
- \hline
-{ClientId } & {integer } & {Client index } \\
- \hline
-{JobStatus } & {binary(1) } & {Job Termination Status } \\
- \hline
-{SchedTime } & {datetime } & {Time/date when Job scheduled } \\
- \hline
-{StartTime } & {datetime } & {Time/date when Job started } \\
- \hline
-{EndTime } & {datetime } & {Time/date when Job ended } \\
- \hline
-{RealEndTime } & {datetime } & {Time/date when original Job ended } \\
- \hline
-{JobTDate } & {bigint } & {Start day in Unix format but 64 bits; used for
-Retention period. } \\
- \hline
-{VolSessionId } & {integer } & {Unique Volume Session ID } \\
- \hline
-{VolSessionTime } & {integer } & {Unique Volume Session Time } \\
- \hline
-{JobFiles } & {integer } & {Number of files saved in Job } \\
- \hline
-{JobBytes } & {bigint } & {Number of bytes saved in Job } \\
- \hline
-{JobErrors } & {integer } & {Number of errors during Job } \\
- \hline
-{JobMissingFiles } & {integer } & {Number of files not saved (not yet used) }
-\\
- \hline
-{PoolId } & {integer } & {Link to Pool Record } \\
- \hline
-{FileSetId } & {integer } & {Link to FileSet Record } \\
- \hline
-{PrioJobId } & {integer } & {Link to prior Job Record when migrated } \\
- \hline
-{PurgedFiles } & {tiny integer } & {Set when all File records purged } \\
- \hline
-{HasBase } & {tiny integer } & {Set when Base Job run }
-\\ \hline
-
-\end{longtable}
-
-The {\bf Job} table contains one record for each Job run by Bacula. Thus
-normally, there will be one per day per machine added to the database. Note,
-the JobId is used to index Job records in the database, and it often is shown
-to the user in the Console program. However, care must be taken with its use
-as it is not unique from database to database. For example, the user may have
-a database for Client data saved on machine Rufus and another database for
-Client data saved on machine Roxie. In this case, the two database will each
-have JobIds that match those in another database. For a unique reference to a
-Job, see Job below.
-
-The Name field of the Job record corresponds to the Name resource record given
-in the Director's configuration file. Thus it is a generic name, and it will
-be normal to find many Jobs (or even all Jobs) with the same Name.
-
-The Job field contains a combination of the Name and the schedule time of the
-Job by the Director. Thus for a given Director, even with multiple Catalog
-databases, the Job will contain a unique name that represents the Job.
-
-For a given Storage daemon, the VolSessionId and VolSessionTime form a unique
-identification of the Job. This will be the case even if multiple Directors
-are using the same Storage daemon.
-
-The Job Type (or simply Type) can have one of the following values:
-
-\addcontentsline{lot}{table}{Job Types}
-\begin{longtable}{|l|l|}
- \hline
-\multicolumn{1}{|c| }{\bf Value } & \multicolumn{1}{c| }{\bf Meaning } \\
- \hline
-{B } & {Backup Job } \\
- \hline
-{M } & {Migrated Job } \\
- \hline
-{V } & {Verify Job } \\
- \hline
-{R } & {Restore Job } \\
- \hline
-{C } & {Console program (not in database) } \\
- \hline
-{I } & {Internal or system Job } \\
- \hline
-{D } & {Admin Job } \\
- \hline
-{A } & {Archive Job (not implemented) }
-\\ \hline
-{C } & {Copy Job } \\
- \hline
-{M } & {Migration Job } \\
- \hline
-
-\end{longtable}
-Note, the Job Type values noted above are not kept in an SQL table.
-
-
-The JobStatus field specifies how the job terminated, and can be one of the
-following:
-
-\addcontentsline{lot}{table}{Job Statuses}
-\begin{longtable}{|l|l|}
- \hline
-\multicolumn{1}{|c| }{\bf Value } & \multicolumn{1}{c| }{\bf Meaning } \\
- \hline
-{C } & {Created but not yet running } \\
- \hline
-{R } & {Running } \\
- \hline
-{B } & {Blocked } \\
- \hline
-{T } & {Terminated normally } \\
- \hline
-{W } & {Terminated normally with warnings }
-\\ \hline
-{E } & {Terminated in Error } \\
- \hline
-{e } & {Non-fatal error } \\
- \hline
-{f } & {Fatal error } \\
- \hline
-{D } & {Verify Differences } \\
- \hline
-{A } & {Canceled by the user } \\
- \hline
-{I } & {Incomplete Job }
-\\ \hline
-{F } & {Waiting on the File daemon } \\
- \hline
-{S } & {Waiting on the Storage daemon } \\
- \hline
-{m } & {Waiting for a new Volume to be mounted } \\
- \hline
-{M } & {Waiting for a Mount } \\
- \hline
-{s } & {Waiting for Storage resource } \\
- \hline
-{j } & {Waiting for Job resource } \\
- \hline
-{c } & {Waiting for Client resource } \\
- \hline
-{d } & {Wating for Maximum jobs } \\
- \hline
-{t } & {Waiting for Start Time } \\
- \hline
-{p } & {Waiting for higher priority job to finish }
-\\ \hline
-{i } & {Doing batch insert file records }
-\\ \hline
-{a } & {SD despooling attributes }
-\\ \hline
-{l } & {Doing data despooling }
-\\ \hline
-{L } & {Committing data (last despool) }
-\\ \hline
-
-
-
-\end{longtable}
-
-\addcontentsline{lot}{table}{File Sets Table Layout}
-\begin{longtable}{|l|l|l|}
- \hline
-\multicolumn{3}{|l| }{\bf FileSet } \\
- \hline
-\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type\
-\ \ } & \multicolumn{1}{c| }{\bf Remark } \\
- \hline
-{FileSetId } & {integer } & {Primary Key } \\
- \hline
-{FileSet } & {tinyblob } & {FileSet name } \\
- \hline
-{MD5 } & {tinyblob } & {MD5 checksum of FileSet } \\
- \hline
-{CreateTime } & {datetime } & {Time and date Fileset created }
-\\ \hline
-
-\end{longtable}
-
-The {\bf FileSet} table contains one entry for each FileSet that is used. The
-MD5 signature is kept to ensure that if the user changes anything inside the
-FileSet, it will be detected and the new FileSet will be used. This is
-particularly important when doing an incremental update. If the user deletes a
-file or adds a file, we need to ensure that a Full backup is done prior to the
-next incremental.
-
-
-\addcontentsline{lot}{table}{JobMedia Table Layout}
-\begin{longtable}{|l|l|p{2.5in}|}
- \hline
-\multicolumn{3}{|l| }{\bf JobMedia } \\
- \hline
-\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type\
-\ \ } & \multicolumn{1}{c| }{\bf Remark } \\
- \hline
-{JobMediaId } & {integer } & {Primary Key } \\
- \hline
-{JobId } & {integer } & {Link to Job Record } \\
- \hline
-{MediaId } & {integer } & {Link to Media Record } \\
- \hline
-{FirstIndex } & {integer } & {The index (sequence number) of the first file
-written for this Job to the Media } \\
- \hline
-{LastIndex } & {integer } & {The index of the last file written for this
-Job to the Media } \\
- \hline
-{StartFile } & {integer } & {The physical media (tape) file number of the
-first block written for this Job } \\
- \hline
-{EndFile } & {integer } & {The physical media (tape) file number of the
-last block written for this Job } \\
- \hline
-{StartBlock } & {integer } & {The number of the first block written for
-this Job } \\
- \hline
-{EndBlock } & {integer } & {The number of the last block written for this
-Job } \\
- \hline
-{VolIndex } & {integer } & {The Volume use sequence number within the Job }
-\\ \hline
-
-\end{longtable}
-
-The {\bf JobMedia} table contains one entry for each of the following:
-the start of the job, the start of each new tape file, the start of each
-new tape, and the end of the job. Since by default a new tape file is
-written every 2GB, you will generally have more than two JobMedia records
-per Job. The number can be varied by changing the "Maximum File Size"
-specified in the Device resource. This record allows Bacula to efficiently
-position close to (within 2GB) any given file in a backup. For restoring a
-full Job, these records are not very important, but if you want to retrieve
-a single file that was written near the end of a 100GB backup, the JobMedia
-records can speed it up by orders of magnitude by permitting forward
-spacing over files and blocks rather than reading the whole 100GB backup.
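-
-For example, a query along the following lines (the JobId is hypothetical)
-shows which Volumes hold a given Job's data and where on each Volume the
-data begin:
-
-\footnotesize
-\begin{verbatim}
-SELECT M.VolumeName, JM.FirstIndex, JM.LastIndex,
-       JM.StartFile, JM.EndFile, JM.StartBlock, JM.EndBlock
-  FROM JobMedia JM, Media M
-  WHERE JM.JobId = 1234                /* hypothetical JobId */
-    AND JM.MediaId = M.MediaId
-  ORDER BY JM.VolIndex;
-\end{verbatim}
-\normalsize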
-
-
-
-
-\addcontentsline{lot}{table}{Media Table Layout}
-\begin{longtable}{|l|l|p{2.4in}|}
- \hline
-\multicolumn{3}{|l| }{\bf Media } \\
- \hline
-\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type\
-\ \ } & \multicolumn{1}{c| }{\bf Remark } \\
- \hline
-{MediaId } & {integer } & {Primary Key } \\
- \hline
-{VolumeName } & {tinyblob } & {Volume name } \\
- \hline
-{Slot } & {integer } & {Autochanger Slot number or zero } \\
- \hline
-{PoolId } & {integer } & {Link to Pool Record } \\
- \hline
-{MediaType } & {tinyblob } & {The MediaType supplied by the user } \\
- \hline
-{MediaTypeId } & {integer } & {The MediaTypeId } \\
- \hline
-{LabelType } & {tinyint } & {The type of label on the Volume } \\
- \hline
-{FirstWritten } & {datetime } & {Time/date when first written } \\
- \hline
-{LastWritten } & {datetime } & {Time/date when last written } \\
- \hline
-{LabelDate } & {datetime } & {Time/date when tape labeled } \\
- \hline
-{VolJobs } & {integer } & {Number of jobs written to this media } \\
- \hline
-{VolFiles } & {integer } & {Number of files written to this media } \\
- \hline
-{VolBlocks } & {integer } & {Number of blocks written to this media } \\
- \hline
-{VolMounts } & {integer } & {Number of times media mounted } \\
- \hline
-{VolBytes } & {bigint } & {Number of bytes written to this media } \\
- \hline
-{VolParts } & {integer } & {The number of parts for a Volume (DVD) } \\
- \hline
-{VolErrors } & {integer } & {Number of errors on this media } \\
- \hline
-{VolWrites } & {integer } & {Number of writes to media } \\
- \hline
-{MaxVolBytes } & {bigint } & {Maximum bytes to put on this media } \\
- \hline
-{VolCapacityBytes } & {bigint } & {Capacity estimate for this volume } \\
- \hline
-{VolStatus } & {enum } & {Status of media: Full, Archive, Append, Recycle,
-Read-Only, Disabled, Error, Busy } \\
- \hline
-{Enabled } & {tinyint } & {Whether or not Volume can be written } \\
- \hline
-{Recycle } & {tinyint } & {Whether or not Bacula can recycle the Volumes:
-Yes, No } \\
- \hline
-{ActionOnPurge } & {tinyint } & {What happens to a Volume after purging } \\
- \hline
-{VolRetention } & {bigint } & {64 bit seconds until expiration } \\
- \hline
-{VolUseDuration } & {bigint } & {64 bit seconds volume can be used } \\
- \hline
-{MaxVolJobs } & {integer } & {maximum jobs to put on Volume } \\
- \hline
-{MaxVolFiles } & {integer } & {maximum EOF marks to put on Volume }
-\\ \hline
-{InChanger } & {tinyint } & {Whether or not Volume in autochanger } \\
- \hline
-{StorageId } & {integer } & {Storage record ID } \\
- \hline
-{DeviceId } & {integer } & {Device record ID } \\
- \hline
-{MediaAddressing } & {integer } & {Method of addressing media } \\
- \hline
-{VolReadTime } & {bigint } & {Time Reading Volume } \\
- \hline
-{VolWriteTime } & {bigint } & {Time Writing Volume } \\
- \hline
-{EndFile } & {integer } & {End File number of Volume } \\
- \hline
-{EndBlock } & {integer } & {End block number of Volume } \\
- \hline
-{LocationId } & {integer } & {Location record ID } \\
- \hline
-{RecycleCount } & {integer } & {Number of times recycled } \\
- \hline
-{InitialWrite } & {datetime } & {When Volume first written } \\
- \hline
-{ScratchPoolId } & {integer } & {Id of Scratch Pool } \\
- \hline
-{RecyclePoolId } & {integer } & {Pool ID where to recycle Volume } \\
- \hline
-{Comment } & {blob } & {User text field } \\
- \hline
-
-
-\end{longtable}
-
-The {\bf Volume} table (internally referred to as the Media table) contains
-one entry for each volume; that is, for each tape, cassette (8mm, DLT, DAT,
-...), or file on which information is or was backed up. There is one Volume
-record created for each of the NumVols specified in the Pool resource record.
-
-\
-
-\addcontentsline{lot}{table}{Pool Table Layout}
-\begin{longtable}{|l|l|p{2.4in}|}
- \hline
-\multicolumn{3}{|l| }{\bf Pool } \\
- \hline
-\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
-} & \multicolumn{1}{c| }{\bf Remark } \\
- \hline
-{PoolId } & {integer } & {Primary Key } \\
- \hline
-{Name } & {Tinyblob } & {Pool Name } \\
- \hline
-{NumVols } & {Integer } & {Number of Volumes in the Pool } \\
- \hline
-{MaxVols } & {Integer } & {Maximum Volumes in the Pool } \\
- \hline
-{UseOnce } & {tinyint } & {Use volume once } \\
- \hline
-{UseCatalog } & {tinyint } & {Set to use catalog } \\
- \hline
-{AcceptAnyVolume } & {tinyint } & {Accept any volume from Pool } \\
- \hline
-{VolRetention } & {bigint } & {64 bit seconds to retain volume } \\
- \hline
-{VolUseDuration } & {bigint } & {64 bit seconds volume can be used } \\
- \hline
-{MaxVolJobs } & {integer } & {max jobs on volume } \\
- \hline
-{MaxVolFiles } & {integer } & {max EOF marks to put on Volume } \\
- \hline
-{MaxVolBytes } & {bigint } & {max bytes to write on Volume } \\
- \hline
-{AutoPrune } & {tinyint } & {yes|no for autopruning } \\
- \hline
-{Recycle } & {tinyint } & {yes|no for allowing auto recycling of Volume } \\
- \hline
-{ActionOnPurge } & {tinyint } & {Default Volume ActionOnPurge } \\
- \hline
-{PoolType } & {enum } & {Backup, Copy, Cloned, Archive, Migration } \\
- \hline
-{LabelType } & {tinyint } & {Type of label ANSI/Bacula } \\
- \hline
-{LabelFormat } & {Tinyblob } & {Label format }
-\\ \hline
-{Enabled } & {tinyint } & {Whether or not Volume can be written } \\
- \hline
-{ScratchPoolId } & {integer } & {Id of Scratch Pool } \\
- \hline
-{RecyclePoolId } & {integer } & {Pool ID where to recycle Volume } \\
- \hline
-{NextPoolId } & {integer } & {Pool ID of next Pool } \\
- \hline
-{MigrationHighBytes } & {bigint } & {High water mark for migration } \\
- \hline
-{MigrationLowBytes } & {bigint } & {Low water mark for migration } \\
- \hline
-{MigrationTime } & {bigint } & {Time before migration } \\
- \hline
-
-
-
-\end{longtable}
-
-The {\bf Pool} table contains one entry for each media pool controlled by
-Bacula in this database. One media record exists for each of the NumVols
-contained in the Pool. The PoolType is a Bacula defined keyword. The MediaType
-is defined by the administrator, and corresponds to the MediaType specified in
-the Director's Storage definition record. The CurrentVol is the sequence
-number of the Media record for the current volume.
-
-\
-
-\addcontentsline{lot}{table}{Client Table Layout}
-\begin{longtable}{|l|l|l|}
- \hline
-\multicolumn{3}{|l| }{\bf Client } \\
- \hline
-\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
-} & \multicolumn{1}{c| }{\bf Remark } \\
- \hline
-{ClientId } & {integer } & {Primary Key } \\
- \hline
-{Name } & {TinyBlob } & {File Services Name } \\
- \hline
-{UName } & {TinyBlob } & {uname -a from Client (not yet used) } \\
- \hline
-{AutoPrune } & {tinyint } & {yes|no for autopruning } \\
- \hline
-{FileRetention } & {bigint } & {64 bit seconds to retain Files } \\
- \hline
-{JobRetention } & {bigint } & {64 bit seconds to retain Job }
-\\ \hline
-
-\end{longtable}
-
-The {\bf Client} table contains one entry for each machine backed up by Bacula
-in this database. Normally the Name is a fully qualified domain name.
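-
-For example, to list the jobs recorded for one client (the client name is
-hypothetical), the Client table is simply joined to the Job table:
-
-\footnotesize
-\begin{verbatim}
-SELECT J.Name, J.StartTime, J.JobStatus
-  FROM Job J, Client C
-  WHERE J.ClientId = C.ClientId
-    AND C.Name = 'rufus-fd';           /* hypothetical client name */
-\end{verbatim}
-\normalsize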
-
-
-\addcontentsline{lot}{table}{Storage Table Layout}
-\begin{longtable}{|l|l|l|}
- \hline
-\multicolumn{3}{|l| }{\bf Storage } \\
- \hline
-\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
-} & \multicolumn{1}{c| }{\bf Remark } \\
- \hline
-{StorageId } & {integer } & {Unique Id } \\
- \hline
-{Name } & {tinyblob } & {Resource name of Storage device } \\
- \hline
-{AutoChanger } & {tinyint } & {Set if it is an autochanger } \\
- \hline
-
-\end{longtable}
-
-The {\bf Storage} table contains one entry for each Storage used.
-
-
-\addcontentsline{lot}{table}{Counter Table Layout}
-\begin{longtable}{|l|l|l|}
- \hline
-\multicolumn{3}{|l| }{\bf Counter } \\
- \hline
-\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
-} & \multicolumn{1}{c| }{\bf Remark } \\
- \hline
-{Counter } & {tinyblob } & {Counter name } \\
- \hline
-{MinValue } & {integer } & {Start/Min value for counter } \\
- \hline
-{MaxValue } & {integer } & {Max value for counter } \\
- \hline
-{CurrentValue } & {integer } & {Current counter value } \\
- \hline
-{WrapCounter } & {tinyblob } & {Name of another counter }
-\\ \hline
-
-\end{longtable}
-
-The {\bf Counter} table contains one entry for each permanent counter defined
-by the user.
-
-\addcontentsline{lot}{table}{Job History Table Layout}
-\begin{longtable}{|l|l|p{2.5in}|}
- \hline
-\multicolumn{3}{|l| }{\bf JobHisto } \\
- \hline
-\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
-} & \multicolumn{1}{c| }{\bf Remark } \\
- \hline
-{JobId } & {integer } & {Primary Key } \\
- \hline
-{Job } & {tinyblob } & {Unique Job Name } \\
- \hline
-{Name } & {tinyblob } & {Job Name } \\
- \hline
-{Type } & {binary(1) } & {Job Type: Backup, Copy, Clone, Archive, Migration
-} \\
- \hline
-{Level } & {binary(1) } & {Job Level } \\
- \hline
-{ClientId } & {integer } & {Client index } \\
- \hline
-{JobStatus } & {binary(1) } & {Job Termination Status } \\
- \hline
-{SchedTime } & {datetime } & {Time/date when Job scheduled } \\
- \hline
-{StartTime } & {datetime } & {Time/date when Job started } \\
- \hline
-{EndTime } & {datetime } & {Time/date when Job ended } \\
- \hline
-{RealEndTime } & {datetime } & {Time/date when original Job ended } \\
- \hline
-{JobTDate } & {bigint } & {Start day in Unix format but 64 bits; used for
-Retention period. } \\
- \hline
-{VolSessionId } & {integer } & {Unique Volume Session ID } \\
- \hline
-{VolSessionTime } & {integer } & {Unique Volume Session Time } \\
- \hline
-{JobFiles } & {integer } & {Number of files saved in Job } \\
- \hline
-{JobBytes } & {bigint } & {Number of bytes saved in Job } \\
- \hline
-{JobErrors } & {integer } & {Number of errors during Job } \\
- \hline
-{JobMissingFiles } & {integer } & {Number of files not saved (not yet used) }
-\\
- \hline
-{PoolId } & {integer } & {Link to Pool Record } \\
- \hline
-{FileSetId } & {integer } & {Link to FileSet Record } \\
- \hline
-{PriorJobId } & {integer } & {Link to prior Job Record when migrated } \\
- \hline
-{PurgedFiles } & {tinyint } & {Set when all File records purged } \\
- \hline
-{HasBase } & {tinyint } & {Set when Base Job run }
-\\ \hline
-
-\end{longtable}
-
-The {\bf JobHisto} table is the same as the Job table, but it keeps
-long-term statistics (i.e., it is not pruned with the Job).
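-
-Because JobHisto is not pruned, it is the natural table for long-term
-reporting. As a sketch, the following query (the cutoff date is
-hypothetical) totals the data backed up per job name for successfully
-terminated jobs:
-
-\footnotesize
-\begin{verbatim}
-SELECT Name, COUNT(*) AS NumJobs, SUM(JobBytes) AS TotalBytes
-  FROM JobHisto
-  WHERE JobStatus = 'T'                /* terminated normally */
-    AND StartTime >= '2010-01-01'      /* hypothetical cutoff date */
-  GROUP BY Name;
-\end{verbatim}
-\normalsize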
-
-
-\addcontentsline{lot}{table}{Log Table Layout}
-\begin{longtable}{|l|l|l|}
- \hline
-\multicolumn{3}{|l| }{\bf Log } \\
- \hline
-\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
-} & \multicolumn{1}{c| }{\bf Remark } \\
- \hline
-{LogId } & {integer } & {Primary Key }
-\\ \hline
-{JobId } & {integer } & {Points to Job record }
-\\ \hline
-{Time } & {datetime } & {Time/date log record created }
-\\ \hline
-{LogText } & {blob } & {Log text }
-\\ \hline
-
-\end{longtable}
-
-The {\bf Log} table contains a log of all Job output.
-
-\addcontentsline{lot}{table}{Location Table Layout}
-\begin{longtable}{|l|l|l|}
- \hline
-\multicolumn{3}{|l| }{\bf Location } \\
- \hline
-\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
-} & \multicolumn{1}{c| }{\bf Remark } \\
- \hline
-{LocationId } & {integer } & {Primary Key }
-\\ \hline
-{Location } & {tinyblob } & {Text defining location }
-\\ \hline
-{Cost } & {integer } & {Relative cost of obtaining Volume }
-\\ \hline
-{Enabled } & {tinyint } & {Whether or not Volume is enabled }
-\\ \hline
-
-\end{longtable}
-
-The {\bf Location} table defines where a Volume is physically located.
-
-
-\addcontentsline{lot}{table}{Location Log Table Layout}
-\begin{longtable}{|l|l|l|}
- \hline
-\multicolumn{3}{|l| }{\bf LocationLog } \\
- \hline
-\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
-} & \multicolumn{1}{c| }{\bf Remark } \\
- \hline
-{LocLogId } & {integer } & {Primary Key }
-\\ \hline
-{Date } & {datetime } & {Time/date log record created }
-\\ \hline
-{MediaId } & {integer } & {Points to Media record }
-\\ \hline
-{LocationId } & {integer } & {Points to Location record }
-\\ \hline
-{NewVolStatus } & {integer } & {enum: Full, Archive, Append, Recycle, Purged,
- Read-only, Disabled, Error, Busy, Used, Cleaning }
-\\ \hline
-{Enabled } & {tinyint } & {Whether or not Volume is enabled }
-\\ \hline
-
-
-\end{longtable}
-
-The {\bf LocationLog} table contains a log of each change to a Volume's
-Location or status.
-
-
-\addcontentsline{lot}{table}{Version Table Layout}
-\begin{longtable}{|l|l|l|}
- \hline
-\multicolumn{3}{|l| }{\bf Version } \\
- \hline
-\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
-} & \multicolumn{1}{c| }{\bf Remark } \\
- \hline
-{VersionId } & {integer } & {Primary Key }
-\\ \hline
-
-\end{longtable}
-
-The {\bf Version} table defines the Bacula database version number. Bacula
-checks this number before reading the database to ensure that it is compatible
-with the Bacula binary file.
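-
-Conceptually, the check amounts to reading the single row and comparing it
-with the version number compiled into the binary:
-
-\footnotesize
-\begin{verbatim}
-SELECT VersionId FROM Version;
-/* If the value does not match the catalog version expected by the
-   binary, Bacula will not use the database. */
-\end{verbatim}
-\normalsize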
-
-
-\addcontentsline{lot}{table}{Base Files Table Layout}
-\begin{longtable}{|l|l|l|}
- \hline
-\multicolumn{3}{|l| }{\bf BaseFiles } \\
- \hline
-\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
-} & \multicolumn{1}{c| }{\bf Remark } \\
- \hline
-{BaseId } & {integer } & {Primary Key } \\
- \hline
-{BaseJobId } & {integer } & {JobId of Base Job } \\
- \hline
-{JobId } & {integer } & {Reference to Job } \\
- \hline
-{FileId } & {integer } & {Reference to File } \\
- \hline
-{FileIndex } & {integer } & {File Index number }
-\\ \hline
-
-\end{longtable}
-
-The {\bf BaseFiles} table contains all the File references for a particular
-JobId that point to a Base file -- i.e. they were previously saved and hence
-were not saved in the current JobId but in BaseJobId under FileId. FileIndex
-is the index of the file, and is used for optimization of Restore jobs to
-prevent the need to read the FileId record when creating the in-memory tree.
-This record is not yet implemented.
-
-\
-
-\subsection{MySQL Table Definition}
-\index[general]{MySQL Table Definition }
-\index[general]{Definition!MySQL Table }
-\addcontentsline{toc}{subsubsection}{MySQL Table Definition}
-
-The commands used to create the MySQL tables are as follows:
-
-\footnotesize
-\begin{verbatim}
-USE bacula;
-CREATE TABLE Filename (
- FilenameId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
- Name BLOB NOT NULL,
- PRIMARY KEY(FilenameId),
- INDEX (Name(30))
- );
-CREATE TABLE Path (
- PathId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
- Path BLOB NOT NULL,
- PRIMARY KEY(PathId),
- INDEX (Path(50))
- );
-CREATE TABLE File (
- FileId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
- FileIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
- JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
- PathId INTEGER UNSIGNED NOT NULL REFERENCES Path,
- FilenameId INTEGER UNSIGNED NOT NULL REFERENCES Filename,
- MarkId INTEGER UNSIGNED NOT NULL DEFAULT 0,
- LStat TINYBLOB NOT NULL,
- MD5 TINYBLOB NOT NULL,
- PRIMARY KEY(FileId),
- INDEX (JobId),
- INDEX (PathId),
- INDEX (FilenameId)
- );
-CREATE TABLE Job (
- JobId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
- Job TINYBLOB NOT NULL,
- Name TINYBLOB NOT NULL,
- Type BINARY(1) NOT NULL,
- Level BINARY(1) NOT NULL,
- ClientId INTEGER NOT NULL REFERENCES Client,
- JobStatus BINARY(1) NOT NULL,
- SchedTime DATETIME NOT NULL,
- StartTime DATETIME NOT NULL,
- EndTime DATETIME NOT NULL,
- JobTDate BIGINT UNSIGNED NOT NULL,
- VolSessionId INTEGER UNSIGNED NOT NULL DEFAULT 0,
- VolSessionTime INTEGER UNSIGNED NOT NULL DEFAULT 0,
- JobFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
- JobBytes BIGINT UNSIGNED NOT NULL,
- JobErrors INTEGER UNSIGNED NOT NULL DEFAULT 0,
- JobMissingFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
- PoolId INTEGER UNSIGNED NOT NULL REFERENCES Pool,
- FileSetId INTEGER UNSIGNED NOT NULL REFERENCES FileSet,
- PurgedFiles TINYINT NOT NULL DEFAULT 0,
- HasBase TINYINT NOT NULL DEFAULT 0,
- PRIMARY KEY(JobId),
- INDEX (Name(128))
- );
-CREATE TABLE FileSet (
- FileSetId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
- FileSet TINYBLOB NOT NULL,
- MD5 TINYBLOB NOT NULL,
- CreateTime DATETIME NOT NULL,
- PRIMARY KEY(FileSetId)
- );
-CREATE TABLE JobMedia (
- JobMediaId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
- JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
- MediaId INTEGER UNSIGNED NOT NULL REFERENCES Media,
- FirstIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
- LastIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
- StartFile INTEGER UNSIGNED NOT NULL DEFAULT 0,
- EndFile INTEGER UNSIGNED NOT NULL DEFAULT 0,
- StartBlock INTEGER UNSIGNED NOT NULL DEFAULT 0,
- EndBlock INTEGER UNSIGNED NOT NULL DEFAULT 0,
- VolIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
- PRIMARY KEY(JobMediaId),
- INDEX (JobId, MediaId)
- );
-CREATE TABLE Media (
- MediaId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
- VolumeName TINYBLOB NOT NULL,
- Slot INTEGER NOT NULL DEFAULT 0,
- PoolId INTEGER UNSIGNED NOT NULL REFERENCES Pool,
- MediaType TINYBLOB NOT NULL,
- FirstWritten DATETIME NOT NULL,
- LastWritten DATETIME NOT NULL,
- LabelDate DATETIME NOT NULL,
- VolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
- VolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
- VolBlocks INTEGER UNSIGNED NOT NULL DEFAULT 0,
- VolMounts INTEGER UNSIGNED NOT NULL DEFAULT 0,
- VolBytes BIGINT UNSIGNED NOT NULL DEFAULT 0,
- VolErrors INTEGER UNSIGNED NOT NULL DEFAULT 0,
- VolWrites INTEGER UNSIGNED NOT NULL DEFAULT 0,
- VolCapacityBytes BIGINT UNSIGNED NOT NULL,
- VolStatus ENUM('Full', 'Archive', 'Append', 'Recycle', 'Purged',
- 'Read-Only', 'Disabled', 'Error', 'Busy', 'Used', 'Cleaning') NOT NULL,
- Recycle TINYINT NOT NULL DEFAULT 0,
- VolRetention BIGINT UNSIGNED NOT NULL DEFAULT 0,
- VolUseDuration BIGINT UNSIGNED NOT NULL DEFAULT 0,
- MaxVolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
- MaxVolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
- MaxVolBytes BIGINT UNSIGNED NOT NULL DEFAULT 0,
- InChanger TINYINT NOT NULL DEFAULT 0,
- MediaAddressing TINYINT NOT NULL DEFAULT 0,
- VolReadTime BIGINT UNSIGNED NOT NULL DEFAULT 0,
- VolWriteTime BIGINT UNSIGNED NOT NULL DEFAULT 0,
- PRIMARY KEY(MediaId),
- INDEX (PoolId)
- );
-CREATE TABLE Pool (
- PoolId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
- Name TINYBLOB NOT NULL,
- NumVols INTEGER UNSIGNED NOT NULL DEFAULT 0,
- MaxVols INTEGER UNSIGNED NOT NULL DEFAULT 0,
- UseOnce TINYINT NOT NULL,
- UseCatalog TINYINT NOT NULL,
- AcceptAnyVolume TINYINT DEFAULT 0,
- VolRetention BIGINT UNSIGNED NOT NULL,
- VolUseDuration BIGINT UNSIGNED NOT NULL,
- MaxVolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
- MaxVolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
- MaxVolBytes BIGINT UNSIGNED NOT NULL,
- AutoPrune TINYINT DEFAULT 0,
- Recycle TINYINT DEFAULT 0,
- PoolType ENUM('Backup', 'Copy', 'Cloned', 'Archive', 'Migration', 'Scratch') NOT NULL,
- LabelFormat TINYBLOB,
- Enabled TINYINT DEFAULT 1,
- ScratchPoolId INTEGER UNSIGNED DEFAULT 0 REFERENCES Pool,
- RecyclePoolId INTEGER UNSIGNED DEFAULT 0 REFERENCES Pool,
- UNIQUE (Name(128)),
- PRIMARY KEY (PoolId)
- );
-CREATE TABLE Client (
- ClientId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
- Name TINYBLOB NOT NULL,
- Uname TINYBLOB NOT NULL, /* full uname -a of client */
- AutoPrune TINYINT DEFAULT 0,
- FileRetention BIGINT UNSIGNED NOT NULL,
- JobRetention BIGINT UNSIGNED NOT NULL,
- UNIQUE (Name(128)),
- PRIMARY KEY(ClientId)
- );
-CREATE TABLE BaseFiles (
- BaseId INTEGER UNSIGNED AUTO_INCREMENT,
- BaseJobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
- JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
- FileId INTEGER UNSIGNED NOT NULL REFERENCES File,
- FileIndex INTEGER UNSIGNED,
- PRIMARY KEY(BaseId)
- );
-CREATE TABLE UnsavedFiles (
- UnsavedId INTEGER UNSIGNED AUTO_INCREMENT,
- JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
- PathId INTEGER UNSIGNED NOT NULL REFERENCES Path,
- FilenameId INTEGER UNSIGNED NOT NULL REFERENCES Filename,
- PRIMARY KEY (UnsavedId)
- );
-CREATE TABLE Version (
- VersionId INTEGER UNSIGNED NOT NULL
- );
--- Initialize Version
-INSERT INTO Version (VersionId) VALUES (7);
-CREATE TABLE Counters (
- Counter TINYBLOB NOT NULL,
- MinValue INTEGER,
- MaxValue INTEGER,
- CurrentValue INTEGER,
- WrapCounter TINYBLOB NOT NULL,
- PRIMARY KEY (Counter(128))
- );
-\end{verbatim}
-\normalsize
--- /dev/null
+\newfont{\bighead}{cmr17 at 36pt}
+\parskip 10pt
+\parindent 0pt
+
+\title{\includegraphics{\idir bacula-logo.eps} \\ \bigskip
+ \Huge{Bacula}$^{\normalsize \textregistered}$ \Huge{Developer's Guide}
+ \begin{center}
+ \large{It comes in the night and sucks
+ the essence from your computers. }
+ \end{center}
+}
+
+
+\author{Kern Sibbald}
+\date{\vspace{1.0in}\today \\
+ This manual documents Bacula version \input{version} \\
+ \vspace{0.2in}
+ Copyright {\copyright} 1999-2010, Free Software Foundation Europe
+ e.V. \\
+ Bacula {\textregistered} is a registered trademark of Kern Sibbald.\\
+ \vspace{0.2in}
+ Permission is granted to copy, distribute and/or modify this document under the terms of the
+ GNU Free Documentation License, Version 1.2 published by the Free Software Foundation;
+ with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
+ A copy of the license is included in the section entitled "GNU Free Documentation License".
+}
+
+\maketitle
--- /dev/null
+%%
+%%
+
+\chapter{Daemon Protocol}
+\label{_ChapterStart2}
+\index{Protocol!Daemon }
+\index{Daemon Protocol }
+
+\section{General}
+\index{General }
+\addcontentsline{toc}{subsection}{General}
+
+This document describes the protocols used between the various daemons. As
+Bacula has developed, this chapter has become quite out of date. The general
+idea still holds true, but the details of the fields for each command, and
+indeed the commands themselves, have changed considerably.
+
+It is intended to be a technical discussion of the general daemon protocols
+and as such is not targeted at end users but rather at developers and system
+administrators that want or need to know more of the working details of {\bf
+Bacula}.
+
+\section{Low Level Network Protocol}
+\index{Protocol!Low Level Network }
+\index{Low Level Network Protocol }
+\addcontentsline{toc}{subsection}{Low Level Network Protocol}
+
+At the lowest level, the network protocol is handled by {\bf BSOCK} packets
+which contain a lot of information about the status of the network connection:
+who is at the other end, etc. Each basic {\bf Bacula} network read or write
+actually consists of two low level network read/writes. The first write always
+sends four bytes of data in machine independent byte order. If data is to
+follow, the first four bytes are a positive non-zero integer indicating the
+length of the data that follow in the subsequent write. If the four byte
+integer is zero or negative, it indicates a special request, a sort of network
+signaling capability. In this case, no data packet will follow. The low level
+BSOCK routines expect that only a single thread is accessing the socket at a
+time. It is advised that multiple threads do not read/write the same socket.
+If you must do this, you must provide some sort of locking mechanism. It would
+not be appropriate for efficiency reasons to make every call to the BSOCK
+routines lock and unlock the packet.
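+
+As an illustration (the payload is hypothetical), a single logical write of
+the eleven-byte string {\bf 2000 OK Job}, followed by a -1 signal, would
+appear on the wire as:
+
+\footnotesize
+\begin{verbatim}
+write 1:  00 00 00 0B   length = 11, in network byte order
+write 2:  32 30 30 30 20 4F 4B 20 4A 6F 62   "2000 OK Job"
+
+write 1:  FF FF FF FF   length = -1: a signal; no data packet follows
+\end{verbatim}
+\normalsize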
+
+\section{General Daemon Protocol}
+\index{General Daemon Protocol }
+\index{Protocol!General Daemon }
+\addcontentsline{toc}{subsection}{General Daemon Protocol}
+
+In general, all the daemons observe the following global rules. There may be
+exceptions depending on the specific case. Normally, one daemon will be
+sending commands to another daemon (specifically, the Director to the Storage
+daemon and the Director to the File daemon).
+
+\begin{itemize}
+\item Commands are always ASCII commands that are case sensitive
+ as well as space sensitive.
+\item All binary data is converted into ASCII (either with printf statements
+ or using base64 encoding).
+\item All responses to commands sent are always prefixed with a return
+ numeric code where codes in the 1000's are reserved for the Director, the
+ 2000's are reserved for the File daemon, and the 3000's are reserved for the
+Storage daemon.
+\item Any response that is not prefixed with a numeric code is a command (or
+ subcommand if you like) coming from the other end. For example, while the
+ Director is corresponding with the Storage daemon, the Storage daemon can
+request Catalog services from the Director. This convention permits each side
+to send commands to the other daemon while simultaneously responding to
+commands.
+\item Any response that is of zero length, depending on the context, either
+ terminates the data stream being sent or terminates command mode prior to
+ closing the connection.
+\item Any response that is of negative length is a special sign that normally
+ requires a response. For example, during data transfer from the File daemon
+ to the Storage daemon, normally the File daemon sends continuously without
+intervening reads. However, periodically, the File daemon will send a packet
+of length -1 indicating that the current data stream is complete and that the
+Storage daemon should respond to the packet with an OK, ABORT JOB, PAUSE,
+etc. This permits the File daemon to efficiently send data while at the same
+time occasionally ``polling'' the Storage daemon for his status or any
+special requests.
+
+Currently, these negative lengths are specific to the daemon, but shortly,
+the range 0 to -999 will be standard daemon-wide signals, while -1000 to
+-1999 will be for Director use, -2000 to -2999 for the File daemon, and
+-3000 to -3999 for the Storage daemon.
+\end{itemize}
+
+\section{The Protocol Used Between the Director and the Storage Daemon}
+\index{Daemon!Protocol Used Between the Director and the Storage }
+\index{Protocol Used Between the Director and the Storage Daemon }
+\addcontentsline{toc}{subsection}{Protocol Used Between the Director and the
+Storage Daemon}
+
+Before sending commands to the File daemon, the Director opens a Message
+channel with the Storage daemon, identifies itself and presents its password.
+If the password check is OK, the Storage daemon accepts the Director. The
+Director then passes to the Storage daemon the JobId to be run as well as the
+File daemon authorization (append, read all, or read for a specific session).
+The Storage daemon will then pass back to the Director an enabling key for
+this JobId that must be presented by the File daemon when opening the job.
+Until this process is complete, the Storage daemon is not available for use
+by File daemons.
+
+\footnotesize
+\begin{verbatim}
+SD: listens
+DR: makes connection
+DR: Hello <Director-name> calling <password>
+SD: 3000 OK Hello
+DR: JobId=nnn Allow=(append, read) Session=(*, SessionId)
+ (Session not implemented yet)
+SD: 3000 OK Job Authorization=<password>
+DR: use device=<device-name> media_type=<media-type>
+ pool_name=<pool-name> pool_type=<pool_type>
+SD: 3000 OK use device
+\end{verbatim}
+\normalsize
+
+For the Director to be authorized, the \lt{}Director-name\gt{} and the
+\lt{}password\gt{} must match the values in one of the Storage daemon's
+Director resources (there may be several Directors that can access a single
+Storage daemon).
+
+\section{The Protocol Used Between the Director and the File Daemon}
+\index{Daemon!Protocol Used Between the Director and the File }
+\index{Protocol Used Between the Director and the File Daemon }
+\addcontentsline{toc}{subsection}{Protocol Used Between the Director and the
+File Daemon}
+
+A typical conversation might look like the following:
+
+\footnotesize
+\begin{verbatim}
+FD: listens
+DR: makes connection
+DR: Hello <Director-name> calling <password>
+FD: 2000 OK Hello
+DR: JobId=nnn Authorization=<password>
+FD: 2000 OK Job
+DR: storage address = <Storage daemon address> port = <port-number>
+ name = <DeviceName> mediatype = <MediaType>
+FD: 2000 OK storage
+DR: include
+DR: <directory1>
+DR: <directory2>
+ ...
+DR: Null packet
+FD: 2000 OK include
+DR: exclude
+DR: <directory1>
+DR: <directory2>
+ ...
+DR: Null packet
+FD: 2000 OK exclude
+DR: full
+FD: 2000 OK full
+DR: save
+FD: 2000 OK save
+FD: Attribute record for each file as sent to the
+ Storage daemon (described above).
+FD: Null packet
+FD: <append close responses from Storage daemon>
+ e.g.
+ 3000 OK Volumes = <number of volumes>
+ 3001 Volume = <volume-id> <start file> <start block>
+ <end file> <end block> <volume session-id>
+ 3002 Volume data = <date/time of last write> <Number bytes written>
+ <number errors>
+ ... additional Volume / Volume data pairs for volumes 2 .. n
+FD: Null packet
+FD: close socket
+\end{verbatim}
+\normalsize
+
+\section{The Save Protocol Between the File Daemon and the Storage Daemon}
+\index{Save Protocol Between the File Daemon and the Storage Daemon }
+\index{Daemon!Save Protocol Between the File Daemon and the Storage }
+\addcontentsline{toc}{subsection}{Save Protocol Between the File Daemon and
+the Storage Daemon}
+
+Once the Director has sent a {\bf save} command to the File daemon, the File
+daemon will contact the Storage daemon to begin the save.
+
+In what follows, FD: refers to information sent over the network from the
+File daemon to the Storage daemon, and SD: refers to information sent from
+the Storage daemon to the File daemon.
+
+\subsection{Command and Control Information}
+\index{Information!Command and Control }
+\index{Command and Control Information }
+\addcontentsline{toc}{subsubsection}{Command and Control Information}
+
+Command and control information is exchanged in human readable ASCII commands.
+
+
+\footnotesize
+\begin{verbatim}
+FD: listens
+SD: makes connection
+FD: append open session = <JobId> [<password>]
+SD: 3000 OK ticket = <number>
+FD: append data <ticket-number>
+SD: 3000 OK data address = <IPaddress> port = <port>
+\end{verbatim}
+\normalsize
+
+\subsection{Data Information}
+\index{Information!Data }
+\index{Data Information }
+\addcontentsline{toc}{subsubsection}{Data Information}
+
+The Data information consists of the file attributes and file data sent to
+the Storage daemon. For the most part, the data information is sent one way:
+from the File daemon to the Storage daemon. This allows the File daemon to
+transfer information as fast as possible without a lot of handshaking and
+network overhead.
+
+However, from time to time, the File daemon needs to do a sort of checkpoint
+of the situation to ensure that everything is going well with the Storage
+daemon. To do so, the File daemon sends a packet with a negative length
+indicating that he wishes the Storage daemon to respond by sending a packet of
+information to the File daemon. The File daemon then waits to receive a packet
+from the Storage daemon before continuing.
+
+All data sent are in binary format except for the header packet, which is in
+ASCII. There are two packet types used in data transfer mode: a header packet,
+the contents of which are known to the Storage daemon, and a data packet, the
+contents of which are never examined by the Storage daemon.
+
+The first data packet to the Storage daemon will be an ASCII header packet
+consisting of the following data:
+
+\lt{}File-Index\gt{} \lt{}Stream-Id\gt{} \lt{}Info\gt{}
+
+where {\bf \lt{}File-Index\gt{}} is a sequential number beginning from one
+that increments with each file (or directory) sent;
+
+where {\bf \lt{}Stream-Id\gt{}} will be 1 for the Attributes record and 2
+for uncompressed File data (3 is reserved for the MD5 signature for the
+file);
+
+and where {\bf \lt{}Info\gt{}} transmits information about the Stream to the
+Storage daemon. It is a character string field in which each character has
+a meaning. The only character currently defined is 0 (zero), which is simply
+a placeholder (a no-op). In the future, there may be codes indicating
+compressed data, encrypted data, etc.
+
+Immediately following the header packet, the Storage daemon will expect any
+number of data packets. The series of data packets is terminated by a zero
+length packet, which indicates to the Storage daemon that the next packet will
+be another header packet. As previously mentioned, a negative length packet is
+a request for the Storage daemon to temporarily enter command mode and send a
+reply to the File daemon. Thus an actual conversation might contain the
+following exchanges:
+
+\footnotesize
+\begin{verbatim}
+FD: <1 1 0> (header packet)
+FD: <data packet containing file-attributes>
+FD: Null packet
+FD: <1 2 0>
+FD: <multiple data packets containing the file data>
+FD: Packet length = -1
+SD: 3000 OK
+FD: <2 1 0>
+FD: <data packet containing file-attributes>
+FD: Null packet
+FD: <2 2 0>
+FD: <multiple data packets containing the file data>
+FD: Null packet
+FD: Null packet
+FD: append end session <ticket-number>
+SD: 3000 OK end
+FD: append close session <ticket-number>
+SD: 3000 OK Volumes = <number of volumes>
+SD: 3001 Volume = <volumeid> <start file> <start block>
+ <end file> <end block> <volume session-id>
+SD: 3002 Volume data = <date/time of last write> <Number bytes written>
+ <number errors>
+SD: ... additional Volume / Volume data pairs for
+ volumes 2 .. n
+FD: close socket
+\end{verbatim}
+\normalsize
+
+The information returned to the File daemon by the Storage daemon in response
+to the {\bf append close session} is transmitted in turn to the Director.
\begin{document}
\sloppy
-\include{coverpage}
+\include{coverpage-en}
\clearpage
\pagenumbering{roman}
\pagestyle{myheadings}
\markboth{Bacula Version \version}{Bacula Version \version}
\pagenumbering{arabic}
-\include{generaldevel}
-\include{git}
-\include{pluginAPI}
-\include{platformsupport}
-\include{daemonprotocol}
-\include{director}
-\include{file}
-\include{storage}
-\include{catalog}
-\include{mediaformat}
-\include{porting}
-\include{gui-interface}
-\include{tls-techdoc}
-\include{regression}
-\include{md5}
-\include{mempool}
-\include{netprotocol}
-\include{smartall}
-\include{fdl}
+\include{generaldevel-en}
+\include{git-en}
+\include{pluginAPI-en}
+\include{platformsupport-en}
+\include{daemonprotocol-en}
+\include{director-en}
+\include{file-en}
+\include{storage-en}
+\include{catalog-en}
+\include{mediaformat-en}
+\include{porting-en}
+\include{gui-interface-en}
+\include{tls-techdoc-en}
+\include{regression-en}
+\include{md5-en}
+\include{mempool-en}
+\include{netprotocol-en}
+\include{smartall-en}
+\include{fdl-en}
% The following line tells link_resolver.pl to not include these files:
--- /dev/null
+%%
+%%
+
+\chapter{Director Services Daemon}
+\label{_ChapterStart6}
+\index{Daemon!Director Services }
+\index{Director Services Daemon }
+\addcontentsline{toc}{section}{Director Services Daemon}
+
+This chapter is intended to be a technical discussion of the Director services
+and as such is not targeted at end users but rather at developers and system
+administrators that want or need to know more of the working details of {\bf
+Bacula}.
+
+The {\bf Bacula Director} services consist of the program that supervises all
+the backup and restore operations.
+
+To be written ...
--- /dev/null
+%---------The file header---------------------------------------------
+
+%% \usepackage[english]{babel} %language selection
+%% \usepackage[T1]{fontenc}
+
+%%\pagenumbering{arabic}
+
+%% \usepackage{hyperref}
+%% \hypersetup{colorlinks,
+%% citecolor=black,
+%% filecolor=black,
+%% linkcolor=black,
+%% urlcolor=black,
+%% pdftex}
+
+
+%---------------------------------------------------------------------
+\chapter{GNU Free Documentation License}
+\index[general]{GNU Free Documentation License}
+\index[general]{License!GNU Free Documentation}
+\addcontentsline{toc}{section}{GNU Free Documentation License}
+
+%\label{label_fdl}
+
+ \begin{center}
+
+ Version 1.2, November 2002
+
+
+ Copyright \copyright 2000,2001,2002 Free Software Foundation, Inc.
+
+ \bigskip
+
+ 51 Franklin St, Fifth Floor, Boston, MA 02110-1301 USA
+
+ \bigskip
+
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+\end{center}
+
+
+\begin{center}
+{\bf\large Preamble}
+\end{center}
+
+The purpose of this License is to make a manual, textbook, or other
+functional and useful document "free" in the sense of freedom: to
+assure everyone the effective freedom to copy and redistribute it,
+with or without modifying it, either commercially or noncommercially.
+Secondarily, this License preserves for the author and publisher a way
+to get credit for their work, while not being considered responsible
+for modifications made by others.
+
+This License is a kind of "copyleft", which means that derivative
+works of the document must themselves be free in the same sense. It
+complements the GNU General Public License, which is a copyleft
+license designed for free software.
+
+We have designed this License in order to use it for manuals for free
+software, because free software needs free documentation: a free
+program should come with manuals providing the same freedoms that the
+software does. But this License is not limited to software manuals;
+it can be used for any textual work, regardless of subject matter or
+whether it is published as a printed book. We recommend this License
+principally for works whose purpose is instruction or reference.
+
+
+\begin{center}
+{\Large\bf 1. APPLICABILITY AND DEFINITIONS}
+\addcontentsline{toc}{section}{1. APPLICABILITY AND DEFINITIONS}
+\end{center}
+
+This License applies to any manual or other work, in any medium, that
+contains a notice placed by the copyright holder saying it can be
+distributed under the terms of this License. Such a notice grants a
+world-wide, royalty-free license, unlimited in duration, to use that
+work under the conditions stated herein. The \textbf{"Document"}, below,
+refers to any such manual or work. Any member of the public is a
+licensee, and is addressed as \textbf{"you"}. You accept the license if you
+copy, modify or distribute the work in a way requiring permission
+under copyright law.
+
+A \textbf{"Modified Version"} of the Document means any work containing the
+Document or a portion of it, either copied verbatim, or with
+modifications and/or translated into another language.
+
+A \textbf{"Secondary Section"} is a named appendix or a front-matter section of
+the Document that deals exclusively with the relationship of the
+publishers or authors of the Document to the Document's overall subject
+(or to related matters) and contains nothing that could fall directly
+within that overall subject. (Thus, if the Document is in part a
+textbook of mathematics, a Secondary Section may not explain any
+mathematics.) The relationship could be a matter of historical
+connection with the subject or with related matters, or of legal,
+commercial, philosophical, ethical or political position regarding
+them.
+
+The \textbf{"Invariant Sections"} are certain Secondary Sections whose titles
+are designated, as being those of Invariant Sections, in the notice
+that says that the Document is released under this License. If a
+section does not fit the above definition of Secondary then it is not
+allowed to be designated as Invariant. The Document may contain zero
+Invariant Sections. If the Document does not identify any Invariant
+Sections then there are none.
+
+The \textbf{"Cover Texts"} are certain short passages of text that are listed,
+as Front-Cover Texts or Back-Cover Texts, in the notice that says that
+the Document is released under this License. A Front-Cover Text may
+be at most 5 words, and a Back-Cover Text may be at most 25 words.
+
+A \textbf{"Transparent"} copy of the Document means a machine-readable copy,
+represented in a format whose specification is available to the
+general public, that is suitable for revising the document
+straightforwardly with generic text editors or (for images composed of
+pixels) generic paint programs or (for drawings) some widely available
+drawing editor, and that is suitable for input to text formatters or
+for automatic translation to a variety of formats suitable for input
+to text formatters. A copy made in an otherwise Transparent file
+format whose markup, or absence of markup, has been arranged to thwart
+or discourage subsequent modification by readers is not Transparent.
+An image format is not Transparent if used for any substantial amount
+of text. A copy that is not "Transparent" is called \textbf{"Opaque"}.
+
+Examples of suitable formats for Transparent copies include plain
+ASCII without markup, Texinfo input format, LaTeX input format, SGML
+or XML using a publicly available DTD, and standard-conforming simple
+HTML, PostScript or PDF designed for human modification. Examples of
+transparent image formats include PNG, XCF and JPG. Opaque formats
+include proprietary formats that can be read and edited only by
+proprietary word processors, SGML or XML for which the DTD and/or
+processing tools are not generally available, and the
+machine-generated HTML, PostScript or PDF produced by some word
+processors for output purposes only.
+
+The \textbf{"Title Page"} means, for a printed book, the title page itself,
+plus such following pages as are needed to hold, legibly, the material
+this License requires to appear in the title page. For works in
+formats which do not have any title page as such, "Title Page" means
+the text near the most prominent appearance of the work's title,
+preceding the beginning of the body of the text.
+
+A section \textbf{"Entitled XYZ"} means a named subunit of the Document whose
+title either is precisely XYZ or contains XYZ in parentheses following
+text that translates XYZ in another language. (Here XYZ stands for a
+specific section name mentioned below, such as \textbf{"Acknowledgements"},
+\textbf{"Dedications"}, \textbf{"Endorsements"}, or \textbf{"History"}.)
+To \textbf{"Preserve the Title"}
+of such a section when you modify the Document means that it remains a
+section "Entitled XYZ" according to this definition.
+
+The Document may include Warranty Disclaimers next to the notice which
+states that this License applies to the Document. These Warranty
+Disclaimers are considered to be included by reference in this
+License, but only as regards disclaiming warranties: any other
+implication that these Warranty Disclaimers may have is void and has
+no effect on the meaning of this License.
+
+
+\begin{center}
+{\Large\bf 2. VERBATIM COPYING}
+\addcontentsline{toc}{section}{2. VERBATIM COPYING}
+\end{center}
+
+You may copy and distribute the Document in any medium, either
+commercially or noncommercially, provided that this License, the
+copyright notices, and the license notice saying this License applies
+to the Document are reproduced in all copies, and that you add no other
+conditions whatsoever to those of this License. You may not use
+technical measures to obstruct or control the reading or further
+copying of the copies you make or distribute. However, you may accept
+compensation in exchange for copies. If you distribute a large enough
+number of copies you must also follow the conditions in section 3.
+
+You may also lend copies, under the same conditions stated above, and
+you may publicly display copies.
+
+
+\begin{center}
+{\Large\bf 3. COPYING IN QUANTITY}
+\addcontentsline{toc}{section}{3. COPYING IN QUANTITY}
+\end{center}
+
+
+If you publish printed copies (or copies in media that commonly have
+printed covers) of the Document, numbering more than 100, and the
+Document's license notice requires Cover Texts, you must enclose the
+copies in covers that carry, clearly and legibly, all these Cover
+Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on
+the back cover. Both covers must also clearly and legibly identify
+you as the publisher of these copies. The front cover must present
+the full title with all words of the title equally prominent and
+visible. You may add other material on the covers in addition.
+Copying with changes limited to the covers, as long as they preserve
+the title of the Document and satisfy these conditions, can be treated
+as verbatim copying in other respects.
+
+If the required texts for either cover are too voluminous to fit
+legibly, you should put the first ones listed (as many as fit
+reasonably) on the actual cover, and continue the rest onto adjacent
+pages.
+
+If you publish or distribute Opaque copies of the Document numbering
+more than 100, you must either include a machine-readable Transparent
+copy along with each Opaque copy, or state in or with each Opaque copy
+a computer-network location from which the general network-using
+public has access to download using public-standard network protocols
+a complete Transparent copy of the Document, free of added material.
+If you use the latter option, you must take reasonably prudent steps,
+when you begin distribution of Opaque copies in quantity, to ensure
+that this Transparent copy will remain thus accessible at the stated
+location until at least one year after the last time you distribute an
+Opaque copy (directly or through your agents or retailers) of that
+edition to the public.
+
+It is requested, but not required, that you contact the authors of the
+Document well before redistributing any large number of copies, to give
+them a chance to provide you with an updated version of the Document.
+
+
+\begin{center}
+{\Large\bf 4. MODIFICATIONS}
+\addcontentsline{toc}{section}{4. MODIFICATIONS}
+\end{center}
+
+You may copy and distribute a Modified Version of the Document under
+the conditions of sections 2 and 3 above, provided that you release
+the Modified Version under precisely this License, with the Modified
+Version filling the role of the Document, thus licensing distribution
+and modification of the Modified Version to whoever possesses a copy
+of it. In addition, you must do these things in the Modified Version:
+
+\begin{itemize}
+\item[A.]
+ Use in the Title Page (and on the covers, if any) a title distinct
+ from that of the Document, and from those of previous versions
+ (which should, if there were any, be listed in the History section
+ of the Document). You may use the same title as a previous version
+ if the original publisher of that version gives permission.
+
+\item[B.]
+ List on the Title Page, as authors, one or more persons or entities
+ responsible for authorship of the modifications in the Modified
+ Version, together with at least five of the principal authors of the
+ Document (all of its principal authors, if it has fewer than five),
+ unless they release you from this requirement.
+
+\item[C.]
+ State on the Title page the name of the publisher of the
+ Modified Version, as the publisher.
+
+\item[D.]
+ Preserve all the copyright notices of the Document.
+
+\item[E.]
+ Add an appropriate copyright notice for your modifications
+ adjacent to the other copyright notices.
+
+\item[F.]
+ Include, immediately after the copyright notices, a license notice
+ giving the public permission to use the Modified Version under the
+ terms of this License, in the form shown in the Addendum below.
+
+\item[G.]
+ Preserve in that license notice the full lists of Invariant Sections
+ and required Cover Texts given in the Document's license notice.
+
+\item[H.]
+ Include an unaltered copy of this License.
+
+\item[I.]
+ Preserve the section Entitled "History", Preserve its Title, and add
+ to it an item stating at least the title, year, new authors, and
+ publisher of the Modified Version as given on the Title Page. If
+ there is no section Entitled "History" in the Document, create one
+ stating the title, year, authors, and publisher of the Document as
+ given on its Title Page, then add an item describing the Modified
+ Version as stated in the previous sentence.
+
+\item[J.]
+ Preserve the network location, if any, given in the Document for
+ public access to a Transparent copy of the Document, and likewise
+ the network locations given in the Document for previous versions
+ it was based on. These may be placed in the "History" section.
+ You may omit a network location for a work that was published at
+ least four years before the Document itself, or if the original
+ publisher of the version it refers to gives permission.
+
+\item[K.]
+ For any section Entitled "Acknowledgements" or "Dedications",
+ Preserve the Title of the section, and preserve in the section all
+ the substance and tone of each of the contributor acknowledgements
+ and/or dedications given therein.
+
+\item[L.]
+ Preserve all the Invariant Sections of the Document,
+ unaltered in their text and in their titles. Section numbers
+ or the equivalent are not considered part of the section titles.
+
+\item[M.]
+ Delete any section Entitled "Endorsements". Such a section
+ may not be included in the Modified Version.
+
+\item[N.]
+ Do not retitle any existing section to be Entitled "Endorsements"
+ or to conflict in title with any Invariant Section.
+
+\item[O.]
+ Preserve any Warranty Disclaimers.
+\end{itemize}
+
+If the Modified Version includes new front-matter sections or
+appendices that qualify as Secondary Sections and contain no material
+copied from the Document, you may at your option designate some or all
+of these sections as invariant. To do this, add their titles to the
+list of Invariant Sections in the Modified Version's license notice.
+These titles must be distinct from any other section titles.
+
+You may add a section Entitled "Endorsements", provided it contains
+nothing but endorsements of your Modified Version by various
+parties--for example, statements of peer review or that the text has
+been approved by an organization as the authoritative definition of a
+standard.
+
+You may add a passage of up to five words as a Front-Cover Text, and a
+passage of up to 25 words as a Back-Cover Text, to the end of the list
+of Cover Texts in the Modified Version. Only one passage of
+Front-Cover Text and one of Back-Cover Text may be added by (or
+through arrangements made by) any one entity. If the Document already
+includes a cover text for the same cover, previously added by you or
+by arrangement made by the same entity you are acting on behalf of,
+you may not add another; but you may replace the old one, on explicit
+permission from the previous publisher that added the old one.
+
+The author(s) and publisher(s) of the Document do not by this License
+give permission to use their names for publicity for or to assert or
+imply endorsement of any Modified Version.
+
+
+\begin{center}
+{\Large\bf 5. COMBINING DOCUMENTS}
+\addcontentsline{toc}{section}{5. COMBINING DOCUMENTS}
+\end{center}
+
+
+You may combine the Document with other documents released under this
+License, under the terms defined in section 4 above for modified
+versions, provided that you include in the combination all of the
+Invariant Sections of all of the original documents, unmodified, and
+list them all as Invariant Sections of your combined work in its
+license notice, and that you preserve all their Warranty Disclaimers.
+
+The combined work need only contain one copy of this License, and
+multiple identical Invariant Sections may be replaced with a single
+copy. If there are multiple Invariant Sections with the same name but
+different contents, make the title of each such section unique by
+adding at the end of it, in parentheses, the name of the original
+author or publisher of that section if known, or else a unique number.
+Make the same adjustment to the section titles in the list of
+Invariant Sections in the license notice of the combined work.
+
+In the combination, you must combine any sections Entitled "History"
+in the various original documents, forming one section Entitled
+"History"; likewise combine any sections Entitled "Acknowledgements",
+and any sections Entitled "Dedications". You must delete all sections
+Entitled "Endorsements".
+
+\begin{center}
+{\Large\bf 6. COLLECTIONS OF DOCUMENTS}
+\addcontentsline{toc}{section}{6. COLLECTIONS OF DOCUMENTS}
+\end{center}
+
+You may make a collection consisting of the Document and other documents
+released under this License, and replace the individual copies of this
+License in the various documents with a single copy that is included in
+the collection, provided that you follow the rules of this License for
+verbatim copying of each of the documents in all other respects.
+
+You may extract a single document from such a collection, and distribute
+it individually under this License, provided you insert a copy of this
+License into the extracted document, and follow this License in all
+other respects regarding verbatim copying of that document.
+
+
+\begin{center}
+{\Large\bf 7. AGGREGATION WITH INDEPENDENT WORKS}
+\addcontentsline{toc}{section}{7. AGGREGATION WITH INDEPENDENT WORKS}
+\end{center}
+
+
+A compilation of the Document or its derivatives with other separate
+and independent documents or works, in or on a volume of a storage or
+distribution medium, is called an "aggregate" if the copyright
+resulting from the compilation is not used to limit the legal rights
+of the compilation's users beyond what the individual works permit.
+When the Document is included in an aggregate, this License does not
+apply to the other works in the aggregate which are not themselves
+derivative works of the Document.
+
+If the Cover Text requirement of section 3 is applicable to these
+copies of the Document, then if the Document is less than one half of
+the entire aggregate, the Document's Cover Texts may be placed on
+covers that bracket the Document within the aggregate, or the
+electronic equivalent of covers if the Document is in electronic form.
+Otherwise they must appear on printed covers that bracket the whole
+aggregate.
+
+
+\begin{center}
+{\Large\bf 8. TRANSLATION}
+\addcontentsline{toc}{section}{8. TRANSLATION}
+\end{center}
+
+
+Translation is considered a kind of modification, so you may
+distribute translations of the Document under the terms of section 4.
+Replacing Invariant Sections with translations requires special
+permission from their copyright holders, but you may include
+translations of some or all Invariant Sections in addition to the
+original versions of these Invariant Sections. You may include a
+translation of this License, and all the license notices in the
+Document, and any Warranty Disclaimers, provided that you also include
+the original English version of this License and the original versions
+of those notices and disclaimers. In case of a disagreement between
+the translation and the original version of this License or a notice
+or disclaimer, the original version will prevail.
+
+If a section in the Document is Entitled "Acknowledgements",
+"Dedications", or "History", the requirement (section 4) to Preserve
+its Title (section 1) will typically require changing the actual
+title.
+
+
+\begin{center}
+{\Large\bf 9. TERMINATION}
+\addcontentsline{toc}{section}{9. TERMINATION}
+\end{center}
+
+
+You may not copy, modify, sublicense, or distribute the Document except
+as expressly provided for under this License. Any other attempt to
+copy, modify, sublicense or distribute the Document is void, and will
+automatically terminate your rights under this License. However,
+parties who have received copies, or rights, from you under this
+License will not have their licenses terminated so long as such
+parties remain in full compliance.
+
+
+\begin{center}
+{\Large\bf 10. FUTURE REVISIONS OF THIS LICENSE}
+\addcontentsline{toc}{section}{10. FUTURE REVISIONS OF THIS LICENSE}
+\end{center}
+
+
+The Free Software Foundation may publish new, revised versions
+of the GNU Free Documentation License from time to time. Such new
+versions will be similar in spirit to the present version, but may
+differ in detail to address new problems or concerns. See
+http://www.gnu.org/copyleft/.
+
+Each version of the License is given a distinguishing version number.
+If the Document specifies that a particular numbered version of this
+License "or any later version" applies to it, you have the option of
+following the terms and conditions either of that specified version or
+of any later version that has been published (not as a draft) by the
+Free Software Foundation. If the Document does not specify a version
+number of this License, you may choose any version ever published (not
+as a draft) by the Free Software Foundation.
+
+
+\begin{center}
+{\Large\bf ADDENDUM: How to use this License for your documents}
+\addcontentsline{toc}{section}{ADDENDUM: How to use this License for your documents}
+\end{center}
+
+To use this License in a document you have written, include a copy of
+the License in the document and put the following copyright and
+license notices just after the title page:
+
+\bigskip
+\begin{quote}
+ Copyright \copyright YEAR YOUR NAME.
+ Permission is granted to copy, distribute and/or modify this document
+ under the terms of the GNU Free Documentation License, Version 1.2
+ or any later version published by the Free Software Foundation;
+ with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts.
+ A copy of the license is included in the section entitled "GNU
+ Free Documentation License".
+\end{quote}
+\bigskip
+
+If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts,
+replace the "with...Texts." line with this:
+
+\bigskip
+\begin{quote}
+ with the Invariant Sections being LIST THEIR TITLES, with the
+ Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST.
+\end{quote}
+\bigskip
+
+If you have Invariant Sections without Cover Texts, or some other
+combination of the three, merge those two alternatives to suit the
+situation.
+
+If your document contains nontrivial examples of program code, we
+recommend releasing these examples in parallel under your choice of
+free software license, such as the GNU General Public License,
+to permit their use in free software.
+
+%---------------------------------------------------------------------
--- /dev/null
+%%
+%%
+
+\chapter{File Services Daemon}
+\label{_ChapterStart11}
+\index{File Services Daemon }
+\index{Daemon!File Services }
+\addcontentsline{toc}{section}{File Services Daemon}
+
+Please note, this section is somewhat out of date as the code has evolved
+significantly. The basic idea has not changed though.
+
+This chapter is intended to be a technical discussion of the File daemon
+services and as such is not targeted at end users but rather at developers and
+system administrators that want or need to know more of the working details of
+{\bf Bacula}.
+
+The {\bf Bacula File Services} consist of the programs that run on the system
+to be backed up and provide the interface between the Host File system and
+Bacula -- in particular, the Director and the Storage services.
+
+When the time comes for a backup, the Director gets in touch with the File daemon
+on the client machine and hands it a set of ``marching orders'' which, if
+written in English, might be something like the following:
+
+OK, {\bf File daemon}, it's time for your daily incremental backup. I want you
+to get in touch with the Storage daemon on host archive.mysite.com and perform
+the following save operations with the designated options. You'll note that
+I've attached include and exclude lists and patterns you should apply when
+backing up the file system. As this is an incremental backup, you should save
+only files modified since the time you started your last backup which, as you
+may recall, was 2000-11-19-06:43:38. Please let me know when you're done and
+how it went. Thank you.
+
+So, having been handed everything it needs to decide what to dump and where to
+store it, the File daemon doesn't need to have any further contact with the
+Director until the backup is complete, provided there are no errors. If there
+are errors, the error messages will be delivered immediately to the Director.
+While the backup is proceeding, the File daemon will send the file coordinates
+and data for each file being backed up to the Storage daemon, which will in
+turn pass the file coordinates to the Director to put in the catalog.
+
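+As a very rough illustration of this backup flow, the following minimal
+C++ sketch mimics the sequence just described. It is purely hypothetical:
+none of the types, functions, or messages below correspond to Bacula's
+actual source code or wire protocol.
+
+\begin{verbatim}
+// Hypothetical sketch of the File daemon backup loop -- not Bacula code.
+#include <ctime>
+#include <iostream>
+#include <string>
+#include <vector>
+
+struct FileRecord {            // the "file coordinates" plus the data
+   std::string path;
+   std::time_t mtime;          // last modification time
+   std::string data;           // file contents
+};
+
+// Stand-in for walking the include list on the client file system.
+std::vector<FileRecord> scan(const std::vector<std::string>& include)
+{
+   std::vector<FileRecord> found;
+   for (const auto& p : include)
+      found.push_back({p, std::time(nullptr), "<contents of " + p + ">"});
+   return found;
+}
+
+// Stand-in for the socket to the Storage daemon. In Bacula, it is the
+// Storage daemon that forwards the coordinates to the Director.
+void send_to_storage(const FileRecord& rec)
+{
+   std::cout << "FD -> SD: " << rec.path << " ("
+             << rec.data.size() << " bytes)\n";
+}
+
+// Incremental backup loop: save only files modified since the last run,
+// with no further Director contact unless an error occurs.
+void run_backup(const std::vector<std::string>& include, std::time_t since)
+{
+   for (const auto& rec : scan(include))
+      if (rec.mtime >= since)
+         send_to_storage(rec);
+}
+
+int main()
+{
+   run_backup({"/etc/passwd", "/home/user/notes.txt"}, 0);
+   return 0;
+}
+\end{verbatim}
+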
+During a {\bf Verify} of the catalog, the situation is different, since the
+File daemon will have an exchange with the Director for each file, and will
+not contact the Storage daemon.
+
+A {\bf Restore} operation will be very similar to the {\bf Backup} except that
+during the {\bf Restore} the Storage daemon will not send storage coordinates
+to the Director since the Director presumably already has them. On the other
+hand, any error messages from either the Storage daemon or File daemon will
+normally be sent directly to the Director (this, of course, depends on how
+the Message resource is defined).
+
+\section{Commands Received from the Director for a Backup}
+\index{Backup!Commands Received from the Director for a }
+\index{Commands Received from the Director for a Backup }
+\addcontentsline{toc}{subsection}{Commands Received from the Director for a
+Backup}
+
+To be written ...
+
+\section{Commands Received from the Director for a Restore}
+\index{Commands Received from the Director for a Restore }
+\index{Restore!Commands Received from the Director for a }
+\addcontentsline{toc}{subsection}{Commands Received from the Director for a
+Restore}
+
+To be written ...
--- /dev/null
+%%
+%%
+
+\chapter{Bacula Developer Notes}
+\label{_ChapterStart10}
+\index{Bacula Developer Notes}
+\index{Notes!Bacula Developer}
+\addcontentsline{toc}{section}{Bacula Developer Notes}
+
+This document is intended mostly for developers and describes how you can
+contribute to the Bacula project and the general framework for making
+Bacula source changes.
+
+\subsection{Contributions}
+\index{Contributions}
+\addcontentsline{toc}{subsubsection}{Contributions}
+
+Contributions to the Bacula project come in many forms: ideas,
+participation in helping people on the bacula-users email list, packaging
+Bacula binaries for the community, helping improve the documentation, and
+submitting code.
+
+Contributions in the form of submissions for inclusion in the project are
+broken into two groups. The first are contributions that are aids and not
+essential to Bacula. In general, these will be scripts or will go into the
+{\bf bacula/examples} directory. For these kinds of non-essential
+contributions there is no obligation to do a copyright assignment as
+described below. However, a copyright assignment would still be
+appreciated.
+
+The second class of contributions are those which will be integrated with
+Bacula and become an essential part (code, scripts, documentation, ...).
+Within this class of contributions, there are two hurdles to surmount: the
+first is getting your patch accepted, and the second is dealing with copyright issues.
+The following text describes some of the requirements for such code.
+
+\subsection{Patches}
+\index{Patches}
+\addcontentsline{toc}{subsubsection}{Patches}
+
+Subject to the copyright assignment described below, your patches should be
+sent in {\bf git format-patch} format relative to the current contents of the
+master branch of the Source Forge Git repository. Please attach the
+output file or files generated by {\bf git format-patch} to the email
+rather than including them directly, to avoid wrapping of the lines
+in the patch. Please be sure to use the Bacula
+indenting standard (see below) for source code. If you have checked out
+the source with Git, you can get a diff using:
+
+\begin{verbatim}
+git pull
+git format-patch -M origin/master
+\end{verbatim}
+
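+For example (the branch name and commit message below are only
+illustrative), a complete patch workflow on a private topic branch might
+look like this:
+
+\begin{verbatim}
+git checkout -b my-fix master     # create a topic branch from master
+# ... edit the source, build, and run the regression tests ...
+git commit -a -m "Describe the change"
+git format-patch master          # writes 0001-Describe-the-change.patch
+\end{verbatim}
+
+The patch file or files written by {\bf git format-patch} are what you
+attach to your email.
+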
+If you plan on doing significant development work over a period of time,
+after having your first patch reviewed and approved, you will be eligible
+for having developer Git write access so that you can commit your changes
+directly to the Git repository. To do so, you will need a userid on Source
+Forge.
+
+\subsection{Copyrights}
+\index{Copyrights}
+\addcontentsline{toc}{subsubsection}{Copyrights}
+
+To avoid future problems concerning changes of licensing or
+copyrights, all code contributions of more than a handful of lines
+must be in the Public Domain or have the copyright transferred to
+the Free Software Foundation Europe e.V. with a Fiduciary License
+Agreement (FLA), as is the case for all the current code.
+
+Prior to November 2004, all the code was copyrighted by Kern Sibbald and
+John Walker. After November 2004, the code was copyrighted by Kern
+Sibbald, then on the 15th of November 2006, Kern transferred the copyright
+to the Free Software Foundation Europe e.V. In signing the FLA and
+transferring the copyright, you retain the right to use the code you have
+submitted as you want, and you ensure that Bacula will always remain Free
+and Open Source.
+
+Your name should be clearly indicated as the author of the code, and you
+must be extremely careful not to violate any copyrights or patents or use
+other people's code without acknowledging it. The purpose of this
+requirement is to avoid future copyright, patent, or intellectual property
+problems. Please read the LICENSE agreement in the main Bacula source code
+directory. When you sign the Fiduciary License Agreement (FLA) and send it
+in, you are agreeing to the terms of that LICENSE file.
+
+If you don't understand what we mean by future problems, please
+examine the difficulties Mozilla had finding previous contributors at
+\elink{http://www.mozilla.org/MPL/missing.html}{http://www.mozilla.org/MPL/missing.html}.
+The other important issue is to
+avoid copyright, patent, or intellectual property violations as was
+(May 2003) claimed by SCO against IBM.
+
+Although the copyright will be held by the Free Software
+Foundation Europe e.V., each developer is expected to indicate
+that he wrote and/or modified a particular module (or file) and
+any other sources. The copyright assignment may seem a bit
+unusual, but in reality, it is not. Most large projects require
+this.
+
+If you have any doubts about this, please don't hesitate to ask. The
+objective is to assure the long term survival of the Bacula project.
+
+Items not needing a copyright assignment are: most small changes,
+enhancements, or bug fixes of 5-10 lines of code, which amount to
+less than 20\% of any particular file.
+
+\subsection{Copyright Assignment -- Fiduciary License Agreement}
+\index{Copyright Assignment}
+\index{Assignment!Copyright}
+\addcontentsline{toc}{subsubsection}{Copyright Assignment -- Fiduciary License Agreement}
+
+Since this is not a commercial enterprise, and we prefer to believe in
+everyone's good faith, previously developers could assign the copyright by
+explicitly acknowledging that they did so in their first submission. This
+was sufficient if the developer was independent, or an employee of a
+not-for-profit organization or a university. However, in an effort to
+ensure that the Bacula code is really clean, beginning in August 2006, all
+previous and future developers with SVN write access will be asked to submit a
+copyright assignment (or Fiduciary License Agreement -- FLA),
+which means you agree to the LICENSE in the main source
+directory. It also means that you receive back the right to use
+the code that you have submitted.
+
+Any developer who wants to contribute and is employed by a company should
+either list the employer as the owner of the code, or get explicit
+permission from the employer to sign the copyright assignment. This is because in
+many countries, all work that an employee does whether on company time or
+in the employee's free time is considered to be Intellectual Property of
+the company. Obtaining official approval or an FLA from the company will
+avoid misunderstandings between the employee, the company, and the Bacula
+project. A good number of companies have already followed this procedure.
+
+The Fiduciary License Agreement is posted on the Bacula web site at:
+\elink{http://www.bacula.org/en/FLA-bacula.en.pdf}{http://www.bacula.org/en/FLA-bacula.en.pdf}
+
+The instructions for filling out this agreement are also at:
+\elink{http://www.bacula.org/?page=fsfe}{http://www.bacula.org/?page=fsfe}
+
+It should be filled out, then sent to:
+
+\begin{verbatim}
+ Kern Sibbald
+ Cotes-de-Montmoiret 9
+ 1012 Lausanne
+ Switzerland
+\end{verbatim}
+
+Please note that the above address is different from the officially
+registered office mentioned in the document. When you send in such a
+complete document, please notify me: kern at sibbald dot com, and
+please add your email address to the FLA so that I can contact you
+to confirm reception of the signed FLA.
+
+
+\section{The Development Cycle}
+\index{Development Cycle}
+\index{Cycle!Development}
+\addcontentsline{toc}{subsubsection}{Development Cycle}
+
+As discussed on the email lists, the number of contributions is
+increasing significantly. We expect this positive trend
+will continue. As a consequence, we have modified how we do
+development, and instead of making a list of all the features that we will
+implement in the next version, each developer signs up for one (maybe
+two) projects at a time, and when they are complete, and the code
+is stable, we will release a new version. The release cycle will probably
+be roughly six months.
+
+The difference is that with a shorter release cycle and fewer released
+features, we will have more time to review the new code that is being
+contributed, and will be able to devote more time to a smaller number of
+projects (some prior versions had too many new features for us to handle
+correctly).
+
+Future release schedules will be much the same, and the
+number of new features will also be much the same, provided that the
+contributions continue to come -- and they show no signs of letting up :-)
+
+\index{Feature Requests}
+{\bf Feature Requests:} \\
+In addition, we have "formalized" the feature requests a bit.
+
+Instead of me maintaining an informal list of everything I run into
+(kernstodo), we now maintain a "formal" list of projects. This
+means that all new feature requests, including those recently discussed on
+the email lists, must be formally submitted and approved.
+
+Formal submission of feature requests will take two forms: \\
+1. Non-mandatory, but highly recommended: discuss proposed new features
+on the mailing list.\\
+2. Formal submission of a Feature Request in a special format. We'll
+give an example of this below, but you can also find it on the web site
+under "Support -\gt{} Feature Requests". Since it takes a bit of time to
+properly fill out a Feature Request form, you probably should check on the
+email list first.
+
+Once the Feature Request is received by the keeper of the projects list, it
+will be sent to the Bacula project manager (Kern), and he will either
+accept it (90\% of the time), send it back asking for clarification (10\% of
+the time), send it to the email list asking for opinions, or reject it
+(very few cases).
+
+If it is accepted, it will go in the "projects" file (a simple ASCII file)
+maintained in the main Bacula source directory.
+
+{\bf Implementation of Feature Requests:}\\
+Any qualified developer can sign up for a project. The project must have
+an entry in the projects file, and the developer's name will appear in the
+Status field.
+
+{\bf How Feature Requests are accepted:}\\
+Acceptance of Feature Requests depends on several things: \\
+1. Feedback from users. If it is negative, the Feature Request will probably not be
+accepted. \\
+2. The difficulty of the project. A project that is so
+difficult that we cannot imagine finding someone to implement it probably won't
+be accepted. Obviously, if you know how to implement it, don't hesitate
+to put it in your Feature Request. \\
+3. Whether or not the Feature Request fits within the current strategy of
+Bacula (for example, a Feature Request to change the tape format to
+tar format would probably not be accepted, ...).
+
+{\bf How Feature Requests are prioritized:}\\
+Once a Feature Request is accepted, it needs to be implemented. If you
+can find a developer for it, or one signs up for implementing it, then the
+Feature Request becomes top priority (at least for that developer).
+
+Between releases of Bacula, we will generally solicit Feature Request input
+for the next version, and we suggest that you
+discuss and send in your Feature Requests for the next release. Please
+verify that the Feature Request is not already in the current projects list.
+
+Once users have had several weeks to submit Feature Requests, the keeper of
+the projects list will organize them, and request users to vote on them.
+This will allow us to prioritize the Feature Requests. Having a
+priority is one thing, but getting it implemented is another thing -- we are
+hoping that the Bacula community will take more responsibility for assuring
+the implementation of accepted Feature Requests.
+
+Feature Request format:
+\begin{verbatim}
+============= Empty Feature Request form ===========
+Item n: One line summary ...
+ Date: Date submitted
+ Origin: Name and email of originator.
+ Status:
+
+ What: More detailed explanation ...
+
+ Why: Why it is important ...
+
+ Notes: Additional notes or features (omit if not used)
+============== End Feature Request form ==============
+\end{verbatim}
+
+\begin{verbatim}
+============= Example Completed Feature Request form ===========
+Item 1: Implement a Migration job type that will move the job
+ data from one device to another.
+ Origin: Sponsored by Riege Software International GmbH. Contact:
+ Daniel Holtkamp <holtkamp at riege dot com>
+ Date: 28 October 2005
+ Status: Partially coded in 1.37 -- much more to do. Assigned to
+ Kern.
+
+ What: The ability to copy, move, or archive data that is on a
+ device to another device is very important.
+
+ Why: An ISP might want to backup to disk, but after 30 days
+ migrate the data to tape backup and delete it from
+ disk. Bacula should be able to handle this
+ automatically. It needs to know what was put where,
+ and when, and what to migrate -- it is a bit like
+ retention periods. Doing so would allow space to be
+ freed up for current backups while maintaining older
+ data on tape drives.
+
+ Notes: Migration could be triggered by:
+ Number of Jobs
+ Number of Volumes
+ Age of Jobs
+ Highwater size (keep total size)
+ Lowwater mark
+=================================================
+\end{verbatim}
+
+
+\section{Bacula Code Submissions and Projects}
+\index{Submissions and Projects}
+\addcontentsline{toc}{subsection}{Code Submissions and Projects}
+
+Getting code implemented in Bacula works roughly as follows:
+
+\begin{itemize}
+
+\item Kern is the project manager, but prefers not to be a "gate keeper".
+ This means that the developers are expected to be self-motivated,
+ and once they have experience, submit directly to the Git
+ repositories. However,
+ it is a good idea to have your patches reviewed prior to submitting,
+ and it is a bad idea to submit monster patches because no one will
+ be able to properly review them. See below for more details on this.
+
+\item There are growing numbers of contributions (very good).
+
+\item Some contributions come in the form of relatively small patches,
+ which Kern reviews, integrates, documents, tests, and maintains.
+
+\item All Bacula developers take full
+ responsibility for writing the code, posting it as patches so that we can
+ review it as time permits, integrating it at an appropriate time,
+ responding to our requests for tweaking it (name changes, ...),
+ documenting it in the code, documenting it in the manual (even though
+ their mother tongue is not English), testing it, developing and committing
+ regression scripts, and answering in a timely fashion all bug reports --
+ even occasionally accepting additional bugs :-)
+
+ This is a sustainable way of going forward with Bacula, and the
+ direction that the project will be taking more and more. For
+ example, in the past, we have had some very dedicated programmers
+ who did major projects. However, some of these
+ programmers, due to outside obligations (job responsibilities, change of
+ job, school duties, ...), could not continue to maintain the code. In
+ those cases, the code suffers from lack of maintenance, sometimes we
+ patch it, sometimes not. In the end, if the code is not maintained, the
+ code gets dropped from the project (there are two such contributions
+ that are heading in that direction). Whenever possible, we would like
+ to avoid this, and ensure a continuation of the code and a sharing of
+ the development, debugging, documentation, and maintenance
+ responsibilities.
+\end{itemize}
+
+\section{Patches for Released Versions}
+\index{Patches for Released Versions}
+\addcontentsline{toc}{subsection}{Patches for Released Versions}
+If you fix a bug in a released version, you should, unless it is
+an absolutely trivial bug, create and release a patch file for the
+bug. The procedure is as follows:
+
+Fix the bug in the released branch and in the development master branch.
+
+Make a patch file for the branch and add the branch patch to
+the patches directory in both the branch and the trunk.
+The name should be 2.2.4-xxx.patch where xxx is unique, in this case it can
+be "restore", e.g. 2.2.4-restore.patch. Add to the top of the
+file a brief description and instructions for applying it -- see for example
+2.2.4-poll-mount.patch. The best way to create the patch file is as
+follows:
+
+\begin{verbatim}
+ (edit) 2.2.4-restore.patch
+ (input description)
+ (end edit)
+
+ git format-patch -M
+ mv 0001-xxx 2.2.4-restore.patch
+\end{verbatim}
+
+Check to make sure no extra junk got put into the patch file (i.e.
+it should have the patch for that bug only).
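+
+As an illustration only (the wording and the fix described here are
+hypothetical), the description block at the top of such a patch file might
+look like:
+
+\begin{verbatim}
+ This patch fixes the restore of ... in version 2.2.4.
+ Apply it to version 2.2.4 with:
+
+ cd <bacula-source>
+ patch -p1 <2.2.4-restore.patch
+ ./configure (your options)
+ make
+ ...
+\end{verbatim}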
+
+If there is not a bug report on the problem, create one, then add the
+patch to the bug report.
+
+Then upload it to the 2.2.x release of bacula-patches.
+
+So, in the end, the patch file is:
+\begin{itemize}
+\item Attached to the bug report
+
+\item In Branch-2.2/bacula/patches/...
+
+\item In the trunk
+
+\item Loaded on Source Forge bacula-patches 2.2.x release. When
+ you add it, click on the check box to send an Email so that all the
+ users that are monitoring SF patches get notified.
+\end{itemize}
+
+
+\section{Developing Bacula}
+\index{Developing Bacula}
+\index{Bacula!Developing}
+\addcontentsline{toc}{subsubsection}{Developing Bacula}
+
+Typically the simplest way to develop Bacula is to open one xterm window
+pointing to the source directory you wish to update; a second xterm window at
+the top source directory level, and a third xterm window at the bacula
+directory \lt{}top\gt{}/src/bacula. After making source changes in one of the
+directories, in the top source directory xterm, build the source, and start
+the daemons by entering:
+
+\begin{verbatim}
+make
+./startit
+\end{verbatim}
+
+then enter:
+
+\begin{verbatim}
+./console
+\end{verbatim}
+
+or
+
+\begin{verbatim}
+./gnome-console
+\end{verbatim}
+
+to start the Console program. Enter any commands for testing.
+For example: run kernsverify full.
+
+Note, the instructions here to use {\bf ./startit} are different from using a
+production system where the administrator starts Bacula by entering {\bf
+./bacula start}. This difference allows a development version of {\bf Bacula}
+to be run on a computer at the same time that a production system is running.
+The {\bf ./startit} script starts {\bf Bacula} using a different set of
+configuration files, and thus permits avoiding conflicts with any production
+system.
+
+To make additional source changes, exit from the Console program, and in the
+top source directory, stop the daemons by entering:
+
+\begin{verbatim}
+./stopit
+\end{verbatim}
+
+then repeat the process.
+
+\subsection{Debugging}
+\index{Debugging}
+\addcontentsline{toc}{subsubsection}{Debugging}
+
+Probably the first thing to do is to turn on debug output.
+
+A good place to start is with a debug level of 20 as in {\bf ./startit -d20}.
+The startit command starts all the daemons with the same debug level.
+Alternatively, you can start the appropriate daemon with the debug level you
+want. If you really need more info, a debug level of 60 is not bad, and for
+just about everything a level of 200.
+
+\subsection{Using a Debugger}
+\index{Using a Debugger}
+\index{Debugger!Using a}
+\addcontentsline{toc}{subsubsection}{Using a Debugger}
+
+If you have a serious problem such as a segmentation fault, it can usually be
+found quickly using a good multiple thread debugger such as {\bf gdb}. For
+example, suppose you get a segmentation violation in {\bf bacula-dir}. You
+might use the following to find the problem:
+
+\footnotesize
+\begin{verbatim}
+<start the Storage and File daemons>
+cd dird
+gdb ./bacula-dir
+run -f -s -c ./dird.conf
+<it dies with a segmentation fault>
+where
+\end{verbatim}
+\normalsize
+
+The {\bf -f} option is specified on the {\bf run} command to inhibit {\bf
+dird} from going into the background. You may also want to add the {\bf -s}
+option to the run command to disable signals which can potentially interfere
+with the debugging.
+
+As an alternative to using the debugger, each {\bf Bacula} daemon has a built
+in back trace feature when a serious error is encountered. It calls the
+debugger on itself, produces a back trace, and emails the report to the
+developer. For more details on this, please see the chapter in the main Bacula
+manual entitled ``What To Do When Bacula Crashes (Kaboom)''.
+
+\subsection{Memory Leaks}
+\index{Leaks!Memory}
+\index{Memory Leaks}
+\addcontentsline{toc}{subsubsection}{Memory Leaks}
+
+Because Bacula runs routinely and unattended on client and server machines, it
+may run for a long time. As a consequence, from the very beginning, Bacula
+uses SmartAlloc to ensure that there are no memory leaks. To make detection of
+memory leaks effective, all Bacula code that dynamically allocates memory MUST
+have a way to release it. In general when the memory is no longer needed, it
+should be immediately released, but in some cases, the memory will be held
+during the entire time that Bacula is executing. In that case, there MUST be a
+routine that can be called at termination time that releases the memory. In
+this way, we will be able to detect memory leaks. Be sure to immediately
+correct any and all memory leaks that are printed at the termination of the
+daemons.
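+
+A minimal sketch of that convention (the module and routine names here are
+hypothetical):
+
+\footnotesize
+\begin{verbatim}
+static char *my_buf = NULL;        /* held for the life of the daemon */
+
+void init_mymodule()
+{
+   my_buf = (char *)malloc(1000);
+}
+
+/* Called from the daemon's termination code so that leak
+ * detection at exit reports nothing outstanding.
+ */
+void term_mymodule()
+{
+   if (my_buf) {
+      free(my_buf);
+      my_buf = NULL;
+   }
+}
+\end{verbatim}
+\normalsize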
+
+\subsection{Special Files}
+\index{Files!Special}
+\index{Special Files}
+\addcontentsline{toc}{subsubsection}{Special Files}
+
+Kern uses files named 1, 2, ... 9 with any extension as scratch files. Thus
+any files with these names are subject to being rudely deleted at any time.
+
+\subsection{When Implementing Incomplete Code}
+\index{Code!When Implementing Incomplete}
+\index{When Implementing Incomplete Code}
+\addcontentsline{toc}{subsubsection}{When Implementing Incomplete Code}
+
+Please identify all incomplete code with a comment that contains
+
+\begin{verbatim}
+***FIXME***
+\end{verbatim}
+
+where there are three asterisks (*) before and after the word
+FIXME (in capitals) and no intervening spaces. This is important as it allows
+new programmers to easily recognize where things are partially implemented.
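+
+For example, a partially implemented section of code might be marked:
+
+\footnotesize
+\begin{verbatim}
+   /* ***FIXME*** handle the error case here */
+\end{verbatim}
+\normalsize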
+
+\subsection{Bacula Source File Structure}
+\index{Structure!Bacula Source File}
+\index{Bacula Source File Structure}
+\addcontentsline{toc}{subsubsection}{Bacula Source File Structure}
+
+The distribution generally comes as a tar file of the form {\bf
+bacula.x.y.z.tar.gz} where x, y, and z are the version, release, and update
+numbers respectively.
+
+Once you detar this file, you will have a directory structure as follows:
+
+\footnotesize
+\begin{verbatim}
+|
+Tar file:
+|- depkgs
+ |- mtx (autochanger control program + tape drive info)
+ |- sqlite (SQLite database program)
+
+Tar file:
+|- depkgs-win32
+ |- pthreads (Native win32 pthreads library -- dll)
+ |- zlib (Native win32 zlib library)
+ |- wx (wxWidgets source code)
+
+Project bacula:
+|- bacula (main source directory containing configuration
+ | and installation files)
+ |- autoconf (automatic configuration files, not normally used
+ | by users)
+ |- intl (programs used to translate)
+ |- platforms (OS specific installation files)
+ |- redhat (Red Hat installation)
+ |- solaris (Sun installation)
+ |- freebsd (FreeBSD installation)
+ |- irix (Irix installation -- not tested)
+ |- unknown (Default if system not identified)
+ |- po (translations of source strings)
+ |- src (source directory; contains global header files)
+ |- cats (SQL catalog database interface directory)
+ |- console (bacula user agent directory)
+ |- dird (Director daemon)
+ |- filed (Unix File daemon)
+ |- win32 (Win32 files to make bacula-fd be a service)
+ |- findlib (Unix file find library for File daemon)
+ |- gnome-console (GNOME version of console program)
+ |- lib (General Bacula library)
+ |- stored (Storage daemon)
+ |- tconsole (Tcl/tk console program -- not yet working)
+ |- testprogs (test programs -- normally only in Kern's tree)
+ |- tools (Various tool programs)
+ |- win32 (Native Win32 File daemon)
+ |- baculafd (Visual Studio project file)
+ |- compat (compatibility interface library)
+ |- filed (links to src/filed)
+ |- findlib (links to src/findlib)
+ |- lib (links to src/lib)
+ |- console (beginning of native console program)
+ |- wx-console (wxWidget console Win32 specific parts)
+ |- wx-console (wxWidgets console main source program)
+
+Project regress:
+|- regress (Regression scripts)
+ |- bin (temporary directory to hold Bacula installed binaries)
+ |- build (temporary directory to hold Bacula source)
+ |- scripts (scripts and .conf files)
+ |- tests (test scripts)
+ |- tmp (temporary directory for temp files)
+ |- working (temporary working directory for Bacula daemons)
+
+Project docs:
+|- docs (documentation directory)
+ |- developers (Developer's guide)
+ |- home-page (Bacula's home page source)
+ |- manual (html document directory)
+ |- manual-fr (French translation)
+ |- manual-de (German translation)
+ |- techlogs (Technical development notes);
+
+Project rescue:
+|- rescue (Bacula rescue CDROM)
+ |- linux (Linux rescue CDROM)
+ |- cdrom (Linux rescue CDROM code)
+ ...
+ |- solaris (Solaris rescue -- incomplete)
+ |- freebsd (FreeBSD rescue -- incomplete)
+
+Project gui:
+|- gui (Bacula GUI projects)
+ |- bacula-web (Bacula web php management code)
+ |- bimagemgr (Web application for burning CDROMs)
+
+
+\end{verbatim}
+\normalsize
+
+\subsection{Header Files}
+\index{Header Files}
+\index{Files!Header}
+\addcontentsline{toc}{subsubsection}{Header Files}
+
+Please carefully follow the scheme defined below as it permits in general only
+two header file includes per C file, and thus vastly simplifies programming.
+With a large complex project like Bacula, it isn't always easy to ensure that
+the right headers are invoked in the right order (there are a few kludges to
+make this happen -- i.e. in a few include files because of the chicken and egg
+problem, certain references to typedefs had to be replaced with {\bf void}).
+
+Every file should include {\bf bacula.h}. It pulls in just about everything,
+with very few exceptions. If you have system dependent ifdefing, please do it
+in {\bf baconfig.h}. The version number and date are kept in {\bf version.h}.
+
+Each of the subdirectories (console, cats, dird, filed, findlib, lib, stored,
+...) contains a single directory dependent include file generally the name of
+the directory, which should be included just after the include of {\bf
+bacula.h}. This file (for example, for the dird directory, it is {\bf dird.h})
+contains either definitions of things generally needed in this directory, or
+it includes the appropriate header files. It always includes {\bf protos.h}.
+See below.
+
+Each subdirectory contains a header file named {\bf protos.h}, which contains
+the prototypes for subroutines exported by files in that directory. {\bf
+protos.h} is always included by the main directory dependent include file.
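+
+As a minimal sketch, the top of a C file in the dird directory would
+therefore normally begin:
+
+\footnotesize
+\begin{verbatim}
+#include "bacula.h"   /* pulls in just about everything */
+#include "dird.h"     /* directory dependent include; includes protos.h */
+\end{verbatim}
+\normalsize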
+
+\subsection{Programming Standards}
+\index{Standards!Programming}
+\index{Programming Standards}
+\addcontentsline{toc}{subsubsection}{Programming Standards}
+
+For the most part, all code should be written in C unless there is a burning
+reason to use C++, and then only the simplest C++ constructs will be used.
+Note, Bacula is slowly evolving to use more and more C++.
+
+Code should have some documentation -- not a lot, but enough so that I can
+understand it. Look at the current code, and you will see that I document more
+than most, but am definitely not a fanatic.
+
+We prefer simple linear code where possible. Gotos are strongly discouraged
+except for handling an error to either bail out or to retry some code, and
+such use of gotos can vastly simplify the program.
+
+Remember this is a C program that is migrating to a {\bf tiny} subset of C++,
+so be conservative in your use of C++ features.
+
+\subsection{Do Not Use}
+\index{Use!Do Not}
+\index{Do Not Use}
+\addcontentsline{toc}{subsubsection}{Do Not Use}
+
+\begin{itemize}
+ \item STL -- it is totally incomprehensible.
+\end{itemize}
+
+\subsection{Avoid if Possible}
+\index{Possible!Avoid if}
+\index{Avoid if Possible}
+\addcontentsline{toc}{subsubsection}{Avoid if Possible}
+
+\begin{itemize}
+\item Using {\bf void *} because this generally means that one must
+ use casting, and in C++ casting is rather ugly. It is OK to use
+ void * to pass structure address where the structure is not known
+ to the routines accepting the packet (typically callback routines).
+ However, declaring "void *buf" is a bad idea. Please use the
+ correct types whenever possible.
+
+\item Using undefined storage specifications such as (short, int, long,
+ long long, size\_t ...). The problem with all of these is that the number of bytes
+ they allocate depends on the compiler and the system. Instead use
+ Bacula's types (int8\_t, uint8\_t, int32\_t, uint32\_t, int64\_t, and
+ uint64\_t). This guarantees that the variables are given exactly the
+ size you want (see the sketch after this list). Please try if at all
+ possible to avoid using size\_t, ssize\_t, and the like. They are very
+ system dependent. However, some system routines may need them, so their
+ use is often unavoidable.
+
+\item Returning a malloc'ed buffer from a subroutine -- someone will forget
+ to release it.
+
+\item Heap allocation (malloc) unless needed -- it is expensive. Use
+ POOL\_MEM instead.
+
+\item Templates -- they can create portability problems.
+
+\item Fancy or tricky C or C++ code, unless you give a good explanation of
+ why you used it.
+
+\item Too much inheritance -- it can complicate the code, and make reading it
+ difficult (unless you are in love with colons)
+
+\end{itemize}
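+
+A brief sketch of the point about sized types (the variable names are
+illustrative):
+
+\footnotesize
+\begin{verbatim}
+   int32_t  num_files;      /* always 32 bits on every compiler/system */
+   uint64_t bytes_written;  /* always 64 bits */
+   long     count;          /* avoid: may be 32 or 64 bits */
+\end{verbatim}
+\normalsize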
+
+\subsection{Do Use Whenever Possible}
+\index{Possible!Do Use Whenever}
+\index{Do Use Whenever Possible}
+\addcontentsline{toc}{subsubsection}{Do Use Whenever Possible}
+
+\begin{itemize}
+\item Locking and unlocking within a single subroutine.
+
+\item A single point of exit from all subroutines. A goto is
+ perfectly OK to use to get out early, but only to a label
+ named bail\_out, and possibly an ok\_out. See current code
+ examples and the sketch after this list.
+
+\item Malloc and free within a single subroutine.
+
+\item Comments and global explanations on what your code or algorithm does.
+
+\end{itemize}
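+
+A minimal sketch combining several of these points -- locking, allocation,
+and a single bail\_out exit within one subroutine (the do\_step helpers and
+my\_mutex are hypothetical):
+
+\footnotesize
+\begin{verbatim}
+static bool update_thing(JCR *jcr)
+{
+   bool ok = false;
+   char *buf = (char *)malloc(1024);   /* malloc ... */
+   P(my_mutex);                        /* ... and lock in this routine */
+   if (!do_step_one(jcr, buf)) {
+      goto bail_out;
+   }
+   if (!do_step_two(jcr, buf)) {
+      goto bail_out;
+   }
+   ok = true;
+bail_out:
+   V(my_mutex);                        /* single exit: unlock ... */
+   free(buf);                          /* ... and free happen exactly once */
+   return ok;
+}
+\end{verbatim}
+\normalsize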
+
+\subsection{Indenting Standards}
+\index{Standards!Indenting}
+\index{Indenting Standards}
+\addcontentsline{toc}{subsubsection}{Indenting Standards}
+
+We find it very hard to read code indented 8 columns at a time.
+Even 4 at a time uses a lot of space, so we have adopted indenting
+3 spaces at every level. Note, indentation is the visual appearance of the
+source on the page, while tabbing is replacing a series of up to 8 spaces with
+a tab character.
+
+The closest set of parameters for the Linux {\bf indent} program that will
+produce reasonably indented code are:
+
+\footnotesize
+\begin{verbatim}
+-nbad -bap -bbo -nbc -br -brs -c36 -cd36 -ncdb -ce -ci3 -cli0
+-cp36 -d0 -di1 -ndj -nfc1 -nfca -hnl -i3 -ip0 -l85 -lp -npcs
+-nprs -npsl -saf -sai -saw -nsob -nss -nbc -ncs -nbfda
+\end{verbatim}
+\normalsize
+
+You can put the above in your .indent.pro file, and then just invoke indent on
+your file. However, be warned. This does not produce perfect indenting, and it
+will mess up C++ class statements pretty badly.
+
+Braces are required in all if statements (missing in some very old code). To
+avoid generating too many lines, the first brace appears on the first line
+(e.g. of an if), and the closing brace is on a line by itself. E.g.
+
+\footnotesize
+\begin{verbatim}
+ if (abc) {
+ some_code;
+ }
+\end{verbatim}
+\normalsize
+
+Just follow the convention in the code. For example, we prefer non-indented cases.
+
+\footnotesize
+\begin{verbatim}
+ switch (code) {
+ case 'A':
+ do something
+ break;
+ case 'B':
+ again();
+ break;
+ default:
+ break;
+ }
+\end{verbatim}
+\normalsize
+
+Avoid using // style comments except for temporary code or turning off debug
+code. Standard C comments are preferred (this also keeps the code closer to
+C).
+
+Attempt to keep all lines less than 85 characters long so that the whole line
+of code is readable at one time. This is not a rigid requirement.
+
+Always put a brief description at the top of any new file you create, describing
+what it does and including your name and the date it was first written. Please
+don't forget any Copyrights and acknowledgments if it isn't 100\% your code.
+Also, include the Bacula copyright notice that is in {\bf src/c}.
+
+In general you should have two includes at the top of the file: {\bf bacula.h}
+and an include for the particular directory the code is in. Occasionally
+additional includes are needed, but this should be rare.
+
+In general (except for self-contained packages), prototypes should all be put
+in {\bf protos.h} in each directory.
+
+Always put space around assignment and comparison operators.
+
+\footnotesize
+\begin{verbatim}
+ a = 1;
+ if (b >= 2) {
+ cleanup();
+ }
+\end{verbatim}
+\normalsize
+
+but you can compress things in a {\bf for} statement:
+
+\footnotesize
+\begin{verbatim}
+ for (i=0; i < del.num_ids; i++) {
+ ...
+\end{verbatim}
+\normalsize
+
+Don't overuse the inline if (?:). A full {\bf if} is preferred, except in a
+print statement, e.g.:
+
+\footnotesize
+\begin{verbatim}
+ if (ua->verbose && del.num_del != 0) {
+ bsendmsg(ua, _("Pruned %d %s on Volume %s from catalog.\n"), del.num_del,
+ del.num_del == 1 ? "Job" : "Jobs", mr->VolumeName);
+ }
+\end{verbatim}
+\normalsize
+
+Leave a certain amount of debug code (Dmsg) in code you submit, so that future
+problems can be identified. This is particularly true for complicated code
+likely to break. However, try to keep the debug code to a minimum to avoid
+bloating the program and above all to keep the code readable.
+
+Please keep the same style in all new code you develop. If you include code
+previously written, you have the option of leaving it with the old indenting
+or re-indenting it. If the old code is indented with 8 spaces, then please
+re-indent it to Bacula standards.
+
+If you are using {\bf vim}, simply set your tabstop to 8 and your shiftwidth
+to 3.
+
+\subsection{Tabbing}
+\index{Tabbing}
+\addcontentsline{toc}{subsubsection}{Tabbing}
+
+Tabbing (inserting the tab character in place of spaces) is as normal on all
+Unix systems -- a tab expands to spaces up to the next column multiple of 8.
+My editor converts strings of spaces to tabs automatically -- this results in
+significant compression of the files. Thus, you can remove tabs by replacing
+them with spaces if you wish. Please don't confuse tabbing (use of tab
+characters) with indenting (visual alignment of the code).
+
+\subsection{Don'ts}
+\index{Don'ts}
+\addcontentsline{toc}{subsubsection}{Don'ts}
+
+Please don't use:
+
+\footnotesize
+\begin{verbatim}
+strcpy()
+strcat()
+strncpy()
+strncat()
+sprintf()
+snprintf()
+\end{verbatim}
+\normalsize
+
+They are system dependent and unsafe. These should be replaced by the Bacula
+safe equivalents:
+
+\footnotesize
+\begin{verbatim}
+char *bstrncpy(char *dest, char *source, int dest_size);
+char *bstrncat(char *dest, char *source, int dest_size);
+int bsnprintf(char *buf, int32_t buf_len, const char *fmt, ...);
+int bvsnprintf(char *str, int32_t size, const char *format, va_list ap);
+\end{verbatim}
+\normalsize
+
+See src/lib/bsys.c for more details on these routines.
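+
+For example, a minimal usage sketch (the variable names are illustrative):
+
+\footnotesize
+\begin{verbatim}
+   char name[128];
+   /* copies at most sizeof(name)-1 bytes and always NUL terminates dest */
+   bstrncpy(name, fname, sizeof(name));
+\end{verbatim}
+\normalsize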
+
+Don't use the {\bf \%lld} or the {\bf \%q} printf format editing types to edit
+64 bit integers -- they are not portable. Instead, use {\bf \%s} with {\bf
+edit\_uint64()}. For example:
+
+\footnotesize
+\begin{verbatim}
+ char buf[100];
+ uint64_t num = something;
+ char ed1[50];
+ bsnprintf(buf, sizeof(buf), "Num=%s\n", edit_uint64(num, ed1));
+\end{verbatim}
+\normalsize
+
+Note: {\bf \%lld} is now permitted in Bacula code -- we have our
+own printf routines which handle it correctly. The edit\_uint64() subroutine
+can still be used if you wish, but over time, most of that old style will
+be removed.
+
+The edit buffer {\bf ed1} must be at least 27 bytes long to avoid overflow.
+See src/lib/edit.c for more details. If you look at the code, don't start
+screaming that I use {\bf lld}. I actually use a subtle trick taught to me by
+John Walker. The {\bf lld} that appears in the editing routine is actually
+{\bf \#define}d to what is needed on your OS (usually ``lld'' or ``q'') and
+is defined in autoconf/configure.in for each OS. C string concatenation causes
+the appropriate string to be concatenated to the ``\%''.
+
+Also please don't use the STL or Templates or any complicated C++ code.
+
+\subsection{Message Classes}
+\index{Classes!Message}
+\index{Message Classes}
+\addcontentsline{toc}{subsubsection}{Message Classes}
+
+Currently, there are five classes of messages: Debug, Error, Job, Memory,
+and Queued.
+
+\subsection{Debug Messages}
+\index{Messages!Debug}
+\index{Debug Messages}
+\addcontentsline{toc}{subsubsection}{Debug Messages}
+
+Debug messages are designed to be turned on at a specified debug level and are
+always sent to STDOUT. They are designed to be used only in the development
+debug process. They are coded as:
+
+DmsgN(level, message, arg1, ...) where the N is a number indicating how many
+arguments are to be substituted into the message (i.e. it is a count of the
+number of arguments you have in your message -- generally the number of percent
+signs (\%)). {\bf level} is the debug level at which you wish the message to
+be printed. message is the debug message to be printed, and arg1, ... are the
+arguments to be substituted. Since not all compilers support \#defines with
+varargs, you must explicitly specify how many arguments you have.
+
+When the debug message is printed, it will automatically be prefixed by the
+name of the daemon which is running, the filename where the Dmsg is, and the
+line number within the file.
+
+Some actual examples are:
+
+\footnotesize
+\begin{verbatim}
+Dmsg2(20, "MD5len=%d MD5=%s\n", strlen(buf), buf);
+
+Dmsg1(9, "Created client %s record\n", client->hdr.name);
+\end{verbatim}
+\normalsize
+
+\subsection{Error Messages}
+\index{Messages!Error}
+\index{Error Messages}
+\addcontentsline{toc}{subsubsection}{Error Messages}
+
+Error messages are messages that are related to the daemon as a whole rather
+than a particular job. For example, an out of memory condition may generate an
+error message. They should be very rarely needed. In general, you should be
+using Job and Job Queued messages (Jmsg and Qmsg). They are coded as:
+
+EmsgN(error-code, level, message, arg1, ...) As with debug messages, you must
+explicitly code the number of arguments to be substituted in the message. The error-code
+indicates the severity or class of error, and it may be one of the following:
+
+\addcontentsline{lot}{table}{Message Error Code Classes}
+\begin{longtable}{lp{3in}}
+{{\bf M\_ABORT} } & {Causes the daemon to immediately abort. This should be
+used only in extreme cases. It attempts to produce a traceback. } \\
+{{\bf M\_ERROR\_TERM} } & {Causes the daemon to immediately terminate. This
+should be used only in extreme cases. It does not produce a traceback. } \\
+{{\bf M\_FATAL} } & {Causes the daemon to terminate the current job, but the
+daemon keeps running. } \\
+{{\bf M\_ERROR} } & {Reports the error. The daemon and the job continue
+running. } \\
+{{\bf M\_WARNING} } & {Reports a warning message. The daemon and the job
+continue running. } \\
+{{\bf M\_INFO} } & {Reports an informational message.}
+
+\end{longtable}
+
+There are other error message classes, but they are in a state of being
+redesigned or deprecated, so please do not use them. Some actual examples are:
+
+
+\footnotesize
+\begin{verbatim}
+Emsg1(M_ABORT, 0, "Cannot create message thread: %s\n", strerror(status));
+
+Emsg3(M_WARNING, 0, "Connect to File daemon %s at %s:%d failed. Retrying ...\n",
+      client->hdr.name, client->address, client->port);
+
+Emsg3(M_FATAL, 0, "bdird<filed: bad response from Filed to %s command: %d %s\n",
+      cmd, n, strerror(errno));
+\end{verbatim}
+\normalsize
+
+\subsection{Job Messages}
+\index{Job Messages}
+\index{Messages!Job}
+\addcontentsline{toc}{subsubsection}{Job Messages}
+
+Job messages are messages that pertain to a particular job such as a file that
+could not be saved, or the number of files and bytes that were saved. They
+are coded as:
+\begin{verbatim}
+Jmsg(jcr, M_FATAL, 0, "Text of message");
+\end{verbatim}
+A Jmsg with M\_FATAL will fail the job. The Jmsg() takes varargs so it can
+have any number of arguments to be substituted in a printf-like format.
+Output from the Jmsg() will go to the Job report.
+If the Jmsg is followed with a number such as Jmsg1(...), the number
+indicates the number of arguments to be substituted (varargs is not
+standard for \#defines), and what is more important is that the file and
+line number will be prefixed to the message. This permits a sort of debugging
+from the user's output.
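+
+For example (illustrative only):
+
+\begin{verbatim}
+Jmsg1(jcr, M_WARNING, 0, "Could not open file %s\n", fname);
+\end{verbatim}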
+
+\subsection{Queued Job Messages}
+\index{Queued Job Messages}
+\index{Messages!Queued Job}
+\addcontentsline{toc}{subsubsection}{Queued Job Messages}
+Queued Job messages are similar to Jmsg()s except that the message is
+Queued rather than immediately dispatched. This is necessary within the
+network subroutines and in the message editing routines. This is to prevent
+recursive loops, and to ensure that messages can be delivered even in the
+event of a network error.
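+
+An illustrative sketch, assuming the same calling convention as Jmsg():
+
+\begin{verbatim}
+Qmsg1(jcr, M_INFO, 0, "Could not send to %s; message queued.\n", who);
+\end{verbatim}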
+
+
+\subsection{Memory Messages}
+\index{Messages!Memory}
+\index{Memory Messages}
+\addcontentsline{toc}{subsubsection}{Memory Messages}
+
+Memory messages are messages that are edited into a memory buffer. Generally
+they are used in low level routines such as the low level device file dev.c in
+the Storage daemon or in the low level Catalog routines. These routines do not
+generally have access to the Job Control Record and so they return error
+messages reformatted in a memory buffer. Mmsg() is the way to do this.
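+
+A minimal sketch of that pattern (the device name variable is illustrative,
+and the pool memory helpers are assumed from src/lib/mem\_pool.c):
+
+\begin{verbatim}
+POOLMEM *errmsg = get_pool_memory(PM_MESSAGE);
+Mmsg(errmsg, "Could not open device %s: ERR=%s\n", dev_name, strerror(errno));
+/* ... return errmsg to the caller, which must free_pool_memory() it ... */
+\end{verbatim}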
+
+\subsection{Bugs Database}
+\index{Database!Bugs}
+\index{Bugs Database}
+\addcontentsline{toc}{subsubsection}{Bugs Database}
+We have a bugs database which is at:
+\elink{http://bugs.bacula.org}{http://bugs.bacula.org}, and as
+a developer you will need to respond to bugs, perhaps bugs in general
+if you have time, otherwise just bugs that correspond to code that
+you wrote.
+
+If you need to answer bugs, please be sure to ask the Project Manager
+(currently Kern) to give you Developer access to the bugs database. This
+allows you to modify statuses and close bugs.
+
+First, if you want to take over a bug, rather than just make a
+note, you should assign the bug to yourself. This helps other developers
+know that you are the principal person to deal with the bug. You can do so
+by going into the bug and clicking on the {\bf Update Issue} button. Then
+you simply go to the {\bf Assigned To} box and select your name from the
+drop down box. To actually update it you must click on the {\bf Update
+Information} button a bit further down on the screen, but if you have other
+things to do, such as add a Note, you might wait before clicking it.
+
+Generally, we set the {\bf Status} field to either acknowledged, confirmed,
+or feedback when we first start working on the bug. Feedback is set when
+we expect that the user should give us more information.
+
+Normally, once you are reasonably sure that the bug is fixed, and a patch
+is made and attached to the bug report, and/or in the SVN, you can close
+the bug. If you want the user to test the patch, then leave the bug open,
+otherwise close it and set {\bf Resolution} to {\bf Fixed}. We generally
+close bug reports rather quickly, even without confirmation, especially if
+we have run tests and can see that for us the problem is fixed. However,
+in doing so, it avoids misunderstandings if, while you are closing the bug,
+you leave a note that says something to the following effect:
+"We are closing this bug because ... If for some reason it does not fix
+your problem, please feel free to reopen it, or to open a new bug report
+describing the problem."
+
+We do not recommend that you attempt to edit any of the bug notes that have
+been submitted, nor to delete them or make them private. In fact, if
+someone accidentally makes a bug note private, you should ask the reason
+and if at all possible (with his agreement) make the bug note public.
+
+If the user has not properly filled in most of the important fields
+(platform, OS, Product Version, ...) please do not hesitate to politely ask
+him. Also, if the bug report is a request for a new feature, please
+politely send the user to the Feature Request menu item on www.bacula.org.
+The same applies to a support request (we answer only bugs): you might give
+the user a tip, but please politely refer him to the manual and the
+Getting Support page of www.bacula.org.
+++ /dev/null
-%%
-%%
-
-\chapter{Bacula Developer Notes}
-\label{_ChapterStart10}
-\index{Bacula Developer Notes}
-\index{Notes!Bacula Developer}
-\addcontentsline{toc}{section}{Bacula Developer Notes}
-
-This document is intended mostly for developers and describes how you can
-contribute to the Bacula project and the the general framework of making
-Bacula source changes.
-
-\subsection{Contributions}
-\index{Contributions}
-\addcontentsline{toc}{subsubsection}{Contributions}
-
-Contributions to the Bacula project come in many forms: ideas,
-participation in helping people on the bacula-users email list, packaging
-Bacula binaries for the community, helping improve the documentation, and
-submitting code.
-
-Contributions in the form of submissions for inclusion in the project are
-broken into two groups. The first are contributions that are aids and not
-essential to Bacula. In general, these will be scripts or will go into the
-{\bf bacula/examples} directory. For these kinds of non-essential
-contributions there is no obligation to do a copyright assignment as
-described below. However, a copyright assignment would still be
-appreciated.
-
-The second class of contributions are those which will be integrated with
-Bacula and become an essential part (code, scripts, documentation, ...)
-Within this class of contributions, there are two hurdles to surmount. One
-is getting your patch accepted, and two is dealing with copyright issues.
-The following text describes some of the requirements for such code.
-
-\subsection{Patches}
-\index{Patches}
-\addcontentsline{toc}{subsubsection}{Patches}
-
-Subject to the copyright assignment described below, your patches should be
-sent in {\bf git format-patch} format relative to the current contents of the
-master branch of the Source Forge Git repository. Please attach the
-output file or files generated by the {\bf git format-patch} to the email
-rather than include them directory to avoid wrapping of the lines
-in the patch. Please be sure to use the Bacula
-indenting standard (see below) for source code. If you have checked out
-the source with Git, you can get a diff using.
-
-\begin{verbatim}
-git pull
-git format-patch -M
-\end{verbatim}
-
-If you plan on doing significant development work over a period of time,
-after having your first patch reviewed and approved, you will be eligible
-for having developer Git write access so that you can commit your changes
-directly to the Git repository. To do so, you will need a userid on Source
-Forge.
-
-\subsection{Copyrights}
-\index{Copyrights}
-\addcontentsline{toc}{subsubsection}{Copyrights}
-
-To avoid future problems concerning changing licensing or
-copyrights, all code contributions more than a hand full of lines
-must be in the Public Domain or have the copyright transferred to
-the Free Software Foundation Europe e.V. with a Fiduciary License
-Agreement (FLA) as the case for all the current code.
-
-Prior to November 2004, all the code was copyrighted by Kern Sibbald and
-John Walker. After November 2004, the code was copyrighted by Kern
-Sibbald, then on the 15th of November 2006, Kern transferred the copyright
-to the Free Software Foundation Europe e.V. In signing the FLA and
-transferring the copyright, you retain the right to use the code you have
-submitted as you want, and you ensure that Bacula will always remain Free
-and Open Source.
-
-Your name should be clearly indicated as the author of the code, and you
-must be extremely careful not to violate any copyrights or patents or use
-other people's code without acknowledging it. The purpose of this
-requirement is to avoid future copyright, patent, or intellectual property
-problems. Please read the LICENSE agreement in the main Bacula source code
-directory. When you sign the Fiduciary License Agreement (FLA) and send it
-in, you are agreeing to the terms of that LICENSE file.
-
-If you don't understand what we mean by future problems, please
-examine the difficulties Mozilla was having finding
-previous contributors at \elink{
-http://www.mozilla.org/MPL/missing.html}
-{http://www.mozilla.org/MPL/missing.html}. The other important issue is to
-avoid copyright, patent, or intellectual property violations as was
-(May 2003) claimed by SCO against IBM.
-
-Although the copyright will be held by the Free Software
-Foundation Europe e.V., each developer is expected to indicate
-that he wrote and/or modified a particular module (or file) and
-any other sources. The copyright assignment may seem a bit
-unusual, but in reality, it is not. Most large projects require
-this.
-
-If you have any doubts about this, please don't hesitate to ask. The
-objective is to assure the long term survival of the Bacula project.
-
-Items not needing a copyright assignment are: most small changes,
-enhancements, or bug fixes of 5-10 lines of code, which amount to
-less than 20% of any particular file.
-
-\subsection{Copyright Assignment -- Fiduciary License Agreement}
-\index{Copyright Assignment}
-\index{Assignment!Copyright}
-\addcontentsline{toc}{subsubsection}{Copyright Assignment -- Fiduciary License Agreement}
-
-Since this is not a commercial enterprise, and we prefer to believe in
-everyone's good faith, previously developers could assign the copyright by
-explicitly acknowledging that they do so in their first submission. This
-was sufficient if the developer is independent, or an employee of a
-not-for-profit organization or a university. However, in an effort to
-ensure that the Bacula code is really clean, beginning in August 2006, all
-previous and future developers with SVN write access will be asked to submit a
-copyright assignment (or Fiduciary License Agreement -- FLA),
-which means you agree to the LICENSE in the main source
-directory. It also means that you receive back the right to use
-the code that you have submitted.
-
-Any developer who wants to contribute and is employed by a company should
-either list the employer as the owner of the code, or get explicit
-permission from him to sign the copyright assignment. This is because in
-many countries, all work that an employee does whether on company time or
-in the employee's free time is considered to be Intellectual Property of
-the company. Obtaining official approval or an FLA from the company will
-avoid misunderstandings between the employee, the company, and the Bacula
-project. A good number of companies have already followed this procedure.
-
-The Fiduciary License Agreement is posted on the Bacula web site at:
-\elink{http://www.bacula.org/en/FLA-bacula.en.pdf}{http://www.bacula.org/en/FLA-bacula.en.pdf}
-
-The instructions for filling out this agreement are also at:
-\elink{http://www.bacula.org/?page=fsfe}{http://www.bacula.org/?page=fsfe}
-
-It should be filled out, then sent to:
-
-\begin{verbatim}
- Kern Sibbald
- Cotes-de-Montmoiret 9
- 1012 Lausanne
- Switzerland
-\end{verbatim}
-
-Please note that the above address is different from the officially
-registered office mentioned in the document. When you send in such a
-complete document, please notify me: kern at sibbald dot com, and
-please add your email address to the FLA so that I can contact you
-to confirm reception of the signed FLA.
-
-
-\section{The Development Cycle}
-\index{Developement Cycle}
-\index{Cycle!Developement}
-\addcontentsline{toc}{subsubsection}{Development Cycle}
-
-As discussed on the email lists, the number of contributions are
-increasing significantly. We expect this positive trend
-will continue. As a consequence, we have modified how we do
-development, and instead of making a list of all the features that we will
-implement in the next version, each developer signs up for one (maybe
-two) projects at a time, and when they are complete, and the code
-is stable, we will release a new version. The release cycle will probably
-be roughly six months.
-
-The difference is that with a shorter release cycle and fewer released
-feature, we will have more time to review the new code that is being
-contributed, and will be able to devote more time to a smaller number of
-projects (some prior versions had too many new features for us to handle
-correctly).
-
-Future release schedules will be much the same, and the
-number of new features will also be much the same providing that the
-contributions continue to come -- and they show no signs of let up :-)
-
-\index{Feature Requests}
-{\bf Feature Requests:} \\
-In addition, we have "formalizee" the feature requests a bit.
-
-Instead of me maintaining an informal list of everything I run into
-(kernstodo), we now maintain a "formal" list of projects. This
-means that all new feature requests, including those recently discussed on
-the email lists, must be formally submitted and approved.
-
-Formal submission of feature requests will take two forms: \\
-1. non-mandatory, but highly recommended is to discuss proposed new features
-on the mailing list.\\
-2. Formal submission of an Feature Request in a special format. We'll
-give an example of this below, but you can also find it on the web site
-under "Support -\gt{} Feature Requests". Since it takes a bit of time to
-properly fill out a Feature Request form, you probably should check on the
-email list first.
-
-Once the Feature Request is received by the keeper of the projects list, it
-will be sent to the Bacula project manager (Kern), and he will either
-accept it (90% of the time), send it back asking for clarification (10% of
-the time), send it to the email list asking for opinions, or reject it
-(very few cases).
-
-If it is accepted, it will go in the "projects" file (a simple ASCII file)
-maintained in the main Bacula source directory.
-
-{\bf Implementation of Feature Requests:}\\
-Any qualified developer can sign up for a project. The project must have
-an entry in the projects file, and the developer's name will appear in the
-Status field.
-
-{\bf How Feature Requests are accepted:}\\
-Acceptance of Feature Requests depends on several things: \\
-1. feedback from users. If it is negative, the Feature Request will probably not be
-accepted. \\
-2. the difficulty of the project. A project that is so
-difficult that we cannot imagine finding someone to implement probably won't
-be accepted. Obviously if you know how to implement it, don't hesitate
-to put it in your Feature Request \\
- 3. whether or not the Feature Request fits within the current strategy of
-Bacula (for example an Feature Request that requests changing the tape to
-tar format probably would not be accepted, ...).
-
-{\bf How Feature Requests are prioritized:}\\
-Once an Feature Request is accepted, it needs to be implemented. If you
-can find a developer for it, or one signs up for implementing it, then the
-Feature Request becomes top priority (at least for that developer).
-
-Between releases of Bacula, we will generally solicit Feature Request input
-for the next version, and by way of this email, we suggest that you send
-discuss and send in your Feature Requests for the next release. Please
-verify that the Feature Request is not in the current list (attached to this email).
-
-Once users have had several weeks to submit Feature Requests, the keeper of
-the projects list will organize them, and request users to vote on them.
-This will allow fixing prioritizing the Feature Requests. Having a
-priority is one thing, but getting it implement is another thing -- we are
-hoping that the Bacula community will take more responsibility for assuring
-the implementation of accepted Feature Requests.
-
-Feature Request format:
-\begin{verbatim}
-============= Empty Feature Request form ===========
-Item n: One line summary ...
- Date: Date submitted
- Origin: Name and email of originator.
- Status:
-
- What: More detailed explanation ...
-
- Why: Why it is important ...
-
- Notes: Additional notes or features (omit if not used)
-============== End Feature Request form ==============
-\end{verbatim}
-
-\begin{verbatim}
-============= Example Completed Feature Request form ===========
-Item 1: Implement a Migration job type that will move the job
- data from one device to another.
- Origin: Sponsored by Riege Sofware International GmbH. Contact:
- Daniel Holtkamp <holtkamp at riege dot com>
- Date: 28 October 2005
- Status: Partially coded in 1.37 -- much more to do. Assigned to
- Kern.
-
- What: The ability to copy, move, or archive data that is on a
- device to another device is very important.
-
- Why: An ISP might want to backup to disk, but after 30 days
- migrate the data to tape backup and delete it from
- disk. Bacula should be able to handle this
- automatically. It needs to know what was put where,
- and when, and what to migrate -- it is a bit like
- retention periods. Doing so would allow space to be
- freed up for current backups while maintaining older
- data on tape drives.
-
- Notes: Migration could be triggered by:
- Number of Jobs
- Number of Volumes
- Age of Jobs
- Highwater size (keep total size)
- Lowwater mark
-=================================================
-\end{verbatim}
-
-
-\section{Bacula Code Submissions and Projects}
-\index{Submissions and Projects}
-\addcontentsline{toc}{subsection}{Code Submissions and Projects}
-
-Getting code implemented in Bacula works roughly as follows:
-
-\begin{itemize}
-
-\item Kern is the project manager, but prefers not to be a "gate keeper".
- This means that the developers are expected to be self-motivated,
- and once they have experience submit directly to the Git
- repositories. However,
- it is a good idea to have your patches reviewed prior to submitting,
- and it is a bad idea to submit monster patches because no one will
- be able to properly review them. See below for more details on this.
-
-\item There are growing numbers of contributions (very good).
-
-\item Some contributions come in the form of relatively small patches,
- which Kern reviews, integrates, documents, tests, and maintains.
-
-\item All Bacula developers take full
- responsibility for writing the code, posting as patches so that we can
- review it as time permits, integrating it at an appropriate time,
- responding to our requests for tweaking it (name changes, ...),
- document it in the code, document it in the manual (even though
- their mother tongue is not English), test it, develop and commit
- regression scripts, and answer in a timely fashion all bug reports --
- even occasionally accepting additional bugs :-)
-
- This is a sustainable way of going forward with Bacula, and the
- direction that the project will be taking more and more. For
- example, in the past, we have had some very dedicated programmers
- who did major projects. However, some of these
- programmers due to outside obligations (job responsibilities change of
- job, school duties, ...) could not continue to maintain the code. In
- those cases, the code suffers from lack of maintenance, sometimes we
- patch it, sometimes not. In the end, if the code is not maintained, the
- code gets dropped from the project (there are two such contributions
- that are heading in that direction). When ever possible, we would like
- to avoid this, and ensure a continuation of the code and a sharing of
- the development, debugging, documentation, and maintenance
- responsibilities.
-\end{itemize}
-
-\section{Patches for Released Versions}
-\index{Patches for Released Versions}
-\addcontentsline{toc}{subsection}{Patches for Released Versions}
-If you fix a bug in a released version, you should, unless it is
-an absolutely trivial bug, create and release a patch file for the
-bug. The procedure is as follows:
-
-Fix the bug in the released branch and in the develpment master branch.
-
-Make a patch file for the branch and add the branch patch to
-the patches directory in both the branch and the trunk.
-The name should be 2.2.4-xxx.patch where xxx is unique, in this case it can
-be "restore", e.g. 2.2.4-restore.patch. Add to the top of the
-file a brief description and instructions for applying it -- see for example
-2.2.4-poll-mount.patch. The best way to create the patch file is as
-follows:
-
-\begin{verbatim}
- (edit) 2.2.4-restore.patch
- (input description)
- (end edit)
-
- git format-patch -M
- mv 0001-xxx 2.2.4-restore.patch
-\end{verbatim}
-
-check to make sure no extra junk got put into the patch file (i.e.
-it should have the patch for that bug only).
-
-If there is not a bug report on the problem, create one, then add the
-patch to the bug report.
-
-Then upload it to the 2.2.x release of bacula-patches.
-
-So, end the end, the patch file is:
-\begin{itemize}
-\item Attached to the bug report
-
-\item In Branch-2.2/bacula/patches/...
-
-\item In the trunk
-
-\item Loaded on Source Forge bacula-patches 2.2.x release. When
- you add it, click on the check box to send an Email so that all the
- users that are monitoring SF patches get notified.
-\end{itemize}
-
-
-\section{Developing Bacula}
-\index{Developing Bacula}
-\index{Bacula!Developing}
-\addcontentsline{toc}{subsubsection}{Developing Bacula}
-
-Typically the simplest way to develop Bacula is to open one xterm window
-pointing to the source directory you wish to update; a second xterm window at
-the top source directory level, and a third xterm window at the bacula
-directory \lt{}top\gt{}/src/bacula. After making source changes in one of the
-directories, in the top source directory xterm, build the source, and start
-the daemons by entering:
-
-make and
-
-./startit then in the enter:
-
-./console or
-
-./gnome-console to start the Console program. Enter any commands for testing.
-For example: run kernsverify full.
-
-Note, the instructions here to use {\bf ./startit} are different from using a
-production system where the administrator starts Bacula by entering {\bf
-./bacula start}. This difference allows a development version of {\bf Bacula}
-to be run on a computer at the same time that a production system is running.
-The {\bf ./startit} strip starts {\bf Bacula} using a different set of
-configuration files, and thus permits avoiding conflicts with any production
-system.
-
-To make additional source changes, exit from the Console program, and in the
-top source directory, stop the daemons by entering:
-
-./stopit then repeat the process.
-
-\subsection{Debugging}
-\index{Debugging}
-\addcontentsline{toc}{subsubsection}{Debugging}
-
-Probably the first thing to do is to turn on debug output.
-
-A good place to start is with a debug level of 20 as in {\bf ./startit -d20}.
-The startit command starts all the daemons with the same debug level.
-Alternatively, you can start the appropriate daemon with the debug level you
-want. If you really need more info, a debug level of 60 is not bad, and for
-just about everything a level of 200.
-
-\subsection{Using a Debugger}
-\index{Using a Debugger}
-\index{Debugger!Using a}
-\addcontentsline{toc}{subsubsection}{Using a Debugger}
-
-If you have a serious problem such as a segmentation fault, it can usually be
-found quickly using a good multiple thread debugger such as {\bf gdb}. For
-example, suppose you get a segmentation violation in {\bf bacula-dir}. You
-might use the following to find the problem:
-
-\lt{}start the Storage and File daemons\gt{}
-cd dird
-gdb ./bacula-dir
-run -f -s -c ./dird.conf
-\lt{}it dies with a segmentation fault\gt{}
-where
-The {\bf -f} option is specified on the {\bf run} command to inhibit {\bf
-dird} from going into the background. You may also want to add the {\bf -s}
-option to the run command to disable signals which can potentially interfere
-with the debugging.
-
-As an alternative to using the debugger, each {\bf Bacula} daemon has a built
-in back trace feature when a serious error is encountered. It calls the
-debugger on itself, produces a back trace, and emails the report to the
-developer. For more details on this, please see the chapter in the main Bacula
-manual entitled ``What To Do When Bacula Crashes (Kaboom)''.
-
-\subsection{Memory Leaks}
-\index{Leaks!Memory}
-\index{Memory Leaks}
-\addcontentsline{toc}{subsubsection}{Memory Leaks}
-
-Because Bacula runs routinely and unattended on client and server machines, it
-may run for a long time. As a consequence, from the very beginning, Bacula
-uses SmartAlloc to ensure that there are no memory leaks. To make detection of
-memory leaks effective, all Bacula code that dynamically allocates memory MUST
-have a way to release it. In general when the memory is no longer needed, it
-should be immediately released, but in some cases, the memory will be held
-during the entire time that Bacula is executing. In that case, there MUST be a
-routine that can be called at termination time that releases the memory. In
-this way, we will be able to detect memory leaks. Be sure to immediately
-correct any and all memory leaks that are printed at the termination of the
-daemons.
-
-\subsection{Special Files}
-\index{Files!Special}
-\index{Special Files}
-\addcontentsline{toc}{subsubsection}{Special Files}
-
-Kern uses files named 1, 2, ... 9 with any extension as scratch files. Thus
-any files with these names are subject to being rudely deleted at any time.
-
-\subsection{When Implementing Incomplete Code}
-\index{Code!When Implementing Incomplete}
-\index{When Implementing Incomplete Code}
-\addcontentsline{toc}{subsubsection}{When Implementing Incomplete Code}
-
-Please identify all incomplete code with a comment that contains
-
-\begin{verbatim}
-***FIXME***
-\end{verbatim}
-
-where there are three asterisks (*) before and after the word
-FIXME (in capitals) and no intervening spaces. This is important as it allows
-new programmers to easily recognize where things are partially implemented.
-
-\subsection{Bacula Source File Structure}
-\index{Structure!Bacula Source File}
-\index{Bacula Source File Structure}
-\addcontentsline{toc}{subsubsection}{Bacula Source File Structure}
-
-The distribution generally comes as a tar file of the form {\bf
-bacula.x.y.z.tar.gz} where x, y, and z are the version, release, and update
-numbers respectively.
-
-Once you detar this file, you will have a directory structure as follows:
-
-\footnotesize
-\begin{verbatim}
-|
-Tar file:
-|- depkgs
- |- mtx (autochanger control program + tape drive info)
- |- sqlite (SQLite database program)
-
-Tar file:
-|- depkgs-win32
- |- pthreads (Native win32 pthreads library -- dll)
- |- zlib (Native win32 zlib library)
- |- wx (wxWidgets source code)
-
-Project bacula:
-|- bacula (main source directory containing configuration
- | and installation files)
- |- autoconf (automatic configuration files, not normally used
- | by users)
- |- intl (programs used to translate)
- |- platforms (OS specific installation files)
- |- redhat (Red Hat installation)
- |- solaris (Sun installation)
- |- freebsd (FreeBSD installation)
- |- irix (Irix installation -- not tested)
- |- unknown (Default if system not identified)
- |- po (translations of source strings)
- |- src (source directory; contains global header files)
- |- cats (SQL catalog database interface directory)
- |- console (bacula user agent directory)
- |- dird (Director daemon)
- |- filed (Unix File daemon)
- |- win32 (Win32 files to make bacula-fd be a service)
- |- findlib (Unix file find library for File daemon)
- |- gnome-console (GNOME version of console program)
- |- lib (General Bacula library)
- |- stored (Storage daemon)
- |- tconsole (Tcl/tk console program -- not yet working)
- |- testprogs (test programs -- normally only in Kern's tree)
- |- tools (Various tool programs)
- |- win32 (Native Win32 File daemon)
- |- baculafd (Visual Studio project file)
- |- compat (compatibility interface library)
- |- filed (links to src/filed)
- |- findlib (links to src/findlib)
- |- lib (links to src/lib)
- |- console (beginning of native console program)
- |- wx-console (wxWidget console Win32 specific parts)
- |- wx-console (wxWidgets console main source program)
-
-Project regress:
-|- regress (Regression scripts)
- |- bin (temporary directory to hold Bacula installed binaries)
- |- build (temporary directory to hold Bacula source)
- |- scripts (scripts and .conf files)
- |- tests (test scripts)
- |- tmp (temporary directory for temp files)
- |- working (temporary working directory for Bacula daemons)
-
-Project docs:
-|- docs (documentation directory)
- |- developers (Developer's guide)
- |- home-page (Bacula's home page source)
- |- manual (html document directory)
- |- manual-fr (French translation)
- |- manual-de (German translation)
- |- techlogs (Technical development notes)
-
-Project rescue:
-|- rescue (Bacula rescue CDROM)
- |- linux (Linux rescue CDROM)
- |- cdrom (Linux rescue CDROM code)
- ...
- |- solaris (Solaris rescue -- incomplete)
- |- freebsd (FreeBSD rescue -- incomplete)
-
-Project gui:
-|- gui (Bacula GUI projects)
- |- bacula-web (Bacula web php management code)
- |- bimagemgr (Web application for burning CDROMs)
-
-
-\end{verbatim}
-\normalsize
-
-\subsection{Header Files}
-\index{Header Files}
-\index{Files!Header}
-\addcontentsline{toc}{subsubsection}{Header Files}
-
-Please carefully follow the scheme defined below as it permits in general only
-two header file includes per C file, and thus vastly simplifies programming.
-With a large complex project like Bacula, it isn't always easy to ensure that
-the right headers are included in the right order (there are a few kludges to
-make this happen -- i.e. in a few include files, because of the chicken and egg
-problem, certain references to typedefs had to be replaced with {\bf void}).
-
-Every file should include {\bf bacula.h}. It pulls in just about everything,
-with very few exceptions. If you have system dependent ifdefing, please do it
-in {\bf baconfig.h}. The version number and date are kept in {\bf version.h}.
-
-Each of the subdirectories (console, cats, dird, filed, findlib, lib, stored,
-...) contains a single directory dependent include file generally the name of
-the directory, which should be included just after the include of {\bf
-bacula.h}. This file (for example, for the dird directory, it is {\bf dird.h})
-contains either definitions of things generally needed in this directory, or
-it includes the appropriate header files. It always includes {\bf protos.h}.
-See below.
-
-Each subdirectory contains a header file named {\bf protos.h}, which contains
-the prototypes for subroutines exported by files in that directory. {\bf
-protos.h} is always included by the main directory dependent include file.
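-
-Thus a typical source file in the dird directory begins with just:
-
-\footnotesize
-\begin{verbatim}
-#include "bacula.h"   /* pulls in just about everything */
-#include "dird.h"     /* directory dependent include; includes protos.h */
-\end{verbatim}
-\normalsize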
-
-\subsection{Programming Standards}
-\index{Standards!Programming}
-\index{Programming Standards}
-\addcontentsline{toc}{subsubsection}{Programming Standards}
-
-For the most part, all code should be written in C unless there is a burning
-reason to use C++, and then only the simplest C++ constructs will be used.
-Note, Bacula is slowly evolving to use more and more C++.
-
-Code should have some documentation -- not a lot, but enough so that I can
-understand it. Look at the current code, and you will see that I document more
-than most, but am definitely not a fanatic.
-
-We prefer simple linear code where possible. Gotos are strongly discouraged
-except for handling an error to either bail out or to retry some code, and
-such use of gotos can vastly simplify the program.
-
-Remember this is a C program that is migrating to a {\bf tiny} subset of C++,
-so be conservative in your use of C++ features.
-
-\subsection{Do Not Use}
-\index{Use!Do Not}
-\index{Do Not Use}
-\addcontentsline{toc}{subsubsection}{Do Not Use}
-
-\begin{itemize}
- \item STL -- it is totally incomprehensible.
-\end{itemize}
-
-\subsection{Avoid if Possible}
-\index{Possible!Avoid if}
-\index{Avoid if Possible}
-\addcontentsline{toc}{subsubsection}{Avoid if Possible}
-
-\begin{itemize}
-\item Using {\bf void *} because this generally means that one must
-   use casting, and in C++ casting is rather ugly. It is OK to use
-   void * to pass a structure address where the structure is not known
-   to the routines accepting the packet (typically callback routines).
-   However, declaring "void *buf" is a bad idea. Please use the
-   correct types whenever possible.
-
-\item Using undefined storage specifications such as (short, int, long,
-   long long, size\_t ...). The problem with all of these is that the number
-   of bytes they allocate depends on the compiler and the system. Instead use
-   Bacula's types (int8\_t, uint8\_t, int32\_t, uint32\_t, int64\_t, and
-   uint64\_t). This guarantees that the variables are given exactly the
-   size you want. Please try as much as possible to avoid using size\_t,
-   ssize\_t, and the like. They are very system dependent. However, some
-   system routines may need them, so their use is often unavoidable.
-
-\item Returning a malloc'ed buffer from a subroutine -- someone will forget
- to release it.
-
-\item Heap allocation (malloc) unless needed -- it is expensive. Use
- POOL\_MEM instead.
-
-\item Templates -- they can create portability problems.
-
-\item Fancy or tricky C or C++ code, unless you give a good explanation of
- why you used it.
-
-\item Too much inheritance -- it can complicate the code, and make reading it
- difficult (unless you are in love with colons)
-
-\end{itemize}
-
-\subsection{Do Use Whenever Possible}
-\index{Possible!Do Use Whenever}
-\index{Do Use Whenever Possible}
-\addcontentsline{toc}{subsubsection}{Do Use Whenever Possible}
-
-\begin{itemize}
-\item Locking and unlocking within a single subroutine.
-
-\item A single point of exit from all subroutines. A goto is
-   perfectly OK to use to get out early, but only to a label
-   named bail\_out, and possibly an ok\_out. See current code
-   examples and the sketch following this list.
-
-\item Malloc and free within a single subroutine.
-
-\item Comments and global explanations on what your code or algorithm does.
-
-\end{itemize}
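-
-The following is a minimal sketch of these conventions (the names are
-hypothetical, not actual Bacula code): locking and unlocking, malloc and
-free, and a single point of exit, all within one subroutine:
-
-\footnotesize
-\begin{verbatim}
-static pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
-
-static bool do_work(JCR *jcr)
-{
-   bool ok = false;
-   char *buf = (char *)malloc(1024);  /* malloc here ... */
-   P(mutex);                          /* lock here ... */
-   if (!fill_buffer(jcr, buf)) {      /* hypothetical helper */
-      goto bail_out;                  /* early exit, single exit point */
-   }
-   ok = true;
-bail_out:
-   V(mutex);                          /* ... unlock in the same routine */
-   free(buf);                         /* ... free in the same routine */
-   return ok;
-}
-\end{verbatim}
-\normalsize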
-
-\subsection{Indenting Standards}
-\index{Standards!Indenting}
-\index{Indenting Standards}
-\addcontentsline{toc}{subsubsection}{Indenting Standards}
-
-We find it very hard to read code indented 8 columns at a time.
-Even 4 at a time uses a lot of space, so we have adopted indenting
-3 spaces at every level. Note, indenting is the visual appearance of the
-source on the page, while tabbing is replacing a series of up to 8 spaces
-with a tab character.
-
-The closest set of parameters for the Linux {\bf indent} program that will
-produce reasonably indented code are:
-
-\footnotesize
-\begin{verbatim}
--nbad -bap -bbo -nbc -br -brs -c36 -cd36 -ncdb -ce -ci3 -cli0
--cp36 -d0 -di1 -ndj -nfc1 -nfca -hnl -i3 -ip0 -l85 -lp -npcs
--nprs -npsl -saf -sai -saw -nsob -nss -nbc -ncs -nbfda
-\end{verbatim}
-\normalsize
-
-You can put the above in your .indent.pro file, and then just invoke indent on
-your file. However, be warned. This does not produce perfect indenting, and it
-will mess up C++ class statements pretty badly.
-
-Braces are required in all if statements (missing in some very old code). To
-avoid generating too many lines, the first brace appears on the first line
-(e.g. of an if), and the closing brace is on a line by itself. E.g.
-
-\footnotesize
-\begin{verbatim}
- if (abc) {
- some_code;
- }
-\end{verbatim}
-\normalsize
-
-Just follow the convention in the code. For example, we prefer non-indented cases.
-
-\footnotesize
-\begin{verbatim}
- switch (code) {
- case 'A':
- do something
- break;
- case 'B':
- again();
- break;
- default:
- break;
- }
-\end{verbatim}
-\normalsize
-
-Avoid using // style comments except for temporary code or turning off debug
-code. Standard C comments are preferred (this also keeps the code closer to
-C).
-
-Attempt to keep all lines less than 85 characters long so that the whole line
-of code is readable at one time. This is not a rigid requirement.
-
-Always put a brief description at the top of any new file created describing
-what it does and including your name and the date it was first written. Please
-don't forget any Copyrights and acknowledgments if it isn't 100\% your code.
-Also, include the Bacula copyright notice that is in {\bf src/c}.
-
-In general you should have two includes at the top of each file: {\bf
-bacula.h} and the include for the particular directory the code is in.
-Additional includes are sometimes needed, but this should be rare.
-
-In general (except for self-contained packages), prototypes should all be put
-in {\bf protos.h} in each directory.
-
-Always put space around assignment and comparison operators.
-
-\footnotesize
-\begin{verbatim}
- a = 1;
- if (b >= 2) {
- cleanup();
- }
-\end{verbatim}
-\normalsize
-
-but you can compress things in a {\bf for} statement:
-
-\footnotesize
-\begin{verbatim}
- for (i=0; i < del.num_ids; i++) {
- ...
-\end{verbatim}
-\normalsize
-
-Don't overuse the inline if (?:). A full {\bf if} is preferred, except in a
-print statement, e.g.:
-
-\footnotesize
-\begin{verbatim}
- if (ua->verbose && del.num_del != 0) {
- bsendmsg(ua, _("Pruned %d %s on Volume %s from catalog.\n"), del.num_del,
- del.num_del == 1 ? "Job" : "Jobs", mr->VolumeName);
- }
-\end{verbatim}
-\normalsize
-
-Leave a certain amount of debug code (Dmsg) in code you submit, so that future
-problems can be identified. This is particularly true for complicated code
-likely to break. However, try to keep the debug code to a minimum to avoid
-bloating the program and above all to keep the code readable.
-
-Please keep the same style in all new code you develop. If you include code
-previously written, you have the option of leaving it with the old indenting
-or re-indenting it. If the old code is indented with 8 spaces, then please
-re-indent it to Bacula standards.
-
-If you are using {\bf vim}, simply set your tabstop to 8 and your shiftwidth
-to 3.
-
-\subsection{Tabbing}
-\index{Tabbing}
-\addcontentsline{toc}{subsubsection}{Tabbing}
-
-Tabbing (inserting the tab character in place of spaces) follows the normal
-Unix convention: a tab is converted to spaces up to the next column that is
-a multiple of 8. My editor converts strings of spaces to tabs automatically --
-this results in significant compression of the files. Thus, you can remove
-tabs by replacing them with spaces if you wish. Please don't confuse tabbing
-(use of tab characters) with indenting (visual alignment of the code).
-
-\subsection{Don'ts}
-\index{Don'ts}
-\addcontentsline{toc}{subsubsection}{Don'ts}
-
-Please don't use:
-
-\footnotesize
-\begin{verbatim}
-strcpy()
-strcat()
-strncpy()
-strncat()
-sprintf()
-snprintf()
-\end{verbatim}
-\normalsize
-
-They are system dependent and unsafe. These should be replaced by the Bacula
-safe equivalents:
-
-\footnotesize
-\begin{verbatim}
-char *bstrncpy(char *dest, char *source, int dest_size);
-char *bstrncat(char *dest, char *source, int dest_size);
-int bsnprintf(char *buf, int32_t buf_len, const char *fmt, ...);
-int bvsnprintf(char *str, int32_t size, const char *format, va_list ap);
-\end{verbatim}
-\normalsize
-
-See src/lib/bsys.c for more details on these routines.
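-
-For example, to copy into a fixed-size buffer:
-
-\footnotesize
-\begin{verbatim}
-   char name[MAX_NAME_LENGTH];
-   bstrncpy(name, source, sizeof(name));
-\end{verbatim}
-\normalsize
-
-Unlike strncpy(), bstrncpy() guarantees that the result is null-terminated.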
-
-Don't use the {\bf \%lld} or the {\bf \%q} printf format editing types to edit
-64 bit integers -- they are not portable. Instead, use {\bf \%s} with {\bf
-edit\_uint64()}. For example:
-
-\footnotesize
-\begin{verbatim}
- char buf[100];
- uint64_t num = something;
- char ed1[50];
- bsnprintf(buf, sizeof(buf), "Num=%s\n", edit_uint64(num, ed1));
-\end{verbatim}
-\normalsize
-
-Note: {\bf \%lld} is now permitted in Bacula code -- we have our
-own printf routines which handle it correctly. The edit\_uint64() subroutine
-can still be used if you wish, but over time, most of that old style will
-be removed.
-
-The edit buffer {\bf ed1} must be at least 27 bytes long to avoid overflow.
-See src/lib/edit.c for more details. If you look at the code, don't start
-screaming that I use {\bf lld}. I actually use a subtle trick taught to me by
-John Walker. The {\bf lld} that appears in the editing routine is actually
-{\bf \#define}d to what is needed on your OS (usually ``lld'' or ``q'') and
-is defined in autoconf/configure.in for each OS. C string concatenation causes
-the appropriate string to be concatenated to the ``\%''.
-
-Also please don't use the STL or Templates or any complicated C++ code.
-
-\subsection{Message Classes}
-\index{Classes!Message}
-\index{Message Classes}
-\addcontentsline{toc}{subsubsection}{Message Classes}
-
-Currently, there are five classes of messages: Debug, Error, Job, Memory,
-and Queued.
-
-\subsection{Debug Messages}
-\index{Messages!Debug}
-\index{Debug Messages}
-\addcontentsline{toc}{subsubsection}{Debug Messages}
-
-Debug messages are designed to be turned on at a specified debug level and are
-always sent to STDOUT. They are designed to be used only in the development
-debug process. They are coded as:
-
-DmsgN(level, message, arg1, ...) where the N is a number indicating how many
-arguments are to be substituted into the message (i.e. it is a count of the
-number of arguments you have in your message -- generally the number of percent
-signs (\%)). {\bf level} is the debug level at which you wish the message to
-be printed. message is the debug message to be printed, and arg1, ... are the
-arguments to be substituted. Since not all compilers support \#defines with
-varargs, you must explicitly specify how many arguments you have.
-
-When the debug message is printed, it will automatically be prefixed by the
-name of the daemon which is running, the filename where the Dmsg is, and the
-line number within the file.
-
-Some actual examples are:
-
-Dmsg2(20, ``MD5len=\%d MD5=\%s\textbackslash{}n'', strlen(buf), buf);
-
-Dmsg1(9, ``Created client \%s record\textbackslash{}n'', client->hdr.name);
-
-\subsection{Error Messages}
-\index{Messages!Error}
-\index{Error Messages}
-\addcontentsline{toc}{subsubsection}{Error Messages}
-
-Error messages are messages that are related to the daemon as a whole rather
-than a particular job. For example, an out of memory condition may generate an
-error message. They should be very rarely needed. In general, you should be
-using Job and Job Queued messages (Jmsg and Qmsg). They are coded as:
-
-EmsgN(error-code, level, message, arg1, ...) As with debug messages, you must
-explicitly code the number of arguments to be substituted in the message.
-error-code indicates the severity or class of error, and it may be one of the
-following:
-
-\addcontentsline{lot}{table}{Message Error Code Classes}
-\begin{longtable}{lp{3in}}
-{{\bf M\_ABORT} } & {Causes the daemon to immediately abort. This should be
-used only in extreme cases. It attempts to produce a traceback. } \\
-{{\bf M\_ERROR\_TERM} } & {Causes the daemon to immediately terminate. This
-should be used only in extreme cases. It does not produce a traceback. } \\
-{{\bf M\_FATAL} } & {Causes the daemon to terminate the current job, but the
-daemon keeps running } \\
-{{\bf M\_ERROR} } & {Reports the error. The daemon and the job continue
-running } \\
-{{\bf M\_WARNING} } & {Reports a warning message. The daemon and the job
-continue running } \\
-{{\bf M\_INFO} } & {Reports an informational message.}
-
-\end{longtable}
-
-There are other error message classes, but they are in a state of being
-redesigned or deprecated, so please do not use them. Some actual examples are:
-
-
-Emsg1(M\_ABORT, 0, ``Cannot create message thread: \%s\textbackslash{}n'',
-strerror(status));
-
-Emsg3(M\_WARNING, 0, ``Connect to File daemon \%s at \%s:\%d failed. Retrying
-...\textbackslash{}n'', client-\gt{}hdr.name, client-\gt{}address,
-client-\gt{}port);
-
-Emsg3(M\_FATAL, 0, ``bdird\lt{}filed: bad response from Filed to \%s command:
-\%d \%s\textbackslash{}n'', cmd, n, strerror(errno));
-
-\subsection{Job Messages}
-\index{Job Messages}
-\index{Messages!Job}
-\addcontentsline{toc}{subsubsection}{Job Messages}
-
-Job messages are messages that pertain to a particular job, such as a file that
-could not be saved, or the number of files and bytes that were saved. They
-are coded as:
-\begin{verbatim}
-Jmsg(jcr, M_FATAL, 0, "Text of message");
-\end{verbatim}
-A Jmsg with M\_FATAL will fail the job. The Jmsg() takes varargs, so it can
-have any number of arguments to be substituted in a printf-like format.
-Output from the Jmsg() will go to the Job report.
-
-If the Jmsg is followed by a number, such as Jmsg1(...), the number
-indicates the number of arguments to be substituted (varargs is not
-standard for \#defines), and, more importantly, the file and
-line number will be prefixed to the message. This permits a sort of debug
-from the user's output.
-
-\subsection{Queued Job Messages}
-\index{Queued Job Messages}
-\index{Messages!Queued Job}
-\addcontentsline{toc}{subsubsection}{Queued Job Messages}
-Queued Job messages are similar to Jmsg()s except that the message is
-queued rather than immediately dispatched. This is necessary within the
-network subroutines and in the message editing routines. It prevents
-recursive loops and ensures that messages can be delivered even in the
-event of a network error.
-
-
-\subsection{Memory Messages}
-\index{Messages!Memory}
-\index{Memory Messages}
-\addcontentsline{toc}{subsubsection}{Memory Messages}
-
-Memory messages are messages that are edited into a memory buffer. Generally
-they are used in low level routines such as the low level device file dev.c in
-the Storage daemon or in the low level Catalog routines. These routines do not
-generally have access to the Job Control Record and so they return error
-messages formatted in a memory buffer. Mmsg() is the way to do this.
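-
-For example, a low level routine might format its error message with
-something like the following (a sketch; the variable names and message
-text are illustrative):
-
-\footnotesize
-\begin{verbatim}
-   POOL_MEM errmsg;
-   Mmsg(errmsg, "Unable to open device %s: ERR=%s\n",
-        dev_name, strerror(errno));
-\end{verbatim}
-\normalsize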
-
-\subsection{Bugs Database}
-\index{Database!Bugs}
-\index{Bugs Database}
-\addcontentsline{toc}{subsubsection}{Bugs Database}
-We have a bugs database which is at:
-\elink{http://bugs.bacula.org}{http://bugs.bacula.org}, and as
-a developer you will need to respond to bugs, perhaps bugs in general
-if you have time, otherwise just bugs that correspond to code that
-you wrote.
-
-If you need to answer bugs, please be sure to ask the Project Manager
-(currently Kern) to give you Developer access to the bugs database. This
-allows you to modify statuses and close bugs.
-
-First, if you want to take over a bug rather than just make a
-note, you should assign the bug to yourself. This helps other developers
-know that you are the principal person to deal with the bug. You can do so
-by going into the bug and clicking on the {\bf Update Issue} button. Then
-you simply go to the {\bf Assigned To} box and select your name from the
-drop down box. To actually update it you must click on the {\bf Update
-Information} button a bit further down on the screen, but if you have other
-things to do such as add a Note, you might wait before clicking on the {\bf
-Update Information} button.
-
-Generally, we set the {\bf Status} field to either acknowledged, confirmed,
-or feedback when we first start working on the bug. Feedback is set when
-we expect that the user should give us more information.
-
-Normally, once you are reasonably sure that the bug is fixed, and a patch
-is made and attached to the bug report, and/or in the SVN, you can close
-the bug. If you want the user to test the patch, then leave the bug open,
-otherwise close it and set {\bf Resolution} to {\bf Fixed}. We generally
-close bug reports rather quickly, even without confirmation, especially if
-we have run tests and can see that for us the problem is fixed. However,
-in doing so, it avoids misunderstandings if, when closing the bug, you
-leave a note to the following effect: ``We are closing this bug because ...
-If for some reason it does not fix your problem, please feel free to reopen
-it, or to open a new bug report describing the problem.''
-
-We do not recommend that you attempt to edit any of the bug notes that have
-been submitted, nor to delete them or make them private. In fact, if
-someone accidentally makes a bug note private, you should ask the reason
-and if at all possible (with his agreement) make the bug note public.
-
-If the user has not properly filled in most of the important fields
-(platform, OS, Product Version, ...) please do not hesitate to politely ask
-him. Also, if the bug report is a request for a new feature, please
-politely send the user to the Feature Request menu item on www.bacula.org.
-The same applies to a support request (we answer only bugs): you might give
-the user a tip, but please politely refer him to the manual and the
-Getting Support page of www.bacula.org.
--- /dev/null
+\chapter{Bacula Git Usage}
+\label{_GitChapterStart}
+\index{Git}
+\index{Git!Repo}
+\addcontentsline{toc}{section}{Bacula Git Usage}
+
+This chapter is intended to help you use the Git source code
+repositories to obtain, modify, and submit Bacula source code.
+
+
+\section{Bacula Git repositories}
+\index{Git}
+\addcontentsline{toc}{subsection}{Git repositories}
+As of September 2009, the Bacula source code has been split into
+three Git repositories. One is a repository that holds the
+main Bacula source code with directories {\bf bacula}, {\bf gui},
+and {\bf regress}. The second repository contains
+the {\bf docs} directory, and the third repository
+contains the {\bf rescue} directory. All three repositories are
+hosted on Source Forge.
+
+Previously everything was in a single SVN repository.
+We have split the SVN repository into three because Git
+offers significant advantages for ease of managing and integrating
+developers' changes. However, one of the disadvantages of Git is that you
+must work with the full repository, while SVN allows you to check out
+individual directories. If we put everything into a single Git
+repository it would be far bigger than most developers would want
+to check out, so we have separated the docs and rescue into their own
+repositories, and moved only the parts that are most actively
+worked on by the developers (bacula, gui, and regress) to the
+Bacula Git repository.
+
+Bacula developers must now have a certain knowledge of Git.
+
+\section{Git Usage}
+\index{Git Usage}
+\addcontentsline{toc}{subsection}{Git Usage}
+
+Please note that if you are familiar with SVN, Git is similar
+(and better), but there can be a few surprising differences that
+can be very confusing (nothing worse than converting from CVS to SVN).
+
+The main Bacula Git repo contains the subdirectories {\bf bacula}, {\bf gui},
+and {\bf regress}. With Git it is not possible to pull only a
+single directory; because of the hash-based nature of Git, you
+must take all or nothing.
+
+For developers, the most important thing to remember about Git and
+the Source Forge repository is not to "force" a {\bf push} to the
+repository. Doing so can rewrite
+the Git repository history and cause a lot of problems for the
+project.
+
+You can get a full copy of the Source Forge Bacula Git repository with the
+following command:
+
+\begin{verbatim}
+git clone git://bacula.git.sourceforge.net/gitroot/bacula/bacula trunk
+\end{verbatim}
+
+This will put a read-only copy into the directory {\bf trunk}
+in your current directory, and {\bf trunk} will contain
+the subdirectories: {\bf bacula}, {\bf gui}, and {\bf regress}.
+Obviously you can use any name and not just {\bf trunk}. In fact,
+once you have the repository in say {\bf trunk}, you can copy the
+whole directory to another place and have a fully functional
+git repository.
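+
+For example (the destination path is arbitrary):
+
+\begin{verbatim}
+cp -rp trunk /some/other/place
+\end{verbatim}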
+
+If you have write permission to the Source Forge
+repository, you can get a copy of the Git repo with:
+
+\begin{verbatim}
+git clone ssh://<userid>@bacula.git.sourceforge.net/gitroot/bacula/bacula trunk
+\end{verbatim}
+
+where you replace \verb+<userid>+ with your Source Forge login
+userid, and you must have previously uploaded your public ssh key
+to Source Forge.
+
+The above command needs to be done only once. Thereafter, you can:
+
+\begin{verbatim}
+cd trunk
+git pull # refresh my repo with the latest code
+\end{verbatim}
+
+As of August 2009, the size of the repository ({\bf trunk} in the above
+example) will be approximately 55 Megabytes. However, if you build
+from source in this directory and do a lot of updates and regression
+testing, the directory could become several hundred megabytes.
+
+\subsection{Learning Git}
+\index{Learning Git}
+If you want to learn more about Git, we recommend that you visit:\\
+\elink{http://book.git-scm.com/}{http://book.git-scm.com/}.
+
+Some of the differences between Git and SVN are:
+\begin{itemize}
+\item Your main Git directory is a full Git repository to which you can
+ and must commit. In fact, we suggest you commit frequently.
+\item When you commit, the commit goes into your local Git
+ database. You must use another command to write it to the
+ master Source Forge repository (see below).
+\item The local Git database is kept in the directory {\bf .git} at the
+ top level of the directory.
+\item All the important Git configuration information is kept in the
+ file {\bf .git/config} in ASCII format that is easy to manually edit.
+\item When you do a {\bf commit} the changes are put in {\bf .git}
+  rather than in the main Source Forge repository.
+\item You can push your changes to the external repository using
+ the command {\bf git push} providing you have write permission
+ on the repository.
+\item We restrict developers who are just learning Git to read-only
+  access until they feel comfortable with it, at which point they
+  are given write access.
+\item You can download all the current changes in the external repository
+ and merge them into your {\bf master} branch using the command
+ {\bf git pull}.
+\item The command {\bf git add} is used to add a new file to the
+ repository AND to tell Git that you want a file that has changed
+ to be in the next commit. This has lots of advantages, because
+ a {\bf git commit} only commits those files that have been
+ explicitly added. Note with SVN {\bf add} is used only
+ to add new files to the repo.
+\item You can add and commit all modified files in one command
+  using {\bf git commit -a}.
+\item This extra use of {\bf add} allows you to make a number
+  of changes, then add only a few of the files and commit them,
+  then add more files and commit them until you have committed
+  everything. This has the advantage of allowing you to more
+  easily group small changes and do individual commits on them
+  (see the sketch after this list).
+  By keeping commits smaller, and separated into topics, it makes
+  it much easier to later select certain commits for backporting.
+\item If you {\bf git pull} from the main repository and make
+  some changes, and before you do a {\bf git push} someone
+  else pushes changes to the Git repository, your changes will
+  apply to an older version of the repository, and you will probably
+  get an error message such as:
+
+\begin{verbatim}
+ git push
+ To git@github.com:bacula/bacula.git
+ ! [rejected] master -> master (non-fast forward)
+ error: failed to push some refs to 'git@github.com:bacula/bacula.git'
+\end{verbatim}
+
+ which is Git's way of telling you that the main repository has changed
+ and that if you push your changes, they will not be integrated properly.
+ This is very similar to what happens when you do an "svn update" and
+ get merge conflicts.
+ As we have noted above, you should never ask Git to force the push.
+ See below for an explanation of why.
+\item To integrate (merge) your changes properly, you should always do
+ a {\bf git pull} just prior to doing a {\bf git push}.
+\item If Git is unable to merge your changes or finds a conflict it
+ will tell you and you must do conflict resolution, which is much
+ easier in Git than in SVN.
+\item Resolving conflicts is described below in the {\bf github} section.
+\end{itemize}
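+
+As a sketch of that selective add-then-commit workflow (the file names
+and commit comments are purely illustrative):
+
+\begin{verbatim}
+(edit several files)
+git add src/dird/job.c
+git commit -m "Fix Director job restart logic"
+git add src/stored/dev.c src/stored/device.c
+git commit -m "Improve Storage daemon device error handling"
+\end{verbatim}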
+
+\section{Step by Step Modifying Bacula Code}
+Suppose you want to download Bacula source code, build it, make
+a change, then submit your change to the Bacula developers. What
+would you do?
+
+\begin{itemize}
+\item Download the Source code:\\
+\begin{verbatim}
+git clone ssh://<userid>@bacula.git.sourceforge.net/gitroot/bacula/bacula trunk
+\end{verbatim}
+
+\item Configure and Build Bacula:\\
+\begin{verbatim}
+./configure (all-your-normal-options)
+make
+\end{verbatim}
+
+\item Create a branch to work on:
+\begin{verbatim}
+cd trunk/bacula
+git checkout -b bugfix master
+\end{verbatim}
+
+\item Edit, build, Test, ...\\
+\begin{verbatim}
+edit file jcr.h
+make
+test
+\end{verbatim}
+
+\item commit your work:
+\begin{verbatim}
+git commit -am "Short comment on what I did"
+\end{verbatim}
+
+\item Possibly repeat the above two items
+
+\item Switch back to the master branch:\\
+\begin{verbatim}
+git checkout master
+\end{verbatim}
+
+\item Pull the latest changes:\\
+\begin{verbatim}
+git pull
+\end{verbatim}
+
+\item Get back on your bugfix branch:\\
+\begin{verbatim}
+git checkout bugfix
+\end{verbatim}
+
+\item Merge your changes and correct any conflicts:\\
+\begin{verbatim}
+git rebase master bugfix
+\end{verbatim}
+
+\item Fix any conflicts:\\
+You will be notified if there are conflicts. The first
+thing to do is:
+
+\begin{verbatim}
+git diff
+\end{verbatim}
+
+This will produce a diff of only the files having a conflict.
+Fix each file in turn. When it is fixed, the diff for that file
+will go away.
+
+For each file fixed, you must do the same as in SVN: inform Git with:
+
+\begin{verbatim}
+git add (name-of-file-no-longer-in-conflict)
+\end{verbatim}
+
+\item When all files are fixed do:
+\begin{verbatim}
+git rebase --continue
+\end{verbatim}
+
+\item When you are ready to send a patch, do the following:\\
+\begin{verbatim}
+git checkout bugfix
+git format-patch -M master
+\end{verbatim}
+Look at the files produced. They should be named 0001-xxx.patch,
+with one file for each commit you did, numbered sequentially,
+where the xxx is what you put in the commit comment.
+
+\item If the patch files are good, send them by email to the developers
+as attachments.
+
+\end{itemize}
+
+
+
+\subsection{More Details}
+
+Normally, you will work by creating a branch of the master branch of your
+repository, making your modifications, then making sure they are up to date,
+and finally creating format-patch patches or pushing them to the Source Forge
+repo. Assuming you call the Bacula repository {\bf trunk}, you might use the
+following commands:
+
+\begin{verbatim}
+cd trunk
+git checkout master
+git pull
+git checkout -b newbranch master
+(edit, ...)
+git add <file-edited>
+git commit -m "<comment about commit>"
+...
+\end{verbatim}
+
+When you have completed working on your branch, you will do:
+
+\begin{verbatim}
+cd trunk
+git checkout newbranch # ensure I am on my branch
+git pull # get latest source code
+git rebase master # merge my code
+\end{verbatim}
+
+If you have completed your edits before anyone has modified the repository,
+the {\bf git rebase master} will report that there was nothing to do. Otherwise,
+it will merge the changes that were made in the repository before your changes.
+If there are any conflicts, Git will tell you. Typically resolving conflicts with
+Git is relatively easy. You simply make a diff:
+
+\begin{verbatim}
+git diff
+\end{verbatim}
+
+Then edit each file that was listed in the {\bf git diff} to remove the
+conflict, which will be indicated by lines of:
+
+\begin{verbatim}
+<<<<<<< HEAD
+text
+=======
+other text
+>>>>>>> your-commit
+\end{verbatim}
+
+where {\bf text} is what is in the Bacula repository, and {\bf other text}
+is what you have changed.
+
+Once you have eliminated the conflict, the {\bf git diff} will show nothing,
+and you must do a:
+
+\begin{verbatim}
+git add <file-with-conflicts-fixed>
+\end{verbatim}
+
+Once you have fixed all the files with conflicts in the above manner, you enter:
+
+\begin{verbatim}
+git rebase --continue
+\end{verbatim}
+
+and your rebase will be complete.
+
+If for some reason, before doing the --continue, you want to abort the rebase and return to what you had, you enter:
+
+\begin{verbatim}
+git rebase --abort
+\end{verbatim}
+
+Finally, to make a set of patch files, enter:
+
+\begin{verbatim}
+git format-patch -M master
+\end{verbatim}
+
+When you see your changes have been integrated and pushed to the
+main repo, you can delete your branch with:
+
+\begin{verbatim}
+git checkout master
+git branch -D newbranch
+\end{verbatim}
+
+
+\section{Forcing Changes}
+If you want to understand why it is not a good idea to force a
+push to the repository, look at the following picture:
+
+\includegraphics[width=0.85\textwidth]{\idir git-edit-commit.eps}
+
+The above graphic has three lines of circles. Each circle represents
+a commit, and time runs from the left to the right. The top line
+shows the repository just before you are going to do a push. Note that the
+point at which you pulled is the circle on the left; your changes are
+represented by the circle labeled {\bf Your mods}. It is shown below
+to indicate that the changes are only in your local repository. Finally,
+there are pushes A and B that came after the time at which you pulled.
+
+If you were to force your changes into the repository, Git would place them
+immediately after the point at which you pulled them, so they would
+go before the pushes A and B. However, doing so would rewrite the history
+of the repository and make it very difficult for other users to synchronize
+since they would have to somehow wedge their changes at some point before the
+current HEAD of the repository. This situation is shown by the second line of
+pushes.
+
+What you really want to do is to put your changes after Push B (the current HEAD).
+This is shown in the third line of pushes. The best way to accomplish this is to
+work in a branch, pull the repository so you have your master equal to HEAD (in first
+line), then to rebase your branch on the current master and then commit it. The
+exact commands to accomplish this are shown in the next couple of sections.
--- /dev/null
+%%
+%%
+
+\chapter*{Implementing a GUI Interface}
+\label{_ChapterStart}
+\index[general]{Interface!Implementing a Bacula GUI }
+\index[general]{Implementing a Bacula GUI Interface }
+\addcontentsline{toc}{section}{Implementing a Bacula GUI Interface}
+
+\section{General}
+\index[general]{General }
+\addcontentsline{toc}{subsection}{General}
+
+This document is intended mostly for developers who wish to develop a new GUI
+interface to {\bf Bacula}.
+
+\subsection{Minimal Code in Console Program}
+\index[general]{Program!Minimal Code in Console }
+\index[general]{Minimal Code in Console Program }
+\addcontentsline{toc}{subsubsection}{Minimal Code in Console Program}
+
+Until now, I have kept all the Catalog code in the Director (with the
+exception of dbcheck and bscan). This is because at some point I would like to
+add user level security and access. If we have code spread everywhere such as
+in a GUI this will be more difficult. The other advantage is that any code you
+add to the Director is automatically available to both the tty console program
+and the WX program. The major disadvantage is it increases the size of the
+code -- however, compared to Networker the Bacula Director is really tiny.
+
+\subsection{GUI Interface is Difficult}
+\index[general]{GUI Interface is Difficult }
+\index[general]{Difficult!GUI Interface is }
+\addcontentsline{toc}{subsubsection}{GUI Interface is Difficult}
+
+Interfacing to an interactive program such as Bacula can be very difficult
+because the interfacing program must interpret all the prompts that may come.
+This can be next to impossible. There are a number of ways that Bacula is
+designed to facilitate this:
+
+\begin{itemize}
+\item The Bacula network protocol is packet based, and thus pieces of
+information sent can be ASCII or binary.
+\item The packet interface permits knowing where the end of a list is.
+\item The packet interface permits special ``signals'' to be passed rather
+than data.
+\item The Director has a number of commands that are non-interactive. They
+all begin with a period, and provide things such as the list of all Jobs,
+list of all Clients, list of all Pools, list of all Storage, ... Thus the GUI
+interface can get to virtually all information that the Director has in a
+deterministic way. See \lt{}bacula-source\gt{}/src/dird/ua\_dotcmds.c for
+more details on this.
+\item Most console commands allow all the arguments to be specified on the
+command line: e.g. {\bf run job=NightlyBackup level=Full}
+\end{itemize}
+
+One of the first things to overcome is to be able to establish a conversation
+with the Director. Although you can write all your own code, it is probably
+easier to use the Bacula subroutines. The following code is used by the
+Console program to begin a conversation.
+
+\footnotesize
+\begin{verbatim}
+static BSOCK *UA_sock = NULL;
+static JCR jcr;
+...
+ read-your-config-getting-address-and-password;
+ UA_sock = bnet_connect(NULL, 5, 15, "Director daemon", dir->address,
+ NULL, dir->DIRport, 0);
+ if (UA_sock == NULL) {
+ terminate_console(0);
+ return 1;
+ }
+ jcr.dir_bsock = UA_sock;
+ if (!authenticate_director(&jcr, dir)) {
+ fprintf(stderr, "ERR=%s", UA_sock->msg);
+ terminate_console(0);
+ return 1;
+ }
+ read_and_process_input(stdin, UA_sock);
+ if (UA_sock) {
+ bnet_sig(UA_sock, BNET_TERMINATE); /* send EOF */
+ bnet_close(UA_sock);
+ }
+ exit(0);
+\end{verbatim}
+\normalsize
+
+Then the read\_and\_process\_input routine looks like the following:
+
+\footnotesize
+\begin{verbatim}
+ get-input-to-send-to-the-Director;
+ bnet_fsend(UA_sock, "%s", input);
+ stat = bnet_recv(UA_sock);
+ process-output-from-the-Director;
+\end{verbatim}
+\normalsize
+
+For a GUI program things will be a bit more complicated. Basically in the very
+inner loop, you will need to check and see if any output is available on the
+UA\_sock (a sketch follows). For an example, please take a look at the WX GUI
+interface code in: \lt{}bacula-source\gt{}/src/wx-console
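+
+The following is a minimal sketch of such an inner loop. It assumes the
+Bacula bnet\_wait\_data() routine (see src/lib/bnet.c) to poll for pending
+output; update\_gui\_window() and process\_gui\_events() are hypothetical
+stand-ins for whatever your GUI toolkit provides:
+
+\footnotesize
+\begin{verbatim}
+   while (gui_is_running) {
+      /* Poll with a 1 second timeout for output from the Director */
+      if (bnet_wait_data(UA_sock, 1) > 0) {
+         if (bnet_recv(UA_sock) > 0) {
+            update_gui_window(UA_sock->msg);  /* display Director output */
+         }
+      }
+      process_gui_events();   /* keep the GUI responsive */
+   }
+\end{verbatim}
+\normalsize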
+
+\section{Bvfs API}
+\label{sec:bvfs}
+
+To help developers of restore GUI interfaces, we have added new \textsl{dot
+ commands} that permit browsing the catalog in a very simple way.
+
+\begin{itemize}
+\item \texttt{.bvfs\_update [jobid=x,y,z]} This command is required to update
+ the Bvfs cache in the catalog. You need to run it before any access to the
+ Bvfs layer.
+
+\item \texttt{.bvfs\_lsdirs jobid=x,y,z path=/path | pathid=101} This command
+ will list all directories in the specified \texttt{path} or
+ \texttt{pathid}. Using \texttt{pathid} avoids problems with character
+ encoding of path/filenames.
+
+\item \texttt{.bvfs\_lsfiles jobid=x,y,z path=/path | pathid=101} This command
+ will list all files in the specified \texttt{path} or \texttt{pathid}. Using
+ \texttt{pathid} avoids problems with character encoding.
+\end{itemize}
+
+You can use \texttt{limit=xxx} and \texttt{offset=yyy} to limit the amount of
+data that will be displayed.
+
+\begin{verbatim}
+* .bvfs_update jobid=1,2
+* .bvfs_update
+* .bvfs_lsdirs path=/ jobid=1,2
+\end{verbatim}
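+
+For example, to page through a large directory 1000 entries at a time
+(the jobid and pathid values are illustrative):
+
+\begin{verbatim}
+* .bvfs_lsfiles pathid=101 jobid=1,2 limit=1000 offset=0
+* .bvfs_lsfiles pathid=101 jobid=1,2 limit=1000 offset=1000
+\end{verbatim}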
+++ /dev/null
-%%
-%%
-
-\chapter*{Implementing a GUI Interface}
-\label{_ChapterStart}
-\index[general]{Interface!Implementing a Bacula GUI }
-\index[general]{Implementing a Bacula GUI Interface }
-\addcontentsline{toc}{section}{Implementing a Bacula GUI Interface}
-
-\section{General}
-\index[general]{General }
-\addcontentsline{toc}{subsection}{General}
-
-This document is intended mostly for developers who wish to develop a new GUI
-interface to {\bf Bacula}.
-
-\subsection{Minimal Code in Console Program}
-\index[general]{Program!Minimal Code in Console }
-\index[general]{Minimal Code in Console Program }
-\addcontentsline{toc}{subsubsection}{Minimal Code in Console Program}
-
-Until now, I have kept all the Catalog code in the Directory (with the
-exception of dbcheck and bscan). This is because at some point I would like to
-add user level security and access. If we have code spread everywhere such as
-in a GUI this will be more difficult. The other advantage is that any code you
-add to the Director is automatically available to both the tty console program
-and the WX program. The major disadvantage is it increases the size of the
-code -- however, compared to Networker the Bacula Director is really tiny.
-
-\subsection{GUI Interface is Difficult}
-\index[general]{GUI Interface is Difficult }
-\index[general]{Difficult!GUI Interface is }
-\addcontentsline{toc}{subsubsection}{GUI Interface is Difficult}
-
-Interfacing to an interactive program such as Bacula can be very difficult
-because the interfacing program must interpret all the prompts that may come.
-This can be next to impossible. There are are a number of ways that Bacula is
-designed to facilitate this:
-
-\begin{itemize}
-\item The Bacula network protocol is packet based, and thus pieces of
-information sent can be ASCII or binary.
-\item The packet interface permits knowing where the end of a list is.
-\item The packet interface permits special ``signals'' to be passed rather
-than data.
-\item The Director has a number of commands that are non-interactive. They
-all begin with a period, and provide things such as the list of all Jobs,
-list of all Clients, list of all Pools, list of all Storage, ... Thus the GUI
-interface can get to virtually all information that the Director has in a
-deterministic way. See \lt{}bacula-source\gt{}/src/dird/ua\_dotcmds.c for
-more details on this.
-\item Most console commands allow all the arguments to be specified on the
-command line: e.g. {\bf run job=NightlyBackup level=Full}
-\end{itemize}
-
-One of the first things to overcome is to be able to establish a conversation
-with the Director. Although you can write all your own code, it is probably
-easier to use the Bacula subroutines. The following code is used by the
-Console program to begin a conversation.
-
-\footnotesize
-\begin{verbatim}
-static BSOCK *UA_sock = NULL;
-static JCR *jcr;
-...
- read-your-config-getting-address-and-pasword;
- UA_sock = bnet_connect(NULL, 5, 15, "Director daemon", dir->address,
- NULL, dir->DIRport, 0);
- if (UA_sock == NULL) {
- terminate_console(0);
- return 1;
- }
- jcr.dir_bsock = UA_sock;
- if (!authenticate_director(\&jcr, dir)) {
- fprintf(stderr, "ERR=%s", UA_sock->msg);
- terminate_console(0);
- return 1;
- }
- read_and_process_input(stdin, UA_sock);
- if (UA_sock) {
- bnet_sig(UA_sock, BNET_TERMINATE); /* send EOF */
- bnet_close(UA_sock);
- }
- exit 0;
-\end{verbatim}
-\normalsize
-
-Then the read\_and\_process\_input routine looks like the following:
-
-\footnotesize
-\begin{verbatim}
- get-input-to-send-to-the-Director;
- bnet_fsend(UA_sock, "%s", input);
- stat = bnet_recv(UA_sock);
- process-output-from-the-Director;
-\end{verbatim}
-\normalsize
-
-For a GUI program things will be a bit more complicated. Basically in the very
-inner loop, you will need to check and see if any output is available on the
-UA\_sock. For an example, please take a look at the WX GUI interface code
-in: \lt{bacula-source/src/wx-console}
-
-\section{Bvfs API}
-\label{sec:bvfs}
-
-To help developers of restore GUI interfaces, we have added new \textsl{dot
- commands} that permit browsing the catalog in a very simple way.
-
-\begin{itemize}
-\item \texttt{.bvfs\_update [jobid=x,y,z]} This command is required to update
- the Bvfs cache in the catalog. You need to run it before any access to the
- Bvfs layer.
-
-\item \texttt{.bvfs\_lsdirs jobid=x,y,z path=/path | pathid=101} This command
- will list all directories in the specified \texttt{path} or
- \texttt{pathid}. Using \texttt{pathid} avoids problems with character
- encoding of path/filenames.
-
-\item \texttt{.bvfs\_lsfiles jobid=x,y,z path=/path | pathid=101} This command
- will list all files in the specified \texttt{path} or \texttt{pathid}. Using
- \texttt{pathid} avoids problems with character encoding.
-\end{itemize}
-
-You can use \texttt{limit=xxx} and \texttt{offset=yyy} to limit the amount of
-data that will be displayed; a paged example follows the commands below.
-
-\begin{verbatim}
-* .bvfs_update jobid=1,2
-* .bvfs_update
-* .bvfs_lsdirs path=/ jobid=1,2
-\end{verbatim}
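-
-For example, a restore GUI might page through a large directory in chunks of
-1000 entries (the JobIds and PathId below are illustrative):
-
-\begin{verbatim}
-* .bvfs_lsfiles pathid=101 jobid=1,2 limit=1000 offset=0
-* .bvfs_lsfiles pathid=101 jobid=1,2 limit=1000 offset=1000
-\end{verbatim}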
--- /dev/null
+%%
+%%
+
+\chapter{Bacula MD5 Algorithm}
+\label{MD5Chapter}
+\addcontentsline{toc}{section}{Bacula MD5 Algorithm}
+
+\section{Command Line Message Digest Utility }
+\index{Utility!Command Line Message Digest }
+\index{Command Line Message Digest Utility }
+\addcontentsline{toc}{subsection}{Command Line Message Digest Utility}
+
+
+This page describes {\bf md5}, a command line utility usable on either Unix or
+MS-DOS/Windows, which generates and verifies message digests (digital
+signatures) using the MD5 algorithm. This program can be useful when
+developing shell scripts or Perl programs for software installation, file
+comparison, and detection of file corruption and tampering.
+
+\subsection{Name}
+\index{Name}
+\addcontentsline{toc}{subsubsection}{Name}
+
+{\bf md5} - generate / check MD5 message digest
+
+\subsection{Synopsis}
+\index{Synopsis }
+\addcontentsline{toc}{subsubsection}{Synopsis}
+
+{\bf md5} [ {\bf -c}{\it signature} ] [ {\bf -u} ] [ {\bf -d}{\it input\_text}
+| {\it infile} ] [ {\it outfile} ]
+
+\subsection{Description}
+\index{Description }
+\addcontentsline{toc}{subsubsection}{Description}
+
+A {\it message digest} is a compact digital signature for an arbitrarily long
+stream of binary data. An ideal message digest algorithm would never generate
+the same signature for two different sets of input, but achieving such
+theoretical perfection would require a message digest as long as the input
+file. Practical message digest algorithms compromise in favour of a digital
+signature of modest size created with an algorithm designed to make
+preparation of input text with a given signature computationally infeasible.
+Message digest algorithms have much in common with techniques used in
+encryption, but to a different end: verification that data have not been
+altered since the signature was published.
+
+Many older programs requiring digital signatures employed 16 or 32 bit {\it
+cyclical redundancy codes} (CRC) originally developed to verify correct
+transmission in data communication protocols, but these short codes, while
+adequate to detect the kind of transmission errors for which they were
+intended, are insufficiently secure for applications such as electronic
+commerce and verification of security related software distributions.
+
+The most commonly used present-day message digest algorithm is the 128 bit MD5
+algorithm, developed by Ron Rivest of the
+\elink{MIT}{http://web.mit.edu/}
+\elink{Laboratory for Computer Science}{http://www.lcs.mit.edu/} and
+\elink{RSA Data Security, Inc.}{http://www.rsa.com/} The algorithm, with a
+reference implementation, was published as Internet
+\elink{RFC 1321}{http://www.fourmilab.ch/md5/rfc1321.html} in April 1992, and
+was placed into the public domain at that time. Message digest algorithms such
+as MD5 are not deemed ``encryption technology'' and are not subject to the
+export controls some governments impose on other data security products.
+(Obviously, the responsibility for obeying the laws in the jurisdiction in
+which you reside is entirely your own, but many common Web and Mail utilities
+use MD5, and I am unaware of any restrictions on their distribution and use.)
+
+The MD5 algorithm has been implemented in numerous computer languages
+including C,
+\elink{Perl}{http://www.perl.org/}, and
+\elink{Java}{http://www.javasoft.com/}; if you're writing a program in such a
+language, track down a suitable subroutine and incorporate it into your
+program. The program described on this page is a {\it command line}
+implementation of MD5, intended for use in shell scripts and Perl programs (it
+is much faster than computing an MD5 signature directly in Perl). This {\bf
+md5} program was originally developed as part of a suite of tools intended to
+monitor large collections of files (for example, the contents of a Web site)
+to detect corruption of files and inadvertent (or perhaps malicious) changes.
+That task is now best accomplished with more comprehensive packages such as
+\elink{Tripwire}{ftp://coast.cs.purdue.edu/pub/COAST/Tripwire/}, but the
+command line {\bf md5} component continues to prove useful for verifying
+correct delivery and installation of software packages, comparing the contents
+of two different systems, and checking for changes in specific files.
+
+\subsection{Options}
+\index{Options }
+\addcontentsline{toc}{subsubsection}{Options}
+
+\begin{description}
+
+\item [{\bf -c}{\it signature} ]
+ \index{-csignature }
+ Computes the signature of the specified {\it infile} or the string supplied
+by the {\bf -d} option and compares it against the specified {\it signature}.
+If the two signatures match, the exit status will be zero, otherwise the exit
+status will be 1. No signature is written to {\it outfile} or standard
+output; only the exit status is set. The signature to be checked must be
+specified as 32 hexadecimal digits.
+
+\item [{\bf -d}{\it input\_text} ]
+ \index{-dinput\_text }
+ A signature is computed for the given {\it input\_text} (which must be quoted
+if it contains white space characters) instead of input from {\it infile} or
+standard input. If input is specified with the {\bf -d} option, no {\it
+infile} should be specified.
+
+\item [{\bf -u} ]
+ Print how-to-call information.
+ \end{description}
+
+\subsection{Files}
+\index{Files }
+\addcontentsline{toc}{subsubsection}{Files}
+
+If no {\it infile} or {\bf -d} option is specified or {\it infile} is a single
+``-'', {\bf md5} reads from standard input; if no {\it outfile} is given, or
+{\it outfile} is a single ``-'', output is sent to standard output. Input and
+output are processed strictly serially; consequently {\bf md5} may be used in
+pipelines.
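+
+For example, the following illustrative shell fragment exercises each mode;
+the 32-digit signature shown is the well-known MD5 digest of the empty
+string, used here only as a placeholder:
+
+\footnotesize
+\begin{verbatim}
+md5 /etc/hosts              # digest of a file, written to standard output
+tar -cf - /etc | md5        # reading from a pipeline
+# Check mode: exit status 0 means the signatures match.
+md5 -cd41d8cd98f00b204e9800998ecf8427e -d"" && echo "match"
+\end{verbatim}
+\normalsize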
+
+\subsection{Bugs}
+\index{Bugs }
+\addcontentsline{toc}{subsubsection}{Bugs}
+
+The mechanism used to set standard input to binary mode may be specific to
+Microsoft C; if you rebuild the DOS/Windows version of the program from source
+using another compiler, be sure to verify binary files work properly when read
+via redirection or a pipe.
+
+This program has not been tested on a machine on which {\tt int} and/or {\tt
+long} are longer than 32 bits.
+
+\section{
+\elink{Download md5.zip}{http://www.fourmilab.ch/md5/md5.zip} (Zipped
+archive)}
+\index{Archive!Download md5.zip Zipped }
+\index{Download md5.zip (Zipped archive) }
+\addcontentsline{toc}{subsection}{Download md5.zip (Zipped archive)}
+
+The program is provided as
+\elink{md5.zip}{http://www.fourmilab.ch/md5/md5.zip}, a
+\elink{Zipped}{http://www.pkware.com/} archive containing a ready-to-run
+Win32 command-line executable program, {\tt md5.exe} (compiled using Microsoft
+Visual C++ 5.0), and in source code form along with a {\tt Makefile} to build
+the program under Unix.
+
+\subsection{See Also}
+\index{Also!See }
+\index{See Also }
+\addcontentsline{toc}{subsubsection}{See Also}
+
+{\bf sum}(1)
+
+\subsection{Exit Status}
+\index{Status!Exit }
+\index{Exit Status }
+\addcontentsline{toc}{subsubsection}{Exit Status}
+
+{\bf md5} returns status 0 if processing was completed without errors, 1 if
+the {\bf -c} option was specified and the given signature does not match that
+of the input, and 2 if processing could not be performed at all due, for
+example, to a nonexistent input file.
+
+\subsection{Copying}
+\index{Copying }
+\addcontentsline{toc}{subsubsection}{Copying}
+
+\begin{quote}
+This software is in the public domain. Permission to use, copy, modify, and
+distribute this software and its documentation for any purpose and without
+fee is hereby granted, without any conditions or restrictions. This software
+is provided ``as is'' without express or implied warranty.
+\end{quote}
+
+\subsection{Acknowledgements}
+\index{Acknowledgements }
+\addcontentsline{toc}{subsubsection}{Acknowledgements}
+
+The MD5 algorithm was developed by Ron Rivest. The public domain C language
+implementation used in this program was written by Colin Plumb in 1993.
+{\it
+\elink{by John Walker}{http://www.fourmilab.ch/}
+January 6th, MIM }
--- /dev/null
+%%
+%%
+
+\chapter{Storage Media Output Format}
+\label{_ChapterStart9}
+\index{Format!Storage Media Output}
+\index{Storage Media Output Format}
+\addcontentsline{toc}{section}{Storage Media Output Format}
+
+\section{General}
+\index{General}
+\addcontentsline{toc}{subsection}{General}
+
+This document describes the media format written by the Storage daemon. The
+Storage daemon reads and writes in units of blocks. Blocks contain records.
+Each block has a block header followed by records, and each record has a
+record header followed by record data.
+
+This chapter is intended to be a technical discussion of the Media Format and
+as such is not targeted at end users but rather at developers and system
+administrators that want or need to know more of the working details of {\bf
+Bacula}.
+
+\section{Definitions}
+\index{Definitions}
+\addcontentsline{toc}{subsection}{Definitions}
+
+\begin{description}
+
+\item [Block]
+ \index{Block}
+ A block represents the primitive unit of information that the Storage daemon
+reads and writes to a physical device. Normally, for a tape device, it will
+be the same as a tape block. The Storage daemon always reads and writes
+blocks. A block consists of block header information followed by records.
+Clients of the Storage daemon (the File daemon) normally never see blocks.
+However, some of the Storage tools (bls, bscan, bextract, ...) may use
+block header information. In older Bacula tape versions, a block could
+contain records (see record definition below) from multiple jobs. However,
+all blocks currently written by Bacula are block level BB02, and a given
+block contains records for only a single job. Different jobs simply have
+their own private blocks that are intermingled with the other blocks from
+other jobs on the Volume (previously the records were intermingled within
+the blocks). Having only records from a single job in any give block
+permitted moving the VolumeSessionId and VolumeSessionTime (see below) from
+each record heading to the Block header. This has two advantages: 1. a block
+can be quickly rejected based on the contents of the header without reading
+all the records. 2. because there is on the average more than one record per
+block, less data is written to the Volume for each job.
+
+\item [Record]
+ \index{Record}
+ A record consists of a Record Header, which is managed by the Storage daemon
+and Record Data, which is the data received from the Client. A record is the
+primitive unit of information sent to and from the Storage daemon by the
+Client (File daemon) programs. The details are described below.
+
+\item [JobId]
+ \index{JobId}
+ A number assigned by the Director daemon for a particular job. This number
+will be unique for that particular Director (Catalog). The daemons use this
+number to keep track of individual jobs. Within the Storage daemon, the JobId
+may not be unique if several Directors are accessing the Storage daemon
+simultaneously.
+
+\item [Session]
+ \index{Session}
+ A Session is a concept used in the Storage daemon that corresponds one to
+one to a Job, with the exception that each session is uniquely identified
+within the Storage daemon by a unique SessionId/SessionTime pair (see below).
+
+\item [VolSessionId]
+ \index{VolSessionId}
+ A unique number assigned by the Storage daemon to a particular session (Job)
+it is having with a File daemon. This number by itself is not unique to the
+given Volume, but with the VolSessionTime, it is unique.
+
+\item [VolSessionTime]
+ \index{VolSessionTime}
+ A unique number assigned by the Storage daemon to a particular Storage daemon
+execution. It is actually the Unix time\_t value of when the Storage daemon
+began execution cast to a 32 bit unsigned integer. The combination of the
+{\bf VolSessionId} and the {\bf VolSessionTime} for a given Storage daemon is
+guaranteed to be unique for each Job (or session).
+
+\item [FileIndex]
+ \index{FileIndex}
+ A sequential number beginning at one assigned by the File daemon to the files
+within a job that are sent to the Storage daemon for backup. The Storage
+daemon ensures that this number is greater than zero and sequential. Note,
+the Storage daemon uses negative FileIndexes to flag Session Start and End
+Labels as well as End of Volume Labels. Thus, the combination of
+VolSessionId, VolSessionTime, and FileIndex uniquely identifies the records
+for a single file written to a Volume.
+
+\item [Stream]
+ \index{Stream}
+ While writing the information for any particular file to the Volume, there
+can be any number of distinct pieces of information about that file, e.g. the
+attributes, the file data, ... The Stream indicates what piece of data it
+is, and it is an arbitrary number assigned by the File daemon to the parts
+(Unix attributes, Win32 attributes, data, compressed data,\ ...) of a file
+that are sent to the Storage daemon. The Storage daemon has no knowledge of
+the details of a Stream; it simply represents a numbered stream of bytes. The
+data for a given stream may be passed to the Storage daemon in a single record,
+or in multiple records.
+
+\item [Block Header]
+ \index{Block Header}
+ A block header consists of a block identification (``BB02''), a block length
+in bytes (typically 64,512), a checksum, and a sequential block number. Each
+block starts with a Block Header and is followed by Records. Current block
+headers also contain the VolSessionId and VolSessionTime for the records
+written to that block.
+
+\item [Record Header]
+ \index{Record Header}
+ A record header contains the Volume Session Id, the Volume Session Time, the
+FileIndex, the Stream, and the size of the data record which follows. The
+Record Header is always immediately followed by a Data Record if the size
+given in the Header is greater than zero. Note, for Block headers of level
+BB02 (version 1.27 and later), the Record header as written to tape does not
+contain the Volume Session Id and the Volume Session Time as these two
+fields are stored in the BB02 Block header. The in-memory record header does
+have those fields for convenience.
+
+\item [Data Record]
+ \index{Data Record}
+ A data record consists of a binary stream of bytes and is always preceded by
+a Record Header. The details of the meaning of the binary stream of bytes are
+unknown to the Storage daemon, but the Client program (File daemon) defines
+and thus knows the details of each record type.
+
+\item [Volume Label]
+ \index{Volume Label}
+ A label placed by the Storage daemon at the beginning of each storage volume.
+It contains general information about the volume. It is written in Record
+format. The Storage daemon manages Volume Labels, but the client may also
+read them if it wishes.
+
+\item [Begin Session Label]
+ \index{Begin Session Label}
+ The Begin Session Label is a special record placed by the Storage daemon on
+the storage medium as the first record of an append session job with a File
+daemon. This record is useful for finding the beginning of a particular
+session (Job), since no records with the same VolSessionId and VolSessionTime
+will precede this record. This record is not normally visible outside of the
+Storage daemon. The Begin Session Label is similar to the Volume Label except
+that it contains additional information pertaining to the Session.
+
+\item [End Session Label]
+ \index{End Session Label}
+ The End Session Label is a special record placed by the Storage daemon on the
+storage medium as the last record of an append session job with a File
+daemon. The End Session Record is distinguished by a FileIndex with a value
+of minus two (-2). This record is useful for detecting the end of a
+particular session since no records with the same VolSessionId and
+VolSessionTime will follow this record. This record is not normally visible
+outside of the Storage daemon. The End Session Label is similar to the Volume
+Label except that it contains additional information pertaining to the
+Session.
+\end{description}
+
+\section{Storage Daemon File Output Format}
+\index{Format!Storage Daemon File Output}
+\index{Storage Daemon File Output Format}
+\addcontentsline{toc}{subsection}{Storage Daemon File Output Format}
+
+The file storage and tape storage formats are identical except that tape
+records are by default blocked into blocks of 64,512 bytes, except for the
+last block, which is the actual number of bytes written rounded up to a
+multiple of 1024, whereas the last record of file storage is not rounded up.
+The default block size of 64,512 bytes may be overridden by the user (some
+older tape drives only support block sizes of 32K). Each Session written to
+tape is terminated with an End of File mark (this will be removed later).
+Sessions written to file are simply appended to the end of the file.
+
+\section{Overall Format}
+\index{Format!Overall}
+\index{Overall Format}
+\addcontentsline{toc}{subsection}{Overall Format}
+
+A Bacula output file consists of Blocks of data. Each block contains a block
+header followed by records. Each record consists of a record header followed
+by the record data. The first record on a tape will always be the Volume Label
+Record.
+
+No Record Header will be split across Bacula blocks. However, Record Data may
+be split across any number of Bacula blocks. Obviously this will not be the
+case for the Volume Label which will always be smaller than the Bacula Block
+size.
+
+To simplify reading tapes, the Start of Session (SOS) and End of Session (EOS)
+records are never split across blocks. If this is about to happen, Bacula will
+write a short block before writing the session record (actually, the SOS
+record should always be the first record in a block, excepting perhaps the
+Volume label).
+
+Due to hardware limitations, the last block written to the tape may not be
+fully written. If your drive permits backspacing over records, Bacula will
+back up over the last record written on the tape, re-read it, and verify that
+it was correctly written.
+
+When a new tape is mounted, Bacula will write the full contents of the
+partially written block to the new tape ensuring that there is no loss of
+data. When reading a tape, Bacula will discard any block that is not totally
+written, thus ensuring that there is no duplication of data. In addition,
+since Bacula blocks are sequentially numbered within a Job, it is easy to
+ensure that no block is missing or duplicated.
+
+\section{Serialization}
+\index{Serialization}
+\addcontentsline{toc}{subsection}{Serialization}
+
+All Block Headers, Record Headers, and Label Records are written using
+Bacula's serialization routines. These routines guarantee that the data is
+written to the output volume in a machine independent format.
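+
+As a rough illustration (a sketch only; see src/lib/serial.h and
+src/stored/block.c for the authoritative code, and note that the bare field
+names below are hypothetical), serializing the six fields of a BB02 block
+header might look like:
+
+\footnotesize
+\begin{verbatim}
+#include "serial.h"         /* Bacula serialization macros */
+
+   ser_declare;             /* declares the serialization cursor */
+   ser_begin(buf, 24);      /* the BB02 block header is 24 bytes */
+   ser_uint32(CheckSum);
+   ser_uint32(BlockSize);
+   ser_uint32(BlockNumber);
+   ser_bytes("BB02", 4);
+   ser_uint32(VolSessionId);
+   ser_uint32(VolSessionTime);
+   ser_end(buf, 24);        /* asserts exactly 24 bytes were written */
+\end{verbatim}
+\normalsize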
+
+\section{Block Header}
+\index{Header!Block}
+\index{Block Header}
+\addcontentsline{toc}{subsection}{Block Header}
+
+The format of the Block Header (version 1.27 and later) is:
+
+\footnotesize
+\begin{verbatim}
+   uint32_t CheckSum;        /* Block check sum */
+   uint32_t BlockSize;       /* Block byte size including the header */
+   uint32_t BlockNumber;     /* Block number */
+   char ID[4] = "BB02";      /* Identification and block level */
+   uint32_t VolSessionId;    /* Session Id for Job */
+   uint32_t VolSessionTime;  /* Session Time for Job */
+\end{verbatim}
+\normalsize
+
+The Block header is a fixed length and fixed format and is followed by Record
+Headers and Record Data. The CheckSum field is a 32 bit checksum of the block
+data and the block header but not including the CheckSum field. The Block
+Header is always immediately followed by a Record Header. If the tape is
+damaged, a Bacula utility will be able to recover as much information as
+possible from the tape by recovering blocks which are valid. The Block header
+is written using the Bacula serialization routines and thus is guaranteed to
+be in machine independent format. See below for version 2 of the block header.
+
+
+\section{Record Header}
+\index{Header!Record}
+\index{Record Header}
+\addcontentsline{toc}{subsection}{Record Header}
+
+Each binary data record is preceded by a Record Header. The Record Header is
+fixed length and fixed format, whereas the binary data record is of variable
+length. The Record Header is written using the Bacula serialization routines
+and thus is guaranteed to be in machine independent format.
+
+The format of the Record Header (version 1.27 or later) is:
+
+\footnotesize
+\begin{verbatim}
+   int32_t  FileIndex;       /* File index supplied by File daemon */
+   int32_t  Stream;          /* Stream number supplied by File daemon */
+   uint32_t DataSize;        /* size of following data record in bytes */
+\end{verbatim}
+\normalsize
+
+This record is followed by the binary Stream data of DataSize bytes, followed
+by another Record Header record and the binary stream data. For the definitive
+definition of this record, see record.h in the src/stored directory.
+
+Additional notes on the above:
+
+\begin{description}
+
+\item [The {\bf VolSessionId} ]
+ \index{VolSessionId}
+ is a unique sequential number that is assigned by the Storage Daemon to a
+particular Job. This number is sequential since the start of execution of the
+daemon.
+
+\item [The {\bf VolSessionTime} ]
+ \index{VolSessionTime}
+ is the time/date that the current execution of the Storage Daemon started. It
+assures that the combination of VolSessionId and VolSessionTime is unique for
+every job written to the tape, even if there was a machine crash between two
+writes.
+
+\item [The {\bf FileIndex} ]
+ \index{FileIndex}
+ is a sequential file number within a job. The Storage daemon requires this
+index to be greater than zero and sequential. Note, however, that the File
+daemon may send multiple Streams for the same FileIndex. In addition, the
+Storage daemon uses negative FileIndices to hold the Begin Session Label, the
+End Session Label, and the End of Volume Label.
+
+\item [The {\bf Stream} ]
+ \index{Stream}
+ is defined by the File daemon and is used to identify separate parts of the
+data saved for each file (Unix attributes, Win32 attributes, file data,
+compressed file data, sparse file data, ...). The Storage Daemon has no idea
+of what a Stream is or what it contains except that the Stream is required to
+be a positive integer. Negative Stream numbers are used internally by the
+Storage daemon to indicate that the record is a continuation of the previous
+record (the previous record would not entirely fit in the block).
+
+For Start Session and End Session Labels (where the FileIndex is negative),
+the Storage daemon uses the Stream field to contain the JobId. The current
+stream definitions are:
+
+\footnotesize
+\begin{verbatim}
+#define STREAM_UNIX_ATTRIBUTES              1  /* Generic Unix attributes */
+#define STREAM_FILE_DATA                    2  /* Standard uncompressed data */
+#define STREAM_MD5_SIGNATURE                3  /* MD5 signature for the file */
+#define STREAM_GZIP_DATA                    4  /* GZip compressed file data */
+/* Extended Unix attributes with Win32 Extended data. Deprecated. */
+#define STREAM_UNIX_ATTRIBUTES_EX           5  /* Extended Unix attr for Win32 EX */
+#define STREAM_SPARSE_DATA                  6  /* Sparse data stream */
+#define STREAM_SPARSE_GZIP_DATA             7
+#define STREAM_PROGRAM_NAMES                8  /* program names for program data */
+#define STREAM_PROGRAM_DATA                 9  /* Data needing program */
+#define STREAM_SHA1_SIGNATURE              10  /* SHA1 signature for the file */
+#define STREAM_WIN32_DATA                  11  /* Win32 BackupRead data */
+#define STREAM_WIN32_GZIP_DATA             12  /* Gzipped Win32 BackupRead data */
+#define STREAM_MACOS_FORK_DATA             13  /* Mac resource fork */
+#define STREAM_HFSPLUS_ATTRIBUTES          14  /* Mac OS extra attributes */
+#define STREAM_UNIX_ATTRIBUTES_ACCESS_ACL  15  /* Standard ACL attributes on UNIX */
+#define STREAM_UNIX_ATTRIBUTES_DEFAULT_ACL 16  /* Default ACL attributes on UNIX */
+\end{verbatim}
+\normalsize
+
+\item [The {\bf DataSize} ]
+ \index{DataSize}
+ is the size in bytes of the binary data record that follows the Session
+Record header. The Storage Daemon has no idea of the actual contents of the
+binary data record. For standard Unix files, the data record typically
+contains the file attributes or the file data. For a sparse file the first
+64 bits of the file data contains the storage address for the data block.
+\end{description}
+
+The Record Header is never split across two blocks. If there is not enough
+room in a block for the full Record Header, the block is padded to the end
+with zeros and the Record Header begins in the next block. The data record, on
+the other hand, may be split across multiple blocks and even multiple physical
+volumes. When a data record is split, the second (and possibly subsequent)
+piece of the data is preceded by a new Record Header. Thus each piece of data
+is always immediately preceded by a Record Header. When reading a record, if
+Bacula finds only part of the data in the first record, it will automatically
+read the next record and concatenate the data record to form a full data
+record.
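+
+In sketch form, a reader might reassemble a split record as follows (the
+helper names here are hypothetical; the real logic lives in
+src/stored/read\_record.c, and a continuation piece is marked on the medium
+by a negative Stream number):
+
+\footnotesize
+\begin{verbatim}
+   /* Illustrative only: gather the pieces of one data record. */
+   read_record(rec);                    /* first piece */
+   append_to_buffer(rec);
+   while (next_record_is_continuation()) {
+      read_record(rec);                 /* Stream is negative here */
+      append_to_buffer(rec);            /* concatenate the pieces */
+   }
+\end{verbatim}
+\normalsize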
+
+\section{Version BB02 Block Header}
+\index{Version BB02 Block Header}
+\index{Header!Version BB02 Block}
+\addcontentsline{toc}{subsection}{Version BB02 Block Header}
+
+Each session or Job has its own private blocks. As a consequence, the SessionId
+and SessionTime are written once in each Block Header and not in the Record
+Header. So, the second and current version of the Block Header BB02 is:
+
+\footnotesize
+\begin{verbatim}
+   uint32_t CheckSum;        /* Block check sum */
+   uint32_t BlockSize;       /* Block byte size including the header */
+   uint32_t BlockNumber;     /* Block number */
+   char ID[4] = "BB02";      /* Identification and block level */
+   uint32_t VolSessionId;    /* Applies to all records */
+   uint32_t VolSessionTime;  /* contained in this block */
+\end{verbatim}
+\normalsize
+
+As with the previous version, the BB02 Block header is a fixed length and
+fixed format and is followed by Record Headers and Record Data. The CheckSum
+field is a 32 bit CRC checksum of the block data and the block header but not
+including the CheckSum field. The Block Header is always immediately followed
+by a Record Header. If the tape is damaged, a Bacula utility will be able to
+recover as much information as possible from the tape by recovering blocks
+which are valid. The Block header is written using the Bacula serialization
+routines and thus is guaranteed to be in machine independent format.
+
+\section{Version 2 Record Header}
+\index{Version 2 Record Header}
+\index{Header!Version 2 Record}
+\addcontentsline{toc}{subsection}{Version 2 Record Header}
+
+The Version 2 Record Header is written to the medium when using Version BB02
+Block Headers. The memory representation of the record is identical to the old
+BB01 Record Header, but on the storage medium, the first two fields, namely
+VolSessionId and VolSessionTime, are not written. The Block Header is filled
+with these values when the first user record (i.e. non-label record) is
+written, so that when the block is written, it will have the current and
+unique VolSessionId and VolSessionTime. On reading each record from the Block,
+the VolSessionId and VolSessionTime are filled into the Record Header from the
+Block Header.
+
+\section{Volume Label Format}
+\index{Volume Label Format}
+\index{Format!Volume Label}
+\addcontentsline{toc}{subsection}{Volume Label Format}
+
+Tape volume labels are created by the Storage daemon in response to a {\bf
+label} command given to the Console program, or alternatively by the {\bf
+btape} program. Each volume is labeled with the following information
+using the Bacula serialization routines, which guarantee machine byte order
+independence.
+
+For Bacula versions 1.27 and later, the Volume Label Format is:
+
+\footnotesize
+\begin{verbatim}
+   char Id[32];              /* Bacula 1.0 immortal\n */
+   uint32_t VerNum;          /* Label version number */
+   /* VerNum 11 and greater Bacula 1.27 and later */
+   btime_t   label_btime;    /* Time/date tape labeled */
+   btime_t   write_btime;    /* Time/date tape first written */
+   /* The following are 0 in VerNum 11 and greater */
+   float64_t write_date;     /* Date this label written */
+   float64_t write_time;     /* Time this label written */
+   char VolName[128];        /* Volume name */
+   char PrevVolName[128];    /* Previous Volume Name */
+   char PoolName[128];       /* Pool name */
+   char PoolType[128];       /* Pool type */
+   char MediaType[128];      /* Type of this media */
+   char HostName[128];       /* Host name of writing computer */
+   char LabelProg[32];       /* Label program name */
+   char ProgVersion[32];     /* Program version */
+   char ProgDate[32];        /* Program build date/time */
+\end{verbatim}
+\normalsize
+
+Note, the LabelType (Volume Label, Volume PreLabel, Session Start Label, ...)
+is stored in the record FileIndex field of the Record Header and does not
+appear in the data part of the record.
+
+\section{Session Label}
+\index{Label!Session}
+\index{Session Label}
+\addcontentsline{toc}{subsection}{Session Label}
+
+The Session Label is written at the beginning and end of each session as well
+as the last record on the physical medium. It has the following binary format:
+
+
+\footnotesize
+\begin{verbatim}
+   char Id[32];              /* Bacula Immortal ... */
+   uint32_t VerNum;          /* Label version number */
+   uint32_t JobId;           /* Job id */
+   uint32_t VolumeIndex;     /* sequence no of vol */
+   /* Prior to VerNum 11 */
+   float64_t write_date;     /* Date this label written */
+   /* VerNum 11 and greater */
+   btime_t   write_btime;    /* time/date record written */
+   /* The following is zero VerNum 11 and greater */
+   float64_t write_time;     /* Time this label written */
+   char PoolName[128];       /* Pool name */
+   char PoolType[128];       /* Pool type */
+   char JobName[128];        /* base Job name */
+   char ClientName[128];
+   /* Added in VerNum 10 */
+   char Job[128];            /* Unique Job name */
+   char FileSetName[128];    /* FileSet name */
+   uint32_t JobType;
+   uint32_t JobLevel;
+\normalsize
+
+In addition, the EOS label contains:
+
+\footnotesize
+\begin{verbatim}
+   /* The remainder are part of EOS label only */
+   uint32_t JobFiles;
+   uint64_t JobBytes;
+   uint32_t start_block;
+   uint32_t end_block;
+   uint32_t start_file;
+   uint32_t end_file;
+   uint32_t JobErrors;
+\end{verbatim}
+\normalsize
+
+For VerNum greater than 10, the EOS label also contains:
+
+\footnotesize
+\begin{verbatim}
+   uint32_t JobStatus;       /* Job termination code */
+\end{verbatim}
+\normalsize
+
+Note, the LabelType (Volume Label, Volume PreLabel, Session Start Label,
+...) is stored in the record FileIndex field and does not appear in the data
+part of the record. Also, the Stream field of the Record Header contains the
+JobId. This permits quick filtering without actually reading all the session
+data in many cases.
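+
+For instance, a scanning program can skip a foreign session with a test of
+roughly this shape (a hypothetical fragment; \texttt{rec} stands for a record
+whose header has already been read):
+
+\footnotesize
+\begin{verbatim}
+   /* Session labels have a negative FileIndex and carry the JobId
+    * in the Stream field, so another job's session can be skipped
+    * without reading its data records. */
+   if (rec->FileIndex == SOS_LABEL && rec->Stream != wanted_jobid) {
+      skip_this_session();          /* hypothetical */
+   }
+\end{verbatim}
+\normalsize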
+
+\section{Overall Storage Format}
+\index{Format!Overall Storage}
+\index{Overall Storage Format}
+\addcontentsline{toc}{subsection}{Overall Storage Format}
+
+\footnotesize
+\begin{verbatim}
+ Current Bacula Tape Format
+ 6 June 2001
+ Version BB02 added 28 September 2002
+ Version BB01 is the old deprecated format.
+ A Bacula tape is composed of tape Blocks. Each block
+ has a Block header followed by the block data. Block
+ Data consists of Records. Records consist of Record
+ Headers followed by Record Data.
+ :=======================================================:
+ | |
+ | Block Header (24 bytes) |
+ | |
+ |-------------------------------------------------------|
+ | |
+ | Record Header (12 bytes) |
+ | |
+ |-------------------------------------------------------|
+ | |
+ | Record Data |
+ | |
+ |-------------------------------------------------------|
+ | |
+ | Record Header (12 bytes) |
+ | |
+ |-------------------------------------------------------|
+ | |
+ | ... |
+ Block Header: the first item in each block. The format is
+ shown below.
+ Partial Data block: occurs if the data from a previous
+ block spills over to this block (the normal case except
+ for the first block on a tape). However, this partial
+ data block is always preceded by a record header.
+ Record Header: identifies the Volume Session, the Stream
+ and the following Record Data size. See below for format.
+ Record data: arbitrary binary data.
+ Block Header Format BB02
+ :=======================================================:
+ | CheckSum (uint32_t) |
+ |-------------------------------------------------------|
+ | BlockSize (uint32_t) |
+ |-------------------------------------------------------|
+ | BlockNumber (uint32_t) |
+ |-------------------------------------------------------|
+ | "BB02" (char [4]) |
+ |-------------------------------------------------------|
+ | VolSessionId (uint32_t) |
+ |-------------------------------------------------------|
+ | VolSessionTime (uint32_t) |
+ :=======================================================:
+ BB02: Serves to identify the block as a
+ Bacula block and also serves as a block format identifier
+ should we ever need to change the format.
+ BlockSize: is the size in bytes of the block. When reading
+ back a block, if the BlockSize does not agree with the
+ actual size read, Bacula discards the block.
+ CheckSum: a checksum for the Block.
+ BlockNumber: is the sequential block number on the tape.
+ VolSessionId: a unique sequential number that is assigned
+ by the Storage Daemon to a particular Job.
+ This number is sequential since the start
+ of execution of the daemon.
+ VolSessionTime: the time/date that the current execution
+ of the Storage Daemon started. It assures
+ that the combination of VolSessionId and
+ VolSessionTime is unique for all jobs
+ written to the tape, even if there was a
+ machine crash between two writes.
+ Record Header Format BB02
+ :=======================================================:
+ | FileIndex (int32_t) |
+ |-------------------------------------------------------|
+ | Stream (int32_t) |
+ |-------------------------------------------------------|
+ | DataSize (uint32_t) |
+ :=======================================================:
+ FileIndex: a sequential file number within a job. The
+ Storage daemon enforces this index to be
+ greater than zero and sequential. Note,
+ however, that the File daemon may send
+ multiple Streams for the same FileIndex.
+ The Storage Daemon uses negative FileIndices
+ to identify Session Start and End labels
+ as well as the End of Volume labels.
+ Stream: defined by the File daemon and is intended to be
+ used to identify separate parts of the data
+ saved for each file (attributes, file data,
+ ...). The Storage Daemon has no idea of
+ what a Stream is or what it contains.
+ DataSize: the size in bytes of the binary data record
+ that follows the Session Record header.
+ The Storage Daemon has no idea of the
+ actual contents of the binary data record.
+ For standard Unix files, the data record
+ typically contains the file attributes or
+ the file data. For a sparse file
+ the first 64 bits of the data contains
+ the storage address for the data block.
+ Volume Label
+ :=======================================================:
+ | Id (32 bytes) |
+ |-------------------------------------------------------|
+ | VerNum (uint32_t) |
+ |-------------------------------------------------------|
+ | label_date (float64_t) |
+ | label_btime (btime_t VerNum 11 |
+ |-------------------------------------------------------|
+ | label_time (float64_t) |
+ | write_btime (btime_t VerNum 11 |
+ |-------------------------------------------------------|
+ | write_date (float64_t) |
+ | 0 (float64_t) VerNum 11 |
+ |-------------------------------------------------------|
+ | write_time (float64_t) |
+ | 0 (float64_t) VerNum 11 |
+ |-------------------------------------------------------|
+ | VolName (128 bytes) |
+ |-------------------------------------------------------|
+ | PrevVolName (128 bytes) |
+ |-------------------------------------------------------|
+ | PoolName (128 bytes) |
+ |-------------------------------------------------------|
+ | PoolType (128 bytes) |
+ |-------------------------------------------------------|
+ | MediaType (128 bytes) |
+ |-------------------------------------------------------|
+ | HostName (128 bytes) |
+ |-------------------------------------------------------|
+ | LabelProg (32 bytes) |
+ |-------------------------------------------------------|
+ | ProgVersion (32 bytes) |
+ |-------------------------------------------------------|
+ | ProgDate (32 bytes) |
+ |-------------------------------------------------------|
+ :=======================================================:
+
+ Id: 32 byte Bacula identifier "Bacula 1.0 immortal\n"
+ (old version also recognized:)
+ Id: 32 byte Bacula identifier "Bacula 0.9 mortal\n"
+ LabelType (Saved in the FileIndex of the Header record).
+ PRE_LABEL -1 Volume label on unwritten tape
+ VOL_LABEL -2 Volume label after tape written
+ EOM_LABEL -3 Label at EOM (not currently implemented)
+ SOS_LABEL -4 Start of Session label (format given below)
+ EOS_LABEL -5 End of Session label (format given below)
+ VerNum: 11
+ label_date: Julian day tape labeled
+ label_time: Julian time tape labeled
+ write_date: Julian date tape first used (data written)
+ write_time: Julian time tape first used (data written)
+ VolName: "Physical" Volume name
+ PrevVolName: The VolName of the previous tape (if this tape is
+ a continuation of the previous one).
+ PoolName: Pool Name
+ PoolType: Pool Type
+ MediaType: Media Type
+ HostName: Name of host that is first writing the tape
+ LabelProg: Name of the program that labeled the tape
+ ProgVersion: Version of the label program
+ ProgDate: Date Label program built
+ Session Label
+ :=======================================================:
+ | Id (32 bytes) |
+ |-------------------------------------------------------|
+ | VerNum (uint32_t) |
+ |-------------------------------------------------------|
+ | JobId (uint32_t) |
+ |-------------------------------------------------------|
+ | write_btime (btime_t) VerNum 11 |
+ |-------------------------------------------------------|
+ | 0 (float64_t) VerNum 11 |
+ |-------------------------------------------------------|
+ | PoolName (128 bytes) |
+ |-------------------------------------------------------|
+ | PoolType (128 bytes) |
+ |-------------------------------------------------------|
+ | JobName (128 bytes) |
+ |-------------------------------------------------------|
+ | ClientName (128 bytes) |
+ |-------------------------------------------------------|
+ | Job (128 bytes) |
+ |-------------------------------------------------------|
+ | FileSetName (128 bytes) |
+ |-------------------------------------------------------|
+ | JobType (uint32_t) |
+ |-------------------------------------------------------|
+ | JobLevel (uint32_t) |
+ |-------------------------------------------------------|
+ | FileSetMD5 (50 bytes) VerNum 11 |
+ |-------------------------------------------------------|
+ Additional fields in End Of Session Label
+ |-------------------------------------------------------|
+ | JobFiles (uint32_t) |
+ |-------------------------------------------------------|
+ | JobBytes (uint64_t) |
+ |-------------------------------------------------------|
+ | start_block (uint32_t) |
+ |-------------------------------------------------------|
+ | end_block (uint32_t) |
+ |-------------------------------------------------------|
+ | start_file (uint32_t) |
+ |-------------------------------------------------------|
+ | end_file (uint32_t) |
+ |-------------------------------------------------------|
+ | JobErrors (uint32_t) |
+ |-------------------------------------------------------|
+ | JobStatus (uint32_t) VerNum 11 |
+ :=======================================================:
+ Id: 32 byte Bacula Identifier "Bacula 1.0 immortal\n"
+ LabelType (in FileIndex field of Header):
+ EOM_LABEL -3 Label at EOM
+ SOS_LABEL -4 Start of Session label
+ EOS_LABEL -5 End of Session label
+ VerNum: 11
+ JobId: JobId
+ write_btime: Bacula time/date this tape record written
+ write_date: Julian date this tape record written - deprecated
+ write_time: Julian time this tape record written - deprecated.
+ PoolName: Pool Name
+ PoolType: Pool Type
+ JobName: base Job name
+ ClientName: Name of File daemon or Client writing this session
+ Not used for EOM_LABEL.
+\end{verbatim}
+\normalsize
+
+\section{Unix File Attributes}
+\index{Unix File Attributes}
+\index{Attributes!Unix File}
+\addcontentsline{toc}{subsection}{Unix File Attributes}
+
+The Unix File Attributes packet consists of the following:
+
+\lt{}File-Index\gt{} \lt{}Type\gt{}
+\lt{}Filename\gt{}@\lt{}File-Attributes\gt{}@\lt{}Link\gt{}
+@\lt{}Extended-Attributes@\gt{} where
+
+\begin{description}
+
+\item [@]
+ represents a byte containing a binary zero.
+
+\item [FileIndex]
+ \index{FileIndex}
+ is the sequential file index starting from one assigned by the File daemon.
+
+\item [Type]
+ \index{Type}
+ is one of the following:
+
+\footnotesize
+\begin{verbatim}
+#define FT_LNKSAVED   1  /* hard link to file already saved */
+#define FT_REGE       2  /* Regular file but empty */
+#define FT_REG        3  /* Regular file */
+#define FT_LNK        4  /* Soft Link */
+#define FT_DIR        5  /* Directory */
+#define FT_SPEC       6  /* Special file -- chr, blk, fifo, sock */
+#define FT_NOACCESS   7  /* Not able to access */
+#define FT_NOFOLLOW   8  /* Could not follow link */
+#define FT_NOSTAT     9  /* Could not stat file */
+#define FT_NOCHG     10  /* Incremental option, file not changed */
+#define FT_DIRNOCHG  11  /* Incremental option, directory not changed */
+#define FT_ISARCH    12  /* Trying to save archive file */
+#define FT_NORECURSE 13  /* No recursion into directory */
+#define FT_NOFSCHG   14  /* Different file system, prohibited */
+#define FT_NOOPEN    15  /* Could not open directory */
+#define FT_RAW       16  /* Raw block device */
+#define FT_FIFO      17  /* Raw fifo device */
+\end{verbatim}
+\normalsize
+
+\item [Filename]
+ \index{Filename}
+ is the fully qualified filename.
+
+\item [File-Attributes]
+ \index{File-Attributes}
+ consists of the 13 fields of the stat() buffer in ASCII base64 format
+separated by spaces. These fields and their meanings are shown below. This
+stat() packet is in Unix format, and MUST be provided (constructed) for ALL
+systems.
+
+\item [Link]
+ \index{Link}
+ when the FT code is FT\_LNK or FT\_LNKSAVED, the item in question is a Unix
+link, and this field contains the fully qualified link name. When the FT code
+is not FT\_LNK or FT\_LNKSAVED, this field is null.
+
+\item [Extended-Attributes]
+ \index{Extended-Attributes}
+ The exact format of this field is operating system dependent. It contains
+additional or extended attributes of a system dependent nature. Currently,
+this field is used only on WIN32 systems where it contains an ASCII base64
+representation of the WIN32\_FILE\_ATTRIBUTE\_DATA structure as defined by
+Windows. The fields in the base64 representation of this structure are, like
+the File-Attributes, separated by spaces.
+\end{description}
+
+The File-attributes consist of the following (an encoding sketch follows the
+table):
+
+\addcontentsline{lot}{table}{File Attributes}
+\begin{longtable}{|p{0.6in}|p{0.7in}|p{1in}|p{1in}|p{1.4in}|}
+ \hline
+\multicolumn{1}{|c|}{\bf Field No. } & \multicolumn{1}{c|}{\bf Stat Name }
+& \multicolumn{1}{c|}{\bf Unix } & \multicolumn{1}{c|}{\bf Win98/NT } &
+\multicolumn{1}{c|}{\bf MacOS } \\
+ \hline
+\multicolumn{1}{|c|}{1 } & {st\_dev } & {Device number of filesystem } &
+{Drive number } & {vRefNum } \\
+ \hline
+\multicolumn{1}{|c|}{2 } & {st\_ino } & {Inode number } & {Always 0 } &
+{fileID/dirID } \\
+ \hline
+\multicolumn{1}{|c|}{3 } & {st\_mode } & {File mode } & {File mode } &
+{777 dirs/apps; 666 docs; 444 locked docs } \\
+ \hline
+\multicolumn{1}{|c|}{4 } & {st\_nlink } & {Number of links to the file } &
+{Number of links (only on NTFS) } & {Always 1 } \\
+ \hline
+\multicolumn{1}{|c|}{5 } & {st\_uid } & {Owner ID } & {Always 0 } &
+{Always 0 } \\
+ \hline
+\multicolumn{1}{|c|}{6 } & {st\_gid } & {Group ID } & {Always 0 } &
+{Always 0 } \\
+ \hline
+\multicolumn{1}{|c|}{7 } & {st\_rdev } & {Device ID for special files } &
+{Drive No. } & {Always 0 } \\
+ \hline
+\multicolumn{1}{|c|}{8 } & {st\_size } & {File size in bytes } & {File
+size in bytes } & {Data fork file size in bytes } \\
+ \hline
+\multicolumn{1}{|c|}{9 } & {st\_blksize } & {Preferred block size } &
+{Always 0 } & {Preferred block size } \\
+ \hline
+\multicolumn{1}{|c|}{10 } & {st\_blocks } & {Number of blocks allocated }
+& {Always 0 } & {Number of blocks allocated } \\
+ \hline
+\multicolumn{1}{|c|}{11 } & {st\_atime } & {Last access time since epoch }
+& {Last access time since epoch } & {Last access time -66 years } \\
+ \hline
+\multicolumn{1}{|c|}{12 } & {st\_mtime } & {Last modify time since epoch }
+& {Last modify time since epoch } & {Last modify time -66 years } \\
+ \hline
+\multicolumn{1}{|c|}{13 } & {st\_ctime } & {Inode change time since epoch
+} & {File create time since epoch } & {File create time -66 years}
+\\ \hline
+
+\end{longtable}
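+
+As a rough sketch of how the File daemon builds this packet (illustrative
+only; to\_base64() is the routine in
+\lt{}bacula-source\gt{}/src/lib/base64.c, but the surrounding variable names
+are hypothetical):
+
+\footnotesize
+\begin{verbatim}
+   /* Encode a few of the 13 stat() fields as space-separated
+    * base64 values, in the order given in the table above. */
+   char attribs[MAXSTRING];
+   char *p = attribs;
+   p += to_base64((int64_t)statp->st_dev, p);   *p++ = ' ';
+   p += to_base64((int64_t)statp->st_ino, p);   *p++ = ' ';
+   p += to_base64((int64_t)statp->st_mode, p);  *p++ = ' ';
+   /* ... and so on for the remaining fields ... */
+   *p = 0;
+\end{verbatim}
+\normalsize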
+
+\section{Old Deprecated Tape Format}
+\index{Old Deprecated Tape Format}
+\index{Format!Old Deprecated Tape}
+\addcontentsline{toc}{subsection}{Old Deprecated Tape Format}
+
+The format of the Block Header (version 1.26 and earlier) is:
+
+\footnotesize
+\begin{verbatim}
+ uint32_t CheckSum; /* Block check sum */
+ uint32_t BlockSize; /* Block byte size including the header */
+ uint32_t BlockNumber; /* Block number */
+ char ID[4] = "BB01"; /* Identification and block level */
+\end{verbatim}
+\normalsize
+
+The format of the Record Header (version 1.26 or earlier) is:
+
+\footnotesize
+\begin{verbatim}
+ uint32_t VolSessionId; /* Unique ID for this session */
+ uint32_t VolSessionTime; /* Start time/date of session */
+ int32_t FileIndex; /* File index supplied by File daemon */
+ int32_t Stream; /* Stream number supplied by File daemon */
+ uint32_t DataSize; /* size of following data record in bytes */
+\end{verbatim}
+\normalsize
+
+\footnotesize
+\begin{verbatim}
+ Old Bacula Tape Format (deprecated)
+ 6 June 2001
+ Version BB01 is the old deprecated format.
+ A Bacula tape is composed of tape Blocks. Each block
+ has a Block header followed by the block data. Block
+ Data consists of Records. Records consist of Record
+ Headers followed by Record Data.
+ :=======================================================:
+ | |
+ | Block Header |
+ | (16 bytes version BB01) |
+ |-------------------------------------------------------|
+ | |
+ | Record Header |
+ | (20 bytes version BB01) |
+ |-------------------------------------------------------|
+ | |
+ | Record Data |
+ | |
+ |-------------------------------------------------------|
+ | |
+ | Record Header |
+ | (20 bytes version BB01) |
+ |-------------------------------------------------------|
+ | |
+ | ... |
+ Block Header: the first item in each block. The format is
+ shown below.
+ Partial Data block: occurs if the data from a previous
+ block spills over to this block (the normal case except
+ for the first block on a tape). However, this partial
+ data block is always preceded by a record header.
+ Record Header: identifies the Volume Session, the Stream
+ and the following Record Data size. See below for format.
+ Record data: arbitrary binary data.
+ Block Header Format BB01 (deprecated)
+ :=======================================================:
+ | CheckSum (uint32_t) |
+ |-------------------------------------------------------|
+ | BlockSize (uint32_t) |
+ |-------------------------------------------------------|
+ | BlockNumber (uint32_t) |
+ |-------------------------------------------------------|
+ | "BB01" (char [4]) |
+ :=======================================================:
+ BB01: Serves to identify the block as a
+ Bacula block and also serves as a block format identifier
+ should we ever need to change the format.
+ BlockSize: is the size in bytes of the block. When reading
+ back a block, if the BlockSize does not agree with the
+ actual size read, Bacula discards the block.
+ CheckSum: a checksum for the Block.
+ BlockNumber: is the sequential block number on the tape.
+ Record Header Format BB01 (deprecated)
+ :=======================================================:
+ | VolSessionId (uint32_t) |
+ |-------------------------------------------------------|
+ | VolSessionTime (uint32_t) |
+ |-------------------------------------------------------|
+ | FileIndex (int32_t) |
+ |-------------------------------------------------------|
+ | Stream (int32_t) |
+ |-------------------------------------------------------|
+ | DataSize (uint32_t) |
+ :=======================================================:
+ VolSessionId: a unique sequential number that is assigned
+ by the Storage Daemon to a particular Job.
+ This number is sequential since the start
+ of execution of the daemon.
+ VolSessionTime: the time/date that the current execution
+ of the Storage Daemon started. It assures
+ that the combination of VolSessionId and
+ VolSessionTime is unique for all jobs
+ written to the tape, even if there was a
+ machine crash between two writes.
+ FileIndex: a sequential file number within a job. The
+ Storage daemon enforces this index to be
+ greater than zero and sequential. Note,
+ however, that the File daemon may send
+ multiple Streams for the same FileIndex.
+ The Storage Daemon uses negative FileIndices
+ to identify Session Start and End labels
+ as well as the End of Volume labels.
+ Stream: defined by the File daemon and is intended to be
+ used to identify separate parts of the data
+ saved for each file (attributes, file data,
+ ...). The Storage Daemon has no idea of
+ what a Stream is or what it contains.
+ DataSize: the size in bytes of the binary data record
+ that follows the Session Record header.
+ The Storage Daemon has no idea of the
+ actual contents of the binary data record.
+ For standard Unix files, the data record
+ typically contains the file attributes or
+ the file data. For a sparse file
+ the first 64 bits of the data contains
+ the storage address for the data block.
+ Volume Label
+ :=======================================================:
+ | Id (32 bytes) |
+ |-------------------------------------------------------|
+ | VerNum (uint32_t) |
+ |-------------------------------------------------------|
+ | label_date (float64_t) |
+ |-------------------------------------------------------|
+ | label_time (float64_t) |
+ |-------------------------------------------------------|
+ | write_date (float64_t) |
+ |-------------------------------------------------------|
+ | write_time (float64_t) |
+ |-------------------------------------------------------|
+ | VolName (128 bytes) |
+ |-------------------------------------------------------|
+ | PrevVolName (128 bytes) |
+ |-------------------------------------------------------|
+ | PoolName (128 bytes) |
+ |-------------------------------------------------------|
+ | PoolType (128 bytes) |
+ |-------------------------------------------------------|
+ | MediaType (128 bytes) |
+ |-------------------------------------------------------|
+ | HostName (128 bytes) |
+ |-------------------------------------------------------|
+ | LabelProg (32 bytes) |
+ |-------------------------------------------------------|
+ | ProgVersion (32 bytes) |
+ |-------------------------------------------------------|
+ | ProgDate (32 bytes) |
+ |-------------------------------------------------------|
+ :=======================================================:
+
+ Id: 32 byte Bacula identifier "Bacula 1.0 immortal\n"
+ (old version also recognized:)
+ Id: 32 byte Bacula identifier "Bacula 0.9 mortal\n"
+ LabelType (Saved in the FileIndex of the Header record).
+ PRE_LABEL -1 Volume label on unwritten tape
+ VOL_LABEL -2 Volume label after tape written
+ EOM_LABEL -3 Label at EOM (not currently implemented)
+ SOS_LABEL -4 Start of Session label (format given below)
+ EOS_LABEL -5 End of Session label (format given below)
+ label_date: Julian day tape labeled
+ label_time: Julian time tape labeled
+ write_date: Julian date tape first used (data written)
+ write_time: Julian time tape first used (data written)
+ VolName: "Physical" Volume name
+ PrevVolName: The VolName of the previous tape (if this tape is
+ a continuation of the previous one).
+ PoolName: Pool Name
+ PoolType: Pool Type
+ MediaType: Media Type
+ HostName: Name of host that is first writing the tape
+ LabelProg: Name of the program that labeled the tape
+ ProgVersion: Version of the label program
+ ProgDate: Date Label program built
+ Session Label
+ :=======================================================:
+ | Id (32 bytes) |
+ |-------------------------------------------------------|
+ | VerNum (uint32_t) |
+ |-------------------------------------------------------|
+ | JobId (uint32_t) |
+ |-------------------------------------------------------|
+ | *write_date (float64_t) VerNum 10 |
+ |-------------------------------------------------------|
+ | *write_time (float64_t) VerNum 10 |
+ |-------------------------------------------------------|
+ | PoolName (128 bytes) |
+ |-------------------------------------------------------|
+ | PoolType (128 bytes) |
+ |-------------------------------------------------------|
+ | JobName (128 bytes) |
+ |-------------------------------------------------------|
+ | ClientName (128 bytes) |
+ |-------------------------------------------------------|
+ | Job (128 bytes) |
+ |-------------------------------------------------------|
+ | FileSetName (128 bytes) |
+ |-------------------------------------------------------|
+ | JobType (uint32_t) |
+ |-------------------------------------------------------|
+ | JobLevel (uint32_t) |
+ |-------------------------------------------------------|
+ | FileSetMD5 (50 bytes) VerNum 11 |
+ |-------------------------------------------------------|
+ Additional fields in End Of Session Label
+ |-------------------------------------------------------|
+ | JobFiles (uint32_t) |
+ |-------------------------------------------------------|
+ | JobBytes (uint32_t) |
+ |-------------------------------------------------------|
+ | start_block (uint32_t) |
+ |-------------------------------------------------------|
+ | end_block (uint32_t) |
+ |-------------------------------------------------------|
+ | start_file (uint32_t) |
+ |-------------------------------------------------------|
+ | end_file (uint32_t) |
+ |-------------------------------------------------------|
+ | JobErrors (uint32_t) |
+ |-------------------------------------------------------|
+ | JobStatus (uint32_t) VerNum 11 |
+ :=======================================================:
+ * => fields deprecated
+ Id: 32 byte Bacula Identifier "Bacula 1.0 immortal\n"
+ LabelType (in FileIndex field of Header):
+ EOM_LABEL -3 Label at EOM
+ SOS_LABEL -4 Start of Session label
+ EOS_LABEL -5 End of Session label
+ VerNum: 11
+ JobId: JobId
+ write_btime: Bacula time/date this tape record written
+ write_date: Julian date tape this record written - deprecated
+ write_time: Julian time tape this record written - deprecated.
+ PoolName: Pool Name
+ PoolType: Pool Type
+ MediaType: Media Type
+ ClientName: Name of File daemon or Client writing this session
+ Not used for EOM_LABEL.
+\end{verbatim}
+\normalsize
+++ /dev/null
-%%
-%%
-
-\chapter{Storage Media Output Format}
-\label{_ChapterStart9}
-\index{Format!Storage Media Output}
-\index{Storage Media Output Format}
-\addcontentsline{toc}{section}{Storage Media Output Format}
-
-\section{General}
-\index{General}
-\addcontentsline{toc}{subsection}{General}
-
-This document describes the media format written by the Storage daemon. The
-Storage daemon reads and writes in units of blocks. Blocks contain records.
-Each block has a block header followed by records, and each record has a
-record header followed by record data.
-
-This chapter is intended to be a technical discussion of the Media Format and
-as such is not targeted at end users but rather at developers and system
-administrators that want or need to know more of the working details of {\bf
-Bacula}.
-
-\section{Definitions}
-\index{Definitions}
-\addcontentsline{toc}{subsection}{Definitions}
-
-\begin{description}
-
-\item [Block]
- \index{Block}
- A block represents the primitive unit of information that the Storage daemon
-reads and writes to a physical device. Normally, for a tape device, it will
-be the same as a tape block. The Storage daemon always reads and writes
-blocks. A block consists of block header information followed by records.
-Clients of the Storage daemon (the File daemon) normally never see blocks.
-However, some of the Storage tools (bls, bscan, bextract, ...) may use
-block header information. In older Bacula tape versions, a block could
-contain records (see record definition below) from multiple jobs. However,
-all blocks currently written by Bacula are block level BB02, and a given
-block contains records for only a single job. Different jobs simply have
-their own private blocks that are intermingled with the other blocks from
-other jobs on the Volume (previously the records were intermingled within
-the blocks). Having only records from a single job in any given block
-permitted moving the VolumeSessionId and VolumeSessionTime (see below) from
-each record header to the Block header. This has two advantages: 1. a block
-can be quickly rejected based on the contents of the header without reading
-all the records. 2. because there is on average more than one record per
-block, less data is written to the Volume for each job.
-
-\item [Record]
- \index{Record}
- A record consists of a Record Header, which is managed by the Storage daemon
-and Record Data, which is the data received from the Client. A record is the
-primitive unit of information sent to and from the Storage daemon by the
-Client (File daemon) programs. The details are described below.
-
-\item [JobId]
- \index{JobId}
- A number assigned by the Director daemon for a particular job. This number
-will be unique for that particular Director (Catalog). The daemons use this
-number to keep track of individual jobs. Within the Storage daemon, the JobId
-may not be unique if several Directors are accessing the Storage daemon
-simultaneously.
-
-\item [Session]
- \index{Session}
- A Session is a concept used in the Storage daemon that corresponds one to one to a
-Job with the exception that each session is uniquely identified within the
-Storage daemon by a unique SessionId/SessionTime pair (see below).
-
-\item [VolSessionId]
- \index{VolSessionId}
- A unique number assigned by the Storage daemon to a particular session (Job)
-it is having with a File daemon. This number by itself is not unique to the
-given Volume, but with the VolSessionTime, it is unique.
-
-\item [VolSessionTime]
- \index{VolSessionTime}
- A unique number assigned by the Storage daemon to a particular Storage daemon
-execution. It is actually the Unix time\_t value of when the Storage daemon
-began execution cast to a 32 bit unsigned integer. The combination of the
-{\bf VolSessionId} and the {\bf VolSessionTime} for a given Storage daemon is
-guaranteed to be unique for each Job (or session).
-
-\item [FileIndex]
- \index{FileIndex}
- A sequential number beginning at one assigned by the File daemon to the files
-within a job that are sent to the Storage daemon for backup. The Storage
-daemon ensures that this number is greater than zero and sequential. Note,
-the Storage daemon uses negative FileIndexes to flag Session Start and End
-Labels as well as End of Volume Labels. Thus, the combination of
-VolSessionId, VolSessionTime, and FileIndex uniquely identifies the records
-for a single file written to a Volume.
-
-\item [Stream]
- \index{Stream}
- While writing the information for any particular file to the Volume, there
-can be any number of distinct pieces of information about that file, e.g. the
-attributes, the file data, ... The Stream indicates what piece of data it
-is, and it is an arbitrary number assigned by the File daemon to the parts
-(Unix attributes, Win32 attributes, data, compressed data,\ ...) of a file
-that are sent to the Storage daemon. The Storage daemon has no knowledge of
-the details of a Stream; it simply represents a numbered stream of bytes. The
-data for a given stream may be passed to the Storage daemon in a single record,
-or in multiple records.
-
-\item [Block Header]
- \index{Block Header}
- A block header consists of a block identification (``BB02''), a block length
-in bytes (typically 64,512), a checksum, and a sequential block number. Each
-block starts with a Block Header and is followed by Records. Current block
-headers also contain the VolSessionId and VolSessionTime for the records
-written to that block.
-
-\item [Record Header]
- \index{Record Header}
- A record header contains the Volume Session Id, the Volume Session Time, the
-FileIndex, the Stream, and the size of the data record which follows. The
-Record Header is always immediately followed by a Data Record if the size
-given in the Header is greater than zero. Note, for Block headers of level
-BB02 (version 1.27 and later), the Record header as written to tape does not
-contain the Volume Session Id and the Volume Session Time as these two
-fields are stored in the BB02 Block header. The in-memory record header does
-have those fields for convenience.
-
-\item [Data Record]
- \index{Data Record}
- A data record consists of a binary stream of bytes and is always preceded by
-a Record Header. The details of the meaning of the binary stream of bytes are
-unknown to the Storage daemon, but the Client program (File daemon) defines
-and thus knows the details of each record type.
-
-\item [Volume Label]
- \index{Volume Label}
- A label placed by the Storage daemon at the beginning of each storage volume.
-It contains general information about the volume. It is written in Record
-format. The Storage daemon manages Volume Labels, and clients may also read
-them if they wish.
-
-\item [Begin Session Label]
- \index{Begin Session Label}
- The Begin Session Label is a special record placed by the Storage daemon on
-the storage medium as the first record of an append session job with a File
-daemon. This record is useful for finding the beginning of a particular
-session (Job), since no records with the same VolSessionId and VolSessionTime
-will precede this record. This record is not normally visible outside of the
-Storage daemon. The Begin Session Label is similar to the Volume Label except
-that it contains additional information pertaining to the Session.
-
-\item [End Session Label]
- \index{End Session Label}
- The End Session Label is a special record placed by the Storage daemon on the
-storage medium as the last record of an append session job with a File
-daemon. The End Session Record is distinguished by a FileIndex with a value
-of minus two (-2). This record is useful for detecting the end of a
-particular session since no records with the same VolSessionId and
-VolSessionTime will follow this record. This record is not normally visible
-outside of the Storage daemon. The End Session Label is similar to the Volume
-Label except that it contains additional information pertaining to the
-Session.
-\end{description}
-
-\section{Storage Daemon File Output Format}
-\index{Format!Storage Daemon File Output}
-\index{Storage Daemon File Output Format}
-\addcontentsline{toc}{subsection}{Storage Daemon File Output Format}
-
-The file storage and tape storage formats are identical except that tape
-records are by default blocked into blocks of 64,512 bytes, except for the
-last block, which is the actual number of bytes written rounded up to a
-multiple of 1024, whereas the last record of file storage is not rounded up.
-The default block size of 64,512 bytes may be overridden by the user (some
-older tape drives only support block sizes of 32K). Each Session written to
-tape is terminated with an End of File mark (this will be removed later).
-Sessions written to file are simply appended to the end of the file.
-
-\section{Overall Format}
-\index{Format!Overall}
-\index{Overall Format}
-\addcontentsline{toc}{subsection}{Overall Format}
-
-A Bacula output file consists of Blocks of data. Each block contains a block
-header followed by records. Each record consists of a record header followed
-by the record data. The first record on a tape will always be the Volume Label
-Record.
-
-No Record Header will be split across Bacula blocks. However, Record Data may
-be split across any number of Bacula blocks. Obviously this will not be the
-case for the Volume Label which will always be smaller than the Bacula Block
-size.
-
-To simplify reading tapes, the Start of Session (SOS) and End of Session (EOS)
-records are never split across blocks. If this is about to happen, Bacula will
-write a short block before writing the session record (actually, the SOS
-record should always be the first record in a block, excepting perhaps the
-Volume label).
-
-Due to hardware limitations, the last block written to the tape may not be
-fully written. If your drive permits backspacing over records, Bacula will
-back up over the last record written on the tape, re-read it, and verify that
-it was correctly written.
-
-When a new tape is mounted Bacula will write the full contents of the
-partially written block to the new tape ensuring that there is no loss of
-data. When reading a tape, Bacula will discard any block that is not totally
-written, thus ensuring that there is no duplication of data. In addition,
-since Bacula blocks are sequentially numbered within a Job, it is easy to
-ensure that no block is missing or duplicated.
-
-\section{Serialization}
-\index{Serialization}
-\addcontentsline{toc}{subsection}{Serialization}
-
-All Block Headers, Record Headers, and Label Records are written using
-Bacula's serialization routines. These routines guarantee that the data is
-written to the output volume in a machine independent format.
-
-\section{Block Header}
-\index{Header!Block}
-\index{Block Header}
-\addcontentsline{toc}{subsection}{Block Header}
-
-The format of the Block Header (version 1.27 and later) is:
-
-\footnotesize
-\begin{verbatim}
- uint32_t CheckSum; /* Block check sum */
- uint32_t BlockSize; /* Block byte size including the header */
- uint32_t BlockNumber; /* Block number */
- char ID[4] = "BB02"; /* Identification and block level */
- uint32_t VolSessionId; /* Session Id for Job */
- uint32_t VolSessionTime; /* Session Time for Job */
-\end{verbatim}
-\normalsize
-
-The Block header is of fixed length and fixed format and is followed by Record
-Headers and Record Data. The CheckSum field is a 32 bit checksum of the block
-data and the block header but not including the CheckSum field. The Block
-Header is always immediately followed by a Record Header. If the tape is
-damaged, a Bacula utility will be able to recover as much information as
-possible from the tape by recovering blocks which are valid. The Block header
-is written using the Bacula serialization routines and thus is guaranteed to
-be in machine independent format. See below for version 2 of the block header.
-
-
-\section{Record Header}
-\index{Header!Record}
-\index{Record Header}
-\addcontentsline{toc}{subsection}{Record Header}
-
-Each binary data record is preceded by a Record Header. The Record Header is
-fixed length and fixed format, whereas the binary data record is of variable
-length. The Record Header is written using the Bacula serialization routines
-and thus is guaranteed to be in machine independent format.
-
-The format of the Record Header (version 1.27 or later) is:
-
-\footnotesize
-\begin{verbatim}
- int32_t FileIndex; /* File index supplied by File daemon */
- int32_t Stream; /* Stream number supplied by File daemon */
- uint32_t DataSize; /* size of following data record in bytes */
-\end{verbatim}
-\normalsize
-
-This record is followed by the binary Stream data of DataSize bytes, followed
-by another Record Header record and the binary stream data. For the definitive
-definition of this record, see record.h in the src/stored directory.
-
-Additional notes on the above:
-
-\begin{description}
-
-\item [The {\bf VolSessionId} ]
- \index{VolSessionId}
- is a unique sequential number that is assigned by the Storage Daemon to a
-particular Job. This number is sequential since the start of execution of the
-daemon.
-
-\item [The {\bf VolSessionTime} ]
- \index{VolSessionTime}
- is the time/date that the current execution of the Storage Daemon started. It
-assures that the combination of VolSessionId and VolSessionTime is unique for
-every job written to the tape, even if there was a machine crash between two
-writes.
-
-\item [The {\bf FileIndex} ]
- \index{FileIndex}
- is a sequential file number within a job. The Storage daemon requires this
-index to be greater than zero and sequential. Note, however, that the File
-daemon may send multiple Streams for the same FileIndex. In addition, the
-Storage daemon uses negative FileIndices to hold the Begin Session Label, the
-End Session Label, and the End of Volume Label.
-
-\item [The {\bf Stream} ]
- \index{Stream}
- is defined by the File daemon and is used to identify separate parts of the
-data saved for each file (Unix attributes, Win32 attributes, file data,
-compressed file data, sparse file data, ...). The Storage Daemon has no idea
-of what a Stream is or what it contains except that the Stream is required to
-be a positive integer. Negative Stream numbers are used internally by the
-Storage daemon to indicate that the record is a continuation of the previous
-record (the previous record would not entirely fit in the block).
-
-For Start Session and End Session Labels (where the FileIndex is negative),
-the Storage daemon uses the Stream field to contain the JobId. The current
-stream definitions are:
-
-\footnotesize
-\begin{verbatim}
-#define STREAM_UNIX_ATTRIBUTES 1 /* Generic Unix attributes */
-#define STREAM_FILE_DATA 2 /* Standard uncompressed data */
-#define STREAM_MD5_SIGNATURE 3 /* MD5 signature for the file */
-#define STREAM_GZIP_DATA 4 /* GZip compressed file data */
-/* Extended Unix attributes with Win32 Extended data. Deprecated. */
-#define STREAM_UNIX_ATTRIBUTES_EX 5 /* Extended Unix attr for Win32 EX */
-#define STREAM_SPARSE_DATA 6 /* Sparse data stream */
-#define STREAM_SPARSE_GZIP_DATA 7
-#define STREAM_PROGRAM_NAMES 8 /* program names for program data */
-#define STREAM_PROGRAM_DATA 9 /* Data needing program */
-#define STREAM_SHA1_SIGNATURE 10 /* SHA1 signature for the file */
-#define STREAM_WIN32_DATA 11 /* Win32 BackupRead data */
-#define STREAM_WIN32_GZIP_DATA 12 /* Gzipped Win32 BackupRead data */
-#define STREAM_MACOS_FORK_DATA 13 /* Mac resource fork */
-#define STREAM_HFSPLUS_ATTRIBUTES 14 /* Mac OS extra attributes */
-#define STREAM_UNIX_ATTRIBUTES_ACCESS_ACL 15 /* Standard ACL attributes on UNIX */
-#define STREAM_UNIX_ATTRIBUTES_DEFAULT_ACL 16 /* Default ACL attributes on UNIX */
-\end{verbatim}
-\normalsize
-
-\item [The {\bf DataSize} ]
- \index{DataSize}
- is the size in bytes of the binary data record that follows the Session
-Record header. The Storage Daemon has no idea of the actual contents of the
-binary data record. For standard Unix files, the data record typically
-contains the file attributes or the file data. For a sparse file the first
-64 bits of the file data contains the storage address for the data block.
-\end{description}
-
-The Record Header is never split across two blocks. If there is not enough
-room in a block for the full Record Header, the block is padded to the end
-with zeros and the Record Header begins in the next block. The data record, on
-the other hand, may be split across multiple blocks and even multiple physical
-volumes. When a data record is split, the second (and possibly subsequent)
-piece of the data is preceded by a new Record Header. Thus each piece of data
-is always immediately preceded by a Record Header. When reading a record, if
-Bacula finds only part of the data in the first record, it will automatically
-read the next record and concatenate the data record to form a full data
-record.
-
-\section{Version BB02 Block Header}
-\index{Version BB02 Block Header}
-\index{Header!Version BB02 Block}
-\addcontentsline{toc}{subsection}{Version BB02 Block Header}
-
-Each session or Job has its own private block. As a consequence, the SessionId
-and SessionTime are written once in each Block Header and not in the Record
-Header. So, the second and current version of the Block Header BB02 is:
-
-\footnotesize
-\begin{verbatim}
- uint32_t CheckSum; /* Block check sum */
- uint32_t BlockSize; /* Block byte size including the header */
- uint32_t BlockNumber; /* Block number */
- char ID[4] = "BB02"; /* Identification and block level */
- uint32_t VolSessionId; /* Applies to all records */
- uint32_t VolSessionTime; /* contained in this block */
-\end{verbatim}
-\normalsize
-
-As with the previous version, the BB02 Block header is of fixed length and
-fixed format and is followed by Record Headers and Record Data. The CheckSum
-field is a 32 bit CRC checksum of the block data and the block header but not
-including the CheckSum field. The Block Header is always immediately followed
-by a Record Header. If the tape is damaged, a Bacula utility will be able to
-recover as much information as possible from the tape by recovering blocks
-which are valid. The Block header is written using the Bacula serialization
-routines and thus is guaranteed to be in machine independent format.
-
-\section{Version 2 Record Header}
-\index{Version 2 Record Header}
-\index{Header!Version 2 Record}
-\addcontentsline{toc}{subsection}{Version 2 Record Header}
-
-Version 2 Record Header is written to the medium when using Version BB02 Block
-Headers. The memory representation of the record is identical to the old BB01
-Record Header, but on the storage medium, the first two fields, namely
-VolSessionId and VolSessionTime are not written. The Block Header is filled
-with these values when the first user record (i.e. non-label record) is
-written, so that when the block is written, it will have the current and
-unique VolSessionId and VolSessionTime. On reading each record from the
-Block, the VolSessionId and VolSessionTime are filled into the Record Header
-from the Block Header.
-
-\section{Volume Label Format}
-\index{Volume Label Format}
-\index{Format!Volume Label}
-\addcontentsline{toc}{subsection}{Volume Label Format}
-
-Tape volume labels are created by the Storage daemon in response to a {\bf
-label} command given to the Console program, or alternatively by the {\bf
-btape} program. Each volume is labeled with the following information
-using the Bacula serialization routines, which guarantee machine byte order
-independence.
-
-For Bacula versions 1.27 and later, the Volume Label Format is:
-
-\footnotesize
-\begin{verbatim}
- char Id[32]; /* Bacula 1.0 Immortal\n */
- uint32_t VerNum; /* Label version number */
- /* VerNum 11 and greater Bacula 1.27 and later */
- btime_t label_btime; /* Time/date tape labeled */
- btime_t write_btime; /* Time/date tape first written */
- /* The following are 0 in VerNum 11 and greater */
- float64_t write_date; /* Date this label written */
- float64_t write_time; /* Time this label written */
- char VolName[128]; /* Volume name */
- char PrevVolName[128]; /* Previous Volume Name */
- char PoolName[128]; /* Pool name */
- char PoolType[128]; /* Pool type */
- char MediaType[128]; /* Type of this media */
- char HostName[128]; /* Host name of writing computer */
- char LabelProg[32]; /* Label program name */
- char ProgVersion[32]; /* Program version */
- char ProgDate[32]; /* Program build date/time */
-\end{verbatim}
-\normalsize
-
-Note, the LabelType (Volume Label, Volume PreLabel, Session Start Label, ...)
-is stored in the record FileIndex field of the Record Header and does not
-appear in the data part of the record.
-
-\section{Session Label}
-\index{Label!Session}
-\index{Session Label}
-\addcontentsline{toc}{subsection}{Session Label}
-
-The Session Label is written at the beginning and end of each session as well
-as the last record on the physical medium. It has the following binary format:
-
-
-\footnotesize
-\begin{verbatim}
- char Id[32]; /* Bacula Immortal ... */
- uint32_t VerNum; /* Label version number */
- uint32_t JobId; /* Job id */
- uint32_t VolumeIndex; /* sequence no of vol */
- /* Prior to VerNum 11 */
- float64_t write_date; /* Date this label written */
- /* VerNum 11 and greater */
- btime_t write_btime; /* time/date record written */
- /* The following is zero VerNum 11 and greater */
- float64_t write_time; /* Time this label written */
- char PoolName[128]; /* Pool name */
- char PoolType[128]; /* Pool type */
- char JobName[128]; /* base Job name */
- char ClientName[128];
- /* Added in VerNum 10 */
- char Job[128]; /* Unique Job name */
- char FileSetName[128]; /* FileSet name */
- uint32_t JobType;
- uint32_t JobLevel;
-\end{verbatim}
-\normalsize
-
-In addition, the EOS label contains:
-
-\footnotesize
-\begin{verbatim}
- /* The remainder are part of EOS label only */
- uint32_t JobFiles;
- uint64_t JobBytes;
- uint32_t start_block;
- uint32_t end_block;
- uint32_t start_file;
- uint32_t end_file;
- uint32_t JobErrors;
-\end{verbatim}
-\normalsize
-
-For VerNum greater than 10, the EOS label additionally contains:
-
-\footnotesize
-\begin{verbatim}
- uint32_t JobStatus /* Job termination code */
-\end{verbatim}
-\normalsize
-
-Note, the LabelType (Volume Label, Volume PreLabel, Session Start Label,
-...) is stored in the record FileIndex field and does not appear in the data
-part of the record. Also, the Stream field of the Record Header contains the
-JobId. This permits quick filtering without actually reading all the session
-data in many cases.
-
-\section{Overall Storage Format}
-\index{Format!Overall Storage}
-\index{Overall Storage Format}
-\addcontentsline{toc}{subsection}{Overall Storage Format}
-
-\footnotesize
-\begin{verbatim}
- Current Bacula Tape Format
- 6 June 2001
- Version BB02 added 28 September 2002
- Version BB01 is the old deprecated format.
- A Bacula tape is composed of tape Blocks. Each block
- has a Block header followed by the block data. Block
- Data consists of Records. Records consist of Record
- Headers followed by Record Data.
- :=======================================================:
- | |
- | Block Header (24 bytes) |
- | |
- |-------------------------------------------------------|
- | |
- | Record Header (12 bytes) |
- | |
- |-------------------------------------------------------|
- | |
- | Record Data |
- | |
- |-------------------------------------------------------|
- | |
- | Record Header (12 bytes) |
- | |
- |-------------------------------------------------------|
- | |
- | ... |
- Block Header: the first item in each block. The format is
- shown below.
- Partial Data block: occurs if the data from a previous
- block spills over to this block (the normal case except
- for the first block on a tape). However, this partial
- data block is always preceded by a record header.
- Record Header: identifies the Volume Session, the Stream
- and the following Record Data size. See below for format.
- Record data: arbitrary binary data.
- Block Header Format BB02
- :=======================================================:
- | CheckSum (uint32_t) |
- |-------------------------------------------------------|
- | BlockSize (uint32_t) |
- |-------------------------------------------------------|
- | BlockNumber (uint32_t) |
- |-------------------------------------------------------|
- | "BB02" (char [4]) |
- |-------------------------------------------------------|
- | VolSessionId (uint32_t) |
- |-------------------------------------------------------|
- | VolSessionTime (uint32_t) |
- :=======================================================:
- BB02: Serves to identify the block as a
- Bacula block and also serves as a block format identifier
- should we ever need to change the format.
- BlockSize: is the size in bytes of the block. When reading
- back a block, if the BlockSize does not agree with the
- actual size read, Bacula discards the block.
- CheckSum: a checksum for the Block.
- BlockNumber: is the sequential block number on the tape.
- VolSessionId: a unique sequential number that is assigned
- by the Storage Daemon to a particular Job.
- This number is sequential since the start
- of execution of the daemon.
- VolSessionTime: the time/date that the current execution
- of the Storage Daemon started. It assures
- that the combination of VolSessionId and
- VolSessionTime is unique for all jobs
- written to the tape, even if there was a
- machine crash between two writes.
- Record Header Format BB02
- :=======================================================:
- | FileIndex (int32_t) |
- |-------------------------------------------------------|
- | Stream (int32_t) |
- |-------------------------------------------------------|
- | DataSize (uint32_t) |
- :=======================================================:
- FileIndex: a sequential file number within a job. The
- Storage daemon enforces this index to be
- greater than zero and sequential. Note,
- however, that the File daemon may send
- multiple Streams for the same FileIndex.
- The Storage Daemon uses negative FileIndices
- to identify Session Start and End labels
- as well as the End of Volume labels.
- Stream: defined by the File daemon and is intended to be
- used to identify separate parts of the data
- saved for each file (attributes, file data,
- ...). The Storage Daemon has no idea of
- what a Stream is or what it contains.
- DataSize: the size in bytes of the binary data record
- that follows the Session Record header.
- The Storage Daemon has no idea of the
- actual contents of the binary data record.
- For standard Unix files, the data record
- typically contains the file attributes or
- the file data. For a sparse file
- the first 64 bits of the data contains
- the storage address for the data block.
- Volume Label
- :=======================================================:
- | Id (32 bytes) |
- |-------------------------------------------------------|
- | VerNum (uint32_t) |
- |-------------------------------------------------------|
- | label_date (float64_t) |
- | label_btime (btime_t) VerNum 11 |
- |-------------------------------------------------------|
- | label_time (float64_t) |
- | write_btime (btime_t) VerNum 11 |
- |-------------------------------------------------------|
- | write_date (float64_t) |
- | 0 (float64_t) VerNum 11 |
- |-------------------------------------------------------|
- | write_time (float64_t) |
- | 0 (float64_t) VerNum 11 |
- |-------------------------------------------------------|
- | VolName (128 bytes) |
- |-------------------------------------------------------|
- | PrevVolName (128 bytes) |
- |-------------------------------------------------------|
- | PoolName (128 bytes) |
- |-------------------------------------------------------|
- | PoolType (128 bytes) |
- |-------------------------------------------------------|
- | MediaType (128 bytes) |
- |-------------------------------------------------------|
- | HostName (128 bytes) |
- |-------------------------------------------------------|
- | LabelProg (32 bytes) |
- |-------------------------------------------------------|
- | ProgVersion (32 bytes) |
- |-------------------------------------------------------|
- | ProgDate (32 bytes) |
- |-------------------------------------------------------|
- :=======================================================:
-
- Id: 32 byte Bacula identifier "Bacula 1.0 immortal\n"
- (old version also recognized:)
- Id: 32 byte Bacula identifier "Bacula 0.9 mortal\n"
- LabelType (Saved in the FileIndex of the Header record).
- PRE_LABEL -1 Volume label on unwritten tape
- VOL_LABEL -2 Volume label after tape written
- EOM_LABEL -3 Label at EOM (not currently implemented)
- SOS_LABEL -4 Start of Session label (format given below)
- EOS_LABEL -5 End of Session label (format given below)
- VerNum: 11
- label_date: Julian day tape labeled
- label_time: Julian time tape labeled
- write_date: Julian date tape first used (data written)
- write_time: Julian time tape first used (data written)
- VolName: "Physical" Volume name
- PrevVolName: The VolName of the previous tape (if this tape is
- a continuation of the previous one).
- PoolName: Pool Name
- PoolType: Pool Type
- MediaType: Media Type
- HostName: Name of host that is first writing the tape
- LabelProg: Name of the program that labeled the tape
- ProgVersion: Version of the label program
- ProgDate: Date Label program built
- Session Label
- :=======================================================:
- | Id (32 bytes) |
- |-------------------------------------------------------|
- | VerNum (uint32_t) |
- |-------------------------------------------------------|
- | JobId (uint32_t) |
- |-------------------------------------------------------|
- | write_btime (btime_t) VerNum 11 |
- |-------------------------------------------------------|
- | 0 (float64_t) VerNum 11 |
- |-------------------------------------------------------|
- | PoolName (128 bytes) |
- |-------------------------------------------------------|
- | PoolType (128 bytes) |
- |-------------------------------------------------------|
- | JobName (128 bytes) |
- |-------------------------------------------------------|
- | ClientName (128 bytes) |
- |-------------------------------------------------------|
- | Job (128 bytes) |
- |-------------------------------------------------------|
- | FileSetName (128 bytes) |
- |-------------------------------------------------------|
- | JobType (uint32_t) |
- |-------------------------------------------------------|
- | JobLevel (uint32_t) |
- |-------------------------------------------------------|
- | FileSetMD5 (50 bytes) VerNum 11 |
- |-------------------------------------------------------|
- Additional fields in End Of Session Label
- |-------------------------------------------------------|
- | JobFiles (uint32_t) |
- |-------------------------------------------------------|
- | JobBytes (uint64_t) |
- |-------------------------------------------------------|
- | start_block (uint32_t) |
- |-------------------------------------------------------|
- | end_block (uint32_t) |
- |-------------------------------------------------------|
- | start_file (uint32_t) |
- |-------------------------------------------------------|
- | end_file (uint32_t) |
- |-------------------------------------------------------|
- | JobErrors (uint32_t) |
- |-------------------------------------------------------|
- | JobStatus (uint32_t) VerNum 11 |
- :=======================================================:
- * => fields deprecated
- Id: 32 byte Bacula Identifier "Bacula 1.0 immortal\n"
- LabelType (in FileIndex field of Header):
- EOM_LABEL -3 Label at EOM
- SOS_LABEL -4 Start of Session label
- EOS_LABEL -5 End of Session label
- VerNum: 11
- JobId: JobId
- write_btime: Bacula time/date this tape record written
- write_date: Julian date tape this record written - deprecated
- write_time: Julian time tape this record written - deprecated.
- PoolName: Pool Name
- PoolType: Pool Type
- MediaType: Media Type
- ClientName: Name of File daemon or Client writing this session
- Not used for EOM_LABEL.
-\end{verbatim}
-\normalsize
-
-\section{Unix File Attributes}
-\index{Unix File Attributes}
-\index{Attributes!Unix File}
-\addcontentsline{toc}{subsection}{Unix File Attributes}
-
-The Unix File Attributes packet consists of the following:
-
-\lt{}File-Index\gt{} \lt{}Type\gt{}
-\lt{}Filename\gt{}@\lt{}File-Attributes\gt{}@\lt{}Link\gt{}
-@\lt{}Extended-Attributes@\gt{} where
-
-\begin{description}
-
-\item [@]
- represents a byte containing a binary zero.
-
-\item [FileIndex]
- \index{FileIndex}
- is the sequential file index starting from one assigned by the File daemon.
-
-\item [Type]
- \index{Type}
- is one of the following:
-
-\footnotesize
-\begin{verbatim}
-#define FT_LNKSAVED 1 /* hard link to file already saved */
-#define FT_REGE 2 /* Regular file but empty */
-#define FT_REG 3 /* Regular file */
-#define FT_LNK 4 /* Soft Link */
-#define FT_DIR 5 /* Directory */
-#define FT_SPEC 6 /* Special file -- chr, blk, fifo, sock */
-#define FT_NOACCESS 7 /* Not able to access */
-#define FT_NOFOLLOW 8 /* Could not follow link */
-#define FT_NOSTAT 9 /* Could not stat file */
-#define FT_NOCHG 10 /* Incremental option, file not changed */
-#define FT_DIRNOCHG 11 /* Incremental option, directory not changed */
-#define FT_ISARCH 12 /* Trying to save archive file */
-#define FT_NORECURSE 13 /* No recursion into directory */
-#define FT_NOFSCHG 14 /* Different file system, prohibited */
-#define FT_NOOPEN 15 /* Could not open directory */
-#define FT_RAW 16 /* Raw block device */
-#define FT_FIFO 17 /* Raw fifo device */
-\end{verbatim}
-\normalsize
-
-\item [Filename]
- \index{Filename}
- is the fully qualified filename.
-
-\item [File-Attributes]
- \index{File-Attributes}
- consists of the 13 fields of the stat() buffer in ASCII base64 format
-separated by spaces. These fields and their meanings are shown below. This
-stat() packet is in Unix format, and MUST be provided (constructed) for ALL
-systems.
-
-\item [Link]
- \index{Link}
- when the FT code is FT\_LNK or FT\_LNKSAVED, the item in question is a Unix
-link, and this field contains the fully qualified link name. When the FT code
-is not FT\_LNK or FT\_LNKSAVED, this field is null.
-
-\item [Extended-Attributes]
- \index{Extended-Attributes}
- The exact format of this field is operating system dependent. It contains
-additional or extended attributes of a system dependent nature. Currently,
-this field is used only on WIN32 systems where it contains an ASCII base64
-representation of the WIN32\_FILE\_ATTRIBUTE\_DATA structure as defined by
-Windows. The fields in the base64 representation of this structure are like
-the File-Attributes separated by spaces.
-\end{description}
-
-The File-attributes consist of the following:
-
-\addcontentsline{lot}{table}{File Attributes}
-\begin{longtable}{|p{0.6in}|p{0.7in}|p{1in}|p{1in}|p{1.4in}|}
- \hline
-\multicolumn{1}{|c|}{\bf Field No. } & \multicolumn{1}{c|}{\bf Stat Name }
-& \multicolumn{1}{c|}{\bf Unix } & \multicolumn{1}{c|}{\bf Win98/NT } &
-\multicolumn{1}{c|}{\bf MacOS } \\
- \hline
-\multicolumn{1}{|c|}{1 } & {st\_dev } & {Device number of filesystem } &
-{Drive number } & {vRefNum } \\
- \hline
-\multicolumn{1}{|c|}{2 } & {st\_ino } & {Inode number } & {Always 0 } &
-{fileID/dirID } \\
- \hline
-\multicolumn{1}{|c|}{3 } & {st\_mode } & {File mode } & {File mode } &
-{777 dirs/apps; 666 docs; 444 locked docs } \\
- \hline
-\multicolumn{1}{|c|}{4 } & {st\_nlink } & {Number of links to the file } &
-{Number of link (only on NTFS) } & {Always 1 } \\
- \hline
-\multicolumn{1}{|c|}{5 } & {st\_uid } & {Owner ID } & {Always 0 } &
-{Always 0 } \\
- \hline
-\multicolumn{1}{|c|}{6 } & {st\_gid } & {Group ID } & {Always 0 } &
-{Always 0 } \\
- \hline
-\multicolumn{1}{|c|}{7 } & {st\_rdev } & {Device ID for special files } &
-{Drive No. } & {Always 0 } \\
- \hline
-\multicolumn{1}{|c|}{8 } & {st\_size } & {File size in bytes } & {File
-size in bytes } & {Data fork file size in bytes } \\
- \hline
-\multicolumn{1}{|c|}{9 } & {st\_blksize } & {Preferred block size } &
-{Always 0 } & {Preferred block size } \\
- \hline
-\multicolumn{1}{|c|}{10 } & {st\_blocks } & {Number of blocks allocated }
-& {Always 0 } & {Number of blocks allocated } \\
- \hline
-\multicolumn{1}{|c|}{11 } & {st\_atime } & {Last access time since epoch }
-& {Last access time since epoch } & {Last access time -66 years } \\
- \hline
-\multicolumn{1}{|c|}{12 } & {st\_mtime } & {Last modify time since epoch }
-& {Last modify time since epoch } & {Last modify time -66 years } \\
- \hline
-\multicolumn{1}{|c|}{13 } & {st\_ctime } & {Inode change time since epoch
-} & {File create time since epoch } & {File create time -66 years}
-\\ \hline
-
-\end{longtable}
-
-\section{Old Deprecated Tape Format}
-\index{Old Deprecated Tape Format}
-\index{Format!Old Deprecated Tape}
-\addcontentsline{toc}{subsection}{Old Deprecated Tape Format}
-
-The format of the Block Header (version 1.26 and earlier) is:
-
-\footnotesize
-\begin{verbatim}
- uint32_t CheckSum; /* Block check sum */
- uint32_t BlockSize; /* Block byte size including the header */
- uint32_t BlockNumber; /* Block number */
- char ID[4] = "BB01"; /* Identification and block level */
-\end{verbatim}
-\normalsize
-
-The format of the Record Header (version 1.26 or earlier) is:
-
-\footnotesize
-\begin{verbatim}
- uint32_t VolSessionId; /* Unique ID for this session */
- uint32_t VolSessionTime; /* Start time/date of session */
- int32_t FileIndex; /* File index supplied by File daemon */
- int32_t Stream; /* Stream number supplied by File daemon */
- uint32_t DataSize; /* size of following data record in bytes */
-\end{verbatim}
-\normalsize
-
-\footnotesize
-\begin{verbatim}
- Current Bacula Tape Format
- 6 June 2001
- Version BB01 is the old deprecated format.
- A Bacula tape is composed of tape Blocks. Each block
- has a Block header followed by the block data. Block
- Data consists of Records. Records consist of Record
- Headers followed by Record Data.
- :=======================================================:
- | |
- | Block Header |
- | (16 bytes version BB01) |
- |-------------------------------------------------------|
- | |
- | Record Header |
- | (20 bytes version BB01) |
- |-------------------------------------------------------|
- | |
- | Record Data |
- | |
- |-------------------------------------------------------|
- | |
- | Record Header |
- | (20 bytes version BB01) |
- |-------------------------------------------------------|
- | |
- | ... |
- Block Header: the first item in each block. The format is
- shown below.
- Partial Data block: occurs if the data from a previous
- block spills over to this block (the normal case except
- for the first block on a tape). However, this partial
- data block is always preceded by a record header.
- Record Header: identifies the Volume Session, the Stream
- and the following Record Data size. See below for format.
- Record data: arbitrary binary data.
- Block Header Format BB01 (deprecated)
- :=======================================================:
- | CheckSum (uint32_t) |
- |-------------------------------------------------------|
- | BlockSize (uint32_t) |
- |-------------------------------------------------------|
- | BlockNumber (uint32_t) |
- |-------------------------------------------------------|
- | "BB01" (char [4]) |
- :=======================================================:
- BB01: Serves to identify the block as a
- Bacula block and also serves as a block format identifier
- should we ever need to change the format.
- BlockSize: is the size in bytes of the block. When reading
- back a block, if the BlockSize does not agree with the
- actual size read, Bacula discards the block.
- CheckSum: a checksum for the Block.
- BlockNumber: is the sequential block number on the tape.
- VolSessionId: a unique sequential number that is assigned
- by the Storage Daemon to a particular Job.
- This number is sequential since the start
- of execution of the daemon.
- VolSessionTime: the time/date that the current execution
- of the Storage Daemon started. It assures
- that the combination of VolSessionId and
- VolSessionTime is unique for all jobs
- written to the tape, even if there was a
- machine crash between two writes.
- Record Header Format BB01 (deprecated)
- :=======================================================:
- | VolSessionId (uint32_t) |
- |-------------------------------------------------------|
- | VolSessionTime (uint32_t) |
- |-------------------------------------------------------|
- | FileIndex (int32_t) |
- |-------------------------------------------------------|
- | Stream (int32_t) |
- |-------------------------------------------------------|
- | DataSize (uint32_t) |
- :=======================================================:
- VolSessionId: a unique sequential number that is assigned
- by the Storage Daemon to a particular Job.
- This number is sequential since the start
- of execution of the daemon.
- VolSessionTime: the time/date that the current execution
- of the Storage Daemon started. It assures
- that the combination of VolSessionId and
- VolSessionTime is unique for all jobs
- written to the tape, even if there was a
- machine crash between two writes.
- FileIndex: a sequential file number within a job. The
- Storage daemon enforces this index to be
- greater than zero and sequential. Note,
- however, that the File daemon may send
- multiple Streams for the same FileIndex.
- The Storage Daemon uses negative FileIndices
- to identify Session Start and End labels
- as well as the End of Volume labels.
- Stream: defined by the File daemon and is intended to be
- used to identify separate parts of the data
- saved for each file (attributes, file data,
- ...). The Storage Daemon has no idea of
- what a Stream is or what it contains.
- DataSize: the size in bytes of the binary data record
- that follows the Session Record header.
- The Storage Daemon has no idea of the
- actual contents of the binary data record.
- For standard Unix files, the data record
- typically contains the file attributes or
- the file data. For a sparse file
- the first 64 bits of the data contains
- the storage address for the data block.
- Volume Label
- :=======================================================:
- | Id (32 bytes) |
- |-------------------------------------------------------|
- | VerNum (uint32_t) |
- |-------------------------------------------------------|
- | label_date (float64_t) |
- |-------------------------------------------------------|
- | label_time (float64_t) |
- |-------------------------------------------------------|
- | write_date (float64_t) |
- |-------------------------------------------------------|
- | write_time (float64_t) |
- |-------------------------------------------------------|
- | VolName (128 bytes) |
- |-------------------------------------------------------|
- | PrevVolName (128 bytes) |
- |-------------------------------------------------------|
- | PoolName (128 bytes) |
- |-------------------------------------------------------|
- | PoolType (128 bytes) |
- |-------------------------------------------------------|
- | MediaType (128 bytes) |
- |-------------------------------------------------------|
- | HostName (128 bytes) |
- |-------------------------------------------------------|
- | LabelProg (32 bytes) |
- |-------------------------------------------------------|
- | ProgVersion (32 bytes) |
- |-------------------------------------------------------|
- | ProgDate (32 bytes) |
- |-------------------------------------------------------|
- :=======================================================:
-
- Id: 32 byte Bacula identifier "Bacula 1.0 immortal\n"
- (old version also recognized:)
- Id: 32 byte Bacula identifier "Bacula 0.9 mortal\n"
- LabelType (Saved in the FileIndex of the Header record).
- PRE_LABEL -1 Volume label on unwritten tape
- VOL_LABEL -2 Volume label after tape written
- EOM_LABEL -3 Label at EOM (not currently implemented)
- SOS_LABEL -4 Start of Session label (format given below)
- EOS_LABEL -5 End of Session label (format given below)
- label_date: Julian day tape labeled
- label_time: Julian time tape labeled
- write_date: Julian date tape first used (data written)
- write_time: Julian time tape first used (data written)
- VolName: "Physical" Volume name
- PrevVolName: The VolName of the previous tape (if this tape is
- a continuation of the previous one).
- PoolName: Pool Name
- PoolType: Pool Type
- MediaType: Media Type
- HostName: Name of host that is first writing the tape
- LabelProg: Name of the program that labeled the tape
- ProgVersion: Version of the label program
- ProgDate: Date Label program built
- Session Label
- :=======================================================:
- | Id (32 bytes) |
- |-------------------------------------------------------|
- | VerNum (uint32_t) |
- |-------------------------------------------------------|
- | JobId (uint32_t) |
- |-------------------------------------------------------|
- | *write_date (float64_t) VerNum 10 |
- |-------------------------------------------------------|
- | *write_time (float64_t) VerNum 10 |
- |-------------------------------------------------------|
- | PoolName (128 bytes) |
- |-------------------------------------------------------|
- | PoolType (128 bytes) |
- |-------------------------------------------------------|
- | JobName (128 bytes) |
- |-------------------------------------------------------|
- | ClientName (128 bytes) |
- |-------------------------------------------------------|
- | Job (128 bytes) |
- |-------------------------------------------------------|
- | FileSetName (128 bytes) |
- |-------------------------------------------------------|
- | JobType (uint32_t) |
- |-------------------------------------------------------|
- | JobLevel (uint32_t) |
- |-------------------------------------------------------|
- | FileSetMD5 (50 bytes) VerNum 11 |
- |-------------------------------------------------------|
- Additional fields in End Of Session Label
- |-------------------------------------------------------|
- | JobFiles (uint32_t) |
- |-------------------------------------------------------|
- | JobBytes (uint32_t) |
- |-------------------------------------------------------|
- | start_block (uint32_t) |
- |-------------------------------------------------------|
- | end_block (uint32_t) |
- |-------------------------------------------------------|
- | start_file (uint32_t) |
- |-------------------------------------------------------|
- | end_file (uint32_t) |
- |-------------------------------------------------------|
- | JobErrors (uint32_t) |
- |-------------------------------------------------------|
- | JobStatus (uint32_t) VerNum 11 |
- :=======================================================:
- * => fields deprecated
- Id: 32 byte Bacula Identifier "Bacula 1.0 immortal\n"
- LabelType (in FileIndex field of Header):
- EOM_LABEL -3 Label at EOM
- SOS_LABEL -4 Start of Session label
- EOS_LABEL -5 End of Session label
- VerNum: 11
- JobId: JobId
- write_btime: Bacula time/date this tape record written
- write_date: Julian date tape this record written - deprecated
- write_time: Julian time tape this record written - deprecated.
- PoolName: Pool Name
- PoolType: Pool Type
- MediaType: Media Type
- ClientName: Name of File daemon or Client writing this session
- Not used for EOM_LABEL.
-\end{verbatim}
-\normalsize
--- /dev/null
+%%
+%%
+
+\chapter{Bacula Memory Management}
+\label{_ChapterStart7}
+\index{Management!Bacula Memory}
+\index{Bacula Memory Management}
+\addcontentsline{toc}{section}{Bacula Memory Management}
+
+\section{General}
+\index{General}
+\addcontentsline{toc}{subsection}{General}
+
+This document describes the memory management routines that are used in Bacula
+and is meant to be a technical discussion for developers rather than part of
+the user manual.
+
+Since Bacula may be called upon to handle filenames of more or less arbitrary
+length, special care must be taken in the code to
+ensure that memory buffers are sufficiently large. There are four
+possibilities for memory usage within {\bf Bacula}. Each will be described in
+turn. They are:
+
+\begin{itemize}
+\item Statically allocated memory.
+\item Dynamically allocated memory using malloc() and free().
+\item Non-pooled memory.
+\item Pooled memory.
+ \end{itemize}
+
+\subsection{Statically Allocated Memory}
+\index{Statically Allocated Memory}
+\index{Memory!Statically Allocated}
+\addcontentsline{toc}{subsubsection}{Statically Allocated Memory}
+
+Statically allocated memory is of the form:
+
+\footnotesize
+\begin{verbatim}
+char buffer[MAXSTRING];
+\end{verbatim}
+\normalsize
+
+The use of this kind of memory is discouraged except when you are 100\% sure
+that the strings to be used will be of a fixed length. One example of where
+this is appropriate is for {\bf Bacula} resource names, which are currently
+limited to 127 characters (MAX\_NAME\_LENGTH). Although this maximum size may
+change, particularly to accommodate Unicode, it will remain a relatively small
+value.
+
+\subsection{Dynamically Allocated Memory}
+\index{Dynamically Allocated Memory}
+\index{Memory!Dynamically Allocated}
+\addcontentsline{toc}{subsubsection}{Dynamically Allocated Memory}
+
+Dynamically allocated memory is obtained using the standard malloc() routines.
+As in:
+
+\footnotesize
+\begin{verbatim}
+char *buf;
+buf = malloc(256);
+\end{verbatim}
+\normalsize
+
+This kind of memory can be released with:
+
+\footnotesize
+\begin{verbatim}
+free(buf);
+\end{verbatim}
+\normalsize
+
+It is recommended to use this kind of memory only when you are sure that you
+know the memory size needed and the memory will be used for short periods of
+time -- that is, where it would not be appropriate to use statically allocated
+memory. An example might be to obtain a large memory buffer for reading and
+writing files. When {\bf SmartAlloc} is enabled, the memory obtained by
+malloc() will automatically be checked for buffer overwrite (overflow) during
+the free() call, and all malloc'ed memory that is not released prior to
+termination of the program will be reported as Orphaned memory.
+
+\subsection{Pooled and Non-pooled Memory}
+\index{Memory!Pooled and Non-pooled}
+\index{Pooled and Non-pooled Memory}
+\addcontentsline{toc}{subsubsection}{Pooled and Non-pooled Memory}
+
+In order to facilitate the handling of arbitrary length filenames and to
+efficiently handle a high volume of dynamic memory usage, we have implemented
+routines between the C code and the malloc routines. The first is called
+``Pooled'' memory, and is memory that, once allocated and then released, is
+not returned to the system memory pool, but rather retained in a Bacula memory
+pool. The next request to acquire pooled memory will return any free memory
+block. In addition, each memory block has its current size associated with the
+block allowing for easy checking if the buffer is of sufficient size. This
+kind of memory would normally be used in high volume situations (lots of
+malloc()s and free()s) where the buffer length may have to frequently change
+to adapt to varying filename lengths.
+
+The non-pooled memory is handled by routines similar to those used for pooled
+memory, allowing for easy size checking. However, non-pooled memory is
+returned to the system rather than being saved in the Bacula pool. This kind
+of memory would normally be used in low volume situations (few malloc()s and
+free()s), but where the size of the buffer might have to be adjusted
+frequently.
+
+\paragraph*{Types of Memory Pool:}
+
+Currently there are four memory pool types:
+
+\begin{itemize}
+\item PM\_NOPOOL -- non-pooled memory.
+\item PM\_FNAME -- a filename pool.
+\item PM\_MESSAGE -- a message buffer pool.
+\item PM\_EMSG -- error message buffer pool.
+ \end{itemize}
+
+\paragraph*{Getting Memory:}
+
+To get memory, one uses:
+
+\footnotesize
+\begin{verbatim}
+void *get_pool_memory(pool);
+\end{verbatim}
+\normalsize
+
+where {\bf pool} is one of the above mentioned pool names. The size of the
+memory returned will be determined by the system to be most appropriate for
+the application.
+
+If you wish non-pooled memory, you may alternatively call:
+
+\footnotesize
+\begin{verbatim}
+void *get_memory(size_t size);
+\end{verbatim}
+\normalsize
+
+The buffer length will be set to the size specified, and it will be assigned
+to the PM\_NOPOOL pool (no pooling).
+
+\paragraph*{Releasing Memory:}
+
+To free memory acquired by either of the above two calls, use:
+
+\footnotesize
+\begin{verbatim}
+void free_pool_memory(void *buffer);
+\end{verbatim}
+\normalsize
+
+where buffer is the memory buffer returned when the memory was acquired. If
+the memory was originally allocated as type PM\_NOPOOL, it will be released to
+the system, otherwise, it will be placed on the appropriate Bacula memory pool
+free chain to be used in a subsequent call for memory from that pool.
+
+\paragraph*{Determining the Memory Size:}
+
+To determine the memory buffer size, use:
+
+\footnotesize
+\begin{verbatim}
+size_t sizeof_pool_memory(void *buffer);
+\end{verbatim}
+\normalsize
+
+\paragraph*{Resizing Pool Memory:}
+
+To resize pool memory, use:
+
+\footnotesize
+\begin{verbatim}
+void *realloc_pool_memory(void *buffer, size_t new_size);
+\end{verbatim}
+\normalsize
+
+The buffer will be reallocated to the requested {\bf new\_size}; the contents
+of the original buffer will be preserved, but the address of the buffer may
+change.
+
+\paragraph*{Automatic Size Adjustment:}
+
+To have the system check and if necessary adjust the size of your pooled
+memory buffer, use:
+
+\footnotesize
+\begin{verbatim}
+void *check_pool_memory_size(void *buffer, size_t new_size);
+\end{verbatim}
+\normalsize
+
+where {\bf new\_size} is the buffer length needed. Note, if the buffer is
+already equal to or larger than {\bf new\_size}, no buffer size change will
+occur. However, if a buffer size change is needed, the original contents of
+the buffer will be preserved, but the buffer address may change. Many of the
+low level Bacula subroutines expect to be passed a pool memory buffer and use
+this call to ensure the buffer they use is sufficiently large.
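+
+As an illustration, here is a minimal usage sketch combining the calls
+described above (the {\bf path} variable and the choice of the PM\_FNAME
+pool are assumptions for the example):
+
+\footnotesize
+\begin{verbatim}
+char *fname = (char *)get_pool_memory(PM_FNAME); /* buffer from filename pool */
+/* make sure the buffer can hold path plus the terminating zero byte */
+fname = (char *)check_pool_memory_size(fname, strlen(path) + 1);
+strcpy(fname, path);
+/* ... use fname ... */
+free_pool_memory(fname);     /* return the buffer to the PM_FNAME pool */
+\end{verbatim}
+\normalsize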
+
+\paragraph*{Releasing All Pooled Memory:}
+
+In order to avoid orphaned buffer error messages when terminating the program,
+use:
+
+\footnotesize
+\begin{verbatim}
+void close_memory_pool();
+\end{verbatim}
+\normalsize
+
+to free all unused memory retained in the Bacula memory pool. Note, any memory
+not returned to the pool via free\_pool\_memory() will not be released by this
+call.
+
+\paragraph*{Pooled Memory Statistics:}
+
+For debugging purposes and performance tuning, the following call will print
+the current memory pool statistics:
+
+\footnotesize
+\begin{verbatim}
+void print_memory_pool_stats();
+\end{verbatim}
+\normalsize
+
+An example output is:
+
+\footnotesize
+\begin{verbatim}
+   Pool   Maxsize   Maxused   Inuse
+      0       256         0       0
+      1       256         1       0
+      2       256         1       0
+\end{verbatim}
+\normalsize
--- /dev/null
+%%
+%%
+
+\chapter{TCP/IP Network Protocol}
+\label{_ChapterStart5}
+\index{TCP/IP Network Protocol}
+\index{Protocol!TCP/IP Network}
+\addcontentsline{toc}{section}{TCP/IP Network Protocol}
+
+\section{General}
+\index{General}
+\addcontentsline{toc}{subsection}{General}
+
+This document describes the TCP/IP protocol used by Bacula to communicate
+between the various daemons and services. The definitive definition of the
+protocol can be found in src/lib/bsock.h, src/lib/bnet.c and
+src/lib/bnet\_server.c.
+
+Bacula's network protocol is basically a ``packet oriented'' protocol built on
+standard TCP/IP streams. At the lowest level all packet transfers are done
+with read() and write() requests on system sockets. Pipes are not used as they
+are considered unreliable for large serial data transfers between various
+hosts.
+
+Using the routines described below (bnet\_open, bnet\_send, bnet\_recv, and
+bnet\_close) guarantees that the number of bytes you write into the socket
+will be received as a single record on the other end regardless of how many
+low level write() and read() calls are needed. All data transferred are
+considered to be binary data.
+
+\section{bnet and Threads}
+\index{Threads!bnet and}
+\index{Bnet and Threads}
+\addcontentsline{toc}{subsection}{bnet and Threads}
+
+These bnet routines work fine in a threaded environment. However, they assume
+that there is only one reader or writer on the socket at any time. It is
+highly recommended that only a single thread access any BSOCK packet. The
+exception to this rule is when the socket is first opened and it is waiting
+for a job to start. The wait in the Storage daemon is done in one thread and
+then passed to another thread for subsequent handling.
+
+If you envision having two threads using the same BSOCK, think twice; if you
+proceed anyway, you must implement some locking mechanism. However, it
+probably would not be appropriate to put locks inside the bnet subroutines
+for efficiency reasons.
+
+\section{bnet\_open}
+\index{Bnet\_open}
+\addcontentsline{toc}{subsection}{bnet\_open}
+
+To establish a connection to a server, use the subroutine:
+
+\footnotesize
+\begin{verbatim}
+BSOCK *bnet_open(void *jcr, char *host, char *service, int port, int *fatal)
+\end{verbatim}
+\normalsize
+
+bnet\_open(), if successful, returns the Bacula sock descriptor pointer to be
+used in subsequent bnet\_send() and bnet\_recv() requests. If not successful,
+bnet\_open() returns a NULL. If fatal is set on return, it means that a fatal
+error occurred and that you should not repeatedly call bnet\_open(). Any error
+message will generally be sent to the JCR.
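+
+As a hedged usage sketch (the host name, service name, and port are
+placeholder values; jcr is assumed to point to the current Job Control
+Record):
+
+\footnotesize
+\begin{verbatim}
+int fatal = 0;
+BSOCK *sock = bnet_open(jcr, "backup.example.com", "bacula-sd", 9103, &fatal);
+if (sock == NULL) {
+   if (fatal) {
+      /* fatal error -- do not retry bnet_open() */
+   }
+   /* failure: an error message has already been sent to the JCR */
+}
+\end{verbatim}
+\normalsize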
+
+\section{bnet\_send}
+\index{Bnet\_send}
+\addcontentsline{toc}{subsection}{bnet\_send}
+
+To send a packet, one uses the subroutine:
+
+\footnotesize
+\begin{verbatim}
+int bnet_send(BSOCK *sock)
+\end{verbatim}
+\normalsize
+
+This routine is equivalent to a write() except that it handles the low level
+details. The data to be sent is expected to be in sock-\gt{}msg and be
+sock-\gt{}msglen bytes. To send a packet, bnet\_send() first writes four bytes
+in network byte order that indicate the size of the following data packet. It
+returns:
+
+\footnotesize
+\begin{verbatim}
+ Returns 0 on failure
+ Returns 1 on success
+\end{verbatim}
+\normalsize
+
+In the case of a failure, an error message will be sent to the JCR contained
+within the bsock packet.
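+
+A minimal sending sketch using only the fields described above (real code
+would size sock-\gt{}msg with the pool memory routines; the literal string
+here is just an example):
+
+\footnotesize
+\begin{verbatim}
+strcpy(sock->msg, "status");         /* place the data in the buffer */
+sock->msglen = strlen(sock->msg);    /* number of bytes to transmit */
+if (!bnet_send(sock)) {
+   /* failure: an error message has been sent to the JCR */
+}
+\end{verbatim}
+\normalsize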
+
+\section{bnet\_fsend}
+\index{Bnet\_fsend}
+\addcontentsline{toc}{subsection}{bnet\_fsend}
+
+This form uses:
+
+\footnotesize
+\begin{verbatim}
+int bnet_fsend(BSOCK *sock, char *format, ...)
+\end{verbatim}
+\normalsize
+
+and it allows you to send a formatted message somewhat like fprintf(). The
+return status is the same as for bnet\_send().
+
+\section{Additional Error information}
+\index{Information!Additional Error}
+\index{Additional Error information}
+\addcontentsline{toc}{subsection}{Additional Error information}
+
+For additional error information, you can call {\bf is\_bnet\_error(BSOCK
+*bsock)} which will return 0 if there is no error or non-zero if there is an
+error on the last transmission. The {\bf is\_bnet\_stop(BSOCK *bsock)}
+function will return 0 if there are no errors and you can continue sending. It
+will return non-zero if there are errors or the line is closed (no more
+transmissions should be sent).
+
+\section{bnet\_recv}
+\index{Bnet\_recv}
+\addcontentsline{toc}{subsection}{bnet\_recv}
+
+To read a packet, one uses the subroutine:
+
+\footnotesize
+\begin{verbatim}
+int bnet_recv(BSOCK *sock)
+\end{verbatim}
+\normalsize
+
+This routine is similar to a read() except that it handles the low level
+details. bnet\_recv() first reads the packet length that follows as four bytes
+in network byte order. The data is read into sock-\gt{}msg and is
+sock-\gt{}msglen bytes. If the sock-\gt{}msg buffer is not large enough,
+bnet\_recv() realloc()s the buffer. It will return an error (-2) if the record
+sent exceeds the maximum number of bytes it can handle. It returns:
+
+\footnotesize
+\begin{verbatim}
+ * Returns number of bytes read
+ * Returns 0 on end of file
+ * Returns -1 on hard end of file (i.e. network connection close)
+ * Returns -2 on error
+\end{verbatim}
+\normalsize
+
+It should be noted that bnet\_recv() is a blocking read.
+
+\section{bnet\_sig}
+\index{Bnet\_sig}
+\addcontentsline{toc}{subsection}{bnet\_sig}
+
+To send a ``signal'' from one daemon to another, one uses the subroutine:
+
+\footnotesize
+\begin{verbatim}
+int bnet_sig(BSOCK *sock, SIGNAL)
+\end{verbatim}
+\normalsize
+
+where SIGNAL is one of the following:
+
+\begin{enumerate}
+\item BNET\_EOF - deprecated, use BNET\_EOD instead
+\item BNET\_EOD - End of data stream, new data may follow
+\item BNET\_EOD\_POLL - End of data and poll all in one
+\item BNET\_STATUS - Request full status
+\item BNET\_TERMINATE - Conversation terminated, doing close()
+\item BNET\_POLL - Poll request, I'm hanging on a read
+\item BNET\_HEARTBEAT - Heartbeat Response requested
+\item BNET\_HB\_RESPONSE - Only response permitted to HB
+\item BNET\_PROMPT - Prompt for UA
+ \end{enumerate}
+
+\section{bnet\_strerror}
+\index{Bnet\_strerror}
+\addcontentsline{toc}{subsection}{bnet\_strerror}
+
+Returns a formatted string corresponding to the last error that occurred.
+
+\section{bnet\_close}
+\index{Bnet\_close}
+\addcontentsline{toc}{subsection}{bnet\_close}
+
+The connection with the server remains open until closed by the subroutine:
+
+\footnotesize
+\begin{verbatim}
+void bnet_close(BSOCK *sock)
+\end{verbatim}
+\normalsize
+
+\section{Becoming a Server}
+\index{Server!Becoming a}
+\index{Becoming a Server}
+\addcontentsline{toc}{subsection}{Becoming a Server}
+
+The bnet\_open() and bnet\_close() routines described above are used on the
+client side to establish a connection and terminate a connection with the
+server. To become a server (i.e. wait for a connection from a client), use the
+routine {\bf bnet\_thread\_server}. The calling sequence is a bit complicated,
+please refer to the code in bnet\_server.c and the code at the beginning of
+each daemon as examples of how to call it.
+
+\section{Higher Level Conventions}
+\index{Conventions!Higher Level}
+\index{Higher Level Conventions}
+\addcontentsline{toc}{subsection}{Higher Level Conventions}
+
+Within Bacula, we have established the convention that any time a single
+record is passed, it is sent with bnet\_send() and read with bnet\_recv().
+Thus the normal exchange between the server (S) and the client (C) are:
+
+\footnotesize
+\begin{verbatim}
+S: wait for connection            C: attempt connection
+S: accept connection              C: bnet_send() send request
+S: bnet_recv() wait for request
+S: act on request
+S: bnet_send() send ack           C: bnet_recv() wait for ack
+\end{verbatim}
+\normalsize
+
+Thus a single command is sent, acted upon by the server, and then
+acknowledged.
+
+In certain cases, such as the transfer of the data for a file, all the
+information or data cannot be sent in a single packet. In this case, the
+convention is that the client will send a command to the server, who knows
+that more than one packet will be returned. In this case, the server will
+enter a loop:
+
+\footnotesize
+\begin{verbatim}
+while ((n=bnet_recv(bsock)) > 0) {
+ act on request
+}
+if (n < 0)
+ error
+\end{verbatim}
+\normalsize
+
+The client will perform the following:
+
+\footnotesize
+\begin{verbatim}
+bnet_send(bsock);
+bnet_send(bsock);
+...
+bnet_sig(bsock, BNET_EOD);
+\end{verbatim}
+\normalsize
+
+Thus the client will send multiple packets and signal to the server when all
+the packets have been sent by sending a zero length record.
--- /dev/null
+%%
+%%
+
+\chapter{Platform Support}
+\label{_PlatformChapter}
+\index{Support!Platform}
+\index{Platform Support}
+\addcontentsline{toc}{section}{Platform Support}
+
+\section{General}
+\index{General }
+\addcontentsline{toc}{subsection}{General}
+
+This chapter describes the requirements for having a
+supported platform (Operating System). In general, Bacula is
+quite portable. It supports 32 and 64 bit architectures as well
+as big-endian and little-endian machines. For full
+support, the platform (Operating System) must implement POSIX Unix
+system calls. However, for File daemon support only, a small
+compatibility library can be written to support almost any
+architecture.
+
+Currently Linux, FreeBSD, and Solaris are fully supported
+platforms, which means that the code has been tested on those
+machines and passes a full set of regression tests.
+
+In addition, the Windows File daemon is supported on most versions
+of Windows, and finally, there are a number of other platforms
+where the File daemon (client) is known to run: NetBSD, OpenBSD,
+Mac OSX, SGI, ...
+
+\section{Requirements to become a Supported Platform}
+\index{Requirements!Platform}
+\index{Platform Requirements}
+\addcontentsline{toc}{subsection}{Platform Requirements}
+
+As mentioned above, in order to become a fully supported platform, it
+must support POSIX Unix system calls. In addition, the following
+requirements must be met:
+
+\begin{itemize}
+\item The principal developer (currently Kern) must have
+ non-root ssh access to a test machine running the platform.
+\item The ideal requirements and minimum requirements
+ for this machine are given below.
+\item There must be a defined platform champion, normally a
+ system administrator for the machine, who is available. This
+ person need not be a developer/programmer but must be familiar
+ with system administration of the platform.
+\item There must be at least one person designated who will
+ run regression tests prior to each release. Releases occur
+ approximately once every 6 months, but can be more frequent.
+ It takes at most a day's effort to set up the regression scripts
+ in the beginning, and after that, they can either be run daily
+ or on demand before a release. Running the regression scripts
+ involves only one or two command line commands and is fully
+ automated.
+\item Ideally there are one or more persons who will package
+ each Bacula release.
+\item Ideally there are one or more developers who can respond to
+ and fix platform specific bugs.
+\end{itemize}
+
+Ideal requirements for a test machine:
+\begin{itemize}
+\item The principal developer will have non-root ssh access to
+ the test machine at all times.
+\item The principal developer will have a root password.
+\item The test machine will provide approximately 200 MB of
+ disk space for continual use.
+\item The test machine will have approximately 500 MB of free
+ disk space for temporary use.
+\item The test machine will run the most common version of the OS.
+\item The test machine will have an autochanger of DDS-4 technology
+ or later having two or more tapes.
+\item The test machine will have MySQL and/or PostgreSQL database
+ access for account "bacula" available.
+\item The test machine will have sftp access.
+\item The test machine will provide an smtp server.
+\end{itemize}
+
+Minimum requirements for a test machine:
+\begin{itemize}
+\item The principal developer will have non-root ssh access to
+ the test machine when requested approximately once a month.
+\item The principal developer will not have root access.
+\item The test machine will provide approximately 80 MB of
+ disk space for continual use.
+\item The test machine will have approximately 300 MB of free
+ disk space for temporary use.
+\item The test machine will run the OS.
+\item The test machine will have a tape drive of DDS-4 technology
+ or later that can be scheduled for access.
+\item The test machine will not have MySQL and/or PostgreSQL database
+ access.
+\item The test machine will have no sftp access.
+\item The test machine will provide no email access.
+\end{itemize}
+
+Bare bones test machine requirements:
+\begin{itemize}
+\item The test machine is available only to a designated
+ test person (your own machine).
+\item The designated test person runs the regression
+ tests on demand.
+\item The test machine has a tape drive available.
+\end{itemize}
--- /dev/null
+%%
+%%
+
+\chapter{Bacula FD Plugin API}
+To write a Bacula plugin, you create a dynamic shared object program (or dll on
+Win32) with a particular name and two exported entry points, place it in the
+{\bf Plugins Directory}, which is defined in the {\bf bacula-fd.conf} file in
+the {\bf Client} resource, and when the FD starts, it will load all the plugins
+that end with {\bf -fd.so} (or {\bf -fd.dll} on Win32) found in that directory.
+
+\section{Normal vs Command Plugins}
+In general, there are two ways that plugins are called. The first way is that
+when a particular event is detected in Bacula, it transfers control to each
+plugin that is loaded in turn, informing the plugin of the event. This is very
+similar to how a {\bf RunScript} works, and the events are very similar. Once
+the plugin gets control, it can interact with Bacula by getting and setting
+Bacula variables. In this way, it behaves much like a RunScript. Currently
+very few Bacula variables are defined, but more will be implemented as the
+need arises, and the mechanism is very extensible.
+
+We plan to have plugins register to receive events that they normally would
+not receive, such as an event for each file examined for backup or restore.
+This feature is not yet implemented.
+
+The second type of plugin, which is more useful and fully implemented in the
+current version, is what we call a command plugin. As with all plugins, it gets
+notified of important events as noted above (details described below), but in
+addition, this kind of plugin can accept a command line, which is a:
+
+\begin{verbatim}
+ Plugin = <command-string>
+\end{verbatim}
+
+directive that is placed in the Include section of a FileSet and is very
+similar to the "File = " directive. When this Plugin directive is encountered
+by Bacula during backup, it passes the "command" part of the Plugin directive
+only to the plugin that is explicitly named in the first field of that command
+string. This allows that plugin to backup any file or files on the system that
+it wants. It can even create "virtual files" in the catalog that contain data
+to be restored but do not necessarily correspond to actual files on the
+filesystem.
+
+The important features of the command plugin entry points are:
+\begin{itemize}
+ \item It is triggered by a "Plugin =" directive in the FileSet
+ \item Only a single plugin is called that is named on the "Plugin =" directive.
+ \item The full command string after the "Plugin =" is passed to the plugin
+ so that it can be told what to backup/restore.
+\end{itemize}
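+
+As an illustration, a FileSet Include section using such a directive might
+look as follows (the command string follows the convention of the
+bpipe-fd.c example plugin; the paths and programs are placeholders):
+
+\begin{verbatim}
+FileSet {
+  Name = "PluginSet"
+  Include {
+    Options {
+      signature = MD5
+    }
+    Plugin = "bpipe:/virtual/mydb.sql:/usr/bin/dump_db:/usr/bin/restore_db"
+  }
+}
+\end{verbatim}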
+
+
+\section{Loading Plugins}
+Once the File daemon loads the plugins, it asks the OS for the
+two entry points (loadPlugin and unloadPlugin), then calls the
+{\bf loadPlugin} entry point (see below).
+
+Bacula passes information to the plugin through this call and it gets
+back information that it needs to use the plugin. Later, Bacula
+ will call particular functions that are defined by the
+{\bf loadPlugin} interface.
+
+When Bacula is finished with the plugin
+(when Bacula is going to exit), it will call the {\bf unloadPlugin}
+entry point.
+
+The two entry points are:
+
+\begin{verbatim}
+bRC loadPlugin(bInfo *lbinfo, bFuncs *lbfuncs, pInfo **pinfo, pFuncs **pfuncs)
+
+and
+
+bRC unloadPlugin()
+\end{verbatim}
+
+Both of these external entry points to the shared object are defined as C
+entry points to avoid name mangling complications with C++. However, the
+shared object can actually be written in any language (preferably C or C++)
+providing that it follows C language calling conventions.
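+
+A minimal sketch of how a C++ plugin typically declares these two exported
+entry points (this mirrors the pattern used in the bpipe-fd.c example):
+
+\begin{verbatim}
+#ifdef __cplusplus
+extern "C" {
+#endif
+
+bRC loadPlugin(bInfo *lbinfo, bFuncs *lbfuncs,
+               pInfo **pinfo, pFuncs **pfuncs);
+bRC unloadPlugin(void);
+
+#ifdef __cplusplus
+}
+#endif
+\end{verbatim}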
+
+The definitions for {\bf bRC} and the arguments are in {\bf
+src/filed/fd-plugins.h}, so this header file needs to be included in
+your plugin. It, along with {\bf src/lib/plugins.h}, defines basically the
+whole plugin interface. This header file in turn includes the following
+files:
+
+\begin{verbatim}
+#include <sys/types.h>
+#include "config.h"
+#include "bc_types.h"
+#include "lib/plugins.h"
+#include <sys/stat.h>
+\end{verbatim}
+
+Aside from the {\bf bc\_types.h} and {\bf config.h} headers, the plugin
+definition uses the minimum code from Bacula. The bc\_types.h file is required
+to ensure that the data type definitions in arguments correspond to the Bacula
+core code.
+
+The return codes are defined as:
+\begin{verbatim}
+typedef enum {
+ bRC_OK = 0, /* OK */
+ bRC_Stop = 1, /* Stop calling other plugins */
+ bRC_Error = 2, /* Some kind of error */
+ bRC_More = 3, /* More files to backup */
+} bRC;
+\end{verbatim}
+
+
+At a future point in time, we hope to make the Bacula libbac.a into a
+shared object so that the plugin can use much more of Bacula's
+infrastructure, but for this first cut, we have tried to minimize the
+dependence on Bacula.
+
+\section{loadPlugin}
+As previously mentioned, the {\bf loadPlugin} entry point in the plugin
+is called immediately after Bacula loads the plugin when the File daemon
+itself is first starting. This entry point is only called once during the
+execution of the File daemon. In calling the
+plugin, the first two arguments are information from Bacula that
+is passed to the plugin, and the last two arguments are information
+about the plugin that the plugin must return to Bacula. The call is:
+
+\begin{verbatim}
+bRC loadPlugin(bInfo *lbinfo, bFuncs *lbfuncs, pInfo **pinfo, pFuncs **pfuncs)
+\end{verbatim}
+
+and the arguments are:
+
+\begin{description}
+\item [lbinfo]
+This is information about Bacula in general. Currently, the only value
+defined in the bInfo structure is the version, which is the Bacula plugin
+interface version, currently defined as 1. The {\bf size} is set to the
+byte size of the structure. The exact definition of the bInfo structure
+as of this writing is:
+
+\begin{verbatim}
+typedef struct s_baculaInfo {
+ uint32_t size;
+ uint32_t version;
+} bInfo;
+\end{verbatim}
+
+\item [lbfuncs]
+The bFuncs structure defines the callback entry points within Bacula
+that the plugin can use to register events, get Bacula values, set
+Bacula values, and send messages to the Job output or debug output.
+
+The exact definition as of this writing is:
+\begin{verbatim}
+typedef struct s_baculaFuncs {
+ uint32_t size;
+ uint32_t version;
+ bRC (*registerBaculaEvents)(bpContext *ctx, ...);
+ bRC (*getBaculaValue)(bpContext *ctx, bVariable var, void *value);
+ bRC (*setBaculaValue)(bpContext *ctx, bVariable var, void *value);
+ bRC (*JobMessage)(bpContext *ctx, const char *file, int line,
+ int type, utime_t mtime, const char *fmt, ...);
+ bRC (*DebugMessage)(bpContext *ctx, const char *file, int line,
+ int level, const char *fmt, ...);
+ void *(*baculaMalloc)(bpContext *ctx, const char *file, int line,
+ size_t size);
+ void (*baculaFree)(bpContext *ctx, const char *file, int line, void *mem);
+} bFuncs;
+\end{verbatim}
+
+We will discuss these entry points and how to use them a bit later when
+describing the plugin code.
+
+
+\item [pInfo]
+When the loadPlugin entry point is called, the plugin must initialize
+an information structure about the plugin and return a pointer to
+this structure to Bacula.
+
+The exact definition as of this writing is:
+
+\begin{verbatim}
+typedef struct s_pluginInfo {
+ uint32_t size;
+ uint32_t version;
+ const char *plugin_magic;
+ const char *plugin_license;
+ const char *plugin_author;
+ const char *plugin_date;
+ const char *plugin_version;
+ const char *plugin_description;
+} pInfo;
+\end{verbatim}
+
+Where:
+ \begin{description}
+ \item [version] is the current Bacula defined plugin interface version, currently
+ set to 1. If the interface version differs from the current version of
+ Bacula, the plugin will not be run (not yet implemented).
+ \item [plugin\_magic] is a pointer to the text string "*FDPluginData*", a
+ sort of sanity check. If this value is not specified, the plugin
+ will not be run (not yet implemented).
+ \item [plugin\_license] is a pointer to a text string that describes the
+ plugin license. Bacula will only accept compatible licenses (not yet
+ implemented).
+ \item [plugin\_author] is a pointer to the text name of the author of the program.
+ This string can be anything but is generally the author's name.
+ \item [plugin\_date] is a pointer to the text string containing the date of
+ the plugin.
+ This string can be anything but is generally some human readable form of
+ the date.
+ \item [plugin\_version] is a pointer to a text string containing the version of
+ the plugin. The contents are determined by the plugin writer.
+ \item [plugin\_description] is a pointer to a string describing what the
+ plugin does. The contents are determined by the plugin writer.
+ \end{description}
+
+The pInfo structure must be defined in static memory because Bacula does not
+copy it and may refer to the values at any time while the plugin is
+loaded. All values must be supplied or the plugin will not run (not yet
+implemented). All text strings must be either ASCII or UTF-8 strings that
+are terminated with a zero byte.
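+
+A hedged sketch of such a static definition (the license, author, date,
+version, and description strings are placeholders):
+
+\begin{verbatim}
+static pInfo pluginInfo = {
+   sizeof(pluginInfo),       /* size of this structure */
+   1,                        /* plugin interface version */
+   "*FDPluginData*",         /* plugin magic */
+   "Your License",           /* plugin license (placeholder) */
+   "Your Name",              /* author */
+   "January 2010",           /* date */
+   "1.0",                    /* plugin version */
+   "Example FD plugin"       /* description */
+};
+\end{verbatim}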
+
+\item [pFuncs]
+When the loadPlugin entry point is called, the plugin must initialize
+an entry point structure about the plugin and return a pointer to
+this structure to Bacula. This structure contains a pointer to each
+of the entry points that the plugin must provide for Bacula. When
+Bacula is actually running the plugin, it will call the defined
+entry points at particular times. All entry points must be defined.
+
+The pFuncs structure must be defined in static memory because Bacula does not
+copy it and may refer to the values at any time while the plugin is
+loaded.
+
+The exact definition as of this writing is:
+
+\begin{verbatim}
+typedef struct s_pluginFuncs {
+ uint32_t size;
+ uint32_t version;
+ bRC (*newPlugin)(bpContext *ctx);
+ bRC (*freePlugin)(bpContext *ctx);
+ bRC (*getPluginValue)(bpContext *ctx, pVariable var, void *value);
+ bRC (*setPluginValue)(bpContext *ctx, pVariable var, void *value);
+ bRC (*handlePluginEvent)(bpContext *ctx, bEvent *event, void *value);
+ bRC (*startBackupFile)(bpContext *ctx, struct save_pkt *sp);
+ bRC (*endBackupFile)(bpContext *ctx);
+ bRC (*startRestoreFile)(bpContext *ctx, const char *cmd);
+ bRC (*endRestoreFile)(bpContext *ctx);
+ bRC (*pluginIO)(bpContext *ctx, struct io_pkt *io);
+ bRC (*createFile)(bpContext *ctx, struct restore_pkt *rp);
+ bRC (*setFileAttributes)(bpContext *ctx, struct restore_pkt *rp);
+ bRC (*checkFile)(bpContext *ctx, char *fname);
+} pFuncs;
+\end{verbatim}
+
+The details of the entry points will be presented in
+separate sections below.
+
+Where:
+ \begin{description}
+ \item [size] is the byte size of the structure.
+ \item [version] is the plugin interface version currently set to 3.
+ \end{description}
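+
+A matching static definition might look like the following sketch (the entry
+point names are the plugin's own static functions; the version value 3 is
+taken from the description above):
+
+\begin{verbatim}
+static pFuncs pluginFuncs = {
+   sizeof(pluginFuncs),      /* size of this structure */
+   3,                        /* plugin interface version */
+   newPlugin,                /* new plugin instance */
+   freePlugin,               /* free plugin instance */
+   getPluginValue,
+   setPluginValue,
+   handlePluginEvent,
+   startBackupFile,
+   endBackupFile,
+   startRestoreFile,
+   endRestoreFile,
+   pluginIO,
+   createFile,
+   setFileAttributes,
+   checkFile
+};
+\end{verbatim}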
+
+Sample code for loadPlugin:
+\begin{verbatim}
+ bfuncs = lbfuncs; /* set Bacula funct pointers */
+ binfo = lbinfo;
+ *pinfo = &pluginInfo; /* return pointer to our info */
+ *pfuncs = &pluginFuncs; /* return pointer to our functions */
+
+ return bRC_OK;
+\end{verbatim}
+
+where pluginInfo and pluginFuncs are statically defined structures.
+See bpipe-fd.c for details.
+
+
+
+\end{description}
+
+\section{Plugin Entry Points}
+This section will describe each of the entry points (subroutines) within
+the plugin that the plugin must provide for Bacula, when they are called
+and their arguments. As noted above, pointers to these subroutines are
+passed back to Bacula in the pFuncs structure when Bacula calls the
+loadPlugin() externally defined entry point.
+
+\subsection{newPlugin(bpContext *ctx)}
+ This is the entry point that Bacula will call
+ when a new "instance" of the plugin is created. This typically
+ happens at the beginning of a Job. If 10 Jobs are running
+ simultaneously, there will be at least 10 instances of the
+ plugin.
+
+ The bpContext structure will be passed to the plugin, and
+ during this call, if the plugin needs to have its private
+ working storage that is associated with the particular
+ instance of the plugin, it should create it from the heap
+ (malloc the memory) and store a pointer to
+ its private working storage in the {\bf pContext} variable.
+ Note: since Bacula is a multi-threaded program, you must not
+ keep any variable data in your plugin unless it is truly meant
+ to apply globally to the whole plugin. In addition, you must
+ be aware that except for the first and last calls to the plugin
+ (loadPlugin and unloadPlugin) all the other calls will be
+ made by threads that correspond to a Bacula job. The
+ bpContext that will be passed for each thread will remain the
+ same throughout the Job, thus you can keep your private Job specific
+ data in it ({\bf pContext}).
+
+\begin{verbatim}
+typedef struct s_bpContext {
+ void *pContext; /* Plugin private context */
+ void *bContext; /* Bacula private context */
+} bpContext;
+
+\end{verbatim}
+
+ This context pointer will be passed as the first argument to all
+ the entry points that Bacula calls within the plugin. Needless
+ to say, the plugin should not change the bContext variable, which
+ is Bacula's private context pointer for this instance (Job) of this
+ plugin.
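+
+A minimal newPlugin sketch ({\bf plugin\_ctx} is a hypothetical private
+structure; bpipe-fd.c follows the same pattern):
+
+\begin{verbatim}
+static bRC newPlugin(bpContext *ctx)
+{
+   struct plugin_ctx *p_ctx =
+      (struct plugin_ctx *)malloc(sizeof(struct plugin_ctx));
+   if (!p_ctx) {
+      return bRC_Error;
+   }
+   memset(p_ctx, 0, sizeof(struct plugin_ctx));
+   ctx->pContext = (void *)p_ctx;  /* store our private working storage */
+   return bRC_OK;
+}
+\end{verbatim}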
+
+\subsection{freePlugin(bpContext *ctx)}
+This entry point is called when this instance of the plugin is no longer
+needed (the Job is ending), and the plugin should release all memory it may
+have allocated for this particular instance (Job), i.e. the pContext.
+This is not the final termination
+of the plugin signaled by a call to {\bf unloadPlugin}.
+Any other instances (Job) will
+continue to run, and the entry point {\bf newPlugin} may be called
+again if other jobs start.
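+
+A corresponding freePlugin sketch (again using the hypothetical
+{\bf plugin\_ctx} structure from the newPlugin example):
+
+\begin{verbatim}
+static bRC freePlugin(bpContext *ctx)
+{
+   struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;
+   free(p_ctx);              /* release this instance's private storage */
+   ctx->pContext = NULL;
+   return bRC_OK;
+}
+\end{verbatim}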
+
+\subsection{getPluginValue(bpContext *ctx, pVariable var, void *value)}
+Bacula will call this entry point to get
+a value from the plugin. This entry point is currently not called.
+
+\subsection{setPluginValue(bpContext *ctx, pVariable var, void *value)}
+Bacula will call this entry point to set
+a value in the plugin. This entry point is currently not called.
+
+\subsection{handlePluginEvent(bpContext *ctx, bEvent *event, void *value)}
+This entry point is called when Bacula
+encounters certain events (discussed below). This is, in fact, the
+main way that most plugins get control when a Job runs and how
+they know what is happening in the job. It can be likened to the
+{\bf RunScript} feature that calls external programs and scripts,
+and is very similar to the Bacula Python interface.
+When the plugin is called, Bacula passes it the pointer to an event
+structure (bEvent), which currently has one item, the eventType:
+
+\begin{verbatim}
+typedef struct s_bEvent {
+ uint32_t eventType;
+} bEvent;
+\end{verbatim}
+
+ which defines what event has been triggered, and for each event,
+ Bacula will pass a pointer to a value associated with that event.
+ If no value is associated with a particular event, Bacula will
+ pass a NULL pointer, so the plugin must be careful to always check
+ the value pointer prior to dereferencing it.
+
+ The current list of events is:
+
+\begin{verbatim}
+typedef enum {
+ bEventJobStart = 1,
+ bEventJobEnd = 2,
+ bEventStartBackupJob = 3,
+ bEventEndBackupJob = 4,
+ bEventStartRestoreJob = 5,
+ bEventEndRestoreJob = 6,
+ bEventStartVerifyJob = 7,
+ bEventEndVerifyJob = 8,
+ bEventBackupCommand = 9,
+ bEventRestoreCommand = 10,
+ bEventLevel = 11,
+ bEventSince = 12,
+} bEventType;
+
+\end{verbatim}
+
+Most of the above are self-explanatory.
+
+\begin{description}
+ \item [bEventJobStart] is called whenever a Job starts. The value
+ passed is a pointer to a string that contains "Jobid=nnn
+ Job=job-name", where nnn will be replaced by the JobId and job-name
+ will be replaced by the Job name. The variable is temporary so if you
+ need the values, you must copy them.
+
+ \item [bEventJobEnd] is called whenever a Job ends. No value is passed.
+
+ \item [bEventStartBackupJob] is called when a Backup Job begins. No value
+ is passed.
+
+ \item [bEventEndBackupJob] is called when a Backup Job ends. No value is
+ passed.
+
+ \item [bEventStartRestoreJob] is called when a Restore Job starts. No value
+ is passed.
+
+ \item [bEventEndRestoreJob] is called when a Restore Job ends. No value is
+ passed.
+
+ \item [bEventStartVerifyJob] is called when a Verify Job starts. No value
+ is passed.
+
+ \item [bEventEndVerifyJob] is called when a Verify Job ends. No value
+ is passed.
+
+ \item [bEventBackupCommand] is called prior to the bEventStartBackupJob and
+ the plugin is passed the command string (everything after the equal sign
+ in "Plugin =") as the value.
+
+ Note, if you intend to back up a file, this is an important first point to
+ write code that copies the command string passed into your pContext area
+ so that you will know that a backup is being performed and you will know
+ the full contents of the "Plugin =" command (i.e. what to back up and
+ what virtual filename the user wants to call it).
+
+ \item [bEventRestoreCommand] is called prior to the bEventStartRestoreJob and
+ the plugin is passed the command string (everything after the equal sign
+ in "Plugin =") as the value.
+
+ See the notes above concerning backup and the command string. This is the
+ point at which Bacula passes you the original command string that was
+ specified during the backup, so you will want to save it in your pContext
+ area for later use when Bacula calls the plugin again.
+
+ \item [bEventLevel] is called when the level is set for a new Job. The value
+ is a 32 bit integer stored in the void*, which represents the Job Level code.
+
+ \item [bEventSince] is called when the since time is set for a new Job. The
+ value is a time\_t time at which the last job was run.
+\end{description}
+
+During each of the above calls, the plugin receives either no specific value or
+only one value, which in some cases may not be sufficient. However, knowing
+the context of the event, the plugin can call back to the Bacula entry points
+it was passed during the {\bf loadPlugin} call and get at a number of Bacula
+variables (at present only a few Bacula variables are implemented, but the
+list can easily be extended as needs require).
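+
+As an illustration, a minimal {\bf handlePluginEvent} might be structured
+as the following sketch. The {\bf plugin\_ctx} structure shown here is
+hypothetical (each plugin defines its own private context); see
+{\bf bpipe-fd.c} for a complete, working version.
+
+\begin{verbatim}
+#include <string.h>              /* for strdup() */
+#include <stdint.h>              /* for intptr_t */
+
+struct plugin_ctx {              /* hypothetical private context */
+   char *cmd;                    /* saved "Plugin =" command string */
+   int level;                    /* job level from bEventLevel */
+};
+
+static bRC handlePluginEvent(bpContext *ctx, bEvent *event, void *value)
+{
+   struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;
+
+   switch (event->eventType) {
+   case bEventJobStart:
+      /* value is a temporary "Jobid=nnn Job=job-name" string */
+      break;
+   case bEventBackupCommand:
+   case bEventRestoreCommand:
+      if (!value) {              /* always check before dereferencing */
+         return bRC_Error;
+      }
+      p_ctx->cmd = strdup((char *)value);  /* keep a private copy */
+      break;
+   case bEventLevel:
+      /* a 32 bit integer stored in the void* */
+      p_ctx->level = (int)(intptr_t)value;
+      break;
+   default:
+      break;
+   }
+   return bRC_OK;
+}
+\end{verbatim}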
+
+\subsection{startBackupFile(bpContext *ctx, struct save\_pkt *sp)}
+This entry point is called only if your plugin is a command plugin, and
+it is called when Bacula encounters the "Plugin = " directive in
+the Include section of the FileSet.
+Called when beginning the backup of a file. Here Bacula provides you
+with a pointer to the {\bf save\_pkt} structure and you must fill in
+this packet with the "attribute" data of the file.
+
+\begin{verbatim}
+struct save_pkt {
+ int32_t pkt_size; /* size of this packet */
+ char *fname; /* Full path and filename */
+ char *link; /* Link name if any */
+ struct stat statp; /* System stat() packet for file */
+ int32_t type; /* FT_xx for this file */
+ uint32_t flags; /* Bacula internal flags */
+ bool portable; /* set if data format is portable */
+ char *cmd; /* command */
+ int32_t pkt_end; /* end packet sentinel */
+};
+\end{verbatim}
+
+The second argument is a pointer to the {\bf save\_pkt} structure for the file
+to be backed up. The plugin is responsible for filling in all the fields
+of the {\bf save\_pkt}. If you are backing up
+a real file, then generally, the statp structure can be filled in by doing
+a {\bf stat} system call on the file.
+
+If you are backing up a database or
+something more complex, you might want to create a virtual file,
+that is, a file that does not actually exist on the filesystem but
+represents, say, an object that you are backing up. In that case, you need
+to ensure that the {\bf fname} string that you pass back is unique so that it
+does not conflict with a real file on the system, and you need to
+artificially create values in the statp packet.
+
+Example programs such as {\bf bpipe-fd.c} show how to set these fields. You
+must take care not to store pointers to the stack in the pointer fields such
+as fname and link, because when you return from your function, your stack
+entries will be destroyed. The solution in that case is to malloc() the
+memory and return a pointer to it. In order not to have memory leaks, you
+should store a pointer to all memory allocated in your pContext structure so
+that in subsequent calls or at termination, you can release it back to the
+system.
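+
+For example, a plugin might keep the virtual filename in its private
+context like this (a sketch; {\bf virtual\_name} stands for whatever
+name your plugin has computed, and {\bf plugin\_ctx} is your own
+context structure):
+
+\begin{verbatim}
+   struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;
+   /* heap copy survives after this function returns */
+   p_ctx->fname = (char *)malloc(strlen(virtual_name) + 1);
+   strcpy(p_ctx->fname, virtual_name);
+   sp->fname = p_ctx->fname;     /* safe: not a stack pointer */
+   /* free p_ctx->fname in freePlugin() to avoid a leak */
+\end{verbatim}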
+
+Once the backup has begun, Bacula will call your plugin at the {\bf pluginIO}
+entry point to "read" the data to be backed up. Please see the {\bf bpipe-fd.c}
+plugin for how to do I/O.
+
+Example of filling in the save\_pkt as used in bpipe-fd.c:
+
+\begin{verbatim}
+ struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;
+ time_t now = time(NULL);
+ sp->fname = p_ctx->fname;
+ sp->statp.st_mode = 0700 | S_IFREG;
+ sp->statp.st_ctime = now;
+ sp->statp.st_mtime = now;
+ sp->statp.st_atime = now;
+ sp->statp.st_size = -1;
+ sp->statp.st_blksize = 4096;
+ sp->statp.st_blocks = 1;
+ p_ctx->backup = true;
+ return bRC_OK;
+\end{verbatim}
+
+Note: the filename to be backed up has already been derived from the
+command string previously sent to the plugin; it is stored in the plugin
+context (p\_ctx->fname) as a malloc()ed string. This example
+creates a regular file (S\_IFREG), with various statp fields filled in.
+
+In general, the sequence of calls issued from Bacula to the plugin
+to do a backup while processing the "Plugin = " directive is:
+
+\begin{enumerate}
+ \item generate a bEventBackupCommand event to the specified plugin
+ and pass it the command string.
+ \item make a startBackupFile call to the plugin, which
+ fills in the data needed in save\_pkt to save as the file
+ attributes and to put on the Volume and in the catalog.
+ \item call Bacula's internal save\_file() subroutine to save the specified
+ file. The plugin will then be called at pluginIO() to "open"
+ the file, and then to read the file data.
+ Note, if you are dealing with a virtual file, the "open" operation
+ is something the plugin does internally and it doesn't necessarily
+ mean opening a file on the filesystem. For example in the case of
+ the bpipe-fd.c program, it initiates a pipe to the requested program.
+ Finally when the plugin signals to Bacula that all the data was read,
+ Bacula will call the plugin with the "close" pluginIO() function.
+\end{enumerate}
+
+
+\subsection{endBackupFile(bpContext *ctx)}
+Called at the end of backing up a file for a command plugin. If the plugin's
+work is done, it should return bRC\_OK. If the plugin wishes to create another
+file and back it up, then it must return bRC\_More (not yet implemented). This
+is probably a good time to release any malloc()ed memory you used to pass back
+filenames.
+
+\subsection{startRestoreFile(bpContext *ctx, const char *cmd)}
+Called when the first record is read from the Volume that was
+previously written by the command plugin.
+
+\subsection{createFile(bpContext *ctx, struct restore\_pkt *rp)}
+Called for a command plugin to create a file during a Restore job before
+restoring the data.
+This entry point is called before any I/O is done on the file. After
+this call, Bacula will call pluginIO() to open the file for write.
+
+The data in the
+restore\_pkt is passed to the plugin and is based on the data that was
+originally given by the plugin during the backup and the current user
+restore settings (e.g. where, RegexWhere, replace). This allows the
+plugin to first create a file (if necessary) so that the data can
+be transmitted to it. The next call to the plugin will be a
+pluginIO command with a request to open the file write-only.
+
+This call must return one of the following values:
+
+\begin{verbatim}
+ enum {
+ CF_SKIP = 1, /* skip file (not newer or something) */
+ CF_ERROR, /* error creating file */
+ CF_EXTRACT, /* file created, data to extract */
+ CF_CREATED /* file created, no data to extract */
+};
+\end{verbatim}
+
+in the restore\_pkt value {\bf create\_status}. For a normal file,
+unless there is an error, you must return {\bf CF\_EXTRACT}.
+
+\begin{verbatim}
+
+struct restore_pkt {
+ int32_t pkt_size; /* size of this packet */
+ int32_t stream; /* attribute stream id */
+ int32_t data_stream; /* id of data stream to follow */
+ int32_t type; /* file type FT */
+ int32_t file_index; /* file index */
+ int32_t LinkFI; /* file index to data if hard link */
+ uid_t uid; /* userid */
+ struct stat statp; /* decoded stat packet */
+ const char *attrEx; /* extended attributes if any */
+ const char *ofname; /* output filename */
+ const char *olname; /* output link name */
+ const char *where; /* where */
+ const char *RegexWhere; /* regex where */
+ int replace; /* replace flag */
+ int create_status; /* status from createFile() */
+ int32_t pkt_end; /* end packet sentinel */
+
+};
+\end{verbatim}
+
+Typical code to create a regular file would be the following:
+
+\begin{verbatim}
+ struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;
+ time_t now = time(NULL);
+ sp->fname = p_ctx->fname; /* set the full path/filename I want to create */
+ sp->type = FT_REG;
+ sp->statp.st_mode = 0700 | S_IFREG;
+ sp->statp.st_ctime = now;
+ sp->statp.st_mtime = now;
+ sp->statp.st_atime = now;
+ sp->statp.st_size = -1;
+ sp->statp.st_blksize = 4096;
+ sp->statp.st_blocks = 1;
+ return bRC_OK;
+\end{verbatim}
+
+This will create a virtual file. If you are creating a file that actually
+exists, you will most likely want to fill the statp packet using the
+stat() system call.
+
+Creating a directory is similar, but requires a few extra steps:
+
+\begin{verbatim}
+ struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;
+ time_t now = time(NULL);
+ sp->fname = p_ctx->fname; /* set the full path I want to create */
+ sp->link = p_ctx->link; /* p_ctx->fname with a trailing forward slash appended */
+ sp->type = FT_DIREND;
+ sp->statp.st_mode = 0700 | S_IFDIR;
+ sp->statp.st_ctime = now;
+ sp->statp.st_mtime = now;
+ sp->statp.st_atime = now;
+ sp->statp.st_size = -1;
+ sp->statp.st_blksize = 4096;
+ sp->statp.st_blocks = 1;
+ return bRC_OK;
+\end{verbatim}
+
+The link field must be set to the full canonical path name, which always
+ends with a forward slash. If you do not terminate it with a forward slash,
+you will surely have problems later.
+
+As with the example that creates a file, if you are backing up a real
+directory, you will want to do a stat() on the directory.
+
+Note, if you want the directory permissions and times to be correctly
+restored, you must create the directory {\bf after} all the files in that
+directory have been sent to Bacula. That allows the restore process to
+restore all the files in a directory using default directory options, then
+at the end, restore the directory permissions. If you do it the other way
+around, each time you restore a file, the OS will modify the time values
+for the directory entry.
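+
+Whatever the file type, keep in mind that createFile() reports its result
+through the {\bf create\_status} field of the restore\_pkt rather than
+through the return code alone. A minimal sketch for a plugin that always
+has data to extract might be:
+
+\begin{verbatim}
+static bRC createFile(bpContext *ctx, struct restore_pkt *rp)
+{
+   /* rp->ofname is the output filename Bacula computed from the
+    * where/RegexWhere settings; if your plugin needs a real file
+    * on disk, this is the place to prepare for creating it. */
+   rp->create_status = CF_EXTRACT;   /* data will follow via pluginIO */
+   return bRC_OK;
+}
+\end{verbatim}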
+
+\subsection{setFileAttributes(bpContext *ctx, struct restore\_pkt *rp)}
+This call is not yet implemented. It is called for a command plugin.
+
+See the definition of {\bf restore\_pkt} in the above section.
+
+\subsection{endRestoreFile(bpContext *ctx)}
+Called when a command plugin is done restoring a file.
+
+\subsection{pluginIO(bpContext *ctx, struct io\_pkt *io)}
+Called to do the input (backup) or output (restore) of data from or to a file
+for a command plugin. These routines simulate the Unix read(), write(), open(),
+close(), and lseek() I/O calls, and the arguments are passed in the packet and
+the return values are also placed in the packet. In addition, for Win32
+systems the plugin must return two additional values (described below).
+
+\begin{verbatim}
+ enum {
+ IO_OPEN = 1,
+ IO_READ = 2,
+ IO_WRITE = 3,
+ IO_CLOSE = 4,
+ IO_SEEK = 5
+};
+
+struct io_pkt {
+ int32_t pkt_size; /* Size of this packet */
+ int32_t func; /* Function code */
+ int32_t count; /* read/write count */
+ mode_t mode; /* permissions for created files */
+ int32_t flags; /* Open flags */
+ char *buf; /* read/write buffer */
+ const char *fname; /* open filename */
+ int32_t status; /* return status */
+ int32_t io_errno; /* errno code */
+ int32_t lerror; /* Win32 error code */
+ int32_t whence; /* lseek argument */
+ boffset_t offset; /* lseek argument */
+ bool win32; /* Win32 GetLastError returned */
+ int32_t pkt_end; /* end packet sentinel */
+};
+\end{verbatim}
+
+The particular Unix function being simulated is indicated by the {\bf func},
+which will have one of the IO\_OPEN, IO\_READ, ... codes listed above. The
+status code that would be returned from a Unix call is returned in {\bf status}
+for IO\_OPEN, IO\_CLOSE, IO\_READ, and IO\_WRITE. The return value for IO\_SEEK
+is returned in {\bf offset} which in general is a 64 bit value.
+
+When there is an error on Unix systems, you must always set io\_errno, and
+on a Win32 system, you must always set win32 and store the value returned
+by the OS call GetLastError() in lerror.
+
+For all except IO\_SEEK, {\bf status} is the return result. In general it is
+a positive integer unless there is an error in which case it is -1.
+
+The following describes each call and what you get and what you
+should return:
+
+\begin{description}
+ \item [IO\_OPEN]
+   You will be passed fname, mode, and flags.
+   You must set on return: status, and if there is a Unix error,
+   io\_errno must be set to the errno value, and if there is a
+   Win32 error, win32 and lerror must be set.
+
+ \item [IO\_READ]
+ You will be passed: count, and buf (buffer of size count).
+ You must set on return: status to the number of bytes
+ read into the buffer (buf) or -1 on an error,
+ and if there is a Unix error
+ io\_errno must be set to the errno value, and if there is a
+ Win32 error, win32 and lerror must be set.
+
+ \item [IO\_WRITE]
+ You will be passed: count, and buf (buffer of size count).
+ You must set on return: status to the number of bytes
+ written from the buffer (buf) or -1 on an error,
+ and if there is a Unix error
+ io\_errno must be set to the errno value, and if there is a
+ Win32 error, win32 and lerror must be set.
+
+ \item [IO\_CLOSE]
+ Nothing will be passed to you. On return you must set
+ status to 0 on success and -1 on failure. If there is a Unix error
+ io\_errno must be set to the errno value, and if there is a
+ Win32 error, win32 and lerror must be set.
+
+ \item [IO\_SEEK]
+   You will be passed: offset, and whence. offset is a 64 bit value
+   and is the position to seek to relative to whence. whence is one
+   of SEEK\_SET, SEEK\_CUR, or SEEK\_END, indicating whether to seek
+   to an absolute position, relative to the current position, or
+   relative to the end of the file.
+   You must pass back in offset the absolute location to which you
+   seeked. If there is an error, offset should be set to -1.
+   If there is a Unix error,
+   io\_errno must be set to the errno value, and if there is a
+   Win32 error, win32 and lerror must be set.
+
+ Note: Bacula will call IO\_SEEK only when writing a sparse file.
+
+\end{description}
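+
+A skeleton pluginIO() for a plugin that operates on a real file might look
+like the following sketch. It assumes the file descriptor is kept in a
+hypothetical {\bf fd} field of your plugin context and omits the Win32
+win32/lerror handling; bpipe-fd.c, which uses a pipe instead, shows the
+complete treatment.
+
+\begin{verbatim}
+#include <unistd.h>
+#include <fcntl.h>
+#include <errno.h>
+
+static bRC pluginIO(bpContext *ctx, struct io_pkt *io)
+{
+   struct plugin_ctx *p_ctx = (struct plugin_ctx *)ctx->pContext;
+
+   io->status = 0;
+   io->io_errno = 0;
+   switch (io->func) {
+   case IO_OPEN:
+      p_ctx->fd = open(io->fname, io->flags, io->mode);
+      io->status = p_ctx->fd;
+      break;
+   case IO_READ:
+      io->status = read(p_ctx->fd, io->buf, io->count);
+      break;
+   case IO_WRITE:
+      io->status = write(p_ctx->fd, io->buf, io->count);
+      break;
+   case IO_CLOSE:
+      io->status = close(p_ctx->fd);
+      break;
+   case IO_SEEK:
+      io->offset = lseek(p_ctx->fd, io->offset, io->whence);
+      break;
+   }
+   if (io->status < 0 || (io->func == IO_SEEK && io->offset < 0)) {
+      io->io_errno = errno;      /* always set on a Unix error */
+   }
+   return bRC_OK;
+}
+\end{verbatim}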
+
+\subsection{bool checkFile(bpContext *ctx, char *fname)}
+If this entry point is set, Bacula will call it after backing up all file
+data during an Accurate backup. It will be passed the full filename for
+each file that Bacula is proposing to mark as deleted. Only files
+previously backed up but not backed up in the current session will be
+marked to be deleted. If you return {\bf false}, the file will be
+marked deleted. If you return {\bf true}, the file will not be marked
+deleted. This permits a plugin to ensure that previously saved virtual
+files or files controlled by your plugin that have not changed (not backed
+up in the current job) are not marked to be deleted. This entry point will
+only be called during Accurate Incremental and Differential backup jobs.
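+
+For instance, a plugin that creates all of its virtual files under a fixed
+prefix could keep them from being marked deleted with something like the
+following sketch (the "/@MYPLUGIN/" prefix is purely hypothetical):
+
+\begin{verbatim}
+#include <string.h>
+
+static bool checkFile(bpContext *ctx, char *fname)
+{
+   /* our virtual files were not re-read from disk this session,
+    * but they still exist, so do not let them be marked deleted */
+   if (strncmp(fname, "/@MYPLUGIN/", 11) == 0) {
+      return true;               /* file will not be marked deleted */
+   }
+   return false;                 /* file may be marked deleted */
+}
+\end{verbatim}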
+
+
+\section{Bacula Plugin Entrypoints}
+When Bacula calls one of your plugin entrypoints, you can call back to
+the entrypoints in Bacula that were supplied during the {\bf loadPlugin}
+call to get or set information within Bacula.
+
+\subsection{bRC registerBaculaEvents(bpContext *ctx, ...)}
+This Bacula entrypoint will allow you to register to receive events
+that are not automatically passed to your plugin by default. This
+entrypoint is currently unimplemented.
+
+\subsection{bRC getBaculaValue(bpContext *ctx, bVariable var, void *value)}
+Calling this entrypoint, you can obtain specific values that are available
+in Bacula. The following Variables can be referenced:
+\begin{itemize}
+\item bVarJobId returns an int
+\item bVarFDName returns a char *
+\item bVarLevel returns an int
+\item bVarClient returns a char *
+\item bVarJobName returns a char *
+\item bVarJobStatus returns an int
+\item bVarSinceTime returns an int (time\_t)
+\item bVarAccurate returns an int
+\end{itemize}
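+
+For example, to fetch the current JobId using the {\bf bfuncs} pointer
+saved at {\bf loadPlugin} time (a sketch):
+
+\begin{verbatim}
+   int jobid = 0;
+   if (bfuncs->getBaculaValue(ctx, bVarJobId, (void *)&jobid) == bRC_OK) {
+      /* jobid now contains the JobId of the running Job */
+   }
+\end{verbatim}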
+
+\subsection{bRC setBaculaValue(bpContext *ctx, bVariable var, void *value)}
+Calling this entrypoint allows you to set particular values in
+Bacula. The only variable that can currently be set is
+{\bf bVarFileSeen} and the value passed is a char * that points
+to the full filename for a file that you are indicating has been
+seen and hence is not deleted.
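+
+A typical use, from within checkFile() or similar code, might be (a sketch;
+fname is the full filename as a char *):
+
+\begin{verbatim}
+   /* tell Bacula this file still exists, so Accurate mode
+    * will not mark it as deleted */
+   bfuncs->setBaculaValue(ctx, bVarFileSeen, (void *)fname);
+\end{verbatim}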
+
+\subsection{bRC JobMessage(bpContext *ctx, const char *file, int line,
+ int type, utime\_t mtime, const char *fmt, ...)}
+This call permits you to put a message in the Job Report.
+
+
+\subsection{bRC DebugMessage(bpContext *ctx, const char *file, int line,
+ int level, const char *fmt, ...)}
+This call permits you to print a debug message.
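+
+Typical calls for both entrypoints might look like the following sketch.
+The message type M\_INFO comes from the Bacula message headers, and the
+debug level 100 is an arbitrary choice:
+
+\begin{verbatim}
+   /* goes into the Job Report */
+   bfuncs->JobMessage(ctx, __FILE__, __LINE__, M_INFO, 0,
+                      "myplugin: processing %s\n", fname);
+   /* printed only when the debug level is high enough */
+   bfuncs->DebugMessage(ctx, __FILE__, __LINE__, 100,
+                        "myplugin: pluginIO func=%d\n", io->func);
+\end{verbatim}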
+
+
+\subsection{void *baculaMalloc(bpContext *ctx, const char *file, int line,
+ size\_t size)}
+This call permits you to obtain memory from Bacula's memory allocator.
+
+
+\subsection{void baculaFree(bpContext *ctx, const char *file, int line, void *mem)}
+This call permits you to free memory obtained from Bacula's memory allocator.
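+
+A short usage sketch for the allocator pair:
+
+\begin{verbatim}
+   char *buf = (char *)bfuncs->baculaMalloc(ctx, __FILE__, __LINE__, 256);
+   if (buf) {
+      /* ... use buf ... */
+      bfuncs->baculaFree(ctx, __FILE__, __LINE__, buf);
+   }
+\end{verbatim}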
+
+\section{Building Bacula Plugins}
+There is currently one sample program {\bf example-plugin-fd.c} and
+one working plugin {\bf bpipe-fd.c} that can be found in the Bacula
+{\bf src/plugins/fd} directory. Both are built with the following:
+
+\begin{verbatim}
+ cd <bacula-source>
+ ./configure <your-options>
+ make
+ ...
+ cd src/plugins/fd
+ make
+ make test
+\end{verbatim}
+
+After building Bacula and changing into the src/plugins/fd directory,
+the {\bf make} command will build the {\bf bpipe-fd.so} plugin, which
+is a very useful and working program.
+
+The {\bf make test} command will build the {\bf example-plugin-fd.so}
+plugin and a binary named {\bf main}, which is built from the source
+code located in {\bf src/filed/fd\_plugins.c}.
+
+If you execute {\bf ./main}, it will load and run the example-plugin-fd
+plugin simulating a small number of the calling sequences that Bacula uses
+in calling a real plugin. This allows you to do initial testing of
+your plugin prior to trying it with Bacula.
+
+You can get a good idea of how to write your own plugin by first
+studying the example-plugin-fd, and actually running it. Then
+it can also be instructive to read the bpipe-fd.c code as it is
+a real plugin, which is still rather simple and small.
+
+When actually writing your own plugin, you may use the example-plugin-fd.c
+code as a template for your code.
+
--- /dev/null
+%%
+%%
+
+\chapter{Bacula Porting Notes}
+\label{_ChapterStart1}
+\index{Notes!Bacula Porting}
+\index{Bacula Porting Notes}
+\addcontentsline{toc}{section}{Bacula Porting Notes}
+
+This document is intended mostly for developers who wish to port Bacula to a
+system that is not {\bf officially} supported.
+
+It is hoped that Bacula clients will eventually run on every imaginable system
+that needs backing up (perhaps even a Palm). It is also hoped that the Bacula
+Director and Storage daemons will run on every system capable of supporting
+them.
+
+\section{Porting Requirements}
+\index{Requirements!Porting}
+\index{Porting Requirements}
+\addcontentsline{toc}{section}{Porting Requirements}
+
+In general, the following holds true:
+
+\begin{itemize}
+\item {\bf Bacula} has been compiled and run on Linux RedHat, FreeBSD, and
+ Solaris systems.
+\item In addition, clients exist on Win32 and Irix.
+\item It requires GNU C++ to compile. You can try with other compilers, but
+  you are on your own. The Irix client is built with the Irix compiler, but,
+  in general, you will need GNU.
+\item Your compiler must provide support for 64 bit signed and unsigned
+ integers.
+\item You will need a recent copy of the {\bf autoconf} tools loaded on your
+ system (version 2.13 or later). The {\bf autoconf} tools are used to build
+ the configuration program, but are not part of the Bacula source
+distribution.
+\item There are certain third party packages that Bacula needs. Except for
+ MySQL, they can all be found in the {\bf depkgs} and {\bf depkgs1} releases.
+\item To build the Win32 binaries, we use Microsoft VC++ standard
+ 2003. Please see the instructions in
+ bacula-source/src/win32/README.win32 for more details. If you
+ want to use VC++ Express, please see README.vc8. Our build is
+ done under the most recent version of Cygwin, but Cygwin is
+ not used in the Bacula binaries that are produced.
+ Unfortunately, we do not have the resources to help you build
+ your own version of the Win32 FD, so you are pretty much on
+ your own. You can ask the bacula-devel list for help, but
+ please don't expect much.
+\item {\bf Bacula} requires a good implementation of pthreads to work.
+\item The source code has been written with portability in mind and is mostly
+ POSIX compatible. Thus porting to any POSIX compatible operating system
+ should be relatively easy.
+\end{itemize}
+
+\section{Steps to Take for Porting}
+\index{Porting!Steps to Take for}
+\index{Steps to Take for Porting}
+\addcontentsline{toc}{section}{Steps to Take for Porting}
+
+\begin{itemize}
+\item The first step is to ensure that you have version 2.13 or later of the
+ {\bf autoconf} tools loaded. You can skip this step, but making changes to
+ the configuration program will be difficult or impossible.
+\item Then run a {\bf ./configure} command in the main source directory and
+  examine the output. It should look something like the following:
+
+\footnotesize
+\begin{verbatim}
+Configuration on Mon Oct 28 11:42:27 CET 2002:
+ Host: i686-pc-linux-gnu -- redhat 7.3
+ Bacula version: 1.27 (26 October 2002)
+ Source code location: .
+ Install binaries: /sbin
+ Install config files: /etc/bacula
+ C Compiler: gcc
+ C++ Compiler: c++
+ Compiler flags: -g -O2
+ Linker flags:
+ Libraries: -lpthread
+ Statically Linked Tools: no
+ Database found: no
+ Database type: Internal
+ Database lib:
+ Job Output Email: root@localhost
+ Traceback Email: root@localhost
+ SMTP Host Address: localhost
+ Director Port 9101
+ File daemon Port 9102
+ Storage daemon Port 9103
+ Working directory /etc/bacula/working
+ SQL binaries Directory
+ Large file support: yes
+ readline support: yes
+ cweb support: yes /home/kern/bacula/depkgs/cweb
+ TCP Wrappers support: no
+ ZLIB support: yes
+ enable-smartalloc: yes
+ enable-gnome: no
+ gmp support: yes
+\end{verbatim}
+\normalsize
+
+The details depend on your system. The first thing to check is that it
+properly identified your host on the {\bf Host:} line. The first part (added
+in version 1.27) is the GNU four part identification of your system. The part
+after the -- is your system and the system version. Generally, if your system
+is not yet supported, you must correct these.
+\item If the {\bf ./configure} does not function properly, you must determine
+ the cause and fix it. Generally, it will be because some required system
+ routine is not available on your machine.
+\item To correct problems with detection of your system type or with routines
+ and libraries, you must edit the file {\bf
+ \lt{}bacula-src\gt{}/autoconf/configure.in}. This is the ``source'' from
+which {\bf configure} is built. In general, most of the changes for your
+system will be made in {\bf autoconf/aclocal.m4} in the routine {\bf
+BA\_CHECK\_OPSYS} or in the routine {\bf BA\_CHECK\_OPSYS\_DISTNAME}. I have
+already added the necessary code for most systems, but if yours shows up as
+{\bf unknown} you will need to make changes. Then as mentioned above, you
+will need to set a number of system dependent items in {\bf configure.in} in
+the {\bf case} statement at approximately line 1050 (depending on the Bacula
+release).
+\item The items to set in the case statement that corresponds to your system
+ are the following:
+
+\begin{itemize}
+\item DISTVER -- set to the version of your operating system. Typically some
+ form of {\bf uname} obtains it.
+\item TAPEDRIVE -- the default tape drive. Not too important as the user can
+ set it as an option.
+\item PSCMD -- set to the {\bf ps} command that will provide the PID in the
+ first field and the program name in the second field. If this is not set
+ properly, the {\bf bacula stop} script will most likely not be able to stop
+Bacula in all cases.
+\item hostname -- command to return the base host name (non-qualified) of
+ your system. This is generally the machine name. Not too important as the
+ user can correct this in the configuration file.
+\item CFLAGS -- set any special compiler flags needed. Many systems need a
+ special flag to make pthreads work. See cygwin for an example.
+\item LDFLAGS -- set any special loader flags. See cygwin for an example.
+\item PTHREAD\_LIB -- set for any special pthreads flags needed during
+ linking. See freebsd as an example.
+\item lld -- set so that a ``long long int'' will be properly formatted in a
+ printf() call.
+\item llu -- set so that a ``long long unsigned'' will be properly formatted
+ in a printf() call. See the sketch following this list for how these edit
+ strings are used.
+\item PFILES -- set to add any files that you may define in your platform
+ subdirectory. These files are used to install automatic system
+ startup of the Bacula daemons.
+\end{itemize}
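+
+The following is a minimal C sketch (not taken from the Bacula source)
+showing how such configure-determined edit strings are typically used;
+the two macro values below are assumptions appropriate for a 64 bit
+Linux system, and other systems may need, for example, ``\%qd'' or
+``\%I64d'':
+
+\footnotesize
+\begin{verbatim}
+#include <stdio.h>
+
+/* Hypothetical values that ./configure would substitute */
+#define lld "lld"
+#define llu "llu"
+
+int main(void)
+{
+   long long int sval = -1234567890123LL;
+   unsigned long long uval = 9876543210987ULL;
+
+   /* String pasting builds the full format from the edit string */
+   printf("signed=%" lld " unsigned=%" llu "\n", sval, uval);
+   return 0;
+}
+\end{verbatim}
+\normalsize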
+
+\item To rebuild a new version of {\bf configure} from a changed {\bf
+ autoconf/configure.in} you enter {\bf make configure} in the top level Bacula
+ source directory. You must have done a ./configure prior to trying to rebuild
+ the configure script or it will get into an infinite loop.
+\item If the {\bf make configure} gets into an infinite loop, stop it with
+ ctrl-c, then do {\bf ./configure} (no options are necessary) and retry the
+ {\bf make configure}, which should now work.
+\item To rebuild {\bf configure} you will need to have {\bf autoconf} version
+ 2.57-3 or higher loaded. Older versions of autoconf will complain about
+ unknown or bad options, and won't work.
+\item After you have a working {\bf configure} script, you may need to make a
+ few system dependent changes to the way Bacula works. Generally, these are
+ done in {\bf src/baconfig.h}. You can find a few examples of system dependent
+changes toward the end of this file. For example, on Irix systems, there is
+no definition for {\bf socklen\_t}, so it is made in this file (see the
+sketch following this list). If your
+system has structure alignment requirements, check the definition of BALIGN
+in this file. Currently, all Bacula allocated memory is aligned on a {\bf
+double} boundary.
+\item If you are having problems with Bacula's type definitions, you might
+ look at {\bf src/bc\_types.h} where all the types such as {\bf uint32\_t},
+ {\bf uint64\_t}, etc. that Bacula uses are defined.
+\end{itemize}
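+
+As a hedged illustration of such a system dependent change, the sketch
+below shows the kind of definition described above; the {\bf
+HAVE\_IRIX\_OS} guard symbol is an assumption for illustration, not a
+copy of the actual {\bf src/baconfig.h} contents:
+
+\footnotesize
+\begin{verbatim}
+/* Hypothetical sketch of a system dependent fix in src/baconfig.h */
+#ifdef HAVE_IRIX_OS        /* assumed guard symbol, for illustration */
+/* Irix provides no socklen_t, so supply a definition ourselves */
+typedef int socklen_t;
+#endif
+\end{verbatim}
+\normalsize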
+%%
+%%
+
+\chapter{Bacula Regression Testing}
+\label{_ChapterStart8}
+\index{Testing!Bacula Regression}
+\index{Bacula Regression Testing}
+\addcontentsline{toc}{section}{Bacula Regression Testing}
+
+\section{General}
+\index{General}
+\addcontentsline{toc}{section}{General}
+
+This document is intended mostly for developers who wish to ensure that their
+changes to Bacula don't introduce bugs in the base code. However, you
+don't need to be a developer to run the regression scripts, and we
+recommend them before putting your system into production, and before each
+upgrade, especially if you build from source code. They are
+simply shell scripts that drive Bacula through bconsole and then typically
+compare the input and output with {\bf diff}.
+
+You can find the existing regression scripts in the Bacula developer's
+{\bf git} repository on SourceForge. We strongly recommend that you {\bf
+clone} the repository because afterwards, you can easily pull the
+updates that have been made.
+
+To get started, we recommend that you create a directory named {\bf
+bacula}, under which you will put the current source code and the current
+set of regression scripts. Below, we will describe how to set this up.
+
+The top level directory that we call {\bf bacula} can be named anything you
+want. Note, all the standard regression scripts run as non-root and can be
+run on the same machine as a production Bacula system (the developers run
+it this way).
+
+To create the directory structure for the current trunk and to
+clone the repository, do the following (note, we assume you
+are working in your home directory in a non-root account):
+
+\footnotesize
+\begin{verbatim}
+cd
+git clone git://bacula.git.sourceforge.net/gitroot/bacula bacula
+\end{verbatim}
+\normalsize
+
+This will create the directory {\bf bacula} and populate it with
+three directories: {\bf bacula}, {\bf gui}, and {\bf regress}.
+{\bf bacula} contains the Bacula source code; {\bf gui} contains
+certain gui programs that you will not need, and {\bf regress} contains
+all the regression scripts. The above should be needed only
+once. Thereafter to update to the latest code, you do:
+
+\footnotesize
+\begin{verbatim}
+cd bacula
+git pull
+\end{verbatim}
+\normalsize
+
+If you want to test with SQLite and it is not installed on your system,
+you will need to download the latest depkgs release from SourceForge and
+unpack it into {\bf depkgs}, then simply:
+
+\footnotesize
+\begin{verbatim}
+cd depkgs
+make
+\end{verbatim}
+\normalsize
+
+
+There are two different aspects of regression testing that this document will
+discuss: 1. Running the Regression Script, and 2. Writing a Regression Test.
+
+\section{Running the Regression Script}
+\index{Running the Regression Script}
+\index{Script!Running the Regression}
+\addcontentsline{toc}{section}{Running the Regression Script}
+
+There are a number of different tests that may be run, such as: the standard
+set that uses disk Volumes and runs under any userid; a small set of tests
+that write to tape; another set of tests where you must be root to run them.
+Normally, I run all my tests as non-root and very rarely run the root
+tests. The tests vary in length, and running the full tests including disk
+based testing, tape based testing, autochanger based testing, and multiple
+drive autochanger based testing can take 3 or 4 hours.
+
+\subsection{Setting the Configuration Parameters}
+\index{Setting the Configuration Parameters}
+\index{Parameters!Setting the Configuration}
+\addcontentsline{toc}{subsection}{Setting the Configuration Parameters}
+
+There is nothing you need to change in the source directory.
+
+To begin:
+
+\footnotesize
+\begin{verbatim}
+cd bacula/regress
+\end{verbatim}
+\normalsize
+
+
+The very first time you are going to run the regression scripts, you will
+need to create a custom config file for your system. We suggest that you
+start by:
+
+\footnotesize
+\begin{verbatim}
+cp prototype.conf config
+\end{verbatim}
+\normalsize
+
+Then you can edit the {\bf config} file directly.
+
+\footnotesize
+\begin{verbatim}
+
+# Where to get the source to be tested
+BACULA_SOURCE="${HOME}/bacula/bacula"
+
+# Where to send email !!!!! Change me !!!!!!!
+EMAIL=your-name@your-domain.com
+SMTP_HOST="localhost"
+
+# Full "default" path where to find sqlite (no quotes!)
+SQLITE3_DIR=${HOME}/depkgs/sqlite3
+SQLITE_DIR=${HOME}/depkgs/sqlite
+
+TAPE_DRIVE="/dev/nst0"
+# if you don't have an autochanger set AUTOCHANGER to /dev/null
+AUTOCHANGER="/dev/sg0"
+# For two drive tests -- set to /dev/null if you do not have it
+TAPE_DRIVE1="/dev/null"
+
+# This must be the path to the autochanger including its name
+AUTOCHANGER_PATH="/usr/sbin/mtx"
+
+# Set your database here
+#WHICHDB="--with-sqlite=${SQLITE_DIR}"
+#WHICHDB="--with-sqlite3=${SQLITE3_DIR}"
+#WHICHDB="--with-mysql"
+WHICHDB="--with-postgresql"
+
+# Set this to "--with-tcp-wrappers" or "--without-tcp-wrappers"
+TCPWRAPPERS="--with-tcp-wrappers"
+
+# Set this to "" to disable OpenSSL support, "--with-openssl=yes"
+# to enable it, or provide the path to the OpenSSL installation,
+# eg "--with-openssl=/usr/local"
+OPENSSL="--with-openssl"
+
+# You may put your real host name here, but localhost is valid also
+# and it has the advantage that it works on a non-networked machine
+HOST="localhost"
+
+\end{verbatim}
+\normalsize
+
+\begin{itemize}
+\item {\bf BACULA\_SOURCE} should be the full path to the Bacula source code
+ that you wish to test. It will be loaded, configured, compiled, and
+ installed with the "make setup" command, which needs to be done only
+ once each time you change the source code.
+
+\item {\bf EMAIL} should be your email address. Please remember to change this
+ or I will get a flood of unwanted messages. You may or may not want to see
+ these emails. In my case, I don't need them so I direct it to the bit bucket.
+
+\item {\bf SMTP\_HOST} defines where your SMTP server is.
+
+\item {\bf SQLITE\_DIR} should be the full path to the sqlite package, which
+ must be built before running a Bacula regression, if you are using SQLite.
+ This variable is ignored if you are using MySQL or PostgreSQL. To use
+ PostgreSQL, edit the Makefile and change (or add)
+ WHICHDB=``\verb{--{with-postgresql''. For MySQL use
+ WHICHDB=``\verb{--{with-mysql''.
+
+ The advantage of using SQLite is that it is totally independent of any
+ installation you may have running on your system, and there is no
+ special configuration or authorization that must be done to run it.
+ With both MySQL and PostgreSQL, you must pre-install the packages,
+ initialize them and ensure that you have authorization to access the
+ database and create and delete tables.
+
+\item {\bf TAPE\_DRIVE} is the full path to your tape drive. The base set of
+ regression tests do not use a tape, so this is only important if you want to
+ run the full tests. Set this to /dev/null if you do not have a tape drive.
+
+\item {\bf TAPE\_DRIVE1} is the full path to your second tape drive, if
+ you have one. The base set of
+ regression tests do not use a tape, so this is only important if you want to
+ run the full two drive tests. Set this to /dev/null if you do not have a
+ second tape drive.
+
+\item {\bf AUTOCHANGER} is the name of your autochanger control device. Set this to
+ /dev/null if you do not have an autochanger.
+
+\item {\bf AUTOCHANGER\_PATH} is the full path including the program name for
+ your autochanger program (normally {\bf mtx}). Leave the default value if you
+ do not have one.
+
+\item {\bf TCPWRAPPERS} defines whether or not you want the ./configure
+ to be performed with tcpwrappers enabled.
+
+\item {\bf OPENSSL} is used to enable/disable SSL support for Bacula
+ communications and data encryption.
+
+\item {\bf HOST} is the hostname that it will use when building the
+ scripts. The Bacula daemons will be named <HOST>-dir, <HOST>-fd,
+ ... It is also the name of the machine used to connect to the
+ daemons over the network. Hence the name should either be your real
+ hostname (with an appropriate DNS or /etc/hosts entry) or {\bf
+ localhost} as it is in the default file.
+
+\item {\bf bin} is the binary location.
+
+\item {\bf scripts} is the bacula scripts location (where the database
+ creation scripts, autochanger handler, etc. are found).
+
+\end{itemize}
+
+\subsection{Building the Test Bacula}
+\index{Building the Test Bacula}
+\index{Bacula!Building the Test}
+\addcontentsline{toc}{subsection}{Building the Test Bacula}
+
+Once the above variables are set, you can build the Makefile by entering:
+
+\footnotesize
+\begin{verbatim}
+./config xxx.conf
+\end{verbatim}
+\normalsize
+
+where xxx.conf is the name of the conf file containing your system parameters.
+This will build a Makefile from Makefile.in, and you should not need to
+do this again unless you want to change the database or another regression
+configuration parameter.
+
+
+\subsection{Setting up your SQL engine}
+\index{Setting up your SQL engine}
+\addcontentsline{toc}{subsection}{Setting up your SQL engine}
+If you are using SQLite or SQLite3, there is nothing more to do; you can
+simply run the tests as described in the next section.
+
+If you are using MySQL or PostgreSQL, you will need to establish an
+account with your database engine for the user name {\bf regress} and
+manually create a database named {\bf regress} that this user can use,
+which means you will have to give the user regress sufficient
+permissions on the regress database.
+There is no password on the regress account.
+
+You have probably already done this procedure for the user name and
+database named bacula. If not, the manual describes roughly how to
+do it, and the scripts in bacula/regress/build/src/cats named
+create\_mysql\_database, create\_postgresql\_database, grant\_mysql\_privileges,
+and grant\_postgresql\_privileges may be of help to you.
+
+Generally, to do the above, you will need to run as root to
+be able to create databases and modify permissions within MySQL and
+PostgreSQL.
+
+
+\subsection{Running the Disk Only Regression}
+\index{Regression!Running the Disk Only}
+\index{Running the Disk Only Regression}
+\addcontentsline{toc}{subsection}{Running the Disk Only Regression}
+
+The simplest way to copy the source code, configure it, compile it, link
+it, and run the tests is to use a helper script:
+
+\footnotesize
+\begin{verbatim}
+./do_disk
+\end{verbatim}
+\normalsize
+
+
+
+
+This will run the base set of tests using disk Volumes.
+If you are testing on a
+non-Linux machine, several of the tests may not be run. In any case,
+as we add new tests, the number will vary. It will take about 1 hour
+and you don't need to be root
+to run these tests (I run under my regular userid). The result should be
+something similar to:
+
+\footnotesize
+\begin{verbatim}
+Test results
+ ===== auto-label-test OK 12:31:33 =====
+ ===== backup-bacula-test OK 12:32:32 =====
+ ===== bextract-test OK 12:33:27 =====
+ ===== bscan-test OK 12:34:47 =====
+ ===== bsr-opt-test OK 12:35:46 =====
+ ===== compressed-test OK 12:36:52 =====
+ ===== compressed-encrypt-test OK 12:38:18 =====
+ ===== concurrent-jobs-test OK 12:39:49 =====
+ ===== data-encrypt-test OK 12:41:11 =====
+ ===== encrypt-bug-test OK 12:42:00 =====
+ ===== fifo-test OK 12:43:46 =====
+ ===== backup-bacula-fifo OK 12:44:54 =====
+ ===== differential-test OK 12:45:36 =====
+ ===== four-concurrent-jobs-test OK 12:47:39 =====
+ ===== four-jobs-test OK 12:49:22 =====
+ ===== incremental-test OK 12:50:38 =====
+ ===== query-test OK 12:51:37 =====
+ ===== recycle-test OK 12:53:52 =====
+ ===== restore2-by-file-test OK 12:54:53 =====
+ ===== restore-by-file-test OK 12:55:40 =====
+ ===== restore-disk-seek-test OK 12:56:29 =====
+ ===== six-vol-test OK 12:57:44 =====
+ ===== span-vol-test OK 12:58:52 =====
+ ===== sparse-compressed-test OK 13:00:00 =====
+ ===== sparse-test OK 13:01:04 =====
+ ===== two-jobs-test OK 13:02:39 =====
+ ===== two-vol-test OK 13:03:49 =====
+ ===== verify-vol-test OK 13:04:56 =====
+ ===== weird-files2-test OK 13:05:47 =====
+ ===== weird-files-test OK 13:06:33 =====
+ ===== migration-job-test OK 13:08:15 =====
+ ===== migration-jobspan-test OK 13:09:33 =====
+ ===== migration-volume-test OK 13:10:48 =====
+ ===== migration-time-test OK 13:12:59 =====
+ ===== hardlink-test OK 13:13:50 =====
+ ===== two-pool-test OK 13:18:17 =====
+ ===== fast-two-pool-test OK 13:24:02 =====
+ ===== two-volume-test OK 13:25:06 =====
+ ===== incremental-2disk OK 13:25:57 =====
+ ===== 2drive-incremental-2disk OK 13:26:53 =====
+ ===== scratch-pool-test OK 13:28:01 =====
+Total time = 0:57:55 or 3475 secs
+
+\end{verbatim}
+\normalsize
+
+and the working tape tests are run with
+
+\footnotesize
+\begin{verbatim}
+make full_test
+\end{verbatim}
+\normalsize
+
+
+\footnotesize
+\begin{verbatim}
+Test results
+
+ ===== Bacula tape test OK =====
+ ===== Small File Size test OK =====
+ ===== restore-by-file-tape test OK =====
+ ===== incremental-tape test OK =====
+ ===== four-concurrent-jobs-tape OK =====
+ ===== four-jobs-tape OK =====
+\end{verbatim}
+\normalsize
+
+Each separate test is self-contained in that it initializes to run Bacula from
+scratch (i.e. a newly created database). It will also kill any Bacula session
+that is currently running. In addition, it uses ports 8101, 8102, and 8103 so
+that it does not interfere with a production system.
+
+Alternatively, you can do the ./do\_disk work by hand with:
+
+\footnotesize
+\begin{verbatim}
+make setup
+\end{verbatim}
+\normalsize
+
+The above will then copy the source code within
+the regression tree (in directory regress/build), configure it, and build it.
+There should be no errors. If there are, please correct them before
+continuing. From this point on, as long as you don't change the Bacula
+source code, you should not need to repeat any of the above steps. If
+you pull down a new version of the source code, simply run {\bf make setup}
+again.
+
+
+Once Bacula is built, you can run the basic disk only non-root regression test
+by entering:
+
+\footnotesize
+\begin{verbatim}
+make test
+\end{verbatim}
+\normalsize
+
+
+\subsection{Other Tests}
+\index{Other Tests}
+\index{Tests!Other}
+\addcontentsline{toc}{subsection}{Other Tests}
+
+There are a number of other tests that can be run as well. All the tests are
+simple shell scripts kept in the regress directory. For example, ``make
+test'' simply executes {\bf ./all-non-root-tests}. The other tests, which
+are invoked by directly running the script, are:
+
+\begin{description}
+
+\item [all\_non-root-tests]
+ \index{all\_non-root-tests}
+ All non-tape tests not requiring root. This is the standard set of tests,
+which, in general, back up some data, then restore it, and finally compare the
+restored data with the original data.
+
+\item [all-root-tests]
+ \index{all-root-tests}
+ All non-tape tests requiring root permission. These are a relatively small
+number of tests that require running as root. The amount of data backed up
+can be quite large. For example, one test backs up /usr, another backs up
+/etc. One or more of these tests reports an error -- I'll fix it one day.
+
+\item [all-non-root-tape-tests]
+ \index{all-non-root-tape-tests}
+ All tape tests not requiring root. There are currently three tests, all run
+without being root, that back up to a tape. The first two tests use one volume,
+and the third test requires an autochanger, and uses two volumes. If you
+don't have an autochanger, then this script will probably produce an error.
+
+\item [all-tape-and-file-tests]
+ \index{all-tape-and-file-tests}
+ All tape and file tests not requiring root. This includes just about
+everything, and I don't run it very often.
+\end{description}
+
+\subsection{If a Test Fails}
+\index{Fails!If a Test}
+\index{If a Test Fails}
+\addcontentsline{toc}{subsection}{If a Test Fails}
+
+If one or more tests fail, the output will be similar to:
+
+\footnotesize
+\begin{verbatim}
+ !!!!! concurrent-jobs-test failed!!! !!!!!
+\end{verbatim}
+\normalsize
+
+If you want to determine why the test failed, you will need to rerun the
+script with the debug output turned on. You do so by defining the
+environment variable {\bf REGRESS\_DEBUG} with commands such as:
+
+\begin{verbatim}
+REGRESS_DEBUG=1
+export REGRESS_DEBUG
+\end{verbatim}
+
+Then from the "regress" directory (all regression scripts assume that
+you have "regress" as the current directory), enter:
+
+\begin{verbatim}
+tests/test-name
+\end{verbatim}
+
+where test-name should be the name of a test script -- for example:
+{\bf tests/backup-bacula-test}.
+
+\section{Testing a Binary Installation}
+\index{Test!Testing a Binary Installation}
+
+If you have installed your Bacula from a binary release (such as rpms or debs),
+you can still run regression tests on it.
+First, make sure that your regression {\bf config} file uses the same catalog backend as
+your installed binaries. Then define the \texttt{bin} and \texttt{scripts} variables
+in your config file.
+
+Example:
+\begin{verbatim}
+bin=/opt/bacula/bin
+scripts=/opt/bacula/scripts
+\end{verbatim}
+
+The \texttt{./scripts/prepare-other-loc} script will tweak the regress scripts to use
+your binary location. You will need to run it manually once before you run any
+regression tests.
+
+\begin{verbatim}
+$ ./scripts/prepare-other-loc
+$ ./tests/backup-bacula-test
+...
+\end{verbatim}
+
+All regression tests must be run by hand or by calling one of the wrapper
+scripts, principally those that begin with {\bf all\_...} such as
+{\bf all\_disk\_tests} or {\bf ./all\_test}. None of the
+{\bf ./do\_disk}, {\bf ./do\_all}, or {\bf ./nightly...} scripts will work.
+
+If you want to switch back to running the regression scripts from source, first
+remove the {\bf bin} and {\bf scripts} variables from your {\bf config} file and
+rerun the {\bf make setup} step.
+
+\section{Running a Single Test}
+\index{Running a Single Test}
+\addcontentsline{toc}{section}{Running a Single Test}
+
+If you wish to run a single test, you can simply enter:
+
+\begin{verbatim}
+cd regress
+tests/<name-of-test>
+\end{verbatim}
+
+or, if the source code has been updated, you would do:
+
+\begin{verbatim}
+cd bacula
+git pull
+cd regress
+make setup
+tests/backup-to-null
+\end{verbatim}
+
+
+\section{Writing a Regression Test}
+\index{Test!Writing a Regression}
+\index{Writing a Regression Test}
+\addcontentsline{toc}{section}{Writing a Regression Test}
+
+Any developer who implements a major new feature should write a regression
+test that exercises and validates the new feature. Each regression test is a
+complete test by itself. It terminates any running Bacula, initializes the
+database, starts Bacula, then runs the test by using the console program.
+
+\subsection{Running the Tests by Hand}
+\index{Hand!Running the Tests by}
+\index{Running the Tests by Hand}
+\addcontentsline{toc}{subsection}{Running the Tests by Hand}
+
+You can run any individual test by hand by cd'ing to the {\bf regress}
+directory and entering:
+
+\footnotesize
+\begin{verbatim}
+tests/<test-name>
+\end{verbatim}
+\normalsize
+
+\subsection{Directory Structure}
+\index{Structure!Directory}
+\index{Directory Structure}
+\addcontentsline{toc}{subsection}{Directory Structure}
+
+The directory structure of the regression tests is:
+
+\footnotesize
+\begin{verbatim}
+ regress - Makefile, scripts to start tests
+ |------ scripts - Scripts and conf files
+ |-------tests - All test scripts are here
+ |
+ |------------------ -- All directories below this point are used
+ | for testing, but are created from the
+ | above directories and are removed with
+ | "make distclean"
+ |
+ |------ bin - This is the install directory for
+ | Bacula to be used for testing
+ |------ build - Where the Bacula source build tree is
+ |------ tmp - Most temp files go here
+ |------ working - Bacula working directory
+ |------ weird-files - Weird files used in two of the tests.
+\end{verbatim}
+\normalsize
+
+\subsection{Adding a New Test}
+\index{Adding a New Test}
+\index{Test!Adding a New}
+\addcontentsline{toc}{subsection}{Adding a New Test}
+
+If you want to write a new regression test, it is best to start with one of
+the existing test scripts, and modify it to do the new test.
+
+When adding a new test, be extremely careful about adding anything to any of
+the daemons' configuration files. The reason is that it may change the prompts
+that are sent to the console. For example, adding a Pool means that the
+current scripts, which assume that Bacula automatically selects a Pool, will
+now be presented with a new prompt, so the test will fail. If you need to
+enhance the configuration files, consider making your own versions.
+
+\subsection{Running a Test Under The Debugger}
+\index{Debugger}
+\addcontentsline{toc}{subsection}{Running a Test Under The Debugger}
+You can run a test under the debugger (actually run a Bacula daemon
+under the debugger) by first setting the environment variable
+{\bf REGRESS\_WAIT} with commands such as:
+
+\begin{verbatim}
+REGRESS_WAIT=1
+export REGRESS_WAIT
+\end{verbatim}
+
+Then execute the script. When the script prints the following line:
+
+\begin{verbatim}
+Start Bacula under debugger and enter anything when ready ...
+\end{verbatim}
+
+You start the Bacula component you want to run under the debugger in a
+different shell window. For example:
+
+\begin{verbatim}
+cd .../regress/bin
+gdb bacula-sd
+(possibly set breakpoints, ...)
+run -s -f
+\end{verbatim}
+
+Then enter any character in the window with the above message.
+An error message will appear saying that the daemon you are debugging
+is already running, which is the case. You can simply ignore the
+error message.
+%%
+%%
+
+\addcontentsline{lof}{figure}{Smart Memory Allocation with Orphaned Buffer
+Detection}
+\includegraphics{\idir smartall.eps}
+
+\chapter{Smart Memory Allocation}
+\label{_ChapterStart4}
+\index{Detection!Smart Memory Allocation With Orphaned Buffer }
+\index{Smart Memory Allocation With Orphaned Buffer Detection }
+\addcontentsline{toc}{section}{Smart Memory Allocation With Orphaned Buffer
+Detection}
+
+Few things are as embarrassing as a program that leaks, yet few errors are so
+easy to commit or as difficult to track down in a large, complicated program
+as failure to release allocated memory. SMARTALLOC replaces the standard C
+library memory allocation functions with versions which keep track of buffer
+allocations and releases and report all orphaned buffers at the end of program
+execution. By including this package in your program during development and
+testing, you can identify code that loses buffers right when it's added and
+most easily fixed, rather than as part of a crisis debugging push when the
+problem is identified much later in the testing cycle (or even worse, when the
+code is in the hands of a customer). When program testing is complete, simply
+recompiling with different flags removes SMARTALLOC from your program,
+permitting it to run without speed or storage penalties.
+
+In addition to detecting orphaned buffers, SMARTALLOC also helps to find other
+common problems in management of dynamic storage including storing before the
+start or beyond the end of an allocated buffer, referencing data through a
+pointer to a previously released buffer, attempting to release a buffer twice
+or releasing storage not obtained from the allocator, and assuming the initial
+contents of storage allocated by functions that do not guarantee a known
+value. SMARTALLOC's checking does not usually add a large amount of overhead
+to a program (except for programs which use {\tt realloc()} extensively; see
+below). SMARTALLOC focuses on proper storage management rather than internal
+consistency of the heap as checked by the malloc\_debug facility available on
+some systems. SMARTALLOC does not conflict with malloc\_debug and both may be
+used together, if you wish. SMARTALLOC makes no assumptions regarding the
+internal structure of the heap and thus should be compatible with any C
+language implementation of the standard memory allocation functions.
+
+\subsection{Installing SMARTALLOC}
+\index{SMARTALLOC!Installing }
+\index{Installing SMARTALLOC }
+\addcontentsline{toc}{subsection}{Installing SMARTALLOC}
+
+SMARTALLOC is provided as a Zipped archive,
+\elink{smartall.zip}{http://www.fourmilab.ch/smartall/smartall.zip}; see the
+download instructions below.
+
+To install SMARTALLOC in your program, simply add the statement:
+
+{\tt \#include "smartall.h"}
+
+to every C program file which calls any of the memory allocation functions
+({\tt malloc}, {\tt calloc}, {\tt free}, etc.). SMARTALLOC must be used for
+all memory allocation within a program, so add this include to the master
+include file for your entire program, if you have such a thing. Next, define
+the symbol SMARTALLOC in the compilation before the inclusion of smartall.h.
+I usually do this by having my Makefile add the ``{\tt -DSMARTALLOC}''
+option to the C compiler for non-production builds. You can define the
+symbol manually, if you prefer, by adding the statement:
+
+{\tt \#define SMARTALLOC}
+
+At the point where your program is all done and ready to relinquish control to
+the operating system, add the call:
+
+{\tt \ \ \ \ \ \ \ \ sm\_dump(}{\it datadump}{\tt );}
+
+where {\it datadump} specifies whether the contents of orphaned buffers are to
+be dumped in addition to printing their size and place of allocation. The data
+are dumped only if {\it datadump} is nonzero, so most programs will normally
+use ``{\tt sm\_dump(0);}''. If a mysterious orphaned buffer appears that can't
+be identified from the information this prints about it, replace the statement
+with ``{\tt sm\_dump(1);}''. Usually the dump of the buffer's data will
+furnish the additional clues you need to excavate and extirpate the elusive
+error that left the buffer allocated.
+
+Finally, add the files ``smartall.h'' and ``smartall.c'' from this release to
+your source directory, make dependencies, and linker input. You needn't make
+inclusion of smartall.c in your link optional; if compiled with SMARTALLOC not
+defined it generates no code, so you may always include it knowing it will
+waste no storage in production builds. Now when you run your program, if it
+leaves any buffers around when it's done, each will be reported by {\tt
+sm\_dump()} on stderr as follows:
+
+\footnotesize
+\begin{verbatim}
+Orphaned buffer: 120 bytes allocated at line 50 of gutshot.c
+\end{verbatim}
+\normalsize
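+
+Putting these pieces together, here is a minimal sketch of an instrumented
+program (the file name and the deliberately leaked buffer are invented for
+the example):
+
+\footnotesize
+\begin{verbatim}
+/* leaky.c -- compile with -DSMARTALLOC and link with smartall.c */
+#include <stdlib.h>
+#include <string.h>
+#include "smartall.h"
+
+int main(void)
+{
+    char *kept = (char *) malloc(120);   /* never freed: reported    */
+    char *temp = (char *) malloc(64);
+
+    memset(kept, 0, 120);                /* used, but never released */
+    strcpy(temp, "scratch data");
+    free(temp);                          /* released: not reported   */
+
+    sm_dump(0);            /* report orphaned buffers on stderr */
+    return 0;
+}
+\end{verbatim}
+\normalsize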
+
+\subsection{Squelching a SMARTALLOC}
+\index{SMARTALLOC!Squelching a }
+\index{Squelching a SMARTALLOC }
+\addcontentsline{toc}{subsection}{Squelching a SMARTALLOC}
+
+Usually, when you first install SMARTALLOC in an existing program you'll find
+it nattering about lots of orphaned buffers. Some of these turn out to be
+legitimate errors, but some are storage allocated during program
+initialization that, while dynamically allocated, is logically static storage
+not intended to be released. Of course, you can get rid of the complaints
+about these buffers by adding code to release them, but by doing so you're
+adding unnecessary complexity and code size to your program just to silence
+the nattering of a SMARTALLOC, so an escape hatch is provided to eliminate the
+need to release these buffers.
+
+Normally all storage allocated with the functions {\tt malloc()}, {\tt
+calloc()}, and {\tt realloc()} is monitored by SMARTALLOC. If you make the
+function call:
+
+\footnotesize
+\begin{verbatim}
+ sm_static(1);
+\end{verbatim}
+\normalsize
+
+you declare that subsequent storage allocated by {\tt malloc()}, {\tt
+calloc()}, and {\tt realloc()} should not be considered orphaned if found to
+be allocated when {\tt sm\_dump()} is called. I use a call on ``{\tt
+sm\_static(1);}'' before I allocate things like program configuration tables
+so I don't have to add code to release them at end of program time. After
+allocating unmonitored data this way, be sure to add a call to:
+
+\footnotesize
+\begin{verbatim}
+ sm_static(0);
+\end{verbatim}
+\normalsize
+
+to resume normal monitoring of buffer allocations. Buffers allocated while
+{\tt sm\_static(1)} is in effect are not checked for having been orphaned, but
+all the other safeguards provided by SMARTALLOC remain in effect. You may
+release such buffers, if you like; but you don't have to.
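+
+For example, a once-only configuration table can be bracketed as follows (a
+sketch; {\tt config\_table}, {\tt load\_config()} and {\tt struct
+config\_entry} are invented names):
+
+\footnotesize
+\begin{verbatim}
+struct config_entry { int key; int value; };
+static struct config_entry *config_table;
+
+void load_config(int nentries)
+{
+    sm_static(1);   /* logically static: don't report as orphaned */
+    config_table = (struct config_entry *)
+                       malloc(nentries * sizeof(struct config_entry));
+    sm_static(0);   /* resume normal orphaned-buffer monitoring */
+}
+\end{verbatim}
+\normalsize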
+
+\subsection{Living with Libraries}
+\index{Libraries!Living with }
+\index{Living with Libraries }
+\addcontentsline{toc}{subsection}{Living with Libraries}
+
+Some library functions for which source code is unavailable may gratuitously
+allocate and return buffers that contain their results, or require you to pass
+them buffers which they subsequently release. If you have source code for the
+library, by far the best approach is to simply install SMARTALLOC in it,
+particularly since this kind of ill-structured dynamic storage management is
+the source of so many storage leaks. Without source code, however, there's no
+option but to provide a way to bypass SMARTALLOC for the buffers the library
+allocates and/or releases with the standard system functions.
+
+For each function {\it xxx} redefined by SMARTALLOC, a corresponding routine
+named ``{\tt actually}{\it xxx}'' is furnished which provides direct access to
+the underlying system function, as follows:
+
+\begin{quote}
+
+\begin{longtable}{ll}
+\multicolumn{1}{l }{\bf Standard function } & \multicolumn{1}{l }{\bf Direct
+access function } \\
+{{\tt malloc(}{\it size}{\tt )} } & {{\tt actuallymalloc(}{\it size}{\tt )}
+} \\
+{{\tt calloc(}{\it nelem}{\tt ,} {\it elsize}{\tt )} } & {{\tt
+actuallycalloc(}{\it nelem}, {\it elsize}{\tt )} } \\
+{{\tt realloc(}{\it ptr}{\tt ,} {\it size}{\tt )} } & {{\tt
+actuallyrealloc(}{\it ptr}, {\it size}{\tt )} } \\
+{{\tt free(}{\it ptr}{\tt )} } & {{\tt actuallyfree(}{\it ptr}{\tt )} }
+
+\end{longtable}
+
+\end{quote}
+
+For example, suppose there exists a system library function named ``{\tt
+getimage()}'' which reads a raster image file and returns the address of a
+buffer containing it. Since the library routine allocates the image directly
+with {\tt malloc()}, you can't use SMARTALLOC's {\tt free()}, as that call
+expects information placed in the buffer by SMARTALLOC's special version of
+{\tt malloc()}, and hence would report an error. To release the buffer you
+should call {\tt actuallyfree()}, as in this code fragment:
+
+\footnotesize
+\begin{verbatim}
+ struct image *ibuf = getimage("ratpack.img");
+ display_on_screen(ibuf);
+ actuallyfree(ibuf);
+\end{verbatim}
+\normalsize
+
+Conversely, suppose we are to call a library function, ``{\tt putimage()}'',
+which writes an image buffer into a file and then releases the buffer with
+{\tt free()}. Since the system {\tt free()} is being called, we can't pass a
+buffer allocated by SMARTALLOC's allocation routines, as it contains special
+information that the system {\tt free()} doesn't expect to be there. The
+following code uses {\tt actuallymalloc()} to obtain the buffer passed to such
+a routine.
+
+\footnotesize
+\begin{verbatim}
+ struct image *obuf =
+ (struct image *) actuallymalloc(sizeof(struct image));
+ dump_screen_to_image(obuf);
+ putimage("scrdump.img", obuf); /* putimage() releases obuf */
+\end{verbatim}
+\normalsize
+
+It's unlikely you'll need any of the ``actually'' calls except under very odd
+circumstances (in four products and three years, I've only needed them once),
+but they're there for the rare occasions that demand them. Don't use them to
+subvert the error checking of SMARTALLOC; if you want to disable orphaned
+buffer detection, use the {\tt sm\_static(1)} mechanism described above. That
+way you don't forfeit all the other advantages of SMARTALLOC as you do when
+using {\tt actuallymalloc()} and {\tt actuallyfree()}.
+
+\subsection{SMARTALLOC Details}
+\index{SMARTALLOC Details }
+\index{Details!SMARTALLOC }
+\addcontentsline{toc}{subsection}{SMARTALLOC Details}
+
+When you include ``smartall.h'' and define SMARTALLOC, the following standard
+system library functions are redefined with the \#define mechanism to call
+corresponding functions within smartall.c instead. (For details of the
+redefinitions, please refer to smartall.h.)
+
+\footnotesize
+\begin{verbatim}
+ void *malloc(size_t size)
+ void *calloc(size_t nelem, size_t elsize)
+ void *realloc(void *ptr, size_t size)
+ void free(void *ptr)
+ void cfree(void *ptr)
+\end{verbatim}
+\normalsize
+
+{\tt cfree()} is a historical artifact identical to {\tt free()}.
+
+In addition to allocating storage in the same way as the standard library
+functions, the SMARTALLOC versions expand the buffers they allocate to include
+information that identifies where each buffer was allocated and to chain all
+allocated buffers together. When a buffer is released, it is removed from the
+allocated buffer chain. A call on {\tt sm\_dump()} is able, by scanning the
+chain of allocated buffers, to find all orphaned buffers. Buffers allocated
+while {\tt sm\_static(1)} is in effect are specially flagged so that, despite
+appearing on the allocated buffer chain, {\tt sm\_dump()} will not deem them
+orphans.
+
+When a buffer is allocated by {\tt malloc()} or expanded with {\tt realloc()},
+all bytes of newly allocated storage are set to the hexadecimal value 0x55
+(alternating one and zero bits). Note that for {\tt realloc()} this applies
+only to the bytes added at the end of the buffer; the original contents of the
+buffer are not modified. Initializing allocated storage to a distinctive
+nonzero pattern is intended to catch code that erroneously assumes newly
+allocated buffers are cleared to zero; in fact their contents are random. The
+{\tt calloc()} function, defined as returning a buffer cleared to zero,
+continues to zero its buffers under SMARTALLOC.
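+
+As a brief illustration of why the 0x55 fill is useful, consider this sketch
+of the classic ``assumed zero'' bug (the {\tt node} structure is invented):
+
+\footnotesize
+\begin{verbatim}
+struct node { int count; };
+
+int fresh_count(void)
+{
+    struct node *n = (struct node *) malloc(sizeof(struct node));
+    /* Bug: assumes malloc() returns zeroed storage. Under SMARTALLOC
+     * n->count is 0x55555555, so the bad assumption shows up at once;
+     * the fix is calloc() or an explicit initialization. */
+    return n->count;    /* 0x55555555, not 0, under SMARTALLOC */
+}
+\end{verbatim}
+\normalsize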
+
+Buffers obtained with the SMARTALLOC functions contain a special sentinel byte
+at the end of the user data area. This byte is set to a special key value
+based upon the buffer's memory address. When the buffer is released, the key
+is tested and if it has been overwritten an assertion in the {\tt free}
+function will fail. This catches incorrect program code that stores beyond the
+storage allocated for the buffer. At {\tt free()} time the queue links are
+also validated and an assertion failure will occur if the program has
+destroyed them by storing before the start of the allocated storage.
+
+In addition, when a buffer is released with {\tt free()}, its contents are
+immediately destroyed by overwriting them with the hexadecimal pattern 0xAA
+(alternating bits, the one's complement of the initial value pattern). This
+will usually trip up code that keeps a pointer to a buffer that's been freed
+and later attempts to reference data within the released buffer. Incredibly,
+this is {\it legal} in the standard Unix memory allocation package, which
+permits programs to free() buffers, then raise them from the grave with {\tt
+realloc()}. Such program ``logic'' should be fixed, not accommodated, and
+SMARTALLOC brooks no such ``Lazarus buffer'' nonsense.
+
+Some C libraries allow a zero size argument in calls to {\tt malloc()}. Since
+this is far more likely to indicate a program error than a defensible
+programming stratagem, SMARTALLOC disallows it with an assertion.
+
+When the standard library {\tt realloc()} function is called to expand a
+buffer, it attempts to expand the buffer in place if possible, moving it only
+if necessary. Because SMARTALLOC must place its own private storage in the
+buffer and also to aid in error detection, its version of {\tt realloc()}
+always moves and copies the buffer except in the trivial case where the size
+of the buffer is not being changed. By forcing the buffer to move on every
+call and destroying the contents of the old buffer when it is released,
+SMARTALLOC traps programs which keep pointers into a buffer across a call on
+{\tt realloc()} which may move it. This strategy may prove very costly to
+programs which make extensive use of {\tt realloc()}. If this proves to be a
+problem, such programs may wish to use {\tt actuallymalloc()}, {\tt
+actuallyrealloc()}, and {\tt actuallyfree()} for such frequently-adjusted
+buffers, trading error detection for performance. Although not specified in
+the System V Interface Definition, many C library implementations of {\tt
+realloc()} permit an old buffer argument of NULL, causing {\tt realloc()} to
+allocate a new buffer. The SMARTALLOC version permits this.
+
+\subsection{When SMARTALLOC is Disabled}
+\index{When SMARTALLOC is Disabled }
+\index{Disabled!When SMARTALLOC is }
+\addcontentsline{toc}{subsection}{When SMARTALLOC is Disabled}
+
+When SMARTALLOC is disabled by compiling a program with the symbol SMARTALLOC
+not defined, calls on the functions otherwise redefined by SMARTALLOC go
+directly to the system functions. In addition, compile-time definitions
+translate calls on the ``{\tt actually}...{\tt ()}'' functions into the
+corresponding library calls; ``{\tt actuallymalloc(100)}'', for example,
+compiles into ``{\tt malloc(100)}''. The two special SMARTALLOC functions,
+{\tt sm\_dump()} and {\tt sm\_static()}, are defined to generate no code
+(hence the null statement). Finally, if SMARTALLOC is not defined, compilation
+of the file smartall.c generates no code or data at all, effectively removing
+it from the program even if named in the link instructions.
+
+Thus, except for unusual circumstances, a program that works with SMARTALLOC
+defined for testing should require no changes when built without it for
+production release.
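+
+Conceptually, the pass-through definitions amount to something like the
+following sketch (see smartall.h for the actual definitions):
+
+\footnotesize
+\begin{verbatim}
+#ifndef SMARTALLOC
+/* The "actually" names collapse to the standard library calls, and
+ * the two special functions expand to null statements. */
+#define actuallymalloc(size)          malloc(size)
+#define actuallycalloc(nelem, size)   calloc(nelem, size)
+#define actuallyrealloc(ptr, size)    realloc(ptr, size)
+#define actuallyfree(ptr)             free(ptr)
+#define sm_dump(datadump)             /* null statement */
+#define sm_static(mode)               /* null statement */
+#endif
+\end{verbatim}
+\normalsize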
+
+\subsection{The {\tt alloc()} Function}
+\index{Function!alloc }
+\index{Alloc() Function }
+\addcontentsline{toc}{subsection}{alloc() Function}
+
+Many programs I've worked on use very few direct calls to {\tt malloc()},
+using the identically declared {\tt alloc()} function instead. {\tt alloc()} detects
+out-of-memory conditions and aborts, removing the need for error checking on
+every call of {\tt malloc()} (and the temptation to skip checking for
+out-of-memory).
+
+As a convenience, SMARTALLOC supplies a compatible version of {\tt alloc()} in
+the file alloc.c, with its definition in the file alloc.h. This version of
+{\tt alloc()} is sensitive to the definition of SMARTALLOC and cooperates with
+SMARTALLOC's orphaned buffer detection. In addition, when SMARTALLOC is
+defined and {\tt alloc()} detects an out of memory condition, it takes
+advantage of the SMARTALLOC diagnostic information to identify the file and
+line number of the call on {\tt alloc()} that failed.
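+
+The idea behind {\tt alloc()} can be captured in a few lines; this is only a
+sketch of the concept, not the code shipped in alloc.c:
+
+\footnotesize
+\begin{verbatim}
+#include <stdio.h>
+#include <stdlib.h>
+
+/* Allocate storage or abort: callers never have to test for NULL. */
+void *alloc(size_t size)
+{
+    void *p = malloc(size);
+    if (p == NULL) {
+        fprintf(stderr, "alloc: out of memory (%lu bytes)\n",
+                (unsigned long) size);
+        abort();
+    }
+    return p;
+}
+\end{verbatim}
+\normalsize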
+
+\subsection{Overlays and Underhandedness}
+\index{Underhandedness!Overlays and }
+\index{Overlays and Underhandedness }
+\addcontentsline{toc}{subsection}{Overlays and Underhandedness}
+
+String constants in the C language are considered to be static arrays of
+characters accessed through a pointer constant. The arrays are potentially
+writable even though their pointer is a constant. SMARTALLOC uses the
+compile-time definition {\tt \_\_FILE\_\_} to obtain the name of the file in
+which a call on buffer allocation was performed. Rather than reserve space in
+a buffer to save this information, SMARTALLOC simply stores the pointer to the
+compiled-in text of the file name. This works fine as long as the program does
+not overlay its data among modules. If data are overlayed, the area of memory
+which contained the file name at the time it was saved in the buffer may
+contain something else entirely when {\tt sm\_dump()} gets around to using the
+pointer to edit the file name which allocated the buffer.
+
+If you want to use SMARTALLOC in a program with overlayed data, you'll have to
+modify smartall.c to either copy the file name to a fixed-length field added
+to the {\tt abufhead} structure, or else allocate storage with {\tt malloc()},
+copy the file name there, and set the {\tt abfname} pointer to that buffer,
+then remember to release the buffer in {\tt sm\_free}. Either of these
+approaches is wasteful of storage and time, and should be considered only if
+there is no alternative. Since most initial debugging is done in non-overlayed
+environments, the restrictions on SMARTALLOC with data overlaying may never
+prove a problem. Note that conventional overlaying of code, by far the most
+common form of overlaying, poses no problems for SMARTALLOC; you need only be
+concerned if you're using exotic tools for data overlaying on MS-DOS or other
+address-space-challenged systems.
+
+Since a C language ``constant'' string can actually be written into, most C
+compilers generate a unique copy of each string used in a module, even if the
+same constant string appears many times. In modules that contain many calls on
+allocation functions, this results in substantial wasted storage for the
+strings that identify the file name. If your compiler permits optimization of
+multiple occurrences of constant strings, enabling this mode will eliminate
+the overhead for these strings. Of course, it's up to you to make sure
+choosing this compiler mode won't wreak havoc on some other part of your
+program.
+
+\subsection{Test and Demonstration Program}
+\index{Test and Demonstration Program }
+\index{Program!Test and Demonstration }
+\addcontentsline{toc}{subsection}{Test and Demonstration Program}
+
+A test and demonstration program, smtest.c, is supplied with SMARTALLOC. You
+can build this program with the Makefile included. Please refer to the
+comments in smtest.c and the Makefile for information on this program. If
+you're attempting to use SMARTALLOC on a new machine or with a new compiler or
+operating system, it's a wise first step to check it out with smtest first.
+
+\subsection{Invitation to the Hack}
+\index{Hack!Invitation to the }
+\index{Invitation to the Hack }
+\addcontentsline{toc}{subsection}{Invitation to the Hack}
+
+SMARTALLOC is not intended to be a panacea for storage management problems,
+nor is it universally applicable or effective; it's another weapon in the
+arsenal of the defensive professional programmer attempting to create reliable
+products. It represents the current state of evolution of expedient debug code
+which has been used in several commercial software products which have,
+collectively, sold more than a third of a million copies in the retail market,
+and can be expected to continue to develop through time as it is applied to
+ever more demanding projects.
+
+The version of SMARTALLOC here has been tested on a Sun SPARCStation, Silicon
+Graphics Indigo2, and on MS-DOS using both Borland and Microsoft C. Moving
+from compiler to compiler requires the usual small changes to resolve disputes
+about prototyping of functions, whether the type returned by buffer allocation
+is {\tt char\ *} or {\tt void\ *}, and so forth, but following those changes
+it works in a variety of environments. I hope you'll find SMARTALLOC as useful
+for your projects as I've found it in mine.
+
+\section{\elink{Download smartall.zip}{http://www.fourmilab.ch/smartall/smartall.zip}
+(Zipped archive)}
+\index{Archive!Download smartall.zip Zipped }
+\index{Download smartall.zip (Zipped archive) }
+\addcontentsline{toc}{section}{Download smartall.zip (Zipped archive)}
+
+SMARTALLOC is provided as
+\elink{smartall.zip}{http://www.fourmilab.ch/smartall/smartall.zip}, a
+\elink{Zipped}{http://www.pkware.com/} archive containing source code,
+documentation, and a {\tt Makefile} to build the software under Unix.
+
+\subsection{Copying}
+\index{Copying }
+\addcontentsline{toc}{subsection}{Copying}
+
+\begin{quote}
+SMARTALLOC is in the public domain. Permission to use, copy, modify, and
+distribute this software and its documentation for any purpose and without fee
+is hereby granted, without any conditions or restrictions. This software is
+provided ``as is'' without express or implied warranty.
+\end{quote}
+
+{\it
+\elink{by John Walker}{http://www.fourmilab.ch}
+October 30th, 1998 }
--- /dev/null
+%%
+%%
+
+\chapter{Storage Daemon Design}
+\label{_ChapterStart3}
+\index{Storage Daemon Design }
+\index{Design!Storage Daemon }
+\addcontentsline{toc}{section}{Storage Daemon Design}
+
+This chapter is intended to be a technical discussion of the Storage daemon
+services and as such is not targeted at end users but rather at developers and
+system administrators that want or need to know more of the working details of
+{\bf Bacula}.
+
+This document is somewhat out of date.
+
+\section{SD Design Introduction}
+\index{Introduction!SD Design }
+\index{SD Design Introduction }
+\addcontentsline{toc}{section}{SD Design Introduction}
+
+The Bacula Storage daemon provides storage resources to a Bacula installation.
+An individual Storage daemon is associated with a physical permanent storage
+device (for example, a tape drive, CD writer, tape changer or jukebox, etc.),
+and may employ auxiliary storage resources (such as space on a hard disk file
+system) to increase performance and/or optimize use of the permanent storage
+medium.
+
+Any number of storage daemons may be run on a given machine, each associated
+with an individual storage device connected to it, and Bacula operations may
+employ storage daemons on any number of hosts connected by a network, local or
+remote. The ability to employ remote storage daemons (with appropriate
+security measures) permits automatic off-site backup, possibly to publicly
+available backup repositories.
+
+\section{SD Development Outline}
+\index{Outline!SD Development }
+\index{SD Development Outline }
+\addcontentsline{toc}{section}{SD Development Outline}
+
+In order to provide a high performance backup and restore solution that scales
+to very large capacity devices and networks, the storage daemon must be able
+to extract as much performance as possible from the storage device and
+network with which it interacts. To accomplish this, storage daemons will eventually
+have to sacrifice simplicity and painless portability in favor of techniques
+which improve performance. My goal in designing the storage daemon protocol
+and developing the initial prototype storage daemon is to provide for these
+additions in the future, while implementing an initial storage daemon which is
+very simple and portable to almost any POSIX-like environment. This original
+storage daemon (and its evolved descendants) can serve as a portable solution
+for non-demanding backup requirements (such as single servers of modest size,
+individual machines, or small local networks), while serving as the starting
+point for development of higher performance configurable derivatives which use
+techniques such as POSIX threads, shared memory, asynchronous I/O, buffering
+to high-speed intermediate media, and support for tape changers and jukeboxes.
+
+
+\section{SD Connections and Sessions}
+\index{Sessions!SD Connections and }
+\index{SD Connections and Sessions }
+\addcontentsline{toc}{section}{SD Connections and Sessions}
+
+A client connects to a storage server by initiating a conventional TCP
+connection. The storage server accepts the connection unless its maximum
+number of connections has been reached or the specified host is not granted
+access to the storage server. Once a connection has been opened, the client
+may make any number of Query requests, and/or initiate (if permitted) one or
+more Append sessions (which transmit data to be stored by the storage daemon)
+and/or Read sessions (which retrieve data from the storage daemon).
+
+Most requests and replies sent across the connection are simple ASCII strings,
+with status replies prefixed by a four digit status code for easier parsing.
+Binary data appear in the blocks stored on and retrieved from storage. Any
+request may result in a single-line status reply of ``{\tt 3201\ Notification\
+pending}'', which indicates the client must send a ``Query notification''
+request to retrieve one or more notifications posted to it. Once the
+notifications have been returned, the client may then resubmit the request
+which resulted in the 3201 status.
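+
+As a small illustration of this convention (a hypothetical helper, not
+actual Bacula source), a client can recover the status code by reading the
+first four digits of the reply:
+
+\footnotesize
+\begin{verbatim}
+#include <stdio.h>
+
+/* Return the 4-digit status code from a reply such as
+ * "3201 Notification pending", or -1 on a malformed line. */
+int reply_status(const char *reply)
+{
+    int code;
+    if (sscanf(reply, "%4d", &code) != 1)
+        return -1;
+    return code;    /* e.g. 3000, 3201, 3502 ... */
+}
+\end{verbatim}
+\normalsize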
+
+The following descriptions omit common error codes, yet to be defined, which
+can occur from most or many requests due to events like media errors,
+restarting of the storage daemon, etc. These details, along with a
+comprehensive list of status codes and the requests that can produce them,
+will be filled in in a future update to this document.
+
+\subsection{SD Append Requests}
+\index{Requests!SD Append }
+\index{SD Append Requests }
+\addcontentsline{toc}{subsection}{SD Append Requests}
+
+\begin{description}
+
+\item [{append open session = \lt{}JobId\gt{} [ \lt{}Password\gt{} ] }]
+ A data append session is opened with the Job ID given by {\it JobId} and
+the client password (if required) given by {\it Password}. If the session is
+successfully opened, a status of {\tt 3000\ OK} is returned with a ``{\tt
+ticket\ =\ }{\it number}'' reply used to identify subsequent messages in the
+session. If too many sessions are open, or a conflicting session (for
+example, a read in progress when simultaneous read and append sessions are
+not permitted), a status of ``{\tt 3502\ Volume\ busy}'' is returned. If no
+volume is mounted, or the volume mounted cannot be appended to, a status of
+``{\tt 3503\ Volume\ not\ mounted}'' is returned.
+
+\item [append data = \lt{}ticket-number\gt{} ]
+ If the append data is accepted, a status of {\tt 3000\ OK data address =
+\lt{}IPaddress\gt{} port = \lt{}port\gt{}} is returned, where the {\tt
+IPaddress} and {\tt port} specify the IP address and port number of the data
+channel. Error status codes are {\tt 3504\ Invalid\ ticket\ number} and {\tt
+3505\ Session\ aborted}, the latter of which indicates the entire append
+session has failed due to a daemon or media error.
+
+Once the File daemon has established the connection to the data channel
+opened by the Storage daemon, it will transfer a header packet followed by
+any number of data packets. The header packet is of the form:
+
+{\tt \lt{}file-index\gt{} \lt{}stream-id\gt{} \lt{}info\gt{}}
+
+The details are specified in the
+\ilink{Daemon Protocol}{_ChapterStart2} section of this
+document.
+
+\item [*append abort session = \lt{}ticket-number\gt{} ]
+ The open append session with ticket {\it ticket-number} is aborted; any blocks
+not yet written to permanent media are discarded. Subsequent attempts to
+append data to the session will receive an error status of {\tt 3505\
+Session\ aborted}.
+
+\item [append end session = \lt{}ticket-number\gt{} ]
+ The open append session with ticket {\it ticket-number} is marked complete; no
+further blocks may be appended. The storage daemon will give priority to
+saving any buffered blocks from this session to permanent media as soon as
+possible.
+
+\item [append close session = \lt{}ticket-number\gt{} ]
+ The append session with ticket {\it ticket-number} is closed. This message
+does not receive a {\tt 3000\ OK} reply until all of the contents of the
+session are stored on permanent media, at which time said reply is given,
+followed by a
+list of volumes, from first to last, which contain blocks from the session,
+along with the first and last file and block on each containing session data
+and the volume session key identifying data from that session in lines with
+the following format:
+
+{\tt Volume = \lt{}Volume-id\gt{} \lt{}start-file\gt{}
+\lt{}start-block\gt{} \lt{}end-file\gt{} \lt{}end-block\gt{}
+\lt{}volume-session-id\gt{}}
+
+where {\it Volume-id} is the volume label, {\it start-file} and {\it
+start-block} are the file and block containing the first data from that
+session on the volume, {\it end-file} and {\it end-block} are the file and
+block with the last data from the session on the volume, and {\it
+volume-session-id} is the volume session ID for blocks from the session
+stored on that volume.
+\end{description}
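+
+For illustration only, a close reply for a session whose data spans two
+volumes might look like this (the volume labels and numbers are invented):
+
+\footnotesize
+\begin{verbatim}
+3000 OK
+Volume = TestVol001 0 12 0 145 1
+Volume = TestVol002 0 0 0 37 1
+\end{verbatim}
+\normalsize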
+
+\subsection{SD Read Requests}
+\index{SD Read Requests }
+\index{Requests!SD Read }
+\addcontentsline{toc}{subsection}{SD Read Requests}
+
+\begin{description}
+
+\item [Read open session = \lt{}JobId\gt{} \lt{}Volume-id\gt{}
+ \lt{}start-file\gt{} \lt{}start-block\gt{} \lt{}end-file\gt{}
+ \lt{}end-block\gt{} \lt{}volume-session-id\gt{} \lt{}password\gt{} ]
+where {\it Volume-id} is the volume label, {\it start-file} and {\it
+start-block} are the file and block containing the first data from that
+session on the volume, {\it end-file} and {\it end-block} are the file and
+block with the last data from the session on the volume and {\it
+volume-session-id} is the volume session ID for blocks from the session
+stored on that volume.
+
+If the session is successfully opened, a status of
+
+{\tt 3100\ OK\ Ticket\ =\ }{\it number}
+
+is returned with a reply used to identify subsequent messages in the session.
+If too many sessions are open, or a conflicting session (for example, an
+append in progress when simultaneous read and append sessions are not
+permitted), a status of ``{\tt 3502\ Volume\ busy}'' is returned. If no
+volume is mounted, or the volume mounted cannot be read, a status of
+``{\tt 3503\ Volume\ not\ mounted}'' is returned. If no block with the given
+volume session ID and the correct client ID number appears in the given first
+file and block for the volume, a status of ``{\tt 3505\ Session\ not\
+found}'' is returned.
+
+\item [Read data = \lt{}Ticket\gt{} \lt{}Block\gt{} ]
+ The specified Block of data from the open read session with the specified
+Ticket number is returned, with a status of {\tt 3000\ OK} followed by a
+``{\tt Length\ =\ }{\it size}'' line giving the length in bytes of the block
+data which immediately follows. Blocks must be retrieved in ascending order,
+but blocks may be skipped. If a block number greater than the largest stored
+on the volume is requested, a status of ``{\tt 3201\ End\ of\ volume}'' is
+returned. If a block number greater than the largest in the file is
+requested, a status of ``{\tt 3401\ End\ of\ file}'' is returned.
+
+\item [Read close session = \lt{}Ticket\gt{} ]
+ The read session with the specified Ticket number is closed. A read session
+may be closed at any time; you needn't read all its blocks before closing it.
+\end{description}
+
+{\it by
+\elink{John Walker}{http://www.fourmilab.ch/}
+January 30th, MM }
+
+\section{SD Data Structures}
+\index{SD Data Structures}
+\addcontentsline{toc}{section}{SD Data Structures}
+
+In the Storage daemon, there is a Device resource (i.e. from the conf file)
+that describes each physical device. When the physical device is used, it
+is controlled by the DEVICE structure (defined in dev.h), and typically
+referred to as dev in the C++ code. Anyone writing or reading a physical
+device must ultimately get a lock on the DEVICE structure -- this controls
+the device. However, multiple Jobs (defined by a JCR structure, src/jcr.h)
+can be writing to a physical DEVICE at the same time (of course they are
+sequenced by locking the DEVICE structure). There are a lot of job
+dependent "device" variables that may be different for each Job, such as
+spooling (one job may spool and another may not, and when a job is
+spooling, it must have an I/O packet open; each job has its own record and
+block structures, ...), so there is a device control record, or DCR, that
+is the primary way of interfacing to the physical device. The DCR contains
+all the job specific data as well as a pointer to the Device resource
+(DEVRES structure) and the physical DEVICE structure.
+
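+Sketched as C++ declarations, the relationships look roughly like this (the
+real definitions in dev.h and src/jcr.h contain far more fields):
+
+\begin{verbatim}
+struct DEVRES;                /* Device resource from the conf file   */
+struct DEVICE;                /* physical device control (dev.h)      */
+struct JCR;                   /* job control record (src/jcr.h)       */
+
+/* Device control record: one job's interface to one physical device */
+struct DCR {
+   JCR    *jcr;               /* job using this device                */
+   DEVICE *dev;               /* physical device, locked for access   */
+   DEVRES *device;            /* Device resource describing it        */
+   /* ... per-job record, block and spooling state ... */
+};
+\end{verbatim}
+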
+Now if a job is writing to two devices (it could be writing two separate
+streams to the same device), it must have two DCRs. Today, the code only
+permits one. This won't be hard to change, but it is new code.
+
+Today, with three jobs (threads) and two physical devices, each job
+writes to only one device:
+
+\begin{verbatim}
+ Job1 -> DCR1 -> DEVICE1
+ Job2 -> DCR2 -> DEVICE1
+ Job3 -> DCR3 -> DEVICE2
+\end{verbatim}
+
+To be implemented: three jobs and three physical devices, where
+job1 writes simultaneously to three devices:
+
+\begin{verbatim}
+ Job1 -> DCR1 -> DEVICE1
+ -> DCR4 -> DEVICE2
+ -> DCR5 -> DEVICE3
+ Job2 -> DCR2 -> DEVICE1
+ Job3 -> DCR3 -> DEVICE2
+
+ Job = job control record
+ DCR = Job control data for a specific device
+ DEVICE = Device-only control data
+\end{verbatim}
+
--- /dev/null
+%%
+%%
+
+%\author{Landon Fuller}
+%\title{Bacula TLS Additions}
+
+\chapter{TLS}
+\label{_Chapter_TLS}
+\index{TLS}
+
+Written by Landon Fuller
+
+\section{Introduction to TLS}
+\index{TLS Introduction}
+\index{Introduction!TLS}
+\addcontentsline{toc}{section}{TLS Introduction}
+
+This patch includes all the back-end code necessary to add complete TLS
+data encryption support to Bacula. In addition, support for TLS in
+Console/Director communications has been added as a proof of concept.
+Adding support for the remaining daemons will be straightforward.
+Supported features of this patchset include:
+
+\begin{itemize}
+\item Client/Server TLS Requirement Negotiation
+\item TLSv1 Connections with Server and Client Certificate
+Validation
+\item Forward Secrecy Support via Diffie-Hellman Ephemeral Keying
+\end{itemize}
+
+This document will refer to both ``server'' and ``client'' contexts. These
+terms refer to the accepting and initiating peer, respectively.
+
+Diffie-Hellman anonymous ciphers are not supported by this patchset. The
+use of DH anonymous ciphers increases the code complexity and places
+explicit trust upon the two-way Cram-MD5 implementation. Cram-MD5 is
+subject to known-plaintext attacks, and should be considered considerably
+less secure than PKI certificate-based authentication.
+
+Appropriate autoconf macros have been added to detect and use OpenSSL. Two
+additional preprocessor defines have been added: \emph{HAVE\_TLS} and
+\emph{HAVE\_OPENSSL}. All changes not specific to OpenSSL rely on
+\emph{HAVE\_TLS}. OpenSSL-specific code is constrained to
+\emph{src/lib/tls.c} to facilitate the support of alternative TLS
+implementations.
+
+\section{New Configuration Directives}
+\index{TLS Configuration Directives}
+\index{Directives!TLS Configuration}
+\addcontentsline{toc}{section}{New Configuration Directives}
+
+Additional configuration directives have been added to both the Console and
+Director resources. These new directives, illustrated by the sample
+resource configuration after the list, are defined as follows:
+
+\begin{itemize}
+\item \underline{TLS Enable} \emph{(yes/no)}
+Enable TLS support.
+
+\item \underline{TLS Require} \emph{(yes/no)}
+Require TLS connections.
+
+\item \underline{TLS Certificate} \emph{(path)}
+Path to PEM encoded TLS certificate. Used as either a client or server
+certificate.
+
+\item \underline{TLS Key} \emph{(path)}
+Path to PEM encoded TLS private key. Must correspond with the TLS
+certificate.
+
+\item \underline{TLS Verify Peer} \emph{(yes/no)}
+Verify peer certificate. Instructs the server to request and verify the
+client's x509 certificate. Any client certificate signed by a known CA
+will be accepted unless the TLS Allowed CN configuration directive is used.
+Not valid in a client context.
+
+\item \underline{TLS Allowed CN} \emph{(string list)}
+Common name attribute of allowed peer certificates. If this directive is
+specified, all client certificates will be verified against this list.
+This directive may be specified more than once. Not valid in a client
+context.
+
+\item \underline{TLS CA Certificate File} \emph{(path)}
+Path to PEM encoded TLS CA certificate(s). Multiple certificates are
+permitted in the file. One of \emph{TLS CA Certificate File} or \emph{TLS
+CA Certificate Dir} is required in a server context if \emph{TLS Verify
+Peer} is also specified, and one is always required in a client context.
+
+\item \underline{TLS CA Certificate Dir} \emph{(path)}
+Path to TLS CA certificate directory. In the current implementation,
+certificates must be stored PEM encoded with OpenSSL-compatible hashes.
+One of \emph{TLS CA Certificate File} or \emph{TLS CA Certificate Dir} is
+required in a server context if \emph{TLS Verify Peer} is also specified,
+and one is always required in a client context.
+
+\item \underline{TLS DH File} \emph{(path)}
+Path to PEM encoded Diffie-Hellman parameter file. If this directive is
+specified, DH ephemeral keying will be enabled, allowing for forward
+secrecy of communications. This directive is only valid within a server
+context. To generate the parameter file, you may use openssl:
+\footnotesize
+\begin{verbatim}
+openssl dhparam -out dh1024.pem -5 1024
+\end{verbatim}
+\normalsize
+\end{itemize}
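+
+Taken together, these directives might appear in a Director resource as
+follows; all names and paths are illustrative only:
+
+\footnotesize
+\begin{verbatim}
+Director {
+  Name = example-dir
+  ...
+  TLS Enable = yes
+  TLS Require = yes
+  TLS Verify Peer = yes
+  TLS Allowed CN = "console.example.com"
+  TLS Certificate = /etc/bacula/tls/dir-cert.pem
+  TLS Key = /etc/bacula/tls/dir-key.pem
+  TLS CA Certificate File = /etc/bacula/tls/ca.pem
+  TLS DH File = /etc/bacula/tls/dh1024.pem
+}
+\end{verbatim}
+\normalsize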
+
+\section{TLS API Implementation}
+\index{TLS API Implementation}
+\index{API Implementation!TLS}
+\addcontentsline{toc}{section}{TLS API Implementation}
+
+To facilitate the use of additional TLS libraries, all OpenSSL-specific
+code has been implemented within \emph{src/lib/tls.c}. In turn, a generic
+TLS API is exported.
+
+\subsection{Library Initialization and Cleanup}
+\index{Library Initialization and Cleanup}
+\index{Initialization and Cleanup!Library}
+\addcontentsline{toc}{subsection}{Library Initialization and Cleanup}
+
+\footnotesize
+\begin{verbatim}
+int init_tls (void);
+\end{verbatim}
+\normalsize
+
+Performs TLS library initialization, including seeding of the PRNG. PRNG
+seeding has not yet been implemented for win32.
+
+\footnotesize
+\begin{verbatim}
+int cleanup_tls (void);
+\end{verbatim}
+\normalsize
+
+Performs TLS library cleanup.
+
+\subsection{Manipulating TLS Contexts}
+\index{TLS Context Manipulation}
+\index{Contexts!Manipulating TLS}
+\addcontentsline{toc}{subsection}{Manipulating TLS Contexts}
+
+\footnotesize
+\begin{verbatim}
+TLS_CONTEXT *new_tls_context (const char *ca_certfile,
+ const char *ca_certdir, const char *certfile,
+ const char *keyfile, const char *dhfile, bool verify_peer);
+\end{verbatim}
+\normalsize
+
+Allocates and initializes a new opaque \emph{TLS\_CONTEXT} structure. The
+\emph{TLS\_CONTEXT} structure maintains default TLS settings from which
+\emph{TLS\_CONNECTION} structures are instantiated. In the future the
+\emph{TLS\_CONTEXT} structure may be used to maintain the TLS session
+cache. \emph{ca\_certfile} and \emph{ca\_certdir} arguments are used to
+initialize the CA verification stores. The \emph{certfile} and
+\emph{keyfile} arguments are used to initialize the local certificate and
+private key. If \emph{dhfile} is non-NULL, it is used to initialize
+Diffie-Hellman ephemeral keying. If \emph{verify\_peer} is \emph{true},
+client certificate validation is enabled.
+
+\footnotesize
+\begin{verbatim}
+void free_tls_context (TLS_CONTEXT *ctx);
+\end{verbatim}
+\normalsize
+
+Deallocates a previously allocated \emph{TLS\_CONTEXT} structure.
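+
+A typical context lifecycle is sketched below; the paths are illustrative
+only and error checking is omitted:
+
+\footnotesize
+\begin{verbatim}
+init_tls();                           /* once, at startup */
+
+TLS_CONTEXT *ctx = new_tls_context(
+      "/etc/bacula/tls/ca.pem",       /* ca_certfile */
+      NULL,                           /* ca_certdir */
+      "/etc/bacula/tls/cert.pem",     /* certfile */
+      "/etc/bacula/tls/key.pem",      /* keyfile */
+      NULL,                           /* dhfile: no DH ephemeral keying */
+      true);                          /* verify_peer */
+
+/* ... instantiate TLS_CONNECTIONs from ctx ... */
+
+free_tls_context(ctx);
+cleanup_tls();                        /* once, at shutdown */
+\end{verbatim}
+\normalsize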
+
+\subsection{Performing Post-Connection Verification}
+\index{TLS Post-Connection Verification}
+\index{Verification!TLS Post-Connection}
+\addcontentsline{toc}{subsection}{Performing Post-Connection Verification}
+
+\footnotesize
+\begin{verbatim}
+bool tls_postconnect_verify_host (TLS_CONNECTION *tls, const char *host);
+\end{verbatim}
+\normalsize
+
+Performs post-connection verification of the peer-supplied x509
+certificate. Checks whether the \emph{subjectAltName} and
+\emph{commonName} attributes match the supplied \emph{host} string.
+Returns \emph{true} if there is a match, \emph{false} otherwise.
+
+\footnotesize
+\begin{verbatim}
+bool tls_postconnect_verify_cn (TLS_CONNECTION *tls, alist *verify_list);
+\end{verbatim}
+\normalsize
+
+Performs post-connection verification of the peer-supplied x509
+certificate. Checks whether the \emph{commonName} attribute matches any
+strings supplied via the \emph{verify\_list} parameter. Returns
+\emph{true} if there is a match, \emph{false} otherwise.
+
+\subsection{Manipulating TLS Connections}
+\index{TLS Connection Manipulation}
+\index{Connections!Manipulating TLS}
+\addcontentsline{toc}{subsection}{Manipulating TLS Connections}
+
+\footnotesize
+\begin{verbatim}
+TLS_CONNECTION *new_tls_connection (TLS_CONTEXT *ctx, int fd);
+\end{verbatim}
+\normalsize
+
+Allocates and initializes a new \emph{TLS\_CONNECTION} structure with
+context \emph{ctx} and file descriptor \emph{fd}.
+
+\footnotesize
+\begin{verbatim}
+void free_tls_connection (TLS_CONNECTION *tls);
+\end{verbatim}
+\normalsize
+
+Deallocates memory associated with the \emph{tls} structure.
+
+\footnotesize
+\begin{verbatim}
+bool tls_bsock_connect (BSOCK *bsock);
+\end{verbatim}
+\normalsize
+
+Negotiates a TLS client connection via \emph{bsock}. Returns \emph{true}
+if successful, \emph{false} otherwise. Will fail if there is a TLS
+protocol error or an invalid certificate is presented.
+
+\footnotesize
+\begin{verbatim}
+bool tls_bsock_accept (BSOCK *bsock);
+\end{verbatim}
+\normalsize
+
+Accepts a TLS client connection via \emph{bsock}. Returns \emph{true} if
+successful, \emph{false} otherwise. Will fail if there is a TLS protocol
+error or an invalid certificate is presented.
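+
+Putting these calls together, a minimal sketch of client-side negotiation
+might look as follows, assuming the socket descriptor is available as
+bsock-\gt{}fd, that the new \emph{TLS\_CONNECTION} has been attached to
+the \emph{bsock} (the exact wiring is part of the Bnet changes described
+below), and that the host name is illustrative:
+
+\footnotesize
+\begin{verbatim}
+TLS_CONNECTION *tls = new_tls_connection(ctx, bsock->fd);
+/* attach tls to bsock here (see the Bnet API changes below) */
+if (!tls_bsock_connect(bsock)) {
+   /* TLS protocol error or invalid certificate */
+}
+if (!tls_postconnect_verify_host(tls, "server.example.com")) {
+   /* peer certificate does not match the expected host */
+}
+/* ... exchange data, then free the connection ... */
+free_tls_connection(tls);
+\end{verbatim}
+\normalsize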
+
+\footnotesize
+\begin{verbatim}
+bool tls_bsock_shutdown (BSOCK *bsock);
+\end{verbatim}
+\normalsize
+
+Issues a blocking TLS shutdown request to the peer via \emph{bsock}. This
+function does not necessarily wait for the peer's reply.
+
+\footnotesize
+\begin{verbatim}
+int tls_bsock_writen (BSOCK *bsock, char *ptr, int32_t nbytes);
+\end{verbatim}
+\normalsize
+
+Writes \emph{nbytes} from \emph{ptr} via the \emph{TLS\_CONNECTION}
+associated with \emph{bsock}. Due to OpenSSL's handling of \emph{EINTR},
+\emph{bsock} is set non-blocking at the start of the function, and restored
+to its original blocking state before the function returns. Less than
+\emph{nbytes} may be written if an error occurs. The actual number of
+bytes written will be returned.
+
+\footnotesize
+\begin{verbatim}
+int tls_bsock_readn (BSOCK *bsock, char *ptr, int32_t nbytes);
+\end{verbatim}
+\normalsize
+
+Reads \emph{nbytes} from the \emph{TLS\_CONNECTION} associated with
+\emph{bsock} and stores the result in \emph{ptr}. Due to OpenSSL's
+handling of \emph{EINTR}, \emph{bsock} is set non-blocking at the start of
+the function, and restored to its original blocking state before the
+function returns. Less than \emph{nbytes} may be read if an error occurs.
+The actual number of bytes read will be returned.
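+
+For example, a caller can detect an error by comparing the return value
+with the requested byte count (a sketch; \emph{msg} and \emph{msglen} are
+assumed to be defined by the caller):
+
+\footnotesize
+\begin{verbatim}
+int nwritten = tls_bsock_writen(bsock, msg, msglen);
+if (nwritten < msglen) {
+   /* an error occurred: fewer than msglen bytes were written */
+}
+\end{verbatim}
+\normalsize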
+
+\section{Bnet API Changes}
+\index{Bnet API Changes}
+\index{API Changes!Bnet}
+\addcontentsline{toc}{section}{Bnet API Changes}
+
+A minimal number of changes were required in the Bnet socket API. The BSOCK
+structure was expanded to include an associated TLS\_CONNECTION structure,
+as well as a flag to designate the current blocking state of the socket.
+The blocking state flag is required for win32, where it does not appear
+possible to discern the current blocking state of a socket.
+
+\subsection{Negotiating a TLS Connection}
+\index{Negotiating a TLS Connection}
+\index{TLS Connection!Negotiating}
+\addcontentsline{toc}{subsection}{Negotiating a TLS Connection}
+
+\emph{bnet\_tls\_server()} and \emph{bnet\_tls\_client()} were both
+implemented using the new TLS API as follows:
+
+\footnotesize
+\begin{verbatim}
+int bnet_tls_client(TLS_CONTEXT *ctx, BSOCK * bsock);
+\end{verbatim}
+\normalsize
+
+Negotiates a TLS session via \emph{bsock} using the settings from
+\emph{ctx}. Returns 1 if successful, 0 otherwise.
+
+\footnotesize
+\begin{verbatim}
+int bnet_tls_server(TLS_CONTEXT *ctx, BSOCK * bsock, alist *verify_list);
+\end{verbatim}
+\normalsize
+
+Accepts a TLS client session via \emph{bsock} using the settings from
+\emph{ctx}. If \emph{verify\_list} is non-NULL, it is passed to
+\emph{tls\_postconnect\_verify\_cn()} for client certificate verification.
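+
+For example, the accepting and initiating sides might use these calls as
+follows (a sketch; \emph{ctx}, \emph{bsock}, and \emph{verify\_list} are
+assumed to have been prepared by the caller):
+
+\footnotesize
+\begin{verbatim}
+/* Accepting (server) side: negotiate and verify the client CN list */
+if (!bnet_tls_server(ctx, bsock, verify_list)) {
+   /* negotiation or certificate verification failed */
+}
+
+/* Initiating (client) side */
+if (!bnet_tls_client(ctx, bsock)) {
+   /* negotiation failed */
+}
+\end{verbatim}
+\normalsize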
+
+\subsection{Manipulating Socket Blocking State}
+\index{Manipulating Socket Blocking State}
+\index{Socket Blocking State!Manipulating}
+\index{Blocking State!Socket!Manipulating}
+\addcontentsline{toc}{subsection}{Manipulating Socket Blocking State}
+
+Three functions were added for manipulating the blocking state of a socket
+on both Win32 and Unix-like systems. The Win32 code was written according
+to the MSDN documentation, but has not been tested.
+
+These functions are prototyped as follows:
+
+\footnotesize
+\begin{verbatim}
+int bnet_set_nonblocking (BSOCK *bsock);
+\end{verbatim}
+\normalsize
+
+Enables non-blocking I/O on the socket associated with \emph{bsock}.
+Returns a copy of the socket flags prior to modification.
+
+\footnotesize
+\begin{verbatim}
+int bnet_set_blocking (BSOCK *bsock);
+\end{verbatim}
+\normalsize
+
+Enables blocking I/O on the socket associated with \emph{bsock}. Returns a
+copy of the socket flags prior to modification.
+
+\footnotesize
+\begin{verbatim}
+void bnet_restore_blocking (BSOCK *bsock, int flags);
+\end{verbatim}
+\normalsize
+
+Restores the blocking or non-blocking I/O setting on the socket associated
+with \emph{bsock}. The \emph{flags} argument must be the return value of
+either \emph{bnet\_set\_blocking()} or \emph{bnet\_set\_nonblocking()}.
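+
+The intended pattern is a save/restore pair around any temporarily
+non-blocking operation, for example:
+
+\footnotesize
+\begin{verbatim}
+int flags = bnet_set_nonblocking(bsock);  /* save prior socket flags */
+/* ... perform non-blocking I/O on bsock ... */
+bnet_restore_blocking(bsock, flags);      /* restore the saved state */
+\end{verbatim}
+\normalsize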
+
+\pagebreak
+
+\section{Authentication Negotiation}
+\index{Authentication Negotiation}
+\index{Negotiation!TLS Authentication}
+\addcontentsline{toc}{section}{Authentication Negotiation}
+
+Backwards compatibility with the existing SSL negotiation hooks implemented
+in src/lib/cram-md5.c has been maintained. The
+\emph{cram\_md5\_get\_auth()} function has been modified to accept an
+integer pointer argument, \emph{tls\_remote\_need}. The TLS requirement
+advertised by the remote host is returned via this pointer.
+
+After exchanging cram-md5 authentication and TLS requirements, both the
+client and server independently decide whether to continue:
+
+\footnotesize
+\begin{verbatim}
+if (!cram_md5_get_auth(dir, password, &tls_remote_need) ||
+ !cram_md5_auth(dir, password, tls_local_need)) {
+[snip]
+/* Verify that the remote host is willing to meet our TLS requirements */
+if (tls_remote_need < tls_local_need && tls_local_need != BNET_TLS_OK &&
+ tls_remote_need != BNET_TLS_OK) {
+ sendit(_("Authorization problem:"
+ " Remote server did not advertise required TLS support.\n"));
+ auth_success = false;
+ goto auth_done;
+}
+
+/* Verify that we are willing to meet the remote host's requirements */
+if (tls_remote_need > tls_local_need && tls_local_need != BNET_TLS_OK &&
+ tls_remote_need != BNET_TLS_OK) {
+ sendit(_("Authorization problem:"
+ " Remote server requires TLS.\n"));
+ auth_success = false;
+ goto auth_done;
+}
+\end{verbatim}
+\normalsize