--- /dev/null
+%%
+%%
+
+\section*{Catalog Services}
+\label{_ChapterStart30}
+\index[general]{Services!Catalog }
+\index[general]{Catalog Services }
+\addcontentsline{toc}{section}{Catalog Services}
+
+\subsection*{General}
+\index[general]{General }
+\addcontentsline{toc}{subsection}{General}
+
+This chapter is intended to be a technical discussion of the Catalog services
+and as such is not targeted at end users but rather at developers and system
+administrators who want or need to know more of the working details of {\bf
+Bacula}.
+
+The {\bf Bacula Catalog} services consist of the programs that provide the SQL
+database engine for storage and retrieval of all information concerning files
+that were backed up and their locations on the storage media.
+
+We have investigated the possibility of using the following SQL engines for
+Bacula: Beagle, mSQL, GNU SQL, PostgreSQL, SQLite, Oracle, and MySQL. Each
+presents certain problems with either licensing or maturity. At present, we
+have chosen for development purposes to use MySQL, PostgreSQL and SQLite.
+MySQL was chosen because it is fast, proven to be reliable, widely used, and
+actively being developed. MySQL is released under the GNU GPL license.
+PostgreSQL was chosen because it is a full-featured, very mature database, and
+because Dan Langille did the Bacula driver for it. PostgreSQL is distributed
+under the BSD license. SQLite was chosen because it is small, efficient, and
+can be directly embedded in {\bf Bacula} thus requiring much less effort from
+the system administrator or person building {\bf Bacula}. In our testing
+SQLite has performed very well, and for the functions that we use, it has
+never encountered any errors except that it does not appear to handle
+databases larger than 2GBytes.
+
+The Bacula SQL code has been written in a manner that will allow it to be
+easily modified to support any of the current SQL database systems on the
+market (for example: mSQL, iODBC, unixODBC, Solid, OpenLink ODBC, EasySoft
+ODBC, InterBase, Oracle8, Oracle7, and DB2).
+
+If you do not specify either {\bf --with-mysql}, {\bf --with-postgresql}, or
+{\bf --with-sqlite} on the ./configure line, Bacula will use its minimalist
+internal database. This database is kept for build reasons but is no longer
+supported. Bacula {\bf requires} one of the three databases (MySQL,
+PostgreSQL, or SQLite) to run.
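+
+As an illustration (the exact option chosen, and any additional flags such
+as installation directories, depend on your site), a build using MySQL might
+be configured with:
+
+\footnotesize
+\begin{verbatim}
+./configure --with-mysql
+\end{verbatim}
+\normalsize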
+
+\subsubsection*{Filenames and Maximum Filename Length}
+\index[general]{Filenames and Maximum Filename Length }
+\index[general]{Length!Filenames and Maximum Filename }
+\addcontentsline{toc}{subsubsection}{Filenames and Maximum Filename Length}
+
+In general, MySQL, PostgreSQL, and SQLite all permit storing arbitrarily long
+path names and file names in the catalog database. In practice, there may
+still be one or two places in the Catalog interface code that restrict the
+maximum path length and the maximum file name length to 512 characters each,
+although these restrictions are believed to have been removed. Please note
+that these restrictions apply only to the Catalog database and thus to your
+ability to list online the files saved during any job. All information
+received and stored by the Storage daemon (normally on tape) allows and
+handles arbitrarily long path and file names.
+
+\subsubsection*{Installing and Configuring MySQL}
+\index[general]{MySQL!Installing and Configuring }
+\index[general]{Installing and Configuring MySQL }
+\addcontentsline{toc}{subsubsection}{Installing and Configuring MySQL}
+
+For the details of installing and configuring MySQL, please see the
+\ilink{Installing and Configuring MySQL}{_ChapterStart} chapter of
+this manual.
+
+\subsubsection*{Installing and Configuring PostgreSQL}
+\index[general]{PostgreSQL!Installing and Configuring }
+\index[general]{Installing and Configuring PostgreSQL }
+\addcontentsline{toc}{subsubsection}{Installing and Configuring PostgreSQL}
+
+For the details of installing and configuring PostgreSQL, please see the
+\ilink{Installing and Configuring PostgreSQL}{_ChapterStart10}
+chapter of this manual.
+
+\subsubsection*{Installing and Configuring SQLite}
+\index[general]{Installing and Configuring SQLite }
+\index[general]{SQLite!Installing and Configuring }
+\addcontentsline{toc}{subsubsection}{Installing and Configuring SQLite}
+
+For the details of installing and configuring SQLite, please see the
+\ilink{Installing and Configuring SQLite}{_ChapterStart33} chapter of
+this manual.
+
+\subsubsection*{Internal Bacula Catalog}
+\index[general]{Catalog!Internal Bacula }
+\index[general]{Internal Bacula Catalog }
+\addcontentsline{toc}{subsubsection}{Internal Bacula Catalog}
+
+Please see the
+\ilink{Internal Bacula Database}{_ChapterStart42} chapter of this
+manual for more details.
+
+\subsubsection*{Database Table Design}
+\index[general]{Design!Database Table }
+\index[general]{Database Table Design }
+\addcontentsline{toc}{subsubsection}{Database Table Design}
+
+All discussions that follow pertain to the MySQL database. The details for
+the PostgreSQL and SQLite databases are essentially identical, except that
+all fields in the SQLite database are stored as ASCII text and some of the
+database creation statements are a bit different. The details of the internal
+Bacula catalog are not discussed here.
+
+Because the Catalog database may contain very large amounts of data for large
+sites, we have made a modest attempt to normalize the data tables to reduce
+redundant information. While reducing the size of the database significantly,
+it does, unfortunately, add some complications to the structures.
+
+In simple terms, the Catalog database must contain a record of all Jobs run
+by Bacula, and for each Job, it must maintain a list of all files saved, with
+their File Attributes (permissions, create date, ...), and the location and
+Media on which the file is stored. This is seemingly a simple task, but it
+represents a huge amount of interlinked data. Note: the list of files and
+their attributes is not maintained when using the internal Bacula database.
+The data stored in the File records, which allows the user or administrator
+to obtain a list of all files backed up during a job, is by far the largest
+volume of information put into the Catalog database.
+
+Although the Catalog database has been designed to handle backup data for
+multiple clients, some users may want to maintain multiple databases, one for
+each machine to be backed up. This reduces the risk of accidentally restoring
+a file to the wrong machine as well as reducing the amount of data in a
+single database, thus increasing efficiency and reducing the impact of a lost
+or damaged database.
+
+\subsection*{Sequence of Creation of Records for a Save Job}
+\index[general]{Sequence of Creation of Records for a Save Job }
+\index[general]{Job!Sequence of Creation of Records for a Save }
+\addcontentsline{toc}{subsection}{Sequence of Creation of Records for a Save
+Job}
+
+Start with StartDate, ClientName, Filename, Path, Attributes, MediaName,
+MediaCoordinates. (PartNumber, NumParts). In the steps below, ``Create new''
+means to create a new record whether or not it is unique. ``Create unique''
+means each record in the database should be unique. Thus, one must first
+search to see if the record exists, and only if not should a new one be
+created, otherwise the existing RecordId should be used.
+
+\begin{enumerate}
+\item Create new Job record with StartDate; save JobId
+\item Create unique Media record; save MediaId
+\item Create unique Client record; save ClientId
+\item Create unique Filename record; save FilenameId
+\item Create unique Path record; save PathId
+\item Create unique Attribute record; save AttributeId
+ store ClientId, FilenameId, PathId, and Attributes
+\item Create new File record
+ store JobId, AttributeId, MediaCoordinates, etc
+\item Repeat steps 4 through 7 for each file
+\item Create a JobMedia record; save MediaId
+\item Update Job record filling in EndDate and other Job statistics
+ \end{enumerate}
+
+\subsection*{Database Tables}
+\index[general]{Database Tables }
+\index[general]{Tables!Database }
+\addcontentsline{toc}{subsection}{Database Tables}
+
+\addcontentsline{lot}{table}{Filename Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Filename } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{l| }{\bf Data Type }
+& \multicolumn{1}{l| }{\bf Remark } \\
+ \hline
+{FilenameId } & {integer } & {Primary Key } \\
+ \hline
+{Name } & {Blob } & {Filename }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Filename} table shown above contains the name of each file backed up
+with the path removed. If different directories or machines contain the same
+filename, only one copy will be saved in this table.
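+
+Because each name is stored only once, a single Filename row may be shared by
+many File records. For example, a query such as the following (a hypothetical
+file name is used) finds every backed-up instance of a given file name,
+whatever its path or job:
+
+\footnotesize
+\begin{verbatim}
+SELECT File.JobId, File.PathId
+  FROM Filename, File
+ WHERE Filename.Name='profile'
+   AND File.FilenameId=Filename.FilenameId;
+\end{verbatim}
+\normalsize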
+
+\
+
+\addcontentsline{lot}{table}{Path Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Path } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{PathId } & {integer } & {Primary Key } \\
+ \hline
+{Path } & {Blob } & {Full Path }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Path} table shown above contains the path or directory names of all
+directories on the system or systems. The filename and any MSDOS disk name
+are stripped off. As with the filename, only one copy of each directory name
+is kept regardless of how many machines or drives have the same directory.
+These path names should be stored in Unix path name format.
+
+Some simple testing on a Linux file system indicates that separating the
+filename and the path may add more complication than is warranted by the
+space savings. For example, this system has a total of 89,097 files, 60,467
+of which have unique filenames, and there are 4,374 unique paths.
+
+Finding all those files and doing two stats() per file takes an average wall
+clock time of 1 min 35 seconds on a 400MHz machine running RedHat 6.1 Linux.
+
+Finding all those files and putting them directly into a MySQL database with
+the path and filename defined as TEXT, which is variable length up to 65,535
+characters, takes 19 mins 31 seconds and creates a 27.6 MByte database.
+
+Doing the same thing, but inserting them into Blob fields with the filename
+indexed on the first 30 characters and the path name indexed on the first 255
+(maximum) characters, takes 5 mins 18 seconds and creates a 5.24 MB database.
+Rerunning the job (with the database already created) takes about 2 mins 50
+seconds.
+
+Running the same as the last one (Path and Filename Blob), but with the
+Filename indexed on the first 30 characters and the Path on the first 50
+characters (a linear search is done thereafter), takes 5 mins on average and
+creates a 3.4 MB database. Rerunning with the data already in the DB takes 3
+mins 35 seconds.
+
+Finally, saving only the full path name rather than splitting the path and the
+file, and indexing it on the first 50 characters takes 6 mins 43 seconds and
+creates a 7.35 MB database.
+
+\
+
+\addcontentsline{lot}{table}{File Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf File } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{FileId } & {integer } & {Primary Key } \\
+ \hline
+{FileIndex } & {integer } & {The sequential file number in the Job } \\
+ \hline
+{JobId } & {integer } & {Link to Job Record } \\
+ \hline
+{PathId } & {integer } & {Link to Path Record } \\
+ \hline
+{FilenameId } & {integer } & {Link to Filename Record } \\
+ \hline
+{MarkId } & {integer } & {Used to mark files during Verify Jobs } \\
+ \hline
+{LStat } & {tinyblob } & {File attributes in base64 encoding } \\
+ \hline
+{MD5 } & {tinyblob } & {MD5 signature in base64 encoding }
+\\ \hline
+
+\end{longtable}
+
+The {\bf File} table shown above contains one entry for each file backed up
+by Bacula. Thus a file that is backed up multiple times (as is normal) will
+have multiple entries in the File table. This will probably be the table with
+the largest number of records. Consequently, it is essential to keep the size
+of this record to an absolute minimum. At the same time, this table must
+contain all the information (or pointers to the information) about the file
+and where it is backed up. Since a file may be backed up many times without
+having changed, the path and filename are stored in separate tables.
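+
+Putting the pieces back together therefore requires joining File with Path
+and Filename. A sketch of such a query (the JobId value is illustrative),
+listing the full names of all files saved by one Job:
+
+\footnotesize
+\begin{verbatim}
+SELECT Path.Path, Filename.Name
+  FROM File, Path, Filename
+ WHERE File.JobId=1
+   AND Path.PathId=File.PathId
+   AND Filename.FilenameId=File.FilenameId;
+\end{verbatim}
+\normalsize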
+
+This table contains by far the largest amount of information in the Catalog
+database, both from the standpoint of the number of records and the
+standpoint of total database size. As a consequence, the user must take care
+to periodically reduce the number of File records using the {\bf retention}
+command in the Console program.
+
+\
+
+\addcontentsline{lot}{table}{Job Table Layout}
+\begin{longtable}{|l|l|p{2.5in}|}
+ \hline
+\multicolumn{3}{|l| }{\bf Job } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{JobId } & {integer } & {Primary Key } \\
+ \hline
+{Job } & {tinyblob } & {Unique Job Name } \\
+ \hline
+{Name } & {tinyblob } & {Job Name } \\
+ \hline
+{PurgedFiles } & {tinyint } & {Used by Bacula for purging/retention periods
+} \\
+ \hline
+{Type } & {binary(1) } & {Job Type: Backup, Copy, Clone, Archive, Migration
+} \\
+ \hline
+{Level } & {binary(1) } & {Job Level } \\
+ \hline
+{ClientId } & {integer } & {Client index } \\
+ \hline
+{JobStatus } & {binary(1) } & {Job Termination Status } \\
+ \hline
+{SchedTime } & {datetime } & {Time/date when Job scheduled } \\
+ \hline
+{StartTime } & {datetime } & {Time/date when Job started } \\
+ \hline
+{EndTime } & {datetime } & {Time/date when Job ended } \\
+ \hline
+{JobTDate } & {bigint } & {Start day in Unix format but 64 bits; used for
+Retention period. } \\
+ \hline
+{VolSessionId } & {integer } & {Unique Volume Session ID } \\
+ \hline
+{VolSessionTime } & {integer } & {Unique Volume Session Time } \\
+ \hline
+{JobFiles } & {integer } & {Number of files saved in Job } \\
+ \hline
+{JobBytes } & {bigint } & {Number of bytes saved in Job } \\
+ \hline
+{JobErrors } & {integer } & {Number of errors during Job } \\
+ \hline
+{JobMissingFiles } & {integer } & {Number of files not saved (not yet used) }
+\\
+ \hline
+{PoolId } & {integer } & {Link to Pool Record } \\
+ \hline
+{FileSetId } & {integer } & {Link to FileSet Record } \\
+ \hline
+{PurgedFiles } & {tiny integer } & {Set when all File records purged } \\
+ \hline
+{HasBase } & {tiny integer } & {Set when Base Job run }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Job} table contains one record for each Job run by Bacula. Thus
+normally, there will be one per day per machine added to the database. Note,
+the JobId is used to index Job records in the database, and it often is shown
+to the user in the Console program. However, care must be taken with its use
+as it is not unique from database to database. For example, the user may have
+a database for Client data saved on machine Rufus and another database for
+Client data saved on machine Roxie. In this case, the two databases will each
+have JobIds that match those in the other database. For a unique reference to
+a Job, see the Job field below.
+
+The Name field of the Job record corresponds to the Name resource record given
+in the Director's configuration file. Thus it is a generic name, and it will
+be normal to find many Jobs (or even all Jobs) with the same Name.
+
+The Job field contains a combination of the Name and the time the Job was
+scheduled by the Director. Thus for a given Director, even with multiple
+Catalog databases, the Job field will contain a unique name that represents
+the Job.
+
+For a given Storage daemon, the VolSessionId and VolSessionTime form a unique
+identification of the Job. This will be the case even if multiple Directors
+are using the same Storage daemon.
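+
+For example, a Job can be located from data recorded on a Volume with a query
+of this form (the session values shown are illustrative):
+
+\footnotesize
+\begin{verbatim}
+SELECT JobId, Name, StartTime FROM Job
+ WHERE VolSessionId=5 AND VolSessionTime=1059837600;
+\end{verbatim}
+\normalsize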
+
+The Job Type (or simply Type) can have one of the following values:
+
+\addcontentsline{lot}{table}{Job Types}
+\begin{longtable}{|l|l|}
+ \hline
+\multicolumn{1}{|c| }{\bf Value } & \multicolumn{1}{c| }{\bf Meaning } \\
+ \hline
+{B } & {Backup Job } \\
+ \hline
+{V } & {Verify Job } \\
+ \hline
+{R } & {Restore Job } \\
+ \hline
+{C } & {Console program (not in database) } \\
+ \hline
+{D } & {Admin Job } \\
+ \hline
+{A } & {Archive Job (not implemented) }
+\\ \hline
+
+\end{longtable}
+
+The JobStatus field specifies how the job terminated, and can be one of the
+following:
+
+\addcontentsline{lot}{table}{Job Statuses}
+\begin{longtable}{|l|l|}
+ \hline
+\multicolumn{1}{|c| }{\bf Value } & \multicolumn{1}{c| }{\bf Meaning } \\
+ \hline
+{C } & {Created but not yet running } \\
+ \hline
+{R } & {Running } \\
+ \hline
+{B } & {Blocked } \\
+ \hline
+{T } & {Terminated normally } \\
+ \hline
+{E } & {Terminated in Error } \\
+ \hline
+{e } & {Non-fatal error } \\
+ \hline
+{f } & {Fatal error } \\
+ \hline
+{D } & {Verify Differences } \\
+ \hline
+{A } & {Canceled by the user } \\
+ \hline
+{F } & {Waiting on the File daemon } \\
+ \hline
+{S } & {Waiting on the Storage daemon } \\
+ \hline
+{m } & {Waiting for a new Volume to be mounted } \\
+ \hline
+{M } & {Waiting for a Mount } \\
+ \hline
+{s } & {Waiting for Storage resource } \\
+ \hline
+{j } & {Waiting for Job resource } \\
+ \hline
+{c } & {Waiting for Client resource } \\
+ \hline
+{d } & {Waiting for Maximum jobs } \\
+ \hline
+{t } & {Waiting for Start Time } \\
+ \hline
+{p } & {Waiting for higher priority job to finish }
+\\ \hline
+
+\end{longtable}
+
+\
+
+\addcontentsline{lot}{table}{File Sets Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf FileSet } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{FileSetId } & {integer } & {Primary Key } \\
+ \hline
+{FileSet } & {tinyblob } & {FileSet name } \\
+ \hline
+{MD5 } & {tinyblob } & {MD5 checksum of FileSet } \\
+ \hline
+{CreateTime } & {datetime } & {Time and date Fileset created }
+\\ \hline
+
+\end{longtable}
+
+The {\bf FileSet} table contains one entry for each FileSet that is used. The
+MD5 signature is kept to ensure that if the user changes anything inside the
+FileSet, it will be detected and the new FileSet will be used. This is
+particularly important when doing an incremental update. If the user deletes a
+file or adds a file, we need to ensure that a Full backup is done prior to the
+next incremental.
+
+\
+
+\addcontentsline{lot}{table}{JobMedia Table Layout}
+\begin{longtable}{|l|l|p{2.5in}|}
+ \hline
+\multicolumn{3}{|l| }{\bf JobMedia } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{JobMediaId } & {integer } & {Primary Key } \\
+ \hline
+{JobId } & {integer } & {Link to Job Record } \\
+ \hline
+{MediaId } & {integer } & {Link to Media Record } \\
+ \hline
+{FirstIndex } & {integer } & {The index (sequence number) of the first file
+written for this Job to the Media } \\
+ \hline
+{LastIndex } & {integer } & {The index of the last file written for this
+Job to the Media } \\
+ \hline
+{StartFile } & {integer } & {The physical media (tape) file number of the
+first block written for this Job } \\
+ \hline
+{EndFile } & {integer } & {The physical media (tape) file number of the
+last block written for this Job } \\
+ \hline
+{StartBlock } & {integer } & {The number of the first block written for
+this Job } \\
+ \hline
+{EndBlock } & {integer } & {The number of the last block written for this
+Job } \\
+ \hline
+{VolIndex } & {integer } & {The Volume use sequence number within the Job }
+\\ \hline
+
+\end{longtable}
+
+The {\bf JobMedia} table contains one entry for each volume written for the
+current Job. If the Job spans 3 tapes, there will be three JobMedia records,
+each containing the information to find all the files for the given JobId on
+the tape.
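+
+Thus the Volumes needed to restore a given Job can be found with a join of
+this form (the JobId value is illustrative):
+
+\footnotesize
+\begin{verbatim}
+SELECT Media.VolumeName, JobMedia.StartFile, JobMedia.EndFile
+  FROM JobMedia, Media
+ WHERE JobMedia.JobId=1
+   AND Media.MediaId=JobMedia.MediaId;
+\end{verbatim}
+\normalsize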
+
+\
+
+\addcontentsline{lot}{table}{Media Table Layout}
+\begin{longtable}{|l|l|p{2.4in}|}
+ \hline
+\multicolumn{3}{|l| }{\bf Media } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{MediaId } & {integer } & {Primary Key } \\
+ \hline
+{VolumeName } & {tinyblob } & {Volume name } \\
+ \hline
+{Slot } & {integer } & {Autochanger Slot number or zero } \\
+ \hline
+{PoolId } & {integer } & {Link to Pool Record } \\
+ \hline
+{MediaType } & {tinyblob } & {The MediaType supplied by the user } \\
+ \hline
+{FirstWritten } & {datetime } & {Time/date when first written } \\
+ \hline
+{LastWritten } & {datetime } & {Time/date when last written } \\
+ \hline
+{LabelDate } & {datetime } & {Time/date when tape labeled } \\
+ \hline
+{VolJobs } & {integer } & {Number of jobs written to this media } \\
+ \hline
+{VolFiles } & {integer } & {Number of files written to this media } \\
+ \hline
+{VolBlocks } & {integer } & {Number of blocks written to this media } \\
+ \hline
+{VolMounts } & {integer } & {Number of times media mounted } \\
+ \hline
+{VolBytes } & {bigint } & {Number of bytes written to this media } \\
+ \hline
+{VolErrors } & {integer } & {Number of errors on this media } \\
+ \hline
+{VolWrites } & {integer } & {Number of writes to media } \\
+ \hline
+{MaxVolBytes } & {bigint } & {Maximum bytes to put on this media } \\
+ \hline
+{VolCapacityBytes } & {bigint } & {Capacity estimate for this volume } \\
+ \hline
+{VolStatus } & {enum } & {Status of media: Full, Archive, Append, Recycle,
+Read-Only, Disabled, Error, Busy } \\
+ \hline
+{Recycle } & {tinyint } & {Whether or not Bacula can recycle the Volumes:
+Yes, No } \\
+ \hline
+{VolRetention } & {bigint } & {64 bit seconds until expiration } \\
+ \hline
+{VolUseDuration } & {bigint } & {64 bit seconds volume can be used } \\
+ \hline
+{MaxVolJobs } & {integer } & {maximum jobs to put on Volume } \\
+ \hline
+{MaxVolFiles } & {integer } & {maximum EOF marks to put on Volume }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Volume} table (internally referred to as the Media table) contains
+one entry for each volume, that is each tape, cassette (8mm, DLT, DAT, ...),
+or file on which information is or was backed up. There is one Volume record
+created for each of the NumVols specified in the Pool resource record.
+
+\
+
+\addcontentsline{lot}{table}{Pool Table Layout}
+\begin{longtable}{|l|l|p{2.4in}|}
+ \hline
+\multicolumn{3}{|l| }{\bf Pool } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{PoolId } & {integer } & {Primary Key } \\
+ \hline
+{Name } & {Tinyblob } & {Pool Name } \\
+ \hline
+{NumVols } & {Integer } & {Number of Volumes in the Pool } \\
+ \hline
+{MaxVols } & {Integer } & {Maximum Volumes in the Pool } \\
+ \hline
+{UseOnce } & {tinyint } & {Use volume once } \\
+ \hline
+{UseCatalog } & {tinyint } & {Set to use catalog } \\
+ \hline
+{AcceptAnyVolume } & {tinyint } & {Accept any volume from Pool } \\
+ \hline
+{VolRetention } & {bigint } & {64 bit seconds to retain volume } \\
+ \hline
+{VolUseDuration } & {bigint } & {64 bit seconds volume can be used } \\
+ \hline
+{MaxVolJobs } & {integer } & {max jobs on volume } \\
+ \hline
+{MaxVolFiles } & {integer } & {max EOF marks to put on Volume } \\
+ \hline
+{MaxVolBytes } & {bigint } & {max bytes to write on Volume } \\
+ \hline
+{AutoPrune } & {tinyint } & {yes|no for autopruning } \\
+ \hline
+{Recycle } & {tinyint } & {yes|no for allowing auto recycling of Volume }
+\\
+ \hline
+{PoolType } & {enum } & {Backup, Copy, Cloned, Archive, Migration } \\
+ \hline
+{LabelFormat } & {Tinyblob } & {Label format }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Pool} table contains one entry for each media pool controlled by
+Bacula in this database. One media record exists for each of the NumVols
+contained in the Pool. The PoolType is a Bacula defined keyword. The MediaType
+is defined by the administrator, and corresponds to the MediaType specified in
+the Director's Storage definition record. The CurrentVol is the sequence
+number of the Media record for the current volume.
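+
+Since each Media record carries a PoolId, the Volumes belonging to a Pool can
+be listed with a query such as the following (the Pool name is illustrative):
+
+\footnotesize
+\begin{verbatim}
+SELECT Media.VolumeName, Media.VolStatus
+  FROM Media, Pool
+ WHERE Pool.Name='Default'
+   AND Media.PoolId=Pool.PoolId;
+\end{verbatim}
+\normalsize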
+
+\
+
+\addcontentsline{lot}{table}{Client Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Client } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{ClientId } & {integer } & {Primary Key } \\
+ \hline
+{Name } & {TinyBlob } & {File Services Name } \\
+ \hline
+{UName } & {TinyBlob } & {uname -a from Client (not yet used) } \\
+ \hline
+{AutoPrune } & {tinyint } & {yes|no for autopruning } \\
+ \hline
+{FileRetention } & {bigint } & {64 bit seconds to retain Files } \\
+ \hline
+{JobRetention } & {bigint } & {64 bit seconds to retain Job }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Client} table contains one entry for each machine backed up by Bacula
+in this database. Normally the Name is a fully qualified domain name.
+
+\
+
+\addcontentsline{lot}{table}{Unsaved Files Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf UnsavedFiles } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{UnsavedId } & {integer } & {Primary Key } \\
+ \hline
+{JobId } & {integer } & {JobId corresponding to this record } \\
+ \hline
+{PathId } & {integer } & {Id of path } \\
+ \hline
+{FilenameId } & {integer } & {Id of filename }
+\\ \hline
+
+\end{longtable}
+
+The {\bf UnsavedFiles} table contains one entry for each file that was not
+saved. Note! This record is not yet implemented.
+
+\
+
+\addcontentsline{lot}{table}{Counter Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Counter } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{Counter } & {tinyblob } & {Counter name } \\
+ \hline
+{MinValue } & {integer } & {Start/Min value for counter } \\
+ \hline
+{MaxValue } & {integer } & {Max value for counter } \\
+ \hline
+{CurrentValue } & {integer } & {Current counter value } \\
+ \hline
+{WrapCounter } & {tinyblob } & {Name of another counter }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Counter} table contains one entry for each permanent counter defined
+by the user.
+
+\
+
+\addcontentsline{lot}{table}{Version Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf Version } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{VersionId } & {integer } & {Primary Key }
+\\ \hline
+
+\end{longtable}
+
+The {\bf Version} table defines the Bacula database version number. Bacula
+checks this number before reading the database to ensure that it is compatible
+with the Bacula binary file.
+
+\
+
+\addcontentsline{lot}{table}{Base Files Table Layout}
+\begin{longtable}{|l|l|l|}
+ \hline
+\multicolumn{3}{|l| }{\bf BaseFiles } \\
+ \hline
+\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type
+} & \multicolumn{1}{c| }{\bf Remark } \\
+ \hline
+{BaseId } & {integer } & {Primary Key } \\
+ \hline
+{BaseJobId } & {integer } & {JobId of Base Job } \\
+ \hline
+{JobId } & {integer } & {Reference to Job } \\
+ \hline
+{FileId } & {integer } & {Reference to File } \\
+ \hline
+{FileIndex } & {integer } & {File Index number }
+\\ \hline
+
+\end{longtable}
+
+The {\bf BaseFiles} table contains all the File references for a particular
+JobId that point to a Base file -- i.e., they were previously saved and hence
+were not saved in the current JobId but in BaseJobId under FileId. FileIndex
+is the index of the file, and is used for optimization of Restore jobs to
+prevent the need to read the FileId record when creating the in-memory tree.
+This record is not yet implemented.
+
+\
+
+\subsubsection*{MySQL Table Definition}
+\index[general]{MySQL Table Definition }
+\index[general]{Definition!MySQL Table }
+\addcontentsline{toc}{subsubsection}{MySQL Table Definition}
+
+The commands used to create the MySQL tables are as follows:
+
+\footnotesize
+\begin{verbatim}
+USE bacula;
+CREATE TABLE Filename (
+ FilenameId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ Name BLOB NOT NULL,
+ PRIMARY KEY(FilenameId),
+ INDEX (Name(30))
+ );
+CREATE TABLE Path (
+ PathId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ Path BLOB NOT NULL,
+ PRIMARY KEY(PathId),
+ INDEX (Path(50))
+ );
+CREATE TABLE File (
+ FileId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ FileIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
+ PathId INTEGER UNSIGNED NOT NULL REFERENCES Path,
+ FilenameId INTEGER UNSIGNED NOT NULL REFERENCES Filename,
+ MarkId INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ LStat TINYBLOB NOT NULL,
+ MD5 TINYBLOB NOT NULL,
+ PRIMARY KEY(FileId),
+ INDEX (JobId),
+ INDEX (PathId),
+ INDEX (FilenameId)
+ );
+CREATE TABLE Job (
+ JobId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ Job TINYBLOB NOT NULL,
+ Name TINYBLOB NOT NULL,
+ Type BINARY(1) NOT NULL,
+ Level BINARY(1) NOT NULL,
+ ClientId INTEGER NOT NULL REFERENCES Client,
+ JobStatus BINARY(1) NOT NULL,
+ SchedTime DATETIME NOT NULL,
+ StartTime DATETIME NOT NULL,
+ EndTime DATETIME NOT NULL,
+ JobTDate BIGINT UNSIGNED NOT NULL,
+ VolSessionId INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolSessionTime INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ JobFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ JobBytes BIGINT UNSIGNED NOT NULL,
+ JobErrors INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ JobMissingFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ PoolId INTEGER UNSIGNED NOT NULL REFERENCES Pool,
+ FileSetId INTEGER UNSIGNED NOT NULL REFERENCES FileSet,
+ PurgedFiles TINYINT NOT NULL DEFAULT 0,
+ HasBase TINYINT NOT NULL DEFAULT 0,
+ PRIMARY KEY(JobId),
+ INDEX (Name(128))
+ );
+CREATE TABLE FileSet (
+ FileSetId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ FileSet TINYBLOB NOT NULL,
+ MD5 TINYBLOB NOT NULL,
+ CreateTime DATETIME NOT NULL,
+ PRIMARY KEY(FileSetId)
+ );
+CREATE TABLE JobMedia (
+ JobMediaId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
+ MediaId INTEGER UNSIGNED NOT NULL REFERENCES Media,
+ FirstIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ LastIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ StartFile INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ EndFile INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ StartBlock INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ EndBlock INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ PRIMARY KEY(JobMediaId),
+ INDEX (JobId, MediaId)
+ );
+CREATE TABLE Media (
+ MediaId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ VolumeName TINYBLOB NOT NULL,
+ Slot INTEGER NOT NULL DEFAULT 0,
+ PoolId INTEGER UNSIGNED NOT NULL REFERENCES Pool,
+ MediaType TINYBLOB NOT NULL,
+ FirstWritten DATETIME NOT NULL,
+ LastWritten DATETIME NOT NULL,
+ LabelDate DATETIME NOT NULL,
+ VolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolBlocks INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolMounts INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolBytes BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ VolErrors INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolWrites INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ VolCapacityBytes BIGINT UNSIGNED NOT NULL,
+ VolStatus ENUM('Full', 'Archive', 'Append', 'Recycle', 'Purged',
+ 'Read-Only', 'Disabled', 'Error', 'Busy', 'Used', 'Cleaning') NOT NULL,
+ Recycle TINYINT NOT NULL DEFAULT 0,
+ VolRetention BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ VolUseDuration BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ MaxVolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ MaxVolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ MaxVolBytes BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ InChanger TINYINT NOT NULL DEFAULT 0,
+ MediaAddressing TINYINT NOT NULL DEFAULT 0,
+ VolReadTime BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ VolWriteTime BIGINT UNSIGNED NOT NULL DEFAULT 0,
+ PRIMARY KEY(MediaId),
+ INDEX (PoolId)
+ );
+CREATE TABLE Pool (
+ PoolId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ Name TINYBLOB NOT NULL,
+ NumVols INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ MaxVols INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ UseOnce TINYINT NOT NULL,
+ UseCatalog TINYINT NOT NULL,
+ AcceptAnyVolume TINYINT DEFAULT 0,
+ VolRetention BIGINT UNSIGNED NOT NULL,
+ VolUseDuration BIGINT UNSIGNED NOT NULL,
+ MaxVolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ MaxVolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
+ MaxVolBytes BIGINT UNSIGNED NOT NULL,
+ AutoPrune TINYINT DEFAULT 0,
+ Recycle TINYINT DEFAULT 0,
+ PoolType ENUM('Backup', 'Copy', 'Cloned', 'Archive', 'Migration', 'Scratch') NOT NULL,
+ LabelFormat TINYBLOB,
+ Enabled TINYINT DEFAULT 1,
+ ScratchPoolId INTEGER UNSIGNED DEFAULT 0 REFERENCES Pool,
+ RecyclePoolId INTEGER UNSIGNED DEFAULT 0 REFERENCES Pool,
+ UNIQUE (Name(128)),
+ PRIMARY KEY (PoolId)
+ );
+CREATE TABLE Client (
+ ClientId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
+ Name TINYBLOB NOT NULL,
+ Uname TINYBLOB NOT NULL, /* full uname -a of client */
+ AutoPrune TINYINT DEFAULT 0,
+ FileRetention BIGINT UNSIGNED NOT NULL,
+ JobRetention BIGINT UNSIGNED NOT NULL,
+ UNIQUE (Name(128)),
+ PRIMARY KEY(ClientId)
+ );
+CREATE TABLE BaseFiles (
+ BaseId INTEGER UNSIGNED AUTO_INCREMENT,
+ BaseJobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
+ JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
+ FileId INTEGER UNSIGNED NOT NULL REFERENCES File,
+ FileIndex INTEGER UNSIGNED,
+ PRIMARY KEY(BaseId)
+ );
+CREATE TABLE UnsavedFiles (
+ UnsavedId INTEGER UNSIGNED AUTO_INCREMENT,
+ JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
+ PathId INTEGER UNSIGNED NOT NULL REFERENCES Path,
+ FilenameId INTEGER UNSIGNED NOT NULL REFERENCES Filename,
+ PRIMARY KEY (UnsavedId)
+ );
+CREATE TABLE Version (
+ VersionId INTEGER UNSIGNED NOT NULL
+ );
+-- Initialize Version
+INSERT INTO Version (VersionId) VALUES (7);
+CREATE TABLE Counters (
+ Counter TINYBLOB NOT NULL,
+ MinValue INTEGER,
+ MaxValue INTEGER,
+ CurrentValue INTEGER,
+ WrapCounter TINYBLOB NOT NULL,
+ PRIMARY KEY (Counter(128))
+ );
+\end{verbatim}
+\normalsize