\section*{Catalog Services}
\label{_ChapterStart30}
\index[general]{Services!Catalog }
\index[general]{Catalog Services }
\addcontentsline{toc}{section}{Catalog Services}

\subsection*{General}
\index[general]{General }
\addcontentsline{toc}{subsection}{General}
This chapter is intended to be a technical discussion of the Catalog services
and as such is not targeted at end users but rather at developers and system
administrators who want or need to know more about the working details of
{\bf Bacula}.
The {\bf Bacula Catalog} services consist of the programs that provide the
SQL database engine for storage and retrieval of all information concerning
files that were backed up and their locations on the storage media.
We have investigated the possibility of using the following SQL engines for
Bacula: Beagle, mSQL, GNU SQL, PostgreSQL, SQLite, Oracle, and MySQL. Each
presents certain problems with either licensing or maturity. At present, we
have chosen for development purposes to use MySQL, PostgreSQL, and SQLite.
MySQL was chosen because it is fast, proven to be reliable, widely used, and
actively being developed. MySQL is released under the GNU GPL license.
PostgreSQL was chosen because it is a full-featured, very mature database,
and because Dan Langille did the Bacula driver for it. PostgreSQL is
distributed under the BSD license. SQLite was chosen because it is small,
efficient, and can be directly embedded in {\bf Bacula}, thus requiring much
less effort from the system administrator or person building {\bf Bacula}.
In our testing SQLite has performed very well, and for the functions that we
use, it has never encountered any errors, except that it does not appear to
handle databases larger than 2 GBytes.
The Bacula SQL code has been written in a manner that will allow it to be
easily modified to support any of the current SQL database systems on the
market (for example: mSQL, iODBC, unixODBC, Solid, OpenLink ODBC, EasySoft
ODBC, InterBase, Oracle8, Oracle7, and DB2).
If you do not specify either {\bf -{}-with-mysql}, {\bf -{}-with-postgresql},
or {\bf -{}-with-sqlite} on the ./configure line, Bacula will use its
minimalist internal database. This database is kept for build reasons but is
no longer supported. Bacula {\bf requires} one of the three databases (MySQL,
PostgreSQL, or SQLite) to run.
\subsubsection*{Filenames and Maximum Filename Length}
\index[general]{Filenames and Maximum Filename Length }
\index[general]{Length!Filenames and Maximum Filename }
\addcontentsline{toc}{subsubsection}{Filenames and Maximum Filename Length}
In general, MySQL, PostgreSQL, and SQLite all permit storing arbitrarily long
path names and file names in the catalog database. In practice, there still
may be one or two places in the Catalog interface code that restrict the
maximum path length and the maximum file name length to 512 characters each.
These restrictions are believed to have been removed. Please note that these
restrictions apply only to the Catalog database and thus to your ability to
list online the files saved during any job. All information received and
stored by the Storage daemon (normally on tape) allows and handles
arbitrarily long path and file names.
\subsubsection*{Installing and Configuring MySQL}
\index[general]{MySQL!Installing and Configuring }
\index[general]{Installing and Configuring MySQL }
\addcontentsline{toc}{subsubsection}{Installing and Configuring MySQL}

For the details of installing and configuring MySQL, please see the
\ilink{Installing and Configuring MySQL}{_ChapterStart} chapter of this
manual.
\subsubsection*{Installing and Configuring PostgreSQL}
\index[general]{PostgreSQL!Installing and Configuring }
\index[general]{Installing and Configuring PostgreSQL }
\addcontentsline{toc}{subsubsection}{Installing and Configuring PostgreSQL}

For the details of installing and configuring PostgreSQL, please see the
\ilink{Installing and Configuring PostgreSQL}{_ChapterStart10}
chapter of this manual.
\subsubsection*{Installing and Configuring SQLite}
\index[general]{Installing and Configuring SQLite }
\index[general]{SQLite!Installing and Configuring }
\addcontentsline{toc}{subsubsection}{Installing and Configuring SQLite}

For the details of installing and configuring SQLite, please see the
\ilink{Installing and Configuring SQLite}{_ChapterStart33} chapter of this
manual.
\subsubsection*{Internal Bacula Catalog}
\index[general]{Catalog!Internal Bacula }
\index[general]{Internal Bacula Catalog }
\addcontentsline{toc}{subsubsection}{Internal Bacula Catalog}

Please see the
\ilink{Internal Bacula Database}{_ChapterStart42} chapter of this
manual for more details.
\subsubsection*{Database Table Design}
\index[general]{Design!Database Table }
\index[general]{Database Table Design }
\addcontentsline{toc}{subsubsection}{Database Table Design}
All discussions that follow pertain to the MySQL database. The details for
the PostgreSQL and SQLite databases are essentially identical, except that
all fields in the SQLite database are stored as ASCII text and some of the
database creation statements are a bit different. The details of the internal
Bacula catalog are not discussed here.
Because the Catalog database may contain very large amounts of data for large
sites, we have made a modest attempt to normalize the data tables to reduce
redundant information. While this significantly reduces the size of the
database, it does, unfortunately, add some complications to the structures.
In simple terms, the Catalog database must contain a record of all Jobs run
by Bacula, and for each Job, it must maintain a list of all files saved, with
their File Attributes (permissions, create date, ...), and the location and
Media on which the file is stored. This is seemingly a simple task, but it
represents a huge amount of interlinked data. Note: the list of files and
their attributes is not maintained when using the internal Bacula database.
The data stored in the File records, which allows the user or administrator
to obtain a list of all files backed up during a job, is by far the largest
volume of information put into the Catalog database.
Although the Catalog database has been designed to handle backup data for
multiple clients, some users may want to maintain multiple databases, one for
each machine to be backed up. This reduces the risk of accidentally restoring
a file to the wrong machine as well as reducing the amount of data in a
single database, thus increasing efficiency and reducing the impact of a lost
or damaged database.
\subsection*{Sequence of Creation of Records for a Save Job}
\index[general]{Sequence of Creation of Records for a Save Job }
\index[general]{Job!Sequence of Creation of Records for a Save }
\addcontentsline{toc}{subsection}{Sequence of Creation of Records for a Save
Job}
Start with StartDate, ClientName, Filename, Path, Attributes, MediaName,
MediaCoordinates (PartNumber, NumParts). In the steps below, ``Create new''
means to create a new record whether or not it is unique. ``Create unique''
means each record in the database should be unique. Thus, one must first
search to see if the record exists, and only if not should a new one be
created; otherwise the existing RecordId should be used.
\begin{enumerate}
\item Create new Job record with StartDate; save JobId
\item Create unique Media record; save MediaId
\item Create unique Client record; save ClientId
\item Create unique Filename record; save FilenameId
\item Create unique Path record; save PathId
\item Create unique Attribute record; save AttributeId
   store ClientId, FilenameId, PathId, and Attributes
\item Create new File record
   store JobId, AttributeId, MediaCoordinates, etc.
\item Repeat steps 4 through 8 for each file
\item Create a JobMedia record; save MediaId
\item Update Job record, filling in EndDate and other Job statistics
\end{enumerate}
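The ``Create unique'' logic above can be sketched as a search-then-insert
helper. The following is a minimal illustration using SQLite (one of the
supported engines) with a simplified Filename table; the helper name and
schema details are illustrative, not Bacula's actual code:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Filename (FilenameId INTEGER PRIMARY KEY, Name TEXT)")

def create_unique_filename(conn, name):
    # "Create unique": search first, and only insert when no matching
    # record exists; otherwise reuse the existing RecordId.
    row = conn.execute(
        "SELECT FilenameId FROM Filename WHERE Name = ?", (name,)).fetchone()
    if row is not None:
        return row[0]
    cur = conn.execute("INSERT INTO Filename (Name) VALUES (?)", (name,))
    return cur.lastrowid

id1 = create_unique_filename(conn, "passwd")
id2 = create_unique_filename(conn, "passwd")  # second save of the same name
assert id1 == id2  # only one Filename record is ever stored
```

The same pattern applies to the Media, Client, Path, and Attribute steps.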
\subsection*{Database Tables}
\index[general]{Database Tables }
\index[general]{Tables!Database }
\addcontentsline{toc}{subsection}{Database Tables}
\addcontentsline{lot}{table}{Filename Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf Filename } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type } & \multicolumn{1}{c| }{\bf Remark } \\
\hline
{FilenameId } & {integer } & {Primary Key } \\
\hline
{Name } & {Blob } & {Filename } \\
\hline
\end{longtable}
The {\bf Filename} table shown above contains the name of each file backed up
with the path removed. If different directories or machines contain the same
filename, only one copy will be saved in this table.
\addcontentsline{lot}{table}{Path Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf Path } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type } & \multicolumn{1}{c| }{\bf Remark } \\
\hline
{PathId } & {integer } & {Primary Key } \\
\hline
{Path } & {Blob } & {Full Path } \\
\hline
\end{longtable}
The {\bf Path} table shown above contains the path or directory names of all
directories on the system or systems. The filename and any MSDOS disk name
are stripped off. As with the filename, only one copy of each directory name
is kept regardless of how many machines or drives have the same directory.
These path names should be stored in Unix path name format.
Some simple testing on a Linux file system indicates that separating the
filename and the path may add more complication than is warranted by the
space savings. For example, this system has a total of 89,097 files, 60,467
of which have unique filenames, and there are 4,374 unique paths.
Finding all those files and doing two stat() calls per file takes an average
wall clock time of 1 min 35 seconds on a 400MHz machine running RedHat 6.1
Linux.
Finding all those files and putting them directly into a MySQL database with
the path and filename defined as TEXT, which is variable length up to 65,535
characters, takes 19 mins 31 seconds and creates a 27.6 MByte database.
Doing the same thing, but inserting them into Blob fields with the filename
indexed on the first 30 characters and the path name indexed on the 255 (max)
characters, takes 5 mins 18 seconds and creates a 5.24 MB database. Rerunning
the job (with the database already created) takes about 2 mins 50 seconds.
Running the same test as the last one (Path and Filename Blob), but with the
Filename indexed on the first 30 characters and the Path on the first 50
characters (with a linear search done thereafter), takes 5 mins on average
and creates a 3.4 MB database. Rerunning with the data already in the DB
takes 3 mins 35 seconds.
Finally, saving only the full path name rather than splitting the path and
the file, and indexing it on the first 50 characters takes 6 mins 43 seconds
and creates a 7.35 MB database.
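For reference, the split the measurements above compare (each full name
divided into a Path entry and a Filename entry, with duplicates stored only
once) can be sketched as follows; the sample file list is invented:

```python
import os

# Invented sample; the real test used 89,097 files from a Linux system.
files = [
    "/etc/passwd",
    "/etc/hosts",
    "/usr/local/etc/passwd",  # same filename as /etc/passwd, different path
]

unique_paths, unique_names = set(), set()
for f in files:
    path, name = os.path.split(f)
    unique_paths.add(path + "/")  # keep the directory part (trailing slash
                                  # is just a convention in this sketch)
    unique_names.add(name)

print(len(files), len(unique_paths), len(unique_names))  # 3 2 2
```

Each File record then needs only two small integer links (PathId and
FilenameId) instead of the full text of the name.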
\addcontentsline{lot}{table}{File Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf File } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type } & \multicolumn{1}{c| }{\bf Remark } \\
\hline
{FileId } & {integer } & {Primary Key } \\
\hline
{FileIndex } & {integer } & {The sequential file number in the Job } \\
\hline
{JobId } & {integer } & {Link to Job Record } \\
\hline
{PathId } & {integer } & {Link to Path Record } \\
\hline
{FilenameId } & {integer } & {Link to Filename Record } \\
\hline
{MarkId } & {integer } & {Used to mark files during Verify Jobs } \\
\hline
{LStat } & {tinyblob } & {File attributes in base64 encoding } \\
\hline
{MD5 } & {tinyblob } & {MD5 signature in base64 encoding } \\
\hline
\end{longtable}
The {\bf File} table shown above contains one entry for each file backed up
by Bacula. Thus a file that is backed up multiple times (as is normal) will
have multiple entries in the File table. This will probably be the table with
the largest number of records. Consequently, it is essential to keep the size
of this record to an absolute minimum. At the same time, this table must
contain all the information (or pointers to the information) about the file
and where it is backed up. Since a file may be backed up many times without
having changed, the path and filename are stored in separate tables.
This table contains by far the largest amount of information in the Catalog
database, both from the standpoint of number of records and the standpoint
of total database size. As a consequence, the user must take care to
periodically reduce the number of File records using the {\bf retention}
command in the Console program.
\addcontentsline{lot}{table}{Job Table Layout}
\begin{longtable}{|l|l|p{2.5in}|}
\hline
\multicolumn{3}{|l| }{\bf Job } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type } & \multicolumn{1}{c| }{\bf Remark } \\
\hline
{JobId } & {integer } & {Primary Key } \\
\hline
{Job } & {tinyblob } & {Unique Job Name } \\
\hline
{Name } & {tinyblob } & {Job Name } \\
\hline
{PurgedFiles } & {tinyint } & {Used by Bacula for purging/retention periods } \\
\hline
{Type } & {binary(1) } & {Job Type: Backup, Copy, Clone, Archive, Migration } \\
\hline
{Level } & {binary(1) } & {Job Level } \\
\hline
{ClientId } & {integer } & {Client index } \\
\hline
{JobStatus } & {binary(1) } & {Job Termination Status } \\
\hline
{SchedTime } & {datetime } & {Time/date when Job scheduled } \\
\hline
{StartTime } & {datetime } & {Time/date when Job started } \\
\hline
{EndTime } & {datetime } & {Time/date when Job ended } \\
\hline
{JobTDate } & {bigint } & {Start day in Unix format but 64 bits; used for
Retention period. } \\
\hline
{VolSessionId } & {integer } & {Unique Volume Session ID } \\
\hline
{VolSessionTime } & {integer } & {Unique Volume Session Time } \\
\hline
{JobFiles } & {integer } & {Number of files saved in Job } \\
\hline
{JobBytes } & {bigint } & {Number of bytes saved in Job } \\
\hline
{JobErrors } & {integer } & {Number of errors during Job } \\
\hline
{JobMissingFiles } & {integer } & {Number of files not saved (not yet used) } \\
\hline
{PoolId } & {integer } & {Link to Pool Record } \\
\hline
{FileSetId } & {integer } & {Link to FileSet Record } \\
\hline
{PurgedFiles } & {tiny integer } & {Set when all File records purged } \\
\hline
{HasBase } & {tiny integer } & {Set when Base Job run } \\
\hline
\end{longtable}
The {\bf Job} table contains one record for each Job run by Bacula. Thus
normally, there will be one record per day per machine added to the database.
Note, the JobId is used to index Job records in the database, and it is often
shown to the user in the Console program. However, care must be taken with
its use as it is not unique from database to database. For example, the user
may have a database for Client data saved on machine Rufus and another
database for Client data saved on machine Roxie. In this case, the two
databases will each have JobIds that match those in the other database. For a
unique reference to a Job, see the Job field described below.
The Name field of the Job record corresponds to the Name resource record
given in the Director's configuration file. Thus it is a generic name, and it
will be normal to find many Jobs (or even all Jobs) with the same Name.
The Job field contains a combination of the Name and the schedule time of the
Job as set by the Director. Thus for a given Director, even with multiple
Catalog databases, the Job field will contain a unique name representing the
Job.
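As a sketch, assuming a layout of the form Name.date\_time (the exact format
is the Director's choice and is illustrative here), the unique Job field
could be built like this:

```python
from datetime import datetime

def make_job_field(name, sched_time):
    # Combine the generic Name with the scheduling time; the result is
    # unique for a given Director. The format below is illustrative.
    return "%s.%s" % (name, sched_time.strftime("%Y-%m-%d_%H.%M.%S"))

job = make_job_field("NightlySave", datetime(2004, 6, 29, 1, 10, 0))
print(job)  # NightlySave.2004-06-29_01.10.00
```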
For a given Storage daemon, the VolSessionId and VolSessionTime form a unique
identification of the Job. This will be the case even if multiple Directors
are using the same Storage daemon.

The Job Type (or simply Type) can have one of the following values:
\addcontentsline{lot}{table}{Job Types}
\begin{longtable}{|l|l|}
\hline
\multicolumn{1}{|c| }{\bf Value } & \multicolumn{1}{c| }{\bf Meaning } \\
\hline
{B } & {Backup Job } \\
\hline
{V } & {Verify Job } \\
\hline
{R } & {Restore Job } \\
\hline
{C } & {Console program (not in database) } \\
\hline
{D } & {Admin Job } \\
\hline
{A } & {Archive Job (not implemented) } \\
\hline
\end{longtable}
The JobStatus field specifies how the job terminated, and can be one of the
following values:
\addcontentsline{lot}{table}{Job Statuses}
\begin{longtable}{|l|l|}
\hline
\multicolumn{1}{|c| }{\bf Value } & \multicolumn{1}{c| }{\bf Meaning } \\
\hline
{C } & {Created but not yet running } \\
\hline
{T } & {Terminated normally } \\
\hline
{E } & {Terminated in Error } \\
\hline
{e } & {Non-fatal error } \\
\hline
{f } & {Fatal error } \\
\hline
{D } & {Verify Differences } \\
\hline
{A } & {Canceled by the user } \\
\hline
{F } & {Waiting on the File daemon } \\
\hline
{S } & {Waiting on the Storage daemon } \\
\hline
{m } & {Waiting for a new Volume to be mounted } \\
\hline
{M } & {Waiting for a Mount } \\
\hline
{s } & {Waiting for Storage resource } \\
\hline
{j } & {Waiting for Job resource } \\
\hline
{c } & {Waiting for Client resource } \\
\hline
{d } & {Waiting for maximum jobs } \\
\hline
{t } & {Waiting for Start Time } \\
\hline
{p } & {Waiting for higher priority job to finish } \\
\hline
\end{longtable}
\addcontentsline{lot}{table}{File Sets Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf FileSet } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type } & \multicolumn{1}{c| }{\bf Remark } \\
\hline
{FileSetId } & {integer } & {Primary Key } \\
\hline
{FileSet } & {tinyblob } & {FileSet name } \\
\hline
{MD5 } & {tinyblob } & {MD5 checksum of FileSet } \\
\hline
{CreateTime } & {datetime } & {Time and date FileSet created } \\
\hline
\end{longtable}
The {\bf FileSet} table contains one entry for each FileSet that is used.
The MD5 signature is kept to ensure that if the user changes anything inside
the FileSet, the change will be detected and the new FileSet will be used.
This is particularly important when doing an Incremental update. If the user
deletes a file or adds a file, we need to ensure that a Full backup is done
prior to the next Incremental backup.
\addcontentsline{lot}{table}{JobMedia Table Layout}
\begin{longtable}{|l|l|p{2.5in}|}
\hline
\multicolumn{3}{|l| }{\bf JobMedia } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type } & \multicolumn{1}{c| }{\bf Remark } \\
\hline
{JobMediaId } & {integer } & {Primary Key } \\
\hline
{JobId } & {integer } & {Link to Job Record } \\
\hline
{MediaId } & {integer } & {Link to Media Record } \\
\hline
{FirstIndex } & {integer } & {The index (sequence number) of the first file
written for this Job to the Media } \\
\hline
{LastIndex } & {integer } & {The index of the last file written for this
Job to the Media } \\
\hline
{StartFile } & {integer } & {The physical media (tape) file number of the
first block written for this Job } \\
\hline
{EndFile } & {integer } & {The physical media (tape) file number of the
last block written for this Job } \\
\hline
{StartBlock } & {integer } & {The number of the first block written for
this Job } \\
\hline
{EndBlock } & {integer } & {The number of the last block written for this
Job } \\
\hline
{VolIndex } & {integer } & {The Volume use sequence number within the Job } \\
\hline
\end{longtable}
The {\bf JobMedia} table contains one entry for each volume written for the
current Job. If the Job spans 3 tapes, there will be three JobMedia records,
each containing the information to find all the files for the given JobId on
that Volume.
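The lookup this enables can be sketched with SQLite and a reduced set of
columns (the data below is invented): given a JobId, the JobMedia records
name every Volume holding part of that Job, in file-index order.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE JobMedia (
    JobMediaId INTEGER PRIMARY KEY,
    JobId INTEGER, MediaId INTEGER,
    FirstIndex INTEGER, LastIndex INTEGER)""")

# A Job (JobId 42) that spanned three tapes -> three JobMedia records.
rows = [(1, 42, 7, 1, 1200), (2, 42, 8, 1201, 2400), (3, 42, 9, 2401, 3000)]
conn.executemany("INSERT INTO JobMedia VALUES (?, ?, ?, ?, ?)", rows)

media = [m for (m,) in conn.execute(
    "SELECT MediaId FROM JobMedia WHERE JobId = ? ORDER BY FirstIndex", (42,))]
print(media)  # [7, 8, 9] -- the Volumes to load, in order
```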
\addcontentsline{lot}{table}{Media Table Layout}
\begin{longtable}{|l|l|p{2.4in}|}
\hline
\multicolumn{3}{|l| }{\bf Media } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type } & \multicolumn{1}{c| }{\bf Remark } \\
\hline
{MediaId } & {integer } & {Primary Key } \\
\hline
{VolumeName } & {tinyblob } & {Volume name } \\
\hline
{Slot } & {integer } & {Autochanger Slot number or zero } \\
\hline
{PoolId } & {integer } & {Link to Pool Record } \\
\hline
{MediaType } & {tinyblob } & {The MediaType supplied by the user } \\
\hline
{FirstWritten } & {datetime } & {Time/date when first written } \\
\hline
{LastWritten } & {datetime } & {Time/date when last written } \\
\hline
{LabelDate } & {datetime } & {Time/date when tape labeled } \\
\hline
{VolJobs } & {integer } & {Number of jobs written to this media } \\
\hline
{VolFiles } & {integer } & {Number of files written to this media } \\
\hline
{VolBlocks } & {integer } & {Number of blocks written to this media } \\
\hline
{VolMounts } & {integer } & {Number of times media mounted } \\
\hline
{VolBytes } & {bigint } & {Number of bytes saved in Job } \\
\hline
{VolErrors } & {integer } & {Number of errors during Job } \\
\hline
{VolWrites } & {integer } & {Number of writes to media } \\
\hline
{MaxVolBytes } & {bigint } & {Maximum bytes to put on this media } \\
\hline
{VolCapacityBytes } & {bigint } & {Capacity estimate for this volume } \\
\hline
{VolStatus } & {enum } & {Status of media: Full, Archive, Append, Recycle,
Read-Only, Disabled, Error, Busy } \\
\hline
{Recycle } & {tinyint } & {Whether or not Bacula can recycle the Volumes:
Yes|No } \\
\hline
{VolRetention } & {bigint } & {64 bit seconds until expiration } \\
\hline
{VolUseDuration } & {bigint } & {64 bit seconds volume can be used } \\
\hline
{MaxVolJobs } & {integer } & {Maximum jobs to put on Volume } \\
\hline
{MaxVolFiles } & {integer } & {Maximum EOF marks to put on Volume } \\
\hline
\end{longtable}
The {\bf Volume} table (internally referred to as the Media table) contains
one entry for each volume, that is, each tape, cassette (8mm, DLT, DAT, ...),
or file on which information is or was backed up. There is one Volume record
created for each of the NumVols specified in the Pool resource record.
\addcontentsline{lot}{table}{Pool Table Layout}
\begin{longtable}{|l|l|p{2.4in}|}
\hline
\multicolumn{3}{|l| }{\bf Pool } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type } & \multicolumn{1}{c| }{\bf Remark } \\
\hline
{PoolId } & {integer } & {Primary Key } \\
\hline
{Name } & {Tinyblob } & {Pool Name } \\
\hline
{NumVols } & {Integer } & {Number of Volumes in the Pool } \\
\hline
{MaxVols } & {Integer } & {Maximum Volumes in the Pool } \\
\hline
{UseOnce } & {tinyint } & {Use volume once } \\
\hline
{UseCatalog } & {tinyint } & {Set to use catalog } \\
\hline
{AcceptAnyVolume } & {tinyint } & {Accept any volume from Pool } \\
\hline
{VolRetention } & {bigint } & {64 bit seconds to retain volume } \\
\hline
{VolUseDuration } & {bigint } & {64 bit seconds volume can be used } \\
\hline
{MaxVolJobs } & {integer } & {Max jobs on volume } \\
\hline
{MaxVolFiles } & {integer } & {Max EOF marks to put on Volume } \\
\hline
{MaxVolBytes } & {bigint } & {Max bytes to write on Volume } \\
\hline
{AutoPrune } & {tinyint } & {yes|no for autopruning } \\
\hline
{Recycle } & {tinyint } & {yes|no for allowing auto recycling of Volume } \\
\hline
{PoolType } & {enum } & {Backup, Copy, Cloned, Archive, Migration } \\
\hline
{LabelFormat } & {Tinyblob } & {Label format } \\
\hline
\end{longtable}
The {\bf Pool} table contains one entry for each media pool controlled by
Bacula in this database. One Media record exists for each of the NumVols
volumes contained in the Pool. The PoolType is a Bacula-defined keyword. The
MediaType is defined by the administrator and corresponds to the MediaType
specified in the Director's Storage definition record. The CurrentVol is the
sequence number of the Media record for the current volume.
\addcontentsline{lot}{table}{Client Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf Client } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type } & \multicolumn{1}{c| }{\bf Remark } \\
\hline
{ClientId } & {integer } & {Primary Key } \\
\hline
{Name } & {TinyBlob } & {File Services Name } \\
\hline
{UName } & {TinyBlob } & {uname -a from Client (not yet used) } \\
\hline
{AutoPrune } & {tinyint } & {yes|no for autopruning } \\
\hline
{FileRetention } & {bigint } & {64 bit seconds to retain Files } \\
\hline
{JobRetention } & {bigint } & {64 bit seconds to retain Job } \\
\hline
\end{longtable}
The {\bf Client} table contains one entry for each machine backed up by
Bacula in this database. Normally the Name is a fully qualified domain name.
\addcontentsline{lot}{table}{Unsaved Files Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf UnsavedFiles } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type } & \multicolumn{1}{c| }{\bf Remark } \\
\hline
{UnsavedId } & {integer } & {Primary Key } \\
\hline
{JobId } & {integer } & {JobId corresponding to this record } \\
\hline
{PathId } & {integer } & {Id of path } \\
\hline
{FilenameId } & {integer } & {Id of filename } \\
\hline
\end{longtable}
The {\bf UnsavedFiles} table contains one entry for each file that was not
saved. Note! This record is not yet implemented.
\addcontentsline{lot}{table}{Counter Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf Counter } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type } & \multicolumn{1}{c| }{\bf Remark } \\
\hline
{Counter } & {tinyblob } & {Counter name } \\
\hline
{MinValue } & {integer } & {Start/Min value for counter } \\
\hline
{MaxValue } & {integer } & {Max value for counter } \\
\hline
{CurrentValue } & {integer } & {Current counter value } \\
\hline
{WrapCounter } & {tinyblob } & {Name of another counter } \\
\hline
\end{longtable}
The {\bf Counter} table contains one entry for each permanent counter defined
by the user.
\addcontentsline{lot}{table}{Version Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf Version } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type } & \multicolumn{1}{c| }{\bf Remark } \\
\hline
{VersionId } & {integer } & {Primary Key } \\
\hline
\end{longtable}
The {\bf Version} table defines the Bacula database version number. Bacula
checks this number before reading the database to ensure that it is
compatible with the Bacula binary file.
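A sketch of that check (SQLite syntax; the expected-version constant here
simply mirrors the INSERT shown in the table-creation commands later in this
chapter, and the error message is ours):

```python
import sqlite3

EXPECTED_DB_VERSION = 7  # the version this (hypothetical) binary expects

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE Version (VersionId INTEGER)")
conn.execute("INSERT INTO Version (VersionId) VALUES (7)")

(db_version,) = conn.execute("SELECT VersionId FROM Version").fetchone()
if db_version != EXPECTED_DB_VERSION:
    raise RuntimeError("Catalog version %d does not match binary (%d)"
                       % (db_version, EXPECTED_DB_VERSION))
print("database version OK:", db_version)
```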
\addcontentsline{lot}{table}{Base Files Table Layout}
\begin{longtable}{|l|l|l|}
\hline
\multicolumn{3}{|l| }{\bf BaseFiles } \\
\hline
\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type } & \multicolumn{1}{c| }{\bf Remark } \\
\hline
{BaseId } & {integer } & {Primary Key } \\
\hline
{BaseJobId } & {integer } & {JobId of Base Job } \\
\hline
{JobId } & {integer } & {Reference to Job } \\
\hline
{FileId } & {integer } & {Reference to File } \\
\hline
{FileIndex } & {integer } & {File Index number } \\
\hline
\end{longtable}
The {\bf BaseFiles} table contains all the File references for a particular
JobId that point to a Base file -- i.e. they were previously saved and hence
were not saved in the current JobId but in BaseJobId under FileId. FileIndex
is the index of the file and is used for optimization of Restore jobs to
prevent the need to read the FileId record when creating the in-memory tree.
This record is not yet implemented.
\subsubsection*{MySQL Table Definition}
\index[general]{MySQL Table Definition }
\index[general]{Definition!MySQL Table }
\addcontentsline{toc}{subsubsection}{MySQL Table Definition}
The commands used to create the MySQL tables are as follows:

\footnotesize
\begin{verbatim}
CREATE TABLE Filename (
  FilenameId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
  Name BLOB NOT NULL,
  PRIMARY KEY(FilenameId),
  INDEX (Name(30))
  );
CREATE TABLE Path (
   PathId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
   Path BLOB NOT NULL,
   PRIMARY KEY(PathId),
   INDEX (Path(50))
   );
CREATE TABLE File (
   FileId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
   FileIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
   JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
   PathId INTEGER UNSIGNED NOT NULL REFERENCES Path,
   FilenameId INTEGER UNSIGNED NOT NULL REFERENCES Filename,
   MarkId INTEGER UNSIGNED NOT NULL DEFAULT 0,
   LStat TINYBLOB NOT NULL,
   MD5 TINYBLOB NOT NULL,
   PRIMARY KEY(FileId),
   INDEX (JobId)
   );
CREATE TABLE Job (
   JobId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
   Job TINYBLOB NOT NULL,
   Name TINYBLOB NOT NULL,
   Type BINARY(1) NOT NULL,
   Level BINARY(1) NOT NULL,
   ClientId INTEGER NOT NULL REFERENCES Client,
   JobStatus BINARY(1) NOT NULL,
   SchedTime DATETIME NOT NULL,
   StartTime DATETIME NOT NULL,
   EndTime DATETIME NOT NULL,
   JobTDate BIGINT UNSIGNED NOT NULL,
   VolSessionId INTEGER UNSIGNED NOT NULL DEFAULT 0,
   VolSessionTime INTEGER UNSIGNED NOT NULL DEFAULT 0,
   JobFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
   JobBytes BIGINT UNSIGNED NOT NULL,
   JobErrors INTEGER UNSIGNED NOT NULL DEFAULT 0,
   JobMissingFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
   PoolId INTEGER UNSIGNED NOT NULL REFERENCES Pool,
   FileSetId INTEGER UNSIGNED NOT NULL REFERENCES FileSet,
   PurgedFiles TINYINT NOT NULL DEFAULT 0,
   HasBase TINYINT NOT NULL DEFAULT 0,
   PRIMARY KEY(JobId),
   INDEX (Name(128))
   );
CREATE TABLE FileSet (
   FileSetId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
   FileSet TINYBLOB NOT NULL,
   MD5 TINYBLOB NOT NULL,
   CreateTime DATETIME NOT NULL,
   PRIMARY KEY(FileSetId)
   );
CREATE TABLE JobMedia (
   JobMediaId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
   JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
   MediaId INTEGER UNSIGNED NOT NULL REFERENCES Media,
   FirstIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
   LastIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
   StartFile INTEGER UNSIGNED NOT NULL DEFAULT 0,
   EndFile INTEGER UNSIGNED NOT NULL DEFAULT 0,
   StartBlock INTEGER UNSIGNED NOT NULL DEFAULT 0,
   EndBlock INTEGER UNSIGNED NOT NULL DEFAULT 0,
   VolIndex INTEGER UNSIGNED NOT NULL DEFAULT 0,
   PRIMARY KEY(JobMediaId),
   INDEX (JobId, MediaId)
   );
CREATE TABLE Media (
   MediaId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
   VolumeName TINYBLOB NOT NULL,
   Slot INTEGER NOT NULL DEFAULT 0,
   PoolId INTEGER UNSIGNED NOT NULL REFERENCES Pool,
   MediaType TINYBLOB NOT NULL,
   FirstWritten DATETIME NOT NULL,
   LastWritten DATETIME NOT NULL,
   LabelDate DATETIME NOT NULL,
   VolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
   VolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
   VolBlocks INTEGER UNSIGNED NOT NULL DEFAULT 0,
   VolMounts INTEGER UNSIGNED NOT NULL DEFAULT 0,
   VolBytes BIGINT UNSIGNED NOT NULL DEFAULT 0,
   VolErrors INTEGER UNSIGNED NOT NULL DEFAULT 0,
   VolWrites INTEGER UNSIGNED NOT NULL DEFAULT 0,
   VolCapacityBytes BIGINT UNSIGNED NOT NULL,
   VolStatus ENUM('Full', 'Archive', 'Append', 'Recycle', 'Purged',
    'Read-Only', 'Disabled', 'Error', 'Busy', 'Used', 'Cleaning') NOT NULL,
   Recycle TINYINT NOT NULL DEFAULT 0,
   VolRetention BIGINT UNSIGNED NOT NULL DEFAULT 0,
   VolUseDuration BIGINT UNSIGNED NOT NULL DEFAULT 0,
   MaxVolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
   MaxVolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
   MaxVolBytes BIGINT UNSIGNED NOT NULL DEFAULT 0,
   InChanger TINYINT NOT NULL DEFAULT 0,
   MediaAddressing TINYINT NOT NULL DEFAULT 0,
   VolReadTime BIGINT UNSIGNED NOT NULL DEFAULT 0,
   VolWriteTime BIGINT UNSIGNED NOT NULL DEFAULT 0,
   PRIMARY KEY(MediaId),
   INDEX (PoolId)
   );
CREATE TABLE Pool (
   PoolId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
   Name TINYBLOB NOT NULL,
   NumVols INTEGER UNSIGNED NOT NULL DEFAULT 0,
   MaxVols INTEGER UNSIGNED NOT NULL DEFAULT 0,
   UseOnce TINYINT NOT NULL,
   UseCatalog TINYINT NOT NULL,
   AcceptAnyVolume TINYINT DEFAULT 0,
   VolRetention BIGINT UNSIGNED NOT NULL,
   VolUseDuration BIGINT UNSIGNED NOT NULL,
   MaxVolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0,
   MaxVolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0,
   MaxVolBytes BIGINT UNSIGNED NOT NULL,
   AutoPrune TINYINT DEFAULT 0,
   Recycle TINYINT DEFAULT 0,
   PoolType ENUM('Backup', 'Copy', 'Cloned', 'Archive', 'Migration', 'Scratch') NOT NULL,
   LabelFormat TINYBLOB,
   Enabled TINYINT DEFAULT 1,
   ScratchPoolId INTEGER UNSIGNED DEFAULT 0 REFERENCES Pool,
   RecyclePoolId INTEGER UNSIGNED DEFAULT 0 REFERENCES Pool,
   PRIMARY KEY (PoolId)
   );
CREATE TABLE Client (
   ClientId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT,
   Name TINYBLOB NOT NULL,
   Uname TINYBLOB NOT NULL, /* full uname -a of client */
   AutoPrune TINYINT DEFAULT 0,
   FileRetention BIGINT UNSIGNED NOT NULL,
   JobRetention BIGINT UNSIGNED NOT NULL,
   PRIMARY KEY(ClientId)
   );
CREATE TABLE BaseFiles (
   BaseId INTEGER UNSIGNED AUTO_INCREMENT,
   BaseJobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
   JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
   FileId INTEGER UNSIGNED NOT NULL REFERENCES File,
   FileIndex INTEGER UNSIGNED,
   PRIMARY KEY(BaseId)
   );
CREATE TABLE UnsavedFiles (
   UnsavedId INTEGER UNSIGNED AUTO_INCREMENT,
   JobId INTEGER UNSIGNED NOT NULL REFERENCES Job,
   PathId INTEGER UNSIGNED NOT NULL REFERENCES Path,
   FilenameId INTEGER UNSIGNED NOT NULL REFERENCES Filename,
   PRIMARY KEY (UnsavedId)
   );
CREATE TABLE Version (
   VersionId INTEGER UNSIGNED NOT NULL
   );
-- Initialize Version
INSERT INTO Version (VersionId) VALUES (7);
CREATE TABLE Counters (
   Counter TINYBLOB NOT NULL,
   MinValue INTEGER,
   MaxValue INTEGER,
   CurrentValue INTEGER,
   WrapCounter TINYBLOB NOT NULL,
   PRIMARY KEY (Counter(128))
   );
\end{verbatim}
\normalsize