backup. By deleting records within the database, you can make space available
for the new records that will be added during the next Job. By constantly
deleting old expired records (dates older than the Retention period), your
database size will remain constant.
If you started with the default configuration files, they already contain
reasonable defaults for a small number of machines (less than 5), so if you
fall into that case, catalog maintenance will not be urgent if you have a few
hundred megabytes of disk space free. Whatever the case may be, some knowledge
of retention periods will be useful.
\label{Retention}
\section{Setting Retention Periods}
{\bf Bacula} uses three Retention periods: the {\bf File Retention} period,
the {\bf Job Retention} period, and the {\bf Volume Retention} period. Of
these three, the File Retention period is by far the most important in
determining how large your database will become.
The {\bf File Retention} and the {\bf Job Retention} are specified in each
Client resource as is shown below. The {\bf Volume Retention} period is
specified in the Pool resource, and the details are given in the next chapter
of this manual.
\begin{description}
occur at the end of a backup Job for the given Client. Note that the Client
database record contains a copy of the File and Job retention periods, but
Bacula uses the current values found in the Director's Client resource to do
the pruning.
Since File records in the database account for probably 80 percent of the
size of the database, you should carefully determine exactly what File
Retention period you need. Once the File records have been removed from
the database, you will no longer be able to restore individual files
\ilink{ Configuration chapter}{Time} of this manual for additional details
of modifier specification.
The default File retention period is 60 days.
\item [Job Retention = \lt{}time-period-specification\gt{}]
\index[general]{Job Retention }
As mentioned above, once the File records are removed from the database,
you will no longer be able to restore individual files from the Job.
However, as long as the Job record remains in the database, you will be
able to restore all the files backed up for the Job (on version 1.37 and
later). As a consequence, it is generally a good idea to retain the Job
records much longer than the File records.
Configuration chapter}{Time} of this manual for additional details of
modifier specification.
The default Job Retention period is 180 days.
\item [AutoPrune = \lt{}yes/no\gt{}]
\index[general]{AutoPrune }
If AutoPrune is set to {\bf yes} (default), Bacula will automatically apply
the File retention period and the Job retention period for the Client at the
end of the Job.
If you turn this off by setting it to {\bf no}, your Catalog will grow each
time you run a Job.
\end{description}
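Putting the directives above together, a Client resource might carry retention
settings like the following sketch (the names and values here are illustrative,
not taken from this manual):

\footnotesize
\begin{lstlisting}
Client {
  Name = client1-fd
  Address = client1.example.com
  Catalog = MyCatalog
  Password = "xxx"
  # Prune File records after 60 days and Job records after 180 days
  File Retention = 60 days
  Job Retention = 180 days
  # Apply both retention periods automatically at the end of each Job
  AutoPrune = yes
}
\end{lstlisting}
\normalsize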
\label{CompactingMySQL}
as Oracle have commands that will compact a database to reclaim wasted file
space. MySQL has the {\bf OPTIMIZE TABLE} command that you can use, and SQLite
version 2.8.4 and greater has the {\bf VACUUM} command. We leave it to you to
explore the utility of the {\bf OPTIMIZE TABLE} command in MySQL.
All database programs have some means of writing the database out in ASCII
format and then reloading it. Doing so will re-create the database from
scratch producing a compacted result, so below, we show you how you can do
this for MySQL, PostgreSQL and SQLite.
For a {\bf MySQL} database, you could write the Bacula database as an ASCII
file (bacula.sql) then reload it by doing the following:
\footnotesize
\begin{lstlisting}
mysqldump -f --opt bacula > bacula.sql
mysql bacula < bacula.sql
rm -f bacula.sql
\end{lstlisting}
\normalsize
Depending on the size of your database, this will take more or less time and a
fair amount of disk space. For example, if I cd to the location of the MySQL
Bacula database (typically /opt/mysql/var or something similar) and enter:
\footnotesize
\begin{lstlisting}
du bacula
\end{lstlisting}
\normalsize
I get {\bf 620,644} which means there are that many blocks containing 1024
mysql} command to recreate the database, I ended up with a total of {\bf
210,464} blocks rather than the original {\bf 620,644}. In other words, the
compressed version of the database took approximately one third of the space
of the database that had been in use for about a year.
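The "one third" figure can be sanity-checked with integer shell arithmetic:

\footnotesize
\begin{lstlisting}
#!/bin/sh
# Percentage of the original size still in use after the dump/reload
BEFORE=620644
AFTER=210464
echo $((AFTER * 100 / BEFORE))   # prints 33, i.e. about one third
\end{lstlisting}
\normalsize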
As a consequence, I suggest you monitor the size of your database and from
time to time (once every six months or year), compress it.
\label{DatabaseRepair}
\label{RepairingMySQL}
running MySQL's database check and repair routines. The program you need to
run depends on the type of database indexing you are using. If you are using
the default, you will probably want to use {\bf myisamchk}. For more details
on how to do this, please consult the MySQL document at:
\elink{
http://www.mysql.com/doc/en/Repair.html}
{http://www.mysql.com/doc/en/Repair.html}.
If the errors you are getting are simply SQL warnings, then you might try
running dbcheck before (or possibly after) using the MySQL database repair
program. It can clean up many of the orphaned record problems, and certain
other inconsistencies in the Bacula database.
A typical cause of MySQL database problems is if your partition fills. In
such a case, you will need to create additional space on the partition or
the following script did the trick for me:
\footnotesize
\begin{lstlisting}
#!/bin/sh
# Repair each MyISAM table, keeping a copy of the original as x<name>.MYD
for i in *.MYD ; do
  mv $i x${i}
  cp x${i} ${i}
  chown mysql:mysql ${i}
  # myisamchk takes the table name: the file name without the .MYD extension
  t=`echo $i | cut -f 1 -d '.'`
  myisamchk -r ${t}
done
\end{lstlisting}
\normalsize
I invoked it with the following commands:
\footnotesize
\begin{lstlisting}
cd /var/lib/mysql/bacula
./repair
\end{lstlisting}
\normalsize
Then after ensuring that the database was correctly fixed, I did:
\footnotesize
\begin{lstlisting}
cd /var/lib/mysql/bacula
rm -f x*.MYD
\end{lstlisting}
\normalsize
\section{MySQL Table is Full}
\index[general]{Database!MySQL Table is Full}
\index[general]{MySQL Table is Full}
If you are running into the error {\bf The table 'File' is full \ldots{}},
it is probably because with MySQL version 4.x, the table is limited by
default to a maximum size of 4 GB and you have probably run into
the limit. The solution can be found at:
\elink{http://dev.mysql.com/doc/refman/5.0/en/full-table.html}
{http://dev.mysql.com/doc/refman/5.0/en/full-table.html}.
You can display the maximum length of your table with:
\footnotesize
\begin{lstlisting}
mysql bacula
SHOW TABLE STATUS FROM bacula like "File";
\end{lstlisting}
\normalsize
If the column labeled "Max\_data\_length" is around 4 GB, this is likely
to be the source of your problem, and you can modify it with:
\footnotesize
\begin{lstlisting}
mysql bacula
ALTER TABLE File MAX_ROWS=281474976710656;
\end{lstlisting}
\normalsize
Alternatively you can modify your /etc/my.conf file before creating the
Bacula tables, and in the [mysqld] section set:
\footnotesize
\begin{lstlisting}
set-variable = myisam_data_pointer_size=6
\end{lstlisting}
\normalsize
The above myisam\_data\_pointer\_size setting must be made before you create your
/tmp. Once that space fills up, Bacula daemons such as the Storage
daemon doing spooling can get strange errors. E.g.
\footnotesize
\begin{lstlisting}
Fatal error: spool.c:402 Spool data read error.
Fatal error: backup.c:892 Network send error to SD. ERR=Connection reset by
peer
\end{lstlisting}
\normalsize
What you need to do is setup MySQL to use a different (larger) temp
directory, which can be set in the /etc/my.cnf with these variables
set:
\footnotesize
\begin{lstlisting}
tmpdir=/path/to/larger/tmpdir
bdb_tmpdir=/path/to/larger/tmpdir
\end{lstlisting}
\normalsize
\label{RepairingPSQL}
The same considerations apply that are indicated above for MySQL. That is,
consult the PostgreSQL documents for how to repair the database, and also
consider using Bacula's dbcheck program if the conditions are reasonable for
using it (see above).
\label{DatabasePerformance}
\section{Database Performance Issues}
indications as to what indexes may be appropriate. Please see below
for specific instructions on checking indexes.
For MySQL, it is very important to examine the
my.cnf file (usually in /etc/my.cnf).
You may obtain significant performance improvements by switching to
the my-large.cnf or my-huge.cnf files that come with the MySQL source
For SQLite3, one significant factor in improving the performance is
to ensure that there is a "PRAGMA synchronous = NORMAL;" statement.
This reduces the number of times that the database flushes the in memory
cache to disk. There are other settings for this PRAGMA that can
give even further performance improvements at the risk of a database
corruption if your system crashes.
{http://www.postgresql.org/docs/faqs.FAQ.html\#3.3}.
Also for PostgreSQL, look at the "effective\_cache\_size" setting. For a 2GB memory
machine, you probably want to set it at 131072, but don't set it too high.
In addition, for a 2GB system, work\_mem = 256000 and
maintenance\_work\_mem = 256000 seem to be reasonable values. Make
the following commands:
\footnotesize
\begin{lstlisting}
psql bacula
select * from pg_indexes where tablename='file';
\end{lstlisting}
\normalsize
If you do not see output that indicates that all three indexes
are created, you can create the two additional indexes using:
\footnotesize
\begin{lstlisting}
psql bacula
CREATE INDEX file_jobid_idx on file (jobid);
CREATE INDEX file_jpf_idx on file (jobid, pathid, filenameid);
\end{lstlisting}
\normalsize
Make sure that you don't have an index on File (filenameid, pathid).
On MySQL, you can check if you have the proper indexes by:
\footnotesize
\begin{lstlisting}
mysql bacula
show index from File;
\end{lstlisting}
\normalsize
If the indexes are not present, especially the JobId index, you can
create them with the following commands:
\footnotesize
\begin{lstlisting}
mysql bacula
CREATE INDEX file_jobid_idx on File (JobId);
CREATE INDEX file_jpf_idx on File (JobId, FilenameId, PathId);
\end{lstlisting}
\normalsize
Though normally not a problem, you should ensure that the indexes
defined for Filename and Path are both set to 255 characters. Some users
reported performance problems when their indexes were set to 50 characters.
To check, do:
\footnotesize
\begin{lstlisting}
mysql bacula
show index from Filename;
show index from Path;
\end{lstlisting}
\normalsize
What is important is that both the Filename and Path indexes cover the full
255 characters. If they do not, you can drop and recreate each
index with:
\footnotesize
\begin{lstlisting}
mysql bacula
DROP INDEX Path on Path;
CREATE INDEX Path on Path (Path(255));
DROP INDEX Name on Filename;
CREATE INDEX Name on Filename (Name(255));
\end{lstlisting}
\normalsize
On SQLite, you can check if you have the proper indexes by:
\footnotesize
\begin{lstlisting}
sqlite <path>/bacula.db
select * from sqlite_master where type='index' and tbl_name='File';
\end{lstlisting}
\normalsize
If the indexes are not present, especially the JobId index, you can
create them with the following commands:
\footnotesize
\begin{lstlisting}
sqlite <path>/bacula.db
CREATE INDEX file_jobid_idx on File (JobId);
CREATE INDEX file_jfp_idx on File (JobId, PathId, FilenameId);
\end{lstlisting}
\normalsize
Over time, as noted above, your database will tend to grow, even though
Bacula regularly prunes files. PostgreSQL has a {\bf VACUUM}
command that will compact your database for you. Alternatively you may want to
use the {\bf vacuumdb} command, which can be run from a cron job.
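As a sketch, a system-wide crontab entry could run {\bf vacuumdb} weekly; the
schedule, user, and database name below are assumptions to adapt to your
installation:

\footnotesize
\begin{lstlisting}
# /etc/crontab: vacuum and analyze the Bacula catalog
# every Sunday at 05:10 as the postgres user
10 5 * * 0  postgres  vacuumdb --analyze bacula
\end{lstlisting}
\normalsize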
All database programs have some means of writing the database out in ASCII
format and then reloading it. Doing so will re-create the database from
scratch producing a compacted result, so below, we show you how you can do
this for PostgreSQL.
For a {\bf PostgreSQL} database, you could write the Bacula database as an
ASCII file (bacula.sql) then reload it by doing the following:
\footnotesize
\begin{lstlisting}
pg_dump -c bacula > bacula.sql
cat bacula.sql | psql bacula
rm -f bacula.sql
\end{lstlisting}
\normalsize
Depending on the size of your database, this will take more or less time and a
fair amount of disk space. For example, you can {\bf cd} to the location of
the Bacula database (typically /usr/local/pgsql/data or possible
/var/lib/pgsql/data) and check the size.
There are certain PostgreSQL users who do not recommend the above
procedure. They have the following to say:
PostgreSQL does not
need to be dumped/restored to keep the database efficient. A normal
Finally, you might want to look at the PostgreSQL documentation on
this subject at
\elink{http://www.postgresql.org/docs/8.1/interactive/maintenance.html}
{http://www.postgresql.org/docs/8.1/interactive/maintenance.html}.
\section{Compacting Your SQLite Database}
\index[general]{Compacting Your SQLite Database }
First please read the previous section that explains why it is necessary to
compress a database. SQLite version 2.8.4 and greater have the {\bf Vacuum}
command for compacting the database.
\footnotesize
\begin{lstlisting}
cd working-directory
echo 'vacuum;' | sqlite bacula.db
\end{lstlisting}
\normalsize
As an alternative, you can use the following commands, adapted to your system:
\footnotesize
\begin{lstlisting}
cd working-directory
echo '.dump' | sqlite bacula.db > bacula.sql
rm -f bacula.db
sqlite bacula.db < bacula.sql
rm -f bacula.sql
\end{lstlisting}
\normalsize
Where {\bf working-directory} is the directory that you specified in the
Director's configuration file. Note, in the case of SQLite, it is necessary to
completely delete (rm) the old database before creating a new compressed
version.
\section{Migrating from SQLite to MySQL or PostgreSQL}
\index[general]{MySQL!Migrating from SQLite to }
database if it is specified in the FileSet, this is not a very good way to do
it, because the database will be saved while Bacula is modifying it. Thus the
database may be in an unstable state. Worse yet, you will back up the database
before all the Bacula updates have been applied.
To resolve these problems, you need to back up the database after all the backup
jobs have been run. In addition, you will want to make a copy while Bacula is
will be automatically generated along with all the other Bacula scripts. The
first script will make an ASCII copy of your Bacula database into {\bf
bacula.sql} in the working directory you specified in your configuration, and
the second will delete the {\bf bacula.sql} file.
The basic sequence of events to make this work correctly is as follows:
\begin{bsysitemize}
\item Run all your nightly backups
\item After running your nightly backups, run a Catalog backup Job
\item The Catalog backup job must be scheduled after your last nightly backup
\item You use {\bf RunBeforeJob} to create the ASCII backup file and {\bf
   RunAfterJob} to clean up
\end{bsysitemize}
Assuming that you start all your nightly backup jobs at 1:05 am (and that they
run one after another), you can do the catalog backup with the following
additional Director configuration statements:
\footnotesize
\begin{lstlisting}
# Backup the catalog database (after the nightly save)
Job {
Name = "BackupCatalog"
  File = <working_directory>/bacula.sql
}
}
\end{lstlisting}
\normalsize
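A fuller sketch of such a catalog backup Job is shown below; the script names
correspond to the generated scripts mentioned above, but the paths and resource
names are assumptions you should verify against your own configuration:

\footnotesize
\begin{lstlisting}
Job {
  Name = "BackupCatalog"
  Level = Full
  Client = server-fd
  FileSet = "Catalog"
  # Scheduled after the last nightly backup has finished
  Schedule = "WeeklyCycleAfterBackup"
  Storage = File
  Pool = Default
  # Dump the catalog to bacula.sql before the job starts ...
  RunBeforeJob = "/etc/bacula/make_catalog_backup bacula bacula"
  # ... and remove the dump once it has been backed up
  RunAfterJob  = "/etc/bacula/delete_catalog_backup"
  Write Bootstrap = "/var/lib/bacula/BackupCatalog.bsr"
}
\end{lstlisting}
\normalsize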
Be sure to write a bootstrap file as in the above example. However, it is preferable
to write or copy the bootstrap file to another computer. It will allow
you to quickly recover the database backup should that be necessary. If
you do not have a bootstrap file, it is still possible to recover your
database backup, but it will be more work and take longer.
\label{BackingUpBaculaSecurityConsiderations}
\elink{
.pgpass}{http://www.postgresql.org/docs/8.2/static/libpq-pgpass.html}, and
we know MySQL has
\elink{.my.cnf}{http://dev.mysql.com/doc/refman/4.1/en/password-security.html}.
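For illustration, a {\bf .pgpass} entry for the catalog could look like the
following (one colon-separated record per line; the password is a placeholder):

\footnotesize
\begin{lstlisting}
# hostname:port:database:username:password
localhost:5432:bacula:bacula:ChangeMe
\end{lstlisting}
\normalsize

Note that PostgreSQL ignores the file unless it is readable only by its owner
(chmod 600).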
Only you can decide what is appropriate for your situation. We have provided
you with a starting point. We hope it helps.
If you are running a database in production mode on your machine, Bacula will
happily back up the files, but if the database is in use while Bacula is
reading it, you may back it up in an unstable state.
The best solution is to shutdown your database before backing it up, or use
some tool specific to your database to make a valid live copy perhaps by
provide you advice on how to do this, but if you are unsure about how to
back up your database, you might try visiting the Backup Central site, which
has been renamed Storage Mountain (www.backupcentral.com). In particular,
their
\elink{ Free Backup and Recovery
Software}{http://www.backupcentral.com/toc-free-backup-software.html} page has
links to scripts that show you how to shutdown and backup most major
databases.
\label{Size}
\section{Database Size}
period to that time. Then you can either wait and see how big your Catalog
gets or make a calculation assuming approximately 154 bytes for each File
saved and knowing the number of Files that are saved during each backup and
the number of Clients you back up.
For example, suppose you do a backup of two systems, each with 100,000 files.
Suppose further that you do a Full backup weekly and an Incremental every day,
and that the Incremental backup typically saves 10,000 files. The size of your
database after a month can roughly be calculated as:
\footnotesize
\begin{lstlisting}
Size = 154 * No. Systems * (100,000 * 4 + 10,000 * 26)
\end{lstlisting}
\normalsize
where we have assumed four weeks in a month and 26 incremental backups per month.
This would give the following:
\footnotesize
\begin{lstlisting}
Size = 154 * 2 * (100,000 * 4 + 10,000 * 26)
or
Size = 308 * (400,000 + 260,000)
or
Size = 203,280,000 bytes
\end{lstlisting}
\normalsize
So for the above two systems, we should expect to have a database size of
approximately 200 Megabytes. Of course, this will vary according to how many
files are actually backed up.
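The arithmetic above is easy to reproduce in a small shell snippet, so you can
substitute your own numbers:

\footnotesize
\begin{lstlisting}
#!/bin/sh
# bytes_per_file * systems * (full_files * fulls + incr_files * incrs)
SYSTEMS=2
SIZE=$((154 * SYSTEMS * (100000 * 4 + 10000 * 26)))
echo $SIZE   # prints 203280000, i.e. about 200 Megabytes
\end{lstlisting}
\normalsize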
Below are some statistics for a MySQL database containing Job records for five
Clients beginning September 2001 through May 2002 (8.5 months) and File
In the list below, the files (corresponding to Bacula Tables) with the
extension .MYD contain the data records whereas files with the extension .MYI
contain indexes.
You will note that the File records (containing the file attributes) make up
the large bulk of the number of records as well as the space used (459 Mega
Bytes including the indexes). As a consequence, the most important Retention
period will be the {\bf File Retention} period. A quick calculation shows that
for each File that is saved, the database grows by approximately 150 bytes.
\footnotesize
\begin{lstlisting}
Size in
Bytes Records File
============ ========= ===========
3,072 Pool.MYI
5 1 Version.MYD
1,024 Version.MYI
\end{lstlisting}
\normalsize
This database has a total size of approximately 450 Megabytes.
If we were using SQLite, the determination of the total database size would be
much easier since it is a single file, but we would have less insight to the
size of the individual tables as we have in this case.
Note, SQLite databases may be as much as 50\% larger than MySQL databases due
to the fact that all data is stored as ASCII strings. That is, even binary
integers are stored as ASCII strings, and this seems to increase the space
needed.