\addcontentsline{toc}{section}{Catalog Maintenance}
Without proper setup and maintenance, your Catalog may continue to grow
indefinitely as you run Jobs and backup Files, and/or it may become
very inefficient and slow. How fast the size of your
Catalog grows depends on the number of Jobs you run and how many files they
backup. By deleting records within the database, you can make space available
for the new records that will be added during the next Job. By constantly
deleting old expired records (dates older than the Retention period), your
database size will remain constant.

As a consequence, I suggest you monitor the size of your database and from
time to time (once every six months or year), compress it.
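
One way to monitor that growth is to query the catalog table sizes
periodically. The following is a sketch for a MySQL catalog named
{\bf bacula}; it assumes MySQL 5.x or later, since the
information\_schema database does not exist on MySQL 4.x:

\footnotesize
\begin{verbatim}
mysql bacula
SELECT table_name,
       ROUND((data_length + index_length)/1024/1024) AS size_mb
  FROM information_schema.tables
 WHERE table_schema = 'bacula';
\end{verbatim}
\normalsize

The File table is normally by far the largest, so it is the one to
watch most closely.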
\label{DatabaseRepair}
\label{RepairingMySQL}
\subsection*{Repairing Your MySQL Database}
\index[general]{Database!Repairing Your MySQL }
\index[general]{Repairing Your MySQL Database }
If you find that you are getting errors writing to or accessing your
MySQL database, you should consider
running dbcheck before (or possibly after) using the MySQL database repair
program. It can clean up many of the orphaned record problems, and certain
other inconsistencies in the Bacula database.
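
For reference, the MySQL-side check and repair can typically be done
with the mysqlcheck utility. A sketch, assuming the catalog database is
named {\bf bacula} and that its tables use a repairable engine such as
MyISAM:

\footnotesize
\begin{verbatim}
# check all tables in the bacula database, then repair any broken ones
mysqlcheck --check bacula
mysqlcheck --repair bacula
\end{verbatim}
\normalsize

The Director should be stopped while the repair runs so that no Jobs
write to the catalog at the same time.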
If you are running into the error {\bf The table 'File' is full ...},
it is probably because on MySQL version 4.x, the table is limited by
default to a maximum size of 4 GB and you have run into
the limit. The solution can be found at:
\elink{http://dev.mysql.com/doc/refman/5.0/en/full-table.html}
{http://dev.mysql.com/doc/refman/5.0/en/full-table.html}

You can display the maximum length of your table with:

\footnotesize
\begin{verbatim}
mysql bacula
SHOW TABLE STATUS FROM bacula LIKE "File";
\end{verbatim}
\normalsize

If the column labeled "Max_data_length" is around 4 GB, this is likely
to be the source of your problem, and you can modify it with:

\footnotesize
\begin{verbatim}
mysql bacula
ALTER TABLE File MAX_ROWS=281474976710656;
\end{verbatim}
\normalsize

Alternatively you can modify your /etc/my.cnf file and in the
[mysqld] section set:

\footnotesize
\begin{verbatim}
set-variable = myisam_data_pointer_size=6
\end{verbatim}
\normalsize

The above row and pointer size changes should already be the default on
MySQL version 5.x, so these changes should only be necessary on MySQL
4.x, depending on the size of your catalog database.

\label{RepairingPSQL}
\subsection*{Repairing Your PostgreSQL Database}
\index[general]{Database!Repairing Your PostgreSQL }
\index[general]{Repairing Your PostgreSQL Database }
If you experience problems with your PostgreSQL catalog, you should
consult the PostgreSQL documents for how to repair the database, and also
consider using Bacula's dbcheck program if the conditions for using it
are reasonable (see above).
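
As a first, non-destructive PostgreSQL-side check, you might run a
verbose vacuum and watch its output for warnings. A sketch, assuming
the catalog database is named {\bf bacula}:

\footnotesize
\begin{verbatim}
psql bacula
VACUUM VERBOSE ANALYZE;
\end{verbatim}
\normalsize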
\label{DatabasePerformance}
\subsection*{Database Performance Issues}
\index[general]{Database Performance Issues}
\index[general]{Performance!Database}
\addcontentsline{toc}{subsection}{Database Performance Issues}
corruption if your system crashes.
For PostgreSQL, you might want to consider turning fsync off. Of course
doing so can cause corrupted databases in the event of a machine crash.
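If you do decide to accept that risk, fsync is controlled by a single
setting in postgresql.conf:

\footnotesize
\begin{verbatim}
# postgresql.conf -- trades crash safety for speed
fsync = off
\end{verbatim}
\normalsize

Reload or restart the PostgreSQL server for the change to take effect.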
There are many different ways that you can tune PostgreSQL; the
PostgreSQL documentation discusses a number of them.
Though normally not a problem, you should ensure that the indexes
defined for Filename and Path are both set to 255 characters. Some users
have reported performance problems when their indexes were set to 50
characters. To check, do:

\footnotesize
\begin{verbatim}
mysql bacula
SHOW INDEX FROM Filename;
SHOW INDEX FROM Path;
\end{verbatim}
\normalsize

and what is important is that for Filename, you have an index with
Key_name "Name" and Sub_part "255". For Path, you should have a Key_name
"Path" and Sub_part "255". If one or the other does not exist or the
Sub_part is less than 255, you can drop and recreate the appropriate
index with:

\footnotesize
\begin{verbatim}
mysql bacula
DROP INDEX Path ON Path;
CREATE INDEX Path ON Path (Path(255));

DROP INDEX Name ON Filename;
CREATE INDEX Name ON Filename (Name(255));
\end{verbatim}
\normalsize


\subsubsection*{SQLite Indexes}
On SQLite, you can ensure that you have the proper indexes, creating
them if necessary, with:
\footnotesize
\begin{verbatim}
sqlite3 bacula.db
CREATE INDEX file_jobid_idx on File (JobId);
CREATE INDEX file_jfp_idx on File (Job, FilenameId, PathId);
\end{verbatim}
\normalsize
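To merely list the indexes already defined on the File table without
creating anything, you can query SQLite's built-in sqlite\_master schema
table:

\footnotesize
\begin{verbatim}
sqlite3 bacula.db
SELECT name FROM sqlite_master
 WHERE type = 'index' AND tbl_name = 'File';
\end{verbatim}
\normalsize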
\label{CompactingPostgres}
\subsection*{Compacting Your PostgreSQL Database}
\index[general]{Database!Compacting Your PostgreSQL }
\index[general]{Compacting Your PostgreSQL Database }
\footnotesize
\begin{verbatim}
pg_dump -c bacula > bacula.sql
cat bacula.sql | psql bacula
rm -f bacula.sql
\end{verbatim}
To check the result, you can cd to the directory that contains
the Bacula database (typically /usr/local/pgsql/data or possibly
/var/lib/pgsql/data) and check the size.
There are certain PostgreSQL users who do not recommend the above
procedure. They have the following to say: PostgreSQL does not
need to be dumped/restored to keep the database efficient. A normal
process of vacuuming will prevent the database from ever getting too
large. If you want to fine-tune the database storage, commands such
as VACUUM FULL, REINDEX, and CLUSTER exist specifically to keep you
from having to do a dump/restore.

Finally, you might want to look at the PostgreSQL documentation on
this subject at
\elink{http://www.postgresql.org/docs/8.1/interactive/maintenance.html}
{http://www.postgresql.org/docs/8.1/interactive/maintenance.html}.

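A sketch of that in-place alternative, run from psql against a catalog
database named {\bf bacula} (note that VACUUM FULL takes exclusive
locks, so run it while no Jobs are active):

\footnotesize
\begin{verbatim}
psql bacula
VACUUM FULL VERBOSE;
REINDEX DATABASE bacula;
\end{verbatim}
\normalsize
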
\subsection*{Compacting Your SQLite Database}
\index[general]{Compacting Your SQLite Database }
\index[general]{Database!Compacting Your SQLite }