From: Kern Sibbald
Date: Fri, 14 Sep 2007 17:11:00 +0000 (+0000)
Subject: update to trunk
X-Git-Tag: Release-2.2.4~8
X-Git-Url: https://git.sur5r.net/?a=commitdiff_plain;h=0ba9adb54a29e764e6a8924e7742d4c894a510e4;p=bacula%2Fdocs

update to trunk
---

diff --git a/docs/manual-de/autochangers.tex b/docs/manual-de/autochangers.tex
index efc0f94e..b223cb93 100644
--- a/docs/manual-de/autochangers.tex
+++ b/docs/manual-de/autochangers.tex
@@ -13,7 +13,6 @@ die Details werden im folgenden gekl\"{a}rt.
 \begin{itemize}
 \item Ein Script das den Autochanger, gem\"{a}{\ss} den von Bacula gesendeten Kommandos,
 steuert. Bacula stellt solch ein Script in der {\bf depkgs} Distribution zur Verf\"{u}gung.
-      ==obsolete== This script works only with single drive autochangers.
 \item Jedes Volume (Tape) das benutzt wird, muss sowohl im Katalog definiert sein,
 als auch eine Slotnummer zugeteilt sein, nur so kann Bacula wissen, welches Volume
@@ -52,9 +51,11 @@ dieses beinhaltet zwei Consolen-Kommandos: {\bf label barcodes} und {\bf update
 Im Abschnitt "Barcode Unterst\"{u}tzung" (siehe unten) erfolgt eine detaillierte
 Beschreibung dieser Kommandos.
 Momentan beinhaltet die Autochanger-Unterst\"{u}tzung keine Stacker und Silos,
-und auch keine Laufwerks-Reinigung (Cleaning).
+und auch keine Laufwerks-Reinigung (Cleaning). Stacker und Silos werden nicht unterst\"{u}tzt,
+da sie keinen wahlfreien Zugriff auf ihre Slots erlauben.
 Unter Umst\"{a}nden schaffen Sie es vielleicht, einen Stacker (GravityFeed o. \"{a}.)
-mit Bacula zum laufen zu bringen.
+mit Bacula zum Laufen zu bringen, indem Sie Ihre Konfiguration soweit anpassen, dass auf
+den Autochanger nur sequentiell zugegriffen wird.
 Die Unterst\"{u}tzung f\"{u}r Autochanger mit mehreren Laufwerken erfordert eine Konfiguration
 wie in \ilink{Autochanger resource}{AutochangerRes} beschrieben. Diese Konfiguration ist
 aber auch f\"{u}r Autochanger mit nur einem Laufwerk zu benutzen.
@@ -67,6 +68,14 @@ Eine Liste mit von {\bf mtx} unterst\"{u}zten Autochangern, finden Sie unter fol
 Die Homepage des {\bf mtx} Projekts ist:
 \elink{http://mtx.opensource-sw.net/}{http://mtx.opensource-sw.net/}.
+Anmerkung: Wir haben R\"{u}ckmeldungen von einigen Benutzern erhalten,
+die \"{u}ber gewisse Inkompatibilit\"{a}ten zwischen dem Linux-Kernel und mtx berichten.
+Zum Beispiel zwischen Kernel 2.6.18-8.1.8.el5 von CentOS und RedHat und Version 1.3.10
+und 1.3.11 von mtx. Ein Umstieg auf Kernel-Version 2.6.22 hat diese Probleme behoben.
+
+Zus\"{a}tzlich scheinen einige Versionen von mtx, z.B. 1.3.11, die maximale Anzahl der Slots auf 64
+zu begrenzen; Abhilfe schafft die Benutzung von mtx-Version 1.3.10.
+
 Wenn Sie Probleme haben, benutzen Sie bitte das {\bf auto} Kommando im {\bf btape} Programm,
 um die Funktionalit\"{a}t des Autochangers mit Bacula zu testen.
 Bitte bedenken Sie, dass bei vielen Distributionen (z.B. FreeBSD, Debian, ...) der Storage-Dienst
@@ -74,6 +83,14 @@ nicht als Benutzer und Gruppe {\bf root} l\"{a}ft, sonder als Benutzer {\bf bacu
 In diesem Fall m\"{u}ssen Sie sicherstellen, das der Benutzer oder die Gruppe
 entsprechende Rechte hat, um auf den Autochanger und die Laufwerke zuzugreifen.
+Manche Benutzer berichten, dass der Storage-Dienst unter Umst\"{a}nden
+beim Laden eines Tapes in das Laufwerk blockiert, falls schon ein Tape im Laufwerk ist.
+Soweit wir das ermitteln konnten, ist es einfach eine Frage der Wartezeit:
+Das Laufwerk hat vorher ein Tape beschrieben und wird f\"{u}r eine ganze Weile
+(bis zu 7 Minuten bei langsamen Laufwerken) im Status BLOCKED verbleiben,
+w\"{a}hrend das Tape zur\"{u}ckgespult und entladen wird; erst danach kann ein anderes
+Tape in das Laufwerk geladen werden.
+
 \label{SCSI devices}
 \section{Zuordnung der SCSI Ger\"{a}te}
 \index[general]{Zuordnung der SCSI Ger\"{a}te}
@@ -159,11 +176,16 @@ die nicht in einem Laufwerk geladen sind. Bacula nummeriert diese Slots von eins
 vorhandenen Tapes im Autochanger.
 Bacula benutzt niemals ein Volume im Autochanger, dass nicht gelabelt ist, dem keine Slotnummer im Katalog
-zugewiesen ist oder wenn das Volume nicht als InChanger im Katalog markiert ist.
+zugewiesen ist oder wenn das Volume nicht als InChanger im Katalog markiert ist. Bacula muss wissen, wo das
+Volume/Tape ist, sonst kann es nicht geladen werden.
 Jedem Volume im Autochanger muss \"{u}ber das Console-Programm eine Slot-Nummer zugewiesen werden.
 Diese Information wird im Katalog, zusammen mit anderen Informationen \"{u}ber das Volume, gespeichert.
 Wenn kein Slot angegeben, oder der Slot auf Null gesetzt ist, wird Bacula das Volume nicht benutzen,
 auch wenn alle anderen ben\"{o}tigten Konfigurationsparameter richtig gesetzt sind.
+Wenn Sie das {\bf mount} Console-Kommando ausf\"{u}hren, m\"{u}ssen Sie angeben, welches Tape aus welchem Slot
+in das Laufwerk geladen werden soll. Falls schon ein Tape im Laufwerk ist, wird es entladen und danach das
+beim {\bf mount} angegebene Tape geladen. Normalerweise wird kein anderes Tape im Laufwerk sein, da Bacula beim
+{\bf unmount} Console-Kommando das Laufwerk leert.
 Sie k\"{o}nnen die Slot-Nummer und die InChanger-Markierung \"{u}berpr\"{u}fen, indem Sie:
 \begin{verbatim}
@@ -890,6 +912,6 @@ Ausserdem muss jedes dieser Kommandos genau diese R\"{u}ckgabewerte liefern:
 Bacula \"{u}berpr\"{u}ft den R\"{u}ckgabewert des aufgerufenen Programms,
 wenn er Null ist, werden die gelieferten Daten akzeptiert.
-Wenn der R\"{u}ckgabewert nicht Null ist, werden alle Daten verworfen und
-Bacula behandelt das Laufwerk so, als wenn es kein Autochanger ist.
+Wenn der R\"{u}ckgabewert nicht Null ist, wird eine entsprechende Fehlermeldung ausgegeben und
+Bacula wird ein manuelles Laden des Tapes in das Laufwerk erwarten.

diff --git a/docs/manual-de/bimagemgr-chapter.tex b/docs/manual-de/bimagemgr-chapter.tex
new file mode 100644
index 00000000..01157f84
--- /dev/null
+++ b/docs/manual-de/bimagemgr-chapter.tex
@@ -0,0 +1,155 @@
+%%
+%%
+%% The following characters must be preceded by a backslash
+%% to be entered as printable characters:
+%%
+%% # $ % & ~ _ ^ \ { }
+%%
+
+\section{bimagemgr}
+\label{bimagemgr}
+\index[general]{Bimagemgr }
+
+{\bf bimagemgr} is a utility for those who back up to disk volumes in order to
+commit them to CDR disk, rather than tapes. It is a web-based interface
+written in Perl and is used to monitor when a volume file needs to be burned to
+disk. It requires:
+
+\begin{itemize}
+\item A web server running on the bacula server
+\item A CD recorder installed and configured on the bacula server
+\item The cdrtools package installed on the bacula server
+\item perl, the perl-DBI module, and either the DBD-MySQL, DBD-SQLite, or DBD-PostgreSQL module
+\end{itemize}
+
+DVD burning is not supported by {\bf bimagemgr} at this
+time, but it is planned for a future release.
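+
+As a quick sanity check before installing, you can verify that the required
+Perl modules are actually present. The commands below are only a suggested
+check, not part of bimagemgr itself; substitute the DBD driver that matches
+your catalog database:
+
+\footnotesize
+\begin{verbatim}
+perl -MDBI -e 'print "DBI $DBI::VERSION\n"'
+perl -MDBD::mysql -e 'print "DBD::mysql is installed\n"'
+\end{verbatim}
+\normalsize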
+
+\subsection{bimagemgr installation}
+\index[general]{bimagemgr!Installation }
+\index[general]{bimagemgr Installation }
+
+Installation from tarball:
+\begin{enumerate}
+\item Examine the Makefile and adjust it to your configuration if needed.
+\item Edit config.pm to fit your configuration.
+\item Do 'make install' as root.
+\item Edit httpd.conf and change the Timeout value. The web server must not time
+out and close the connection before the burn process is finished. The exact
+value needed may vary depending upon your CD recorder speed and whether you are
+burning on the bacula server or on another machine across your network. In my
+case I set it to 1000 seconds. Restart httpd.
+\item Make sure that cdrecord is setuid root.
+% TODO: I am pretty sure cdrecord can be used without setuid root
+% TODO: as long as devices are setup correctly
+\end{enumerate}
+
+Installation from rpm package:
+\begin{enumerate}
+\item Install the rpm package for your platform.
+\item Edit /cgi-bin/config.pm to fit your configuration.
+\item Edit httpd.conf and change the Timeout value. The web server must not time
+out and close the connection before the burn process is finished. The exact
+value needed may vary depending upon your CD recorder speed and whether you are
+burning on the bacula server or on another machine across your network. In my
+case I set it to 1000 seconds. Restart httpd.
+\item Make sure that cdrecord is setuid root.
+\end{enumerate}
+
+For Bacula versions older than 1.36:
+\begin{enumerate}
+\item Edit the configuration section of config.pm to fit your configuration.
+\item Run /etc/bacula/create\_cdimage\_table.pl from a console on your bacula
+server (as root) to add the CDImage table to your bacula database.
+\end{enumerate}
+
+Accessing the Volume files:
+The Volume files by default have permissions 640 and can only be read by root.
+The recommended approach to this is as follows (and only works if bimagemgr and
+apache are running on the same host as bacula).
+
+For bacula-1.34 or 1.36 installed from tarball -
+\begin{enumerate}
+\item Create a new user group bacula and add the user apache to the group for
+Red Hat or Mandrake systems. For SuSE systems add the user wwwrun to the
+bacula group.
+\item Change ownership of all of your Volume files to root.bacula
+\item Edit the /etc/bacula/bacula startup script and set SD\_USER=root and
+SD\_GROUP=bacula. Restart bacula.
+\end{enumerate}
+
+Note: step 3 should also be done in /etc/init.d/bacula-sd, but released versions
+of this file prior to 1.36 do not support it. In that case it would be necessary after
+a reboot of the server to execute '/etc/bacula/bacula restart'.
+
+For bacula-1.38 installed from tarball -
+\begin{enumerate}
+\item Your configure statement should include:
+\begin{verbatim}
+  --with-dir-user=bacula
+  --with-dir-group=bacula
+  --with-sd-user=bacula
+  --with-sd-group=disk
+  --with-fd-user=root
+  --with-fd-group=bacula
+\end{verbatim}
+\item Add the user apache to the bacula group for Red Hat or Mandrake systems.
+For SuSE systems add the user wwwrun to the bacula group.
+\item Check/change ownership of all of your Volume files to root.bacula
+\end{enumerate}
+
+For bacula-1.36 or bacula-1.38 installed from rpm -
+\begin{enumerate}
+\item Add the user apache to the group bacula for Red Hat or Mandrake systems.
+For SuSE systems add the user wwwrun to the bacula group.
+\item Check/change ownership of all of your Volume files to root.bacula
+\end{enumerate}
+
+bimagemgr installed from rpm > 1.38.9 will add the web server user to the
+bacula group in a post-install script. Be sure to edit the configuration
+information in config.pm after installation of the rpm package.
+
+bimagemgr will now be able to read the Volume files, but they are still not
+world-readable.
+
+If you are running bimagemgr on another host (not recommended) then you will
+need to change the permissions on all of your backup volume files to 644 in
+order to access them via an NFS share or other means. This approach should only
+be taken if you are sure of the security of your environment, as it exposes
+the backup Volume files to world read.
+
+\subsection{bimagemgr usage}
+\index[general]{bimagemgr!Usage }
+\index[general]{bimagemgr Usage }
+
+Calling the program in your web browser, e.g. {\tt
+http://localhost/cgi-bin/bimagemgr.pl}, will produce a display as shown below
+% TODO: use tex to say figure number
+in Figure 1. The program will query the bacula database and display all volume
+files with the date last written and the date last burned to disk. If a volume
+needs to be burned (last written is newer than last burn date) a "Burn"
+button will be displayed in the rightmost column.
+
+\addcontentsline{lof}{figure}{Bacula CD Image Manager}
+\includegraphics{./bimagemgr1.eps} \\Figure 1
+% TODO: use tex to say figure number
+
+Place a blank CDR disk in your recorder and click the "Burn" button. This will
+cause a pop-up window as shown in Figure 2 to display the burn progress.
+% TODO: use tex to say figure number
+
+\addcontentsline{lof}{figure}{Bacula CD Image Burn Progress Window}
+\includegraphics{./bimagemgr2.eps} \\Figure 2
+% TODO: use tex to say figure number
+
+When the burn finishes, the pop-up window will display the results of cdrecord
+% TODO: use tex to say figure number
+as shown in Figure 3. Close the pop-up window and refresh the main window. The
+last burn date will be updated and the "Burn" button for that volume will
+disappear. Should you have a failed burn, you can reset the last burn date of
+that volume by clicking its "Reset" link.
+
+\addcontentsline{lof}{figure}{Bacula CD Image Burn Results}
+\includegraphics{./bimagemgr3.eps} \\Figure 3
+% TODO: use tex to say figure number
+
+In the bottom row of the main display window are two more buttons labeled
+"Burn Catalog" and "Blank CDRW". "Burn Catalog" will place a copy of
+your bacula catalog on a disk. If you use CDRW disks rather than CDR then
+"Blank CDRW" allows you to erase the disk before re-burning it. Regularly
+committing your backup volume files and your catalog to disk with {\bf
+bimagemgr} ensures that you can rebuild easily in the event of some disaster
+on the bacula server itself.
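+
+If you ever need to burn a volume file without the web interface (for
+example, to test the recorder), the console equivalent looks roughly like
+the sketch below. The volume path, device address, and speed are assumptions
+for illustration; query your recorder with {\tt cdrecord -scanbus} first:
+
+\footnotesize
+\begin{verbatim}
+# wrap the volume file in an ISO9660 image
+mkisofs -o /tmp/Vol0001.iso /var/bacula/Vol0001
+# burn the image; adjust dev= and speed= to your recorder
+cdrecord -v dev=0,0,0 speed=8 /tmp/Vol0001.iso
+\end{verbatim}
+\normalsize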
diff --git a/docs/manual-de/bimagemgr.tex b/docs/manual-de/bimagemgr.tex
new file mode 100644
index 00000000..48ca14ed
--- /dev/null
+++ b/docs/manual-de/bimagemgr.tex
@@ -0,0 +1,60 @@
+%%
+%%
+%% The following characters must be preceded by a backslash
+%% to be entered as printable characters:
+%%
+%% # $ % & ~ _ ^ \ { }
+%%
+
+\documentclass[11pt,a4paper]{book}
+\usepackage{html}
+\usepackage{float}
+\usepackage{graphicx}
+\usepackage{bacula}
+\usepackage{longtable}
+\usepackage{makeidx}
+\usepackage{index}
+\usepackage{setspace}
+\usepackage{hyperref}
+
+\makeindex
+\newindex{general}{bix}{bid}{General Index}
+
+\sloppy
+
+\begin{document}
+\sloppy
+
+\newfont{\bighead}{cmr17 at 36pt}
+\parskip 10pt
+\parindent 0pt
+
+
+\title{\includegraphics{./bacula-logo.eps} \\ \bigskip
+  \begin{center}
+   \large{It comes in the night and sucks
+   the essence from your computers. }
+  \end{center}
+}
+\author{Kern Sibbald}
+\date{\vspace{1.0in}\today \\
+      This manual documents Bacula version \input{version} \\
+      ~\vspace{0.2in}\\
+      Copyright \copyright 1999-2007, Free Software Foundation Europe e.V. \\
+      ~\vspace{0.2in}\\
+      Permission is granted to copy, distribute and/or modify this document under the terms of the \\
+      GNU Free Documentation License, Version 1.2 published by the Free Software Foundation; \\
+      with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. \\
+      A copy of the license is included in the section entitled "GNU Free Documentation License".
+}
+
+\maketitle
+
+\clearpage
+
+\markboth{Bacula Manual}{}
+\include{bimagemgr-chapter}
+\include{fdl}
+
+\end{document}

diff --git a/docs/manual-de/bootstrap.tex b/docs/manual-de/bootstrap.tex
index e343b105..8c96bb56 100644
--- a/docs/manual-de/bootstrap.tex
+++ b/docs/manual-de/bootstrap.tex
@@ -1,51 +1,44 @@
 %%
 %%
-\section*{The Bootstrap File}
-\label{_ChapterStart43}
-\index[general]{File!Bootstrap }
-\index[general]{Bootstrap File }
-\addcontentsline{toc}{section}{Bootstrap File}
+\chapter{Die Bootstrap-Datei}
+\label{BootstrapChapter}
+\index[general]{Datei!Bootstrap }
+\index[general]{Bootstrap-Datei }

-The information in this chapter is provided so that you may either create your
-own bootstrap files, or so that you can edit a bootstrap file produced by {\bf
-Bacula}. However, normally the bootstrap file will be automatically created
-for you during the
-\ilink{restore}{_ChapterStart13} command in the Console program, or
-by using a
-\ilink{ Write Bootstrap}{writebootstrap} record in your Backup
-Jobs, and thus you will never need to know the details of this file.
+Die Informationen in diesem Kapitel sollen Ihnen helfen, entweder eigene Bootstrap-Dateien
+zu erstellen oder die von Bacula erzeugten zu editieren. Da die Bootstrap-Datei automatisch beim Ausf\"{u}hren des
+\ilink{restore}{_ConsoleChapter} Console-Kommandos, oder wenn Sie \ilink{ Write Bootstrap}{writebootstrap}
+in den Job-Eintr\"{a}gen der Director-Dienst-Konfiguration angeben, erzeugt wird,
+brauchen Sie das genaue Format eigentlich nicht zu kennen.

-The {\bf bootstrap} file contains ASCII information that permits precise
-specification of what files should be restored. It is a relatively compact
-form of specifying the information, is human readable, and can be edited with
-any text editor.
+Die Bootstrap-Datei enth\"{a}lt Informationen im ASCII-Format,
+die pr\"{a}zise angeben, welche Dateien wiederhergestellt werden sollen, auf welchem Volume sie liegen
+und wo auf dem Volume.
+Es ist ein relativ kompaktes Format, diese Informationen anzugeben, aber es ist
+f\"{u}r Menschen lesbar und kann mit einem Texteditor ge\"{a}ndert werden.

-\subsection*{File Format}
-\index[general]{Format!File }
-\index[general]{File Format }
-\addcontentsline{toc}{subsection}{File Format}
+\section{Bootstrap-Datei Format}
+\index[general]{Format!Bootstrap}
+\index[general]{Bootstrap-Datei Format }

-The general format of a {\bf bootstrap} file is:
+Das generelle Format der Bootstrap-Datei ist:

-{\bf \lt{}keyword\gt{}= \lt{}value\gt{}}
+{\bf \lt{}Schl\"{u}sselwort\gt{} = \lt{}Wert\gt{}}

-Where each {\bf keyword} and the {\bf value} specify which files to restore.
-More precisely the {\bf keyword} and their {\bf values} serve to limit which
-files will be restored and thus act as a filter. The absence of a keyword
-means that all records will be accepted.
+Wobei jedes Schl\"{u}sselwort und sein Wert angeben, welche Dateien wiederhergestellt werden.
+Genauer gesagt: Das Schl\"{u}sselwort und sein Wert dienen dazu, zu limitieren, welche
+Dateien wiederhergestellt werden; sie verhalten sich wie ein Filter.
+Das Fehlen eines Schl\"{u}sselworts bedeutet, dass alle Dateien angenommen werden.

-Blank lines and lines beginning with a pound sign (\#) in the bootstrap file
-are ignored.
+In der Bootstrap-Datei werden Leerzeilen und Zeilen, die mit {\#} beginnen, ignoriert.

-There are keywords which permit filtering by Volume, Client, Job, FileIndex,
-Session Id, Session Time, ...
+Es existieren Schl\"{u}sselw\"{o}rter, die die Filterung nach Volume, Client, Job, Fileindex, Session ID,
+Session Time usw. erlauben.

-The more keywords that are specified, the more selective the specification of
-which files to restore will be. In fact, each keyword is {\bf AND}ed with
-other keywords that may be present.
+Je mehr Schl\"{u}sselw\"{o}rter Sie angeben, desto genauer ist die Auswahl der Dateien, die wiederhergestellt werden.
+Alle Schl\"{u}sselw\"{o}rter werden \"{u}ber {\bf UND} verkn\"{u}pft.

-For example,
+Ein Beispiel:

\footnotesize
\begin{verbatim}
@@ -55,120 +48,125 @@ VolSessionTime = 108927638
\end{verbatim}
\normalsize

-directs the Storage daemon (or the {\bf bextract} program) to restore only
-those files on Volume Test-001 {\bf AND} having VolumeSessionId equal to one
-{\bf AND} having VolumeSession time equal to 108927638.
+veranlasst den Storage-Dienst (oder das {\bf bextract} Programm), nur die Dateien wiederherzustellen, die
+auf dem Volume Test-001 vorhanden sind {\bf UND} eine VolumeSessionID von 1 haben {\bf UND} deren VolumeSessionTime
+gleich 108927638 ist.

-The full set of permitted keywords presented in the order in which they are
-matched against the Volume records are:
+Hier ist eine Liste aller erlaubten Schl\"{u}sselw\"{o}rter in der Reihenfolge, in der sie auf
+die auf dem Volume befindlichen Daten angewendet werden:

\begin{description}

\item [Volume]
-  \index[fd]{Volume }
-  The value field specifies what Volume the following commands apply to. Each
-Volume specification becomes the current Volume, to which all the following
-commands apply until a new current Volume (if any) is specified. If the
-Volume name contains spaces, it should be enclosed in quotes.
+  \index[general]{Volume }
+  Dieser Wert gibt an, auf welches Volume die folgenden Schl\"{u}sselw\"{o}rter angewendet werden sollen.
+  Falls in der Bootstrap-Datei ein zweites Volume angegeben wird, beziehen sich die darauf folgenden
+  Schl\"{u}sselw\"{o}rter auf dieses Volume.
+  Wenn der Name des Volumes Leerzeichen enth\"{a}lt, muss er in Anf\"{u}hrungszeichen gesetzt werden.
+  Mindestens ein Volume muss angegeben werden.

\item [Count]
-  \index[fd]{Count }
-  The value is the total number of files that will be restored for this Volume.
-This allows the Storage daemon to know when to stop reading the Volume.
+  \index[general]{Count}
+  Dieser Wert ist die Gesamtanzahl der Dateien, die von dem Volume gelesen werden sollen.
+  Daran erkennt der Storage-Dienst, wann er das Lesen beenden soll.
+  Dieser Wert ist optional.

\item [VolFile]
-  \index[fd]{VolFile }
-  The value is a file number, a list of file numbers, or a range of file
-numbers to match on the current Volume. The file number represents
-the physical file on the Volume where the data is stored. For a tape volume,
-this record is used to position to the correct starting file, and once the
-tape is past the last specified file, reading will stop.
+  \index[general]{VolFile}
+  Dieser Wert gibt eine Dateinummer oder eine Liste bzw. einen Bereich von Dateinummern an,
+  die auf dem aktuellen Volume gefunden werden soll. Die Dateinummer stellt die physikalische
+  Datei auf dem Volume dar, wo die Daten gespeichert sind. Bei einem Tape wird dieser Wert benutzt,
+  um das Band richtig zu positionieren, und wenn das Laufwerk die letzte angegebene Datei gelesen hat,
+  wird der Lesevorgang gestoppt.

\item [VolBlock]
-  \index[fd]{VolBlock }
-  The value is a block number, a list of block numbers, or a range of block
-numbers to match on the current Volume. The block number represents
-the physical block on the Volume where the data is stored. This record is
-currently not used.
+  \index[general]{VolBlock}
+  Dieser Wert gibt eine Blocknummer oder eine Liste bzw. einen Bereich von Blocknummern an,
+  die auf dem aktuellen Volume gefunden werden soll. Die Blocknummer stellt die physikalischen
+  Bl\"{o}cke auf dem Volume dar, wo die Daten gespeichert sind.

\item [VolSessionTime]
-  \index[fd]{VolSessionTime }
-  The value specifies a Volume Session Time to be matched from the current
-volume.
+  \index[general]{VolSessionTime }
+  Dieser Wert gibt die Volume-Session-Zeit an, die auf dem aktuellen Volume gefunden werden soll.

\item [VolSessionId]
-  \index[fd]{VolSessionId }
-  The value specifies a VolSessionId, a list of volume session ids, or a range
-of volume session ids to be matched from the current Volume. Each
-VolSessionId and VolSessionTime pair corresponds to a unique Job that is
-backed up on the Volume.
+  \index[general]{VolSessionId }
+  Dieser Wert gibt eine Volume-Session-ID oder eine Liste bzw. einen Bereich von Volume-Session-IDs an,
+  die auf dem aktuellen Volume gefunden werden soll. Jedes Paar aus Volume-Session-ID und Volume-Session-Zeit
+  entspricht genau einem Job, der auf dem Volume gespeichert ist.

\item [JobId]
-  \index[fd]{JobId }
-  The value specifies a JobId, list of JobIds, or range of JobIds to be
-selected from the current Volume. Note, the JobId may not be unique if you
-have multiple Directors, or if you have reinitialized your database. The
-JobId filter works only if you do not run multiple simultaneous jobs.
+  \index[general]{JobId }
+  Dieser Wert gibt eine Job-ID oder eine Liste bzw. einen Bereich von Job-IDs an,
+  die auf dem aktuellen Volume gefunden werden soll. Beachten Sie bitte, dass die Job-ID
+  eventuell nicht eindeutig ist, falls Sie mehrere Director-Dienste haben, oder falls Sie
+  Ihre Datenbank neu initialisiert haben sollten.
+  Der Job-ID-Filter funktioniert nicht, wenn
+  Sie mehrere Jobs gleichzeitig laufen lassen.
+  Dieser Wert ist optional und wird von Bacula nicht zum Zur\"{u}cksichern ben\"{o}tigt.

\item [Job]
-  \index[fd]{Job }
-  The value specifies a Job name or list of Job names to be matched on the
-current Volume. The Job corresponds to a unique VolSessionId and
-VolSessionTime pair. However, the Job is perhaps a bit more readable by
-humans. Standard regular expressions (wildcards) may be used to match Job
-names. The Job filter works only if you do not run multiple simultaneous
-jobs.
+  \index[general]{Job }
+  Dieser Wert gibt einen Job-Namen oder eine Liste von Job-Namen an, die auf dem aktuellen
+  Volume gefunden werden sollen. Der Job-Name entspricht einem einzigartigen Paar aus Volume-Session-Zeit
+  und VolumeSessionID, allerdings ist er f\"{u}r Menschen ein bisschen leichter zu lesen.
+  Gew\"{o}hnliche regul\"{a}re Ausdr\"{u}cke k\"{o}nnen benutzt werden, um einen entsprechenden Job-Namen zu finden.
+  Der Job-Namen-Filter funktioniert nicht, wenn Sie mehrere Jobs gleichzeitig laufen lassen.
+  Dieser Wert ist optional und wird von Bacula nicht zum Zur\"{u}cksichern ben\"{o}tigt.

\item [Client]
-  \index[fd]{Client }
-  The value specifies a Client name or list of Clients to will be matched on
-the current Volume. Standard regular expressions (wildcards) may be used to
-match Client names. The Client filter works only if you do not run multiple
-simultaneous jobs.
+  \index[general]{Client }
+  Dieser Wert gibt einen Client-Namen oder eine Liste von Client-Namen an, die auf dem aktuellen
+  Volume gefunden werden sollen. Gew\"{o}hnliche regul\"{a}re Ausdr\"{u}cke k\"{o}nnen benutzt werden,
+  um einen entsprechenden Client-Namen zu finden. Der Client-Filter funktioniert nicht,
+  wenn Sie mehrere Jobs gleichzeitig laufen lassen.
+  Dieser Wert ist optional und wird von Bacula nicht zum Zur\"{u}cksichern ben\"{o}tigt.

\item [FileIndex]
-  \index[fd]{FileIndex }
-  The value specifies a FileIndex, list of FileIndexes, or range of FileIndexes
-to be selected from the current Volume. Each file (data) stored on a Volume
-within a Session has a unique FileIndex. For each Session, the first file
-written is assigned FileIndex equal to one and incremented for each file
-backed up.
-
-This for a given Volume, the triple VolSessionId, VolSessionTime, and
-FileIndex uniquely identifies a file stored on the Volume. Multiple copies of
-the same file may be stored on the same Volume, but for each file, the triple
-VolSessionId, VolSessionTime, and FileIndex will be unique. This triple is
-stored in the Catalog database for each file.
+  \index[general]{FileIndex }
+  Dieser Wert gibt einen File-Index oder eine Liste bzw. einen Bereich von File-Indexen an,
+  die auf dem aktuellen Volume gefunden werden soll. Jedes File (Datei), das auf einem Volume gespeichert ist,
+  hat f\"{u}r seine Session einen einzigartigen File-Index. Bei jeder Session wird f\"{u}r das erste
+  gespeicherte File der File-Index auf eins gesetzt und dann mit jedem weiteren File um eins erh\"{o}ht.
+
+  F\"{u}r ein beliebiges Volume bedeutet das, dass die drei Werte von Volume-Session-ID, Volume-Session-Time
+  und File-Index zusammen eine einzelne einzigartige Datei auf einem Volume angeben. Diese Datei ist eventuell
+  mehrfach auf dem Volume vorhanden, aber f\"{u}r jedes Vorkommen gibt es eine einzigartige Kombination
+  dieser drei Werte. Diese drei Werte sind f\"{u}r jede Datei in der Katalog-Datenbank gespeichert.
+
+  Um eine Datei wiederherzustellen, ist die Angabe eines Wertes (oder einer Liste von File-Indexen)
+  erforderlich.

\item [Slot]
-  \index[fd]{Slot }
-  The value specifies the autochanger slot. There may be only a single {\bf
-Slot} specification for each Volume.
+  \index[general]{Slot }
+  Dieser Wert gibt den Autochanger-Slot an. F\"{u}r jedes Volume darf nur ein Slot angegeben werden.

\item [Stream]
-  \index[fd]{Stream }
-  The value specifies a Stream, a list of Streams, or a range of Streams to be
-selected from the current Volume. Unless you really know what you are doing
-(the internals of {\bf Bacula}, you should avoid this specification.
+  \index[general]{Stream }
+  Dieser Wert gibt einen Stream (Strom) oder eine Liste bzw. einen Bereich von Streams an.
+  Solange Sie nicht wirklich wissen, was Sie tun (also das interne Arbeiten von Bacula kennen),
+  sollten Sie auf diese Angabe verzichten.
+  Dieser Wert ist optional und wird von Bacula nicht zum Zur\"{u}cksichern ben\"{o}tigt.

\item [*JobType]
-  \index[fd]{*JobType }
-  Not yet implemented.
+  \index[general]{*JobType }
+  Noch nicht implementiert.

\item [*JobLevel]
-  \index[fd]{*JobLevel }
-  Not yet implemented.
+  \index[general]{*JobLevel }
+  Noch nicht implementiert.
\end{description}

-The {\bf Volume} record is a bit special in that it must be the first record.
-The other keyword records may appear in any order and any number following a
-Volume record.
+Bei der Angabe des Volumes ist zu bedenken, dass dies der erste Parameter sein muss.
+Alle anderen Parameter k\"{o}nnen in beliebiger Reihenfolge und Anzahl hinter einem
+Volume-Eintrag angegeben werden.

-Multiple Volume records may be specified in the same bootstrap file, but each
-one starts a new set of filter criteria for the Volume.
+Mehrere Volume-Eintr\"{a}ge k\"{o}nnen in derselben Bootstrap-Datei vorkommen,
+aber mit jedem Vorkommen beginnt ein neuer Satz an Filtern, g\"{u}ltig f\"{u}r
+das angegebene Volume.

-In processing the bootstrap file within the current Volume, each filter
-specified by a keyword is {\bf AND}ed with the next. Thus,
+Beim Verarbeiten der Bootstrap-Datei werden alle Schl\"{u}sselw\"{o}rter
+unterhalb eines Volume-Eintrags mit {\bf UND} verkn\"{u}pft.
+Also wird:

\footnotesize
\begin{verbatim}
@@ -178,10 +176,11 @@ FileIndex = 1
\end{verbatim}
\normalsize

-will match records on Volume {\bf Test-01} {\bf AND} Client records for {\bf
-My machine} {\bf AND} FileIndex equal to {\bf one}.
+auf alle Dateien auf dem Volume Test-01 {\bf UND} von Client My machine
+{\bf UND} mit dem Fileindex 1 passen.

-Multiple occurrences of the same record are {\bf OR}ed together. Thus,
+Mehrfach angegebene Schl\"{u}sselw\"{o}rter werden mit {\bf ODER} verkn\"{u}pft.
+Also wird:

\footnotesize
\begin{verbatim}
@@ -192,13 +191,13 @@ FileIndex = 1
\end{verbatim}
\normalsize

-will match records on Volume {\bf Test-01} {\bf AND} (Client records for {\bf
-My machine} {\bf OR} {\bf Backup machine}) {\bf AND} FileIndex equal to {\bf
-one}.
+auf alle Dateien auf dem Volume Test-01 {\bf UND} von Client My machine
+{\bf ODER} vom Client Backup machine {\bf UND} mit dem Fileindex 1 passen.

-For integer values, you may supply a range or a list, and for all other values
-except Volumes, you may specify a list. A list is equivalent to multiple
-records of the same keyword. For example,
+F\"{u}r Zahlenwerte k\"{o}nnen Sie einen Bereich oder eine Liste angeben,
+f\"{u}r alle anderen Parameter, bis auf Volumes, nur eine Liste.
+Eine Liste ist gleichbedeutend mit mehrfachen Angaben eines Parameters.
+Ein Beispiel:

\footnotesize
\begin{verbatim}
@@ -208,18 +207,18 @@ FileIndex = 1-20, 35
\end{verbatim}
\normalsize

-will match records on Volume {\bf Test-01} {\bf AND} {\bf (}Client records for
-{\bf My machine} {\bf OR} {\bf Backup machine}{\bf )} {\bf AND} {\bf
-(}FileIndex 1 {\bf OR} 2 {\bf OR} 3 ... {\bf OR} 20 {\bf OR} 35{\bf )}.
+passt auf alle Dateien auf dem Volume Test-01 {\bf UND} von Client My machine
+{\bf ODER} vom Client Backup machine {\bf UND} mit dem Fileindex 1 {\bf ODER}
+2 {\bf ODER} 3 ... {\bf ODER} 20 {\bf ODER} 35.

-As previously mentioned above, there may be multiple Volume records in the
-same bootstrap file. Each new Volume definition begins a new set of filter
-conditions that apply to that Volume and will be {\bf OR}ed with any other
-Volume definitions.
+Wie oben erw\"{a}hnt, k\"{o}nnen mehrere Volume-Eintr\"{a}ge in derselben
+Bootstrap-Datei stehen. Jedes Vorkommen eines Volume-Eintrags beginnt einen neuen
+Satz an Filterregeln, der auf das angegebene Volume angewendet wird und mit weiteren
+Volume-Eintr\"{a}gen \"{u}ber {\bf ODER} verkn\"{u}pft wird.

-As an example, suppose we query for the current set of tapes to restore all
-files on Client {\bf Rufus} using the {\bf query} command in the console
-program:
+Als ein Beispiel nehmen wir an, dass wir mit dem Console-Kommando {\bf query}
+nach dem Satz Volumes fragen, die ben\"{o}tigt werden, um alle Dateien des Clients Rufus
+wiederherstellen zu k\"{o}nnen:

\footnotesize
\begin{verbatim}
@@ -247,10 +246,10 @@ Enter Client Name: Rufus
\end{verbatim}
\normalsize

-The output shows us that there are four Jobs that must be restored. The first
-one is a Full backup, and the following three are all Incremental backups.
+Die Ausgabe zeigt uns, dass wir vier Jobs wiederherstellen m\"{u}ssen.
+Der erste ist eine vollst\"{a}ndige Sicherung, und die drei folgenden sind inkrementelle Sicherungen.

-The following bootstrap file will restore those files:
+Die folgende Bootstrap-Datei wird ben\"{o}tigt, um alle Dateien wiederherzustellen:

\footnotesize
\begin{verbatim}
@@ -269,8 +268,9 @@ VolSessionTime=1024380678
\end{verbatim}
\normalsize

-As a final example, assume that the initial Full save spanned two Volumes. The
-output from {\bf query} might look like:
+Als letztes Beispiel nehmen wir an, dass die erste vollst\"{a}ndige Sicherung sich
+\"{u}ber zwei verschiedene Volumes erstreckt. Die Ausgabe des Console-Kommandos
+{\bf query} sieht eventuell so aus:

\footnotesize
\begin{verbatim}
@@ -285,7 +285,7 @@ output from {\bf query} might look like:
\end{verbatim}
\normalsize

-and the following bootstrap file would restore those files:
+und die folgende Bootstrap-Datei wird ben\"{o}tigt, um diese Dateien wiederherzustellen:

\footnotesize
\begin{verbatim}
@@ -304,51 +304,78 @@ VolSessionTime=1025025494
\end{verbatim}
\normalsize

-\subsection*{Automatic Generation of Bootstrap Files}
-\index[general]{Files!Automatic Generation of Bootstrap }
-\index[general]{Automatic Generation of Bootstrap Files }
-\addcontentsline{toc}{subsection}{Automatic Generation of Bootstrap Files}
+\section{Automatische Erzeugung der Bootstrap-Datei}
+\index[general]{Datei!automatische Erzeugung der Bootstrap-}
+\index[general]{automatische Erzeugung der Bootstrap-Datei }

-One thing that is probably worth knowing: the bootstrap files that are
-generated automatically at the end of the job are not as optimized as those
-generated by the restore command.
-This is because the ones created at the end
-of the file, contain all files written to the Volume for that job. As a
-consequence, all the files saved to an Incremental or Differential job will be
-restored first by the Full save, then by any Incremental or Differential
-saves.
+Eine Sache ist vermutlich wissenswert: Die Bootstrap-Dateien, die automatisch
+am Ende eines jeden Jobs erzeugt werden, sind nicht so optimiert wie die, die
+durch das Console-Kommando {\bf restore} erzeugt werden.
+Das ist so, weil die Bootstrap-Dateien, die am Ende des Jobs erstellt werden,
+alle Dateien enthalten, die f\"{u}r diesen Job auf das Volume geschrieben wurden.
+Die Konsequenz ist, dass alle Dateien, die w\"{a}hrend eines inkrementellen oder differenziellen
+Jobs geschrieben wurden, beim Wiederherstellen zun\"{a}chst von der vollst\"{a}ndigen Sicherung
+wiederhergestellt werden und dann von der inkrementellen oder differenziellen Sicherung.

-When the bootstrap file is generated for the restore command, only one copy
-(the most recent) of each file is restored.
+Wenn die Bootstrap-Datei f\"{u}r die Wiederherstellung erstellt wird,
+wird immer nur eine Version der Datei (die aktuellste) zur Wiederherstellung aufgelistet.

-So if you have spare cycles on your machine, you could optimize the bootstrap
-files by doing the following:
+Falls Ihr Rechner noch ein bisschen Zeit \"{u}brig hat, k\"{o}nnen Sie Ihre
+Bootstrap-Dateien optimieren, indem Sie das Folgende tun:

\footnotesize
\begin{verbatim}
 ./console
 restore client=xxx select all
+ done
 no
 quit
 Backup bootstrap file.
\end{verbatim}
\normalsize

-The above will not work if you have multiple FileSets because that will be an
-extra prompt. However, the {\bf restore client=xxx select all} builds the
-in-memory tree, selecting everything and creates the bootstrap file.
+Das wird allerdings nicht funktionieren, wenn Ihr Client mehrere Filesets hat,
+denn dann wird noch eine weitere Eingabe erforderlich.
+Das Console-Kommando {\bf restore client=xxx select all} erstellt den Restore-Dateibaum
+und w\"{a}hlt alle Dateien aus, {\bf done} beendet den Auswahlmodus, dann wird die Bootstrap-Datei f\"{u}r diesen
+Wiederherstellungs-Job geschrieben.
-The {\bf no} answers the {\bf Do you want to run this (yes/mod/no)} question.
+Das {\bf no} beantwortet die Frage {\bf Do you want to run this (yes/mod/no)}.
+{\bf quit} beendet das Console-Programm; danach kann die neu erstellte Bootstrap-Datei gesichert werden.
+
+\label{bscanBootstrap}
+\section{Bootstrap-Datei f\"{u}r bscan}
+\index[general]{bscan}
+\index[general]{bscan!Bootstrap-Datei}
+\index[general]{bscan Bootstrap-Datei}
+Wenn Sie mit dem bscan-Programm sehr viele Volumes abfragen m\"{u}ssen,
+wird Ihr Kommando eventuell das Limit der Kommandozeilenl\"{a}nge \"{u}berschreiten (511 Zeichen).
+In dem Fall k\"{o}nnen Sie eine einfache Bootstrap-Datei erzeugen, die nur Volume-Namen enth\"{a}lt.
+Ein Beispiel:
+
+\footnotesize
+\begin{verbatim}
+Volume="Vol001"
+Volume="Vol002"
+Volume="Vol003"
+Volume="Vol004"
+Volume="Vol005"
+\end{verbatim}
+\normalsize
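+
+Eine solche Datei k\"{o}nnen Sie dann mit der bscan-Option {\bf -b} \"{u}bergeben.
+Das folgende Kommando ist nur eine Skizze; der Ger\"{a}tename, der Dateiname
+und der Pfad zur Storage-Dienst-Konfiguration sind Annahmen und m\"{u}ssen an
+Ihre Installation angepasst werden:
+
+\footnotesize
+\begin{verbatim}
+bscan -b volumes.bsr -c /etc/bacula/bacula-sd.conf -v /dev/nst0
+\end{verbatim}
+\normalsize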
-\subsection*{A Final Example}
-\index[general]{Example!Final }
-\index[general]{Final Example }
-\addcontentsline{toc}{subsection}{Final Example}
+\section{Ein weiteres Beispiel der Bootstrap-Datei}
+\index[general]{Beispiel ein weiteres!Bootstrap-Datei }
+\index[general]{ein weiteres Beispiel der Bootstrap-Datei }

-If you want to extract or copy a single Job, you can do it by selecting by
-JobId (code not tested) or better yet, if you know the VolSessionTime and the
-VolSessionId (printed on Job report and in Catalog), specifying this is by far
-the best. Using the VolSessionTime and VolSessionId is the way Bacula does
-restores. A bsr file might look like the following:
+Wenn Sie nur einen einzigen Job vom Volume lesen wollen, k\"{o}nnen Sie das
+durch Ausw\"{a}hlen der Job-ID tun (Funktion nicht getestet), oder besser noch,
+Sie geben die VolumeSessionTime und VolumeSessionID an, falls Sie sie kennen.
+(Die beiden Werte werden im Job-Report ausgegeben und sind in der Katalog-Datenbank
+zu finden.)
+Die VolumeSessionTime und VolumeSessionID anzugeben ist auch die Art,
+wie Bacula Wiederherstellungen durchf\"{u}hrt.
+Eine Bootstrap-Datei kann dann wie folgt aussehen:

\footnotesize
\begin{verbatim}
@@ -358,9 +385,9 @@ VolSessionTime=1080847820
\end{verbatim}
\normalsize

-If you know how many files are backed up (on the job report), you can
-enormously speed up the selection by adding (let's assume there are 157
-files):
+Wenn Sie wissen, wie viele Dateien gesichert wurden (siehe den Job-Report),
+k\"{o}nnen Sie die Auswahl enorm beschleunigen, indem Sie der Bootstrap-Datei
+Folgendes hinzuf\"{u}gen (angenommen, es waren 157 Dateien):

\footnotesize
\begin{verbatim}
@@ -369,8 +396,9 @@ Count=157
\end{verbatim}
\normalsize

-Finally, if you know the File number where the Job starts, you can also cause
-bcopy to forward space to the right file without reading every record:
+Letztendlich, wenn Sie auch die File-Nummer kennen, bei der der Job auf dem
+Volume beginnt, k\"{o}nnen Sie das bcopy-Programm veranlassen,
+zum richtigen File auf dem Volume zu springen, ohne jeden Eintrag lesen zu m\"{u}ssen:

\footnotesize
\begin{verbatim}
@@ -378,11 +406,10 @@ VolFile=20
\end{verbatim}
\normalsize

-There is nothing magic or complicated about a BSR file. Parsing it and
-properly applying it within Bacula *is* magic, but you don't need to worry
-about that.
+Bootstrap-Dateien sind weder magisch noch kompliziert. Sie zu lesen und Bacula sinnvoll mit ihnen
+arbeiten zu lassen *ist* magisch, aber darum brauchen Sie sich nicht zu k\"{u}mmern.

-If you want to see a *real* bsr file, simply fire up the {\bf restore} command
-in the console program, select something, then answer no when it prompts to
-run the job. Then look at the file {\bf restore.bsr} in your working
-directory.
+Wenn Sie eine *echte* Bootstrap-Datei sehen wollen, starten Sie das Console-Programm, geben Sie
+{\bf restore} ein, w\"{a}hlen Sie ein paar Dateien aus und antworten Sie mit {\bf no},
+wenn Sie gefragt werden, ob Sie die Wiederherstellung starten wollen. Dann finden Sie die Bootstrap-Datei
+im Arbeitsverzeichnis des Director-Dienstes (z.B. unter /var/lib/bacula/backup-dir.restore.2.bsr).
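+
+Zum Abschluss ein zusammenh\"{a}ngendes Beispiel, das die in diesem Kapitel
+besprochenen Schl\"{u}sselw\"{o}rter kombiniert. Alle Werte sind frei erfunden
+und dienen nur der Veranschaulichung:
+
+\footnotesize
+\begin{verbatim}
+Volume="Vol001"
+VolSessionId=2
+VolSessionTime=1080847820
+VolFile=20
+FileIndex=1-157
+Count=157
+\end{verbatim}
+\normalsize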
diff --git a/docs/manual-de/bugs.tex b/docs/manual-de/bugs.tex index c9b07450..6f4b9b6a 100644 --- a/docs/manual-de/bugs.tex +++ b/docs/manual-de/bugs.tex @@ -1,22 +1,19 @@ %% %% -\section*{Bacula Bugs} -\label{_ChapterStart4} +\section{Bacula Bugs} +\label{BugsChapter} \index[general]{Bacula Bugs } \index[general]{Bugs!Bacula } -\addcontentsline{toc}{section}{Bacula Bugs} -Well fortunately there are not too many bugs, but thanks to Dan Langille, we -have a -\elink{bugs database}{http://bugs.bacula.org} where bugs are reported. -Generally, when a bug is fixed, a patch for the currently released version will -be attached to the bug report. +Zum Gl\"{u}ck gibt es in Bacula nicht sehr viele Programmfehler (Bugs), +aber dank Dan Langille haben wir eine \elink{Bug-Datenbank}{http://bugs.bacula.org}, +wo Fehler gemeldet werden k\"{o}nnen. Wenn ein Fehler behoben ist, wird normalerweise ein +Programmst\"{u}ck das den Fehler korrigiert (Patch), auf der Seite des Fehlerberichts +ver\"{o}ffentlicht. -The directory {\bf patches} in the current CVS always contains a list of -the patches that have been created for the previously released version -of Bacula. In addition, the file {\bf patches-version-number} in the -{\bf patches} directory contains a summary of each of the patches. +Das Verzeichnis {\bf patches} im aktuellen SVN enth\"{a}lt eine Liste aller Programmkorrekturen +die f\"{u}r \"{a}ltere Bacula-Versionen ver\"{o}ffentlicht wurden. -A "raw" list of the current task list and known issues can be found in {\bf -kernstodo} in the main Bacula source directory. +Eine "grobe" \"{U}bersicht der momentanen Arbeit und bekannter Probleme befindet sich +auch in der Datei {\bf kernstodo} im Hauptverzeichnis der Bacula-Programmquellen. diff --git a/docs/manual-de/catalog.tex b/docs/manual-de/catalog.tex deleted file mode 100644 index eebe59bc..00000000 --- a/docs/manual-de/catalog.tex +++ /dev/null @@ -1,929 +0,0 @@ -%% -%% - -\section*{Catalog Services} -\label{_ChapterStart30} -\index[general]{Services!Catalog } -\index[general]{Catalog Services } -\addcontentsline{toc}{section}{Catalog Services} - -\subsection*{General} -\index[general]{General } -\addcontentsline{toc}{subsection}{General} - -This chapter is intended to be a technical discussion of the Catalog services -and as such is not targeted at end users but rather at developers and system -administrators that want or need to know more of the working details of {\bf -Bacula}. - -The {\bf Bacula Catalog} services consist of the programs that provide the SQL -database engine for storage and retrieval of all information concerning files -that were backed up and their locations on the storage media. - -We have investigated the possibility of using the following SQL engines for -Bacula: Beagle, mSQL, GNU SQL, PostgreSQL, SQLite, Oracle, and MySQL. Each -presents certain problems with either licensing or maturity. At present, we -have chosen for development purposes to use MySQL, PostgreSQL and SQLite. -MySQL was chosen because it is fast, proven to be reliable, widely used, and -actively being developed. MySQL is released under the GNU GPL license. -PostgreSQL was chosen because it is a full-featured, very mature database, and -because Dan Langille did the Bacula driver for it. PostgreSQL is distributed -under the BSD license. SQLite was chosen because it is small, efficient, and -can be directly embedded in {\bf Bacula} thus requiring much less effort from -the system administrator or person building {\bf Bacula}. 
In our testing -SQLite has performed very well, and for the functions that we use, it has -never encountered any errors except that it does not appear to handle -databases larger than 2GBytes. - -The Bacula SQL code has been written in a manner that will allow it to be -easily modified to support any of the current SQL database systems on the -market (for example: mSQL, iODBC, unixODBC, Solid, OpenLink ODBC, EasySoft -ODBC, InterBase, Oracle8, Oracle7, and DB2). - -If you do not specify either {\bf \verb{--{with-mysql} or {\bf \verb{--{with-postgresql} or -{\bf \verb{--{with-sqlite} on the ./configure line, Bacula will use its minimalist -internal database. This database is kept for build reasons but is no longer -supported. Bacula {\bf requires} one of the three databases (MySQL, -PostgreSQL, or SQLite) to run. - -\subsubsection*{Filenames and Maximum Filename Length} -\index[general]{Filenames and Maximum Filename Length } -\index[general]{Length!Filenames and Maximum Filename } -\addcontentsline{toc}{subsubsection}{Filenames and Maximum Filename Length} - -In general, either MySQL, PostgreSQL or SQLite permit storing arbitrary long -path names and file names in the catalog database. In practice, there still -may be one or two places in the Catalog interface code that restrict the -maximum path length to 512 characters and the maximum file name length to 512 -characters. These restrictions are believed to have been removed. Please note, -these restrictions apply only to the Catalog database and thus to your ability -to list online the files saved during any job. All information received and -stored by the Storage daemon (normally on tape) allows and handles arbitrarily -long path and filenames. - -\subsubsection*{Installing and Configuring MySQL} -\index[general]{MySQL!Installing and Configuring } -\index[general]{Installing and Configuring MySQL } -\addcontentsline{toc}{subsubsection}{Installing and Configuring MySQL} - -For the details of installing and configuring MySQL, please see the -\ilink{Installing and Configuring MySQL}{_ChapterStart} chapter of -this manual. - -\subsubsection*{Installing and Configuring PostgreSQL} -\index[general]{PostgreSQL!Installing and Configuring } -\index[general]{Installing and Configuring PostgreSQL } -\addcontentsline{toc}{subsubsection}{Installing and Configuring PostgreSQL} - -For the details of installing and configuring PostgreSQL, please see the -\ilink{Installing and Configuring PostgreSQL}{_ChapterStart10} -chapter of this manual. - -\subsubsection*{Installing and Configuring SQLite} -\index[general]{Installing and Configuring SQLite } -\index[general]{SQLite!Installing and Configuring } -\addcontentsline{toc}{subsubsection}{Installing and Configuring SQLite} - -For the details of installing and configuring SQLite, please see the -\ilink{Installing and Configuring SQLite}{_ChapterStart33} chapter of -this manual. - -\subsubsection*{Internal Bacula Catalog} -\index[general]{Catalog!Internal Bacula } -\index[general]{Internal Bacula Catalog } -\addcontentsline{toc}{subsubsection}{Internal Bacula Catalog} - -Please see the -\ilink{Internal Bacula Database}{_ChapterStart42} chapter of this -manual for more details. - -\subsubsection*{Database Table Design} -\index[general]{Design!Database Table } -\index[general]{Database Table Design } -\addcontentsline{toc}{subsubsection}{Database Table Design} - -All discussions that follow pertain to the MySQL database. 
The details for the -PostgreSQL and SQLite databases are essentially identical except for that all -fields in the SQLite database are stored as ASCII text and some of the -database creation statements are a bit different. The details of the internal -Bacula catalog are not discussed here. - -Because the Catalog database may contain very large amounts of data for large -sites, we have made a modest attempt to normalize the data tables to reduce -redundant information. While reducing the size of the database significantly, -it does, unfortunately, add some complications to the structures. - -In simple terms, the Catalog database must contain a record of all Jobs run by -Bacula, and for each Job, it must maintain a list of all files saved, with -their File Attributes (permissions, create date, ...), and the location and -Media on which the file is stored. This is seemingly a simple task, but it -represents a huge amount interlinked data. Note: the list of files and their -attributes is not maintained when using the internal Bacula database. The data -stored in the File records, which allows the user or administrator to obtain a -list of all files backed up during a job, is by far the largest volume of -information put into the Catalog database. - -Although the Catalog database has been designed to handle backup data for -multiple clients, some users may want to maintain multiple databases, one for -each machine to be backed up. This reduces the risk of confusion of accidental -restoring a file to the wrong machine as well as reducing the amount of data -in a single database, thus increasing efficiency and reducing the impact of a -lost or damaged database. - -\subsection*{Sequence of Creation of Records for a Save Job} -\index[general]{Sequence of Creation of Records for a Save Job } -\index[general]{Job!Sequence of Creation of Records for a Save } -\addcontentsline{toc}{subsection}{Sequence of Creation of Records for a Save -Job} - -Start with StartDate, ClientName, Filename, Path, Attributes, MediaName, -MediaCoordinates. (PartNumber, NumParts). In the steps below, ``Create new'' -means to create a new record whether or not it is unique. ``Create unique'' -means each record in the database should be unique. Thus, one must first -search to see if the record exists, and only if not should a new one be -created, otherwise the existing RecordId should be used. 
- -\begin{enumerate} -\item Create new Job record with StartDate; save JobId -\item Create unique Media record; save MediaId -\item Create unique Client record; save ClientId -\item Create unique Filename record; save FilenameId -\item Create unique Path record; save PathId -\item Create unique Attribute record; save AttributeId - store ClientId, FilenameId, PathId, and Attributes -\item Create new File record - store JobId, AttributeId, MediaCoordinates, etc -\item Repeat steps 4 through 8 for each file -\item Create a JobMedia record; save MediaId -\item Update Job record filling in EndDate and other Job statistics - \end{enumerate} - -\subsection*{Database Tables} -\index[general]{Database Tables } -\index[general]{Tables!Database } -\addcontentsline{toc}{subsection}{Database Tables} - -\addcontentsline{lot}{table}{Filename Table Layout} -\begin{longtable}{|l|l|l|} - \hline -\multicolumn{3}{|l| }{\bf Filename } \\ - \hline -\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{l| }{\bf Data Type } -& \multicolumn{1}{l| }{\bf Remark } \\ - \hline -{FilenameId } & {integer } & {Primary Key } \\ - \hline -{Name } & {Blob } & {Filename } -\\ \hline - -\end{longtable} - -The {\bf Filename} table shown above contains the name of each file backed up -with the path removed. If different directories or machines contain the same -filename, only one copy will be saved in this table. - -\ - -\addcontentsline{lot}{table}{Path Table Layout} -\begin{longtable}{|l|l|l|} - \hline -\multicolumn{3}{|l| }{\bf Path } \\ - \hline -\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type -} & \multicolumn{1}{c| }{\bf Remark } \\ - \hline -{PathId } & {integer } & {Primary Key } \\ - \hline -{Path } & {Blob } & {Full Path } -\\ \hline - -\end{longtable} - -The {\bf Path} table contains shown above the path or directory names of all -directories on the system or systems. The filename and any MSDOS disk name are -stripped off. As with the filename, only one copy of each directory name is -kept regardless of how many machines or drives have the same directory. These -path names should be stored in Unix path name format. - -Some simple testing on a Linux file system indicates that separating the -filename and the path may be more complication than is warranted by the space -savings. For example, this system has a total of 89,097 files, 60,467 of which -have unique filenames, and there are 4,374 unique paths. - -Finding all those files and doing two stats() per file takes an average wall -clock time of 1 min 35 seconds on a 400MHz machine running RedHat 6.1 Linux. - -Finding all those files and putting them directly into a MySQL database with -the path and filename defined as TEXT, which is variable length up to 65,535 -characters takes 19 mins 31 seconds and creates a 27.6 MByte database. - -Doing the same thing, but inserting them into Blob fields with the filename -indexed on the first 30 characters and the path name indexed on the 255 (max) -characters takes 5 mins 18 seconds and creates a 5.24 MB database. Rerunning -the job (with the database already created) takes about 2 mins 50 seconds. - -Running the same as the last one (Path and Filename Blob), but Filename -indexed on the first 30 characters and the Path on the first 50 characters -(linear search done there after) takes 5 mins on the average and creates a 3.4 -MB database. Rerunning with the data already in the DB takes 3 mins 35 -seconds. 
- -Finally, saving only the full path name rather than splitting the path and the -file, and indexing it on the first 50 characters takes 6 mins 43 seconds and -creates a 7.35 MB database. - -\ - -\addcontentsline{lot}{table}{File Table Layout} -\begin{longtable}{|l|l|l|} - \hline -\multicolumn{3}{|l| }{\bf File } \\ - \hline -\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type -} & \multicolumn{1}{c| }{\bf Remark } \\ - \hline -{FileId } & {integer } & {Primary Key } \\ - \hline -{FileIndex } & {integer } & {The sequential file number in the Job } \\ - \hline -{JobId } & {integer } & {Link to Job Record } \\ - \hline -{PathId } & {integer } & {Link to Path Record } \\ - \hline -{FilenameId } & {integer } & {Link to Filename Record } \\ - \hline -{MarkId } & {integer } & {Used to mark files during Verify Jobs } \\ - \hline -{LStat } & {tinyblob } & {File attributes in base64 encoding } \\ - \hline -{MD5 } & {tinyblob } & {MD5 signature in base64 encoding } -\\ \hline - -\end{longtable} - -The {\bf File} table shown above contains one entry for each file backed up by -Bacula. Thus a file that is backed up multiple times (as is normal) will have -multiple entries in the File table. This will probably be the table with the -most number of records. Consequently, it is essential to keep the size of this -record to an absolute minimum. At the same time, this table must contain all -the information (or pointers to the information) about the file and where it -is backed up. Since a file may be backed up many times without having changed, -the path and filename are stored in separate tables. - -This table contains by far the largest amount of information in the Catalog -database, both from the stand point of number of records, and the stand point -of total database size. As a consequence, the user must take care to -periodically reduce the number of File records using the {\bf retention} -command in the Console program. - -\ - -\addcontentsline{lot}{table}{Job Table Layout} -\begin{longtable}{|l|l|p{2.5in}|} - \hline -\multicolumn{3}{|l| }{\bf Job } \\ - \hline -\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type -} & \multicolumn{1}{c| }{\bf Remark } \\ - \hline -{JobId } & {integer } & {Primary Key } \\ - \hline -{Job } & {tinyblob } & {Unique Job Name } \\ - \hline -{Name } & {tinyblob } & {Job Name } \\ - \hline -{PurgedFiles } & {tinyint } & {Used by Bacula for purging/retention periods -} \\ - \hline -{Type } & {binary(1) } & {Job Type: Backup, Copy, Clone, Archive, Migration -} \\ - \hline -{Level } & {binary(1) } & {Job Level } \\ - \hline -{ClientId } & {integer } & {Client index } \\ - \hline -{JobStatus } & {binary(1) } & {Job Termination Status } \\ - \hline -{SchedTime } & {datetime } & {Time/date when Job scheduled } \\ - \hline -{StartTime } & {datetime } & {Time/date when Job started } \\ - \hline -{EndTime } & {datetime } & {Time/date when Job ended } \\ - \hline -{JobTDate } & {bigint } & {Start day in Unix format but 64 bits; used for -Retention period. 
} \\ - \hline -{VolSessionId } & {integer } & {Unique Volume Session ID } \\ - \hline -{VolSessionTime } & {integer } & {Unique Volume Session Time } \\ - \hline -{JobFiles } & {integer } & {Number of files saved in Job } \\ - \hline -{JobBytes } & {bigint } & {Number of bytes saved in Job } \\ - \hline -{JobErrors } & {integer } & {Number of errors during Job } \\ - \hline -{JobMissingFiles } & {integer } & {Number of files not saved (not yet used) } -\\ - \hline -{PoolId } & {integer } & {Link to Pool Record } \\ - \hline -{FileSetId } & {integer } & {Link to FileSet Record } \\ - \hline -{PurgedFiles } & {tiny integer } & {Set when all File records purged } \\ - \hline -{HasBase } & {tiny integer } & {Set when Base Job run } -\\ \hline - -\end{longtable} - -The {\bf Job} table contains one record for each Job run by Bacula. Thus -normally, there will be one per day per machine added to the database. Note, -the JobId is used to index Job records in the database, and it often is shown -to the user in the Console program. However, care must be taken with its use -as it is not unique from database to database. For example, the user may have -a database for Client data saved on machine Rufus and another database for -Client data saved on machine Roxie. In this case, the two database will each -have JobIds that match those in another database. For a unique reference to a -Job, see Job below. - -The Name field of the Job record corresponds to the Name resource record given -in the Director's configuration file. Thus it is a generic name, and it will -be normal to find many Jobs (or even all Jobs) with the same Name. - -The Job field contains a combination of the Name and the schedule time of the -Job by the Director. Thus for a given Director, even with multiple Catalog -databases, the Job will contain a unique name that represents the Job. - -For a given Storage daemon, the VolSessionId and VolSessionTime form a unique -identification of the Job. This will be the case even if multiple Directors -are using the same Storage daemon. 
- -The Job Type (or simply Type) can have one of the following values: - -\addcontentsline{lot}{table}{Job Types} -\begin{longtable}{|l|l|} - \hline -\multicolumn{1}{|c| }{\bf Value } & \multicolumn{1}{c| }{\bf Meaning } \\ - \hline -{B } & {Backup Job } \\ - \hline -{V } & {Verify Job } \\ - \hline -{R } & {Restore Job } \\ - \hline -{C } & {Console program (not in database) } \\ - \hline -{D } & {Admin Job } \\ - \hline -{A } & {Archive Job (not implemented) } -\\ \hline - -\end{longtable} - -The JobStatus field specifies how the job terminated, and can be one of the -following: - -\addcontentsline{lot}{table}{Job Statuses} -\begin{longtable}{|l|l|} - \hline -\multicolumn{1}{|c| }{\bf Value } & \multicolumn{1}{c| }{\bf Meaning } \\ - \hline -{C } & {Created but not yet running } \\ - \hline -{R } & {Running } \\ - \hline -{B } & {Blocked } \\ - \hline -{T } & {Terminated normally } \\ - \hline -{E } & {Terminated in Error } \\ - \hline -{e } & {Non-fatal error } \\ - \hline -{f } & {Fatal error } \\ - \hline -{D } & {Verify Differences } \\ - \hline -{A } & {Canceled by the user } \\ - \hline -{F } & {Waiting on the File daemon } \\ - \hline -{S } & {Waiting on the Storage daemon } \\ - \hline -{m } & {Waiting for a new Volume to be mounted } \\ - \hline -{M } & {Waiting for a Mount } \\ - \hline -{s } & {Waiting for Storage resource } \\ - \hline -{j } & {Waiting for Job resource } \\ - \hline -{c } & {Waiting for Client resource } \\ - \hline -{d } & {Wating for Maximum jobs } \\ - \hline -{t } & {Waiting for Start Time } \\ - \hline -{p } & {Waiting for higher priority job to finish } -\\ \hline - -\end{longtable} - -\ - -\addcontentsline{lot}{table}{File Sets Table Layout} -\begin{longtable}{|l|l|l|} - \hline -\multicolumn{3}{|l| }{\bf FileSet } \\ - \hline -\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type\ -\ \ } & \multicolumn{1}{c| }{\bf Remark } \\ - \hline -{FileSetId } & {integer } & {Primary Key } \\ - \hline -{FileSet } & {tinyblob } & {FileSet name } \\ - \hline -{MD5 } & {tinyblob } & {MD5 checksum of FileSet } \\ - \hline -{CreateTime } & {datetime } & {Time and date Fileset created } -\\ \hline - -\end{longtable} - -The {\bf FileSet} table contains one entry for each FileSet that is used. The -MD5 signature is kept to ensure that if the user changes anything inside the -FileSet, it will be detected and the new FileSet will be used. This is -particularly important when doing an incremental update. If the user deletes a -file or adds a file, we need to ensure that a Full backup is done prior to the -next incremental. 
- -\ - -\addcontentsline{lot}{table}{JobMedia Table Layout} -\begin{longtable}{|l|l|p{2.5in}|} - \hline -\multicolumn{3}{|l| }{\bf JobMedia } \\ - \hline -\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type\ -\ \ } & \multicolumn{1}{c| }{\bf Remark } \\ - \hline -{JobMediaId } & {integer } & {Primary Key } \\ - \hline -{JobId } & {integer } & {Link to Job Record } \\ - \hline -{MediaId } & {integer } & {Link to Media Record } \\ - \hline -{FirstIndex } & {integer } & {The index (sequence number) of the first file -written for this Job to the Media } \\ - \hline -{LastIndex } & {integer } & {The index of the last file written for this -Job to the Media } \\ - \hline -{StartFile } & {integer } & {The physical media (tape) file number of the -first block written for this Job } \\ - \hline -{EndFile } & {integer } & {The physical media (tape) file number of the -last block written for this Job } \\ - \hline -{StartBlock } & {integer } & {The number of the first block written for -this Job } \\ - \hline -{EndBlock } & {integer } & {The number of the last block written for this -Job } \\ - \hline -{VolIndex } & {integer } & {The Volume use sequence number within the Job } -\\ \hline - -\end{longtable} - -The {\bf JobMedia} table contains one entry for each volume written for the -current Job. If the Job spans 3 tapes, there will be three JobMedia records, -each containing the information to find all the files for the given JobId on -the tape. - -\ - -\addcontentsline{lot}{table}{Media Table Layout} -\begin{longtable}{|l|l|p{2.4in}|} - \hline -\multicolumn{3}{|l| }{\bf Media } \\ - \hline -\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type\ -\ \ } & \multicolumn{1}{c| }{\bf Remark } \\ - \hline -{MediaId } & {integer } & {Primary Key } \\ - \hline -{VolumeName } & {tinyblob } & {Volume name } \\ - \hline -{Slot } & {integer } & {Autochanger Slot number or zero } \\ - \hline -{PoolId } & {integer } & {Link to Pool Record } \\ - \hline -{MediaType } & {tinyblob } & {The MediaType supplied by the user } \\ - \hline -{FirstWritten } & {datetime } & {Time/date when first written } \\ - \hline -{LastWritten } & {datetime } & {Time/date when last written } \\ - \hline -{LabelDate } & {datetime } & {Time/date when tape labeled } \\ - \hline -{VolJobs } & {integer } & {Number of jobs written to this media } \\ - \hline -{VolFiles } & {integer } & {Number of files written to this media } \\ - \hline -{VolBlocks } & {integer } & {Number of blocks written to this media } \\ - \hline -{VolMounts } & {integer } & {Number of time media mounted } \\ - \hline -{VolBytes } & {bigint } & {Number of bytes saved in Job } \\ - \hline -{VolErrors } & {integer } & {Number of errors during Job } \\ - \hline -{VolWrites } & {integer } & {Number of writes to media } \\ - \hline -{MaxVolBytes } & {bigint } & {Maximum bytes to put on this media } \\ - \hline -{VolCapacityBytes } & {bigint } & {Capacity estimate for this volume } \\ - \hline -{VolStatus } & {enum } & {Status of media: Full, Archive, Append, Recycle, -Read-Only, Disabled, Error, Busy } \\ - \hline -{Recycle } & {tinyint } & {Whether or not Bacula can recycle the Volumes: -Yes, No } \\ - \hline -{VolRetention } & {bigint } & {64 bit seconds until expiration } \\ - \hline -{VolUseDuration } & {bigint } & {64 bit seconds volume can be used } \\ - \hline -{MaxVolJobs } & {integer } & {maximum jobs to put on Volume } \\ - \hline -{MaxVolFiles } & {integer } & {maximume EOF marks to put on Volume } -\\ \hline - 
-\end{longtable} - -The {\bf Volume} table (internally referred to as the Media table) contains -one entry for each volume, that is each tape, cassette (8mm, DLT, DAT, ...), -or file on which information is or was backed up. There is one Volume record -created for each of the NumVols specified in the Pool resource record. - -\ - -\addcontentsline{lot}{table}{Pool Table Layout} -\begin{longtable}{|l|l|p{2.4in}|} - \hline -\multicolumn{3}{|l| }{\bf Pool } \\ - \hline -\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type -} & \multicolumn{1}{c| }{\bf Remark } \\ - \hline -{PoolId } & {integer } & {Primary Key } \\ - \hline -{Name } & {Tinyblob } & {Pool Name } \\ - \hline -{NumVols } & {Integer } & {Number of Volumes in the Pool } \\ - \hline -{MaxVols } & {Integer } & {Maximum Volumes in the Pool } \\ - \hline -{UseOnce } & {tinyint } & {Use volume once } \\ - \hline -{UseCatalog } & {tinyint } & {Set to use catalog } \\ - \hline -{AcceptAnyVolume } & {tinyint } & {Accept any volume from Pool } \\ - \hline -{VolRetention } & {bigint } & {64 bit seconds to retain volume } \\ - \hline -{VolUseDuration } & {bigint } & {64 bit seconds volume can be used } \\ - \hline -{MaxVolJobs } & {integer } & {max jobs on volume } \\ - \hline -{MaxVolFiles } & {integer } & {max EOF marks to put on Volume } \\ - \hline -{MaxVolBytes } & {bigint } & {max bytes to write on Volume } \\ - \hline -{AutoPrune } & {tinyint } & {yes|no for autopruning } \\ - \hline -{Recycle } & {tinyint } & {yes|no for allowing auto recycling of Volume } -\\ - \hline -{PoolType } & {enum } & {Backup, Copy, Cloned, Archive, Migration } \\ - \hline -{LabelFormat } & {Tinyblob } & {Label format } -\\ \hline - -\end{longtable} - -The {\bf Pool} table contains one entry for each media pool controlled by -Bacula in this database. One media record exists for each of the NumVols -contained in the Pool. The PoolType is a Bacula defined keyword. The MediaType -is defined by the administrator, and corresponds to the MediaType specified in -the Director's Storage definition record. The CurrentVol is the sequence -number of the Media record for the current volume. - -\ - -\addcontentsline{lot}{table}{Client Table Layout} -\begin{longtable}{|l|l|l|} - \hline -\multicolumn{3}{|l| }{\bf Client } \\ - \hline -\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type -} & \multicolumn{1}{c| }{\bf Remark } \\ - \hline -{ClientId } & {integer } & {Primary Key } \\ - \hline -{Name } & {TinyBlob } & {File Services Name } \\ - \hline -{UName } & {TinyBlob } & {uname -a from Client (not yet used) } \\ - \hline -{AutoPrune } & {tinyint } & {yes|no for autopruning } \\ - \hline -{FileRetention } & {bigint } & {64 bit seconds to retain Files } \\ - \hline -{JobRetention } & {bigint } & {64 bit seconds to retain Job } -\\ \hline - -\end{longtable} - -The {\bf Client} table contains one entry for each machine backed up by Bacula -in this database. Normally the Name is a fully qualified domain name. 
- -\ - -\addcontentsline{lot}{table}{Unsaved Files Table Layout} -\begin{longtable}{|l|l|l|} - \hline -\multicolumn{3}{|l| }{\bf UnsavedFiles } \\ - \hline -\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type -} & \multicolumn{1}{c| }{\bf Remark } \\ - \hline -{UnsavedId } & {integer } & {Primary Key } \\ - \hline -{JobId } & {integer } & {JobId corresponding to this record } \\ - \hline -{PathId } & {integer } & {Id of path } \\ - \hline -{FilenameId } & {integer } & {Id of filename } -\\ \hline - -\end{longtable} - -The {\bf UnsavedFiles} table contains one entry for each file that was not -saved. Note! This record is not yet implemented. - -\ - -\addcontentsline{lot}{table}{Counter Table Layout} -\begin{longtable}{|l|l|l|} - \hline -\multicolumn{3}{|l| }{\bf Counter } \\ - \hline -\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type -} & \multicolumn{1}{c| }{\bf Remark } \\ - \hline -{Counter } & {tinyblob } & {Counter name } \\ - \hline -{MinValue } & {integer } & {Start/Min value for counter } \\ - \hline -{MaxValue } & {integer } & {Max value for counter } \\ - \hline -{CurrentValue } & {integer } & {Current counter value } \\ - \hline -{WrapCounter } & {tinyblob } & {Name of another counter } -\\ \hline - -\end{longtable} - -The {\bf Counter} table contains one entry for each permanent counter defined -by the user. - -\ - -\addcontentsline{lot}{table}{Version Table Layout} -\begin{longtable}{|l|l|l|} - \hline -\multicolumn{3}{|l| }{\bf Version } \\ - \hline -\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type -} & \multicolumn{1}{c| }{\bf Remark } \\ - \hline -{VersionId } & {integer } & {Primary Key } -\\ \hline - -\end{longtable} - -The {\bf Version} table defines the Bacula database version number. Bacula -checks this number before reading the database to ensure that it is compatible -with the Bacula binary file. - -\ - -\addcontentsline{lot}{table}{Base Files Table Layout} -\begin{longtable}{|l|l|l|} - \hline -\multicolumn{3}{|l| }{\bf BaseFiles } \\ - \hline -\multicolumn{1}{|c| }{\bf Column Name } & \multicolumn{1}{c| }{\bf Data Type -} & \multicolumn{1}{c| }{\bf Remark } \\ - \hline -{BaseId } & {integer } & {Primary Key } \\ - \hline -{BaseJobId } & {integer } & {JobId of Base Job } \\ - \hline -{JobId } & {integer } & {Reference to Job } \\ - \hline -{FileId } & {integer } & {Reference to File } \\ - \hline -{FileIndex } & {integer } & {File Index number } -\\ \hline - -\end{longtable} - -The {\bf BaseFiles} table contains all the File references for a particular -JobId that point to a Base file -- i.e. they were previously saved and hence -were not saved in the current JobId but in BaseJobId under FileId. FileIndex -is the index of the file, and is used for optimization of Restore jobs to -prevent the need to read the FileId record when creating the in memory tree. -This record is not yet implemented. 
- -\ - -\subsubsection*{MySQL Table Definition} -\index[general]{MySQL Table Definition } -\index[general]{Definition!MySQL Table } -\addcontentsline{toc}{subsubsection}{MySQL Table Definition} - -The commands used to create the MySQL tables are as follows: - -\footnotesize -\begin{verbatim} -USE bacula; -CREATE TABLE Filename ( - FilenameId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT, - Name BLOB NOT NULL, - PRIMARY KEY(FilenameId), - INDEX (Name(30)) - ); -CREATE TABLE Path ( - PathId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT, - Path BLOB NOT NULL, - PRIMARY KEY(PathId), - INDEX (Path(50)) - ); -CREATE TABLE File ( - FileId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT, - FileIndex INTEGER UNSIGNED NOT NULL DEFAULT 0, - JobId INTEGER UNSIGNED NOT NULL REFERENCES Job, - PathId INTEGER UNSIGNED NOT NULL REFERENCES Path, - FilenameId INTEGER UNSIGNED NOT NULL REFERENCES Filename, - MarkId INTEGER UNSIGNED NOT NULL DEFAULT 0, - LStat TINYBLOB NOT NULL, - MD5 TINYBLOB NOT NULL, - PRIMARY KEY(FileId), - INDEX (JobId), - INDEX (PathId), - INDEX (FilenameId) - ); -CREATE TABLE Job ( - JobId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT, - Job TINYBLOB NOT NULL, - Name TINYBLOB NOT NULL, - Type BINARY(1) NOT NULL, - Level BINARY(1) NOT NULL, - ClientId INTEGER NOT NULL REFERENCES Client, - JobStatus BINARY(1) NOT NULL, - SchedTime DATETIME NOT NULL, - StartTime DATETIME NOT NULL, - EndTime DATETIME NOT NULL, - JobTDate BIGINT UNSIGNED NOT NULL, - VolSessionId INTEGER UNSIGNED NOT NULL DEFAULT 0, - VolSessionTime INTEGER UNSIGNED NOT NULL DEFAULT 0, - JobFiles INTEGER UNSIGNED NOT NULL DEFAULT 0, - JobBytes BIGINT UNSIGNED NOT NULL, - JobErrors INTEGER UNSIGNED NOT NULL DEFAULT 0, - JobMissingFiles INTEGER UNSIGNED NOT NULL DEFAULT 0, - PoolId INTEGER UNSIGNED NOT NULL REFERENCES Pool, - FileSetId INTEGER UNSIGNED NOT NULL REFERENCES FileSet, - PurgedFiles TINYINT NOT NULL DEFAULT 0, - HasBase TINYINT NOT NULL DEFAULT 0, - PRIMARY KEY(JobId), - INDEX (Name(128)) - ); -CREATE TABLE FileSet ( - FileSetId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT, - FileSet TINYBLOB NOT NULL, - MD5 TINYBLOB NOT NULL, - CreateTime DATETIME NOT NULL, - PRIMARY KEY(FileSetId) - ); -CREATE TABLE JobMedia ( - JobMediaId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT, - JobId INTEGER UNSIGNED NOT NULL REFERENCES Job, - MediaId INTEGER UNSIGNED NOT NULL REFERENCES Media, - FirstIndex INTEGER UNSIGNED NOT NULL DEFAULT 0, - LastIndex INTEGER UNSIGNED NOT NULL DEFAULT 0, - StartFile INTEGER UNSIGNED NOT NULL DEFAULT 0, - EndFile INTEGER UNSIGNED NOT NULL DEFAULT 0, - StartBlock INTEGER UNSIGNED NOT NULL DEFAULT 0, - EndBlock INTEGER UNSIGNED NOT NULL DEFAULT 0, - VolIndex INTEGER UNSIGNED NOT NULL DEFAULT 0, - PRIMARY KEY(JobMediaId), - INDEX (JobId, MediaId) - ); -CREATE TABLE Media ( - MediaId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT, - VolumeName TINYBLOB NOT NULL, - Slot INTEGER NOT NULL DEFAULT 0, - PoolId INTEGER UNSIGNED NOT NULL REFERENCES Pool, - MediaType TINYBLOB NOT NULL, - FirstWritten DATETIME NOT NULL, - LastWritten DATETIME NOT NULL, - LabelDate DATETIME NOT NULL, - VolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0, - VolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0, - VolBlocks INTEGER UNSIGNED NOT NULL DEFAULT 0, - VolMounts INTEGER UNSIGNED NOT NULL DEFAULT 0, - VolBytes BIGINT UNSIGNED NOT NULL DEFAULT 0, - VolErrors INTEGER UNSIGNED NOT NULL DEFAULT 0, - VolWrites INTEGER UNSIGNED NOT NULL DEFAULT 0, - VolCapacityBytes BIGINT UNSIGNED NOT NULL, - VolStatus ENUM('Full', 'Archive', 'Append', 'Recycle', 'Purged', - 'Read-Only', 'Disabled', 
'Error', 'Busy', 'Used', 'Cleaning') NOT NULL, - Recycle TINYINT NOT NULL DEFAULT 0, - VolRetention BIGINT UNSIGNED NOT NULL DEFAULT 0, - VolUseDuration BIGINT UNSIGNED NOT NULL DEFAULT 0, - MaxVolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0, - MaxVolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0, - MaxVolBytes BIGINT UNSIGNED NOT NULL DEFAULT 0, - InChanger TINYINT NOT NULL DEFAULT 0, - MediaAddressing TINYINT NOT NULL DEFAULT 0, - VolReadTime BIGINT UNSIGNED NOT NULL DEFAULT 0, - VolWriteTime BIGINT UNSIGNED NOT NULL DEFAULT 0, - PRIMARY KEY(MediaId), - INDEX (PoolId) - ); -CREATE TABLE Pool ( - PoolId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT, - Name TINYBLOB NOT NULL, - NumVols INTEGER UNSIGNED NOT NULL DEFAULT 0, - MaxVols INTEGER UNSIGNED NOT NULL DEFAULT 0, - UseOnce TINYINT NOT NULL, - UseCatalog TINYINT NOT NULL, - AcceptAnyVolume TINYINT DEFAULT 0, - VolRetention BIGINT UNSIGNED NOT NULL, - VolUseDuration BIGINT UNSIGNED NOT NULL, - MaxVolJobs INTEGER UNSIGNED NOT NULL DEFAULT 0, - MaxVolFiles INTEGER UNSIGNED NOT NULL DEFAULT 0, - MaxVolBytes BIGINT UNSIGNED NOT NULL, - AutoPrune TINYINT DEFAULT 0, - Recycle TINYINT DEFAULT 0, - PoolType ENUM('Backup', 'Copy', 'Cloned', 'Archive', 'Migration', 'Scratch') NOT NULL, - LabelFormat TINYBLOB, - Enabled TINYINT DEFAULT 1, - ScratchPoolId INTEGER UNSIGNED DEFAULT 0 REFERENCES Pool, - RecyclePoolId INTEGER UNSIGNED DEFAULT 0 REFERENCES Pool, - UNIQUE (Name(128)), - PRIMARY KEY (PoolId) - ); -CREATE TABLE Client ( - ClientId INTEGER UNSIGNED NOT NULL AUTO_INCREMENT, - Name TINYBLOB NOT NULL, - Uname TINYBLOB NOT NULL, /* full uname -a of client */ - AutoPrune TINYINT DEFAULT 0, - FileRetention BIGINT UNSIGNED NOT NULL, - JobRetention BIGINT UNSIGNED NOT NULL, - UNIQUE (Name(128)), - PRIMARY KEY(ClientId) - ); -CREATE TABLE BaseFiles ( - BaseId INTEGER UNSIGNED AUTO_INCREMENT, - BaseJobId INTEGER UNSIGNED NOT NULL REFERENCES Job, - JobId INTEGER UNSIGNED NOT NULL REFERENCES Job, - FileId INTEGER UNSIGNED NOT NULL REFERENCES File, - FileIndex INTEGER UNSIGNED, - PRIMARY KEY(BaseId) - ); -CREATE TABLE UnsavedFiles ( - UnsavedId INTEGER UNSIGNED AUTO_INCREMENT, - JobId INTEGER UNSIGNED NOT NULL REFERENCES Job, - PathId INTEGER UNSIGNED NOT NULL REFERENCES Path, - FilenameId INTEGER UNSIGNED NOT NULL REFERENCES Filename, - PRIMARY KEY (UnsavedId) - ); -CREATE TABLE Version ( - VersionId INTEGER UNSIGNED NOT NULL - ); --- Initialize Version -INSERT INTO Version (VersionId) VALUES (7); -CREATE TABLE Counters ( - Counter TINYBLOB NOT NULL, - MinValue INTEGER, - MaxValue INTEGER, - CurrentValue INTEGER, - WrapCounter TINYBLOB NOT NULL, - PRIMARY KEY (Counter(128)) - ); -\end{verbatim} -\normalsize diff --git a/docs/manual-de/catmaintenance.tex b/docs/manual-de/catmaintenance.tex index 77448757..a9cdb361 100644 --- a/docs/manual-de/catmaintenance.tex +++ b/docs/manual-de/catmaintenance.tex @@ -1,120 +1,113 @@ %% %% -\section*{Catalog Maintenance} -\label{_ChapterStart12} -\index[general]{Maintenance!Catalog } -\index[general]{Catalog Maintenance } -\addcontentsline{toc}{section}{Catalog Maintenance} - -Without proper setup and maintenance, your Catalog may continue to grow -indefinitely as you run Jobs and backup Files. How fast the size of your -Catalog grows depends on the number of Jobs you run and how many files they -backup. By deleting records within the database, you can make space available -for the new records that will be added during the next Job. 
By constantly -deleting old expired records (dates older than the Retention period), your -database size will remain constant. - -If you started with the default configuration files, they already contain -reasonable defaults for a small number of machines (less than 5), so if you -fall into that case, catalog maintenance will not be urgent if you have a few -hundred megabytes of disk space free. Whatever the case may be, some knowledge -of retention periods will be useful. +\chapter{Katalog Verwaltung} +\label{CatMaintenanceChapter} +\index[general]{Verwaltung!Katalog } +\index[general]{Katalog Verwaltung} + +Ohne eine ordnungsgem\"{a}{\ss}e Einrichtung und Verwaltung kann es sein, +dass Ihr Katalog immer gr\"{o}{\ss}er wird wenn Jobs laufen und Daten gesichert werden. +Zudem kann der Katalog ineffizient und langsam werden. Wie schnell der Katalog w\"{a}chst, +h\"{a}ngt von der Anzahl der Jobs und der Menge der dabei gesicherten Dateien ab. +Durch das L\"{o}schen von Eintr\"{a}gen im Katalog kann Platz geschaffen werden f\"{u}r +neue Eintr\"{a}ge der folgenden Jobs. Durch regelm\"{a}{\ss}iges l\"{o}schen alter abgelaufener +Daten (\"{a}lter als durch die Aufbewahrungszeitr\"{a}ume (Retention Periods) angegeben), +wird daf\"{u}r gesorgt, dass die Katalog-Datenbank eine konstante Gr\"{o}{\ss}e beibeh\"{a}lt. + +Sie k\"{o}nnen mit der vorgegebenen Konfiguration beginnen, sie enth\"{a}lt bereits +sinnvolle Vorgaben f\"{u}r eine kleine Anzahl von Clients (kleiner 5), in diesem Fall +wird die Katalogwartung, wenn Sie einige hundert Megabytes freien Plattenplatz haben, +nicht dringlich sein. Was aber auch immer der Fall ist, einiges Wissen \"{u}ber +die Retention Periods/Aufbewahrungszeitr\"{a}ume der Daten im Katalog und auf den Volumes ist hilfreich. + +\section{Einstellung der Aufbewahrungszeitr\"{a}ume} \label{Retention} +\index[general]{Einstellung der Aufbewahrungszeitr\"{a}ume } +\index[general]{Zeitr\"{a}ume!Einstellung der Aufbewahrungs- } -\subsection*{Setting Retention Periods} -\index[general]{Setting Retention Periods } -\index[general]{Periods!Setting Retention } -\addcontentsline{toc}{subsection}{Setting Retention Periods} +Bacula benutzt drei verschiedene Aufbewahrungszeitr\"{a}ume: +die {\bf File Retention}: der Aufbewahrungszeitraum f\"{u}r Dateien, +die {\bf Job Retention}: der Aufbewahrungszeitraum f\"{u}r Jobs und +die {\bf Volume Retention}: der Aufbewahrungszeitraum f\"{u}r Volumes. +Von diesen drei ist der Aufbewahrungszeitraum f\"{u}r Dateien der entscheidende, +wenn es darum geht, wie gro{\ss} die Datenbank werden wird. -{\bf Bacula} uses three Retention periods: the {\bf File Retention} period, -the {\bf Job Retention} period, and the {\bf Volume Retention} period. Of -these three, the File Retention period is by far the most important in -determining how large your database will become. - -The {\bf File Retention} and the {\bf Job Retention} are specified in each -Client resource as is shown below. The {\bf Volume Retention} period is -specified in the Pool resource, and the details are given in the next chapter -of this manual. +Die {\bf File Retention} und die {\bf Job Retention} werden in der Client-Konfiguration, +wie unten gezeigt, angegeben. Die {\bf Volume Retention} wird in der Pool-Konfiguration +angegeben, genauere Informationen dazu finden Sie im n\"{a}chsten Kapitel dieses Handbuchs. 
\begin{description} \item [File Retention = \lt{}time-period-specification\gt{}] \index[dir]{File Retention } - The File Retention record defines the length of time that Bacula will keep -File records in the Catalog database. When this time period expires, and if -{\bf AutoPrune} is set to {\bf yes}, Bacula will prune (remove) File records -that are older than the specified File Retention period. The pruning will -occur at the end of a backup Job for the given Client. Note that the Client -database record contains a copy of the File and Job retention periods, but -Bacula uses the current values found in the Director's Client resource to do -the pruning. - -Since File records in the database account for probably 80 percent of the -size of the database, you should carefully determine exactly what File -Retention period you need. Once the File records have been removed from -the database, you will no longer be able to restore individual files -in a Job. However, with Bacula version 1.37 and later, as long as the -Job record still exists, you will be able to restore all files in the -job. - -Retention periods are specified in seconds, but as a convenience, there are -a number of modifiers that permit easy specification in terms of minutes, -hours, days, weeks, months, quarters, or years on the record. See the -\ilink{ Configuration chapter}{Time} of this manual for additional details -of modifier specification. - -The default File retention period is 60 days. + Der Aufbewahrungszeitraum f\"{u}r Dateien gibt die Zeitspanne an, die die +Datei-Eintr\"{a}ge in der Katalog-Datenbank aufbewahrt werden. +Wenn {\bf AutoPrune} in der Client-Konfiguration auf {\bf yes} gesetzt ist, +wird Bacula die Katalog-Eintr\"{a}ge der Dateien l\"{o}schen, die \"{a}lter als +dieser Zeitraum sind. Das L\"{o}schen erfolgt nach Beendigung eines Jobs des entsprechenden Clients. +Bitte beachten Sie, dass die Client-Datenbank-Eintr\"{a}ge eine Kopie der Aufbewahrungszeitr\"{a}ume +f\"{u}r Dateien und Jobs enthalten, Bacula aber die Zeitr\"{a}ume aus der aktuellen Client-Konfiguration +des Director-Dienstes benutzt um alte Katalog-Eintr\"{a}ge zu l\"{o}schen. + +Da die Datei-Eintr\"{a}ge ca. 80 Prozent der Katalog-Datenbankgr\"{o}{\ss}e ausmachen, +sollten Sie sorgf\"{a}lltig ermitteln \"{u}ber welchen Zeitraum Sie die Eintr\"{a}ge aufbewahren wollen. +Nachdem die Datei-Eintr\"{a}ge ge\"{o}scht wurden, ist es nicht mehr m\"{o}glich einzelne dieser Dateien +mit einem R\"{u}cksicherungs-Job wiederherzustellen, aber die Bacula-Versionen 1.37 und sp\"{a}ter +sind in der Lage, aufgrund des Job-Eintrags im Katalog, alle Dateien des Jobs zur\"{u}ckzusichern +solange der Job-Eintrag im Katalog vorhanden ist. + +Aufbewahrungszeitr\"{a}ume werden in Sekunden angegeben, aber der Einfachheit halber sind auch +eine Nummer von Hilfsangaben vorhanden, so dass man Minuten, Stunden, Tage, Wochen, +Monate, Quartale und Jahre konfigurieren kann. Lesen Sie bitte das \ilink{Konfigurations-Kapitel}{Time} +dieses Handbuchs um mehr \"{u}ber diese Hilfsangaben zu erfahren. + +Der Standardwert der Aufbewahrungszeit f\"{u}r Dateien ist 60 Tage. \item [Job Retention = \lt{}time-period-specification\gt{}] \index[dir]{Job Retention } - The Job Retention record defines the length of time that {\bf Bacula} -will keep Job records in the Catalog database. When this time period -expires, and if {\bf AutoPrune} is set to {\bf yes} Bacula will prune -(remove) Job records that are older than the specified Job Retention -period. 
Note, if a Job record is selected for pruning, all associated File -and JobMedia records will also be pruned regardless of the File Retention -period set. As a consequence, you normally will set the File retention -period to be less than the Job retention period. - -As mentioned above, once the File records are removed from the database, -you will no longer be able to restore individual files from the Job. -However, as long as the Job record remains in the database, you will be -able to restore all the files backuped for the Job (on version 1.37 and -later). As a consequence, it is generally a good idea to retain the Job -records much longer than the File records. - -The retention period is specified in seconds, but as a convenience, there -are a number of modifiers that permit easy specification in terms of -minutes, hours, days, weeks, months, quarters, or years. See the \ilink{ -Configuration chapter}{Time} of this manual for additional details of -modifier specification. - -The default Job Retention period is 180 days. + Der Aufbewahrungszeitraum f\"{u}r Jobs gibt die Zeitspanne an, die die +Job-Eintr\"{a}ge in der Katalog-Datenbank aufbewahrt werden. +Wenn {\bf AutoPrune} in der Client-Konfiguration auf {\bf yes} gesetzt ist, +wird Bacula die Katalog-Eintr\"{a}ge der Jobs l\"{o}schen, die \"{a}lter als +dieser Zeitraum sind. Beachten Sie, dass wenn ein Job-Eintrag ge\"{o}scht wird, +auch alle zu diesem Job geh\"{o}renden Datei- und JobMedia-Eintr\"{a}ge aus dem +Katalog gel\"{o}scht werden. Dies passiert unabh\"{a}ngig von der Aufbewahrungszeit f\"{u}r Dateien, +infolge dessen wird die Aufbewahrungszeit f\"{u}r Dateien normalerweise k\"{u}rzer sein als f\"{u}r Jobs. + +Wie oben erw\"{a}hnt, sind Sie nicht mehr in der Lage einzelne Dateien eines Jobs zur\"{u}ckzusichern, +wenn die Datei-Eintr\"{a}ge aus der Katalog-Datenbank entfernt wurden. Jedoch, solange der Job-Eintrag +im Katalog vorhanden ist, k\"{o}nnen Sie immer noch den kompletten Job mit allen Dateien wiederherstellen +(mit Bacula-Version 1.37 und gr\"{o}{\ss}er). Daher ist es eine gute Idee, die Job-Eintr\"{a}ge im Katalog +l\"{a}nger als die Datei-Eintr\"{a}ge aufzubewahren. + +Aufbewahrungszeitr\"{a}ume werden in Sekunden angegeben, aber der Einfachheit halber sind auch +eine Nummer von Hilfsangaben vorhanden, so dass man Minuten, Stunden, Tage, Wochen, +Monate, Quartale und Jahre konfigurieren kann. Lesen Sie bitte das \ilink{Konfigurations-Kapitel}{Time} +dieses Handbuchs um mehr \"{u}ber diese Hilfsangaben zu erfahren. + +Der Standardwert der Aufbewahrungszeit f\"{u}r Jobs ist 180 Tage. \item [AutoPrune = \lt{}yes/no\gt{}] \index[dir]{AutoPrune } - If AutoPrune is set to {\bf yes} (default), Bacula will automatically apply -the File retention period and the Job retention period for the Client at the -end of the Job. - -If you turn this off by setting it to {\bf no}, your Catalog will grow each -time you run a Job. + Wenn AutoPrune auf {\bf yes} (Standard) gesetzt ist, wird Bacula nach jedem Job +automatisch \"{u}berpr\"{u}fen, ob die Aufbewahrungszeit f\"{u}r bestimmte Dateien und/oder Jobs +des gerade gesicherten Clients abgelaufen ist und diese aus dem Katalog entfernen. +Falls Sie AutoPrune durch das Setzen auf {\bf no} ausschalten, wird Ihre Katalog-Datenbank mit jedem +gelaufenen Job immer gr\"{o}{\ss}er werden. 
\end{description} \label{CompactingMySQL} - -\subsection*{Compacting Your MySQL Database} -\index[general]{Database!Compacting Your MySQL } -\index[general]{Compacting Your MySQL Database } -\addcontentsline{toc}{subsection}{Compacting Your MySQL Database} - -Over time, as noted above, your database will tend to grow. I've noticed that -even though Bacula regularly prunes files, {\bf MySQL} does not effectively -use the space, and instead continues growing. To avoid this, from time to -time, you must compact your database. Normally, large commercial database such -as Oracle have commands that will compact a database to reclaim wasted file -space. MySQL has the {\bf OPTIMIZE TABLE} command that you can use, and SQLite +\section{Komprimieren Ihrer MySQL Datenbank} +\index[general]{Datenbank!Komprimieren Ihrer MySQL } +\index[general]{Komprimieren Ihrer MySQL Datenbank } + +Mit der Zeit, wie oben schon angemerkt, wird Ihre Datenbank dazu neigen zu wachsen. +Auch wenn Bacula regelm\"{a}{\ss}ig Datei-Eintr\"{a}ge l\"{o}scht, wird die {\bf MySQL}-Datenbank +st\"{a}ndig gr\"{o}{\ss}er werden. Um dies zu vermeiden, muss die Datenbank komprimiert werden. +Normalerweise kennen gro{\ss}e kommerzielle Datenbanken, wie Oracle, bestimmte Kommandos +um den verschwendeten Festplattenplatz wieder freizugeben. +MySQL has the {\bf OPTIMIZE TABLE} command that you can use, and SQLite version 2.8.4 and greater has the {\bf VACUUM} command. We leave it to you to explore the utility of the {\bf OPTIMIZE TABLE} command in MySQL. @@ -145,7 +138,7 @@ du bacula \normalsize I get {\bf 620,644} which means there are that many blocks containing 1024 -bytes each or approximately 635 MB of data. After doing the {\bf msqldump}, I +bytes each or approximately 635 MB of data. After doing the {\bf mysqldump}, I had a bacula.sql file that had {\bf 174,356} blocks, and after doing the {\bf mysql} command to recreate the database, I ended up with a total of {\bf 210,464} blocks rather than the original {\bf 629,644}. In other words, the @@ -153,13 +146,13 @@ compressed version of the database took approximately one third of the space of the database that had been in use for about a year. As a consequence, I suggest you monitor the size of your database and from -time to time (once every 6 months or year), compress it. -\label{RepairingMySQL} +time to time (once every six months or year), compress it. -\subsection*{Repairing Your MySQL Database} +\label{DatabaseRepair} +\label{RepairingMySQL} +\section{Repairing Your MySQL Database} \index[general]{Database!Repairing Your MySQL } \index[general]{Repairing Your MySQL Database } -\addcontentsline{toc}{subsection}{Repairing Your MySQL Database} If you find that you are getting errors writing to your MySQL database, or Bacula hangs each time it tries to access the database, you should consider @@ -175,23 +168,120 @@ If the errors you are getting are simply SQL warnings, then you might try running dbcheck before (or possibly after) using the MySQL database repair program. It can clean up many of the orphaned record problems, and certain other inconsistencies in the Bacula database. -\label{RepairingPSQL} -\subsection*{Repairing Your PostgreSQL Database} +A typical cause of MySQL database problems is if your partition fills. In +such a case, you will need to create additional space on the partition or +free up some space then repair the database probably using {\bf myisamchk}. +Recently my root partition filled and the MySQL database was corrupted. 
+Simply running {\bf myisamchk -r} did not fix the problem. However, +the following script did the trick for me: + +\footnotesize +\begin{verbatim} +#!/bin/sh +for i in *.MYD ; do + mv $i x${i} + t=`echo $i | cut -f 1 -d '.' -` + mysql bacula < bacula.sql +pg_dump -c bacula > bacula.sql cat bacula.sql | psql bacula rm -f bacula.sql \end{verbatim} @@ -344,10 +473,23 @@ fair amount of disk space. For example, you can {\bf cd} to the location of the Bacula database (typically /usr/local/pgsql/data or possible /var/lib/pgsql/data) and check the size. -\subsection*{Compacting Your SQLite Database} +There are certain PostgreSQL users who do not recommend the above +procedure. They have the following to say: +PostgreSQL does not +need to be dumped/restored to keep the database efficient. A normal +process of vacuuming will prevent the database from every getting too +large. If you want to fine-tweak the database storage, commands such +as VACUUM FULL, REINDEX, and CLUSTER exist specifically to keep you +from having to do a dump/restore. + +Finally, you might want to look at the PostgreSQL documentation on +this subject at +\elink{http://www.postgresql.org/docs/8.1/interactive/maintenance.html} +{http://www.postgresql.org/docs/8.1/interactive/maintenance.html}. + +\section{Compacting Your SQLite Database} \index[general]{Compacting Your SQLite Database } \index[general]{Database!Compacting Your SQLite } -\addcontentsline{toc}{subsection}{Compacting Your SQLite Database} First please read the previous section that explains why it is necessary to compress a database. SQLite version 2.8.4 and greater have the {\bf Vacuum} @@ -378,27 +520,23 @@ Director's configuration file. Note, in the case of SQLite, it is necessary to completely delete (rm) the old database before creating a new compressed version. -\subsection*{Migrating from SQLite to MySQL} +\section{Migrating from SQLite to MySQL} \index[general]{MySQL!Migrating from SQLite to } \index[general]{Migrating from SQLite to MySQL } -\addcontentsline{toc}{subsection}{Migrating from SQLite to MySQL} You may begin using Bacula with SQLite then later find that you want to switch to MySQL for any of a number of reasons: SQLite tends to use more disk than -MySQL, SQLite apparently does not handle database sizes greater than 2GBytes, -... Several users have done so by first producing an ASCII "dump" of the -SQLite database, then creating the MySQL tables with the {\bf -create\_mysql\_tables} script that comes with Bacula, and finally feeding the -SQLite dump into MySQL using the {\bf -f} command line option to continue past -the errors that are generated by the DDL statements that SQLite's dump -creates. Of course, you could edit the dump and remove the offending -statements. Otherwise, MySQL accepts the SQL produced by SQLite. +MySQL; when the database is corrupted it is often more catastrophic than +with MySQL or PostgreSQL. +Several users have succeeded in converting from SQLite to MySQL by +exporting the MySQL data and then processing it with Perl scripts +prior to putting it into MySQL. This is, however, not a simple +process. 
\label{BackingUpBacula} -\subsection*{Backing Up Your Bacula Database} +\section{Backing Up Your Bacula Database} \index[general]{Backing Up Your Bacula Database } \index[general]{Database!Backing Up Your Bacula } -\addcontentsline{toc}{subsection}{Backing Up Your Bacula Database} If ever the machine on which your Bacula database crashes, and you need to restore from backup tapes, one of your first priorities will probably be to @@ -426,7 +564,7 @@ The basic sequence of events to make this work correctly is as follows: \item You use {\bf RunBeforeJob} to create the ASCII backup file and {\bf RunAfterJob} to clean up - \end{itemize} +\end{itemize} Assuming that you start all your nightly backup jobs at 1:05 am (and that they run one after another), you can do the catalog backup with the following @@ -451,29 +589,31 @@ Job { # This schedule does the catalog. It starts after the WeeklyCycle Schedule { Name = "WeeklyCycleAfterBackup - Run = Full sun-sat at 1:10 + Run = Level=Full sun-sat at 1:10 } # This is the backup of the catalog FileSet { Name = "Catalog" - Include = signature=MD5 { - @working_directory@/bacula.sql + Include { + Options { + signature=MD5 + } + File = \lt{}working_directory\gt{}/bacula.sql } } \end{verbatim} \normalsize -Be sure to write a bootstrap file as in the above example. It is preferable +Be sure to write a bootstrap file as in the above example. However, it is preferable to write or copy the bootstrap file to another computer. It will allow you to quickly recover the database backup should that be necessary. If you do not have a bootstrap file, it is still possible to recover your database backup, but it will be more work and take longer. \label{BackingUPOtherDBs} -\subsection*{Backing Up Third Party Databases} +\section{Backing Up Third Party Databases} \index[general]{Backing Up Third Party Databases } \index[general]{Databases!Backing Up Third Party } -\addcontentsline{toc}{subsection}{Backing Up Third Party Databases} If you are running a database in production mode on your machine, Bacula will happily backup the files, but if the database is in use while Bacula is @@ -492,10 +632,9 @@ links to scripts that show you how to shutdown and backup most major databases. \label{Size} -\subsection*{Database Size} +\section{Database Size} \index[general]{Size!Database } \index[general]{Database Size } -\addcontentsline{toc}{subsection}{Database Size} As mentioned above, if you do not do automatic pruning, your Catalog will grow each time you run a Job. Normally, you should decide how long you want File @@ -516,7 +655,7 @@ database after a month can roughly be calculated as: \end{verbatim} \normalsize -where we have assumed 4 weeks in a month and 26 incremental backups per month. +where we have assumed four weeks in a month and 26 incremental backups per month. This would give the following: \footnotesize @@ -537,7 +676,7 @@ Below are some statistics for a MySQL database containing Job records for five Clients beginning September 2001 through May 2002 (8.5 months) and File records for the last 80 days. (Older File records have been pruned). For these systems, only the user files and system files that change are backed up. The -core part of the system is assumed to be easily reloaded from the RedHat rpms. +core part of the system is assumed to be easily reloaded from the Red Hat rpms. 
In the list below, the files (corresponding to Bacula Tables) with the diff --git a/docs/manual-de/copy.tex b/docs/manual-de/copy.tex deleted file mode 100644 index b2ffc9ec..00000000 --- a/docs/manual-de/copy.tex +++ /dev/null @@ -1,36 +0,0 @@ -\vspace*{6cm} -\begin{center} - \copyright\ Copyright 2000-2005 --\pubyear\ \company -\end{center} - -\begin{center} - All Rights Reserved -\end{center} - -This publication, or parts thereof, may not be reproduced in any form, by any -method, for any purpose. - -\company\ makes no warranty, either express or implied, including but not -limited to any implied warranties of merchantability and fitness for a -particular purpose, regarding these materials and makes such materials -available solely on an "as-is" basis. - -In no event shall \company\ be liable to anyone for special, collateral, -incidental, or consequential damages in connection with or arising out of -purchase or use of these materials. The sole and exclusive liability to -\company\ regardless of the form of action, shall not exceed the purchase -price of the materials described herein. - -For condition of use and permission to use these materials for publication in -other than the English language, contact \company - -\company\ reserves the right to revise and improve its products as it sees -fit. This publication describes the state of this product as of the time of -its publication, and may not reflect the product at all times in the future. - -This manual was prepared and published in \pubmonth of \pubyear, and is based -on Release \levelno of \prog. - -MS-DOS and Windows are a registered trademarks of Microsoft. -\clearpage -\tableofcontents diff --git a/docs/manual-de/dataencryption.tex b/docs/manual-de/dataencryption.tex new file mode 100644 index 00000000..34b050fe --- /dev/null +++ b/docs/manual-de/dataencryption.tex @@ -0,0 +1,195 @@ + +\chapter{Data Encryption} +\label{DataEncryption} +\index[general]{Data Encryption} +\index[general]{Encryption!Data} +\index[general]{Data Encryption} + +Bacula permits file data encryption and signing within the File Daemon (or +Client) prior to sending data to the Storage Daemon. Upon restoration, +file signatures are validated and any mismatches are reported. At no time +does the Director or the Storage Daemon have access to unencrypted file +contents. + + +It is very important to specify what this implementation does NOT +do: +\begin{itemize} +\item There is one important restore problem to be aware of, namely, it's + possible for the director to restore new keys or a Bacula configuration + file to the client, and thus force later backups to be made with a + compromised key and/or with no encryption at all. You can avoid this by + not not changing the location of the keys in your Bacula File daemon + configuration file, and not changing your File daemon keys. If you do + change either one, you must ensure that no restore is done that restores + the old configuration or the old keys. In general, the worst effect of + this will be that you can no longer connect the File daemon. + +\item The implementation does not encrypt file metadata such as file path + names, permissions, and ownership. Extended attributes are also currently + not encrypted. However, Mac OS X resource forks are encrypted. +\end{itemize} + +Encryption and signing are implemented using RSA private keys coupled with +self-signed x509 public certificates. This is also sometimes known as PKI +or Public Key Infrastructure. 
+ +Each File Daemon should be given its own unique private/public key pair. +In addition to this key pair, any number of "Master Keys" may be specified +-- these are key pairs that may be used to decrypt any backups should the +File Daemon key be lost. Only the Master Key's public certificate should +be made available to the File Daemon. Under no circumstances should the +Master Private Key be shared or stored on the Client machine. + +The Master Keys should be backed up to a secure location, such as a CD +placed in a in a fire-proof safe or bank safety deposit box. The Master +Keys should never be kept on the same machine as the Storage Daemon or +Director if you are worried about an unauthorized party compromising either +machine and accessing your encrypted backups. + +While less critical than the Master Keys, File Daemon Keys are also a prime +candidate for off-site backups; burn the key pair to a CD and send the CD +home with the owner of the machine. + +NOTE!!! If you lose your encryption keys, backups will be unrecoverable. +{\bf ALWAYS} store a copy of your master keys in a secure, off-site location. + +The basic algorithm used for each backup session (Job) is: +\begin{enumerate} +\item The File daemon generates a session key. +\item The FD encrypts that session key via PKE for all recipients (the file +daemon, any master keys). +\item The FD uses that session key to perform symmetric encryption on the data. +\end{enumerate} + + +\section{Building Bacula with Encryption Support} +\index[general]{Building Bacula with Encryption Support} + +The configuration option for enabling OpenSSL encryption support has not changed +since Bacula 1.38. To build Bacula with encryption support, you will need +the OpenSSL libraries and headers installed. When configuring Bacula, use: + +\begin{verbatim} + ./configure --with-openssl ... +\end{verbatim} + +\section{Encryption Technical Details} +\index[general]{Encryption Technical Details} + +The implementation uses 128bit AES-CBC, with RSA encrypted symmetric +session keys. The RSA key is user supplied. +If you are running OpenSSL 0.9.8 or later, the signed file hash uses +SHA-256 -- otherwise, SHA-1 is used. + +End-user configuration settings for the algorithms are not currently +exposed -- only the algorithms listed above are used. However, the +data written to Volume supports arbitrary symmetric, asymmetric, and +digest algorithms for future extensibility, and the back-end +implementation currently supports: + +\begin{verbatim} +Symmetric Encryption: + - 128, 192, and 256-bit AES-CBC + - Blowfish-CBC + +Asymmetric Encryption (used to encrypt symmetric session keys): + - RSA + +Digest Algorithms: + - MD5 + - SHA1 + - SHA256 + - SHA512 +\end{verbatim} + +The various algorithms are exposed via an entirely re-usable, +OpenSSL-agnostic API (ie, it is possible to drop in a new encryption +backend). The Volume format is DER-encoded ASN.1, modeled after the +Cryptographic Message Syntax from RFC 3852. Unfortunately, using CMS +directly was not possible, as at the time of coding a free software +streaming DER decoder/encoder was not available. + + +\section{Decrypting with a Master Key} +\index[general]{Decrypting with a Master Key} + +It is preferable to retain a secure, non-encrypted copy of the +client's own encryption keypair. However, should you lose the +client's keypair, recovery with the master keypair is possible. 
+ +You must: +\begin{itemize} +\item Concatenate the master private and public key into a single + keypair file, ie: + cat master.key master.cert >master.keypair + +\item 2) Set the PKI Keypair statement in your bacula configuration file: + +\begin{verbatim} + PKI Keypair = master.keypair +\end{verbatim} + +\item Start the restore. The master keypair will be used to decrypt + the file data. + +\end{itemize} + + +\section{Generating Private/Public Encryption Keys} +\index[general]{Generating Private/Public Encryption Keypairs} + +Generate a Master Key Pair with: + +\footnotesize +\begin{verbatim} + openssl genrsa -out master.key 2048 + openssl req -new -key master.key -x509 -out master.cert +\end{verbatim} +\normalsize + +Generate a File Daemon Key Pair for each FD: + +\footnotesize +\begin{verbatim} + openssl genrsa -out fd-example.key 2048 + openssl req -new -key fd-example.key -x509 -out fd-example.cert + cat fd-example.key fd-example.cert >fd-example.pem +\end{verbatim} +\normalsize + +Note, there seems to be a lot of confusion around the file extensions given +to these keys. For example, a .pem file can contain all the following: +private keys (RSA and DSA), public keys (RSA and DSA) and (x509) certificates. +It is the default format for OpenSSL. It stores data Base64 encoded DER format, +surrounded by ASCII headers, so is suitable for text mode transfers between +systems. A .pem file may contain any number of keys either public or +private. We use it in cases where there is both a public and a private +key. + +Typically, above we have used the .cert extension to refer to X509 +certificate encoding that contains only a single public key. + + +\section{Example Data Encryption Configuration} +\index[general]{Example!File Daemon Configuration File} +\index[general]{Example!Data Encryption Configuration File} +\index[general]{Example Data Encryption Configuration} + +{\bf bacula-fd.conf} +\footnotesize +\begin{verbatim} +FileDaemon { + Name = example-fd + FDport = 9102 # where we listen for the director + WorkingDirectory = /var/bacula/working + Pid Directory = /var/run + Maximum Concurrent Jobs = 20 + + PKI Signatures = Yes # Enable Data Signing + PKI Encryption = Yes # Enable Data Encryption + PKI Keypair = "/etc/bacula/fd-example.pem" # Public and Private Keys + PKI Master Key = "/etc/bacula/master.cert" # ONLY the Public Key +} +\end{verbatim} +\normalsize diff --git a/docs/manual-de/migration.tex b/docs/manual-de/migration.tex new file mode 100644 index 00000000..b0d49df2 --- /dev/null +++ b/docs/manual-de/migration.tex @@ -0,0 +1,445 @@ + +\chapter{Migration} +\label{MigrationChapter} +\index[general]{Migration} + +The term Migration, as used in the context of Bacula, means moving data from +one Volume to another. In particular it refers to a Job (similar to a backup +job) that reads data that was previously backed up to a Volume and writes +it to another Volume. As part of this process, the File catalog records +associated with the first backup job are purged. In other words, Migration +moves Bacula Job data from one Volume to another by reading the Job data +from the Volume it is stored on, writing it to a different Volume in a +different Pool, and then purging the database records for the first Job. 
+ +The section process for which Job or Jobs are migrated +can be based on quite a number of different criteria such as: +\begin{itemize} +\item a single previous Job +\item a Volume +\item a Client +\item a regular expression matching a Job, Volume, or Client name +\item the time a Job has been on a Volume +\item high and low water marks (usage or occupation) of a Pool +\item Volume size +\end{itemize} + +The details of these selection criteria will be defined below. + +To run a Migration job, you must first define a Job resource very similar +to a Backup Job but with {\bf Type = Migrate} instead of {\bf Type = +Backup}. One of the key points to remember is that the Pool that is +specified for the migration job is the only pool from which jobs will +be migrated, with one exception noted below. In addition, the Pool to +which the selected Job or Jobs will be migrated is defined by the {\bf +Next Pool = ...} in the Pool resource specified for the Migration Job. + +Bacula permits pools to contain Volumes with different Media Types. +However, when doing migration, this is a very undesirable condition. For +migration to work properly, you should use pools containing only Volumes of +the same Media Type for all migration jobs. + +The migration job normally is either manually started or starts +from a Schedule much like a backup job. It searches +for a previous backup Job or Jobs that match the parameters you have +specified in the migration Job resource, primarily a {\bf Selection Type} +(detailed a bit later). Then for +each previous backup JobId found, the Migration Job will run a new Job which +copies the old Job data from the previous Volume to a new Volume in +the Migration Pool. It is possible that no prior Jobs are found for +migration, in which case, the Migration job will simply terminate having +done nothing, but normally at a minimum, three jobs are involved during a +migration: + +\begin{itemize} +\item The currently running Migration control Job. This is only + a control job for starting the migration child jobs. +\item The previous Backup Job (already run). The File records + for this Job are purged if the Migration job successfully + terminates. The original data remains on the Volume until + it is recycled and rewritten. +\item A new Migration Backup Job that moves the data from the + previous Backup job to the new Volume. If you subsequently + do a restore, the data will be read from this Job. +\end{itemize} + +If the Migration control job finds a number of JobIds to migrate (e.g. +it is asked to migrate one or more Volumes), it will start one new +migration backup job for each JobId found on the specified Volumes. +Please note that Migration doesn't scale too well since Migrations are +done on a Job by Job basis. This if you select a very large volume or +a number of volumes for migration, you may have a large number of +Jobs that start. Because each job must read the same Volume, they will +run consecutively (not simultaneously). + +\section{Migration Job Resource Directives} + +The following directives can appear in a Director's Job resource, and they +are used to define a Migration job. + +\begin{description} +\item [Pool = \lt{}Pool-name\gt{}] The Pool specified in the Migration + control Job is not a new directive for the Job resource, but it is + particularly important because it determines what Pool will be examined for + finding JobIds to migrate. 
The exception to this is when {\bf Selection + Type = SQLQuery}, in which case no Pool is used, unless you + specifically include it in the SQL query. Note, the Pool resource + referenced must contain a {\bf Next Pool = ...} directive to define + the Pool to which the data will be migrated. + +\item [Type = Migrate] + {\bf Migrate} is a new type that defines the job that is run as being a + Migration Job. A Migration Job is a sort of control job and does not have + any Files associated with it, and in that sense they are more or less like + an Admin job. Migration jobs simply check to see if there is anything to + Migrate then possibly start and control new Backup jobs to migrate the data + from the specified Pool to another Pool. + +\item [Selection Type = \lt{}Selection-type-keyword\gt{}] + The \lt{}Selection-type-keyword\gt{} determines how the migration job + will go about selecting what JobIds to migrate. In most cases, it is + used in conjunction with a {\bf Selection Pattern} to give you fine + control over exactly what JobIds are selected. The possible values + for \lt{}Selection-type-keyword\gt{} are: + \begin{description} + \item [SmallestVolume] This selection keyword selects the volume with the + fewest bytes from the Pool to be migrated. The Pool to be migrated + is the Pool defined in the Migration Job resource. The migration + control job will then start and run one migration backup job for + each of the Jobs found on this Volume. The Selection Pattern, if + specified, is not used. + + \item [OldestVolume] This selection keyword selects the volume with the + oldest last write time in the Pool to be migrated. The Pool to be + migrated is the Pool defined in the Migration Job resource. The + migration control job will then start and run one migration backup + job for each of the Jobs found on this Volume. The Selection + Pattern, if specified, is not used. + + \item [Client] The Client selection type, first selects all the Clients + that have been backed up in the Pool specified by the Migration + Job resource, then it applies the {\bf Selection Pattern} (defined + below) as a regular expression to the list of Client names, giving + a filtered Client name list. All jobs that were backed up for those + filtered (regexed) Clients will be migrated. + The migration control job will then start and run one migration + backup job for each of the JobIds found for those filtered Clients. + + \item [Volume] The Volume selection type, first selects all the Volumes + that have been backed up in the Pool specified by the Migration + Job resource, then it applies the {\bf Selection Pattern} (defined + below) as a regular expression to the list of Volume names, giving + a filtered Volume list. All JobIds that were backed up for those + filtered (regexed) Volumes will be migrated. + The migration control job will then start and run one migration + backup job for each of the JobIds found on those filtered Volumes. + + \item [Job] The Job selection type, first selects all the Jobs (as + defined on the {\bf Name} directive in a Job resource) + that have been backed up in the Pool specified by the Migration + Job resource, then it applies the {\bf Selection Pattern} (defined + below) as a regular expression to the list of Job names, giving + a filtered Job name list. All JobIds that were run for those + filtered (regexed) Job names will be migrated. Note, for a given + Job named, they can be many jobs (JobIds) that ran. 
+ The migration control job will then start and run one migration + backup job for each of the Jobs found. + + \item [SQLQuery] The SQLQuery selection type, used the {\bf Selection + Pattern} as an SQL query to obtain the JobIds to be migrated. + The Selection Pattern must be a valid SELECT SQL statement for your + SQL engine, and it must return the JobId as the first field + of the SELECT. + + \item [PoolOccupancy] This selection type will cause the Migration job + to compute the total size of the specified pool for all Media Types + combined. If it exceeds the {\bf Migration High Bytes} defined in + the Pool, the Migration job will migrate all JobIds beginning with + the oldest Volume in the pool (determined by Last Write time) until + the Pool bytes drop below the {\bf Migration Low Bytes} defined in the + Pool. This calculation should be consider rather approximative because + it is made once by the Migration job before migration is begun, and + thus does not take into account additional data written into the Pool + during the migration. In addition, the calculation of the total Pool + byte size is based on the Volume bytes saved in the Volume (Media) +database + entries. The bytes calculate for Migration is based on the value stored + in the Job records of the Jobs to be migrated. These do not include the + Storage daemon overhead as is in the total Pool size. As a consequence, + normally, the migration will migrate more bytes than strictly necessary. + + \item [PoolTime] The PoolTime selection type will cause the Migration job to + look at the time each JobId has been in the Pool since the job ended. + All Jobs in the Pool longer than the time specified on {\bf Migration Time} + directive in the Pool resource will be migrated. + \end{description} + +\item [Selection Pattern = \lt{}Quoted-string\gt{}] + The Selection Patterns permitted for each Selection-type-keyword are + described above. + + For the OldestVolume and SmallestVolume, this + Selection pattern is not used (ignored). + + For the Client, Volume, and Job + keywords, this pattern must be a valid regular expression that will filter + the appropriate item names found in the Pool. + + For the SQLQuery keyword, this pattern must be a valid SELECT SQL statement + that returns JobIds. + +\end{description} + +\section{Migration Pool Resource Directives} + +The following directives can appear in a Director's Pool resource, and they +are used to define a Migration job. + +\begin{description} +\item [Migration Time = \lt{}time-specification\gt{}] + If a PoolTime migration is done, the time specified here in seconds (time + modifiers are permitted -- e.g. hours, ...) will be used. If the + previous Backup Job or Jobs selected have been in the Pool longer than + the specified PoolTime, then they will be migrated. + +\item [Migration High Bytes = \lt{}byte-specification\gt{}] + This directive specifies the number of bytes in the Pool which will + trigger a migration if a {\bf PoolOccupancy} migration selection + type has been specified. The fact that the Pool + usage goes above this level does not automatically trigger a migration + job. However, if a migration job runs and has the PoolOccupancy selection + type set, the Migration High Bytes will be applied. Bacula does not + currently restrict a pool to have only a single Media Type, so you + must keep in mind that if you mix Media Types in a Pool, the results + may not be what you want, as the Pool count of all bytes will be + for all Media Types combined. 
+
+\section{Important Migration Considerations}
+\index[general]{Important Migration Considerations}
+\begin{itemize}
+\item Each Pool into which you migrate Jobs or Volumes {\bf must}
+   contain Volumes of only one Media Type.
+
+\item Migration takes place on a JobId by JobId basis. That is,
+   each JobId is migrated in its entirety and independently
+   of other JobIds. Once the Job is migrated, it will be
+   on the new medium in the new Pool, but for the most part,
+   aside from having a new JobId, it will appear with all the
+   same characteristics as the original job (start, end time, ...).
+   The column RealEndTime in the catalog Job table will contain the
+   time and date that the Migration terminated, and by comparing
+   it with the EndTime column you can tell whether or not the
+   job was migrated. The original job is purged of its File
+   records, and its Type field is changed from "B" to "M" to
+   indicate that the job was migrated.
+
+\item Jobs on Volumes will be considered for Migration only if the Volume
+   is marked Full, Used, or Error. Volumes that are still
+   marked Append will not be considered for migration. This
+   prevents Bacula from attempting to read the Volume at
+   the same time it is writing it. It also reduces other deadlock
+   situations, and avoids the problem of migrating a Volume only
+   to find new files appended to that Volume later.
+
+\item As noted above, for the Migration High Bytes, the calculation
+   of the bytes to migrate is somewhat approximate.
+
+\item If you keep Volumes of different Media Types in the same Pool,
+   it is not clear how well migration will work. We recommend only
+   one Media Type per pool.
+
+\item It is possible to get into a resource deadlock where Bacula does
+   not find enough drives to simultaneously read and write all the
+   Volumes needed to do Migrations. For the moment, you must take
+   care, as not all the resource deadlock algorithms are implemented yet.
+
+\item Migration is done only when you run a Migration job. If you set a
+   Migration High Bytes and that number of bytes is exceeded in the Pool,
+   no migration job will automatically start. You must schedule the
+   migration jobs, and they must run for any migration to take place.
+
+\item If you migrate a number of Volumes, a very large number of Migration
+   jobs may start.
+
+\item Figuring out what jobs will actually be migrated can be a bit
+   complicated due to the flexibility provided by the regex patterns and
+   the number of different options. Turning on a debug level of 100 or
+   more will provide a limited amount of debug information about the
+   migration selection process (see the console sketch after this list).
+
+\item Bacula currently does only minimal Storage conflict resolution, so you
+   must take care to ensure that you don't try to read and write to the
+   same device, or Bacula may block waiting to reserve a drive that it
+   will never find. In general, ensure that all your migration
+   pools contain only one Media Type, and that you always
+   migrate to pools with different Media Types.
+
+\item The {\bf Next Pool = ...} directive must be defined in the Pool
+   referenced in the Migration Job to define the Pool into which the
+   data will be migrated.
+
+\item Pay particular attention to the fact that data is migrated on a Job
+   by Job basis, and for any particular Volume, only one Job can read
+   that Volume at a time (no simultaneous read), so migration jobs that
+   all reference the same Volume will run sequentially. This can be a
+   potential bottleneck and does not scale very well to large numbers
+   of jobs.
+
+\item Only migration with Selection Types of Job and Volume has
+   been carefully tested. All the other migration methods (time,
+   occupancy, smallest, oldest, ...) need additional testing.
+
+\item Migration is only implemented for a single Storage daemon. You
+   cannot read on one Storage daemon and write on another.
+\end{itemize}
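+
+As a sketch of the debugging hint above: from the Console you can raise
+the Director's debug level and then start the control job by hand. The
+job name {\bf migrate-volume} anticipates the example defined in the
+next section:
+
+\footnotesize
+\begin{verbatim}
*setdebug level=100 dir
*run job=migrate-volume yes
+\end{verbatim}
+\normalsize
+
+The first command makes the Director print the migration selection
+details to its debug output; the second starts the migration control
+job immediately, without waiting for its schedule.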
+
+\section{Example Migration Jobs}
+\index[general]{Example Migration Jobs}
+
+When you specify a Migration Job, you must specify all the standard
+directives as for a Job. However, certain directives, such as Level,
+Client, and FileSet, though they must be defined, are ignored by the
+Migration job because the values from the original job are used instead.
+
+As an example, suppose you have the following Job that
+you run every night. Note: there is no Storage directive in the
+Job resource; there is a Storage directive in each of the Pool
+resources; the Pool to be migrated (File) contains a Next Pool
+directive that defines the output Pool (where the data is written
+by the migration job).
+
+\footnotesize
+\begin{verbatim}
# Define the backup Job
Job {
  Name = "NightlySave"
  Type = Backup
  Level = Incremental                 # default
  Client=rufus-fd
  FileSet="Full Set"
  Schedule = "WeeklyCycle"
  Messages = Standard
  Pool = Default
}

# Default pool definition
Pool {
  Name = Default
  Pool Type = Backup
  AutoPrune = yes
  Recycle = yes
  Next Pool = Tape
  Storage = File
  LabelFormat = "File"
}

# Tape pool definition
Pool {
  Name = Tape
  Pool Type = Backup
  AutoPrune = yes
  Recycle = yes
  Storage = DLTDrive
}

# Definition of File storage device
Storage {
  Name = File
  Address = rufus
  Password = "ccV3lVTsQRsdIUGyab0N4sMDavui2hOBkmpBU0aQKOr9"
  Device = "File"          # same as Device in Storage daemon
  Media Type = File        # same as MediaType in Storage daemon
}

# Definition of DLT tape storage device
Storage {
  Name = DLTDrive
  Address = rufus
  Password = "ccV3lVTsQRsdIUGyab0N4sMDavui2hOBkmpBU0aQKOr9"
  Device = "HP DLT 80"      # same as Device in Storage daemon
  Media Type = DLT8000      # same as MediaType in Storage daemon
}
+\end{verbatim}
+\normalsize
+
+Here we have included only the essential information -- i.e. the
+Director, FileSet, Catalog, Client, Schedule, and Messages resources
+are omitted.
+
+As you can see, by running the NightlySave Job, the data will be backed up
+to File storage using the Default pool to specify the Storage as File.
+
+Now, we add the following Job resource to this conf file:
+
+\footnotesize
+\begin{verbatim}
Job {
  Name = "migrate-volume"
  Type = Migrate
  Level = Full
  Client = rufus-fd
  FileSet = "Full Set"
  Messages = Standard
  Pool = Default
  Maximum Concurrent Jobs = 4
  Selection Type = Volume
  Selection Pattern = "File"
}
+\end{verbatim}
+\normalsize
+
+If we then run the job named {\bf migrate-volume}, all volumes in the Pool
+named Default (as specified in the migrate-volume Job) that match the
+regular expression pattern {\bf File} will be migrated to the tape storage
+DLTDrive, because the {\bf Next Pool} in the Default Pool specifies that
+Migrations should go to the pool named {\bf Tape}, which uses
+Storage {\bf DLTDrive}.
+
+If instead, we use a Job resource as follows:
+
+\footnotesize
+\begin{verbatim}
Job {
  Name = "migrate"
  Type = Migrate
  Level = Full
  Client = rufus-fd
  FileSet="Full Set"
  Messages = Standard
  Pool = Default
  Maximum Concurrent Jobs = 4
  Selection Type = Job
  Selection Pattern = ".*Save"
}
+\end{verbatim}
+\normalsize
+
+then all jobs whose names end with Save will be migrated from the Default
+(File) Pool to the Tape Pool, or, in other words, from File storage to
+Tape storage.
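+
+A hypothetical console session tying these two examples together might
+look like this (output omitted):
+
+\footnotesize
+\begin{verbatim}
*run job=NightlySave yes
*run job=migrate yes
+\end{verbatim}
+\normalsize
+
+The first command writes a backup into the File storage through the
+Default Pool; the second runs the migration control job, which in turn
+starts one migration job for each JobId it selects.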
diff --git a/docs/manual-de/supported.tex b/docs/manual-de/supported.tex
deleted file mode 100644
index 5550283f..00000000
--- a/docs/manual-de/supported.tex
+++ /dev/null
@@ -1,216 +0,0 @@
-%%
-%%
-
-\section*{Supported Systems and Hardware}
-\label{_ChapterStart}
-\index[general]{Supported Systems and Hardware }
-\index[general]{Hardware!Supported Systems and }
-\addcontentsline{toc}{section}{Supported Systems and Hardware}
-
-\label{SysReqs}
-
-\subsection*{System Requirements}
-\index[general]{System Requirements }
-\index[general]{Requirements!System }
-\addcontentsline{toc}{subsection}{System Requirements}
-
-\begin{itemize}
-\item {\bf Bacula} has been compiled and run on Linux RedHat, FreeBSD, and
-Solaris systems.
-\item It requires GNU C++ version 2.95 or higher to compile. You can try with
-other compilers and older versions, but you are on your own. We have
-successfully compiled and used Bacula on RH8.0/RH9/RHEL 3.0/FC3 with GCC 3.4.
-Note, in general GNU C++ is a separate package (e.g. RPM) from GNU C, so you
-need them both loaded. On RedHat systems, the C++ compiler is part of the
-{\bf gcc-c++} rpm package.
-\item There are certain third party packages that Bacula needs. Except for
-MySQL and PostgreSQL, they can all be found in the {\bf depkgs} and {\bf
-depkgs1} releases.
-\item If you want to build the Win32 binaries, you will need a Microsoft
-Visual C++ compiler (or Visual Studio). Although all components build
-(console has some warnings), only the File daemon has been tested.
-\item {\bf Bacula} requires a good implementation of pthreads to work. This
-is not the case on some of the BSD systems.
-\item The source code has been written with portability in mind and is mostly
-POSIX compatible. Thus porting to any POSIX compatible operating system
-should be relatively easy.
-\item The GNOME Console program is developed and tested under GNOME 2.x. It
-also runs under GNOME 1.4 but this version is deprecated and thus no longer
-maintained.
-\item The wxWidgets Console program is developed and tested with the latest
-stable ANSI (not Unicode) version of
-\elink{wxWidgets}{http://www.wxwidgets.org/} (2.6.0). It works fine with the
-Windows and GTK+-2.x version of wxWidgets, and should also work on other
-platforms supported by wxWidgets.
-\item The Tray Monitor program is developed for GTK+-2.x. It needs Gnome
-version 2.2 or lower, KDE version 3.1 or higher, or any window manager
-supporting the
-\elink{ FreeDesktop system tray
-standard}{http://www.freedesktop.org/Standards/systemtray-spec}.
-\item If you want to enable command line editing and history, you will need
-to have /usr/include/termcap.h and either the termcap or the ncurses library
-loaded (libtermcap-devel or ncurses-devel).
-\item If you want to use DVD as a backup medium, you will need to download
-and install the
-\elink{dvd+rw-tools}{http://fy.chalmers.se/~appro/linux/DVD+RW/}.
-\end{itemize}
-
-\subsection*{Supported Operating Systems}
-\label{SupportedOSes}
-\index[general]{Systems!Supported Operating }
-\index[general]{Supported Operating Systems }
-\addcontentsline{toc}{subsection}{Supported Operating Systems}
-
-\begin{itemize}
-\item Linux systems (built and tested on RedHat Fedora Core 3).
-\item If you have a recent Red Hat Linux system running the 2.4.x kernel and
-you have the directory {\bf /lib/tls} installed on your system (normally by
-default), Bacula will {\bf NOT} run. This is the new pthreads library and it
-is defective. You must remove this directory prior to running Bacula, or you
-can simply change the name to {\bf /lib/tls-broken}; then you must reboot
-your machine (one of the few times Linux must be rebooted). If you are not
-able to remove/rename /lib/tls, an alternative is to set the environment
-variable ``LD\_ASSUME\_KERNEL=2.4.19'' prior to executing Bacula. For this
-option, you do not need to reboot, and all programs other than Bacula will
-continue to use /lib/tls.
-
-The feedback that we have for 2.6 kernels is that the same problem may
-exist. However, we have not been able to reproduce the above-mentioned
-problem (bizarre hangs) on 2.6 kernels. If you do experience problems, we
-recommend using the environment variable override
-(LD\_ASSUME\_KERNEL=2.4.19) rather than removing /lib/tls, because TLS
-is designed to work with 2.6 kernels.
-
-\item Most flavors of Linux (Gentoo, SuSE, Mandrake, Debian, ...).
-\item Solaris, various versions.
-\item FreeBSD (tape driver supported in 1.30 -- please see some {\bf
-important} considerations in the
-\ilink{ Tape Modes on FreeBSD}{tapetesting.tex#FreeBSDTapes} section of the
-Tape Testing chapter of this manual.)
-\item Windows (Win98/Me, WinNT/2K/XP) Client (File daemon) binaries.
-\item MacOS X/Darwin (see
-\elink{ http://fink.sourceforge.net/}{http://fink.sourceforge.net/} for
-obtaining the packages)
-\item OpenBSD Client (File daemon).
-\item Irix Client (File daemon).
-\item Tru64
-\item Bacula is said to work on other systems (AIX, BSDI, HPUX, ...) but we
-do not have first-hand knowledge of these systems.
-\item See the Porting chapter of the Bacula Developer's Guide for information
-on porting to other systems.
-\end{itemize}
-
-\subsection*{Supported Tape Drives}
-\label{SupportedDrives}
-\index[general]{Drives!Supported Tape }
-\index[general]{Supported Tape Drives }
-\addcontentsline{toc}{subsection}{Supported Tape Drives}
-
-Even if your drive is on the list below, please check the
-\ilink{Tape Testing Chapter}{tapetesting.tex#btape} of this manual for
-procedures that you can use to verify if your tape drive will work with
-Bacula. If your drive is in fixed block mode, it may appear to work with
-Bacula until you attempt to do a restore and Bacula wants to position the
-tape. You can be sure only by following the procedures suggested above and
-testing.
-
-It is very difficult to supply a list of supported tape drives, or drives
-that are known to work with Bacula, because of limited feedback (so if you
-use Bacula on a different drive, please let us know). Based on user feedback,
-the following drives are known to work with Bacula. A dash in a column means
-unknown:
-
-\addcontentsline{lot}{table}{Supported Tape Drives}
-\begin{longtable}{|p{1.2in}|l|l|p{1.3in}|l|}
- \hline
-\multicolumn{1}{|c| }{\bf OS } & \multicolumn{1}{c| }{\bf Man. } &
-\multicolumn{1}{c| }{\bf Media } & \multicolumn{1}{c| }{\bf Model } &
-\multicolumn{1}{c| }{\bf Capacity } \\
- \hline
-{- } & {ADIC } & {DLT } & {Adic Scalar 100 DLT } & {100GB } \\
- \hline
-{- } & {ADIC } & {DLT } & {Adic Fastor 22 DLT } & {- } \\
- \hline
-{- } & {- } & {DDS } & {Compaq DDS 2,3,4 } & {- } \\
- \hline
-{- } & {Exabyte } & {- } & {Exabyte drives less than 10 years old } & {- }
-\\
- \hline
-{- } & {Exabyte } & {- } & {Exabyte VXA drives } & {- } \\
- \hline
-{- } & {HP } & {Travan 4 } & {Colorado T4000S } & {- } \\
- \hline
-{- } & {HP } & {DLT } & {HP DLT drives } & {- } \\
- \hline
-{- } & {HP } & {LTO } & {HP LTO Ultrium drives } & {- } \\
- \hline
-{FreeBSD 4.10 RELEASE } & {HP } & {DAT } & {HP StorageWorks DAT72i } & {- }
-\\
- \hline
-{- } & {Overland } & {LTO } & {LoaderXpress LTO } & {- } \\
- \hline
-{- } & {Overland } & {- } & {Neo2000 } & {- } \\
- \hline
-{- } & {OnStream } & {- } & {OnStream drives (see below) } & {- } \\
- \hline
-{- } & {Quantum } & {DLT } & {DLT-8000 } & {40/80GB } \\
- \hline
-{Linux } & {Seagate } & {DDS-4 } & {Scorpio 40 } & {20/40GB } \\
- \hline
-{FreeBSD 4.9 STABLE } & {Seagate } & {DDS-4 } & {STA2401LW } & {20/40GB } \\
- \hline
-{FreeBSD 5.2.1 pthreads patched RELEASE } & {Seagate } & {AIT-1 } & {STA1701W
-} & {35/70GB } \\
- \hline
-{Linux } & {Sony } & {DDS-2,3,4 } & {- } & {4-40GB } \\
- \hline
-{Linux } & {Tandberg } & {- } & {Tandberg MLR3 } & {- } \\
- \hline
-{FreeBSD } & {Tandberg } & {- } & {Tandberg SLR6 } & {- } \\
- \hline
-{Solaris } & {Tandberg } & {- } & {Tandberg SLR75 } & {- }
-\\ \hline
-
-\end{longtable}
-
-There is a list of
-\ilink{supported autochanger}{autochangers.tex#Models} models in the
-\ilink{autochangers chapter}{autochangers.tex#_ChapterStart} of this document,
-where you will find other tape drives that work with Bacula.
-
-\subsection*{Unsupported Tape Drives}
-\label{UnSupportedDrives}
-\index[general]{Unsupported Tape Drives }
-\index[general]{Drives!Unsupported Tape }
-\addcontentsline{toc}{subsection}{Unsupported Tape Drives}
-
-Previously OnStream IDE-SCSI tape drives did not work with Bacula. As of
-Bacula version 1.33 and the osst kernel driver version 0.9.14 or later, they
-now work. Please see the testing chapter as you must set a fixed block size.
-
-QIC tapes are known to have a number of particularities (fixed block size,
-and one EOF rather than two to terminate the tape). As a consequence, you
-will need to take a lot of care in configuring them to make them work
-correctly with Bacula.
-
-\subsection*{FreeBSD Users Be Aware!!!}
-\index[general]{FreeBSD Users Be Aware }
-\index[general]{Aware!FreeBSD Users Be }
-\addcontentsline{toc}{subsection}{FreeBSD Users Be Aware!!!}
-
-On most FreeBSD systems, unless you have patched the pthreads library, you
-will lose data when Bacula spans tapes. This is because the unpatched
-pthreads library fails to return a warning status to Bacula that the end of
-the tape is near. Please see the
-\ilink{Tape Testing Chapter}{tapetesting.tex#FreeBSDTapes} of this manual for
-{\bf important} information on how to configure your tape drive for
-compatibility with Bacula.
-
-\subsection*{Supported Autochangers}
-\index[general]{Autochangers!Supported }
-\index[general]{Supported Autochangers }
-\addcontentsline{toc}{subsection}{Supported Autochangers}
-
-For information on supported autochangers, please see the
-\ilink{Autochangers Known to Work with Bacula}{autochangers.tex#Models}
-section of the Autochangers chapter of this manual.
diff --git a/docs/manual-de/update_version b/docs/manual-de/update_version
index 687c0988..5c2e0092 100755
--- a/docs/manual-de/update_version
+++ b/docs/manual-de/update_version
@@ -3,8 +3,8 @@
 # Script file to update the Bacula version
 #
 out=/tmp/$$
-VERSION=`sed -n -e 's/^.*VERSION.*"\(.*\)"$/\1/p' /home/kern/bacula/Branch-2.2/bacula/src/version.h`
-DATE=`sed -n -e 's/^.*[ \t]*BDATE.*"\(.*\)"$/\1/p' /home/kern/bacula/Branch-2.2/bacula/src/version.h`
+VERSION=`sed -n -e 's/^.*VERSION.*"\(.*\)"$/\1/p' /home/kern/bacula/k/src/version.h`
+DATE=`sed -n -e 's/^.*[ \t]*BDATE.*"\(.*\)"$/\1/p' /home/kern/bacula/k/src/version.h`
 . ./do_echo
 sed -f ${out} version.tex.in >version.tex
 rm -f ${out}
diff --git a/docs/manual-de/version.tex b/docs/manual-de/version.tex
index 3831e114..8c768b7b 100644
--- a/docs/manual-de/version.tex
+++ b/docs/manual-de/version.tex
@@ -1 +1 @@
-2.2.2 (06 September 2007)
+2.2.1 (30 August 2007)