From df73bc2cc2980d869a78812f1eda4a5ada697a37 Mon Sep 17 00:00:00 2001
From: Kern Sibbald
Date: Thu, 3 Sep 2009 18:09:35 +0000
Subject: [PATCH] Fix NewFeatures typos as reported on the list

---
 docs/manuals/en/concepts/newfeatures.tex | 31 ++++++++++++------------
 docs/manuals/en/problems/tapetesting.tex |  8 +++---
 2 files changed, 21 insertions(+), 18 deletions(-)

diff --git a/docs/manuals/en/concepts/newfeatures.tex b/docs/manuals/en/concepts/newfeatures.tex
index 05b29543..a030bd87 100644
--- a/docs/manuals/en/concepts/newfeatures.tex
+++ b/docs/manuals/en/concepts/newfeatures.tex
@@ -353,21 +353,22 @@ Building directory tree for JobId(s) 19,2 ...
 +++++++++++++++++++++++++++++++++
 The Copy Job runs without using the File daemon by copying the data from the
 old backup Volume to a different Volume in a different Pool. See the Migration
 documentation for additional details. For copy Jobs there is a new selection
-criterium named PoolUncopiedJobs which copies all jobs from a pool to an other
-pool which were not copied before. Next to that the client, volume, job or sql
-query are possible ways of selecting jobs which should be copied. Selection
-types like smallestvolume, oldestvolume, pooloccupancy and pooltime are
-probably more suited for migration jobs only. But we could imagine some people
-have a valid use for those kind of copy jobs too.
-
-If bacula founds a copy when a job record is purged (deleted) from the catalog,
-it will promote the copy as \textsl{real} backup and will make it available for
-automatic restore. If more than one copy is available, it will promote the copy
-with the smallest jobid.
-
-A nice solution which can be build with the new copy jobs is what is
-called the disk-to-disk-to-tape backup (DTDTT). A sample config could
-look somethings like the one below:
+directive named {\bf PoolUncopiedJobs} which selects all Jobs that were
+not already copied to another Pool.
+
+As with Migration, the Client, Volume, Job, or SQL query are
+other possible ways of selecting the Jobs to be copied. Selection
+types like SmallestVolume, OldestVolume, PoolOccupancy and PoolTime also
+work, but are probably better suited for Migration Jobs.
+
+If Bacula finds a Copy of a Job record that is purged (deleted) from the
+catalog, it will promote the Copy to a \textsl{real} backup Job and will
+make it available for automatic restore. If more than one Copy is
+available, it will promote the Copy with the smallest JobId.
+
+A nice solution that can be built with the new Copy feature is often
+called disk-to-disk-to-tape backup (DTDTT). A sample config could
+look something like the one below:
 \begin{verbatim}
 Pool {

diff --git a/docs/manuals/en/problems/tapetesting.tex b/docs/manuals/en/problems/tapetesting.tex
index 4596d17f..8b1bdee6 100644
--- a/docs/manuals/en/problems/tapetesting.tex
+++ b/docs/manuals/en/problems/tapetesting.tex
@@ -1257,9 +1257,11 @@ certain tape modes and MTEOM.
 \section{Tape Performance Problems}
 \index[general]{Tape Performance}
 If you have LTO-3 or LTO-4 drives, you should be able to
-fairly good transfer rates, from 60 to 90 MB/second, providing
-you have fast disks, GigaBit Ethernet connections, and possibly set
-up your tape buffer size a bit from the default 64K.
+get fairly good transfer rates, from 60 to 150 MB/second, provided
+you have fast disks; Gigabit Ethernet connections (probably 2); you are
+running multiple simultaneous jobs; you have Bacula data spooling
+enabled; your tape block size is set to 131072 or 262144; and
+you have set {\bf Maximum File Size = 5G}.
 
 If you are not getting good performance, consider some of the
 following suggestions from Allen Black on the Bacula Users email list:
-- 
2.39.5
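For context on the new wording this patch introduces, a Copy Job using the {\bf PoolUncopiedJobs} selection type might be configured roughly as below. This is a hedged sketch, not text from the patch: the resource and Pool names (CopyDiskToTape, FullsFile, FullsTape) are invented for illustration.

```
# Hypothetical bacula-dir.conf fragment illustrating PoolUncopiedJobs.
# All names here are placeholders, not taken from the patch.
Job {
  Name = "CopyDiskToTape"
  Type = Copy
  Messages = Standard
  # Select every Job in the source Pool not yet copied elsewhere:
  Selection Type = PoolUncopiedJobs
  # Jobs are read from this Pool...
  Pool = FullsFile
}

Pool {
  Name = FullsFile
  Pool Type = Backup
  # ...and copied to the Pool named by "Next Pool":
  Next Pool = FullsTape
  Storage = File
}
```

In this sketch the destination of the Copy is taken from the source Pool's {\bf Next Pool} directive, as in the DTDTT arrangement the patch describes.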
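The tape-performance advice added in the second hunk (block size of 131072 or 262144, {\bf Maximum File Size = 5G}) corresponds to Storage daemon Device settings. A minimal sketch, assuming an LTO drive on a placeholder device path:

```
# Hypothetical bacula-sd.conf Device fragment reflecting the patch's
# tuning advice; Name, Media Type, and Archive Device are placeholders.
Device {
  Name = LTO-4
  Media Type = LTO-4
  Archive Device = /dev/nst0
  # Larger tape block size, per the patch (131072 or 262144):
  Maximum Block Size = 262144
  # Write an EOF mark every 5 GB so restores can seek quickly:
  Maximum File Size = 5G
}
```

Data spooling, also mentioned in the hunk, is enabled separately in the Director's Job resource (Spool Data = yes).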