\chapter{Migration and Copy}
\label{MigrationChapter}
\index[general]{Migration}
The term Migration, as used in the context of Bacula, means moving data from
one Volume to another. In particular it refers to a Job (similar to a backup
job) that reads data that was previously backed up to a Volume and writes
it to another Volume. As part of this process, the File catalog records
associated with the first backup job are purged. In other words, Migration
moves Bacula Job data from one Volume to another by reading the Job data
from the Volume it is stored on, writing it to a different Volume in a
different Pool, and then purging the database records for the first Job.
The Copy process is essentially identical to the Migration feature with the
exception that the Job that is copied is left unchanged. This essentially
creates two identical copies of the same backup. However, the copy is treated
as a copy rather than a backup job, and hence is not directly available for
restore. If Bacula finds a copy when a job record is purged (deleted) from the
catalog, it will promote the copy to a \textsl{real} backup and will make it
available for automatic restore.
The Copy and the Migration jobs run without using the File daemon by copying
the data from the old backup Volume to a different Volume in a different Pool.
The selection process for which Job or Jobs are migrated
can be based on quite a number of different criteria such as:

\begin{itemize}
\item a single previous Job
\item a regular expression matching a Job, Volume, or Client name
\item the time a Job has been on a Volume
\item high and low water marks (usage or occupation) of a Pool
\end{itemize}

The details of these selection criteria will be defined below.
To run a Migration job, you must first define a Job resource very similar
to a Backup Job but with {\bf Type = Migrate} instead of {\bf Type =
Backup}. One of the key points to remember is that the Pool that is
specified for the migration job is the only pool from which jobs will
be migrated, with one exception noted below. In addition, the Pool to
which the selected Job or Jobs will be migrated is defined by the {\bf
Next Pool = ...} directive in the Pool resource specified for the Migration Job.

Bacula permits pools to contain Volumes with different Media Types.
However, when doing migration, this is a very undesirable condition. For
migration to work properly, you should use pools containing only Volumes of
the same Media Type for all migration jobs.
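A minimal sketch of a source Pool meeting both requirements might look like
the following (the Pool and Storage names here are illustrative, not taken
from a real configuration):

\begin{verbatim}
# Hypothetical source Pool: one Media Type, with Next Pool defined
Pool {
  Name = FilePool
  Pool Type = Backup
  Storage = FileStorage
  Media Type = File        # only one Media Type in this Pool
  Next Pool = TapePool     # destination Pool for migrated data
}
\end{verbatim}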
The migration job normally is either manually started or starts
from a Schedule much like a backup job. It searches
for a previous backup Job or Jobs that match the parameters you have
specified in the migration Job resource, primarily a {\bf Selection Type}
(detailed a bit later). Then for
each previous backup JobId found, the Migration Job will run a new Job which
copies the old Job data from the previous Volume to a new Volume in
the Migration Pool. It is possible that no prior Jobs are found for
migration, in which case, the Migration job will simply terminate having
done nothing, but normally at a minimum, three jobs are involved during a
migration:

\begin{enumerate}
\item The currently running Migration control Job. This is only
   a control job for starting the migration child jobs.
\item The previous Backup Job (already run). The File records
   for this Job are purged if the Migration job successfully
   terminates. The original data remains on the Volume until
   it is recycled and rewritten.
\item A new Migration Backup Job that moves the data from the
   previous Backup job to the new Volume. If you subsequently
   do a restore, the data will be read from this Job.
\end{enumerate}
If the Migration control job finds a number of JobIds to migrate (e.g.
it is asked to migrate one or more Volumes), it will start one new
migration backup job for each JobId found on the specified Volumes.
Please note that Migration doesn't scale too well since Migrations are
done on a Job by Job basis. Thus if you select a very large volume or
a number of volumes for migration, you may have a large number of
Jobs that start. Because each job must read the same Volume, they will
run consecutively (not simultaneously).
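Assuming a Migration Job resource named {\bf migrate-volume} (a hypothetical
name) has been defined, such a job can be started manually from the Console:

\begin{verbatim}
*run job=migrate-volume yes
\end{verbatim}

The control job then spawns one migration backup job per JobId selected, as
described above.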
\section{Migration and Copy Job Resource Directives}

The following directives can appear in a Director's Job resource, and they
are used to define a Migration job.

\begin{description}
\item [Pool = \lt{}Pool-name\gt{}] The Pool specified in the Migration
   control Job is not a new directive for the Job resource, but it is
   particularly important because it determines what Pool will be examined for
   finding JobIds to migrate. The exception to this is when {\bf Selection
   Type = SQLQuery}, in which case no Pool is used, unless you
   specifically include it in the SQL query. Note, the Pool resource
   referenced must contain a {\bf Next Pool = ...} directive to define
   the Pool to which the data will be migrated.

\item [Type = Migrate]
   {\bf Migrate} is a new type that defines the job that is run as being a
   Migration Job. A Migration Job is a sort of control job and does not have
   any Files associated with it, and in that sense it is more or less like
   an Admin job. Migration jobs simply check to see if there is anything to
   Migrate, then possibly start and control new Backup jobs to migrate the data
   from the specified Pool to another Pool.

\item [Type = Copy]
   {\bf Copy} is a new type that defines the job that is run as being a
   Copy Job. A Copy Job is a sort of control job and does not have
   any Files associated with it, and in that sense it is more or less like
   an Admin job. Copy jobs simply check to see if there is anything to
   Copy, then possibly start and control new Backup jobs to copy the data
   from the specified Pool to another Pool.

\item [Selection Type = \lt{}Selection-type-keyword\gt{}]
   The \lt{}Selection-type-keyword\gt{} determines how the migration job
   will go about selecting what JobIds to migrate. In most cases, it is
   used in conjunction with a {\bf Selection Pattern} to give you fine
   control over exactly what JobIds are selected. The possible values
   for \lt{}Selection-type-keyword\gt{} are:

   \begin{description}
   \item [SmallestVolume] This selection keyword selects the volume with the
      fewest bytes from the Pool to be migrated. The Pool to be migrated
      is the Pool defined in the Migration Job resource. The migration
      control job will then start and run one migration backup job for
      each of the Jobs found on this Volume. The Selection Pattern, if
      specified, is not used.

   \item [OldestVolume] This selection keyword selects the volume with the
      oldest last write time in the Pool to be migrated. The Pool to be
      migrated is the Pool defined in the Migration Job resource. The
      migration control job will then start and run one migration backup
      job for each of the Jobs found on this Volume. The Selection
      Pattern, if specified, is not used.

   \item [Client] The Client selection type first selects all the Clients
      that have been backed up in the Pool specified by the Migration
      Job resource, then applies the {\bf Selection Pattern} (defined
      below) as a regular expression to the list of Client names, giving
      a filtered Client name list. All jobs that were backed up for those
      filtered (regexed) Clients will be migrated.
      The migration control job will then start and run one migration
      backup job for each of the JobIds found for those filtered Clients.

   \item [Volume] The Volume selection type first selects all the Volumes
      that have been backed up in the Pool specified by the Migration
      Job resource, then applies the {\bf Selection Pattern} (defined
      below) as a regular expression to the list of Volume names, giving
      a filtered Volume list. All JobIds that were backed up on those
      filtered (regexed) Volumes will be migrated.
      The migration control job will then start and run one migration
      backup job for each of the JobIds found on those filtered Volumes.

   \item [Job] The Job selection type first selects all the Jobs (as
      defined on the {\bf Name} directive in a Job resource)
      that have been backed up in the Pool specified by the Migration
      Job resource, then applies the {\bf Selection Pattern} (defined
      below) as a regular expression to the list of Job names, giving
      a filtered Job name list. All JobIds that were run for those
      filtered (regexed) Job names will be migrated. Note, for a given
      Job name, there can be many jobs (JobIds) that ran.
      The migration control job will then start and run one migration
      backup job for each of the Jobs found.

   \item [SQLQuery] The SQLQuery selection type uses the {\bf Selection
      Pattern} as an SQL query to obtain the JobIds to be migrated.
      The Selection Pattern must be a valid SELECT SQL statement for your
      SQL engine, and it must return the JobId as the first field
      of the SELECT.

   \item [PoolOccupancy] This selection type will cause the Migration job
      to compute the total size of the specified pool for all Media Types
      combined. If it exceeds the {\bf Migration High Bytes} defined in
      the Pool, the Migration job will migrate all JobIds beginning with
      the oldest Volume in the pool (determined by Last Write time) until
      the Pool bytes drop below the {\bf Migration Low Bytes} defined in the
      Pool. This calculation should be considered rather approximate because
      it is made once by the Migration job before migration is begun, and
      thus does not take into account additional data written into the Pool
      during the migration. In addition, the calculation of the total Pool
      byte size is based on the Volume bytes saved in the Volume (Media)
      entries. The number of bytes calculated for Migration is based on the
      value stored in the Job records of the Jobs to be migrated. These do
      not include the Storage daemon overhead that is in the total Pool
      size. As a consequence, normally, the migration will migrate more
      bytes than strictly necessary.

   \item [PoolTime] The PoolTime selection type will cause the Migration job to
      look at the time each JobId has been in the Pool since the job ended.
      All Jobs in the Pool longer than the time specified on the {\bf Migration
      Time} directive in the Pool resource will be migrated.

   \item [PoolUncopiedJobs] This selection type, which copies all Jobs in a
      Pool to another Pool that were not previously copied, is available
      only for Copy Jobs.
   \end{description}

\item [Selection Pattern = \lt{}Quoted-string\gt{}]
   The Selection Patterns permitted for each Selection-type-keyword are
   as follows:

   For the OldestVolume and SmallestVolume keywords, this
   Selection Pattern is not used (ignored).

   For the Client, Volume, and Job
   keywords, this pattern must be a valid regular expression that will filter
   the appropriate item names found in the Pool.

   For the SQLQuery keyword, this pattern must be a valid SELECT SQL statement
   that returns JobIds.
\end{description}
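As a sketch of the SQLQuery selection type, assuming the standard Bacula
catalog schema (the Job.JobId, Job.PoolId, Job.Type and Pool.Name columns)
and an illustrative Pool named Default, such a query might look like the
following (the string is shown on multiple lines only for readability):

\begin{verbatim}
Selection Type = SQLQuery
Selection Pattern = "SELECT DISTINCT Job.JobId FROM Job,Pool
  WHERE Pool.Name='Default' AND Job.PoolId=Pool.PoolId
  AND Job.Type='B' ORDER BY Job.JobId"
\end{verbatim}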
\section{Migration Pool Resource Directives}

The following directives can appear in a Director's Pool resource, and they
are used to define a Migration job.

\begin{description}
\item [Migration Time = \lt{}time-specification\gt{}]
   If a PoolTime migration is done, the time specified here in seconds (time
   modifiers are permitted -- e.g. hours, ...) will be used. If the
   previous Backup Job or Jobs selected have been in the Pool longer than
   the specified PoolTime, then they will be migrated.

\item [Migration High Bytes = \lt{}byte-specification\gt{}]
   This directive specifies the number of bytes in the Pool which will
   trigger a migration if a {\bf PoolOccupancy} migration selection
   type has been specified. The fact that the Pool
   usage goes above this level does not automatically trigger a migration
   job. However, if a migration job runs and has the PoolOccupancy selection
   type set, the Migration High Bytes will be applied. Bacula does not
   currently restrict a pool to have only a single Media Type, so you
   must keep in mind that if you mix Media Types in a Pool, the results
   may not be what you want, as the Pool count of all bytes will be
   for all Media Types combined.

\item [Migration Low Bytes = \lt{}byte-specification\gt{}]
   This directive specifies the number of bytes in the Pool which will
   stop a migration if a {\bf PoolOccupancy} migration selection
   type has been specified and triggered by more than Migration High
   Bytes being in the pool. In other words, once a migration job
   is started with {\bf PoolOccupancy} migration selection and it
   determines that there are more than Migration High Bytes, the
   migration job will continue to run jobs until the number of
   bytes in the Pool drops to or below Migration Low Bytes.

\item [Next Pool = \lt{}pool-specification\gt{}]
   The Next Pool directive specifies the pool to which Jobs will be
   migrated. This directive is required to define the Pool into which
   the data will be migrated. Without this directive, the migration job
   will terminate in error.

\item [Storage = \lt{}storage-specification\gt{}]
   The Storage directive specifies what Storage resource will be used
   for all Jobs that use this Pool. It takes precedence over any other
   Storage specifications that may have been given, such as in the
   Schedule Run directive or in the Job resource. We highly recommend
   that you define the Storage resource to be used in the Pool rather
   than elsewhere (job, schedule run, ...).
\end{description}
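Putting these Pool directives together, a source Pool set up for both
PoolOccupancy and PoolTime migration might be sketched as follows (all
resource names and values here are illustrative):

\begin{verbatim}
Pool {
  Name = FilePool
  Pool Type = Backup
  Storage = FileStorage
  Media Type = File
  Next Pool = TapePool          # required: migration destination
  Migration High Bytes = 400G   # start migrating above this usage
  Migration Low Bytes = 300G    # stop once Pool drops below this
  Migration Time = 30 days      # PoolTime threshold
}
\end{verbatim}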
\section{Important Migration Considerations}
\index[general]{Important Migration Considerations}

\begin{itemize}
\item Each Pool into which you migrate Jobs or Volumes {\bf must}
   contain Volumes of only one Media Type.

\item Migration takes place on a JobId by JobId basis. That is,
   each JobId is migrated in its entirety and independently
   of other JobIds. Once the Job is migrated, it will be
   on the new medium in the new Pool, but for the most part,
   aside from having a new JobId, it will appear with all the
   same characteristics of the original job (start, end time, ...).
   The column RealEndTime in the catalog Job table will contain the
   time and date that the Migration terminated, and by comparing
   it with the EndTime column you can tell whether or not the
   job was migrated. The original job is purged of its File
   records, and its Type field is changed from "B" to "M" to
   indicate that the job was migrated.

\item Jobs on Volumes will be considered for Migration only if the Volume is
   marked Full, Used, or Error. Volumes that are still
   marked Append will not be considered for migration. This
   prevents Bacula from attempting to read the Volume at
   the same time it is writing it. It also reduces other deadlock
   situations, as well as avoids the problem that you migrate a
   Volume and later find new files appended to that Volume.

\item As noted above, for the Migration High Bytes, the calculation
   of the bytes to migrate is somewhat approximate.

\item If you keep Volumes of different Media Types in the same Pool,
   it is not clear how well migration will work. We recommend only
   one Media Type per pool.

\item It is possible to get into a resource deadlock where Bacula does
   not find enough drives to simultaneously read and write all the
   Volumes needed to do Migrations. For the moment, you must take
   care as all the resource deadlock algorithms are not yet implemented.

\item Migration is done only when you run a Migration job. If you set a
   Migration High Bytes and that number of bytes is exceeded in the Pool,
   no migration job will automatically start. You must schedule the
   migration jobs, and they must run for any migration to take place.

\item If you migrate a number of Volumes, a very large number of Migration
   jobs may start.

\item Figuring out what jobs will actually be migrated can be a bit complicated
   due to the flexibility provided by the regex patterns and the number of
   different options. Turning on a debug level of 100 or more will provide
   a limited amount of debug information about the migration selection
   process.

\item Bacula currently does only minimal Storage conflict resolution, so you
   must take care to ensure that you don't try to read and write to the
   same device or Bacula may block waiting to reserve a drive that it
   will never find. In general, ensure that all your migration
   pools contain only one Media Type, and that you always
   migrate to pools with different Media Types.

\item The {\bf Next Pool = ...} directive must be defined in the Pool
   referenced in the Migration Job to define the Pool into which the
   data will be migrated.

\item Pay particular attention to the fact that data is migrated on a Job
   by Job basis, and for any particular Volume, only one Job can read
   that Volume at a time (no simultaneous read), so migration jobs that
   all reference the same Volume will run sequentially. This can be a
   potential bottleneck and does not scale very well to large numbers
   of Jobs.

\item Only migration of Selection Types of Job and Volume have
   been carefully tested. All the other migration methods (time,
   occupancy, smallest, oldest, ...) need additional testing.

\item Migration is only implemented for a single Storage daemon. You
   cannot read on one Storage daemon and write on another.
\end{itemize}
\section{Example Migration Jobs}
\index[general]{Example Migration Jobs}

When you specify a Migration Job, you must specify all the standard
directives as for a Job. However, certain directives such as the Level,
Client, and FileSet, though they must be defined, are ignored by the
Migration job because the values from the original job are used instead.

As an example, suppose you have the following Job that
you run every night. Note that there is no Storage directive in the
Job resource; there is a Storage directive in each of the Pool
resources; the Pool to be migrated (File) contains a Next Pool
directive that defines the output Pool (where the data is written
by the migration job).
\begin{verbatim}
# Define the backup Job
Level = Incremental # default
Schedule = "WeeklyCycle"

# Default pool definition

# Tape pool definition

# Definition of File storage device
Password = "ccV3lVTsQRsdIUGyab0N4sMDavui2hOBkmpBU0aQKOr9"
Device = "File" # same as Device in Storage daemon
Media Type = File # same as MediaType in Storage daemon

# Definition of DLT tape storage device
Password = "ccV3lVTsQRsdIUGyab0N4sMDavui2hOBkmpBU0aQKOr9"
Device = "HP DLT 80" # same as Device in Storage daemon
Media Type = DLT8000 # same as MediaType in Storage daemon
\end{verbatim}
Here we have included only the essential information -- i.e. the
Director, FileSet, Catalog, Client, Schedule, and Messages resources are
omitted.

As you can see, by running the NightlySave Job, the data will be backed up
to File storage using the Default pool to specify the Storage as File.

Now, if we add the following Job resource to this conf file:
\begin{verbatim}
Name = "migrate-volume"
Maximum Concurrent Jobs = 4
Selection Type = Volume
Selection Pattern = "File"
\end{verbatim}
and then run the job named {\bf migrate-volume}, all volumes in the Pool
named Default (as specified in the migrate-volume Job) that match the
regular expression pattern {\bf File} will be migrated to tape storage
DLTDrive because the {\bf Next Pool} in the Default Pool specifies that
Migrations should go to the pool named {\bf Tape}, which uses
Storage {\bf DLTDrive}.
If instead, we use a Job resource as follows:

\begin{verbatim}
Maximum Concurrent Jobs = 4
Selection Pattern = ".*Save"
\end{verbatim}
All jobs ending with the name Save will be migrated from the Default (File)
Pool to the Tape Pool, i.e. from File storage to Tape storage.
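A Copy job is defined in the same way as a Migration job, but with {\bf Type
= Copy}. As a sketch, a job that copies every not-yet-copied job in the
Default Pool might look like the following (the resource names here are
illustrative; Client, FileSet, and Level must be present but, as noted above,
the values from the original jobs are used instead):

\begin{verbatim}
Job {
  Name = "copy-uncopied"
  Type = Copy
  Level = Full
  Client = rufus-fd
  FileSet = "Full Set"
  Messages = Standard
  Pool = Default
  Maximum Concurrent Jobs = 4
  Selection Type = PoolUncopiedJobs
}
\end{verbatim}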