=========================================================



=====
 Multiple drive autochanger data: see Alan Brown
 > mtx -f xxx unload
 Storage Element 1 is Already Full (drive 0 was empty)
 Unloading Data Transfer Element into Storage Element 1...source Element
 Address 480 is Empty

 (drive 0 was empty and so was slot 1)
 > mtx -f xxx load 15 0
 No response; it just returns to the command prompt when complete.
 > mtx -f xxx status
 Storage Changer /dev/changer:2 Drives, 60 Slots ( 2 Import/Export )
 Data Transfer Element 0:Full (Storage Element 15 Loaded):VolumeTag = HX001
 Data Transfer Element 1:Empty
 Storage Element 1:Empty
 Storage Element 2:Full :VolumeTag=HX002
 Storage Element 3:Full :VolumeTag=HX003
 Storage Element 4:Full :VolumeTag=HX004
 Storage Element 5:Full :VolumeTag=HX005
 Storage Element 6:Full :VolumeTag=HX006
 Storage Element 7:Full :VolumeTag=HX007
 Storage Element 8:Full :VolumeTag=HX008
 Storage Element 9:Full :VolumeTag=HX009
 Storage Element 10:Full :VolumeTag=HX010
 Storage Element 11:Empty
 Storage Element 12:Empty
 Storage Element 13:Empty
 Storage Element 14:Empty
 Storage Element 15:Empty
 Storage Element 16:Empty....
 Storage Element 28:Empty
 Storage Element 29:Full :VolumeTag=CLNU01L1
 Storage Element 30:Empty....
 Storage Element 57:Empty
 Storage Element 58:Full :VolumeTag=NEX261L2
 Storage Element 59 IMPORT/EXPORT:Empty
 Storage Element 60 IMPORT/EXPORT:Empty
 $ mtx -f xxx unload
 Unloading Data Transfer Element into Storage Element 15...done

 (just to verify it remembers where it came from; however, it can be
 overridden with mtx unload {slotnumber} to go to any storage slot.)
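A wrapper script driving explicit load/unload commands needs a volume-to-slot map; a minimal Python sketch that parses the status listing above (the regex assumes exactly the output format shown, and the helper name is made up):

```python
import re

# Hypothetical helper: parse `mtx -f xxx status` output into a
# {volume_tag: slot_number} map, so a script can issue explicit
# `mtx load <slot> <drive>` / `mtx unload <slot> <drive>` commands.
def parse_mtx_slots(status_text):
    slots = {}
    for line in status_text.splitlines():
        # Match only full storage slots with a barcode; drive lines
        # ("Data Transfer Element ...") do not match this pattern.
        m = re.search(r'Storage Element (\d+)(?: IMPORT/EXPORT)?'
                      r':Full\s*:VolumeTag\s*=\s*(\S+)', line)
        if m:
            slots[m.group(2)] = int(m.group(1))
    return slots

sample = """Data Transfer Element 0:Full (Storage Element 15 Loaded):VolumeTag = HX001
Storage Element 1:Empty
Storage Element 2:Full :VolumeTag=HX002
Storage Element 29:Full :VolumeTag=CLNU01L1"""
print(parse_mtx_slots(sample))
```

Note that a tape sitting in a drive is reported on a "Data Transfer Element" line, not a "Storage Element" line, so it is deliberately absent from the map.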
 Configuration-wise:
 There needs to be a table mapping drive numbers to devices somewhere. If
 there are multiple changers or drives, there may not be a 1:1 correspondence
 between changer drive number and system device name, and depending on the
 way the drives are hooked up to SCSI buses, they may not be numbered
 linearly from an offset point either. Something like:

 Autochanger drives = 2
 Autochanger drive 0 = /dev/nst1
 Autochanger drive 1 = /dev/nst2

 IMHO, it would be _safest_ to use explicit mtx unload commands at all
 times, not just for multidrive changers. For a 1-drive changer, that's
 just:

 mtx load xx 0
 mtx unload xx 0

 MTX's manpage (1.2.15):
 unload [<slotnum>] [ <drivenum> ]
     Unloads media from drive <drivenum> into slot
     <slotnum>. If <drivenum> is omitted, defaults to
     drive 0 (as do all commands). If <slotnum> is
     omitted, defaults to the slot that the drive was
     loaded from. Note that there's currently no way
     to say 'unload drive 1's media to the slot it
     came from', other than to explicitly use that
     slot number as the destination. -- AB
====

====
SCSI info:
FreeBSD
undef# camcontrol devlist
<WANGTEK 51000 SCSI M74H 12B3> at scbus0 target 2 lun 0 (pass0,sa0)
<ARCHIVE 4586XX 28887-XXX 4BGD> at scbus0 target 4 lun 0 (pass1,sa1)
<ARCHIVE 4586XX 28887-XXX 4BGD> at scbus0 target 4 lun 1 (pass2)
tapeinfo -f /dev/sg0 with a bad tape in drive 1:
[kern@rufus mtx-1.2.17kes]$ ./tapeinfo -f /dev/sg0
Product Type: Tape Drive
Vendor ID: 'HP '
Product ID: 'C5713A '
Revision: 'H107'
Attached Changer: No
TapeAlert[3]: Hard Error: Uncorrectable read/write error.
TapeAlert[20]: Clean Now: The tape drive needs cleaning NOW.
MinBlock:1
MaxBlock:16777215
SCSI ID: 5
SCSI LUN: 0
Ready: yes
BufferedMode: yes
Medium Type: Not Loaded
Density Code: 0x26
BlockSize: 0
DataCompEnabled: yes
DataCompCapable: yes
DataDeCompEnabled: yes
CompType: 0x20
DeCompType: 0x0
Block Position: 0
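A backup script could check this output before trusting the drive; a hedged Python sketch that extracts the TapeAlert flags (assuming the `TapeAlert[n]: message` line format shown above):

```python
import re

# Hypothetical helper: collect TapeAlert flags from tapeinfo output as
# {flag_number: message}, so a caller can refuse a drive that reports a
# hard error (flag 3) or needs cleaning (flag 20).
def tape_alerts(tapeinfo_text):
    alerts = {}
    for line in tapeinfo_text.splitlines():
        m = re.match(r'TapeAlert\[(\d+)\]:\s*(.+)', line)
        if m:
            alerts[int(m.group(1))] = m.group(2).strip()
    return alerts

sample = """Ready: yes
TapeAlert[3]: Hard Error: Uncorrectable read/write error.
TapeAlert[20]: Clean Now: The tape drive needs cleaning NOW."""
alerts = tape_alerts(sample)
print(sorted(alerts))
```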
=====

====
 Handling removable disks

 From: Karl Cunningham <karlc@keckec.com>

 My backups are only to hard disk these days, in removable bays. This is my
 idea of how a backup to hard disk would work more smoothly. Some of these
 things Bacula does already, but I mention them for completeness. If others
 have better ways to do this, I'd like to hear about them.

 1. Accommodate several disks, rotated similarly to how tapes are,
    identified by partition volume ID or perhaps by the name of a
    subdirectory.
 2. Abort and notify the admin if the wrong disk is in the bay.
 3. Write backups to a different subdirectory for each machine to be
    backed up.
 4. Volumes (files) get created as needed in the proper subdirectory, one
    for each backup.
 5. When a disk is recycled, remove or zero all old backup files. This is
    important since the disk being recycled may be close to full. This may
    be better done manually, since the backup files for many machines may
    be scattered across many subdirectories.
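Item 2 above could be sketched as a pre-job check. This assumes each removable disk carries a marker file naming it; the marker filename and disk IDs here are made up for illustration:

```python
import os
import tempfile

# Sketch: refuse to run if the disk in the bay is not the expected one.
# Assumes a marker file (hypothetical name ".disk-id") at the mount point.
def check_disk(mount_point, expected_id):
    marker = os.path.join(mount_point, ".disk-id")
    try:
        with open(marker) as f:
            found = f.read().strip()
    except FileNotFoundError:
        return False          # no marker at all: treat as the wrong disk
    return found == expected_id

# Simulate a mounted bay containing the week-2 disk.
bay = tempfile.mkdtemp()
with open(os.path.join(bay, ".disk-id"), "w") as f:
    f.write("disk-week2\n")
print(check_disk(bay, "disk-week2"), check_disk(bay, "disk-week1"))
```

In practice the ID would come from the partition volume label rather than a plain file, as the list item suggests; the file is just the simplest portable stand-in.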
====


=== Done
- Why the heck doesn't bacula drop root privileges before connecting to
  the DB?
- Look at using posix_fadvise(2) for backups -- see bug #751.
  Possibly add the code at findlib/bfile.c:795
/* TCP socket options */
#define TCP_KEEPIDLE 4 /* Start keepalives after this period */
- Fix bnet_connect() code to set a timer and to use it to measure
  the elapsed time.
- Implement a 4th argument to make_catalog_backup that passes the hostname.
- Test FIFO backup/restore -- make a regression test.
- The "Please mount volume xxx on Storage device ..." message should also
  list the Pool and MediaType in case the user needs to create a new volume.
- On restore, add Restore Client and Original Client.
01-Apr 00:42 rufus-dir: Start Backup JobId 55, Job=kernsave.2007-04-01_00.42.48
01-Apr 00:42 rufus-sd: Python SD JobStart: JobId=55 Client=Rufus
01-Apr 00:42 rufus-dir: Created new Volume "Full0001" in catalog.
01-Apr 00:42 rufus-dir: Using Device "File"
01-Apr 00:42 rufus-sd: kernsave.2007-04-01_00.42.48 Warning: Device "File" (/tmp) not configured to autolabel Volumes.
01-Apr 00:42 rufus-sd: kernsave.2007-04-01_00.42.48 Warning: Device "File" (/tmp) not configured to autolabel Volumes.
01-Apr 00:42 rufus-sd: Please mount Volume "Full0001" on Storage Device "File" (/tmp) for Job kernsave.2007-04-01_00.42.48
01-Apr 00:44 rufus-sd: Wrote label to prelabeled Volume "Full0001" on device "File" (/tmp)
- Check if gnome-console works with TLS.
- The director seg faulted when I omitted the Pool directive from a
  job resource. I was experimenting and thought it redundant that I had
  specified Pool, Full Backup Pool, and Differential Backup Pool, but
  apparently not. This happened when I removed the Pool directive and
  started the director.
- Add Where: client:/.... to the restore job report.
- Ensure that moving a purged Volume in ua_purge.c to the RecyclePool
  does the right thing.
- FD-SD quick disconnect.
- Building the in-memory restore tree is slow.
- Abort if min_block_size > max_block_size.
- Add the ability to consolidate old backup sets (basically do a restore
  to tape and appropriately update the catalog). Compress Volume sets.
  Might need to spool via file if only one drive is available.
- Why doesn't @"xxx abc" work in a conf file?
- Don't restore Solaris Door files:
  #define S_IFDOOR in st_mode.
  see: http://docs.sun.com/app/docs/doc/816-5173/6mbb8ae23?a=view#indexterm-360
- Figure out how to recycle Scratch volumes back to the Scratch Pool.
- Implement Despooling data status.
- Use E'xxx' to escape PostgreSQL strings.
- Look at mincore: http://insights.oetiker.ch/linux/fadvise.html
- Unicode input: http://en.wikipedia.org/wiki/Byte_Order_Mark
- Look at moving the Storage directive from the Job to the
  Pool in the default conf files.
- Look at this in src/filed/backup.c:
  > pm_strcpy(ff_pkt->fname, ff_pkt->fname_save);
  > pm_strcpy(ff_pkt->link, ff_pkt->link_save);
- Add Catalog = to the Pool resource so that pools will exist
  in only one catalog -- currently Pools are "global".
- Add TLS to bat (should be done).
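The posix_fadvise(2) item above can be sketched in Python, since os.posix_fadvise wraps the same call (guarded with hasattr because it only exists on POSIX systems; the file here is a throwaway stand-in for a backup source):

```python
import os
import tempfile

# After reading each chunk during a backup, hint the kernel that the pages
# just read will not be reused (POSIX_FADV_DONTNEED), so one big sequential
# read does not evict the rest of the page cache.
fd0, path = tempfile.mkstemp()
os.close(fd0)
with open(path, "wb") as f:
    f.write(b"x" * 65536)

fd = os.open(path, os.O_RDONLY)
total = 0
while True:
    chunk = os.read(fd, 16384)
    if not chunk:
        break
    total += len(chunk)
    if hasattr(os, "posix_fadvise"):
        # Drop cached pages for the range just consumed.
        os.posix_fadvise(fd, total - len(chunk), len(chunk),
                         os.POSIX_FADV_DONTNEED)
os.close(fd)
os.unlink(path)
print(total)
```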
=== Duplicate jobs ===
- Done, but implemented somewhat differently than described below!!!

 These apply only to backup jobs.

 1. Allow Duplicate Jobs = Yes | No | Higher (Yes)

 2. Duplicate Job Interval = <time-interval> (0)

 The defaults are in parentheses and would produce the same behavior as
 today.

 If Allow Duplicate Jobs is set to No, then any job starting while a job of
 the same name is running will be canceled.

 If Allow Duplicate Jobs is set to Higher, then any job starting with the
 same or a lower level will be canceled, but any job with a higher level
 will start. The levels are, from high to low: Full, Differential,
 Incremental.

 Finally, if you have Duplicate Job Interval set to a non-zero value, any
 job of the same name which starts <time-interval> after a previous job of
 the same name will run; any one that starts within <time-interval> will be
 subject to the above rules. Another way of looking at it is that the Allow
 Duplicate Jobs directive will only apply during <time-interval> after the
 previous job finished (i.e. it is the minimum interval between jobs).

 So in summary:

 Allow Duplicate Jobs = Yes | No | HigherLevel | CancelLowerLevel (Yes)

 Where HigherLevel cancels any waiting job but not any running job.
 Where CancelLowerLevel is the same as HigherLevel but cancels any running
 or waiting job.
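The Allow Duplicate Jobs variants above can be sketched as a small decision function. This is an illustration of the described semantics, not Bacula's actual code, and it covers only the basic Yes/No/Higher forms:

```python
# Level ordering, high to low: Full, Differential, Incremental.
LEVELS = {"Full": 3, "Differential": 2, "Incremental": 1}

def may_start(new_level, running_level, allow_duplicates):
    """Return True if a new job may start while a same-named job runs.

    running_level is None when no job of the same name is running.
    """
    if running_level is None:
        return True
    if allow_duplicates == "Yes":
        return True
    if allow_duplicates == "No":
        return False   # duplicate of a running job: cancel the new one
    if allow_duplicates == "Higher":
        # Same or lower level is canceled; only a higher level may start.
        return LEVELS[new_level] > LEVELS[running_level]
    raise ValueError(allow_duplicates)

print(may_start("Full", "Incremental", "Higher"))
print(may_start("Incremental", "Full", "Higher"))
```

The Duplicate Job Interval rule would wrap this: skip the check entirely when the previous same-named job finished more than the interval ago.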

 Duplicate Job Proximity = <time-interval> (0)

 My suggestion was to define it as the minimum guard time between
 executions of a specific job -- i.e., if a job was scheduled within Job
 Proximity seconds of another, it would be considered a duplicate and
 consolidated.

 Skip = Do not allow two or more jobs with the same name to run
 simultaneously within the proximity interval. The second and subsequent
 jobs are skipped without further processing (other than to note the job
 and exit immediately), and are not considered errors.

 Fail = The second and subsequent jobs that attempt to run during the
 proximity interval are canceled and treated as error-terminated jobs.

 Promote = If a job is running and a second/subsequent job of a higher
 level attempts to start, the running job is promoted to the higher level
 of processing using the resources already allocated, and the subsequent
 job is treated as in Skip above.


DuplicateJobs {
 Name = "xxx"
 Description = "xxx"
 Allow = yes|no (no = default)

 AllowHigherLevel = yes|no (no)

 AllowLowerLevel = yes|no (no)

 AllowSameLevel = yes|no

 Cancel = Running | New (no)

 CancelledStatus = Fail | Skip (fail)

 Job Proximity = <time-interval> (0)

}

===