From: Kern Sibbald
Date: Mon, 10 Sep 2007 15:00:52 +0000 (+0000)
Subject: Update News
X-Git-Tag: Release-3.0.0~2554
X-Git-Url: https://git.sur5r.net/?a=commitdiff_plain;h=fc00df674091278c373c76794323f1601c79d431;p=bacula%2Fdocs

Update News
---

diff --git a/docs/home-page/news.txt b/docs/home-page/news.txt
index 73a1087e..00407d4a 100644
--- a/docs/home-page/news.txt
+++ b/docs/home-page/news.txt
@@ -9,71 +9,66 @@ kind of problem many times. Despite our testing, there is indeed a bug in
 Bacula that has the following characteristics:
 
-1. It happens only when multiple simultaneous Jobs are run (regardless of
-whether or not data spooling is enabled).
+1. It happens only when multiple simultaneous Jobs are run (regardless of
+whether or not data spooling is enabled), and happens only when the
+Storage daemon is changing from one Volume to another -- i.e. the
+backups span multiple volumes, and it only happens for Jobs writing
+to the same volume.
 
-2. It has only been observed on disk based backup, but not on tape.
+2. It has only been observed on disk based backup, but not on tape.
 
-3. Under the right circumstances (timing), it could and probably does happen
+3. Under the right circumstances (timing), it could and probably does happen
 on tape backups.
 
-4. It seems to be timing dependent, and requires multiple clients to
-reproduce.
+4. It seems to be timing dependent, and requires multiple clients to
+reproduce, although under the right circumstances, it should be reproducible
+with a single client doing multiple simultaneous backups.
 
-5. Analysis indicates that it happens most often when the clients are slow
+5. Analysis indicates that it happens most often when the clients are slow
 (e.g. doing Incremental backups).
 
 6. It has been verified to exist in versions 2.0.x and 2.2.x.
 
-7. It should also be in version 1.38, but could not be reproduced in testing,
-perhaps due to timing considerations or the fact that the test FD daemons
+7. It should also be in version 1.38, but could not be reproduced in testing,
+perhaps due to timing considerations or the fact that the test FD daemons
 were version 2.2.2.
 
-8. The data is correctly stored on the Volume, but incorrect index (JobMedia)
-records are stored in the database. (the JobMedia record generated during
-the Volume change contains the index of the new Volume rather than the
-previous Volume).
+8. The data is correctly stored on the Volume, but incorrect index (JobMedia)
+records are stored in the database. (the JobMedia record generated during
+the Volume change contains the index of the new Volume rather than the
+previous Volume). This will be described in more detail below.
 
-9. You can prevent the problem from occurring by either turning off multiple
-simultaneous Jobs or by ensuring that while running multiple simultaneous
-Jobs that those Jobs do not span Volumes. E.g. you could manually mark
+9. You can prevent the problem from occurring by either turning off multiple
+simultaneous Jobs or by ensuring that while running multiple simultaneous
+Jobs that those Jobs do not span Volumes. E.g. you could manually mark
 Volumes as full when they are sufficiently large.
 
-10. If you are not running multiple simultaneous Jobs, you will not be
+10. If you are not running multiple simultaneous Jobs, you will not be
 affected by this bug.
 
-11. If you are running multiple simultaneous Jobs to tapes, I believe there is
-a reasonable probability that this problem could show up when Jobs are split
+11. If you are running multiple simultaneous Jobs to tapes, I believe there is
+a reasonable probability that this problem could show up when Jobs are split
 across tapes.
 
-12. If you are running multiple simultaneous Jobs to disks, I believe there is
-a high probability that this problem will show up when Jobs are split across
+12. If you are running multiple simultaneous Jobs to disks, I believe there is
+a high probability that this problem will show up when Jobs are split across
 disks Volumes.
 
+13. The bug concerns only the Storage daemon so there is no need to update
+the clients, though I do recommend updating the Director when installing
+an updated Storage daemon.
+
 I have uploaded patches to bug #935 (bugs.bacula.org) that will correct
 version 2.2.0, 2.2.1, and 2.2.2. The patch has been tested only on version
 2.2.2 and passes all regression tests as well as the specific test that
-reproduced the problem. This patch is still in the testing phase because it
-has not yet been confirmed by any user other than myself. The only daemon
-that is affected by the bug and the patch is the Storage daemon, so there is
-no need to upgrade any clients.
-
-After a little more testing, I plan to release version 2.2.3 probably on
-Monday the 10th or Tuesday.
-
-At this time, I do not have a patch for 2.0.x versions, and unless there is
-some really compelling reason to create one, I would prefer not -- it would
-not be a huge effort to back port the patch, but it would require rather
-extensive testing. Though it is hard to make a specific recommendation, I
-believe that it probably will be the wisest and simplest to either patch
-version 2.2.x if that is what you are currently running, or upgrade to
-version 2.2.3 when it is released.
-
-It *could* be possible to manually correct the bad JobMedia records in the
-catalog, but it is not something that I would personally recommend. If you
-*really* need data off an old tape, I recommend first trying a restore.
-Sometime tomorrow, I will provide more detailed instructions on several ways
-how to correct the problem if necessary -- all of them are somewhat painful.
+reproduced the problem.
+
+The patch has now been confirmed to fix the problem reported, and Bacula
+version 2.2.3 has been released to Source Forge.
+
+For the technical details of the bug, please see:
+
+  http://www.bacula.org/downloads/bug-935.txt
 ;;;
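
As a rough sketch of the workarounds in items 9 and 10 of the announcement
above (the directive and command names are standard Bacula usage, but the
resource and Volume names are placeholders, not taken from the announcement):
turning off simultaneous Jobs can be done in bacula-dir.conf, for example

   Director {
     Name = example-dir               # placeholder name
     ...                              # other Director directives unchanged
     Maximum Concurrent Jobs = 1      # run only one Job at a time
   }

and "manually marking Volumes as full when they are sufficiently large" can
be done from bconsole with something like

   update volume=Vol0001 volstatus=Full

so that the next set of simultaneous Jobs starts on a fresh Volume rather
than changing Volumes part way through, which is the condition described in
item 1 that triggers the bug.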