Bacula Regression Suite and CTest
==================================================================
Thanks to Frank Sweetser, the Bacula regression scripts have now been modified
to use the ctest component of cmake. The major gain from this, since Bacula
already had a working test framework in place, is the ability to have the
results of each test submitted to a centralized dashboard system. All of the
test results are aggregated and summarized there, so that all of the
developers can quickly see how the regression tests are running.
==================================================================

For more complete documentation on ctest, go to:

http://www.cmake.org/Wiki/CMake#CTest

The first step is to install the cmake package, which includes ctest. If your
distribution does not already package it, you can download it directly from
the cmake web site (http://www.cmake.org/).
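If your distribution does package it, installing is normally a one-liner; the
package is usually just named cmake, so depending on your package manager
something like one of the following should work:

   yum install cmake        # Red Hat / Fedora style systems
   apt-get install cmake    # Debian / Ubuntu style systems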
Next, you must edit your regression config file and add a parameter called
SITE_NAME to identify the machine running the tests. Ideally, it should
contain something to identify yourself to whoever is viewing the test results,
as well as something to allow you to identify which machine is running the
tests. For example:

SITE_NAME=kern-bacula-gumbie
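A fragment of the config file on a test machine might then look something
like the following (the path and site name here are purely illustrative; use
whatever matches your own setup):

   # source tree that the regression tests will build and test
   BACULA_SOURCE=${HOME}/bacula/bacula

   # identifies who and which machine is submitting results
   SITE_NAME=joe-bacula-buildbox1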
Once you have cmake installed, you can perform one of two different kinds of
runs to submit test results. The most common kind will be Nightly runs. A
Nightly CTest run will update the source directory (as defined by the
BACULA_SOURCE setting in your config file) to the current version, run the
specified list of tests, and submit all of the results to the server. Note
that all of the results in a given 24 hour period (starting at 9pm EST) are
lumped together to appear as a single block, rather than each test showing up
separately.
The simplest way to trigger a nightly run is to use one of the two provided
scripts. The nightly-all script will run all non-root tests, both tape and
disk based, while the nightly-disk script will run only the disk based tests.
So, you can choose between the following scripts:
./nightly-all          # does disk and tape testing
./nightly-disk         # disk only tests

./experimental-all     # experimental disk and tape testing
./experimental-disk    # experimental disk testing
We recommend that you start with the ./experimental-disk runs so that
you can check that everything is working fine. Once that is done,
try a nightly-xxx run. The difference is that the experimental runs are just
that -- they are runs where you are experimenting and it is expected that
something might be broken (bad ctest configuration, experimental source
code, ...), while nightly runs are not expected to fail.
If you are a developer and you have modified your local SVN repository, you
should be running the experimental tests -- they are designed for developers.
If you do modify your local repository and commit it, then run a nightly
test, the local repository may be reverted to a prior version so that the
nightly tests all have a consistent cutoff time.

If you are just doing testing on a nightly basis (no development in your
source repository), then please use the nightly tests.
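If you settle on running the nightly tests every day, the easiest way to keep
them running is from cron. A crontab entry along these lines should do it
(the path is only an example; point it at your own regress directory):

   # run the disk-only regression tests every night at 22:30
   30 22 * * * cd /home/regress/bacula/regress && ./nightly-disk >/dev/null 2>&1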
All the old scripts (./do_all, do_file, all-non-root-tests, ...) manually
run the tests outside of ctest.
Periodically, however, you may want to submit a single test separately from a
weekly run. This may be a test of a particular patch you're working on, or
perhaps a new OS patch. For these one-shot tests, you will want to manually
run ctest in Experimental mode, something like:

REGRESS_DEBUG=1 ctest -D Experimental -R all-non-root:auto-label-test
The '-D Experimental' option tells ctest to submit the test results as
Experimental instead of Nightly. We recommend you use the REGRESS_DEBUG
environment variable to ensure that any errors from the test are logged in
the dashboard (all of the ctest wrapper scripts set it). The '-R <pattern>'
option gives ctest a regular expression; any test whose name, as defined in
DartTestfile.txt, matches the pattern will be run.
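If you are not sure what a given pattern will match, ctest can list the
matching tests without actually running them. For example (the pattern here
is just an illustration):

   ctest -N -R all-non-root

The -N option prints the names of the tests that would be run and then exits.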
Note that you must have run ./scripts/do_sed at least once already in order to
use Experimental mode.
Updating and Building Within CTest
==================================================================
Before each Nightly run, ctest will automatically update the BACULA_SOURCE
directory, and submit these updates along with the test results. Experimental
runs will not.

Before either type of run actually begins running tests, ctest will run the
script scripts/update-ctest. This script first compares the svn version of
BACULA_SOURCE with that of the build/ directory. If the two versions differ,
or if the build/ directory does not exist, it will automatically run
'make setup'.
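Conceptually, the check that update-ctest performs amounts to something like
the following shell fragment (this is only a sketch of the idea, not the
actual contents of the script):

   # rebuild if the build tree is missing or out of date with the source tree
   if [ ! -d build ] || \
      [ "`svnversion ${BACULA_SOURCE}`" != "`svnversion build`" ]; then
      make setup
   fi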
Viewing the Dashboard
==================================================================

You can view the dashboard at:

http://regress.bacula.org/index.php?project=bacula
Results will not be visible as soon as they are submitted to the server.
Processing is currently done every 10 minutes, so you may have to wait up to 15
minutes or so before your results show up.
=========================================================

Email from Frank describing the flow when running a ctest run and some of the
problems that come up.
0. Start off with a local svn repository at version A, and the master
repository at version B.

1. nightly-disk is started.

2. nightly-disk runs scripts/config_dart

3. config_dart runs scripts/create_sed

4. create_sed pulls bversion and bdate out of the current local repo, so it
gets the version A values.

5. config_dart then creates DartConfiguration.tcl from the .in file, leaving a
BuildName parameter of A.

6. nightly-disk then runs 'ctest -D Nightly'. This implicitly tells ctest to
perform Update, Configure, Build, Test, and Submit stages, in that order.

7. The Update stage runs 'svn update' on the local repository. The local
repository is now updated to version B from the master, but since the
DartConfiguration.tcl file was already created and has not been updated, the
Update.xml file still has the version A BuildName.

8. Next, the Configure stage runs. Since the configure process is handled in
tandem with the build process by 'make setup', this just calls /bin/true so as
to not throw any false errors, and can effectively be treated as a no-op.
9. Next is the Build stage, which is handled by calling scripts/update-ctest.

10. update-ctest checks the svn versions of regress/build vs BACULA_SOURCE.
Since the two are different (regress/build is still version A, while
BACULA_SOURCE has been updated to B), it calls 'make setup'.

11. 'make setup' copies BACULA_SOURCE to regress/build and configures and
builds it. It then calls scripts/do_sed.

12. do_sed calls scripts/config_dart again. Since regress/build has been
updated to B, it regenerates DartConfiguration.tcl with a version B BuildName.

13. ctest now generates the Build.xml results file, but since BuildName was
still A when this stage began, that is what appears in the XML BuildName.
14. Done with the Build stage, ctest moves on to the Test stage, where it
actually calls the various test scripts as defined by DartTestfile.txt and
filtered by the -R option. At this point, since ctest is beginning a new
stage, it appears to re-read DartConfiguration.tcl (I believe this is intended
to allow ctest to bootstrap itself in a virgin cmake-managed source code tree,
where the test configuration should be generated by cmake). The final
Test.xml file, therefore, contains a version B BuildName string, as opposed to
all previous steps, which have version A.

15. Finally, ctest flings the results at the dashboard. The dashboard ignores
the order and grouping in which XML files are submitted, and instead uses
the site/buildname tuple to distinguish them (at least for Nightly runs, I'm
not so sure about Experimental ones). In this case, instead of one complete
run, the dashboard sees two runs, one of which is Update through Make, and a
second one which is Test only.
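One easy way to see the mismatch for yourself is to check the BuildName
recorded in DartConfiguration.tcl before and after a run, e.g. from the
regress directory:

   grep BuildName DartConfiguration.tcl

If the value printed after the run differs from the one that was in place
when the run started, the Update/Build results and the Test results will show
up on the dashboard under two different build names.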
For a sample of the problem, take a look at the fsweetser 2.3 sqlite3 Nightly
results at:

http://regress.bacula.org:8081/Bacula/Dashboard/Dashboard?trackid=29

The Update and Build information show up with a BuildName of
bacula-2.3.10-26Feb08-Linux-sqlite3; then, after the svn update hit, the Test
information shows up with bacula-2.3.11-03Mar08-Linux-sqlite3. (Ignore for
the moment the fact that my timestamps are at 6:59PM, rather than at 9PM where
they're supposed to be; this seems to be a Fedora-specific client side issue I
haven't tracked down yet.)
Looking more closely at the tests submitted to the public dashboard
(http://public.kitware.com/dashboard.php), I get the impression that the
BuildName parameter was misnamed, and is intended to be treated more as a
build platform name than as the name of the build being tested. Rather than
create a hook to tag the version being tested, everyone, as far as I can tell,
just seems to rely on the timestamp of the test.
NOTE !!!!!!!!! ctest can actually back out changes that have been made to
your local source repository. As a consequence, it is probably better not to
use a directory in which you are developing code for Nightly tests. See the
explanation below, given by Frank Sweetser.
When a Nightly run is done, the timestamp is set to the last occurring
instance of the time defined by the NightlyStartTime parameter. The piece
that I missed is that, in addition to using that timestamp for reporting to
the dashboard, the update stage also uses that point in time to determine
exactly which version of the repository to check out.

So if you commit changes at 10PM EST and then run a Nightly test run,
the NightlyStartTime of 9PM EST will back out those changes in the local
repository. Any subsequent runs that are started at 9PM EST the following day
or later will include them. This implies to me that NightlyStartTime should
be set such that you don't expect any developers to commit any changes in
between NightlyStartTime and the time at which the ctest run actually starts.
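In other words, the Nightly update stage behaves roughly as if it ran an svn
update pinned to the most recent NightlyStartTime, something along the lines
of (the date shown is purely illustrative):

   # bring the working copy to its state as of 9PM EST on the given day
   svn update -r '{2008-03-03 21:00 -0500}'

Anything committed after that point in time is backed out of the working copy
until the next nightly window rolls around.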
The alternative is to make use of the Experimental track. While it normally
just uses the local source tree as is, you can manually have it update:

ctest -D ExperimentalUpdate

Unlike Nightly, this will update to whatever the latest version of the
repository is.
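If you want an update followed by a full Experimental run, the individual
dashboard stages can be named explicitly on one command line; something like
the following should work (the test pattern is only an example):

   REGRESS_DEBUG=1 ctest -D ExperimentalStart -D ExperimentalUpdate \
       -D ExperimentalConfigure -D ExperimentalBuild \
       -D ExperimentalTest -D ExperimentalSubmit -R all-non-root

The -R option applies to the Test stage, just as it does for a plain
'ctest -D Experimental' run.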