Bacula Regression Suite and CTest
==================================================================

# Copyright (C) 2000-2015 Kern Sibbald
# License: BSD 2-Clause; see file LICENSE-FOSS
Thanks to Frank Sweetser, the Bacula regression scripts have now been modified
to use the ctest component of cmake. The major gain from this, since Bacula
already had a working test framework in place, is the ability to have the
results of each test submitted to a centralized dashboard system. All of the
test results are aggregated and summarized so that all of the developers can
quickly see how the regression tests are running.
==================================================================

For more complete documentation on ctest, go to:

http://www.cmake.org/Wiki/CMake#CTest
The first step is to install the cmake package, which includes ctest. If your
distribution does not already package it, you can download it directly from the
cmake web site.
Next, you must edit your regression config file and add a parameter called
SITE_NAME to identify the machine running the tests. Ideally, it should
contain something to identify yourself to whoever is viewing the test results
as well as something to allow you to identify which machine is running the
tests. For example:

SITE_NAME=kern-bacula-gumbie
Once you have cmake installed, you can perform one of two different kinds of
runs to submit test results. The most common kind will be Nightly runs. A
Nightly CTest run will update the source directory (as defined by the
BACULA_SOURCE setting in your config file) to the current version, run the
specified list of tests, and submit all of the results to the server. Note
that all of the results in a given 24 hour period (starting at 9pm EST) are
lumped together to appear as a single block, rather than each test showing up
separately.
The simplest way to trigger a nightly run is to use one of the two provided
scripts. The nightly-all script will run all non-root tests, both tape and
disk based, while the nightly-disk script will run only the disk based tests.
So, you can choose between the following scripts:

./nightly-all # does disk and tape testing
./nightly-disk # disk only tests

./experimental-all # experimental disk and tape testing
./experimental-disk # experimental disk testing
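If you want the nightly runs to happen unattended, the usual approach is to drive one of these scripts from cron. A sketch of a crontab entry (the path and start time are assumptions; pick a time at or after your NightlyStartTime):

```shell
# crontab fragment (illustrative): run the disk-only nightly tests at 10PM
# m  h  dom mon dow  command
0 22 * * * cd /home/user/bacula/regress && ./nightly-disk >nightly.log 2>&1
```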
We recommend that you start with the ./experimental-disk runs so that
you can check that everything is working correctly. Once that is done,
try a nightly-xxx run. The difference is that the experimental runs are just
that -- runs where you are experimenting, so it is expected that
something might be broken (bad ctest configuration, experimental source
code, ...), while nightly runs are not expected to fail.
If you are a developer and you have modified your local Git repository, you
should be running the experimental tests -- they are designed for developers.
If you do modify your local repository and commit it, then run a nightly
test.

If you are just doing testing on a nightly basis (no development in your
source repository), then please use the nightly tests.
All the old scripts (./do_all, do_file, all-non-root-tests, ...) manually
run the tests outside of ctest.
Periodically, however, you may want to submit a single test separately from a
nightly run. This may be a test of a particular patch you're working on, or
perhaps a new OS patch. For these one-shot tests, you will want to manually
run ctest in Experimental mode, something like:

REGRESS_DEBUG=1 ctest -D Experimental -R all-non-root:auto-label-test
The '-D Experimental' option tells ctest to submit the test results as
Experimental instead of Nightly. We recommend you use the REGRESS_DEBUG
environment variable to ensure that any errors from the test are logged in
the dashboard (all of the ctest wrapper scripts set it). The '-R <pattern>'
option gives ctest a regular expression. Any tests with a name as defined in
DartTestfile.txt that matches the pattern will be run.
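Since '-R' takes a regular expression over test names, you can preview which names would match before launching anything. This sketch mimics the match with grep against a mocked-up DartTestfile.txt (the ADD_TEST lines here are illustrative, not copied from the real file):

```shell
#!/bin/sh
# Create a small stand-in for DartTestfile.txt (names are illustrative).
cat > /tmp/DartTestfile.txt <<'EOF'
ADD_TEST(all-non-root:auto-label-test "tests/auto-label-test")
ADD_TEST(all-non-root:backup-bacula-test "tests/backup-bacula-test")
ADD_TEST(all:btape-fill-test "tests/btape-fill-test")
EOF
# 'ctest -R auto-label' would select only the names matching the pattern;
# grep -E applies the same kind of match:
grep -E 'auto-label' /tmp/DartTestfile.txt
```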
Note that you must have run ./scripts/do_sed at least once already in order to
use Experimental mode.
Updating and Building Within CTest:
==================================================================
Before each Nightly run, ctest will automatically update the BACULA_SOURCE
directory, and submit these updates along with the test results. Experimental
runs will not.
Before either type of run actually begins running tests, ctest will run the
script scripts/update-ctest. This script first compares the version of
BACULA_SOURCE with that of the build/ directory. If the two versions differ, or
if the build/ directory does not exist, it will automatically run 'make setup'.
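The check update-ctest performs can be sketched roughly as follows (a simplification, assuming both trees are Git working copies; the real logic lives in scripts/update-ctest, and the default path here is just an assumption):

```shell
#!/bin/sh
# Rough sketch of the update-ctest version check (paths are assumptions).
BACULA_SOURCE=${BACULA_SOURCE:-$HOME/bacula}
BUILD_DIR=./build

src_ver=$( { cd "$BACULA_SOURCE" && git rev-parse HEAD; } 2>/dev/null )
bld_ver=$( { cd "$BUILD_DIR" && git rev-parse HEAD; } 2>/dev/null )

if [ ! -d "$BUILD_DIR" ] || [ "$src_ver" != "$bld_ver" ]; then
    echo "versions differ or build/ missing: would run 'make setup'"
else
    echo "build/ is current: skipping 'make setup'"
fi
```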
Viewing the Dashboard:
==================================================================
You can view the dashboard at:

http://regress.bacula.org/index.php?project=bacula

Results will not be visible as soon as they are submitted to the server.
Processing is currently done every 10 minutes, so you may have to wait up to 15
minutes or so before your results show up.
Getting CTest running on Solaris (thanks to Robert Hartzell):
============================================================
The regression suite is working in a zone on OpenSolaris build 126.

Create a zone and install these packages:
SUNWcmake, SUNWmysql5, SUNWlibm, gcc-dev,
SUNWgtar, SUNWgit, SUNWperl584usr
In the config file I edited the "WHICHDB=" line to read:
WHICHDB="--with-mysql=/usr/mysql/5.0"

And then in the shell:
$ export LDFLAGS="-L/usr/mysql/5.0/lib/mysql -R/usr/mysql/5.0/lib/mysql"
$ PATH=/usr/gnu/bin:$PATH
When I first ran "make setup" it failed -- it couldn't create the database,
so I had to run /usr/mysql/5.0/bin/mysql -u root mysql and do this:
grant all privileges on regress.* to ''@localhost;
grant all privileges on regress.* to ''@"%";
CTest script details:
=========================================================
Email from Frank describing the flow when running a ctest and some of the
problems that come up.
0. Start off with a local Git repository at version A, and the master
repository at version B.

1. nightly-disk is started.

2. nightly-disk runs scripts/config_dart

3. config_dart runs scripts/create_sed
4. create_sed pulls bversion and bdate out of the current local repo, so gets
version A.

5. config_dart then creates DartConfiguration.tcl from the .in file, leaving a
BuildName parameter of A.
6. nightly-disk then runs 'ctest -D Nightly'. This implicitly tells ctest to
perform Update, Configure, Build, Test, and Submit stages, in that order.

7. The Update stage runs 'git pull' on the local repository. The local
repository is now updated to version B from the master, but since the
DartConfiguration.tcl file was already created and has not been updated, the
Update.xml file still has the version A BuildName.

8. Next, the Configure stage runs. Since the configure process is handled in
tandem with the build process by 'make setup', this just calls /bin/true so as
not to throw any false errors, and can effectively be treated as a no-op.
9. Next is the Build stage, which is handled by calling scripts/update-ctest.

10. update-ctest checks the Git versions of regress/build vs BACULA_SOURCE.
Since the two are different (regress/build is still version A, while
BACULA_SOURCE has been updated to B) it calls 'make setup'.

11. 'make setup' copies BACULA_SOURCE to regress/build and configures and
builds it. It then calls scripts/do_sed.

12. do_sed calls scripts/config_dart again. Since regress/build has been
updated to B, it regenerates DartConfiguration.tcl with a version B BuildName.
13. ctest now generates the Build.xml results file, but since BuildName was
still A when this stage began, this is what appears in the XML BuildName.

14. Done with the Build stage, ctest moves on to the Test stage, where it
actually calls the various test scripts as defined by DartTestfile.txt and
filtered by the -R option. At this point, since ctest is beginning a new
stage, it appears to re-read DartConfiguration.tcl (I believe this is intended
to allow ctest to bootstrap itself in a virgin cmake-managed source code tree,
where the test configuration should be generated by cmake). The final
Test.xml file, therefore, contains a version B BuildName string, as opposed to
all previous steps, which have version A.
15. Finally, ctest flings the results at the dashboard. The dashboard
ignores the order and grouping in which XML files are submitted, and instead
uses the site/buildname tuple to distinguish them (at least for Nightly runs;
I'm not so sure about Experimental ones). In this case, instead of one
complete run, the dashboard sees two runs, one of which is Update through
Make, and a second one which is Test only.
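The template substitution in steps 5 and 12 works roughly like a sed pass over the .in file. This sketch uses illustrative token names and values, not the actual contents of DartConfiguration.tcl.in:

```shell
#!/bin/sh
# Illustrative sketch: substitute a BuildName into a template, the way
# config_dart produces DartConfiguration.tcl from its .in file.
buildname="bacula-2.3.10-26Feb08-Linux-sqlite3"   # "version A" in the flow above
cat > /tmp/DartConfiguration.tcl.in <<'EOF'
Site: @SITE_NAME@
BuildName: @BUILDNAME@
EOF
sed -e "s/@BUILDNAME@/$buildname/" -e "s/@SITE_NAME@/kern-bacula-gumbie/" \
    /tmp/DartConfiguration.tcl.in > /tmp/DartConfiguration.tcl
cat /tmp/DartConfiguration.tcl
```

Until do_sed reruns config_dart, the generated file keeps whatever BuildName it was created with, which is exactly how the A/B mismatch above arises.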
For a sample of the problem, take a look at the fsweetser 2.3 sqlite3 Nightly
run:

http://regress.bacula.org:8081/Bacula/Dashboard/Dashboard?trackid=29

The Update and Build information show up with a BuildName of
bacula-2.3.10-26Feb08-Linux-sqlite3, then after the git pull the Test
information shows up with bacula-2.3.11-03Mar08-Linux-sqlite3. (Ignore for
the moment the fact my timestamps are at 6:59PM, rather than at 9PM where
they're supposed to be; this seems to be a Fedora-specific client-side issue I
haven't tracked down yet.)
Looking more closely at the tests submitted to the public dashboard
(http://public.kitware.com/dashboard.php), I get the impression that the
BuildName parameter was misnamed and intended to be treated more as a build
platform name, rather than the name of the build being tested. Rather than
create a hook to tag the version being tested, everyone, as far as I can tell,
just seems to rely on the timestamp of the test.
======================================================================
NOTE !!!!!!!!! ctest can actually back out changes that have been made to
your local source repository (this was true for SVN, but I (Kern) am
not sure it is true now that we have switched to Git).

As a consequence, it is probably better not to
use a directory in which you are developing code for Nightly tests. See the
explanation below, given by Frank Sweetser.
When a Nightly run is done, the timestamp is set to the last occurring
instance of the time defined by the NightlyStartTime parameter. The piece
that I missed is that, in addition to using that timestamp for reporting to
the dashboard, the update stage also uses that point in time to determine
exactly which version of the repository to check out.
So if you commit changes at 10PM EST, and then run a Nightly test run,
the NightlyStartTime of 9PM EST will back out those changes in the local
repository. Any subsequent runs that are started at 9PM EST the following day
or later will include them. This implies to me that NightlyStartTime should
be set such that you don't expect any developers to commit any changes in
between NightlyStartTime and the time at which the ctest run actually starts.
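The cutoff computation can be sketched like this (GNU date assumed; 21:00 stands in for the 9PM NightlyStartTime mentioned above):

```shell
#!/bin/sh
# Find the most recent occurrence of the nightly start time (21:00 here).
# Commits made after this instant are backed out by the Nightly update.
now=$(date +%s)
start_today=$(date -d "21:00" +%s)
if [ "$now" -ge "$start_today" ]; then
    cutoff=$start_today
else
    cutoff=$(date -d "yesterday 21:00" +%s)
fi
echo "Nightly checkout cutoff: $(date -d "@$cutoff" '+%Y-%m-%d %H:%M')"
```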
The alternative is to make use of the Experimental track. While it normally
just uses the local source tree as-is, you can manually have it update:

ctest -D ExperimentalUpdate

Unlike Nightly, this will update to whatever the latest version of the
repository is.