Bacula Regression Suite and CTest
==================================================================

Thanks to Frank Sweetser, the Bacula regression scripts have now been
modified to use the ctest component of cmake.  The major gain from this,
since Bacula already had a working test framework in place, is the
ability to have the results of each test submitted to a centralized
dashboard system.  All of the test results are aggregated and summarized
there, so all of the developers can quickly see how the regression tests
are running.

==== How to Use CTest
==================================================================
For more complete documentation on ctest, go to:

   http://www.cmake.org/Wiki/CMake#CTest

The first step is to install the cmake package, which includes ctest.
If your distribution does not already package it, you can download it
directly from the source:

   http://www.cmake.org

Next, you must edit your regression config file and add a parameter
called SITE_NAME to identify the machine running the tests.  Ideally, it
should contain something to identify yourself to whoever is viewing the
test results, as well as something to allow you to identify which
machine is running the tests, for example:

   SITE_NAME=kern-bacula-gumbie

Once you have cmake installed, you can perform one of two different
kinds of runs to submit test results.  The most common kind will be
Nightly runs.  A Nightly ctest run will update the source directory (as
defined by the BACULA_SOURCE setting in your config file) to the current
version, run the specified list of tests, and submit all of the results
to the server.  Note that all of the results in a given 24 hour period
(starting at 9pm EST) are lumped together to appear as a single block,
rather than each test showing up as a different run.

The simplest way to trigger a nightly run is to use one of the two
provided scripts.  The nightly-all script will run all non-root tests,
both tape and disk based, while the nightly-disk script will run only
the disk based tests.  So, you can choose between the following scripts:

   cd <regress directory>
   ./nightly-all          # does disk and tape testing
   ./nightly-disk         # disk only tests
   ./experimental-all     # experimental disk and tape testing
   ./experimental-disk    # experimental disk testing

We recommend that you start with the ./experimental-disk runs so that
you can check that everything is working fine.  Once that is done, try a
nightly-xxx run.  The difference is that the experimental runs are just
that -- they are runs where you are experimenting and it is expected
that something might be broken (bad ctest configuration, experimental
source code, ...), while nightly runs are not expected to fail.

If you are a developer and you have modified your local Git repository,
you should be running the experimental tests -- they are designed for
developers.  Once you have modified your local repository and committed
the changes, you can then run a nightly test.  If you are just doing
testing on a nightly basis (no development in your source repository),
then please use the nightly tests.

All the old scripts (./do_all, do_file, all-non-root-tests, ...)
manually run the tests outside of ctest.
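As a recap of the configuration steps above, the relevant entries in a
regression config file end up looking something like the sketch below.
The source path and site name shown are placeholders only, not required
values, and everything else in your existing config stays as it is:

   # Sketch of the settings discussed above -- adjust for your machine.
   # Source tree that Nightly runs update and build:
   BACULA_SOURCE="${HOME}/bacula/bacula"
   # Identifies you and this machine on the dashboard:
   SITE_NAME=yourname-bacula-yourhost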
Periodically, however, you may want to submit a single test separately
from a nightly run.  This may be a test of a particular patch you're
working on, or perhaps a new OS patch.  For these one-shot tests, you
will want to manually run ctest in Experimental mode, something like:

   REGRESS_DEBUG=1 ctest -D Experimental -R all-non-root:auto-label-test

The '-D Experimental' option tells ctest to submit the test results as
Experimental instead of Nightly.  We recommend that you use the
REGRESS_DEBUG environment variable to ensure that any errors from the
test are logged in the dashboard (all of the ctest wrapper scripts set
it).  The '-R' option gives ctest a regular expression; any tests with a
name, as defined in DartTestfile.txt, that matches the pattern will be
run.  Note that you must have run ./scripts/do_sed at least once already
in order to use Experimental mode.

==== Updating and Building Within CTest:
==================================================================
Before each Nightly run, ctest will automatically update the
BACULA_SOURCE directory and submit these updates along with the test
results.  Experimental runs will not.

Before either type of run actually begins running tests, ctest will run
the script scripts/update-ctest.  This script first compares the version
of BACULA_SOURCE with that of the build/ directory.  If the two versions
differ, or if the build/ directory does not exist, it will automatically
run 'make setup' for you.

==== Viewing the Dashboard:
==================================================================
You can view the dashboard at:

   http://regress.bacula.org/index.php?project=bacula

Results will not be visible immediately after they are submitted to the
server.  Processing is currently done every 10 minutes, so you may have
to wait up to 15 minutes or so before your results show up.

==== Getting CTest running on Solaris (thanks to Robert Hartzell):
============================================================
The regression suite is working in a zone on OpenSolaris build 126.
Create a zone and install these packages: SUNWcmake, SUNWmysql5,
SUNWlibm, gcc-dev, SUNWgtar, SUNWgit, SUNWperl584usr, and most of
SUNWgnu*.

In the config file I edited the "WHICHDB=" line to read:

   WHICHDB="--with-mysql=/usr/mysql/5.0"

And then in the shell:

   $ export LDFLAGS="-L/usr/mysql/5.0/lib/mysql -R/usr/mysql/5.0/lib/mysql"
   $ PATH=/usr/gnu/bin:$PATH

When I first ran "make setup" it failed because it could not create the
database, so I had to run /usr/mysql/5.0/bin/mysql -u root mysql and do
this:

   grant all privileges on regress.* to ''@localhost;
   grant all privileges on regress.* to ''@"%";
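For convenience, the preparation above can be collapsed into a single
shell session roughly like the sketch below.  This is only a sketch
based on the notes above; the use of 'pkg install', the package names,
and the MySQL 5.0 paths are assumptions about an OpenSolaris build 126
zone and may need adjusting for your install:

   # 1. install the packages listed above (plus the SUNWgnu* tools you need)
   pkg install SUNWcmake SUNWmysql5 SUNWlibm gcc-dev SUNWgtar SUNWgit SUNWperl584usr

   # 2. in the regress config file:  WHICHDB="--with-mysql=/usr/mysql/5.0"

   # 3. environment for the build
   export LDFLAGS="-L/usr/mysql/5.0/lib/mysql -R/usr/mysql/5.0/lib/mysql"
   PATH=/usr/gnu/bin:$PATH

   # 4. grant the anonymous user access to the regress database
   /usr/mysql/5.0/bin/mysql -u root \
       -e "grant all privileges on regress.* to ''@localhost;
           grant all privileges on regress.* to ''@'%';" mysql

   # 5. now 'make setup' should get through database creation
   make setup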
==== CTest script details:
=========================================================
The following email from Frank describes the flow when running ctest and
some of the problems that come up.

0. Start off with a local Git repository at version A, and the master
   repository at version B.

1. nightly-disk is started.

2. nightly-disk runs scripts/config_dart.

3. config_dart runs scripts/create_sed.

4. create_sed pulls bversion and bdate out of the current local repo, so
   it gets version A values.

5. config_dart then creates DartConfiguration.tcl from the .in file,
   leaving a BuildName parameter of A.

6. nightly-disk then runs 'ctest -D Nightly'.  This implicitly tells
   ctest to perform the Update, Configure, Build, Test, and Submit
   stages, in that order.

7. The Update stage runs 'git pull' on the local repository.  The local
   repository is now updated to version B from the master, but since the
   DartConfiguration.tcl file was already created and has not been
   updated, the Update.xml file still has the version A BuildName.

8. Next, the Configure stage runs.  Since the configure process is
   handled in tandem with the build process by 'make setup', this just
   calls /bin/true so as to not throw any false errors, and can
   effectively be treated as a no-op.

9. Next is the Build stage, which is handled by calling
   scripts/update-ctest.

10. update-ctest checks the Git versions of regress/build vs
    BACULA_SOURCE.  Since the two are different (regress/build is still
    version A, while BACULA_SOURCE has been updated to B), it calls
    'make setup'.

11. 'make setup' copies BACULA_SOURCE to regress/build and configures
    and builds it.  It then calls scripts/do_sed.

12. do_sed calls scripts/config_dart again.  Since regress/build has
    been updated to B, it regenerates DartConfiguration.tcl with a
    version B BuildName.

13. ctest now generates the Build.xml results file, but since BuildName
    was still A when this stage began, this is what appears in the XML
    BuildName.

14. Done with the Build stage, ctest moves on to the Test stage, where
    it actually calls the various test/ scripts as defined by
    DartTestfile.txt and filtered by the -R option.  At this point,
    since ctest is beginning a new stage, it appears to re-read
    DartConfiguration.tcl (I believe this is intended to allow ctest to
    bootstrap itself in a virgin cmake managed source code tree, where
    the test configuration should be generated by cmake).  The final
    Test.xml file, therefore, contains a version B BuildName string, as
    opposed to all previous steps, which have version A.

15. Finally, ctest flings the results at the dashboard.  The dashboard
    ignores the order and grouping in which XML files are submitted, and
    instead uses the site/buildname tuple to distinguish them (at least
    for Nightly runs, I'm not so sure about Experimental ones).  In this
    case, instead of one complete run, the dashboard sees two runs, one
    of which is Update through Make, and a second one which is Test
    only.

For a sample of the problem, take a look at the fsweetser 2.3 sqlite3
Nightly runs at

   http://regress.bacula.org:8081/Bacula/Dashboard/Dashboard?trackid=29

The Update and Build information show up with a BuildName of
bacula-2.3.10-26Feb08-Linux-sqlite3, then, after the git pull hits, the
Test information shows up with bacula-2.3.11-03Mar08-Linux-sqlite3.
(Ignore for the moment the fact that my timestamps are at 6:59PM, rather
than at 9PM where they're supposed to be; this seems to be a
Fedora-specific client side issue I haven't tracked down yet.)

Looking more closely at the tests submitted to the public dashboard
(http://public.kitware.com/dashboard.php), I get the impression that the
BuildName parameter was misnamed, and is intended to be treated more as
a build platform name rather than the name of the build being tested.
Rather than create a hook to tag the version being tested, everyone, as
far as I can tell, just seems to rely on the timestamp of the test.
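Step 6 above notes that 'ctest -D Nightly' implicitly chains the Update,
Configure, Build, Test, and Submit stages.  When chasing a problem like
the BuildName mismatch described here, it can help to drive the stages
one at a time; ctest also accepts per-stage actions of the form
<Track><Stage>, roughly as sketched below (the -R pattern reuses the
earlier example and is only illustrative):

   # Run the Nightly stages individually instead of 'ctest -D Nightly'
   # (a debugging sketch; the combined form also starts the Nightly tag)
   ctest -D NightlyStart        # begin a new Nightly tag
   ctest -D NightlyUpdate       # 'git pull' the local repository
   ctest -D NightlyConfigure    # just /bin/true here, see step 8
   ctest -D NightlyBuild        # runs scripts/update-ctest, see steps 9-12
   ctest -D NightlyTest -R all-non-root:auto-label-test
   ctest -D NightlySubmit       # send the accumulated XML to the dashboard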
====== Important Note:
======================================================================
NOTE!!!!!  ctest can actually back out changes that have been made to
your local source repository (this was true for SVN, but I (Kern) am not
sure it is still true now that we have switched to Git).  As a
consequence, it is probably better not to use a directory in which you
are developing code for Nightly tests.  See the explanation below, given
by Frank Sweetser.

When a Nightly run is done, the timestamp is set to the last occurring
instance of the time defined by the NightlyStartTime parameter.  The
piece that I missed is that, in addition to using that timestamp for
reporting to the dashboard, the update stage also uses that point in
time to determine exactly which version of the repository to check out.
So if you commit changes at 10PM EST and then run a Nightly test run,
the NightlyStartTime of 9PM EST will back out those changes in the local
repository.  Any subsequent runs that are started at 9PM EST the
following day or later will include them.

This implies to me that NightlyStartTime should be set such that you
don't expect any developers to commit any changes between
NightlyStartTime and the time at which the ctest run actually starts.

The alternative is to make use of the Experimental track.  While it
normally just uses the local source tree as is, you can manually have it
update:

   ctest -D ExperimentalUpdate

Unlike Nightly, this will update to whatever the latest version of the
repository is.
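Putting those last two points together, a one-off Experimental pass
against the very latest sources could look something like the following
sketch (the -R pattern simply reuses the earlier example; pick whatever
pattern suits the tests you want to run):

   cd <regress directory>
   ctest -D ExperimentalUpdate      # pull the local tree up to the latest version
   REGRESS_DEBUG=1 ctest -D Experimental -R all-non-root:auto-label-test

The update itself is just a pull on the local source tree, so the
Experimental run that follows builds and tests the freshly updated
sources without any NightlyStartTime surprises.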