
Boost Interest:

From: troy d. straszheim (troy_at_[hidden])
Date: 2008-05-31 15:50:41


Apologies if this mail rambles a bit. It discusses
the architecture of ctest, how this impacts us, and a (potential) way to
integrate our testing/reporting directly with cmake that removes many of these
problems. Doug at some point said that we should stick with ctest because
of the good integration with cmake: I argue here that we can get *better*
integration with cmake, and therefore with any dart/bitten-type system, by removing
the intermediate ctest step.

CTest is advertised to work with or without cmake. At configure time,
CMake writes textfiles (called 'CTestTestfile.cmake') to the build
area that contain lists of tests to run.

When run, CTest reads these lists of tests in, runs them, and
redirects the output to logfiles. It then scrapes the results of
builds/tests out of the logfiles, tries (with varying degrees of
success) to identify errors, and posts them in large chunks.

This log-scraping is an architectural weak point that is not going to
go away. For instance, here is a snippet from ctest's source:

static const char* cmCTestErrorMatches[] = {
   "^[Bb]us [Ee]rror",
   "^[Ss]egmentation [Vv]iolation",
   "^[Ss]egmentation [Ff]ault",
   "([^ :]+):([0-9]+): ([^ \\t])",
   "([^:]+): error[ \\t]*[0-9]+[ \\t]*:",
   "^Error ([0-9]+):",
   "^Error: ",
   "^Error ",
   "[0-9] ERROR: ",
   "^\"[^\"]+\", line [0-9]+: [^Ww]",
   "^cc[^C]*CC: ERROR File = ([^,]+), Line = ([0-9]+)",
   "^ld([^:])*:([ \\t])*ERROR([^:])*:",
   "^ild:([ \\t])*\\(undefined symbol\\)",
   "([^ :]+) : (error|fatal error|catastrophic error)",
   "([^:]+): (Error:|error|undefined reference|multiply defined)",
   "([^:]+)\\(([^\\)]+)\\) : (error|fatal error|catastrophic error)",
   "^fatal error C[0-9]+:",
   ": syntax error ",
   "^collect2: ld returned 1 exit status",
   "ld terminated with signal",
   "Unsatisfied symbols:",
   "Undefined symbols:",
   "^Undefined[ \\t]+first referenced",
   "^CMake Error:",
   ":[ \\t]cannot find",
   ":[ \\t]can't find",
   ": \\*\\*\\* No rule to make target \\`.*\\'. Stop",
   ": \\*\\*\\* No targets specified and no makefile found",
   ": Invalid loader fixup for symbol",
   ": Invalid fixups exist",
   ": Can't find library for",
   ": internal link edit command failed",
   ": Unrecognized option \\`.*\\'",
   "\", line [0-9]+\\.[0-9]+: [0-9]+-[0-9]+ \\([^WI]\\)",
   "ld: 0706-006 Cannot find or open library file: -l ",
   "ild: \\(argument error\\) can't find library argument ::",
   "^could not be found and will not be loaded.",
   "s:616 string too big",
   "make: Fatal error: ",
   "ld: 0711-993 Error occurred while writing to the output file:",
   "ld: fatal: ",
   "final link failed:",
   "make: \\*\\*\\*.*Error",
   "make\\[.*\\]: \\*\\*\\*.*Error",
   "\\*\\*\\* Error code",
   "nternal error:",
   "Makefile:[0-9]+: \\*\\*\\* .* Stop\\.",
   ": No such file or directory",
   ": Invalid argument",
   "^The project cannot be built\\.",
   /* ... */
};

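To see how brittle this kind of pattern matching is, here is a small
Python sketch; the three regexes are copied from the list above, and
the two sample log lines are made-up examples, not ctest output:

```python
import re

# Three of ctest's error-detection regexes, copied from the list above.
patterns = [
    r"^Error ",
    r"([^ :]+):([0-9]+): ([^ \t])",
    r": No such file or directory",
]

def looks_like_error(line):
    """Return True if any pattern flags this log line as an error."""
    return any(re.search(p, line) for p in patterns)

# A benign compiler note in file:line: form trips the scraper...
print(looks_like_error("foo.cpp:11: instantiated from here"))    # True

# ...while a failure whose wording these patterns don't anticipate
# slips through undetected.
print(looks_like_error("lld: undefined hidden symbol: _start"))  # False
```

Both failure modes (false positives and silent misses) come straight
from relying on the textual shape of tool output.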

As a consequence of this architecture, there are a number of things
that we cannot easily do on any build reporting system that ingests
only the information currently reported by ctest:

=== Catalogue o' Worries ===

1. Make a nifty "M of N steps completed" graph for in-progress builds,
     with good resolution.

2. Pinpoint the source of certain errors. See

     for a discussion of a case where link errors are not reported. It
     is neither possible to tell immediately what went wrong, nor to
     tell where in the dependency tree one was when the errors occurred
     (one must infer it from the list of other targets that failed,
     some of which might also be 'unknown').

3. Finely control the rate at which build/test results are submitted. CTest
     directly supports reporting only after Configure, Build, and Test. The
     finest reporting granularity you can get out of ctest looks like this:

       ctest -D ExperimentalStart # starts a new 'tag' for a build
       ctest -D ExperimentalConfigure # runs cmake
       ctest -D ExperimentalSubmit # post results of that configure
       ctest -D ExperimentalBuild # run build
       ctest -D ExperimentalSubmit # post results of that build
       ctest -D ExperimentalTest # run tests
       ctest -D ExperimentalSubmit # post results of that test run

     By contrast, the natural 'post rate' for boost (also for IceCube,
     presumably also for KDE) is per-library, something like:

       * configure
       * post (include upcoming list of libs-to-build and libs-to-test)
       * build lib1
       * post
       * build lib2
       * post
       * build libn
       * post
       * test lib1
       * post
       * test lib2
       * post

4. Tell how many *successful* build steps were executed. ctest
     reports only failures. For instance, if I run an incremental
     build and look at the results on dart, I don't know how many
     files were actually rebuilt. I often want to know this information,
     though: for instance, if the patches I committed haven't fixed
     certain test failures, I really want to be able to check that the
     tests themselves were actually rebuilt.

5. See the actual commands executed to run certain builds. One often
     wants to do this when chasing build misconfigurations: what were
     the flags this lib was built with?

Integration-wise, testing targets aren't really first class citizens
of the cmake-generated makefiles.
'make test' executes ctest in a fragile cascade of subshells.

Also, ctest hardcodes a set of 'testing models': Nightly, Continuous,
Experimental, which are arbitrary, distracting, and couple the system
that runs the builds (the ctest side) to the system that displays them
(the dart/cdash/etc side).

=== One possible solution ===

Some time ago I wrote a php-based build-displaying thing called
'snowblower', which we loved but decided to abandon when we went to
cmake, as the benefits of makefile generation far exceeded the cost of
losing our venerable tool. As this was before we switched to cmake,
we were using gnu make only (no nmake, no bsd make).

When building it, I wasn't willing to scrape logfiles: it seemed
obvious that the way to get at the relevant information was to have
'make' report it. Make knows what it is doing; the main problem is
getting it to report this information in a way that is palatable to
other tools. (The Right Way to do all this would be to have 'make'
itself report build results in sanitized XML format, but it doesn't.)

So I simply wrapped the execution of each build step, xmlizing and
storing the results of each target. This took a bit of doing to get
right but in the end worked quite reliably.

=== Example: the problem ===

Consider this simple makefile, which builds a
shared library from files foo.cpp and bar.cpp:

   %.o : %.cpp
           $(GCC) -fPIC -DFLAG_I_WILL_NEED_TO_KNOW_ABOUT -c $^ -o $@

    : foo.o bar.o
           $(LD) -shared -o $@ $^

with this foo.cpp:

   template <typename T>
   struct blah {
     typedef typename T::result_type result_type;
   };

   int foo() {
     blah<bool&> bi;
   }

and this bar.cpp:

   int bar() { }

When you make it (with -i, ignore-errors),

   % make -i
   gcc -fPIC -DFLAG_I_WILL_NEED_TO_KNOW_ABOUT -c -o foo.o foo.cpp
   foo.cpp: In instantiation of 'blah<bool&>':
   foo.cpp:11: instantiated from here
   foo.cpp:6: error: 'bool&' is not a class, struct, or union type
   make: [foo.o] Error 1 (ignored)
   gcc -fPIC -DFLAG_I_WILL_NEED_TO_KNOW_ABOUT -c -o bar.o bar.cpp
   gcc -shared -o foo.o bar.o
   gcc: foo.o: No such file or directory
   make: [] Error 1 (ignored)

This illustrates several of the main log-scraping problems:

- error messages vary depending on the type of target you're building (.o vs .so)

- some files are built successfully, and you want that information too

- error messages contain xml-unsafe characters

- there is a compile flag that you want to be able to see on the build
   reporting website, even if the target builds successfully (e.g. it might
   explain a link error elsewhere)

=== Example: the solution ===

So I wrapped the interesting targets in a script that (configurably)
records the command, executes a subshell, sets a timer, captures and
sanitizes stdout/stderr, pretty-prints stuff, etc. I'm not proposing
that we use it (it is perl) but it is here:

You run it like this:

   run_cmd MODE NAME TARGET cmd arg1 arg2 ... argN

for instance

   run_cmd X compile_cpp foo.o gcc -fPIC -c foo.cpp

where MODE is

   V = verbose, but no xml
   X = capture output and create xml
   Q = quiet

NAME is an identifier that goes into the xml and is reported in
various ways depending on MODE, and TARGET is another identifier that
goes into the xml.
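For reference, here is a minimal Python sketch of such a wrapper. This
is not the original perl script: the behavior is simplified, and the
XML element names (task, exec, output, time) are invented for
illustration:

```python
import subprocess
import sys
import time
from xml.sax.saxutils import escape

def run_cmd(mode, name, target, argv):
    """Run one build step, capturing and optionally xml-izing its output."""
    start = time.time()
    proc = subprocess.run(argv, stdout=subprocess.PIPE,
                          stderr=subprocess.STDOUT, text=True)
    elapsed = time.time() - start
    if mode == "V":
        # verbose: show the exact command, then its output
        print(name, target)
        print(" ".join(argv))
        sys.stdout.write(proc.stdout)
    elif mode == "X":
        # xml: sanitized output, safe to ship to a reporting tool
        print('<task name="%s" target="%s">' % (escape(name), escape(target)))
        print('  <exec>%s</exec>' % escape(" ".join(argv)))
        print('  <output>%s</output>' % escape(proc.stdout))
        print('  <time>%.2f</time>' % elapsed)
        print('</task>')
    else:
        # quiet: just say which step ran
        print(name, target)
    return proc.returncode

if __name__ == "__main__" and len(sys.argv) > 4:
    sys.exit(run_cmd(sys.argv[1], sys.argv[2], sys.argv[3], sys.argv[4:]))
```

Invoked as `run_cmd.py X compile_cpp foo.o gcc -fPIC -c foo.cpp`, it
mirrors the MODE/NAME/TARGET interface described above.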

So each target in the makefile is wrapped (new makefile):



   %.o : %.cpp
           @$(WRAPPER) $(MODE) compile_cpp $@ $(GCC) -fPIC -DFLAG_I_WILL_NEED_TO_KNOW_ABOUT -c $^ -o $@

    : foo.o bar.o
           @$(WRAPPER) $(MODE) link_shared $@ $(LD) -shared -o $@ $^

and you get functionality like this:

Regular 'quiet' mode:

   % make -i

   compile_cpp foo.o
   foo.cpp: In instantiation of 'blah<bool&>':
   foo.cpp:11: instantiated from here
   foo.cpp:6: error: 'bool&' is not a class, struct, or union type
   make: [foo.o] Error 1 (ignored)
   compile_cpp bar.o
   gcc: foo.o: No such file or directory
   make: [] Error 1 (ignored)

or verbose mode:

   % make -i MODE=V

   compile_cpp foo.o
   gcc -fPIC -DFLAG_I_WILL_NEED_TO_KNOW_ABOUT -c foo.cpp -o foo.o
   foo.cpp: In instantiation of 'blah<bool&>':
   foo.cpp:11: instantiated from here
   foo.cpp:6: error: 'bool&' is not a class, struct, or union type
   make: [foo.o] Error 1 (ignored)
   compile_cpp bar.o
   gcc -fPIC -DFLAG_I_WILL_NEED_TO_KNOW_ABOUT -c bar.cpp -o bar.o
   gcc -shared -o foo.o bar.o
   gcc: foo.o: No such file or directory
   make: [] Error 1 (ignored)

or the xml mode:

   % make -i MODE=X

           <exec>gcc -fPIC -DFLAG_I_WILL_NEED_TO_KNOW_ABOUT -c foo.cpp -o foo.o</exec>

           <output>foo.cpp: In instantiation of &#39;blah&lt;bool&amp;&gt;&#39;:
   foo.cpp:11: instantiated from here
   foo.cpp:6: error: &#39;bool&amp;&#39; is not a class, struct, or union type</output>

           <exec>gcc -fPIC -DFLAG_I_WILL_NEED_TO_KNOW_ABOUT -c bar.cpp -o bar.o</exec>
           <exec>gcc -shared -o foo.o bar.o</exec>
           <output>gcc: foo.o: No such file or directory</output>


So the general idea is to use such an approach with our cmake-generated
makefiles.

Of note:

- No logfiles were scraped or otherwise harmed in the generation of
   the output above.

- You have full information about the structure of the build and the
   location of the errors (caveat below).

- Obviously the wrapper is perly and unixy. I have no idea how
   practical it would be to get this kind of thing via NMAKE.EXE, but
   I'm sure one of you does.

- This wrapper executes a subshell. This costs time, and in the
   example above (i.e. hardcoded, gnu make only), this costs all
   developers a subshell per build step all the time. CMake has a huge
   advantage here: using a mechanism like CMAKE_VERBOSE_MAKEFILE, this
   could be configured at generation time and only testers would pay.

- Actually, the XML above doesn't have quite enough information. For
   instance, you can't tell that the compile of bar.o, above, is a
   child task of the library link. Off the top of my head: it may
   be easier to give each task a <parent> tag than to try to capture,
   reorder, and properly nest the tasks within one another. Needs thought.

- To capture test output, you simply make test-run targets
   first class citizens of make-land. You don't
   run ctest to test, you run 'make'. So e.g. boost_test_run() would
   not call add_test(); it would generate real cmake targets that run
   tests. Again, better integration.
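The <parent>-tag idea from the list above could be sketched as
follows; the flat task stream and its element/attribute names are
invented for illustration, not an existing format. The wrapper emits
tasks flat, in execution order, and the reporting side rebuilds the
tree:

```python
import xml.etree.ElementTree as ET

# Hypothetical flat task stream: each task names its parent instead of
# being nested inside it, so the wrapper never has to buffer or reorder.
flat = """<build>
  <task id="foo.o" parent="libfoo"/>
  <task id="bar.o" parent="libfoo"/>
  <task id="libfoo"/>
</build>"""

# The reporting side rebuilds the parent/child structure.
children = {}
for task in ET.fromstring(flat):
    parent = task.get("parent")
    if parent:
        children.setdefault(parent, []).append(task.get("id"))

print(children)  # {'libfoo': ['foo.o', 'bar.o']}
```

This keeps the wrapper simple (append-only output) at the cost of a
trivial reconstruction pass on the dashboard side.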

On integration into cmake:

- CMake already does something similar: Think
   CMAKE_VERBOSE_MAKEFILE and the toggleable fancy colorization
   and percent-completed display.

- CMake already builds against its own curl or a system curl (it is
   used by ctest). So the code one needs to post results is there
   already, if you wanted to code up this wrapper in C++ and push it
   upstream to the cmake project. I can see starting with some small
   python scripts.

- The 'wrapper' could conceivably be coded up in C++ and
   built/installed by the cmake distribution. Maybe distributing
   a python script is just as easy.

I think that covers the business of getting at the build results in a
non-lossy way.

Referring to Catalogue O' Worries entry #3, ideally one would like to
post results

- at the end of the build of each component (e.g. libBAR, libFOO),
- at the end of the build of the *tests* for this component,
- and at the end of the *run* of the tests for this component

and to do so recursively, i.e. if libBAR depends on libFOO, the
build/test/post of libFOO will be done automagically. This implies
peppering the build dependency tree with intermediate targets
that collect results and post them.

Again sorry this mail got so long. Interested in your comments about
the feasibility of all this.


Boost-cmake list run by bdawes at, david.abrahams at, gregod at, cpdaniel at, john at