Boost-Build :

From: Steven Knight (knight_at_[hidden])
Date: 2002-03-26 09:56:15


> > Second, the code to run individual tests. Right now I have naive
> > code based on Python unit tests, and IIRC it does not even allow
> > running any individual test. I really don't want to do anything
> > about it -- it was a temporary solution.
> >
> > The question is:
> >
> > What to do with the second component? We will most likely need
> > something more usable. If I were to choose, I'd simply use QMTest
> > (http://www.qmtest.com) for this. It is quite functional, quite easy
> > to use, and is written in Python -- i.e. test code for the build
> > system will be easy to integrate with it. On the other hand, it's
> > more than a couple of modules -- i.e. if somebody wants to test the
> > build system, he'll need to get QMTest. I actually don't think this
> > is a big problem. So, what are others' opinions on this?
>
> The SCons project has a mature Python-based system for testing that
> has worked well for them. Should we reinvent the wheel, or would it be
> better to just hijack the work they've done?

You'd be more than welcome to do so, if you'd like. I'd be happy to
provide code, doc, advice, etc. in any way that's useful.

I had a hard time getting excited about QMTest, personally. IIRC, it
struck me as a very generic test-execution, results-gathering, and
reporting framework. That's fine as far as it goes, but it solves the
easy part of the problem. It still leaves you to create all of the
actual infrastructure for your testing environment -- the stuff that
determines whether it's easy or difficult for Joe Developer to actually
*write* a test.

> > For those who have never seen my test code, I'll outline my vision
> > of how the test suite might look. It will be a directory hierarchy
> > (corresponding to the test hierarchy), and at some level we might
> > have this:
> >
> > <some directory>
> > test1.qmt
> > test2.qmt
> > test1-tree
> > test2-tree
> >
> > *.qmt are files that describe tests, and test{1,2}-tree are just
> > directories. They will contain the initial state of the source tree
> > for the tests. Inside the *.qmt files, Python code will be used to
> > drive the tests, something like
> > touch_file("src/source.cpp")
> > run_build_system()
> > expect_touch("build/main/release/runtime-link-dynamic/source.o")
>
> I'm inclined to go with whatever seems most natural to you, *as long
> as* there's an easy way for the rest of us to learn how to use it. It
> would be worth hearing what Steven Knight has to say about the SCons
> stuff, though.
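
An aside before the SCons overview: the helpers in Vladimir's sketch
above could be quite thin. Here's a rough, purely illustrative guess at
implementations -- the helper names come from his example, but the
"bjam" invocation and the mtime-snapshot check are placeholders of
mine, not anything Boost.Build actually provides:

import os
import subprocess

_snapshot = {}

def _output_files(top='build'):
    # Walk the build output tree; yields nothing if it doesn't exist yet.
    for dirpath, dirnames, filenames in os.walk(top):
        for name in filenames:
            yield os.path.join(dirpath, name)

def touch_file(path):
    # Update a source file's timestamp so the build sees it as changed.
    os.utime(path, None)

def run_build_system(command=('bjam',)):
    # Record existing output timestamps, then run the build.
    global _snapshot
    _snapshot = dict((p, os.path.getmtime(p)) for p in _output_files())
    subprocess.check_call(list(command))

def expect_touch(path):
    # Fail unless the given output file was created or rewritten since
    # the last run_build_system() call.  (Coarse: relies on filesystem
    # mtime resolution.)
    assert os.path.exists(path), "missing expected output: " + path
    old = _snapshot.get(path)
    assert old is None or os.path.getmtime(path) > old, \
        "expected %s to be rebuilt" % path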

Here's an overview:

In the SCons testing infrastructure, all tests are self-contained Python
scripts that run in one or more temporary directories. Any files a test
needs are created from in-line Python strings.

There is one underlying generic TestCmd.py module that provides
primitives for creating and removing temporary directories, writing
files, touching files, comparing actual and expected output, reporting
PASSED, FAILED, or NO RESULT, etc.
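
For instance, a test that needs nothing SCons-specific can use that
generic layer directly, roughly like this (the constructor arguments
here are from memory and may be slightly off, but the method names are
the same ones used in the examples below):

import TestCmd

# workdir='' asks TestCmd to create (and later remove) a temporary
# directory for this test to work in.
test = TestCmd.TestCmd(workdir='')

test.write('file.in', "file.in contents\n")
test.fail_test(test.read('file.in') != "file.in contents\n")
test.pass_test()    # report PASSED and clean up the temporary directory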

There is one TestSCons.py subclass module that provides the
SCons-specific infrastructure common to all tests: creating a temporary
directory and chdir()'ing to it, setting the program under test to
"scons.py", and knowing how to check for an up-to-date build.

Example: Here's the simplest test in the suite, which checks that our
-h option properly displays the help text provided in the user's
SConstruct file (our Makefile equivalent):

import TestSCons

test = TestSCons.TestSCons()

test.write('SConstruct', r"""
Help("Help text\ngoes here.\n")
""")

expect = """Help text
goes here.

Use scons -H for help about command-line options.
"""

test.run(arguments = '-h', stdout = expect)

test.pass_test()

That should be reasonably self-explanatory. Here's a more involved
example that checks for a proper build of targets in a subdirectory, and
that a second build is up-to-date:

import TestSCons

test = TestSCons.TestSCons()

test.subdir('subdir')

test.write('build.py', r"""
import sys
contents = open(sys.argv[2], 'rb').read()
file = open(sys.argv[1], 'wb')
file.write(contents)
file.close()
""")

test.write('SConstruct', """
B = Builder(name = "B", action = "python build.py $TARGETS $SOURCES")
env = Environment(BUILDERS = [B])
env.B(target = 'subdir/f1.out', source = 'subdir/f1.in')
env.B(target = 'subdir/f2.out', source = 'subdir/f2.in')
env.B(target = 'subdir/f3.out', source = 'subdir/f3.in')
env.B(target = 'subdir/f4.out', source = 'subdir/f4.in')
""")

test.write(['subdir', 'f1.in'], "f1.in\n")
test.write(['subdir', 'f2.in'], "f2.in\n")
test.write(['subdir', 'f3.in'], "f3.in\n")
test.write(['subdir', 'f4.in'], "f4.in\n")

test.run(arguments = 'subdir')

test.fail_test(test.read(['subdir', 'f1.out']) != "f1.in\n")
test.fail_test(test.read(['subdir', 'f2.out']) != "f2.in\n")
test.fail_test(test.read(['subdir', 'f3.out']) != "f3.in\n")
test.fail_test(test.read(['subdir', 'f4.out']) != "f4.in\n")

test.up_to_date(arguments = 'subdir')

test.pass_test()

Note that we use Python lists like ['subdir', 'f1.out'] to express
path names. This keeps the tests portable because the underlying
TestCmd.py module uses os.path.join() on any lists it finds to create an
OS-specific path name.
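
Roughly, the translation amounts to the following (an illustrative
snippet, not the module's actual code):

import os.path

parts = ['subdir', 'f1.out']
path = os.path.join(*parts)   # 'subdir/f1.out' on POSIX, 'subdir\f1.out' on Windows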

The feedback from the developers has been good. They've found this
framework makes writing SCons tests easy enough that they all include
new or modified tests in their patches; I haven't had to crack down on
or bug anyone to provide tests. (Of course, it helps that they *do*
know tests are expected if they want their patch integrated... :-)

If it looks like you want to adopt any of this for Boost, the right
way would be to create a TestBoost.py subclass of TestCmd.py that you'd
tailor to the specific needs of Boost (the functionality Vladimir
alluded to above). I've been meaning to put TestCmd.py out on the
Vaults of Parnassus anyway, but haven't had the time, what with trying
to bring the SCons feature set up to modern standards. This would
certainly provide the right incentive...
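
Very roughly, such a subclass might start out like the sketch below.
The layout mirrors what TestSCons.py does, but the "bjam" program name
and the helper methods are placeholders of mine, not working
Boost.Build code; helpers along the lines of Vladimir's touch_file()
and expect_touch() would become methods on it as well.

import os
import TestCmd

class TestBoost(TestCmd.TestCmd):
    # Sketch of a Boost.Build-specific layer on top of TestCmd.py.

    def __init__(self, **kw):
        kw.setdefault('program', 'bjam')   # however the build system gets invoked
        kw.setdefault('workdir', '')       # '' => create a temporary directory
        TestCmd.TestCmd.__init__(self, **kw)
        os.chdir(self.workdir)             # run each test inside its temp dir

    def run_build_system(self, arguments=''):
        # Run the build system in the test's temporary directory.
        self.run(arguments=arguments)

    def expect_content(self, path, expected):
        # Fail the test unless the given file has the expected contents.
        self.fail_test(self.read(path) != expected)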

Even if you go with something else, I hope this has provided some useful
ideas. If I can answer any follow-on questions, let me know.

--SK

 

