
Boost-Build :

From: Steven Knight (knight_at_[hidden])
Date: 2002-03-26 11:56:25


> > I had a hard time getting excited about QMTest, personally. IIRC, it
> > struck me as a very generic test execution and results-gathering and
> > reporting framework. That's fine so far as it goes, but it solves the
> > easy part of the problem. It still leaves you to create all of the
> > actual infrastructure for your testing environment, the stuff that
> > determines whether it's difficult or easy for Joe Developer to actually
> > *write* a test.
>
> This is true: QMTest only solves the generic problem, but it does solve it, and I
> like the solution. For example, I believe that making a QMTest setup for
> SCons would be very simple. (Okay... I will actually try it!)

I'm glad to hear that. In re-reading what I wrote, I didn't mean it
to sound so negative about QMTest. I think I was simply disappointed
relative to what I had hoped it would do.

Using QMTest as a driver for the SCons tests sounds like a win. Right
now, we just use a home-brew wrapper script that sets up the right
environment variables.
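
If it helps to picture it, that wrapper boils down to something like the
sketch below; the paths and details are made up for the example, not our
real layout:

# Hypothetical driver script: point the tests at an in-tree scons
# script and build engine, then run each test script and tally failures.
import glob
import os
import sys

os.environ['SCONS'] = os.path.abspath('src/script/scons.py')  # made-up path
os.environ['PYTHONPATH'] = os.path.abspath('src/engine')      # made-up path

tests = glob.glob('test/*.py')
failed = [t for t in tests if os.system('%s %s' % (sys.executable, t))]

print("%d of %d tests failed" % (len(failed), len(tests)))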

> > In the SCons testing infrastructure, all tests are self-contained Python
> > scripts that execute tests in one or more temporary directories. Any
> > necessary files are created from in-line Python strings.
>
> I like the idea of using an actual directory layout for specifying the tree, for a
> simple reason: I can just create the tree on disk, play with it, and then
> convert it into a test with no effort. Or later, I can easily play with the
> tree that a test uses, in case the test fails.

That's a definite plus for many people; YMMV. The in-line requirement
for our tests stems mostly from our use of the Aegis change management
system, which works a lot better when tests are self-contained.

The advantage I've found is that it makes the tests atomic. You don't
have to worry about failures because someone forgot to list a file, or
the state of the tree hasn't been reset properly. Again, YMMV.

> > There is one underlying generic TestCmd.py module that provides
> > primitives for creating and removing temporary directories, writing
> > files, touching files, comparing actual and expected output, reporting
> > PASSED, FAILED, or NO RESULT, etc.
>
> Actually, when I first saw your code I thought: "Oh... here a file is simply
> read and its content is compared against what's expected, while my code needs
> to build some tree first, just to do the same!"

You still do need to build the right tree (create the SConstruct and
other files, run SCons), but that's all handled internal to the test
script. Checking for expected output is just one common method of
checking for success or failure of an individual test.
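
For what it's worth, a typical use of those primitives looks roughly like
this (a sketch only; 'mytool' and the file contents are invented):

import TestCmd

# Create a temporary working directory and exercise a program in it.
test = TestCmd.TestCmd(program = 'mytool', workdir = '')
test.write('input.txt', "some input\n")               # file from an in-line string
test.run(arguments = 'input.txt')                     # run 'mytool input.txt' in the temp dir
test.fail_test(test.stdout() != "expected output\n")  # compare actual vs. expected
test.pass_test()                                      # report PASSED and clean up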

> My second thought was: does your code
> allow detecting when a file was added/removed

That would typically be handled by something like:

test.run()
test.fail_test(os.path.exists(test.workpath('should_not_exist')))
test.fail_test(not os.path.exists(test.workpath('should_exist')))

(Actually, now that you've caused me to look at this, there should be
an exists() method internal to the TestCmd class so you don't have to
specify the workpath() method. Thanks!)
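
Something like this is what I have in mind (hypothetical, it doesn't exist
in TestCmd.py today):

# Possible convenience method for the TestCmd class:
def exists(self, *path):
    """Does the named file exist under the temporary working directory?"""
    return os.path.exists(self.workpath(*path))

Then the checks above would shrink to
test.fail_test(test.exists('should_not_exist')), and so on.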

> and, more importantly, when it
> was touched/modified. I'm not sure this is straightforward -- an attempt to open
> a nonexistent file will just raise IOError,

In which case the test fails, as you'd expect. If it became important
to catch IOError and handle it in some other way, that capability could
be wrapped up in a TestCmd.py method that would catch the IOError
exception internally.
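
Something along these lines, say (again hypothetical, sketched as a method
that would live in the TestCmd class):

# Read a file in the temporary directory, turning a missing file into
# an ordinary test failure instead of an IOError traceback.
def must_read(self, *path):
    try:
        f = open(self.workpath(*path), 'rb')
    except IOError:
        self.fail_test(1)   # reports FAILED and exits
    contents = f.read()
    f.close()
    return contents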

So far we've used os.path.getmtime() successfully:

oldtime = os.path.getmtime('foo')
test.run()
test.fail_test(oldtime != os.path.getmtime('foo'))

> and you can't predict the expected
> content of a binary file.

But you *can* process the content and look for a specific string inside
that suggests it was built correctly. Or, if it's a program you built,
you can execute it to see that it worked correctly. For example,
here is a stripped-down test of our Program() method for building an
executable:

import os
import string
import sys
import TestSCons

if sys.platform == 'win32':
    _exe = '.exe'
else:
    _exe = ''

test = TestSCons.TestSCons()

foo = test.workpath('foo' + _exe)

test.write('SConstruct', """
env = Environment()
env.Program(target = 'foo', source = 'foo.c')
""")

test.write('foo.c', r"""
#include <stdio.h>
#include <stdlib.h>

int
main(int argc, char *argv[])
{
    argv[argc++] = "--";
    printf("foo.c\n");
    exit (0);
}
""")

test.run(arguments = '.')

test.run(program = foo, stdout = 'foo.c\n')

test.up_to_date(arguments = '.')

test.pass_test()

Note that you can run() an arbitrary program by using the appropriate
keyword argument.

TestCmd.py also supports use of regular expression matches on output,
instead of exact matches, which comes in handy when we're checking
output where the line numbers vary, for example.
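
Written out by hand with the plain re module, the idea is roughly this
(the diagnostic text is invented for the sketch):

import re

# The expected output contains a line number that varies from run to
# run, so match it with a regular expression instead of an exact string.
expect = re.compile(r"foo\.c:\d+: warning: some diagnostic\n")
test.run(arguments = '.')
test.fail_test(not expect.search(test.stdout()))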

> One example is when a static library was not updated
> on Borland: the test needed to touch one source file and then check whether
> the library was touched.

Pretty straightforward using os.path.getmtime(), per above.
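
In sketch form (the library and source file names are invented, and this
assumes a TestSCons test object and an SConstruct that builds the library
are already set up):

import os.path
import time

test.run(arguments = '.')                            # initial build
oldtime = os.path.getmtime(test.workpath('libfoo.lib'))

time.sleep(2)                                        # let the clock tick over
test.write('f1.c', "int f1(void) { return 1; }\n")   # modify one source file
test.run(arguments = '.')                            # rebuild

# The library should have been re-archived, so its timestamp must change.
test.fail_test(oldtime == os.path.getmtime(test.workpath('libfoo.lib')))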

> > The feedback from the developers has been good. They've all found this
> > framework makes writing SCons tests easy enough that they all include
> > new or modified tests in their patches; I haven't had to crack down or
> > bug anyone to provide tests. (Of course, it helps that they *do* know
> > that tests are expected if they want their patch integrated... :-)
>
> Sure :-) Actually, it should be noted that the code which actually senses
> which files are added/changed etc. is "stolen" from the SCM tool Subversion
> (http://subversion.tigris.org), where it's used quite actively to write tests,
> and it seems the developers don't object either.

Thanks for the pointer. I've heard good things about Subversion, but
haven't had a reason yet to look closely. Is this within Subversion
itself, or is it a separate testing infrastructure that Subversion
created?

--SK

 

