
Boost-Build

From: Brad King (brad.king_at_[hidden])
Date: 2002-01-14 23:12:46


Dave,

> We now have 2 choices:
> 1. Try to integrate this with the current Boost.Build
> 2. Push forward with the rewrite
I would also favor number 2. Your argument about investing as little as
possible in the current codebase is my reasoning as well.

> That's basically our existing syntax; super. You should look at the
> python.jam file too, though, since there are lots of tests in there as
> well. It's not well-integrated with the other testing code; I'm sure
> there's some duplicated functionality there. Your testing system
> should make it possible to add new kinds of tests (e.g. python-run),
> or possibly provide a framework through which anything can be run.
I'll look into adding support in a clean manner.

> I hope we can also have "jam test1", which runs the test by the user's
> preferred testing means.
That should be easy to add. All we need is a rule that checks whether an
environment variable names the preferred testing module and then, for every
test, adds a target "test1" that depends on "preferred-test.test1". The
invocation should pass through to the back-end's rules automatically.
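
A minimal sketch of the idea, assuming a hypothetical BOOST_TEST_BACKEND
environment variable naming the preferred module (jam imports environment
variables as global variables):

rule default-backend ( )
{
    local backend = $(BOOST_TEST_BACKEND) ;  # from the environment, if set
    backend ?= boost ;                       # assumed built-in default
    return $(backend) ;
}

rule declare-default ( name )
{
    local b = [ default-backend ] ;
    NOTFILE $(name) ;                  # "test1" is a pseudo-target
    DEPENDS $(name) : $(b).$(name) ;   # delegate to <back-end>.<test>
}

With that, "jam test1" would build "<preferred-back-end>.test1" without the
user having to spell out the back-end name.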

> > To see what a particular test entails, the user can also list a specific
> > test:
> > jam list-tests.test1
> > jam list-tests.test2
>
> What's that going to tell you? The command-line that will get executed
> perhaps?
Right now it just prints out the line used to declare the test, but
without the "test." prefix. That support was mostly just there for
checking my own code, but turned out to be a useful feature in the end.
Getting the command line that will be executed may be tricky unless the
testing back-end supports it: I don't know of a way to get the string back
from an action without actually invoking it (perhaps this is a feature worth
adding to jam if it doesn't exist?).
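
For the curious, the listing mechanism is roughly this (a sketch with
hypothetical names): the declaration text is bound to the listing target and
echoed by an action, since a parse-time ECHO would print every test whether
or not it was requested.

rule list ( name : declaration * )
{
    local t = list-tests.$(name) ;
    NOTFILE $(t) ;                          # never corresponds to a file
    ALWAYS $(t) ;                           # re-print on every request
    DECLARATION on $(t) = $(declaration) ;  # bind the saved text
    list-action $(t) ;
}

actions quietly list-action
{
    echo $(DECLARATION)
}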

> Some other features supported by the current system:
>
> "jam <testname>.run" will run the test even if there's an up-to-date record
> of its success. Now that I think of it, I wonder if it woulnd't be better if
> "jam <testname>" had that behavior, while "jam test" or "jam test-update"
> would only run outdated tests.
Okay, that was something I hadn't considered. Having a record that marks a
test as up-to-date is a good idea, and a natural choice for the record would
be a file containing the test's output. I agree that when a specific test
name is requested, the default behavior should be to run the test even if it
appears up-to-date. For the run-all-tests targets, there should be one that
always runs every test and one that runs only the out-of-date tests.
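
A sketch of the marker idea (rule and action names are hypothetical): the
captured output file is the record, and since jam deletes the targets of
failed actions, the marker survives only a successful run.

rule record-run ( name : exe )
{
    local output = $(name).output ;
    DEPENDS $(output) : $(exe) ;      # stale whenever the exe changes
    capture-run $(output) : $(exe) ;
    NOTFILE $(name) ;
    DEPENDS $(name) : $(output) ;
}

actions capture-run
{
    $(>) > $(<) 2>&1
}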

Fortunately, there is an individual target for each test with each
back-end, and its name is well defined. This makes it easy to add new
rules that can group the tests in any combination. Perhaps having a rule
similar to the test-suite rule in the current system would be useful.
How does this look (just off the top of my head):

test.suite suite-name : test1 test2 ... ;

This would create a target for each back-end called
"back-end-name.suite-name" to run all the tests in the suite. Again, the
default back-end idea would allow a target called "suite-name" to run the
tests with the default back-end.
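
A sketch of how the rule might expand, assuming the back-ends register their
names in a module-local variable like $(.backends) and a default-backend rule
like the one sketched earlier:

rule suite ( suite-name : tests + )
{
    local b ;
    for b in $(.backends)
    {
        NOTFILE $(b).$(suite-name) ;
        # $(b).$(tests) product-expands to b.test1 b.test2 ...
        DEPENDS $(b).$(suite-name) : $(b).$(tests) ;
    }
    local d = [ default-backend ] ;
    NOTFILE $(suite-name) ;
    DEPENDS $(suite-name) : $(d).$(suite-name) ;
}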

Also, as far as running a test versus compiling, how does this sound to
you:

"compile", "compile-fail", "link", and "link-fail" tests are actually
built when they are run since the compile/link steps are the test itself.

"run" and "run-fail" tests have targets that will build them without
actually running them in addition to the normal test execution targets.
The targets that actually run them will simply depend on the build
versions. This way the test will not be re-built if the executable is
up-to-date and the user requests that the test be run. It will also allow
nightly testing to build the run and run-fail tests as part of the normal
build so that any errors show up in the normal build log. This will
provide a means of distinguishing the output from building the run-* tests
and the output from actually running them.
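
In jam terms the split could look like this (a sketch; $(exe) stands for
whatever the compile/link rules produce, and capture-run is the hypothetical
output-capturing action from earlier):

rule run-test ( name : exe )
{
    NOTFILE $(name).build $(name) ;
    DEPENDS $(name).build : $(exe) ;   # build without running
    local output = $(name).output ;
    DEPENDS $(output) : $(exe) ;
    capture-run $(output) : $(exe) ;
    DEPENDS $(name) : $(name).build $(output) ;  # running implies building
}

A nightly build can then request the <name>.build targets to get compile/link
errors into the normal build log, while <name> itself produces the run
output.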

> If you look at the python.jam file, you'll see that there's a
> PYTHON_LAUNCH variable which can be used to say how python is invoked.
> I commonly use this to run a debugger in the same context in which the
> test needs to be run.
I'll look at that. It sounds useful; then the user won't have to figure out
which command line to run just to bring a failed test up in a debugger.
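
For example, something like the following, assuming jam's -s option (which
sets a variable, overriding the environment); the exact launcher string
depends on the debugger:

jam -sPYTHON_LAUNCH=<debugger-command> test1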

> While your approach is basically sound, it will need some adjustment to be
> compatible with the planned rewrite. Some things I noticed:
>
> 1. We don't write "module" explicitly, except in low-level code. See the
> contents of tools/build/new, and especially modules.jam
I was looking at that a bit. I take it that the name of the .jam file
becomes the name of the module automatically? I also see that there is a
nearly empty test.jam file. Should I write the testing front-end under
the assumption that it will be placed into that file (since the module
will probably be called "test" anyway)?

> 2. Part of the plan is to delay generation of targets (meaning the use
> of DEPENDS and action rules) until after the Jamfile has been
> completely read in. There are lots of good reasons for this, which you
> can read about in the message history. So, your initial level of
> indirection/delay will have to be extended.
I'm pretty sure this is the behavior of the current implementation, unless
I'm misunderstanding your request. The only DEPENDS calls and action
invocations are in the "test.invoke" and "test.list" rules, which are not
called until after all the user Jamfiles have been processed. All the
"test.*" declaration rules merely save their arguments in module-local
variables.
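
Concretely, the pattern looks something like this (a sketch of what would sit
in test.jam, where the file name supplies the module and the .-prefixed
variables live in the module):

rule run ( name : sources + )          # a declaration rule
{
    .tests += $(name) ;                # record only; no DEPENDS yet
    .sources.$(name) = $(sources) ;
}

rule invoke ( )                        # called after all Jamfiles load
{
    local t ;
    for t in $(.tests)
    {
        # the real DEPENDS and action invocations happen here,
        # driven by $(.sources.$(t)) and friends
    }
}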

-Brad