
Boost-Build :

From: David Abrahams (david.abrahams_at_[hidden])
Date: 2002-01-15 00:06:09


A request: please leave a blank line between text you quote and your new
text. I have trouble separating them otherwise. Thanks.

----- Original Message -----
From: "Brad King" <brad.king_at_[hidden]>

>
> > I hope we can also have "jam test1", which runs the test by the user's
> > preferred testing means.

> That should be easy to add. All we need to do is have a rule that checks
> if an environment variable is set with the preferred testing module and
> add a target "test1" that depends on "preferred-test.test1" for every
> test. This should pass through the rule invocations automatically.

We're not going to rely on environment variables. There are simply too many
things that a user would want to configure. Instead, we'll have
user-config.jam and site-config.jam in the BOOST_BUILD_PATH, which can
import modules and invoke rules to set preferences:

test.default-backend superdupertester ;
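For concreteness, a user-config.jam might read something like this -- just
a sketch, none of the names are final:

    # user-config.jam -- somewhere on BOOST_BUILD_PATH
    # Make the test module's configuration rules available.
    import test ;
    # Pick the back-end used when no specific one is requested.
    test.default-backend superdupertester ;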

> > > To see what a particular test entails, the user can also list a
> > > specific test:
> > > jam list-tests.test1
> > > jam list-tests.test2
> >
> > What's that going to tell you? The command-line that will get executed
> > perhaps?

> Right now it just prints out the line used to declare the test, but
> without the "test." prefix. That support was mostly just there for
> checking my own code, but turned out to be a useful feature in the end.
> It may be tricky to get the command line that will be executed unless the
> testing back-end supports it because I don't know if there is a way to get
> the string back from an action without actually invoking it (perhaps this
> is a feature worth adding to jam if it doesn't exist??).

...my answer to this is too complicated to type at this late hour... ;-)

> > Some other features supported by the current system:
> >
> > "jam <testname>.run" will run the test even if there's an up-to-date
record
> > of its success. Now that I think of it, I wonder if it woulnd't be
better if
> > "jam <testname>" had that behavior, while "jam test" or "jam
test-update"
> > would only run outdated tests.

> Okay, that was something I hadn't considered. Actually having a record
> marking a test as up-to-date is a good idea. I would think a good choice
> for this mark would be a file containing the test's output.

Please examine the stuff Joerg is working on, or status/Jamfile. It already
does exactly that.
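The basic shape -- with names invented here, not what status/Jamfile
actually uses -- is an action whose target is the output file and whose
source is the test executable:

    # Sketch only; the rule name and .output suffix are made up.
    rule run-test ( test-exe )
    {
        local record = $(test-exe).output ;
        DEPENDS $(record) : $(test-exe) ;  # re-run only when the exe changes
        capture-run $(record) : $(test-exe) ;
        return $(record) ;
    }

    actions capture-run
    {
        $(>) > $(<)
    }

The .output file then serves as the up-to-date record.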

> I agree that
> the default behavior when a specific test name is requested is to run it
> even if it appears up-to-date. For the run-all-tests targets, there
> should be one that always runs all tests, and one that runs out-of-date
> tests.
>
> Fortunately, there is an individual target for each test with each
> back-end, and its name is well defined. This makes it easy to add new
> rules that can group the tests in any combination. Perhaps having a rule
> similar to the test-suite rule in the current system would be useful.
> How does this look (just off the top of my head):
>
> test.suite suite-name : test1 test2 ... ;
>
> This would create a target for each back-end called
> "back-end-name.suite-name" to run all the tests in the suite. Again, the
> default back-end idea would allow a target called "suite-name" to run the
> tests with the default back-end.
>
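Just to make the suite idea concrete -- the BACKENDS list and the
target-naming scheme below are made up for illustration, not a design:

    # Sketch only.
    rule suite ( suite-name : tests + )
    {
        local backend ;
        for backend in $(BACKENDS)
        {
            local t = $(backend).$(suite-name) ;
            NOTFILE $(t) ;                        # pseudo-target, not a file
            DEPENDS $(t) : $(backend).$(tests) ;  # one dependency per test
        }
    }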
> Also, as far as running a test versus compiling, how does this sound to
> you:
>
> "compile", "compile-fail", "link", and "link-fail" tests are actually
> built when they are run since the compile/link steps are the test itself.

We might want to throw away the product and leave a simple file marker
instead, just to save space. But that's an optimization that can wait.

> "run" and "run-fail" tests have targets that will build them without
> actually running them in addition to the normal test execution targets.
> The targets that actually run them will simply depend on the build
> versions. This way the test will not be re-built if the executable is
> up-to-date and the user requests that the test be run.

AFAICT, that's what we're already doing.

> It will also allow
> nightly testing to build the run and run-fail tests as part of the normal
> build so that any errors show up in the normal build log. This will
> provide a means of distinguishing the output from building the run-* tests
> and the output from actually running them.

I was thinking that we always need a way to capture run output directly from
Jam anyway, so all build actions might end with something like

>$(STDOUT) 2>$(STDERR)

or possibly

>>$(STDOUT) 2>>$(STDERR)

If the build set the variables on the target, the output would go to the
specified place. This feature needs some consideration; it might be a
candidate for core language support.
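A sketch of how that might look, with everything except the STDOUT/STDERR
names invented:

    # With no space after ">", an unset variable makes the whole
    # redirection token expand away, so output goes to the console.
    actions run-capture
    {
        $(>) >$(STDOUT) 2>$(STDERR)
    }

    rule run-capture ( target : exe )
    {
        DEPENDS $(target) : $(exe) ;
        STDOUT on $(target) = $(target).stdout ;  # target-specific values
        STDERR on $(target) = $(target).stderr ;
    }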

> > If you look at the python.jam file, you'll see that there's a
> > PYTHON_LAUNCH variable which can be used to say how python is invoked.
> > I commonly use this to run a debugger in the same context in which the
> > test needs to be run.

> I'll look at that. It sounds useful... then the user won't have to figure
> out what command line to run just to bring the failed test up in a
> debugger.

Very important, especially where shared libs are concerned.

> > While your approach is basically sound, it will need some adjustment
> > to be compatible with the planned rewrite. Some things I noticed:
> >
> > 1. We don't write "module" explicitly, except in low-level code. See the
> > contents of tools/build/new, and especially modules.jam

> I was looking at that a bit. I take it that the name of the .jam file
> becomes the name of the module automatically? I also see that there is a
> nearly empty test.jam file. Should I write the testing front-end under
> the assumption that it will be placed into that file (since the module
> will probably be called "test" anyway)?

Oh, you can replace the contents of test.jam. I'm just using it with
-sBOOST_BUILD_TEST=1 to run the unit tests of the new code. I can use a
differently-named file.

> > 2. Part of the plan is to delay generation of targets (meaning the use
> > of DEPENDS and action rules) until after the Jamfile has been
> > completely read in. There are lots of good reasons for this, which you
> > can read about in the message history. So, your initial level of
> > indirection/delay will have to be extended.

> I'm pretty sure this is the behavior of the current implementation unless
> I'm misunderstanding your request. The only DEPENDS rules and action
> invocations are in the "test.invoke" and "test.list" rules, which are not
> called until after all the user jamfiles have been processed. All the
> "test.*" declaration rules merely save their arguments in module-local
> variables.

The user calls "demo.invoke demo-test" in the Jamfile itself, which calls
test.invoke. So the Jamfile isn't finished yet. The way the system will work
is:

1. Import the Jamfile.
2. Jamfile rules record data about user-level targets, etc., much like in
your example.
3. After the Jamfile is completely processed, go through the record of
user-level targets and generate actual targets.
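In sketch form, with invented names, steps 2 and 3 look roughly like:

    # Step 2: declaration rules only record what the user asked for.
    rule declare-test ( name : sources + )
    {
        .test-names += $(name) ;
        .sources.$(name) = $(sources) ;
    }

    # Step 3: called once, after all Jamfiles have been read, to turn the
    # records into real targets, DEPENDS calls, and actions.
    rule generate-all ( )
    {
        local t ;
        for t in $(.test-names)
        {
            # ... set up the actual targets for $(t) from $(.sources.$(t)) ...
        }
    }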

To make your system fit, you'd just have demo.invoke make some more records
about targets. But we don't have the framework to do that yet, so don't
worry about it ;-)

-Dave

 

