
From: David Abrahams (dave_at_[hidden])
Date: 2003-01-05 08:00:12

"John Maddock" <jm_at_[hidden]> writes:

>> That sounds like a smart move. It should be easy enough if we can
>> encode that feature into the toolsets. Can you take care of that part
>> of the job? If so, it would be very easy for me to update testing.jam
>> and we'd be done.
> Not easily, I don't currently have access to those compilers (although the
> options used for EDG based compilers are documented in EDG's generic docs):
> we really need to get some more regression tests running.

It's -tused you're referring to, isn't it?

>> > Actually this is less of a problem here (for config tests), because
>> > we would expect the tests to build and run on some platforms, so the
>> > tests get tested in both directions (that there are some platforms
>> where they should build and do so, and others where they should not
>> > and don't do so).
>> I'm still very confused about this one, but I have an inkling of what
>> might be going on. I can understand how you could make a config test
>> which would want to be (compile-fail|run-success), and I can
>> understand how you could use return-code-inversion to make it
>> (compile-fail|run-fail), but I can't understand what kind of useful
>> test could be (compile-fail|link-fail|run-fail).
> Let me try again. We have a series of config regression tests (one per
> macro), and taking feature macros for example each can be tested in two
> directions:
> The macro is defined in our config: verify that the test code compiles,
> links, and runs.
> The macro is not defined in our config: verify that trying to
> compile+link+run fails at some point (otherwise we could enable this
> feature).

Right... but I'm still a little confused. I don't think you actually
want to test "in two directions". Presumably you want to have a
single test which checks that the macro is set appropriately, no?

> For example, consider BOOST_HAS_PTHREAD_MUTEXATTR_SETTYPE; there are three
> reasons why we might not want to set this:
> 1) the function is not present in the headers (code doesn't compile, because
> the API is unsupported).
> 2) the function is present but linking fails (it's in the header but not the
> library - probably the toolset is set up wrongly for multithreaded code, or
> some other such problem)

I'm not convinced that a failure at this stage should make the test
succeed if it's just reflecting a problem with the toolset.

> 3) compiling and linking succeeds, but the function doesn't actually work
> (it's a non-functioning stub), this situation does actually seem to be
> occurring on some platforms leading to deadlocks when creating and using
> recursive mutexes, the test doesn't currently test for this, but should do
> so if I can figure out how :-(
> To conclude then, if BOOST_HAS_PTHREAD_MUTEXATTR_SETTYPE is not set, then I
> want to be able to verify that the test code does not compile+link+run,
> otherwise the test should fail because the macro should have been set.
> I hope that's making sense now,

Now it sounds like you're not "testing in two directions". Oh, I see:
that "otherwise" does not refer to the case where the macro is set, but
to the case where the test behaves differently. What I still don't
understand is how you're going to
write the Jamfile for this test, since the test type has to be
determined based on the value of the macro -- 'any-fail' if it's not
set and 'run' if it's set -- and there's no provision for that sort of
feedback from header files into the build system.

Maybe you're planning to build two tests?

#ifdef BOOST_EXPECT_FAIL
# error
int main() { return 0; }
#else
// test code here
#endif


And then build two tests with the same source code: a 'run' test, and
an 'any-fail' test with <define>BOOST_EXPECT_FAIL ??


                       David Abrahams
   dave_at_[hidden]
Boost support, enhancements, training, and commercial distribution
