From: John Maddock (jm_at_[hidden])
Date: 2003-01-05 07:35:35
> That sounds like a smart move. It should be easy enough if we can
> encode that feature into the toolsets. Can you take care of that part
> of the job? If so, it would be very easy for me to update testing.jam
> and we'd be done.
Not easily: I don't currently have access to those compilers (although the
options used for EDG-based compilers are documented in EDG's generic docs).
We really need to get some more regression tests running.
> >> >> Maybe we need some platform/compiler-dependent configuration which
> >> >> chooses the appropriate criterion for success.
> >> >
> >> > It's not unreliable at all, it's the exact negative of a run test. It
> >> > allows a negative to be tested: that if a feature macro is *not* set,
> >> > then a failure should occur if it is set, otherwise we are possibly
> >> > mis-configured.
> >>
> >> My point is that it might easily report false successes when something
> >> else is wrong, e.g. you just made a typo in a variable name.
> >
> > Which is true for all compile-fail tests as well.
>
> Yes. All I'm saying is that a regular run-fail test has stricter
> requirements. Simple typos that just create compilation errors will
> not allow them to succeed. That's why I don't want to replace
> run-fail with your "compile/link/run fail"...
>
> ...although now the only expected failure tests we have left are
> compile-fail. So I don't know what to do with the others.
>
> > Actually this is less of a problem here (for config tests), because
> > we would expect the tests to build and run on some platforms, so the
> > tests get tested in both directions (that there are some platforms
> >> where they should build and do so, and others where they should not
> > and don't do so).
>
> I'm still very confused about this one, but I have an inkling of what
> might be going on. I can understand how you could make a config test
> which would want to be (compile-fail|run-success), and I can
> understand how you could use return-code-inversion to make it
> (compile-fail|run-fail), but I can't understand what kind of useful
> test could be (compile-fail|link-fail|run-fail).
Let me try again. We have a series of config regression tests (one per
macro); taking feature macros as an example, each can be tested in two
directions:
1) The macro is defined in our config: verify that the test code compiles,
links, and runs.
2) The macro is not defined in our config: verify that trying to
compile+link+run fails at some point (otherwise we could enable this
feature).
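To make that concrete, here's a minimal sketch of the shape such a test
takes - this isn't the actual file from libs/config/test, just an
illustration, using BOOST_HAS_GETTIMEOFDAY as the example macro. The same
source file serves both directions: if the macro is set in our config it
must compile, link and run successfully, and if it is not set we expect it
to fail at one of those three stages:

#include <sys/time.h>

int main()
{
   // if the platform really has gettimeofday this compiles, links and
   // returns zero; otherwise it should fail to compile or link:
   timeval tv;
   return gettimeofday(&tv, 0) == 0 ? 0 : 1;
}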
For example, consider BOOST_HAS_PTHREAD_MUTEXATTR_SETTYPE: there are three
reasons why we might not want to set this:
1) the function is not present in the headers (code doesn't compile, because
the API is unsupported).
2) the function is present but linking fails (it's in the header but not the
library - probably the toolset is set up wrongly for multithreaded code, or
some other such problem)
3) compiling and linking succeed, but the function doesn't actually work
(it's a non-functioning stub). This situation does actually seem to be
occurring on some platforms, leading to deadlocks when creating and using
recursive mutexes; the test doesn't currently check for this, but should do
so if I can figure out how :-( (one possible approach is sketched below).
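For what it's worth, one possible approach to case 3 (this is only a
sketch, not code that's in the test suite) is to relock the mutex from the
same thread using pthread_mutex_trylock, which on a conforming
implementation succeeds for a genuinely recursive mutex but returns
immediately - rather than deadlocking - if the recursive behaviour is a
non-functioning stub:

#include <pthread.h>

int main()
{
   pthread_mutexattr_t attr;
   pthread_mutex_t m;
   if(pthread_mutexattr_init(&attr) != 0)
      return 1;
   if(pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE) != 0)
      return 1;
   if(pthread_mutex_init(&m, &attr) != 0)
      return 1;
   // the first lock should always succeed:
   if(pthread_mutex_lock(&m) != 0)
      return 1;
   // relocking from the owning thread should succeed if the mutex is
   // genuinely recursive; a broken (stub) implementation typically
   // returns EBUSY here instead of hanging us:
   if(pthread_mutex_trylock(&m) != 0)
      return 1;
   pthread_mutex_unlock(&m);
   pthread_mutex_unlock(&m);
   pthread_mutex_destroy(&m);
   pthread_mutexattr_destroy(&attr);
   return 0;
}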
To conclude then: if BOOST_HAS_PTHREAD_MUTEXATTR_SETTYPE is not set, then I
want to be able to verify that the test code does not compile+link+run;
otherwise the test should fail, because the macro should have been set.
I hope that's making sense now,
John Maddock
http://ourworld.compuserve.com/homepages/john_maddock/index.htm