From: David Abrahams (dave_at_[hidden])
Date: 2003-01-04 09:56:01
"John Maddock" <jm_at_[hidden]> writes:
>> > the problem remains: if we have a "compile-fail" test, the failure
>> > may be delayed until link time if the compiler does link-time
>> > template instantiation. The reason we're not seeing this crop up
>> > in the current tests is that the compilers that were exhibiting
>> > that behaviour are no longer being tested (SGI's compiler, e.g.).
>>
>> OK, I believe you. What I'm suggesting is that we ought to check for
>> specific compilers which do this, and do an explicit
>> compile-or-link-fail test in that case for all current compile-fail
>> tests. I believe it is too hard for programmers to keep track of
>> which expected compilation failures may involve template
>> instantiation. In fact most of them do, so we'd have to change most
>> of our compile-fail tests to say link-fail.
>
> OK, but that implies that most current compile-fail tests would need
> to have an "int main(){}" added. Actually, thinking about it, most
> compilers that do link-time template instantiation have an option to
> force the instantiation of all used templates (at compile time), so
> maybe the way to handle this is just to modify the compiler
> requirements inside the compile-fail rule definition?
That sounds like a smart move. It should be easy enough if we can
encode that feature into the toolsets. Can you take care of that part
of the job? If so, it would be very easy for me to update testing.jam
and we'd be done.
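
To make the problem concrete, here's a minimal sketch of the kind of
compile-fail test at issue (all names are illustrative):

    // A compile-fail test whose error lives in a template body. A
    // compiler that instantiates at compile time rejects this outright;
    // one that instantiates at link time (the old SGI/prelinker model)
    // may compile the translation unit cleanly and only emit the
    // diagnostic when the template is instantiated during the link.
    template <class T>
    void must_not_compile(T t)
    {
        t.no_such_member();   // ill-formed only once instantiated
    }

    int main()
    {
        must_not_compile(42); // int has no members: the intended error
        return 0;
    }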
>> >> Maybe we need some platform/compiler-dependent configuration which
>> >> chooses the appropriate criterion for success.
>> >
>> > It's not unreliable at all, it's the exact negative of a run test.
>> > It allows a negative to be tested: that if a feature macro is *not*
>> > set, then a failure should occur if it is set; otherwise we are
>> > possibly misconfigured.
>>
>> My point is that it might easily report false successes when something
>> else is wrong, e.g. you just made a typo in a variable name.
>
> Which is true for all compile-fail tests as well.
Yes. All I'm saying is that a regular run-fail test has stricter
requirements: simple typos that merely create compilation errors will
not allow it to succeed. That's why I don't want to replace run-fail
with your "compile/link/run fail"...
...although now the only expected-failure tests we have left are
compile-fail, so I don't know what to do with the others.
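
To make the false-success risk concrete, here's a sketch (the class
and the deliberate typo are made up for illustration):

    // A compile-fail test meant to verify that copying is rejected.
    // Any unrelated compilation error -- like the misspelling below --
    // would also count as "success". A run-fail test, by contrast,
    // must compile and link cleanly before its runtime failure counts,
    // so the same typo would be caught rather than rewarded.
    class noncopyable
    {
        noncopyable(noncopyable const&);            // private, undefined
        noncopyable& operator=(noncopyable const&);
    public:
        noncopyable() {}
    };

    int main()
    {
        noncopyable a;
        noncopyable b(a);   // the error we actually intend to test for
        // noncopyabel c;   // this typo would make the test "pass" too
        return 0;
    }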
> Actually this is less of a problem here (for config tests), because
> we would expect the tests to build and run on some platforms, so the
> tests get tested in both directions (there are some platforms where
> they should build and do, and others where they should not and
> don't).
I'm still very confused about this one, but I have an inkling of what
might be going on. I can understand how you could make a config test
which would want to be (compile-fail|run-success), and I can
understand how you could use return-code-inversion to make it
(compile-fail|run-fail), but I can't understand what kind of useful
test could be (compile-fail|link-fail|run-fail).
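
For reference, my understanding of the shape of these config tests (a
sketch only; the macro name stands in for any defect macro):

    // On a platform where BOOST_NO_MEMBER_TEMPLATES is *not* defined,
    // this must compile, link, and run successfully. On a platform
    // where it *is* defined, building the same code is expected to
    // fail -- but whether the failure surfaces at compile, link, or
    // run time depends on the compiler, which is presumably where a
    // combined (compile-fail|link-fail|run-fail) criterion comes from.
    struct holder
    {
        template <class U>
        U get(U value) const { return value; }   // needs member templates
    };

    int main()
    {
        holder h;
        return h.get(0);
    }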
-Dave
--
David Abrahams
dave_at_[hidden] * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution