From: David Abrahams (dave_at_[hidden])
Date: 2003-01-01 07:30:06


"John Maddock" <jm_at_[hidden]> writes:

>> I intentionally changed it because it seemed as though a test which
>> was supposed to fail to link, but which fails to compile, should not
>> be deemed a success. I think I did this by analogy with run-fail,
>> where we were masking some actual compile-time failures which should
>> not have been registered as successes.
>>
>>
>> Of course we seem to have no tests which are really expected to fail
>> linking anymore...
>
> I can't actually think of any uses for that.

There are some idioms which are expected to fail at link time but not
at compile time. For example, suppose someone tries to copy or assign
to boost::noncopyable?
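
A minimal sketch of the sort of idiom I have in mind (an illustrative
class, not boost::noncopyable itself): the copy constructor is declared
but deliberately never defined, so a copy made where access checking
allows it compiles cleanly, and only the linker notices the missing
definition.

    class handle
    {
    public:
        handle() {}
        handle duplicate() const { return *this; }  // uses the private copy ctor
    private:
        handle(const handle&);  // declared, intentionally never defined
    };

    int main()
    {
        handle h;
        h.duplicate();  // compiles fine; expected to fail at link time
        return 0;
    }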

> the problem remains: if we have a "compile-fail" test, the failure
> may be delayed until link time if the compiler does link-time
> template instantiation. The reason we're not seeing this cropping
> up in the current tests is that the compilers that were exhibiting
> that behaviour are no longer being tested (SGI's compiler, for
> example).

OK, I believe you. What I'm suggesting is that we ought to check for
specific compilers which do this, and do an explicit
compile-or-link-fail test in that case for all current compile-fail
tests. I believe it is too hard for programmers to keep track of
which expected compilation failures may involve template
instantiation. In fact most of them do, so we'd have to change most
of our compile-fail tests to say link-fail.
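
For instance (a hypothetical test file, not one of our existing tests),
a compile-fail test whose error sits inside a template body is only
diagnosed at the link/prelink stage by a compiler that defers template
instantiation to that point, as SGI's did:

    template <class T>
    void requires_nested_type()
    {
        typedef typename T::nested type;  // int has no member type "nested"
        (void)sizeof(type);
    }

    int main()
    {
        requires_nested_type<int>();  // instantiation triggers the error
        return 0;
    }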

>> > BTW I could use an equivalent run-fail test for boost-config,
>> > meaning: "this file either doesn't compile, link, or run", which is
>> > of course the opposite of the current run-fail. So a better naming
>> > convention is required all round :-)
>>
>> Wow, that sounds like a pretty unreliable test. There are so many
>> ways things can go wrong, and you want to accept any of them?
>>
>> Maybe we need some platform/compiler-dependent configuration which
>> chooses the appropriate criterion for success.
>
> It's not unreliable at all; it's the exact negative of a run test. It
> allows a negative to be tested: if a feature macro is *not* set, then a
> failure should occur when the feature is exercised; otherwise we are
> possibly mis-configured.

My point is that it might easily report false successes when something
else is wrong, e.g. you just made a typo in a variable name.
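
To make the concern concrete, here is a rough sketch of the kind of
negative test I understand you to be describing (feature and macro
chosen purely for illustration; the real config tests may differ). The
file exercises one feature unconditionally; when the corresponding
defect macro is defined, any failure at all (compile, link, or run)
counts as a pass, which is exactly why an unrelated typo would also be
reported as a success.

    // Exercises stringstream support.  If the configuration defines the
    // corresponding defect macro (BOOST_NO_STRINGSTREAM or similar),
    // this file is expected to fail at some stage; a clean pass suggests
    // the macro was defined when it need not have been.
    #include <sstream>
    #include <string>

    int main()
    {
        std::ostringstream s;
        s << 42;
        return s.str() == "42" ? 0 : 1;  // might also fail only at run time
    }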

-- 
                       David Abrahams
   dave_at_[hidden] * http://www.boost-consulting.com
Boost support, enhancements, training, and commercial distribution
