From: John Maddock (jm_at_[hidden])
Date: 2003-01-01 06:34:48
> I intentionally changed it because it seemed as though a test which
> was supposed to fail to link, but which fails to compile should not be
> deemed a success. I think I did this by analogy with run-fail, where
> we were masking some actual compile-time failures which should not
> have been registered as successes.
> Of course we seem to have no tests which are really expected to fail
> linking anymore...
I can't actually think of any uses for that: the problem remains that if we have
a "compile-fail" test, the failure may be delayed until link time if the
compiler does link-time template instantiation. The reason we're not seeing
this crop up in the current tests is that the compilers that exhibited that
behaviour are no longer being tested (SGI's compiler, for example).
> > BTW I could use an equivalent run-fail test for boost-config,
> > meaning: "this file either doesn't compile, link, or run", which is
> > of course the opposite of the current run-fail. So a better naming
> > convention is required all round :-)
> Wow, that sounds like a pretty unreliable test. There are so many
> ways things can go wrong, and you want to accept any of them?
> Maybe we need some platform/compiler-dependent configuration which
> chooses the appropriate criterion for success.
It's not unreliable at all; it's the exact negative of a run test. It allows a
negative to be tested: if a feature macro is *not* set, then a test exercising
that feature should fail (to compile, link, or run); if it succeeds anyway,
then we are possibly mis-configured.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk