
From: John Maddock (jm_at_[hidden])
Date: 2003-01-04 07:40:53


> > the problem remains, if we have a "compile-fail" test, the failure
> > may be delayed until link time if the compiler does link-time
> > template instantiation. The reason we're not seeing this cropping
> > up in the current tests, is that the compilers that were exhibiting
> > that behaviour are no longer being tested (SGI's compiler for eg).
>
> OK, I believe you. What I'm suggesting is that we ought to check for
> specific compilers which do this, and do an explicit
> compile-or-link-fail test in that case for all current compile-fail
> tests. I believe it is too hard for programmers to keep track of
> which expected compilation failures may involve template
> instantiation. In fact most of them do, so we'd have to change most
> of our compile-fail tests to say link-fail.

OK, but that implies that most current compile-fail tests would need to have
an "int main(){}" added. Actually, thinking about it, most compilers that do
link-time template instantiation have an option to force instantiation of
all used templates at compile time, so maybe the way to handle this is just
to modify the compiler requirements inside the compile-fail rule definition?

> >> > BTW I could use an equivalent run-fail test for boost-config,
> >> > meaning: "this file either doesn't compile, link, or run", which is
> >> > of course the opposite of the current run-fail. So a better naming
> >> > convention is required all round :-)
> >>
> >> Wow, that sounds like a pretty unreliable test. There are so many
> >> ways things can go wrong, and you want to accept any of them?
> >>
> >> Maybe we need some platform/compiler-dependent configuration which
> >> chooses the appropriate criterion for success.
> >
> > It's not unreliable at all, it's the exact negative of a run test. It
> > allows a negative to be tested: that if a feature macro is *not* set,
> > then a failure should occur if it is set, otherwise we are possibly
> > mis-configured.
>
> My point is that it might easily report false successes when something
> else is wrong, e.g. you just made a typo in a variable name.

Which is true for all compile-fail tests as well. Actually this is less of
a problem here (for config tests), because we would expect the tests to
build and run on some platforms, so the tests get tested in both directions:
there are some platforms where they should build and do so, and others
where they should not and don't.

John Maddock
http://ourworld.compuserve.com/homepages/john_maddock/index.htm


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk