From: Ed Brey (edbrey_at_[hidden])
Date: 2001-08-29 16:21:50
> > > So I still think we should be "optimistic" for the sake of knowing
> > > exactly when our optimism is misplaced.
> > I think the best way to find that out is to test, rather than using all
> > the Boost users themselves as a test bed.
> I'm not sure where I stand on this one. I tend to lean towards the
> optimistic view, but it has problems when users are more on the
> cutting-edge. That is, it works fine if the only publicly-available
> compilers come out in regular releases not too often -- as is the case with
> commercial compilers. But it starts to fall apart when users are using, for
> example, beta versions of STLPort (or maybe CVS versions of gcc could have
> this problem too) -- where the version number is increased, but the bug
> isn't fixed and Boost hasn't caught up yet.
How about making the assumptions conditional on whether an official regression test is in progress? During a regression test, be optimistic; otherwise, assume no change (nothing fixed, nothing broken by a new version).
This will keep the configuration as up to date as the regression testers' tool suites, which I expect will be on the leading edge of stable tool versions. The official regression testers would set a compiler switch like -DBOOST_REGRESSION_TEST. Regular end users running the regression test as a sanity check would probably not want it to fail merely because their tool suite is too new, so I would expect the default in the regression portion of the build system to be to _not_ turn on such a regression flag.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk