From: David Abrahams (dave_at_[hidden])
Date: 2007-10-16 15:03:55
on Sun Oct 07 2007, Gennadiy Rozental <rogeeff-AT-gmail.com> wrote:
> Henrik Sundberg <storangen <at> gmail.com> writes:
>> 2007/10/7, Jeff Garland <jeff <at> crystalclearsoftware.com>:
>> > Gennadiy Rozental wrote:
> > Additionally it's important to separate new failures from regressions. New
> > failures we just mark as expected on the day of release and ignore. At the
>> > Well, this doesn't quite work for me. If a new library can't pass tests
>> > on the 'primary platform list' then it needs to be removed from the
>> > release because it's not ready.
>> The definition of "New Failure" might be problematic.
>> E.g. If a test is added in 1.35, due to a bug found (and not fixed) in
>> 1.34, a new failure occurs in the test output.
>> If functionality with poor quality is added to an old library, then
>> the code should not be accepted, not just marked as an expected
>> failure.
> What if I add a feature that works on gcc 4.0, but don't have time
> to port it to VC 7.1? I've added a corresponding test. No regressions
> appear. IMO this test should be marked as expected to fail
> everywhere it fails
Why? Just to get a green field of tests? There's a difference
between features that can't be made to work due to compiler bugs and
those that you just haven't had the time to implement portably. The
former is not expected to ever pass for that platform unless someone
discovers new hacks. The latter is essentially in a (hopefully
temporarily) broken state, and shouldn't look like a healthy test.
> and in the next release I'll try to port it to VC 7.1, then
> to CW, and so on.
If we have a set of primary release platforms, I want to be able to
claim that Boost is portable to those environments, not that some
features work here and there. If you can't get the feature working on
all the release platforms, it should be considered "not yet portable"
and held back from the release.
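For reference, "marking a test as expected to fail" on a given toolset is done through Boost's regression-status markup file (status/explicit-failures-markup.xml) rather than in the test code itself. A rough sketch of such an entry follows; the element and attribute names are recalled from that file and may be inexact, and the library, test, and toolset names are hypothetical:

```xml
<!-- Hypothetical entry: mark my_feature_test as an expected failure
     on VC 7.1 only, with a note explaining why it is expected. -->
<library name="some_library">
  <mark-expected-failures>
    <test name="my_feature_test"/>
    <toolset name="msvc-7.1"/>
    <note author="Library maintainer">
      Feature not yet ported to VC 7.1; currently works on gcc 4.0.
    </note>
  </mark-expected-failures>
</library>
```

On the argument above, an entry like this is appropriate for a genuine compiler limitation, but a feature that simply hasn't been ported yet should show up as broken rather than be hidden behind expected-failure markup.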
-- Dave Abrahams Boost Consulting http://www.boost-consulting.com
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk