From: David Abrahams (dave_at_[hidden])
Date: 2008-07-15 19:51:59

on Tue Jul 15 2008, "Emil Dotchevski" <emil-AT-revergestudios.com> wrote:
> The aim of this testing is to prove that, if the test with the
> previous Boost release succeeds but the test with the next Boost
> release fails, a library that the serialization library depends on has
> introduced a breaking change which has to be fixed.
>
> However, the quoted proposal does not at all discourage breaking
> changes; it only detects them (and in fact it postpones the
> detection).
>
> My suggestion was to formally flag certain libraries as frozen. The
> exact semantics of the "frozen" flag could be that any breaking change
> is a bug. This does discourage breaking changes (in those libs), and
> as far as I can tell achieves your goals, except that:

First of all: "breaking changes" are also called "regressions," except I
suppose when they are intentional. I'm just trying to understand the
premise here. We do have a system for detecting regressions. Is it
that intentional breaking changes are not properly distinguished from
the unintentional ones?

Wouldn't the most direct way to ensure we're testing for regressions be
to have release managers (or someone other than the library author)
regulate the contents of test suites (i.e. to ensure everything that was
tested in the last release is still tested in this one)? Isn't the way
we are able to hide regressions with XML markup also an issue?
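
To make that concrete, here is a rough sketch of the kind of check a
release manager could run. It is purely hypothetical: it assumes each
release's test names can be dumped to a plain text file, one name per
line, and the script and helper names are made up for illustration.

    # compare_tests.py (hypothetical): flag tests that were run in the
    # previous release but are missing from the current one.
    import sys

    def load_tests(path):
        # One test name per line; blank lines are ignored.
        with open(path) as f:
            return {line.strip() for line in f if line.strip()}

    previous = load_tests(sys.argv[1])  # test names from the last release
    current = load_tests(sys.argv[2])   # test names from the candidate

    dropped = sorted(previous - current)
    if dropped:
        print("tests run in the previous release but missing now:")
        for name in dropped:
            print("  " + name)
        sys.exit(1)

Run against the two releases' test lists, such a check would make any test
that silently disappeared from the suite require an explicit sign-off.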

Finally, I realize that some library authors' test coverage may not be
complete, and thus a dependent library's tests will occasionally reveal a
problem that wasn't otherwise detected, but it doesn't look like a good
investment to sink a lot of effort into that marginal scenario, nor to make
Boost's regression detection dependent on it.

--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com