Boost Testing:

From: Aleksey Gurtovoy (agurtovoy_at_[hidden])
Date: 2007-12-12 06:34:15


David Abrahams wrote:
> on Tue Dec 11 2007, Aleksey Gurtovoy <agurtovoy-AT-meta-comm.com> wrote:
>
>> David Abrahams wrote:
>>> I may be jumping the gun slightly, but as I write this down, it seems
>>> to boil down to only a few distinct questions for a given revision or
>>> revision range:
>>>
>>> * Which unexpected failures were introduced and/or removed?
>> Introduced and/or removed since the previous revision, or something
>> else?
>
> Introduced by the repository revision (range) in question -- see below
>
>>> * Which library/platform combinations have unexpected failures?
>> ... and regressions. Sure.
>
> I thought this through, and it seems to me that there are only two
> cases:
>
> 1. The new failure has not been marked expected, in which case it will
> show up as unexpected.
>
> 2. The new failure has been marked expected, in which case,
> presumably, someone thinks there's a good reason for the failure
> and it's not a cause for concern.
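
(To make the two cases concrete: a minimal sketch in Python -- with
hypothetical data shapes and names, not any actual Boost reporting
tool -- of the "introduced and/or removed" computation above.
Expected-failure markup is filtered out first, so only case-1
failures remain visible.)

    # Hypothetical sketch: which unexpected failures were introduced
    # and/or removed by a revision range. Inputs are sets of failing
    # test ids at the two ends of the range, plus the set of tests
    # covered by expected-failure markup (case 2 above).
    def unexpected_delta(old_failing, new_failing, expected_markup):
        old_unexpected = old_failing - expected_markup
        new_unexpected = new_failing - expected_markup
        introduced = new_unexpected - old_unexpected
        removed = old_unexpected - new_unexpected
        return introduced, removed

    introduced, removed = unexpected_delta(
        {'mpl/apply'},                # failing at revision N
        {'mpl/apply', 'regex/grep'},  # failing at revision M > N
        {'mpl/apply'},                # marked expected
    )
    # introduced == {'regex/grep'}, removed == set()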

The regression status is most useful in the context of release management.

Consider, for instance, a situation where we have just released and the
newly accepted libraries are being merged to the trunk. With no
distinction between new failures and regressions, you suddenly jump
from no failures to, say, 20 failures -- which is not a big deal
by itself, but the problem is that now the codebase health status is
no longer indicative of its "release" status.

If there is a distinction between regressions and new failures, you are
still in a releasable state -- you just need to pull out the
offending new libraries. With no distinction, you both paint a worse
picture, possibly triggering the "broken windows" effect, and lose the
ability to postpone/revise decisions about new libraries/features should
the need arise (unless they are tracked in some other way, of course).
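
(Continuing the sketch -- again Python, hypothetical names: with the
last release as a baseline, the two kinds of failure separate
cleanly, and "releasable" reduces to "no regressions".)

    # Hypothetical sketch: distinguish regressions from new failures
    # using the last release as a baseline. A test that passed at
    # release time and fails now is a regression; a failure in a test
    # that did not pass (or did not exist) at release time is merely
    # a new failure, e.g. from a freshly merged library.
    def classify(test_id, failing_now, passing_at_release):
        if test_id not in failing_now:
            return 'pass'
        if test_id in passing_at_release:
            return 'regression'    # breaks the releasable state
        return 'new-failure'       # pull the new library out and
                                   # the tree is releasable again

    failing_now = {'newlib/core', 'mpl/apply'}
    passing_at_release = {'mpl/apply'}          # release X baseline
    assert classify('mpl/apply', failing_now, passing_at_release) \
        == 'regression'
    assert classify('newlib/core', failing_now, passing_at_release) \
        == 'new-failure'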

>
> I see a few marginal uses for the information "it wasn't failing in
> the last release but is now." For example, if a test was labelled as
> a regression despite also being marked expected, one might scrutinize
> the markup a little more carefully to make sure we really mean it.
> These cases, however, do not seem compelling enough to make "is it a
> regression?" an important question for us.

Personally, I think the ability to say for sure whether release X has
any functional regressions from release Y, and what they are, is
essential for any "production-quality" software project. I know this is
the question we (MetaCommunications) ask when we decide whether and when
to upgrade to a new version of a third-party library/tool/compiler we
use, and having the (accurate) answer helps tremendously.

--
Aleksey Gurtovoy
MetaCommunications Engineering
