Boost Testing:
From: David Abrahams (dave_at_[hidden])
Date: 2007-07-03 11:31:31
on Tue Jul 03 2007, Martin Wille <mw8329-AT-yahoo.com.au> wrote:
> David Abrahams wrote:
>> on Tue Jul 03 2007, Martin Wille wrote:
>>
>>> Joaquin M Lopez Munoz wrote:
>>>
>>>> Your audit trail is correct in that nothing has changed in
>>>> the code base, but the problem is that this error is spurious:
>>>> it happens now and then without any particular dependence on the
>>>> code, and will go away on the next run, even if the source
>>>> hasn't been touched. A link to a past manifestation of the
>>>> same issue:
>>>>
>>>> http://lists.boost.org/Archives/boost/2007/05/122346.php
>>>>
>>>> So, what would be needed is simply a rerun of the offending
>>>> test --I don't know if you can control the regression tester
>>>> to that level of detail.
>>> I'm not convinced it is a good idea to rerun the tests until the results
>>> look good when the testing site is known to work reliably (as is the
>>> case for Victor's). Doing so simply hides a known issue. One could call
>>> that lying about the quality of the software tested.
>>
>> I agree. This calls for explicit failure markup to describe our state
>> of (little) knowledge about the problem.
>
> I guess we have another problem then: we don't have a markup for this
> specific kind of failure. If we mark that test as known-failing then
> we'll often see a dark green field signalling "passing unexpectedly",
> which would also give a wrong impression.
>
> I think we need a new tag for this.
Maybe, but I think in the meantime a reasonably detailed comment will
explain the dark green.
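
For concreteness, such a markup entry might look roughly like the sketch
below. The library, test, and toolset names here are invented, and the
exact element names should be checked against the schema actually used by
explicit-failures-markup.xml; the point is just that the note can carry
the detailed explanation:

    <library name="some_library">
      <mark-expected-failures>
        <test name="some_test"/>
        <toolset name="some-toolset"/>
        <note author="(maintainer)">
          Spurious failure on this runner: it comes and goes without any
          change to the sources and usually disappears on the next run.
          A dark green "passing unexpectedly" result here is therefore
          not meaningful.
        </note>
      </mark-expected-failures>
    </library>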
--
Dave Abrahams
Boost Consulting
http://www.boost-consulting.com

The Astoria Seminar ==> http://www.astoriaseminar.com