From: Stefan Seefeld (seefeld_at_[hidden])
Date: 2007-03-14 19:57:16
Thomas Witt wrote:
> Stefan,
>
> Stefan Seefeld wrote:
>> A fix went in over a week ago. Instead of having a row of results in the test matrix showing me
>> 24 hours later whether all is well now, results are (for a number of reasons) trickling in one
>> at a time.
>
> While turnaround times like ours are not desirable, I can't see why they
> make the whole system unreliable/unusable.
It's not the turnaround time per se. It's that in any given report there are test runs that
don't reflect the same state of the code, as they are run against (sometimes wildly) different
revisions of the code. Of course I can figure out the exact time a fix went in and then mentally
mask out the runs performed before that, but that's all work the regression harness
could do much better and much more reliably.
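(To illustrate the kind of masking meant here, a minimal sketch of a filter the report generator could apply, assuming each run records the revision it was built against; test_run, tested_rev and runs_including_fix are made-up names for illustration, not part of the actual harness:)

#include <string>
#include <vector>

// Hypothetical record for one runner's results; field names are
// illustrative only, not the real report schema.
struct test_run
{
    std::string runner;      // name of the test runner / matrix column
    int         tested_rev;  // revision the run was built against
    int         failures;    // failure count reported by that run
};

// Keep only the runs that actually include a given fix, i.e. runs built
// from a revision at or after the one the fix went in with.
std::vector<test_run>
runs_including_fix(std::vector<test_run> const& runs, int fix_rev)
{
    std::vector<test_run> current;
    for (test_run const& r : runs)
        if (r.tested_rev >= fix_rev)
            current.push_back(r);
    return current;
}
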
>> Also, looking at the above descriptors, there are multiple entries for a number of
>> toolkits, suggesting the number of failures reported is only barely correlated to the
>> actual failures (and quite likely doesn't correspond to the current state of affairs) either.
>
> This again is more a problem of what you read into the numbers than it is
> a problem with the numbers not being correct. Interestingly, zero does
> not have this issue with interpretation.
Right, the numbers aren't symmetrically distributed around some mean value; zero is
an absorbing boundary condition. :-)
>> Thus my question: does anybody actually care about these numbers?
>
> Let me put it this way: I do, because they are the only numbers we've got.
>
> Don't get me wrong, these are all valid points with respect to usability.
> They just don't prove that the system is broken.
How much does it take to prove that?
Regards,
Stefan
-- ...I still have a suitcase in Berlin...