From: David Abrahams (dave_at_[hidden])
Date: 2005-05-24 19:41:37
Aleksey Gurtovoy <agurtovoy_at_[hidden]> writes:
> Gennadiy Rozental writes:
>> "Rene Rivera" <grafik.list_at_[hidden]> wrote in message
>> news:42938CE3.6050701_at_redshift-software.com...
>>> Gennadiy Rozental wrote:
>>>>>I think we've seen multiple times that this at least causes
>>>>>Boost developers and release managers distress when it happens
>>>>
>>>> Does it distress you any less when failures in Boost.<anything else>
>>>> unit tests happen?
>>>
>>> I think the distress comes from not knowing that those are not required
>>> tests. During a release we assume that *all* tests are important, and
>>> most of us don't know enough about individual libraries to tell whether
>>> a failing test matters or not.
>>
>> In fact, the majority of failures come not from actual tests but from
>> examples. I did not find a "proper" way for examples to show up on the
>> regression test page, so I faked them as tests (via the compile-only
>> rule). I think some notion of "test level" could be a good idea: we
>> might have critical, feature-critical, and informational tests.
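For anyone following along, here is a minimal sketch of faking an example
as a compile-only test, assuming Boost.Build v2's testing module; the file
names are invented for illustration:

    # Jamfile.v2 -- hypothetical test directory for some library
    import testing ;

    # A real test: must compile, link, and run with exit status 0.
    run test_core.cpp ;

    # An example faked as a test via the compile-only rule: it only has
    # to compile, so it shows up on the regression page, but a pass says
    # nothing about its runtime behavior.
    compile example_usage.cpp ;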
>
> You can employ test case categorization
> (http://article.gmane.org/gmane.comp.lib.boost.devel/124071/) to at
> least visually group the tests into categories along the above lines.
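The linked article has the actual mechanism; purely as a hypothetical
illustration of the idea, a category could be encoded in each test's
target name so the report can group tests by it. The "~" separator and
all names below are assumptions for illustration, not the real syntax:

    # hypothetical: category prefixes in target names, to be grouped by
    # the regression report rather than by Boost.Build itself
    test-suite mylib
        : [ run core_test.cpp : : : : critical~core_test ]
          [ compile example_usage.cpp : : informational~example_usage ]
        ;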
Have you guys written a short manual that explains how all these
features work? ;-)
Sorry to be coy, but how hard would that be? It really should be in
an accessible place, no?
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com