
From: David Abrahams (dave_at_[hidden])
Date: 2005-05-24 18:13:13


"Gennadiy Rozental" <gennadiy.rozental_at_[hidden]> writes:

> "Rene Rivera" <grafik.list_at_[hidden]> wrote in message
> news:42938CE3.6050701_at_redshift-software.com...
>> Gennadiy Rozental wrote:
>>>>I think we've seen multiple times that this at least causes
>>>>Boost developers and release managers distress when it happens
>>>
>>> Does it distress you any less when failures in Boost.<anything else>
>>> unit tests happen?
>>
>> I think the distress comes from not knowing that these are not required
>> tests. During a release, we assume that *all* tests are important, and
>> most of us don't know enough about individual libraries to tell whether
>> failing tests matter or not.
>
> In fact, the majority of failures come not from the actual tests but
> from the examples. I did not find a "proper" way for examples to show
> up on the regression test screen, so I faked them as tests (using the
> compile-only rule).
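
For concreteness, the workaround described above might look roughly
like this in a Jamfile (a minimal sketch; doc/example.cpp stands in
for a real example file):

    # Sketch of the workaround: an example registered as a
    # compile-only "test" so it appears in the regression results.
    test-suite my_lib_examples
        : [ compile doc/example.cpp ]   # built, but never run
        ;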

It seems clear to me that we should not be running any tests for
examples that are doomed to fail because they use unimplemented
features. Aside from what it does to our impression of the health of
CVS, it soaks up time from every test run.

> I think some kind of "test level" notion could be a good idea. We
> may have critical, feature-critical, informational kind of tests.
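
A purely hypothetical sketch of what such markup might look like; no
test-level notion exists in Boost.Build today, and the <test-level>
property below is invented just to illustrate the proposal:

    # Hypothetical only -- <test-level> is not a real feature.
    test-suite my_lib
        : [ run core_test.cpp : : : <test-level>critical ]
          [ run feature_test.cpp : : : <test-level>feature-critical ]
          [ compile example.cpp : <test-level>informational ]
        ;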

Maybe so. In the meantime, I suggest you use the "expected-failure"
notation, but really I believe what I said above: there's no good
reason to run these tests if they have no chance of passing.
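
The "expected-failure" notation presumably refers to the compile-fail
and run-fail rules in Boost.Build's testing module; a minimal sketch,
with placeholder file names:

    # Tests that are *expected* to fail: a compile error or a nonzero
    # exit code counts as a pass, so known failures don't show as red.
    test-suite my_lib
        : [ compile-fail uses_unimplemented_feature.cpp ]
          [ run-fail aborts_at_runtime.cpp ]
        ;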

-- 
Dave Abrahams
Boost Consulting
www.boost-consulting.com
