Boost :
From: Martin Wille (mw8329_at_[hidden])
Date: 2007-06-11 06:35:01
Benjamin Kosnik wrote:
>> In an ideal world, we would:
>> (1) Build all of Boost, as a user would
>> (2) Install Boost
>> (3) Build tests against the installed Boost, then
>> (4) Run those tests
>
> Yessssssssssss!!!!!
>
> As part of (2), please include documentation.
That only makes sense if you want to test the build/installation
mechanism for the documentation itself. IMHO, that mechanism is not
closely coupled to a platform (except for the availability of external
tools) and should not be part of the regular testing schedule run by
all testers.
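
For concreteness, a minimal sketch of what steps (1)-(4) above could
look like from a shell. The b2 invocations, the scratch prefix, and the
library name are my assumptions, not something prescribed in this
thread (in 2007 the equivalent tool was invoked as bjam, but the shape
of the workflow is the same):

# (1) build all of Boost as a user would, in both variants
./bootstrap.sh
./b2 variant=debug,release stage

# (2) install into a scratch prefix
./b2 install --prefix=$HOME/boost-scratch

# (3) build a test against the *installed* tree, not the source tree
#     (library names may carry toolset/version tags depending on layout)
g++ -I$HOME/boost-scratch/include test.cpp \
    -L$HOME/boost-scratch/lib -lboost_regex -o test

# (4) run the test
./test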
> As part of (4), please include testing against the release components,
> which is the thing that users are encouraged to use for production.
That makes sense, provided it does not exclude testing against the
debug versions.
> IMHO testing release, and then debugging failures with the debug builds
> is the way to go.
No. The debug versions contain additional code that checks for certain
classes of problems. Not exercising that code effectively hides a
significant number of problems.
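
As an illustration of the general mechanism (my own minimal example
using the standard assert/NDEBUG convention, not Boost's specific
checking code): the same check is active in a debug build and compiled
out of a release build, so release-only testing never exercises it.

cat > oob.cpp <<'EOF'
#include <cassert>
#include <cstddef>
#include <vector>
int main() {
    std::vector<int> v(3);
    std::size_t i = 3;        // one past the end
    assert(i < v.size());     // debug build: aborts right here
    return v[i];              // release build: silent out-of-bounds read
}
EOF
g++ -g -O0 oob.cpp -o oob_debug            # assert is active
g++ -DNDEBUG -O2 oob.cpp -o oob_release    # assert is compiled out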
> The current behavior of testing debug, and not
> testing release is highly suspect to me.
Agreed. This is, to some extent, caused by a lack of resources.
> Maybe also include
>
> (5) Get accurate summaries of the local results, and perhaps be able to
> contribute them via email to a boost testing list.
I don't understand this point. What is an "accurate summary"? And why
should we publish these summaries via email? Either they are accurate,
and then too large for email (and for humans to read), or they are
short and therefore lack accuracy.
Regards,
m