
From: Gennaro Prota (gennaro_prota_at_[hidden])
Date: 2003-06-19 11:06:34

On Thu, 19 Jun 2003 08:06:48 -0500, Aleksey Gurtovoy
<agurtovoy_at_[hidden]> wrote:

>Gennaro Prota wrote:
>> I particularly like the fact that they are organized by
>> library, rather than by platforms as were the latest regression logs I
>> have seen.
>Well, actually at the moment these are for a single platform only (win32).
>Having the most recent results for each platform collected into a nice
>single summary table is something that I hope somebody will address further
>down the road.

Or even single tables, with links (e.g. '-> next platform') that allow
the developer to rapidly have a look-through and see how his/her
library behaves everywhere.

>> Also, since 'doesn't work' and 'broken' (in the user summary) are
>> practically synonyms, I would simply use a dash or an empty cell
>> instead of the former.
>Uhm, they are not, though! 'broken' on the user summary page indicates that
>_the current_ CVS state is broken, probably due to somebody's erroneous
>checkin, but, as its legend says, that doesn't at all mean that the library
>is not usable on that compiler. Basically it says "if you get me now, I won't
Ah :-) I meant that, without an explanation (like the one above), it was
difficult to understand what the intended difference was. Out of
context, the two terms mean almost the same thing.

Well, for 'broken' I would suggest 'broken cvs', 'bad status', 'bad
cvs', 'cvs mess' or 'screwed up'. If you are also looking for
alternatives to 'doesn't work' (reply to Rene), well, what about 'no
hope'? ;-) Seriously, for the latter I would simply use a dash.

> As soon as the regression is fixed, all broken statuses will
>become "OK" or "OK*". "doesn't work" means, well, doesn't work, permanently.
>> But these are really minor points.
>> More importantly instead, would it be possible to also have a sign
>> indicating regressions? A little gif comes to mind, but even something
>> as simple as an asterisk could be ok.
>Hmm, I am not sure I understand what we are talking about here. Anyway,
>ultimately, the developer summary page is supposed to serve as a regressions
>indicator, but for it to work every library author needs to go through the
>trouble of specifying the expected failures and fixing everything else.

What I was thinking of is an "automatic" indicator that the result of
a test differs from the previous run or from a "reference run"
(especially when it is worse ;-)). In practice, I guess, it isn't easy
to remember whether a given library was already failing with compiler
xyz on platform abc, especially for those who are authors of several
libraries. Or is it just me, and you guys all remember such things? :-O
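(The "automatic" indicator above boils down to diffing two result
tables. A minimal sketch of the idea — the function name, the
pass/fail status strings, and the test names are all invented for
illustration; the real Boost reports are generated from bjam logs,
not from dictionaries like these:

```python
# Hypothetical sketch: flag regressions by comparing the current test
# results against a previous ("reference") run. Data format is invented
# for illustration only.

def find_regressions(reference, current):
    """Return test names whose status got worse since the reference run.

    Both arguments map test name -> status ("pass" or "fail").
    A regression is a test that passed in the reference run and fails
    now; tests appearing only in the current run are ignored here.
    """
    return sorted(
        name
        for name, status in current.items()
        if status == "fail" and reference.get(name) == "pass"
    )

reference = {"any_test": "pass", "bind_test": "pass", "crc_test": "fail"}
current = {"any_test": "pass", "bind_test": "fail", "crc_test": "fail"}

print(find_regressions(reference, current))  # ['bind_test']
```

Only 'bind_test' is flagged: 'crc_test' was already failing in the
reference run, so it is a known failure rather than a regression.)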

>Thanks for the thoughts,

Thanks to you. I do know that this is thankless work. At least with
C++ there's a lot of fun :-)


Boost list run by bdawes at, gregod at, cpdaniel at, john at