Boost Testing :
From: David Abrahams (dave_at_[hidden])
Date: 2005-06-02 12:53:12
Doug Gregor <dgregor_at_[hidden]> writes:
> On Jun 2, 2005, at 11:21 AM, David Abrahams wrote:
>> Pretty nice. Here's the thing: compared to my original suggestion,
>> it's a bit hard for me to see if any of my libraries are failing.
>
> Part of your suggestion was for something like this:
>
> Iterators: 12 regressions, 14 new warnings since release 1.32.0:
> gcc-2.95-3: test1, test2, test3
> msvc-8.0: test7, test9
> ...
>
> Graph: 3 regressions since release 1.32.0
>
> Python: 245 regressions since release 1.32.0:
> gcc-2.95-3: test1, test2, test3
> msvc-8.0: test7, test9
>
> This would (I think) have the same problems as the issues list that's
> available now.
Maybe not if it didn't use tables.
Also, the appending of paren pairs to the test names seems to cause most
rows to be double-height in my browser.
> We could have a table of contents at the top of the
> page, that links down to the set of failing tests for each library.
> Then you could skim the table of contents to see if any of your
> libraries are failing.
Sounds good.
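For illustration, the layout being discussed (a table of contents of libraries with regression counts, followed by per-library sections grouping failing tests by runner) could be rendered by something like the following sketch. This is not any actual Boost regression-report tool; the data, function name, and counts are invented purely to show the structure.

```python
# Hypothetical sketch of the proposed report layout. The regression
# data below is invented for illustration only.
regressions = {
    "Iterators": {"gcc-2.95-3": ["test1", "test2", "test3"],
                  "msvc-8.0": ["test7", "test9"]},
    "Graph": {"gcc-2.95-3": ["dijkstra"]},
}

def render_report(regressions, release="1.32.0"):
    # Table of contents: one line per library with its regression count,
    # so a maintainer can skim for their own libraries.
    lines = ["Contents:"]
    for lib, runners in sorted(regressions.items()):
        count = sum(len(tests) for tests in runners.values())
        lines.append(f"  {lib}: {count} regressions since release {release}")
    # Detail sections: failing tests grouped by runner, no tables needed.
    for lib, runners in sorted(regressions.items()):
        lines.append("")
        lines.append(f"{lib}:")
        for runner, tests in sorted(runners.items()):
            lines.append(f"  {runner}: {', '.join(tests)}")
    return "\n".join(lines)

print(render_report(regressions))
```

In an HTML version, each contents entry would simply link to an anchor on the matching library section further down the page.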
>> One other point: when I click through any of the failures, I get "page
>> not found."
>
> You also see empty parens ( ) where you should see the name of the
> regression test runner.
Ah, that explains the parens. I don't really need to see that
information up front. How about putting it in the log text?
> Something is amiss and I'm looking into it as time permits. Click on
> the library name to see the full results for the library.
Thanks.
-- Dave Abrahams Boost Consulting www.boost-consulting.com