Subject: Re: [boost] [EXTERNAL] [testing] Proposal - regression tests results display upgrade
From: Belcourt, Kenneth (kbelco_at_[hidden])
Date: 2014-08-13 18:53:06
On Aug 13, 2014, at 1:51 PM, Beman Dawes <bdawes_at_[hidden]> wrote:
> On Wed, Aug 13, 2014 at 10:42 AM, Adam Wulkiewicz <adam.wulkiewicz_at_[hidden]> wrote:
>
>> Paul A. Bristow wrote:
>>
>>> -----Original Message-----
>>>> From: Boost [mailto:boost-bounces_at_[hidden]] On Behalf Of Adam
>>>> Wulkiewicz
>>>>
>>>> Would it make sense to use some more descriptive color/naming scheme?
>>>>
>>>> In particular it would be nice to distinguish between the actual
>>>> failures and the situations when the compilation of a test took too
>>>> much time or an output file was too big.
>>>> Would it make sense to also distinguish between compilation, linking
>>>> and run failure?
>>>
>>> +1 definitely.
>>>
>>> This is particularly a problem for Boost.Math - the largest library in
>>> Boost, by
>>> far, in both code and tests, with several tests that often time out.
>>>
>>> Various summary counts of passes and fails would be good as well.
>>>
>>> It takes a while to scan the current (nice) display, e.g.
>>>
>>> http://www.boost.org/development/tests/develop/developer/math.html
>>>
>>
>> Ok, I managed to find some time to play with it.
>> AFAIU the reports are generated using the code from:
>> https://github.com/boostorg/boost/tree/develop/tools/regression/src/report,
>> is that correct?
>>
>
> Yes, I believe so.
>
>>
>> The first change is simple, since I'd first like to see whether I'm
>> doing everything right. I changed the way those specific failures are
>> displayed. If the compilation fails and one of the following strings can
>> be found at the end of the compiler output (the last 25 characters):
>> - "File too big"
>> - "time limit exceeded"
>> then the test is considered an "unfinished compilation" and displayed on
>> the library results page as a yellow cell with a link named "fail?", so
>> it's distinguishable from a normal "fail".
>>
>> Here is the PR: https://github.com/boostorg/boost/pull/25
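>>
>> Roughly, the check amounts to something like the following. This is a
>> minimal sketch in C++, not the actual report code; the function name and
>> variable names are just for illustration:
>>
>> #include <cstddef>
>> #include <string>
>>
>> // Return true if the tail of the compiler output marks the test as an
>> // "unfinished compilation" rather than a genuine failure.
>> bool is_unfinished_compilation(const std::string& output)
>> {
>>     // Only the last 25 characters of the output are inspected.
>>     const std::size_t tail_size = 25;
>>     const std::string tail = output.size() > tail_size
>>         ? output.substr(output.size() - tail_size)
>>         : output;
>>     return tail.find("File too big") != std::string::npos
>>         || tail.find("time limit exceeded") != std::string::npos;
>> }
>>
>> Checking only a short tail keeps the scan cheap and avoids matching those
>> phrases if they happen to appear earlier in ordinary compiler output.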
>>
>
> I skimmed it and didn't see any red flags.
>
>
>>
>> I only tested it locally on results produced by a single runner for the
>> Geometry and Math libraries.
>> Is there a way I could test it on results sent by all of the runners?
>> How is this program executed?
>
> IIRC, Noel Belcourt at Sandia runs the tests. Thus he would be a good
> person to merge your pull request since he will likely be the first to
> notice any problems.
>
> Noel, are you reading this :-?
A little delayed but yes.
>> Is there some script which e.g. checks some directory and passes all of
>> the tests as command arguments?
>>
> He would be the best person to answer that.
I'll have to investigate; I don't know off the top of my head.
Noel