
Boost Testing:

From: Beman Dawes (bdawes_at_[hidden])
Date: 2008-08-07 11:20:54


Eric Niebler wrote:
> Beman Dawes wrote:
>> Eric Niebler wrote:
>>>
>>> Beman Dawes wrote:
>>>> Eric Niebler wrote:
>>>>> Just spoke with Rene about this ...
>>>>>
>>>>> Eric Niebler wrote:
>>>>>> How come the "report time" on nearly every page of release test
>>>>>> results is dated July 15th?
>>>>>>
>>>>>> For example:
>>>>>> http://www.boost.org/development/tests/release/developer/summary_release.html
>>>>>>
>>>>>>
>>>>
>>>> AFAIK, that's the wrong page to be looking at. The page I use to
>>>> make decisions is
>>>> http://beta.boost.org/development/tests/release/developer/summary.html
>>>>
>>>>
>>>
>>> Whew, thanks Beman! How do you get to that page? I go to boost.org,
>>> click on "Development", and then on "Release Summary".
>>
>> I don't look at www.boost.org, because I assume it applies to the
>> current release.
>>
>> Instead I look at beta.boost.org, because I assume it applies to the
>> release under development.
>
> Then why does www.boost.org have results for Boost.Accumulators (some of
> which are quite out of date), when Accumulators is new in 1.36?
>
> In particular, I never saw your response to this exchange from
> yesterday. I think this is a serious problem:
>
> Rene Rivera wrote:
>> Eric Niebler wrote:
>>>
>>> Oh whoa. Something really strange is going on with these pages.
>>>
>>> These results are old:
>>> http://www.boost.org/development/tests/release/developer/accumulators_release.html
>>>
>>>
>>> These results are new:
>>> http://www.boost.org/development/tests/release/developer/accumulators.html
>>>
>>>
>>> I can reach either page via different sequences of clicks. At first
>>> I thought that maybe one page represents results from 1.35, but
>>> that's not possible because accumulators wasn't in 1.35.
>>>
>>> What's going on?
>>
>> This sounds like the release view of the results is not getting
>> generated, on Beman's machine, and it's using some old set of results.
>> Hence only the developer view has the current results.
>
>
> Thoughts?

No idea. That machine just runs the script from the web site; I haven't
done any customization, and haven't changed either hardware or software
setup in ages.

Even though having multiple views of the test results sounds good, with
the current system it adds yet another place for failures to occur.
IIUC, the CMake-based testing will be storing test results in a MySQL
database. I'm hoping that will be much more robust than the current
XML-based system.

> <snip>
>>>
>>> OK, but that doesn't address the concern about test reporting.
>>> Currently, it takes a human (you, Rene, people on the boost-testing
>>> list) to manually verify that the results are being updated.
>>
>> Say we had a tool that looked at web pages to see if they had been
>> updated, and sent email if not. We would point it at
>> http://beta.boost.org/development/tests/release/developer/summary.html,
>> and set the required update frequency to say 24 hours. The email
>> address could be set to either the testing list, the site admins,
>> release managers, or whoever else was interested.
>
>
> That would not be good enough unless it checked every web page with test
> results on it. For instance, checking summary.html as you suggest would
> reveal nothing amiss; however, this page on beta.boost.org is out of date:
>
> http://beta.boost.org/development/tests/release/developer/summary_release.html

I'd love a perfect automated quality assurance system with 100% coverage
yet no false positives. But hopefully QA probes are like regression
testing; even partial coverage is way better than no coverage, and the
system gets much better over time as coverage is added in response to
actual failures.
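[Editor's note: the staleness probe proposed above (point a tool at each
results page, compare its timestamp against a roughly 24-hour threshold,
and notify the testing list on failure) could be sketched as below. This
is a minimal illustration, not part of the thread; the function names and
the idea of parsing a last-modified timestamp per page are assumptions.]

```python
from datetime import datetime, timedelta, timezone

# Required update frequency suggested in the thread (~24 hours).
STALE_AFTER = timedelta(hours=24)

def is_stale(last_modified: datetime, now: datetime,
             threshold: timedelta = STALE_AFTER) -> bool:
    """True if a results page has not been regenerated within `threshold`."""
    return (now - last_modified) > threshold

def check_pages(pages, fetch_last_modified, now=None):
    """Return the subset of `pages` whose results are older than the threshold.

    `fetch_last_modified` maps a URL to a datetime -- e.g. parsed from the
    HTTP Last-Modified header or from the page's own "report time" stamp.
    Crucially, this is run over *every* results page, not just summary.html,
    to address the concern raised above.
    """
    now = now or datetime.now(timezone.utc)
    return [url for url in pages if is_stale(fetch_last_modified(url), now)]
```

In practice `fetch_last_modified` would issue a HEAD request (e.g. via
`urllib.request`) for each page, and any non-empty result from
`check_pages` would trigger a mail (e.g. via `smtplib`) to the testing
list, the site admins, or the release managers.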

--Beman


Boost-testing list run by mbergal at meta-comm.com