Boost Testing:
From: Eric Niebler (eric_at_[hidden])
Date: 2008-08-05 14:47:58
Beman Dawes wrote:
> Eric Niebler wrote:
>> Just spoke with Rene about this ...
>>
>> Eric Niebler wrote:
>>> How come the "report time" on nearly every page of release test
>>> results is dated July 15th?
>>>
>>> For example:
>>> http://www.boost.org/development/tests/release/developer/summary_release.html
>>>
>
> AFAIK, that's the wrong page to be looking at. The page I use to make
> decisions is
> http://beta.boost.org/development/tests/release/developer/summary.html
Whew, thanks Beman! How do you get to that page? I go to boost.org,
click on "Development", and then on "Release Summary". That takes me to:
http://www.boost.org/development/tests/release/developer/summary.html
That page *was* dated July 15, but Rene fixed something and now it seems
to be up to date.
Which is the correct site for results? Why do we have two? And why was
boost.org directing me to stale results on www.boost.org instead of the
fresh results on beta.boost.org? Rene?
> The date on that is Sun, 3 Aug 2008 10:16:33 +0000. That's the last run
> before I shut the machine down for maintenance. It is back up now and
> should produce an updated report in an hour or two.
>
>> We should *all* be checking the release test results, especially the
>> release manager.
>
> I look at the results several times a day during critical times, such as
> when preparing the beta.
Glad to hear you're on top of this!
>> How did this happen? Here are some hard questions.
>
> No clue. I have no idea how the results on the www.boost.org web site
> get updated from the beta.boost.org web site.
IMO, the test results should live in *one* place, and all the links
should point to it.
>> On what basis did we release the beta?
>
> The beta was released on the basis of a release report that was only a
> few minutes old.
>
>> (That is, do we have release criteria? Written down?)
>> Do those criteria have anything to do with the test results?
>
> Of course. The results speak for themselves; most of the remaining failures
> are minor nits or even false positives.
>
>> Is anybody in charge of the test infrastructure?
>
> Rene works on the scripts, but a lot of the responsibility is
> distributed or ill-defined.
Hmm...
>> Does that person look at the test results?
>> What can we do to make sure this doesn't happen again?
>
> Take a look at ticket #2150. http://svn.boost.org/trac/boost/ticket/2150
>
> That's one attempt at automated tooling to look for things that have gone
> wrong and report them quickly. It will work on files, either from the
> repository or generated by the doc process.
>
> A similar tool that looked at web sites would be a nice QA addition. It
> would check for the presence of specified files and verify that their
> dates are recent, with "recent" defined per file. Maybe check file size,
> too, or even some content.
OK, but that doesn't address the concern about test reporting.
Currently, it takes a human (you, Rene, people on the boost-testing
list) to manually verify that the results are being updated.
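To make that concrete, here is a rough Python sketch of the kind of
web-watching tool Beman describes. The URLs and the six-hour freshness
threshold are placeholders I've made up, not anything we've agreed on:

# Rough sketch of a report-freshness checker; the watched URLs and
# maximum acceptable ages below are illustrative placeholders.
import email.utils
import time
import urllib.request

# Pages to watch, mapped to the maximum acceptable age in seconds.
WATCHED = {
    "http://www.boost.org/development/tests/release/developer/summary.html": 6 * 3600,
    "http://beta.boost.org/development/tests/release/developer/summary.html": 6 * 3600,
}

def check(url, max_age):
    # A HEAD request is enough: we only need the Last-Modified header.
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        stamp = resp.headers.get("Last-Modified")
        if stamp is None:
            return "WARN  %s: no Last-Modified header" % url
        age = time.time() - email.utils.parsedate_to_datetime(stamp).timestamp()
        if age > max_age:
            return "STALE %s: last updated %.1f hours ago" % (url, age / 3600)
        return "OK    %s" % url

for url, max_age in WATCHED.items():
    try:
        print(check(url, max_age))
    except Exception as exc:  # an unreachable page is itself a failure
        print("ERROR %s: %s" % (url, exc))

Run from cron every hour or so, something like that would have flagged the
stale www.boost.org page within a report cycle instead of waiting for one
of us to stumble across it.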
>> Clearly, we need to delay 1.36 until we can get some fresh test results.
>
> The test results are currently being updated several (up to eight) times
> each day.
>
>> And probably reopen the release branch for bug fixes.
>
> I posted a message several days ago indicating that the release branch
> was open to bug fixes that are stable on trunk.
OK. But I wonder if there are other people like me who were looking at
stale results. :-P
--
Eric Niebler
BoostPro Computing
http://www.boostpro.com