Boost Testing :
From: Rene Rivera (grafik.list_at_[hidden])
Date: 2005-03-09 17:22:53
Misha Bergal wrote:
> Rene Rivera <grafik.list_at_[hidden]> writes:
>
>>All,
>>
>>I've been reading all the testing-related posts and not having time to
>>respond to them. So I decided to talk about what might be done to
>>improve things in one big posting.
>>
>>For a long time now one of my objectives has been to use BuildBot
>>(http://buildbot.sf.net/) to improve the management of running the
>>regression tests. For those who don't feel like reading about that
>>software here's a quick summary:
>>
>
> Several questions:
>
> 1. You presume that when BuildBot tells a certain machine to do
> something, it is already done with the last changes it needed to
> process. It is not so now. I believe our cycle is ~11 hours and
> Martin's is ~24 hours.
BuildBot manages all that. It will not tell a client to do something
if the client is already busy; the build waits until the client becomes
available. This also handles the case where clients are down. Testers
can, at minimum, control when a machine is available by stopping and
starting the client as needed. So if you did not want tests to run
during the day, when you are using the machine for other work, you
could set up cron jobs to start the client after work and stop it
before work.
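For example, a minimal crontab sketch of that arrangement; the slave base
directory, the hours, and the exact client command depend on how the
BuildBot client is installed, so treat every value here as a placeholder:

  # Hypothetical crontab entries; /home/tester/boost-slave is a made-up basedir.
  # Start the build client once the workday is over (7 PM, Mon-Fri)...
  0 19 * * 1-5   buildbot start /home/tester/boost-slave
  # ...and stop it again before work begins (7 AM, Mon-Fri).
  0 7  * * 1-5   buildbot stop /home/tester/boost-slave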
> 2. I still don't understand:
>
> * What exactly will BuildBot tell slaves to build.
The server gives the slaves a sequence of command invocations. The
slaves run those commands and report back to the server with each
command's output. The server puts such a sequence of commands together
to constitute a "Build". Each one of the boxes you see above the
"Build" box is one of those commands. Follow one of the "log" links on
those boxes and you see the command and its output.
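To make that concrete, here is a rough sketch of how such a build might be
declared on the master. The Python names follow a later BuildBot release
than the one current in 2005, and the CVS server, module, and bjam options
are invented placeholders rather than the actual Boost regression commands:

  # master.cfg fragment (illustrative only; server, module, options are made up)
  from buildbot.process.factory import BuildFactory
  from buildbot.steps.source import CVS
  from buildbot.steps.shell import ShellCommand

  boost_build = BuildFactory()
  # Step 1: bring the slave's checkout up to date.
  boost_build.addStep(CVS(cvsroot=":pserver:anonymous@cvs.example.org:/boost",
                          cvsmodule="boost", mode="update"))
  # Step 2: run the tests; each step shows up as one box with a "log" link.
  boost_build.addStep(ShellCommand(command=["bjam", "--dump-tests", "test"],
                                   description="running tests"))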
> * What does BuildBot decide they need to build? What do
> clients declare they can build?
The BuildBot server is the master and decides all the commands that are
needed. We, as the group of testers, decide what those commands are and
which slaves can run what.
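In configuration terms that division of labor might look like the
following sketch, continuing the hypothetical master.cfg fragment above;
the slave names, passwords, and builder names are all invented:

  # master.cfg fragment, continuing the sketch above (names are hypothetical)
  from buildbot.buildslave import BuildSlave
  from buildbot.config import BuilderConfig

  c = BuildmasterConfig = {}
  # The testers' machines; each connects to the master with a name and password.
  c['slaves'] = [BuildSlave("gcc-linux-box", "secret"),
                 BuildSlave("msvc-win32-box", "secret")]
  # Which slaves may run which command sequence (the boost_build factory above).
  c['builders'] = [
      BuilderConfig(name="boost-head-gcc", slavenames=["gcc-linux-box"],
                    factory=boost_build),
      BuilderConfig(name="boost-head-msvc", slavenames=["msvc-win32-box"],
                    factory=boost_build),
  ]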
> * What would be the display result format? Green signifies what?
> Red signifies what? Pointing to some already running BuildBot
> would be helpful. Looking at http://buildbot.ethereal.com/ I just
> don't see how the library author would know if she has broken
> something.
These aren't result displays, they are log displays. A developer will
see her name in the left-most column, which shows the changes committed
to CVS. Test runs after that which match her revision will show red
squares for commands that fail (non-zero result). My plan is to make
individual commands for each library group getting tested. This way
developers will be able to see whether a particular library failed, or
whether many libraries failed (see the sketch below).
I believe red means failure, green means success, and yellow means
in-progress. I remember when I first set this up there was also purple
to indicate warnings.
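A sketch of that per-library plan, again continuing the hypothetical
master.cfg fragment above; the library list and the bjam invocation are
placeholders, not the real Boost test commands:

  # Illustrative only: one step per library group, so each library gets its
  # own colored box on the display and a red box points at what broke.
  for lib in ["config", "filesystem", "regex", "serialization"]:
      boost_build.addStep(ShellCommand(
          command=["bjam", "libs/%s/test" % lib],
          description="testing %s" % lib,
          name="test-%s" % lib))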
>>*Resources*
>
>>The only predictable way to address the resource usage, is to
>>distribute the testing so we can create more capacity.
>
> Or don't waste what we already have, see below.
Yes, good point :-)
>>*Response*
>
>>The gain from segmentation and distribution of testing is hopefully
>>obvious ;-) But another advantage of using BuildBot is that we are not
>>tied to waiting for the XSLT processing to see results. Sure, the
>>results are not going to be as incredibly well organized as the
>>Meta-Comm results, but they are immediately available.
>
> I can see how looking at the logs would be helpful, if the build was
> done per CVS commit. I commit, get a change number and look how that
> was processed by all slaves. Unfortunately because of build times it
> is not possible now - see below.
>
>>*Releases*
>
>
>>Managing the testing for a release was brought up many times. And
>>it's clear that requiring testers to do manual changes is just not
>>working.
>
> What are the specific use cases you are referring to?
The instance brought up by Victor, I think, of having to make changes
to the test script/procedure on the test machine to switch from testing
the HEAD to testing the release branch. With BuildBot this would be done
at the server, and the clients would automatically do what they are told
(see the sketch below).
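As an illustration, retargeting every tester from HEAD to a release branch
could then be a small edit to the master's configuration, since the branch
is a parameter of the checkout step. The branch tag here is invented, and
the names again follow a later BuildBot release than the 2005 one:

  # Illustrative only, continuing the master.cfg sketch: the branch under test
  # is set once here, and every slave picks it up on its next build.
  release_branch = "RC_1_33_0"   # hypothetical release branch tag

  release_factory = BuildFactory()
  release_factory.addStep(CVS(cvsroot=":pserver:anonymous@cvs.example.org:/boost",
                              cvsmodule="boost", mode="update",
                              branch=release_branch))
  release_factory.addStep(ShellCommand(command=["bjam", "test"],
                                       description="testing release branch"))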
> From my point of view, this is what needs to be worked on next
> (http://www.crystalclearsoftware.com/cgi-bin/boost_wiki/wiki.pl?Boost.Testing)
>
> # Incremental testing is not reliable
> # Tests are run for compilers for which they are known to fail.
>
> This needs to be done no matter whether we use BuildBot or something
> else.
Yes. And I, Dave, Volodya, and maybe others, are thinking about how to
fix those problems. But we can't fix what we don't know how to fix ;-)
Concretely, of course; abstractly we have some ideas.
--
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq