
Boost Testing :

From: Rene Rivera (grafik.list_at_[hidden])
Date: 2005-03-09 23:59:25


Misha Bergal wrote:
> Rene Rivera <grafik.list_at_[hidden]> writes:
>>
>>It gives the slaves a sequence of command invocations. The slaves run
>>those commands, and respond back to the server with the command
>>output. The server puts a sequence of commands together to constitute
>>a "Build". Each one of the boxes you see above the "Build" box is one
>>of those commands. Follow one of the "log" links on those boxes and
>>you see the command and its output.
>
> I am sorry, I still don't understand.

It's OK.. It took me some time to start thinking in the BuildBot context :-)

> What will be those commands?

The same commands we currently run. So things like "cvs update", "cd
tools/build/jam_src; build.sh", "cd boost-root/status; bjam -sTOOLS=gcc
test", etc.

It's a reformulation of the test scripts, either a hand-built one like
the one I have, or some of the things in regression.py. In fact I'm reusing
some of the code from regression.py, as it's very helpful ;-)
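
For illustration only, something like the sketch below is what I mean by a
sequence of command steps. The step classes exist in BuildBot, but the
module paths and the addStep form vary between versions, so take the exact
names with a grain of salt:

    # Rough sketch only; BuildBot module paths/APIs differ between versions.
    from buildbot.process.factory import BuildFactory
    from buildbot.process.step import ShellCommand

    f = BuildFactory()
    # One step per command, mirroring the manual test-script sequence above.
    f.addStep(ShellCommand, command="cvs update",
              description="updating sources")
    f.addStep(ShellCommand, command="cd tools/build/jam_src && ./build.sh",
              description="building bjam")
    f.addStep(ShellCommand, command="cd boost-root/status && bjam -sTOOLS=gcc test",
              description="running tests")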

> What will be their results?

Each command has a result code, plus its standard output.

> How would we decide what toolsets to use?
> How would we decide what tests to run?

I'm not sure those are really different questions, so please follow up if I
don't answer them..

The server (master) has a configuration file, written in simple Python,
which defines all the clients (slaves) and specifies what commands will be
run on each. For Boost I'm writing what BuildBot calls a build factory,
which will automate most of the configuration so that we only have to
specify the basic information. At a minimum we would put in which toolsets,
and which tests (specified as a regex over the test/Jamfile locations), to
run for that client.

That's the minimal step I want to get working this week. Later on we can
extend it to do things like add resource availability constraints.
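
To make that a bit more concrete, here's a rough sketch of what such a
master configuration might end up looking like. The boost_test_factory
helper, the builder names, and the parameters below are purely illustrative,
not the actual factory I'm writing:

    # master.cfg sketch; everything here is illustrative, not the real config.
    from buildbot.process.factory import BuildFactory

    def boost_test_factory(toolsets, test_pattern):
        # Hypothetical helper: expands into cvs-update, bjam-build, and one
        # test step per toolset, limited to Jamfiles matching test_pattern.
        f = BuildFactory()
        # ... steps added here ...
        return f

    c = BuildmasterConfig = {}
    c['bots'] = [('gcc-linux-slave', 'password')]   # client name + password
    c['builders'] = [{
        'name': 'gcc-linux',
        'slavename': 'gcc-linux-slave',
        'builddir': 'gcc-linux',
        'factory': boost_test_factory(
            toolsets=['gcc'],
            test_pattern=r'libs/.*/test/Jamfile'),
    }]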

> What do we need to know about
> the slave to correctly issue the commands?

I think the above answers that... but here's more: BuildBot already
knows what platform each client is on. Beyond that, we need things
like how the client should get the sources.

> What does "failed" or
> "succeeded" mean for commands? A result code is not very helpful if there
> are some known failing tests.

The result code is sufficient in this case. Remember, this is not a
replacement for the XSLT result processing; that's what gives us the
precise fail/succeed and "why" information. My recommendation, based on the
test breakup I first mentioned, would be to isolate known failing tests into
a separate group so that they can be wholesale ignored depending on the
platform+toolset of the client. For example, the serialization library
could segment its tests into DLL and LIB groups and have only the LIB ones
run on Windows/CW. It could also segment out wide_char tests so that those
don't run on MinGW or BSD.
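
As a rough sketch of that grouping idea (the group names and the skip table
below are made up for illustration, not anything a library actually
defines today):

    # Hypothetical sketch: skip whole test groups per platform+toolset.
    SKIP_GROUPS = {
        ('windows', 'cw'):  {'dll'},         # run only the LIB group on Windows/CW
        ('mingw',   'gcc'): {'wide_char'},   # no wide_char tests on MinGW
        ('bsd',     'gcc'): {'wide_char'},   # ... nor on BSD
    }

    def groups_to_run(all_groups, platform, toolset):
        skipped = SKIP_GROUPS.get((platform, toolset), set())
        return [g for g in all_groups if g not in skipped]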

-- 
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq
