
Boost Testing:

From: Misha Bergal (mbergal_at_[hidden])
Date: 2005-03-10 00:24:41


Rene Rivera <grafik.list_at_[hidden]> writes:

> Misha Bergal wrote:
>> Rene Rivera <grafik.list_at_[hidden]> writes:
>>>
[...]
>
> It's OK.. It took me some time to think in the BuildBot context :-)
>
>> What will be those commands?
>
> The same commands we currently run. So things like "cvs update", "cd
> tools/build/jam_src; build.sh", "cd boost-root/status; bjam -sTOOLS=gcc
> test", etc.
>
> It's a reformulation of the test scripts, either the hand-built one
> like what I have, or some of the things in regression.py. In fact I'm
> using some of the same code from regression.py as it's very helpful
> ;-)
>
>> What will be their results?
>
> Each has a result value, and the standard output.
>
>> How would we decide what toolsets to use?
>> How would we decide what tests to run?
>
> I'm not sure those are different questions. So please clarify if I
> don't answer them..
>
> The server (master) has a configuration file, composed of simple Python
> code, which defines all the clients (slaves). It specifies what
> commands will be run for each. For Boost I'm making what BuildBot
> calls a build factory, which will automate most of the configuration so
> that we only have to specify the basic information. We would minimally
> put in what toolsets, and what tests (specified as a regex of the
> test/Jamfile locations), to run for that client.
>
>> What do we need to know about
>> the slave to correctly issue the commands?
>
> I think the above answers that... But here's more. BuildBot already
> knows what platform each client runs on. Other than that, we need
> things like how the client should get the sources.

The knowledge of what toolsets are supported by each slave will have to
be coded in that Python script, right?
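
For concreteness, here is a rough sketch of the per-slave information such
a master-side configuration would have to carry, and how it would expand
into the commands mentioned above. This is not BuildBot's actual
configuration API; the slave names, toolsets, and test regexes are made up
for illustration.

# Sketch only -- not BuildBot's real config API.  Slave names, toolsets,
# and test regexes below are hypothetical.
SLAVES = {
    'linux-gcc': {
        'toolsets': ['gcc'],
        # regex over test/Jamfile locations selecting which tests to run
        'tests': r'libs/.*/test',
    },
    'win2k-cw': {
        'toolsets': ['cwpro8'],
        'tests': r'libs/(python|serialization)/test',
    },
}

def commands_for(slave):
    """Expand one slave entry into the shell commands the build factory
    would schedule -- the same steps currently run by hand."""
    cfg = SLAVES[slave]
    steps = [
        'cvs update',
        'cd tools/build/jam_src; ./build.sh',
    ]
    # The 'tests' regex would narrow which test/Jamfile directories the
    # bjam run is pointed at; that plumbing is omitted to keep this short.
    for toolset in cfg['toolsets']:
        steps.append('cd boost-root/status; bjam -sTOOLS=%s test' % toolset)
    return steps

print(commands_for('linux-gcc'))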

>> What do "failed" or "succeeded" mean for commands? The result code is
>> not very helpful if there are some known failing tests.

> The result code is sufficient in this case. Remember this is not a
> replacement for the XSLT result processing. That's what gives us
> precise fail/succeed and why information. My recommendation, based on
> the test breakup I first mentioned, would be to isolate known failing
> tests into a separate group so that they could be wholesale ignored
> depending on the platform+toolset of the client. For example the
> serialization library could segment its tests into DLL and LIB and
> have only the LIB ones run on Windows/CW. Also it could segment out
> wide_char tests so that those run on neither MinGW nor BSD.

Having meaningful result codes seems to be the most important thing to
get right for a useful BuildBot implementation. I am not sure that
breaking up the tests is the right way to go, though.
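
As a rough sketch of the grouping idea under discussion (the group names,
platform/toolset labels, and the exclusion table are all hypothetical), the
known-failing groups could live in a small per-client table and be skipped
wholesale, so that a non-zero result code from the remaining groups really
does mean something broke:

# Hypothetical illustration of the "known failing" grouping idea: test
# groups a client is expected to fail are skipped outright, so the result
# codes of the remaining groups become meaningful on their own.
KNOWN_FAILURES = {
    # (platform, toolset) -> test groups to ignore on that client
    ('windows', 'cw'):  {'serialization-dll'},
    ('mingw',   'gcc'): {'wide_char'},
    ('freebsd', 'gcc'): {'wide_char'},
}

def groups_to_run(all_groups, platform, toolset):
    """Return the test groups whose result codes should count."""
    skip = KNOWN_FAILURES.get((platform, toolset), set())
    return [g for g in all_groups if g not in skip]

# Example: on Windows/CodeWarrior the DLL serialization tests are ignored.
print(groups_to_run(['serialization-dll', 'serialization-lib', 'wide_char'],
                    'windows', 'cw'))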

-- 
Misha Bergal
MetaCommunications Engineering
