
Boost Testing:

From: Rene Rivera (grafik.list_at_[hidden])
Date: 2005-03-10 00:54:29


Misha Bergal wrote:

> Rene Rivera <grafik.list_at_[hidden]> writes:

>>The server (master) has a configuration file, composed of simple Python
>>code, which defines all the clients (slaves).
>>
>>>What do we need to know about
>>>the slave to correctly issue the commands?
>>
>>I think the above answers that... But here's more. BuildBot already
>>knows what platform each client is on. Other than that, we need things
>>like how the client should get the sources.
>
> The knowledge of what toolsets are supported by a slave will have to
> be coded in that Python script, right?

If you mean the server (master) configuration file, yes. At least,
that's the current out-of-the-box support from BuildBot. It would be
possible to move that knowledge to the client, but we would have to get
down to the lower levels of BuildBot to implement that.
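
Roughly, the master configuration might look like this. This is a
sketch only, written against a later (0.8.x-style) BuildBot API rather
than the one current as of this writing, and every slave name, password,
and toolset mapping in it is hypothetical:

# master.cfg -- a minimal sketch, not an actual Boost configuration.
from buildbot.buildslave import BuildSlave
from buildbot.config import BuilderConfig
from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import ShellCommand

c = BuildmasterConfig = {}

# The master declares every slave (client)...
c['slaves'] = [
    BuildSlave("linux-gcc", "secret"),
    BuildSlave("win32-msvc", "secret"),
]
c['slavePortnum'] = 9989  # port the slaves connect back on

# ...and which toolset each slave's builder should run.
def test_factory(toolset):
    f = BuildFactory()
    f.addStep(ShellCommand(
        name="libs/config/test",
        command=["bjam", "-sTOOLS=" + toolset, "--dump-tests", "test"],
        workdir="boost/libs/config/test"))
    return f

c['builders'] = [
    BuilderConfig(name="boost-gcc", slavenames=["linux-gcc"],
                  factory=test_factory("gcc")),
    BuilderConfig(name="boost-msvc", slavenames=["win32-msvc"],
                  factory=test_factory("msvc")),
]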

>>A result code is sufficient in this case. Remember this is not a
>>replacement for the XSLT result processing. That's what gives us
>>precise fail/succeed and why information. My recommendation, based on
>>the test breakup I first mentioned, would be to isolate known-failing
>>tests into a separate group so that they could be wholesale ignored
>>depending on the platform+toolset of the client.
>
> Having meaningful result codes seems to be the most important thing to
> do for a useful BuildBot implementation.

If BuildBot were the only test communication device we had, yes, it
would be the most important. For us it's only a question of granularity.
The more detail we need from the BuildBot information, the more
important the result codes become. But it also means we have to break up
the tests into more individual commands. I'm thinking that at first we
can live with library-level command granularity, because that's the unit
developers are going to be interested in most of the time. For example,
such a command would be:

cd $(BOOST_ROOT)/libs/config/test
bjam -sTOOLS=gcc --dump-tests test

If bjam fails, it's because at least one of the tests failed, which
would mark the libs/config/test command as failed in the BuildBot
display. That would be sufficient for people to go investigate further
what is wrong. During normal development they would likely look at the
Meta-Comm results to get more information. But during the mad dash of
release testing they are likely to just look at the log in BuildBot to
shorten the turnaround.
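
Concretely, each library would get its own step, and BuildBot's
ShellCommand marks a step failed whenever the command exits non-zero,
which is exactly the bjam behavior above. A sketch, again against a
later API, where the library list and directory layout are assumptions:

# One ShellCommand step per library: library-level granularity.
from buildbot.process.factory import BuildFactory
from buildbot.steps.shell import ShellCommand

f = BuildFactory()
for lib in ["config", "filesystem", "regex"]:  # hypothetical selection
    f.addStep(ShellCommand(
        name="libs/%s/test" % lib,             # label shown in the display
        command=["bjam", "-sTOOLS=gcc", "--dump-tests", "test"],
        workdir="boost/libs/%s/test" % lib,
        haltOnFailure=False))                  # one library failing should
                                               # not stop the remaining ones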

> I am not sure that breaking up
> the tests is the right way to go, though.

I don't really know; ultimately it's up to the library authors. But
it's definitely something we'll have to experiment with to see if it
works.

-- 
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com - 102708583/icq
