Boost Testing:

From: Rene Rivera (grafikrobot_at_[hidden])
Date: 2007-09-14 19:53:09


Beman Dawes wrote:
> Rene Rivera wrote:
>> Beman Dawes wrote:

>> There should no longer be tarball creation failures, especially since
>> there is no longer a long-running process that generates them, and the
>> creation is done directly from the file system rather than through the
>> svn webdav interface.
>
> Wonderful, if it works out in practice. But I think we need to start
> looking at every failure in the test mechanism and figuring out a way
> to eliminate any recurrence.

Not that I've stopped looking at bulletproofing all aspects of
testing ;-) But I should point out that the style of fixes I've been
doing recently is only possible because we are no longer in the 1.34.x
release cycle. So, yes, finding the failure holes now, before the 1.35
cycle begins in earnest, is of the utmost importance.
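
As an aside, here is roughly the flavor of the new tarball creation, as
a minimal Python sketch; the boost_root path, the archive name, and the
.svn exclusion rule are illustrative assumptions here, not the actual
script:

    import tarfile

    # Build a release tarball straight from a checkout on disk, skipping
    # the svn webdav interface entirely. Paths and the .svn exclusion
    # rule are assumptions for illustration only.
    def make_tarball(boost_root, out_path):
        def skip_svn(tarinfo):
            # Drop subversion metadata directories from the archive.
            if '/.svn' in tarinfo.name or tarinfo.name.endswith('/.svn'):
                return None
            return tarinfo
        with tarfile.open(out_path, 'w:bz2') as tar:
            tar.add(boost_root, arcname='boost', filter=skip_svn)

    make_tarball('/home/build/boost-trunk', 'boost-trunk.tar.bz2')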

>>> Thus I encourage testers to use the subversion approach. It has several
>>> advantages for testers: it is quicker and uses less network bandwidth.

>> That is only true for incremental testers, who do an svn update. Full
>> testers fetch the code fresh each time, AFAIK. So in that regard the
>> tarball uses less bandwidth, since it is compressed as a whole and
>> avoids the overhead of svn's web communications.

> I suppose there isn't a lot of difference on platforms with many, many
> compile failures. But an incremental test on my Windows box with VC++
> 8.0 only takes two or three minutes, versus close to two hours for a
> full test. So I could run an incremental test every hour or even more
> often, and then run a full test once a day or once a week.

Sure, and a few testers run incrementals for the same reasons. But we
still have issues with incremental testing stemming from having to
post-process the build output to get the results, which is why we
*really* want to replace that part of the process.
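
To give a feel for what that post-processing involves, here is a rough
Python sketch; the "<action> <target>" and "...failed" line formats are
simplified assumptions about the bjam log, and the real tooling handles
far more cases than this:

    import sys

    # Scan a bjam log and record a pass/fail status per target.
    def scan(lines):
        results = {}
        for line in lines:
            words = line.split()
            if not words:
                continue
            if words[0] == '...failed' and len(words) >= 3:
                # e.g. "...failed compile.c++ path/to/target..."
                results[words[2].rstrip('.')] = 'failed'
            elif len(words) == 2 and '.' in words[0]:
                # e.g. "compile.c++ path/to/target" marks an attempted
                # action; assume success until a failure line says otherwise.
                results.setdefault(words[1], 'passed')
        return results

    if __name__ == '__main__':
        for target, status in sorted(scan(sys.stdin).items()):
            print('%s: %s' % (target, status))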

Note that running incrementals is no guard against long runs: if a
library that many others depend on changes, even a single incremental
build will take a long time. In fact that seems to have happened a few
times in the last 24 hours, which noticeably slowed down my cycle. So as
a next step we should consider moving the result processing to a machine
with less contested CPU resources. Dave, you mentioned you might have
such a resource?

> One of the reasons I'd like to see bjam produce timings for compiles and
> tests is so we can see where our testing resources are being expended.

Well, bjam can already produce timing information for each build action
it runs. We just need a place to put that information.
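
Once the timing is captured, aggregating it is straightforward. A small
Python sketch; the "time: <target> user <u> system <s>" line format
below is an invented placeholder for illustration, not bjam's actual
output:

    import sys
    from collections import defaultdict

    # Sum user+system time per target so the most expensive compiles and
    # tests float to the top of the report.
    def aggregate(lines):
        totals = defaultdict(float)
        for line in lines:
            words = line.split()
            if len(words) == 6 and words[0] == 'time:':
                totals[words[1]] += float(words[3]) + float(words[5])
        return totals

    if __name__ == '__main__':
        for target, seconds in sorted(aggregate(sys.stdin).items(),
                                      key=lambda item: -item[1]):
            print('%8.2fs  %s' % (seconds, target))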

> That would also give us a stick to beat certain compiler vendors over
> the head with, if it turns out their compiler is much more expensive to
> use than others.

Yep, as we've seen, public shame is a great impetus for vendors :-)

-- 
-- Grafik - Don't Assume Anything
-- Redshift Software, Inc. - http://redshift-software.com
-- rrivera/acm.org - grafik/redshift-software.com
-- 102708583/icq - grafikrobot/aim - grafikrobot/yahoo
