
Boost Testing:

From: Beman Dawes (bdawes_at_[hidden])
Date: 2007-09-14 16:51:43


Rene Rivera wrote:
> Rearranging the Q&A...
>
> Beman Dawes wrote:
>> The regression testing script obtains the boost working copy for testing
>> either by downloading a tarball or via a subversion update.
>
>> As a developer, I prefer testers use the subversion approach because it
>> ensures my latest svn commits are included in the test run.
>
> Not true any more. The tarball generation is now on-demand. When the
> testers use the tarball the server does an svn-export+tar+bz2, but does
> it in a stream to reduce disk space use. Hence the tarball will always
> have the latest changes when it's obtained.

That's good news!
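
Just so I'm sure I follow, my mental model of that streaming setup is
roughly the sketch below (Python, with a placeholder repository URL; the
authoritative version is of course boost_svn_export_archive.sh): the
working copy is exported once to a scratch directory, and tar piped into
bzip2 streams the compressed archive out without the tarball itself ever
being written to disk.

import subprocess
import sys
import tempfile

REPO = "file:///path/to/boost/repo/trunk"   # placeholder URL, not the real layout

with tempfile.TemporaryDirectory() as workdir:
    export_dir = workdir + "/boost-trunk"

    # svn export (not checkout) so no .svn metadata ends up in the snapshot.
    subprocess.run(["svn", "export", "--quiet", REPO, export_dir], check=True)

    # tar writes the archive to stdout and bzip2 compresses the stream, so
    # the compressed tarball itself never touches the disk.
    tar = subprocess.Popen(["tar", "-cf", "-", "-C", workdir, "boost-trunk"],
                           stdout=subprocess.PIPE)
    subprocess.run(["bzip2", "-c"], stdin=tar.stdout,
                   stdout=sys.stdout.buffer, check=True)
    tar.stdout.close()
    tar.wait()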

>> I'd also like to see the subversion approach become the default, rather
>> than an option. Or better yet, could we eliminate the tarball approach
>> completely?
>
> No, the tarball can't be eliminated. There are some testers that are
> behind very restrictive firewalls and only have HTTP access. They can't
> make SVN/WebDAV connections, or at least only with considerably more
> pain and likely fragile setups.
>
>> The svn approach is also more robust, because it eliminates the
>> possibility of tarball creation failures.
>
> There should no longer be tarball creation failures, especially since
> there isn't a long-running process that generates them, and the
> creation is done directly from the file system instead of going through
> the svn WebDAV interface.

Wonderful, if it works out in practice. But I think we need to start
looking at every failure in the test mechanism and figuring out a way to
eliminate any recurrence.

>> Thus I encourage testers to use the subversion approach. It has several
>> advantages for testers: it is quicker and uses less network bandwidth.
>
> That is only true for incremental testers, which do an svn update. Full
> testers get the code fresh each time, AFAIK. So in that regard the
> tarball uses less bandwidth, since it does global compression without
> the overhead of svn's web communications.

I suppose there isn't a lot of difference on platforms with many, many
compile failures. But an incremental test on my Windows box with VC++
8.0 only takes two or three minutes, versus close to two hours for a
full test. So I could run an incremental test every hour or even more
often, and then run a full test once a day or once a week.
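
What I have in mind for my own box is something like the loop sketched
below (the commands are placeholders only; the real regression driver
and its options live in tools/regression): an incremental run every
hour, and a full run once every 24 hours.

import subprocess
import time

# Placeholder invocations -- the real regression driver and its options are
# in tools/regression; this only shows the hourly-incremental / daily-full
# split I describe above.
INCREMENTAL_CMD = ["python", "run.py", "--incremental"]
FULL_CMD = ["python", "run.py"]

HOUR = 60 * 60
last_full = 0.0

while True:
    now = time.time()
    if now - last_full >= 24 * HOUR:
        # Once a day: a clean run from scratch (close to two hours here).
        subprocess.run(FULL_CMD)
        last_full = now
    else:
        # Otherwise: an svn update plus rebuild of whatever changed
        # (two or three minutes on this machine).
        subprocess.run(INCREMENTAL_CMD)
    time.sleep(HOUR)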

One of the reasons I'd like to see bjam produce timings for compiles and
tests is so we can see where our testing resources are being expended.
That would also give us a stick to beat certain compiler vendors over
the head with, if it turns out their compiler is much more expensive to
use than others.
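
To illustrate the kind of data I mean, here is a rough mock-up of a
compiler wrapper that records per-invocation times (the compiler name
and log location are just assumptions; the proper place for this would
of course be inside bjam itself):

import subprocess
import sys
import time

REAL_COMPILER = "cl"              # assumed; whatever the toolset normally runs
LOG_FILE = "compile_times.log"    # assumed location

# Time a single compiler invocation, passing the arguments straight through.
start = time.time()
result = subprocess.run([REAL_COMPILER] + sys.argv[1:])
elapsed = time.time() - start

with open(LOG_FILE, "a") as log:
    # One line per invocation: elapsed seconds plus the command line, which
    # is enough to total up where the testing time actually goes.
    log.write("%.1f\t%s\n" % (elapsed, " ".join(sys.argv[1:])))

sys.exit(result.returncode)
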
>
> PS. This is the tarball generation script
> <http://svn.boost.org/trac/boost/browser/trunk/tools/regression/boost_svn_export_archive.sh>.
> And this is the web script that serves it up
> <http://svn.boost.org/trac/boost/browser/website/public_html/beta/development/snapshot.php>.

Thanks,

--Beman

