From: Schrader, Glenn (gschrad_at_[hidden])
Date: 2008-04-04 08:29:50


> -----Original Message-----
> From: boost-bounces_at_[hidden] [mailto:boost-bounces_at_[hidden]]
> On Behalf Of Joaquin M Lopez Munoz
> Sent: Thursday, April 03, 2008 12:42 PM
> To: boost_at_[hidden]
> Subject: Re: [boost] [1.36.0] High priority compilers for this release?
>
<CLIP>
>
> I'd like to bring the following idea up for discussion: besides
> the mainstream platforms, additional, more exotic platforms can be
> provisionally included in the supported set *provided that*
> each is associated with a volunteer *platform champion*. A platform
> champion will
>
> * Make sure there's a daily regression test for the platform
> (either run by her or by some other party).
> * Continuously scan the platform-specific regressions and
> Trac tickets in every Boost library, study them, and propose
> tested patches. For a reasonably conformant compiler, this is
> actually not as hard as it might sound, since many problems
> (in my experience) are very local in nature and can be fixed
> without any deep knowledge of the code being fixed. I've
> done my share of fixing for MSVC 6.0 and can attest to this. The
> point is that in fixing platform-specific glitches it is far
> more valuable to have someone with knowledge of and access to
> the platform than someone with knowledge of the lib itself.
> * Be available to answer questions about the platform and to try
> tests and such on demand.
>
> If the champion resigns or is unable to maintain a high quality
> level for the platform, the platform is dropped.
>
> Does this make sense?
>
> Joaquín M López Muñoz
> Telefónica, Investigación y Desarrollo

Even for the mainstream platforms there is a huge number of combinations of supporting tools and libraries (MPI, Python, etc.) and compilers (gcc, msvc, etc.). Each of these has some number of released versions, each version can be built with different options (debug vs. optimized, shared vs. static vs. both), and Boost itself has quite a few build options of its own. This is, of course, far too much to test exhaustively, but the set of configurations actually in use is probably still huge.
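
To put a rough number on it, here is a minimal sketch in Python (the compiler versions and library names are made up for illustration, not an actual tested matrix): even four small dimensions multiply out quickly.

    # Minimal sketch of the combinatorics; all names below are
    # hypothetical, chosen only to show how the dimensions multiply.
    from itertools import product

    compilers = ["gcc-4.2", "gcc-4.3", "msvc-8.0", "msvc-9.0"]
    variants  = ["debug", "release"]
    linking   = ["static", "shared"]
    extras    = ["none", "mpi", "python"]  # optional supporting libraries

    configs = list(product(compilers, variants, linking, extras))
    print(len(configs))  # 4 * 2 * 2 * 3 = 48, from just four dimensions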

To make a dent in a test set of that size, an enormous amount of computation is needed. An approach that looks interesting is SETI@home's: set up a BOINC project for Boost build testing and distribute the test load across anybody who wants to contribute cycles. The "dataset" being processed would consist of the entire toolchain (gcc + libs + Boost + etc.) for the open source pieces. The project should probably maintain an entire internal toolchain so that it doesn't rely on much more than the kernel version and network access on a client system.

Commercial compilers and the like couldn't be downloaded, so there would need to be a project configuration mechanism that lets a particular user declare whether any of those compilers are installed and, if so, which versions. The project could, for instance, cache builds of supporting items (e.g. gcc) so that it wouldn't have to rebuild them for each test. All of the test results would be reported back to the Boost site and aggregated into a master test results matrix. All of this could be done either nightly or continuously.
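
As a very rough sketch of what the client-side flow might look like (this is not the real BOINC API; the work-unit format and every function name here are invented for illustration):

    # Hypothetical client-side flow. NOT the real BOINC client API;
    # fetch_toolchain, run_tests, report_results, and the work-unit
    # layout are all invented for this sketch.
    import json

    _toolchain_cache = {}

    def fetch_toolchain(spec):
        # Reuse a cached build of e.g. gcc so it isn't rebuilt per test.
        key = (spec["compiler"], spec["version"])
        if key not in _toolchain_cache:
            _toolchain_cache[key] = "/cache/%s-%s" % key  # placeholder path
        return _toolchain_cache[key]

    def run_tests(toolchain_path, wu):
        # A real client would invoke the regression suite with the
        # requested build options here; stubbed out in this sketch.
        return {lib: "pass" for lib in wu["libraries"]}

    def report_results(wu, results):
        # A real client would send this back to the Boost site for
        # aggregation into the master results matrix; here we print it.
        print(json.dumps({"config": wu, "results": results}, indent=2))

    work_unit = {  # hypothetical work-unit descriptor
        "toolchain": {"compiler": "gcc", "version": "4.2.3"},
        "variant": "debug",
        "link": "static",
        "libraries": ["regex", "thread"],
    }

    path = fetch_toolchain(work_unit["toolchain"])
    report_results(work_unit, run_tests(path, work_unit))

The caching step is what would keep a client from rebuilding gcc for every work unit; only the Boost configuration under test changes between runs.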

The nice thing about this is that a given user doesn't need to understand the build process in order to contribute resources to it. This low barrier to entry would allow participation by users who would otherwise balk at the amount of commitment required.

Having said all of this, I have never looked into actually setting up a BOINC project. Does anybody have insight into whether this is a sane thing to try? If it would work, prototyping it would make a nice GSoC project.

--glenn

