
From: Jeff Garland (jeff_at_[hidden])
Date: 2004-05-22 15:52:46


On Sat, 22 May 2004 15:04:19 -0500, Aleksey Gurtovoy wrote
> Jeff Garland writes:
> > It's kind of spotty outside of the meta-comm guys:
> > IBM Aix 11 days
> > Mac OS today
> > SGI Irix 2 weeks
> > linux 4 days
> > Sun Solaris 6 days
> > Win32 4 weeks
> > win32_metacomm today
> >
> > And that's today.
>
> IMO the only thing it indicates is that these tests are initiated manually.

Really? I find it hard to believe all the *nix guys don't have a cron job
set up. But maybe what you are saying is that the rest of the system isn't
100% automated...
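
For the record, what I'd expect on the *nix boxes is nothing more than a
one-line crontab entry along these lines (the times, paths, and the
nightly.sh script name are all made up for illustration):

  # kick off the regression run every night at 2:30 am, appending to a log
  30 2 * * * /home/jeff/boost-regression/nightly.sh >> /home/jeff/boost-regression/nightly.log 2>&1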
 
> > Consider during the next couple months 3-4 new libraries
> > are pending to be added.
>
> Not a problem, in general. Right now a *full rebuild* takes about 8 hours.
> If we switch to an incremental model, we have plenty of reserve here.

I assume that's with all the compilers? Anyway, I remember that others
(Beman, for one) have previously expressed concern about the length of the
test cycle.
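
By "incremental" I take it Aleksey means skipping the clean step, so that
between runs only out-of-date targets get rebuilt. Roughly the difference
below, though I'm guessing at the exact bjam invocation:

  # full rebuild: wipe the previous targets, then build and test everything
  bjam -sTOOLS=gcc clean
  bjam -sTOOLS=gcc test

  # incremental: just re-run; bjam rebuilds only what the last update touched
  bjam -sTOOLS=gcc test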

> > Serialization tests alone dramatically increase the
> > length of the time to run the regression if we always run the full test.
>
> Well, the dramatic cases need to be dealt with, and IMO a Jamfile
> that allows the library author to manage the level of
> "stressfulness" would be just enough.

'Something' will need to be done with or for serialization. The current
test takes a very long time to run. Robert could cut the test down, but
that raises the possibility of missing some portability issue. So I can see
why we'd want the capability to run that full-up torture test -- just
not every day.
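
One crude way to get there, building on Aleksey's Jamfile suggestion, is to
pass a variable through to the build and have the serialization Jamfile
register its long-running cases only when it is set. STRESS_LEVEL below is
a made-up name, purely for illustration:

  # default run: the quick "basic" subset
  bjam -sTOOLS=gcc test

  # occasional full-up torture run; -sVAR=value is how bjam takes variables,
  # and the Jamfile would check STRESS_LEVEL before adding the long tests
  bjam -sTOOLS=gcc -sSTRESS_LEVEL=torture test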

> > What will happen in a year when we have say 10 new libraries?
>
> Well, hopefully we'll also have more computing power. Surely a lot
> of organizations which use Boost libraries can afford to spare a
> middle-class machine for automatic testing?

Perhaps. From where I sit, things seem pretty thin already. There was some
discussion during the last release that some testers had dropped the Python
tests because they were taking too long.

BTW, just to pile on, wouldn't it be nice if we had testing of the sandbox
libraries as well? This would really help those new libraries get ported
sooner rather than later...

>...from the other mail...
> Aleksey wrote:
> b) distributed testing of libraries, with the following merging of
> results into a single report.

I agree that more distribution of testing would be another way to improve
things -- at least for Windows and Linux. But the reason I'm advocating the
ability to split the run into basic versus torture modes, along with the
various dll/static options, is that we don't have five contributors to run
an SGI test. If the test takes 5 to 6 hours for a single compiler, we might
lose the one contributor we have. Wouldn't it be better to have a basic
test that runs quickly than no tests at all?

> > BTW I might be able to contribute to the Linux testing -- are there
> > instructions on how to set this up somewhere?
>
> For *nix systems, there is a shell script that is pretty much
> self-explanatory:
>
> http://cvs.sourceforge.net/viewcvs.py/boost/boost/tools/regression/run_tests.sh

Thanks, I'll take a look.

> If you want something that requires even less maintenance, we can
> provide you with the Python-based regression system we use here at Meta.

Well, I'm going to want something almost totally hands-off or it just won't
happen. I don't have time to babysit stuff. So I guess I'd like to see both.
For a while I'm likely to set up only a single compiler (gcc 3.3.1) on my
Mandrake 9 machine. With that approach I should be able to cycle more
frequently. Incremental testing is probably a good thing to try out as well.
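
Concretely, the nightly.sh I mentioned above would be something like the
sketch below -- the paths are placeholders for my machine, and I'm guessing
about the run_tests.sh interface until I've actually read it:

  #!/bin/sh
  # nightly.sh -- unattended, incremental Boost regression run for a
  # single toolset (gcc 3.3.1 here).
  BOOST=/home/jeff/boost                  # local CVS checkout
  cd "$BOOST" || exit 1

  # pull the latest sources; -d picks up new directories, -P prunes empty
  cvs -q update -dP > "$BOOST/cvs-update.log" 2>&1

  # no clean step, so only targets touched by the update get rebuilt
  cd "$BOOST/tools/regression" || exit 1
  sh run_tests.sh > "$BOOST/regression.log" 2>&1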

Jeff

