From: Aleksey Gurtovoy (agurtovoy_at_[hidden])
Date: 2004-05-22 17:18:04


Jeff Garland writes:
> On Sat, 22 May 2004 15:04:19 -0500, Aleksey Gurtovoy wrote
> > Jeff Garland writes:
> > > It's kind of spotty outside of the meta-comm guys:
> > > IBM Aix 11 days
> > > Mac OS today
> > > SGI Irix 2 weeks
> > > linux 4 days
> > > Sun Solaris 6 days
> > > Win32 4 weeks
> > > win32_metacomm today
> > >
> > > And that's today.
> >
> > IMO the only thing it indicates is that these tests are initiated manually.
>
> Really. I find it hard to believe all the *nix guys don't have a cron job
> set up.

If they had set it up, the above list would look different.
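
For reference, scheduling a nightly run takes a single crontab entry. A
minimal sketch, with purely illustrative paths and log names:

   # added via `crontab -e`; kicks off a full run every night at 2am
   0 2 * * *  cd $HOME/boost-regression && ./run_tests.sh >>cron.log 2>&1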

> But maybe what you are saying is the rest of the system isn't 100%...

I'm saying that the fact that tests on some platforms haven't been run for a
while means exactly that -- they haven't been run for a while, no more, no
less. There might be a number of reasons why that's the case for each
particular platform, but by itself it doesn't indicate that a (supposedly)
long run cycle has anything to do with it.

>
> > > Consider that during the next couple of months 3-4 new libraries
> > > are pending to be added.
> >
> > Not a problem, in general. Right now a *full rebuild* takes about 8 hours.
> > If we switch to an incremental model, we have plenty of reserve here.
>
> I assume that's all the compilers?

Yep, nine of them.

> Anyway, I remember others (Beman) have
> previously expressed concern about the length of the test cycle.

It is a problem if you are running them on your "primary" machine during the
day. I don't think we can do much about it -- just compiling the tests takes
about half of the whole cycle's time, and personally I see little value in
regression results that don't at least compile every test.

On the other hand, an incremental cycle, if it involves just a couple of
libraries, can be made pretty fast. Bjam needs some tweaking, though, to
skip the libraries that were marked as unusable.
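
Roughly, an incremental cycle could look like this (the directory layout
below is just an example):

   cd $HOME/boost-regression/boost
   cvs -q update -dP      # pull only what has changed since the last run
   cd status
   bjam -sTOOLS=gcc test  # bjam rebuilds only the out-of-date targets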

>
> > > Serialization tests alone dramatically increase the time it takes to
> > > run the regression if we always run the full test.
> >
> > Well, the dramatic cases need to be dealt with, and IMO a Jamfile
> > that allows the library author to manage the level of
> > "stressfulness" would be just enough.
>
> 'Something' will need to be done with or for serialization. The
> current test is very lengthy to perform. So I suppose Robert can cut the test
> down, but that means the possibility of missing some portability issue. So I
> can see why we want the capability to run that full-up torture test -- just
> not every day.

Sure, I was just saying that the library author can deal with it on their
own -- just make several sections in the bjam file and enable/disable them
depending on your current needs.
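
For example, if the heavy tests were guarded in the Jamfile by a
conditional on some variable -- say, STRESS_TESTS (a made-up name) -- the
runner could choose between the two modes on the command line:

   # everyday run -- only the basic section gets built
   bjam -sTOOLS=gcc test

   # occasional torture run -- enables the guarded section as well
   bjam -sTOOLS=gcc -sSTRESS_TESTS=1 test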

>
> > > What will happen in a year when we have say 10 new libraries?
> >
> > Well, hopefully we'll also have more computing power. Surely a lot
> > of organizations which use Boost libraries can afford to spare a
> > mid-range machine for automatic testing?
>
> Perhaps. From my view things seem pretty thin already.

If we provide a documented way to set up the whole thing, and post "A Call
for Regression Runners", I am sure we'll get some response.

> There was some
> discussion during the last release that some testers had removed the python
> tests because they were taking too long.

Well, you are right that right now the resources are a little sparse, but IMO
that's just because we haven't worked on it yet.

>
> BTW, just to pile on, wouldn't it be nice if we had testing of the sandbox
> libraries as well? This would really help those new libraries get ported
> sooner rather than later...

IMO that's asking too much. Many of them never get submitted.

>
> >...from the other mail...
> > Aleksey wrote:
> > b) distributed testing of libraries, with subsequent merging of the
> > results into a single report.
>
> I agree more distribution of testing would be another way to improve things --
> at least for Windows and Linux. But the reason I'm advocating the ability to
> split into basic versus torture tests, and the various dll/static options, is
> that we don't have five contributors to run an SGI test.

"Basic" (supposedly what we have now) versus "drastic" (supposedly what's coming
with serialization) distinction definitely makes sense. I am not arguing against
this one, rather against lowering down our current standards.
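
As for the distributed part itself, roughly what I have in mind (all the
names below are hypothetical) is that each runner tests their own platform
and ships the log to a central place where the reports get merged:

   bjam -sTOOLS=gcc test 2>&1 | tee gcc-linux.log
   scp gcc-linux.log collector.example.org:incoming/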

> If the test takes 5 to 6
> hours to run a single compiler we might lose the one contributor we have.

True, if they are forced to run the drastic test, which IMO shouldn't be the
case -- it should be entirely up to the regression runner to decide when and
if they have the resources to do that.

> Wouldn't it be better to have a basic test that would be faster to run than
> no tests at all?

Sure, and it should be up to them to decide that.

>
> > > BTW I might be able to contribute to the Linux testing -- are there
> > > instructions on how to set this up somewhere?
> >
> > For *nix systems, there is a shell script that is pretty much
> > self-explanatory:
> >
> > http://cvs.sourceforge.net/viewcvs.py/boost/boost/tools/regression/run_tests.sh
>
> Thanks, I'll take a look.
>
> > If you want something that requires even less maintenance, we can
> > provide you with the Python-based regression system we use here at Meta.
>
> Well, I'm going to want something almost totally hands-off or it just won't
> happen. I don't have time to babysit stuff. So I guess I'd like to see both.

OK, we'll make it available.

> For a while I'm likely to set up only a single compiler (gcc 3.3.1) on my
> Mandrake 9 machine. With that approach I should be able to cycle more
> frequently. Incremental testing is probably a good thing to try out as well.

It produces less reliable results, but the root causes of that need to be
tracked down and fixed, so yes, it would be good to start looking into it.

--
Aleksey Gurtovoy
MetaCommunications Engineering
