
From: Jeff Garland (jeff_at_[hidden])
Date: 2004-05-23 14:54:16


On Sun, 23 May 2004 14:51:49 -0400, David Abrahams wrote:
> "Jeff Garland" <jeff_at_[hidden]> writes:
>
> > Yes, those rules are typically provided by the library/build/Jamfile. Why
> > would I need special rules in the test/Jamfile?
>
> I didn't say you would. Is that what you mean when you say "_the_
> Jamfile"?

Yes, I was talking about the test/Jamfile.
 
> > That's irrelevant. I only care about the library under test.
>
> OK, that makes it a little better specified. But why do you think
> that's a more relevant test than one that varies how the runtime is
> linked?

Because I assume the runtime libraries are already tested and stable, and the
focus is on the various incarnations of the library under test. However, I do
concede your point: to be exhaustive, linking against different runtimes is
required to test all the interactions, which, of course, increases the number
of options yet again...
 
> > Ok. By the way, I like your suggestion to call it --complete or perhaps
> > better --exhaustive.
>
> I didn't suggest that.

Sorry for the incorrect attribution -- too much email.

> >> > The same set of tests should be runnable by the regression tester
> >> > and hence associated with the column in the results table. So
> >> > something like:
> >> >
> >> >        VC7.1    VC7.1    VC7.1    etc
> >> >        release  release  debug
> >> >        dll      static   dll
> >> >
> >> > test1  Pass     fail     Pass
> >> > test2  Pass     Pass     Pass
> >> > ...
> >> >
> >> > Then basic/complete option controls the number of tests run.
> >>
> >> That's outta hand, IMO. If it's worth having options, let's keep
> >> them simple.
> >
> > Well, I think it correctly factors the dimensions of compilation options
> > versus tests.
>
> There are many, many more dimensions. You could select different
> optimization levels, for example. You could test with inlining
> on/off. You could test with RTTI on/off.

Now that's outta hand ;-) I agree that there is an almost infinite set of
potential options. I believe the set I'm suggesting hits a broad cross-section
of needs, but I'd be happy to see others step forward with different test
variations if they have a need.
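
To make the cross-section concrete, here is a rough Python sketch of how a
test driver might enumerate the configuration columns and how a basic vs.
exhaustive switch changes their number. It is illustrative only; the dimension
names and the hand-picked "basic" subset are assumptions for the example, not
actual bjam options.

    # Illustrative only: enumerate the configuration "dimensions" under
    # discussion and show how many results-table columns each mode produces.
    from itertools import product

    dimensions = {
        "variant":      ["debug", "release"],
        "link":         ["static", "dll"],
        "runtime-link": ["static", "dll"],   # the runtime-linking axis Dave raised
    }

    def configurations(exhaustive=False):
        """Yield one property dictionary per results-table column."""
        if exhaustive:
            # Full cross product of every dimension.
            keys = list(dimensions)
            for values in product(*(dimensions[k] for k in keys)):
                yield dict(zip(keys, values))
        else:
            # A small hand-picked "basic" subset, matching the three columns
            # in the example table above (release/dll, release/static, debug/dll).
            yield {"variant": "release", "link": "dll"}
            yield {"variant": "release", "link": "static"}
            yield {"variant": "debug",   "link": "dll"}

    print(len(list(configurations())))                # 3 columns for basic
    print(len(list(configurations(exhaustive=True)))) # already 8, before adding
                                                      # optimization/inlining/RTTI

Adding the extra dimensions Dave mentions multiplies that product again, which
is exactly why the full matrix gets out of hand so quickly.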
 
> Maybe there's an argument for the idea that complete testing is run
> against each of the library configurations that is installed by the
> top level build process, and no other ones...

That sounds like a reasonable approach to me.
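
As a rough sketch of that approach, the "complete" run could be driven by
whatever library variants the top-level build actually installed, instead of a
fixed matrix. The stage directory and the name handling below are assumptions,
purely for illustration:

    # Hypothetical: derive the results-table columns from the libraries that
    # the top-level build actually staged, one column per built variant.
    from pathlib import Path

    STAGE = Path("stage/lib")          # assumed location of installed libraries

    def installed_variants(library="boost_date_time"):
        """Each decorated library file on disk becomes one test configuration."""
        return sorted(p.name for p in STAGE.glob(f"*{library}*"))

    for name in installed_variants():
        print("test column:", name)    # run the test suite once against this build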

Jeff

