From: David Abrahams (dave_at_[hidden])
Date: 2004-05-23 13:51:49
"Jeff Garland" <jeff_at_[hidden]> writes:
>> Static and dynamic objects can't always be built the
>> same way, so that may have to be part of the Jamfile.
>
> Yes, those rules are typically provided by the library/build/Jamfile. Why
> would I need special rules in the test/Jamfile
I didn't say you would. Is that what you mean when you say "_the_
Jamfile"?
> other than to specify my dependency on the dynamic or static
> library?
>
>> And
>> static/dynamic linking isn't an all-or-nothing proposition. Every
>> library test involves at least two libraries (the one being tested
>> and the runtime), sometimes more. There's no inherent reason some
>> couldn't be statically linked and others dynamically linked.
>
> That's irrelevant. I only care about the library under test.
OK, that makes it a little better specified. But why do you think
that's a more relevant test than one that varies how the runtime is
linked?
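For what it's worth, the dependency itself is easy to state either way in a test Jamfile. A minimal Boost.Build v1-style sketch, assuming a hypothetical library `boost_foo` built in `../build` (the `<lib>`/`<dll>` dependency features select the static or shared binary; paths and names are illustrative):

```jam
# libs/foo/test/Jamfile -- hypothetical paths and target names
subproject libs/foo/test ;

import testing ;

# link the test against the static library
run test_foo.cpp <lib>../build/boost_foo ;

# link the same test against the shared library instead
run test_foo.cpp <dll>../build/boost_foo ;
```

That only pins down the library under test, though; it says nothing about how the runtime is linked.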
>> Furthermore, I don't see a big advantage in having a separate
>> command-line option to choose which of those linking modes is used.
>
> Ok, we disagree on this.
>
>> If the library needs to be tested in several variants, then so be it.
>> If it doesn't, but you'd like to see more variants sometimes, you can
>> put some of the variants into the --torture option.
>
> Ok. By the way, I like your suggestion to call it --complete or perhaps
> better --exhaustive.
I didn't suggest that.
>> > The same set of tests should be runnable by the regression tester
>> > and hence associated with the column in the results table. So
>> > something like:
>> >
>> >          VC7.1    VC7.1    VC7.1    etc
>> >          release  release  debug
>> >          dll      static   dll
>> >
>> > test1    Pass     Fail     Pass
>> > test2    Pass     Pass     Pass
>> > ...
>> >
>> > Then basic/complete option controls the number of tests run.
>>
>> That's outta hand, IMO. If it's worth having options, let's keep
>> them simple.
>
> Well, I think it correctly factors the dimensions of compilation options
> versus tests.
There are many, many more dimensions. You could select different
optimization levels, for example. You could test with inlining
on/off. You could test with RTTI on/off.
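To illustrate, each of those dimensions is just another build property you could request per test variant. A sketch in Boost.Build v1 style (the feature names follow Boost.Build conventions but are illustrative; check the toolset documentation before relying on them):

```jam
# Hypothetical extra test variants -- one per added dimension
run test1.cpp : : : <optimization>off ;   # no optimization
run test1.cpp : : : <inlining>off ;       # inlining disabled
run test1.cpp : : : <rtti>off ;           # RTTI disabled
```

Multiply those by compiler, release/debug, and linking mode and the variant matrix grows very quickly, which is the point.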
Maybe there's an argument for the idea that complete testing is run
against each of the library configurations that are installed by the
top-level build process, and no others...
-- Dave Abrahams Boost Consulting http://www.boost-consulting.com
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk