
From: Robert Ramey (ramey_at_[hidden])
Date: 2004-05-23 11:03:59


David Abrahams wrote:

> We currently already have a way of expressing that "known to not work on X
> compiler" idea in the regression system. We could make the build
> system smart about it.

I haven't seen that. That would be helpful. How can I specify that it's
pointless to run tests on a particular compiler?

> Probably simpler, though, you could just put something in one of your
> library headers that said:

> #if BOOST_WORKAROUND(compiler1) || BOOST_WORKAROUND(compiler2) || ...
> # error doesn't work yet
> #endif

Actually I've done something similar for systems whose support for wide
character i/o is insufficient. It's sort of OK, but the tests show up as
all failures, which is slightly misleading. And it doesn't take all that
long, but I'd prefer just to skip them.

>> From here onward the problem might be manageable if BJAM could manage
>> dependencies from a particular x.hpp -> y.cpp -> test.cpp.

> Of course it can and does.

>> But as it is now, if one *.cpp file in the library has to be
>> rebuilt, all the test that depend on any portion of the library have
>> to be run.

> Of course they should.

>> I realize this is an unrealistic hope but we're allowed to dream.

> I clearly don't know what you're talking about here.

What I mean is:

Test.cpp depends only on module1.cpp, which is in the library. The functions
in module1.cpp can only be accessed by including header1.hpp in the test,
either directly or indirectly. Suppose module2.cpp gets recompiled. This
provokes a rebuild of the library, which provokes a rebuild of test.cpp and a
rerun of the test. This occurs even though it isn't necessary, as
module1.cpp - the only module the test depends upon - hasn't changed. In
other words, the dependency granularity is the library, not the modules
within the library. My understanding comes from study of BJAM and
observation of my system when I make a change. I would be very pleased to
find out that I'm wrong about this.

So let me see if I have this right.

a) I can check in a different version of runtest that will set my "testing
level" appropriately for the state of the library, and change it as necessary.

b) I can specify which compilers should be included/excluded from the test.
(somehow - I don't know how yet)

c) If something changes that affects any code/header module used by the
library, everything is going to be rebuilt and tested regardless of the
dependencies of the particular test.

OK - given a and b I think this can be made to work with relatively little
difficulty.

From another post:

>> The same set of tests should be runnable by the regression tester
>> and hence associated with the column in the results table. So
>> something like:
>>
>>          VC7.1     VC7.1     VC7.1    etc.
>>          release   release   debug
>>          dll       static    dll
>>
>> test1    Pass      Fail      Pass
>> test2    Pass      Pass      Pass
>>
>> Then basic/complete option controls the number of tests run.

> That's outta hand, IMO. If it's worth having options, let's keep
> them simple.

The compiler_status program is a good start, but I have a real need for
something that can produce the table above driven by a command line
configuration string. So much so that I'm almost up for writing it
myself, using the current compiler_status program as a starting point. It may
happen.

My experience is that when I have the debug version all running spiffy, I
always have some weird issue with release mode which is compiler dependent.
Usually it's related to the compiler/linker dropping stuff it doesn't think
needs to be instantiated. In practice, once it's fixed, it doesn't break
until some really major change occurs. So my policy will be to run the
release mode tests just once in a very long while.

Robert Ramey


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk