Subject: Re: [boost] [test] trunk breakage
From: Robert Ramey (ramey_at_[hidden])
Date: 2010-01-06 12:54:51
>> One concern I have is that most developers will not be running their
>> own tests in this configuration before committing to trunk.
Actually, I've been doing this myself for the past year. And it's made
my life much, much easier. Here is what I do:
a) I check out the entire boost release branch to my local machine.
b) I switch three directories to the trunk. In my case these directories
are root/boost/archive, root/boost/serialization, and
c) I run tests by invoking ../../../tools/regression/src/library_test.sh
from the directory root/libs/serialization/test.
d) This generates test result tables in that directory which I can
view with my browser. Note that this script uses the standard bjam
testing infrastructure, which means that things like my bjam script changes
are also tested.
e) After doing d) above maybe 50 times, with msvc 7.1, msvc 9.0,
and gcc 4.3.2, I'm thinking it might work, so I just check in my changes.
Since the three directories are from the trunk, that's where the changes go.
f) Then I watch the trunk tests until it looks like I didn't break anything.
g) Now the tricky part: I have to merge my changes to the release branch.
There's probably a better way to do this, but here is how I do it. I
switch the three directories to the release branch, merge in the
changes from the trunk, and check in the changes. Then I switch the
three magic directories back to the trunk. On one occasion I forgot
this last step, which resulted in an embarrassing hiccup: accidentally
checking in changes directly to release at a very inopportune time.
This system has worked very well for me.
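The steps above can be sketched as a command sequence. This is a dry-run
sketch, not the actual Boost repository layout: the REPO URL, the branch
paths, and the "run" wrapper (which only prints each command instead of
executing it) are all illustrative assumptions.

```shell
#!/bin/sh
# Dry-run sketch of the trunk-on-release workflow described above.
# "run" only echoes each command so the sequence can be inspected
# without touching any SVN server. REPO and paths are hypothetical.
run() { echo "$@"; }

REPO="http://svn.example.org/boost"

# a) check out the whole release branch once
run svn checkout "$REPO/branches/release" boost-release

# b) switch the library's directories to trunk
run svn switch "$REPO/trunk/boost/serialization" boost-release/boost/serialization
run svn switch "$REPO/trunk/libs/serialization" boost-release/libs/serialization

# c/d) run the library's tests with the standard bjam-based script
run sh tools/regression/src/library_test.sh

# g) once trunk looks clean: switch to release, merge, commit,
#    then switch the directories back to trunk again
run svn switch "$REPO/branches/release/boost/serialization" boost-release/boost/serialization
run svn merge "$REPO/trunk/boost/serialization" boost-release/boost/serialization
run svn commit -m "merge serialization changes from trunk" boost-release
run svn switch "$REPO/trunk/boost/serialization" boost-release/boost/serialization
```

The key property is that only the library under test comes from trunk;
everything else in the tree is the last-known-good release code.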
> It could be arranged that developers would also need to do their local
> testing in the same library+release configuration.
I believe that I am currently doing this by using the above procedure.
>> I suppose that's
>> not so bad because at worst, their component ends up broken on trunk,
>> but the rest of the world is unaffected because presumably they
>> haven't merged their broken change to release yet (and everybody
>> else is using the last-known-good version of that component from release).
>> Can someone who knows more about our test infrastructure say a bit
>> more about Robert's suggestion? As a half-measure, we could treat
>> Boost.Build and Boost.Test as special and have the test runners use
>> the versions on the release branch.
> It's not a half measure, it's actually the ideal testing situation and
> the current test scripts are somewhat arranged to handle this because
> it was the ultimate goal of testing when I started rewriting them.
> And at this time Boost.Build and some of the other infrastructure
> tools already use specific release only versions for testing.
> Specifically only the released Boost.Jam is used for testing, and
> Boost.Build and the release tools could be changed to use specific
> versions, as long as they are tagged in SVN. That was the good news.
> The bad news is that there would need to be additional work to make
> the test scripts do the same for libraries, which would not be a big
> amount of work.
Note that my procedure above can be used without ANY changes in the
test/build infrastructure. Any changes would only affect the "independent"
> The really bad news, is that in order to make such testing possible at
> the library level would require that the Boost sources are modularized
> into the library parts so that the scripts can manufacture a complete
> Boost from the modules.
Conceptually, this is what I see happening.
a) Tester has trunk and release directories
b) Tester updates local copies of both directories from the SVN system
c) for each library:
i) Tester updates local copies of the two or three relevant directories on the trunk
ii) switch the relevant directories to the trunk. SVN is very slow with
switching, so probably a trickier method like linking/moving directories would be
needed
iii) Tests are run on that library
iv) undo the changes from ii) above.
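The per-library loop above could be sketched roughly as follows. Again
this is a dry-run sketch: the library list, directory layout, and the
"run" wrapper (which prints commands rather than executing them) are
assumptions, and symlinks stand in for the faster switch mechanism
mentioned in step ii).

```shell
#!/bin/sh
# Dry-run sketch of the proposed per-library test loop. "run" only
# prints each command; library names and paths are hypothetical.
run() { echo "$@"; }

LIBS="serialization filesystem regex"

# a/b) tester keeps both working copies up to date
run svn update release
run svn update trunk

for lib in $LIBS; do
    # i/ii) point the library's directories at trunk; a cheap
    # symlink is assumed instead of a slow "svn switch"
    run ln -sfn "../trunk/boost/$lib" "release/boost/$lib"
    run ln -sfn "../trunk/libs/$lib" "release/libs/$lib"

    # iii) run that library's tests against an otherwise-release tree
    run sh release/tools/regression/src/library_test.sh "$lib"

    # iv) undo the switch so the next library sees a pure release tree
    run rm "release/boost/$lib" "release/libs/$lib"
done
```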
I realize this would alter the python script, but everything else would stay
the same, like the test matrix generation etc. Also the current tools like boost
and all the bjam tools and scripts would be unchanged.
Besides the improvements in testing - basically testing one thing at a time -
the tests would run a lot faster. Currently, if library A depends upon
library B and library B changes, then library A is tested again. Since
library A is based on the assumption that library B is working, this
results in either wasted time (and a lot of it) or spurious test failures
for library A - another time waster. The new system would only
re-run the testing of library A when changes to library B have been
merged to release.
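The retest rule in that last paragraph amounts to a simple revision
comparison. A minimal sketch, assuming each library records the release
revision of its dependencies at its last test run (the revision numbers
here are made up):

```shell
#!/bin/sh
# Sketch of "retest only on release merges": library A is re-tested
# only when its dependency B has a new revision on the release branch.
# Both revision numbers are illustrative placeholders.
last_tested_rev=1234       # B's release revision when A was last tested
current_release_rev=1234   # B's release revision now

if [ "$last_tested_rev" -ne "$current_release_rev" ]; then
    echo "retest library A"
else
    echo "skip library A: dependency B unchanged on release"
fi
```

Trunk-only churn in B never triggers a retest of A, which is where the
time savings come from.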