From: Robert Ramey (ramey_at_[hidden])
Date: 2007-09-08 16:49:01
Here is what I've got on my local machine - I haven't checked it in yet:
*.cpp & *.hpp files
*.cpp & *.hpp files + a profile.sh script which builds profiles
for me of the executables
Jamfile.v2 + normal test stuff
I've noticed that process_jam_log treats example and test specially. From
this I've leapt to the conclusion that I should be able to run tests in
directories other than test.
And indeed I can. This is in fact what library_status.sh does
(except that process_jam_log on the trunk doesn't have my changes in it).
So this permits me to do the following:
a) run tests on just the serialization library. (I could do this before,
but compiler_status has been rendered broken by the v2 changes, as it has
been "deprecated" - so I use library_status instead.)
b) display results for ALL combinations of variants, linkage, threading, etc.
(for an example, see www.rrsd.com)
c) This means that I can cut the amount of testing of the serialization
library in half, since I no longer have separate tests for shared and static
builds. I'm sure all our testers will appreciate this.
d) It permits me to run/test the examples separately. These examples
are meant to be illustrative and combine features from the
library, so they are fine for a tutorial, but they don't really give much
useful information if they fail. So I've removed the tests of the examples
from the test directory to avoid wasting everyone's time with them.
However, they SHOULD work on the released package, so I like
to test them on my own machine in a convenient way.
e) It permits me to run profiling tests with gcc under cygwin on
my windows machine. There has been interest in the past in
improving the performance of the serialization library, which
is a worthy endeavor. My personal experience (and I do have
some experience in this area - see www.rrsd.com) has
convinced me that execution-time profiling is the single most
effective tool for discovering where execution-time bottlenecks
are. And it's quite easy to set up. I'm sure a *.jam file could
be tweaked to do this - at least with gcc - but after spending time
investigating this I just made a shell script and moved on. I'm
undecided as to whether I should check this stuff in or not.
Note that the idea for a "performance" directory was based
on the fact that regex has a similarly named directory, and I had
hoped that by leveraging established patterns I might get
some free stuff.
However - no good deed goes unpunished. When I checked
in my changes to process_jam_log to support library_status, I
broke the trunk testing - ouch - and Rene backed out my
changes, as he had to do. OK, so I made a couple of
adjustments, and now, before checking in the changes, I wanted
to test them. Well, this is not so easy. There is no way
to test that process_jam_log generates data compatible
with regression.py other than by running regression.py. But
there is no way to run part of this without delving into
1200 lines of script. (Oh - thanks to whoever included the
23 lines of comments.) Oh well, why fight the tide - just
follow the idiot-proof instructions on how to become a
regression tester, run the tests in the normal way, and post
the results. This entails setting up a directory, copying
regression.py (along with my custom version of process_jam_log)
into that directory, making sure that my local python isn't too
old (it isn't), and letting 'er rip. This runs in a short time - great.
Ah - er - but delving into the bjam.log one level deep
reveals a confusing error message, which turns out
to mean that I should customize my version of user-config.jam.
Now this really mystifies me, as the "running regression tests"
documentation suggests that everything is self-contained. It even builds
the tools if necessary. So it SEEMS that one has to have
a Boost directory tree already loaded, but the tarball
contains another one, etc. etc. I just set it aside for
the moment, as I had other stuff I HAD to do, so things
are stuck there for now.
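For reference, the manual recipe above amounts to something like the
following sketch. The directory name is made up, the copy step is left
as a comment since the source path depends on the local checkout, and
the regression.py options shown are my recollection of the instructions
of the time, not something verified here:

```shell
#!/bin/sh
# Sketch of the regression-tester recipe described above.
# Only the directory setup and the python version check actually run;
# the fetch/run steps are comments because they depend on a local
# boost checkout, and the option names are assumptions.

mkdir -p regression-run && cd regression-run

# 1. copy regression.py (plus the locally patched process_jam_log)
#    into this directory from the boost checkout

# 2. make sure the local python isn't too old before letting 'er rip
python3 -c 'import sys; print("python ok:", sys.version_info >= (2, 3))'

# 3. then run it, typically something like:
#    python regression.py --runner=<your-id> --toolsets=gcc
echo "staging directory ready: $(pwd)"
```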
As to what I want:
a) I want to test my changes to process_jam_log so that
I can be sure that they don't break boost testing. Locally
I can test that they support library_status and compiler_status.
b) check in these changes knowing that they won't break
testing. Note that since the output of process_jam_log
isn't specified anywhere, there is no way I can really
check that it's correct other than by running the programs
which use that output - that is, regression.py - and
that one fails silently if the input isn't correct.
c) then my library_status will work, and I can
ask users who have problems with the serialization
library to run the tests on it. This will tell me if
there are problems in their particular combination
of debug/release, shared/static, single/multi-threading,
wchar_t as a typedef for short or as an intrinsic character type,
the setting for BOOST_DYN_LIB, and the setting for autolib.
I don't think it's realistic to test in advance for every
conceivable combination. So I want users to be able
to validate the library for their particular installation.
This SHOULD be easy with just minor tweaking
of the current tools - which is what library_status is.
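To give a sense of scale, just the first three axes above multiply out
to eight builds before the character-type and linking settings are even
considered. A trivial sketch (the property names are ordinary
Boost.Build ones; writing to a file stands in for an actual
bjam + library_status run):

```shell
#!/bin/sh
# Enumerate the build combinations a user's installation might need
# validated; each line stands in for a real bjam + library_status run.
rm -f combos.txt
for variant in debug release; do
  for link in static shared; do
    for threading in single multi; do
      echo "variant=$variant link=$link threading=$threading" >> combos.txt
    done
  done
done
echo "combinations to validate: $(grep -c variant= combos.txt)"   # prints 8
```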
Soooooo - THAT is what I'm looking for. Am I being unreasonable?
PS. I want some other stuff too but no point
in worrying about that now.
Michael Caisse wrote:
> Robert Ramey wrote:
>> Notice that in the first case, the directory name "performance" is
>> in the name after the (RUN)
>> while in the second case, the directory name "test" is not in the
>> same place.
>> This inconsistency ripples through process_jam_log and the xml files
>> that it places
>> in the bin.v2 directories unless everything is a really annoying
>> limitation. Which has
>> taken a long time to track down. Is there a convenient way to fix
> Robert -
> What are you looking for? I tried to take a look at your specific
> issues, but
> I could not find the serialization library structure you described in
> svn. As you can imagine, the testing.jam file has been created to
> make testing within
> boost easier and therefore makes assumptions about directory
> structure. It would be like any included makefile that helps define
> rules for testing.
> Perhaps what you are after is clear to other people on the list. Are
> you expecting that ../lib/serialization/performance/test will be used?
Boost-Build list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk