Boost Testing :
Subject: Re: [Boost-testing] A modest proposal
From: Rene Rivera (grafikrobot_at_[hidden])
Date: 2015-01-04 17:18:48
On Sat, Jan 3, 2015 at 10:09 PM, Robert Ramey <ramey_at_[hidden]> wrote:
> Rene Rivera-2 wrote
> > On Fri, Jan 2, 2015 at 11:36 PM, Robert Ramey <
> >> branch. OK that is what the develop branch is for. But it would be
> >> better if the test script were enhanced to use the following (pseudo
> >> code)
> >> procedure
> >>     for each library
> >>         set branch to master
> >>     for each library x
> >>         set branch to develop
> >>         run tests on library x
> >>         set branch back to master
> > I know, you know, that doing this is something I also want. But.. It's
> > actually hard to accomplish at the moment, although not impossible.
> > Switching branches for every library is against how the build and test
> > systems work. It would mean calling N different build invocations (N ==
> > libraries).
> But wouldn't each build/test be ~1/N the size of the current one?
As far as disk space is concerned, yes. But not as far as anything else.
Consider all the steps involved to do that for each library:
1. Delete [root]/boost tree.
2. Switch target library from master to develop.
3. Run "b2 --limit-tests=<libname> --toolsets=a,b,c" in [root]/status, and
collect the b2 output into an overall output log.
4. Switch target library back to master.
(1) is needed to correctly create new headers when we switch the library
from master to develop.
(1) also means that no existing binaries are reused, since timestamps
change. So we incur the cost of rebuilding any dependencies for each
library. Although I'm not sure what happens with timestamps and b2 when
symlinks/hardlinks are in use.
(2) means that we have to keep both the master and develop branches around
at the testers. Currently we only keep one branch (and from the first
obtained revision to current). It also means more git invocations, which
increases the overall run time.
And at the end we run process_jam_log, which has to change to deal with the
fact that it will see N concatenated b2 output logs when generating the
results XML file. And getting that working correctly means a lot of testing
and iteration, and a lot of waiting around. I.e. it's slow, tedious, and
detailed work.
Of course there will also be changes to regression.py, which will likewise
be slow, tedious, detailed work. I'm only saying it will be that way
because that's how it was in the past for the SVN->Git code changes, and
for the rather simpler move from the boostorg/boost repo to the
boostorg/regression repo.
> When I test on my own machine, I run boost build from inside my
> libs/serialization/test directory
> and it builds just what is needed for the serialization library tests. In
> particular, it builds the system and filesystem libraries in order to run the
> tests. Since the file system and system libraries are checked out
> on the master, that's where they get built. Which seems fine to me. If
> another library were tested later, the most recent builds would be reused.
> So it seems to me that the total amount of compiling, linking and testing
> going on would be the same as the current system.
> In other words, isn't testing each library individually with its own
> instance of bjam more or less the same as running bjam from the top of the
> tree? So would the times be comparable?
No, it's not the same. But yes, the times would be comparable from an
order-of-complexity point of view. I.e. it's still O(K+N), just with a
larger K.
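The "larger K" point can be illustrated with made-up numbers: one combined run pays its setup cost once, while N per-library runs repeat a smaller setup cost N times. All figures below are invented for illustration, not measurements.

```python
# Back-of-the-envelope comparison of the two schemes. Every number
# here is an invented assumption, not a measurement.
N = 100          # number of libraries
t = 5            # minutes of actual testing per library
K = 30           # one-time setup for a single combined run
k = 3            # setup repeated in each of the N per-library runs

combined = K + N * t        # current single-invocation scheme
per_library = N * (k + t)   # proposed per-library scheme

print(combined, per_library)  # -> 530 800
```

Both totals grow linearly in N, which is the sense in which the times stay comparable; the per-library scheme just carries a larger constant from the repeated setup.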
And.. This of course doesn't take into consideration the problems with
inter-library master vs. develop dependencies that may exist and may have
to be dealt with. But that is something I would only worry about *after*
having master/develop mixed testing implemented.
--
Rene Rivera
Grafik - Don't Assume Anything
Robot Dreams - http://robot-dreams.net
rrivera/acm.org (msn) - grafikrobot/aim,yahoo,skype,efnet,gmail