Subject: Re: [Boost-testing] A modest proposal
From: Robert Ramey (ramey_at_[hidden])
Date: 2015-01-07 13:03:18
Rene Rivera wrote:
> On Sat, Jan 3, 2015 at 10:09 PM, Robert Ramey <ramey@[hidden]> wrote:
>
>> But wouldn't each build/test be ~1/N the size of the current one?
>>
>
> As far as disk space is concerned, yes. But not as far as anything else.
> Consider all the steps involved to do that for each library:
>
> 1. Delete [root]/boost tree.
> 2. Switch target library from master to develop.
> 3. Run "b2 --limit-tests=<libname> --toolsets=a,b,c" in [root]/status, and
> collect the b2 output into an overall output log.
> 4. Switch target library back to master.
>
> (1) Is needed to correctly create new headers when we switch the library
> from master to develop.
(In what follows, library X is the library under test.)
Hmmm - but don't we need new headers only for library X? Can't all
the others stay the same? Perhaps we need to run b2 headers for
library X - but not for the rest.
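Something like the following per-library cycle is what I have in mind. This
is only a minimal sketch under my assumptions - the library name X and the
toolset are illustrative, and I'm assuming a re-run of "b2 headers" is cheap
since it only refreshes the forwarding headers:

    # hypothetical per-library cycle for library X
    cd $BOOST_ROOT
    git -C libs/X checkout develop    # switch only library X to develop
    ./b2 headers                      # refresh the forwarding headers
    ./b2 libs/X/test toolset=gcc      # build and run only library X's tests
    git -C libs/X checkout master     # restore master afterwards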
> (1) Also means that no existing binaries are reused since time stamps are
> changed. So we incur the cost of rebuilding any dependencies for each
> library. Although I'm not sure what happens with timestamps and b2 when
> symlinks/hardlinks are in use.
I don't see a problem here. Only library X has changed relative to master.
Any existing binaries built from the master branch can (and should!) be
reused.
> (2) Means that we have to keep around both master and develop branches at
> the testers. Currently we only keep one branch (and from the first
> obtained revision to current). It also means more git invocations, which
> increases time overhead.
Hmm - I've always thought that my local Git repo holds both the master and
develop branches locally. So when I switch - which I do all the time - it's
quite a fast operation that doesn't go to any server. So I must be missing
something here.
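For example, on a clone that fetched both branches (the default for git
clone), the switch is purely local:

    cd $BOOST_ROOT
    git branch --list master develop    # both branches already exist locally
    git checkout develop                # local operation - no server contact
    git checkout master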
> And at the end we run process_jam_log. Which has to change to deal with
> the fact that it's going to have N concatenated invocations of a b2
> output log to generate the results XML file. And getting that working
> correctly means a lot of testing and iteration, and a lot of waiting
> around. I.e. it's slow, tedious, and detailed work.
(3) Rather than do this, I run b2 in the local test directory, so it
produces a local b2.log. The time this takes depends only on the number
of tests and the time it takes to build the one library under test.
(Building library dependencies on master might also take time - but that
only needs to be done when master changes, which is infrequent.)
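Concretely, the by-hand run I mean looks roughly like this (the library and
toolset names are just examples):

    cd $BOOST_ROOT/libs/serialization/test
    b2 toolset=gcc >b2.log 2>&1    # local log covering only this library's tests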
> Of course there will be changes to regression.py. Which will also be slow,
> tedious, detailed work. I'm only saying it's going to be that way because
> that's how it's been in the past for the SVN->GIT code changes. And the
> rather simpler, boostorg/boost repo to boostorg/regression repo move.
LOL - I'm very, very sympathetic here. I realize that setting up and
maintaining the regression testing is a huge job. In no way would I ask
you to consider it if I thought it was going to be another death march.
But when I do this by hand on my own system, it works smooth as
silk. In fact, I'm thinking that doing N test runs on 1/N of the system
will work out even faster than the current way.
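As a rough back-of-the-envelope (the numbers are illustrative, not
measured): if a full run costs K minutes of fixed overhead plus t minutes
per library over N libraries, today's scheme costs K + N*t. Splitting it
into N per-library runs costs about N*(k + t), where k is the small
per-library overhead of a local branch switch and header refresh - and
those N runs can be spread across many testers in parallel.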
>> In other words, isn't testing each library individually with its own
>> instance of bjam more or less the same as running bjam from the top of
>> the tree? So would the times be comparable?
>
> No, it's not the same. But yes, the times would be comparable from an
> order-of-complexity point of view. I.e. it's still O(K+N), just with a
> larger K.
>
> And... this of course doesn't take into consideration the problems with
> inter-library master vs. develop dependencies that may exist and may have
> to be dealt with. But that is something I would only worry about *after*
> having master/develop mixed testing implemented.
Actually, this is potentially the most problematic issue - but it's very
hard to predict.
Regardless of whether or not you decide to spend some time experimenting
with this, I want to thank you for creating and maintaining the Boost
regression testing infrastructure over these many years. To me, it is one
of the key reasons for the success of Boost. I don't believe that the
importance of this effort, and your dedication to keeping it running and
reliable, can be overstated.
Robert Ramey