Subject: Re: [boost] El Capitan issues (Was: [1.61] Two weeks remaining for new libraries and breaking changes)
From: Edward Diener (eldiener_at_[hidden])
Date: 2016-02-16 14:08:49
On 2/16/2016 1:44 PM, Robert Ramey wrote:
> On 2/16/16 9:17 AM, Rene Rivera wrote:
>>> I believe that some small and easy changes would make development
>>> easier for everyone:
>>> a) the regression system should implement the develop/master branch
>>> scheme which boost libraries use. This would permit any changes to the
>>> code and/or anything it depends upon to be tested separately before
>>> being unleashed on unsuspecting library developers.
>> 1. Has there been an instance where the regression system changes
>> failed in
>> the current single branch?
> Hmmm - isn't that how we got to this thread? I cloned modular boost to
> my XP machine where I have cygwin installed. I invoked bootstrap.sh
> from the cygwin shell and it builds the b2 executable and maybe some other
> stuff. When I try to invoke b2 to run local tests on the library it
> doesn't work. I think Stephen points to a change in the b2 source - but
> maybe it's some jam file. My point is that something has been released
> that doesn't work as one would expect. For boost libraries we have a
> mechanism to detect and diminish the incidence of this problem - the
> distinction between the master and develop branch. A problem of this
> type should be detectable before library developers start using the code.
>> 2. Where would we get the resources to test with more than one branch of
>> the regression system?
> the same place we get the resources to build the current test matrices.
> Tools should be tested/deployed with the same system used to test and
> deploy libraries. There should not be a separate system. If there is
> some reason that this is not easy to do, then it's time to take a step
> back and consider why that might be.
>>> b) the testing/build components should be tested the same way that boost
>>> libraries are tested. Were this being done, any anomalies in b2 or its
>>> components would be detected the moment they occurred, so that library
>>> developers wouldn't have to spend (a lot) of time tracking them down.
>> They are currently tested more than most libraries:
>> Is there more testing that should be done?
> I confess that I've never seen this before. I don't see a way to
> discover this through www.boost.org.
> Now that you've pointed it out to me, I'm seeing failures on the mingw
> platforms. These might be similar to cygwin, which isn't tested.
> The reference to travis intrigues me for a number of reasons, and for the
> first time ever I investigated what travis is and how it works. It's
> unclear to me what using it would entail and why it might be helpful.
> Doesn't seem that it tests mingw or cygwin either though.
>>> c) the regression system should be a boost submodule so it gets
>>> distributed whenever one clones the main project. This would be
>>> useful to those who want to understand what's going on, and would make
>>> the components of the regression system available to others who might
>>> find them useful. The current system has them in a separate repo which
>>> doesn't have a master branch. I found this to be rather confusing while
>>> spending time spelunking into things that are really outside of my
>>> responsibility.
>> First see (C). Second...
>> 1. Making it a submodule (as it used to be) adds perceived
>> requirements on it that I don't want to keep (as it adds to the time
>> I need to spend dealing with it).
>> 2. It's only a slight convenience for a small percentage of Boost
>> developers, for some small percentage of the time.
> for me it's not a small percentage.
>> 3. It adds unneeded clutter to the Boost releases, as it would end up
>> in what end users get.
>>> d) The regression test matrix is generated by testing the develop
>>> branch of each library against the develop branch of other libraries.
>>> Currently, the serialization library fails to build with the spirit
>>> library on the develop branch.
>> Did I miss someone mention this on the dev list with a [spirit] tag?
> LOL - I only discovered it in the last day or so. Things like this
> happen on a regular basis. When it looks like it's a problem, I check
> the offending library and verify that there is a problem in the tests of
> that library. Then I usually give it a couple of days, because I assume
> the developer looks at the test matrix after he checks something in. If
> it persists then I complain. I try to avoid inadvertently transforming
> the boost lists into the Robert Ramey rant list.
>>> So all the serialization library tests fail on the test matrix. I
>>> presume that all other tests of libraries on the develop branch which
>>> depend on the serialization library fail, because the serialization
>>> library won't build on that branch. As boost gets bigger, this problem
>>> gets worse. How can this be acceptable?
>> It's not acceptable. But given that (A) and (B) also apply to the Spirit
>> maintainers it's hard to avoid :-(
>>> I do appreciate all the suggestions I receive to work around specific
>>> problems. But it would save a huge amount of time if the above changes
>>> were made so these problems wouldn't come up in the first place.
>>> Suggestions a), b) and c) would be easy to implement. d) is somewhat
>>> challenging - but well worth it.
>> I don't see a suggestion in (d).
> Sorry, the suggestion is that on the develop test matrix, each library
> be tested on the develop branch against the other libraries on the
> master branch. This would permit a developer to make an innocent
> mistake without bringing down the whole system.
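The mixed-branch checkout Robert suggests can be sketched with plain git
commands. The following is a minimal, self-contained demonstration using
tiny throwaway repos; the repo and library names (lib_a, super) are invented
stand-ins, not Boost's actual layout or tooling:

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# A stand-in library repo with 'master' and 'develop' branches.
git init -q lib_a
(cd lib_a &&
 git config user.email demo@example.com && git config user.name demo &&
 git checkout -q -b master &&
 git commit -q --allow-empty -m "master state" &&
 git checkout -q -b develop &&
 git commit -q --allow-empty -m "develop state" &&
 git checkout -q master)

# A superproject tracking the library as a submodule (like modular Boost).
git init -q super
(cd super &&
 git config user.email demo@example.com && git config user.name demo &&
 git checkout -q -b master &&
 git -c protocol.file.allow=always submodule add -q "$tmp/lib_a" libs/lib_a &&
 git commit -q -m "add lib_a submodule")

cd super
git submodule --quiet foreach 'git checkout -q master'  # stable baseline everywhere
git -C libs/lib_a checkout -q develop                   # only the library under test
git -C libs/lib_a rev-parse --abbrev-ref HEAD           # prints: develop
```

Under this scheme a test runner would then build and test only libs/lib_a,
with every other submodule pinned to its last-known-good master state.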
This creates a problem when code in one library in the 'develop' branch
depends on code in another library in the 'develop' branch. I think this
situation is far more likely than code in one library in the 'develop'
branch having to wait until code in another library it may depend on is
promoted to the 'master' branch of that library. Therefore, while I
understand the greater stability of testing the 'develop' branch of a
library against the 'master' branch of all other libraries, I am opposed
to this sort of testing as a practical regression testing solution.
Ideally we should have a testing system where one could specify for any
given library its library dependencies, and one could also specify which
branch of each library dependency we want to test against. But we are a
long way from such a system at present.
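The "ideal" system described above amounts to a per-library table of
dependency branches plus a tiny resolver. Here is a hedged sketch of that
idea; the library names, the test_config mapping, and checkout_plan are all
hypothetical, invented for illustration, and not part of any existing Boost
tool:

```python
# For each library: which branch of each dependency to test against.
# (Hypothetical data - real dependencies would come from the library's
# own declaration, not a hand-written table.)
test_config = {
    "serialization": {"spirit": "master", "config": "develop"},
    "spirit": {"config": "master"},
}

def checkout_plan(library):
    """Return (repo, branch) pairs a test runner would check out before
    building `library` on its own develop branch."""
    plan = [(library, "develop")]
    for dep, branch in sorted(test_config.get(library, {}).items()):
        plan.append((dep, branch))
    return plan

print(checkout_plan("serialization"))
# -> [('serialization', 'develop'), ('config', 'develop'), ('spirit', 'master')]
```

The point of the sketch is only that the branch choice becomes data owned by
each library, rather than a single global policy baked into the test system.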
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk