Subject: Re: [boost] Cygwin tests (Was: [1.61] Two weeks remaining for new libraries and breaking changes)
From: Robert Ramey (ramey_at_[hidden])
Date: 2016-02-18 12:40:23
On 2/17/16 11:47 PM, Vladimir Prus wrote:
>> But I do have to make a choice. My options are
>>
>> a) spend time with b2 development along the lines that Stephen has
>> suggested.
>
> Steven has provided a patch, you only need to apply it and try again. Do
> you
> think you can allocate 5 minutes for this some time soon?
I don't think it's 5 min - it never is. But I'll try to find some
(more) time to do this. If our testing system were more helpful,
Steven could make the change on develop, check it, and watch
the b2 test results. THAT might be 5 minutes. I realize that
cygwin/mingw are not being tested by the test volunteers, so this
criticism is a little unfair. I do have ideas on how to address
this - but they're too far off topic for this thread - even for me.
>
>> b) presume that testing on my os shows that the serialization library
>> is indeed correct and ignore the testing failures on cygwin and maybe?
>> mingw.
>
>> c) ignore the failures on the develop test matrix due to changes
>> in develop of other libraries and just merge from develop into release.
>
> As a release manager, I would advise against this approach - because if
> things break in master,
We're in agreement here. It's my policy that every merge to master
should result in a strict improvement in the library - no net increase
in the number of known bugs and no net decrease in the number of
features/compilers/environments supported. But of course I need the
testing system in order to implement this policy.
>> I've made several suggestions about how we can make
>> this simpler - which amount to making the test system development
>> look more like the rest of boost.
>
> Could we adopt a more iterative approach? As you point out, we're
> volunteers,
> so a large and vague task like 'write a test suite for regression report
> generator' is both
> hard to schedule, and is high-risk, given that such a test suite might
> not fix your
> actual problems.
I'm very sympathetic and don't expect immediate action - not that that
keeps me from complaining. In general, I'd like to see boost tools
development, testing, and deployment look much more similar to boost
libraries. I think this would result in better tools and less work.
Said another way, I think that our development, review, testing and
deployment practices have worked pretty well, and I want to base the
development of tools on the same model. In practice this would look like:
a) tools directory structure would look more like libraries (see the
sketch after this list)
b) review of tools api, etc. would be more public and formal. In
fairness, the openness of the tool process has improved over the past
year. I think at C++Now I complained about the "drive-by tool
development" where I felt I was getting ambushed by some fait accompli.
Maybe that made a difference - but probably not.
c) since the tools look like the libraries and many of them already have
some tests, it would be easy to integrate them into the same testing
regimen we already have.
d) none of the above would require (as far as I know) any re-coding -
just some directory/repository re-organization.
e) I think this would make things far easier for tool builders and
maintainers. They wouldn't have a special setup. Any user with a
problem with a tool could open up an issue etc. ...
f) it would be simpler for users and would encourage them to submit more
fixes, etc.
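To make (a) concrete - purely an illustration, with quickbook picked
only as an example - a tool would carry the same sub-structure as a
modular library, so all the existing infrastructure applies unchanged:

  tools/quickbook/
    build/    # b2 scripts, just as in any library
    doc/
    include/
    src/
    test/     # picked up by the same testing regimen as library tests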
I'm willing to consider the regression.py script a special beast which
doesn't fit into this scheme. But leveraging our design patterns and
infrastructure to better support tool development would help address
other problems - like the one which provoked this thread.
> It would be more convenient to fix immediate issues,
> adding tests
> as we go - after all, that's how Boost.Build got to hundreds of
> testcases, and I see
> no reason why we should do differently for regression report generator.
I agree. I didn't mean to suggest otherwise. I'd just like to see what
we already have used more effectively.
> If I understand correctly, the current issues for you are:
>
> - Shared library testing on OS El Capitan. I will take a look.
> - Testing on cygwin. Patch was provided, it seems that your testing of
> said patch is still
> the best approach.
OK
> - Some unspecified issues with function visibility. If you need help
> with this, could you
> post a separate email.
This is sort of interesting. Correctly setting up visibility across
multiple compilers, multiple shared libraries, etc. turns out to be a
lot trickier to get right than first meets the eye. In the case of the
serialization library it's especially sticky, as one library,
wserialization, is used as a callee while at the same time acting as a
caller into the serialization library. It seems like it should be
simple - but it hasn't been for me. The msvc compiler generates a lot
of warnings (from spirit) when building the library. The output is
truncated so the errors can't be seen. The turnaround using the test
matrix was just too slow - a couple of days waiting for test results to
cycle. So I resolved to fire up my old XP laptop with MSVC so I could
test locally on this platform. This would cut my turnaround time to 1
minute. At that point I ran into the b2/cygwin problem - which btw took
a lot of time to track down and verify with Steven.
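To give a flavor of what has to be right here - a minimal sketch, with
made-up macro names, though BOOST_SYMBOL_EXPORT/IMPORT are the real
Boost.Config macros - each library needs its own decl macro. When
wserialization is being built, its own symbols must be exported while
the serialization symbols it calls into are imported, so one shared
macro can't serve both libraries:

  #include <boost/config.hpp>

  // hypothetical macros, one set per library
  #ifdef MY_SERIALIZATION_SOURCE
  # define MY_SERIALIZATION_DECL BOOST_SYMBOL_EXPORT
  #else
  # define MY_SERIALIZATION_DECL BOOST_SYMBOL_IMPORT
  #endif

  #ifdef MY_WSERIALIZATION_SOURCE
  # define MY_WSERIALIZATION_DECL BOOST_SYMBOL_EXPORT
  #else
  # define MY_WSERIALIZATION_DECL BOOST_SYMBOL_IMPORT
  #endif

  // compiled with MY_WSERIALIZATION_SOURCE defined: this class is
  // exported, while anything marked MY_SERIALIZATION_DECL in the
  // headers it includes is imported
  class MY_WSERIALIZATION_DECL wtext_oarchive { /* ... */ };

And each compiler expands these differently (dllexport/dllimport on
msvc, visibility attributes on gcc), which is where the
multiple-compiler trickiness comes in.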
> - Issues where Spirit either affects Serialization, or produces too many
> warnings causing
> everything else in the log to be truncated. If this is still an issue,
> could you post a
> separate email detailing the problem?
Well, spirit generates a lot of warnings. But this is only a problem
when the library fails to build - as it was when I was struggling with
visibility issues. I think I got the visibility issue addressed, but
then things failed to build due to some issue with spirit in the develop
branch. Now it looks like that is fixed and the libraries build, but a
large number of tests are failing because of a new problem in the boost
optional develop branch.
Testing against the develop branch is like trying to build a castle on
quicksand.
Maybe this sheds some light on the motivation for my suggestions.
And BTW - the actual changes in the serialization library I'm trying to
get in are not at all trivial - among others, getting utf8 codecvt to
work properly across a bunch of different libraries - some of them buggy
in their own right. This is not made any easier by the way we
implemented the utf8_codecvt facet in boost/detail, at the insistence of
David Abrahams, because it had never been reviewed. This is another
instance of violating our standard patterns and practices because it
seems expedient, only to suffer the consequences longer term.
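For anyone who hasn't dealt with codecvt facets, the general idea is
sketched below, using the standard C++11 std::codecvt_utf8 purely for
illustration rather than the boost/detail facet. A facet imbued into a
wide stream converts between the in-memory wide representation and
UTF-8 bytes on disk - and every library touching the stream has to
agree on that conversion:

  #include <codecvt>
  #include <fstream>
  #include <locale>

  int main()
  {
      std::wofstream os("archive.txt");
      // the locale refcounts and owns the facet; wide characters
      // written to the stream come out as UTF-8 bytes in the file
      os.imbue(std::locale(os.getloc(), new std::codecvt_utf8<wchar_t>));
      os << L"wide text, stored as utf-8\n";
  }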
Robert Ramey