Boost :
From: Robert Ramey (ramey_at_[hidden])
Date: 2023-11-06 00:09:13
Hmmm - a rich and timely topic.
The original author of the branching model that we use in Boost wrote an
update to the description of his/our model here:
https://nvie.com/posts/a-successful-git-branching-model/
We have serious problems with testing. I'll include my comments below.
On 11/5/23 1:10 PM, Andrey Semashev via Boost wrote:
> Hi,
>
> The recent case of a commit in develop that was left unmerged to master
> made me wonder if people still find the develop branch useful.
>
> Historically (since the SVN days), we used a separate branch for active
> development, which we then merged to the stable branch that was used for
> creating Boost releases. This allowed us to test libraries together before
> changes propagated to the stable branch. This was useful back when the
> official test matrix was the primary gauge of the Boost "health", so
> when something broke, it was rather visible, even if the breakage
> manifested in downstream libraries.
>
> With the adoption of git and CI, the relevance of the test matrix has
> diminished,
Personally, I would dispute that. I think that it's the only part of
our testing infrastructure that still works.
> and, presumably, people have shifted towards monitoring
> their libraries' CI.
The CI for the Boost serialization library does not work and never has.
This is easy to verify just by looking at any of the CI output for this
library. This demonstrable fact has been pointed out numerous times, yet
the situation is never addressed.
I doubt that the serialization library is the only one with this issue.
The saddest part of all this is that even if it did "work", it would still
be useless. It's not uncommon for a large and/or complex library to
fail one or more tests on one or more compilers due to issues in the
compiler itself. These can't be "fixed" in the library. The test
matrix shows all the tests across all the environments, so one can easily
see whether a failure is general or isolated to a particular environment.
The current CI just registers pass/fail for the whole library across all
the environments. Sometimes someone will suggest skipping a particular
test for a particular library so that the CI comes up clean. This is
basically hiding the error. Users considering using a library in their
own environment are basically misled into believing that the library works
everywhere - which is demonstrably untrue. It's a fraud.
The output of the CI is also very user-unfriendly.
The current CI is very slow and consumes a ridiculous amount of resources.
> Which means, people are less likely to notice that
> something broke downstream, unless the downstream developer notices and
> reports the problem. Although I have been that downstream developer
> myself quite a few times, I have to admit that this role of the develop
> branch of "an integration testing field" is not well filled in the
> current workflow. Noticing the breakage is mostly luck (i.e. a matter of
> me making a commit to my library and noticing the CI failure caused by
> an upstream dependency) rather than a rule.
A useful observation. The problem is that the current CI tests the
development branch of one's library against the develop branch of all the
other libraries. So my library looks like it "works", yet it could still
fail when run against the master branches of the other libraries. That is,
testing shows a pass even though the library may fail when shipped with
master. Note that, as we speak, the test matrix for the master branch isn't
working, so it seems we never test the software which is actually being
shipped.
> Additionally, I have been asked on a few occasions to avoid development
> directly in the develop branch and use separate feature branches
> instead. That is, do your commits in a feature branch, let the CI pass,
> then merge to develop, let the CI pass again, then merge to master.
> Apparently, some people are using this workflow by default to avoid
> breakage in develop, which means develop is no longer the branch where
> the actual development happens.
True and very useful. See the web link above.
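For reference, a rough sketch of that feature-branch workflow in plain git
terms - the branch name "fix-1234" is just a placeholder, and the --no-ff
merges follow the model linked above:

    git checkout -b fix-1234 develop   # do the work on a feature branch
    git push -u origin fix-1234        # let the CI run on the feature branch
    git checkout develop
    git merge --no-ff fix-1234
    git push origin develop            # let the CI run on develop
    git checkout master
    git merge --no-ff develop
    git push origin master             # finally propagate to master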
>
> (This workflow actually makes develop meaningless because your CI over
> the feature branch already tests your changes against develop of the
> rest of Boost. Which means after merging to develop you're running the
> same CI again, and might as well just merge to master straight away.)
I see the merit in this observation. Personally, on my own machine, I
test my development or feature branch against the master branch of all
the other libraries. It's the only way to know that the software will
still work when the changes are merged into master.
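For the record, here is a rough sketch of how one might set that up in the
Boost superproject - the library and branch names are just placeholders,
and the feature branch is assumed to already exist in the submodule's
repository:

    git clone --recursive https://github.com/boostorg/boost.git
    cd boost
    git checkout master && git submodule update --init --recursive
    cd libs/serialization                # the library under test
    git fetch origin && git checkout my-feature-branch
    cd ../..
    ./bootstrap.sh                       # or bootstrap.bat on Windows
    ./b2 headers                         # set up the forwarding headers
    cd libs/serialization/test
    ../../../b2                          # run this library's tests against master of everything else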
> Then there's the recurring problem that led to this post - people forget
> to merge to master from time to time. Or simply don't have time to
> merge. Or the library is no longer actively maintained. Or the commit is
> left out of master for any other reason, which means users don't receive
> their bug fixes in a release.
Right
>
> Personally, I do try to remember the PRs I make during the release cycle
> and look through them just before the release to make sure they are
> merged to master. Some of you probably received gentle merge reminders
> from me. I'm not sure if anyone else is doing this, but I can say this
> is an active mental effort on my part, which I'd rather not have to
> make. And even then I can't be sure I remember all PRs or have the time
> to cycle though them before the release.
>
> Which brings me to my question. Do people still find the develop branch
> useful? Maybe we'd be better off just dropping it and performing all
> development in feature branches before merging straight to master?
I think this is a good idea. Among other things, it would effectively mean
that everyone would be using my method of testing the "next release
version" rather than the current develop version.
> Also, if you do find develop useful, what do you think about creating a
> GitHub Action to automatically merge develop to master once the CI on
> develop succeeds? Maybe there are other ideas how to avoid commits
> unmerged to master?
Yipes - more surprising behavior. Flog your idea above instead.
When I started writing this response, I had a negative predisposition.
But as I started to articulate my reaction, I've come around to your
point of view.
A modest proposal
=================
Immediately
a) start using your suggested workflow - update documents, procedures, etc.
b) drop the whole Boost CI - it doesn't work, consumes a ridiculous
amount of resources, and would still be useless if it did.
c) continue to run the test matrix as is - but of course the develop
branch would not be necessary.
At our leisure, we could redo the CI to be useful and efficient.
Robert Ramey
>
> Thanks.
You're welcome.
>
> _______________________________________________
> Unsubscribe & other changes: http://lists.boost.org/mailman/listinfo.cgi/boost
>