From: Andrey Semashev (andrey.semashev_at_[hidden])
Date: 2023-05-08 22:22:12
On 5/8/23 20:40, John Maddock via Boost wrote:
>
> Vinnie,
>
> First of all, let me thank you for taking the time to write all this up,
> and put into words, probably what we've all been thinking!
>
>> Continuous Integration
>
> CI is wonderful, and horrible!
>
> The main issues I see are that:
>
> a) It needs continual maintenance, as the host runner upgrades/changes
> the systems they support.
>
> b) Stuff breaks without notice because we never know when the hosting CI
> service will change something.
>
> c) Knowledge of fairly obscure scripting is required to make the most of
> it.
>
> Strictly IMO, the issues that Robert Ramey was complaining about in CI
> recently were due to the above rather than anything wrong with the
> concept, or indeed Boost.
>
> So... in my ideal world Boost CI would look something like this:
>
> * The library author does nothing more than list (maybe in json or
> something similarly popular) the systems they want to test on - and
> these would be abstract Boost-wide names like "core", "bleeding edge",
> "legacy-msvc" etc. Exactly what these map to would be defined
> elsewhere. But the point is that now, library authors would almost
> never need to update their CI script *and* all Boost libraries get
> tested on the same "core" which would also form our release criteria.
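For concreteness, the sort of declarative list John describes might
look roughly like this; the keys and toolset names below are purely
illustrative, not an existing Boost convention:

  {
    "library": "my_lib",
    "test-on": ["core", "bleeding-edge", "legacy-msvc"]
  }

The idea being that the manifest only names abstract targets, and the
Boost-wide infrastructure decides what each target currently maps to.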
While I can see the appeal of treating the CI as a black box, I
personally want the opposite. That is, I want to know *exactly* what
I'm testing and how I'm testing it. This goes beyond listing specific
compilers or even their versions; it is not uncommon for me to
configure the environment as part of the CI run, including installing
additional packages or setting environment variables and compiler
switches. This is one of the reasons why I'm not using Boost.CI in my
libraries.
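As a purely hypothetical illustration (the keys and values below are
made up for this example and are not an existing Boost.CI format), the
per-library detail I have in mind looks more like this than like an
abstract toolset name:

  {
    "packages": ["libicu-dev"],
    "env": { "MY_LIB_TEST_BACKEND": "ipc" },
    "cxxflags": ["-fno-rtti"]
  }

Any Boost-wide abstraction would either have to accommodate this level
of detail, or I would end up maintaining my own scripts anyway.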
> * Updates and changes to CI would be announced here first, if in doubt
> new compilers *might* go in "bleeding-edge" first and "core" a release
> later.
I think making such a list of compilers commonly accepted is
unrealistic. A given compiler or set of compiler options might be
important for one library and not important at all for another one
(meaning it would be a waste of CI resources to run that configuration
for the latter). The only practical use case for such a list would be
the set of compilers we declare as "officially tested" in our
Boost-wide release notes.
> Machine time could well be donated by volunteers and perhaps replace the
> current test/status matrix, which is fine, but requires you to go off
> seeking for results, which may or may not have cycled yet. Plus that
> matrix relies on a "build the whole of Boost" approach which
> increasingly simply does not scale.
I'm really grateful to the volunteers who run the tests and maintain
the official test matrix, but honestly, I'm not paying attention to it
anymore. I have three main issues with it:
1. Slow turnaround. From my memory, it could take weeks or more for the
runners to get to testing a commit I made. With turnaround times of
that order, it is impossible to keep developing while keeping the code
in a working state.
2. Lack of notifications.
3. Problematic debugging. It was not uncommon for a test run to fail
because of a misconfiguration on the runner's side, and it was also
not uncommon for build logs to be unavailable.
So, while I am, again, most grateful to the people who made public
testing possible at all before we had the current CI services, today's
CI services fix the above problems (more or less), and I'm spoiled by
them. It is true that the public CI resources are limited and
insufficient at times, so, IMHO, the way forward would be to fix that
problem without losing the convenience of the CI services we've become
used to.