From: Vinnie Falco (vinnie.falco_at_[hidden])
Date: 2023-05-09 21:21:16
On Tue, May 9, 2023 at 2:04 PM Tom Kent via Boost <boost_at_[hidden]> wrote:
> I'm happy to go with whatever the community is looking to do on this front.
> Some options as I see it:
>
> * Keep the current test matrix as a complement to the various CI systems
> starting to roll out.
> * Merge my runner system in with the new C++ Alliance CI system
> * Shut down the test matrix and move fully to C++ Alliance cloud CI
We haven't put any thought or effort into how our CI might interface
with the test matrix, so I think for now the best strategy is no
strategy: Keep things as they are.
But... since we are talking about it, my inner tech geek is tickled by
the idea of a decentralized network of runners that uses the existing
commodity hardware of authors, maintainers, and contributors to run a
specially prepared Docker container which tests some configurations of
Boost libraries and reports the results to a central server.
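Just to make the shape of that concrete, each runner could boil down to
something as small as the rough Python sketch below. The container image,
results endpoint, and b2 invocation are made-up placeholders, not anything
that exists today:

    # Rough sketch of a volunteer runner (all names/URLs are placeholders).
    import json, subprocess, urllib.request

    RESULTS_URL = "https://example.org/api/results"   # central collation server (placeholder)
    IMAGE = "boostorg/test-runner:latest"             # specially prepared container (placeholder)

    def run_config(library, toolset, cxxstd):
        # Run one library's tests inside the container and capture the log.
        cmd = ["docker", "run", "--rm", IMAGE,
               "b2", f"libs/{library}/test", f"toolset={toolset}", f"cxxstd={cxxstd}"]
        proc = subprocess.run(cmd, capture_output=True, text=True)
        return {"library": library, "toolset": toolset, "cxxstd": cxxstd,
                "ok": proc.returncode == 0, "log": proc.stdout[-20000:]}

    def report(result):
        # Post the result to the central server that collates and presents it.
        req = urllib.request.Request(RESULTS_URL,
                                     data=json.dumps(result).encode(),
                                     headers={"Content-Type": "application/json"})
        urllib.request.urlopen(req)

    if __name__ == "__main__":
        report(run_config("json", "gcc", "17"))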
While the execution of the current test matrix is somewhat flawed, I
think the idea behind it is actually rather brilliant. In exchange for
much longer turnaround times we get a depth of coverage that cannot be
matched on cloud systems. And it is basically "free" since the
computing model is inverted: the expensive compilation and testing is
performed by numerous cheap developer machines with spare cycles,
while the cheap collation and presentation of the results is handled
by the cloud.
If we are going to improve the test matrix, I think the way to do it is
to treat it like any other software project: assemble a team and start
writing maintainable, documented code. We probably want to use an
off-the-shelf solution for deploying runners on people's PCs and
laptops, and then write our own set of scripts and dockerfiles (?)
which spread the testing of all the different configurations of the
libraries across those runners.
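For spreading the work around, even something as simple as hashing each
(library, toolset, standard) tuple onto a runner would cover the whole
matrix exactly once. The library list and axes below are purely
illustrative:

    # Sketch: deterministically split the configuration matrix across runners.
    import hashlib
    from itertools import product

    LIBRARIES = ["json", "beast", "asio"]           # would come from the superproject
    TOOLSETS  = ["gcc-12", "clang-15", "msvc-14.3"]
    CXXSTDS   = ["14", "17", "20"]

    def my_share(runner_id, runner_count):
        # Yield the slice of configurations this runner is responsible for,
        # so that all runners together cover the full matrix once.
        for cfg in product(LIBRARIES, TOOLSETS, CXXSTDS):
            h = int(hashlib.sha256("/".join(cfg).encode()).hexdigest(), 16)
            if h % runner_count == runner_id:
                yield cfg

    for cfg in my_share(runner_id=0, runner_count=4):
        print(cfg)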
The most important part of this software project is to address the
headaches of the current system: when there is a failure, it is
difficult (and often impossible) to diagnose from the logs; there are
many false positives and noisy results, such as warnings that have no
possible resolution; and so on. In other words, we need to be able to
open issues against the test matrix software, triage them, land the
fixes, have them tested, and then deploy a new version, just like
every other software project.
Thanks