
On 12 Sep 2025 13:55, Tom Kent via Boost wrote:
> **I'm looking for feedback from the community:**
>
> Let it go, not worth the electricity spent on running these (maybe move the infrastructure to github runners?)
> -or-
> Hey this could be useful, let's spend a couple cycles getting it back working again.
I'm thankful for all the efforts on regression testing, especially during the pre-GitHub era, but personally I haven't looked at the test matrix for years. (Well, I did look recently, when the URLs in the readme were updated, but not other than that.) My main complaints are the same as they have always been: (a) lack of feedback notifications to the library authors, (b) problematic diagnostics (I remember times when build/test logs were unavailable or the link pointed to an unrelated log; not sure if this got fixed), and (c) problems and misconfiguration on the test runners often going unnoticed. So, in its current state, I don't find the test matrix very useful.

However, I do think that Boost could benefit from more diverse test runners than what GitHub Actions offers (more hardware architectures, more OSes, more compilers). I'm also concerned about depending solely on GitHub Actions for testing, since it is a single point of failure that has historically failed from time to time and would be difficult to replace. I would prefer that the current test matrix evolve into something more useful, an alternative to GHA, but I realize this is probably a lot of work. If not that, I think the resources could be better spent on improving GHA quality, e.g. by serving as new GHA runners.

BTW, on the topic of improving GHA quality, there is this collection of GHA actions that run the CI on various OSes in VMs: https://github.com/vmactions/ It worked well enough for me in Boost.Atomic, so maybe it will be useful to others. The downside is that the VM setup is performed on every job run, which takes some time and may occasionally fail. A native runner would be faster and more reliable.
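For anyone who wants to try these actions, here is a minimal sketch of a job using vmactions/freebsd-vm. It only illustrates the general shape of such a workflow; the packages and the build/test commands are placeholders, not what Boost.Atomic actually uses.

```yaml
# Minimal sketch: run tests inside a FreeBSD VM on a GitHub-hosted runner.
# The packages and commands below are illustrative placeholders.
name: CI

on: [push, pull_request]

jobs:
  freebsd:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Build and test on FreeBSD
        uses: vmactions/freebsd-vm@v1
        with:
          usesh: true
          # "prepare" runs once while the VM is being provisioned
          prepare: |
            pkg install -y cmake git
          # "run" executes inside the VM; the checked-out workspace is synced in
          run: |
            cmake -S . -B build
            cmake --build build
            ctest --test-dir build --output-on-failure
```

The per-job VM provisioning mentioned above happens in the "prepare" step, which is why these jobs are slower and somewhat less reliable than a native runner.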