On Mon, Feb 2, 2026 at 6:40 AM Andrey Semashev via Boost <boost@lists.boost.org> wrote:
> I believe, Christian Mazakas asked and I don't think an answer was provided.

I did not respond because I felt the questions were intended to provoke
rather than inform; this individual already knows the answers, having
been present during the development of the libraries (and having
contributed to some of the downstream libraries). That said, agentic
workflows are now a core component of my development process, and the
quality of the Capy offering reflects that workflow.

The same individual inspired two of my blog posts from last week, which
in hindsight may explain the motivation behind the questions:

The Span Reflex: When Concrete Thinking Blocks Compositional Design
https://www.vinniefalco.com/p/the-span-reflex-when-concrete-thinking

The Implementation Confidence Gap
https://www.vinniefalco.com/p/the-implementation-confidence-gap

> This is my experience from interaction with various AI tools. I was
> often bitten by getting misinformation or incorrect output, so I simply
> cannot trust AI
Yes, I had those experiences as well, and they discouraged me from the
tools for a while. Yet I decided to try again and fully immerse myself,
and I find that adapting a development methodology around the tools can
bring good results.

The Capy library offers not only the concepts and types related to its
domain, it also provides, as part of its public interface, the set of
tools needed to rigorously test those components, since users will
inevitably create models of the provided concepts. Those tools are
located here:

https://github.com/cppalliance/capy/tree/develop/include/boost/capy/test

They contain mock objects, an algorithm to exercise the buffer sequence
concepts, and a new thing called a "fuse." This is a component which
works together with the mock objects to inject both failing error codes
and exceptions into the code under test (a rough sketch of the general
idea appears after my sign-off). The testing components also have their
own tests. Overkill? I think we want to err on the side of thoroughness,
especially when parts of the library are the result of generative
synthesis. A strong foundation of mock objects, algorithmic exercise,
error injection, coverage reports, and human-in-the-loop auditing is our
approach to ensuring quality.

And like my other libraries, we engage outside consultants to review
everything, since there are security implications for software that
touches network inputs. Beast, JSON, and URL all have independent
reports; for example:

https://cppalliance.org/pdf/C%20Plus%20Plus%20Alliance%20-%20Boost%20JSON%20...

While I authored our current libraries in Boost, they have their own
long-time maintainers, so I use the term "we" here to reflect their
contributions. We have always maintained a consistently high level of
quality for our proposed and maintained libraries, and Capy and the many
libraries which will soon follow will be no exception.

Thanks
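
P.S. For anyone curious what "fuse"-driven fault injection looks like in
general, here is a minimal, self-contained C++ sketch. It is not Capy's
actual API; every name below (fuse, mock_sink, write_greeting) is
hypothetical and exists only to illustrate the technique of failing the
Nth operation and re-running the code under test until a run completes
cleanly. A real fuse can also throw exceptions; this sketch injects only
error codes for brevity.

#include <cstddef>
#include <iostream>
#include <string>
#include <system_error>

// A "fuse" permits a fixed number of successful operations; once the
// count is exhausted it "blows" and every later fallible call fails.
class fuse
{
    std::size_t remaining_;

public:
    explicit fuse(std::size_t n) : remaining_(n) {}

    // Returns true when the mock should fail the current operation.
    bool blown()
    {
        if(remaining_ == 0)
            return true;
        --remaining_;
        return false;
    }
};

// A mock sink which consults the fuse before every write.
class mock_sink
{
    fuse& f_;
    std::string data_;

public:
    explicit mock_sink(fuse& f) : f_(f) {}

    std::error_code write(std::string const& s)
    {
        if(f_.blown())
            return std::make_error_code(std::errc::io_error);
        data_ += s;
        return {};
    }

    std::string const& data() const { return data_; }
};

// Code under test: must surface any error from the sink, never swallow it.
std::error_code write_greeting(mock_sink& sink)
{
    if(auto ec = sink.write("hello, "))
        return ec;
    return sink.write("world");
}

int main()
{
    // Exercise the code under test with a failure injected at every
    // possible point: the first run fails immediately, the next fails
    // one operation later, and so on, until a run completes cleanly.
    for(std::size_t n = 0;; ++n)
    {
        fuse f(n);
        mock_sink sink(f);
        if(auto ec = write_greeting(sink))
        {
            std::cout << "injected failure at operation " << n
                      << ": " << ec.message() << '\n';
            continue;
        }
        std::cout << "clean run after allowing " << n
                  << " operations: " << sink.data() << '\n';
        break;
    }
}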