On Mon, Feb 2, 2026 at 4:06 AM Andrey Semashev via Boost <boost@lists.boost.org> wrote:
There is a difference between code completion suggestions and a significant body of code that is generated by an LLM. Besides the legal implications raised by Rene, there are also technical concerns.
Agreed, and these are concerns I take seriously. I want to address them concretely rather than in the abstract.
What is the quality of such code, is it maintainable in the long term, including by people other than the original publisher (as the term "author" is probably not exactly applicable in this case), does it perform well and handle corner cases correctly - stuff like that.
These are the right questions to ask of any submission. They are also questions that can be answered by examining the actual library being proposed.
AI-generated code often lacks in these departments, sometimes obviously, but also sometimes subtly.
I'd be curious what experience informs this. Have you used these tools extensively, or observed a colleague doing so? Or is this a general impression? I ask not to be combative, but because the gap between the popular perception of AI-generated code and the reality of AI-assisted development with rigorous testing and review is, in my experience, substantial. More to the point: did you look at this library's implementation? The code, the tests, the corner case coverage? That would be the most direct way to evaluate whether these concerns apply here.
You could argue that a review should highlight such weaknesses, if they exist, and partly that would be true. But historically, reviewers have paid little attention to the implementation, compared to high-level design, interfaces and documentation. It takes much more time and effort to dig into the implementation details, and it is understandable that not many reviewers are willing to do this.
This is a fair observation about review practice in general.

We generally assume that the author is willing to put in the effort to make his library well implemented and maintainable, as he is going to be the maintainer for the foreseeable future. With the vibe coding approach, this may no longer be a concern for such an author, and could result in poor quality of the implementation.
This is a legitimate concern and I want to acknowledge it directly. "Vibe coding" with no scrutiny would indeed be inappropriate for a project like Boost. I don't dispute that. But AI tooling is here to stay, and its capabilities are advancing rapidly. Rather than drawing a line that treats all AI-assisted development as suspect, this review could be an opportunity for the Boost community to demonstrate how it embraces this technology responsibly: with the same rigor it applies to everything else. That would be genuine leadership, and the kind of example the broader C++ community would benefit from.

Yes, AI is just a tool. But, as with any tool, it should be used wisely. For example, I wouldn't mind using AI to generate code for prototyping or to get the job done quickly in a personal pet project, where my capabilities or time are lacking. But I don't think this would be acceptable in a widely used project such as Boost, definitely not without a high level of scrutiny from the authors of the submission.
I agree that a high level of scrutiny is required. Did you examine the test suite? Did you ask about my methodology: how I use AI tooling, what I review, how I validate correctness? These are the questions that would let you evaluate whether that scrutiny was actually applied.
So high level, in fact, that it would invalidate any possible time and effort gains from using AI in the first place.
How did you measure that? This is a strong empirical claim presented without evidence. You haven't asked about my workflow, my review process, or the time I invested in validation. Without that information, this is an assertion, not an analysis.
And the reviewers generally can't be sure that the author has exercised such scrutiny.
True in general. But this library is not a generality; it is a specific, concrete body of work that can be inspected. The code is there. The tests are there. Did you look?

Thanks