On 2 Feb 2026 16:06, Vinnie Falco wrote:
> On Mon, Feb 2, 2026 at 4:06 AM Andrey Semashev via Boost <boost@lists.boost.org> wrote:
>> AI-generated code is often lacking in these departments, sometimes obviously, but also sometimes subtly.
> I'd be curious what experience informs this. Have you used these tools extensively, or observed a colleague doing so? Or is this a general impression?
This is from my own experience with AI-based tools, not limited to code generation but also including "general assistants", e.g. what Google search provides. It is also the impression I get from other users and from online sources such as news outlets, bloggers, and commenters.
> More to the point: did you look at this library's implementation? The code, the tests, the corner case coverage? That would be the most direct way to evaluate whether these concerns apply here.
My comment was not in relation to your specific library but rather an answer to the more general reply by Richard. I did not look at your library's implementation and have nothing to say about it yet.
>> Yes, AI is just a tool. But, like any tool, it should be used wisely. For example, I wouldn't mind using AI to generate code for prototyping, or to get the job done quickly in a personal pet project where my capabilities or time are lacking. But I don't think this would be acceptable in a widely used project such as Boost, definitely not without a high level of scrutiny from the authors of the submission.
> I agree that a high level of scrutiny is required. Did you examine the test suite? Did you ask about my methodology: how I use AI tooling, what I review, how I validate correctness? These are the questions that would let you evaluate whether that scrutiny was actually applied.
I believe Christian Mazakas asked, and I don't think an answer was provided.
>> So high a level, in fact, that it would negate any possible time and effort gains from using AI in the first place.
> How did you measure that? This is a strong empirical claim presented without evidence.
This is my experience from interacting with various AI tools. I have often been bitten by misinformation or incorrect output, so I simply cannot trust AI. Therefore I must verify its sources first hand, or review the generated code so closely that I would probably have spent the same or less time and effort doing the work without AI to begin with. It simply isn't worth it when the result is something I need to be reliable.

Maybe you could blame me for not composing the query in just the right way, but that would only reinforce my point. Where a regular search with incorrect terms would simply fail to return relevant information, an incorrectly phrased AI query may return garbage that looks correct. The same applies to code generation.