On 1 Feb 2026 23:27, Richard via Boost wrote:
> In article <CAHf7xWteO7nQazqXoXBXD24ytSjNAuyVcaC=gDjWYde7X2bdtw@mail.gmail.com> you write:
>> It's also worth asking: was any of this largely coded by something such as an LLM?
>> ...and if any library, not just this one, had such content, what then?
> Crappy code written by humans has been around as long as humans have been writing code. So has good code written by humans.
>
> LLM coding agents are here to stay. Code should be judged on its intrinsic quality, not on whether an LLM was used to create it.
>
> My IDE has coding agents built in, and they constantly make suggestions for completing code, writing code, and so on. I judge these contributions myself and decide whether or not to accept them. In the end, you can't really tell how much of my code was written as suggested completions and how much as characters I typed myself.
There is a difference between code completion suggestions and a significant body of code generated by an LLM. Besides the legal implications Rene raised, there are technical concerns: what is the quality of such code, is it maintainable in the long term, including by people other than the original publisher (the term "author" is probably not exactly applicable in this case), does it perform well, and does it handle corner cases correctly. AI-generated code is often lacking in these areas, sometimes obviously, but sometimes subtly (a contrived example of what I mean follows at the end of this message).

You could argue that a review should highlight such weaknesses, if they exist, and that is partly true. But historically, reviewers have paid little attention to the implementation compared to the high-level design, interfaces and documentation. It takes much more time and effort to dig into the implementation details, and it is understandable that not many reviewers are willing to do this. We generally assume that the author is willing to put in the effort to make his library well implemented and maintainable, as he is going to be the maintainer for the foreseeable future. With a vibe-coding approach, this may no longer be a concern for such an author, and the result could be an implementation of poor quality.

Yes, AI is just a tool. But, like any tool, it should be used wisely. For example, I wouldn't mind using AI to generate code for prototyping, or to get the job done quickly in a personal pet project where my capabilities or time are lacking. But I don't think this would be acceptable in a widely used project such as Boost, certainly not without a high level of scrutiny from the authors of the submission. So high, in fact, that it would invalidate any time and effort gains from using AI in the first place. And reviewers generally can't be sure that the author has exercised such scrutiny.
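To make "subtly" more concrete, here is a contrived C++ sketch (the function and names are mine, not taken from any real submission) of the classic kind of defect that reading the code casually tends to miss:

    #include <vector>

    // Hypothetical example: a binary search that reads as correct
    // and passes ordinary tests.
    int find_sorted(const std::vector<int>& v, int value)
    {
        int lo = 0;
        int hi = static_cast<int>(v.size()) - 1;
        while (lo <= hi)
        {
            // The subtle corner case: `lo + hi` overflows (undefined
            // behaviour) once the vector holds more than INT_MAX / 2
            // elements; the robust form is `lo + (hi - lo) / 2`.
            int mid = (lo + hi) / 2;
            if (v[mid] < value)
                lo = mid + 1;
            else if (v[mid] > value)
                hi = mid - 1;
            else
                return mid; // found
        }
        return -1; // not found
    }

Code like this compiles cleanly and passes every small-scale test, and fails only at sizes nobody tries during a review. That is exactly the class of defect I'd worry about in a large generated code base that no human has scrutinized line by line.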