
On Wednesday, 3 September 2025 at 08:01 -0700, Vinnie Falco via Boost wrote:
Given the sharply negative reaction I think we will put this project on hold and continue exploring how AI might assist us in other ways.
My feeling is that the experiment went wrong for the following reasons:

* the analysis was sent in the middle of the review, making it look like a review when it is not one
* the tone of the analysis is very verbose, and therefore AI-looking
* the analysis itself was not reviewed enough and contained some false statements, which may not be obvious to the casual reader; reviewers are not necessarily SQLite experts

In my opinion, having such an overview of the current field is valuable, but it needs:

* to be done before the review, so that each reviewer can refer to it
* to be done, or at least corrected and reviewed, by the author of the library

Analyzing competitors and comparing the library against them has, as far as I remember, always been a request in reviews. AI can help with that step. Or maybe not; it's just a tool. The right question about the experiment is not whether AI-generated content is relevant, but how valuable Sergio's message was for the reviewers and for the review manager. In my opinion, there is some value in what was posted, but the message is too verbose, the timing was wrong, and the presence of errors embarrassing. It would probably have been better if it had been done before the review, phrased more concisely, and corrected by the library author.
For example, we are building an agent to analyze the entire archive of mailing list posts and propose topical keywords. And also to categorize each post, in particular to identify conversations tied to formal reviews so we can index and display such posts on the corresponding Library page.
This looks like a nice project. Thanks for experimenting in these areas as well.

Regards,
Julien