On 02/02/2026 at 18:26, Vinnie Falco via Boost wrote:
> The review process already asks the right questions. Is the design sound? Are the abstractions correct? Are the edge cases handled? Does the documentation meet our standard? A coroutine scheduler is either correct or it is not, and the subtle bugs that live in this domain will be surfaced by the quality of the review, not by an accounting of the author's workflow. The review process exists to evaluate artifacts, not to police how authors produce them. If we want to introduce that precedent, it warrants serious thought about where such a principle leads and whose workflows it would scrutinize next.
I agree. Let's focus on the result and on how maintainable the library is for the original author or any successor/community. Just skimming the code, it seems clean and simple.

I don't have any expertise in coroutines and similar high-concurrency tools, but I would suggest that the documentation highlight why Capy is the way to go. The documentation explains what Capy is about (and what the library does not do) but, unless I'm missing something, it does not emphasize why we should use Capy rather than any other abstraction (are there any competing libraries?). I can read about the cancellation problem in the "Stop Tokens" chapter, and I can deduce that Capy offers some allocation optimizations (reusing coroutine frame allocations) that might be difficult to achieve with manual coroutine programming. But I would suggest collecting Capy's strong points and putting a summary of them in the Introduction. If we could have some comparison against another library or against direct coroutine programming (maybe the number of allocations or total memory usage?), that would draw attention to the library (see the P.S. below for a sketch of what I mean).

Potential Capy users will also ask about performance. Since Capy does not offer networking, event loops, and other tools, I'm not sure how we can measure Capy's performance or what the baseline of that comparison would be. I see a single bench program; maybe we should collect the results and put them in the docs.

As you suggest, real-world usage data and feedback are a good addition, but I don't think most new Boost libraries had real-world usage before being accepted into the collection. Maybe real-world usage isn't practical if Capy is not exercised with an I/O-providing library like Corosio, Beast... I mean, it's more difficult to see the usefulness of this framework without some concrete uses. I see the documentation has an example using Corosio (https://master.capy.cpp.al/capy/examples/custom-dynamic-buffer.html). Maybe reviewers would be more familiar with an example that uses Boost.Asio, so that they can compare classic Asio with coroutines against Capy (assuming Asio can be made compatible with Capy; see the P.P.S. below)?

My 2 cents,

Ion
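
P.S. To make the allocation-comparison suggestion concrete, here is a rough sketch of a counting harness. It is only an illustration of the measurement idea, not anything from Capy: the demo lambda stands in for whatever workload would be compared (the plain-coroutine version vs. the Capy version), and those workloads would have to be supplied.

    // Rough sketch: count heap allocations performed by a workload by
    // replacing the global operator new/delete with counting versions.
    #include <atomic>
    #include <cstddef>
    #include <cstdio>
    #include <cstdlib>
    #include <new>

    static std::atomic<std::size_t> g_allocs{0};

    void* operator new(std::size_t n)
    {
        ++g_allocs;
        if (n == 0)
            n = 1;
        if (void* p = std::malloc(n))
            return p;
        throw std::bad_alloc();
    }
    void operator delete(void* p) noexcept { std::free(p); }
    void operator delete(void* p, std::size_t) noexcept { std::free(p); }
    // (array forms omitted for brevity)

    // Run a workload and report how many allocations it performed.
    template <class Workload>
    std::size_t count_allocations(Workload w)
    {
        std::size_t const before = g_allocs.load();
        w();
        return g_allocs.load() - before;
    }

    int main()
    {
        // Stand-in workload just to show the harness working; the real
        // comparison would plug in the plain-coroutine and Capy versions
        // of the same job here.
        auto demo = [] {
            for (int i = 0; i < 3; ++i)
                delete new int(i);
        };
        std::printf("demo workload: %zu allocations\n",
                    count_allocations(demo));
    }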
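
P.P.S. Regarding the Boost.Asio comparison: the "classic Asio with coroutines" side of such an example could be as small as the snippet below, using only the standard awaitable/co_spawn machinery (nothing Capy-specific; how the equivalent would look in Capy is exactly what the documentation could show next to it).

    // Classic Asio coroutine style: an awaitable that waits on a timer.
    #include <boost/asio.hpp>
    #include <chrono>
    #include <cstdio>

    namespace asio = boost::asio;

    asio::awaitable<void> wait_a_bit()
    {
        auto ex = co_await asio::this_coro::executor;
        asio::steady_timer timer(ex, std::chrono::milliseconds(100));
        co_await timer.async_wait(asio::use_awaitable);
        std::puts("timer fired");
    }

    int main()
    {
        asio::io_context ioc;
        asio::co_spawn(ioc, wait_a_bit(), asio::detached);
        ioc.run();
    }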