Re: Capy: Request for endorsements
In article <CAHf7xWteO7nQazqXoXBXD24ytSjNAuyVcaC=gDjWYde7X2bdtw@mail.gmail.com> you write:
It's also worth asking, was any of this largely coded by something such as an LLM?
...and if any library, not just this one, had such content, what then?

Crappy code written by humans has been around as long as humans have been writing code. So has good code written by humans.

LLM coding agents are with us to stay. Code should be judged on its intrinsic quality and not whether or not an LLM was used to create it.

My IDE has coding agents built into it and they constantly make suggestions for completing code, writing code, etc. I judge these contributions myself and decide whether or not to accept them. In the end, you can't really tell how much of my code is written as suggested completions or as characters I typed myself.

-- Richard

--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
On Sun, Feb 1, 2026 at 2:28 PM Richard via Boost <boost@lists.boost.org> wrote:
In article <CAHf7xWteO7nQazqXoXBXD24ytSjNAuyVcaC=gDjWYde7X2bdtw@mail.gmail.com> you write:
It's also worth asking, was any of this largely coded by something such as an LLM?
...and if any library, not just this one, had such content, what then?
Crappy code written by humans has been around as long as humans have been writing code. So has good code written by humans.
LLM coding agents are with us to stay. Code should be judged on its intrinsic quality and not whether or not an LLM was used to create it.
My IDE has coding agents built into it and they constantly make suggestions for completing code, writing code, etc. I judge these contributions myself and decide whether or not to accept them. In the end, you can't really tell how much of my code is written as suggested completions or as characters I typed myself.
People have used, and will continue to use, all kinds of tools to get things done. And that's fine. But one aspect that this particular case brings up, which I've previously mentioned to Vinnie, is the rights to the produced code. Is the code produced under a different license than the BSL? Is such a license compatible with the BSL? Does it have a license? Do you need attribution? And, not being a lawyer, I just don't know any actual answers. But I do think we need answers.

PS. As for endorsement: I can't endorse it as I no longer have enough domain knowledge to evaluate it. :-)

--
-- René Ferdinand Rivera Morell
-- Don't Assume Anything -- No Supongas Nada
-- Robot Dreams - http://robot-dreams.net
On Sun, Feb 1, 2026 at 7:46 PM René Ferdinand Rivera Morell via Boost <boost@lists.boost.org> wrote:
...I can't endorse it as I no longer have enough domain knowledge to evaluate it. :-)
Yes, well, that's what the Coroutines Tutorial is for.

Thanks
On 1 Feb 2026 23:27, Richard via Boost wrote:
In article <CAHf7xWteO7nQazqXoXBXD24ytSjNAuyVcaC=gDjWYde7X2bdtw@mail.gmail.com> you write:
It's also worth asking, was any of this largely coded by something such as an LLM?
...and if any library, not just this one, had such content, what then?
Crappy code written by humans has been around as long as humans have been writing code. So has good code written by humans.
LLM coding agents are with us to stay. Code should be judged on its intrinsic quality and not whether or not an LLM was used to create it.
My IDE has coding agents built into it and they constantly make suggestions for completing code, writing code, etc. I judge these contributions myself and decide whether or not to accept them. In the end, you can't really tell how much of my code is written as suggested completions or as characters I typed myself.
There is a difference between code completion suggestions and a significant body of code that is generated by an LLM. Besides the legal implications raised by Rene, there are also technical concerns. What is the quality of such code, is it maintainable in the long term, including by people other than the original publisher (as the term "author" is probably not exactly applicable in this case), does it perform well and handle corner cases correctly - stuff like that. AI-generated code often lacks in these departments, sometimes obviously, but also sometimes subtly.

You could argue that a review should highlight such weaknesses, if they exist, and partly that would be true. But historically, reviewers have paid little attention to the implementation compared to high-level design, interfaces and documentation. It takes much more time and effort to dig into the implementation details, and it is understandable that not many reviewers are willing to do this. We generally assume that the author is willing to put in the effort to make his library well implemented and maintainable, as he is going to be the maintainer for the foreseeable future. With a vibe-coding approach, this may no longer be a concern for such an author, and could result in poor quality of the implementation.

Yes, AI is just a tool. But, as any tool, it should be used wisely. For example, I wouldn't mind using AI to generate code for prototyping or to get the job done quickly in a personal pet project, where my capabilities or time are lacking. But I don't think this would be acceptable in a widely used project such as Boost, definitely not without a high level of scrutiny from the authors of the submission. So high level, in fact, that it would invalidate any possible time and effort gains from using AI in the first place. And the reviewers generally can't be sure that the author has exercised such scrutiny.
On Mon, Feb 2, 2026 at 4:06 AM Andrey Semashev via Boost <boost@lists.boost.org> wrote:
There is a difference between code completion suggestions and a significant body of code that is generated by an LLM. Besides the legal implications raised by Rene, there are also technical concerns.
Agreed, and these are concerns I take seriously. I want to address them concretely rather than in the abstract.
What is the quality of such code, is it maintainable in the long term, including by people other than the original publisher (as the term "author" is probably not exactly applicable in this case), does it perform well and handle corner cases correctly - stuff like that.
These are the right questions to ask of any submission. They are also questions that can be answered by examining the actual library being proposed.
AI-generated code often lacks in these departments, sometimes obviously, but also sometimes subtly.
I'd be curious what experience informs this. Have you used these tools extensively, or observed a colleague doing so? Or is this a general impression? I ask not to be combative, but because the gap between popular perception of AI-generated code and the reality of AI-assisted development with rigorous testing and review is, in my experience, substantial.

More to the point: did you look at this library's implementation? The code, the tests, the corner case coverage? That would be the most direct way to evaluate whether these concerns apply here.
You could argue that a review should highlight such weaknesses, if they exist, and partly that would be true. But historically, reviewers have paid little attention to the implementation, compared to high-level design, interfaces and documentation. It takes much more time and effort to dig into the implementation details, and it is understandable that not many reviewers are willing to do this.
This is a fair observation about review practice in general.

We generally assume that the author is willing to put in the effort to make his library well implemented and maintainable, as he is going to be the maintainer for the foreseeable future. With a vibe-coding approach, this may no longer be a concern for such an author, and could result in poor quality of the implementation.

This is a legitimate concern and I want to acknowledge it directly. "Vibe coding" with no scrutiny would indeed be inappropriate for a project like Boost. I don't dispute that. But AI tooling is here to stay, and its capabilities are advancing rapidly. Rather than drawing a line that treats all AI-assisted development as suspect, this review could be an opportunity for the Boost community to demonstrate how it embraces this technology responsibly: with the same rigor it applies to everything else. That would be genuine leadership, and the kind of example the broader C++ community would benefit from.

Yes, AI is just a tool. But, as any tool, it should be used wisely. For example, I wouldn't mind using AI to generate code for prototyping or to get the job done quickly in a personal pet project, where my capabilities or time are lacking. But I don't think this would be acceptable in a widely used project such as Boost, definitely not without a high level of scrutiny from the authors of the submission.
I agree that a high level of scrutiny is required. Did you examine the test suite? Did you ask about my methodology: how I use AI tooling, what I review, how I validate correctness? These are the questions that would let you evaluate whether that scrutiny was actually applied.
So high level, in fact, that it would invalidate any possible time and effort gains from using AI in the first place.
How did you measure that? This is a strong empirical claim presented without evidence. You haven't asked about my workflow, my review process, or the time I invested in validation. Without that information, this is assertion, not analysis.
And the reviewers generally can't be sure that the author has exercised such scrutiny.
True in general. But this library is not a generality; it is a specific, concrete body of work that can be inspected. The code is there. The tests are there. Did you look?

Thanks
On 2 Feb 2026 16:06, Vinnie Falco wrote:
On Mon, Feb 2, 2026 at 4:06 AM Andrey Semashev via Boost <boost@lists.boost.org> wrote:
AI-generated code often lacks in these departments, sometimes obviously, but also sometimes subtly.
I'd be curious what experience informs this. Have you used these tools extensively, or observed a colleague doing so? Or is this a general impression?
This is from my own experience with AI-based tools, not limited to code generation but also including "general assistants", e.g. what Google search provides. This is also the impression I get from other users and online sources like news, bloggers and commenters.
More to the point: did you look at this library's implementation? The code, the tests, the corner case coverage? That would be the most direct way to evaluate whether these concerns apply here.
My comment was not in relation to your specific library but rather an answer to the more general reply by Richard. I did not look at your library implementation and I have nothing to say about it yet.
Yes, AI is just a tool. But, as any tool, it should be used wisely. For example, I wouldn't mind using AI to generate code for prototyping or to get the job done quickly in a personal pet project, where my capabilities or time are lacking. But I don't think this would be acceptable in a widely used project such as Boost, definitely not without a high level of scrutiny from the authors of the submission.
I agree that a high level of scrutiny is required. Did you examine the test suite? Did you ask about my methodology: how I use AI tooling, what I review, how I validate correctness? These are the questions that would let you evaluate whether that scrutiny was actually applied.
I believe Christian Mazakas asked, and I don't think an answer was provided.
So high level, in fact, that it would invalidate any possible time and effort gains from using AI in the first place.
How did you measure that? This is a strong empirical claim presented without evidence.
This is my experience from interaction with various AI tools. I was often bitten by getting misinformation or incorrect output, so I simply cannot trust AI. Therefore I must verify its sources first-hand, or review the code to such a level that I would probably have spent the same amount of time and effort, or less, had I not used AI to begin with. It simply isn't worth it when the result is something that I need to be reliable.

Maybe you could blame me for not being able to compose the query in just the right way, but that would only reinforce my point. Where a regular search with incorrect terms would simply not bring you relevant information, an incorrect query might bring you garbage that looks correct. Similarly with code generation.
On Mon, Feb 2, 2026 at 6:40 AM Andrey Semashev via Boost <boost@lists.boost.org> wrote:
I believe, Christian Mazakas asked and I don't think an answer was provided.
The lack of response reflects my thinking that the questions were intended to provoke rather than inform, as this individual already knows the answers to those questions, since they were present during the development of the libraries (and also contributed to some of the downstream libraries). That said, agentic workflows are now a core component of my development process, and the quality of the Capy offering is reflective of that workflow. Said individual was the inspiration for two of my blog posts from last week, which in hindsight could explain the motivation behind the questions:

The Span Reflex: When Concrete Thinking Blocks Compositional Design
https://www.vinniefalco.com/p/the-span-reflex-when-concrete-thinking

The Implementation Confidence Gap
https://www.vinniefalco.com/p/the-implementation-confidence-gap

This is my experience from interaction with various AI tools. I was often bitten by getting misinformation or incorrect output, so I simply cannot trust AI.

Yes, I had those experiences as well, and this discouraged me from the tools for a while. Yet I decided to try again and fully embrace and immerse myself, and I find that adapting a development methodology around the tools can bring good results.

The Capy library offers not only the concepts and types related to its domain, but it also provides, as a public interface, the set of tools needed to rigorously test those components, as users are inevitably creating models of the provided concepts. Those tools are located here:

https://github.com/cppalliance/capy/tree/develop/include/boost/capy/test

They contain mock objects, an algorithm to exercise the buffer sequence concepts, and also a new thing called a "fuse." This is a component which works together with the mock objects to inject both failing error codes and exceptions into the code under test. The testing components also have their own tests. Overkill? I think we want to err on the side of thoroughness, especially when parts of the library are the result of generative synthesis. A strong foundation of mock objects, algorithmic exercise, error injection, coverage reports, and human-in-the-loop auditing is our approach to ensuring quality.

And like my other libraries, we engage outside consultants to review everything, since there are security implications with software that touches network inputs. Beast, JSON, and URL all have independent reports, for example:

https://cppalliance.org/pdf/C%20Plus%20Plus%20Alliance%20-%20Boost%20JSON%20...

While I authored our current libraries in Boost, they have their own long-time maintainers, so I will use the term "we" here to reflect their contributions. We have always maintained a consistently high level of quality for our proposed and maintained libraries, and Capy and the many libraries which will soon follow will be no exception.

Thanks
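P.S. To make the "fuse" idea concrete, here is a minimal sketch of the general error-injection technique. The names below are made up for illustration; this is not the actual Capy test API. A countdown object decides when a mock operation should fail, and the harness re-runs the code under test with the failure injected at each successive operation, once as an error code and once as an exception:

    // Sketch only: hypothetical names, not the Boost.Capy test interface.
    #include <cstddef>
    #include <initializer_list>
    #include <iostream>
    #include <new>
    #include <system_error>

    // A "fuse" blows after a configurable number of operations, injecting
    // either an error code or an exception at that point.
    struct fuse
    {
        std::size_t countdown;     // operations allowed before the failure
        bool throw_on_blow;        // inject an exception instead of an error

        // Returns true and sets ec (or throws) when the failure should occur.
        bool check(std::error_code& ec)
        {
            if (countdown > 0)
            {
                --countdown;
                return false;
            }
            if (throw_on_blow)
                throw std::bad_alloc();
            ec = std::make_error_code(std::errc::io_error);
            return true;
        }
    };

    // A mock read source which consults the fuse before producing data.
    struct mock_source
    {
        fuse& f;
        std::size_t chunks = 3;    // pretend three chunks are available

        // Returns the number of bytes "read"; 0 with ec set on failure or end.
        std::size_t read_some(std::error_code& ec)
        {
            ec = {};
            if (f.check(ec))
                return 0;
            if (chunks == 0)
            {
                ec = std::make_error_code(std::errc::no_message_available);
                return 0;
            }
            --chunks;
            return 16;
        }
    };

    // The code under test: drain the source, accumulating a byte count.
    std::size_t drain(mock_source& src, std::error_code& ec)
    {
        std::size_t total = 0;
        while (!ec)
            total += src.read_some(ec);
        return total;
    }

    int main()
    {
        // Re-run the algorithm with the fuse blowing one operation later each
        // time, both as an error code and as an exception, so every failure
        // path in drain() gets exercised.
        for (bool use_exception : { false, true })
            for (std::size_t i = 0; i < 6; ++i)
            {
                fuse f{ i, use_exception };
                mock_source src{ f };
                std::error_code ec;
                try
                {
                    std::size_t n = drain(src, ec);
                    std::cout << "blow at op " << i << ": " << n
                              << " bytes, ec=" << ec.message() << "\n";
                }
                catch (std::exception const& e)
                {
                    std::cout << "blow at op " << i << ": threw "
                              << e.what() << "\n";
                }
            }
    }

The actual components under capy/test are their own thing; the sketch only illustrates the injection pattern that the fuse and mock objects cooperate on.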
In article <b48cbdd7-28ea-4184-b77a-b6e2ff3e6b13@gmail.com> you write:
On 1 Feb 2026 23:27, Richard via Boost wrote:
In article <CAHf7xWteO7nQazqXoXBXD24ytSjNAuyVcaC=gDjWYde7X2bdtw@mail.gmail.com> you write:
It's also worth asking, was any of this largely coded by something such as an LLM?
...and if any library, not just this one, had such content, what then?
Crappy code written by humans has been around as long as humans have been writing code. So has good code written by humans.
LLM coding agents are with us to stay. Code should be judged on its intrinsic quality and not whether or not an LLM was used to create it.
My IDE has coding agents built into it and they constantly make suggestions for completing code, writing code, etc. I judge these contributions myself and decide whether or not to accept them. In the end, you can't really tell how much of my code is written as suggested completions or as characters I typed myself.
There is a difference between code completion suggestions and a significant body of code that is generated by an LLM.
There certainly is a difference in the authoring process.

If you, as a reviewer or reader, can't tell the difference between the product of that process and what I would've typed by hand, why do you care?

--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
On 2 Feb 2026 18:38, Richard via Boost wrote:
In article <b48cbdd7-28ea-4184-b77a-b6e2ff3e6b13@gmail.com> you write:
On 1 Feb 2026 23:27, Richard via Boost wrote:
In article <CAHf7xWteO7nQazqXoXBXD24ytSjNAuyVcaC=gDjWYde7X2bdtw@mail.gmail.com> you write:
It's also worth asking, was any of this largely coded by something such as an LLM?
...and if any library, not just this one, had such content, what then?
Crappy code written by humans has been around as long as humans have been writing code. So has good code written by humans.
LLM coding agents are with us to stay. Code should be judged on its intrinsic quality and not whether or not an LLM was used to create it.
My IDE has coding agents built into it and they constantly make suggestions for completing code, writing code, etc. I judge these contributions myself and decide whether or not to accept them. In the end, you can't really tell how much of my code is written as suggested completions or as characters I typed myself.
There is a difference between code completion suggestions and a significant body of code that is generated by an LLM.
There certainly is a difference in the authoring process.
If you, as a reviewer or reader, can't tell the difference between the product of that process and what I would've typed by hand, why do you care?
Purely from a technical quality standpoint, I wouldn't care if the code was of decent quality. The problem is, as I said in my reply, that the quality of the AI-generated code is often worse than that of the code written by a qualified human. Legal concerns still remain, though.
[Please do not mail me a copy of your followup]

Andrey Semashev via Boost <boost@lists.boost.org> spake the secret code <6b77dc56-de64-4a9a-bedc-6191284b0ee0@gmail.com> thusly:
On 2 Feb 2026 18:38, Richard via Boost wrote:
If you, as a reviewer or reader, can't tell the difference between the product of that process and what I would've typed by hand, why do you care?
[...] The problem is, as I said in my reply, that the quality of the AI-generated code is often worse than that of the code written by a qualified human.
As I said: if you can't tell the difference, why do you care?

You turn around and say that you can tell the difference because you assert that the code is "often worse" than that written by a "qualified" human. First, you need to define what you mean by a "qualified human", because other than willingness I see no requirement that Boost library authors demonstrate some sort of qualification. Second, you need to provide evidence for your assertion that AI-generated code is "often" worse.

The premise of my statement is: if you can't tell how I arrived at the code, why do you care? As I wrote earlier, crappy code has been written by humans since they started writing code, so just asserting that AI-generated code is crappy is a non-differentiating difference.

--
"The Direct3D Graphics Pipeline" free book <http://tinyurl.com/d3d-pipeline>
The Terminals Wiki <http://terminals-wiki.org>
The Computer Graphics Museum <http://computergraphicsmuseum.org>
Legalize Adulthood! (my blog) <http://legalizeadulthood.wordpress.com>
It should have been prominently disclosed that the code in question was LLM-produced; I had to ask explicitly because it wasn't mentioned. Boost should be honest and transparent, and no one should be unwittingly subjected to reviewing LLM-produced code without knowing it beforehand. This is to say, Boost should have honor and integrity, and we are the keepers of that honor and integrity. Transparency is part of this.

The other side of the coin is that I've actually written coroutine and I/O schedulers, so I understand exactly how difficult the problem is and how subtle the bugs are. It takes a lot of time to digest this stuff and produce something of high quality. Using an LLM to take a shortcut through all of this is very anti-Boost, imo. And I'm not sure it's a precedent we want to set.

What's the rush for getting this into Boost? Why not simply incubate it and roll it out to real users first?

- Christian
On Mon, Feb 2, 2026 at 10:17 AM Richard via Boost <boost@lists.boost.org> wrote:
[Please do not mail me a copy of your followup]
Andrey Semashev via Boost <boost@lists.boost.org> spake the secret code <6b77dc56-de64-4a9a-bedc-6191284b0ee0@gmail.com> thusly:
On 2 Feb 2026 18:38, Richard via Boost wrote:
If you, as a reviewer or reader, can't tell the difference between the product of that process and what I would've typed by hand, why do you care?
[...] The problem is, as I said in my reply, that the quality of the AI-generated code is often worse than that of the code written by a qualified human.
As I said: if you can't tell the difference, why do you care?
You turn around and say that you can tell the difference because you assert that the code is "often worse" than that written by a "qualified" human.
A point of qualification: you also have the responsibility to prove that LLM-produced code is at least as good as human-produced code. And there has been research showing that LLM-produced code is not as good as human-produced code (it also follows logically if you think about how LLMs are limited by their training inputs). I've mentioned some of that research in the Boost Slack channel.

The quality of the end product depends on the human in the loop. And AFAIK Vinnie is being diligent in his methods. But that doesn't remove the need to verify externally. As with all code, multiple perspectives yield better end results.

--
-- René Ferdinand Rivera Morell
-- Don't Assume Anything -- No Supongas Nada
-- Robot Dreams - http://robot-dreams.net
This is a genuinely important conversation and one that Boost has not yet had to have in a serious way. Rather than treating it as a problem, we should recognize it for what it is: an opportunity. This library may be the first to bring the question of LLM-assisted development to the review process directly, and that makes it valuable regardless of where one stands on the tooling itself.

Boost has always led by example. We were early to adopt modern C++ idioms, early to formalize peer review for open source, early to set a standard of quality that the rest of the ecosystem looks to. The world is now grappling with how AI-assisted development fits into serious engineering. Boost can either wait for others to figure that out or we can do what we have always done: lead.

The review process already asks the right questions. Is the design sound? Are the abstractions correct? Are the edge cases handled? Does the documentation meet our standard? A coroutine scheduler is either correct or it is not, and the subtle bugs that live in this domain will be surfaced by the quality of the review, not by an accounting of the author's workflow. The review process exists to evaluate artifacts, not to police how authors produce them. If we want to introduce that precedent, it warrants serious thought about where such a principle leads and whose workflows it would scrutinize next.

The suggestion to incubate further is always worth considering on technical merits. Every library benefits from real-world usage. That advice stands on its own and does not need to be coupled to how the code was written.

Boost has an opportunity here to show the broader C++ community how a serious project evaluates serious work in a changing landscape. We should rise to it.
On 02/02/2026 at 18:26, Vinnie Falco via Boost wrote:
The review process already asks the right questions. Is the design sound? Are the abstractions correct? Are the edge cases handled? Does the documentation meet our standard? A coroutine scheduler is either correct or it is not, and the subtle bugs that live in this domain will be surfaced by the quality of the review, not by an accounting of the author's workflow. The review process exists to evaluate artifacts, not to police how authors produce them. If we want to introduce that precedent, it warrants serious thought about where such a principle leads and whose workflows it would scrutinize next.
I agree. Let's focus on the result and how maintainable the library is by the original author or any successor/community. Just skimming the code, it seems clean and simple.

I don't have any expertise on coroutines and similar high-concurrency tools, but I would suggest that documentation should highlight why Capy is the way to go. Documentation explains what Capy is about (and what the library does not do) but, unless I'm missing something, it does not emphasize why we should use Capy and not any other abstractions (are there any competing libraries?). I can read about the cancellation problem in the "Stop Tokens" chapter, and I can deduce that Capy offers some allocation optimizations/reuse (reusing coroutine frame allocations) that might be difficult to achieve with manual coroutine programming. But I would suggest collecting the strong points of Capy and putting a summary of them in the Introduction. If we could have some comparison against some other library or against direct coroutine programming (maybe the number of allocations or total memory usage?), that would draw attention to the library.

Potential Capy users will also ask about performance. Since Capy does not offer networking, event loops, and other tools, I'm not sure how we can measure Capy's performance and what the baseline of that comparison would be. I see a single bench program; maybe we should collect the results and put them in the docs.

As you suggest, real-world usage data and feedback is a good addition, but I don't think most new Boost libraries had real-world usage before being accepted into the collection. Maybe real-world usage isn't practical if Capy is not tested with an I/O-providing library like Corosio, Beast... I mean, it's more difficult to see the usefulness of this framework without some concrete uses. I see the documentation has an example using Corosio (https://master.capy.cpp.al/capy/examples/custom-dynamic-buffer.html). Reviewers might be more familiar with an example that uses Boost.Asio, so that they can compare classic Asio with coroutines vs Capy (assuming Asio can be made compatible with Capy)?

My 2 cents,

Ion
On Mon, Feb 2, 2026 at 3:52 PM Ion Gaztañaga <igaztanaga@gmail.com> wrote:
Documentation explains what Capy is about (and what the library does not do) but, unless I'm missing something, it does not emphasize why we should use Capy and not any other abstractions (are there any competing libraries?).
This is a great point and it also highlights a difficulty in proposing such a library. On its own it is difficult to show the value proposition since it is a foundational library. It only solves a subset of user problems and needs things to be built with it to demonstrate utility. A while ago my thinking was to propose the Beast2 family of libraries as a collection, yet this would be unprecedented in Boost. As we refined the ideas I realized that Capy is the direct analogue of std::execution. That is, it solves the same family of problems as what is about to go into C++26. It also does a little more: things which are specific to I/O, expressed in terms of buffers of bytes (the networking use case). Thus I figure it is probably worthwhile to present these ideas sooner rather than later.

Reviewers might be more familiar with an example that uses Boost.Asio, so that they can compare classic Asio with coroutines vs Capy (assuming Asio can be made compatible with Capy)?

There has been no effort to preserve any sort of compatibility with Boost.Asio, except for the buffer sequence concepts and that is admittedly such a small aspect that it seems hardly worth mentioning. Asio has its own execution model which is rather constrained.

I would suggest that documentation should highlight why Capy is the way to go.

Yes, I have put together this page to help answer that question:

https://master.capy.cpp.al/capy/why-capy.html

Thanks
On 03/02/2026 at 4:52, Vinnie Falco via Boost wrote:
On Mon, Feb 2, 2026 at 3:52 PM Ion Gaztañaga <igaztanaga@gmail.com> wrote:
Documentation explains what Capy is about (and what the library does not do) but, unless I'm missing something, it does not emphasize why we should use Capy and not any other abstractions (are there any competing libraries?).
This is a great point and it also highlights a difficulty in proposing such a library. On its own it is difficult to show the value proposition since it is a foundational library. It only solves a subset of user problems and needs things to be built with it to demonstrate utility.
Maybe in the examples section we could add a more concrete approach if possible, say an example using the framework and reading a file asynchronously using POSIX or Windows asynchronous I/O. We can compare that with a more traditional coroutine approach. I guess that a user must write much less boilerplate code using Capy, and we should show that.
There has been no effort to preserve any sort of compatibility with Boost.Asio, except for the buffer sequence concepts and that is admittedly such a small aspect that it seems hardly worth mentioning. Asio has its own execution model which is rather constrained.
Since Boost.Asio is a very widely used library, I think users will ask how they can combine both, if possible. If combining Capy and Asio is not possible (I don't know, maybe incompatible APIs?), it should be mentioned in the documentation. I understand Capy can be combined with different I/O backends, so Capy users will inevitably ask which Capy-compatible I/O backends are available to them.
I would suggest that documentation should highlight why Capy is the way to go.
Yes, I have put together this page to help answer that question:
Yeah, I think it shows clear motivation.

An additional doubt; note that I have little knowledge of coroutines, but here I go anyway:

- When reading the IOAwaitableProtocol page (https://master.capy.cpp.al/capy/coroutines/io-awaitable.html), it says that "the IoAwaitable protocol extends await_suspend to receive context".
- But "await_suspend" is a method the coroutine machinery calls on an Awaiter, not an Awaitable.
- How is the three-argument "await_suspend" called? Who calls it? On which object? AFAIK the Awaiter comes from the co_await Awaitable expression, but I'm confused by the terminology used in the page...

Best,

Ion
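P.S. For context, this is my mental model: the compiler only ever calls await_ready, await_suspend(handle) and await_resume on whatever awaiter it obtains from the co_await expression, so a three-argument await_suspend would have to be invoked by library code rather than by the compiler, for example through an adapter produced by the promise's await_transform. A sketch of that pattern follows; the names are made up, and I don't know if this is what Capy actually does:

    // Sketch only: hypothetical names, not necessarily Capy's mechanism.
    #include <coroutine>
    #include <iostream>

    struct io_context { char const* name; };   // stand-in for the "context"

    // An awaitable following a hypothetical extended protocol: its
    // await_suspend takes the coroutine handle *and* a context argument.
    struct extended_awaitable
    {
        bool await_ready() const noexcept { return false; }
        void await_resume() const noexcept {}

        // Never called by the compiler directly; see the adapter below.
        bool await_suspend(std::coroutine_handle<>, io_context& ctx)
        {
            std::cout << "suspend point, context = " << ctx.name << "\n";
            return false;   // toy example: do not actually suspend
        }
    };

    struct task
    {
        struct promise_type
        {
            io_context ctx{ "demo" };

            task get_return_object() { return {}; }
            std::suspend_never initial_suspend() noexcept { return {}; }
            std::suspend_never final_suspend() noexcept { return {}; }
            void return_void() {}
            void unhandled_exception() {}

            // The promise intercepts every co_await and wraps the awaitable
            // in an adapter whose *standard* single-argument await_suspend
            // forwards to the extended overload, passing the context.
            template<class Awaitable>
            auto await_transform(Awaitable a)
            {
                struct adapter
                {
                    Awaitable inner;
                    io_context& ctx;
                    bool await_ready() { return inner.await_ready(); }
                    auto await_resume() { return inner.await_resume(); }
                    bool await_suspend(std::coroutine_handle<> h)
                    {
                        return inner.await_suspend(h, ctx);
                    }
                };
                return adapter{ a, ctx };
            }
        };
    };

    task demo()
    {
        // The compiler sees only the adapter; the extended await_suspend
        // is reached through it.
        co_await extended_awaitable{};
    }

    int main() { demo(); }

Is something along these lines what the IoAwaitable protocol is doing, or does the context flow some other way?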
On Tue, Feb 3, 2026 at 12:35 PM Ion Gaztañaga <igaztanaga@gmail.com> wrote:
...
Today, another library called "TooManyCooks" announced itself. It solves a completely different problem than what Capy solves (although they would look very similar to people who are not thoroughly familiar with the domain). We've added a comparison here:

https://develop.capy.cpp.al/capy/why-not-tmc.html

Thanks
participants (7)

- Andrey Semashev
- Christian Mazakas
- Ion Gaztañaga
- legalize+jeeves@mail.xmission.com
- René Ferdinand Rivera Morell
- Richard
- Vinnie Falco