On Mon, Feb 9, 2026 at 3:19 AM Ruben Perez <rubenperez038@gmail.com> wrote:
> This does not handle cancellation.
You're right, it doesn't yet. The example was written to demonstrate the integration pattern, not to be production-complete. Adding cancellation support is straightforward and I'll update the example.
> I don't think that trying to write Asio universal async operations using Capy is a good idea. I would encourage you to remove the "sans-io" term from Capy, since it assumes a particular I/O model.
I respectfully disagree. The term "sans-I/O" means that your protocol or business logic doesn't perform I/O itself; it reads and writes bytes through an abstraction, without knowing what the transport is. A Capy algorithm written against `any_stream` has no idea whether it's talking to a TCP socket, a TLS channel, a Unix pipe, or a memory buffer in a unit test. That's sans-I/O.

Every sans-I/O design has an interface shape. Even the purest sans-I/O libraries define how bytes flow in and out. Capy's shape is coroutine-based type-erased streams. Having a shape doesn't disqualify it from being sans-I/O; it's what makes it usable.

I think the concern here may be conflating "sans-I/O" with "sans-async-model," and those are different things. Sans-I/O means your business logic doesn't know what the transport is. It doesn't mean your business logic has no opinion about how async execution works. Asio's universal async model is also a particular model. Writing algorithms against `AsyncReadStream` with completion tokens is no more or less sans-I/O than writing against `any_stream` with coroutines. Both decouple the algorithm from the transport. Both commit to an execution pattern.

The practical litmus test for sans-I/O is testability: can you exercise your protocol logic entirely in-process, with no sockets and no event loop? With Capy's memory-backed streams, you can. That's the proof.
> Is there a reason why reading and writing have been placed in Capy, while other operations have been placed in Corosio? Connection establishment is a need for all clients, regardless of what service they connect to.
There's a broad spectrum of algorithms whose entire job is moving bytes through a stream, and they have nothing to do with how things get connected: JSON parsing and serialization; compression and decompression; TLS (once negotiated, it's just reading and writing encrypted bytes); HTTP message framing; WebSocket framing; protocol wire formats like PostgreSQL or Redis; hashing, checksumming, base64 encoding, proxy forwarding, rate limiting, multiplexing. The list goes on. These algorithms are portable, testable, and reusable precisely because they only care about bytes in and bytes out.

Connection establishment, DNS resolution, and accepting incoming connections are inherently platform-specific. They depend on the operating system, the network stack, and the I/O driver. Capy draws the line exactly where it makes sense: stream I/O is the reusable, portable layer; connecting is platform-specific and belongs in Corosio.

This is the same separation you see in well-designed systems everywhere. A JSON parser doesn't need to know how you connected to the server. A TLS implementation doesn't care whether the underlying transport is a TCP socket or a named pipe. Capy formalizes that boundary.
> What's the best way for this user to adopt my library? Do they need to port everything in co_main to capy and corosio? Or can they keep it and use my library?
The answer depends on what API the library author chooses to expose, and it's entirely their decision.

If the library author exposes `capy::io_task<void>` as the return type, then an existing Asio user can't directly `co_await` it from an `asio::awaitable` coroutine. Asio's `awaitable` promise type defines a closed set of `await_transform` overloads; it only accepts `awaitable<T>`, Asio async operations, and a few internal primitives. A `capy::io_task` is none of those, so the compiler rejects it. This is a limitation of Asio's coroutine integration, not Capy's, but it's a real practical constraint.

However, the library author doesn't have to expose a Capy API. The more interesting design is a two-layer approach.

The internal layer contains all the protocol logic: parsing, serialization, state management, written against `capy::any_stream` using `capy::task` coroutines. This is the sans-I/O core. It's portable, testable with memory-backed streams, and knows nothing about Asio.

The public layer is a thin shell that exposes whatever API the library author wants. If they want to serve existing Asio users, they return `asio::awaitable<void>` and use `asio::async_initiate` internally to bridge into the Capy core. The user's code looks like this:

```cpp
asio::awaitable<void> co_main()
{
    pg_client client;
    co_await client.query("SELECT 1"); // works, it's an Asio operation
}
```

The user doesn't port anything. They don't even know Capy exists. The bridging is internal to the library, invisible at the API boundary. This is the same pattern as `asio::ssl::stream`: OpenSSL is a completely different world internally, but Asio wraps it and the user never touches OpenSSL directly.

The library author can also expose a native Capy API alongside the Asio one, for greenfield users who want the full benefits of Capy's execution model. Same protocol core, two thin shells.
The sans-I/O design is what makes this possible: the protocol logic is written once and reused regardless of which async model the consumer prefers.

Thanks