> If not possible in-process, then how to know when older server is no
> longer in use, to gracefully stop it? Asio servers typically run
> forever, i.e. never run out of work, even when they temporarily have
> nothing to do, by design.
Load balancers are typically involved in this process.
https://www.haproxy.org/, ebpf/xdp, LVS,
http://blog.raymond.burkholder.net/index.php?/archives/632-Load-Balancing-With-DNS,-BGP-and-LVS.html
I see. Found this article on how to configure HAProxy for WebSocket.
If I understand correctly, that means I must start the servers (old and new) on different ports,
and have HAProxy listen on the "main public port", and manually update HAProxy's config when
redeploying, to start directing traffic to the new one?
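Something roughly like this is what I imagine (everything below is made up by me:
the names, the ports, and I've left out the global/defaults sections):

frontend app_front
    bind *:8080                 # the "main public port"
    mode http                   # WebSocket upgrades ride over plain HTTP here
    default_backend app_back

backend app_back
    mode http
    # on redeploy: add the new instance, drain the old one, then remove it
    server app_v1 127.0.0.1:9001 check
    server app_v2 127.0.0.1:9002 check

And if I read the HAProxy docs right, instead of editing the file and reloading
on every redeploy, its runtime API can mark a backend server as draining
(e.g. "set server app_back/app_v1 state drain" over the stats socket), so
existing sessions finish while new ones go to the other entry. I haven't tried it though.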
My use case is simpler than Load Balancing, so I was hoping for something lighter than
HAProxy, NGinx, Traefik, etc., which are full-blown solutions for all sorts of networking tasks.
> The goal is to do this on "premise" (i.e. not in the cloud), on a
> single machine (no containers), and cross-platform (Windows and Linux).
The basic premise is that there is some sort of proxy or service which
tests for 'aliveness' and forwards requests to the appropriate
service. Typically it is designed to 'drain' traffic from the service to
be stopped, forwarding new sessions to an alternate service; once no
further traffic is being handled by the old service, it can be stopped,
updated, and restarted, and traffic can then be re-balanced.
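On the asio side, here is a rough sketch of how the "old" instance can tell
that it has drained (boost.asio; the port number and the use of SIGTERM as
the "start draining" trigger are placeholders, not anything from your code):
close the acceptor so no new connections arrive, drop the work guard that
keeps run() busy, and run() then returns by itself once the last session's
handlers complete.

#include <boost/asio.hpp>
#include <array>
#include <csignal>
#include <cstdio>
#include <memory>

namespace asio = boost::asio;
using asio::ip::tcp;

// Sketch of an "old" instance that stops itself once drained.
// The work guard is what normally keeps io_context::run() from
// returning while the server is merely idle.
class server {
public:
    server(asio::io_context& io, unsigned short port)
        : acceptor_(io, tcp::endpoint(tcp::v4(), port)),
          work_(asio::make_work_guard(io)) {
        accept();
    }

    // Call once the proxy no longer sends new sessions here.
    void begin_drain() {
        acceptor_.close(); // refuse new connections
        work_.reset();     // run() may return once pending handlers dry up
    }

private:
    void accept() {
        acceptor_.async_accept(
            [this](boost::system::error_code ec, tcp::socket s) {
                if (!ec) start_session(std::move(s));
                if (acceptor_.is_open()) accept();
            });
    }

    void start_session(tcp::socket s) {
        // A real session lives until the client goes away; while any async
        // operation is pending on its socket, run() keeps running.
        auto sock = std::make_shared<tcp::socket>(std::move(s));
        auto buf  = std::make_shared<std::array<char, 1024>>();
        sock->async_read_some(asio::buffer(*buf),
            [sock, buf](boost::system::error_code, std::size_t) {
                // toy session: ends after one read (or on error/EOF)
            });
    }

    tcp::acceptor acceptor_;
    asio::executor_work_guard<asio::io_context::executor_type> work_;
};

int main() {
    asio::io_context io;
    server old_instance(io, 9001); // placeholder port

    // Placeholder trigger: in practice, whatever tells the old instance
    // that the proxy stopped routing new sessions to it.
    asio::signal_set drain(io, SIGINT, SIGTERM);
    drain.async_wait([&](boost::system::error_code, int) {
        old_instance.begin_drain();
    });

    io.run(); // returns once the last session completes after begin_drain()
    std::puts("drained, safe to exit/upgrade");
}

At that point the old process can exit (or be started again as the "new"
instance on the spare port) and the proxy pointed back at it on the next redeploy.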
I didn't equate routing traffic from one server to another with Load Balancing,
but I guess it makes sense. My server is already multi-user and multi-threaded,
and not expected to have traffic that justifies a Load Balancer. Other people in
the company are going crazy with Kubernetes and Docker, but I'm trying to keep things
simple and make a good server fast and robust enough to avoid all that complexity.
Except "Hot-Reload" as they say in the Java world does complicate things... --DD