Subject: Re: [boost] ASIO TCP socket scalability for large scale connections
From: hh h (jupiter.hce_at_[hidden])
Date: 2018-12-24 08:03:23
Thanks Richard for the detailed response, very much appreciated.
- jhh
On 12/24/18, Richard Hodges via Boost <boost_at_[hidden]> wrote:
> On Mon, 24 Dec 2018 at 03:36, hh h <jupiter.hce_at_[hidden]> wrote:
>
>> That is fabulous, I have been contemplating two connection models:
>>
>> (1) Keep and maintain each session connection for the client's lifetime.
>>
>> (2) Open and close the session connection on demand for each
>> data transfer.
>>
>> While (1) is my preference, it will depend on performance. Which
>> connection model do you use?
>>
>
> It completely depends on the use case. The project I am currently working
> on accepts connections from:
> 1. rest clients, who tend to disconnect and reconnect often
> 2. websocket clients, who tend to upgrade and stay connected.
> 3. a debug connection which tends to stay open idle for some time
> 4. a "firehose" connection, which allows us to pump data in quickly for
> testing purposes. This tends to connect and stay connected.
>
> 1 & 2 are internet-facing, so there is also authentication, fair scheduling,
> and DDoS protection to worry about.
>
> Re security etc., a common model these days is to put your server behind a
> reverse proxy. I can't do that in my case because I need to measure and
> respond to network back-pressure from each client.
>
> For performance reasons, I have template hooks to change the threading,
> memory allocation, and object lifetime models. I default to std allocator,
> shared_ptr, and multiple threads per io_context because this is the easiest
> to get wrong, but the framework I have written will support
> thread-per-io_context and static memory management (i.e. a maximum number
> of connections) without changing the code in the various io-aware objects.
>
>
>>
>> Thanks Richard and Vinnie.
>>
>> - jhh
>>
>>
>>
>> On 12/24/18, Richard Hodges via Boost <boost_at_[hidden]> wrote:
>> > I've managed 100,000 simultaneous TCP connections to a C++ server using
>> > boost::beast/asio on an appropriately configured Fedora Linux host.
>> >
>> > ASIO's memory overhead is minimal if the code is written carefully.
>> >
>> >
>> > On Sun, 23 Dec 2018 at 01:06, Vinnie Falco via Boost
>> > <boost_at_[hidden]>
>> > wrote:
>> >
>> >> On Sat, Dec 22, 2018 at 3:01 PM hh h via Boost <boost_at_[hidden]>
>> >> wrote:
>> >> > What will be the maximum connections a single ASIO TCP
>> >> > socket server can handle?
>> >>
>> >> Asio doesn't contain magic or reinvent the wheel here, `basic_socket`
>> >> is a very thin abstraction over a file handle representing a socket.
>> >> You need to look to the limits of your operating system and
>> >> configuration to know the baseline limit. And of course subtract from
>> >> that any additional per-connection resources that your application
>> >> uses.
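[Editor's note: Vinnie's point about operating-system limits can be made concrete. On Linux, the usual file-descriptor ceilings can be inspected as below; the values shown in comments are illustrative, not recommendations.]

```shell
# Each TCP connection consumes at least one file descriptor, so the
# per-process limit is the first ceiling to check.
ulimit -n

# System-wide descriptor limit (Linux only; ignore errors elsewhere).
cat /proc/sys/fs/file-max 2>/dev/null || true

# Persistent changes typically go in:
#   /etc/security/limits.conf  ->  * soft nofile 1048576
#   /etc/sysctl.conf           ->  fs.file-max = 1048576
```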
>> >>
>> >> Regards
>> >>
>> >> _______________________________________________
>> >> Unsubscribe & other changes:
>> >> http://lists.boost.org/mailman/listinfo.cgi/boost
>> >>
>> >
>> >
>> > --
>> > Richard Hodges
>> > hodges.r_at_[hidden]
>> > office: +442032898513
>> > home: +376841522
>> > mobile: +376380212 (this will be *expensive* outside Andorra!)
>> > skype: madmongo
>> > facebook: hodges.r
>> >
>> >
>>
>
>
>
>
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk