Subject: Re: [boost] async_read SEGFAULT
From: hh h (jupiter.hce_at_[hidden])
Date: 2019-01-09 05:59:02
Thank you so much Richard; as always, I appreciate your sharing and
insightful comments.
On 1/8/19, Richard Hodges via Boost <boost_at_[hidden]> wrote:
> On Tue, 8 Jan 2019 at 03:22, hh h via Boost <boost_at_[hidden]> wrote:
>
>> Thanks Richard, I was wondering about the strand's real role; thanks for
>> the explanation.
>>
>> As you pointed out, the lifetimes of the two static buffers, one for read
>> and one for write, should match the lifetime of the socket attached to
>> each session. In a server there could be more than a thousand session
>> connections; in terms of resource management, do you think it is a good or
>> bad idea to use a global buffer pool management class so the buffers can
>> be shared by all sessions?
>>
>
> When writing software for high concurrent use, you have to assume that at
> some point, every client will read and write at the same time.
> Whether or not you use a centralised buffer resource, your total maximum
> working set size will be the same.
> In this case you're better off allocating the maximum memory a session will
> need and having it owned by the session, because if you run out of resources
> due to too many sessions, it's better that it happens before the session is
> connected than halfway through the user's operations.
>
> Therefore, if your server is able to handle (say) 5000 clients at the same
> time, you may as well allocate the memory for 5000 connections at program
> start (or even statically). If you don't have enough memory, better to know
> early!
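
For illustration, here is a minimal sketch of that per-session ownership,
assuming a plain TCP session type; the names session and MAX_MSG_SIZE are
made up for the example, not taken from this thread:

#include <boost/asio.hpp>
#include <array>
#include <memory>

constexpr std::size_t MAX_MSG_SIZE = 100 * 1024; // 100k per direction, for example

class session : public std::enable_shared_from_this<session>
{
public:
    explicit session(boost::asio::ip::tcp::socket socket)
        : socket_(std::move(socket)) {}

    void start() { do_read(); }

private:
    void do_read()
    {
        auto self = shared_from_this();
        socket_.async_read_some(
            boost::asio::buffer(read_buf_), // just a view over the member below
            [self](boost::system::error_code ec, std::size_t n)
            {
                if (!ec)
                {
                    // consume self->read_buf_[0 .. n) here, then keep reading
                    self->do_read();
                }
                // on error the session, its socket and both buffers go away together
            });
    }

    boost::asio::ip::tcp::socket socket_;
    std::array<char, MAX_MSG_SIZE> read_buf_;  // lives exactly as long as the socket
    std::array<char, MAX_MSG_SIZE> write_buf_; // one buffer per direction
};

Whether the arrays sit in each session or in a shared pool, the total memory
committed at full load is the same, which is the point above.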
>
>
>>
>> One thing is clear: a fixed-size array like boost::array<char,
>> MAX_SIZE> readBuffer for each session's async_read should not be used
>> (which is what I am currently using); allocating and reserving more than a
>> thousand MAX_SIZE buffers in memory is not going to work.
>
>
> It depends how big your MAX_SIZE is. If it's (say) 100k (a huge buffer for
> most use cases) then 1,000 concurrent connections require 1000 * 2 * 100k =
> 200 MB. 200 megabytes is not a lot of memory in a modern server.
>
>
>> I'll have to see if it
>> can be replaced by another simple Boost smart buffer such as
>> boost::asio::buffer, which can be converted to a raw char* pointer for
>> feeding msgpack's unpack input. (Vinnie did mention using Beast, which
>> might be another option; I have to see how complicated that will be.)
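
For what it's worth, if the session owns its array as above, no conversion of
boost::asio::buffer is needed; the raw pointer msgpack wants is just the
array's data(). A rough sketch, assuming the msgpack-c call
msgpack::unpack(const char*, std::size_t) and a made-up handler name:

#include <msgpack.hpp>
#include <array>

std::array<char, 100 * 1024> read_buf; // storage owned by the session

// called from the async_read completion handler
void on_read(std::size_t bytes_transferred)
{
    // unpack directly from the owned storage; no extra copy or conversion
    msgpack::object_handle oh =
        msgpack::unpack(read_buf.data(), bytes_transferred);
    msgpack::object obj = oh.get();
    // ... use obj ...
}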
>>
>
> asio::buffer creates a "buffer definition object" i.e. an object that
> describes the address and size of the actual buffer memory. Be careful
> about this. The asio ConstBufferSequence concept does not model actual
> memory, it models the idea of a sequence of "memory references". This is a
> slightly unclear (in my view) part of the documentation. It's worth looking
> at the example code, building it, and single-stepping through it.
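
A small sketch of that distinction, with made-up names; the first function is
exactly the kind of dangling view that produces the SEGFAULT in the subject
line, the second keeps the storage alive by owning it:

#include <boost/asio.hpp>
#include <vector>

// BROKEN: 'local' is destroyed when the function returns, but the read is
// still in flight, so the buffer view handed to asio now dangles.
void broken(boost::asio::ip::tcp::socket& sock)
{
    std::vector<char> local(1024);
    boost::asio::async_read(sock, boost::asio::buffer(local),
        [](boost::system::error_code, std::size_t) { /* ... */ });
}

// OK: the storage is a member, so it lives as long as the object does
// (this still assumes the object itself is kept alive until the handler runs).
struct owning_reader
{
    explicit owning_reader(boost::asio::ip::tcp::socket s)
        : sock(std::move(s)) {}

    void read()
    {
        boost::asio::async_read(sock, boost::asio::buffer(storage),
            [this](boost::system::error_code, std::size_t) { /* ... */ });
    }

    boost::asio::ip::tcp::socket sock;
    std::vector<char> storage = std::vector<char>(1024);
};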