On 08/11/2015 03:27 PM, Sarvagya Pant wrote:
I have created a TCP server that can receive two types of messages from a client:
1. "Heartbeat": if this is received and "first_fetch" is 1, the server sends some config in JSON format; otherwise it sends {"config_changed" : "true"}.
2. "Message": if this is received, the server sends {"success" : "true"}.

Both work fine, but the server is slower than expected. As you can see, I have used a ~3 KB message in the client. With the client and server connected, I did some preliminary benchmarking, and the results were as follows:

1. For 50K messages, the time was 6 seconds.
2. For 100K messages, the time was 11 seconds.
3. For 200K messages, the time was 36 seconds.
4. For 500K messages, the time was 82 seconds.
5. For 1 million messages, the time was 174 seconds.
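
That works out to roughly 50,000 / 6 ≈ 8.3K messages/second for the smallest run, but only 1,000,000 / 174 ≈ 5.7K messages/second for the largest.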

The server is not performing as expected. Currently it handles ~6K messages/second on localhost. I would like to optimize it to receive at least 100K messages per second. How should I begin optimizing the server code? Is such a high message rate achievable with Boost.Asio, and if so, how should I approach the server design?



The measurements suggest quadratic behavior; try plotting them on a graph. I'd guess you have a resource leak, and that you are also traversing the leaked resource, which would explain both the slowness and the slowdown as the number of messages grows.
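
For instance (a hypothetical illustration, not your actual code): if every completed message is appended to a container that is also scanned once per message, each message costs O(n) and the whole run costs O(n^2):

    #include <string>
    #include <vector>

    // Hypothetical anti-pattern that yields quadratic total time.
    std::vector<std::string> history;        // never cleared: the "leak"

    void on_message(const std::string& msg) {
        history.push_back(msg);              // resource accumulates per message
        for (const auto& m : history) {      // traverses everything seen so far
            (void)m;                         // ... per-message bookkeeping ...
        }
    }

With that pattern, doubling the message count roughly quadruples the total time.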

Try profiling the server (or the client).
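
On Linux, perf is one easy option (./server standing in for your binary; build with -g so you get symbols):

    perf record -g ./server
    perf report

A single function whose share of the samples keeps growing on longer runs would point straight at the leak-and-traverse suspect.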