
Boost Users :

Subject: [Boost-users] [Asio]: Massive performance degradation with compiler optimization
From: Roland Bock (rbock_at_[hidden])
Date: 2008-10-06 02:39:46


I am experimenting with asio in order to replace our current tcp library
and stumbled over something weird this morning: the asynchronous tcp
echo server from the examples is much slower (roughly a factor of 3) with
compiler optimization turned on!

For my test I modified the blocking single threaded tcp echo client to
send 1000 identical messages of about 100kB length to the server and
display the echo (see attachment).

with -O0:
time ./src/TcpClient localhost 3000 | wc
    1000 3000 104459000

real 0m6.072s
user 0m5.564s
sys 0m0.508s

with -O3:
time ./src/TcpClient localhost 3000 | wc
    1000 3000 104459000

real 0m21.287s
user 0m5.756s
sys 0m0.164s

The optimization has very little effect on the client, though. Across
several runs, the time needed for the experiment varies a lot with -O3,
ranging from about 13 seconds up to 40 seconds. With -O0, the values
barely change (6-7 seconds).

Other optimization switches (-O1 or -O2) behave like -O3. I have not yet
tried toggling the individual optimization flags, such as
-ftree-dominator-opts.

Some system information (please do not hesitate to ask for more):
Ubuntu 8.04, 64bit
g++ 4.2.3

For the other parts of Boost I checked, optimization works as expected,
sometimes with outstanding results: my date parser using Spirit gained
almost a factor of ten in performance.

Any ideas what might be happening with the server?

Thanks and regards,

