
From: bwood (coal_at_[hidden])
Date: 2007-03-16 22:58:13



Shalom

I've been comparing results from Boost Serialization (B.Ser) and
Ebenezer Enterprises (EE) on Windows XP lately. I've compared
saving:
1. a set<int>,
2. a list<int>, and
3. a list<int> together with a deque<int>.

I'm using MSVC 8.0, Boost 1.33.1, and software from www.webEbenezer.net
to build the tests. I use clock() calls to measure elapsed time.
I've read on this list that there are issues with clock() on
Windows, but since I use it the same way in all the tests, I doubt
that matters for these comparisons.
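
The timing pattern is just clock() around the work being measured;
something along these lines (a minimal sketch, not the exact test
code):

    #include <ctime>
    #include <iostream>

    int main()
    {
        std::clock_t start = std::clock();
        // ... the work being timed, e.g. saving a container ...
        std::clock_t stop = std::clock();
        // On MSVC, CLOCKS_PER_SEC is 1000, so this difference
        // is effectively milliseconds.
        std::cout << stop - start << std::endl;
    }
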
I use a 4096-byte buffer in the EE versions, and from what I can
tell the Boost versions use a buffer of the same size. (I'm not
doing anything to set the buffer size with Boost; it seems to
default to 4096 bytes.) Each of the containers is filled with
1,000,000 ints.
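
For reference, a B.Ser test of this shape looks roughly like the
following. This is only a sketch; the archive type, file name, and
fill loop are my assumptions rather than the exact test code:

    #include <fstream>
    #include <set>
    #include <boost/archive/text_oarchive.hpp>
    #include <boost/serialization/set.hpp>

    int main()
    {
        std::set<int> s;
        for (int i = 0; i < 1000000; ++i)
            s.insert(i);

        std::ofstream ofs("set.dat");
        // Nothing here touches the stream buffer size, so the
        // filebuf keeps its default (4096 bytes on this platform).
        // It could be enlarged with ofs.rdbuf()->pubsetbuf(...)
        // before any I/O is done, but these tests don't do that.
        boost::archive::text_oarchive oa(ofs);
        const std::set<int>& cs = s;  // save through a const ref
        oa << cs;
    }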

Build times / exe sizes
In each of the tests the B.Ser versions take longer to build, and
their executables are more than twice the size in bytes of the EE
versions.

Run times
I ran the B.Ser and EE versions 3 times in a row, threw out the
fastest and slowest times, and kept the middle time. The following
results are raw clock() differences from optimized (/O2) builds of
the tests.

set<int>
B.Ser ----- 1630
EE --------- 451
B.Ser version takes 3.6 times longer than the EE version.

list<int>
B.Ser ----- 1440
EE --------- 271
B.Ser takes 5.3 times longer here.

list<int> and deque<int>
B.Ser ----- 2894
EE --------- 521
B.Ser takes 5.5 times longer here.

I've only done a few tests without optimization. Those tests have
shown higher ratios than the ones listed above. For example, the
non-optimized B.Ser version of the list<int> test is about 8 times
slower than the non-optimized EE version. One thing that stands
out to me is that the optimized B.Ser version of the list<int>
test is still 3 times slower than the non-optimized EE version.

These results are similar to what we observed on Linux previously.
http://lists.boost.org/Archives/boost/2005/11/96497.php

I didn't test exactly the same thing in the Windows and Linux
tests. Feedback on the Linux tests objected to our commenting out
a generated call that flushes the buffer we use. I didn't comment
out any of the generated code in these Windows tests as I did on
Linux, so here the buffer is filled and flushed numerous times.

Regards,
Brian Wood
Ebenezer Enterprises
www.webEbenezer.net

