
From: Steven Ross (spreadsort_at_[hidden])
Date: 2008-07-03 10:39:48

Hi Phil,

On Thu, Jul 3, 2008 at 5:58 AM, Phil Endecott <
spam_from_boost_dev_at_[hidden]> wrote:
> I've done a simple benchmark with about 2 million latitude values scaled to
> ints in the range +/-90 x 1e6. The input is approximately alphabetically
> ordered by country and then by placename within the country. Your algorithm
> seems to be about 13% faster than std::sort. This is with g++-4.3 -O3.

Thanks for testing it. That's a little slower than I'd expect (but
believable). It's possible that the default tuning values don't work well
on your system. Are you running on Windows? I've seen poor performance
(only small speedups) on Windows for some reason. On OSX, cygwin, and Linux
I haven't seen a real (large) dataset with that small a speedup. If you can
afford to leave it running for an hour or so, you might also try running
-skip_verify 2000000 to see if it changes the constants on your system, and
rerun with the constants it generates. If the constants differ
significantly, I'd be interested to see what they are.
If I understand you correctly you're sorting 2M elements over a 28-bit
range. Most of my performance tests are with 40M elements over a 32-bit
range, and my experience is that the speedup gradually increases between 10K
and 10M elements, in roughly this fashion:
10K: no difference
100K: 10% faster
1M: 20% faster (not too far from your result)
10M: 30% faster
(and 50+% speedup on some systems, or with some types of data and large
datasets)

I assume you are comparing times that the application displays, not total
time which includes file I/O. If it isn't proprietary data, I'd also be
willing to look at your testcase to see if there is something I can improve
in either my algorithm or your testing of it. My development system is a G4
OSX laptop (yes, it's ancient), so results can be expected to differ a bit
on other systems.
One final note: I see an error of around ±0.03s with my timing approach.
With a data set as small as 2M elements, which can be expected to sort in
well under a second, that can significantly impact the measured percentage
runtime difference.

Spreadsort is designed for large n. If you have small n, std::sort is
better, and between 1K and 1M, the speedup isn't much. If you have 100M
elements, then it starts to make a significant difference. That's the
reason for the initial check to see if the number of elements is less than
some constant value, in which case it uses std::sort.

