
From: Steven Ross (spreadsort_at_[hidden])
Date: 2008-07-01 17:34:54


Hi Phil,

> No, T T::operator>>(const T&) isn't what you're using; you need int
> T::operator>>(int). The shift amount is obviously an integer, not T, but
> furthermore you're using the result of the shift as a bin number. So if my
> fixed-point class lets you say e.g. assert(3.1415 >> 2 == 0.7854), i.e. it
> returns a T, then you can't use that to select a bin.

Actually, as long as the - operator works as I specified (returning a
size_t), my >> operator definition is correct. If >> worked as you specify,
then I wouldn't need you to define a - operator. Do you think that would be
more usable and/or efficient? Do you think calling the operator rightshift,
and defaulting it to >> would be cleaner?
As a note, doing it your way, I'd actually need this:
long T::operator>>(unsigned)
(to handle 64-bit values correctly)
With that proviso, I think your suggestion is worth running with, as long as
nobody sees a problem with limiting this sort to a maximum size of 64 bits.
That saves people having to define an additional operator. Now all they
need is this:
bool operator <(const T &) : for comparison
long operator >>(unsigned) : to divide the values in a quick and efficient
manner that preserves ordering: (a >> n) < (b >> n) -> a < b.

> My fixed class provides a base() method to access the underlying integer
> type, and somehow your algorithm needs to be supplied with a functor that
> uses that base() method to extract the next few bits. I'm not sure how best
> to do that: it's easy to solve for specific cases but it's hard to be
> general without becoming so generic that the code is incomprehensible.
>

You could call >> on your base() integer and inline the operator, and the
sorting should work pretty quickly.

> Anyway, I suggest that you get it looking good for integers first and then
> we can all think about how to make it more generic. You might also like to
> prepare some benchmarks so that we can see how fast it really is and argue
> about the distribution of the values being sorted. I have a nice big file
> of GPS coordinates you can try it on...
>

sourceforge.net/projects/spreadsort has the source code in a form you can
download, and a regression test setup which tests it on various random
distributions trying different constant values (the defaults are pretty
good). If you're interested in checking the speed, I suggest looking there
(get the generic version). My testing across various platforms and
different distributions resulted in a 30-50% speedup on most types of data.
I saw no slowdowns.
Or you can just copy your GPS coordinate file to input.txt, modify Sample.C
to read your data in the right format, call "make spreadsort;./spreadsort",
and see how it turns out. Call spreadsort -std if you want to compare vs.
std::sort with the same setup. Processor-usage-based timing is integrated
with the sample app.

> Oh, since no-one else has mentioned it, there was someone else who proposed
> some sorting algorithms here a while ago. You might like to have a look at
> the list archive (search for "sorting library") and see what you can learn.
>

I took a look. It looked like a proposal to put a bunch of different
implementations in Boost, but not much came of it. I have no objection to
putting a plain radix_sort in Boost if someone can figure out a way to
template it efficiently. My integer_sort idea is based on the idea of
having one standard algorithm for sorting all integers, if you want to sort
integers of any size a little faster than std::sort.
The main problem I saw with the previous sorting suggestion is that it adds
complexity, where ideally you should make it as easy as possible to choose
the ideal algorithm for what you're doing. Introsort is easy to use, good
in both worst-case and average-case performance, and general, so I see the
std::sort call as the ideal all sorting libraries should try to live up to;
if they can't come close, they shouldn't bother unless their speedup is
tremendous. I
hope to give a decent speed boost with a minimum of additional effort by the
user, and with a simple name that makes it clear that people should use it
if they're sorting by a key that is an integer or can act like an integer,
and not bother (use std::sort) otherwise.

Also, more on worst-case performance: I forgot about my MAX_SPLITS <
log(n_prev) check in the GetMaxCount function (which was in the code I
posted). The recursion check for a constant MAX_SPLITS is effectively this:
(log(n)) > (LOG_CONST * bit_sizeof(remaining_range)/MAX_SPLITS)
So for 64-bit data it would quit and call std::sort on the second iteration
if the log of the number of items in the bin is less than (55 *3 / 9)= 18,
and then go down by 3s (15, 12, 9 -> 9 is the absolute minimum at which
point it will call std::sort regardless). So the worst case relative to
std::sort would be log(n) = 19, and then still following the pattern I
described, recursing 7 times then calling std::sort on log(n) = 9 items at
the end. The absolute worst-case distribution would be sorted in reverse
order, recursively branching into two pieces at each level with the full
width of the remaining distribution between them, until the second to last
iteration, at which point it would split into the maximum size bins on which
std::sort is always recursively called (log(n) = 9). You need two roughly
equal-sized pieces for worst-case performance because with just one piece
and outliers, very little swapping would occur, and swapping dominates
Spreadsort's runtime.
Given constant MAX_SPLITS (which speeds up the average case by around 50% by
avoiding cache misses), the worst-case performance is the better of
O(n(bit_sizeof(range)/MAX_SPLITS)) and O(n log(n)). Without MAX_SPLITS, the
worst case is as I originally described, but it would be slower in practice
due to a bigger constant. In effect, Spreadsort picks the better of
Radixsort and std::sort worst-case performance, and average-case tends to
perform better than either for large ranges and large n (LSB Radix sort can
be faster, depending on memory allocation overhead, for small ranges).
std::sort is still faster for small n (<1000), which is why Spreadsort
immediately calls std::sort if the provided vector is too small.

Thanks for your help, Phil.

Steve


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk