Boost :
From: Simonson, Lucanus J (lucanus.j.simonson_at_[hidden])
Date: 2008-06-30 15:26:18
Steven Ross wrote:
> For larger n, the advantage vs. pure comparison-based algorithms
> increases. In effect, Spreadsort smoothly transitions from pure MSB
> Radix sort to std::sort with its data cut up by a couple of iterations
> of MSB Radix sort (reducing the size of the log(n) factor), depending
> upon the distribution and size of the data.
The analysis seems a little too optimistic, and appears to rest on the
assumption that the input is evenly distributed. To analyze the
complexity you need to think like an adversary who knows your algorithm
and intentionally constructs an input that makes it perform poorly. In
the worst case your algorithm would be slower than std::sort: a data
set with a cluster of values that all fall within the same bin, plus a
handful of outliers that force that unfortunate binning, costs the
runtime of std::sort plus the runtime of one or more recursive stages
of profitless binning. So the algorithm should be expected to be
faster than std::sort alone for some inputs, but slower for others.
Luke
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk