Boost :
From: Robert Ramey (ramey_at_[hidden])
Date: 2008-07-07 20:42:12
Simonson, Lucanus J wrote:
> Phil wrote:
>> Having said all that, personally I have little interest in an
>> algorithm that has no benefit for fewer than millions of items.
>> What do other people think about that aspect?
>
> I frequently sort gigabytes, tens of gigabytes or even hundreds of
> gigabytes of geometry data as a prerequisite to scanline over the
> data. However, a faster sort would only benefit me if the speedup were
> reliably 2X or more. A 30% speedup in sort would disappear with the
> effect of Amdahl's Law to something under 10%. I get better speedup
> just by adopting each new version of the compiler. For me, the modest
> size and tenuous nature (depends upon architecture specific constants)
> of the speedup does not justify any effort to integrate and carry the
> dependency to the library that provides it. I have been following the
> thread only out of academic interest.
Hmmm - have you looked at the postman's sort? It was the subject
of an article in the 1992 C Users Journal and was subsequently made
available as a commercial product. See www.rrsd.com.
Robert Ramey
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk