From: Dietmar Kuehl (dietmar_kuehl_at_[hidden])
Date: 2001-04-18 06:10:49
--- Geurt Vos <G.Vos_at_[hidden]> wrote:
> I will try to get them compiled for at least
> BC++5.5 and VC++ 6. This is not necessarily
> integrating stuff into boost. It's merely the
> first step.
One problem with d-heaps can be the use of non-type template arguments
for "d", i.e. the number of children per node (a d-heap is a fully
balanced d-ary tree; typically d == 2, as is e.g. the case in
'std::priority_queue', but other values of d can actually be better).
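The idea can be sketched roughly like this (a minimal, hypothetical push operation with "d" as a non-type template parameter; the names and interface are illustrative, not the actual Boost submission):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <functional>
#include <vector>

// Sketch of a d-ary heap where the arity is a compile-time constant.
// With std::less this is a max-heap, like std::priority_queue.
template <int d, typename T, typename Compare = std::less<T> >
class d_heap {
public:
    void push(T const& value) {
        heap_.push_back(value);
        std::size_t i = heap_.size() - 1;
        // Sift the new element up: the parent of node i is (i - 1) / d.
        while (i > 0) {
            std::size_t parent = (i - 1) / d;
            if (!compare_(heap_[parent], heap_[i]))
                break;                       // heap property restored
            std::swap(heap_[parent], heap_[i]);
            i = parent;
        }
    }
    T const& top() const { return heap_.front(); }
    std::size_t size() const { return heap_.size(); }
private:
    std::vector<T> heap_;
    Compare compare_;
};
```

A larger d makes the tree shallower (fewer sift-up steps) at the cost of comparing more children on pop.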
> I already did some timing tests on Borland,
> with different implementations for stable
> priority queues. In the test I only measured
> the time it takes to push an element. 10
> elements are pushed. The first 5 are unique,
> the second 5 are equivalent to the 'middle'
I wouldn't call a total of 10 elements representative... For this
number of elements, straightforward approaches like e.g. using a
'std::list' and searching for the minimum element might be best. For
something like 10k elements, things might look rather different.
> When the elements are of type std::string,
> and only the first char is the priority, then
> std::multiset out-performs all other
> implementations. It's about 10-30% faster.
'std::multiset' is node based; '(std|boost)::priority_queue' is not (it
can't reasonably be; well, you could store pointers to the elements...).
Thus, these priority queues will move elements around. The other heaps
are not really suited for this use: those heaps are for when you need to
remove elements other than the smallest, or to change an element's
priority, as is e.g. the case in Dijkstra's algorithm.
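To illustrate why 'std::priority_queue' falls short for Dijkstra-style use: it has no operation to change a stored element's priority. A common workaround (a sketch of the general technique only, not any interface discussed in this thread) is to push a fresh entry on every priority change and skip stale entries on pop:

```cpp
#include <cassert>
#include <queue>
#include <utility>
#include <vector>

typedef std::pair<int, int> entry;           // (priority, node id)

// Pop entries, discarding those whose priority no longer matches the
// 'best' table. Assumes at least one up-to-date entry is in the queue.
entry pop_current(std::priority_queue<entry>& q,
                  std::vector<int> const& best)
{
    for (;;) {
        entry e = q.top();
        q.pop();
        if (e.first == best[e.second])       // still up to date?
            return e;
        // otherwise: stale duplicate, keep popping
    }
}
```

A heap supporting decrease-key (like the d-heap or Fibonacci heap) avoids these duplicates entirely.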
> 1. boost::priority_queue is about 30% faster
> than the std one and around 60% faster than
The fact that my priority_queue is faster than the [typical] standard
implementation is due to a special case I take care of rather than
using the general algorithm. That it is faster than multiset is also
not surprising, because copying integers is more efficient than
adjusting pointers. It would be interesting to see how this relates
when using lots of elements...
> 2. multiset is twice as fast as fibonacci_heap
> (lazy_fibonacci_heap is slightly faster then
> 3. multiset is around 25% faster than pairing_heap
> and splay_heap.
> didn't try d_heap because it doesn't compile...
All these heaps are well suited for changing elements' priorities, with
'd_heap' normally being the fastest one. I provided the others more or
less to give people implementations to play with, because the
asymptotic behavior of 'fibonacci_heap' is better than that of 'd_heap'
(some operations are O(1) rather than O(log n)). 'lazy_fibonacci_heap'
is a heuristic improvement over Fibonacci heaps (it collects certain
changes and applies them only when they are necessary; processing the
whole batch is more efficient). The other two heaps are interesting
because they operate on a stochastic basis: although their asymptotic
worst case is really bad compared to the other heaps, they generally
perform pretty well.
However, most of these priority queues only make sense when it is
necessary to modify priorities or remove elements other than the top
element. If this is not necessary, the 'priority_queue' classes are
normally the better choice.
> One conclusion from this is that a priority_queue
> wrapper is less suitable for large objects, because
> some copying is done (correct?).
I think the objects are 'swap()'ed, which should be pretty fast. If
this is not the case, performance can suffer...
> Another: heaps are slower than multiset!
For 10 elements. I wouldn't count on this also being true for
significantly more elements... Maybe I should look into this stuff a
little more: for small numbers of elements, a special priority queue
built on a 'std::list' or a 'std::vector' might be even better suited...
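For what it's worth, the flat-container idea could look something like this (a hypothetical sketch of a small-count queue with O(1) push and a linear scan on pop; the class name is made up for illustration):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// For very small element counts, appending to a std::vector and scanning
// for the maximum on pop can beat a real heap: pushes are trivially cheap
// and the linear scan touches contiguous memory.
template <typename T>
class tiny_priority_queue {
public:
    void push(T const& value) { data_.push_back(value); }
    // Precondition: !empty()
    T pop_max() {
        typename std::vector<T>::iterator it =
            std::max_element(data_.begin(), data_.end());
        T result = *it;
        data_.erase(it);                 // O(n), fine for small n
        return result;
    }
    bool empty() const { return data_.empty(); }
private:
    std::vector<T> data_;
};
```

Whether this actually wins for, say, n < 20 would of course need measuring.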
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk