From: Herve Bronnimann (hbr_at_[hidden])
Date: 2001-04-20 15:52:53
> Anyways, I created a (I hope) more appropriate test. This time
> integers only. Pushed onto the queue are 10,000 and 100,000
> random integer values. Then 5 equivalent values are pushed,
> and the time this takes is measured. It was tested for multiset,
> boost::priority_queue and fibonacci_heap.
Geurt: A better test for your purposes is perhaps to take a certain
number of input keys (say 100,000) and:
* take two parameters (unique, repeat) in [0,1] with the following meaning:
- unique reflects the percentage of unique keys (its complement,
duplicate, is the percentage of duplicate keys)
- repeat reflects the *repetition rate* among the duplicates (10%
means there are only 10 distinct duplicate keys, each repeated 10% of
the total number of duplicates)
* alternatively, measure the "other end" of the range with (unique,k)
where k is the *number* of repeats of each duplicate key.
* generate an input for various values of unique, and for each of the
two models above sweep a range (a small generation sketch follows this
list):
- different values of repeat (10%, 20%, ..., 90%, 100%)
- different values of k (1, 2, 3, ...)
* try both insertion orders:
- the unique keys first, then each duplicate key (and its repeats) in order
- the above ordered input shuffled, to get a random insertion order
* plot the runtime (or whatever) of the queue for parameters in that
range.
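
For concreteness, here is a minimal sketch of such an input generator
(modern C++ purely for illustration; make_input and its arguments are
just my own names for the (unique, repeat) scheme above):

#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

// Build n keys: a fraction `unique` of them are distinct random keys;
// the remaining duplicates are spread over roughly 1/repeat distinct
// keys, each repeated `repeat` times the number of duplicates.
std::vector<int> make_input(std::size_t n, double unique, double repeat,
                            bool shuffled, std::mt19937& rng)
{
    std::vector<int> keys;
    keys.reserve(n);

    const std::size_t n_unique = static_cast<std::size_t>(unique * n);
    const std::size_t n_dup    = n - n_unique;

    std::uniform_int_distribution<int> dist;

    // Unique keys first.
    for (std::size_t i = 0; i < n_unique; ++i)
        keys.push_back(dist(rng));

    // Duplicate keys: each distinct duplicate key fills a fraction
    // `repeat` of the duplicate block.
    const std::size_t run =
        std::max<std::size_t>(1, static_cast<std::size_t>(repeat * n_dup));
    while (keys.size() < n) {
        const int k = dist(rng);
        for (std::size_t i = 0; i < run && keys.size() < n; ++i)
            keys.push_back(k);
    }

    // Either keep the ordered layout or shuffle for random insertion.
    if (shuffled)
        std::shuffle(keys.begin(), keys.end(), rng);

    return keys;
}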
With a bit of script writing, or a nice generation method, you'll just
let your computer run overnight and save the results.
You'll get a better idea of the performance that way.
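
The sweep itself then reduces to a couple of nested loops; a rough
sketch, timing only std::multiset insertion here (the same loop would
wrap boost::priority_queue or the fibonacci_heap, and make_input is the
sketch above):

#include <chrono>
#include <cstddef>
#include <cstdio>
#include <random>
#include <set>
#include <vector>

// Assumes make_input() from the sketch above.
int main()
{
    std::mt19937 rng(42);
    const std::size_t n = 100000;

    for (int u = 1; u <= 10; ++u) {          // unique = 10%, 20%, ..., 100%
        for (int r = 1; r <= 10; ++r) {      // repeat = 10%, 20%, ..., 100%
            const double unique = u / 10.0;
            const double repeat = r / 10.0;
            std::vector<int> keys =
                make_input(n, unique, repeat, /*shuffled=*/true, rng);

            std::multiset<int> q;
            const auto start = std::chrono::steady_clock::now();
            for (int k : keys)
                q.insert(k);
            const auto stop = std::chrono::steady_clock::now();

            const double ms =
                std::chrono::duration<double, std::milli>(stop - start).count();
            std::printf("%.1f %.1f %.3f\n", unique, repeat, ms);
        }
    }
}

Redirect the output to a file and each (unique, repeat, time) triple is
one point to plot.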
Hope that inspires you,
-- Hervé