Subject: Re: [boost] [review] Review of PolyCollection starts today (May 3rd)
From: Joaquin M López Muñoz (joaquinlopezmunoz_at_[hidden])
Date: 2017-05-16 17:35:43
On 16/05/2017 at 11:11, Thorsten Ottosen via Boost wrote:
> On 15-05-2017 at 21:21, Joaquin M López Muñoz via Boost wrote:
>> I think the similarity in performance between shuffled ptr_vector and
>> shuffled base_collection goes the other way around: once sequentiality
>> is destroyed, it doesn't matter much whether elements lie relatively
>> close to each other in main memory.
>
> Ok. I guess (and there is no hurry) it will also be interesting to see
> for 0-1000 elements.
On my to-do list.
> I know it's nice to have a graph that grows as a function of n, but I
> think the best thing would be to make each data point be based on the
> same number of iterations. So I would use an exponential growth for n,
> n = 8, 16, 32 ... max_n, and then run each loop x times, x being
> max_n / n.
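If I'm reading this right, the scheme would be something like the
following sketch (just my reading of the proposal; run_test and report
are placeholders, not actual test suite code):

#include <chrono>
#include <cstddef>

// Sketch of the proposed scheme: exponential growth of n, with
// x = max_n / n iterations per data point so that every point does
// roughly the same total amount of work.
void run_test(std::size_t n);                       // placeholder
void report(std::size_t n,double seconds_per_run);  // placeholder

void profile(std::size_t max_n)
{
  using clock=std::chrono::high_resolution_clock;
  for(std::size_t n=8;n<=max_n;n*=2){
    std::size_t x=max_n/n;            // iterations for this data point
    auto t0=clock::now();
    for(std::size_t i=0;i<x;++i)run_test(n);
    auto t1=clock::now();
    report(n,std::chrono::duration<double>(t1-t0).count()/x);
  }
}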
Not sure why this is better than having homogeneous units all across the
plot (namely nanoseconds/element). In any case, the testing utilities
sort of do the loop repetition the way you suggest, at least for small
values of n:
measure_start=high_resolution_clock::now();
do{
  res=f();
  ++runs;
  t2=high_resolution_clock::now();
}while(t2-measure_start<min_time_per_trial);
trials[i]=duration_cast<duration<double>>(t2-measure_start).count()/runs;
This runs the loop as many times as needed to occupy at least one
min_time_per_trial slot (200 ms), so as to avoid measuring individual
executions that are too short for the high_resolution_clock granularity.
Then, of course, the resulting times are normalized back to ns/element.
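For the record, a self-contained sketch of what that measurement looks
like end to end (measure_one_trial is an illustrative name, not the
actual utility; f is any non-void callable):

#include <chrono>

// Sketch of the strategy described above: repeat f() until at least
// min_time_per_trial (200 ms) has elapsed, then divide the elapsed time
// by the number of runs.
template<typename F>
double measure_one_trial(F f)
{
  using namespace std::chrono;
  const auto min_time_per_trial=milliseconds(200);

  unsigned int runs=0;
  auto measure_start=high_resolution_clock::now();
  auto t2=measure_start;
  do{
    auto res=f();            // the real harness keeps res around
    (void)res;
    ++runs;
    t2=high_resolution_clock::now();
  }while(t2-measure_start<min_time_per_trial);
  return duration_cast<duration<double>>(t2-measure_start).count()/runs;
}

// Normalization back to homogeneous units for a container of n elements:
//   double ns_per_element=measure_one_trial(f)/n*1E9;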
Joaquín M López Muñoz