
Subject: Re: [boost] [review] Review of PolyCollection starts today (May 3rd)
From: Joaquin M López Muñoz (joaquinlopezmunoz_at_[hidden])
Date: 2017-05-16 17:35:43

On 16/05/2017 at 11:11, Thorsten Ottosen via Boost wrote:
> On 15-05-2017 at 21:21, Joaquin M López Muñoz via Boost wrote:
>> I think the similarity in performance between shuffled ptr_vector and
>> shuffled base_collection goes the other way around: once sequentiality
>> is destroyed, it doesn't matter much whether elements lie relatively
>> close to each other in main memory.
> Ok. I guess (and there is no hurry) it will also be interesting to see
> for 0-1000 elements.

On my to-do list.

> I know it's nice to have a graph that grows as a function of n, but I
> think the best thing would be to make each data point be based on the
> same number of iterations. So I would use an exponential growth for n,
> n = 8, 16, 32 ... max_n and then run each loop x times, x being max_n
> / n.

Not sure why this is better than having homogeneous units all across the
board (namely nanoseconds/element). In any case, the testing utilities
already do the loop repetition roughly the way you suggest, at least for
small values of n:
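A minimal sketch of this kind of measurement loop, under stated assumptions: the helper name `measure_ns_per_element` and its exact structure are hypothetical illustrations of the scheme described below, not the library's actual test utility code:

```cpp
#include <chrono>
#include <cstddef>
#include <numeric>
#include <vector>

// Hypothetical helper (an assumption, not Boost.PolyCollection's test code):
// repeat f() until the runs collectively fill a 200 ms slot, then report
// the average cost in nanoseconds per element.
template <typename F>
double measure_ns_per_element(F f, std::size_t n)
{
    using clock = std::chrono::high_resolution_clock;
    const auto slot = std::chrono::milliseconds(200);

    std::size_t runs = 0;
    const auto start = clock::now();
    do {
        f();   // one full traversal of the n elements
        ++runs;
    } while (clock::now() - start < slot);

    const auto elapsed = clock::now() - start;
    const double ns =
        std::chrono::duration_cast<std::chrono::nanoseconds>(elapsed).count();
    return ns / (static_cast<double>(runs) * n); // homogeneous units: ns/element
}
```

For small n a single execution is far below the clock's granularity, but the 200 ms slot forces many repetitions, so the averaged ns/element figure stays meaningful across the whole range of n.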


This runs the loop as many times as needed to occupy at least one slot
(200 ms), so as to avoid measuring individual executions that are too
short for the high_resolution_clock granularity. Then, of course, the
resulting times are converted back to ns/element.

Joaquín M López Muñoz
