
Subject: Re: [boost] [review] Review of PolyCollection starts today (May 3rd)
From: Thorsten Ottosen (tottosen_at_[hidden])
Date: 2017-05-16 09:11:53


On 15-05-2017 at 21:21, Joaquin M López Muñoz via Boost wrote:
> On 15/05/2017 at 19:46, Thorsten Ottosen via Boost wrote:

>
> I think this variation of a) might perform better (code untested):
>
> template<typename Model,typename Allocator>
> bool operator==(
>   const poly_collection<Model,Allocator>& x,
>   const poly_collection<Model,Allocator>& y)
> {
>   typename poly_collection<Model,Allocator>::size_type s=0;
>   const auto &mapx=x.map,&mapy=y.map;
>   for(const auto& p:mapx){
>     s+=p.second.size();
>     auto it=mapy.find(p.first);
>     if(it==mapy.end()?!p.second.empty():p.second!=it->second)return false;
>   }
>   if(s!=y.size())return false; // total element counts must match
>   return true;
> }
>

Yeah, could be.
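
For the semantics (not the speed), here is a tiny, untested usage sketch
of what I would expect to hold, with made-up shape/circle/square types:

#include <boost/poly_collection/base_collection.hpp>
#include <cassert>

struct shape
{
  virtual ~shape()=default;
};

struct circle:shape
{
  int r;
  circle(int r):r{r}{}
  bool operator==(const circle& x)const{return r==x.r;}
};

struct square:shape
{
  int s;
  square(int s):s{s}{}
  bool operator==(const square& x)const{return s==x.s;}
};

int main()
{
  boost::base_collection<shape> c1,c2;

  // same elements per concrete type, inserted with a different interleaving:
  // comparison is done segment by segment, so the collections should compare equal
  c1.insert(circle{1});c1.insert(square{2});c1.insert(circle{3});
  c2.insert(circle{1});c2.insert(circle{3});c2.insert(square{2});
  assert(c1==c2);

  // an extra element in one collection breaks equality
  c2.insert(square{4});
  assert(!(c1==c2));
}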

>> Anyway, it is a bit surprising. Perhaps modern allocators are good at
>> allocating same size objects closely and without much overhead ...
>
> I think the similarity in performance between shuffled ptr_vector and
> shuffled base_collection goes the other way around: once sequentiality
> is destroyed, it doesn't matter much whether elements lie relatively
> close to each other in main memory.

Ok. I guess (and there is no hurry) it will also be interesting to see
the results for 0-1000 elements.

I know it's nice to have a graph that grows as a function of n, but I
think the best thing would be to make each data point be based on the
same total number of iterations. So I would use exponential growth for
n, n = 8, 16, 32, ..., max_n, and then run each loop x times, x being
max_n / n.
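
Roughly what I have in mind, as an untested sketch (run_once is just a
placeholder for whatever loop is being measured, not existing benchmark
code):

#include <chrono>
#include <cstddef>
#include <iostream>

// For each n = 8, 16, ..., max_n, repeat the measured loop x = max_n / n
// times so that every data point is based on the same total work.
template<typename F>
void measure(std::size_t max_n,F run_once)
{
  for(std::size_t n=8;n<=max_n;n*=2){
    std::size_t x=max_n/n;                     // repetitions for this n
    auto t0=std::chrono::steady_clock::now();
    for(std::size_t i=0;i<x;++i)run_once(n);
    auto t1=std::chrono::steady_clock::now();
    std::chrono::duration<double> d=t1-t0;
    std::cout<<n<<"\t"<<d.count()/x<<"\n";     // average time per run of size n
  }
}

int main()
{
  measure(1u<<20,[](std::size_t n){
    std::size_t sum=0;
    for(std::size_t i=0;i<n;++i)sum+=i;        // trivial stand-in workload
    volatile std::size_t sink=sum;(void)sink;  // keep the loop from being optimized away
  });
}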

kind regards

Thorsten

