Subject: Re: [boost] [flat_map] Any interest in a flat_map variant that is much faster on insertion/erase at the price of a slower lookup?
From: Phil Endecott (spam_from_boost_dev_at_[hidden])
Date: 2015-03-13 09:38:06
Giacomo Drago wrote:
> I have been working on a flat_map
> variant that keeps some of the distinctive features of the original one
> (contiguous storage/cache friendliness, zero memory
> overhead/shrink_to_fit support, vector-based representation, etc...) but
> makes insertion and erasure of the elements much faster, at the price of
> a slower lookup and in-order traversal.
> Explaining how it works requires time and it's not really necessary
No, "how it works" is the first thing I want to know! It's unlikely that
you've invented something completely new...
> Let's get straight to the benchmarks
Rather than quantified benchmarks, I'd first like to know the
big-O complexity of the operations.
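For reference, the trade-off in the classic flat_map is that lookup is O(log n) via binary search over a sorted vector, while insert and erase are O(n) because elements after the affected position must be shifted. A minimal sketch of that insert path (flat_insert is an illustrative name, not Boost's actual implementation):

```cpp
#include <algorithm>
#include <cassert>
#include <utility>
#include <vector>

using pair_t = std::pair<int, int>;
std::vector<pair_t> v;  // kept sorted by key at all times

// Classic flat_map-style insert: O(log n) search, O(n) shift.
void flat_insert(int key, int value) {
    auto it = std::lower_bound(v.begin(), v.end(), key,
        [](const pair_t& p, int k) { return p.first < k; });  // binary search
    if (it == v.end() || it->first != key)
        v.insert(it, {key, value});  // shifts everything after 'it'
}
```

Any variant that claims faster insertion has to relax one side of this trade-off, which is why the asymptotic bounds are the interesting part.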
(Personally, I have a flat_map that does batched insertions, i.e. the
new items are appended individually and then merged into the sorted
storage at the end of the batch. This has significant benefits for
some workloads.)
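The batched-insertion idea in that parenthetical can be sketched as follows. This is a minimal illustration under my own naming (batch_flat_set is hypothetical, not the poster's actual code): appends are O(1) amortized, and a commit sorts the k appended items in O(k log k) and merges them into the sorted prefix in O(n):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

class batch_flat_set {
    std::vector<int> data_;       // sorted prefix [0, sorted_end_), then raw batch
    std::size_t sorted_end_ = 0;  // boundary between sorted data and the batch
public:
    // O(1) amortized: just append; ordering is deferred to commit().
    void insert(int x) { data_.push_back(x); }

    // End of batch: sort the appended tail, then merge it with the
    // already-sorted prefix in one pass.
    void commit() {
        std::sort(data_.begin() + sorted_end_, data_.end());
        std::inplace_merge(data_.begin(),
                           data_.begin() + sorted_end_, data_.end());
        sorted_end_ = data_.size();
    }

    // O(log n) once the batch has been committed.
    bool contains(int x) const {
        return std::binary_search(data_.begin(),
                                  data_.begin() + sorted_end_, x);
    }
};
```

Lookups between insert() and commit() would miss uncommitted items, which is exactly why this helps batch-heavy workloads and not interleaved insert/lookup ones.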
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk