
Subject: Re: [boost] [flat_map] Any interest in a flat_map variant that is much faster on insertion/erase at the price of a slower lookup?
From: Phil Endecott (spam_from_boost_dev_at_[hidden])
Date: 2015-03-19 13:05:50
Giacomo Drago <giacomo_at_[hidden]> wrote:
> Anyway, I finally managed to find some time to publish the source code
> with a brief explanation: https://github.com/giacomodrago/flat_map
I think your "brief explanation" is this sentence:
[The elements] are organised in a heap where each level is kept sorted.
OK. If there is more that I've missed, please let me know.
That almost sounds like an implicit binary tree, except that I guess
you're storing the min item in the parent, not the mid. I.e. to store
the values 0 to 6, an implicit binary tree would store:
3 1 5 0 2 4 6
The children of item i are at locations 2i+1 and 2i+2. To iterate the
items in order, you do an in-order traversal.
But a (min) heap with sorted levels would be more like this:
0 1 4 2 3 5 6
In this case you can do a pre-order traversal to get the items in order.
Have I understood correctly?
An implicit tree can clearly do lookup in O(log N). Insertion at a leaf
is also O(log N), but I'm not sure about rebalancing; I suspect that each
time you move a node you need to move all of its children, which obviously
gets expensive.
You say that you think your lookup is O(log^2 N), so maybe I've not
correctly understood your design.
The last time I looked at implicit binary trees, it was with the aim of
improving locality of reference: a binary search in a flat_map, although
theoretically O(log N), has terribly unfriendly VM and cache behaviour
for large N. This is the same argument as for B-trees.
Regards, Phil.
P.S. Have you looked at Boost.Heap?
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk