Subject: Re: [boost] [ boost ] [ Trie ]
From: Kenneth Adam Miller (kennethadammiller_at_[hidden])
Date: 2015-03-05 13:49:28

Well, my requirements are that I store range-offset encodings and be able
to access them, for any reason, as fast as possible. So what I was thinking
of doing was storing a range-value encoding as a tree; it could be any
kind of self-balancing tree. The thing with red-black trees is that I know
how to make them wait-free by reading and implementing a whitepaper that
UT Dallas published.

Does your nedtrie cover the ability to store range-value encodings? My
thinking is that with a trie you can index on the first 3 bytes of an
integer without giving up too much space (it requires only slightly less
than 32 megabytes, contiguously stored). Each entry in those 32 megabytes
can be a pointer to a red-black tree node, and on insertion you revert
to the wait-free red-black tree implementation. So you can still do
range-value encodings, but it will also be far, far faster than if you
didn't have this caching to accelerate it.

I estimate that it requires at most about 11 instructions in the absolute
worst case to do an insertion, typically 7 instructions on the usual
"cache miss", and an amortized 2 instructions for contiguous memory
insertions. It is highly scalable, and can be managed flexibly with
various concurrency runtime patterns.

On Thu, Mar 5, 2015 at 1:14 PM, Niall Douglas <s_sourceforge_at_[hidden]> wrote:

> On 5 Mar 2015 at 12:15, Kenneth Adam Miller wrote:
> > Cosmin: Are you open to the idea of having the trie act as a container
> and
> > sort of adopt a superset of the operations that work on traditional STL
> > containers?
> >
> > Are you open to the idea of having the trie be parameterizable with
> > concurrency strategies? With value strategies that can even be an
> > additional data structures?
> >
> > Are you open to the idea of having one specialization that is highly
> > optimized for integer keys, with separate implementations for x32/x64?
> There is one of these already. I wouldn't recommend using its STL
> container implementation, but its C macro/C++ template implementation
> is plenty sound and very mature.
> If the concurrent unordered map GSoC goes well, it will be followed
> by a concurrent map GSoC (extension) which internally is based on
> concurrent bitwise tries. These are vastly faster under concurrent
> modification than red black trees, and still provide excellent
> concurrency given a good key hash function.
> None of these are a proper full-fat trie container implementation,
> but then in my own code I don't need that right now; I need
> concurrent maps, ordered and unordered.
> Niall
> --
> ned Productions Limited Consulting