Subject: Re: [boost] Determining interest in container with efficient random access insert and erase
From: Chris Clearwater (chris_at_[hidden])
Date: 2015-09-14 07:56:19
On 09/14/2015 04:20 AM, Francisco José Tapia wrote:
> You have these characteristics in the Countertree library (
> https://github.com/fjtapia/countertree_2.0)
>
> You have a data structure with the same interface as std::vector and
> std::deque, with access by position like the vectors. All the operations
> (read, write, modification) are O(log N). You can find concurrent and
> non-concurrent versions of the data structure.
>
> You have random access iterators with access time O(log N) (you can run a
> std::sort over the data structure with good results).
>
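As a rough sketch of what that interface looks like from the caller's side;
the countertree::vector_tree name and include path below are assumptions
based on the repository and the vector_tree mentioned further down, not taken
from the library's documentation:

    #include <algorithm>
    #include <cstdint>
    #include <iostream>
    #include <boost/countertree/vector_tree.hpp>  // assumed header path

    int main()
    {
        // Vector-like interface: push_back and operator[], but every
        // positional access costs O(log N) instead of O(1).
        countertree::vector_tree<std::uint64_t> vt;
        for (std::uint64_t i = 0; i < 1000; ++i)
            vt.push_back(1000 - i);

        // Insert and erase by position are also O(log N), unlike std::vector.
        vt.insert(vt.begin() + 500, 42);
        vt.erase(vt.begin());

        // Random-access iterators let standard algorithms run unchanged.
        std::sort(vt.begin(), vt.end());
        std::cout << vt[0] << ' ' << vt[vt.size() - 1] << '\n';
    }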
> You also have STL-compatible set, multiset, map and multimap, with
> concurrent versions, using a new type of mutex designed to improve
> concurrency, and, associated with these, parallel algorithms over the data
> structures.
>
>
> The library has been pending final approval for more than a year.
>
> At the moment I am busy with the parallel sorting algorithms. When I finish
> with them, I will continue with the Countertree.
>
> I have designed the main part of the new version, intended for very big
> trees, with new parallel algorithms. For example: to create a std::set with
> 30,000,000 random uint64_t elements, my computer needs 52 seconds; with the
> new algorithms it takes 6 seconds with 1 thread and 3 seconds with 4 threads.
>
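For reference, a minimal timing harness for the plain std::set half of that
experiment could look like the following (the 52 / 6 / 3 second figures above
are the author's own measurements; this only shows how the std::set baseline
can be timed):

    #include <chrono>
    #include <cstddef>
    #include <cstdint>
    #include <iostream>
    #include <random>
    #include <set>
    #include <vector>

    int main()
    {
        constexpr std::size_t N = 30000000;   // 30,000,000 elements, as above

        std::mt19937_64 rng(12345);           // fixed seed so runs are comparable
        std::vector<std::uint64_t> data(N);   // ~240 MB of input values
        for (auto& x : data) x = rng();

        auto t0 = std::chrono::steady_clock::now();
        std::set<std::uint64_t> s(data.begin(), data.end());
        auto t1 = std::chrono::steady_clock::now();

        std::cout << "elements inserted: " << s.size() << '\n'
                  << "seconds: "
                  << std::chrono::duration<double>(t1 - t0).count() << '\n';
    }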
> This can be important for in-memory databases and for many search
> applications (a single server can handle millions of operations per second).
>
> When the new version is finished, I will propose it to the Boost community
> for the acceptance review.
>
I benchmarked your vector_tree implementation against mine and came up with
the following results: 4x slower for random-access insert and erase of
64-bit data, 16x slower for iteration of 64-bit data, 10x slower for
random-access insert and erase of 8-bit data, and 110x slower for iteration
of 8-bit data (not to mention the extreme memory overhead of one allocation
per byte). Also, I had to fix one compile error, and I get a flood of
warnings spewed to my terminal because you appear to return a signed type
from the size() member function. I also wonder how you get constant-time
push_back/push_front when such an operation requires updating log(n)
branch nodes in a typical counted tree implementation.
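For reference, here is a stripped-down sketch (not the Countertree code) of
where that log(n) cost comes from: in a counted tree every node stores the
size of its subtree, so appending one element has to increment that counter
on every node along the root-to-leaf path. Rebalancing is omitted; a
perfectly balanced tree is built up front so the path length is about log2(n).

    #include <cstddef>
    #include <iostream>
    #include <memory>
    #include <numeric>
    #include <vector>

    struct Node {
        int value;
        std::size_t count;                 // size of the subtree rooted here
        std::unique_ptr<Node> left, right;
    };

    // Build a perfectly balanced counted tree over values[lo, hi).
    std::unique_ptr<Node> build(const std::vector<int>& values,
                                std::size_t lo, std::size_t hi)
    {
        if (lo >= hi) return nullptr;
        std::size_t mid = lo + (hi - lo) / 2;
        auto n = std::make_unique<Node>();
        n->value = values[mid];
        n->left  = build(values, lo, mid);
        n->right = build(values, mid + 1, hi);
        n->count = hi - lo;
        return n;
    }

    // Append at the back: follow the rightmost path, bumping each subtree
    // count on the way down. Returns how many counters had to be updated,
    // i.e. the depth of that path.
    std::size_t push_back(std::unique_ptr<Node>& root, int v)
    {
        std::size_t touched = 0;
        std::unique_ptr<Node>* cur = &root;
        while (*cur) {
            ++(*cur)->count;               // this node gains one descendant
            ++touched;
            cur = &(*cur)->right;
        }
        *cur = std::make_unique<Node>();
        (*cur)->value = v;
        (*cur)->count = 1;
        return touched;
    }

    int main()
    {
        std::vector<int> values(1 << 20);  // ~1M elements
        std::iota(values.begin(), values.end(), 0);
        auto root = build(values, 0, values.size());

        std::cout << "counters updated by one push_back: "
                  << push_back(root, 1 << 20) << '\n';  // roughly log2(n), about 20
    }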