I had a file containing about 100,000 records, each record containing seven fields (two strings and five 64-bit integers). I indexed (ordered_unique) one of the integer fields and tried inserting the records into a multi_index container, and it took only about 650 milliseconds. Inserting the same records into a SQLite DB took almost 1 second. So, will inserting a really huge number of records degrade the performance?
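
For reference, this is roughly the shape of what I tried (simplified: placeholder field names, generated values instead of the file contents, timing with std::chrono):

#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/member.hpp>
#include <chrono>
#include <cstdint>
#include <iostream>
#include <string>

// Simplified stand-in for my record: the real field names differ,
// but the shape is the same (two strings, five 64-bit integers).
struct Record {
    std::string s1, s2;
    std::int64_t f1, f2, f3, f4, f5;
};

namespace bmi = boost::multi_index;

// Single ordered_unique index on one of the integer fields.
using RecordSet = bmi::multi_index_container<
    Record,
    bmi::indexed_by<
        bmi::ordered_unique<
            bmi::member<Record, std::int64_t, &Record::f1>
        >
    >
>;

int main() {
    RecordSet records;

    auto start = std::chrono::steady_clock::now();
    for (std::int64_t i = 0; i < 100000; ++i) {
        // In my test the values came from the file; here they are generated.
        records.insert(Record{"alpha", "beta", i, i + 1, i + 2, i + 3, i + 4});
    }
    auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
        std::chrono::steady_clock::now() - start);

    std::cout << "inserted " << records.size()
              << " records in " << ms.count() << " ms\n";
}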

On Thu, Jun 23, 2016 at 10:32 AM, Ernest Zaslavsky <ernest.zaslavsky@sizmek.com> wrote:

IIRC it uses a red-black tree.

I had a horrific experience with insert times; it just took too much time. Actually I had exactly your case: insert once and then just query on multiple indexes. Sounds like, if you don’t mind the load time, go for it.

From: Boost-users [mailto:boost-users-bounces@lists.boost.org] On Behalf Of Anaswara Nair
Sent: Wednesday, June 22, 2016 4:13 PM
To: boost-users@lists.boost.org
Subject: [Boost-users] Backend of multi_index container

Hi, I was going through the documentation of the boost::multi_index container. I would like to know what is in its back end: something like a B-tree, B+ tree, etc.? Actually, I want to create a database (in the sense that it contains millions of records), but it need not be reusable at a later point in time; i.e., I am looking only for run-time persistent data. Once data is inserted, it will not be modified, and there will be a unique id for each record. I would also like to know whether multi_index is the best suited container for the implementation of my so-called database. Thank you.
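
Roughly what I have in mind (the record type and field names here are just placeholders, with a unique id index plus one secondary index):

#include <boost/multi_index_container.hpp>
#include <boost/multi_index/ordered_index.hpp>
#include <boost/multi_index/member.hpp>
#include <boost/multi_index/tag.hpp>
#include <cstdint>
#include <iostream>
#include <string>

// Placeholder record: a unique id plus one more field to index on.
struct Record {
    std::uint64_t id;
    std::string   name;
};

namespace bmi = boost::multi_index;

struct by_id {};
struct by_name {};

// One set of elements, two ordered (tree-based) views over it.
using Database = bmi::multi_index_container<
    Record,
    bmi::indexed_by<
        bmi::ordered_unique<bmi::tag<by_id>,
            bmi::member<Record, std::uint64_t, &Record::id>>,
        bmi::ordered_non_unique<bmi::tag<by_name>,
            bmi::member<Record, std::string, &Record::name>>
    >
>;

int main() {
    Database db;
    db.insert({1, "alice"});
    db.insert({2, "bob"});

    // O(log n) lookup through the unique id index, like std::set::find.
    auto it = db.get<by_id>().find(2);
    if (it != db.get<by_id>().end())
        std::cout << it->name << "\n";

    // Range query through the secondary index.
    auto range = db.get<by_name>().equal_range(std::string("bob"));
    for (auto i = range.first; i != range.second; ++i)
        std::cout << i->id << "\n";
}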

_______________________________________________
Boost-users mailing list
Boost-users@lists.boost.org
http://lists.boost.org/mailman/listinfo.cgi/boost-users