On 16 July 2015 at 02:36, Soul Studios <matt@soulstudios.co.nz> wrote:
I lack time right now, but I'm interested in trying your container in the
context of (mostly experimental) implementations of component systems.
I don't think anything related to multithreading should be done by this
kind of container. Most high-performance usage will involve going through
all the elements to transform them (if "alive"), and as long as you provide
iterators, that can be done in parallel however the user wants.

Yes, the iterators simply 'jump' over the erased element areas. It's very fast.
I'm glad for your support, but I'm not sure why you think it shouldn't be used with multithreading - it follows the same rules for multithreading as the std:: containers, i.e. concurrent reads are fine, but writes must be serialized.
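For illustration only, here is a minimal sketch of the "skip over erased slots" idea, using a plain array of slots with a per-slot erased flag. This is not the container's actual mechanism (which, as I understand it, can jump whole erased runs at once rather than testing every slot); it just shows why erasure doesn't disturb the surviving elements.

// Toy sketch: iteration skips erased slots; erasing never moves the
// remaining elements, so their positions stay valid.
#include <cstddef>
#include <iostream>
#include <vector>

struct Slot
{
    int value;
    bool erased; // assumption: one flag per slot, for clarity only
};

int main()
{
    std::vector<Slot> slots = {
        {1, false}, {2, true}, {3, true}, {4, false}, {5, false}
    };

    for (std::size_t i = 0; i != slots.size(); ++i)
    {
        if (slots[i].erased)
            continue; // 'jump' over the erased area
        std::cout << slots[i].value << ' ';
    }
    std::cout << '\n'; // prints: 1 4 5
}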

That's not what I meant: I'm saying that you are right to follow the way the STL containers work and not add any multithreading support _inside_ the containers themselves.
Basically, I believe you did it the right way.

There are cases where vector is not the right solution, but yeah, in my
experience too it fits most use cases.
Also: http://bannalia.blogspot.fr/2015/06/cache-friendly-binary-search.html
I remember Stroustrup making the same statement at the GoingNative
conference too, and it's also a common observation in game dev circles
(several other CppCon talks covered it too).
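For reference, here is a rough sketch of the level-order ("Eytzinger") layout idea behind cache-friendly binary search, which I believe is broadly what the linked post discusses; the names and details below are mine, not the post's. The sorted data is stored in breadth-first order of its implicit search tree, so each probe's children sit close together in memory.

// Sketch: build a level-order layout from sorted input, then search it.
// Children of node i live at 2i+1 and 2i+2.
#include <cstddef>
#include <iostream>
#include <vector>

// Fill the level-order array via an in-order walk of the implicit tree.
void build_levelorder(const std::vector<int>& sorted, std::vector<int>& out,
                      std::size_t& in_pos, std::size_t node)
{
    if (node >= out.size())
        return;
    build_levelorder(sorted, out, in_pos, 2 * node + 1); // left subtree
    out[node] = sorted[in_pos++];                        // this node
    build_levelorder(sorted, out, in_pos, 2 * node + 2); // right subtree
}

bool contains(const std::vector<int>& lo, int key)
{
    std::size_t i = 0;
    while (i < lo.size())
    {
        if (lo[i] == key)
            return true;
        i = key < lo[i] ? 2 * i + 1 : 2 * i + 2;
    }
    return false;
}

int main()
{
    std::vector<int> sorted = {1, 3, 5, 7, 9, 11, 13};
    std::vector<int> lo(sorted.size());
    std::size_t pos = 0;
    build_levelorder(sorted, lo, pos, 0);

    std::cout << contains(lo, 9) << ' ' << contains(lo, 4) << '\n'; // 1 0
}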

Yes, I feel it's a real shame that more developers don't understand these basic principles. In fact, one of the reviewers for the CppCon talk felt I didn't understand the "performance guarantees" of the std:: containers - which is ironic, as what he meant was "complexity guarantees", which don't translate into real-world performance because of the aforementioned problem of cache saturation.
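As a rough illustration of that point (a sketch, not a benchmark - the numbers depend entirely on machine, compiler and element size): std::list and std::vector both traverse in O(n), yet the contiguous vector is usually far faster in practice because each fetched cache line holds several elements.

// Sketch: time a summing traversal of a std::vector vs. a std::list
// of the same values. Identical complexity, very different cache behaviour.
#include <chrono>
#include <iostream>
#include <list>
#include <numeric>
#include <vector>

template <typename Container>
long long timed_sum(const Container& c)
{
    const auto start = std::chrono::steady_clock::now();
    const long long sum = std::accumulate(c.begin(), c.end(), 0LL);
    const auto stop = std::chrono::steady_clock::now();
    std::cout << std::chrono::duration_cast<std::chrono::microseconds>(stop - start).count()
              << " us\n";
    return sum;
}

int main()
{
    const int n = 1'000'000;
    std::vector<int> v(n, 1);
    std::list<int>   l(v.begin(), v.end());

    std::cout << "vector: ";
    volatile long long a = timed_sum(v); // volatile: keep the sum from being optimised away
    std::cout << "list:   ";
    volatile long long b = timed_sum(l);
    (void)a; (void)b;
}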