
Subject: Re: [boost] [sort] Parallel sorting sub-library mini-review. Performance tests.
From: Francisco José Tapia (fjtapia_at_[hidden])
Date: 2016-11-21 04:01:25


About the internal implementation:

The internal implementation needs *threads which permit reusing their
initialized variables*. This feature is not provided by the C++ standard
today, and I did not find it in the thread pool implementations I
examined.

I implemented this with threads and atomic variables. The threads are
created at the beginning of the program, where they initialize the thread
pool variables, which are destroyed together with the thread. *The threads
are created only once*.
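The scheme described above can be sketched roughly as follows. This is a hedged illustration, not the library's actual code: one long-lived worker thread whose per-thread state (here, a scratch buffer) is initialized once and then reused across jobs, instead of being rebuilt for every task.

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <thread>
#include <vector>

// Hypothetical sketch: a long-lived worker thread that keeps its working
// buffer alive across jobs, so per-thread state is initialized only once.
class persistent_worker {
    std::thread th_;
    std::mutex mtx_;
    std::condition_variable cv_;
    std::function<void(std::vector<int>&)> job_;
    bool done_ = true;
    bool stop_ = false;
public:
    persistent_worker() {
        th_ = std::thread([this] {
            std::vector<int> buffer;              // created once, reused
            std::unique_lock<std::mutex> lk(mtx_);
            for (;;) {
                cv_.wait(lk, [this] { return stop_ || job_ != nullptr; });
                if (stop_) return;
                std::function<void(std::vector<int>&)> job = std::move(job_);
                job_ = nullptr;
                lk.unlock();
                job(buffer);                      // buffer keeps its state
                lk.lock();
                done_ = true;
                cv_.notify_all();
            }
        });
    }
    // Submit one job and block until the worker has finished it.
    void run(std::function<void(std::vector<int>&)> job) {
        std::unique_lock<std::mutex> lk(mtx_);
        job_ = std::move(job);
        done_ = false;
        cv_.notify_all();
        cv_.wait(lk, [this] { return done_; });
    }
    ~persistent_worker() {
        { std::lock_guard<std::mutex> lk(mtx_); stop_ = true; }
        cv_.notify_all();
        th_.join();
    }
};
```

A second job submitted to the same worker sees whatever the first job left in the buffer, which is exactly the reuse of initialized variables the text refers to.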

The parallel_for and parallel_while are implemented with atomic variables,
in a way that is short and easy to understand and debug.
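A minimal version of such a parallel_for could look like the sketch below. This is an assumed illustration of the atomic-variable idea, not the library's code: worker threads claim chunks of the index range through a shared atomic counter, so distributing the work requires no locks.

```cpp
#include <algorithm>
#include <atomic>
#include <cstddef>
#include <functional>
#include <thread>
#include <vector>

// Hypothetical sketch: each thread repeatedly grabs the next chunk of
// indices by incrementing a shared atomic counter, then runs the body
// on that chunk, until the whole range [0, n) is consumed.
void parallel_for(std::size_t n, std::size_t chunk,
                  const std::function<void(std::size_t)>& body)
{
    std::atomic<std::size_t> next(0);
    auto worker = [&]() {
        for (;;) {
            std::size_t begin = next.fetch_add(chunk);
            if (begin >= n) return;                    // no work left
            std::size_t end = std::min(begin + chunk, n);
            for (std::size_t i = begin; i < end; ++i) body(i);
        }
    };
    unsigned nthreads = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < nthreads; ++t) pool.emplace_back(worker);
    for (std::thread& th : pool) th.join();
}
```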

The work units to be executed by the threads are std::function objects
stored in a concurrent stack. The estimated running time of each function
object is a few milliseconds.
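The idea of a concurrent stack of std::function work units can be sketched as below. This is a hedged, assumed illustration (the library's own container may be different, e.g. lock-free): here a plain std::stack is protected by a mutex.

```cpp
#include <functional>
#include <mutex>
#include <stack>
#include <utility>

// Hypothetical sketch: pending work units are std::function<void()>
// objects kept in a stack; a mutex serializes push and pop.
class work_stack {
    std::stack<std::function<void()>> tasks_;
    std::mutex mtx_;
public:
    void push(std::function<void()> f) {
        std::lock_guard<std::mutex> lk(mtx_);
        tasks_.push(std::move(f));
    }
    // Pops and runs one task; returns false when the stack is empty.
    bool run_one() {
        std::function<void()> f;
        {
            std::lock_guard<std::mutex> lk(mtx_);
            if (tasks_.empty()) return false;
            f = std::move(tasks_.top());
            tasks_.pop();
        }
        f();   // run outside the lock so other threads can pop meanwhile
        return true;
    }
};
```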

This internal design is highly efficient with many threads (I checked on
dual-socket servers with 32 and 48 threads), and the results compared with
GCC parallel sort are better as the number of threads grows.

About the goals of this library:

The first is to create a library that is *extremely easy to understand and
use*. (You only need to include boost/sort/parallel/sort.hpp.)

The library must be *independent of any other code*. (You can use it
separately: just copy the boost/sort/parallel folder, and all the code
needed is included.)

The library must require *only a C++11 compliant compiler*. The code can
run on an embedded multi-core system or on a powerful processor with
plenty of cores in a big server.

The advantages of the algorithms are:

*sample_sort*: this algorithm is commonly accepted as the fastest parallel
stable sort. The implementation is *notably faster than the one provided
by the GCC compiler* (based on OpenMP) and uses less memory. TBB does not
provide a parallel stable sort, and only has experimental code on their
web pages.

*parallel_stable_sort*: the advantage of this algorithm, invented by me,
is the memory usage. *The speed is similar to GCC parallel sort, but THE
MEMORY USED IS HALF.*
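The half-memory claim can be illustrated with the classic half-buffer merge technique. This is an assumed sketch, not taken from the library: a stable merge of two sorted halves of a vector that needs auxiliary storage only for the first half, i.e. n/2 elements instead of n.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical illustration of the half-memory idea: merge the two sorted
// halves of v stably, copying only the first half to a side buffer.
// The write position can never overtake the read position of the right
// half, so the merge can proceed in place from the front.
void merge_with_half_buffer(std::vector<int>& v)
{
    const std::size_t n = v.size(), mid = n / 2;
    std::vector<int> buf(v.begin(), v.begin() + mid);  // n/2 extra memory
    std::size_t i = 0, j = mid, w = 0;
    while (i < mid && j < n)
        v[w++] = (v[j] < buf[i]) ? v[j++] : buf[i++];  // ties take the left
    while (i < mid) v[w++] = buf[i++];
    // any elements remaining in the right half are already in place
}
```

A parallel stable sort built on such a merge keeps the auxiliary buffer at half the input size, which would explain the memory figure quoted above.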

The algorithms are *exception safe*, meaning that exceptions thrown by the
algorithms guarantee the integrity of the objects being sorted, but not
their relative order. If the exception is thrown inside the objects
themselves (in the move or copy constructor), the results can be
unpredictable.

Thanks for your interest,

Francisco

* I took a quick look at your library, and saw many interesting things
inside.

2016-11-20 21:24 GMT+01:00 Christophe Henry <christophe.j.henry_at_[hidden]>:

> Hi Boost Community,
>
> I have not yet managed to write a full review. One week is a bit short,
> so a few more days will be necessary. I provide some performance tests
> first.
>
> To check the library performance, I pitted it against my own versions of a
> parallel mergesort and quicksort provided by the asynchronous library (
> https://github.com/henry-ch/asynchronous), which is now ready for review,
> so we have a stable state for testing.
> I also added a parallel version inside the asynchronous library using
> Francisco's single-thread implementations to compare with his parallel
> ones.
>
> I tested on an i7-5960X. Due to time limits, I could not test on more
> interesting NUMA platforms (I did not get the reviewed library to compile
> for my KNC), so the tests do not pretend to have value outside the i7
> architecture.
> I linked with tbb_malloc_proxy to limit memory influence.
>
> In a nutshell.
> I'm not a big fan of the parallel version of the algorithms. It seems to be
> based on std::async, so that a lot of threads are started and joined at
> every call. I would suggest using a threadpool.
> OTOH the single-threaded ones are interesting, especially the stable_sort
> and intro_sort for cases where usage of spreadsort is not possible.
>
> Cheers,
> Christophe
>
>
> A short summary:
>
> 100000000 uint64_t elements already sorted:
> OMP parallel sort : 0.390899 secs
> Boost parallel sort : 0.064965 secs
> OMP parallel stable sort : 1.06128 secs
> Boost sample sort : 0.0695357 secs
> Boost parallel stable sort : 0.0662401 secs
>
> Asynchronous parallel sort : 0.0167134 secs (same with other
> algorithms)
>
> Asynchronous provides a special optimization for this case.
>
>
> I added this one:
> 100000000 uint64_t elements reverse sorted
>
> OMP parallel sort : 0.317039 secs
> Boost parallel sort : 0.581381 secs
>
> OMP parallel stable sort : 1.06448 secs
> Boost sample sort : 0.524746 secs
> Boost parallel stable sort : 0.73036 secs
>
> Asynchronous parallel sort : 0.0478701 secs
>
> Asynchronous provides a special optimization for this case.
> I think the library should do it too. This case is pretty common and a
> popular DoS attack.
>
> 100000000 uint64_t elements randomly filled
> OMP parallel sort : 1.03594 secs
> Boost parallel sort : 0.796447 secs
>
> OMP parallel stable sort : 1.28601 secs
> Boost sample sort : 0.818954 secs
> Boost parallel stable sort : 1.13604 secs
>
> Asynchronous parallel quickspreadsort: 0.587432 secs
> Asynchronous quick_intro_sort : 0.728393 secs
>
> This mix of a quicksort degrading into a spreadsort works best here. The
> parallel adaptation of intro_sort is not bad either, and is the best of
> the other library algorithms.
>
> Asynchronous parallel stable sort : 1.26141 secs
> Asynchronous boost::stable sort : 0.804814 secs
>
> Interesting. The stable version of the library easily beats the
> std::stable_sort I used until now.
>
>
> 10000000 strings randomly filled
>
> OMP parallel sort : 1.05803 secs
> Boost parallel sort : 0.933055 secs
>
> OMP parallel stable sort : 1.12269 secs
> Boost sample sort : 0.889216 secs
> Boost parallel stable sort : 1.56564 secs
>
> Asynchronous parallel quickspreadsort: 0.788856 secs
> Asynchronous quick_intro_sort : 0.893652 secs
>
> Asynchronous parallel stable sort : 1.23495 secs
> Asynchronous boost::stable sort : 1.21817 secs
>
> Similar results.
>
>
> Let's move on to big objects:
>
> 1562500 elements of size 512 randomly filled
> H E A V Y C O M P A R I S O N
>
> OMP parallel sort : 0.308923 secs
> Boost parallel sort : 0.342992 secs
>
> Asynchronous parallel_quicksort : 0.246709 secs
> Asynchronous quick_intro_sort : 0.269666 secs
>
> Both quicksorts are best, with a slight advantage to intro_sort.
>
> The light comparison is similar.
>
>
> My test code (the modified benchmark.cpp provided by the library):
> https://github.com/henry-ch/asynchronous/blob/master/libs/asynchronous/test/perf/boost_sort/benchmark_sort.cpp
>
> The full test results:
> https://github.com/henry-ch/asynchronous/blob/master/libs/asynchronous/test/perf/boost_sort/Results_sort.txt
>
> _______________________________________________
> Unsubscribe & other changes: http://lists.boost.org/
> mailman/listinfo.cgi/boost
>
