
Subject: Re: [ublas] uBLAS parallelization
From: Riccardo Rossi (rrossi_at_[hidden])
Date: 2009-04-02 03:50:40


We spent some effort parallelizing the matrix-vector product and similar operations.

In doing so we partially took advantage of the freely available "omptl" library.

I attach here the wrapper we use for "parallel ublas". It appears to
work fine on Itanium (although we ran only one test, since we do not
have such a machine "at home"), but due to hardware limitations it does
not scale at all on multicore CPUs (neither Intel nor AMD).

I hope it can be helpful.


Riccardo Rossi, Ph.D, Civil Engineer
member of the Kratos Group:
Centro Internacional de Métodos Numéricos en Ingeniería (CIMNE)
Universidad Politécnica de Cataluña (UPC)
Edificio C-1, campus Norte UPC
Gran Capitan, s/n
08034 Barcelona, España
Tel. (+34) 93 401 73 99



All personal data contained in this mail will be processed
confidentially and stored in a file property of CIMNE in order to manage
corporate communications. You may exercise the right of access,
rectification, deletion and objection by letter sent to CIMNE, Gran
Capitán, Edificio C1 - Campus Norte UPC, 08034 Barcelona, Spain.

On Thu, 2009-04-02 at 08:29 +0200, Jörn Ungermann wrote:
> Hi all,
> I am currently looking into uBLAS to replace gsl/ATLAS as our linear
> algebra solution, since it supports sparse matrices, which our growing
> problems (approaching 100000x100000) are starting to require.
> We use 8-core, 64 GB machines for our calculations and would like to use
> them as efficiently as possible, i.e. ideally keep all cores busy all the
> time, even if a single problem requires all 64 GB of memory. For dense
> matrices, ATLAS does this automatically with its threaded implementation
> (ptcblas).
> With the nice ATLAS bindings, this seems to work like a charm with
> uBLAS' dense matrices.
> But I have not seen anything (helpful) about speeding sparse matrix
> operations up with a threaded implementation (shared memory
> parallelization).
> I assume that it is possible to do so at least for some sparse matrix
> implementations, as certain specialized packages offer it (e.g. PETSc).
> So:
> 1) Is there some ready-to-use solution for parallelizing uBLAS sparse
> matrix operations?
> 2) If not, is there some ongoing development effort, I could tap
> into/get involved?
> 3) If not, could someone comment on how difficult it would be to
> implement such a thing for selected operations/matrix types, from both a
> mathematical and a uBLAS-implementation-constraints point of view (we
> already use OpenMP to parallelize the non-linear-algebra parts of our
> program)?
> Thanks and kind regards,
> Jörn
> PS: uBLAS and the numerical bindings are really marvelous. The migration
> of our software package from GSL worked like a charm given all the
> helpful hints in the wiki and the test cases as working examples.
> _______________________________________________
> ublas mailing list
> ublas_at_[hidden]