Subject: [ublas] uBLAS parallelization
From: Jörn Ungermann (j.ungermann_at_[hidden])
Date: 2009-04-02 02:29:15
Hi all,
I am currently looking into uBLAS to replace gsl/ATLAS as our linear
algebra solution, because it supports sparse matrices, which our growing
problem sizes (approaching 100000x100000) are starting to require.
We use 8-core, 64GB machines for our calculations and would like to use
them as efficiently as possible, i.e. ideally keep all cores busy all the time,
even if a single problem requires all 64GB of memory. For dense
matrices, ATLAS does this automatically with its threaded implementation
(ptcblas).
With the nice ATLAS bindings, this seems to work like a charm with
uBLAS' dense matrices.
But I have not seen anything (helpful) about speeding up sparse matrix
operations with a threaded implementation (shared-memory
parallelization).
I assume that it is possible to do so at least for some sparse matrix
implementations, as certain specialized packages offer it (e.g. PETSc).
So:
1) Is there some ready-to-use solution for parallelizing uBLAS sparse
matrix operations?
2) If not, is there an ongoing development effort I could tap
into or get involved in?
3) If not, could someone comment on how difficult it would be to
implement such a thing for selected operations/matrix types, from both a
mathematical point of view and that of uBLAS's implementation constraints
(we already use OpenMP to parallelize the non-linear-algebra part of our
program)?
Thanks and kind regards,
Jörn
PS: uBLAS and the numerical bindings are really marvelous. The migration
of our software package from GSL worked like a charm given all the
helpful hints in the wiki and the test cases as working examples.
Jörn Ungermann, Dipl.-Mathematiker, ICG-1, Forschungszentrum Jülich, Tel.: +49 2461 61 1840