Hi,

this is not right: OpenMP was specified against C++98, not against C++03 or C++11. The status regarding this is outlined here:

http://stackoverflow.com/questions/13837696/can-i-safely-use-openmp-with-c11

with, for example, a real-world problem outlined here:
http://stackoverflow.com/questions/13197510/why-do-c11-threads-become-unjoinable-when-using-nested-openmp-pragmas

So, for example, in an application where the GUI runs in one thread and the computation in another, and the computation uses OpenMP, your application might crash.
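To make the failure mode concrete, here is a minimal sketch of the pattern from the second link above (the kernel and thread counts are just placeholders; the reported problems were with GCC's libgomp, and other OpenMP runtimes may behave differently):

#include <omp.h>
#include <thread>

// Computation that internally uses nested OpenMP parallel regions.
void compute()
{
    omp_set_nested(1);
    #pragma omp parallel num_threads(2)
    {
        #pragma omp parallel num_threads(2)
        {
            // numerical kernel would go here
        }
    }
}

int main()
{
    // The "GUI" stays in the main thread; the OpenMP-based computation
    // runs in a C++11 thread. With some OpenMP runtimes the worker threads
    // created inside compute() are not tied to this std::thread, so join()
    // may never return, or the program may crash at exit.
    std::thread worker(compute);
    worker.join();
    return 0;
}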


On 09.12.2013 17:26, Riccardo Rossi wrote:
just a brief comment again on OpenMP:

OpenMP IS compatible with C++, and a Clang version already exists, although it has not yet been merged into the repository:

http://clang-omp.github.io/


regards
Riccardo


On Mon, Dec 9, 2013 at 4:14 PM, Nasos Iliopoulos <nasos_i@hotmail.com> wrote:
Karli,
I am not so sure the requirements for small/large containers are so diverse. After all, you can have compile-time dispatching for small (static) or large (dynamic) containers if you want to use different algorithms for each, or for mixed cases. Can you please elaborate if I am not getting it right?
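A minimal sketch of the compile-time dispatch idea (all names are hypothetical; Vec::is_static is an assumed compile-time trait on the container type):

#include <cstddef>

// Tag types selecting the algorithm at compile time.
struct small_tag {};
struct large_tag {};

template <bool IsStatic> struct size_category       { typedef large_tag type; };
template <>              struct size_category<true> { typedef small_tag type; };

template <class Vec>
void assign_impl(Vec& x, const Vec& y, small_tag)
{
    // small/static case: plain loop, easy to inline and unroll
    for (std::size_t i = 0; i != x.size(); ++i) x[i] = y[i];
}

template <class Vec>
void assign_impl(Vec& x, const Vec& y, large_tag)
{
    // large/dynamic case: room for blocking, threading, delayed execution, ...
    for (std::size_t i = 0; i != x.size(); ++i) x[i] = y[i];
}

template <class Vec>
void assign(Vec& x, const Vec& y)
{
    // The compiler picks one overload, so both code paths
    // coexist without any runtime cost.
    typedef typename size_category<Vec::is_static>::type tag;
    assign_impl(x, y, tag());
}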

-Nasos




On 12/09/2013 09:49 AM, Karl Rupp wrote:
Hi guys,


> the whole problem of most numerical packages, IMHO, is that everything
> is tied together. I would encourage a very loosely coupled system. I.e.,
> a system that maybe even would be able to switch storage layout,
> algorithms, etc., at run-time, maybe following some simple numerical
> tests. Of course, only if this would be enabled at compile-time.

From my experience with rebuilding the uBLAS interface within ViennaCL, I fully support this suggestion.


* the storage of data. This is about memory efficiency, and hence speed
of computation. Storage engines might be linear, sparse, etc.
* functional shape of the data. Dense, triangular, etc.
* numerical properties of the data. Positive definite, etc.
* loading and saving the data. Why not support a whole bunch of
data formats?
* unified matrix/vector view on the data
* procedural manipulation of the data, shallow views on the data, views
on views, etc.
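As a rough illustration of such a decoupling (all names here are hypothetical, not a proposal for the actual interface), the storage engine, the functional shape and the numerical properties could enter as independent policies of one matrix template, so that algorithms can specialise on each axis separately:

// Hypothetical policy types for each orthogonal concern.
struct dense_storage {};      // contiguous array
struct sparse_storage {};     // compressed rows, coordinate lists, ...

struct general_shape {};      // no structural restriction
struct triangular_shape {};   // only one triangle is stored/used

struct no_property {};
struct positive_definite {};  // enables e.g. Cholesky-based solvers

template <class T,
          class Storage  = dense_storage,
          class Shape    = general_shape,
          class Property = no_property>
class matrix
{
    // Storage decides how the data is laid out in memory,
    // Shape decides which elements are structurally present,
    // Property lets algorithms pick specialised code paths.
};

// An algorithm can specialise on one axis without caring about the others:
template <class T, class Storage, class Shape>
void solve(const matrix<T, Storage, Shape, positive_definite>& A)
{
    // ... Cholesky-style solver ...
}

Loading/saving and views would then be layered on top of the same policies, and a run-time-switchable variant, as suggested in the quoted mail, could wrap such policy combinations behind a common interface.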

One important question should be answered upfront: Should ublas focus on linear algebra for 'small' vectors/matrices, or 'big' vectors/matrices? I don't consider 'both' a legitimate answer here, because the two extremes have highly diverse requirements:
 - For 'small' data (say, 10x10) one goes with expression templates, simply because any additional runtime overhead is not acceptable (a toy version is sketched after this list).
 - For 'large' data one can do a lot of tricks to avoid memory transfers using delayed execution techniques. This is also the regime where one can slowly start to think about threads, accelerators, etc. Most machine-specific metrics become available only at runtime, so one needs some runtime logic in addition to expression templates anyway. One may even get rid of expression templates here and instead enjoy faster compilation times and increased flexibility at runtime.
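For reference, a toy expression-template sketch of the 'small' case (names are made up): a + b only builds a lightweight proxy, and the loop runs when the proxy is assigned, so no temporary vector is materialised.

#include <cstddef>

// Proxy representing the sum of two vector-like operands; evaluation is delayed.
template <class L, class R>
struct vec_sum
{
    const L& l; const R& r;
    vec_sum(const L& l_, const R& r_) : l(l_), r(r_) {}
    double operator[](std::size_t i) const { return l[i] + r[i]; }
};

// Tiny fixed-size vector; assignment from any expression evaluates element-wise.
template <std::size_t N>
struct small_vec
{
    double data[N];
    double  operator[](std::size_t i) const { return data[i]; }
    double& operator[](std::size_t i)       { return data[i]; }

    template <class E>
    small_vec& operator=(const E& e)   // the only loop in the whole expression
    {
        for (std::size_t i = 0; i != N; ++i) data[i] = e[i];
        return *this;
    }
};

template <std::size_t N>
vec_sum<small_vec<N>, small_vec<N> >
operator+(const small_vec<N>& a, const small_vec<N>& b)
{
    return vec_sum<small_vec<N>, small_vec<N> >(a, b);
}

// Usage: z = x + y; evaluates in a single loop over N elements.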

Just my 2 cents of course... :-)

Best regards,
Karli

--

Dr. Riccardo Rossi, Civil Engineer
Member of Kratos Team
International Center for Numerical Methods in Engineering - CIMNE
Campus Norte, Edificio C1
c/ Gran Capitán s/n
08034 Barcelona, España
Tel: (+34) 93 401 56 96
Fax: (+34) 93.401.6517
web: www.cimne.com
