Subject: Re: [boost] Back to Boost.SIMD - Some performances ...
From: Joel Falcou (joel.falcou_at_[hidden])
Date: 2009-03-26 16:21:26
> An efficient and well tested scalar math library as the by-product of
> a generic SIMD vector library certainly wouldn't be useless, and I'd
> guess that an SSE scalar implementation would look pretty similar to
> the SSE vector implementation...
>
Well, in fact it does: the emulated vec<T,N>, for N != cardinal<vector<T> >,
uses those scalar implementations. Boost.SIMD should maybe be renamed
Boost.FastMath, in fact.
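To make that concrete, here is a rough sketch of the idea (hypothetical names
and layout, not Boost.SIMD's actual interface): when N matches the hardware
cardinal the operation maps to an intrinsic, otherwise it falls back to the
scalar math routines, which is why a solid scalar layer comes out as a
by-product.

#include <cmath>
#include <cstddef>
#include <xmmintrin.h>   // SSE intrinsics

// Hypothetical sketch only -- not Boost.SIMD's actual interface.
template<class T, std::size_t N> struct vec
{
    T data[N];
    friend vec sqrt(vec const& v)              // emulated: scalar fallback
    {
        vec r;
        for (std::size_t i = 0; i != N; ++i)
            r.data[i] = std::sqrt(v.data[i]);
        return r;
    }
};

template<> struct vec<float, 4>                // N == cardinal: native SSE
{
    __m128 data;
    friend vec sqrt(vec const& v)
    {
        vec r;
        r.data = _mm_sqrt_ps(v.data);          // one hardware instruction
        return r;
    }
};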
> I obviously wasn't. It's a bit unfortunate that there are so many
> parallel development efforts in the area of template libraries for
> linear algebra: ublas, mtl, eigen, nt2...
>
I know, and I find it unfortunate too.
> Have you thought about joining efforts with the Eigen guys? I'm no
> expert in this area, but their benchmark numbers look pretty
> compelling and the API seems to support fixed-size vectors in an
> elegant way. There would probably be huge economies of scale if the
> C++ community converged towards a single template matrix library.
Well, I largely prefer my API ;) but that's a domain preference. NT2
mimics the Matlab API and syntax wherever possible, because it was aimed
at being a tool for physicists and control/automation people to port
their Matlab demos onto a proper C++ platform. Moreover, the next version
of NT2 has an extensive list of features that can be used as mark-up on
matrix types.
Some examples of divergence:

eigen2, sum of the cubes of column i:
r = m.col(i).cwise().cube().sum();
NT2, sum of the cubes of column i:
r = sum0( cube( m(_,i) ) );
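Both of those spell out the same computation; stripped of the
expression-template machinery it is just this (row-major layout and types
assumed purely for illustration, this is neither library's code):

#include <cstddef>

// Plain-loop equivalent of "sum of the cubes of column i" for a
// row-major rows x cols array of float.
float sum_of_cubes_of_column(float const* m, std::size_t rows,
                             std::size_t cols, std::size_t i)
{
    float r = 0.0f;
    for (std::size_t row = 0; row != rows; ++row)
    {
        float x = m[row * cols + i];
        r += x * x * x;
    }
    return r;
}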
Complex indexing is also supported:
Matlab: k(1, :, 1:2:10) = cos( m );
NT2: k( 1, _, colon(1,2,10) ) = cos(m);
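Read as loops, that slice assignment does something like the following
(shapes fixed here just to make the sketch compile, and the 1-based
Matlab/NT2 indices become 0-based):

#include <cmath>

// Hypothetical loop-level reading of  k( 1, _, colon(1,2,10) ) = cos(m);
// assuming k is 1 x 4 x 10 and m is 4 x 5, so cos(m) fills the pages
// selected by the stride-2 range 1:2:10 column by column.
void assign_slice(float (&k)[1][4][10], float const (&m)[4][5])
{
    for (int s = 0, p = 0; p < 10; p += 2, ++s)   // pages 1,3,5,7,9 (1-based)
        for (int j = 0; j < 4; ++j)               // all columns
            k[0][j][p] = std::cos(m[j][s]);
}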
Mark-up settings:
Want an upper-triangular matrix of float, with a static maximum size of
50x50 but dynamic allocation, and want to specify that all loops
involving it must be blocked by a 3x3 tile?
matrix<float, settings( upper_triangular, 2D_(ofCapacity<50,50>),
cache(tiling<3,3>) )> m;
etc ...
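To give a rough idea of how such tag-based settings can be carried in the
type, here is a hypothetical sketch (template rather than function-call
syntax, and definitely not NT2's implementation):

// Hypothetical sketch -- the names mirror the example above but this is
// NOT NT2's implementation; it only shows the "options in the type" idea.
struct upper_triangular {};
template<int R, int C> struct ofCapacity {};
template<int R, int C> struct tiling {};
template<class Shape>  struct shape_2d {};     // stands in for 2D_(...)
template<class Tile>   struct cache {};

// Options are bundled into the type and inspected at compile time
// (e.g. via trait metafunctions) to pick storage and loop strategies.
template<class O1, class O2 = void, class O3 = void> struct settings {};

template<class T, class Settings>
struct matrix { /* storage and loops chosen from Settings */ };

// Usage mirroring the example above:
typedef matrix< float
              , settings< upper_triangular
                        , shape_2d< ofCapacity<50,50> >
                        , cache< tiling<3,3> >
                        >
              > my_matrix;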
I'm not against teaming up, but I'm not sure which one is better than
the other. Moreover, I don't think eigen2 uses proto as a base, while
NT2 performs a lot of pre-optimization using proto transforms, and
again that's something I don't want to lose.
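For people who have not met Proto: it lets a library capture a whole
expression as a tree and rewrite or evaluate it later, which is where NT2
hooks its pre-optimizations in. A minimal illustration of the
capture-then-evaluate idea (nothing like NT2's actual transforms):

#include <iostream>
#include <boost/proto/proto.hpp>

namespace proto = boost::proto;

// A Proto terminal: using it with operators builds an expression tree
// instead of computing a value on the spot.
proto::terminal<int>::type x = {3};

int main()
{
    // x + 2 * x is only evaluated here, through the default context,
    // which gives each node its usual C++ meaning.  A library can
    // instead inspect or rewrite the tree (with proto transforms)
    // before evaluation -- that is the pre-optimization hook.
    proto::default_context ctx;
    std::cout << proto::eval(x + 2 * x, ctx) << '\n';   // prints 9
    return 0;
}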
Anyway, I'm not here to discuss NT2 as a whole, maybe we can continue
this elsewhere ;)
--
___________________________________________
Joel Falcou - Assistant Professor
PARALL Team - LRI - Universite Paris Sud XI
Tel : (+33)1 69 15 66 35