Subject: Re: [boost] How to structurate libraries ?
From: Patrick Mihelich (patrick.mihelich_at_[hidden])
Date: 2009-01-19 01:49:41
uBlas doesn't do any explicit vectorization. In an ideal world the compiler
would handle this and emit optimal code for whatever architecture. Back in
the real world, speed issues make uBlas basically unusable for my work.
Better compiler technology would help, BUT, I think that it is a mistake to
simply blame poor compilers. A high-level library like uBlas has a great
deal of compile-time knowledge about data layout and computational structure
that a compiler optimization pass on IR code does not. In such circumstances
I think it is reasonable and logical to shift some effort from the compiler
to library-side code generation.
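To make that concrete, here is a rough sketch (my own illustration, not uBlas or Boost.SIMD code) of what shifting the effort to the library side means: because the library knows it is adding two contiguous float arrays, it can emit SSE intrinsics itself instead of hoping the optimizer recognizes the loop.

#include <xmmintrin.h>  // SSE intrinsics
#include <cstddef>

// What the compiler is left to auto-vectorize:
void add_scalar(const float* a, const float* b, float* out, std::size_t n)
{
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

// What a library can generate when it knows the type and layout:
void add_sse(const float* a, const float* b, float* out, std::size_t n)
{
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4)        // 4 floats per 128-bit SSE register
        _mm_storeu_ps(out + i,
                      _mm_add_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i)));
    for (; i < n; ++i)                // scalar tail for leftover elements
        out[i] = a[i] + b[i];
}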
It's worth looking at the Eigen2 library (http://eigen.tuxfamily.org/),
which is what I now use for high-performance linear algebra. It has its own
small-scale binding to SIMD ops (SSE2, Altivec, or fall-back to C) and
expresses all vectorizable calculations in terms of "packets," basically
generalized SIMD registers. For SSE, a packet of floats is 4 floats, a
packet of doubles is 2 doubles. If the platform doesn't have vector
instructions, then a float packet is just a single float. I think this is a
useful abstraction, and demonstrates the difference between something like
Boost.SIMD and uBlas. The SIMD library is concerned with operating with
maximal efficiency on fixed-size "packets" of data, where the size of a
packet is determined by the data type and available instruction set. This
can be used as a building block by, say, uBlas in operating on general
arrays of data.
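To illustrate the packet idea, here is a rough sketch of the kind of layer I mean (packet_traits, pload, padd and pstore are names I made up for illustration, not Eigen2's actual API):

#include <emmintrin.h>  // SSE2 intrinsics; pulls in SSE as well
#include <cstddef>

// Scalar fallback: a "packet" is just one element.
template <typename T> struct packet_traits { typedef T type; static const int size = 1; };
template <typename T> inline T    pload (const T* p) { return *p; }
template <typename T> inline T    padd  (T a, T b)   { return a + b; }
template <typename T> inline void pstore(T* p, T v)  { *p = v; }

#ifdef __SSE2__
// SSE2: 4 floats or 2 doubles per 128-bit register.
template <> struct packet_traits<float>  { typedef __m128  type; static const int size = 4; };
template <> struct packet_traits<double> { typedef __m128d type; static const int size = 2; };
inline __m128  pload (const float* p)       { return _mm_loadu_ps(p); }
inline __m128  padd  (__m128 a, __m128 b)   { return _mm_add_ps(a, b); }
inline void    pstore(float* p, __m128 v)   { _mm_storeu_ps(p, v); }
inline __m128d pload (const double* p)      { return _mm_loadu_pd(p); }
inline __m128d padd  (__m128d a, __m128d b) { return _mm_add_pd(a, b); }
inline void    pstore(double* p, __m128d v) { _mm_storeu_pd(p, v); }
#endif

// A kernel is then written once against packets; it becomes SSE code where
// the specializations exist and plain scalar code everywhere else.
template <typename T>
void add_arrays(const T* a, const T* b, T* out, std::size_t n)
{
    const std::size_t step = packet_traits<T>::size;
    std::size_t i = 0;
    for (; i + step <= n; i += step)
        pstore(out + i, padd(pload(a + i), pload(b + i)));
    for (; i < n; ++i)                // leftover elements
        out[i] = a[i] + b[i];
}

The point is that packet_traits and the handful of primitive operations are the only platform-specific pieces; everything built on top of them is ordinary generic C++.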
Although I would be very happy to use Boost.SIMD directly as an end-user, I
think that its greatest impact would be in other libraries. uBlas, dynamic
bitset, GIL, and Math are Boost libraries which spring to mind as
potentially benefiting enormously from a good cross-platform wrapper for
SIMD operations.
In fact, I had been thinking recently about writing my own version of a
Boost.SIMD library based on Proto and Eigen2's packet model, but I'm very
happy that Joel has taken the lead and actually produced some working code.
I think a good Boost.SIMD library would be tremendously exciting, and I'm
eager to see some code in the Vault. I'm a little surprised at the apparent
hostility on the list so far.
-Patrick
On Sun, Jan 18, 2009 at 3:59 PM, Mathias Gaunard <mathias.gaunard_at_[hidden]> wrote:
> Joel Falcou wrote:
>
>> Do you really think I can't hide this somehow?
>> This kind of thing *is* abstracted into the SIMD traits and uses a
>> compile-time define to know how large a vector is on each platform.
>> Thanks for thinking I am that incompetent.
>>
>
> How are you supposed to code in a portable way?
>
> If I write
> vec<float> v = {5., 5., 5., 5.};
>
> the code will only compile if the SIMD register holds 4 floats on that
> architecture (as you said yourself).
>
> If it were
> vec<float, 4> v = {5., 5., 5., 5.};
> the library could actually fall back to something else to make the code
> work...
>
>
>>> You had declared r to be a vec<float>.
>>>
>> Which is called a typo. As I said, it's meant to be float r;
>>
>
> I was clarifying. You said you didn't declare 'r'.
>
>
>>> How is that any different?
>>>
>> Because sometimes you want to perform SIMD operations on something other
>> than an array of values.
>>
>
> Like what?
> That's what it is as far as I can see. That's what a vector is. N times the
> same type. And SIMD performs the same operation on all elements.
>
> I really don't see the difference.
> I don't see much difference with uBlas either, except yours is a POD, which
> doesn't bring anything useful as far as I can see.
>
>