
Subject: Re: [boost] [OT?] SIMD and Auto-Vectorization (was Re: How to structurate libraries ?)
From: Joel Falcou (joel.falcou_at_[hidden])
Date: 2009-01-19 08:06:03


Dean Michael Berris wrote:
> I'm a little wary about hiding these important issues from the people
> who understand the higher scheme of things. Which is why I personally
> don't think leaving the (non-C++ programming) domain experts in the
> dark about the inherent parallelism in their solution is a good idea.

No, we have to keep them unaware of the *details*. Of course they
should know how much inherent parallelism there is and how coarse it
is, or whether, on a given platform, task parallelism performs better
than data parallelism. We just have to hide the ugly gears from them.

> As far as hiding the parallelism from them, the compiler is the
> perfect place to do that especially if your aim is to just leverage
> platform specific parallelism features of the machine. Even these
> domain experts once they know about the compiler capabilities may be
> able to write their code in such a way that the compiler will be happy
> to auto-vectorize -- and that's I think where it counts most.

Yes, I would fully agree if we lived in a perfect world. Alas,
auto-whatever-izing compilers are not the norm currently.
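
To make that concrete, here is a minimal sketch of mine (purely
illustrative): whether even a trivial loop like the one below gets turned
into SIMD code depends entirely on the compiler, its flags, and its
ability to prove the arrays do not alias; nothing in the source
guarantees it.

#include <cstddef>

// Candidate for auto-vectorization: a compiler such as GCC with
// -O3 -ftree-vectorize may emit SSE or AltiVec code for this loop,
// but only if it can prove that y and x do not overlap.
void saxpy(float* y, const float* x, float a, std::size_t n)
{
    for (std::size_t i = 0; i != n; ++i)
        y[i] += a * x[i];
}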

> Actually, it's easy to say it -- it's a matter of acceptance that's a
> problem. Now if it was a library that forced users to change their
> code just to be able to leverage something that the compiler should be
> able to handle for them (like writing assembly code for instance)
> sounds to me like too much to ask for. After all, the reason we have
> higher level programming languages is to hide from ourselves the
> details of the assembly/machine language of the platform we're going
> to run programs on. ;-)

See my earlier remarks about NT2. Libraries like NT2 are the way to
go for users. I think Boost.SIMD has its usefulness (as said in the
other thread) for library developers.
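
To give an idea of what that means for a library developer, here is a
hand-rolled, purely illustrative stand-in (this is not the actual
Boost.SIMD interface): a thin wrapper over one vector register already
lets code be written against "a pack of floats" instead of raw intrinsics.

#include <xmmintrin.h>   // SSE intrinsics, x86 only
#include <cstddef>

// Toy "pack of 4 floats": stands in for what a SIMD library would provide.
struct pack4f
{
    __m128 v;
    explicit pack4f(const float* p) : v(_mm_loadu_ps(p)) {}  // unaligned load
    explicit pack4f(__m128 r) : v(r) {}
    void store(float* p) const { _mm_storeu_ps(p, v); }
};

inline pack4f operator+(pack4f a, pack4f b)
{
    return pack4f(_mm_add_ps(a.v, b.v));   // one vector instruction
}

void add(float* out, const float* a, const float* b, std::size_t n)
{
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4)                          // vector body
        (pack4f(a + i) + pack4f(b + i)).store(out + i);
    for (; i < n; ++i)                                  // scalar tail
        out[i] = a[i] + b[i];
}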

> I agree, but if you're going to tackle the concurrency problem through
> a DSEL, I'd think a DSEL at a higher level than SIMD extensions would
> be more fruitful. For example, I'd think something like:
>
> vector<huge_numbers> numbers;
> // populate numbers
> async_result_stream results =
> apply(numbers, [... insert funky parallelisable lambda construction ...])
> while (results) {
> huge_number a;
> results >> a;
> cout << a << endl;
> }

I never said I'd tackle concurrency with Boost.SIMD ;) Of course that
will need a far more expressive and abstract DSEL.

> In which case I think that DSEL for parallelism would be much more
> acceptable than even the simplest SIMD DSEL mainly because I'd think
> if you really wanted to leverage SIMD by hand, you'd just use the
> vector registers and use the vector functions directly from your code
> instead. At least that's in my case as both a user and a library
> writer.

Then again, one person's tool is not necessarily what another one needs.
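
For the record, the hand-written route you mention looks roughly like
this with SSE intrinsics (a minimal sketch of mine, x86-specific by
construction), which is exactly the kind of code a wrapper library spares
you from rewriting for every architecture:

#include <xmmintrin.h>   // SSE intrinsics
#include <cstddef>

// Direct use of the vector registers: explicit 4-wide width, explicit
// tail handling, and a rewrite needed for every other instruction set.
void add_sse(float* out, const float* a, const float* b, std::size_t n)
{
    std::size_t i = 0;
    for (; i + 4 <= n; i += 4)
        _mm_storeu_ps(out + i,
                      _mm_add_ps(_mm_loadu_ps(a + i), _mm_loadu_ps(b + i)));
    for (; i < n; ++i)
        out[i] = a[i] + b[i];
}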

> In which case I think a DSEL is clever, but a SIMD-only library would
> be too small in scope for my taste. But that's just me I think. ;-)

Patrick Mihelich seems to disagree on the other thread ;)

> I like thinking at a higher level first and solving the problems in
> the lower level with more specific focus but within a bigger context.
> Only once you recognize the patterns in the solution from a higher
> level can you really try solving problems at a lower level with better
> insight. Missing context is always hard to deal with.

Maybe. I'm not sure myself how to tackle this. Let's try and see how
it fares. If I end up cornered, I'll go back to another approach.
Heck, I'm even paid to do exactly this (searching, and not getting
cornered ;))

The issue is large, and arguments about how to do it properly are
necessary, as no one may be able to find the definitive answer alone.
In my larger plan, there is a mix to be found between external
source-to-source preprocessing tools, low-level DSELs, and other kinds
of tools.

