From: Dean Michael Berris (mikhailberis_at_[hidden])
Date: 2006-06-29 17:00:51


On 6/29/06, Douglas Gregor <doug.gregor_at_[hidden]> wrote:
>
> On Jun 28, 2006, at 11:29 PM, Dean Michael Berris wrote:
> > I see that this requires any one of three MPI implementations -- and
> > there are a lot of differences between these implementations
> > internally and architecturally.
>
> In theory, the candidate Boost.MPI should work with any MPI that
> meets the MPI 1.1 specification. In practice, we tend to test with
> those three implementations.
>

API-wise, I suppose this will be true. However, I'm really worried
about the internals of these MPI implementations and how some of
their kinks will show through from one implementation to the next.
But then this concern is misplaced and not really related to
Boost.MPI, so I guess I don't need to go into too much detail here.
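
Just to illustrate what I mean by API-wise portability: a minimal
program written purely against the MPI 1.1 C API, like the sketch
below, should build and run unchanged on any conforming
implementation; whatever differences exist stay hidden inside the
library.

#include <mpi.h>
#include <cstdio>

int main(int argc, char* argv[])
{
    MPI_Init(&argc, &argv);

    int rank = 0, size = 0;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 0) {
        // Rank 0 sends one int to every other rank.
        int value = 42;
        for (int dest = 1; dest < size; ++dest)
            MPI_Send(&value, 1, MPI_INT, dest, 0, MPI_COMM_WORLD);
    } else {
        int value = 0;
        MPI_Status status;
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        std::printf("rank %d received %d\n", rank, value);
    }

    MPI_Finalize();
    return 0;
}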

> > There's also the question of compiler
> > compliance, and platform dependence
>
> You'll need a modern C++ compiler to use the candidate Boost.MPI. It
> should be platform-agnostic, but again--we only typically test a few
> platforms, namely x86 Linux, x86-64 Linux, and PowerPC Mac OS X.
>

That's a pretty short list, but just right if you're dealing with
Beowulf clusters. I don't really mind, though, because those are the
only clusters I have experience with anyway. :)

> > -- although I haven't seen the
> > code yet, my question is a more pragmatic one: whether it can
> > maintain a common STL-like interface without breaking the
> > distributed/parallel computing framework.
> >
> > I would really like to see more of this -- though, much as in
> > parallel supercomputing applications, the issue will really be
> > performance more than anything else.
>
> Although we have yet to run the tests with the candidate Boost.MPI,
> we ran NetPIPE numbers using a prototype of the same C++ interface.
> There was no impact on either bandwidth or latency.
>

I wouldn't really expect the candidate Boost.MPI bindings to make
much of a difference if they delegate to the existing MPI
implementations. As above, these concerns are really aimed at the
underlying MPI implementations rather than at the candidate
Boost.MPI itself.
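
For what it's worth, a thin C++ layer over MPI need not cost anything
at run time. Something like the hypothetical forwarding function
below (my own sketch, not the candidate's actual code) compiles down
to the underlying MPI_Send call, so NetPIPE-style numbers would
really be measuring the MPI implementation underneath:

#include <mpi.h>

// Hypothetical sketch only -- not the candidate Boost.MPI code.
// A wrapper this thin just forwards to the MPI 1.1 C API, so the
// compiler can inline it away entirely.
inline void send_ints(MPI_Comm comm, int dest, int tag,
                      const int* values, int count)
{
    MPI_Send(const_cast<int*>(values), count, MPI_INT, dest, tag, comm);
}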

Has there been any consideration of using asio to implement the MPI
standard itself, so that the candidate Boost.MPI could also act as an
MPI implementation on platforms where none is available? I'm thinking
that if asio works on many different platforms (maybe even embedded
devices), then Boost.MPI could ship with a canned MPI implementation
of its own.
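
To make the idea a bit more concrete, the sort of point-to-point
primitive such a canned implementation would be built on is just a
blocking, length-prefixed message exchange over a socket -- roughly
like the untested sketch below (the asio calls are my best guess and
purely illustrative; ranks, tags, datatypes, collectives and byte
order are all ignored):

#include <boost/asio.hpp>
#include <boost/cstdint.hpp>
#include <vector>

// Untested sketch: blocking, length-prefixed send/receive over TCP,
// the kind of building block an MPI-over-asio layer would need.
void send_message(boost::asio::ip::tcp::socket& socket,
                  const std::vector<char>& payload)
{
    boost::uint32_t length = static_cast<boost::uint32_t>(payload.size());
    boost::asio::write(socket, boost::asio::buffer(&length, sizeof(length)));
    boost::asio::write(socket, boost::asio::buffer(payload));
}

std::vector<char> receive_message(boost::asio::ip::tcp::socket& socket)
{
    boost::uint32_t length = 0;
    boost::asio::read(socket, boost::asio::buffer(&length, sizeof(length)));
    std::vector<char> payload(length);
    if (length > 0)
        boost::asio::read(socket, boost::asio::buffer(payload));
    return payload;
}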

Just a thought, though. I'd think this is too much of a separate
effort and might be a waste of time, but it could nonetheless be a
worthy goal.

> > Anyone in the list can do a review right?
>
> I know there are a few Boosters that are in the HPC/parallel
> computing area. I'm also planning to strong-arm whatever MPI experts
> I can find :)
>

I'm no MPI expert, but I have done some (read: very limited)
HPC/parallel computing programming before. I'd love to follow the
discussion, and to take part in the review when the process starts.

I'm really excited about this, and hope I can be of help in any way I can.

Have a great day everyone!

-- 
Dean Michael C. Berris
C/C++ Software Architect
Orange and Bronze Software Labs
http://3w-agility.blogspot.com/
http://cplusplus-soup.blogspot.com/
Mobile: +639287291459
Email: dean [at] orangeandbronze [dot] com
