Subject: Re: [boost] [Boost-users] What's so cool about Boost.MPI?
From: Matthias Troyer (troyer_at_[hidden])
Date: 2010-11-11 02:53:26
On Nov 10, 2010, at 8:11 PM, David Abrahams wrote:
>> Usually, headers of serialized data adds about 5%
>> overhead, unless data packets are small. The internet and local
>> networks are about 10-gigabit, and are pushing into the 100-gigabit
>> range now. By the time you get a lot of programmers coding to the
>> library, the networks and CPU's will be so fast, I hardly think the
>> small overheads will make any difference. One thing I've noticed for
>> a decade now is that networks are an order or two magnitudes faster
>> than computers, I mean, networks deliver data way faster than a
>> computer can process it.
> That's not commonly the case with systems built on MPI. Communication
> costs tend to be very significant in a system where every node needs
> to talk to an arbitrary set of other nodes, and that's a common
> pattern for HPC problems.
I also disagree with the statement that communication is faster than computation. Even if you have a 10 Gb/second network into a compute node, that corresponds to only about 150 million double precision floating point numbers per second. Connect that to a node with a *single* quad-core Nehalem CPU, which reaches measured sustained speeds of 74 Gflop/s, and you see that the network is about 500 times slower. Putting four such CPUs into one node brings the ratio to 2000! Even a network ten times faster only takes this down to a factor of 200.
Thus, in contrast to your statement, networks are *not* one or two orders of magnitude faster than computers; they are two to three orders of magnitude slower than the compute nodes. This will only get worse by an additional two orders of magnitude once we add GPUs or other future accelerator chips to the nodes.
One reason why you might have the impression that you cannot process incoming data fast enough may be limitations in how you get the data from the network to the CPU. That is where dedicated high-performance network hardware and optimized MPI libraries help. Boost.MPI makes it easier to use those MPI libraries.
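As an illustration of that last point, here is a minimal Boost.MPI sketch (it assumes an installed MPI implementation and Boost.MPI, and must be launched with mpirun on at least two ranks): sending a std::vector<double> takes one call, with Boost.Serialization handling the packing that raw MPI would make you do by hand.

```cpp
#include <boost/mpi.hpp>
#include <iostream>
#include <vector>

namespace mpi = boost::mpi;

int main(int argc, char* argv[]) {
    mpi::environment env(argc, argv);   // initializes and finalizes MPI
    mpi::communicator world;

    if (world.rank() == 0) {
        std::vector<double> data(1000, 3.14);
        world.send(1, /*tag=*/0, data); // serialized and sent in one call
    } else if (world.rank() == 1) {
        std::vector<double> data;
        world.recv(0, /*tag=*/0, data); // received and deserialized
        std::cout << "rank 1 received " << data.size() << " doubles\n";
    }
    return 0;
}
```

With the raw MPI C API the same exchange would need explicit buffer sizes and MPI_Datatype bookkeeping; the serialization-header overhead discussed earlier in the thread is the price of that convenience.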
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk