Hey Alain,

Yes, it helps a lot!
Thank you so much for your very helpful reply.
Kind regards,

                         Éric.

On 10/11/2014 07:33 PM, Alain Miniussi wrote:
Forget it ;-)

Bjam tries the most common compiler wrapper options to display the link and build commands. Intel does not support them anymore (their implementation is based on MPICH, where those options were tested), so bjam cannot use them. To make things worse, mpiicpc -<any unknown option> will not return an error, so I am not even sure bjam could try the other options (which do not exist either). There is a feature request filed at Intel, and things should improve in the future.

That being said:

"mpiicpc -show" will display the options to use to compile directly mpi application through icpc.

On my current box:

icpc -I/opt/intel//impi/5.0.1.035/intel64/include -L/opt/intel//impi/5.0.1.035/intel64/lib/release -L/opt/intel//impi/5.0.1.035/intel64/lib -Xlinker --enable-new-dtags -Xlinker -rpath -Xlinker /opt/intel//impi/5.0.1.035/intel64/lib/release -Xlinker -rpath -Xlinker /opt/intel//impi/5.0.1.035/intel64/lib -Xlinker -rpath -Xlinker /opt/intel/mpi-rt/5.0/intel64/lib/release -Xlinker -rpath -Xlinker /opt/intel/mpi-rt/5.0/intel64/lib -lmpicxx -lmpifort -lmpi -lmpigi -ldl -lrt -lpthread

You can use that output to deduce the include path, the library paths, and the libraries.
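Informally, the mapping from the -show output to Boost.Build features goes roughly like this (a sketch, not an exhaustive list):

    -I<dir>   ->  <include><dir>
    -L<dir>   ->  <library-path><dir>
    -l<name>  ->  <find-shared-library><name>

The -Xlinker -rpath pairs correspond to run-path settings, which I come back to below.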

From there, you can explicitly configure the MPI section of your project-config.jam. On my box, that's:

using mpi : :
      <find-shared-library>mpi
      <find-shared-library>mpigi
      <find-shared-library>dl
      <find-shared-library>rt
      <find-shared-library>pthread
      <library-path>/opt/intel/impi/5.0.1.035/intel64/lib
      <library-path>/opt/intel/impi/5.0.1.035/intel64/lib/release
      <include>/opt/intel/impi/5.0.1.035/intel64/include
       ;

You might want to set the run path too, depending on your preferences regarding dynamic libraries.
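For example (a sketch only, assuming the <dll-path> feature of Boost.Build, which hardcodes a run path into the binaries), you could add a line like

      <dll-path>/opt/intel/impi/5.0.1.035/intel64/lib/release

to the requirements above.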
There is an extra argument that can be useful to set if you want to run the tests, depending on your version of Intel MPI:

using mpi : :
      <find-shared-library>mpi
      <find-shared-library>mpigi
      <find-shared-library>dl
      <find-shared-library>rt
      <find-shared-library>pthread
      <library-path>/opt/intel/impi/5.0.1.035/intel64/lib
      <library-path>/opt/intel/impi/5.0.1.035/intel64/lib/release
      <include>/opt/intel/impi/5.0.1.035/intel64/include
     :
       /opt/intel/impi/5.0.1.035/intel64/bin/mpiexec.hydra
     ;

That's a flavour of mpiexec that does not require the mpd daemon to be running. It should become the default, but I think plain mpiexec still requires the daemon in Intel MPI 5.x.
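For example, to run a freshly built binary on 4 processes (hello_mpi is just a placeholder name for whatever you built):

    mpiexec.hydra -n 4 ./hello_mpi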

Don't forget to have a look at boost/mpi/config.hpp and look for:
//#define BOOST_MPI_HOMOGENEOUS
You probably want to uncomment it.
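That is, assuming all your nodes use the same data representation (which is what this macro asserts), the line should end up as:

    #define BOOST_MPI_HOMOGENEOUS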

Hope it helps.

A++

Alain

On 11/10/2014 07:03, Éric Germaneau wrote:
Dear all,

I'm building boost for the first time.
I'm using Intel compiler.
The b2 command prints this message:
MPI auto-detection failed: unknown wrapper compiler mpic++
Please report this error to the Boost mailing list: http://www.boost.org
You will need to manually configure MPI support.
Would you please shed some light on this matter?
I need to use mpiicc and mpiicpc.
Thank you,

                     Éric.



--
Alain



--
Éric Germaneau (艾海克), Specialist
Center for High Performance Computing
Shanghai Jiao Tong University
Room 205 Network Center, 800 Dongchuan Road, Shanghai 200240 China
Email: germaneau@sjtu.edu.cn  Mobile: +86-136-4161-6480  http://hpc.sjtu.edu.cn