Subject: [Boost-mpi] Questions about attempt to use Open MPI, Boost.MPI, and published names
From: Damien Kick (dkick1_at_[hidden])
Date: 2013-05-13 12:50:54


I'm playing with some code which uses Open MPI's implementation of
name publishing to implement a client/server. The first client
connection works, but a subsequent client connection hangs. I'm not
sure whether this is a problem with my use of these APIs; does
anything that I'm doing in the following code look erroneous? (Output
has been formatted to fit under 80-character lines to make Gmane
happy.) I realize that some or most of my questions are probably about
Open MPI specifically and not so much Boost.MPI, but I thought I'd
start here because I am using boost::mpi::intercommunicator, and
perhaps I'm misusing it somehow, or missing an opportunity to make
better use of Boost.MPI. Thanks in advance for any help.
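
In outline, both sides do the standard published-names handshake;
condensed to its essentials (the full listings follow), it is:

    // Server side (condensed from mdp-mpi.cc below):
    char port[MPI_MAX_PORT_NAME];
    MPI::Open_port(MPI_INFO_NULL, port);
    MPI::Publish_name("mdp-server-example", info, port);  // info: see below
    for (;;) {
        boost::mpi::intercommunicator client(
            MPI::COMM_SELF.Accept(port, MPI_INFO_NULL, 0),
            boost::mpi::comm_take_ownership);
        // ... hand the intercommunicator off to a reader thread ...
    }

    // Client side (condensed from mr-agent-mpi.cc below):
    char port[MPI_MAX_PORT_NAME];
    MPI::Lookup_name("mdp-server-example", MPI_INFO_NULL, port);
    boost::mpi::intercommunicator server(
        MPI::COMM_SELF.Connect(port, MPI_INFO_NULL, 0),
        boost::mpi::comm_take_ownership);
    server.send(0, tag, mr_example);  // tag = mpi::Tag<Mr_example>::value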

So the server code is

$ cat src/mdp-mpi.cc
#include "seed/mdp/mpi/client.hh"
#include "seed/mpi_info.hh"
#include "seed/scope_exit.hh"

#include <mpi.h>

#include <boost/mpi.hpp>

#include <array>
#include <cstdlib>
#include <chrono>
#include <iostream>
#include <ostream>
#include <string>
#include <thread>

#include <unistd.h> // for getpid()

int main(int argc, char* argv[])
{
    std::clog << argv[0] << " pid " << getpid() << '\n';

    shor::seed::Scope_exit finalize(
        []() {
            if (MPI::Is_initialized()) {
                MPI::Finalize();
            }
        });
    const auto required = MPI_THREAD_MULTIPLE;
    const auto provided = MPI::Init_thread(argc, argv, required);
    if (provided < required) {
        std::cerr << "Error: could not init with MPI_THREAD_MULTIPLE\n";
        return EXIT_FAILURE;
    }

    std::array<char, MPI_MAX_PORT_NAME> port_name;
    MPI::Open_port(MPI_INFO_NULL, port_name.data());
    std::clog << "Opened port " << port_name.data() << '\n';

    const std::string service_name = "mdp-server-example";
    using shor::seed::Mpi_info;
    MPI::Publish_name(
        service_name.c_str(), Mpi_info({{"ompi_global_scope", "true"}}),
        port_name.data());
    std::clog
        << "Published {\"" << service_name << "\", \"" << port_name.data()
        << "\"}\n";

    while (true) {
        std::clog
            << "Waiting to accept a connection on {\"" << service_name
            << "\", \"" << port_name.data() << "\"}\n";
        boost::mpi::intercommunicator intercomm(
            MPI::COMM_SELF.Accept(port_name.data(), MPI_INFO_NULL, 0),
            boost::mpi::comm_take_ownership);
        std::clog
            << "Accepted a connection on {\"" << service_name
            << "\", \"" << port_name.data() << "\"} with rank "
            << intercomm.rank() << " and size " << intercomm.size()
            << '\n';
        std::thread a_thread((shor::seed::Client(intercomm)));
        a_thread.detach();
    }
}
$ cat include/seed/mdp/mpi/client.hh
#ifndef INCLUDE_MR_AGENT_MPI_CLIENT_HH
#define INCLUDE_MR_AGENT_MPI_CLIENT_HH

#include <boost/mpi/intercommunicator.hpp>

namespace shor {
namespace seed {
class Client {
    boost::mpi::intercommunicator comm_;

public:
    explicit Client(const boost::mpi::intercommunicator& comm);
    Client(const Client& that) = delete;
    Client(Client&& that);
    ~Client() = default;

    Client& operator = (const Client& that) = delete;
    Client& operator = (Client&& that);
    void operator () ();
};

} // namespace seed
} // namespace shor

#endif
$ cat src/mdp/mpi/client.cc
#include "seed/mdp/mpi/client.hh"

#include "seed/message/mpi/tag.hh"
#include "seed/message/mr_example.hh"

#include <iostream>
#include <ostream>

shor::seed::Client::Client(
    const boost::mpi::intercommunicator& comm)
    : comm_(comm)
{ }

shor::seed::Client::Client(
    Client&& that)
    : comm_(std::move(that.comm_))
{ }

shor::seed::Client&
shor::seed::Client::operator = (
    Client&& that)
{
    comm_ = std::move(that.comm_);
    return *this;
}

void
shor::seed::Client::operator () ()
{
    Mr_example mr_example;
    const auto tag = mpi::Tag<Mr_example>::value;
    comm_.recv(boost::mpi::any_source, tag, mr_example);
    std::clog << "Received Mr_example " << mr_example << '\n';
}
$
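
(I haven't shown seed/scope_exit.hh or seed/mpi_info.hh; they are
trivial helpers, and roughly they amount to something like:

    #include <functional>
    #include <initializer_list>
    #include <iostream>
    #include <string>
    #include <utility>

    #include <mpi.h>

    // Runs an arbitrary action when the enclosing scope ends.
    class Scope_exit {
        std::function<void()> action_;
    public:
        explicit Scope_exit(std::function<void()> action)
            : action_(std::move(action)) { }
        ~Scope_exit() { action_(); }
    };

    // Builds an MPI::Info from key/value pairs, logging each setting
    // (hence the MPI_Info_set(...) line in the output below).
    class Mpi_info {
        MPI::Info info_;
    public:
        explicit Mpi_info(
            std::initializer_list<std::pair<std::string, std::string>> kvs)
            : info_(MPI::Info::Create())
        {
            for (const auto& kv : kvs) {
                std::clog << "MPI_Info_set(\"" << kv.first << "\", \""
                          << kv.second << "\")\n";
                info_.Set(kv.first.c_str(), kv.second.c_str());
            }
        }
        operator const MPI::Info&() const { return info_; }
    };

The point is only that MPI::Finalize runs on scope exit and that the
info object carries ompi_global_scope=true into MPI::Publish_name.)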

The first thing that confused me was what happens if I try to run
ompi-server itself via mpirun. For example, starting ompi-server that
way and then my server code, I see

$ mpirun ompi-server \
-r /Users/dkick/shor/seed/2013/04/23/var/run/ompi-server/uri.txt
[Damien-Kicks-MacBook-Pro.local:36793] [[23053,1],0] ORTE_ERROR_LOG: \
A message is attempting to be sent to a process whose contact information \
is unknown in file \
$WHATEVER/openmpi-1.6.4/orte/mca/rml/oob/rml_oob_send.c at line 104
[Damien-Kicks-MacBook-Pro.local:36793] [[23053,1],0] \
could not get route to [[23040,1],0]
[Damien-Kicks-MacBook-Pro.local:36793] [[23053,1],0] ORTE_ERROR_LOG: \
A message is attempting to be sent to a process whose contact information \
is unknown in file \
$WHATEVER/openmpi-1.6.4/orte/runtime/orte_data_server.c at line 386

And my server code never even gets as far as publishing its name. But
if I start ompi-server without mpirun, then things seem to work okay.

$ mpirun -mca btl tcp,sm,self \
--ompi-server file:$WHATEVER/var/run/ompi-server/uri.txt mdp-mpi
mdp-mpi pid 36804
Opened port 1517879296.0;\
tcp://10.160.30.104:61744;\
tcp://10.161.1.73:61744+1517879297.0;\
tcp://10.160.30.104:61745;\
tcp://10.161.1.73:61745:300
MPI_Info_set("ompi_global_scope", "true")
Published {"mdp-server-example", \
"1517879296.0;\
tcp://10.160.30.104:61744;\
tcp://10.161.1.73:61744+1517879297.0;\
tcp://10.160.30.104:61745;\
tcp://10.161.1.73:61745:300"}
Waiting to accept a connection on {"mdp-server-example", \
"1517879296.0;\
tcp://10.160.30.104:61744;\
tcp://10.161.1.73:61744+1517879297.0;\
tcp://10.160.30.104:61745;\
tcp://10.161.1.73:61745:300"}

Here is the client code, along with the Boost.Serialization glue for
the messaging; it isn't that interesting, but I include it to be
complete.

$ cat src/mr-agent-mpi.cc
#include "seed/message/mpi/tag.hh"
#include "seed/message/mr_example.hh"
#include "seed/scope_exit.hh"

#include <mpi.h>

#include <boost/mpi.hpp>

#include <array>
#include <cstdlib>
#include <iostream>
#include <ostream>
#include <string>

#include <unistd.h>

int main(int argc, char* argv[])
{
    std::clog << argv[0] << " pid " << getpid() << '\n';

    shor::seed::Scope_exit finalize(
        []() {
            if (MPI::Is_initialized()) {
                MPI::Finalize();
            }
        });
    const auto required = MPI_THREAD_MULTIPLE;
    const auto provided = MPI::Init_thread(argc, argv, required);
    if (provided < required) {
        std::cerr << "Error: could not init with MPI_THREAD_MULTIPLE\n";
        return EXIT_FAILURE;
    }

    const std::string service_name = "mdp-server-example";
    std::clog
        << "Looking up port for service \"" << service_name << "\"\n";
    std::array<char, MPI_MAX_PORT_NAME> port_name;
    MPI::Lookup_name(
        service_name.c_str(), MPI_INFO_NULL, port_name.data());
    std::clog
        << "Found {\"" << service_name << "\", \"" << port_name.data()
        << "\"}\n";

    boost::mpi::intercommunicator comm(
        MPI::COMM_SELF.Connect(port_name.data(), MPI_INFO_NULL, 0),
        boost::mpi::comm_take_ownership);
    std::clog
        << "Connected to {\"" << service_name << "\", \""
        << port_name.data() << "\"} with rank " << comm.rank()
        << " and size " << comm.size() << '\n';

    using shor::seed::Mr_example;
    // Just a dummy for now ...
    Mr_example mr_example;
    mr_example.router_id = 11;
    mr_example.tenant_id = 13;
    mr_example.host_name = "host-name";
    mr_example.domain_name = "domain-name";
    mr_example.mgmt_ip_address = "mgmt-ip-address";
    mr_example.mac_address = "mac-address";
    mr_example.ws_user = "ws-user";
    mr_example.ws_password = "ws-password";
    mr_example.admin_url = "admin-url";

    const auto tag = shor::seed::mpi::Tag<Mr_example>::value;

    std::clog
        << "Sending to {\"" << service_name << "\", \""
        << port_name.data() << "\"}\n";
    comm.send(0, tag, mr_example);
}
$ cat include/seed/message/mpi/tag.hh
#ifndef INCLUDE_SEED_MESSAGE_MPI_TAG_HH
#define INCLUDE_SEED_MESSAGE_MPI_TAG_HH

#include "seed/message/mr_example.hh"

namespace shor {
namespace seed {
namespace mpi {
template<typename Msg_type> struct Tag;

template<>
struct Tag<Mr_example> {
    static const int value = 1;
};

} // namespace mpi
} // namespace seed
} // namespace shor

#endif
$ cat include/seed/message/mr_example.hh
#ifndef INCLUDE_MESSAGE_MR_EXAMPLE_HH
#define INCLUDE_MESSAGE_MR_EXAMPLE_HH

#include <cstdint>
#include <ostream>
#include <string>

namespace shor {
namespace seed {
namespace detail {
struct Mr_example {
    std::int32_t router_id;
    std::int32_t tenant_id;
    std::string host_name;
    std::string domain_name;
    std::string mgmt_ip_address;
    std::string mac_address;
    std::string ws_user;
    std::string ws_password;
    std::string admin_url;

    template<typename Archive>
    void serialize(
        Archive& ar, unsigned int version);
};

std::ostream& operator << (std::ostream& out, const Mr_example& that);

} // namespace detail
typedef detail::Mr_example Mr_example;

} // namespace seed
} // namespace shor

#include "seed/message/mr_example.cc.hh"

#endif
$ cat include/seed/message/mr_example.cc.hh
#ifndef INCLUDE_MESSAGE_MR_EXAMPLE_CC_HH
#define INCLUDE_MESSAGE_MR_EXAMPLE_CC_HH

#include "seed/message/mr_example.hh"

#include <boost/serialization/string.hpp>

template<typename Archive>
void
shor::seed::detail::Mr_example::serialize(
    Archive& ar,
    const unsigned int /**/)
{
    ar & this->router_id;
    ar & this->tenant_id;
    ar & this->host_name;
    ar & this->domain_name;
    ar & this->mgmt_ip_address;
    ar & this->mac_address;
    ar & this->ws_user;
    ar & this->ws_password;
    ar & this->admin_url;
}

#endif
$ cat src/message/mr_example.cc
#include "seed/message/mr_example.hh"

std::ostream&
shor::seed::detail::operator << (
    std::ostream& out,
    const Mr_example& that)
{
    out << "{ .router_id = " << that.router_id
        << ", .tenant_id = " << that.tenant_id
        << ", .host_name = \"" << that.host_name << '"'
        << ", .domain_name = \"" << that.domain_name << '"'
        << ", .mgmt_ip_address = \"" << that.mgmt_ip_address << '"'
        << ", .mac_address = \"" << that.mac_address << '"'
        << ", .ws_user = \"" << that.ws_user << '"'
        << ", .ws_password = \"" << that.ws_password << '"'
        << ", .admin_url = \"" << that.admin_url << '"'
        << " }";
    return out;
}
$
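
(The mpi::Tag<> trait above is just a compile-time map from message
type to a distinct MPI tag. A second message type, say a hypothetical
Mr_other, would simply get its own specialization:

    // Hypothetical second message type: one specialization per
    // message type, each with a unique tag value.
    template<>
    struct Tag<Mr_other> {
        static const int value = 2;
    };

so the server can tell message kinds apart on the same
intercommunicator.)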

Now, if I run the client, the first connection works just fine.

$ mpirun -mca btl tcp,sm,self \
--ompi-server file:$WHATEVER/var/run/ompi-server/uri.txt mr-agent-mpi
mr-agent-mpi pid 36817
Looking up port for service "mdp-server-example"
Found {"mdp-server-example", \
"1517879296.0;tcp://10.160.30.104:61744;\
tcp://10.161.1.73:61744+1517879297.0;tcp://10.160.30.104:61745;\
tcp://10.161.1.73:61745:300"}
Connected to {"mdp-server-example", \
"1517879296.0;tcp://10.160.30.104:61744;\
tcp://10.161.1.73:61744+1517879297.0;tcp://10.160.30.104:61745;\
tcp://10.161.1.73:61745:300"} with rank 0 and size 1
Sending to {"mdp-server-example", \
"1517879296.0;tcp://10.160.30.104:61744;\
tcp://10.161.1.73:61744+1517879297.0;tcp://10.160.30.104:61745;\
tcp://10.161.1.73:61745:300"}
$

Seen by the server ...

Accepted a connection on {"mdp-server-example", \
"1517879296.0;tcp://10.160.30.104:61744;\
tcp://10.161.1.73:61744+1517879297.0;tcp://10.160.30.104:61745;\
tcp://10.161.1.73:61745:300"} with rank 0 and size 1
Waiting to accept a connection on {"mdp-server-example", \
"1517879296.0;tcp://10.160.30.104:61744;\
tcp://10.161.1.73:61744+1517879297.0;tcp://10.160.30.104:61745;\
tcp://10.161.1.73:61745:300"}
Received Mr_example { .router_id = 11, .tenant_id = 13, \
.host_name = "host-name", .domain_name = "domain-name", \
.mgmt_ip_address = "mgmt-ip-address", .mac_address = "mac-address", \
.ws_user = "ws-user", .ws_password = "ws-password", .admin_url = "admin-url" }

And if I wait before running another client, it works fine again. But
if I run the client "right away", the client hangs ...

$ mpirun -mca btl tcp,sm,self \
--ompi-server file:$WHATEVER/var/run/ompi-server/uri.txt mr-agent-mpi
mr-agent-mpi pid 36819
Looking up port for service "mdp-server-example"
Found {"mdp-server-example", \
"1517879296.0;tcp://10.160.30.104:61744;\
tcp://10.161.1.73:61744+1517879297.0;tcp://10.160.30.104:61745;\
tcp://10.161.1.73:61745:300"}
Connected to {"mdp-server-example", \
"1517879296.0;tcp://10.160.30.104:61744;\
tcp://10.161.1.73:61744+1517879297.0;tcp://10.160.30.104:61745;\
tcp://10.161.1.73:61745:300"} with rank 0 and size 1
Sending to {"mdp-server-example", \
"1517879296.0;tcp://10.160.30.104:61744;\
tcp://10.161.1.73:61744+1517879297.0;tcp://10.160.30.104:61745;\
tcp://10.161.1.73:61745:300"}
$ mpirun -mca btl tcp,sm,self \
--ompi-server file:$WHATEVER/var/run/ompi-server/uri.txt mr-agent-mpi
mr-agent-mpi pid 36821
Looking up port for service "mdp-server-example"
Found {"mdp-server-example", \
"1517879296.0;tcp://10.160.30.104:61744;\
tcp://10.161.1.73:61744+1517879297.0;tcp://10.160.30.104:61745;\
tcp://10.161.1.73:61745:300"}

With the server's output looking like

$ mpirun -mca btl tcp,sm,self \
--ompi-server file:$WHATEVER/var/run/ompi-server/uri.txt mdp-mpi
mdp-mpi pid 36804
Opened port 1517879296.0;tcp://10.160.30.104:61744;\
tcp://10.161.1.73:61744+1517879297.0;tcp://10.160.30.104:61745;\
tcp://10.161.1.73:61745:300
MPI_Info_set("ompi_global_scope", "true")
Published {"mdp-server-example", "1517879296.0;tcp://10.160.30.104:61744;\
tcp://10.161.1.73:61744+1517879297.0;tcp://10.160.30.104:61745;\
tcp://10.161.1.73:61745:300"}
Waiting to accept a connection on {"mdp-server-example", "1517879296.0;\
tcp://10.160.30.104:61744;tcp://10.161.1.73:61744+1517879297.0;\
tcp://10.160.30.104:61745;tcp://10.161.1.73:61745:300"}
Accepted a connection on {"mdp-server-example", "1517879296.0;\
tcp://10.160.30.104:61744;tcp://10.161.1.73:61744+1517879297.0;\
tcp://10.160.30.104:61745;tcp://10.161.1.73:61745:300"} with rank 0 and size 1
Waiting to accept a connection on {"mdp-server-example", "1517879296.0;\
tcp://10.160.30.104:61744;tcp://10.161.1.73:61744+1517879297.0;\
tcp://10.160.30.104:61745;tcp://10.161.1.73:61745:300"}
Received Mr_example { .router_id = 11, .tenant_id = 13, \
.host_name = "host-name", .domain_name = "domain-name", \
.mgmt_ip_address = "mgmt-ip-address", .mac_address = "mac-address", \
.ws_user = "ws-user", .ws_password = "ws-password", .admin_url = "admin-url" }
Accepted a connection on {"mdp-server-example", "1517879296.0;\
tcp://10.160.30.104:61744;tcp://10.161.1.73:61744+1517879297.0;\
tcp://10.160.30.104:61745;tcp://10.161.1.73:61745:300"} with rank 0 and size 1
Waiting to accept a connection on {"mdp-server-example", "1517879296.0;\
tcp://10.160.30.104:61744;tcp://10.161.1.73:61744+1517879297.0;\
tcp://10.160.30.104:61745;tcp://10.161.1.73:61745:300"}
Received Mr_example { .router_id = 11, .tenant_id = 13, \
.host_name = "host-name", .domain_name = "domain-name", \
.mgmt_ip_address = "mgmt-ip-address", .mac_address = "mac-address", \
.ws_user = "ws-user", .ws_password = "ws-password", .admin_url = "admin-url" }

Mac OS X's Activity Monitor tells me that mr-agent-mpi is taking up
100% of a CPU, so it is busy-waiting on something rather than blocking.
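
In case the detached reader threads are the suspect, the simplest
variant I can think of is to recv on the accepting thread itself, with
no threads at all; a sketch (untested, same includes as mdp-mpi.cc
plus seed/message/mpi/tag.hh):

    while (true) {
        boost::mpi::intercommunicator intercomm(
            MPI::COMM_SELF.Accept(port_name.data(), MPI_INFO_NULL, 0),
            boost::mpi::comm_take_ownership);
        shor::seed::Mr_example mr_example;
        // Receive directly here instead of detaching a Client thread.
        intercomm.recv(
            boost::mpi::any_source,
            shor::seed::mpi::Tag<shor::seed::Mr_example>::value,
            mr_example);
        std::clog << "Received Mr_example " << mr_example << '\n';
    }

though of course that serializes the clients.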

For completeness, here is the ompi_info output

$ ompi_info
                 Package: Open MPI dkick_at_Damien-Kicks-MacBook-Pro.local
                          Distribution
                Open MPI: 1.6.4
   Open MPI SVN revision: r28081
   Open MPI release date: Feb 19, 2013
                Open RTE: 1.6.4
   Open RTE SVN revision: r28081
   Open RTE release date: Feb 19, 2013
                    OPAL: 1.6.4
       OPAL SVN revision: r28081
       OPAL release date: Feb 19, 2013
                 MPI API: 2.1
            Ident string: 1.6.4
                  Prefix: $WHATEVER
 Configured architecture: x86_64-apple-darwin12.3.0
          Configure host: Damien-Kicks-MacBook-Pro.local
           Configured by: dkick
           Configured on: Thu May 9 21:36:29 CDT 2013
          Configure host: Damien-Kicks-MacBook-Pro.local
                Built by: dkick
                Built on: Thu May 9 21:53:32 CDT 2013
              Built host: Damien-Kicks-MacBook-Pro.local
              C bindings: yes
            C++ bindings: yes
      Fortran77 bindings: yes (single underscore)
      Fortran90 bindings: yes
 Fortran90 bindings size: small
              C compiler: gcc
     C compiler absolute: /usr/bin/gcc
  C compiler family name: GNU
      C compiler version: 4.8.0
            C++ compiler: g++ --std=c++0x
   C++ compiler absolute: /usr/bin/g++
      Fortran77 compiler: gfortran
  Fortran77 compiler abs: /sw/bin/gfortran
      Fortran90 compiler: gfortran
  Fortran90 compiler abs: /sw/bin/gfortran
             C profiling: yes
           C++ profiling: yes
     Fortran77 profiling: yes
     Fortran90 profiling: yes
          C++ exceptions: yes
          Thread support: posix (MPI_THREAD_MULTIPLE: yes, progress: no)
           Sparse Groups: no
  Internal debug support: no
  MPI interface warnings: no
     MPI parameter check: runtime
Memory profiling support: no
Memory debugging support: no
         libltdl support: yes
   Heterogeneous support: no
 mpirun default --prefix: no
         MPI I/O support: yes
       MPI_WTIME support: gettimeofday
     Symbol vis. support: yes
   Host topology support: yes
          MPI extensions: affinity example
   FT Checkpoint support: no (checkpoint thread: no)
     VampirTrace support: yes
  MPI_MAX_PROCESSOR_NAME: 256
    MPI_MAX_ERROR_STRING: 256
     MPI_MAX_OBJECT_NAME: 64
        MPI_MAX_INFO_KEY: 36
        MPI_MAX_INFO_VAL: 256
       MPI_MAX_PORT_NAME: 1024
  MPI_MAX_DATAREP_STRING: 128
           MCA backtrace: execinfo (MCA v2.0, API v2.0, Component v1.6.4)
           MCA paffinity: hwloc (MCA v2.0, API v2.0, Component v1.6.4)
               MCA carto: auto_detect (MCA v2.0, API v2.0, Component v1.6.4)
               MCA carto: file (MCA v2.0, API v2.0, Component v1.6.4)
               MCA shmem: mmap (MCA v2.0, API v2.0, Component v1.6.4)
               MCA shmem: posix (MCA v2.0, API v2.0, Component v1.6.4)
               MCA shmem: sysv (MCA v2.0, API v2.0, Component v1.6.4)
           MCA maffinity: first_use (MCA v2.0, API v2.0, Component v1.6.4)
           MCA maffinity: hwloc (MCA v2.0, API v2.0, Component v1.6.4)
               MCA timer: darwin (MCA v2.0, API v2.0, Component v1.6.4)
         MCA installdirs: env (MCA v2.0, API v2.0, Component v1.6.4)
         MCA installdirs: config (MCA v2.0, API v2.0, Component v1.6.4)
             MCA sysinfo: darwin (MCA v2.0, API v2.0, Component v1.6.4)
               MCA hwloc: hwloc132 (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA dpm: orte (MCA v2.0, API v2.0, Component v1.6.4)
              MCA pubsub: orte (MCA v2.0, API v2.0, Component v1.6.4)
           MCA allocator: basic (MCA v2.0, API v2.0, Component v1.6.4)
           MCA allocator: bucket (MCA v2.0, API v2.0, Component v1.6.4)
                MCA coll: basic (MCA v2.0, API v2.0, Component v1.6.4)
                MCA coll: hierarch (MCA v2.0, API v2.0, Component v1.6.4)
                MCA coll: inter (MCA v2.0, API v2.0, Component v1.6.4)
                MCA coll: self (MCA v2.0, API v2.0, Component v1.6.4)
                MCA coll: sm (MCA v2.0, API v2.0, Component v1.6.4)
                MCA coll: sync (MCA v2.0, API v2.0, Component v1.6.4)
                MCA coll: tuned (MCA v2.0, API v2.0, Component v1.6.4)
                  MCA io: romio (MCA v2.0, API v2.0, Component v1.6.4)
               MCA mpool: fake (MCA v2.0, API v2.0, Component v1.6.4)
               MCA mpool: rdma (MCA v2.0, API v2.0, Component v1.6.4)
               MCA mpool: sm (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA pml: bfo (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA pml: csum (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA pml: ob1 (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA pml: v (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA bml: r2 (MCA v2.0, API v2.0, Component v1.6.4)
              MCA rcache: vma (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA btl: self (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA btl: sm (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA btl: tcp (MCA v2.0, API v2.0, Component v1.6.4)
                MCA topo: unity (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA osc: pt2pt (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA osc: rdma (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA iof: hnp (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA iof: orted (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA iof: tool (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA oob: tcp (MCA v2.0, API v2.0, Component v1.6.4)
                MCA odls: default (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA ras: cm (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA ras: slurm (MCA v2.0, API v2.0, Component v1.6.4)
               MCA rmaps: load_balance (MCA v2.0, API v2.0, Component v1.6.4)
               MCA rmaps: rank_file (MCA v2.0, API v2.0, Component v1.6.4)
               MCA rmaps: resilient (MCA v2.0, API v2.0, Component v1.6.4)
               MCA rmaps: round_robin (MCA v2.0, API v2.0, Component v1.6.4)
               MCA rmaps: seq (MCA v2.0, API v2.0, Component v1.6.4)
               MCA rmaps: topo (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA rml: oob (MCA v2.0, API v2.0, Component v1.6.4)
              MCA routed: binomial (MCA v2.0, API v2.0, Component v1.6.4)
              MCA routed: cm (MCA v2.0, API v2.0, Component v1.6.4)
              MCA routed: direct (MCA v2.0, API v2.0, Component v1.6.4)
              MCA routed: linear (MCA v2.0, API v2.0, Component v1.6.4)
              MCA routed: radix (MCA v2.0, API v2.0, Component v1.6.4)
              MCA routed: slave (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA plm: rsh (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA plm: slurm (MCA v2.0, API v2.0, Component v1.6.4)
               MCA filem: rsh (MCA v2.0, API v2.0, Component v1.6.4)
              MCA errmgr: default (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA ess: env (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA ess: hnp (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA ess: singleton (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA ess: slave (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA ess: slurm (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA ess: slurmd (MCA v2.0, API v2.0, Component v1.6.4)
                 MCA ess: tool (MCA v2.0, API v2.0, Component v1.6.4)
             MCA grpcomm: bad (MCA v2.0, API v2.0, Component v1.6.4)
             MCA grpcomm: basic (MCA v2.0, API v2.0, Component v1.6.4)
             MCA grpcomm: hier (MCA v2.0, API v2.0, Component v1.6.4)
            MCA notifier: command (MCA v2.0, API v1.0, Component v1.6.4)
            MCA notifier: syslog (MCA v2.0, API v1.0, Component v1.6.4)
$