Subject: Re: [boost] [PBGL] Distributed adjacency_list memory allocation
From: Nick Edmonds (ngedmond_at_[hidden])
Date: 2011-05-06 16:43:31


On Apr 30, 2011, at 11:09 AM, Cosimo Calabrese <cosimo.calabrese_at_[hidden]> wrote:

> Hi to all,
>
> I'm working on WinXP, VS2010, Boost 1.46.1. I've written this short program:
>
> //----------------------------------
> #include <boost/graph/use_mpi.hpp>
> #include <boost/graph/distributed/mpi_process_group.hpp>
> #include <boost/graph/distributed/adjacency_list.hpp>
> #include <boost/mpi/environment.hpp> // for boost::mpi::environment
> #include <string>                    // for std::string
>
> using boost::graph::distributed::mpi_process_group;
>
> typedef boost::property<boost::vertex_distance_t, long,
>         boost::property<boost::vertex_name_t, std::string> >
>     vertex_prop;
> typedef boost::property<boost::edge_weight_t, long> edge_prop;
> typedef boost::adjacency_list<
>     boost::vecS, boost::distributedS<mpi_process_group, boost::vecS>,
>     boost::directedS, vertex_prop, edge_prop> Graph;
> typedef boost::graph_traits<Graph>::edge_descriptor EdgeDescriptor;
> typedef boost::graph_traits<Graph>::vertex_descriptor VertexDescriptor;
>
> int main(int argc, char* argv[])
> {
>   boost::mpi::environment env(argc, argv);
>   Graph g(10);
>   return 0;
> }
> //----------------------------------
>
> When I launch it, every process allocates about 1 GB of memory. Why does it allocate all that memory? Is that normal? Is there any tuning of the PBGL framework that I need to do?
>
> I use it on a 2-core machine, and the command that I use is:
>
> mpiexec -n 2 .\test.exe
>
> I've tried with Open MPI 1.4.3 and Microsoft HPC Pack 2008, but the choice seems irrelevant.

A large portion of this is likely due to the communication buffers in the mpi_process_group. Buffers that aren't used are never touched, so on a system with virtual memory the allocation shouldn't be a performance issue. The number of buffers is set by a preprocessor define in mpi_process_group.hpp; the size of each buffer is set in the constructor.
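For example, something like the following (untested, off the top of my head) constructs the process group explicitly via the two-argument mpi_process_group(num_headers, buffer_size) constructor and passes it to the graph; the numbers are placeholders to illustrate the knobs, not recommendations:

//----------------------------------
#include <boost/graph/use_mpi.hpp>
#include <boost/graph/distributed/mpi_process_group.hpp>
#include <boost/graph/distributed/adjacency_list.hpp>
#include <boost/mpi/environment.hpp>

using boost::graph::distributed::mpi_process_group;

typedef boost::adjacency_list<
    boost::vecS, boost::distributedS<mpi_process_group, boost::vecS>,
    boost::directedS> Graph;

int main(int argc, char* argv[])
{
  boost::mpi::environment env(argc, argv);

  // num_headers: message headers per batch;
  // buffer_size: bytes of buffered message data per batch.
  // Both values below are illustrative only.
  mpi_process_group pg(1 << 12 /* num_headers */,
                       1 << 16 /* buffer_size, bytes */);

  // Construct the distributed graph on the explicitly built group.
  Graph g(10, pg);
  return 0;
}
//----------------------------------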

Let me know if you need more details (I kept this explanation brief as I'm writing it on my iPhone).

-Nick

