
Boost Users :

Subject: Re: [Boost-users] [BGL] Distributed adjacency_list memory allocation
From: Nick Edmonds (ngedmond_at_[hidden])
Date: 2011-09-19 14:32:40

> Hi Cosimo,
> I ran into the same problem as you, and I'm quite puzzled by this question.
> Have you found an answer yet? I'd appreciate any ideas.
> Yan

Sorry, I can't remember which of these "PBGL consumes lots of memory" questions I've replied to and which I haven't. The key point here is that you have to consider not only the distributed graph itself, but also the auxiliary data structures that may be created alongside it and, most importantly in the "my test case is small but it allocates lots of space" case, the memory the Process Group needs to buffer MPI_Isend and MPI_Irecv data.

Check out libs/graph_parallel/src/mpi_process_group.cpp. In there you'll see a #define for PREALLOCATE_BATCHES, which tells the MPI Process Group how many "batches" to preallocate; a batch is the message buffer that, when full, triggers an MPI_Isend. This is all defined in mpi_process_group.tpp in the mpi_process_group::impl object. So before you even create a single distributed data structure, you have PREALLOCATE_BATCHES * (mpi_process_group::impl::batch_buffer_size + mpi_process_group::impl::batch_message_size) bytes allocated for buffering MPI communication, and those are just the *large* objects in the mpi_process_group. I'm sure you can find plenty of other moderately sized ones that add up as well. The OOB Bsend buffer cache is a good example if you're using SEND_OOB_BSEND (off by default, I believe).

The adjacency list is by no means the most compact data structure, and you'll certainly find some unexpected memory use in there, especially if you're using undirected edges, but for small problem instances most of the memory consumption is in the process group. If you're concerned about memory utilization and don't need a mutable graph, I'd definitely recommend the CSR graph type.

Hope that helps,

Boost-users list run by williamkempf at, kalb at, bjorn.karlsson at, gregod at, wekempf at