
Subject: [Boost-users] [Graph] Large Graphs: performance boost
From: Sensei (senseiwa_at_[hidden])
Date: 2014-07-30 10:18:48


Dear all,

As my experiments grow in size, I am wondering whether my data
structure is still the right choice.

I need to traverse a graph (with my custom A*) that can now easily
contain 10M-100M nodes and 100M-400M edges. The graph is currently
unweighted, but this may change in the future. The only thing
guaranteed not to change is that the graph is immutable.

The graphs might grow even larger, and at some point I will move to the
Parallel BGL, but not yet.

To spare some memory, I am using a CSR graph:

     typedef boost::compressed_sparse_row_graph<boost::bidirectionalS>
             boost_graph;

     graph_ = new boost_graph(boost::edges_are_unsorted_multi_pass,
                              results.begin(), results.end(),
                              nodelist().size());

I am not sure whether this is the best solution.
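
If it turns out that I never need in-edges, one option I am considering
is changing the directedness tag: as far as I understand,
boost::bidirectionalS also stores the reverse (in-edge) structure,
while boost::directedS keeps only the forward one. A toy sketch of that
variant (placeholder edge list, not my real data):

     #include <boost/graph/compressed_sparse_row_graph.hpp>
     #include <utility>
     #include <vector>

     int main()
     {
         // Forward-only CSR graph: no in_edges(), but less memory than
         // the bidirectional variant.
         typedef boost::compressed_sparse_row_graph<boost::directedS>
                 csr_graph;

         std::vector<std::pair<std::size_t, std::size_t> > edges;
         edges.push_back(std::make_pair(0, 1));
         edges.push_back(std::make_pair(0, 2));
         edges.push_back(std::make_pair(1, 3));

         // Same construction style as above, with a toy edge range.
         csr_graph g(boost::edges_are_unsorted_multi_pass,
                     edges.begin(), edges.end(), 4);
         return 0;
     }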

I have two objectives right now: optimizing 1) memory consumption and
2) graph traversal time. The main operation I need to perform on the
graph is simply "given a node, find the adjacent ones" (with my A*).
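
For concreteness, this is the kind of access I mean, written against
the generic BGL interface (toy graph and vertex, just to show the
operation):

     #include <boost/graph/compressed_sparse_row_graph.hpp>
     #include <iostream>
     #include <utility>
     #include <vector>

     int main()
     {
         typedef boost::compressed_sparse_row_graph<boost::bidirectionalS>
                 csr_graph;
         typedef boost::graph_traits<csr_graph>::out_edge_iterator out_iter;
         typedef boost::graph_traits<csr_graph>::vertex_descriptor  vertex_t;

         std::vector<std::pair<std::size_t, std::size_t> > edges;
         edges.push_back(std::make_pair(0, 1));
         edges.push_back(std::make_pair(0, 2));
         edges.push_back(std::make_pair(1, 3));

         csr_graph g(boost::edges_are_unsorted_multi_pass,
                     edges.begin(), edges.end(), 4);

         // "Given a node, find the adjacent ones": walk the out-edges
         // of u and look at their targets.
         vertex_t u = 0;
         std::pair<out_iter, out_iter> range = boost::out_edges(u, g);
         for (out_iter it = range.first; it != range.second; ++it)
             std::cout << u << " -> " << boost::target(*it, g) << "\n";
         return 0;
     }

Since this only goes through the generic interface, I expect the
calling code would stay the same even if the underlying graph type
changes.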

Would you suggest switching to another graph data structure? How much
time would I trade for memory, and vice versa, if I moved, for
instance, to an adjacency list?
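
For comparison, the adjacency_list I would try looks roughly like the
following (vecS storage, same bidirectional tag as my CSR typedef
above); my understanding is that the per-edge overhead is higher than
CSR, but the graph becomes mutable:

     #include <boost/graph/adjacency_list.hpp>

     // One out-edge vector per vertex (plus in-edge vectors because of
     // bidirectionalS); vertices stored in a vector.
     typedef boost::adjacency_list<
         boost::vecS,            // out-edge container selector
         boost::vecS,            // vertex container selector
         boost::bidirectionalS
     > adj_list_graph;

     // It could be built from the same edge range, e.g.
     //     adj_list_graph g(results.begin(), results.end(),
     //                      nodelist().size());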

Thanks for any suggestion!

