Subject: Re: [boost] space complexity of push_relabel_max_flow
From: Dan Jiang (danjiang_at_[hidden])
Date: 2010-04-13 16:09:11

I am trying to use compressed_sparse_row_graph with push_relabel_max_flow, but as a newcomer to Boost I am stuck: I could not find any example that uses compressed_sparse_row_graph with this algorithm. Can anyone help? Here is my code with adjacency_list:
typedef adjacency_list_traits<vecS, vecS, directedS> Traits;
typedef adjacency_list<vecS, vecS, directedS, no_property,
    property<edge_capacity_t, float,
        property<edge_residual_capacity_t, float,
            property<edge_reverse_t, Traits::edge_descriptor> > >,
    no_property, vecS> BOOST_Graph;

BOOST_Graph g;
property_map<BOOST_Graph, edge_capacity_t>::type capacity = get(edge_capacity, g);
property_map<BOOST_Graph, edge_residual_capacity_t>::type residual_capacity = get(edge_residual_capacity, g);
Traits::vertex_descriptor s, t;
// Code to set up s, t, capacity and reverse_capacity is omitted...
// Now find max flow
float flow = push_relabel_max_flow(g, s, t);
Dan Jiang
> Date: Tue, 13 Apr 2010 12:53:45 -0400
> From: jewillco_at_[hidden]
> To: boost_at_[hidden]
> Subject: Re: [boost] space complexity of push_relabel_max_flow
> On Tue, 13 Apr 2010, Dan Jiang wrote:
> > Hi All, I am a new BOOST user and like it very much. I have a question
> > regarding space complexity of push_relabel_max_flow. The
> > push_relabel_max_flow is very efficient compared to Edmonds-Karp.
> > In my application, I need to deal with half a million nodes and half a
> > billion arcs. It requires lots of RAM (>= 6 GB). A few questions:
> > 1) How do I find out space used per node and per arc?
> I do not know off hand for adjacency_list; it is probably reasonable (one
> integer per edge, some per vertex, plus properties). However, remember
> that dynamically allocated vectors can have up to a 100% space overhead.
> I would recommend using compressed_sparse_row_graph and then changing the
> EdgeIndex and Vertex types to uint32_t (since you have fewer than 2^32 of
> each). That will use 4 * (nedges + nvertices + 1) bytes of space, plus
> properties; accesses will also likely be faster than for adjacency_list.
> > 2) My network does not need any reverse edge (currently I set the
> > capacity of reverse edge to 0), is there anyway to not supply reverse
> > edges to save space?
> It does not appear so, because of the residual capacities that need to be
> stored. If you can find an algorithm description that doesn't need
> reverse edges, though, it might be possible to change the algorithm to
> remove them.
> > 3) Does the push_relabel algorithm dynamically allocate lots of memory
> > (besides the RAM required by the input graph, i.e., adjacency_list)?
> It looks like there is a bunch of that. I see six vectors and a queue in
> there, and one of those vectors is a vector of std::lists.
> > 4) Is it possible to implement custom file-based edge lists so that I
> > could explicitly page to disk when not enough ram is available? How
> > slower would it be?
> It would probably be quite a bit slower. If you need to go beyond your
> system's memory, why not let the OS's swap mechanism do the disk accesses?
> That would take care of all of the caching and such that you would need,
> and I'm not sure writing your own would provide much benefit.
> How much memory do you have, and what type are you using for your edge
> capacities? I have some ideas for how to shrink things, but I would first
> like to see a baseline of how bad things are now. If you switch to
> compressed_sparse_row_graph and try to run the algorithm (possibly on
> scaled-down graphs), how much memory does it use? Running that for a few
> data sizes might give an idea of how much actual space per vertex and edge
> the algorithm uses.
> BTW, is there any regular structure in your graph? For example, is it a
> grid graph like some computer vision algorithms use?
> -- Jeremiah Willcock
> _______________________________________________
> Unsubscribe & other changes:

Boost list run by bdawes at, gregod at, cpdaniel at, john at