
Boost Users :

From: Doug Gregor (dgregor_at_[hidden])
Date: 2007-10-18 15:23:46


On Oct 16, 2007, at 7:19 AM, abhishek.v_at_[hidden] wrote:
> After increasing the RAM size to 4GB I could get results for 5000
> vertices. BFS and DFS compute their results in 16 and 14 sec,
> respectively, and I am currently testing with 7000 vertices. But in
> my case the number of edges is O(VxV) (where V is the number of
> vertices). I am finding the threshold values for all of these, which
> will give a complete insight into how big a graph can be constructed
> for a given amount of memory and how long the various algorithms
> take to compute their results.

An adjacency list is not the right data structure for a graph with
O(VxV) edges. Have you considered using the adjacency_matrix?

        http://www.boost.org/libs/graph/doc/adjacency_matrix.html

> Do you have any analysis data against which I can compare? Also let
> me know if any other important parameters should be considered when
> checking scalability. Right now I am considering a complete graph,
> so once it works fine for this it will work for all other cases.

We've dealt with graphs with tens of millions of edges before with
the BGL. The most important point is to pick the appropriate graph
representation for your use case.

By the way, an undirected adjacency list requires far more memory
than either a directed or a bidirectional adjacency list, because
edges actually end up being stored in three places: the source
vertex, the target vertex, and a separate "edges" list that allows
faster iteration over the set of edges in the graph.

        - Doug



Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net