On Nov 2, 2015 6:55 AM, "Quaglino Alessio" <alessio.quaglino@usi.ch> wrote:
>
> Is anyone using the parallel BGL? The following code gives me a segmentation fault in the parallel case, but I can’t figure out what is wrong. You can replace PetscInitialize and PetscFinalize with the corresponding standard MPI calls.
Could you try with fewer vertices (just a few) and see if this still happens? If it does, can you try adding some edges and see if that makes a difference? A couple more things to try are running with a single MPI process and substituting a dummy_property_map for the component map. See if any of these steps makes a difference.
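
For concreteness, here is a minimal sketch of that experiment (untested, so treat it as a starting point under some assumptions: it uses Boost.MPI initialization in place of PETSc, a handful of vertices, a single edge, and it assumes connected_components_ps will accept the plain, non-distributed dummy_property_map, which only has to absorb writes):

#include <boost/graph/use_mpi.hpp>
#include <boost/graph/distributed/mpi_process_group.hpp>
#include <boost/graph/distributed/adjacency_list.hpp>
#include <boost/graph/distributed/connected_components_parallel_search.hpp>
#include <boost/property_map/property_map.hpp>
#include <boost/mpi/environment.hpp>
#include <iostream>

typedef boost::adjacency_list<
    boost::vecS,
    boost::distributedS<boost::graph::distributed::mpi_process_group, boost::vecS>,
    boost::undirectedS> TestGraph;

int main(int argc, char** argv)
{
    boost::mpi::environment env(argc, argv);   // plain MPI setup instead of PETSc

    TestGraph g(8);                            // just a few vertices
    add_edge(vertex(0, g), vertex(1, g), g);   // one edge, to see if edges matter
    synchronize(g);                            // sync the distributed structure

    boost::dummy_property_map components;      // discards all component writes
    int num = boost::graph::distributed::connected_components_ps(g, components);

    std::cout << num << " connected components" << std::endl;
    return 0;
}

If this still crashes under a single process (mpirun -np 1), then neither the component map storage nor the distribution across processes is the culprit.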
> Regards,
> Alessio Quaglino
>
> #define PARALLEL_GRAPH
>
> // Headers needed to make this compile (inferred from the code below):
> #include <boost/graph/use_mpi.hpp>
> #include <boost/graph/distributed/mpi_process_group.hpp>
> #include <boost/graph/distributed/adjacency_list.hpp>
> #include <boost/graph/distributed/connected_components_parallel_search.hpp>
> #include <boost/graph/connected_components.hpp>
> #include <petscsys.h>
> #include <vector>
> #include <iostream>
>
> using namespace boost;
> using boost::graph::distributed::mpi_process_group;
>
> typedef adjacency_list <vecS, vecS, undirectedS> SerialGraph;
> typedef adjacency_list<vecS, distributedS<mpi_process_group, vecS>, undirectedS> Graph;
> typedef iterator_property_map<std::vector<int>::iterator,
>                               property_map<Graph, vertex_index_t>::type> LocalMap;
>
> static char help[] = "";
>
> int main(int argc,char **args)
> {
> PetscErrorCode ierr;
> ierr = PetscInitialize(&argc,&args,(char*)0,help); if (ierr) return ierr;
>
> int nV = 40000;
> int num = 0;
>
> #ifdef PARALLEL_GRAPH
> Graph G(nV+1);                                  // distributed graph, no edges yet
> synchronize(G);                                 // agree on the vertex set across processes
> std::vector<int> localComponent(nV+1);          // local storage for component numbers
> LocalMap components(localComponent.begin(), get(vertex_index, G));
> num = connected_components_ps(G, components);   // parallel-search variant
> #else
> SerialGraph G(nV+1);
> std::vector<int> globalComponent(nV+1);
> num = connected_components(G, &globalComponent[0]);   // serial BGL baseline
> #endif
>
> std::cout << num << " connected components" << std::endl;
>
> ierr = PetscFinalize();
> return ierr;
> }
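
If none of that narrows it down, one more experiment that might isolate the algorithm itself: PBGL also ships a distributed connected components that is not the parallel-search variant. A sketch of your program using it, assuming the header <boost/graph/distributed/connected_components.hpp> and that the distributed overload of connected_components is selected automatically for a distributed graph (as the PBGL examples suggest):

#include <boost/graph/use_mpi.hpp>
#include <boost/graph/distributed/mpi_process_group.hpp>
#include <boost/graph/distributed/adjacency_list.hpp>
#include <boost/graph/distributed/connected_components.hpp>
#include <boost/mpi/environment.hpp>
#include <vector>
#include <iostream>

using namespace boost;
using boost::graph::distributed::mpi_process_group;

typedef adjacency_list<vecS, distributedS<mpi_process_group, vecS>, undirectedS> Graph;
typedef iterator_property_map<std::vector<int>::iterator,
                              property_map<Graph, vertex_index_t>::type> LocalMap;

int main(int argc, char** argv)
{
    mpi::environment env(argc, argv);   // plain MPI setup instead of PETSc

    int nV = 40000;
    Graph G(nV + 1);
    synchronize(G);

    std::vector<int> localComponent(nV + 1);
    LocalMap components(localComponent.begin(), get(vertex_index, G));
    int num = connected_components(G, components);   // distributed overload

    std::cout << num << " connected components" << std::endl;
    return 0;
}

If this runs cleanly where connected_components_ps crashes, that would point at the parallel-search implementation rather than your setup.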