Subject: Re: [Boost-users] Details of boost parallel graph implementation
From: Mahesh Barve (mahesh.barve_at_[hidden])
Date: 2015-07-30 03:12:31
Hi Marcin,
Thanks for the reply.
1. Which are the files in the source tree where I can find the details of the
implementation of the queue, graph, and property map for the serial as well as
the distributed (MPI) cases?
Also, I had the following query.
With reference to the document
http://www.boost.org/doc/libs/1_58_0/libs/graph_parallel/doc/html/overview.html
the steps that I inferred for the parallel version of BFS are listed below (a
rough sketch of how I would call the distributed BFS follows the list). Could
you please correct me on this:
1. The root node process reads the input file, which is in the METIS
format.
2. The root node process creates the adjacency list representation of the
graph.
3. The root node process creates the property map and the "ghost cells" of the
graph. The entry corresponding to each node is marked as white.
4. The root node process will distribute the adjacency list and property
map to each of the processes.
5. Each of the non-root nodes now has its own copy of the adjacency list and
the property map, along with details of the source node where the BFS is to
start.
6. Now the process owning the source node starts the traversal of the
graph.
7. The source node is put in a queue.
8. The process owning the source node will dequeue it, mark all its neighbors
as gray, and put them in the queue. It will also mark the source node as black.
9. The property maps on each of the nodes will now be updated.
10. Each of the node processes will be monitoring its own property map. If any
of the processes finds a node marked as gray, it will enqueue that node and
start its own traversal locally.
11. Thus, on every node, the property map entries will be marked as gray and
then eventually black once the visit is complete. The property maps will be
synchronised.
12. Steps 8 to 11 are repeated until all the entries in every property map
are marked as black.
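For reference, this is roughly how I expect the user-facing call to look. It
is only a sketch adapted from the PBGL documentation example; the tiny
hard-coded edge list and the vertex_distance property are placeholders for my
real METIS input, so please also correct me if this usage itself is wrong:

#include <boost/graph/use_mpi.hpp>
#include <boost/graph/distributed/mpi_process_group.hpp>
#include <boost/graph/distributed/adjacency_list.hpp>
#include <boost/graph/breadth_first_search.hpp>
#include <boost/graph/visitors.hpp>
#include <boost/mpi.hpp>
#include <utility>

using namespace boost;
using boost::graph::distributed::mpi_process_group;

int main(int argc, char* argv[])
{
  boost::mpi::environment env(argc, argv);

  // Distributed adjacency list: vertices are spread over the MPI processes
  // by the default block distribution of distributedS.
  typedef adjacency_list<vecS,
                         distributedS<mpi_process_group, vecS>,
                         undirectedS,
                         property<vertex_distance_t, std::size_t> > Graph;

  // Placeholder graph; in my case this would come from the METIS file.
  typedef std::pair<int, int> E;
  E edges[] = { E(0, 1), E(1, 2), E(2, 3), E(3, 0), E(1, 3) };
  Graph g(edges, edges + sizeof(edges) / sizeof(E), 4);

  // Distributed property map; entries owned by other processes are reached
  // through ghost cells.
  property_map<Graph, vertex_distance_t>::type dist = get(vertex_distance, g);
  put(dist, vertex(0, g), std::size_t(0));

  // Same breadth_first_search entry point as the sequential BGL; the
  // distributed graph, queue, and property map select the parallel path.
  breadth_first_search(g, vertex(0, g),
      visitor(make_bfs_visitor(record_distances(dist, on_tree_edge()))));

  return 0;
}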
Is my interpretation of the parallel implementation correct?
How often are the property maps on the various nodes synchronized, and under
what conditions?
Is there any synchronisation with respect to the queues?
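For example, with an explicit update of a remote entry (again just my guess at
the usage, based on the distributed property map documentation), is the
collective synchronize() call the only point at which the ghost-cell update
actually reaches the owner?

#include <boost/graph/use_mpi.hpp>
#include <boost/graph/distributed/mpi_process_group.hpp>
#include <boost/graph/distributed/adjacency_list.hpp>
#include <boost/mpi.hpp>
#include <utility>

using namespace boost;
using boost::graph::distributed::mpi_process_group;

int main(int argc, char* argv[])
{
  boost::mpi::environment env(argc, argv);

  typedef adjacency_list<vecS, distributedS<mpi_process_group, vecS>,
                         undirectedS,
                         property<vertex_distance_t, std::size_t> > Graph;

  // Two vertices, one edge, block-distributed over the ranks.
  std::pair<int, int> e(0, 1);
  Graph g(&e, &e + 1, 2);

  property_map<Graph, vertex_distance_t>::type dist = get(vertex_distance, g);

  // As I understand it, if vertex(1, g) is owned by another process, this
  // put() only updates the local ghost cell and records a pending update.
  put(dist, vertex(1, g), std::size_t(1));

  // Collective call on every process: pending ghost-cell updates are shipped
  // to the owners (and, depending on the consistency model, fresh values are
  // sent back).
  synchronize(dist);
  return 0;
}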
Please help.
Regards,
-Mahesh
From: Marcin Zalewski <marcin.zalewski_at_[hidden]>
To: boost-users_at_[hidden]
Date: 07/30/2015 10:03 AM
Subject: Re: [Boost-users] Details of boost parallel graph
implementation
Sent by: "Boost-users" <boost-users-bounces_at_[hidden]>
On Wed, Jul 29, 2015 at 4:26 AM Mayank Bakshi <mayank.bakshi_at_[hidden]>
wrote:
Hello,
I am working on the MPI implementation of Boost breadth-first search. What
are the algorithmic details for BFS in Boost.MPI? How is the graph processed
by each rank that we provide to mpirun (load distribution among the different
processes)?
The sequential BGL breadth_first_search is overloaded for distributed graphs:
https://github.com/boostorg/graph/blob/master/include/boost/graph/breadth_first_search.hpp
Look for the mpl::true_ and mpl::bool_ to see where the distributed vs.
non-distributed selection is made. graph_parallel provides some additional
overloads of the helper, but it does not define a brand-new BFS.
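As a simplified illustration of that dispatch (this is not the exact BGL
source, only the idiom), the helper overload is picked by an mpl::bool_ tag
computed from the graph type:

#include <boost/mpl/bool.hpp>
#include <iostream>

// Sequential path: chosen when the tag evaluates to mpl::false_.
template <class Graph>
void bfs_helper(const Graph&, boost::mpl::false_)
{ std::cout << "sequential BFS helper\n"; }

// Distributed (graph_parallel) path: chosen when the tag is mpl::true_.
template <class Graph>
void bfs_helper(const Graph&, boost::mpl::true_)
{ std::cout << "distributed BFS helper\n"; }

// Toy stand-in for a graph type; the real code derives the flag from the
// graph's traversal category, not from a member like this.
struct toy_distributed_graph { static const bool is_distributed = true; };

template <class Graph>
void bfs_dispatch(const Graph& g)
{
  bfs_helper(g, boost::mpl::bool_<Graph::is_distributed>());
}

int main() { bfs_dispatch(toy_distributed_graph()); }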
The whole magic happens in the queue, graph, and property map data
structures. The queue distributes work and also implements a global "empty"
check, and the property maps allow accessing remote vertices. So, if you need
an MPI BFS, one is already there.
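To give a rough idea of the global "empty" check (this is not the actual PBGL
distributed queue code, only the concept): the queue is globally empty only
when every rank's local queue is empty and no messages are still in flight.
With Boost.MPI, the agreement part looks conceptually like this:

#include <boost/mpi.hpp>
#include <deque>
#include <functional>
#include <iostream>

// Conceptual sketch: the real distributed queue also has to account for
// work items that are still in transit between processes.
bool globally_empty(const boost::mpi::communicator& world,
                    const std::deque<int>& local_queue)
{
  int local_done = local_queue.empty() ? 1 : 0;
  int all_done = 0;
  // Logical AND across ranks (1 only if every rank reported 1). This is a
  // collective call, so every process must reach it.
  boost::mpi::all_reduce(world, local_done, all_done, std::multiplies<int>());
  return all_done != 0;
}

int main(int argc, char* argv[])
{
  boost::mpi::environment env(argc, argv);
  boost::mpi::communicator world;
  std::deque<int> local_queue;                       // empty on every rank
  bool done = globally_empty(world, local_queue);    // all ranks call this
  if (world.rank() == 0)
    std::cout << "globally empty: " << done << "\n";
  return 0;
}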