Subject: [Boost-commit] svn:boost r52387 - in trunk/libs/graph_parallel/doc: . html
From: jewillco_at_[hidden]
Date: 2009-04-14 12:12:39


Author: jewillco
Date: 2009-04-14 12:12:38 EDT (Tue, 14 Apr 2009)
New Revision: 52387
URL: http://svn.boost.org/trac/boost/changeset/52387

Log:
Added MPI BSP process group docs
Added:
   trunk/libs/graph_parallel/doc/html/mpi_bsp_process_group.html (contents, props changed)
   trunk/libs/graph_parallel/doc/mpi_bsp_process_group.rst (contents, props changed)
Text files modified:
   trunk/libs/graph_parallel/doc/CMakeLists.txt | 7 ++++---
   trunk/libs/graph_parallel/doc/distributedS.rst | 2 +-
   trunk/libs/graph_parallel/doc/html/distributedS.html | 4 ++--
   3 files changed, 7 insertions(+), 6 deletions(-)

Modified: trunk/libs/graph_parallel/doc/CMakeLists.txt
==============================================================================
--- trunk/libs/graph_parallel/doc/CMakeLists.txt (original)
+++ trunk/libs/graph_parallel/doc/CMakeLists.txt 2009-04-14 12:12:38 EDT (Tue, 14 Apr 2009)
@@ -19,6 +19,7 @@
   overview
   page_rank
   process_group
+ mpi_bsp_process_group
   simple_trigger
   strong_components
   tsin_depth_first_visit
@@ -44,13 +45,13 @@
 set(PBGL_DOC_TARGETS)
 separate_arguments(RST2HTML_FLAGS)
 foreach(DOC ${PBGL_DOCS})
- add_custom_command(OUTPUT "${PBGL_BINARY_DIR}/libs/graph/doc/parallel/${DOC}.html"
+ add_custom_command(OUTPUT "${PBGL_BINARY_DIR}/libs/graph_parallel/doc/${DOC}.html"
     COMMAND "${RST2HTML}"
- ARGS ${RST2HTML_FLAGS} "${PBGL_SOURCE_DIR}/libs/graph/doc/parallel/${DOC}.rst"
+ ARGS ${RST2HTML_FLAGS} "${PBGL_SOURCE_DIR}/libs/graph_parallel/doc/${DOC}.rst"
          "${PBGL_BINARY_DIR}/libs/graph/doc/parallel/${DOC}.html"
     COMMENT "Generating document ${DOC}.html..."
     )
- list(APPEND PBGL_DOC_TARGETS "${PBGL_BINARY_DIR}/libs/graph/doc/parallel/${DOC}.html")
+ list(APPEND PBGL_DOC_TARGETS "${PBGL_BINARY_DIR}/libs/graph_parallel/doc/${DOC}.html")
 endforeach(DOC)
 
 add_custom_target(doc ALL

Modified: trunk/libs/graph_parallel/doc/distributedS.rst
==============================================================================
--- trunk/libs/graph_parallel/doc/distributedS.rst (original)
+++ trunk/libs/graph_parallel/doc/distributedS.rst 2009-04-14 12:12:38 EDT (Tue, 14 Apr 2009)
@@ -45,5 +45,5 @@
 
 .. _adjacency_list: http://www.boost.org/libs/graph/doc/adjacency_list.html
 .. _Distributed adjacency list: distributed_adjacency_list.html
-.. _Process group: ../parallel/ProcessGroup.html
+.. _Process group: process_group.html
 

Modified: trunk/libs/graph_parallel/doc/html/distributedS.html
==============================================================================
--- trunk/libs/graph_parallel/doc/html/distributedS.html (original)
+++ trunk/libs/graph_parallel/doc/html/distributedS.html 2009-04-14 12:12:38 EDT (Tue, 14 Apr 2009)
@@ -34,7 +34,7 @@
 <dt><strong>ProcessGroup</strong>:</dt>
 <dd>The type of the process group over which the property map is
 distributed and also the medium for communication. This type must
-model the <a class="reference external" href="../parallel/ProcessGroup.html">Process Group</a> concept, but certain data structures may
+model the <a class="reference external" href="process_group.html">Process Group</a> concept, but certain data structures may
 place additional requirements on this parameter.</dd>
 <dt><strong>LocalSelector</strong>:</dt>
 <dd>A selector type (e.g., <tt class="docutils literal"><span class="pre">vecS</span></tt>) that indicates how vertices or
@@ -47,7 +47,7 @@
 </div>
 <div class="footer">
 <hr class="footer" />
-Generated on: 2009-04-11 17:09 UTC.
+Generated on: 2009-04-13 17:18 UTC.
 Generated by <a class="reference external" href="http://docutils.sourceforge.net/">Docutils</a> from <a class="reference external" href="http://docutils.sourceforge.net/rst.html">reStructuredText</a> source.
 
 </div>

Added: trunk/libs/graph_parallel/doc/html/mpi_bsp_process_group.html
==============================================================================
--- (empty file)
+++ trunk/libs/graph_parallel/doc/html/mpi_bsp_process_group.html 2009-04-14 12:12:38 EDT (Tue, 14 Apr 2009)
@@ -0,0 +1,128 @@
+<?xml version="1.0" encoding="utf-8" ?>
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+<head>
+<meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+<meta name="generator" content="Docutils 0.6: http://docutils.sourceforge.net/" />
+<title>Parallel BGL MPI BSP Process Group</title>
+<link rel="stylesheet" href="../../../../rst.css" type="text/css" />
+</head>
+<body>
+<div class="document" id="logo-mpi-bsp-process-group">
+<h1 class="title"><a class="reference external" href="http://www.osl.iu.edu/research/pbgl"><img align="middle" alt="Parallel BGL" class="align-middle" src="http://www.osl.iu.edu/research/pbgl/images/pbgl-logo.png" /></a> MPI BSP Process Group</h1>
+
+<!-- Copyright (C) 2004-2009 The Trustees of Indiana University.
+Use, modification and distribution is subject to the Boost Software
+License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
+http://www.boost.org/LICENSE_1_0.txt) -->
+<div class="contents topic" id="contents">
+<p class="topic-title first">Contents</p>
+<ul class="simple">
+<li><a class="reference internal" href="#introduction" id="id1">Introduction</a></li>
+<li><a class="reference internal" href="#where-defined" id="id2">Where Defined</a></li>
+<li><a class="reference internal" href="#reference" id="id3">Reference</a></li>
+</ul>
+</div>
+<div class="section" id="introduction">
+<h1><a class="toc-backref" href="#id1">Introduction</a></h1>
+<p>The MPI <tt class="docutils literal"><span class="pre">mpi_process_group</span></tt> is an implementation of the <a class="reference external" href="process_group.html">process
+group</a> interface using the Message Passing Interface (MPI). It is the
+primary process group used in the Parallel BGL at this time.</p>
+</div>
+<div class="section" id="where-defined">
+<h1><a class="toc-backref" href="#id2">Where Defined</a></h1>
+<p>Header <tt class="docutils literal"><span class="pre">&lt;boost/graph/distributed/mpi_process_group.hpp&gt;</span></tt></p>
+</div>
+<div class="section" id="reference">
+<h1><a class="toc-backref" href="#id3">Reference</a></h1>
+<pre class="literal-block">
+namespace boost { namespace graph { namespace distributed {
+
+class mpi_process_group
+{
+public:
+ typedef boost::mpi::communicator communicator_type;
+
+ // Process group constructors
+ mpi_process_group(communicator_type comm = communicator_type());
+ mpi_process_group(std::size_t num_headers, std::size_t buffer_size,
+ communicator_type comm = communicator_type());
+
+ mpi_process_group();
+ mpi_process_group(const mpi_process_group&amp;, boost::parallel::attach_distributed_object);
+
+ // Triggers
+ template&lt;typename Type, typename Handler&gt;
+ void trigger(int tag, const Handler&amp; handler);
+
+ template&lt;typename Type, typename Handler&gt;
+ void trigger_with_reply(int tag, const Handler&amp; handler);
+
+ trigger_receive_context trigger_context() const;
+
+ // Helper operations
+ void poll();
+ mpi_process_group base() const;
+};
+
+// Process query
+int process_id(const mpi_process_group&amp;);
+int num_processes(const mpi_process_group&amp;);
+
+// Message transmission
+template&lt;typename T&gt;
+ void send(const mpi_process_group&amp; pg, int dest, int tag, const T&amp; value);
+
+template&lt;typename T&gt;
+ void receive(const mpi_process_group&amp; pg, int source, int tag, T&amp; value);
+
+optional&lt;std::pair&lt;int, int&gt; &gt; probe(const mpi_process_group&amp; pg);
+
+// Synchronization
+void synchronize(const mpi_process_group&amp; pg);
+
+// Out-of-band communication
+template&lt;typename T&gt;
+ void send_oob(const mpi_process_group&amp; pg, int dest, int tag, const T&amp; value);
+
+template&lt;typename T, typename U&gt;
+  void send_oob_with_reply(const mpi_process_group&amp; pg, int dest, int tag,
+                           const T&amp; send_value, U&amp; receive_value);
+
+template&lt;typename T&gt;
+ void receive_oob(const mpi_process_group&amp; pg, int source, int tag, T&amp; value);
+
+} } }
+</pre>
+<p>Since the <tt class="docutils literal"><span class="pre">mpi_process_group</span></tt> is an implementation of the <a class="reference external" href="process_group.html">process
+group</a> interface, we omit the description of most of the functions in
+the prototype. Two constructors deserve special mention:</p>
+<pre class="literal-block">
+mpi_process_group(communicator_type comm = communicator_type());
+</pre>
+<p>The constructor can take an optional MPI communicator. By default, a communicator
+constructed from MPI_COMM_WORLD is used.</p>
+<pre class="literal-block">
+mpi_process_group(std::size_t num_headers, std::size_t buffer_size,
+ communicator_type comm = communicator_type());
+</pre>
+<p>For performance fine-tuning, the maximum number of headers in a message batch
+(num_headers) and the maximum combined size of batched messages (buffer_size)
+can be specified. The maximum message size of a batch is
+16*num_headers+buffer_size. Sensible default values, found by optimizing
+a typical application on a cluster with an Ethernet network, are num_headers=64K
+and buffer_size=1MB, for a total maximum batch message size of 2MB.</p>
+<hr class="docutils" />
+<p>Copyright (C) 2007 Douglas Gregor</p>
+<p>Copyright (C) 2007 Matthias Troyer</p>
+</div>
+</div>
+<div class="footer">
+<hr class="footer" />
+Generated on: 2009-04-14 16:11 UTC.
+Generated by <a class="reference external" href="http://docutils.sourceforge.net/">Docutils</a> from <a class="reference external" href="http://docutils.sourceforge.net/rst.html">reStructuredText</a> source.
+
+</div>
+</body>
+</html>
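
To illustrate the messaging interface documented in the file above, here is a
minimal SPMD sketch that is not part of this commit: it assumes an MPI launch
via boost::mpi::environment, the PBGL convention of including
<boost/graph/use_mpi.hpp> before any distributed header, and an arbitrary tag
value. Rank 0 sends an integer to every other rank, and the messages are
delivered after the next synchronization step, in keeping with the BSP model.

    // Sketch only (not from this commit): point-to-point messaging with
    // mpi_process_group. Assumes Boost.MPI initialization and linking
    // against boost_mpi/boost_graph_parallel; the tag value is arbitrary.
    #include <boost/graph/use_mpi.hpp>
    #include <boost/graph/distributed/mpi_process_group.hpp>
    #include <boost/mpi/environment.hpp>
    #include <iostream>

    int main(int argc, char* argv[])
    {
      boost::mpi::environment env(argc, argv);          // initialize MPI
      boost::graph::distributed::mpi_process_group pg;  // uses MPI_COMM_WORLD

      const int tag = 0;
      if (process_id(pg) == 0) {
        // Rank 0 sends a value to every other rank; messages are batched
        // until the end of the current superstep.
        for (int dest = 1; dest < num_processes(pg); ++dest)
          send(pg, dest, tag, 42);
      }

      synchronize(pg);  // end of superstep: queued messages are delivered

      if (process_id(pg) != 0) {
        int value;
        receive(pg, 0, tag, value);  // receive rank 0's message
        std::cout << "process " << process_id(pg) << " received " << value << "\n";
      }
      return 0;
    }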

Added: trunk/libs/graph_parallel/doc/mpi_bsp_process_group.rst
==============================================================================
--- (empty file)
+++ trunk/libs/graph_parallel/doc/mpi_bsp_process_group.rst 2009-04-14 12:12:38 EDT (Tue, 14 Apr 2009)
@@ -0,0 +1,125 @@
+.. Copyright (C) 2004-2009 The Trustees of Indiana University.
+ Use, modification and distribution is subject to the Boost Software
+ License, Version 1.0. (See accompanying file LICENSE_1_0.txt or copy at
+ http://www.boost.org/LICENSE_1_0.txt)
+
+============================
+|Logo| MPI BSP Process Group
+============================
+
+.. contents::
+
+Introduction
+------------
+
+The MPI ``mpi_process_group`` is an implementation of the `process
+group`_ interface using the Message Passing Interface (MPI). It is the
+primary process group used in the Parallel BGL at this time.
+
+Where Defined
+-------------
+
+Header ``<boost/graph/distributed/mpi_process_group.hpp>``
+
+Reference
+---------
+
+::
+
+ namespace boost { namespace graph { namespace distributed {
+
+ class mpi_process_group
+ {
+ public:
+ typedef boost::mpi::communicator communicator_type;
+
+ // Process group constructors
+ mpi_process_group(communicator_type comm = communicator_type());
+ mpi_process_group(std::size_t num_headers, std::size_t buffer_size,
+ communicator_type comm = communicator_type());
+
+ mpi_process_group();
+ mpi_process_group(const mpi_process_group&, boost::parallel::attach_distributed_object);
+
+ // Triggers
+ template<typename Type, typename Handler>
+ void trigger(int tag, const Handler& handler);
+
+ template<typename Type, typename Handler>
+ void trigger_with_reply(int tag, const Handler& handler);
+
+ trigger_receive_context trigger_context() const;
+
+ // Helper operations
+ void poll();
+ mpi_process_group base() const;
+ };
+
+ // Process query
+ int process_id(const mpi_process_group&);
+ int num_processes(const mpi_process_group&);
+
+ // Message transmission
+ template<typename T>
+ void send(const mpi_process_group& pg, int dest, int tag, const T& value);
+
+ template<typename T>
+ void receive(const mpi_process_group& pg, int source, int tag, T& value);
+
+ optional<std::pair<int, int> > probe(const mpi_process_group& pg);
+
+ // Synchronization
+ void synchronize(const mpi_process_group& pg);
+
+ // Out-of-band communication
+ template<typename T>
+ void send_oob(const mpi_process_group& pg, int dest, int tag, const T& value);
+
+ template<typename T, typename U>
+  void send_oob_with_reply(const mpi_process_group& pg, int dest, int tag,
+                           const T& send_value, U& receive_value);
+
+ template<typename T>
+ void receive_oob(const mpi_process_group& pg, int source, int tag, T& value);
+
+ } } }
+
+Since the ``mpi_process_group`` is an implementation of the `process
+group`_ interface, we omit the description of most of the functions in
+the prototype. Two constructors deserve special mention:
+
+::
+
+ mpi_process_group(communicator_type comm = communicator_type());
+
+The constructor can take an optional MPI communicator. By default, a communicator
+constructed from MPI_COMM_WORLD is used.
+
+::
+
+ mpi_process_group(std::size_t num_headers, std::size_t buffer_size,
+ communicator_type comm = communicator_type());
+
+
+For performance fine-tuning, the maximum number of headers in a message batch
+(num_headers) and the maximum combined size of batched messages (buffer_size)
+can be specified. The maximum message size of a batch is
+16*num_headers+buffer_size. Sensible default values, found by optimizing
+a typical application on a cluster with an Ethernet network, are num_headers=64K
+and buffer_size=1MB, for a total maximum batch message size of 2MB.
+
+
+
+-----------------------------------------------------------------------------
+
+Copyright (C) 2007 Douglas Gregor
+
+Copyright (C) 2007 Matthias Troyer
+
+.. |Logo| image:: http://www.osl.iu.edu/research/pbgl/images/pbgl-logo.png
+ :align: middle
+ :alt: Parallel BGL
+ :target: http://www.osl.iu.edu/research/pbgl
+
+.. _process group: process_group.html
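
The batching parameters described in the tuning paragraph above can also be
passed explicitly. The following sketch is not part of this commit; it simply
spells out the documented defaults of num_headers=64K and buffer_size=1MB and
passes an explicit Boost.MPI communicator, which is optional.

    // Sketch only (not from this commit): mpi_process_group with explicit
    // batching parameters. The values are the defaults quoted in the new
    // documentation; 16*num_headers + buffer_size gives a 2MB maximum batch.
    #include <boost/graph/use_mpi.hpp>
    #include <boost/graph/distributed/mpi_process_group.hpp>
    #include <boost/mpi/environment.hpp>
    #include <boost/mpi/communicator.hpp>
    #include <cstddef>

    int main(int argc, char* argv[])
    {
      boost::mpi::environment env(argc, argv);
      boost::mpi::communicator world;   // communicator_type is boost::mpi::communicator

      const std::size_t num_headers = 64 * 1024;    // 64K headers per batch
      const std::size_t buffer_size = 1024 * 1024;  // 1MB of batched payload

      boost::graph::distributed::mpi_process_group pg(num_headers, buffer_size, world);

      // pg can now be used wherever a Process Group is expected, e.g. as the
      // process group of a distributed adjacency_list.
      synchronize(pg);
      return 0;
    }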

