Subject: Re: [boost] [BGL] Multithread problem inserting edges and exploring the same graph
From: Cosimo Calabrese (cosimo.calabrese_at_[hidden])
Date: 2009-09-22 16:08:01
>
> 1. When does the shortest paths solution have to be correct (i.e. match
> the graph exactly)?
Well, I always need the exact solution.
> 2. Are there any constraints as to when your 'integrator' thread can
> call add_edge() (this relates to 1. pretty directly)
No, there aren't: add_edge() can be called at any moment.
>
> Now the solution I propose, since you have a monotonically increasing
> edge set you can actually compute an incremental solution to the SSSP
> problem rather than completely re-running Dijkstra. Basically you can
> patch up the SSSP solution when you do the edge addition in O(N) time
> (expected time should be much lower but depends on parameters of your
> graph that I don't have so I won't try to explain in detail). Either
> way this is either somewhat better, or much better than O(N log N) to
> re-run Dijkstra. All you do is push the new weight onto the Dijkstra
> queue and run Dijkstra as normal.
Perhaps it's better if I explain my real problem, because it may lead to
a simpler solution. The scenario with the integrator thread and the
explorer thread working together is a simplified version of it.
My real problem is the following: I'm designing a service that bases its
computations on a graph. The graph is huge, as I've already described.
The vertices and edges have many properties; one of them is an ID. The
elements of the graph that I load initially have ID = 0; they are the
'original' elements of the graph.
The service can receive requests from many clients at the same time; it
must process the requests and return the responses to the clients. The
service creates a 'worker' thread for each request. Every request type
launches a particular thread type, but every thread basically does the
same things, in sequence:
1 - the thread adds 'N' new vertices;
2 - the thread adds 2 edges for every vertex added in the previous step,
to connect the new vertex to the source and the target of an existing
edge (e.g. the existing edge is A->B; I insert a new vertex V and create
the A->V edge and the V->B edge); so I've now added '2*N' new edges;
this action was simulated by the 'integrator' thread (see the sketch
just after this list);
3 - every thread has a unique ID > 0; I assign that ID to the ID
property of the new vertices and edges added in steps 1 and 2; the
number of new vertices and edges is very small compared to the entire
graph;
4 - the thread creates a graph view with a filtered_graph adaptor, whose
edge and vertex predicates keep the elements with ID == 0 or
ID == currentThreadID; in practice, every thread excludes the vertices
and edges added by the other threads;
5 - the thread launches Dijkstra's algorithm; the source is one of the
vertices added in step 1; the distance map, the predecessor map and the
weight map don't belong to the graph, they are external to it (a sketch
of steps 4 and 5 follows the graph definition below);
6 - when Dijkstra's algorithm terminates, I read the results of the
exploration and produce the response;
7 - now I must 'clean' the graph, i.e. erase the elements added in steps
1 and 2; for this there is a cleaner thread; the 'worker' thread sends a
'clean' request to the cleaner thread, which deletes the elements that
belong to the 'worker' thread; this operation runs at a cadence that
depends on the CPU load; it might do one deletion per second, two per
second, etc...
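
For steps 1 to 3, each worker does essentially the following (a minimal
sketch, not my real code; split_edge is just an illustrative name, and it
assumes the Graph typedef given further down in this mail, with the custom
edge_ID_t tag installed as usual; since the posted vertex properties carry
no ID, only the two new edges are tagged here):

typedef boost::graph_traits<Graph>::vertex_descriptor Vertex;
typedef boost::graph_traits<Graph>::edge_descriptor   Edge;

Vertex split_edge(Graph& g, Edge ab, EdgeID thread_id)
{
  Vertex a = source(ab, g);
  Vertex b = target(ab, g);

  Vertex v  = add_vertex(g);              // step 1: the new vertex V
  Edge   av = add_edge(a, v, g).first;    // step 2: the A->V edge ...
  Edge   vb = add_edge(v, b, g).first;    //         ... and the V->B edge

  put(boost::edge_ID, g, av, thread_id);  // step 3: tag the new elements
  put(boost::edge_ID, g, vb, thread_id);  //         with the worker's ID
  return v;
}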
All the threads can potentially work together. There are no timing
constraints: a thread can start at any moment (the moment its request
arrives).
The insertion of the extra vertices and edges is done on the unfiltered
graph; the exploration is done on the filtered graph.
Dijkstra's source is never the same one, it changes every time. So I
think the incremental solution is inappropriate. What do you think
about it?
If it can help, this is my graph definition:

typedef property< vertex_name_t, string,
        property< vertex_originality_t, bool > > vertexs_properties;

typedef property< edge_ID_t, EdgeID,
        property< edge_weight_t, long,
        property< edge_originality_t, bool > > > edges_properties;

typedef adjacency_list< vecS, vecS, directedS,
        vertexs_properties, edges_properties > Graph;
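
To make steps 4 and 5 concrete, here is a minimal sketch (not my real code)
of the per-thread view and the Dijkstra call, built on the typedefs above.
The EdgeID type, the tag installation and the names by_thread_id / explore
are placeholders; only the edge predicate is shown, the vertex predicate
would be analogous.

#include <string>
#include <vector>
#include <boost/graph/adjacency_list.hpp>
#include <boost/graph/filtered_graph.hpp>
#include <boost/graph/dijkstra_shortest_paths.hpp>
#include <boost/property_map/property_map.hpp>

typedef long EdgeID; // placeholder: the real EdgeID type is not shown here

// Install the custom property tags in the usual way.
namespace boost {
  enum edge_ID_t            { edge_ID };
  enum vertex_originality_t { vertex_originality };
  enum edge_originality_t   { edge_originality };
  BOOST_INSTALL_PROPERTY(edge, ID);
  BOOST_INSTALL_PROPERTY(vertex, originality);
  BOOST_INSTALL_PROPERTY(edge, originality);
}

using namespace boost;

typedef property< vertex_name_t, std::string,
        property< vertex_originality_t, bool > > vertexs_properties;
typedef property< edge_ID_t, EdgeID,
        property< edge_weight_t, long,
        property< edge_originality_t, bool > > > edges_properties;
typedef adjacency_list< vecS, vecS, directedS,
        vertexs_properties, edges_properties > Graph;

// Step 4: edge predicate that keeps the 'original' edges (ID == 0) and the
// edges added by the current worker thread.
struct by_thread_id {
  by_thread_id() { }
  by_thread_id(property_map<Graph, edge_ID_t>::const_type ids, EdgeID me)
    : m_ids(ids), m_me(me) { }
  bool operator()(graph_traits<Graph>::edge_descriptor e) const {
    EdgeID id = get(m_ids, e);
    return id == 0 || id == m_me;
  }
  property_map<Graph, edge_ID_t>::const_type m_ids;
  EdgeID m_me;
};

// Step 5: run Dijkstra on the per-thread view with external result maps.
void explore(const Graph& g,
             graph_traits<Graph>::vertex_descriptor src,
             EdgeID thread_id)
{
  typedef filtered_graph<Graph, by_thread_id> View;
  View view(g, by_thread_id(get(edge_ID, g), thread_id));

  std::vector<long> distance(num_vertices(g));
  std::vector<graph_traits<Graph>::vertex_descriptor> pred(num_vertices(g));

  dijkstra_shortest_paths(view, src,
      distance_map(make_iterator_property_map(distance.begin(),
                                              get(vertex_index, g)))
      .predecessor_map(make_iterator_property_map(pred.begin(),
                                                  get(vertex_index, g)))
      .weight_map(get(edge_weight, g)));
}

The weight map in my real code is external to the graph rather than the
interior edge_weight property, but this is the shape of steps 4 and 5.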
I don't understand why the sync_adjacency_list adaptor doesn't work.
I've attached it to this post, in case someone wants to take a look at it.
Yes, I've synchronized every function so that only one thread at a time
can access the graph, but it's only a first step. I want to improve the
synchronization, allowing the read-only functions to run concurrently
and the read-write functions to run one at a time.
Every non-member function is atomic. Only a single thread can enter a
function and read from or write to the graph; the other threads wait.
When the thread exits the function, it leaves the graph in a stable
state, and another thread can lock the graph. Dijkstra's algorithm uses
these functions, so it must follow the same rules.
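
What I have in mind for that second step is a reader/writer lock, e.g.
boost::shared_mutex from Boost.Thread in place of the single
interprocess_mutex of the attached header. A minimal sketch of two of the
wrappers (the mutex name is a placeholder, the class is the
sync_adjacency_list from the attachment):

#include <utility>
#include <boost/thread/shared_mutex.hpp>
#include <boost/thread/locks.hpp>

// One process-wide reader/writer mutex.
boost::shared_mutex sync_adjacency_list_rw_mutex;

// Read-only wrapper: many readers may hold the lock at the same time.
template <typename G>
std::pair<typename sync_adjacency_list<G>::out_edge_iterator,
          typename sync_adjacency_list<G>::out_edge_iterator>
out_edges(typename sync_adjacency_list<G>::vertex_descriptor u,
          const sync_adjacency_list<G>& g)
{
  boost::shared_lock<boost::shared_mutex> lock( sync_adjacency_list_rw_mutex );
  return out_edges(u, g.m_g);
}

// Mutating wrapper: a writer gets exclusive access.
template <typename G>
std::pair<typename sync_adjacency_list<G>::edge_descriptor, bool>
add_edge(typename sync_adjacency_list<G>::vertex_descriptor u,
         typename sync_adjacency_list<G>::vertex_descriptor v,
         sync_adjacency_list<G>& g)
{
  boost::unique_lock<boost::shared_mutex> lock( sync_adjacency_list_rw_mutex );
  return add_edge(u, v, g.m_g);
}

Of course the lock is released before the returned iterators are used, so
by itself this doesn't remove the invalidation problem I describe below.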
Instead I get a heap corruption error message every time I launch the
application. I thought that if one explorer thread works only on its own
elements, it would be insensitive to another thread adding other
elements, but I'm probably wrong. I think one reason may be that the
filtered_graph adaptor uses filter_iterator, which isn't a real filter
but rather a 'skipping' iterator: when I call out_edges(v) on a filtered
graph, it always walks over all the real out-edges of v and simply skips
those that don't satisfy the filter predicates. So, if I'm adding new
out-edges to v, I'm modifying v's out-edge vector, and the iterators
returned by out_edges() are no longer reliable.
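
In other words, I suspect the interleaving that corrupts the heap is
essentially the following, compressed here into a single call sequence (a
sketch with placeholder names; View stands for the filtered view over g):

#include <boost/graph/graph_traits.hpp>
#include <boost/tuple/tuple.hpp>

template <typename View, typename Graph>
void racy_interleaving(const View& view, Graph& g,
    typename boost::graph_traits<Graph>::vertex_descriptor u,
    typename boost::graph_traits<Graph>::vertex_descriptor w)
{
  typename boost::graph_traits<View>::out_edge_iterator ei, ei_end;
  boost::tie(ei, ei_end) = out_edges(u, view); // explorer: iterators into u's
                                               // out-edge vector
  add_edge(u, w, g);                           // worker: with vecS out-edge
                                               // storage this may reallocate
                                               // that vector
  if (ei != ei_end) ++ei;                      // explorer: dangling iterator,
                                               // undefined behaviour
}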
The extreme solution is to find a way, in my specific problem, to avoid
adding any vertex or edge at all, because I've noticed that when the
explorer threads traverse the graph in read-only mode there are no
problems. But I hope I don't have to go that far.
Thanks for your time,
Cosimo Calabrese.
//=======================================================================
// Author: Cosimo Calabrese
//=======================================================================
#ifndef BOOST_SYNC_ADJACENCY_LIST_HPP
#define BOOST_SYNC_ADJACENCY_LIST_HPP
#include <boost/graph/graph_traits.hpp>
#include <boost/graph/properties.hpp>
#include <boost/graph/adjacency_iterator.hpp>
#include <boost/interprocess/sync/interprocess_mutex.hpp>
#include <boost/interprocess/sync/scoped_lock.hpp>
namespace boost {
//===========================================================================
// sync_adjacency_list
struct sync_adjacency_list_tag { };
template <typename Graph>
class sync_adjacency_list {
typedef graph_traits<Graph> Traits;
typedef sync_adjacency_list<Graph> self;
public:
typedef Graph graph_type;
// Constructor
sync_adjacency_list( Graph& g ) : m_g(g) { }
// Graph requirements
typedef typename Traits::vertex_descriptor vertex_descriptor;
typedef typename Traits::directed_category directed_category;
typedef typename Traits::traversal_category traversal_category;
typedef typename Traits::edge_parallel_category edge_parallel_category;
// IncidenceGraph requirements
typedef typename Traits::edge_descriptor edge_descriptor;
typedef typename Traits::out_edge_iterator out_edge_iterator;
typedef typename Traits::degree_size_type degree_size_type;
// AdjacencyGraph requirements
typedef typename adjacency_iterator_generator<self,
vertex_descriptor, out_edge_iterator>::type adjacency_iterator;
// BidirectionalGraph requirements
typedef typename Traits::in_edge_iterator in_edge_iterator;
// VertexListGraph requirements
typedef typename Traits::vertex_iterator vertex_iterator;
typedef typename Traits::vertices_size_type vertices_size_type;
// EdgeListGraph requirements
typedef typename Traits::edge_iterator edge_iterator;
typedef typename Traits::edges_size_type edges_size_type;
typedef typename ::boost::edge_property_type<Graph>::type edge_property_type;
typedef typename ::boost::vertex_property_type<Graph>::type vertex_property_type;
typedef sync_adjacency_list_tag graph_tag;
#ifndef BOOST_GRAPH_NO_BUNDLED_PROPERTIES
// Bundled properties support
template<typename Descriptor>
typename graph::detail::bundled_result<Graph, Descriptor>::type&
operator[](Descriptor x)
{ return const_cast<Graph&>(this->m_g)[x]; } // TODO: check whether the const_cast is needed
template<typename Descriptor>
typename graph::detail::bundled_result<Graph, Descriptor>::type const&
operator[](Descriptor x) const
{ return this->m_g[x]; }
#endif // BOOST_GRAPH_NO_BUNDLED_PROPERTIES
static vertex_descriptor null_vertex()
{
return Graph::null_vertex(); // TODO: or should it be Traits::null_vertex()?
}
// private:
Graph& m_g;
};
#ifndef BOOST_GRAPH_NO_BUNDLED_PROPERTIES
template<typename Graph>
struct vertex_bundle_type<sync_adjacency_list<Graph> >
: vertex_bundle_type<Graph> { };
template<typename Graph>
struct edge_bundle_type<sync_adjacency_list<Graph> >
: edge_bundle_type<Graph> { };
#endif // BOOST_GRAPH_NO_BUNDLED_PROPERTIES
//===========================================================================
// Non-member functions for the Synchronized Graph
// Helper functions
template <typename Graph>
inline sync_adjacency_list<Graph>
make_sync_adjacency_list(Graph& g) {
return sync_adjacency_list<Graph>(g);
}
template <typename Graph>
inline sync_adjacency_list<const Graph>
make_sync_adjacency_list(const Graph& g) {
return sync_adjacency_list<const Graph>(g);
}
// Global mutex that serializes every access to any sync_adjacency_list.
// Note: defining it in the header limits the header to a single translation
// unit; a declaration here plus a definition in a .cpp would avoid that.
boost::interprocess::interprocess_mutex sync_adjacency_list_mutex;
// IncidenceGraph requirements
template <typename G>
std::pair<typename sync_adjacency_list<G>::out_edge_iterator,
typename sync_adjacency_list<G>::out_edge_iterator>
out_edges(const typename sync_adjacency_list<G>::vertex_descriptor u,
const sync_adjacency_list<G>& g)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return out_edges(u, g.m_g);
}
template <typename G>
typename sync_adjacency_list<G>::vertex_descriptor
source(const typename sync_adjacency_list<G>::edge_descriptor e,
const sync_adjacency_list<G>& g)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return source(e, g.m_g);
}
template <typename G>
typename sync_adjacency_list<G>::vertex_descriptor
target(const typename sync_adjacency_list<G>::edge_descriptor e,
const sync_adjacency_list<G>& g)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return target(e, g.m_g);
}
template <typename G>
typename sync_adjacency_list<G>::degree_size_type
out_degree(const typename sync_adjacency_list<G>::vertex_descriptor u,
const sync_adjacency_list<G>& g)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return out_degree(u, g.m_g);
}
// BidirectionalGraph requirements
template <typename G>
std::pair<typename sync_adjacency_list<G>::in_edge_iterator,
typename sync_adjacency_list<G>::in_edge_iterator>
in_edges(typename sync_adjacency_list<G>::vertex_descriptor u,
const sync_adjacency_list<G>& g)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return in_edges(u, g.m_g);
}
template <typename G>
typename sync_adjacency_list<G>::degree_size_type
in_degree(typename sync_adjacency_list<G>::vertex_descriptor u,
const sync_adjacency_list<G>& g)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return in_degree(u, g.m_g);
}
template <typename G>
typename sync_adjacency_list<G>::degree_size_type
degree(typename sync_adjacency_list<G>::vertex_descriptor u,
const sync_adjacency_list<G>& g)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return degree(u, g.m_g);
}
// AdjacencyGraph requirements
template <typename G>
std::pair<typename sync_adjacency_list<G>::adjacency_iterator,
typename sync_adjacency_list<G>::adjacency_iterator>
adjacent_vertices(const typename sync_adjacency_list<G>::vertex_descriptor u,
const sync_adjacency_list<G>& g)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return adjacent_vertices(u, g.m_g);
}
// VertexListGraph requirements
template <typename G>
typename sync_adjacency_list<G>::vertices_size_type
num_vertices(const sync_adjacency_list<G>& g)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return num_vertices(g.m_g);
}
template <typename G>
std::pair<typename sync_adjacency_list<G>::vertex_iterator,
typename sync_adjacency_list<G>::vertex_iterator>
vertices(const sync_adjacency_list<G>& g)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return vertices(g.m_g);
}
// EdgeListGraph requirements
template <typename G>
typename sync_adjacency_list<G>::edges_size_type
num_edges(const sync_adjacency_list<G>& g)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return num_edges(g.m_g);
}
template <typename G>
std::pair<typename sync_adjacency_list<G>::edge_iterator,
typename sync_adjacency_list<G>::edge_iterator>
edges(const sync_adjacency_list<G>& g)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return edges(g.m_g);
}
// AdjacencyMatrix requirements
template <typename G>
std::pair<typename sync_adjacency_list<G>::edge_descriptor, bool>
edge(const typename sync_adjacency_list<G>::vertex_descriptor u,
const typename sync_adjacency_list<G>::vertex_descriptor v,
const sync_adjacency_list<G>& g)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return edge(u, v, g.m_g);
}
// EdgeMutableGraph
template <typename G>
std::pair<typename sync_adjacency_list<G>::edge_descriptor, bool>
add_edge(typename sync_adjacency_list<G>::vertex_descriptor u,
typename sync_adjacency_list<G>::vertex_descriptor v,
sync_adjacency_list<G>& g)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return add_edge(u, v, g.m_g);
}
// VertexMutableGraph
template <typename G>
typename sync_adjacency_list<G>::vertex_descriptor
add_vertex( sync_adjacency_list<G>& g )
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return add_vertex( g.m_g );
}
//===========================================================================
// Property map
namespace detail {
struct sync_adjacency_list_property_selector {
template <class SyncGraph, class Property, class Tag>
struct bind_ {
typedef typename SyncGraph::graph_type Graph;
typedef property_map<Graph, Tag> Map;
typedef typename Map::type type;
typedef typename Map::const_type const_type;
};
};
} // namespace detail
template <>
struct vertex_property_selector<sync_adjacency_list_tag> {
typedef detail::sync_adjacency_list_property_selector type;
};
template <>
struct edge_property_selector<sync_adjacency_list_tag> {
typedef detail::sync_adjacency_list_property_selector type;
};
template <typename G, typename Property>
typename property_map<G, Property>::type
get(Property p, sync_adjacency_list<G>& g)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return get(p, const_cast<G&>(g.m_g));
}
template <typename G, typename Property>
typename property_map<G, Property>::const_type
get(Property p, const sync_adjacency_list<G>& g)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return get(p, (const G&)g.m_g);
}
template <typename G, typename Property, typename Key>
typename property_map_value<G, Property>::type
get(Property p, const sync_adjacency_list<G>& g, const Key& k)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
return get(p, (const G&)g.m_g, k);
}
template <typename G, typename Property, typename Key, typename Value>
void
put(Property p, const sync_adjacency_list<G>& g, const Key& k,
const Value& val)
{
boost::interprocess::scoped_lock<boost::interprocess::interprocess_mutex>
lock( sync_adjacency_list_mutex );
put(p, const_cast<G&>(g.m_g), k, val);
}
} // namespace boost
#endif // BOOST_SYNC_ADJACENCY_LIST_HPP