

Subject: Re: [Boost-users] Using BGL with mutexes
From: Nick Edmonds (ngedmond_at_[hidden])
Date: 2012-01-03 18:30:27

On Jan 3, 2012, at 3:58 PM, Kelvin Chung wrote:

> On 2012-01-03 17:29:10 +0000, Nick Edmonds said:
>> On Jan 3, 2012, at 12:47 AM, Kelvin Chung wrote:
>>> I wish to use multiple threads to build a graph, and to topologically sort it. It would appear that the easiest way is to construct a graph adapter like so:
>>> template <class T>
>>> class Graph {
>>>     boost::adjacency_list<boost::vecS, boost::vecS, boost::directedS, T> internalGraph;
>>>     boost::shared_mutex mutex; // read-write lock enabler
>>>     // Add other suitable adapter methods here
>>> };
>>> I'm wondering whether this is a better idea than using the PBGL, since the PBGL seems more complicated.
>>> My specific scenario is that T is a functor type whose operation is expensive, and may depend on calling a similar functor (determining the dependent functors needed is cheap); thus, I'm trying to obtain all of the functors and then ordering them so that all dependent functors are always evaluated before the functor itself is evaluated. Thus, each thread would do something like this:
>>> Graph<T>& dependencyGraph = ...; // shared graph
>>> std::deque<T> tasks; // initially populated list
>>> while (!tasks.empty()) {
>>>     T task = tasks.front();
>>>     tasks.pop_front();
>>>     std::list<T> dependents = task.getDependents();
>>>     for (typename std::list<T>::const_iterator it = dependents.begin(); it != dependents.end(); ++it) {
>>>         dependencyGraph.addEdge(task, *it); // Creates a new vertex if either vertex does not exist
>>>         tasks.push_back(*it);
>>>     }
>>> }
>>> No vertices or edges are ever removed from the graph, unless the whole graph is being destroyed. Would bundling a mutex and a graph be a good idea for this purpose?
>> If your graph is directed and the vertex and edge sets grow monotonically (i.e. there's no removal) then threaded graph construction is pretty simple. I would create a 1:1 mapping of mutexes to vertices and lock the adjacencies for each vertex (you could make this mapping 1:N as well to reduce the number of locks required). Of course when adding a new vertex you'd still need to lock the entire adjacency list temporarily.
>> I have an implementation somewhere that does something very similar to this, though I preallocate the vertex list to avoid having to lock the whole graph and use a property map to map vertices to mutexes which gives me adjustable granularity in the locking. I modified the adjacency_list ctors and calls to add_edge, but you could just as easily use a wrapper.
> This sounds simple and straightforward enough. Just for clarification:
> * I believe that T must be default-constructible, copyable, and copy-assignable if it is to be used as a bundled property. So I'd either have to store the mutexes in a separate data structure in my wrapper (for my needs, T is hashable, so that's not an issue), or add the mutex to T itself (and either create custom copy and assign operators or change my internal graph to use smart pointers, both of which could turn ugly quickly).

Yep, having the mutex be *in* the bundled property will definitely cause problems for the reasons you pointed out. If you wrap the adjacency list you can have whatever structure you use to hold the mutexes stored in the wrapper class and indexed either using the vertex-index of the graph or by hashing T. This approach will allow you to easily enforce proper initialization of the mutexes.

> * I also believe that if I were to add an edge, then I'd write lock both the source and destination vertices (Or is it write lock the source and read lock the destination? Or is it write lock the source without locking the destination?).

In any case where the source and target vertices don't exist you'll need to write-lock the whole graph and add them. If you have a lot of contention on this lock performance will go to heck in a hurry so you might want to consider pre-processing to generate the vertex set if necessary.

If the graph is directed you only need to write-lock the source vertex. If the graph is bidirectional then you need to write-lock both the source and destination vertices *and*, very importantly, do it in the same order in all threads (e.g. in vertex-index order); otherwise you'll end up with deadlocks. If the graph is undirected then things are a bit trickier because of the way the properties of an edge and its anti-edge are linked.
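The ordering rule can be packaged in a small guard so no call site can get it wrong. This is a sketch using std::mutex; the idea carries over to boost::shared_mutex unchanged:

```cpp
#include <mutex>
#include <utility>
#include <vector>

// RAII guard that locks two per-vertex mutexes in vertex-index order --
// the same order in every thread -- so opposite-order deadlock is impossible.
class OrderedPairLock {
    std::mutex* first;
    std::mutex* second;
public:
    // Canonical acquisition order for a pair of vertex indices.
    static std::pair<std::size_t, std::size_t>
    lockOrderOf(std::size_t u, std::size_t v) {
        return u <= v ? std::pair<std::size_t, std::size_t>(u, v)
                      : std::pair<std::size_t, std::size_t>(v, u);
    }
    OrderedPairLock(std::vector<std::mutex>& locks, std::size_t u, std::size_t v) {
        std::pair<std::size_t, std::size_t> order = lockOrderOf(u, v);
        first = &locks[order.first];
        second = (u == v) ? nullptr : &locks[order.second]; // self-loop: one lock
        first->lock();
        if (second) second->lock();
    }
    ~OrderedPairLock() {                  // release in reverse order
        if (second) second->unlock();
        first->unlock();
    }
};
```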

> So, what I would end up with is the following wrapper if I am correct:
> template <class T>
> class Graph {
>     boost::adjacency_list<boost::vecS, boost::vecS, boost::directedS, T> internalGraph;
>     // boost::shared_mutex is not copyable, so heap allocation it is
>     boost::unordered_map<T, boost::shared_ptr<boost::shared_mutex> > vertexMutexes;
>     boost::shared_mutex mutex; // global lock for vertex management
>     // Other stuff goes here
> public:
>     void addEdge(const T& from, const T& to);
> };
> template <class T>
> void Graph<T>::addEdge(const T& from, const T& to) {
>     // vertexMutexes stores shared_ptrs, so dereference to get the mutex
>     boost::shared_mutex& from_mutex = *vertexMutexes[from];
>     boost::shared_mutex& to_mutex = *vertexMutexes[to];
>     // write lock both mutexes at once
>     boost::unique_lock<boost::shared_mutex> fromLock(from_mutex, boost::defer_lock);
>     boost::unique_lock<boost::shared_mutex> toLock(to_mutex, boost::defer_lock);
>     boost::lock(fromLock, toLock);

As long as both the 'from' and 'to' vertices exist here you're in good shape, modulo my comments about the locking requirements for different directed categories of the graph. If add_edge() on the internalGraph can trigger vertex addition here then you're in trouble.

Hope that helps,

> // Code to call add_edge() here
> }
> Is this accurate?

Boost-users list run by williamkempf at, kalb at, bjorn.karlsson at, gregod at, wekempf at