
Boost Users :

Subject: Re: [Boost-users] Using BGL with mutexes
From: Kelvin Chung (kelvSYC_at_[hidden])
Date: 2012-01-03 15:58:15


On 2012-01-03 17:29:10 +0000, Nick Edmonds said:

> On Jan 3, 2012, at 12:47 AM, Kelvin Chung wrote:
>
>> I wish to use multiple threads to build a graph, and to topologically
>> sort it. It would appear that the easiest way is to construct a graph
>> adapter like so:
>>
>> template <class T>
>> class Graph {
>>     boost::adjacency_list<boost::vecS, boost::vecS,
>>                           boost::directedS, T> internalGraph;
>>     boost::shared_mutex mutex; // read-write lock enabler
>>
>>     // Add other suitable adapter methods here
>> };
>>
>> I'm wondering whether this is a better idea than using the PBGL,
>> which seems more complicated.
>>
>> My specific scenario is that T is a functor type whose operation is
>> expensive and may depend on calling similar functors (determining the
>> dependent functors needed is cheap). I'm therefore trying to collect
>> all of the functors and then order them so that each functor's
>> dependencies are always evaluated before the functor itself. Thus,
>> each thread would do something like this:
>>
>> Graph<T>& dependencyGraph = ...; // shared graph
>> std::deque<T> tasks; // initially-populated work list
>> while (!tasks.empty()) {
>>     T task = tasks.front();
>>     tasks.pop_front();
>>
>>     std::list<T> dependents = task.getDependents();
>>     for (typename std::list<T>::const_iterator it = dependents.begin();
>>          it != dependents.end(); ++it) {
>>         // Creates a new vertex if either endpoint does not exist
>>         dependencyGraph.addEdge(task, *it);
>>         tasks.push_back(*it);
>>     }
>> }
>>
>> No vertices or edges are ever removed from the graph, unless the whole
>> graph is being destroyed. Would bundling a mutex and a graph be a good
>> idea for this purpose?
>
> If your graph is directed and the vertex and edge sets grow
> monotonically (i.e. there's no removal) then threaded graph
> construction is pretty simple. I would create a 1:1 mapping of mutexes
> to vertices and lock the adjacencies for each vertex (you could make
> this mapping 1:N as well to reduce the number of locks required). Of
> course when adding a new vertex you'd still need to lock the entire
> adjacency list temporarily.
>
> I have an implementation somewhere that does something very similar to
> this, though I preallocate the vertex list to avoid having to lock the
> whole graph and use a property map to map vertices to mutexes which
> gives me adjustable granularity in the locking. I modified the
> adjacency_list ctors and calls to add_edge, but you could just as
> easily use a wrapper.
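
The 1:N vertex-to-mutex mapping described above amounts to lock striping. A minimal illustrative sketch (the class and names here are my own invention, written with standard-library mutexes rather than the Boost types for self-containment):

```cpp
#include <array>
#include <cstddef>
#include <functional>
#include <mutex>

// Lock striping: a fixed pool of N mutexes, with each vertex hashed onto
// one stripe. N trades memory and setup cost against lock contention;
// N == number of vertices degenerates to the 1:1 mapping.
template <class Vertex, std::size_t N = 16>
class StripedLocks {
    std::array<std::mutex, N> stripes;
public:
    std::mutex& lockFor(const Vertex& v) {
        // The same vertex always hashes to the same stripe, so two
        // threads touching that vertex serialize on one mutex.
        return stripes[std::hash<Vertex>{}(v) % N];
    }
};
```

Distinct vertices may share a stripe, which is harmless for correctness (it only coarsens the locking), so locking order still matters when taking two stripes at once.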

This sounds simple and straightforward enough. Just for clarification:

* I believe that T must be default-constructible, copyable, and
copy-assignable if it is to be used as a bundled property. So I'd
either have to store the mutexes in a separate data structure in my
wrapper (for my needs, T is hashable, so that's not an issue), or add
the mutex to T itself (and either create custom copy and assign
operators or change my internal graph to use smart pointers, both of
which could turn ugly quickly).
* I also believe that to add an edge, I'd write-lock both the source
and destination vertices (or should I write-lock the source and
read-lock the destination? Or write-lock the source without locking
the destination at all?).

So, what I would end up with is the following wrapper if I am correct:

template <class T>
class Graph {
        boost::adjacency_list<boost::vecS, boost::vecS,
                              boost::directedS, T> internalGraph;

        // boost::shared_mutex is not copyable, so heap allocation it is
        boost::unordered_map<T, boost::shared_ptr<boost::shared_mutex> >
                vertexMutexes;
        boost::shared_mutex mutex; // global lock for vertex management
        // Other stuff goes here
public:
        void addEdge(const T& from, const T& to);
};

template <class T>
void Graph<T>::addEdge(const T& from, const T& to) {
        // vertexMutexes holds shared_ptrs, so dereference to get the mutexes.
        // (This assumes each shared_ptr was allocated when its vertex was
        // added; operator[] on a missing key yields a null pointer.)
        boost::shared_mutex& from_mutex = *vertexMutexes[from];
        boost::shared_mutex& to_mutex = *vertexMutexes[to];

        // write-lock both mutexes at once, avoiding deadlock
        boost::unique_lock<boost::shared_mutex> fromLock(from_mutex,
                boost::defer_lock);
        boost::unique_lock<boost::shared_mutex> toLock(to_mutex,
                boost::defer_lock);
        boost::lock(fromLock, toLock);

        // Code to call add_edge() here
}

Is this accurate?
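
For reference, the whole scheme (per-vertex locks for edge insertion, plus a global lock guarding vertex creation) can be sketched in self-contained form. This is not the Boost wrapper above: it uses the C++17 standard-library equivalents (std::shared_mutex for boost::shared_mutex, std::scoped_lock for boost::lock), a plain map of successor vectors stands in for the adjacency_list, and VertexData, vertexFor, and successors are names invented here for illustration:

```cpp
#include <cassert>
#include <memory>
#include <mutex>
#include <shared_mutex>
#include <string>
#include <unordered_map>
#include <vector>

template <class T>
class Graph {
    struct VertexData {
        std::vector<T> out;   // successor list (stand-in for the adjacency list)
        std::shared_mutex m;  // guards `out`
    };
    // unique_ptr because shared_mutex is neither copyable nor movable
    std::unordered_map<T, std::unique_ptr<VertexData>> vertices;
    std::shared_mutex mapMutex;  // global lock for vertex management

    // Find-or-create a vertex; only creation takes the map lock exclusively.
    VertexData& vertexFor(const T& v) {
        {
            std::shared_lock<std::shared_mutex> r(mapMutex);
            auto it = vertices.find(v);
            if (it != vertices.end()) return *it->second;
        }
        std::unique_lock<std::shared_mutex> w(mapMutex);
        auto& p = vertices[v];
        // Another thread may have created it between the two locks,
        // hence the null check before allocating.
        if (!p) p = std::make_unique<VertexData>();
        return *p;
    }

public:
    void addEdge(const T& from, const T& to) {
        VertexData& f = vertexFor(from);
        VertexData& t = vertexFor(to);
        if (&f == &t) {                   // self-loop: lock only once
            std::unique_lock<std::shared_mutex> l(f.m);
            f.out.push_back(to);
            return;
        }
        std::scoped_lock l(f.m, t.m);     // deadlock-free, like boost::lock
        f.out.push_back(to);
    }

    std::vector<T> successors(const T& v) {
        std::shared_lock<std::shared_mutex> r(mapMutex);
        auto it = vertices.find(v);
        if (it == vertices.end()) return {};
        std::shared_lock<std::shared_mutex> l(it->second->m);
        return it->second->out;
    }
};
```

The references returned by vertexFor stay valid across later insertions because the map stores pointers, which is what lets addEdge hold only the two vertex locks while writing.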


Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net