

Subject: Re: [Boost-users] Graph-Based Threaded Algorithms
From: Mike Marchywka (marchywka_at_[hidden])
Date: 2008-12-23 13:22:39


________________________________
> Date: Tue, 23 Dec 2008 12:54:55 -0500
> From: andrew.n.sutton_at_[hidden]
> To: boost-users_at_[hidden]
> Subject: Re: [Boost-users] Graph-Based Threaded Algorithms
>
>
> I am looking for some guidance from some of the graph and thread experts out there.
>
>
> I have a graph that I want to execute a breadth-first search (BFS) on. When each node is visited, it spawns a thread to execute some work. I must not execute a child node before the parent execution is completed, but I also want to maximize concurrency. In other words, I don't want to "serialize" the BFS to force proper execution ordering.
>
>
> Are there any ideas of how to accomplish this efficiently?

For the sake of argument, I'll assume you have a multi-processor system and that the work
assigned to each thread is large compared to the fixed costs of spawning, etc. Before you just
relegate everything to the thread scheduler, consider what benefits you may get from cache
coherency, and see whether doing everything on one thread, or refactoring the threads' work to
maximize coherency within each one, can reduce cache thrashing. The prefetcher
can do better if your code makes predictable memory accesses.
 
It is also important to distinguish literal multi-processor targets from multi-core ones. In
the latter case, cache coherency can be an issue (see my earlier post citing an IEEE article
on some problems others have found).
 
I would generally worry about memory access patterns first, and if you do use threads, make sure
they don't fight over the cache. Threads are great when you have asynchronous, unpredictable
events or can split work among processors, but I wouldn't use them as the
first line of attack, even on multi-core targets, when you can benefit from a more
predictable execution order.
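 
To make that single-thread baseline concrete, here is a minimal sketch that does the per-vertex
work directly in a BGL visitor during a serial BFS. The names `do_work` and the little example
graph are illustrative stand-ins, not anything from the original post:
 
    #include <boost/graph/adjacency_list.hpp>
    #include <boost/graph/breadth_first_search.hpp>
    #include <iostream>
 
    typedef boost::adjacency_list<boost::vecS, boost::vecS, boost::directedS> Graph;
    typedef boost::graph_traits<Graph>::vertex_descriptor Vertex;
 
    void do_work(Vertex v)                  // stand-in for the real per-vertex job
    {
        std::cout << "working on " << v << "\n";
    }
 
    struct serial_work_visitor : boost::default_bfs_visitor
    {
        // In a serial BFS a child is discovered only after its parent has been
        // discovered (and, with the work done here, fully processed), so the
        // parent-before-child constraint holds with no synchronization at all.
        void discover_vertex(Vertex v, const Graph&) const { do_work(v); }
    };
 
    int main()
    {
        Graph g(5);
        boost::add_edge(0, 1, g);
        boost::add_edge(0, 2, g);
        boost::add_edge(1, 3, g);
        boost::add_edge(2, 4, g);
        boost::breadth_first_search(g, boost::vertex(0, g),
                                    boost::visitor(serial_work_visitor()));
        return 0;
    }
 
The vertices are also touched in a predictable, queue-driven order here, which is exactly the
kind of access pattern the prefetcher can exploit.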
 
 
>
> It's been a while since I've done any serious MT programming, so I'm a bit rusty and probably a little behind the times.
>
>
> For the sake of argument, let's say you're not using a thread pool and don't have any limits on the number of threads you can spawn. You could associate a condition variable with each vertex using a property map, and make that available to your BFS visitor. By overriding on_tree_edge, you can get access to the parent/child vertices in the BFS tree. Your child vertex can spawn a new thread and wait on the condition variable associated with the parent. IIRC this means that you can have multiple children waiting on the parent. When the parent thread is done, it notifies the condition variable, waking up all of its dependent children. Of course, children wouldn't have to wait if the parent had already finished, so you'll probably want to make sure that there's a valid and initialized condition variable before waiting on it.
>
>
> Since you probably do have limits on the number of threads you're spawning, you might use a thread pool to control the pace at which the BFS proceeds. For example, the main thread would block whenever no worker thread is available.
>
>
> I think that's the way that I'd start...
>
> Andrew Sutton
> andrew.n.sutton_at_[hidden]
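 
Here is a rough sketch of the per-vertex condition-variable scheme Andrew describes above, using
Boost.Thread and a BFS visitor whose tree_edge hook spawns a worker that blocks until the
parent's work has finished. All names (node_state, do_work, run_after_parent, the example graph)
are illustrative, and the per-vertex state is kept in a plain vector indexed by vertex rather
than a property map, just to keep the sketch short:
 
    #include <boost/graph/adjacency_list.hpp>
    #include <boost/graph/breadth_first_search.hpp>
    #include <boost/thread.hpp>
    #include <boost/bind/bind.hpp>
    #include <vector>
    #include <iostream>
 
    typedef boost::adjacency_list<boost::vecS, boost::vecS, boost::directedS> Graph;
    typedef boost::graph_traits<Graph>::vertex_descriptor Vertex;
 
    struct node_state                       // one per vertex, indexed by vertex number
    {
        node_state() : done(false) {}
        boost::mutex mtx;
        boost::condition_variable cv;
        bool done;
    };
 
    void do_work(Vertex v)                  // stand-in for the real per-vertex job
    {
        std::cout << "working on " << v << "\n";
    }
 
    // Wait for the parent to finish, run the child's work, then wake the child's children.
    void run_after_parent(Vertex parent, Vertex child, std::vector<node_state>* st)
    {
        {
            boost::unique_lock<boost::mutex> lk((*st)[parent].mtx);
            while (!(*st)[parent].done)
                (*st)[parent].cv.wait(lk);
        }
        do_work(child);
        {
            boost::lock_guard<boost::mutex> lk((*st)[child].mtx);
            (*st)[child].done = true;
        }
        (*st)[child].cv.notify_all();       // multiple children may be waiting
    }
 
    struct spawning_visitor : boost::default_bfs_visitor
    {
        spawning_visitor(std::vector<node_state>* st, boost::thread_group* tg)
            : state(st), threads(tg) {}
 
        // tree_edge hands us the parent/child pair in the BFS tree.
        void tree_edge(boost::graph_traits<Graph>::edge_descriptor e, const Graph& g) const
        {
            Vertex parent = boost::source(e, g), child = boost::target(e, g);
            threads->create_thread(boost::bind(&run_after_parent, parent, child, state));
        }
 
        std::vector<node_state>* state;
        boost::thread_group* threads;
    };
 
    int main()
    {
        Graph g(5);
        boost::add_edge(0, 1, g);
        boost::add_edge(0, 2, g);
        boost::add_edge(1, 3, g);
        boost::add_edge(2, 4, g);
 
        // Built once up front and never resized, so the mutexes stay put.
        std::vector<node_state> state(boost::num_vertices(g));
 
        // The root has no parent: do its work up front and mark it done,
        // so its children never block (and never miss a notification).
        Vertex root = boost::vertex(0, g);
        do_work(root);
        state[root].done = true;
 
        boost::thread_group threads;
        boost::breadth_first_search(g, root,
            boost::visitor(spawning_visitor(&state, &threads)));
        threads.join_all();                 // the BFS itself only queues the work
    }
 
With a bounded thread pool in place of the unbounded thread_group, the submission call could
block when no worker is free, which would also pace the BFS as described in the quoted text.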

