Boost :

From: Andrey Melnikov (melnikov_at_[hidden])
Date: 2005-08-19 00:23:09

Oliver Kullmann wrote:
>>> 9. Why isn't thread cancellation or termination provided?
>>> There's a valid need for thread termination, so at some point we
>>> probably will include it, but only after we can find a truly safe
>>> (and portable) mechanism for this concept.
>>> How do you cancel threads?
>> The explanation in the FAQ doesn't explain the problem in detail.
>> Here's a brief example:
>> bool f1(X x)
>> {
>>     X* y = new X;
>>     // if f1() is aborted at this point,
>>     // *y will never be deallocated
>>     delete y;
>>     return false;
>> }
> So we would get a memory leak. The underlying reason is that threads don't have their
> own address space, and thus the problem seems inherent to the notion of threads?!

I illustrated only a memory leak from the C++ heap. Other kinds of memory
and other resources can leak too: temporary files won't be deleted (a
disk-space leak), locks on shared objects won't be released (deadlocks),
network sockets and other system handles won't be closed, and so on. It's
not just memory leaks; it's resource leaks in general. If your code
acquires resources, it should free them. Alternatively, you can register
every resource it acquires with an external entity, and that entity will
free them for you if you forget to. NT kernel services and the Apache
resource manager are examples of such entities. If you have specific
resources which aren't freed when you terminate a thread, one possible
solution would be to write a special manager for those resources.
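In modern C++ the usual way to guarantee such cleanup on every normal exit path, including a cooperative-cancellation exit, is RAII. A minimal sketch (the `TempFile` class here is hypothetical, purely for illustration):

```cpp
#include <cstdio>
#include <memory>
#include <string>

// Hypothetical RAII wrapper: the temporary file is removed in the
// destructor, so it cannot leak no matter how the function returns.
class TempFile {
public:
    explicit TempFile(std::string path) : path_(std::move(path)) {
        if (std::FILE* f = std::fopen(path_.c_str(), "w"))
            std::fclose(f);
    }
    ~TempFile() { std::remove(path_.c_str()); }
    const std::string& path() const { return path_; }
private:
    std::string path_;
};

bool f1() {
    auto y = std::make_unique<int>(42);  // freed on every exit path
    TempFile tmp("scratch.tmp");         // removed on every exit path
    if (*y != 42)
        return false;                    // early return: no leaks
    return true;                         // normal return: no leaks
}
```

Note that RAII only helps if the thread is allowed to unwind normally; a hard kill such as `TerminateThread` skips destructors entirely, which is exactly why cooperative cancellation is needed.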

>>>Can one compute the short-circuit parallel or with your "futures" ?!
>>This is a common problem in the world of parallel computations. It's due
>>to the nature of parallel execution and it's independent from
>>boost::thread, futures or any library.
>>Basically, the solution is always the same: f1() polls a flag
>>periodically, and if it is raised it exits.
>>To cancel the thread, you just raise the flag and the thread terminates:
>>class A
>>{
>>    Flag cancel_f1;
>>    bool f1(X x)
>>    {
>>        X* y = new X;
>>        for (int i = 1; i < 1000; i++)
>>        {
>>            // some computations, e.g. one iteration of the algorithm
>>            if (cancel_f1.is_raised())
>>            {
>>                delete y;
>>                return false;  // cancelled
>>            }
>>        }
>>        delete y;
>>        return actual_result;
>>    }
>>};
>>A a;
> This polling-solution doesn't look like a solution for
> what I have in mind.

Don't forget it's just a simple example. It can be made more elegant and
efficient if you need it.

> First of all, it's a source of
> inefficiency,

Poll rarely. There are many ways to keep the polling down to a very
small percentage of the total computation time. Polling has been used
widely for different purposes for a long time, and many efficient
techniques have been invented.
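One common technique is to check the flag only once every N iterations, so the per-iteration cost is a single cheap increment most of the time. A minimal sketch in modern C++ (the original post predates `std::atomic`, which I use here for the flag):

```cpp
#include <atomic>

std::atomic<bool> cancel_requested{false};

// Returns the number of iterations completed before finishing or
// being cancelled. The flag is checked only every 1024 iterations,
// so polling overhead is a tiny fraction of the total work.
long run_work(long total_iterations) {
    long done = 0;
    for (long i = 0; i < total_iterations; ++i) {
        // ... one iteration of the real algorithm would go here ...
        if ((i & 1023) == 0 &&
            cancel_requested.load(std::memory_order_relaxed)) {
            break;  // cooperative cancellation point
        }
        ++done;
    }
    return done;
}
```

To cancel from another thread, you simply store `true` into `cancel_requested`; the worker notices it at the next check point.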

> but much more important, it does not work
> with generic components: The component doing some computation
> should not need to worry about the circumstances under which
> the computation is performed.

From your posts I can see that you are in the early design phases of
your distributed framework, and you don't see the whole picture yet.

I understand your idea. But your code always has to comply with some
requirements, even if it's generic. The requirement to terminate
gracefully isn't that hard to implement. You'll face a lot of other
problems, and I bet your components will inevitably have to worry
about many other circumstances as well.

> But the polling-solution forces
> each component to provide measures for its own abortion,
> and this looks like a design nightmare to me: The library
> I'm developing consists of many components solving some sort
> of "very hard computational problem" (for example AI ...).
> The components on their own as well as any combination should
> possibly run in a parallel computation. I don't believe that
> the polling-solution, effectively polluting every component,
> is suitable for these demands.
> Altogether, it seems that threads are not suitable at all
> what I want to achieve, but *processes* are right here.

In general, processes aren't right either. They won't help you with all
possible kinds of resources.

> Perhaps I outline shortly for what purpose I need distributed
> computing:
> As I said, I'm developing a generic (actually, generative) library
> for solving hard problems (constraint satisfaction etc.). This library
> consists mainly of concepts and models, that is, data structures and
> algorithms.

From my experience with MPI, it looks like you should consider an MPI
cluster for your task. MPI is a framework for writing distributed
applications; it's standardized and widely used. I think that 99% of
large computational systems and supercomputers are based on the MPI
API. MPI supports multiprocessor systems as well as multicomputer
network clusters. It's common for a library that solves hard problems
to use MPI for work distribution, so IMO it would be a good choice
because it enables interoperation between different libraries.

For example, take a look at MPICH2. It's one of the MPI implementations,
and I found it very easy to deploy.

> Due to the complicated problem domain, most of the time
> it's completely unknown which of the many competing algorithmic resources
> you should use. So I need to create an environment, where the user of
> the library can start several algorithms (whether on a single processor
> or using multiple processors doesn't matter), monitor them if needed
> (how much progressing is it making? perhaps we should abort it??),

You see? Your components have to "provide measures" for reporting
progress anyway. I think that integrating the abort poll into your
progress-reporting routine won't introduce any additional code pollution.
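To make this concrete, here is a minimal sketch of what such a combined contract might look like; the `ProgressFn` type and the `solve` function are hypothetical names, not anything from Boost or the poster's library:

```cpp
#include <functional>

// Hypothetical contract: the component calls report() periodically;
// report() returns false when the controller wants the work aborted.
// A single callback thus carries both progress and cancellation.
using ProgressFn = std::function<bool(double fraction_done)>;

bool solve(int steps, const ProgressFn& report) {
    for (int i = 0; i < steps; ++i) {
        // ... one step of the actual computation would go here ...
        if (!report(double(i + 1) / steps))
            return false;  // aborted by the controller
    }
    return true;  // completed
}
```

The controller passes a lambda that records the progress value and checks its own abort flag, so the component itself never needs to know why or by whom it was cancelled.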

> possibly getting a result from them, possibly aborting them.
> (The algorithms would all run independently, using their own
> data; they communicate only with the central control, and this in
> a quite restricted way. The main point is, that it's easy to run
> whatever I want in parallel and control it.)
> The main motivation for distributed computing (of course, if possible,
> not just on a single computer, but using the Internet ...) here is not
> just that I have multiple processors which I want to exploit, but
> equally (or even more) important is the ease with which alternative
> algorithmic resources can be exploited, possibly gaining a super-linear
> speed-up (quite common in this area --- even on a single processor
> partitioning the (large) search space into different parts and searching
> through them in parallel can be faster than doing it sequentially).

The problem is that for such tasks your own high-level scheduler will be
faster than multiple OS threads on a single CPU, so you should rely on
OS threads/processes or MPI processes with care. If you introduce
high-level, domain-specific metrics into your work scheduler, you'll
probably get better performance. But of course you'll still need support
for multiple concurrent execution flows to be able to consume all
available CPU power.
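The basic shape of such a scheduler is to multiplex many small logical tasks onto a few OS threads, rather than spawning one OS thread per task. A minimal sketch, assuming the tasks are independent (a real scheduler would layer priorities and domain-specific metrics on top of this):

```cpp
#include <atomic>
#include <functional>
#include <thread>
#include <vector>

// Runs all tasks on a fixed-size pool of worker threads. Each worker
// repeatedly claims the next pending task index until none remain.
void run_tasks(std::vector<std::function<void()>>& tasks,
               unsigned num_threads) {
    std::atomic<std::size_t> next{0};
    std::vector<std::thread> pool;
    for (unsigned t = 0; t < num_threads; ++t) {
        pool.emplace_back([&] {
            for (std::size_t i = next++; i < tasks.size(); i = next++)
                tasks[i]();  // each task is executed exactly once
        });
    }
    for (auto& th : pool)
        th.join();
}
```

With, say, 1000 search-space partitions and 4 worker threads, all CPUs stay busy while the task count stays decoupled from the OS thread count.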

> Originally it seemed clear to me, that processes are the right tool here,
> while threads are not suitable. But then I didn't find any support in
> Boost for process control, only the boost thread library, and I wondered
> (hoped) that this could be used. But it seems that was just wishful thinking.

> So, to conclude, there seems to be no support in Boost out there, and my
> only chance there is to use the C library, and start forking ?!?!
> Looks like a tough fate to me.

No, please :) To support execution over a network you'd need to write a
network daemon for work distribution and solve a lot of other problems.
MPI supports all of this out of the box, so you don't need to reinvent
the wheel.

You definitely need MPI or a higher-level framework for distributed
computations. At the moment Boost doesn't provide such high-level
facilities.

> Now I hope I didn't bother the well-meaning reader too much, but it seemed
> necessary to me to outline for what actually I would need threads or
> whatever there is.
> I would be thankful for any comments, links ...

Please don't use these "?!?!" and "??". They make people think that you
are screaming aggressively.


Boost list run by bdawes at, gregod at, cpdaniel at, john at