
From: Howard Hinnant (hinnant_at_[hidden])
Date: 2007-03-27 19:40:32

I'm coming full circle...

As soon as I start allowing multiple threads to access a std::thread at
once, I need to keep a mutex around for the whole lifetime of the
std::thread object, whether or not it has already been joined or even
detached:

struct thread_local_data
{
    mutex     mut;            // synchronize between parent and child threads
    condition cv;
    bool      done;
    bool      wait_on_owner;
    bool      cancel_pending;
    bool      cancel_enabled;
};

class thread
{
    pthread_t          handle_;      // 0 if detached
    thread_local_data* tl_;          // 0 if detached
    mutex              another_mut;  // synchronize with multi-parent access, always here
};

I can never get rid of another_mut, even after the thread is detached, even after:

t = std::thread();

I believe I'll even have to lock t.another_mut during the above move
assignment, making move assignment unacceptably slow.
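
A rough sketch of what that move assignment would then have to do
(lock(), adopt_lock, and detach_locked() are assumptions here, not
proposed interface):

thread& thread::operator=(thread&& other)
{
    // Sketch only: to tolerate concurrent owners, both objects'
    // mutexes must be taken, in a deadlock-safe order, even for the
    // common single-owner transfer.
    lock(another_mut, other.another_mut);   // assumed std::lock-style helper
    lock_guard<mutex> lk1(another_mut, adopt_lock);
    lock_guard<mutex> lk2(other.another_mut, adopt_lock);
    if (tl_)
        detach_locked();   // hypothetical helper: give up our thread without re-locking
    handle_ = other.handle_;
    tl_ = other.tl_;
    other.handle_ = 0;
    other.tl_ = 0;
    return *this;
}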

This is the wrong class to allow multiple threads to touch at once.
Even if I load it with another_mut and bring move assignment to a
crawl, simultaneous access to join() and detach() is still a race.
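
For example (a sketch using C++11 syntax; work is a placeholder), even
if another_mut serializes the two calls below, the program's meaning
still depends on which one wins:

#include <thread>

void work() {}   // placeholder

int main()
{
    std::thread t(work);

    std::thread a([&t]{ t.join();   });   // parent 1 waits for t
    std::thread b([&t]{ t.detach(); });   // parent 2 disowns t

    // If detach() runs first, join() has nothing left to wait on; if
    // join() runs first, detach() hits an already-joined thread.
    // Internal locking can order the two calls, but it cannot define
    // the outcome.

    a.join();
    b.join();
}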


vector<thread> v(100);
v.insert(v.begin(), std::thread(one_more_thread));

To accomplish that insert we just had to lock/unlock 100 mutexes. I
actually did this with std::string many years ago, and I *still* have
the bruises from my customers! Either that, or we go with detach()
being a no-op, as Peter suggests, and we keep the thread state around
forever. And then we still just did 100 atomic operations to get that
insert done.
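
As a rough way to see the count, here is a toy stand-in
(counting_mutex and guarded_thread are hypothetical instrumentation,
not any proposed API) that tallies the lock operations the insert
triggers:

#include <atomic>
#include <vector>

std::atomic<long> lock_count(0);

struct counting_mutex            // hypothetical instrumentation
{
    void lock()   { ++lock_count; }   // real locking elided
    void unlock() {}
};

struct guarded_thread            // stand-in for a thread carrying another_mut
{
    counting_mutex another_mut;
    guarded_thread() {}
    guarded_thread(guarded_thread&& o)
    {
        another_mut.lock();      // both sides locked for every move
        o.another_mut.lock();
        o.another_mut.unlock();
        another_mut.unlock();
    }
    guarded_thread& operator=(guarded_thread&& o)
    {
        another_mut.lock();
        o.another_mut.lock();
        o.another_mut.unlock();
        another_mut.unlock();
        return *this;
    }
};

int main()
{
    std::vector<guarded_thread> v(100);
    v.insert(v.begin(), guarded_thread());
    // lock_count is now on the order of 2 * 100: each of the 100
    // existing elements was moved, locking its own mutex and the
    // source's, just to make room at the front.
}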

> I think that all operations on std::thread should be safe to execute
> concurrently, and that they should have defined functionality in all
> cases, which should probably be a no-op.

Sorry, I just do not believe this is a good idea. It completely
destroys vector<thread> performance for what is at best a corner use
case, one that can be handled with layered-on classes or externally
applied synchronization if desired. The whole beauty of the sole-
ownership model is how light and agile it is to manipulate.
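
For the rare program that genuinely needs multi-owner access, a
layered-on wrapper along these lines (shared_thread is a hypothetical
name, not a proposed class) supplies the synchronization without
taxing everyone else:

#include <mutex>
#include <thread>
#include <utility>

class shared_thread              // hypothetical layered-on class
{
    std::mutex  mut_;
    std::thread t_;
public:
    explicit shared_thread(std::thread t) : t_(std::move(t)) {}

    void join()
    {
        std::lock_guard<std::mutex> lk(mut_);
        if (t_.joinable())
            t_.join();
    }

    void detach()
    {
        std::lock_guard<std::mutex> lk(mut_);
        if (t_.joinable())
            t_.detach();
    }
};

Concurrent join() and detach() on the same shared_thread are then
serialized and well-defined (first caller wins), while the plain
std::thread everyone else uses stays cheap to move.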

