Subject: Re: [boost] Boost.Fiber mini-review September 4-13
From: Thomas Heller (thom.heller_at_[hidden])
Date: 2015-09-05 10:03:12


On 05.09.2015 3:59 p.m., "Oliver Kowalke" <oliver.kowalke_at_[hidden]> wrote:
>
> 2015-09-05 15:12 GMT+02:00 Agustín K-ballo Bergé <kaballo86_at_[hidden]>:
>
> > On 9/5/2015 3:42 AM, Oliver Kowalke wrote:
> >
> >> 2015-09-04 21:05 GMT+02:00 Agustín K-ballo Bergé <kaballo86_at_[hidden]>:
> >>
> >>> Although holding a mutex while firing on the condition variable is
> >>> considered bad practice, this would be even better:
> >>>
> >>>
> >> do you have a reference?
> >>
> >
> > Well, it's not exactly new and it is baked into the design of
> > `std::condition_variable`. I guess this will have to do for reference:
> > http://en.cppreference.com/w/cpp/thread/condition_variable/notify_one
> >
> >> in Butenhof's examples pthread_cond_signal/pthread_cond_broadcast are
> >> always called in front of pthread_mutex_unlock
> >>
> >
> > Alas pthread specifies different semantics than the standard library,
> > and there you are actually expected to hold the lock if you want
> > predictable scheduling. I hear pthread won't actually wake up any
> > threads then (which wouldn't be able to make progress otherwise), but
> > rather switch them from waiting on the cv to waiting on the mutex to
> > avoid useless context switches; when the mutex is finally unlocked the
> > thread will finally wake up.
>
>
> keep in mind that fibers do not run in parallel - in a single thread the
> sequence of
>
> unique_lock< mutex > lk( mtx);
> ...
> lk.unlock();
> cnd.notify_one();
>
> is the same as
>
> unique_lock< mutex > lk( mtx);
> ...
> cnd.notify_one();
> lk.unlock();
>
> (no parallelism, unlike threads)
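
To make the fragments above concrete, here is a minimal, self-contained
sketch (not from the original post; it assumes the boost::fibers::mutex,
condition_variable and fiber API under review). With both fibers launched
on the same thread, the waiter cannot be resumed between lk.unlock() and
cnd.notify_one(), so the two orderings behave identically:

#include <boost/fiber/all.hpp>
#include <iostream>
#include <mutex>

int main() {
    boost::fibers::mutex mtx;
    boost::fibers::condition_variable cnd;
    bool ready = false;

    boost::fibers::fiber waiter([&] {
        std::unique_lock< boost::fibers::mutex > lk( mtx);
        // suspends only this fiber; the thread keeps running other fibers
        cnd.wait( lk, [&] { return ready; });
        std::cout << "waiter resumed\n";
    });

    boost::fibers::fiber notifier([&] {
        std::unique_lock< boost::fibers::mutex > lk( mtx);
        ready = true;
        lk.unlock();        // order relative to notify_one() is irrelevant here:
        cnd.notify_one();   // the waiter only resumes once this fiber blocks or ends
    });

    waiter.join();
    notifier.join();
}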

What about two fibers running on different OS threads? Are they not allowed
to synchronize with each other?
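
For illustration, the scenario in question might look roughly like the
following (a sketch only, not from the original post; it assumes that
boost::fibers::mutex and condition_variable are meant to synchronize fibers
living in different threads, which is exactly the question above):

#include <boost/fiber/all.hpp>
#include <iostream>
#include <mutex>
#include <thread>

int main() {
    boost::fibers::mutex mtx;
    boost::fibers::condition_variable cnd;
    bool ready = false;

    // one fiber per OS thread
    std::thread consumer([&] {
        boost::fibers::fiber([&] {
            std::unique_lock< boost::fibers::mutex > lk( mtx);
            cnd.wait( lk, [&] { return ready; });
            std::cout << "fiber on consumer thread resumed\n";
        }).join();
    });

    std::thread producer([&] {
        boost::fibers::fiber([&] {
            std::unique_lock< boost::fibers::mutex > lk( mtx);
            ready = true;
            lk.unlock();        // the waiting fiber lives on another thread,
            cnd.notify_one();   // so it may genuinely run in parallel from here on
        }).join();
    });

    consumer.join();
    producer.join();
}

If that is supported, the argument that the unlock()/notify_one() ordering
cannot matter no longer holds, because the waiting fiber may resume in
parallel with the notifier.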



Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk