
From: Andrey Semashev (andysem_at_[hidden])
Date: 2007-08-25 07:10:44


Hello Tobias,

Friday, August 24, 2007, 4:19:43 AM, you wrote:

>> Just came across this thread. I needed a lightweight_call_once in
>> my Boost.FSM library and implemented it. It is not implemented as an
>> internal part of the library, but rather as a common tool, like
>> lightweight_mutex.

> Something you'd like to brush up as a Boost X-File ;-)?

> See

> http://article.gmane.org/gmane.comp.lib.boost.devel/162951

Hmm, I'm not sure about the purpose of this project. Is it supposed to
bring several tools under its umbrella into Boost via a fast-track review?

> The pthreads implementation seems to be using a global Mutex, which is
> inefficient, because it causes concurrent initializations (that might
> have nothing to do with each other) to be queued.
> To make things worse, that Mutex is initialized with 'pthread_once'.

Yes, but consider that this code is executed only once. For the rest
of the program's run the mutex is useless. Since mutexes may actually
consume system resources (I'm not sure whether that's true on the wide
variety of platforms out there), having a separate mutex for every
call_once is a direct waste of them.
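
Roughly, the scheme is as follows (a simplified sketch, not the exact
code; the names are made up):

    #include <pthread.h>

    // One global mutex shared by all call_once invocations. It is created
    // at compile time by PTHREAD_MUTEX_INITIALIZER, so no run-time
    // resources are spent per once flag.
    static pthread_mutex_t g_once_mutex = PTHREAD_MUTEX_INITIALIZER;

    struct once_flag { int done; }; // statically initialized to { 0 }

    template< typename F >
    void lightweight_call_once(once_flag& flag, F fn)
    {
        pthread_mutex_lock(&g_once_mutex);  // unrelated initializations queue here
        if (!flag.done)
        {
            fn();
            flag.done = 1;
        }
        pthread_mutex_unlock(&g_once_mutex);
    }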

> Also, some platforms will not call 'mutex_destroyer' within a dynamic
> library (you probably know)...

No, I'm not aware of this. Could you elaborate, please? Which
platforms are those?

> The "trigger" could contain the mutex and the macro for initialization
> would contain PTHREAD_MUTEX_INITIALIZER, so its creation can be done at
> compile time by setting up the appropriate bytes in the data segment
> (interestingly, you use a similar technique for the "no atomics variant").

In the "no atomics case" I had no other choice as I needed a mutex to
safely read the once flag.
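
If I understand the suggestion correctly, the "trigger" would look
roughly like this (a sketch with made-up names):

    #include <pthread.h>

    // The once flag ("trigger") carries its own mutex. Both members are
    // set up at compile time in the data segment, so no API call is
    // needed to create them.
    struct once_flag
    {
        pthread_mutex_t mutex;
        int done;
    };

    #define LIGHTWEIGHT_ONCE_INIT { PTHREAD_MUTEX_INITIALIZER, 0 }

    // Usage:
    //   static once_flag flag = LIGHTWEIGHT_ONCE_INIT;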

> Other implementations use "while (check) sleep; stuff", which seems
> sorta awkward to me. Can't we use "proper" synchronization?

This is where the fundamental problem arises - I need to safely
create the synchronization object itself. Non-POSIX APIs don't provide
anything like PTHREAD_MUTEX_INITIALIZER, or at least I didn't find it
in the docs. And besides that, the aforementioned drawback of wasted
resources comes up again.
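
For reference, the "while (check) sleep" pattern mentioned above is
roughly the following (a WinAPI-flavored sketch):

    #include <windows.h>

    // 0 = not started, 1 = initialization in progress, 2 = done
    static volatile LONG g_state = 0;

    template< typename F >
    void spin_call_once(F fn)
    {
        if (InterlockedCompareExchange(&g_state, 1, 0) == 0)
        {
            fn();
            InterlockedExchange(&g_state, 2); // full barrier, publishes fn()'s effects
        }
        else
        {
            // Other threads poll until the initialization completes. A read
            // barrier would also be needed here on weakly ordered architectures.
            while (g_state != 2)
                Sleep(1);
        }
    }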

> Win32 provides the 'InitOnceExecuteOnce' which seems to do pretty much
> all we need (it even takes a parameter to get the state in, and a
> boost::function plus a downcast from 'PVOID' will do for the type
> erasure). After putting another guard around it (to avoid the dynamic
> call and the construction of the boost::function) we're all done.

InitOnceExecuteOnce is only available on Vista and later. I'd rather
not introduce such a constraint on the execution platform. There
could, however, be an alternative WinAPI implementation for those who
will not run their applications on XP, for example.
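
For the record, that Vista-only path would look roughly like this (a
sketch with a made-up wrapper name, requires _WIN32_WINNT >= 0x0600):

    #include <windows.h>
    #include <boost/function.hpp>

    // The once functor is passed through the PVOID parameter; a downcast
    // restores its type (the type erasure mentioned above).
    static BOOL CALLBACK once_trampoline(PINIT_ONCE, PVOID param, PVOID*)
    {
        (*static_cast< boost::function< void () >* >(param))();
        return TRUE;
    }

    void vista_call_once(INIT_ONCE& flag, boost::function< void () > fn)
    {
        InitOnceExecuteOnce(&flag, &once_trampoline, &fn, 0);
    }

    // Usage:
    //   static INIT_ONCE flag = INIT_ONCE_STATIC_INIT;
    //   vista_call_once(flag, &do_initialization);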

> I don't know too much about other threading platforms, but I'm sure
> there are similar means. We could also use a counter for the guard to
> use a Semaphore (if it's more handy to do so for some platform) to
> notify threads waiting for initialization to complete:

See my note above. I can't safely create a single semaphore or mutex
with an API function call (which was not shown in your code sample).
You can see me dancing around semaphore creation in the BeOS
implementation to get a feel for the problem.
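
To illustrate what "safely create" means here, a deliberately broken
WinAPI-flavored sketch of the race:

    #include <windows.h>

    static HANDLE g_sem = NULL;

    void wait_for_init_broken()
    {
        // Two threads may both see NULL here and both create a semaphore,
        // ending up with two different objects, so signaling one of them
        // never wakes the thread waiting on the other. This is exactly the
        // creation race that a static initializer like
        // PTHREAD_MUTEX_INITIALIZER avoids.
        if (g_sem == NULL)
            g_sem = CreateSemaphore(NULL, 0, 1, NULL);

        WaitForSingleObject(g_sem, INFINITE);
    }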

> For some platforms (such as x86) memory access is atomic, so atomic
> operations are just a waste of time for simple read/write operations
> such as the 'is_init' and 'set_called' stuff.

The point is not only atomic reads and writes, but memory barriers as
well. Without them, the results of executing the once functor might
not be visible to other CPUs.
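
In other words, a double-checked fast path needs barriers roughly in
these places (a sketch; once_flag, the lock/unlock calls and the
barrier calls are placeholders for the platform-specific primitives):

    template< typename F >
    void call_once_sketch(once_flag& flag, F fn)
    {
        if (flag.done)          // unsynchronized fast-path read
        {
            acquire_barrier();  // make fn()'s effects visible before they are used
            return;
        }

        lock_global_mutex();
        if (!flag.done)
        {
            fn();
            release_barrier(); // publish fn()'s effects...
            flag.done = 1;     // ...before other CPUs can observe done == 1
        }
        unlock_global_mutex();
    }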

> There's some code that throws exceptions with pretty, formatted error
> messages: so we're out of resources and then execute a whole bunch of
> code to format an error message... That code might run into the same
> problem we're trying to report, so probably throwing something
> lightweight (such as an enum) is a more appropriate choice (and also
> gets rid of some header dependencies).

Well, you may be right here. I could try to reduce memory allocations
in the error handling.
But the only possible problem I see there is memory exhaustion. In
that case you'll get std::bad_alloc, which still adheres to the
declared interface of the implementation. So, strictly speaking, if
you have enough memory you get a detailed error description; if not,
you get bad_alloc.
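
For completeness, the lightweight alternative suggested above would
look roughly like this (a sketch, the names are made up):

    // An error code that can be thrown without allocating or formatting anything.
    enum once_error
    {
        once_error_resource,  // the underlying primitive could not be created
        once_error_unknown
    };

    inline void check_created(void* handle)
    {
        if (!handle)
            throw once_error_resource; // no string formatting, no memory allocation
    }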

-- 
Best regards,
 Andrey                            mailto:andysem_at_[hidden]
