From: Alexander Terekhov (terekhov_at_[hidden])
Date: 2001-10-23 03:11:49

> > > BTW, the claim that putting a maximum count for a semaphore leads to
> > > race conditions is, I believe, only true for "weak" semaphores. For
> > > strong semaphores with a maximum count value, the increment/up/signal/V
> > > and decrement/down/wait/P operations are symmetric.
> >
> > Well, you could certainly check whether sem_cnt + sem_signaled <= MAX,
> > but what is the purpose of such checking? E.g. it does not count
> > threads already unblocked (not counted by sem_waiting/sem_signaled)
> > but still on the way out of the lock/wait/dec/down/P() call.
>
> Worry about the semantics and logical values first, and the implementation
> later. If you bury yourself in the implementation before you have the
> semantics straight, you just get yourself confused. The semantics of
> a semaphore with a max count is fairly straightforward. In what follows,
> S is always the *logical* value of the semaphore, which may not be precisely
> the same as any single variable (such as sem_cnt) in the implementation:
>
> - P(S): If S == 0, causes the thread to block; if S > 0 and no thread is
> blocked on S, decrements S; otherwise, unblocks a thread blocked on S.
> (In the last case the blocked threads must have blocked on a V(S) operation,
> since S > 0).
>
> - V(S): If S == MAX, causes the thread to block; if S < MAX and no thread is
> blocked on S, increments S; otherwise, unblocks a thread blocked on S.
> (In the last case the blocked threads must have blocked on a P(S) operation,
> since S < MAX.)
>
> - P(S) and V(S) are atomic actions. That is, the external behavior is
> as if they happened instantaneously.
>
> The above describes the *logical* behavior. A thread that is on its way out
> of its P(S) or V(S) operation (no future P(S) or V(S), or lack thereof, can
> prevent it from completing the operation) may, from a logical standpoint, be
> considered to be already unblocked. The delay in wending its way out of the
> P(S) or V(S) is the same as if a delay got inserted (e.g., due to a context
> switch) between the end of the operation and the execution of the next
> statement in the program.

Thanks for the explanation. However, the semantics I was talking
about (MS-MAX checking) are not "V(S): If S == MAX, causes
the thread to block" but rather: V(S): If S == MAX, causes
*V(S) to return FALSE*.
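
For concreteness, here is a minimal sketch of both variants, written
with a POSIX mutex and two condition variables (struct and function
names are mine, not Boost.Threads'): sema_post() blocks while S == MAX,
as in the semantics quoted above, while sema_try_post() returns false
instead of blocking, which is the MS-MAX behaviour I mean. The bound
check is race-free here only because the whole logical value S is the
single field count, always examined under the mutex:

#include <pthread.h>

typedef struct bounded_sema {
    pthread_mutex_t mx;
    pthread_cond_t  not_empty;   /* signalled when count > 0   */
    pthread_cond_t  not_full;    /* signalled when count < max */
    unsigned        count;       /* the logical value S        */
    unsigned        max;         /* MAX                        */
} bounded_sema;

void sema_init(bounded_sema* s, unsigned initial, unsigned max)
{
    pthread_mutex_init(&s->mx, 0);
    pthread_cond_init(&s->not_empty, 0);
    pthread_cond_init(&s->not_full, 0);
    s->count = initial;
    s->max = max;
}

void sema_wait(bounded_sema* s)                   /* P(S) */
{
    pthread_mutex_lock(&s->mx);
    while (s->count == 0)                         /* predicate loop: extra */
        pthread_cond_wait(&s->not_empty, &s->mx); /* wakeups are benign    */
    --s->count;
    pthread_cond_signal(&s->not_full);            /* a blocked post() may go on */
    pthread_mutex_unlock(&s->mx);
}

void sema_post(bounded_sema* s)                   /* V(S), blocking at MAX */
{
    pthread_mutex_lock(&s->mx);
    while (s->count == s->max)
        pthread_cond_wait(&s->not_full, &s->mx);
    ++s->count;
    pthread_cond_signal(&s->not_empty);           /* a blocked wait() may go on */
    pthread_mutex_unlock(&s->mx);
}

int sema_try_post(bounded_sema* s)                /* V(S), FALSE at MAX */
{
    int ok;
    pthread_mutex_lock(&s->mx);
    ok = (s->count < s->max);
    if (ok) {
        ++s->count;
        pthread_cond_signal(&s->not_empty);
    }
    pthread_mutex_unlock(&s->mx);
    return ok;
}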

Also, I am not sure what you want to prove with respect to
POSIX semaphores. POSIX semas are already part of POSIX
(IPC and the semaphore option). The brief code example I sent
to Bill was NOT intended to replace the current Boost.Threads
impl. (actually it is not entirely correct, because I omitted
the checks that prevent "late waiter" threads, awakened
spuriously or timed out, from stealing signals if/when they
acquire the mutex before the thread(s) unblocked by unlock()). The
intent was merely to show that the current Boost.Threads sema
impl does not follow the POSIX definitions and that MS-MAX checking
is subject to a race condition.

Personally, I would suggest removing semas from Boost.Threads
completely, because they are mainly targeted at IPC with no
memory shared between processes (or at posting from a signal
handler to a thread it interrupts; see the sketch after the
quoted rationale below; note also that static volatile
sig_atomic_t vars cannot be used to communicate data between
the thread whose signal handler runs and other threads), NOT at
synchronization between *threads* within a process or between
processes with shared memory ("Rationale for System Interfaces",
IEEE P1003.1, Draft 7, June 2001 / Open Group Technical Standard,
Issue 6):

"A semaphore is defined as an object that has an integral
value and a set of blocked processes associated with it.
If the value is positive or zero, then the set of blocked
processes is empty; otherwise, the size of the set is equal
to the absolute value of the semaphore value. The value
of the semaphore can be incremented or decremented by any
process with access to the semaphore and must be done as an
indivisible operation. When a semaphore value is less
than or equal to zero, any process that attempts to lock it
again will block or be informed that it is not possible to
perform the operation.

A semaphore may be used to guard access to any resource
accessible by more than one schedulable task in the system.
It is a global entity and not associated with any particular
process. As such, a method of obtaining access to the semaphore
has to be provided by the operating system. A process that
wants access to a critical resource (section) has to wait on
the semaphore that guards that resource. When the semaphore
is locked on behalf of a process, it knows that it can utilize
the resource without interference by any other cooperating
process in the system. When the process finishes its operation
on the resource, leaving it in a well-defined state, it posts
the semaphore, indicating that some other process may now
obtain the resource associated with that semaphore.

In this section, mutexes and condition variables are specified
as the synchronization mechanisms between threads.
These primitives are typically used for synchronizing threads
that share memory in a single process. However, this section
provides an option allowing the use of these synchronization
interfaces and objects between processes that share memory,
regardless of the method for sharing memory.

Much experience with semaphores shows that there are two
distinct uses of synchronization: locking, which is typically
of short duration; and waiting, which is typically of long or
unbounded duration. These distinct usages map directly onto
mutexes and condition variables, respectively.

Semaphores are provided in IEEE Std 1003.1-200x primarily
to provide a means of synchronization for processes; these
processes may or may not share memory. Mutexes and condition
variables are specified as synchronization mechanisms between
threads; these threads always share (some) memory. Both are
synchronization paradigms that have been in widespread use
for a number of years. Each set of primitives is particularly
well matched to certain problems.

With respect to binary semaphores, experience has shown that
condition variables and mutexes are easier to use for many
synchronization problems than binary semaphores. The primary
reason for this is the explicit appearance of a Boolean predicate
that specifies when the condition wait is satisfied. This
Boolean predicate terminates a loop, including the call to
pthread_cond_wait(). As a result, extra wakeups are benign
since the predicate governs whether the thread will actually
proceed past the condition wait.

With stateful primitives, such as binary semaphores, the wakeup
in itself typically means that the wait is satisfied. The burden
of ensuring correctness for such waits is thus placed on all
signalers of the semaphore rather than on an explicitly coded
Boolean predicate located at the condition wait. Experience has
shown that the latter creates a major improvement in safety and
ease-of-use.

Counting semaphores are well matched to dealing with producer/
consumer problems, including those that might exist between
threads of different processes, or between a signal handler
and a thread. In the former case, there may be little or no
memory shared by the processes; in the latter case, one is not
communicating between co-equal threads, but between a thread
and an interrupt-like entity. It is for these reasons that
IEEE Std 1003.1-200x allows semaphores to be used by threads.

Mutexes and condition variables have been effectively used with
and without priority inheritance, priority ceiling, and other
attributes to synchronize threads that share memory. The
efficiency of their implementation is comparable to or better
than that of other synchronization primitives that are sometimes
harder to use (for example, binary semaphores). Furthermore, there
is at least one known implementation of Ada tasking that uses
these primitives.

Mutexes and condition variables together constitute an
appropriate, sufficient, and complete set of inter-thread
synchronization primitives."
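
As an aside, the one case above where a counting semaphore really is
the right tool (posting from a signal handler, mentioned both in my
suggestion and in the rationale's producer/consumer paragraph) can be
sketched roughly like this. sem_post() is on the POSIX list of
async-signal-safe functions, whereas pthread_cond_signal() and
pthread_mutex_lock() are not, and, as noted above, a static volatile
sig_atomic_t flag is not enough to communicate with other threads.
The names and the choice of SIGUSR1 are mine; sigaction() would be
the more careful choice, signal() just keeps the sketch short:

#include <errno.h>
#include <semaphore.h>
#include <signal.h>
#include <stdio.h>

static sem_t work_available;          /* hypothetical name; process-private */

static void on_signal(int sig)
{
    (void)sig;
    sem_post(&work_available);        /* async-signal-safe: legal here */
}

int main(void)
{
    sem_init(&work_available, 0, 0);  /* pshared = 0: within one process */
    signal(SIGUSR1, on_signal);

    for (;;) {
        /* retry if the wait itself is interrupted by a signal */
        while (sem_wait(&work_available) == -1 && errno == EINTR)
            ;
        puts("handled one signal-driven work item");
    }
}
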
regards,
alexander.