Boost:
From: Dan W. (danw_at_[hidden])
Date: 2003-12-30 11:37:47
Pavel Vozenilek wrote:
>>Call create_double_buffer<my_buff_t>( new my_buff_t, new my_buff_t )
> This is exception unsafe, btw.
Aha, ok. I'll try to take responsibility for allocation and watch out for
exceptions. Thanks.
> Is this similar to 'synchronized' library:
> http://libcalc.sourceforge.net/synchronized.hpp
> ?
I was just looking at synchronized.hpp. If I understand the code correctly,
synchronized aims to atomize access to a type. What my double_buffer class
aims to do is:
A) Substitute for producer/consumer-style data flow between threads in
which one thread allocates memory and the other de-allocates it, since that
is 1) time consuming, and 2) problematic for some per-thread allocators. My
code re-uses two buffers that are allocated once, but it imposes the
restriction on both producer and consumer that each must let go of a buffer
before checking out the other.
B) Implement such a buffer-exchange mechanism in a way that doesn't require
blocking of the producer or consumer threads; what I mean is that the
checkout_buff() and checkout_data() functions (and the release mechanism
via the function call operator) can be called asynchronously.
> Can you please explain in detail how thread safety is reached without using
> mutexes? Is it because shared_ptr<> uses mutex inside? Is it safe against
> compiler optimizations and CPU cache effects?
Re.: "thread safety" --the term:
Please accept this excuse for my ignorance: C++ is my hobby; my line of
work is embedded systems using small microcontrollers, in Assembler. My
concept of thread-safety is probably much lower level than anyone else's
around Boost: I deal with interrupts. So, I'm not sure I used the term
correctly. If by 'thread safe' it is meant that any number of threads
could safely have producer or consumer access, that is NOT what I was
meaning to claim. The intended use of double_buffer is for *two* threads,
'producer' and 'consumer', to respectively call the functions meant for
each, namely checkout_buff() and checkout_data(). Creation of the double
buffer could, I believe, be handled safely by either of them, or by yet
another thread.
Re.: "thread safety" --without mutexes:
Let me start with the last question:
>Is it safe against ... CPU cache effects?
I believe so, given that (what I call) safety is achieved in the logic of
moving tokens (pointers) in "T * volatile arr[6];", which I declared
volatile precisely to get around cache effects. (Looking at it again, I
wondered whether I should have written "T volatile * arr[6];" instead. No,
I think it's right: it is the pointers stored in the array, not the
pointees, that need to be volatile.)
>Is it safe against compiler optimizations?
I believe it is, though this calls for a long answer; I'll try to present
just a small example of one consideration. Consider the reset() routine,
which is used only by the double-buffer implementation's ctor (so it can
be thought of as part of it) and goes like this (some line numbers added):
template< typename T > //reset
void db_impl<T>::reset()
{
    for(int i = 0; i < 6; ++i) arr[i] = 0;
    pbuf2_->erase();
    pbuf1_->erase();
1)  arr[5] = pbuf2_;
2)  arr[0] = pbuf1_;
3)  if( arr[0] == 0 ) //see notes below
    {
4)      arr[0] = pbuf2_;
5)      arr[5] = 0;
    }
}
Lines 1) and 2) place the two buffers' tokens in the two waiting positions
for producer checkout. Even though pbuf1_ and pbuf2_ cannot be zero, I
check for zero in line 3) to make sure that the producer hasn't called
checkout in between the initializations of the two tokens. If it has, then
I move the token in the second waiting position to the first, lines 4) and
5). For the compiler to optimize away this check, it would have to infer
that pbuf1_ cannot be zero from an assertion to that effect in the calling
ctor, and I'm sure no compiler would go to such extremes.
Similar reasoning applies to some of the post-checks I do in the checkout
functions. They would be at risk of a human programmer thinking the
post-checks are unwarranted, but not at risk of compiler optimization,
simply because the compiler lacks the information needed to optimize them
away.
>Is it because shared_ptr<> uses mutex inside?
I wasn't aware of this. Aren't mutexes infamous performance-wise? I read an
article in CUJ a year or more ago in which the author advised using
interlocked_exchange() as an alternative to mutexes. I'm aware this is a
platform-specific facility, but I'd assume there are similar facilities on
other platforms that could all be made accessible via a portable library.
I'm obviously too ignorant in this area to be posting thread-related
solutions, though I'm sure all the thought I put into the synchronization
code can't be good for nothing. I'm worried, though, about the performance
impact of a mutex in shared_ptr...
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk