From: Christopher Kohlhoff (chris_at_[hidden])
Date: 2006-04-30 02:29:27
Hi Giovanni,
--- "Giovanni P. Deretta" <gpderetta_at_[hidden]> wrote:
> Does asio always grab a mutex before invoking a
> strand-protected handler, or does it guarantee that the same
> strand is always run on the same thread and thus has automatic
> serialization? Just an implementation detail, I know, but I'm
> curious.
It uses a mutex, but the mutex is only held while manipulating
the linked list of pending handlers. It is not held while the
upcall to the handler is made.
Having a strand only ever execute on one thread sounds like a
theoretically possible implementation approach. However, I'm
probably going to attempt an implementation that uses a
lock-free linked list first.
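To illustrate the locking discipline described above, here is a
minimal sketch in modern C++ terms (strand_sketch and its members
are hypothetical, not asio's actual internals): the mutex protects
only the list of pending handlers and is released before the
handler upcall is made.

  #include <deque>
  #include <functional>
  #include <mutex>

  class strand_sketch
  {
  public:
    // Enqueue a handler; the lock protects only the pending list.
    void post(std::function<void()> handler)
    {
      std::lock_guard<std::mutex> lock(mutex_);
      pending_.push_back(std::move(handler));
    }

    // Called by any io thread: claim the strand, pop one handler,
    // then make the upcall with the mutex released.
    void run_one()
    {
      std::function<void()> handler;
      {
        std::lock_guard<std::mutex> lock(mutex_);
        if (running_ || pending_.empty())
          return; // another thread owns the strand, or nothing to do
        running_ = true;
        handler = std::move(pending_.front());
        pending_.pop_front();
      }

      handler(); // upcall made outside the lock

      std::lock_guard<std::mutex> lock(mutex_);
      running_ = false;
    }

  private:
    std::mutex mutex_;
    std::deque<std::function<void()>> pending_;
    bool running_ = false;
  };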
[ ... custom memory allocation ... ]
> I'm not sure these functions are enough (or at least have
> enough parameters). They have no way to know the alignment
> requirement of the allocated object (and thus must assume the
> worst), and the deallocator does not have a size_t parameter
> (as operator delete has). What about these signatures:
>
> template<typename T> T* asio_handler_allocate(size_t,
> handler_type&);
>
> and
>
> template<typename T> void asio_handler_deallocate(size_t,
> handler_type&, T*);
I've added the size parameter for deallocate.
My initial custom memory allocation implementation, done during
the review, used templates. I later rejected this approach
because the types
being allocated are completely internal to the library, and I
felt they should not be exposed to the user in any way. I also
think not requiring the user to define templates makes it
simpler to use -- it's closer to defining custom operator new
and delete for a class.
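As a very rough sketch of that shape (my_handler is hypothetical,
and the exact signatures here are an assumption based on the
discussion above, not a final interface), the hooks end up looking
much like a class-specific operator new and delete, found via
argument-dependent lookup:

  #include <cstddef>

  struct my_handler
  {
    void operator()(/* error, bytes transferred, ... */) {}
  };

  // Obtain memory for an operation associated with my_handler.
  inline void* asio_handler_allocate(std::size_t size, my_handler*)
  {
    return ::operator new(size);
  }

  // Release that memory; note the size parameter mentioned above.
  inline void asio_handler_deallocate(void* pointer,
      std::size_t /*size*/, my_handler*)
  {
    ::operator delete(pointer);
  }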
With respect to alignment, the functions have the same
restrictions as ::operator new(std::size_t) and malloc. I'm not
sure the increased complexity in supporting other alignment
requirements would give much, if any, benefit in practice. All
types being allocated this way are complex structures (see the
sketch after this list), e.g.:
- They contain a copy of the user-defined Handler
- On Windows they are OVERLAPPED-derived classes
- On non-Windows they will contain pointers to adjacent
elements in a linked list.
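Purely for illustration (read_op is a made-up name, not asio's
real internals), the kind of operation record these hooks would be
asked to provide memory for looks roughly like this:

  #if defined(_WIN32)
  # include <windows.h>
  #endif

  template <typename Handler>
  struct read_op
  #if defined(_WIN32)
    : OVERLAPPED          // the record doubles as the OVERLAPPED structure
  #endif
  {
    Handler handler_;     // copy of the user-defined handler
  #if !defined(_WIN32)
    read_op* next_;       // intrusive links to adjacent pending operations
    read_op* prev_;
  #endif
    // ... buffers, socket, transfer state, etc. ...
  };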
Although I see it as a QoI issue exactly how many allocations
occur per operation, in my proposal I plan to at minimum
encourage implementations to:
- Prefer to do only one allocation per operation.
- Failing that, only do one allocation at a time (i.e. don't
allocate a new memory block associated with a handler until
the previous one has been deallocated); a sketch of this reuse
pattern follows.
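For example, the second guarantee is what would let a user write a
small allocator that recycles a single block (handler_memory is
hypothetical; modern C++ is used for brevity):

  #include <cstddef>
  #include <new>

  class handler_memory
  {
  public:
    handler_memory() : in_use_(false) {}

    void* allocate(std::size_t size)
    {
      if (!in_use_ && size <= sizeof(storage_))
      {
        in_use_ = true;
        return &storage_;
      }
      return ::operator new(size); // unexpected second block: use the heap
    }

    void deallocate(void* pointer)
    {
      if (pointer == &storage_)
        in_use_ = false;
      else
        ::operator delete(pointer);
    }

  private:
    union { double align_; char bytes_[256]; } storage_; // reusable block
    bool in_use_;
  };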
[ ... reference counted buffer support ... ]
> Very good, but what about explicitly guaranteeing that
> *exactly* one *live* (1) copy of the buffers object will be
> maintained? I'm sure in practice asio already guarantees that,
Actually, it doesn't. The implementations of asio::async_read and
asio::async_write have to keep a copy of the buffers so that the
operation can be restarted. This copy is in addition to the copy
kept by the lower-level operation (i.e. read_some or write_some).
Instead of this, I plan to further tighten up the specification
of when and from which threads the implementation is allowed to
make calls back into user code, and how the appropriate memory
barriers are placed between these calls. In theory this should
allow reference counted buffers without needing synchronisation.
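The idea, roughly sketched (shared_buffer and read_handler are
illustrative names, not part of asio's interface; modern C++ is
used for brevity), is that the buffer's lifetime is tied to the
handler by storing a reference-counted pointer in it, so whichever
copy of the handler is destroyed last releases the memory:

  #include <memory>
  #include <vector>

  typedef std::shared_ptr<std::vector<char> > shared_buffer;

  struct read_handler
  {
    shared_buffer buffer_; // every copy of the handler shares ownership

    void operator()(/* error, bytes transferred */)
    {
      // The underlying memory still exists here; it is released only
      // when the last copy of the handler goes away.
    }
  };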
Cheers,
Chris