Subject: Re: [boost] [transact] code in sandbox
From: Bob Walters (bob.s.walters_at_[hidden])
Date: 2010-02-17 14:09:25


On Wed, Feb 17, 2010 at 3:27 AM, vicente.botet <vicente.botet_at_[hidden]> wrote:
>
> I also don't agree with the criterion of rejecting capabilities that exist in only one or two of the libraries. We need to explore what is useful to have and what is not, independently of which library needs it now.
> We are at the beginning, compiling requirements, evaluating different interfaces, etc. This will take some time, and I'm sure that in the end we will reach an interface that satisfies the users and the authors of the 3 libraries.
> I would prefer to leave process management considerations out of this discussion and concentrate on concrete cases.

My comment was not a "process management consideration". I'm
attempting to understand the scope of Boost.Transact. I was under
the impression that it was to represent the intersection of the
capabilities of the 3 libraries, not the union of those capabilities.
I am also trying to understand the implications of this discussion
thus far on the TM/RM interface. For example, I currently don't
support transaction priorities. So I'm really just recommending that
capabilities which are limited to one library not become part of the
public API of Boost.Transact, if instead there is an acceptable way to
expose those controls directly via the library that offers that
capability. For example, I am not going to suggest that there need to
be macros which designate timeout durations for the retry loop, or
that the transaction class have a deadlock handling policy parameter
because such things are not applicable to every RM which may run under
the control of Boost.Transact.

> The mix of optimistic and pessimistic strategies needs a careful design. In Boost.STM we have a LockAware TM mechanism that could inspire us on this point. Whether the Transact library will take care of this requirement will depend on whether we find a good solution.

I would like to see your approach, and welcome the inspiration. I
don't like what I currently have. My approach always involves the
user explicitly designating when a pessimistic lock is to be acquired:

const_iterator i = map->find(something);
iterator<wait_policy> j = i(txn); // something like this is needed to
                                  // promote from const to non-const
int balance = j->second;          // dereferencing j acquires a pessimistic
                                  // lock on that entry, and may block
                                  // per wait_policy
int new_balance = balance + 100;
j->second = new_balance;          // update the entry as part of txn

This has generally made the combination of optimistic and pessimistic
conventions possible within the scope of one transaction, but it does
place the burden of doing it correctly on the user. The user could
forget to lock a row before updating it.

One thing I can do in regard to that is add protections that ensure
that either 1) the object is updated optimistically based on support
for optimistic locking by the value_type of the map, or 2) the entry
was locked correctly prior to the update. If the user violates both
of those conditions, the update() throws an exception. However, there
are use cases involving related entities where updating an unlocked
row without protection is still safe, based on locks held elsewhere.
So I also need a convention for performing an unprotected update. And
I'm convinced that this whole policy of protection against
improper locking needs to be optional, but available if users want the
assurance it provides.
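
Roughly, the check could look like the following sketch. The names here
(entry, checked_update, unchecked_update, supports_optimistic_locking,
improper_locking) are illustrative only, not part of my library or of
Boost.Transact:

#include <stdexcept>
#include <type_traits>

// Illustrative stand-ins only; these names are not taken from any real API.
struct transaction {};

struct improper_locking : std::runtime_error {
    improper_locking()
        : std::runtime_error("update without a lock or optimistic support") {}
};

// Trait a value_type could specialize to declare optimistic-locking
// support (e.g. via a version counter it carries).
template<class T>
struct supports_optimistic_locking : std::false_type {};

template<class Value>
struct entry {
    Value value;
    bool locked_by_txn = false;   // would really be per-transaction state

    // Protection policy: allow the write only if the value_type supports
    // optimistic conflict detection, or the entry was locked beforehand.
    void checked_update(transaction&, const Value& v) {
        if (supports_optimistic_locking<Value>::value || locked_by_txn)
            value = v;
        else
            throw improper_locking();
    }

    // Escape hatch for cases where safety is guaranteed by locks held
    // elsewhere (related entities): bypasses the check.
    void unchecked_update(transaction&, const Value& v) { value = v; }
};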

> We need to consider how contention management helps avoid starvation with optimistic synchronization. Transaction priorities could be one way; others need to be considered, and we should see if the interface is open enough to let the user choose their contention management strategy.

My approach to this is to allow pessimistic approaches as well as
optimistic: optimistic synchronization works well under low contention,
but under higher contention the retry logic becomes a vicious cycle,
warranting a pessimistic approach as the alternative. I have not
thought about optimizing contention management in optimistic
algorithms to any significant extent; I have simply accepted that it is
not the perfect approach in all cases.
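
As an illustration of the kind of fallback I have in mind (the names
run_with_fallback, run_optimistic, run_pessimistic, and retry_limit are
made up for this sketch, not taken from any of the three libraries):

#include <functional>

// Hypothetical escalation policy: retry the optimistic path a bounded
// number of times, then fall back to a pessimistic, lock-acquiring run.
// run_optimistic returns false when commit-time validation detects a
// conflict; run_pessimistic takes locks up front and cannot conflict.
inline void run_with_fallback(const std::function<bool()>& run_optimistic,
                              const std::function<void()>& run_pessimistic,
                              int retry_limit = 3)
{
    for (int attempt = 0; attempt < retry_limit; ++attempt)
        if (run_optimistic())
            return;            // committed without excessive contention
    run_pessimistic();         // contention too high: go pessimistic
}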

I agree that any pessimistic capability requires the library to
support either deadlock prevention or deadlock detection (or both).
I haven't implemented deadlock detection, and am not looking forward
to it, but I agree it is the inevitable consequence of supporting the
pessimistic approach. If you have better, optimistic-only alternatives,
I would be very interested.

Best Regards,
Bob

