From: David Abrahams (david.abrahams_at_[hidden])
Date: 2001-06-19 12:47:35
----- Original Message -----
From: "Vesa Karvonen" <vesa.karvonen_at_[hidden]>
Sent: Tuesday, June 19, 2001 11:35 AM
Subject: Re: [boost] Stronger exception safety guarantee for smart_ptr.res
> From: <scleary_at_[hidden]>
> > From: "Vesa Karvonen" <vesa.karvonen_at_[hidden]>
> > > From: "David Abrahams" <david.abrahams_at_[hidden]>
> > > > From: "Vesa Karvonen" <vesa.karvonen_at_[hidden]>
> > > Throwing in a destructor is indeed bad practice and should be
> > > discouraged.
> > Yes, for the following reasons:
> > 1) An object with a throwing destructor *cannot* be used with the STL if
> > there is *any* chance that it will be destructed [17.4.3.6/2].
> This issue is under consideration. I have described an alternative
> that provides certain arguable advantages over the nothrow requirement in
> the case of smart pointer reset().
> > 2) It makes throwing *any* exception a risky business -- if there's an
> > object with a throwing destructor on the stack, your program's blown.
> This is a valid issue. However, in this case we are considering the
> behavior of smart pointer reset(). reset() results in deleting a heap
> object. A program can actually survive the situation which we are
> considering. It can perhaps be convincingly argued that the program
> should not survive; this might be a good reason to stick with the current
> approach.
> > 3) It's impossible to have dynamic arrays of such objects.
> The above does not hold in this case.
> > [20.4.5.2/7]: "void reset(X* p=0) throw();"
> As I have previously indicated, the above is not the critical piece of
> knowledge that I'm missing.
> > Exception specifications are IMO a
> > guarantee-of-quality-of-implementation style of sugar, on the same level
> > as automatic pre/post-condition testing, and the lack of them does not
> > make programs "weak".
> People who prefer programming languages with automatic run-time checking
> disagree completely.
> However, I tend to agree that specifically in C++, exception
> specifications are mostly a quality of implementation issue. At least with
> the standard library components, it does not make sense to rely on them,
> because the behaviour is undefined.
> > > I think that the current nothrow guarantee of smart pointer reset()
> > > (including std::auto_ptr and boost::shared_ptr) gives a false sense of
> > > exception safety, because it simply doesn't guarantee any consistent
> > > programmatically detectable effect - in the case that an exception is
> > > actually thrown.
> > Neither does any other code producing undefined behaviour. Hence the
> > "undefined"...
> The last time I checked, not everything in C++ becomes undefined at the
> point the first exception is thrown - don't tell me they've changed that.
> Let me rephrase some of my thoughts using short sentences that others
> might be able to understand:
> - Nothrow exception specification guarantees certain behaviour.
> - However, if deletion throws, the behaviour becomes undefined.
> - Currently exception specifications imply additional run-time overhead.
Only on some implementations.
> => Therefore the usefulness of the current specification is debatable.
I think you misunderstand the standard. The presence of "throw()" on a
signature in the standard does not actually imply that an
exception-specification must be present. It is a shorthand way of saying
"will not throw an exception". This is a confusing notation, but don't blame
me (please): I didn't invent it.
Since the standard states quite clearly that the behavior is undefined if
the user's destructor throws an exception, there's a strong case to be made
that there's no way a user's conforming code can detect whether there's an
exception-specification on the signature anyway.
> > > Exceptions are an interesting topic and I don't like defeatism.
> > I don't like defeatism, either. However, Dave and others have looked at
> > every possible side of this for years; if there was a better solution, I
> > think in this case it would have been found.
> I also don't like people who are not open to novel ideas.
Please, let's try not to make this personal.
> (Actually, I have
> been aware of this issue for a long time, too.) However, if "Dave and
> others" clearly indicate that they have, in fact, previously considered
> the approach that I described, and explain clearly why they previously
> considered it unviable, then we can move on.
It's not "unviable". In fact, it might be considered a useful implementation
technique. When you try to document in the standard that auto_ptr can handle
a throwing destructor, however, you've got trouble. Then you have to remove
the nothrow guarantee from auto_ptr::reset (and auto_ptr::~auto_ptr).
What I've "previously considered unviable" was trying to come up with
simple, useful defined behavior when destructors can throw exceptions. You
have obviously thought about the issues a bit, so I don't think I need to
explain the thought process to you.
Your implementation technique, as I've said, could be considered a kind of
QOI improvement to boost::shared_ptr, but since we should not remove any
nothrow guarantees from boost::shared_ptr, we can't document this
improvement. Thus users shouldn't rely on it.
Okay, we /could/ offer a conditional nothrow guarantee from
shared_ptr::reset (and destructor). Still, I wonder how useful that is. I
know how to make sense of a world with the basic, strong, and nothrow
guarantees where destructors don't throw exceptions. A system of
understanding is useful in proportion to its simplicity (among other things).
Once you start letting destructors throw outside of very restricted contexts,
I think the picture becomes a lot more complicated. I think the burden of
explaining what the new picture looks like falls to anyone proposing to make
throwing destructors defined behavior.
And don't ignore simplicity: Herb Sutter spent a bunch of time trying to
understand exception-safety in terms of database integrity, because I think
he was unsatisfied that there were some fine-grained distinctions being
missed by the 3 guarantees I had proposed. His system (ACID) used an
additional dimension of variability, but was not widely adopted. IMO that
was because it offered too little extra explanatory power for the complexity
it added.
> So far I have not been satisfied with the
> answers from David Abrahams and others: John Maddock and Steve Cleary;
> they hold no new information or explanations that would clarify issues I
> might not have previously understood.
I think you're the one that needs to explain things to us, if you want your
ideas to gain acceptance ;-)
> > Just wanted to point out that the nothrow requirement also prevents
> > deletion and leaking,
> The requirement only makes the behaviour undefined. It does not prevent
> anyone from making a destructor that might throw.
> > You mentioned above that throwing destructors should be discouraged, and
> > they should be. Discouraged strongly. By refusing to work with them.
> Perhaps. I believe that the current situation is such that the number of
> destructors out there that might throw greatly outnumbers the number of
> destructors that don't. Because of this, I think that it is useful to
> explore approaches for making software more robust even in the current
> situation. Sometimes tolerating minor faults is preferable to termination.
I agree. However, I doubt that your improvement will help much. Any program
designed to be exception-safe is written with an awareness of what might
throw. Even if shared_ptr::reset can tolerate a throwing destructor, can the
code calling reset tolerate a throwing reset (and so on...)?
Still, as I have said, I don't mind your improvement to shared_ptr, though I
am lukewarm to it. I have grave reservations about documenting it, however.
If we can't document it, it has no meaning in a standards context.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk