
From: Vesa Karvonen (vesa.karvonen_at_[hidden])
Date: 2001-06-19 10:35:51


From: <scleary_at_[hidden]>
> From: "Vesa Karvonen" <vesa.karvonen_at_[hidden]>
> > From: "David Abrahams" <david.abrahams_at_[hidden]>
> > > From: "Vesa Karvonen" <vesa.karvonen_at_[hidden]>

> > Throwing in a destructor is indeed bad practice and should be
> > discouraged.
>
> Yes, for the following reasons:
> 1) An object with a throwing destructor *cannot* be used with the STL if
> there is *any* chance that it will be destructed [17.4.3.6/2].

This is precisely the issue under consideration. I have described an alternative
solution that offers certain arguable advantages over the nothrow requirement in
the case of smart pointer reset().

> 2) It makes throwing *any* exception a risky business -- if there's an
> object with a throwing destructor on the stack, your program's blown away.

This is a valid issue. However, in this case we are considering the run-time
behaviour of smart pointer reset(). reset() results in deleting a single heap
object, and a program can actually survive the situation we are considering.
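
To make this concrete, here is a minimal, self-contained sketch (the type and
message are illustrative) showing that deleting a heap object whose destructor
throws is, by itself, a catchable event:

    #include <cstdio>

    struct MayThrow {
        ~MayThrow() { throw "cleanup failed"; } // bad practice, but possible
    };

    int main() {
        MayThrow* p = new MayThrow;
        try {
            delete p; // the deletion that reset() would perform
        } catch (char const* msg) {
            // The exception is caught and the program keeps running.
            // (The storage may leak if the deallocation function is not
            // reached, but the program is not blown away.)
            std::printf("survived: %s\n", msg);
        }
        return 0;
    }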

It can perhaps be convincingly argued that the program should not survive, and
this might be a good reason to stick with the current approach.

> 3) It's impossible to have dynamic arrays of such objects.

That point does not apply in this case: reset() deletes a single heap object,
not a dynamic array.

> [20.4.5.2/7]: "void reset(X* p=0) throw();"

As I have previously indicated, the above is not the critical piece of
knowledge that I'm missing.

> Exception specifications are IMO a
> guarantee-of-quality-of-implementation style of sugar, on the same level as
> automatic pre/post-condition testing, and the lack of them do not make
> programs "weak".

People who prefer programming languages with automatic run-time checking might
disagree completely.

However, I tend to agree that specifically in C++, exception specifications
are mostly a quality of implementation issue. At least with the standard
library components, it does not make sense to rely on them, because in the
situations where they would matter the behaviour is undefined anyway.

> > I think that the current nothrow guarantee of smart pointer reset()
> > (including std::auto_ptr and boost::shared_ptr) gives a false sense of
> > exception safety, because it simply doesn't guarantee any consistent
> > programmatically detectable effect - in the case that an exception is
> > actually thrown.
>
> Neither does any other code producing undefined behaviour. Hence the name,
> "undefined"...

The last time I checked, not everything in C++ becomes undefined at the point
where the first exception is thrown - don't tell me they've changed this...

Let me rephrase some of my thoughts in short sentences that others should be
able to understand:
- A nothrow exception specification guarantees certain behaviour.
- However, if a deletion throws, the behaviour becomes undefined.
- Currently, exception specifications imply additional run-time overhead.
=> Therefore the usefulness of the current specification is debatable, as the
sketch below illustrates.
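
For illustration, under a typical implementation the current specification
plays out as follows (a minimal sketch; Throws is an illustrative type, and
formally the behaviour is already undefined per [17.4.3.6/2]):

    #include <memory>

    struct Throws {
        ~Throws() { throw 1; } // nothing stops a destructor from throwing
    };

    int main() {
        std::auto_ptr<Throws> p(new Throws);
        // reset() is declared "void reset(X* p=0) throw();" [20.4.5.2/7].
        // Deleting the old object makes ~Throws() throw; the exception
        // cannot pass the throw() specification, so std::unexpected() is
        // called, which by default calls std::terminate().  No caller
        // can detect or recover from the failed deletion.
        p.reset(new Throws);
        return 0;
    }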

> > Exceptions are an interesting topic and I don't like defeatism.
>
> I don't like defeatism, either. However, Dave and others have looked at
> every possible side of this for years; if there was a better solution, I
> think in this case it would have been found.

I also don't like people who are not open to novel ideas. (Actually, I have
been aware of this issue for a long time, too.) However, if "Dave and others"
clearly indicate that they have in fact previously considered the approach I
described, and explain why they found it unviable, then we can move on. So far
I have not been satisfied with the answers from David Abrahams, John Maddock,
and Steve Cleary, because they contain no new information or explanations that
would clarify issues I might not have previously understood.

So, if you can show that the approach I have described is unviable, for
instance, because it would make an otherwise valid program invalid, then I
will be more than happy to change my mind.

(For the record, that particular demonstration of unviability is, according to
my interpretation, impossible to give, because any such program would currently
have undefined behaviour. However, as I have explained, my idea here is
precisely to make the behaviour in such cases more useful.)

> > As I have demonstrated, the technique that I have described:
> > - Prevents double deletion of the old pointer.
> > - Prevents leaking the new pointer.
> > - Removes the nothrow exception specification (which is practically
> >   useless anyway).
>
> Just wanted to point out that the nothrow requirement also prevents double
> deletion and leaking,

The requirement only makes the behaviour undefined. It does not prevent people
from writing a destructor that might throw.
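
To recap, one way to obtain the properties listed above is to commit the new
pointer before deleting the old one. A minimal sketch (simple_ptr and its
interface are illustrative, not the exact code from my earlier post):

    // Sketch: take ownership of the new pointer before deleting the old
    // one, so that a throwing deletion leaves the pointer in a
    // consistent state.
    template<class T>
    class simple_ptr {                    // illustrative stand-in
    public:
        explicit simple_ptr(T* p = 0) : ptr_(p) {}
        ~simple_ptr() { delete ptr_; }
        void reset(T* p = 0)              // note: no throw() specification
        {
            T* old = ptr_;
            ptr_ = p;    // commit first: the new pointer cannot leak
            delete old;  // may throw; ownership is already consistent
        }
    private:
        simple_ptr(simple_ptr const&);            // noncopyable in this sketch
        simple_ptr& operator=(simple_ptr const&);
        T* ptr_;
    };

If "delete old" throws, the exception propagates to the caller, who may handle
it; the smart pointer already owns the new object, so the old pointer cannot
be deleted twice and the new pointer is not leaked.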

> You mentioned above that throwing destructors should be discouraged, and so
> they should be. Discouraged strongly. By refusing to work with them. :)

Perhaps. I believe that, in existing code, destructors that might throw greatly
outnumber destructors that cannot. Because of this, I think it is useful to
consider approaches for making software more robust even in the current
situation. Sometimes tolerating minor faults is preferable to termination.

You can change my mind by proving that a useful handcrafted C++ program exists
that uses destructors, contains over 5000000 (5 million) tokens after
preprocessing, and does not have a single destructor that might throw.

