Subject: Re: [boost] Looking for thoughts on a new smart pointer: shared_ptr_nonnull
From: Gavin Lambert (gavinl_at_[hidden])
Date: 2013-10-06 21:14:23
On 10/7/2013 7:14 AM, Quoth Matt Calabrese:
> You can't deal with an exception here because it was caused by a programmer
> passing in a value that violated the preconditions of the function. In
> other words, the programmer thought that he was passing something that met
> the preconditions, but he was incorrect. This is a bug and his handler
> can't fix the bug. He thought he was passing in one thing but was really
> passing in something else, so you assert. The reason that you can deal with
> exceptions elsewhere is because they are thrown due to conditions that the
> caller simply can't account for at the time the arguments are passed, for
> instance, because those conditions may be nondeterministic from the
> caller's point of view. Such exceptions do not get thrown because of
> programmer error.
Why not? (This is the point of std::logic_error after all.)
Sure, for performance reasons you want to elide the check that the
precondition has been met by the supplied arguments. I get that.
But that is a different argument from saying that it is somehow "wrong"
to include that check and throw when the precondition is not met, which
is what it sounds like you are arguing.
And sure, the code that calls that constructor might not be prepared to
deal with exceptions. But that's the entire point of exceptions -- to
unwind back up the call stack until control reaches code that *is*
prepared to deal with the exception (perhaps by cancelling a
particular task, perhaps by tearing down and recreating an object or
entire module or subsystem, perhaps by aborting the entire program --
but this decision must be in the hands of the application itself).
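To make that concrete, here is a minimal sketch (the function names are
mine, invented for illustration): the low-level code throws when its
precondition is violated, and the decision about how to recover --
cancel the task, tear down a subsystem, or abort -- is made further up
the stack by code that actually knows what a "task" means.

```cpp
#include <iostream>
#include <stdexcept>

// Low-level code: checks its precondition and throws rather than
// invoking UB. It has no idea how the caller wants to recover.
void do_task(int* p) {
    if (p == nullptr)
        throw std::logic_error("do_task: null pointer violates precondition");
    ++*p;  // ... the real work ...
}

// Higher-level code: this is where the recovery policy lives.
// Returns true if the task completed, false if it was cancelled.
bool run_one_task(int* input) {
    try {
        do_task(input);
        return true;
    } catch (const std::logic_error& e) {
        // Cancel just this task; the rest of the application keeps running.
        std::cerr << "task cancelled: " << e.what() << '\n';
        return false;
    }
}
```

The point of the split is that `do_task` cannot know whether aborting,
logging, or retrying is appropriate; only the application-level caller
can.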
The key point is that it is far better to detect that UB is about to
happen and to throw an exception than it is to allow that UB to happen.
It is absolutely NEVER correct for library code to call std::terminate
on any error (the single exception being code that acts as a top-level
caller, such as a thread entry point, where the OS would terminate the
thread anyway).
> My point isn't that preconditions, in a general sense, are easier or harder
> to work with than exceptions. I don't think that that is something anyone
> can say in a general sense. The point is that preconditions and exceptions
> serve two distinctly different purposes, so we are comparing apples to
> oranges. In this case, only an apple makes sense, so it's already end of
> story right there.
What I would consider reasonable behaviour for this sort of "guaranteed
non-null pointer" is the following:
- constructor will assert() and then throw an exception on receiving a
null argument [the first because it's a programmer error, the second
because asserts might be disabled]
- operator-> and other access methods would assert() that the internal
pointer is non-null but *not* otherwise check.
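A minimal sketch of that behaviour (the class name and members here are
mine, for illustration only, not a proposed Boost interface): the
constructor asserts and then throws, so the error is caught in debug
builds at the point of the mistake and still cannot slip through when
asserts are compiled out; the accessors only assert.

```cpp
#include <cassert>
#include <memory>
#include <stdexcept>

template <typename T>
class shared_ptr_nonnull {
public:
    explicit shared_ptr_nonnull(std::shared_ptr<T> p)
        : p_(std::move(p))
    {
        // Programmer error: trap it immediately in debug builds...
        assert(p_ && "null pointer passed to shared_ptr_nonnull");
        // ...but asserts may be disabled, so also throw.
        if (!p_)
            throw std::invalid_argument("shared_ptr_nonnull: null pointer");
    }

    T* operator->() const {
        // Internal invariant: the constructor guaranteed non-null, so a
        // null here means UB has already happened elsewhere. Assert only;
        // no release-mode check.
        assert(p_);
        return p_.get();
    }

    T& operator*() const {
        assert(p_);
        return *p_;
    }

private:
    std::shared_ptr<T> p_;
};
```

Note that with asserts enabled the null-argument case aborts before the
throw is ever reached; the throw only matters in NDEBUG builds.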
This is because it is reasonable for the constructor of the class to
receive null pointers by accident via simple programming errors (and
hands up everyone who has never made any of those). Such an error is
also potentially recoverable, because a null argument does not
necessarily signify that anything else has gone completely wrong.
If the internals encounter a null pointer, however, that is a definite
indication that UB has already occurred at some point (probably a buffer
overrun somewhere), because the constructor should have ruled out every
other way a null could get in. There is no point in defending against
this in
release code as it's just as likely that a garbage value was written
instead of a null, but having an assert may help track it down when
debugging (though it's probably about to segfault anyway).
Paranoia at interface boundaries is the fastest way to track down bugs,
and should only be removed if there is some demonstrably significant
performance benefit in doing so.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk