Subject: Re: [boost] Looking for thoughts on a new smart pointer: shared_ptr_nonnull
From: Matt Calabrese (rivorus_at_[hidden])
Date: 2013-10-06 14:14:26


On Sun, Oct 6, 2013 at 2:11 AM, Daniel James <daniel_at_[hidden]> wrote:

> How is that not introducing undefined behaviour? Say there's a function:
>
> void foo(boost::shared_ptr<...> const& x) {
>     if (!x) { something_or_other(); }
>     else blah_blah(*x);
> }
>
> By using shared_ptr_nonull, that check can be removed:
>
> void foo(boost::shared_ptr_nonnull<...> const& x) {
>     blah_blah(*x);
> }
>
> But then a caller believes that their shared_ptr is never null, so
> they copy it into a shared_ptr_nonnull without checking for null
> first, and that action introduces the possibility of undefined
> behaviour where there was none before (given that there's always a
> possibility of bugs in non-trivial code).
>

With a regular shared pointer, the possibility for UB is whenever you
dereference. With a non-null shared pointer, the UB possibility is only
during initialization/assignment, and only when the programmer is doing so
from a raw pointer (again, people should use named construction functions
whenever they can anyway, which is probably the most common case, so this
is somewhat moot).
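
To make that concrete, here is a rough sketch of the shape I have in
mind. Since shared_ptr_nonnull doesn't exist yet, the asserting
raw-pointer constructor and the make_shared_nonnull factory below are
only my assumptions about the interface, not a settled design:

#include <boost/assert.hpp>
#include <boost/shared_ptr.hpp>

// Hypothetical sketch only; the real interface is up for discussion.
template <class T>
class shared_ptr_nonnull
{
public:
    // Precondition: p != 0. Violating it is a bug in the caller, so assert.
    explicit shared_ptr_nonnull(T* p) : impl_(p)
    {
        BOOST_ASSERT(p != 0);
    }

    T& operator*() const { return *impl_; }   // never null past construction
    T* operator->() const { return impl_.get(); }

private:
    boost::shared_ptr<T> impl_;
};

// Named construction function: the result is non-null by construction,
// so most callers never touch the raw-pointer constructor at all.
template <class T>
shared_ptr_nonnull<T> make_shared_nonnull()
{
    return shared_ptr_nonnull<T>(new T());
}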

Whenever you have a non-null shared pointer, you cannot have UB simply by
dereferencing it. I know you will retort "but what if someone violated the
preconditions" again, but I will reiterate: that is the programmer's bug.
This is true of every single piece of code in the standard library that
has specified preconditions, and of any library at all that /properly/
specifies preconditions for its functions. It's not simply convention or
doctrine or whatever you want to call it; it's what makes sense. There is
nothing special about a non-null shared pointer that changes this. If you
violate the precondition, it is your fault, not the library's. By providing
named construction functions we partially sidestep the issue, and these
should be preferred whenever possible anyway.

As to whether or not non-nullness should be a precondition, I've explained
why it should be a precondition as opposed to a documented check/throw,
and I can't say much more if you simply don't see the rationale.

> > Saying that it somehow makes code less safe
> > is ludicrous. It makes preconditions for functions dealing with non-null
> > shared_ptrs simpler (since the non-nullness is already guaranteed by the
> > invariants), and also simplifies code dealing with them.
>
> The invariants don't guarantee anything if there is no check. This
> false sense of security is what makes the code less safe.
>

Yes, of course they're still guarantees. Invariants only hold if you keep
up with your end of the contract (meeting the preconditions). It's not a
false sense of security. A library can't account for programmer error. If
you want an example of a false sense of security, it is having a library
check and throw an exception that a programmer can't handle, for the
reasons I and others have already explained.

> If you can't deal with an unexpected exception here, how can you deal
> with exceptions anywhere? Do you really think that programmers are
> always aware of every possible exception?
>

You can't deal with an exception here because it was caused by a
programmer passing in a value that violated the preconditions of the
function. In other words, the programmer thought he was passing something
that met the preconditions, but he was incorrect. This is a bug, and his
handler can't fix the bug; he thought he was passing in one thing but was
really passing in something else, so you assert. The reason you can deal
with exceptions elsewhere is that they are thrown due to conditions the
caller simply can't account for at the time the arguments are passed, for
instance because those conditions may be nondeterministic from the
caller's point of view. Such exceptions do not get thrown because of
programmer error.
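
To illustrate the distinction with a small made-up example: the first
function below throws for a condition the caller can't reliably know in
advance, while the second has a precondition whose violation is a bug:

#include <boost/assert.hpp>
#include <cstddef>
#include <fstream>
#include <iterator>
#include <stdexcept>
#include <string>

// A legitimate exception: whether the file can be opened is
// nondeterministic from the caller's point of view, so a handler can
// meaningfully recover (retry, fall back to defaults, report to the user).
std::string read_config(std::string const& path)
{
    std::ifstream in(path.c_str());
    if (!in)
        throw std::runtime_error("cannot open " + path);
    return std::string(std::istreambuf_iterator<char>(in),
                       std::istreambuf_iterator<char>());
}

// A precondition, not an exception: a null pointer here is a bug in the
// caller, and no catch block can fix the caller's code, so you assert.
std::size_t length_of(std::string const* s)
{
    BOOST_ASSERT(s != 0);
    return s->size();
}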

> And, of course, undefined behaviour allows for the throwing of
> exceptions, so if you can't deal with exceptions, you can't deal with
> undefined behaviour.
>

...that's precisely the point. You /can't/ deal with it. That's why it's
undefined. These are bugs that are to be found during testing, which is why
you assert rather than specify check/throw behavior.
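
Concretely, with something like BOOST_ASSERT guarding the precondition
(by default it just forwards to assert), a test build aborts with a
file/line diagnostic at the point of the bad call, and an NDEBUG build
compiles the check away entirely. A minimal sketch, with a made-up
take_nonnull function:

#include <boost/assert.hpp>
#include <cstdio>

// Illustrative function with a non-null precondition.
void take_nonnull(int const* p)
{
    BOOST_ASSERT(p != 0); // fires during testing, compiles away with NDEBUG
    std::printf("%d\n", *p);
}

int main()
{
    int x = 42;
    take_nonnull(&x); // fine: precondition met
    take_nonnull(0);  // bug: a debug/test build aborts here with a
                      // diagnostic; an NDEBUG build is simply UB
    return 0;
}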

> What I was getting at is that it's easier to avoid using exceptions as
> control flow, as you're always aware that you're doing it. You'll
> never do it accidentally, it's always a conscious activity. But it's
> harder to avoid violating pre-conditions, as you're usually unaware
> that you're doing it, since it's almost always done by accident. And
> also that using exceptions for control flow might go against our
> principles, but it'll still cause less harm than undefined behaviour
> would, since it has more of a chance of being predictable. To say that
> A is worse than B is not an endorsement of B, it's pointing out the
> problems in A.
>

My point isn't that preconditions, in a general sense, are easier or
harder to work with than exceptions; I don't think that's something anyone
can say in a general sense. The point is that preconditions and exceptions
serve two distinctly different purposes, so we are comparing apples to
oranges. In this case only an apple makes sense, so that's already the end
of the story.

> By your logic
> all use of exceptions is wrong, as we rely on functions and objects to
> throw when it's specified that they will throw. Anything that an
> exception is thrown for is an accepted possibility, otherwise there
> would be no exception.

No.

-- 
-Matt Calabrese
