Subject: Re: [boost] Looking for thoughts on a new smart pointer: shared_ptr_nonnull
From: Ben Pope (benpope81_at_[hidden])
Date: 2013-10-07 06:41:46


On 07/10/13 14:49, Gavin Lambert wrote:
> As for being encountered as part of UB, that is *only* if you decide to
> declare the constructor as having a non-null-parameter precondition by
> assumption instead of actually validating it. Which is not the case in
> the position that I am arguing for, so that is irrelevant.

The point is still that validating at runtime by throwing an exception
is not good enough. If the programmer wants to use a non-null smart
pointer, he only ever wants to pass in a non-null pointer. Detecting a
null pointer during construction of the smart pointer is already too
late; the programmer thought it was non-null, so in essence he has no
idea what his code is doing. Whatever it *is* doing may not be
undefined, but it is most certainly wrong. How do you recover at
runtime from a program that does not do what you thought it did?
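
To make that concrete, here is a minimal sketch (hypothetical code, not
the proposed implementation) of the throwing constructor under debate:

   #include <memory>
   #include <stdexcept>
   #include <utility>

   struct null_pointer_exception : std::logic_error
   {
      null_pointer_exception() : std::logic_error("null pointer") {}
   };

   template<class T>
   class shared_ptr_nonnull
   {
   public:
      explicit shared_ptr_nonnull(std::shared_ptr<T> p)
         : ptr_(std::move(p))
      {
         // The check happens here, after the caller already believed
         // the pointer was non-null; the bug has already occurred.
         if (!ptr_)
            throw null_pointer_exception();
      }
   private:
      std::shared_ptr<T> ptr_;
   };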

> Can you please restate why you think it should be a precondition? I've
> reread the thread and all I can find is "Initializing the object by
> passing a null pointer should be UB since it is a clear violation of the
> constructor's assumed precondition (don't pass a null pointer)" which is
> a statement with an assumption, not an explanation. (You go on to quite
> some lengths about why throwing is bad for preconditions, but not why
> you think it should be a precondition in the first place.)
>
> My argument is that it should be an explicitly-validated parameter, not
> an assumed-valid-via-precondition one.

If the programmer wants to use a non-null smart pointer, presumably it
is because he expects his pointer never to be null. If his pointer is
null, his program has behaviour that he did not expect. You can't
recover from that, unless you expect the software to rewrite and
recompile itself into something correct and then try again.

> The exception avoids continuing on (in the case of a release build,
> where the assert is skipped) to construct an object with an invalid
> invariant and then execute further code that expects that invariant to
> be valid, which I hope we can all agree is well into unrecoverable UB
> territory.
>
> The point is that at that instant all of the program's state, both
> global and local, is still undamaged. (It may not be *happy* with
> having a null pointer, but until it tries to access it nothing undefined
> has yet happened, bar a quibble in documentation.)

No, the point is that the state probably is damaged; if a programmer who
expects a pointer to be non-null finds himself at runtime with a null
pointer, then either global state is already damaged, or there is a bug
in the program and very little can be said about the current state of
the data. Neither of those is particularly recoverable.

The argument is this:

{
   try
   {
      throw_if_null(p);
      auto p2 = shared_ptr_nonnull(p);
      doSomething(p2);
   }
   catch(null_pointer_exception const& e)
   {
      doSomethingElse();
   }
}

vs

{
   if(p)
   {
      auto p2 = shared_ptr_nonnull(p);
      doSomething(p2);
   }
   else
   {
      doSomethingElse();
   }
}

Would you write the first one or the second?

If the first, then do it, but many other people would prefer the second
variant. Don't burden the other users of the smart pointer with the
throwing behaviour; it has nothing to do with a non-null shared_ptr.

What if the user would prefer some other exception type to be thrown?
Or an assert? Are you going to turn that into a policy?
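
Purely as a hypothetical sketch (invented names, not Boost code), such a
policy might look like this:

   #include <cassert>
   #include <memory>
   #include <stdexcept>
   #include <utility>

   struct throw_on_null
   {
      static void check(bool nonnull)
      {
         if (!nonnull) throw std::invalid_argument("null pointer");
      }
   };

   struct assert_on_null
   {
      static void check(bool nonnull) { assert(nonnull); (void)nonnull; }
   };

   template<class T, class NullPolicy = throw_on_null>
   class shared_ptr_nonnull
   {
   public:
      explicit shared_ptr_nonnull(std::shared_ptr<T> p)
         : ptr_(std::move(p))
      {
         NullPolicy::check(ptr_ != nullptr);
      }
   private:
      std::shared_ptr<T> ptr_;
   };

And now every user of the class carries a template parameter that has
nothing to do with non-null-ness.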

The debate is that it's nicer to let the user of your smart pointer
decide how to handle the conditions that would result in undefined
behaviour than to decide for them. Perhaps have a factory function:

auto p2 = throw_or_make_shared_ptr_nonnull(p);
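
A minimal sketch of that split (hypothetical names and signatures; the
class itself only asserts its precondition, and the factory adds the
throwing check for those who want it):

   #include <cassert>
   #include <memory>
   #include <stdexcept>
   #include <utility>

   template<class T>
   class shared_ptr_nonnull
   {
   public:
      explicit shared_ptr_nonnull(std::shared_ptr<T> p)
         : ptr_(std::move(p))
      {
         assert(ptr_); // precondition only; compiled out in release
      }
   private:
      std::shared_ptr<T> ptr_;
   };

   template<class T>
   shared_ptr_nonnull<T> throw_or_make_shared_ptr_nonnull(std::shared_ptr<T> p)
   {
      if (!p)
         throw std::invalid_argument("null pointer");
      return shared_ptr_nonnull<T>(std::move(p));
   }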

> There is a qualitative difference between the object throwing up its hands
> (aka an exception) and saying "you idiot, don't pass me a null pointer"
> vs. not even thinking about it and at some undefined point in the future
> (where it may be harder to track down the source) crashing or
> overwriting memory due to unchecked usage of a
> "never-supposed-to-be-null" pointer that secretly is actually null.
>
> Again, in a perfect world these sorts of bugs will be caught in the
> debug build by the assert. But we do not live in a perfect world, and
> test coverage is rarely 100%.

Which is why an assert could be used. But throwing doesn't help;
unwinding the stack is unlikely to help the library user detect or solve
the problem in the wild. A crash dump is much better, not least because
you are less likely to further corrupt data.
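
In that spirit, a check that stays in release builds could simply fail
fast instead of unwinding (hypothetical sketch):

   #include <cstdio>
   #include <cstdlib>

   inline void abort_if_null(const void* p)
   {
      if (!p)
      {
         std::fputs("shared_ptr_nonnull: null pointer\n", stderr);
         std::abort(); // leaves a crash dump with the stack intact
      }
   }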

>> I challenge you to define what you even mean by this. Do you mean that the
>> code path is simply not common, and that this somehow makes things
>> acceptable? There are plenty of branches in code that are not "normal."
>
> I mean that the code path is never followed at all except in case of
> programmer error. It's not merely "abnormal" or "rare" but "should
> never happen unless you have a bug", just like the assert itself. Note
> that this last statement is NOT equivalent to "will never happen", which
> is what relying purely and only on asserts encourages.
>
> When people object to using exceptions as control flow, this is not what
> they mean (because if you want to extend it that far, you're basically
> just saying "never use exceptions").
>
> Would it help if you thought of it as "throw std::assertion_violation"
> in some version of "assert" that did not get compiled out for release
> builds?

I guess it's just hard to determine how to recover at runtime from a
program that doesn't do what the programmer thinks it does.

Ben

