Subject: Re: [boost] Looking for thoughts on a new smart pointer: shared_ptr_nonnull
From: Daniel James (daniel_at_[hidden])
Date: 2013-10-06 05:11:51
On 5 October 2013 17:11, Matt Calabrese <rivorus_at_[hidden]> wrote:
> On Sat, Oct 5, 2013 at 7:09 AM, Daniel James <daniel_at_[hidden]> wrote:
>> > but ultimately this is an
>> > error that cannot be handled once such a condition is reached. You cannot
>> > reliably continue.
>>
>> I've already given a counter-example to that. If your data is well
>> encapsulated, an unexpected null is not an indicator of a problem in
>> the global state.
>>
>
> I apologize, but can you repeat your example? Skimming back, I do not see
> it.
A parser.
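Roughly this sort of shape (a made-up sketch; the names are invented
for illustration, not real code from anywhere):

#include <boost/make_shared.hpp>
#include <boost/shared_ptr.hpp>
#include <cstddef>
#include <stdexcept>
#include <string>
#include <vector>

struct document { std::string content; };

// Stand-in for a real parser; it may throw when something goes wrong
// with the data it has been given.
boost::shared_ptr<document> parse(std::string const& text) {
    if (text.empty())
        throw std::runtime_error("parse failure");
    boost::shared_ptr<document> d = boost::make_shared<document>();
    d->content = text;
    return d;
}

// The parser's data is well encapsulated: a failure while parsing one
// input says nothing about the global state, so we can skip that input
// and reliably continue with the rest.
std::vector<boost::shared_ptr<document> > parse_all(
        std::vector<std::string> const& inputs) {
    std::vector<boost::shared_ptr<document> > out;
    for (std::size_t i = 0; i < inputs.size(); ++i) {
        try {
            out.push_back(parse(inputs[i]));
        }
        catch (std::exception const&) {
            // A local failure gets local recovery; nothing else is
            // affected.
        }
    }
    return out;
}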
>> > it's a
>> > precondition violation, which is UB, so just use an assertion given that
>> > this particular precondition is easily able to be checked. Further, I
>> > vote
>> > against removing it as a precondition/turning it into documented
>> > exception-on-null
>>
>> This class is meant to make code safer - that is its point. But as you
>> describe it, just using this class introduces the possibility of
>> undefined behaviour where there was none before. And undefined
>> behaviour can do anything. So it actually makes code less safe. So why
>> use the class?
>>
>
> It doesn't introduce UB where there was none before, it properly brings a
> hypothetical source of UB to initialization
How is that not introducing undefined behaviour? Say there's a function:
void foo(boost::shared_ptr<...> const& x) {
    if (!x) { something_or_other(); }
    else blah_blah(*x);
}
By using shared_ptr_nonnull, that check can be removed:

void foo(boost::shared_ptr_nonnull<...> const& x) {
    blah_blah(*x);
}
But a caller who believes their shared_ptr is never null may copy it
into a shared_ptr_nonnull without checking for null first, and that
action introduces the possibility of undefined behaviour where there
was none before (given that there's always a possibility of bugs in
non-trivial code).
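Spelled out (names invented for illustration; shared_ptr_nonnull here
is the proposed class, with the precondition design):

// find_widget can return null when there's a bug somewhere; the
// caller merely believes that it never does.
boost::shared_ptr<widget> find_widget(int id);

void caller(int id) {
    boost::shared_ptr<widget> p = find_widget(id);
    // No null check, because the caller "knows" p is never null.
    // Under the precondition design, constructing from a null p is
    // undefined behaviour; with plain shared_ptr, foo's explicit
    // check would have coped.
    foo(boost::shared_ptr_nonnull<widget>(p));
}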
> Saying that it somehow makes code less safe
> is ludicrous. It makes preconditions for functions dealing with non-null
> shared_ptrs simpler (since the non-nullness is already guaranteed by the
> invariants), and also simplifies code dealing with them.
The invariants don't guarantee anything if there is no check. This
false sense of security is what makes the code less safe.
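To make that concrete, here's a minimal sketch of the two designs (not
the proposed implementation, just the shape of each):

#include <boost/assert.hpp>
#include <boost/shared_ptr.hpp>
#include <stdexcept>

// Design A: null is a precondition violation. BOOST_ASSERT compiles
// away in release builds, so nothing enforces the "invariant" at
// runtime.
template <class T>
class nonnull_by_precondition {
    boost::shared_ptr<T> p_;
public:
    explicit nonnull_by_precondition(boost::shared_ptr<T> const& p)
        : p_(p) {
        BOOST_ASSERT(p_);
    }
};

// Design B: null is checked and rejected. The invariant genuinely
// holds for every object that finishes constructing.
template <class T>
class nonnull_by_check {
    boost::shared_ptr<T> p_;
public:
    explicit nonnull_by_check(boost::shared_ptr<T> const& p)
        : p_(p) {
        if (!p_) throw std::invalid_argument("null shared_ptr");
    }
};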
>> > since this either A) throws an exception that cannot be
>> > properly handled
>>
>> You keep saying that, but you never justify it. This kind of thing is
>> very contextual, there are no universal laws.
>>
>
> I didn't think it needed explanation: If you are using a non-null
> shared_ptr it is because you always want the pointer to refer to an object.
> Therefore, if a user attempts to construct the non-null shared_ptr with a
> null pointer, it is a programmer error. In other words, the programmer
> mistakenly believes that he is passing the function a pointer that is not
> null. Given that the programmer thought he was passing a valid pointer, how
> can he now possibly handle the exception that pops out of the function? The
> handler can't fix the mistake in the programmer's code.
This goes back to what I was saying about local data and global state.
Problems due to unexpected local data can often be kept local. A
programmer error of this sort does not mean that everything is broken.
If you can't deal with an unexpected exception here, how can you deal
with exceptions anywhere? Do you really think that programmers are
always aware of every possible exception?
And, of course, undefined behaviour allows for the throwing of
exceptions, so if you can't deal with exceptions, you can't deal with
undefined behaviour.
>> > or B) invites users to rely on the exception behavior to
>> > use the exception for control flow (I.E. instead of checking for null
>> > before handing it off, they pass it off and check for null by catching an
>> > exception, which is a misuse of exceptions).
>>
>> You have faith that programmers won't violate your pre-conditions, but
>> you don't have faith in them following your theories about exceptions.
>> Which is odd since the latter is easier to do, and less problematic if
>> it isn't followed.
>>
>
> Exactly what theory are you referring to? That exceptions shouldn't be used
> as a means of basic control flow? I'd hope that we're beyond that.
What I was getting at is that it's easier to avoid using exceptions as
control flow, because you're always aware that you're doing it. You'll
never do it accidentally; it's always a conscious activity. But it's
harder to avoid violating pre-conditions, because you're usually
unaware that you're doing it; it's almost always done by accident. And
using exceptions for control flow might go against our principles, but
it will still cause less harm than undefined behaviour would, since it
has more of a chance of being predictable. To say that A is worse than
B is not an endorsement of B; it's pointing out the problems in A.
> If passing the
> null pointer was not a programmer error as described in A, in other words,
> if it was an accepted possibility that the pointer might be null on the
> part of the programmer, then he is simply relying on the function to do the
> check and throw an exception. If this is the case, then the exception is no
> longer an exceptional case and he is instead using the exception for basic
> control flow.
We certainly should rely on functions throwing an exception when
appropriate. That doesn't make those cases any less exceptional, and it
doesn't mean that the exceptions are being used for basic control flow.
By your logic all use of exceptions is wrong, since we rely on
functions and objects to throw when it's specified that they will
throw. Anything an exception is thrown for is an accepted possibility;
otherwise there would be no exception.
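For reference, the two call-site styles at issue look like this
(illustrative names again, assuming the throwing constructor):

// Style 1: check before converting; the exception stays exceptional.
void use_checked(boost::shared_ptr<widget> const& p) {
    if (p) foo(boost::shared_ptr_nonnull<widget>(p));
    else handle_missing();
}

// Style 2: rely on the throw. This is the "control flow" use Matt
// objects to, but even here the worst case is a predictable throw
// rather than undefined behaviour.
void use_catching(boost::shared_ptr<widget> const& p) {
    try {
        foo(boost::shared_ptr_nonnull<widget>(p));
    }
    catch (std::invalid_argument const&) {
        handle_missing();
    }
}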