Subject: Re: [boost] [outcome] success-or-failure objects
From: Emil Dotchevski (emildotchevski_at_[hidden])
Date: 2018-01-24 06:33:37


On Tue, Jan 23, 2018 at 9:44 PM, Gavin Lambert via Boost <
boost_at_[hidden]> wrote:

> Yes. *But* it is useful to actually verify that preconditions are not
> violated by accident.
>
> Usually this is done only in debug mode by putting in an assert. And that
> is sufficient, when unit tests exercise all paths in debug mode.
>

Yes. Notably, assert does not throw.

> The checks are usually omitted from release builds for performance
> reasons. But some applications might prefer to retain the checks but throw
> an exception instead, in order to sacrifice performance for correctness
> even in the face of unexpected input.
>

I feel we're going in circles. If you define behavior for a given
condition, it is not a precondition, and from the viewpoint of the function
it is not a logic error when that condition occurs (because the function is
designed to deal with it).

Don't get me wrong, I'm not saying that it is a bad idea to write functions
which throw when they receive "bad" arguments. I'm saying that in this case
they should throw exceptions other than logic_error, because calling the
function with "bad" arguments is not necessarily a logic error; it is
something the caller can, correctly, rely on.
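
To make that concrete, here is a minimal sketch (the function and exception
names, parse_port and bad_user_input, are made up for illustration): a
function whose documented contract is to throw on "bad" input, using an
exception type that is not a logic_error, precisely because the caller is
allowed to rely on that behavior.

    #include <stdexcept>
    #include <string>

    // Hypothetical example: rejecting malformed input is part of this
    // function's documented contract, so the caller may legitimately rely
    // on the throw. It is a runtime condition, not a logic error, hence
    // not a std::logic_error.
    struct bad_user_input : std::runtime_error {
        using std::runtime_error::runtime_error;
    };

    int parse_port(const std::string& s) {
        if (s.empty())
            throw bad_user_input("empty port string");
        int value = 0;
        for (char c : s) {
            if (c < '0' || c > '9')
                throw bad_user_input("port is not a number: " + s);
            value = value * 10 + (c - '0');
            if (value > 65535)
                throw bad_user_input("port out of range: " + s);
        }
        return value;
    }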

> The more confident that you are (hopefully backed up by static analysis
> and unit tests with coverage analysis) that the code doesn't contain such
> logic errors, the more inclined you might be to lean towards the
> performance end rather than the checking end. But this argument can't
> apply to public APIs of a library, since by definition you cannot know all
> callers so cannot prove they all get it "right".
>

This is not about how likely it is for the condition to occur, but about
what _kind_ of condition it is. By definition, logic errors cannot be
reasoned about: the programmer, for whatever reason, did not consider them,
and, by definition, you can't know the best way to deal with them.

Consider an engine control module for an airplane which has a logic error.
If I understand your point, you're saying that you should try to deal with
it in hopes of saving lives. What I'm saying is that you might end up
killing more people.

> To bring this back to Outcome:
>
> Some people would like .error() to be assert/UB if called on an object
> that has no error for performance reasons (since they will "guarantee" that
> they don't ever call it in any other case).
>

That would be assert(!error()), with the appropriate syntax. The meaning
is: I know that in this program it is impossible for this error to occur;
if it does occur, that indicates a logic error. And this is not a matter of
preference, it's a matter of defining the correct semantics. It either is a
logic error or it isn't.
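
As a rough, self-contained sketch of that distinction (the result type
below is a hypothetical stand-in, not Outcome's actual class):

    #include <cassert>

    // Hypothetical, minimal stand-in for an Outcome-style result type.
    struct result {
        int  value_ = 0;
        bool has_error_ = false;
        bool has_error() const { return has_error_; }
        int  value() const { return value_; }
    };

    int consume(result r) {
        // The caller asserts its own claim that an error is impossible at
        // this point. If the assert ever fires, that is a logic error in
        // this program: the process aborts, it does not throw.
        assert(!r.has_error());
        return r.value();
    }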

> Other people would prefer that this throws (or does something assert-like
> that still works in release builds), so that it always has non-UB failure
> characteristics in the case that some programmer fails to meet that
> "guarantee".
>

If you want asserts in release builds, don't define NDEBUG. Again, note
that this will abort in case of logic errors, rather than throw.
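
A minimal illustration of that point (the function is made up; the
assert/NDEBUG behavior is standard):

    #include <cassert>

    // assert() compiles away when NDEBUG is defined (the usual release
    // setting). Build without defining NDEBUG to keep the check in a
    // release build. A failed assert calls std::abort(); it never throws.
    void store(int* p) {
        assert(p != nullptr);  // aborts on violation, does not throw
        *p = 42;
    }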

>
>> Debug iterators exist to help find logic errors, not to define behavior in
>> case of logic errors.
>>
>
> Those are the same thing.
>

Nope, you don't write programs that recover from dereferencing invalid
debug iterators. The behavior is still undefined, it's just that now you
know you've reached undefined behavior.
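
For example (assuming a standard library with checked iterators enabled,
e.g. _GLIBCXX_DEBUG for libstdc++ or an MSVC debug build):

    #include <vector>

    int main() {
        std::vector<int> v{1, 2, 3};
        auto it = v.begin();
        v.clear();     // invalidates `it`
        return *it;    // still undefined behavior by the language; a checked
                       // iterator implementation typically aborts with a
                       // diagnostic here rather than silently misbehaving
    }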

>>> (I'm using "undefined" in the English "it could be in one of many
>>> intermediate states" sense, not the Standard "dogs and cats living
>>> together" sense. Mutexes might be broken, the data might be silly, and
>>> the
>>> class invariant might have been violated, but it is probably still
>>> destructible.)
>>>
>>
>> And I'm using its domain-specific meaning: moved-from objects don't have
>> undefined state, they have well-defined but unspecified state.
>>
>
> If you prefer, then, what I was saying is that in the absence of outright
> memory corruption (double free, writes out of bounds, etc), then all
> objects should at all times be in *unspecified* but destructible states --
> even after logic errors.
>

What does "should" mean? How can you know that an object is destructible in
all cases? I submit that this is true only if you don't have logic errors.

> They may contain incorrect results, or unexpected nulls, or otherwise not
> be in intended or expected states, but that shouldn't prevent destruction.
>

How can you guarantee this? In some other language, maybe. In C/C++, you
can't.

>> More to the point, this situation is NOT a logic error, because the object
>> can be safely destroyed. Logic error would be if the object ends up in
>> undefined state, which may or may not be destructible. You most definitely
>> don't want to throw a C++ exception in this case.
>>
>
> If the invalid-parameter and out-of-bounds classes of logic errors are
> rigorously checked at all points before the bad behaviour even happens then
> the object won't ever end up in an undefined state to begin with -- merely
> an unexpected state from the caller's perspective.
>

If it is a logic error for the caller to not expect this state, you have no
idea what the caller will end up doing, and you most definitely can't assume
that it is safe to throw.

> Obviously (it's turtles all the way down) if the checks themselves have
> incorrect logic then this doesn't really help; but that's what the unit
> tests are for.
>

Yes. The only way you can deal with logic errors is by finding them and
removing them.

Emil

