Subject: Re: [boost] Noexcept
From: Andrzej Krzemienski (akrzemi1_at_[hidden])
Date: 2017-06-20 09:16:33
2017-06-20 10:32 GMT+02:00 Emil Dotchevski via Boost <boost_at_[hidden]>
> On Mon, Jun 19, 2017 at 11:58 PM, Andrzej Krzemienski via Boost <
> boost_at_[hidden]> wrote:
> > 2017-06-20 3:38 GMT+02:00 Emil Dotchevski via Boost <boost_at_[hidden]>:
> > > On Mon, Jun 19, 2017 at 2:41 PM, Andrzej Krzemienski via Boost <
> > > > 1. I want to separate resource acquisition errors (exceptions are
> > > > thrown upon memory exhaustion) from input validation.
> > >
> > > Why?
> > ...
> > I do not even treat validation failure as
> > "error". But I still like to have the "short exit" behavior of errors.
> If it's not an error then it is not an error -- and you should not treat it
> as such.
> > > > 2. Some debuggers/IDEs by default engage when any exception is
> > > > thrown. I do not want this to happen when an incorrect input from
> > > > the user is obtained.
> > >
> > > "By default", so turn off that option.
> > But after a while I have concluded that it is a good default. Even if I am
> > debugging something else, if I get a "resource failure" or "logic error"
> > (like a broken invariant)
> Yes, std::logic_error is another embarrassment for C++. Logic errors by
> definition leave the program in an undefined state; the last thing you want
> to do in this case is to start unwinding the stack. You should use an
> assert instead.
I think I personally agree with you here. However, whenever I try to
promote this philosophy, I encounter so much resistance that I am forced
to drop the subject before I have a chance to present rational arguments.
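To make the philosophy concrete, here is a minimal sketch (all names are hypothetical, not from any library under discussion) of asserting on a broken precondition instead of throwing std::logic_error:

```cpp
#include <cassert>
#include <cstddef>

// A violated precondition is a bug: assert and stop in the debugger
// instead of unwinding the stack through a program whose state is
// already suspect.
double average(const double* data, std::size_t n)
{
    assert(n > 0 && "precondition: non-empty range");
    double sum = 0.0;
    for (std::size_t i = 0; i < n; ++i)
        sum += data[i];
    return sum / n;
}
```

In a release build the assert compiles away; in a debug session it stops exactly where the bug is, with the full call stack intact.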
> > I want to be alerted, and possibly stop what I was
> > debugging before. This default setting is my friend, provided I do not
> > throw exceptions for just any "irregularity".
> Exceptions are not used in case of "irregularities" but to enforce
> postconditions. When the program throws, it is in well defined state,
> working correctly, as if the compiler automatically writes "if" statements
> to check for errors before it executes any code for which it would be a
> logic error if control reaches it. The _only_ cost of this goodness is that
> your code must be exception safe.
No disagreement here. It is just that my sense tells me preconditions
should be defined so that breaking them is rare. So rare that I do not even
mind my debugger interrupting me.
> Programmers who write debuggers that by default break when a C++ exception
> is thrown likely do not understand the semantic differences between OS
> exceptions (e.g. segfaults, which *do* indicate logic errors) and C++
> exceptions. Semantically, that's like breaking, by default, every time a C
> function returns an error code.
Same here. How often do C functions in your program return error codes? If it
is "quite often", then this might be a problem on its own.
> > > > 3. I want validation failures to be handled immediately: one level
> > > > up the stack. I do not expect or intend to ever propagate them
> > > > further.
> > >
> > > You can catch exceptions one level up if you want to. Right? :)
> > I can. And it would work. But it just feels like not the right tool for
> > the job. It would not reflect my intention as clearly as `outcome`.
> That's because (in your mind, as you stated) you're not using Outcome to
> handle "real" errors.
Maybe you are right. Of course "real" and "unreal" are very subjective, but
maybe you have got a point. When writing a low-level asynchronous library
like AFIO, situations like not being able to open a file or write to it at
a given moment should not be treated as a "real error", because at this
level, in this context, there is no corresponding postcondition. But still,
a dedicated library for representing variant return values is needed, and
`variant` is not good enough.
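To illustrate why plain `std::variant` is not good enough, here is a sketch (the `read_line` stub and `io_error` type are hypothetical, introduced only for this example) of the manual error forwarding it forces on the caller:

```cpp
#include <string>
#include <variant>

struct io_error { int code; };

// Stub standing in for a real I/O call that may fail.
std::variant<std::string, io_error> read_line()
{
    return std::string("line\n");
}

// With a bare std::variant, every step must spell out the check and
// forward the error alternative by hand; nothing short-circuits for you.
std::variant<std::string, io_error> read_two_lines()
{
    auto a = read_line();
    if (auto* e = std::get_if<io_error>(&a)) return *e; // manual forwarding
    auto b = read_line();
    if (auto* e = std::get_if<io_error>(&b)) return *e; // and again
    return std::get<std::string>(a) + std::get<std::string>(b);
}
```

A dedicated result type can fold that repeated check-and-forward boilerplate into one operation, which is the ergonomic gap a library like `outcome` fills.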
> > > However, if you're only propagating errors one level up, it really
> > > doesn't matter how you're handling them. I mean, how much trouble can
> > > you get into in this case? It's trivial.
> > But it reflects my intentions clearly and gives me confidence that the
> > information will not escape the scope if I forget to put a try-block.
> Not really: if you forget to check for errors and call .value() on the
> outcome object, it'll throw (if I understand the outcome semantics
> correctly).
I think this is where you do not appreciate the power of static checking.
Yes, technically it is possible to just access `o.value()` manually and
get a throw or some unintended behavior. But I would consider it an
irresponsible use, and compare it to the situation where you use the type
`unique_ptr` and call `get()` and `release()` to compromise its safety:
unique_ptr<T> factory(X x, Y y)
{
    unique_ptr<T> ans = make_unique<T>(x);
    T* raw = ans.release();              // ownership deliberately dropped
    raw->m = compute_and_maybe_throw(y); // leaks if this throws
    return unique_ptr<T>(raw);
}
And you might argue that `unique_ptr` is unsafe, or that using `unique_ptr`
is dangerous. But that would be false: just because you can compromise a
type's static safety does not mean that the type is unsafe.
Same goes for outcome:
outcome<T> append_Y(T t, Y y);

outcome<T> fun(X x, Y y)
{
    outcome<T> t = make_X(x);
    return append_Y(t, y);      // fails to compile
    return append_Y(TRY(t), y); // ok, and safe
}
If you forget to check for the error the compiler will remind you. Not the
runtime, not the call to std::terminate(), but the compiler!
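The mechanism behind this can be sketched with a deliberately simplified result type (this is not the real Outcome library, and unlike Outcome's TRY it does not propagate errors; it only shows how the type system forces the check to be written):

```cpp
#include <stdexcept>
#include <utility>

// The stored value is reachable only through an explicit unwrap step,
// so passing a result<T> where a T is expected is a compile-time error.
template <class T>
class result {
    bool ok_;
    T value_{}; // simplification: T must be default-constructible
public:
    explicit result(T v) : ok_(true), value_(std::move(v)) {}
    result() : ok_(false) {}
    bool has_value() const { return ok_; }
    T unwrap() const // the only way to reach the value
    {
        if (!ok_) throw std::runtime_error("no value");
        return value_;
    }
};

int twice(int x) { return 2 * x; }

result<int> parse_digit(char c)
{
    if (c < '0' || c > '9') return result<int>{};
    return result<int>(c - '0');
}

// twice(parse_digit('7'));          // fails to compile: result<int> is not int
// twice(parse_digit('7').unwrap()); // ok: the check is explicit in the code
```

Because `result<int>` never converts to `int`, forgetting to deal with the error shows up as a type error at the call site rather than as behavior at runtime.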
> > > Assuming we agree that it is not acceptable for error-neutral contexts
> > > to kill errors they don't recognize, this is a problem.
> > Ok. It is just that I have parts of the program that I do not want to be
> > exception neutral by accident.
> You mean bugs where exceptions aren't handled when they should be, which in
> C++ may result in std::terminate, which seems harsh. But these bugs are
> also possible when errors aren't reported by throwing, the result being
> that they're neither handled nor propagated up the call stack -- in other
> words, they're being ignored. I submit that std::terminate is a much
> preferable outcome in this case (no pun intended).
std::terminate() is better than ignoring exceptions in this case, yes. But
having the compiler tell you that you have this problem is even better. And
this is what `outcome` offers.
> > For rare situations where I need different characteristics of error
> > reporting mechanism, I will need to resort to something else, like a
> > dedicated library.
> I personally think that libraries are definitely needed when they can deal
> efficiently with 97% of all use cases, the remaining 3% being not nearly as
> important. Evidently we disagree.
If I measure how much area of my program needs a `variant` it might be
about 5%. And in some programs I do not need it at all. But I appreciate
that I have a standard library (well tested, well designed) for it. Some of
the Standard Library components I have never used, but I still consider the
decision to have it there to be correct.
> > > Your use of outcome is probably fine in this simple case but
> > >
> > > out::expected<Range, BadInput> parse_range (const std::string& input)
> > >
> > > looks much too close to exception specifications:
> > >
> > > Range parse_range(const std::string& input) throw(BadInput)
> > In some other language, yes: in a language where such a throw
> > specification is enforced statically, like Java.
> It's a bad idea. Again: generally, functions (especially library functions)
> can not know all the different kinds of errors they might need to forward
> (one way or another) up the call stack. From
> "When you go down the Java path, people love exception specifications until
> they find themselves all too often encouraged, or even forced, to add
> throws Exception, which immediately renders the exception specification
> entirely meaningless. (Example: Imagine writing a Java generic that
> manipulates an arbitrary type T…)"
I agree with your diagnosis. I am not advocating for Java exceptions. My
apologies if I have confused you.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk