Subject: Re: [boost] expected/result/etc
From: Emil Dotchevski (emildotchevski_at_[hidden])
Date: 2016-02-10 15:26:51


On Wed, Feb 10, 2016 at 10:57 AM, Niall Douglas <s_sourceforge_at_[hidden]>
wrote:

> On 9 Feb 2016 at 12:04, Emil Dotchevski wrote:
>
> > It feels strange to have to defend the use of exceptions for reporting
> > errors in C++, on the boost development board of all places. There are
> many
> > other advantages, for example when returning errors there is no such
> thing
> > as error-neutral contexts in your program, which increases coupling. Yes,
> > in some contexts one can't afford to use exceptions, but all general
> > complaints that exception handling causes performance or any other
> problems
> > are theoretical, at best.
>
> I've noticed a lot of people taking issue with the overhead of
> exceptions really mean to say they take issue with the
> *indeterminacy* introduced by exceptions, and even that often is
> really a proxy for the phrase "indirect/implicit/hidden/non-obvious
> use of malloc() or free()" which is the main source of unpredictable
> exception throws.
>
> In other words, people don't mind predictable exceptions anything
> like as much as potential unpredictable unknowable overheads.
>
> My current contract has my coworkers highly surprised that fixed
> worst case latency code can be easily written using the STL. They had
> assumed that games and audio development banned use of the STL and
> exceptions due to unpredictable execution times. They are not wrong,
> you just need to learn off which bits of the STL could call malloc or
> have worse than linear execution times and which bits never will, and
> only use the latter in hot code paths. That's really a
> training/familiarity(/maintenance) problem in the end.
>

If such tricky code targets multiple platforms, you should assume that any
standard C, C++ or OS function may allocate memory. Keep in mind that the
standard guarantees overall complexity, not the cost of a specific call. For
example, while std::sort is generally O(n log n), an implementation may still
allocate memory, and while the cost of that allocation is independent of the
number of elements you're sorting, it may well be quadratic in some other,
possibly much larger, quantity.
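
To make that distinction concrete, here is a minimal sketch (using
std::vector::push_back rather than std::sort, because it's easy to observe):
push_back is amortized O(1) over the whole sequence of calls, yet any one
specific call may reallocate and copy everything.

#include <cstddef>
#include <cstdio>
#include <vector>

int main()
{
    std::vector<int> v;
    std::size_t last_cap = v.capacity();
    for( int i = 0; i < 1000; ++i )
    {
        v.push_back(i); // amortized O(1) overall, but this specific call
                        // may reallocate and copy every existing element
        if( v.capacity() != last_cap )
        {
            std::printf( "push_back #%d reallocated: capacity %zu -> %zu\n",
                i, last_cap, v.capacity() );
            last_cap = v.capacity();
        }
    }
}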

I'd argue that in such tricky use cases you should be more concerned about
unpredictable costs like memory allocations and OS hitches than about
exception handling overhead. It's true that in some specific case throwing
an exception may be "too slow", but it'd be too slow as a matter of fact
(because the profiler said so), not because of some general reasoning; and
the solution is to not throw in that case, rather than to avoid exception
handling in principle.
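
As a minimal sketch of what I mean (the function names below are made up):
keep exceptions as the default, and give the one hot path the profiler
flagged a non-throwing variant.

#include <cstdio>
#include <optional>
#include <stdexcept>
#include <string>

// Hypothetical parser used on both cold and hot paths.
std::optional<int> try_parse( std::string const & s ) noexcept
{
    if( s.empty() || s[0] < '0' || s[0] > '9' )
        return std::nullopt;  // failure reported without throwing
    return s[0] - '0';        // placeholder "parse"
}

// Cold paths keep using exceptions, so their callers stay error-neutral.
int parse( std::string const & s )
{
    if( auto v = try_parse(s) )
        return *v;
    throw std::invalid_argument( "parse failed: " + s );
}

// The one hot path the profiler flagged calls the non-throwing variant.
int parse_hot( std::string const & s, int fallback ) noexcept
{
    auto v = try_parse(s);
    return v ? *v : fallback;
}

int main()
{
    std::printf( "%d %d\n", parse("7"), parse_hot("x", -1) );
}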

> > > With just a little extra libclang tooling (some
> > > of which I plan to write) this style idiom ought to be mathematically
> > > provable as correct in the functional programming sense, which would
> > > be cool, not least for those programming nuclear reactors etc.
> >
> > Could you prove anything mathematically in the presence of side effects
> and
> > pointers?
>
> It's not my field so everything I'm about to say next is hearsay, but
> back during the nuclear reactors certification for QNX (which is
> written in C) I noticed you must always assume that functions you
> call behave as specified and the only goal is to prove the current
> function you are proving is no worse than the things it calls. From
> what I saw, you can't prove a program, but you can prove a program if
> you assume everything it calls is correct and you don't do a long
> list of things in C which would break the proof. They had LLVM based
> tooling which generated the proofs from the AST or flagged code where
> you were doing something not permitted, it appeared to work very
> well.
>
> Obviously C++ is orders of magnitude harder, but with a restrictive
> enough list of things you can't do I'm sure it's achievable. Whether
> such a program would still qualify as C++ is an open question.
>

I'm told that to this day some programs written in Fortran outperform
equivalent C programs, and that's because in the presence of pointers and
side effects it is impossible for the C optimizer to find all the ways a
piece of memory can change. So, you can reason all you want about errors,
but if your program is in C, you must test; and testing the error paths of
the code is extremely tricky. If you don't use exceptions, you're admitting
the possibility of a logic error every time you call a function that may
fail, because nothing forces the caller to check the result.
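
Here is a minimal sketch of the aliasing point: because dst and scale below
may refer to the same memory, a C or C++ compiler must assume that every
store through dst can change *scale, and typically re-reads it on every
iteration; a Fortran compiler may assume the arguments don't alias and hoist
the load out of the loop.

void scale_all( float * dst, float const * scale, int n )
{
    // dst and scale may alias, so *scale must be re-read after each store
    for( int i = 0; i < n; ++i )
        dst[i] *= *scale;
}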

Logically, the only thing exception handling does is automatically check
for failures every time you call a function. It's a way to get the compiler
to automatically write if( error ) return error for you. This is a Good
Thing.
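
In other words (a minimal sketch, with made-up functions):

#include <stdexcept>
#include <system_error>

// Hypothetical low-level operations that may fail (trivial stubs here).
std::error_code read_header() { return {}; }
std::error_code read_body()   { return std::make_error_code( std::errc::io_error ); }

// Without exceptions: every call site must remember to check and forward.
std::error_code load_manually()
{
    if( std::error_code ec = read_header() )
        return ec; // the "if( error ) return error" you have to write by hand
    if( std::error_code ec = read_body() )
        return ec;
    return {};
}

// With exceptions: the same propagation happens implicitly, and an
// error-neutral caller needs no error-handling code at all.
void read_header_or_throw() { if( auto ec = read_header() ) throw std::system_error(ec); }
void read_body_or_throw()   { if( auto ec = read_body() ) throw std::system_error(ec); }

void load_with_exceptions()
{
    read_header_or_throw(); // a failure here propagates automatically
    read_body_or_throw();
}

int main()
{
    if( load_manually() ) { /* handle error */ }
    try { load_with_exceptions(); } catch( std::system_error const & ) { /* handle error */ }
}

The two load functions do the same thing, but in load_with_exceptions the
checks and the early returns are generated by the compiler.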

Emil

