Subject: Re: [boost] [review] Review of Outcome v2 (Fri-19-Jan to Sun-28-Jan, 2018)
From: Emil Dotchevski (emildotchevski_at_[hidden])
Date: 2018-01-29 03:40:52
On Sun, Jan 28, 2018 at 5:37 AM, Niall Douglas via Boost <
> > So the question is what's the upside of using the
> > cumbersome macro rather than relying on exception handling? Is it just
> > perceived performance improvement?
> Outcome's default configuration is to do nothing when E is a UDT. If E
> matches the various documented traits, observing T when an E is present
> will throw various exceptions according to the rules you tell it. The
> TRY operation is therefore an alternative way of unwinding the stack
> than throwing exceptions, one which does not invoke EH table scans.
> Sure, it's more manual and cumbersome, but it also means that the
> failure handling path has the same latency as the success handling path.
> And yes, if your EH implementation in the compiler is not table based,
> then there is not much performance gain.
> Some on WG21 have mused about an attribute by which one could mark up a
> function, class or namespace to say "please use balanced EH here". It
> would not be an ABI break, and code with the different EH mechanisms
> could intermix freely. If Outcome is successful in terms of low latency
> userbase, I don't doubt that SG14 will propose just such an EH attribute
> in the future.
> Until then, this is a library based solution.
The reason why I wanted to keep the discussion on semantics rather than
implementation details is that, fundamentally, the exception handling ABI
matters only when crossing function call boundaries. When a function throws
an exception through 10 levels of function calls which got inlined, the
compiler has much more freedom to optimize exception handling; and in the
case when we throw an exception and catch it without crossing any function
call boundary (that is, all relevant functions are inlined), there is no
need to "really" throw an exception at all.
Also note that where performance matters, function calls are usually
inlined, which means that the very cases where throwing exceptions could,
in theory, cause performance problems are also the cases where compilers
have, in theory, the most freedom to optimize.
Now, I'm not aware of any compiler doing this kind of optimization today
(someone correct me if I'm wrong!), but I'm also old enough to remember
when inlining function calls was problematic. I've had discussions with C
programmers who argued that it was a mistake to introduce unreliable
inlining in C++ because C already had a perfectly reliable inlining
mechanism: preprocessor macros.
If we talk about real-world performance benefits or the lack thereof, I've
had this conversation with other game programmers many times before. The
claim is that there is no way they can afford the exception handling
overhead, so they don't use exceptions. But there is zero evidence to
support this claim. It is an axiomatic belief, just like the axiomatic
belief that they can't afford to use shared_ptr. In reality, in both cases
it is a matter of preferring one coding style over another. Which is fine,
but then (circling back) OUTCOME_TRY puzzles me, because semantically it
is as if using exceptions (e.g. if(error) return error) except a lot less
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk