From: Emil Dotchevski (emildotchevski_at_[hidden])
Date: 2019-11-27 21:55:18


On Wed, Nov 27, 2019 at 4:47 AM Niall Douglas via Boost <boost_at_[hidden]> wrote:
>
> On 27/11/2019 00:01, Emil Dotchevski via Boost wrote:
> > I wrote a benchmark program for LEAF and Boost Outcome. Here are the
> > results:
> >
> > https://github.com/zajo/leaf/blob/master/benchmark/benchmark.md
>
> These results seem reasonable if you are calling 32 functions deep, and
> those functions do no work.
>
> Returning large sized objects where you cause RVO to be disabled is by
> definition a benchmark of memcpy().
>
> In real world code, the C++ compiler works very hard to avoid calling
> deep stacks of small functions. It's rare in the real world. Returning
> large sized objects from functions also tends to be rare in the real
world.

Interestingly, some of the feedback I got is that the call to rand()
contaminates the results, since it isn't free. I tend to agree: the point of
a benchmark is to amplify the impact of the system under test so that its
performance can be evaluated.

Communicating large error objects does not cause RVO to be disabled with
LEAF. It is designed with a strong bias towards the most common use case,
where callers check for, but do not handle, errors.
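As a minimal sketch of that pattern (the function and error type names
below are made up for illustration; leaf::result, leaf::new_error and
BOOST_LEAF_AUTO are the actual LEAF primitives):

#include <boost/leaf.hpp>
#include <cstdlib>

namespace leaf = boost::leaf;

struct e_parse_error { int line; }; // hypothetical error object type

leaf::result<int> parse_value( char const * s )
{
    if( !s || !*s )
        return leaf::new_error( e_parse_error{1} ); // stored out-of-band
    return std::atoi(s);
}

// The common case: check for failure and forward it, without touching
// the error object itself. BOOST_LEAF_AUTO returns early on failure.
leaf::result<int> parse_and_double( char const * s )
{
    BOOST_LEAF_AUTO(v, parse_value(s));
    return v * 2;
}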

If the caller is only going to check for failures and forward them to its
caller, moving error objects one stack frame at a time adds overhead.
Besides, even though large error objects are not common, the need to
communicate several error objects is. It makes no sense to try to bundle
all of that in a return value and hope for the best from the optimizer,
given that most likely the immediate caller does not handle errors and
therefore will not access anything other than the discriminant.
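For example (hypothetical error types, actual leaf::new_error call): the
failure site below passes two error objects at once, yet the
leaf::result<T> it returns carries only the discriminant; the objects
travel out-of-band to whatever frame has handlers for them.

#include <boost/leaf.hpp>
#include <cerrno>
#include <cstdio>

namespace leaf = boost::leaf;

struct e_api_name  { char const * value; }; // hypothetical
struct e_sys_errno { int value; };          // hypothetical

leaf::result<std::FILE *> open_readonly( char const * path )
{
    if( std::FILE * f = std::fopen(path, "r") )
        return f;
    // Two error objects at once; a check-only caller never sees them,
    // it only tests and forwards the discriminant.
    return leaf::new_error( e_api_name{"fopen"}, e_sys_errno{errno} );
}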

To clarify, LEAF also needs to move error objects, including large error
objects, up the call chain, but they are moved only to (and between)
error-handling stack frames, skipping all intermediate check-only levels.
The benchmark is actually a bit unfair to LEAF in this regard, since the
"handle some" case handles errors at every 4th function call, which is
excessive in my experience (the "check-only" case does handle errors at
the top).
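Here is a sketch of such an error-handling frame, reusing the hypothetical
error types from the previous sketch (leaf::try_handle_some is the actual
LEAF function). The handler's argument list tells LEAF which error object
types this frame stores; objects produced anywhere below are moved
directly into it, skipping the check-only levels in between.

#include <boost/leaf.hpp>
#include <cstdio>

namespace leaf = boost::leaf;

struct e_api_name  { char const * value; }; // hypothetical, as before
struct e_sys_errno { int value; };

leaf::result<std::FILE *> open_readonly( char const * path ); // as sketched above

leaf::result<std::FILE *> open_with_fallback( char const * primary, char const * fallback )
{
    return leaf::try_handle_some(
        [&]() -> leaf::result<std::FILE *>
        {
            return open_readonly(primary);
        },
        [&]( e_api_name const & api, e_sys_errno const & err ) -> leaf::result<std::FILE *>
        {
            // The error objects were moved here directly from the
            // failure site, not through the intermediate frames.
            std::fprintf(stderr, "%s failed, errno=%d\n", api.value, err.value);
            return open_readonly(fallback); // recover, or fail again
        } );
}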

It is true that compilers avoid calling deep stacks of small functions,
which is why the benchmark includes the inline vs. no_inline dimension. The
simplicity of leaf::result<T> makes it extremely friendly to the optimizer,
including when inlining is possible. I updated the benchmark paper to show
the generated code:
https://github.com/zajo/leaf/blob/master/benchmark/benchmark.md#show-me-the-code

> But in truth, when I test the new implementation in an existing
> Outcome-based codebase, I find no statistically observable difference.
> If you're doing any real work at all, Outcome can be 10x less efficient
> and it gets lost by other more dominant work.

This is true for error handling in general. The most important function of
an error-handling library is to allow users to easily communicate any and
all error objects to the error-handling contexts where they're needed. That
it can be done efficiently is an added bonus.

Emil

