Subject: Re: [boost] [variant2] Need rationale for never-empty guarantee
From: Andrey Semashev (andrey.semashev_at_[hidden])
Date: 2019-03-02 09:36:24
On 3/2/19 11:56 AM, Andrzej Krzemienski via Boost wrote:
> sob., 2 mar 2019 o 07:35 Emil Dotchevski via Boost <boost_at_[hidden]>
>> On Fri, Mar 1, 2019 at 9:37 PM Andrzej Krzemienski via Boost <
>> boost_at_[hidden]> wrote:
>>> My hypothesis is that reading valid-but-unspecified can only happen in a
>>> buggy program in an unintended path.
>> Running out of memory, or out of some other resource, does not indicate a
>> bug. In response, under the basic exception guarantee, you may get a state
>> which I'm saying shouldn't be merely "destructible" but also valid. For
>> example, if this was a vector<T>, it shouldn't explode if you call .size(),
>> or if you iterate over whatever elements it ended up with.
> This is where my imagination fails me. I cannot imagine why, upon
> bad_alloc, I would stop the stack unwinding and determine the size of my
> vectors. This is why I ask about others' experience with real-world
> correct code.
That is not an unimaginable scenario. Suppose you have two branches of
code operating on the same vector: one that requires more memory but
performs better, and another that is slower (or perhaps lacking some
other qualities, but still acceptable) and consumes fewer resources. In
that case you will want the vector to remain valid if memory allocation
fails, so that you can fall back to the cheaper branch. Although not
specifically with vectors, I have had cases like this in real-world
code.
However, in my experience, if I want to handle an OOM condition
gracefully, I tend not to trust any third-party components except the
lowest-level ones, like the C runtime, and write the relevant code
myself. This especially concerns components that allocate memory, like
containers. Unfortunately, it is often the case that either I don't
trust implementations to take OOM into account and handle it well, or I
want specific guarantees about how much memory is allocated and what
the state of the program is when OOM happens.
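One common way to get that kind of guarantee without writing a container
from scratch is to force all allocation to happen at a single, known
point. A sketch under that assumption (the function is hypothetical):

```cpp
#include <cstddef>
#include <vector>

// Reserve all memory up front: reserve() is the only call here that can
// throw std::bad_alloc. Once it succeeds, each push_back stays within
// capacity and performs no allocation, so the loop cannot fail with OOM
// and leave the work half-done.
void fill_squares(std::vector<long>& out, std::size_t n)
{
    out.clear();
    out.reserve(n); // single point of allocation failure

    for (std::size_t i = 0; i < n; ++i)
        out.push_back(static_cast<long>(i) * static_cast<long>(i));
}
```

The same idea scales up: determine the peak memory requirement first,
acquire it in one step, and treat everything after that point as
allocation-free.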
>>> And that making design compromises to
>>> address this path is not necessarily the best approach to take.
>> Consider that if you choose to allow, after an error, to have objects left
>> in such a state that they may explode if you attempt to do anything but
>> destroy them, there may not be any way to detect that state.
> Yes, and I do not see how this is a problem in practice. In my
> experience, objects that failed an operation with the basic guarantee
> can only be safely removed from the scope. (I do not even reset them.)
Removing the objects may be wasteful or require expensive operations. In
the vector example, the vector may be initially large, or expensive or
even impossible to reconstruct. If you insist on the "destroy upon
failure" logic, you would have to duplicate the vector before attempting
the operation that may fail with an exception, which is itself another
point of failure, BTW. Generally, you want to minimize the number of
points of failure while also minimizing the amount of work needed to
complete the program. There is also a third, subjective limit of code
quality or simplicity, design quality, etc., but that is not relevant to