From: Gavin Lambert (boost_at_[hidden])
Date: 2019-06-26 00:25:02
On 26/06/2019 08:02, JeanHeyd Meneide wrote:
> I can understand that it is cumbersome to write most of the specialization.
> I opted for this as the extension mechanism because people had interesting
> use cases for allowing optimizations that I could not adequately and
> generically capture using ADL or traits or similar. For example, at one
> point someone described storing a union of either a reference directly to
> the pointer inside of something equivalent to a local_shared_ptr<T> or just
> the plain old pointer itself, and doing different things based on what the
> use count was.
The specialisation syntax in C++ is fundamentally flawed, and in my
view, should absolutely never be used by anybody.
One such example of this is that (especially for header-only libraries,
although with care this can also be used for compiled libraries) it is
very useful to be able to allow multiple versions of a library to
co-exist through external namespacing:
Most of the time this Just Works™; one of the cases where this
completely falls over is that in this scenario it is impossible for the
library to define any specialisations in the std namespace (or indeed
any other namespace outside of its internal bubble, including the global
namespace).
Using ADL is much more convenient; explicit policy arguments can be used
as a fallback when ADL is not possible.
> The problematic case is when `reset()` does throw, like with
> shared_ptr. I don't think it's useful or fair to impose the comparatively
> high overhead of std::uncaught_exceptions()/std::uncaught_exception()
> checking in the constructor and destructor to prevent this class of errors.
> The most graceful thing that can happen is an exception is thrown from the
> .reset(), and being the first one gets caught by the scope just above.
> Cleverness by providing the "guarantee" of not calling .reset() if an
> exception is in progress means actually leaking memory if the .reset() call
> fails: shared_ptr and friends all guarantee in their .reset/constructor
> calls will call delete for you if they toss an exception (
> http://eel.is/c++draft/util.smartptr.shared#const-10). This means even if
> you terminate you have memory safety: removing the .reset() call means
> there is no safety anymore, and it has to be built-in to (in)out_ptr for
> every type which might have a throwing .reset() call.
Throwing another exception during unwinding can (at least in some
implementations) clobber the original exception, which will disguise the
actual source of the problem for logging or debugging purposes.
Having said that, neither choice is a great one; ideally if you know
you're already unwinding then you wouldn't call a throwing reset() but
you still either have to call a (possibly also throwing) deleter or you
will get a memory leak if the original exception does get caught without
termination somewhere higher up the call stack.
Since I assume it's harder to extract the appropriate deleter than to
just call reset(), it's probably better to continue doing that.
On 26/06/2019 10:05, Andrey Semashev wrote:
> Not calling reset() on outer exception makes sense if you require
> that the function that threw it either didn't allocate or freed the
> memory before the exception propagated out of the function. In other
> words, if the function fails, it must clean up the mess after itself
> before returning, exception or not. This is not an unreasonable
> requirement, and in fact, this is the only behavior I consider
> reasonable in the context of raw pointers.
That's fine -- if it was the function itself that threw the exception.
(And that shouldn't normally be the case because it's a C-style API.)
But what if the function has two output parameters? Memory is allocated
for both and returned as a raw pointer by the function. One of the
out_ptrs is then destroyed and calls reset(), and this throws. The
other out_ptr is then destroyed as well, and now it has a problem; it
either has to call reset() as well and possibly throw yet again, or leak
the memory.
If it calls reset() and it does throw, then terminate() will be called
and the memory leak isn't a problem after all.
If it calls reset() and it doesn't throw, then there's no memory leak
(assuming that the original reset() call cleaned up the memory before
throwing, as it is supposed to) -- and if the original exception is
caught upstream and handled, then the process can continue on its merry
way without issues (assuming it can fix whatever caused the original
exception).
If it doesn't call reset(), then it guarantees a memory (and possibly
also a more serious resource) leak -- which won't matter if the process
terminates, but does if the original exception gets caught and "handled".
Similar problems can occur if the argument list of the original function
contains one out_ptr plus some other object with a complex (possibly
throwing) destructor, rather than two out_ptrs.
> That would make shared_ptr unsupported. Which might be an acceptable
> loss, given that C-style APIs cannot return a shared_ptr (meaning a
> pointer, which is owned by another shared_ptr). From the caller's
> standpoint, the returned pointer is unique, so the caller should be
> using unique_ptr to receive the pointer. He may later use that
> unique_ptr to initialize a shared_ptr, but that would be out of
> Boost.OutPtr scope.
I'm somewhat inclined to agree with this. It *must* be illegal to use a
non-unique shared_ptr with inout_ptr, for example; and it barely makes
sense to use a unique one. So *in principle* it would be better to only
allow unique_ptr. Having said that, the way that shared_ptr handles
deleters is more convenient than that of unique_ptr, which might be an
argument for keeping it.
(I'm sad that the standard doesn't provide a type-erased deleter form of
unique_ptr, in addition to the current form. I know it's not hard to
make one, but it would be better if you didn't have to.)
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk