Subject: Re: [boost] [review] The review of Boost.DoubleEnded starts today: September 21 - September 30
From: Zach Laine (whatwasthataddress_at_[hidden])
Date: 2017-09-26 17:29:17
On Tue, Sep 26, 2017 at 11:32 AM, Thorsten Ottosen via Boost <
> Den 25-09-2017 kl. 23:13 skrev Benedek Thaler via Boost:
>> On Mon, Sep 25, 2017 at 2:52 AM, Zach Laine via Boost <
>>> There are no standard containers for which capacity() == 16 implies that
>>> when there are three elements, an allocation will occur when I add a
>>> fourth. That's substantially odd. It means that as a user I must know
>>> about extrinsic state information (how many pushes were to the front or
>>> back) in order to predict memory usage and the number of allocations even
>> No need for extrinsic state info, there are the front_free_capacity() and
>> back_free_capacity() members.
> 1. vector's capacity says nothing about whether the next push_back will
> allocate
That's exactly what capacity says. See my earlier post, plus [vector.capacity]/1:
size_type capacity() const noexcept;
Returns: The total number of elements that the vector can hold without
requiring reallocation.
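The guarantee being cited can be checked directly against std::vector (a minimal sketch; the helper name is mine, not from any library):

```cpp
#include <cstddef>
#include <vector>

// Checks the guarantee quoted above: after reserve(n), push_back up to
// capacity() never reallocates (the buffer pointer stays stable).
bool fills_to_capacity_without_realloc(std::size_t n) {
    std::vector<int> v;
    v.reserve(n);
    v.push_back(0);
    const int* buffer = v.data();
    while (v.size() < v.capacity())
        v.push_back(0);          // guaranteed not to reallocate
    return v.data() == buffer;
}
```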
> 2. there are containers that allocate on every push_back/push_front, so
> why is this odd?
Because capacity(), taken together with the current value of size(),
indicates that it will not do this when I call push_back(). Removing
capacity() and not calling devector a container would alleviate this
particular concern.
> 3. we have different options here. I don't suppose you object to the fact
> when a small buffer is specified, then the small buffer size is also the
> initial capacity?
> So the choices we have is where to distribute the capacity between
> front_free_capacity and back_free_capacity:
> A. divide it as evenly as possible
> B. maximize front_free_capacity
> C. maximize back_free_capacity
> Currently, devector does (C) and provides the means for the user to choose the
> capacity at both ends. What would you prefer instead?
I don't have a better alternative. I do find the current choice clunky.
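To make the objection concrete, here is a toy model of the capacity bookkeeping under discussion (hypothetical; the real devector manages raw storage and exposes front_free_capacity()/back_free_capacity()). With choice (C), all spare capacity of a fresh allocation sits at the back, so capacity() == 16 after three push_backs still means the very next push_front reallocates:

```cpp
#include <cstddef>

// Toy model of a devector's capacity bookkeeping; names and layout are
// illustrative only, not the proposed library's implementation.
struct toy_devector {
    std::size_t front_free;   // free slots before the first element
    std::size_t back_free;    // free slots after the last element
    std::size_t size_ = 0;

    // Choice (C) from the discussion: maximize back_free_capacity.
    explicit toy_devector(std::size_t cap) : front_free(0), back_free(cap) {}

    std::size_t capacity() const { return front_free + size_ + back_free; }

    // push_front allocates unless there is free space at the front,
    // no matter how large back_free is -- and vice versa.
    bool push_front_would_allocate() const { return front_free == 0; }
    bool push_back_would_allocate()  const { return back_free  == 0; }

    void push_back()  { --back_free;  ++size_; }
    void push_front() { --front_free; ++size_; }
};
```

This is the "extrinsic state" point: with a plain vector, capacity() - size() alone predicts the next allocation; here the split between the two ends matters too.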
> I don't understand why should_shrink() is part of GrowthPolicy. If I'm
>>> growing the container, why am I worried about shrinking it?
>> Because next time you can avoid growing it.
> To add to that: shrinking involves allocating a new buffer and copying or
> moving elements to the new buffer, which may be very expensive.
> Say you feel this is not worth doing if the amount of reclaimed memory is
> less than 5 percent (i.e. the vector is at +95% capacity).
Ah, thanks. That makes sense.
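The 5% example above might be expressed as a should_shrink predicate along these lines (a sketch of the idea only; the actual GrowthPolicy signature in the proposed library may differ):

```cpp
#include <cstddef>

// Hypothetical shape of a GrowthPolicy's should_shrink hook: only pay for
// the reallocation-and-move when at least 5% of the buffer is reclaimed.
struct growth_policy_sketch {
    static bool should_shrink(std::size_t size, std::size_t capacity) {
        if (capacity <= size) return false;        // nothing to reclaim
        // Slack must be at least 5% of capacity: (capacity - size)
        // divided by capacity >= 1/20, kept in integer arithmetic.
        return (capacity - size) * 20 >= capacity;
    }
};
```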