
From: degski (degski_at_[hidden])
Date: 2020-02-28 15:08:42

On Thu, 27 Feb 2020 at 08:57, Krystian Stasiowski via Boost <
boost_at_[hidden]> wrote:

> Any benefit that would be gained from this would be marginal, and the
> interface would suffer from

 Yes, maybe (did you measure?), but it does away with the 'comparing
signed to unsigned' warnings, and the UB on signed overflow allows for
optimizations. BUT, obviously, if you write everything using std::size_t
you won't see that, and casting won't do it either. Suppose you need to
store some (or many) of those indexes (of type std::size_t): the better
cache-locality (and lower memory use) of a smaller signed type will affect
your performance. There are certainly more use-cases. std::span almost had
an ssize() member, but in the end (I believe) holding on to the past seemed
more important. If we now start implementing classes as I propose (with a
SizeType template parameter), we might over time get to a stage where more
devs are comfortable with int as a size type.

I have never in my life seen a vector of size 2^32; even an array of chars
that size is huge. The STL's choice of std::size_t is entirely arbitrary
(it does not address the problem in principle, just in (nearly all
imaginable) cases in practice) and is (as usual with the STL) severe
overshoot of solving the problem. So using ints is not worse than using
std::size_t. With virtual memory (where one does have to deal with
std::size_t-sized ranges) one would/could use offset pointers (they are
built into VC for this purpose, the so-called based pointers, with the
keyword '__based'; undoubtedly gcc/clang support the same thing (maybe
under a different keyword, I don't know), and clang-cl certainly supports
it), and then that problem too can be reduced to an int problem. And there
is always std::int64_t, with a max of 2^63 - 1, which is so large we can
easily say that such arrays (> 2^63 elements) will never be needed. With
std::size_t we easily introduce bugs (and bad ones for that matter,
because the wrapping of unsigneds might go unnoticed (luckily there is a
warning, but nothing stops you from ignoring it)).

If this does not convince you, let me throw in a fact. The number of
sand grains on earth is estimated to be around 7.5 * 10^18, which is
'0110100000010101010110100100001101100111011011100000000000000000' in
binary; the size of a maximal array of shorts is larger than that number
(in bytes). So let's turn all the sand on earth into one giant Optane chip
(just look friendly at Intel, they already manage to do 32GB, and
according to the STL, getting to 2^64 is a doddle) and get calculating
with the STL's std::size_t; for a similar array of ints we'll just ship in
the silicon from the moon and beyond. (Yes, I do know that you won't need
a grain of sand per byte, but it raises the question what an ordinary
program needs std::size_t for. Such large numbers are mostly good for
counting stars and counting sand grains, but one would do that with
doubles anyway, because they're estimates, and hence no std::size_t is
required.) To summarize: 'it makes no sense' in my view.

PS1: you'll need std::size_t for labeling every sand grain individually
(which would be nice: we'd know exactly which sand grain we mean. We'll
need a lot of ink to write it on them, though.)
PS2: I should have given a sarcasm warning; yes, I'm p-eed off. The answer
here (boost-dev-list) to any request is always a big no-way-jose, even
when it concerns something as easily implementable as the above.

"We value your privacy, click here!" Sod off! - degski
"Anyone who believes that exponential growth can go on forever in a finite
world is either a madman or an economist" - Kenneth E. Boulding
"Growth for the sake of growth is the ideology of the cancer cell" - Edward
P. Abbey

Boost list run by bdawes at, gregod at, cpdaniel at, john at