
From: Alexander Grund (alexander.grund_at_[hidden])
Date: 2020-03-02 15:57:15

> Could you please try and explain, why you think that signed is not a
> good type for a size (other than stating that "size cannot be
> negative"), what I am saying is that "valid-size cannot be negative"?
> You could also make the size a complex number, that would be an
> analogue. The imaginary component will have to be zero, but otherwise
> it would work just fine. The fact that that set is larger than the
> problem domain is IMO orthogonal to that.
Because if you use a signed type you have no compile-time guarantee that
the value is unsigned (using "type" and "value" to differentiate the
two). Same as with a pointer: It can be NULL. If you want an interface
where you want to guarantee at compile-time that the value passed over
an API is never NULL, you use e.g. a reference that can never be NULL
(or `not_null<T>`).
> Didn't you argue in the mail before that there will never be
> anything of
> size 2^32 and hence even not anything like 2^64? How could you
> overflow
> that then?
> If you are manipulating (subtracting, adding differences) pointers, I
> thought I wrote that. The pointers might be pointers into Virtual
> Memory and can have any value [1, 2^56].
Not sure I understand that. Can we agree that a 64-bit unsigned type is
big enough to store any size of any container, and hence no overflow is
possible?
> > Finding bugs related to this is hard, using int's you'll know
> right away.
> How? Only if you underflow. On unsigned you'll get a very large
> number
> if you go below zero, on signed you get a negative number. Both
> can be
> detected. But you talked about overflow. For unsigned you'll get a
> small
> number (that is wrong obviously but you COULD check) but for
> signed you
> get UB and hence can't even check for that.
> >> If you get an unsigned
> >> value there is no need to check for below zero, if you get a signed
> >> value you might. It is the same there is `not_null<T>` in GSL(?).
> >>
> >   But you would need to check if it wrapped
> No. If you call `obj.size()` you get an unsigned value that is a
> valid
> unsigned value. It cannot wrap when returning (conditions apply).
>  Tautology: "... an unsigned value that is a valid unsigned value",
> they always are, whether it's the right number is another question.
Ok: "you get an unsigned type holding a valid unsigned value". If the
size were signed you'd get a signed type that may or may not hold an
unsigned value. You'd have to check.
> If obj.size() returns a signed value you'll have to check for <0
> before
> using the value unless the API somehow promises to not return
> negative
> values. Encoding this in the type is the natural thing to do.
> No, it's not, unsigned types are good for bit-manipulation only,
> nothing else. Unsigned types don't follow the rules of mathematics,
> they are fundamentally flawed by nature. The fact that int's are
> limited in range is not a flaw, but an implementation detail. The
> mathematically correct way of doing things (on a Turing-machine) is to
> use signed big-ints.
I disagree. And as mentioned you can do things like `int difference =
int(obj.size()) - 1` anytime you want to do operations that are not
fully defined on unsigned types (as in: may result in values outside the
range), same as you can't do `int foo = sqrt(integer)` because you may
get an imaginary number (if sqrt could do that, but I think you get the
point).
> Adding is safe for unsigned, as you argued: The type is wide
> enough for all uses as a size of something. Subtraction might not but
> you can check first (`if(a <= size) return size - a; else throw `)
> You've now just precluded the use of noexcept (noexcept move f.e.,
> super-important in modern C++) and added a branch (cannot be removed
> by something clever, exactly because it is unsigned, the compiler can
> make no assumptions and the code has to go through the math) to your
> code.
> What is much better is to use signed int's combined with assert's.
How is that any different from `assert(a <= size); return size - a;`?

> all that because it upsets you that something that should not occur
> in the first place in correct code can occur iff one is writing code
> that now (as one observed it got negative) is known to have a bug.

Again: How is that different to using a signed type for "size"? You have
exactly the same potential for bugs. You always have to make sure you
stay in your valid domain, and a negative size is outside of that valid
domain. Hence you have to check somewhere or use control flow to make
sure this doesn't happen. So there is no difference between a signed
and an unsigned size in that regard.

> The use of unsigned is false security (actually no security) and
> serves nothing. In the end, you still need to write correct code (so
> the signed int's WON'T BE negative, there where they shouldn't be),
> but this practice makes your code less flexible, more verbose (the
> unavoidable casts add to that) and probably slower than using signed.
> All that because of this 'natural' way of looking at sizes.
It serves as a contract on the API level: "This value is unsigned.
Period." If the type was signed you'd need something else to enforce
that the value is unsigned. So yes you still need to write correct code
and passing a negative value to an API expecting an unsigned value is in
any case a bug.
> You want a guarantee to have a non-negative
> number. "unsigned" is that but it suffers from underflow going
> undetected. A `not_null<T>` like "wrapper" which otherwise behaves
> as T
> but guarantees the non-negativity would make the type suitable for
> representing an unsigned number in a signed type suitable for
> operations. Obviously if you subtract something from a
> `not_negative<int>` it will become a plain "int". Once you pass it
> to an
> API expecting a `not_negative<int>` the precondition will be checked.
> Got it now, yeah that would be great, but for now that would be
> run-time, no? And I guess, due to the halting problem, it can never be
> compile-time, unless it's a limited problem.

Surely at runtime. How else could you guarantee that your value isn't
negative after you subtract something from it? It can be compile-time if
you only add something to it and ignore overflow but you already do that
when using signed values anyway.

But all this arguing doesn't solve much: What piece of code would
actually benefit from having a signed size? And not only the part where
you request the size and use it, but also the part where you give that
size back to the object, so you'll need to ensure an unsigned value. And
yes, `for(int i=0; i<int(obj.size())-1; i++)` is a known example that
would allow getting rid of the cast if the size were signed. But again:
That is due to the operation used. `for(unsigned i=0; i+1<obj.size(); i++)`
is perfectly valid, assuming no overflow.

Boost list run by bdawes at, gregod at, cpdaniel at, john at