
From: Topher Cooper (topher_at_[hidden])
Date: 2006-06-01 11:13:44

I don't think anyone really objects to unsigned_integer, though some
question its utility.

The problem, it seems to me, is the idea that it inherits from integer
-- an almost classic example of misuse of inheritance. Pardon me if I
lost the thread somewhere, but that seems to be what is being proposed.

As others have said, inheritance should represent ISA both in
interface and conceptually. If unsigned_integer followed this
relationship then there would be no argument here -- the negative of
an unsigned_integer would be clearly and unambiguously defined -- it
just wouldn't happen to be an unsigned_integer. But that seems to
eliminate the point of having a separate unsigned_integer
type. Inheriting unsigned_integer from integer means that I can
never be sure that negating an integer (which might actually be an
unsigned_integer) is a reasonable thing to do. VERY BAD DESIGN.

Personally, I would vote for an unsigned_integer that does not inherit
from integer, though I think it is lower priority than an infinite
(more accurately, indefinite) precision (more accurately,
"magnitude") integer. I do think there is some use to an integral
type that guarantees it is never negative. I would drop negate and
include both an exception-throwing and a modular subtract.
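Something along these lines is what I have in mind -- a standalone
type, no inheritance, no negate. The names checked_subtract and
modular_subtract are placeholders, and unsigned long again stands in
for an indefinite-magnitude value:

#include <stdexcept>

// Hypothetical standalone unsigned_integer: no base class, no negate().
class unsigned_integer {
public:
    explicit unsigned_integer(unsigned long v = 0) : value_(v) {}

    // Throws if the result would be negative.
    unsigned_integer checked_subtract(unsigned_integer const& rhs) const {
        if (rhs.value_ > value_)
            throw std::underflow_error("unsigned_integer: negative result");
        return unsigned_integer(value_ - rhs.value_);
    }

    // Wraps instead of throwing. With built-in unsigned long the wrap
    // is modulo 2^N; an indefinite-magnitude version would need an
    // explicit modulus as a parameter.
    unsigned_integer modular_subtract(unsigned_integer const& rhs) const {
        return unsigned_integer(value_ - rhs.value_);
    }

    unsigned long value() const { return value_; }

private:
    unsigned long value_;
};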

Just don't have it inherit from integer -- it makes integer useless
except under tightly controlled circumstances (e.g., I could never
use it as part of an API).


At 08:13 AM 6/1/2006, you wrote:
>Users that don't like the unsigned_integer
>and want to use integer even though it will
>never become negative are free to do so.
>But users that want to make sure that
>variables never become negative,
>but still want those variables to have truly
>infinite precision, have the option to use
>unsigned_integer.
>Regards, Maarten.
>"Daniel Mitchell" <danmitchell_at_[hidden]> wrote in message
> > I know I'm entering this discussion a little late, so forgive me if this
> > has already been said, but I fail to see the point of having an
> > unsigned_integer. I understand that certain quantities are intrinsically
> > non-negative and therefore the idea of an unsigned_integer has aesthetic
> > value, but my experience with the built-in types is that unsigned
> > integers create more problems than they solve. (I'm talking about
> > subtraction and comparison to signed types.) An infinite precision signed
> > integer can represent all the same values as an unsigned integer, so from
> > a practical point of view, why bother with the unsigned type at all? It
> > seems to me that it just adds a lot of unnecessary complexity.
> >
> > D.
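For reference, the built-in pitfall Daniel mentions is easy to
reproduce. This is plain standard C++, nothing specific to the
proposal (the output comments assume a 32-bit unsigned int):

#include <iostream>

int main() {
    unsigned int u = 2;
    int s = -1;
    // The usual arithmetic conversions turn s into a huge unsigned
    // value, so the "obvious" comparison goes wrong:
    std::cout << (s < u) << '\n';  // prints 0 (false), not 1
    // Subtraction wraps instead of going negative:
    std::cout << u - 3u << '\n';   // prints 4294967295
    return 0;
}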
