# Boost :

From: Victor A. Wagner, Jr. (vawjr_at_[hidden])
Date: 2002-08-22 10:57:30

At Thursday 2002/08/22 01:23, you wrote:
>Maxim Shemanarev writes:
> > > If the committee is seriously considering ways to handle stuff, how
> > > about dealing with the two results returned by almost every integer
> > > divide instruction... quotient and remainder.
> >
> > Right. That's another thing. But in this case, if you write x=a/b; y=a%b; a
> > compiler at least *can try* to optimize this. In the case of x*a/b it cannot,
> > because it's not allowed to!
>
>Everyone is forgetting that integer overflow on signed types results in
>undefined behaviour.

This is an artifact of the language definition (a poor one, IMO). When _I_
learned multiplication (way back when I was 6 or 7) I noticed pretty quickly
that if I multiplied an n-digit number by an m-digit number I _might_ get an
(m+n)-digit number. The first several computers I worked on always produced
a "two word" product from two "one word" operands. I was somewhat
disappointed to learn that the HLLs all seemed to ignore this "basic fact"
of mathematics. <shrug> It's what we're stuck with, though I still consider
it mostly intellectual laziness that led to it. Especially with strongly
typed languages now.

>Since the only case in which you need the intermediate
>result to be larger than int is where (x*a) overflows, this expression
>yields undefined behaviour in such cases. The upshot of this is that a
>compiler is permitted (as an extension) to _define_ such behaviour, and ensure
>that x*a/b always yields the mathematically correct result if the final result
>fits in an int. Obviously, if the final result doesn't fit in an int, you
>still have overflow (e.g. INT_MAX*INT_MAX/1), but this actually allows
>implementations to define the result of a*b/c as the mathematically