From: Maxim Shemanarev (mcseem_at_[hidden])
Date: 2002-08-20 22:49:31


This may be more of a philosophical question than a technical one, and if
it's off topic, please ignore it.

Many times, working on different libraries, particularly graphics ones, I
have felt the lack of a very simple operation. It's missing not only in C/C++
but in many other high-level languages. It is the operation of integer
scaling, namely,

y = x * a / b;

The problem is integer overflow. Suppose we have 32-bit integers on a
32-bit architecture. The semantics of this construction assumes using 32
bits everywhere, including for the intermediate result x*a, because any
compiler must follow the language rules, and so it *must* use only 32 bits
for x*a. After all, observing the overflow may be important to the
programmer. On the other hand, all the 32-bit architectures that I know of
store the result of "mul" in a 64-bit register (or register pair). It is
also the rule that the "div" instruction accepts a 64-bit dividend. This has
become a de facto standard in the hardware. All I need is:

mov R0, x
mul a ; 64-bit result is in R0+R1
div b ; divide the 64-bit R0+R1 by the 32-bit b and put the quotient into R0

But instead, any compiler will generate:

mov R0, x
mul a ; 64-bit result is in R0+R1
cdq R1, R0 ; sign-extend R0 into R1, losing the most significant bits :-(
div b

Again, any compiler *has* to do so because it's told to. OK, I could use
64-bit numbers throughout. Great, if it worked as fast as the example above.
In fact, it works about 10 times slower, because it uses internal functions
like __mul64 and __div64 (assuming we're talking about a 32-bit
architecture). It is a pity to lose this common hardware capability of
mul-div with a 64-bit intermediate result.
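
Just to be concrete, the 64-bit workaround I mean looks roughly like this
(the name scale is only for illustration, not an existing function):

inline int scale(int x, int a, int b)
{
    // The cast widens the intermediate product to 64 bits, but on a
    // 32-bit compiler the long long multiplication and division tend
    // to go through helpers like __mul64/__div64 instead of the
    // single mul/div pair shown above.
    return (int)((long long)x * a / b);
}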

But, as was said above, the semantics of the expression x * a / b cannot be
changed, and so the only way to solve this problem is to include an
appropriate operation (or function) in the C++ standard. I'm not sure about
the syntax or the name of this operation; maybe std::muldiv(x,a,b) or
std::scale(x,a,b). I don't expect, and don't hope for, this operation to be
added to the language itself, but at least it could go into the standard
library.
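
To pin down the semantics I have in mind, here is a rough sketch of the
interface (the name and signature are of course completely open):

// Hypothetical: returns x * a / b computed with an intermediate
// product twice as wide as the arguments, so x * a cannot overflow.
int muldiv(int x, int a, int b);

// Usage:
// y = muldiv(x, a, b); // same meaning as y = x * a / b, except that
//                      // the intermediate result is 64 bits wide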

Well, I could use inline assembler (or the Win32 API MulDiv), but when my
objective is to create a *portable* C++ library that is very painful. I want
this operation (or function) to be a standard one.
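
For reference, the workaround today looks roughly like this (only a sketch;
the name muldiv_portable is made up, MulDiv is the Win32 call):

#if defined(_WIN32)
#include <windows.h>
inline int muldiv_portable(int x, int a, int b)
{
    // MulDiv uses a 64-bit intermediate product internally
    // (note that it also rounds rather than truncates).
    return MulDiv(x, a, b);
}
#else
inline int muldiv_portable(int x, int a, int b)
{
    // Portable fallback: correct, but possibly much slower on
    // 32-bit targets.
    return (int)((long long)x * a / b);
}
#endif

Maintaining this kind of per-platform switch for such a trivial operation is
exactly the pain I mean.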

The argument that true 64-bit architectures are coming is not acceptable,
because as soon as they are available we will want a 64/128-bit mul-div
operation, and so on. This process is endless :-)

Using floats or doubles is slower than using integers, and
*it_will_always_be_slower*; besides, rounding issues arise.
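
Just to illustrate the rounding problem (a sketch, nothing more):

inline int scale_fp(int x, int a, int b)
{
    // A double has a 53-bit mantissa; when x * a does not fit into
    // 53 bits, the product is rounded before the division, so the
    // final result can differ from the exact integer answer.
    return (int)((double)x * a / b);
}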

What if a particular architecture doesn't have this capability? Well, it's
exactly the same issue as 64-bit integer operations on a 32-bit platform:
the compiler could just use appropriate internal functions. But if the
language or the standard library had this operation, it could exploit the
capabilities of the most popular hardware platforms (in fact, of all those
we actually use).

Maybe the most advanced compilers can optimize the expression y = (long
long)x * a / b into the three assembler instructions shown above. I honestly
don't think so, because the rule says that all operands of the expression
are to be converted to the longest and/or most precise type. I want an
integer scaling operation that guarantees an intermediate result twice as
wide as its arguments and that allows the best possible optimization for the
particular hardware.

In other words, I would propose including this simple operation in the
standard. I realize that it may look naive, but at least it would be much
more useful and practical than, for example, the stupid innovation of
allowing static const int members to be initialized inside the class :-)

If it's already done in the STL or Boost, then sorry; please just point me
to it.

McSeem
http://www.antigrain.com

