Boost Users :
From: King, William D (William.D.King_at_[hidden])
Date: 2004-02-24 16:38:12
Hello,
I am seeing a bug that the regression tests miss for numeric_cast<unsigned
long> with gcc 3.3.2. Specifically, the following code is throwing an
exception:
double small_value = 1.25;
unsigned long l=0;
l = numeric_cast<unsigned long>(small_value); // throws exception!
unsigned char c=0;
c = numeric_cast<unsigned char>(small_value); // throws exception!
I am under the impression that numeric_cast is supposed to accept losses of
precision (by truncating the fractional part), and to throw exceptions
only on a range or sign error. Is this so?
This error happens only when I try to cast a value with a non-zero fraction;
i.e. 1.25 breaks but 1.0 is fine. It only occurs for unsigned types --
signed int, char, and long appear to work correctly. I have found the same
incorrect behavior with gcc 3.3.2 on both Solaris and Linux. Using SunPro on
Solaris, the same code runs correctly.
The regression tests don't catch this, since they only test numeric_cast
with integral values 1 and LONG_MAX.
Any ideas? It looks like a compiler problem to me.
Davis King
william.d.king_at_[hidden]
Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net