From: Gabriel Dos Reis (Gabriel.Dos-Reis_at_[hidden])
Date: 2000-10-20 10:42:50
"David Abrahams" <abrahams_at_[hidden]> writes:
| From: "Gabriel Dos Reis" <Gabriel.Dos-Reis_at_[hidden]>
|
| > | ... What if you have a user-defined floating
| > | type that is denorm_indeterminate?
| >
| > You're stuck -- from the standard's point of view.
|
| That is what I'm claiming may be a defect.
Well, I disagree. The standard can't give meaning to something it
knows nothing about (the user-defined floating-point type).
Even if the user-defined floating-point type has denorm_absent, what
do you think min() should be?
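
To make the question concrete, here is a minimal sketch of what a
numeric_limits specialization for such a type might look like. The type
name MyFloat and its double-backed representation are hypothetical, not
anything from the thread; the point is only that with denorm_absent the
conventional answer is that min() is the smallest positive *normalized*
value, and denorm_min() collapses to the same thing:

```cpp
#include <limits>

// Hypothetical user-defined floating type (the thread names none);
// assumed here to wrap a double purely for illustration.
struct MyFloat {
    double value;
};

namespace std {
// Sketch of a specialization with no denormalized values.
template <>
struct numeric_limits<MyFloat> {
    static constexpr bool is_specialized = true;
    static constexpr float_denorm_style has_denorm = denorm_absent;

    // min(): smallest positive normalized value (borrowed from double
    // here, since MyFloat is just an illustrative wrapper).
    static MyFloat min() noexcept {
        return MyFloat{ numeric_limits<double>::min() };
    }

    // With denorm_absent there are no subnormals, so denorm_min()
    // conventionally returns the same value as min().
    static MyFloat denorm_min() noexcept {
        return min();
    }
};
} // namespace std
```

With denorm_indeterminate, by contrast, the implementation cannot say
whether subnormals exist, and no such clean convention for min() falls
out of the standard's wording.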
-- Gaby
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk