[numeric conversion] behavior on std::numeric_limits<double>::max() + eps with numeric_cast

Hello,

at this time I play with numeric_cast and struggle with this.

First:

    double z = std::numeric_limits<double>::max();
    z += 2*std::numeric_limits<double>::epsilon(); // doesn't matter if 1*eps
    std::cout << z << std::endl;
    std::cout << boost::numeric_cast<double>(z) << '\n';

I would expect that an exception is thrown, which isn't the case; the output is

    1.79769e+308
    1.79769e+308

And second: if I write my own UDT as shown in the tutorial, like:

    namespace my {

    enum range_check_result
    {
        cNonFinite   = 0,
        cInRange     = 1,
        cNegOverflow = 2,
        cPosOverflow = 3
    };

    //! Define a custom range checker
    template<typename Traits, typename OverFlowHandler>
    struct range_checker
    {
        typedef typename Traits::argument_type argument_type;
        typedef typename Traits::source_type   S;
        typedef typename Traits::target_type   T;

        //! Check range of integral types.
        static range_check_result out_of_range(argument_type value)
        {
            if(!std::isfinite(value.to_builtin())) // Neither infinity nor NaN.
                return cNonFinite;
            else if(value > boost::numeric::bounds<T>::highest())
                return cPosOverflow;
            else if(value < boost::numeric::bounds<T>::lowest())
                return cNegOverflow;
            else
                return cInRange;
        }

        static void validate_range(argument_type value)
        {
            BOOST_STATIC_ASSERT(std::numeric_limits<T>::is_bounded);
            OverFlowHandler()(out_of_range(value));
        }
    };

    ....

    struct bad_numeric_cast : virtual boost::exception, virtual std::bad_cast { ... };

    struct rounding_error : virtual boost::exception, std::domain_error
    {
        rounding_error() : std::domain_error("NaN or Infinity") {} // XXX Better solution?

        virtual const char* what() const throw()
        {
            return "bad numeric conversion: can not be represented in the target integer type";
        }
    };

    struct negative_overflow : virtual bad_numeric_cast { .... };

    struct positive_overflow : virtual bad_numeric_cast { .... };

    struct overflow_handler
    {
        void operator()(range_check_result result)
        {
            if(result == cNonFinite)
                BOOST_THROW_EXCEPTION(rounding_error());
            else if(result == cNegOverflow)
                BOOST_THROW_EXCEPTION(negative_overflow());
            else if(result == cPosOverflow)
                BOOST_THROW_EXCEPTION(positive_overflow());
        }
    };

it doesn't change the result - is this the right way?

Thanks,
Olaf

On 6/9/2014 02:53 AM, Olaf Peter wrote:
std::cout << boost::numeric_cast<double>(z) << '\n';
I would expect that an exception is thrown, which isn't the case; the output is 1.79769e+308 1.79769e+308
In the case of conversion to the same type, I believe a trivial fall-through is done which just returns the value, so this is the expected behavior.
and as second: if I write my own UDT as shown in the tutorial, like:
it doesn't change the result - is this the right way?
It's hard for me to tell just from the code snippets you've provided. Could you send me a small test which shows the problem you're having? Thanks, Brandon Kohn

Am 09.06.2014 14:06, schrieb Brandon Kohn:
On 6/9/2014 02:53 AM, Olaf Peter wrote:
std::cout << boost::numeric_cast<double>(z) << '\n';
I would expect that an exception is thrown, which isn't the case; the output is 1.79769e+308 1.79769e+308
In the case of conversion to the same type I believe a trivial fall through is done which just returns the value, so this is the expected behavior.
OK.
and as second: if I write my own UDT as shown in the tutorial, like:
it doesn't change the result - is this the right way?
It's hard for me to tell just from the code snippets you've provided. Could you send me a small test which shows the problem you're having?
attached my 'playground'; it's basically the UDT example, simplified and extended with math_round capabilities. The range_checker also checks for finite values, but I'm not sure if this is really required (I would say yes, since operator< and operator> wouldn't work on NaN or Inf). Line 486 doesn't compile - the vice versa conversion doesn't work for some reason.

The goal is to check the rounding near the type-specific min/max boundaries, since math_round uses value +/- 0.5 and floor/ceil. Maybe I should use float and long long int?

Is there a way to get a stack trace using Boost to find the code part which triggers the exception in the source? I mean, here it would be e.g. line 478 if x were greater than MAX_INT.

Is it useful to add Boost.Exception info about the calling args in overflow_handler? Is there a clever way, since the overflow_handler isn't templated, so I have no info about the type and value to be converted?

Where is the error for the vice versa cast at line 486?

Thanks,
Olaf

On 6/9/2014 14:13 PM, Olaf Peter wrote:
attached my 'playground'; it's basically the UDT example, simplified and extended with math_round capabilities. The range_checker also checks for finite values, but I'm not sure if this is really required (I would say yes, since operator< and operator> wouldn't work on NaN or Inf). Line 486 doesn't compile - the vice versa conversion doesn't work for some reason.
A couple of points:

1) The range checker you've implemented expects that value is always of type core::intrinsic_type<T>, as it calls value.to_builtin(). This fails when the 'Source' type/argument_type is a different type (like int).

2) On the other hand, when Target is of type core::intrinsic_type<T> and Source is a fundamental type, certain comparison operations fail because the fundamental type doesn't know how to compare with core::intrinsic_type<T>.

In order to get this compiling I had to split the range checker into two specializations, like so:

    //! Define a custom range checker
    template<typename Traits, typename OverFlowHandler>
    struct range_checker;

    //! One for checking intrinsic_type against some other type (fundamental)
    template<typename T, typename SourceT, typename OverFlowHandler>
    struct range_checker
    <
        boost::numeric::conversion_traits<core::intrinsic_type<T>, SourceT>
      , OverFlowHandler
    >;

    //! And another one for checking other types against intrinsic_type
    template<typename T, typename TargetT, typename OverFlowHandler>
    struct range_checker
    <
        boost::numeric::conversion_traits<TargetT, core::intrinsic_type<T>>
      , OverFlowHandler
    >;

In each of these specializations I wrote logic specific to the conversion direction. I also had to add the unary minus operator to core::intrinsic_type. I've attached the file I modified for my tests (which were done in Visual Studio 2008). I should note that in each of these specializations care must be taken that bounds are properly checked given the context of the conversion direction. In my test I essentially converted the fundamental type to an instance of intrinsic_type (via the template constructor) and then used that to compare with the bounds of the intrinsic_type<T>. In a real-world example this is possibly nonsense.
Is there a way to get a stack trace using boost to find the code part which triggers the exception in the source? I mean here it would be e.g. line 478 if x would be greater MAX_INT.
I would just use whatever debugger you have handy.
Is it useful to add Boost.Exception info about the calling args in overflow_handler? Is there a clever way, since the overflow_handler isn't templated, so I have no info about the type and value to be converted?
Where is the error for the vice versa cast at line 486?
The overflow handler was designed to communicate positive and negative overflow. It assumes the range checker is working properly for all inputs. So I think the best way to develop these is to simply write tests for all the types of inputs you expect your range_checker to see and make sure it gives you the correct outputs (i.e. I would test these directly on the range_checker<ConversionTraits, OverFlowHandler>::out_of_range interface). Hope this helps, Brandon

Am 10.06.2014 18:28, schrieb Brandon Kohn:
On 6/9/2014 14:13 PM, Olaf Peter wrote:
attached my 'playground'; it's basically the UDT example, simplified and extended with math_round capabilities. The range_checker also checks for finite values, but I'm not sure if this is really required (I would say yes, since operator< and operator> wouldn't work on NaN or Inf). Line 486 doesn't compile - the vice versa conversion doesn't work for some reason.
A couple of points:
1) The range checker you've implemented expects that value is always of type core::intrinsic_type<T>, as it calls value.to_builtin(). This fails when the 'Source' type/argument_type is a different type (like int).
2) On the other hand, when Target is of type core::intrinsic_type<T> and Source is a fundamental type, certain comparison operations fail because the fundamental type doesn't know how to compare with core::intrinsic_type<T>.
In order to get this compiling I had to split the range checker into two specializations like so:
    //! Define a custom range checker
    template<typename Traits, typename OverFlowHandler>
    struct range_checker;

    //! One for checking intrinsic_type against some other type (fundamental)
    template<typename T, typename SourceT, typename OverFlowHandler>
    struct range_checker
    <
        boost::numeric::conversion_traits<core::intrinsic_type<T>, SourceT>
      , OverFlowHandler
    >;

    //! And another one for checking other types against intrinsic_type
    template<typename T, typename TargetT, typename OverFlowHandler>
    struct range_checker
    <
        boost::numeric::conversion_traits<TargetT, core::intrinsic_type<T>>
      , OverFlowHandler
    >;
thank you for pointing this out.
I should note that in each of these specializations care must be taken that bounds are properly checked given the context of the conversion direction. In my test I essentially converted the fundamental type to an instance of intrinsic_type (via the template constructor) and then used that to compare with the bounds of the intrinsic_type<T>. In a real world example this is possibly nonsense.
I thought this was what the checker already does, including the tests for NaN and Inf. Could you explain this?

Thank you,
Olaf

On 6/11/2014 07:06 AM, Olaf Peter wrote:
Am 10.06.2014 18:28, schrieb Brandon Kohn:
I should note that in each of these specializations care must be taken that bounds are properly checked given the context of the conversion direction. In my test I essentially converted the fundamental type to an instance of intrinsic_type (via the template constructor) and then used that to compare with the bounds of the intrinsic_type<T>. In a real world example this is possibly nonsense.
I thought this was what the checker already does, including the tests for NaN and Inf. Could you explain this?
What I mean is that the range checker checks whether the values in the source instance can be represented in the target instance. So one must ensure that for a given conversion (like my_type -> int) a range_checker type exists which can perform that check. Your original checker did cover the types of checks needed; it just neglected to deal with the interfaces on the respective types (source/target). Apologies if I'm stating the obvious. Let me know if you have any more issues. I'm happy to help. Cheers, Brandon

Am 11.06.2014 14:51, schrieb Brandon Kohn:
On 6/11/2014 07:06 AM, Olaf Peter wrote:
Am 10.06.2014 18:28, schrieb Brandon Kohn:
I should note that in each of these specializations care must be taken that bounds are properly checked given the context of the conversion direction. In my test I essentially converted the fundamental type to an instance of intrinsic_type (via the template constructor) and then used that to compare with the bounds of the intrinsic_type<T>. In a real-world example this is possibly nonsense.

I thought this was what the checker already does, including the tests for NaN and Inf. Could you explain this?
What I mean is that the range checker checks whether the values in the source instance can be represented in the target instance. So one must ensure that for a given conversion (like my_type -> int) a range_checker type exists which can perform that check. Your original checker did cover the types of checks needed; it just neglected to deal with the interfaces on the respective types (source/target). Apologies if I'm stating the obvious.
Let me know if you have any more issues. I'm happy to help.
Thank you! Is there a way to override the default implementation for e.g. double? The attached example fails to compile due to redefinition of 'struct boost::numeric::numeric_cast_traits<double, double>'.

Thanks,
Olaf

Am 15.06.2014 20:15, schrieb Olaf Peter:
Am 11.06.2014 14:51, schrieb Brandon Kohn:
On 6/11/2014 07:06 AM, Olaf Peter wrote:
Am 10.06.2014 18:28, schrieb Brandon Kohn:
I should note that in each of these specializations care must be taken that bounds are properly checked given the context of the conversion direction. In my test I essentially converted the fundamental type to an instance of intrinsic_type (via the template constructor) and then used that to compare with the bounds of the intrinsic_type<T>. In a real-world example this is possibly nonsense.

I thought this was what the checker already does, including the tests for NaN and Inf. Could you explain this?
What I mean is that the range checker checks whether the values in the source instance can be represented in the target instance. So one must ensure that for a given conversion (like my_type -> int) a range_checker type exists which can perform that check. Your original checker did cover the types of checks needed; it just neglected to deal with the interfaces on the respective types (source/target). Apologies if I'm stating the obvious.
Let me know if you have any more issues. I'm happy to help.
Thank you!
Is there a way to override the default implementation for e.g. double? The attached example fails to compile due to redefinition of 'struct boost::numeric::numeric_cast_traits<double, double>'
Probably I made it too complicated before; but even using make_converter_from doesn't compile :( Is the approach correct and only my typedef wrong?

BTW, why isn't there a RoundMath<> policy here? A RoundEven<> policy does exist.

Thanks,
Olaf

to be clear: I want to round a double to a double (or float); it would be possible to convert the double to e.g. int64 and cast it back to double, but then all (range) checks are applied (even though I do this here as well).

Thanks,
Olaf

On 6/15/2014 14:15 PM, Olaf Peter wrote:
Is there a way to override the default implementation for e.g. double? The attached example fails to compile due to redefinition of 'struct boost::numeric::numeric_cast_traits<double, double>'
If you define BOOST_NUMERIC_CONVERSION_RELAX_BUILT_IN_CAST_TRAITS then you can define the conversions for fundamental types. Even so, converting from double to double won't affect the value. There is a layer that checks for this and implements a trivial conversion that does nothing but return a const T&.

On 6/15/2014 14:56 PM, Olaf Peter wrote:
Probably I made it too complicated before; but even using make_converter_from doesn't compile :( Is the approach correct and only my typedef wrong?
BTW, why isn't there a RoundMath<> policy here? A RoundEven<> policy does exist.
What type of round does RoundMath do?

Am 16.06.2014 14:26, schrieb Brandon Kohn:
On 6/15/2014 14:15 PM, Olaf Peter wrote:
Is there a way to override the default implementation for e.g. double? The attached example fails to compile due to redefinition of 'struct boost::numeric::numeric_cast_traits<double, double>'
If you define BOOST_NUMERIC_CONVERSION_RELAX_BUILT_IN_CAST_TRAITS then you can define the conversions for fundamental types. Even so, converting from double to double won't affect the value. There is a layer that checks for this and implements a trivial conversion that does nothing but return a const T&.
So I can't use it like boost.math.round (<boost/math/special_functions/round.hpp>) with policies for doubles, can I?
On 6/15/2014 14:56 PM, Olaf Peter wrote:
Probably I made it too complicated before; but even using make_converter_from doesn't compile :( Is the approach correct and only my typedef wrong?
BTW, why isn't there a RoundMath<> policy here? A RoundEven<> policy does exist.
What type of round does RoundMath do?
simply if (x < 0) ceil(x - 0.5) else floor(x + 0.5)

Thanks,
Olaf

On 6/16/2014 09:27 AM, Olaf Peter wrote:
So I can't use it like boost.math.round (<boost/math/special_functions/round.hpp>) with policies for doubles, can I?

I don't think it (numeric_cast) should be used to round doubles. For that I would just call your rounder directly.
I wouldn't recommend this, but if you really wanted to use numeric_cast due to other reasons, you could try specializing it for the double-to-double case:

    namespace boost {
        template <>
        double numeric_cast<double, double>(double value)
        {
            return boost::math::round(value);
        }
    }

Cheers,
Brandon

Am 16.06.2014 16:24, schrieb Brandon Kohn:
On 6/16/2014 09:27 AM, Olaf Peter wrote:
So I can't use it like boost.math.round (<boost/math/special_functions/round.hpp>) with policies for doubles, can I?

I don't think it (numeric_cast) should be used to round doubles. For that I would just call your rounder directly.
I wouldn't recommend this, but if you really wanted to use numeric_cast due to other reasons, you could try specializing it for the double to double case:
    namespace boost {
        template <>
        double numeric_cast<double, double>(double value)
        {
            return boost::math::round(value);
        }
    }
My idea was to write a mathematical value class which can operate on integers and floats using different rounding algorithms (hence numeric_cast<int>(double from_sqrt) etc.) and to switch this type statically to see later which better fits my needs. These values are initialized from Boost.Units, where doubles are preferred as the underlying type. Probably I have to specialize this class for integers and floats.

Thanks,
Olaf

Is there a way to override the default implementation for e.g. double? The attached example fails to compile due to redefinition of 'struct boost::numeric::numeric_cast_traits<double, double>'
If you define BOOST_NUMERIC_CONVERSION_RELAX_BUILT_IN_CAST_TRAITS then you can define the conversions for fundamental types. Even so, converting from double to double won't affect the value. There is a layer that checks for this and implements a trivial conversion that does nothing but return a const T&.
So I can't use it like boost.math.round (<boost/math/special_functions/round.hpp>) with policies for doubles, can I?
Why would you want to? It's a type conversion utility, not a rounding one; if you want to do rounding, why not use the Boost.Math functions directly?

John.

2014-06-09 10:53 GMT+04:00 Olaf Peter <ope-devel@gmx.de>:
Hello,
at this time I play with numeric_cast and struggle with this:
first:

    double z = std::numeric_limits<double>::max();
    z += 2*std::numeric_limits<double>::epsilon(); // doesn't matter if 1*eps
    std::cout << z << std::endl;
    std::cout << boost::numeric_cast<double>(z) << '\n';
I would expect that an exception is thrown, which isn't the case; the output is 1.79769e+308 1.79769e+308
Machine epsilon gives an upper bound on the relative error due to rounding in floating-point arithmetic. In other words, the absolute error for the max value would be:

    double non_rel = std::numeric_limits<double>::max() * std::numeric_limits<double>::epsilon();

2*std::numeric_limits<double>::epsilon() is much less than non_rel, and adding it won't cause overflow: the number will be rounded to std::numeric_limits<double>::max().

--
Best regards,
Antony Polukhin

Am 09.06.2014 14:33, schrieb Antony Polukhin:
Hello,
at this time I play with numeric_cast and struggle with this:
first:

    double z = std::numeric_limits<double>::max();
    z += 2*std::numeric_limits<double>::epsilon(); // doesn't matter if 1*eps
    std::cout << z << std::endl;
    std::cout << boost::numeric_cast<double>(z) << '\n';
I would expect that an exception is thrown, which isn't the case; the output is 1.79769e+308 1.79769e+308
Machine epsilon gives an upper bound on the relative error due to rounding in floating-point arithmetic. In other words, the absolute error for the max value would be:

    double non_rel = std::numeric_limits<double>::max() * std::numeric_limits<double>::epsilon();

2*std::numeric_limits<double>::epsilon() is much less than non_rel, and adding it won't cause overflow: the number will be rounded to std::numeric_limits<double>::max().
thanks for the answer, but

    eps     = 2.22045e-16
    max*eps = 3.99168e+292

on my machine; non_rel is a huge value ... Is this right? I thought eps was the absolute error/uncertainty of the representation of real values. But probably I have to read up on FP ...

Thanks,
Olaf
participants (4)
- Antony Polukhin
- Brandon Kohn
- John Maddock
- Olaf Peter