From: Jeff Garland (jeff_at_[hidden])
Date: 2007-04-29 10:27:17
John Maddock wrote:
> Jeff Garland wrote:
>> 1) Error handling
>> This is one part of the implementation that I have issues with.
>> Given that
>> the library supports user-replaceable error handling, they can do pretty
>> much anything they want... so, in the end, the user can do what they want.
>> Anyway, this is a multi-part thing for me, so let's go into the details:
>> a) Macros
>> If I understand correctly, to get 'fully signaling' functions I have
>> to write
>> the following code:
>> #define BOOST_MATH_THROW_ON_DOMAIN_ERROR
>> #define BOOST_MATH_THROW_ON_OVERFLOW_ERROR
>> #define BOOST_MATH_THROW_ON_UNDERFLOW_ERROR
>> #define BOOST_MATH_THROW_ON_DENORM_ERROR
>> #define BOOST_MATH_THROW_ON_LOGIC_ERROR
>> #include <boost/math/...whatever..>
>> This isn't 'pretty', to say the least. Shouldn't there at least be a
>> macro to convert from 'NAN-based' exception handling to exceptions?
>> #define BOOST_MATH_THROW_ON_ALL_ERROR
>> b) I suggest that exceptions-on-all be the default, with no macro needed.
>> Of course, numerics experts might have a different feeling here, but many
>> C++ programmers now expect exceptions by default when something goes
>> wrong. I believe it simplifies library usage, and those who really want
>> C-style programming with all kinds of 'errno' checks are free to set the
>> macro (that's one macro) back the other way.
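For what it's worth, the umbrella macro suggested in (a) could be a thin preprocessor shim; a rough sketch (BOOST_MATH_THROW_ON_ALL_ERROR is the proposed name, not an existing one):

```cpp
// Hypothetical convenience shim: one switch enables all five throwing
// policies. The individual BOOST_MATH_THROW_ON_* names are the ones the
// library documents; BOOST_MATH_THROW_ON_ALL_ERROR is only a proposal.
#ifdef BOOST_MATH_THROW_ON_ALL_ERROR
#  define BOOST_MATH_THROW_ON_DOMAIN_ERROR
#  define BOOST_MATH_THROW_ON_OVERFLOW_ERROR
#  define BOOST_MATH_THROW_ON_UNDERFLOW_ERROR
#  define BOOST_MATH_THROW_ON_DENORM_ERROR
#  define BOOST_MATH_THROW_ON_LOGIC_ERROR
#endif
```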
> Hmmm, I'm not sure about this, underflow and denormalised results aren't
> necessarily errors that need an exception.
Then are they really 'errors'? With underflow I would imagine that it would
only matter if the result was impacted. As for denormalised results, I'm not
sure why
I'd need to know -- likely this is my ignorance of numeric processing. But
still, maybe there's some standard set of policies here that could be grouped
logically into 'signalling' and 'non-signalling' error policies?
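To make the idea concrete, here's a from-scratch sketch of what grouped policies might look like (all names are hypothetical -- this is not Boost.Math's actual policy interface):

```cpp
#include <cmath>
#include <limits>
#include <stdexcept>
#include <string>

// Hypothetical 'signalling' policy: every error class throws.
struct signalling_policy {
    static double on_domain_error(const std::string& msg) {
        throw std::domain_error(msg);
    }
};

// Hypothetical 'non-signalling' policy: errors come back as quiet NaNs.
struct non_signalling_policy {
    static double on_domain_error(const std::string&) {
        return std::numeric_limits<double>::quiet_NaN();
    }
};

// A function template picks up whichever behaviour the caller asked for.
template <class Policy>
double checked_log(double x) {
    if (x <= 0.0)
        return Policy::on_domain_error("log of a non-positive argument");
    return std::log(x);
}
```

So checked_log<signalling_policy>(-1.0) throws std::domain_error, while checked_log<non_signalling_policy>(-1.0) quietly returns NaN -- one template, two error regimes, no macros.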
> There's another issue here: of all these error conditions domain errors are
> probably the only ones that we can guarantee to detect and signal with 100%
> certainty. I've tried really hard to ensure the others get detected as
> well, but it's almost certainly impossible to be 100% sure that every
> possible case is handled correctly. I must document this better as well.
That's a QoI (quality of implementation) issue, to be sure.
>> d) Handling 'Not Implemented' as 'domain error'
>> Handling of Functions that are Not Implemented
>> Functions that are not implemented for any reason, usually because
>> they are not defined (or we were unable to find a definition), are
>> handled as domain errors.
>> I guess I'm wondering why the not-implemented cases aren't compile
>> errors instead of runtime errors?
> Hmmm, probably an unfortunate choice of words: there are some statistical
> properties for some distributions that aren't defined: for example the
> Cauchy distribution doesn't have a mean. You can still compile code that
> asks for the mean of the Cauchy, but you will get a domain error if it's
> called. This leeway is actually useful: I used it to create an
> "any_distribution" class that virtualises away the actual type of the
> distribution so that the distribution type can be set at runtime. Without
> the "every property compiles for every distribution" rule this would be very
> hard to do. In any case, it's a legitimate question to ask what the mean of
> any distribution is, you just can't always expect a finite answer :-)
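(For readers following along, the type-erasure John mentions might look roughly like this -- a from-scratch sketch, not the actual any_distribution code:)

```cpp
#include <memory>
#include <stdexcept>

// Rough sketch of a type-erased distribution wrapper in the spirit of the
// "any_distribution" described above. Names are illustrative only. It relies
// on mean() *compiling* for every distribution, even ones where the call can
// only fail at runtime.
class any_distribution {
    struct concept_t {
        virtual ~concept_t() {}
        virtual double mean() const = 0;
    };
    template <class Dist>
    struct model_t : concept_t {
        explicit model_t(const Dist& dist) : d(dist) {}
        double mean() const { return d.mean(); }
        Dist d;
    };
    std::shared_ptr<const concept_t> self_;
public:
    template <class Dist>
    explicit any_distribution(const Dist& d) : self_(new model_t<Dist>(d)) {}
    double mean() const { return self_->mean(); }
};

// Two toy distributions: one with a mean, one Cauchy-like one without.
struct normal_like {
    double mean() const { return 0.0; }
};
struct cauchy_like {
    double mean() const { throw std::domain_error("Cauchy has no mean"); }
};
```

The runtime-selected distribution then answers every query; asking a Cauchy-like distribution for its mean simply surfaces as the error policy's behaviour at the point of call.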
I guess I'm still uncomfortable here. Cauchy doesn't have a mean and that
isn't going to change at runtime -- it's fixed for all time. If I call the
function I always get an error -- really I should never call the function.
So, at a minimum, this should be a unique and different error from 'domain
error' which tells me I've called a valid function with parameters that are
'out of range'.
However, if a distribution doesn't define a particular function, why not
exclude it using a trait, enable-if, or other compile-time logic to avoid
making the call in the first place? I think you can still achieve your
wrapper without ever calling the function at runtime and then handling an
error. I'm guessing what's really going on is that there are multiple distribution
concepts that need to be defined with different groups of valid operations.
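As a sketch of that compile-time alternative (trait and function names are made up for illustration):

```cpp
#include <type_traits>

// Hypothetical per-distribution trait: does this distribution have a mean?
struct normal_like {};   // has a well-defined mean
struct cauchy_like {};   // mean is undefined

template <class Dist> struct has_mean : std::true_type {};
template <>           struct has_mean<cauchy_like> : std::false_type {};

// mean() only participates in overload resolution when the trait says so,
// so mean(cauchy_like()) is a compile error, not a runtime domain error.
template <class Dist>
typename std::enable_if<has_mean<Dist>::value, double>::type
mean(const Dist&) {
    return 0.0;  // toy value, stands in for the real computation
}
```

mean(normal_like()) compiles and runs; mean(cauchy_like()) fails to compile -- the "never call it in the first place" behaviour argued for above, at the cost of making a runtime-polymorphic wrapper harder to write.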
>> 2) NTL patch (docs p249)
>> In order to do so you will need to apply the following patch to
>> libs/math/tools/ntl.diff. This patch adds trivial converting constructors
>> to NTL::RR and NTL::quad_float, and forces conversions to RR to proceed
>> via long double rather than double. The latter change [...]
>> Sounds kinda inconvenient. Isn't there a way this could be done without
>> actually changing the NTL library?
> I'd love to. Probably by writing our own wrapper around the library I
> guess, but it was quicker/easier to patch it. Unfortunately the library
> seems not to be actively maintained at present :-(
Isn't NTL mostly a wrapper around GMP? Anyway, Arseny is building a BigInt
this summer. One implementation strategy he's using is to provide one
implementation as a wrapper over GMP. Might be fairly easy to do something
similar with a floating-point type, given that you've defined the concepts well
-- SoC 2008 ;-)
> Many thanks for the positive review!
Sure, but the real thanks go to you guys for the thousands of hours of work on
the library.