
Boost : 
From: Paul A Bristow (boost_at_[hidden])
Date: 2003-06-20 10:14:52
It may be helpful to those unfamiliar with the Boost Interval library
to see some exactly representable values of pi
(from test_pi_interval.cpp):
// Float: 24-bit significand, 32-bit float //////////////////////////
// static const float pi_f_l = 13176794.0f / (1 << 22);
// static const float pi_f_u = 13176795.0f / (1 << 22);
// Exactly representable values calculated using NTL.
static const float pi_f_l = 3.141592502593994140625F;
static const float pi_f_u = 3.1415927410125732421875F;
cout << "pi_f_l = " << pi_f_l << endl; // pi_f_l = 3.1415925
cout << "pi_f_u = " << pi_f_u << endl; // pi_f_u = 3.14159274
// Double: 53-bit significand, 64-bit double ////////////////////////
cout.precision(17); // significant digits10
// static const double pi_d_l = 3537118876014221.0/(1 << 51);
// compiler chokes :( (1 << 51) overflows a 32-bit int (divide by
// zero!) so a cunning trick is needed:
static const double pi_d_l = (3373259426.0 + 273688.0 / (1 << 21)) / (1 << 30);
// Or the NTL-calculated exact representation:
static const double pi_d_l = 3.141592653589793115997963468544185161590576171875;
static const double pi_d_u = 3.141592653589793560087173318606801331043243408203125;
cout << "pi_d_l = " << pi_d_l << endl; // pi_d_l = 3.1415926535897931
cout << "pi_d_u = " << pi_d_u << endl; // pi_d_u = 3.141592653589794
// Long double: 64-bit significand, 80-bit long double //////////////
cout.precision(21); // significant digits10
// static const long double pi_l_l = 7244019458077122842.0L/(1 << 62);
// static const long double pi_l_u = 7244019458077122843.0L/(1 << 62);
// Compiler will choke! or an even more cunning trick will be needed.
static const long double pi_l_l = 3.14159265358979323829596852490908531763125210004425048828125L;
static const long double pi_l_u = 3.141592653589793238729649393903287091234233203524017333984375L;
cout << "pi_l_l = " << pi_l_l << endl; // 3.1415926535897931
cout << "pi_l_u = " << pi_l_u << endl; // 3.1415926535897931
and there are 128-bit values too, but I won't bore you further :)
 [*] It is not even true. Due to "double rounding" troubles,
 using a higher precision can lead to a value that is not the
 nearest number.
Is this true even when you have a few more digits than necessary?
Kahan's article suggested to me that adding two guard decimal digits
avoids this problem. This is why 40 digits was chosen.
Consistency is also of practical importance: in practice, don't all
compilers read decimal digit strings the same way and end up with
the same internal representation (for the same floating-point format),
so that calculations are as portable as possible? This is
what causes most trouble in practice: one gets a slightly different
result and wastes much time puzzling over why.
 So maybe the interface should provide four
 values for each constant at a given
 precision: an approximation, the nearest value, a lower
 bound, and an upper bound.
Possible, but yet more complexity?
Paul
 -----Original Message-----
 From: boost-bounces_at_[hidden]
 [mailto:boost-bounces_at_[hidden]] On Behalf Of
 Guillaume Melquiond
 Sent: 20 June 2003 13:28
 To: Boost mailing list
 Subject: Re: [boost] Advanced math constants scheme


 On Thu, 19 Jun 2003, Augustus Saunders wrote:

 > > PS I'd like to hear more views on this -
 > > previous review comments were quite different,
 > > being very cautious about an 'advanced' scheme like this.

 I didn't react to this review at first because I was a bit
 disappointed by the content of the library. It was more like
 some questions about the best way to represent constants in
 a C++ library. And since I already had given my thoughts
 about that, I didn't feel the need to speak about it again.

 > Disclaimer: I am neither a mathematician nor a scientist (I don't
 > even play one on TV). I do find the prospect of writing natural,
 > efficient, and precise code for solving various equations a
 > worthwhile goal. So, since you asked for comments, here are my
 > non-expert thoughts.
 >
 > As I understand it, the original proposal's goal was to provide
 > conveniently accessible mathematical constants with precision
 > greater than current hardware floating point units without any
 > unwanted overhead and no conversion surprises. Additionally, it
 > needed to work with the interval library easily. To work around
 > some compilers' failure to remove unused constants or poor
 > optimization, we wound up discussing function call and macro
 > interfaces. Nobody, however, is thrilled with polluting the global
 > namespace, so unless Paul Mensonides convinces the world that
 > macro namespaces are a good thing, some of us need convincing that
 > macros are really the way to go.

 I am not really interested in macros. I would prefer for the
 library to only provide one kind of interface. There could
 then be other headers on top of it to provide other
 interfaces to access the constants.

 The standard interface should provide a way to access a
 constant at a given precision and an enclosing interval of
 it. For example, this kind of scheme would be enough for me:
 "constant<pi, double>::lower()". I'm not suggesting that
 such a notation should be adopted; it's just a way to show
 what I consider important in a constant.

 If a particular precision is not available, the library
 should be able to infer it thanks to the value of the
 constant for other precisions. For example, if the only
 available precisions are "float" and "long double" for a
 particular architecture and/or constant, and if the user
 needs "double", the library should be able to do such conversions:

 constant<pi, double>::value() <-> constant<pi, long double>::value()
 constant<pi, double>::lower() <-> constant<pi, float>::lower()

 Please note that for the value of a constant, a higher
 precision constant can be used instead [*]; but for the
 lower and upper bound, it must be a lower precision
 constant. So it is a bit more complicated than just
 providing 40 digits constants.

 It is the reason why I was rooting for a library specialized
 in constants. It would provide an interface able to hide the
 conversion problems. The library would have to know the
 underlying format of floating-point numbers since the
 precision of the formats is not fixed (there are 80-bit and
 128-bit long doubles, for example).

 The Interval library defines three constants: pi, 2*pi and
 pi/2. They are needed in order to compute interval
 trigonometric functions. At the time we designed the
 library, it was no easy task to correctly define these
 constants. Here is one of the 91 lines of the header that
 defines them:

 static const double pi_d_l = (3373259426.0 + 273688.0 / (1<<21)) / (1<<30);

 Using such a formula was (in our opinion) necessary in order
 for the compiler to correctly deal with these constants. I
 would be happy to remove such a header and use another
 library instead.

 > In the course of discussion, a more ambitious plan was proposed.
 > Instead of just providing a big list of constants, IIUC it was
 > suggested that an expression template library be used to allow
 > common constant combinations like 2*pi or pi/2 to be expressed
 > with normal operators. This seems good, it provides a natural
 > syntax, reduces namespace clutter, and is easier to remember.
 > However, since the idea was to use a special math program to
 > generate high precision constants, I'm not sure whether an ETL can
 > eliminate the need to compute things like 2*pi with the third
 > party program. So I'd like to know:
 >
 > does
 >
 > 1) 2*pi ==> BOOST_2_PI
 >
 > where BOOST_2_PI is a constant already defined, or does
 >
 > 2) 2*pi ==> BOOST_PP_MULT( 2, BOOST_PI )
 >
 > using high precision preprocessor math (or something) to sidestep
 > the need for defining BOOST_2_PI in the first place?
 >
 > If this was implemented the first way, then I would see any
 > "advanced" scheme as being a layer on top of the actual constant
 > library, to give it a more convenient interface. The second way
 > might actually impact what constants get defined in the first
 > place, in which case we should talk it out enough to know what
 > constants should be defined. But I'm not sure the possibility of
 > an advanced scheme should prevent us from defining the basic
 > constants - an expression framework could be another library,
 > right?

 I agree. I don't think the library should deal with expressions
 like "two * pi" in the first place. It is more like a layer on top
 of it, some kind of expression template library. The constant
 library should only define a common interface for accessing
 constants and choosing their precision. Then another library could
 be built on top of it and deal with expressions involving these
 constants.

 So, I'm not asking much from a constant library, I just want it to
 provide

 Guillaume

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk