Boost:
From: Guillaume Melquiond (gmelquio_at_[hidden])
Date: 2003-06-20 07:28:01
On Thu, 19 Jun 2003, Augustus Saunders wrote:
> >PS I'd like to hear more views on this -
> >previous review comments were quite different,
> >being very cautious about an 'advanced' scheme like this.
I didn't react to this review at first because I was a bit disappointed by
the content of the library. It read more like a set of questions about the
best way to represent constants in a C++ library, and since I had already
given my thoughts on that, I didn't feel the need to repeat them.
> Disclaimer: I am neither a mathematician nor a scientist (I don't
> even play one on TV). I do find the prospect of writing natural,
> efficient, and precise code for solving various equations a
> worthwhile goal. So, since you asked for comments, here are my
> non-expert thoughts.
>
> As I understand it, the original proposal's goal was to provide
> conveniently accessible mathematical constants with precision greater
> than current hardware floating point units without any unwanted
> overhead and no conversion surprises. Additionally, it needed to
> work with the interval library easily. To work around some
> compilers' failure to remove unused constants or poor optimization,
> we wound up discussing function call and macro interfaces. Nobody,
> however, is thrilled with polluting the global namespace, so unless
> Paul Mensonides convinces the world that macro namespaces are a good
> thing, some of us need convincing that macros are really the way to
> go.
I am not really interested in macros. I would prefer for the library to
only provide one kind of interface. There could then be other headers on
top of it to provide other interfaces to access the constants.
The standard interface should provide a way to access a constant at a
given precision, along with an enclosing interval for it. For example, this
kind of scheme would be enough for me: "constant<pi, double>::lower()". I'm
not suggesting that this particular notation be adopted; it is just a way
to show what I consider important in a constant.
If a particular precision is not available, the library should be able to
derive it from the value of the constant at other precisions. For
example, if the only available precisions are "float" and "long double"
for a particular architecture and/or constant, and the user needs
"double", the library should be able to perform conversions such as:
constant<pi, double>::value() <-> constant<pi, long double>::value()
constant<pi, double>::lower() <-> constant<pi, float>::lower()
Please note that for the value of a constant, a higher-precision constant
can be used instead [*]; but for the lower and upper bounds, it must be a
lower-precision constant. So it is a bit more complicated than just
providing 40-digit constants.
This is why I was rooting for a library specialized in constants. It
would provide an interface able to hide these conversion problems. The
library would have to know the underlying format of the floating-point
types, since the precision of the formats is not fixed (there are both
80-bit and 128-bit long doubles, for example).
The Interval library defines three constants: pi, 2*pi and pi/2. They are
needed in order to compute interval trigonometric functions. At the time
we designed the library, it was no easy task to correctly define these
constants. Here is one of the 91 lines of the header that defines them:
static const double pi_d_l = (3373259426.0 + 273688.0 / (1<<21))
/ (1<<30);
Using such a formula was (in our opinion) necessary in order for the
compiler to deal correctly with these constants. I would be happy to
drop this header and use another library instead.
> In the course of discussion, a more ambitious plan was proposed.
> Instead of just providing a big list of constants, IIUC it was
> suggested that an expression template library be used to allow common
> constant combinations like 2*pi or pi/2 to be expressed with normal
> operators. This seems good, it provides a natural syntax and reduces
> namespace clutter and is easier to remember. However, since the idea
> was to use a special math program to generate high precision
> constants, I'm not sure whether an ETL can eliminate the need to
> compute things like 2*pi with the third party program. So I'd like
> to know:
>
> does
>
> 1) 2*pi ==> BOOST_2_PI
>
> where BOOST_2_PI is a constant already defined, or does
>
> 2) 2*pi ==> BOOST_PP_MULT( 2, BOOST_PI )
>
> using high precision preprocessor math (or something) to sidestep the
> need for defining BOOST_2_PI in the first place?
>
> If this was implemented the first way, then I would see any
> "advanced" scheme as being a layer on top of the actual constant
> library, to give it a more convenient interface. The second way
> might actually impact what constants get defined in the first place,
> in which case we should talk it out enough to know what constants
> should be defined. But I'm not sure the possibility of an advanced
> scheme should prevent us from defining the basic constants--an
> expression framework could be another library, right?
I agree. I don't think the library should deal with expressions like "two
* pi" in the first place. It is more like a layer on top of it, some kind
of expression template library. The constant library should only define a
common interface for accessing constants and choosing their precision.
Then another library could be built on top of it and deal with expressions
involving these constants.
So I'm not asking much from a constant library: I just want it to provide
a uniform way to access each constant at a given precision, together with
guaranteed enclosing bounds.
Guillaume
[*] Actually, even that is not always true. Due to "double rounding"
troubles, using a higher precision can lead to a value that is not the
nearest number at the target precision. So maybe the interface should
provide four values for each constant at a given precision: an
approximation, the nearest value, a lower bound, and an upper bound.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk