Subject: Re: [boost] [review] Multiprecision review (June 8th - 17th, 2012)
From: Edward Diener (eldiener_at_[hidden])
Date: 2012-06-24 22:58:34
On 6/8/2012 10:28 AM, Jeffrey Lee Hellrung, Jr. wrote:
> Any review discussion should take place on the developers' list (
> boost_at_[hidden]), and anyone may submit a formal review, either
> publicly to the entire list or privately to just myself.
> As usual, please consider the following questions in your formal review:
This is my review of the multiprecision library.
I am going to use the term "general type" to refer to either 'integer',
'rational number', or 'floating point' type no matter what the backend is.
> What is your evaluation of the design?
The design is logical. I like the fact that various backends are
supported and that there is always a Boost backend to fall back on for a
general type. I also like very much that future backends are supported
via the backend requirements.
> What is your evaluation of the implementation?
I looked at the implementation from the point of view of an end-user but
did not look at the details. I did not have a chance to test the
implementation, so my comments are based on the documentation.
It seems very easy to use the implementation. The hardest part is that
some of the backends have their own rules, which are not entirely
consistent with other backends of the same general type ( integer,
rational number, or floating point ). This does not occur very often but
when it does the end-user has to be aware of it. These slight
inconsistencies are documented but I would like to see an attempt to
regularize them by the front-end, perhaps via a compile time trait.
As a single example of this, the gmp_int backend triggers a division-by-zero
signal when one tries to divide the integer by 0; the tom_int raises a
hardware signal on division by 0; the cpp_int throws a
std::runtime_error on division by 0. I would like to see some means
by which I could use any integer backend and know that a
std::runtime_error would be thrown on division by 0.
It would be nice if other particularities could be regularized in a
similar manner. This would make the library a little easier to use.
The library allows conversions between values of a general type no
matter what the backend. This is good. The library allows what I think
of as widening conversions, from integer to rational or float, and from
rational to float. This is good. Both follow a similar idea to C++
itself, where one can do a narrowing conversion if a static_cast is
used but otherwise a compiler error ensues. The documentation also
explains that a narrowing conversion will produce a compiler error.
Can a static_cast be used to do a narrowing conversion?
The introduction states that "mixing arithmetic operations using types
of different precision is strictly forbidden". I was disappointed not to
read any discussion of why this is so. In C++ this is not the case
with integer and floating-point types. Considering that this library
does allow conversions within the same general type as well as widening
conversions, it would seem that operations mixing different types
could fairly easily be allowed technically, by converting all values in
an operation to the largest type and/or greatest precision.
> What is your evaluation of the documentation?
The documentation is entirely adequate.
> What is your evaluation of the potential usefulness of the library?
Tremendously important to C++. Although I am not a mathematician myself,
my background in math and science suggests that a multiprecision library
involving huge and/or highly accurate numbers is an absolute necessity
for serious mathematical and scientific calculations.
> Did you try to use the library? With what compiler? Did you have any
> problems?
I did not have time to try it out with any compiler. I originally wanted
to modify some of the tests in the library so I could try them out with
the compilers I have, but the tests were too complicated for me to
understand easily. I am going to try to cobble together some simple
tests for myself during the upcoming week, but I wanted to submit my
review nonetheless before the period for it was over.
> How much effort did you put into your evaluation? A glance? A quick
> reading? In-depth study?
I put a good deal of effort into attempting to understand the pluses and
minuses of the design, and into thinking about the problems involved.
> Are you knowledgeable about the problem domain?
I am fairly knowledgeable about mathematics without being a
mathematician or claiming expertise in any particular area of mathematics.
> And, most importantly, please explicitly answer the following question:
> Do you think the library should be accepted as a Boost library?
I think the library should be accepted as a Boost library. I do think
more work needs to be done to make the library easier to use, but I have
no doubt the authors of the library are capable of doing so.
> Lastly, please consider that John and Christopher have compiled a TODO list
>  based on pre-review comments. Feel free to comment on the priority and
> necessity of such TODO items, and whether any might be show-stoppers or
> warrant conditional acceptance of the library.
I would like to see more work done in the two areas I mentioned:
regularizing the backends and performing operations with different
precisions. I do realize that accuracy is paramount when using the
library, but in the tradition of C++, as long as the end-user is aware
of any possible shortcomings in these two areas, I think they should be
allowed if technically feasible.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk