Subject: Re: [boost] [review] Multiprecision review (June 8th - 17th, 2012)
From: Paul A. Bristow (pbristow_at_[hidden])
Date: 2012-06-09 05:40:15
> -----Original Message-----
> From: boost-bounces_at_[hidden] [mailto:boost-bounces_at_[hidden]] On Behalf Of Jeffrey
> Hellrung, Jr.
> Sent: Friday, June 08, 2012 3:28 PM
> To: boost_at_[hidden]; boost-announce_at_[hidden]; boost-users_at_[hidden]
> Subject: [boost] [review] Multiprecision review (June 8th - 17th, 2012)
> > --------
> > "The Multiprecision Library provides *User-defined* integer, rational
> > and floating-point C++ types which try to emulate as closely as
> > practicable the
> > C++ built-in types, but provide for more range and precision.
> > Depending
> > upon the number type, precision may be arbitrarily large (limited only
> > by available memory), fixed at compile time values, for example 50
> > decimal digits, or a variable controlled at run-time by member
> > functions. The types are expression-template-enabled for better
> > performance than naive user-defined types."
> > --------
> What is your evaluation of the design?
Boost.Multiprecision is an exciting development because it provides two items that Boost has long
needed in its toolkit:
fixed and arbitrary precision integer types,
fixed and arbitrary precision floating-point types.
That Boost.Math functions can be called directly is a massive step forward - even just for 'trivial'
tasks like pre-computing math constants.
Wheee! - we can now use brute force and effortlessly hurl heaps of bits at any recalcitrant problem.
Suddenly, we can do all sorts of tricks at altogether monstrous precision and range! Getting more
precision for some of the calculation is suddenly painless.
(Of course, limiting the hundreds-of-digits calculations to the nitty-gritty bits that really need them will reduce the run-time cost.)
(Not to mention random and rational types; and the prospect that complex and fixed-point might also finally be made fully generic is looking distinctly possible - that would be even more wonderful.)
> What is your evaluation of the implementation?
License and backend
Allowing a choice of backend has crucial license advantages: one can use the 'gold standard' optimised GMP, but also the Boost-licensed version with remarkably little loss of speed. (I note the price we are paying for the commercial greed and patent abuse that has made maintaining the GPL license status of GMP such a quasi-religious issue.)
Crucially, the implementation works hard to be as near as possible a plug-in replacement for the C++ built-in types, including providing std::numeric_limits (where these - mostly - make sense). In general, all the
iostream functions do what you could (reasonably) expect. And it is a very strong plus-point that
fpclassify and all the usual suspects of regular C99 functions exist: this should mean that most
moves from built-in floating-point to multiprecision should 'just work'.
(Any potential license problems from copyright of Christopher Kormanyos's e_float by the ACM have been resolved.)
Fast enough for many purposes, especially if it is possible to use a GPL backend. The optional expression-template enabling is cool.
There is a big suite of test programs written by Christopher Kormanyos to test his e_float type
(whose engine was hi-jacked by John Maddock and extended to (optionally) use expression templates).
These provide a good assurance that the underlying integer and floating point types work correctly
and that it is going to work when used in anger.
Unsurprisingly, I was able to run the test package using MSVC VS 10 OK (though don't hold your breath - it takes a while!).
(Testing iostream is, of course, a nightmare - there are an infinity of possible I/O combinations
and the standard is sketchy in places and there are some differences between major platforms, so
portability is never going to be 100%. But I got the impression that it works as expected).
Writing a simple loopback stream output and re-input, I found that using Boost.Test to compare
values can mislead.
A patch is at https://svn.boost.org/trac/boost/ticket/5758
#5758: Boost.Test Floating-point comparison diagnostic output does not support radix 10 (not enough decimal digits are displayed).
For example, it leads to nonsensical reports from a loopback test like
[1e+2776234983093287513 != 1e+2776234983093287513]
when, with enough digits displayed, the true difference would be obvious.
However the underlying Boost.Test macros appeared to work fine and are used by a comprehensive set
of tests provided (dealing with complications of multiple backend and error handling policies).
> What is your evaluation of the documentation?
I was able to write some trivial examples before I needed to dig in. Nice. Convincing user examples. Warns of
some dragons waiting to burn the unwary. (Very few typos - nice proof-reading ;-))
> What is your evaluation of the potential usefulness of the library?
When you need it, you need it very badly. So essential.
> Did you try to use the library? With what compiler?
Used with MSVC 10 for a few experiments and to calculate high precision constants.
Did 'what it said on the tin', and agreed with Mathematica and other sources.
> Did you have any problems?
Shamefacedly, I fell into the pits noted below and was duly singed by the dragons lurking therein.
> How much effort did you put into your evaluation?
Reasonable, including playing with e_float.
I wrote a simple loopback stream output and re-input using the random package.
ss << std::setprecision(std::numeric_limits<cpp_dec_float_100>::max_digits10) << b;
It was essential to use max_digits10, not just digits10. This ran until I got bored.
Careful reading of docs.
Enough use to be confident it works OK.
> Are you knowledgeable about the problem domain?
> Do you think the library should be accepted as a Boost library?
> Lastly, please consider that John and Christopher have compiled a TODO list based on
> comments. Feel free to comment on the priority and necessity of such TODO items, and whether any
> might be show-stoppers or warrant conditional acceptance of the library.
It is ready for use.
I am sure that the library will be refined in the light of wider user experience (and that will only
really come when it is issued as a Boost library).
Initially, e_float (like NTL - used during development of Boost.Math as an example of a multiprecision type, and for calculation of constants) forbade implicit conversion. But it soon became clear that it was impractical to make everything explicit, and we patched NTL to permit this (and to provide the usual suspects of std::numeric_limits and functions too).
This leaves some dragons waiting for the unwary, so those who write
cpp_dec_float_100 v = 1.2345678901234567890123456789; // a double literal!
will get what they asked for, and deserve - a catastrophic loss of accuracy starting at the 17th decimal digit
(std::numeric_limits<double>::max_digits10 = 17 for the common 64-bit representation).
This loss of accuracy will rarely jump out at you :-(
(If I got £1, $1, or 1 euro from everyone who makes this mistake (or falls into one of the many other complex pits discussed in the docs), I believe I would become rich ;-)
It would be nice to catch these mistakes, but not at the price of losing use of all the Boost.Math
functionality (and much more).
(I fantasize about a macro that can switch intelligently between explicit and implicit conversion to protect the hapless user from his folly, but advising on loss of accuracy is probably really a compiler's job.)
On the other hand, it is really, really cool that using a string works:
cpp_dec_float_50 df = "3.14159265358979323846264338327950288419716939937510";
The existence and (surprising) number of these guard digits has already been discussed and I see no problem with
the way it works. It will be an exceptional program that really needs to use max_digits10 rather
than digits10. (Boost.Test is an example, to avoid nonsensical display of [2 != 2] when guard
digits differ - see above).
--- Paul A. Bristow, Prizet Farmhouse, Kendal LA8 8AB UK +44 1539 561830 07714330204 pbristow_at_[hidden]