
Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point
From: Simonson, Lucanus J (lucanus.j.simonson_at_[hidden])
Date: 2011-09-01 01:31:46


Christopher Kormanyos wrote:

> I hear you. But I beg to augment this critical point.
> Multiple-precision mathematical algorithms spend roughly 80-90% of
> their computational time within the multiplication routine. If you
> can get a fast multiplication algorithm and subsequently apply this
> to Newton iteration for inverse and for roots, then you've got a good
> package. If you make it portable and adherent to the C++ semantics,
> well, then you're finally finished with this mess. If you investigate
> the MFLOPS of MP-math, even a modest 100 digits is, maybe, 100-200
> times slower than double-precision. The rat-race lies within
> the multiplication routine, not the allocation mechanisms. Fixed-size
> arrays and a basic custom allocator above size=N can solve the
> allocation performance bottleneck (to within certain limits). But
> fast multiply and C++ adherence ultimately elevate the package from
> the realm of a hack to a real, specifiable, high-performance type.
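(As an aside, since the quoted point hinges on it: the Newton iteration Christopher refers to computes a reciprocal using only multiplication and subtraction, which is why a fast multiply dominates the cost of division and roots. A minimal sketch, with double standing in for an MP type:)

#include <cstdio>

// Newton iteration for 1/a: x_{n+1} = x_n * (2 - a * x_n).
// Each iteration roughly doubles the number of correct digits.
double reciprocal(double a, double x0, int iterations)
{
    double x = x0;                     // initial guess, e.g. from double precision
    for (int i = 0; i < iterations; ++i)
        x = x * (2.0 - a * x);
    return x;
}

int main()
{
    std::printf("%.17g\n", reciprocal(3.0, 0.3, 5)); // converges toward 1/3
}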

From my perspective, though, the common case is numbers with only a small constant factor more bits than the largest built-ins; in my case, 65-, 96- and 126-bit integers and rationals. You need 65 bits for a 32-bit integer cross product in the worst case, and in many cases the probability of a large number occurring goes down as its size increases. Yes, a fixed-size array in the datatype would be my preference, but the way I would like to use such a Boost multiprecision library is as a wrapper for GMP that defaults to a portable C++ implementation when GMP is not available. From my perspective, usage of GMP with modest-sized multiprecision values is the common case. As much as we may hate the LGPL, GMP is part of the Linux environment of every machine I use for real work, and it is free. Right now I have no fallback, and my algorithms run in non-robust mode without GMP. I'd be very happy to have a library in Boost to provide the default multiprecision datatype implementation.
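Roughly the usage model I have in mind, as a sketch only (the names and the HAVE_GMP macro are placeholders, not a proposed interface): application code sees a single type that is GMP-backed when GMP is available and falls back to the portable C++ implementation otherwise.

#if defined(HAVE_GMP)
# include <gmpxx.h>                    // GMP's C++ bindings
  typedef mpz_class mp_int;            // arbitrary-precision integer backed by GMP
#else
  // Stand-in for the portable C++ implementation; a real fallback would
  // implement full multiprecision arithmetic here.
  struct mp_int { /* portable implementation */ };
#endif

int main()
{
    mp_int a;                          // the same source works with or without GMP
    (void)a;
}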

Given that xint was recently rejected, about which I'm sure we all have mixed feelings, we need to be careful about the scope and intent of this fairly similar library. Is the scope multiprecision floating point only, with the intent to provide high-performance multiprecision algorithms? I'm afraid that direction would lead to benchmark comparisons with GMP, with predictable results. If instead you provide a metafunction for looking up the implementation of the multiprecision arithmetic, so that people can specify GMP (or a suitable alternative), then the focus will turn toward how well you wrap GMP with your expression templates, and the primary concern about your own algorithms will be their correctness and portability. I think an effort in that direction is realistic and achievable and something that could be accepted as a Boost library. Be careful to set yourself a task you can succeed at by limiting the scope of your library, and be careful also to avoid confusion about what you are trying to do by being very clear about what those limits are. Building a coalition of contributing authors might also be a good idea. For example, will you provide an extensible framework for adding mp_int and mp_rational later if your initial scope is limited to mp_float? I wish you the best of luck and success.
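By a metafunction I mean something along these lines, a rough sketch only (mp_backend, use_gmp_tag and the rest are placeholder names, not a proposed interface): the front end is written against the trait, so users can route it to GMP or leave the portable default.

struct portable_mp_float { /* portable C++ implementation */ };
struct gmp_mp_float      { /* thin wrapper over GMP's mpf_t */ };

// Primary template: the portable default.
template <class Tag>
struct mp_backend { typedef portable_mp_float type; };

// A user (or a config header) specializes the metafunction to select GMP.
struct use_gmp_tag {};
template <>
struct mp_backend<use_gmp_tag> { typedef gmp_mp_float type; };

// The expression-template front end depends only on mp_backend<Tag>::type,
// so its correctness and portability can be assessed independently of the
// backend that ultimately does the arithmetic.
template <class Tag>
class mp_float
{
    typename mp_backend<Tag>::type value_;
    // arithmetic operators forward to value_ ...
};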

Regards,
Luke

