Subject: Re: [boost] Multiprecision vs. xint
From: John Maddock (boost.regex_at_[hidden])
Date: 2012-06-17 04:16:31
>> Although the proposed Boost.Multiprecision provides default
>> implementations of int, rational and float, its main goal is
>> not to compete with the world's best performing implementations
>> thereof.
>
> Of course it needn't compete with the world-class implementations, at
> least not initially. However, it must be fast enough to be usable in
> enough use cases to get sufficient experience to determine whether the
> interface is right and whether the customization points are appropriate,
> in order to have a solid foundation for a standard proposal.
Point taken; however, it does wrap many world-class implementations such as
GMP and MPFR. Frankly, in the general case it's foolish to believe that
Boost can compete with those guys. However, basic C++ implementations of
integer and floating-point types are provided which are much better than
toys, if rather less than world class, and perfectly usable for most cases (or
where a non-GNU license is necessary).
>> I would prefer to work on the interface, not the performance
>> of the default classes that John and I have provided. If we
>> bicker about these, we will never get to the real matter at
>> hand which is specifying the abstraction for non-built-in
>> types.
>
> While I agree with your sentiment, note that Phil's concern about being
> able to create a fast, slightly-larger-than-built-in type is important.
> Showing how such a type can be created is an important exercise because it
> will show whether the abstraction and points of customization have been
> properly conceived to permit creating such types.
>
> Indeed, given the likelihood of folks wanting to do what Phil did, the
> library could provide a template-based backend implementation that does
> most of the heavy lifting.
There is a problem here, in that there is no "one true way" to implement a slightly
larger integer type: it all depends on how much larger, and on the particular
sizes of the integers encountered in practice.
For example:
* For int128, simplicity will probably always win out, making the "naive"
implementation (sketched below) the best.
* By the time you get to, say, int1024, computational complexity wins out, so
something like the fixed-size integers already provided is likely to be
best (modulo the fact that more profiling and fine-tuning is still
required). That's why I was persuaded to switch to those prior to the
review from a more traditional fixed int (like Phil's).
* To complicate matters further, for, say, int1024 the "naive" version wins if
most of the values use all the bits, whereas the version that maintains a
runtime length wins when most values are small.
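To make the int128 case concrete, here is a minimal sketch of what I mean by
the "naive" representation - two 64-bit limbs and an add-with-carry. It is
purely illustrative (not Phil's code, nor anything in the library), but it
shows how little there is for a cleverer scheme to improve on at this size:

#include <cstdint>
#include <iostream>

// Illustrative only: a "naive" fixed-width 128-bit unsigned integer
// built from two 64-bit limbs.
struct naive_uint128
{
   std::uint64_t lo, hi;
   naive_uint128(std::uint64_t l = 0, std::uint64_t h = 0) : lo(l), hi(h) {}
};

// Addition is a single add-with-carry step - hard to beat with anything
// more elaborate.
inline naive_uint128 operator+(const naive_uint128& a, const naive_uint128& b)
{
   naive_uint128 r;
   r.lo = a.lo + b.lo;
   r.hi = a.hi + b.hi + (r.lo < a.lo ? 1 : 0); // carry out of the low limb
   return r;
}

int main()
{
   naive_uint128 x(0xFFFFFFFFFFFFFFFFull, 0);     // 2^64 - 1
   naive_uint128 y(1, 0);
   naive_uint128 z = x + y;                       // carries into the high limb
   std::cout << z.hi << " " << z.lo << std::endl; // prints "1 0"
   return 0;
}

A runtime-length representation, by contrast, has to carry a size field around
and branch on it - exactly the overhead that pays off at int1024 with mostly
small values, and costs you at int128.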
Having said all that, I think it is true that a weakness of the library is the
inherent overhead when dealing with "trivial" types that are extremely close
to the metal to begin with. Probably that reflects the original focus of
the library, for which this is a non-issue.
>> John's concept takes the first step toward establishing an
>> architecture for extended numeric types.
>
> It is reasonable to view this as "the first step" and leave the
> fulfillment of some of these other requirements for later. However, if
> there is no proof of concept for the various use cases, then you can't be
> sure the abstraction and points of customizations are correct.
True. However, you can't carry on forever; you have to ask for a review at
some point. At present the focus is more on "breadth" than "depth".
>> I do, however, have UINT128 and INT128. I wrote them sometime
>> around 2002. If you *really* would like, I could approach
>> John and ask how we could make these boost.ready. I also have
>> UINT24 for embedded systems.
>
> One of those might be an excellent tutorial for defining a backend, though
> actually including the code, in some form, with the library would be
> ideal.
Adding a couple of tutorials for backend writing is an excellent idea.
Chris - I think we all have such beasts as an int128 - there's one (not part
of this review) under boost/multiprecision/depreciated/fixed_int.hpp.
Unfortunately I suspect that int128 is so close to the metal that the fastest
implementation wouldn't use mp_number - however, it might make a good use
case for trying to find out where the bottlenecks are, etc.
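For concreteness, here is roughly the sort of skeleton such a tutorial might
start from - a backend wrapping nothing more exotic than a built-in long long.
The names below (signed_types and friends, str(), negate(), compare(), and the
non-member eval_* routines) follow my reading of the Backend requirements and
should be checked against the documentation; the real requirements include
more than is shown here (swap, mixed-argument eval_ overloads,
eval_convert_to and so on), and the skeleton is not actually plugged into
mp_number below:

#include <string>
#include <sstream>
#include <ios>
#include <boost/mpl/list.hpp>

// A deliberately trivial backend: the stored value is a plain long long.
// The typedefs list which built-in types can be assigned directly;
// arithmetic is forwarded to the eval_* free functions.
struct trivial_int_backend
{
   typedef boost::mpl::list<long long>          signed_types;
   typedef boost::mpl::list<unsigned long long> unsigned_types;
   typedef boost::mpl::list<long double>        float_types;

   long long value;

   trivial_int_backend() : value(0) {}
   trivial_int_backend& operator=(long long v)          { value = v; return *this; }
   trivial_int_backend& operator=(unsigned long long v) { value = static_cast<long long>(v); return *this; }
   trivial_int_backend& operator=(long double v)        { value = static_cast<long long>(v); return *this; }
   trivial_int_backend& operator=(const char* s)
   {
      std::istringstream is(s);
      is >> value;
      return *this;
   }

   std::string str(std::streamsize, std::ios_base::fmtflags) const
   {
      std::ostringstream os;
      os << value;
      return os.str();
   }
   void negate() { value = -value; }
   int compare(const trivial_int_backend& o) const
   { return value < o.value ? -1 : (value > o.value ? 1 : 0); }
};

// Arithmetic is supplied as non-member customization points:
inline void eval_add(trivial_int_backend& r, const trivial_int_backend& a)      { r.value += a.value; }
inline void eval_subtract(trivial_int_backend& r, const trivial_int_backend& a) { r.value -= a.value; }
inline void eval_multiply(trivial_int_backend& r, const trivial_int_backend& a) { r.value *= a.value; }
inline void eval_divide(trivial_int_backend& r, const trivial_int_backend& a)   { r.value /= a.value; }

int main()
{
   trivial_int_backend a, b;
   a = 6LL;
   b = 7LL;
   eval_multiply(a, b); // exercise a customization point directly
   return a.str(0, std::ios_base::fmtflags()) == "42" ? 0 : 1;
}

A tutorial built around something like this would also make the cost of the
abstraction easy to see: every one of these calls is trivially inlinable, so
most of any remaining overhead for an int128-style type is likely to come
from the front end rather than from the backend itself.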
Regards, John.