Subject: Re: [boost] Looking for some "real world" extended precision integer arithmetic tests
From: Andrii Sydorchuk (sydorchuk.andriy_at_[hidden])
Date: 2012-01-24 16:09:48
I was following the development process of your library and was just waiting
for this moment. I'll test your fixed_int code and compare it to my own
implementation. FYI, the GMP library was 4 times slower compared to my fixed_int.
I was also wondering whether it's possible to provide a generalized interface
for the Boost-licensed big_number, specifying the underlying data storage via a
template parameter (boost::array for fixed integers, or vector for non-fixed
integers). Most of the operation implementations (addition, multiplication
and all the others) should remain the same. You may want to have a look at
my implementation. I would suggest that this way you should get a non-fixed
int implementation performing almost the same as fixed_int, and as a plus you
would not have to duplicate the arithmetic code.
On Tue, Jan 24, 2012 at 6:50 PM, John Maddock <boost.regex_at_[hidden]> wrote:
>>> I'm continuing to add to the multiprecision arithmetic library in the
>>> sandbox (under "big_number"), and I've just added a Boost licensed
>>> fixed-precision integer type which on Win32 at least looks to be very
>>> competitive with GMP for reasonable numbers of bits (up to about 1024),
>> Wow, those are some big numbers you are putting up. You must be stoked.
>> However, as we all know, there are lies, damn lies, and performance stats.
>>> Plus the test results I have above ignore the effect of memory allocation
>>> (needed by libtommath and GMP, but not the fixed_int code - which in practice
>>> should make it faster still). So I'd be really interested to put the code
>>> through its paces with some real world code that really thrashes an
>>> extended precision integer type. Any suggestions? Anything in Boost?
>> In my experience, memory allocations dominate GMP performance for
>> modest-sized big numbers.
>> My library uses an infinite-precision rational data type. I would love to
>> see your library interoperate with Boost.Rational. However, because I use
>> lazy exact arithmetic, you would not be able to observe much effect of using
>> a different numerical data type with my library.
> The number types I have can all be plugged straight into Boost.Rational.
> However, performance compared to, say, mpq_t is truly terrible. Not sure if
> it's Boost.Rational or Boost's gcd that's the bottleneck, though I suspect
> both could be improved.
>> The Voronoi diagram feature being implemented in my library by Andrii as
>> part of GSoC 2010 implements its own extended precision arithmetic (float
>> and int) and has performance that is more heavily dependent on the
>> numerical data type because for line segments, at least, it is impossible
>> to avoid extended precision in the common case. You should work with
>> Andrii to both make sure your library works well for his needs and collect
>> performance results with his tests.
> Sure, where do I find him?
> Thanks, John.