
From: Simonson, Lucanus J (lucanus.j.simonson_at_[hidden])
Date: 2008-08-21 12:38:47


Luke wrote:
>> Can your digit_type be a 64 bit unsigned
>> integer? If so, you have pretty much the ideal variable precision
>> integer.

Kevin wrote:
>It can, but only if you have a 128 bit word type (necessary for
>intermediate results in the calculations) or if all important
>calculations are coded in assembler. Then I could take advantage of
>add-with-carry instructions etc. and handle carries in place.

Ah, I see. 32 bits is certainly good enough. Speaking of assembler and
platform-specific stuff, do you know how to trap integer overflow
events in C++ so they can be handled as exceptions? It would be a big
performance advantage to try a calculation that might overflow, catch
the overflow exception thrown by the event handler, and then execute
high precision/complex calculations instead to get the correct answer.
This would need to coexist with code that doesn't catch overflow, so
the event handler would need to inspect a dynamic policy that tells it
whether to ignore or throw in its current context.

overflow_policy.push(THROW);
try {
        a = b * c * d;
} catch (const overflow_exception&) {
        a = foo(b, c, d);
}
overflow_policy.pop();
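
There is no portable way in C++ to trap the hardware overflow event
itself, but the policy idea can be approximated with checked
arithmetic. A rough sketch, using the GCC/Clang builtin
__builtin_mul_overflow (overflow_policy, overflow_exception and
checked_mul are made-up names for illustration, not an existing
interface):

#include <stack>
#include <stdexcept>

enum overflow_policy_kind { IGNORE, THROW };

struct overflow_exception : std::runtime_error {
        overflow_exception() : std::runtime_error("integer overflow") {}
};

// Dynamic policy stack consulted by the checked operations.
static std::stack<overflow_policy_kind> overflow_policy;

// Checked multiply; __builtin_mul_overflow reports whether the
// product wrapped. It is a compiler extension, not a hardware trap.
inline long checked_mul(long b, long c) {
        long r;
        if (__builtin_mul_overflow(b, c, &r)
                        && !overflow_policy.empty()
                        && overflow_policy.top() == THROW)
                throw overflow_exception();
        return r;
}

With something like that in place, the try block above would read
a = checked_mul(checked_mul(b, c), d);.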

Kevin wrote:
>So what you actually need is a fixed precision integer that lives on
>the stack. mp_int involves memory allocation and algorithms designed
>for very large integers which is probably overkill.

Yes, I was originally thinking a parameterized fixed precision integer
that lives on the stack. Something like:

template <int bit_width>
class integer {
        static const int word_length = bit_width / word_width + 1;
        word_type vals_[word_length]; // magnitude in machine words
        int sign_;                    // separate sign (sign-magnitude)
public:
        ...
};

where word_width and word_type would be platform dependent.

At which point you could do many of the same things with &vals_ that
you do with your array of digits, but it would have to report overflow
instead of reallocating when the capacity of vals_ becomes
insufficient.
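
For example, addition over vals_ could be the usual schoolbook loop
with explicit carry propagation. A sketch under the sign-magnitude
assumption above (adding the magnitudes of two operands with the same
sign, word_type unsigned); a carry out of the last word is the
overflow condition:

template <int bit_width>
integer<bit_width>& integer<bit_width>::operator+=(const integer& rhs) {
        word_type carry = 0;
        for (int i = 0; i < word_length; ++i) {
                word_type sum = vals_[i] + rhs.vals_[i]; // may wrap
                word_type carry_out = (sum < vals_[i]) ? 1 : 0;
                vals_[i] = sum + carry;
                if (vals_[i] < sum)    // wrapped while adding the carry
                        carry_out = 1;
                carry = carry_out;
        }
        if (carry)
                throw overflow_exception(); // fixed width exceeded
        return *this;
}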

I can look at any arithmetic operation and quickly tell exactly how many
bits I need to represent it. That was how I knew I needed 65 bits to
compute the line segment intersection. (I brought it down from 96 bits
with algebra, but couldn't reduce the sum-of-products term...) Instead
of adaptive precision, I would like to be able to specify how much space
I need at compile time.
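
That bookkeeping could even be pushed into the operator signatures so
the compiler tracks the worst-case width for you. A sketch (purely
illustrative, not an existing interface):

// A product of an m bit and an n bit magnitude needs m + n bits;
// a sum needs max(m, n) + 1 bits.
template <int m, int n>
integer<m + n> operator*(const integer<m>& lhs, const integer<n>& rhs);

template <int m, int n>
integer<(m > n ? m : n) + 1>
operator+(const integer<m>& lhs, const integer<n>& rhs);

// The 65 bit case falls out directly:
//   integer<32> * integer<32> -> integer<64>
//   integer<64> + integer<64> -> integer<65>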

After looking at your mp_int data structure this sounds like something
you could implement quite easily by reusing a large portion of your
existing code.

Luke

