# Boost :

From: Joel Eidsath (jeidsath_at_[hidden])
Date: 2005-09-06 10:00:29

Andras Erdei wrote:

>this would lock the implementation to a decimal-based radix one
>(otherwise you can't specify the precision as the number of decimal digits)
>which seems to be a very severe limitation
>
>the most obvious choice for a 32 bit machine is a 2^31-based radix
>representation and to my understanding your precision requirements
>rule that out
>
>
int). What you quote was just an example explaining what error ranges
were. To implement them, I imagine that something like this would be
reasonable:

rational r = 1.234;
r.error_range = .02;

This is meant to show that r can range from 1.214 to 1.254.
Arithmetic operations would update the error range as you go along.

This lets you handle a lot more than just decimal error ranges, but it
also lets you handle ordinary decimal error ranges simply:

rational r = 2.3;
r.error_range = 0.05;

Now r ranges from 2.25 to 2.35, which is what the notation "2.3"
generally means.
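A minimal sketch of how this error-range propagation might look, assuming a hypothetical `ranged` struct (the names are illustrative, not the proposed `rational` interface): half-widths add under addition, and a worst-case bound is used for multiplication.

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch of a value carrying a symmetric error range,
// i.e. the interval [value - error_range, value + error_range].
struct ranged {
    double value;       // central value
    double error_range; // half-width of the interval
};

// Addition: central values add, and worst-case error ranges add too.
ranged operator+(const ranged& a, const ranged& b) {
    return {a.value + b.value, a.error_range + b.error_range};
}

// Multiplication: worst-case bound on the error of the product,
// (|a| + ea)(|b| + eb) - |a||b| = |a|eb + |b|ea + ea*eb.
ranged operator*(const ranged& a, const ranged& b) {
    double e = std::fabs(a.value) * b.error_range
             + std::fabs(b.value) * a.error_range
             + a.error_range * b.error_range;
    return {a.value * b.value, e};
}
```

With r = 1.234 ± 0.02 and s = 2.3 ± 0.05, the sum comes out as 3.534 ± 0.07, matching the hand calculation above.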

>
>>4) It should use fast algorithms for multiplication and division, but
>>instead of trying to compete with other libraries in computation speed
>>(GMP uses hand-coded assembly in places), it should concentrate on code
>>portability and ease of extension.
>>
>>
>
>my secret hope is that using expression templates GMP and its ilk can be
>beaten :O)
>
>if you want to calculate a+b+c using a C library, you have to make two
>loops over the digits (e.g. to add a and b, and then to add c to the
>result), while in C++ you can add together all three in a single loop
>
>(iirc GMP does have a few hacks built in for fast computation of frequently
>appearing cases like x*x+y*y and a+b+c but even if memory serves me right,
>it is just a few special cases, not a general solution)
>
>if GMP really cannot be beaten, then we have to have a wrapper around it
>for those who need the efficiency and don't care about the licensing
>
>
Wow. That's the first time I'd heard of expression templates. I can
see them being very useful.
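For readers in the same position, here is a minimal sketch of the expression-template idea described above (hypothetical `bignum`/`sum_expr` names, and carry propagation deliberately omitted to keep it short): `a + b + c` builds a nested expression object at compile time, and the assignment then walks the digits once, instead of materializing `a + b` in a first loop and adding `c` in a second.

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// "Digits" here are plain ints added element-wise; real code would
// propagate carries and handle differing lengths.
struct bignum {
    std::vector<int> digits;

    // Assigning from any expression evaluates it in a single pass,
    // no matter how many operands the expression combines.
    template <class Expr>
    bignum& operator=(const Expr& e) {
        digits.resize(e.size());
        for (std::size_t i = 0; i < digits.size(); ++i)
            digits[i] = e[i];
        return *this;
    }
    std::size_t size() const { return digits.size(); }
    int operator[](std::size_t i) const { return digits[i]; }
};

// A lazy sum: holds references to its operands and computes each
// digit on demand. Nesting sum_exprs represents a + b + c.
template <class L, class R>
struct sum_expr {
    const L& l;
    const R& r;
    std::size_t size() const { return l.size(); }
    int operator[](std::size_t i) const { return l[i] + r[i]; }
};

template <class L, class R>
sum_expr<L, R> operator+(const L& l, const R& r) { return {l, r}; }
```

The two-loop C-library version would compute and store `a + b` before touching `c`; here `d = a + b + c;` reads each digit of all three operands exactly once.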

>
>
>>5) I do not envision any fancy memory management. The "big int" class
>>will probably use a std::vector<int> to store its implementation data.
>>
>>
>
>i do not yet have benchmarks (partly because do not know what to
>benchmark -- if someone would be willing to develop some benchmarks
>derived from real-life applications, it would be very helpful), but
>i suspect that memory management will be an important factor in the
>speed of the end result
>
>currently i have three methods in use:
>
>- linked list of single digits (std::list)
>
>- a continuous storage for all digits (std::vector)
>
>- a linked list of ever-growing slices of digits (at the moment I have
>two versions: one has nodes with 1, 2, 4, 8, ... 2^n digits, and the
>other with 1, 2, 3, 5, 8, ... F(n) digits)
>
>
>
>
By fancy memory management, I meant custom allocators. I can envision
using containers other than vector if they would speed things up.
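The two node-growth schedules mentioned for the "linked list of ever-growing slices" can be sketched as follows (function names are hypothetical, purely to illustrate how per-node capacity grows; this is not the actual storage code):

```cpp
#include <cassert>
#include <vector>

// Capacities of successive nodes under the doubling schedule:
// 1, 2, 4, ..., 2^(n-1).
std::vector<int> doubling_slices(int n) {
    std::vector<int> s;
    int cap = 1;
    for (int i = 0; i < n; ++i) {
        s.push_back(cap);
        cap *= 2;
    }
    return s;
}

// Capacities under the Fibonacci schedule described in the post:
// 1, 2, 3, 5, 8, ... F(n).
std::vector<int> fibonacci_slices(int n) {
    std::vector<int> s;
    int a = 1, b = 2;
    for (int i = 0; i < n; ++i) {
        s.push_back(a);
        int t = a + b;
        a = b;
        b = t;
    }
    return s;
}
```

Both schedules keep the number of allocations logarithmic in the digit count while avoiding std::vector's wholesale reallocation-and-copy; the Fibonacci variant grows more gently, wasting less space in the final, partially filled node.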

Joel Eidsath