
From: Kevin Sopp (baraclese_at_[hidden])
Date: 2007-03-23 15:32:55

> (1) evaluate the need for a specific allocator because std::allocator
> doesn't provide a reallocate method, and growing allocated space could be
> a better policy than a create/destroy policy;

Both can be supported easily because you can simulate a realloc via
std::allocator functions. So you just need to specialize a struct on
the allocator type and then either simulate the realloc() or call the
real one.
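A minimal sketch of that idea (the struct and function names are invented here, not part of any proposed interface): a generic fallback that simulates realloc through allocate/copy/deallocate, which a specialization for a realloc-capable allocator could then override:

```cpp
#include <cstring>
#include <memory>

// Hypothetical helper: the primary template simulates realloc() with any
// standard allocator. For an allocator that exposes a real reallocate(),
// one would specialize this struct to forward to it instead.
template <typename Alloc>
struct reallocator {
    using T = typename std::allocator_traits<Alloc>::value_type;

    static T* apply(Alloc& a, T* old_p, std::size_t old_n, std::size_t new_n) {
        T* new_p = std::allocator_traits<Alloc>::allocate(a, new_n);
        std::size_t keep = old_n < new_n ? old_n : new_n;
        // bigint digits are trivially copyable, so memcpy is fine here
        std::memcpy(new_p, old_p, keep * sizeof(T));
        std::allocator_traits<Alloc>::deallocate(a, old_p, old_n);
        return new_p;
    }
};
```

The bigint itself only ever calls reallocator<Alloc>::apply() and never needs to know which strategy is behind it.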

> (2) minimize useless object copy: where it's possible we'd have to adopt a
> move semantic technique as described in "Move Solution";

Haven't looked at the paper so I can't say anything except that it
sounds like a good idea.

> (3) use different algorithms according to the size of the numbers
> involved, the GMP library can be taken as an example and Knuth's TAOCP Vol. 2
> as a reference for the algorithms' implementation;

One can make these thresholds user-definable at compile time because
they may be different for each platform. One could write a simple test
program which prints suggested thresholds for the host machine.
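For illustration, one common way to expose such a threshold (the macro name is invented here) is a default that the user can override on the compiler command line, e.g. -DBIGINT_KARATSUBA_THRESHOLD=48:

```cpp
#include <cstddef>

// Hypothetical tuning knob: digit counts below the threshold take the
// schoolbook O(n^2) multiply, larger ones the asymptotically faster path.
#ifndef BIGINT_KARATSUBA_THRESHOLD
#define BIGINT_KARATSUBA_THRESHOLD 32
#endif

inline bool use_karatsuba(std::size_t digits) {
    return digits >= BIGINT_KARATSUBA_THRESHOLD;
}
```

The suggested threshold-printing test would then just time both paths at increasing sizes and report the crossover point.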

> (4) decide if we have to use exception handling C++ constructs for
> arithmetic exceptions ( divide by zero, etc) or simply abort the program;
> this is essentially a matter of performance;

Not just a matter of performance but also one of interface: usually a
bigint will be used with data that comes from outside the program, so the
programmer has to be prepared to catch bigint exceptions. This could
be parameterized as template<bool use_exceptions = true>. A member
variable to hold some state about the bigint will be useful; that way you
can check for overflow and divide by zero, and encode +inf, -inf, etc.
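A rough sketch of both suggestions together (all names are hypothetical, and a plain long stands in for the real digit array): an exception-policy template parameter plus a state member that can encode +inf/-inf when exceptions are disabled:

```cpp
#include <stdexcept>

enum class bigint_state { normal, pos_inf, neg_inf, nan };

struct bigint_divide_by_zero : std::domain_error {
    bigint_divide_by_zero() : std::domain_error("bigint: divide by zero") {}
};

template <bool UseExceptions = true>
class bigint {
public:
    explicit bigint(long v = 0) : value_(v) {}

    bigint operator/(const bigint& rhs) const {
        if (rhs.value_ == 0) {
            if (UseExceptions)
                throw bigint_divide_by_zero();
            bigint r(0);                       // no exception: record the
            r.state_ = value_ >= 0             // state in the object itself
                         ? bigint_state::pos_inf
                         : bigint_state::neg_inf;
            return r;
        }
        return bigint(value_ / rhs.value_);
    }

    bigint_state state() const { return state_; }
    long value() const { return value_; }

private:
    long value_;                               // stand-in for the digit array
    bigint_state state_ = bigint_state::normal;
};
```

Callers that disable exceptions check state() after the operation; callers that enable them just catch.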

> (5) implement ctors in order to initialize a big integer from a standard
> arithmetic type and obviously from a number represented by a string;
> implement conversion functions to arithmetic types and string types;

No objection.

> (6) implement common arithmetic, shift, bitwise and comparison operators;
> using the boost::operator library would be useful where this doesn't lead
> to a performance loss;


> (7) implement io stream operator;


> (8) basic functions: sign(), odd(), abs(), etc; numeric functions: pow,
> sqrt, etc; algebraic and combinatorial functions: factorial, gcd, etc.;
> there are a lot of functions we could implement; it all depends on the
> available time;
> (9) decide if it's worth creating a specific class unsigned big integer;

Is there a use case? Maybe for fixed size bigint where you would want
defined overflow semantics.

> (10) about fixed size big integers I had thought of something like this:
> template< unsigned int N > class big_int; where N is the number of bits
> which have to be used;

Or template<unsigned int N, typename T = unsigned int> where T is the
base type of the value array, i.e. array<T> data_;
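A sketch along those lines (details assumed for illustration, e.g. that N is a multiple of the limb width and that limbs are stored least significant first):

```cpp
#include <array>
#include <climits>
#include <cstddef>

// N is the bit width, T the limb type; storage is a fixed array of
// N / (bits per limb) limbs, as in the array<T> data_ suggested above.
template <unsigned N, typename T = unsigned int>
class big_int {
    static constexpr std::size_t limb_bits = sizeof(T) * CHAR_BIT;
    static_assert(N % limb_bits == 0,
                  "N must be a multiple of the limb width");

public:
    static constexpr std::size_t limbs = N / limb_bits;

    big_int() : data_() {}                 // value-init zeroes every limb

    T limb(std::size_t i) const { return data_[i]; }

private:
    std::array<T, limbs> data_;
};
```

Because the size is a compile-time constant there is no allocator at all, which is one argument for keeping the fixed-size class separate from the dynamic one.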

> (11) make all that work on the largest possible number of architectures;
> write down library documentation, implement correctness and performance tests;


> (12) it would be nice to support infinity and indefinite forms, but it
> could mean performance problems because we'd have to evaluate all possible
> cases; maybe it would be better to realize a specific wrapper/adaptor class
> for such a goal; moreover this adaptor class could also be used for
> standard integral types;

I suggest, as above, having a state variable; I suspect the cost of
checking the states gets lost in the noise, especially when the numbers
get large.

> Question: is it possible to get good performance without implementing the
> basic algorithms in assembler?

First bigint needs to be portable; then one can easily add optimized
paths for different architectures (and assemblers). About the
question, I don't know. If you do a lot of number crunching, fast is
never fast enough ;)

Btw, this paper made big int division much more understandable to me:


Boost list run by bdawes at, gregod at, cpdaniel at, john at