From: Moore, Paul (Paul.Moore_at_[hidden])
Date: 2000-03-06 07:50:39
From: Dave Abrahams [mailto:abrahams_at_[hidden]]
> I think you'll find yourself pressed by the group to make
> performance high and to make sure it works well in edge cases.
> These things are often what makes the difference between a
> library being widely used and being widely ignored. I hope
> you'll be willing to rise to the challenge.
Hm. I overstated things. I see a value in a library targeted at casual
users, where specialists already have high-performance, tailored
alternatives of their own.
Consider: STL provides a map<> class. The implementation technique is
not considered relevant to the target audience. Yes, there are users who
need to care about the performance and other trade-offs among hash tables,
red-black trees, splay trees, etc. But casual users just want a map class
which is optimal for "normal" use.
As far as unlimited-precision integers are concerned, consider the int type.
It's a compromise. It trades off precision in favour of other benefits
(speed, space, natural syntax and semantics). I was suggesting an
alternative which places a higher emphasis on precision (but losing some
speed and space efficiency). [The syntax and semantics issues simply mean
that I want the result to "look like" a built-in integer]. I am not a
numerical analyst, so I can't offer fancy implementation techniques (Chinese
remainders, residue arithmetic, etc.). But I can build a straightforward "take the
algorithms from Knuth" implementation, which should remain efficient enough
for "normal" use.
Again, take rational<>. The key design decision was to implement it as a
template, with the type of the numerator/denominator as the template
parameter. (This assumes a num/den implementation, which isn't the only
possibility, but it is the simplest, it is what is presently implemented,
and it illustrates the issues well). Now the problem is that if the template
parameter used is "int", we hit issues related to the implementation --
because of the limitations of "int". Specifically, the range/precision
behaviour is variable. The maximum value is roughly 2^31/1 (for a 32-bit
signed int), but the finest precision, about 1/2^31, is only achieved near
zero. This type of behaviour is well known with floating point, but it is
unexpected with rationals (after all, rationals are generally seen as
"exact" fractions). But the only way to avoid it is to
build rationals on the back of unlimited-precision integers. This can be
done, using rational<my_integer>, but the default is rational<int>, (a)
because there is no standard unlimited-precision integer (yet!) and (b)
because the fundamental operations required to implement rationals
(multiplication and division) are generally slow on unlimited-precision
integers.
Anyway, this is getting both theoretical and defensive. I'll keep
implementing things, and I'll rely on you to keep me honest! (But I'll stand
by my view that the requirements of casual users are more important than
those of specialists).
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk