
From: John Phillips (phillips_at_[hidden])
Date: 2004-06-03 01:48:28


David B. Held wrote:
> "Reid Sweatman" <drunkardswalk_at_[hidden]> wrote in message
> news:!~!UENERkVCMDkAAQACAAAAAAAAAAAAAAAAABgAAAAAAAAAQ16IR/xA0BGKJURFU1QAAAKJAAAQAAAArLDup9m3ck+fnQ8diyZEOwEAAAAA_at_earthlink.net...
>
>>[...]
>>I'd say (speaking as a soi-disant mathematician) you also need to
>>find ways to overcome the perception that "other languages do it
>>faster and/or more accurately."
>>[...]
>
> For most libraries in Boost, I can see a clear and obvious benefit
> from providing that library in C++. But when it comes to algorithms,
> it is much easier to separate the interface from the implementation,
> so it seems that we should at least consider simply providing
> C++ wrappers for implementations that exist in other languages.
> If there are clear benefits from reimplementing everything in C++,
> I'm all for it. I just think it's worthwhile to look at all the angles
> before
> diving in.
>
> Dave
>

   Some clear and obvious benefits of offering at least wrappers for
math libraries written in other languages (not a complete list, but
enough to start the conversation):

*Access to C++ type safety.
   For all the usual reasons: errors are caught by compile-time type
checking instead of at run time.
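   For instance, a minimal sketch (the C routine c_sum and its type
codes below are hypothetical, standing in for the untyped interfaces
many C numeric libraries export):

    #include <vector>

    /* A hypothetical C-style entry point: the compiler cannot check
       that data really points at n elements of the claimed type. */
    extern "C" double c_sum(const void* data, int n, int type_code);

    enum { CODE_FLOAT = 0, CODE_DOUBLE = 1 };

    // Typed wrappers: passing the wrong container is now a
    // compile-time error instead of a silent misread of memory.
    inline double sum(const std::vector<float>& v)
    { return v.empty() ? 0.0 : c_sum(&v[0], (int)v.size(), CODE_FLOAT); }

    inline double sum(const std::vector<double>& v)
    { return v.empty() ? 0.0 : c_sum(&v[0], (int)v.size(), CODE_DOUBLE); }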

*The ability to work only with STL-like containers.
   Bad pointer arithmetic in C is responsible for many of the code
errors around the Physics Department I belong to. It is better to
confine pointer arithmetic to the internals of well-tested algorithms
(where it is needed for efficiency) and let users code with well-behaved
containers, as in the sketch below.

*Easier function pointer handling.
   Many methods require the user to pass a function pointer (or a
function object, or the like) for the algorithm to operate on. The
tools C++ and Boost offer for binding, composing and otherwise
manipulating such callables can greatly simplify code, as the sketch
below shows. This simplification would have to be weighed against
possible performance losses, however.
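   As an illustration (the bisection driver here is just a sketch, but
boost::bind itself works this way):

    #include <boost/bind.hpp>

    // A root finder that accepts any callable, not only a bare
    // double (*)(double).  Assumes f(lo) and f(hi) bracket a root.
    template <typename F>
    double find_root(F f, double lo, double hi, double tol)
    {
        while (hi - lo > tol) {
            double mid = 0.5 * (lo + hi);
            if (f(lo) * f(mid) <= 0.0) hi = mid; else lo = mid;
        }
        return 0.5 * (lo + hi);
    }

    double quadratic(double x, double a, double b, double c)
    { return (a * x + b) * x + c; }

    // boost::bind fixes the coefficients, so a three-parameter family
    // of curves can be fed to an algorithm expecting one argument:
    const double root = find_root(boost::bind(quadratic, _1, 1.0, 0.0, -2.0),
                                  0.0, 2.0, 1e-12);   // x^2 - 2 = 0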

*Making generic algorithms generic.
   Templates and overloading, of course, can simplify the library
interface substantially: no more separate names for the float, double
and complex versions of what is really the same algorithm.
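   For example, where the BLAS carries the sgemv/dgemv/cgemv/zgemv
family, a templated library needs only one function. A sketch:

    #include <complex>

    // One template serves float, double, std::complex<double>, and
    // any other type with the right arithmetic: composite trapezoid
    // rule for the integral of f over [a, b] in n steps.
    template <typename T, typename F>
    T trapezoid(F f, T a, T b, int n)
    {
        T h = (b - a) / T(n);
        T sum = (f(a) + f(b)) / T(2);
        for (int i = 1; i < n; ++i)
            sum += f(a + T(i) * h);
        return sum * h;
    }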

   Next, some things that are natural in C++ implementations and very
difficult or impossible in other libraries (again, not a complete list,
just the ones I am thinking of at the moment):

*Policy-based design
   There are many different methods for finding the roots of a known
function. Some require different inputs from the user, and so are
sensibly distinct members of a code library. Others, however, take the
same inputs but do different things with them while converging on a
root. These look to me like different policies for a single, more
generic root finder (a sketch follows below). More generally, many
operations a numeric library should address seem to me to decompose
sensibly into orthogonal policies.
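   A minimal sketch of the idea (the policy names and driver here are
mine, not anything existing in Boost):

    // The Policy supplies the step rule; the driver owns the
    // bracketing loop the two methods share.
    struct bisection {
        template <typename F>
        static double step(F, double lo, double hi)
        { return 0.5 * (lo + hi); }
    };

    struct false_position {
        template <typename F>
        static double step(F f, double lo, double hi)
        { return (lo * f(hi) - hi * f(lo)) / (f(hi) - f(lo)); }
    };

    // max_iter matters: false position can leave one endpoint fixed,
    // so the bracket width alone is not a safe stopping test.
    template <typename Policy, typename F>
    double find_root(F f, double lo, double hi, double tol, int max_iter)
    {
        for (int i = 0; i < max_iter && hi - lo > tol; ++i) {
            double mid = Policy::step(f, lo, hi);
            if (f(lo) * f(mid) <= 0.0) hi = mid; else lo = mid;
        }
        return 0.5 * (lo + hi);
    }

    // Same inputs, different behavior:
    //   find_root<bisection>(f, 0.0, 2.0, 1e-12, 200);
    //   find_root<false_position>(f, 0.0, 2.0, 1e-12, 200);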

*Metaprogramming
   A utility I would love to have for some of the things I have done in
the past is a good, general tensor library: one that allows compile-time
selection of the number of covariant and contravariant indices, raising
and lowering operations, index contraction, a wide variety of element
types (at least double, complex and functions returning doubles or
complexes), and all the other basic tools of tensor manipulation that
numeric work might need. If I ever get the time, I will try to implement
this using the Boost preprocessor and metaprogramming libraries to keep
the size of the code base sensible.
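   To make the idea concrete, a tiny interface sketch; nothing here
exists yet, the names are mine, and contract() is declaration only:

    // Compile-time integer power: Base raised to Exp.
    template <int Base, int Exp>
    struct static_pow
    { enum { value = Base * static_pow<Base, Exp - 1>::value }; };
    template <int Base>
    struct static_pow<Base, 0> { enum { value = 1 }; };

    // The index structure is part of the type, so storage is sized at
    // compile time and ill-formed tensor expressions fail to compile.
    template <typename T, int Contra, int Cov, int Dim = 4>
    class tensor {
        T data_[static_pow<Dim, Contra + Cov>::value];
        // raising/lowering, element access, etc. would go here
    };

    // Contraction pairs one upper index with one lower index,
    // removing one of each from the result type (the loop over the
    // shared index is routine and omitted).
    template <typename T, int Contra, int Cov, int Dim>
    tensor<T, Contra - 1, Cov - 1, Dim>
    contract(const tensor<T, Contra, Cov, Dim>& t, int upper, int lower);

    typedef tensor<double, 1, 1> linear_map;    // one up, one down
    typedef tensor<double, 0, 2> metric_type;   // g_{mu nu}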

*Use of boost::interval for error checking.
   Interval could be a great library for people doing numeric work.
However, if I understand it correctly (and someone should please correct
me if I don't), using interval with C or FORTRAN libraries would be a
mess: the user would have to recreate much of the functionality by hand,
since in a complicated algorithm it is very hard to estimate what
happens to an interval except one operation at a time. With templated
C++ algorithms it comes almost for free, as the sketch below shows.
   Similar facilities to boost::interval that account for the fact that
the errors are not uniformly distributed across the interval would also
be nice. Armed with these two tools and a small (but characteristic)
data set, the user could produce conservative error intervals, and more
likely accurate error distributions, from the very same code used to
analyze the full data set.
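   A sketch of how this pays off once an algorithm is templated on its
number type (assuming boost::numeric::interval's default policies suit
the platform):

    #include <boost/numeric/interval.hpp>

    typedef boost::numeric::interval<double> I;

    // The same code runs on plain doubles or on intervals; with
    // intervals, the result is a guaranteed enclosure of the true
    // value, with no hand bookkeeping of the error propagation.
    template <typename T>
    T horner(const T* coeff, int n, T x)   // coeff[0] + coeff[1]*x + ...
    {
        T r = coeff[n - 1];
        for (int i = n - 2; i >= 0; --i)
            r = r * x + coeff[i];
        return r;
    }

    const double cd[] = { -2.0, 0.0, 1.0 };        // x^2 - 2
    const double plain = horner(cd, 3, 1.4142);    // one number, no bounds

    const I ci[] = { I(-2.0), I(0.0), I(1.0) };
    const I bounds = horner(ci, 3, I(1.4142, 1.4143));
    // lower(bounds) and upper(bounds) bracket every possible answer.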

   I'm sure none of these reasons is beyond dispute. I'm also sure there
are good reasons for building C++ libraries that I have left out, and
good reasons against doing so that have not yet been brought up in this
thread. As it stands, however, I am very enthusiastic about this project
and I hope to contribute to its development.

                        John Phillips

