
Boost : 
From: Eric Ford (eford_at_[hidden])
Date: 2001-09-07 15:57:42
> Can this possibly be efficient, what with all that
> construction/destruction going on? Maybe it is and I'm worried about
> nothing. If it isn't, maybe it can be made so by appropriate static
> constructions at start time of "standard" checkers. Building some more
> complex examples and comparing space/time use to similarly coded bare
> functions would be a good idea.
It is not presently efficient. However, I believe it could be.
For example, presently I'm passing things by value. The "checkers"
should probably be passed by const references. (The reason I didn't
do this yet is that it should probably be done using call_traits, but
I don't want to deal with that until bigger issues are decided.)
Another way to improve the efficiency is to use inline static member
functions when possible. For example, if you don't want domain
checking, your domain checker can have no member variables and an
inline static member function that returns true. Then it would never
need to be constructed.
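A minimal sketch of such a no-op checker (the names here are hypothetical, just to illustrate the shape):

```cpp
#include <cassert>

// Hypothetical sketch: a domain checker with no member variables and an
// inline static member function.  An empty class like this never needs
// to be constructed, and a call to it should optimize away entirely.
struct no_domain_check {
    template <typename Arg>
    static bool in_domain(const Arg&) { return true; }  // accept everything
};

// For contrast, a checker that actually restricts the domain.
struct positive_domain_check {
    static bool in_domain(double x) { return x > 0.0; }
};
```

A function class parameterized on the checker type could then call `Checker::in_domain(x)` without ever holding a checker object.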
One thing I'm not sure about... If I have a class with only const
member variables and const member functions, can the member variables
be optimized out? This is desirable in some accuracy checkers, so
the user can specify an accuracy requirement at compile time without
performance penalty.
Of course all of this assumes intelligent compilers. Is there another
performance concern I'm missing?
> The examples are of implementing some pretty silly functions (this is
> not a complaint, since they aren't bad examples, just pretty trivial).
> I would just like to see some examples of how this works out in more
> elaborate settings. What about, for example, multiple argument
> functions; does this interface still work well? What about more
> elaborate functions like, say, the bessel functions? What about
> standard functions that take reals and integers as arguments (think the
> standard polynomials of mathematical physics)... does the interface
> expand appropriately? Probably, but I'm not sure how horrible it gets,
> or how many special cases you need.
Sure, I just wanted to concentrate on the interface for now. Here are
examples of how some functions like you suggest might be called with
the default accuracy checker, domain checker, and error handler.
// multiple arguments
typedef complex<double> complex_d;
divide<complex_d, pair<complex_d,complex_d> > divide_complex;
complex_d a(2.,3.), b(4.,5.);
complex_d c = divide_complex(make_pair(a,b));
// simple bessel functions of first kind
bessel_J<int,double> J0(0);
double x = 1.;
double y = J0(x);
// or
double y = bessel_J<int,double>(0)(x);
// or
bessel_Jn<pair<double,int> > Jn;
double y = Jn(make_pair(x,0));
//or
double y = bessel_Jn<pair<double,int> >()(make_pair(x,0));
// Legendre polynomial
int l = 1;
double x = 0.5;
legendre_polynomial<int,double> P_l(l);
double y = P_l(x);
// or
legendre_polynomial<pair<double,int> > P;
double y = P(make_pair(x,l));
By "or" I mean that I think multiple ways should be supported. The
choice of what's an argument and what's a parameter is more than just
semantic. While it's fine to call a function repeatedly with different
argument values, a parameter is something that doesn't change too often
from call to call. The advantage of having both is that the algorithm
can then be chosen to match the expected calling patterns
(e.g. a few random calls versus a series expansion in l).
Do those instantiations look too unreasonable?
The make_pair/make_tuple calls are a little ugly. Any better ideas?
In any case, I don't think it's too bad.
One concern this did make me think of... Consider the following:
// Associated Legendre polynomial
const int l = 1, m = 1;
double x = 0.5;
legendre_polynomial<pair<int,int>,double> P_lm(l,m);
double y = P_lm(x);
// or
legendre_polynomial<int,pair<double,int> > P_m(m);
double y = P_m(make_pair(x,l));
// or
legendre_polynomial<tuple<double,int,int> > P;
double y = P(make_tuple(x,l,m));
This illustrates the desirability of several different choices of
what's an argument and what's a parameter. For example, if you're going to
need to evaluate one particular P_lm over and over again for different
values, then the first method would be able to choose an appropriate
algorithm, probably caching coefficients, or even calculating them at
compile time. If you're going to do a series expansion in l, then the
second would make sense and a recurrence relation in l could be
chosen. If you're only going to make one or two calls, then the third
would probably be best.
One issue that this example does expose: what if I want to have a
function with l as a parameter and m as an argument (e.g. to calculate
a series with increasing m's)? How should that be distinguished from a
series with increasing l's?
One possibility is to use template parameters like make_pair(l,false)
or make_pair(m,true) (or use an enum similarly). This is a little
messy, so providing users a better way to call it would probably be
desirable, e.g.
legendre_polynomial_arg_m<int, pair<double,int> > P_m(m);
double y = P_m(make_pair(x,l));
and
legendre_polynomial_arg_l<int,pair<double,int> > P_l(l);
double y = P_l(make_pair(x,m));
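Another way to make that distinction explicit, instead of the make_pair(l,false) trick, might be a tag type as a template parameter. A toy sketch (the tag names and the computed function are purely illustrative, not a real Legendre implementation):

```cpp
#include <cassert>

// Tags selecting which index is the run-time argument.
struct arg_is_l {};
struct arg_is_m {};

template <typename Tag>
class toy_legendre;  // primary template, specialized per tag

// l is the parameter (fixed at construction), m is the argument.
template <>
class toy_legendre<arg_is_m> {
public:
    explicit toy_legendre(int l) : l_(l) {}
    int operator()(int m) const { return l_ * 100 + m; }  // toy formula
private:
    int l_;
};

// m is the parameter, l is the argument.
template <>
class toy_legendre<arg_is_l> {
public:
    explicit toy_legendre(int m) : m_(m) {}
    int operator()(int l) const { return l * 100 + m_; }  // same toy formula
private:
    int m_;
};
```

Each specialization could then pick the recurrence appropriate to its expected calling pattern.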
> Can we hide the complexity from USERS of the interface? Can one
> provide efficient implementations of a more standard function call
> interface with no checking on top of this class interface? Can that
> function interface be made efficient in both space and time (such that a
> library won't have to construct thousands of small classes at start time
> to avoid construction/destruction at run time, and will run quickly).
> Most of the time, you will want to just say BesselJ(x, n) where you need
> it, rather than constructing an instance of BesselJ<double, double,
> int, ......> that you then call... or maybe you won't.
I would think so. I think the above examples illustrated that. But
it might be difficult to standardize the shorthand. Even if we do
want to try to do that, I think the shorthand style discussions would
probably come later. Again, if there's no checking, then
constructions can be avoided by using checkers with no member
variables and static member functions.
I guess that I'm so used to constructing functions, I don't think much
of it. Is it really that undesirable? It's fine to do something like
y = bessel_J<int,double>(0)(x); so long as you only use it occasionally
or use it with no checking/error handling.
I suppose we could also provide template functions that basically
instantiate the relevant class and return the result. But I'd be a
little worried that might have a performance cost. Even if those
functions had no checking, it might encourage users not to choose the
most efficient algorithm.
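For what it's worth, such a forwarding function might look like this; the functor (a trivial stand-in, not a real bessel implementation) and the free function name are both hypothetical:

```cpp
#include <cassert>
#include <cmath>

// Hypothetical functor in the style discussed: parameter n is fixed at
// construction, argument x is supplied per call.
template <typename Param, typename Arg>
struct power_fn {  // stand-in for something like bessel_J
    typedef Arg return_type;
    typedef Arg arg_type;
    explicit power_fn(Param n) : n_(n) {}
    return_type operator()(const arg_type x) const { return std::pow(x, n_); }
private:
    Param n_;
};

// The convenience free function: instantiate the class, call it once,
// return the result.  With inlining this should cost little or nothing,
// but it does hide the algorithm choice from the caller.
template <typename Param, typename Arg>
Arg power(Param n, Arg x) {
    return power_fn<Param, Arg>(n)(x);
}
```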
> Can the functions already in the standard library be reimplemented to
> fit the interface without breaking legacy code? (again, this would
> require a more conventional function interface with no checking)
I'm not entirely sure if I'm understanding you correctly. But I think
we'd probably want to reimplement most of those, possibly using the
standard library functions for some parameter values (e.g. low
accuracy).
Are you worried about name conflicts? I would think that could be
resolved by using namespaces.
> Since I keep asking about a more standard function call interface,
> can one do away with the class interface entirely? Can it be made a
> completely template function based interface, or does the interface NEED
> to be a class based interface for language reasons? I'm not sure what
> all the pros and cons of both types of interfaces are... certainly, for
> passing "functions" to "algorithms" (think integrators, or
> interpolators, etc....), a class based approach is more useful, since
> you would otherwise end up writing little classes to wrap functions.
> But in most other cases, you simply want to call the function without
> constructing objects all over the place. One thing I'm worried about is
> that using a class interface for functions will scare off a large number
> of potential users; but you might be willing to lose a few users for
> demonstrable benefits. Are those benefits there?
I believe the benefits are there. As you mentioned, classes make it
easier to pass a (mathematical) function to an algorithm or another
function. Also, classes allow functions to save state information
without using statics (which are quite messy with templatized
functions). State could be as simple as function parameters, making
it easier to pass a function to an algorithmic function. Or, since we
often call one function over and over again with similar arguments, I
could imagine some algorithms using information from the previous call
to speed things up.
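A tiny illustration of that last point, with a deliberately trivial function (this caching functor is purely hypothetical):

```cpp
#include <cassert>
#include <cmath>

// Hypothetical sketch: a functor that remembers its previous call and
// reuses the result when the same argument comes in again.
class cached_sqrt {
public:
    typedef double return_type;
    typedef double arg_type;
    cached_sqrt() : have_last_(false), last_arg_(0.0), last_result_(0.0) {}
    return_type operator()(const arg_type x) const {
        if (have_last_ && x == last_arg_)
            return last_result_;          // reuse state from the last call
        last_arg_ = x;
        last_result_ = std::sqrt(x);
        have_last_ = true;
        return last_result_;
    }
private:
    mutable bool have_last_;   // mutable: the cache is not observable state
    mutable arg_type last_arg_;
    mutable return_type last_result_;
};
```

For a real special function the cached state would be recurrence coefficients rather than a single value, but the mechanism is the same.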
If you're only using a function once or twice, it won't hurt to do
something like
y = bessel_J<int,double>(2)(x);
If you're going to use a function enough to be worried about speed,
then I don't think it's so bad to have to do something like
bessel_J<int,double> j2(2);
for(int i=0;i<i_max;++i) y[i] = j2(i/(double)(i_max));
> You mention using tuple for multiple returns... I thought of that
> myself, but how efficient is tuple? Would array<> or something similar
> work better? Would this interface be yet another good example of how
> useful a fundamental tuple type would be? Pulling items out of tuples
> seems to require construction of a number of temporaries... is the speed
> hit sufficiently small to be a nonissue? What about array<>'s?
Well, array<>'s are another possibility. Obviously, they can only
hold one type, so they wouldn't be appropriate for
arguments/parameters of differing types. Although, there you could
have a tuple containing an array. I think that would probably do more
for the sake of clarity (if there were a lot of parameters that went
together in some logical way) than for speed. Maybe I'm mistaken, but
I thought tuple was quite efficient at runtime, although a little
costly in compile time.
> Thinking about passing these things into algorithms, can those
> algorithms be built on top of this interface effectively? I think this
> is an important consideration for any mathematical function interface...
> after all, calling a function is important, but to be useful for real
> work, you need to be able to feed them to generic algorithms easily and
> efficiently. It would be nice if you could build a function interface
> that would make it easy to build a generic, arbitrary argument, monte
> carlo integrator, for example. Can that be done here, or do we need
> something more? or less?
I think so. As long as the function provides typedefs for return_type
and arg_type and a function
return_type operator()(const arg_type) const { ... };
then it seems rather straightforward to drop into an
algorithm. Of course, your generic monte carlo integrator also needs
to know how to generate arg_type's over your region of integration,
but I think that should be a separate argument to your monte carlo
integrator. Am I missing something here?
(BTW This is another argument for using classes, since parameters can
be dealt with at construction time, so the algorithm function doesn't
need to know anything about them.)
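To make that concrete, here is a sketch of a generic averaging integrator that relies only on those typedefs and operator(); the function object, the point generator, and all names are illustrative (a deterministic grid stands in for a random sampler so the result is reproducible):

```cpp
#include <cassert>
#include <cmath>

// An example function object exposing the agreed-upon interface.
struct square_fn {
    typedef double return_type;
    typedef double arg_type;
    return_type operator()(const arg_type x) const { return x * x; }
};

// A deterministic stand-in for the point generator, passed separately
// to the algorithm: midpoints of n equal subintervals of [0,1].
struct unit_grid {
    explicit unit_grid(int n) : n_(n), i_(0) {}
    double operator()() { return (i_++ + 0.5) / n_; }
private:
    int n_, i_;
};

// The generic integrator: it knows nothing about the function's
// parameters, only its typedefs and call operator.
template <typename F, typename Gen>
typename F::return_type integrate(const F& f, Gen gen, int n) {
    typename F::return_type sum = 0;
    for (int i = 0; i < n; ++i)
        sum += f(gen());
    return sum / n;
}
```

Integrating x^2 over [0,1] this way converges to 1/3, and any functor with the right typedefs, its parameters already bound at construction, drops straight in.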
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk