
From: Michael Kenniston (Msk_at_[hidden])
Date: 2001-07-19 08:16:53


Paul A. Bristow wrote:

> I am concerned, as I have been all along, about
> the implications of having many (say 100-ish) constants.
> I feel there are advantages to having this many
> (for example, if you look at Knuth, or at J F Hart's Computer Approximations,
> a classic on polynomial methods for functions from sqrt to Bessel,
> they quote and use over 50 constants).

> But if the common use is just to get pi and e,
> all this may be overkill, and, worse, have a cost in compile time,
> link size, or worst of all, code bloat.

> Any ideas or views on this aspect?

Include them all. Just to pick some arbitrary round numbers, if
there are 100 possible constants, and an average user needs ten
of them, and you only implement 90 of them, then about two thirds
of the potential users will find your library deficient, i.e.
missing at least one thing that they need. (Assuming everything is
random, which may be a reasonable way to model many software
projects. :-) This is sort of an inverse 80/20 rule, where 80% of
the utility is in the last 20% of the effort. Boosters especially
know how annoying it is to use a tool, a compiler for instance, that
implements 98% of the standard. :-(
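
A quick sanity check on that figure, assuming the ten needed constants
are drawn uniformly at random from the hundred possible ones:

    #include <iostream>

    int main()
    {
        // P( all ten needed constants are among the ninety implemented )
        double p_satisfied = 1.0;
        for ( int i = 0; i < 10; ++i )
            p_satisfied *= ( 90.0 - i ) / ( 100.0 - i );

        std::cout << "fully satisfied:   " << p_satisfied          // ~0.33
                  << "\nmissing something: " << 1.0 - p_satisfied  // ~0.67
                  << std::endl;
        return 0;
    }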

If the constants are placed in a namespace instead of in a struct,
then you can have multiple header files:

    #include "boost/math/common_constants.hpp"
    #include "boost/math/uncommon_constants.hpp"
    #include "boost/math/really_rare_constants.hpp"

and so on. Users can then include only the one(s) they really need.
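
For instance (the file contents below are purely illustrative, using
plain double values rather than whatever constant<> form we settle on),
each header just reopens the same namespace:

    // boost/math/common_constants.hpp (illustrative only)
    namespace boost { namespace math {
        double const pi = 3.14159265358979323846;
    }}

    // boost/math/uncommon_constants.hpp (illustrative only)
    namespace boost { namespace math {
        double const euler_gamma = 0.57721566490153286061;
    }}

so a translation unit pays only for the headers it actually includes.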

This is orthogonal to the issue of whether users should be adding
their own constants to the boost::math namespace -- I agree that
they should not be adding anything to any boost namespace (except
perhaps specializations and overloads), but regardless of what you
do with the math constants, the namespace restriction simply has to
be documented as there is no way for us to enforce it.

Before going to the trouble of splitting things up, though, it's
probably worth running some tests to see if it actually saves enough
compile time to be worth the effort and added complexity. I don't
see space being a problem; even if you have 128 constants at 16 bytes
each, that's noise on most machines, and the few folks who are so
tight that they actually care about 2 KiB are going to cut-and-paste
exactly the constants they need anyway. And now that I think of it,
the solution I proposed shouldn't use any space at all for unused
constants.
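
To be concrete about why: in the tag-plus-constant<> scheme from earlier
in the thread (the details below are just a reconstruction, not the
actual proposal), the value lives in a specialized conversion operator
rather than in a data member, so each constant object is empty and an
unused one adds nothing:

    struct pi_tag {};

    template< typename Rep, typename Tag >
    struct constant
    {
        constant() {}            // empty; see the initialization question below
        operator Rep() const;    // specialized per (Rep, Tag) pair
    };

    // The value appears only here, in the specialization:
    template<>
    constant< double, pi_tag >::operator double() const
    {
        return 3.14159265358979323846;
    }

    namespace double_constants
    {
        constant< double, pi_tag > const pi;
    }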

Greg Chicares wrote:

> Does this solve the initialization order problem? Consider

> > namespace double_constants
> > {
> > constant< double, pi_tag > const pi;
> > constant< double, e_tag > const e;
> > }

> The 'constant' struct has a nontrivial constructor, so the lifetime of
> 'pi' begins when its constructor finishes. If we add at global
> scope in a different source file

> double global_pi_squared = std::pow(double_constants::pi, 2);

> and the compiler chooses to initialize that before 'pi', then 'pi' is
> converted to an rvalue, but isn't that undefined behavior?

I'm going to have to spend some time with my reference books on this
one.
I think the standard may guarantee that pi will be zero-initialized
before any use, even if the constructor hasn't run yet. For our
purposes that's good enough since the constructor doesn't do anything
anyway, but you may be right about this constituting undefined behavior.

There must be some way to patch this up so it's strictly correct.
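
(One conventional patch, offered purely as illustration rather than as
the proposed design: hand the value out from a function, so it is
created on first use and cross-file initialization order stops
mattering.)

    #include <cmath>

    namespace double_constants
    {
        // Function-local static: initialized the first time pi() is
        // called, so it is ready even during dynamic initialization of
        // another file's globals.
        inline const double & pi()
        {
            static const double value = 3.14159265358979323846;
            return value;
        }
    }

    // Safe regardless of which translation unit initializes first:
    double global_pi_squared = std::pow( double_constants::pi(), 2.0 );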

> Can that be fixed by removing the user-defined constructor from
> 'constant' and the const-qualifier from 'pi'? We don't care whether
> 'pi' is constant as long as its operator Rep() is.

Well, I declared pi "const" to try to express intent, and so that if
anyone did something silly like:

    pi = e;

the compiler would catch it and give some half-way reasonable diagnostic.
Maybe that isn't really needed, though, since the statement above would
already be rejected as a type mismatch. The default constructor was there
to make a g++ warning go away, and that warning would disappear by itself
without the "const".

However, I'm not sure whether having operator Rep() const but having
pi non-const actually gives enough information to the optimizer to get
the intended results, and somehow it doesn't seem right for something
that is clearly a constant not to be declared "const".

> If you write 'template<>' before the four lines like this one:

Oops, my mistake. When fixed as you specified, it still works fine
under MSVC too.

> Also suggest removing unused argument 'dummy' from
> void show_all( const T & dummy, const char * label )

That's a workaround for a nasty compiler bug. Dropping the dummy
argument and invoking the specialization explicitly with

    show_all< double >( "double" );

causes MSVC6 to silently call the wrong specialization. It's only in
the demonstration example though, so it doesn't affect the actual
library.
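
For anyone who runs into the same bug, the shape of it is roughly this
(sketch only, with made-up bodies):

    #include <iostream>

    template< typename T >
    void show_all( const T & dummy, const char * label );

    // Per-type specializations, as in the demonstration program
    // (the bodies here are invented):
    template<>
    void show_all< double >( const double &, const char * label )
    {
        std::cout << "constants as double, labelled " << label << '\n';
    }

    template<>
    void show_all< float >( const float &, const char * label )
    {
        std::cout << "constants as float, labelled " << label << '\n';
    }

    int main()
    {
        // Deducing T from the dummy sidesteps the bug; the explicit
        // show_all< double >( "double" ) form (with the dummy removed)
        // is the one MSVC6 resolves to the wrong specialization.
        show_all( double(), "double" );
        show_all( float(), "float" );
        return 0;
    }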

> Specifying the value once with maximal precision and relying on standard
> floating-point conversions is an attractive idea. But the value you get
> can depend on the hardware direction, at least on an 80x87, so it can
> change during program execution.

Good point, and it reinforces my gut feeling that in this context
automatic conversions are misguided. I only included the example of
how to do it to show that the question "Is the constant<> class a
reasonable solution?" is independent of the question "Should new
representation types get default values?"

--
- Michael Kenniston
  mkenniston_at_[hidden]
  msk_at_[hidden]     http://www.xnet.com/~msk/
