From: Matthias Schabel (boost_at_[hidden])
Date: 2007-01-22 15:38:51
Hi Noah,
> I think it is a definite must that any unit library needs to be at least
> extensible to support the two problems I described. I personally don't
> see much use in being able to convert, statically, between two disparate
> unit systems. I don't know of any project that would do this. Perhaps
Boost Units efforts have a long and storied history; it is quite illuminating to go through the archives of this mailing list and read, in particular, the discussion and reviews of Andy Little's library (searching on [PQS] and [Quan] in the subject line will get most of them for you...). Furthermore, this discussion goes much further back, starting with Walter Brown's SI Units library and the Barton and Nackman text.

While I understand the desire and, perhaps, even the need for a runtime unit system, if you are in the majority in wanting this, it has been a relatively silent majority. While you may not be able to envision applications for a zero-overhead compile-time unit library, there are many physicists, engineers, and others out there who need precisely that. For many of these potential users, catching a single dimensional error at compile time can save many painful hours of debugging. In many, if not most, applications in scientific and high-performance computing, the amount of acceptable overhead incurred is exactly zero, a goal that is impossible to achieve by any implementation of runtime unit checking. Since this is the application domain in which I am knowledgeable, that's been my focus.
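To make that concrete, here is a small sketch of the kind of error that gets caught. The quantity<SI::...> spelling follows the examples further down; the unit constant names (SI::meters, SI::seconds) are just illustrative and may differ from the library's final spellings:

// Illustrative only; the unit constant names are assumptions.
quantity<SI::length> d = 2.0 * SI::meters;    // a distance
quantity<SI::time>   t = 0.5 * SI::seconds;   // a duration

quantity<SI::velocity> v = d / t;             // dimensions deduced at compile time

// quantity<SI::length> oops = d + t;         // dimensional error: does not compile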
> have to use this system. The primary goal of a boost units library
> should be to support safe unit conversions of user defined units in a
> way that ensures that conversions are safe and inexpensive (as in not
> done more than once per assignment) and the primary use of this will
> be during runtime.
This could be the primary goal of a boost runtime units library. It is not the objective of the library I've proposed here. I understand that there is a potential user community for a runtime units system, and would fully support (your?) efforts to implement such a thing and have it incorporated into Boost as a complement to the compile-time units library. But, as I'm sure Andy Little would tell you, there are a huge number of complex decisions to be made in such an undertaking.
> I was hoping that your library could be used as a base for such a
> runtime solution but you make it sound like more trouble than worth. I
> will probably continue to look at ways to work this in, especially if
> you get accepted, but since I already have a very simple answer to the
> problem it may be placed on the back burner. It would be nice to be
I would be more than happy to help you understand the current library implementation, and suggest ways of reimplementing the dimensional analysis functionality at runtime. In principle, this should be relatively straightforward and, as I mentioned in a previous post, it would be possible to simply specialize the unit and quantity classes for runtime support. Of course, this still leaves a significant amount of work in developing an efficient runtime system, implementing all the operators correctly, settling on a syntax for unit construction, IO, internationalization, etc...
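To give a flavor of what I mean by runtime dimensional analysis, here is a rough sketch of the bookkeeping involved. The names are purely illustrative; nothing here is part of the proposed library:

#include <cstddef>
#include <stdexcept>
#include <vector>

// Hypothetical sketch only. A runtime quantity carries its dimension as a
// vector of integer exponents, one per base dimension (length, mass, time,
// ...), which must be checked whenever quantities are combined.
struct runtime_quantity
{
    double           value;
    std::vector<int> exponents;
};

// Multiplication always succeeds; the exponents simply add
// (assuming both operands carry the same number of base dimensions).
runtime_quantity operator*(const runtime_quantity& a, const runtime_quantity& b)
{
    runtime_quantity r;
    r.value = a.value * b.value;
    r.exponents.resize(a.exponents.size());
    for (std::size_t i = 0; i < a.exponents.size(); ++i)
        r.exponents[i] = a.exponents[i] + b.exponents[i];
    return r;
}

// Addition must check the dimensions at runtime - this per-operation check
// is exactly the overhead that a compile-time library eliminates.
runtime_quantity operator+(const runtime_quantity& a, const runtime_quantity& b)
{
    if (a.exponents != b.exponents)
        throw std::logic_error("dimension mismatch in addition");
    runtime_quantity r = a;
    r.value += b.value;
    return r;
}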
> positive that units share the same base system and provide a general
> solution when they might not but I really think, practically speaking,
> that the likelihood of someone needing to have two different static
> base systems is next to nil.
If you want to write a generic library that implements basic formulas for electromagnetism, for which the equations themselves differ depending on whether you choose to use SI or one of the several CGS variant electromagnetic units (esu/emu/gaussian), it is impossible to get compile-time overloading with runtime units, so you would have to check the units at each function invocation. While this is fine for toy programs or interactive unit conversion calculators, in a simulation code where this function might be invoked millions of times, the overhead quickly becomes unacceptable. Furthermore, this is quite inelegant - if this function calls another one using units, the runtime checking will be replicated at each layer, adding further inefficiency. Similarly, any function that takes runtime units as arguments will need to check them for validity before doing anything. This can rapidly become a significant fraction of the total execution time for something simple like electrostatic force - compare:
vector< quantity<runtime> >
electrostatic_force(const quantity<runtime>& Q1,
                    const quantity<runtime>& Q2,
                    const vector< quantity<runtime> >& r)
{
    assert(Q1 == SI_runtime_charge);
    assert(Q2 == SI_runtime_charge);
    for (int i=0;i<3;++i)
        assert(r[i] == SI_runtime_length);

    using namespace boost::units::SI::constants;

    const vector< quantity<runtime> > ret =
        Q1*Q2*unit_vector(r)/(4*pi*epsilon_0*dot(r,r));

    for (int i=0;i<3;++i)
        assert(ret[i] == SI_runtime_force);

    return ret;
}
with
vector< quantity<SI::force> >
electrostatic_force(const quantity<SI::charge>& Q1,
                    const quantity<SI::charge>& Q2,
                    const vector< quantity<SI::length> >& r)
{
    using namespace boost::units::SI::constants;

    return Q1*Q2*unit_vector(r)/(4*pi*epsilon_0*dot(r,r));
}
Which one of these is more self-documenting? More runtime efficient? Now imagine you want to be able to do this in CGS electrostatic units. Here we go (note that the equation is different):
vector< quantity<runtime> >
electrostatic_force(const quantity<runtime>& Q1,
                    const quantity<runtime>& Q2,
                    const vector< quantity<runtime> >& r)
{
    if (unit_system(Q1) == SI &&
        unit_system(Q2) == SI &&
        unit_system(r[0]) == SI &&
        unit_system(r[1]) == SI &&
        unit_system(r[2]) == SI)
    {
        ... as above ...
    }

    if (unit_system(Q1) == CGS &&
        unit_system(Q2) == CGS &&
        unit_system(r[0]) == CGS &&
        unit_system(r[1]) == CGS &&
        unit_system(r[2]) == CGS)
    {
        assert(Q1 == CGS_runtime_charge);
        assert(Q2 == CGS_runtime_charge);
        for (int i=0;i<3;++i)
            assert(r[i] == CGS_runtime_length);

        const vector< quantity<runtime> > ret =
            Q1*Q2*unit_vector(r)/dot(r,r);

        for (int i=0;i<3;++i)
            assert(ret[i] == CGS_runtime_force);

        return ret;
    }
}
That's ugly and slow... For compile-time units:
vector< quantity<CGS::force> >
electrostatic_force(const quantity<CGS::charge>& Q1,
                    const quantity<CGS::charge>& Q2,
                    const vector< quantity<CGS::length> >& r)
{
    return Q1*Q2*unit_vector(r)/dot(r,r);
}
No mess. No fuss. No overhead.
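For what it's worth, here is how a caller would see the compile-time version. The unit constant names and construction syntax below are illustrative assumptions, not necessarily the library's final spellings:

// Illustrative usage only; SI::coulombs and the construction syntax are assumptions.
quantity<SI::charge>           q1 = 1.0e-6 * SI::coulombs;
quantity<SI::charge>           q2 = 2.0e-6 * SI::coulombs;
vector< quantity<SI::length> > r;  // separation vector, filled in elsewhere

// Overload resolution picks the SI version, with the 1/(4 pi epsilon_0)
// factor compiled in; CGS arguments would select the CGS overload with the
// Gaussian form of the equation, and mixing the two systems in a single
// call simply fails to compile.
vector< quantity<SI::force> > f = electrostatic_force(q1, q2, r);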
> You have a great library, something that might be a great backbone for a
> runtime units system and provide an extra level of safety, but I think
> the primary use to most people of a unit library is going to be runtime
> units; you either need to support this directly or better document the
> methods to use your library for such purposes. 99.99% of the time users
> are going to stick with a single system, usually the SI system, as their
I guess I'll take this as a mixed compliment: a great library for 0.01% of users... sigh...
Matthias
----------------------------------------------------------------
Matthias Schabel, Ph.D.
Assistant Professor, Department of Radiology
Utah Center for Advanced Imaging Research
729 Arapeen Drive
Salt Lake City, UT 84108
801-587-9413 (work)
801-585-3592 (fax)
801-706-5760 (cell)
801-484-0811 (home)
matthias dot schabel at hsc dot utah dot edu
----------------------------------------------------------------