Boost Users :
Subject: Re: [Boost-users] [units] Automatic Physical Units in Mathematica and Boost.Units
From: Alfredo Correa (alfredo.correa_at_[hidden])
Date: 2010-12-24 00:52:16
On Thu, Dec 23, 2010 at 8:13 PM, Matthias Schabel
<boost_at_[hidden]> wrote:
> I believe there are some users who have
> applied Boost.Units to vectors in varying coordinate systems as well.
Thank you, I would like to see what they did, for example whether the
units are built into the vector types or only into the metric of the
vector space. I hope to get feedback in this respect.
>> Are there future planned developments for Boost.Units, in terms of
>> external features (e.g. automatic conversion, better interaction with
>> boost.phoenix) or internal implementation?
>
> Automatic conversion is a much-requested and highly contentious issue that
> we intentionally did not support in the library, making the decision after much
> careful consideration. It is difficult to do it correctly - in the situation where you
> have, e.g., mixed meters and feet :
>
> 1.5*meters + 3.7*feet
>
> either there must be a specified default set of units to which everything is converted
> or an arbitrary decision must be made by the compiler. While it probably doesn't
> matter too much in this case, and for MP numeric types it shouldn't matter at all
> (excepting execution time), there are possible unit combinations where truncation/
> round-off could become a problem. There is no obvious way to determine this at
> compile-time.
Precisely, there is no obvious way to do automatic conversions.
> Why not just explicitly specify your default units of choice in your code -
> this is self-documenting and should be consistent and correct.
> Do you have a use
> case where the requirement of explicit unit specification is excessively problematic?
I believe I often need a sort of "automatic conversion" because I am
writing code for colleagues. The implementation is written in certain
commonly used units, but the formulas are expected to have natural
expressions, without horrible casts and intermediate conversion
variables. This is an example of a formula directly copied from a
paper:
// V and V0 given in Angstrom^3
power_typeof_helper< quantity<nonsi::angstrom_unit>,
                     static_rational<3> >::type V, V0;
double B1;
// B0 given in GPa
quantity< make_scaled_unit<si::pressure,
                           scale<10, static_rational<9> > >::type > B0;
// phi0 given in electron volts
quantity<nonsi::electron_volt_unit> phi0;

double X = 3./2.*(B1 - 1.)*(root<3>(V/V0) - 1.);
4.*V0*B0/pow<2>(B1 - 1.)*(1. - (1. + X)*exp(-X)) + phi0;
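For comparison, to make this compile today I end up writing something
like the following sketch, with explicit conversions and intermediate
variables (assuming, as in my mini-system, that the angstrom and
electron-volt units define conversion factors to SI):

#include <cmath>
#include <boost/units/quantity.hpp>
#include <boost/units/pow.hpp>
#include <boost/units/systems/si/volume.hpp>
#include <boost/units/systems/si/pressure.hpp>
#include <boost/units/systems/si/energy.hpp>
using namespace boost::units;

// push every input into SI first
quantity<si::volume>   V_si(V), V0_si(V0);   // Angstrom^3 -> m^3
quantity<si::pressure> B0_si(B0);            // GPa        -> Pa
quantity<si::energy>   phi0_si(phi0);        // eV         -> J

double X = 3./2.*(B1 - 1.)*(root<3>(V_si/V0_si).value() - 1.);
quantity<si::energy> E =
    4.*V0_si*B0_si/((B1 - 1.)*(B1 - 1.))*(1. - (1. + X)*std::exp(-X)) + phi0_si;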
As I said, most of the automatic conversion that I need is tied to
formulas; that is, the conversion generally needs to occur in the
middle of expression evaluation. Of course there are only four
operators that need conversion: +, -, *, /. I need automatic conversion
in each case for different reasons. In the case of + or - the reason
is more or less obvious: the library won't add or subtract quantities
of different systems, even if they have the same dimension.
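For instance, with the stock si and cgs systems (not my actual units,
but the same situation):

#include <boost/units/quantity.hpp>
#include <boost/units/systems/si/length.hpp>
#include <boost/units/systems/cgs/length.hpp>
using namespace boost::units;

// same dimension (length), different systems (si vs cgs): the sum only
// compiles once one operand has been converted explicitly
quantity<si::length> d =
    1.5*si::meters + quantity<si::length>(3.7*cgs::centimeters);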
For * and / I need it for a different reason: the library *can*
multiply quantities from different systems, but then it becomes very
difficult to convert the resulting quantity to a third system of
units. It seems that explicit conversion can't handle such a
conversion even though all the conversion factors are defined.
So, by adding factors in an expression one can inadvertently introduce
many different systems of units that later cannot be converted.
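For example, with the stock systems again (only to illustrate the kind
of type that shows up; my real case involves my own mini-systems):

#include <boost/units/quantity.hpp>
#include <boost/units/systems/si/length.hpp>
#include <boost/units/systems/cgs/length.hpp>
using namespace boost::units;

quantity<si::length>  a = 2.0*si::meters;
quantity<cgs::length> b = 3.0*cgs::centimeters;

// the product compiles, but its unit is a heterogeneous metre*centimetre
// combination that does not belong to any single system
multiply_typeof_helper< quantity<si::length>,
                        quantity<cgs::length> >::type ab = a*b;

It is quantities of this heterogeneous kind that I then struggle to
convert into a third, homogeneous system (for example quantity<si::area>).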
To solve the problem with + and -, what I did was to define operator+
and operator- that perform automatic conversion, but only inside a
namespace:
namespace boost { namespace units {
namespace auto_conversion_operators { namespace to_left {

quantity<... System1> operator+( quantity<... System1>,
                                 quantity<... System2> ) { ... }

}}}}
and the mirror of it,

namespace boost { namespace units {
namespace auto_conversion_operators { namespace to_right { ... }}}}
In this way I can activate automatic conversion by declaring

using namespace boost::units::auto_conversion_operators::to_left;

just before the demanding formula.
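Concretely, the operator in to_left has roughly the following shape;
this is only a sketch of the idea, not my actual code, and the
enable_if guard (which keeps it away from ordinary same-unit
additions) is illustrative:

#include <boost/units/quantity.hpp>
#include <boost/units/get_dimension.hpp>
#include <boost/utility/enable_if.hpp>
#include <boost/type_traits/is_same.hpp>

namespace boost { namespace units {
namespace auto_conversion_operators { namespace to_left {

// enabled only for distinct units of the same dimension, so it never
// competes with the library's own operator+ for same-unit sums
template<class Unit1, class Unit2, class Y>
typename boost::enable_if_c<
    boost::is_same<typename get_dimension<Unit1>::type,
                   typename get_dimension<Unit2>::type>::value
    && !boost::is_same<Unit1, Unit2>::value,
    quantity<Unit1, Y> >::type
operator+(const quantity<Unit1, Y>& lhs, const quantity<Unit2, Y>& rhs)
{
    // convert the right-hand operand to the left-hand operand's unit,
    // then fall back on the ordinary same-unit addition
    return lhs + quantity<Unit1, Y>(rhs);
}

}}}} // namespace boost::units::auto_conversion_operators::to_left

With that using-directive in scope, a mixed sum such as
1.5*si::meters + 3.7*cgs::centimeters then comes out in metres.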
In the same way I can imagine declaring operators that always
privilege one system of units (for example SI):

using namespace boost::units::auto_conversion_operators::to_si;

(like in the example from the Mathematica blog).
For * and / I can do the same, i.e. always give the result in the
system of the left-hand side or of the right-hand side.
By declaring a priori, inside a scope, that I want to use these
operators, I guarantee that the dimensionally correct result will be
in one definite system and not a mixture. I don't have the bitter
feeling of hidden truncation, because the truncation will occur anyway
in the unavoidable step of converting and doing the arithmetic
operation. I admit it is not completely elegant, but it gives me a
certain amount of control over the conversions without affecting the
formula syntax.
I can share the code of these auto_conversion_operators if you are interested.
>
> As far as Boost.Phoenix, what are the concerns?
The problem is that both units and phoenix seem to require a fair
amount of glue (template) code.
Mainly, one needs to translate the units::*_typeof_helper protocol to
the phoenix::result_of_* protocol; this is needed for quantities,
references to quantities, units and the other types with which
quantities can interact (e.g. double). The glue code that I managed to
add is long, very repetitive and probably still limited. I am also
hesitant to invest a lot of effort, since things seem to be different
in the upcoming Phoenix3.
I have some code that does this.
I also have some other minor code to make units work with
Boost.Accumulators and Boost.Interval, and a set of atomic and non-SI
units with their own separate mini-system.
> That being said, one
> of the beauties of the open source model is that other interested parties (you, perhaps)
> are welcome to extend the work. Going from code that works for your particular
> application to code that is Boost-ready is non-trivial, but there are quite a number
> of people who can help provide advice. Boost.Units.Phoenix would be, I'm sure, a
> great addition to the library.
I really appreciate your work. I would certainly like to contribute
these additions that I have so far, although I doubt they are of the
same code quality as the rest of the library. I am certainly not
against using the version control myself, but I would like the code to
be seen by experts before actually uploading something.
> As far as internal implementation, what are the concerns?
In the same way that I implemented these additions, I would have liked
to implement other features, like explicit conversion in cases where
three (or more) systems of units are involved. That part seemed to
require a fair amount of research into the library, in particular into
its compile-time linear algebra, static_rational and a lot of MPL. My
question about the implementation is whether the library is planned to
change in some particular way.
Thank you,
Alfredo