[units] Automatic Physical Units in Mathematica and Boost.Units

Since there is not much computer software that handles units seriously, except possibly Boost.Units and Mathematica's Units in my opinion, I just wanted to point to this post: Automatic Physical Units in Mathematica http://blog.wolfram.com/2010/12/09/automatic-physical-units-in-mathematica/ It is an interesting discussion in terms of features. One of the reasons I disliked Mathematica's unit handling so far was the lack of a typing system, which C++ and Boost.Units have naturally; the other reason was that the physical quantities didn't seem to play well with other parts of the system. Now it seems that Mathematica's Units is discovering "types".

One of the features it mentions is the ability to perform automatic conversions upon arithmetic operations on mixed units. In fact, I was working on a way to perform automatic conversion in Boost.Units inside arithmetic formulas. Since Mathematica has unbounded-precision arithmetic by default, the automatic conversions are not a real problem there; an exact match of this feature cannot be achieved with a limited-precision representation (like double).

I'll take the opportunity to ask two broad general questions:

Does anybody have experience with using boost.units quantity with an underlying type that is not double, e.g. multiple precision, or vector (geometric) objects? Does it work well?

Are there future planned developments for Boost.Units, in terms of external features (e.g. automatic conversion, better interaction with boost.phoenix) or internal implementation?

Thank you, Alfredo

Alfredo, As one-half of the Boost.Units developers' community (actually, I should also give due credit to Torsten Mähne for contributing the lambda component), I'll try to answer your questions:
Does anybody have experience with using boost.units quantity with an underlying type that is not double, e.g. multiple precision, or vector (geometric) objects? Does it work well?
For "well-behaved" types, Boost.Units should work transparently. See the examples for complex and measurement types. I believe there are some users who have applied Boost.Units to vectors in varying coordinate systems as well.
Are there future planned developments for Boost.Units, in terms of external features (e.g. automatic conversion, better interaction with boost.phoenix) or internal implementation?
Automatic conversion is a much-requested and highly contentious issue that we intentionally did not support in the library, a decision made after much careful consideration. It is difficult to do correctly: in a situation where you have mixed meters and feet, e.g.

1.5*meters + 3.7*feet

either there must be a specified default set of units to which everything is converted, or an arbitrary decision must be made by the compiler. While it probably doesn't matter too much in this case, and for MP numeric types it shouldn't matter at all (excepting execution time), there are possible unit combinations where truncation/round-off could become a problem, and there is no obvious way to determine this at compile time.

Why not just explicitly specify your default units of choice in your code? This is self-documenting and should be consistent and correct. Do you have a use case where the requirement of explicit unit specification is excessively problematic?

As far as Boost.Phoenix, what are the concerns?

While I can't speak for Steven, I personally do not have a tremendous amount of time or resources to devote to Boost.Units on an ongoing basis; my bread-and-butter work is in medical imaging and, therefore, Boost.Units is a somewhat tangential project. That being said, one of the beauties of the open source model is that other interested parties (you, perhaps) are welcome to extend the work. Going from code that works for your particular application to code that is Boost-ready is non-trivial, but there are quite a number of people who can help provide advice. Boost.Units.Phoenix would be, I'm sure, a great addition to the library.

As far as internal implementation, what are the concerns?

Best, Matthias

On Thu, Dec 23, 2010 at 8:13 PM, Matthias Schabel <boost@schabel-family.org> wrote:
I believe there are some users who have applied Boost.Units to vectors in varying coordinate systems as well.
Thank you, I would like to see what they did, for example whether the units are built into the vector types or just into the metric of the vector space. I hope I can get feedback in this respect.
Are there future planned developments for Boost.Units, in terms of external features (e.g. automatic conversion, better interaction with boost.phoenix) or internal implementation?
Automatic conversion is a much-requested and highly contentious issue that we intentionally did not support in the library, making the decision after much careful consideration. It is difficult to do it correctly - in the situation where you have, e.g., mixed meters and feet:
1.5*meters + 3.7*feet
either there must be a specified default set of units to which everything is converted or an arbitrary decision must be made by the compiler. While it probably doesn't matter too much in this case, and for MP numeric types it shouldn't matter at all (excepting execution time), there are possible unit combinations where truncation/round-off could become a problem. There is no obvious way to determine this at compile-time.
Precisely, there is no obvious way to do automatic conversions.
Why not just explicitly specify your default units of choice in your code - this is self-documenting and should be consistent and correct. Do you have a use case where the requirement of explicit unit specification is excessively problematic?
I believe I often need a sort of "automatic conversion" because I am writing code for other colleagues. The implementation is written in certain commonly used units, but the formulas are expected to have natural expressions without horrible casts and intermediate conversion variables. This is an example of a formula directly copied from a paper:

    // V and V0 given in Angstrom^3
    power_typeof_helper<quantity<nonsi::angstrom_unit>, static_rational<3> >::type V, V0;
    double B1;
    // B0 given in GPa
    quantity<make_scaled_unit<si::pressure, scale<10, static_rational<9> > >::type> B0;
    // phi0 given in electron volts
    quantity<nonsi::electron_volt_unit> phi0;

    double X = 3./2.*(B1 - 1.)*(root<3>(V/V0) - 1.);
    4.*V0*B0/pow<2>(B1 - 1.)*(1. - (1. + X)*exp(-X)) + phi0;

As I said, most of the automatic conversion that I need is tied to formulas; that is, the conversion generally needs to occur in the middle of expression evaluation. Of course there are only four operators that need conversion: +, -, *, /. I need automatic conversion in each case, for different reasons.

In the case of + or - the reason is more or less obvious: the library won't add or subtract quantities of different systems, even if they have the same dimension. For * and / I need it for a different reason: the library *can* multiply quantities from different systems, but then it becomes very difficult to convert the resulting quantity to a third system of units. It seems that explicit conversion can't handle such a conversion even though all the conversion factors are defined. So by adding factors in an expression one can inadvertently introduce many different systems of units that later cannot be converted.

To solve the problem with + and -, what I did was to define operator+ and operator- that perform automatic conversion, but only inside a namespace:

    namespace boost { namespace units { namespace auto_conversion_operators { namespace to_left {
        quantity<...System1> operator+( quantity<...System1>, quantity<...System2> ) { ... }
    }}}}

and the mirror of it, namespace boost::units::auto_conversion_operators::to_right { ... }. In this way I can activate automatic conversion if I declare

    using namespace boost::units::auto_conversion_operators::to_left;

just before the demanding formula. In the same way I can imagine declaring operators that always privilege one system of units (for example SI): using namespace boost::units::auto_conversion_operators::to_si; (like in the example in the Mathematica blog). For * and / I can do the same, i.e. always giving the result in the system of the left-hand side or the right-hand side.
By declaring a priori, inside a scope, that I want to use these operators, I guarantee that the dimensionally correct result will be in a certain system and not a mixture. I don't have the bitter feeling of truncation, because the truncation will occur anyway after the unavoidable step of converting and doing the arithmetic operation. I admit it is not completely elegant, but it gives me a certain control over the conversions without affecting the formula syntax. I can share the code of these auto_conversion_operators if you are interested.
As far as Boost.Phoenix, what are the concerns?
The problem is that both Units and Phoenix seem to require a fair amount of glue (template) code. Mainly, one needs to translate the units::*_typeof_helper protocol to the phoenix::result_of_* protocol; this is needed for quantities, references to quantities, units, and other types with which quantities can interact (e.g. double). The glue code that I managed to add is long and very repetitive, and probably still limited. Also, I am hesitant to invest a lot of effort since things seem to be different in the upcoming Phoenix3. I have some code that does this. I also have some other minor code to make Units work with Boost.Accumulators and Boost.Interval, and a set of atomic and non-SI units with their own separate mini-system.
That being said, one of the beauties of the open source model is that other interested parties (you, perhaps) are welcome to extend the work. Going from code that works for your particular application to code that is Boost-ready is non-trivial, but there are quite a number of people who can help provide advice. Boost.Units.Phoenix would be, I'm sure, a great addition to the library.
I really appreciate your work. Certainly I would like to contribute these additions that I have so far, though I doubt they are of the same code quality as the rest of the library. I am certainly not against using the version control myself, but I would like to have the code seen by experts before really uploading something.
As far as internal implementation, what are the concerns?
In the same way that I implemented some additions, I would have liked to implement other features, like explicit conversion in cases where three (or more) systems of units are involved. That part seemed to involve a fair amount of research into the library, in particular its compile-time linear algebra, static_rational, and a lot of MPL. My question about implementation is to know whether the library is planned to change in some particular way. Thank you, Alfredo

For * and / I need it for a different reason: the library *can* multiply quantities from different systems, but then it becomes very difficult to convert the resulting quantity to a third system of units. It seems that explicit conversion can't handle such a conversion even though all the conversion factors are defined. So by adding factors in an expression one can inadvertently introduce many different systems of units that later cannot be converted.
AFAIK, if conversions are correctly defined there should not be a problem converting to a third system of units. Can you provide a small, self-contained example of the problem?
I can share the code of these auto_conversion_operators if you are interested.
Steven, any thoughts on implementing an expression-local way of allowing implicit conversions? It actually seems like it might be a great GSOC project to re-implement Boost.Units using Boost.Proto...
As far as internal implementation, what are the concerns?
In the same way that I implemented some additions, I would have liked to implement other features, like explicit conversion in cases where three (or more) systems of units are involved. That part seemed to involve a fair amount of research into the library, in particular its compile-time linear algebra, static_rational, and a lot of MPL. My question about implementation is to know whether the library is planned to change in some particular way.
The above comment aside, I'm not aware of any major planned changes in the Boost.Units implementation. Matthias

AMDG On 1/3/2011 11:06 AM, Matthias Schabel wrote:
For * and / I need it for a different reason: the library *can* multiply quantities from different systems, but then it becomes very difficult to convert the resulting quantity to a third system of units. It seems that explicit conversion can't handle such a conversion even though all the conversion factors are defined. So by adding factors in an expression one can inadvertently introduce many different systems of units that later cannot be converted. AFAIK, if conversions are correctly defined there should not be a problem converting to a third system of units. Can you provide a small, self-contained example of the problem?
The problem is that the set of definitions required can get very weird when you mix base units with different dimensions.
I can share the code of these auto_conversion_operators if you are interested. Steven, any thoughts on implementing an expression-local way of allowing implicit conversions? It actually seems like it might be a great GSOC project to re-implement Boost.Units using Boost.Proto...
I suppose that it would be possible to use Proto when attempting to add/subtract different units with the same dimensions. We could allow constructs like static_cast<quantity<si::length> >(1.5*meters + 3.7*feet). The result of this is well-defined. Evaluating (arbitrarily complex) expressions of this form would actually be a fairly straightforward extension of the current conversion code. We'd just have to extract all the base units used in the expression, find a basis, and use that to form a common system to which everything can be converted. (I need to think about this more. I'm probably over-complicating it) In Christ, Steven Watanabe

As far as Boost.Phoenix, what are the concerns?
The problem is that both Units and Phoenix seem to require a fair amount of glue (template) code. Mainly, one needs to translate the units::*_typeof_helper protocol to the phoenix::result_of_* protocol; this is needed for quantities, references to quantities, units, and other types with which quantities can interact (e.g. double). The glue code that I managed to add is long and very repetitive, and probably still limited. Also, I am hesitant to invest a lot of effort since things seem to be different in the upcoming Phoenix3.
Certainly, if all that is involved is providing result_of_* classes that forward to the *_typeof_helper classes, that should not be a big problem. A small demonstration of the problem in integrating Boost.Units and Boost.Phoenix, along with a proposed solution, would be a great starting point. Matthias

I also have some other minor code to make units work with Boost.Accumulators, and Boost.Interval and a set of atomic and non-SI units with their own separate mini-system.
Assuming the necessary changes to Boost.Accumulators and Boost.Interval don't break existing code, I can't imagine that the library authors/maintainers would have any objections. I suppose it would involve bringing in Boost.Typeof support... Why don't you post the proposed changes and see if it can strike up some discussion. Matthias

On 24/12/10 05:13, Matthias Schabel wrote:
Automatic conversion is a much-requested and highly contentious issue that we intentionally did not support in the library, making the decision after much careful consideration. It is difficult to do it correctly - in the situation where you have, e.g., mixed meters and feet:
1.5*meters + 3.7*feet
either there must be a specified default set of units to which everything is converted or an arbitrary decision must be made by the compiler.
Just chiming in on this subject. Why not have a way to define user rules for conversion in some kind of compile-time conversion strategy, and have a way to say: units X and Z use this map, please? By default, to preserve old code, let the library define a no_conversion policy. It doesn't add anything to the existing library, but it allows for a smooth customisation point for users.
participants (5)
- alfC
- Alfredo Correa
- Joel Falcou
- Matthias Schabel
- Steven Watanabe