
From: Matt Calabrese (rivorus_at_[hidden])
Date: 2005-10-11 11:09:21


On 10/11/05, Eric Niebler <eric_at_[hidden]> wrote:
>
> This set off a bell in my head. I had to do similar expression template
> manipulation in xpressive, and I wrote an expression template parser to
> help. It currently lives in boost/xpressive/proto. It is essentially a
> generic expression template framework and a handful of generally useful
> "compilers" that plug into the framework to transform the expression
> template into a target form. For instance, there is a transform_compiler
> that finds a pattern in your expression template and transforms it
> according to an MPL-style lambda that you provide.

Very nice. I will most likely use it in the future, but I probably won't use
it until after I complete the current version of the implementation, since it
is already well under way and I want to get this version out as soon as
possible.

On 10/11/05, Andy Little <andy_at_[hidden]> wrote:
>
>
> Actually this is not quite correct. If t1 ... t6 are temperatures then, e.g.:
>
> (t1 + t2 + t3 + t4 + t5 + t6 ) / 6 is perfectly legal.
>

If you look back at my post, I mentioned this very case:

"Finally, just like in geometry, there are a few places where the rules can
be broken regarding adding points, such as with barycentric combinations.
With temperatures, a barycentric combination could be used to find the
average temperature of the day."

The example you gave is actually a barycentric combination with all weights
equal. Just like in geometry, this does not mean that the addition operator
should be defined for points to perform component-wise addition. Instead, I
provide a "componentwise_add" function for such rare cases, and I will also
be providing average and barycentric combination functions to abstract away
the concepts.
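To make the idea concrete, here is a rough sketch using plain doubles (the
library's quantity types are not shown here): a barycentric combination is a
weighted sum whose weights sum to 1, and a plain average is just the special
case where all weights are equal.

```cpp
#include <cstddef>

// Sketch only: a barycentric combination is a weighted sum whose
// weights sum to 1. Plain doubles stand in for temperature quantities.
double barycentric(const double values[], const double weights[],
                   std::size_t n) {
    double result = 0.0;
    for (std::size_t i = 0; i != n; ++i)
        result += weights[i] * values[i];
    return result;
}
```

The six-temperature average is then a call with every weight equal to 1.0/6.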

On 10/11/05, Andy Little <andy_at_[hidden]> wrote:

> The example works for addition but what about multiplication and division?
> Is
> the example realistic?

Yes, it works just the same. In a multiplicative expression, like operands
are grouped and multiplied prior to evaluating the overall expression.
Divisors in a simple multiplicative expression are combined by multiplication,
and the expression is kept as a ratio until it needs to be evaluated. This is
the best solution I could come up with to minimize precision loss with
integral/fixed-point types. This is probably the most controversial of my
optimizations. Again, though, it will be toggleable once complete.
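The precision argument can be seen with a toy sketch (not the library's
actual machinery): if the divisors are folded into a single ratio and the
division is deferred to the end, intermediate integer truncation is avoided.

```cpp
// Toy sketch: combining divisors by multiplication and deferring the
// division until evaluation avoids intermediate truncation with
// integral value types.
struct ratio_expr {
    long num; // product of the multiplicand operands
    long den; // product of the divisor operands
};

ratio_expr operator*(ratio_expr a, ratio_expr b) {
    ratio_expr r = { a.num * b.num, a.den * b.den };
    return r;
}

long evaluate(ratio_expr r) { return r.num / r.den; }
```

For example, evaluating (10/4) * (6/3) eagerly with integer division gives
10/4 = 2, then 2 * (6/3) = 4, while the deferred ratio (10*6)/(4*3) = 60/12
gives the exact result 5.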

On 10/11/05, Andy Little <andy_at_[hidden]> wrote:
> In the above pqs would convert the intermediate result to millimeters
> under the
> rule that millimeters is the most fine grained unit. I guess that
> improvements
> could be made with expression templates. OTOH will the rules regarding the
> result of an operation be simple to understand? Maybe that doesn't matter.
> BTW
> Are there not subtleties with temporaries using ET that users need to know
> about?

I don't see a good reason to convert to the most fine-grained unit type. If
anything, that can only cause more conversions, which could make the
expression less optimized and also give less precise results, depending on
the value_type of the operands. I decided that determining the type to
convert to from context is the best solution. In the case of a "tie" for
which type should be used, I convert to the unit type closest to the "natural
unit" in the nodular conversion tree I described. If that is a tie, the
choice is really arbitrary (I use the left operand). Fortunately, users
shouldn't ever have to worry about this. I am making sure to document any
subtleties. Particularly with the optimizations such as expression
rearrangement, I am going to give examples of where it may be a good idea to
turn it off.

On 10/11/05, Andy Little <andy_at_[hidden]> wrote:
> One problem in pqs was explaining how to add new types and units. If this
> can be
> simplified that would be great.

Adding new classification types and new unit types is actually very simple
at the moment. For derived classification types, it's as simple as:

// force is "length" times "mass" times "time" to the power of negative two.
typedef classification< length, mass, power_c< time, -2 > > force;

// The natural unit is newtons
// (the above operations performed on the natural
// unit types of the corresponding classifications)
quantity< force::natural_unit > a_force_quantity_in_newtons;

classification has a variadic template argument list which has the effect of
multiplying together all of the operands. I also have a variadic "per"
template which multiplies together all of the operands and then raises the
result to the power of negative one. In this example, length, mass, and time
are all fundamental classifications; however, you can also use derived
classifications in the formation of force (such as velocity or acceleration,
if it is defined). In compilers that support typeof, you can also do typeof(
length() * mass() * power_fun< -2 >( time() ) ). I have spent much time
making all of this as easy as possible.

Making new derived units is just as easy:

typedef unit< miles, per< hour > > mph;

// or

typedef typeof( miles() / hour() ) miles_per_hour;

Creating new fundamental classifications is a little bit more complicated,
but is still done in a single line, through a macro:

SURGE_TUCAN_BASE_CLASSIFICATION( money )

The reason a macro is used is that, internally, a template is instantiated
with a unique type tag created from the passed argument, and that
instantiated type is then typedefed to the name that was passed.

For those curious, surge is just the encapsulating namespace name I use for
libraries I develop, and tucan is the name of this library, which stands for
"Templated Unit Containment And Nodular conversion".

On 10/11/05, Jeff Flinn <TriumphSprint2000_at_[hidden]> wrote:

> As a casual lurker, your description read very well and made perfect sense
> to me. It is a great formalization of these concepts. Have you read the
> concept section of the DateTime library, which seems analogous to your
> temperature example? Do your concepts handle the DateTime concepts?
>
> Thanks, Jeff

When I first started using Boost I read through the documentation for
DateTime, but I haven't really had a need to use it and so don't remember
many of the details. I will look back sometime in the next couple of weeks
and compare the concepts, but probably not before I have a version out for
people to play with.

On 10/11/05, Deane Yang <deane_yang_at_[hidden]> wrote:
>
> Although I appreciate your efforts to implement an efficient way to
> automatically convert different units for the same quantity, I would
> still prefer a library that requires an explicit "cast" to change units,
> rather than implicit. Perhaps the library could be built in two layers,
> with the lower level requiring explicit casts and a layer above that
> that implements implicit casting for those who want it.

When I first started on the library, I thought that might be a good idea,
but as development went on I decided it wasn't really necessary. Could you
give an example of why you would want such restrictions? Keep in mind that
explicit unit casts are still available.
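The distinction under discussion can be shown with a minimal sketch
(hypothetical types and conversion factor, not the library's API): marking
the converting constructor explicit forces a visible cast at every unit
change.

```cpp
// Hypothetical sketch: with an explicit converting constructor,
// changing units requires a visible cast at the call site.
struct meters { double value; };

struct feet {
    double value;
    explicit feet(double v) : value(v) {}
    explicit feet(meters m) : value(m.value * 3.2808399) {}
};

// feet f = some_meters;        // would not compile: conversion is explicit
// feet f = feet(some_meters);  // compiles: the cast is spelled out
```

Under the layered design suggested above, the implicit-conversion layer
would simply drop the explicit keyword on top of this lower level.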

--
-Matt Calabrese

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk