From: Michael Marcin (mmarcin_at_[hidden])
Date: 2007-04-05 12:00:29
As has been mentioned earlier in this thread, compile-time dimensional
analysis is extremely useful. However, many common compilers have trouble
fully optimizing these wrappers away. The correctness verification also
happens on every compile and takes a measurable amount of time. Is there a
configuration of this library one could use to guarantee drop-in
compatibility with raw floating-point types?
I'm thinking you could control dimensional analysis with a preprocessor
switch (much like many people do for concept checking now).
What would this take? Would disallowing all implicit and explicit
conversions be sufficient?
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk