From: Marat Khalili (0x8.0p15_at_[hidden])
Date: 2007-07-09 16:50:21
Hello,
It is great that at least someone else needs this feature, if not this
particular implementation.
Kevin Lynch wrote:
>> 6. Correlations are not considered; instead, worst case is always assumed.
> Is there a use case where this behavior is useful and meaningful?
I do have this kind of case, but I don't know whether it is typical. It
depends on (a) how many independent sources of error you have, and (b)
whether an upper bound on the error is good enough. In my case I have a
few numerical integrations (each producing an inexact value) and various
operations on their results; I just wanted to be sure the errors do not
grow too large. Every time I add uncorrelated values as if they were
correlated I am wrong by a factor of two at most, and who can tell for
sure whether my integrations really are uncorrelated?
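Just to put numbers on it (plain arithmetic with made-up error values,
not output of the library): for two terms the worst-case sum exceeds the
statistical one by at most sqrt(2), so "factor of two" is a safe bound:

    #include <cmath>
    #include <cstdio>

    int main() {
        double ex = 0.03, ey = 0.04;                 // made-up error magnitudes
        double worst = ex + ey;                      // fully correlated: 0.07
        double stat  = std::sqrt(ex*ex + ey*ey);     // uncorrelated: 0.05
        std::printf("worst = %g, statistical = %g, ratio = %g\n",
                    worst, stat, worst / stat);      // ratio 1.4 <= sqrt(2)
        return 0;
    }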
> Without this feature, I can't come up with any reason for me to use it.
You can always write

    z = inexact(x.value() + y.value(),
                sqrt(x.error()*x.error() + y.error()*y.error()));

in place of z = x + y where I really need it; the feature would just
formalize that. The problem is that one might want to add x and z later,
and then the hand-written error no longer combines correctly, because z
is correlated with x.
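A toy numeric example of what goes wrong (the usual statistical rules,
with made-up errors 0.03 and 0.04): reusing the sqrt formula on x and z
silently underestimates, while the worst-case rule at least stays above
the correct value here:

    #include <cmath>
    #include <cstdio>

    int main() {
        double ex = 0.03, ey = 0.04;                  // x and y are independent

        double ez    = std::sqrt(ex*ex + ey*ey);      // 0.05: correct for z = x + y
        double naive = std::sqrt(ex*ex + ez*ez);      // ~0.058: treats x and z as independent
        double truth = std::sqrt(4*ex*ex + ey*ey);    // ~0.072: correct statistical error of 2x + y
        double worst = ex + ez;                       // 0.08: the worst-case rule, still an upper bound here

        std::printf("naive = %g, correct = %g, worst-case = %g\n", naive, truth, worst);
        return 0;
    }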
My zeroth principle was that this is not an opaque library, just some
help with the arithmetic. It will not let a person who knows nothing
about errors calculate errors.
> I certainly can't see a use for it in publication quality basic
> research or any engineering work where you need to rely on the errors it
> calculates.
It correctly produces upper bounds almost everywhere. Where it doesn't
(like cos(0.01 +- 0.1)), an inexperienced physicist would make the same
mistake by hand :).
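To spell that example out (this is just the usual first-order rule,
error ~ |f'(x)|*dx, applied near a point where cos is almost flat):

    #include <cmath>
    #include <cstdio>

    int main() {
        double x = 0.01, dx = 0.1;

        double linear = std::fabs(std::sin(x)) * dx;      // ~0.001: first-order estimate
        double actual = std::cos(x) - std::cos(x + dx);   // ~0.006: real deviation at x + dx

        std::printf("first-order error = %g, actual deviation = %g\n", linear, actual);
        return 0;
    }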
> Do you have an intended use case?
> I don't want to sound too critical of your work (although it is hard to
> NOT sound critical, I suppose ... sorry).
No, I did need it myself; another matter is that I have already spent
more time on the library (like 10x :)) than doing it manually would have
taken. So I hope it will be of use to someone else, and that is exactly
why I need some criticism.
> I know how hard this area is
> to do right. I've tried to build my own class to handle this type of
> work and failed miserably.
I'd be happy to hear more about your experience.
> I've come to the conclusion that it isn't
> even possible to do in pure C++ without substantial run time performance
> penalties (although I would love to be proved wrong).
Yes, at the very least we would have to keep a lot of correlations in
memory: N^2 correlations for N floating-point variables in the worst case.
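Purely as a hypothetical sketch (not the proposed library, just an
illustration of the bookkeeping): correct first-order tracking means
every value remembers its sensitivity to each independent error source,
so with N variables derived from up to N sources the storage can grow
quadratically:

    #include <cmath>
    #include <cstdio>
    #include <map>

    // Each value remembers its first-order contribution from every
    // independent error source (source id -> coefficient * sigma).
    struct tracked {
        double value;
        std::map<int, double> sens;

        double error() const {
            double s2 = 0;
            for (std::map<int, double>::const_iterator i = sens.begin(); i != sens.end(); ++i)
                s2 += i->second * i->second;
            return std::sqrt(s2);
        }
    };

    tracked operator+(const tracked& a, const tracked& b) {
        tracked r;
        r.value = a.value + b.value;
        r.sens  = a.sens;
        // Contributions from a shared source add linearly, not in quadrature.
        for (std::map<int, double>::const_iterator i = b.sens.begin(); i != b.sens.end(); ++i)
            r.sens[i->first] += i->second;
        return r;
    }

    int main() {
        tracked x; x.value = 1.0; x.sens[0] = 0.03;   // depends on source 0 only
        tracked y; y.value = 2.0; y.sens[1] = 0.04;   // depends on source 1 only

        tracked z = x + y;   // error 0.05
        tracked w = x + z;   // 2x + y: error sqrt(4*0.03^2 + 0.04^2) ~ 0.072, not 0.058

        std::printf("z = %g +- %g, w = %g +- %g\n", z.value, z.error(), w.value, w.error());
        return 0;
    }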
> The problem as I
> see it is that proper error analysis requires a level of global
> knowledge to which the language just doesn't give you access at compile
> time. Now, maybe you could do it with sufficient compile-time
> reflection/introspection support that interacts well with current
> template metaprogramming facilities, but even then I'm not sure it can
> be done. Perhaps you could write a compile-time DSL that does the right
> thing, but then you aren't programming transparently in C++ anymore.
Well, it's an interesting idea - this could be done in Java with
reflection and runtime disassembly (Soot, anyone?). If only I had
some time... :(
> There was a brief exchange concerning this topic during the review of
> the Quantitative Units Library back in the first half of the year.
> Matthias Schabel was persuaded not to include his version of this class
> in the library at the time since it didn't "do the right thing".
Was it on this list? Where can I take a look at the library?
> All that said, I would love to see an efficient library solution to this
> problem ... I'd love to be able to use real C++ and not the current
> poorly performing, hacky solutions that I have access to.
One day the temptation to write a helper function or two becomes too strong :)
With Best Regards,
Marat