
From: Marat Khalili (0x8.0p15_at_[hidden])
Date: 2007-07-16 04:23:08


Paul A Bristow wrote:

>> What do you think about 'uncertainty_interval_half_width'?
>
> It could be - but that makes all sorts of assumptions about distribution, normality? 95% confidence? ...

Not really, the library can just crunch whatever it was given by the user,
95% or 66%, as long as it is consistent. Problems only arise when it
becomes necessary to estimate the 95% interval from the 66% one.
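
(To make that concrete: rescaling, say, a 66% half-width into a 95% one
needs a distributional assumption. A minimal sketch assuming normality;
the z-values below are approximate standard normal quantiles, and the
whole thing is just my illustration:)

#include <iostream>

int main()
{
    const double half_width_66 = 0.5;  // given: the 66% half-width
    const double z66 = 0.95;           // ~ two-sided 66% normal quantile
    const double z95 = 1.96;           // ~ two-sided 95% normal quantile

    // Only valid if the underlying distribution really is normal:
    const double half_width_95 = half_width_66 * (z95 / z66);

    std::cout << half_width_95 << '\n';
    return 0;
}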

> so I think you want a more vague (and shorter!) name.

Shorter! IMO 'uncertainty' is not short, and, at the same time, it is
neither strict nor immediately clear whether it means the width, the
half-width, or something else.

>> but I'll think about it. Some help from English-speaking folks will be appreciated.
> I *really* am English-speaking!

I just wanted to point out that I'm not, and hence not that good at
finding synonyms.

> - unlike Boost's many American-English speakers ;-)

:)))

> It isn't in a state where I would want anyone else to see it with my name on it ;-((

Mine is in the same state. :)

>>> My uncertain class has a value, standard deviation, degrees of freedom,
>>> and 16 uncertainty type flags - square, triangular, gaussian, exact etc
>
> This fits into 128 bits, so only doubling the space required to store a 'value'.

I'm sure it can be very useful for storing measurement results, but
AFAIU applying various arithmetic operations to a limited number of
uncertainty types will create an unlimited (probably even uncountable)
number of further types, so implementing the corresponding algebra in
C++ is hardly realistic.
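
(For readers following along, the 128-bit layout described above could
look roughly like the following; the field widths and names are my
guess at such a class, not Paul's actual code:)

#include <cstdint>

// Flag bits recording where the uncertainty estimate came from
// (names guessed from the description above):
enum uncertainty_kind
{
    exact       = 1 << 0,
    gaussian    = 1 << 1,
    rectangular = 1 << 2,   // "square"
    triangular  = 1 << 3
    // ... up to 16 flag bits
};

struct uncertain
{
    double        value;               // 64 bits: the measured value
    float         std_deviation;       // 32 bits: its standard uncertainty
    std::uint16_t degrees_of_freedom;  // 16 bits
    std::uint16_t kind_flags;          // 16 bits of uncertainty_kind flags
};                                     // 128 bits in total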

On the other hand, AFAIU if we disregard distributions completely, the
error in the resulting uncertainty estimate grows only as some constant
power of the number of operations.
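
(By that I mean the usual first-order rules that just combine standard
uncertainties in quadrature, whatever the actual distributions are; a
minimal sketch of my own, not any library's API:)

#include <cmath>
#include <iostream>

struct measured { double value, sigma; };

// z = x + y : absolute uncertainties add in quadrature
measured add(measured x, measured y)
{
    return { x.value + y.value,
             std::sqrt(x.sigma * x.sigma + y.sigma * y.sigma) };
}

// z = x * y : relative uncertainties add in quadrature
measured mul(measured x, measured y)
{
    double v   = x.value * y.value;
    double rel = std::sqrt(std::pow(x.sigma / x.value, 2)
                         + std::pow(y.sigma / y.value, 2));
    return { v, std::fabs(v) * rel };
}

int main()
{
    measured a{10.0, 0.3}, b{4.0, 0.2};
    measured s = add(a, b), p = mul(a, b);
    std::cout << s.value << " +/- " << s.sigma << '\n'
              << p.value << " +/- " << p.sigma << '\n';
}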

> True - but I felt there was value in recording where the uncertainty estimates came from - repeated (with how many) measurements,
> some other estimating method, A/D conversion quantisation, round-off from number of significant digits...

Indeed it is valuable to know this for the raw data, since it will help
you aggregate your statistics correctly, but as soon as you have
aggregated it (for a number of measurements >= ~30) the distribution of
the estimate should resemble a normal one. So IMHO you are talking
about a different task or stage.
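
(Concretely, by aggregation I mean nothing more than the sample mean
and its standard error; a minimal sketch with made-up numbers:)

#include <cmath>
#include <iostream>
#include <vector>

int main()
{
    // raw repeated measurements (made-up values)
    std::vector<double> x = { 9.8, 10.1, 10.0, 9.9, 10.2 };
    const std::size_t n = x.size();

    double mean = 0.0;
    for (double v : x) mean += v;
    mean /= n;

    double ss = 0.0;                         // sum of squared deviations
    for (double v : x) ss += (v - mean) * (v - mean);
    const double sample_sd = std::sqrt(ss / (n - 1));
    const double std_error = sample_sd / std::sqrt(double(n));

    // by the central limit theorem the mean is roughly normal
    // for n of about 30 or more, whatever the raw distribution was
    std::cout << mean << " +/- " << std_error << '\n';
}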

BTW, shouldn't you have large volumes of data sharing the same uncertainty type?

> But I also wanted to achieve the compile-time checking of units, now a Boost library, and get output nice, including the 'right'
> number of significant digits (from the uncertainty).

Compile-time checking of units using templates is a completely different
task, and AFAIK it has been solved at least once already (though I don't
remember the link). As for the output, I also spent some time on it and
am finally more or less satisfied with the results.
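
(For what it is worth, one simple approach - just an illustration, not
exactly what I did - is to round the uncertainty to one significant
digit and print the value to the same decimal place:)

#include <cmath>
#include <cstdio>

// Print a value with the number of decimal places implied by its
// uncertainty; assumes sigma > 0.
void print_uncertain(double value, double sigma)
{
    // decimal place of the leading digit of sigma, e.g. sigma = 0.023 -> -2
    int place    = static_cast<int>(std::floor(std::log10(sigma)));
    int decimals = place < 0 ? -place : 0;
    std::printf("%.*f +/- %.*f\n", decimals, value, decimals, sigma);
}

int main()
{
    print_uncertain(12.3456, 0.023);   // prints 12.35 +/- 0.02
    print_uncertain(12.3456, 1.7);     // prints 12 +/- 2
}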

> PS I recall some New Zealand programmers work on this, but can't find a reference immediately.

It would be nice to see their considerations.

With Best Regards,
Marat

