
From: Darryl Green (darryl.green_at_[hidden])
Date: 2006-12-14 18:26:50


Eric Niebler wrote:
> Darryl Green wrote:
>
>>It was only when I read this that I took another look at the docs and
>>realized that your library is using a non-uniform sampling model. This
>>is interesting - but could make some algorithms hard to write and/or
>>much less efficient?
>
> I think you misunderstood, or I was unclear. The library accommodates
> both uniform and non-uniform sampling. In another reply, I wrote:

I was unclear - I (mostly) understood this. I should have said "also
accommodates non-uniform sampling" rather than "is using a non-uniform...".

> Take the case of a dense_series. It has uniform sampling. Its "runs"
> sequence has the property of being "dense", meaning that each run is of
> unit length, and the run offsets are monotonically increasing.
> Obviously, finding a sample with a particular offset in a dense series
> is O(1), but not in a sparse series. Algorithms can test for density
> using a trait and select an optimal implementation. The default
> implementation would handle series with non-uniform sampling.

Makes sense.
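
Just to check I'm picturing this correctly, here is a minimal sketch of
the kind of compile-time selection I have in mind. None of these names
(is_dense, dense_series_sketch, sparse_series_sketch, value_at) are from
your library - they're only placeholders to illustrate choosing an O(1)
path for dense storage and a lookup-based path otherwise:

#include <cstddef>
#include <map>
#include <vector>

// Hypothetical density trait - not the library's actual trait name.
template<class Series>
struct is_dense { static const bool value = false; };

// Hypothetical dense series: unit-length runs, contiguous storage.
struct dense_series_sketch {
    std::vector<double> samples;   // sample at offset i is samples[i]
};
template<>
struct is_dense<dense_series_sketch> { static const bool value = true; };

// Hypothetical sparse series: offset -> value, runs may be non-uniform.
struct sparse_series_sketch {
    std::map<std::ptrdiff_t, double> samples;
};

template<bool IsDense>
struct value_at_impl;

// O(1) path: dense storage can be indexed directly.
template<>
struct value_at_impl<true> {
    static double apply(dense_series_sketch const & s, std::ptrdiff_t off) {
        return s.samples[static_cast<std::size_t>(off)];
    }
};

// O(log n) path: sparse storage needs a lookup; missing offsets count as zero.
template<>
struct value_at_impl<false> {
    static double apply(sparse_series_sketch const & s, std::ptrdiff_t off) {
        std::map<std::ptrdiff_t, double>::const_iterator it = s.samples.find(off);
        return it == s.samples.end() ? 0.0 : it->second;
    }
};

// One algorithm; the implementation is chosen at compile time, so the
// dispatch itself costs nothing at runtime.
template<class Series>
double value_at(Series const & s, std::ptrdiff_t off) {
    return value_at_impl<is_dense<Series>::value>::apply(s, off);
}

Obviously the real design (runs, the zero element, discretization, etc.)
is richer than this, but that's the kind of selection I was picturing.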

>
> It's all done using compile time traits, so there is zero runtime overhead.

Yes, that's true. I was imprecise; the relative efficiency of non-uniform
vs uniform algorithms will depend on the type of data (its sparseness)
and the type of processing being done. The ability to use the same
library for both seems like a great feature. I need to review my
non-uniform sampling theory (not something I've ever done much of, and
I'm pretty rusty on all this stuff anyway) *and* actually try using your
library before I comment any further.

Thanks
Darryl

