From: Steven Watanabe (steven_at_[hidden])
Date: 2007-08-10 15:05:02
AMDG
Eric Niebler <eric <at> boost-consulting.com> writes:
> I don't doubt it. std::map does it for its range-based insert so it must
> be possible. And this may seem like splitting hairs to you, but what I
> said is true -- you can't efficiently build a std::map by pushing in
> sorted data one element at a time, because the interface doesn't
> allow for it. You have to pass it a range of sorted elements. You may
> say, "Well, then don't use a std::map to build a time series!" Fair
> enough. But when a perfectly satisfactory data structure like std::map
> cannot be adapted to satisfy a concept, it suggests a faulty concept.
Or that the container is not perfectly satisfactory
for the particular task. std::map is simply not designed
for inserting many elements at the end. Buffering the
input still adds significant overhead, because
std::map is not allowed to assume that the input range
is sorted; it must check. So using std::map
is worse than a custom container for two reasons: it needs
temporary storage, and it must check the range before inserting it.
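
To make those two costs concrete, here is a minimal sketch of the
buffering strategy (the (time, value) sample layout is just an
assumption for illustration):

#include <map>
#include <utility>
#include <vector>

// Buffer sorted (time, value) samples, then hand the whole range
// to std::map. The vector is pure temporary storage, and the map
// must still work out where each element belongs; it is not
// allowed to simply trust that the range is sorted.
void append_buffered(std::map<int, double>& series,
                     std::vector<std::pair<int, double>> const& buffer)
{
    series.insert(buffer.begin(), buffer.end());
}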
> And (crucially) you would do away with commit() on both ordered_assign
> and ordered_append? Or would you keep them on the low-level interface
> and add a higher level interface that called commit for you and handled
> the 90% case?
I would remove commit and require buffering
to be explicit.
> The thought I'm not sold on is:
>
> - get rid of commit() on the lower level interface.
I consider immediate insertion lower level
than buffering.
> OK. It seems to me that such a thing could be built on top of an
> ordered_append that has a commit(), simply by calling commit() after
> every appended element. Would you agree?
I would prefer to implement the "commit" version
in terms of the plain version. See below.
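In outline, the layering might look like this (committing_appender
and the append member are names made up for this sketch; the real
low-level interface could differ):

#include <vector>

// A "commit" appender built on top of a plain, immediate
// per-element append. Series::append is assumed to insert a
// single element at the end of the series.
template<class Series, class Value>
class committing_appender {
    Series& series_;
    std::vector<Value> buffer_;   // the buffering is explicit
public:
    explicit committing_appender(Series& s) : series_(s) {}
    void push(Value const& v) { buffer_.push_back(v); }
    void commit() {
        for (Value const& v : buffer_)
            series_.append(v);    // flush via the plain interface
        buffer_.clear();
    }
};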
> Anyway, the STL-ish assign(begin,end) doesn't have this problem because
> the end is specified explicitly and it has extra benefits (the size may
> be known), but this doesn't mesh well with the signal processing domain.
> In assign(begin,end), the container *pulls* the data. In signal
> processing, data is often read by polling, and then *pushed* into a
> container, hence the desire for something like a back_inserter.
Just wondering...
Isn't it possible to have a pull model
from the container's side *and* a push model
from the data's side with coroutines?
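Sketching the idea with C++23's std::generator (which of course
postdates this thread by many years; sample and the series' append
member are placeholders):

#include <generator>

struct sample { double value; int start; int stop; };

double poll_some_value();   // assumed data source, as below

// The data side "pushes": each co_yield hands one sample to the
// consumer and suspends the coroutine until the next is requested.
std::generator<sample> poll_samples(int n) {
    for (int t = 0; t < n; ++t)
        co_yield sample{poll_some_value(), t, t + 1};
}

// The container's side "pulls" through an ordinary range interface.
template<class Series>
void fill(Series& series) {
    for (sample s : poll_samples(100))
        series.append(s);   // hypothetical per-element append
}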
> Time passes ...
>
> If we ditched the assigners and the appenders and went with the
> range-based STL-ish interface, what happens? Any convenient push-based
> interface we build on top would have to resort to sticking in one value
> at a time, just like std::back_inserter. Do you really feel that's an
> improvement? It's an honest question.
Yes, I do.
If you want buffering, it doesn't have to
be done behind the scenes. By adjusting the
interface slightly, it can be implemented in terms
of concepts that already exist:
typedef get_efficient_collector<series_type>::type collector_type;
collector_type collector;
auto inserter = std::back_inserter(collector);
while(more_samples())   // hypothetical loop condition; the original used an endless for(;;)
    *inserter++ = make_tuple(poll_some_value(), last_time, current_time);
series.assign(std::move(collector));
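
(Here get_efficient_collector, more_samples, last_time, and
current_time stand for whatever the adjusted interface and the
application provide; the point is only that the collector is an
ordinary container and the buffering is visible in user code.)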
In Christ,
Steven Watanabe