Boost :
From: rwgk (rwgk_at_[hidden])
Date: 2002-03-06 17:50:50
--- In boost_at_y..., "Stewart, Robert" <stewart_at_s...> wrote:
> From: rwgk [mailto:rwgk_at_y...]
> >
> > --- In boost_at_y..., "Stewart, Robert" <stewart_at_s...> wrote:
> > > > Unfortunately the reserve() solution eventually requires
> > > > use of push_back(), which is also slow.
> > >
> > > If you use push_back() following reserve(), there is no
> > > performance penalty. The push_back()'ed object is copied into
> > > the next element. If you exceed the size allocated with
> > > reserve(), then push_back() will cause reallocation and copying,
> > > so be sure to choose the right size when calling reserve().
> >
> > Here is an example push_back() implementation:
> >
> > void push_back(const ElementType& x) {
> >   if (size() < capacity()) {
> >     new (end()) ElementType(x);
> >     m_incr_size(1);
> >   }
> >   else {
> >     m_insert_overflow(end(), size_type(1), x, true);
> >   }
> > }
> >
> > I doubt that the optimizer can entirely remove the
> > overhead associated with incrementing the size,
> > and it certainly cannot remove the "if" statement.
>
> The memory for the vector is not reallocated, and the elements are
> not copied from the old to a new allocation. That is, with a
> judicious call to reserve() before push_back(), size() < capacity()
> will always be true, so x will be copied into the uninitialized
> memory at end() and the size will be incremented. Thus, there is no
> memory/copy performance penalty, which is what I thought "slow" was
> meant to imply. If a comparison of two integers and the increment of
> another is more overhead than you can bear -- though it certainly
> doesn't justify the label "slow"! -- then you need a fixed-size
> array such as a C array or a wrapper class of a C array.
I agree that the overhead for std::vector will probably not be
noticeable in most applications and on most CPUs. But sometimes
people mention that an if-statement inside a loop defeats certain
CPU-specific optimizations (e.g. branch prediction or vectorization),
i.e., the performance degrades disproportionately. I am not
sure how true this still is for modern CPUs.
The other point is that the code above is from a reference-counted
container implementation. size(), capacity() and end()
look up member data of a separate object (the handle that
maintains the reference count and the memory). In this
situation you really have to think twice before using
push_back().
Back to the original question of changing the default
constructor of std::complex (to not initialize the
data members): this would mean that, e.g.:
std::complex<double>()
behaves differently than
double()
B. Stroustrup writes (section 6.2.8 of The C++ Prog. Lang.):
"The value of an explicit use of the constructor for a
built-in type is 0 converted to that type."
IMO std::complex<double>() and double() should behave
the same way.
Back to the question of initializing vectors of
std::complex: IMO the relevant container types
(vector, valarray, anything else?) should provide
a facility (constructor) to set the size() without
initializing the values if possible, i.e. if the
value_type has a trivial destructor (this is more general
than the requirement that the value_type is a POD
type). A couple of hours ago I posted a suggestion
for this that has unfortunately not yet shown up...
(but see the recent replies from David under the same
subject).
Ralf
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk