Subject: Re: [boost] Stack-based vector container
From: Domagoj Saric (dsaritz_at_[hidden])
Date: 2011-01-25 18:07:48
"Thorsten Ottosen" <nesotto_at_[hidden]> wrote in message
news:4D3C972E.30509_at_cs.aau.dk...
> On 22-01-2011 15:18, Domagoj Saric wrote:
>> And measure what and how exactly?
>
> assembler or runtime.
As already explained, 'runtime' is IMO untestable in practice (in the 'wide
range of ways' sense)...
Assembler OTOH, if by that you meant inspecting the generated code, is IMO
the way to go in this case (especially because it reveals size as well as
speed issues)...however no real analysis is probably required here as the
assembly output is quite predictable (e.g. as in my last post to Emil
Dotchevski)...
>>> For one, the container will allow you to do
>>>
>>> boost::auto_buffer<T,N> buffer( n, boost::dont_initialize );
>>>
>>> to make it avoid initialization.
>>
>> Nice. It would be useful if it could do something similar for non-POD
>> types, i.e. take a functor in the constructor, 'walk' it through the N
>> entries and let it in-place construct them (to avoid the usual
>> default-construction-then-assignment procedure)...
>
> well, you can just do that post-construction wise. No need for another
> constructor.
How would you do that in an exception-safe way (you need to correctly
'roll-back'/unwind the construction if any of the constructors fail...)?
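(For reference, roughly what such a functor-taking constructor would have to
do internally...the raw-storage pointer, the factory functor and construct_n
itself are of course only illustrative here, not auto_buffer's actual
implementation:)

    #include <cstddef>
    #include <new>      // placement new

    // Construct n objects of type T into uninitialised storage using a
    // factory functor; if any construction throws, destroy the already
    // constructed elements in reverse order before rethrowing.
    template <typename T, typename Factory>
    void construct_n( T * const storage, std::size_t const n, Factory make )
    {
        std::size_t i( 0 );
        try
        {
            for ( ; i != n; ++i )
                new ( storage + i ) T( make( i ) );   // in-place construction
        }
        catch ( ... )
        {
            while ( i != 0 )                          // roll back/unwind
                storage[ --i ].~T();
            throw;
        }
    }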
>>> Futhermore, you may use
>>>
>>> buffer.unchecked_push_back( x );
>>>
>>> to avoid code that checks for a full buffer and subsequent expansion.
>>
>> AFAICT there is no ('unchecked') resize equivalent...
>
> uninitialised_resize()?
Is this also 'unchecked'?
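(For illustration, the kind of usage pattern where unchecked_push_back pays
off...assuming auto_buffer also offers a vector-like reserve(); make_value is
just a placeholder:)

    boost::auto_buffer<int, 64> buffer;
    buffer.reserve( n );                               // one capacity check/expansion
    for ( std::size_t i( 0 ); i != n; ++i )
        buffer.unchecked_push_back( make_value( i ) ); // no per-element check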
>> ...Perhaps it would be a 'happier' solution to go with only one
>> option (policies/tags) or to replace both with a proxy 'unchecked'
>> class...So that when you want 'unchecked' access you call the proxy
>> getter and in effect get different semantics through an identical
>> interface...This would also 'play better' with Boost.Range and
>> <algorithms>...
>
> How? Range algorithms use iterators and not much else.
>
> boost::push_back( cont, range )
By calling/passing boost::push_back( cont.unchecked(), range ) or
boost::push_back( cont.uninitialised(), range ) ;)
> boost::push_back( cont, range )
>
> does not call cont.push_back(), and is the preferred way compared
> to using copy().
Yes, it calls insert() which still benefits from 'unchecked' resizing...
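(To make the proxy idea concrete, a rough sketch...the unchecked() getter and
the proxy type below are of course hypothetical, not something auto_buffer
currently provides:)

    // The proxy exposes the same member names as the real container but maps
    // them to the unchecked/uninitialised operations, so generic code like
    // boost::push_back( cont.unchecked(), range ) needs no changes;
    // auto_buffer::unchecked() would simply return such an object.
    template <typename Buffer>
    class unchecked_proxy
    {
    public:
        explicit unchecked_proxy( Buffer & buffer ) : buffer_( buffer ) {}

        typedef typename Buffer::value_type value_type;

        void push_back( value_type const & x )
        { buffer_.unchecked_push_back( x ); }

        // ...insert()/resize() would likewise forward to the unchecked
        // versions, and begin()/end() to the underlying buffer...

    private:
        Buffer & buffer_;
    };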
>> The gap between std::array and std::vector is larger than it may
>> seem...and it might be worthwhile to use this opportunity to fill it
>> completely...Besides a resizable stack-only/fixed-max-size buffer and a
>> stack-to-heap expandable buffer (auto buffer) there is also a need for a
>> fixed-size (non-resizable) buffer whose size is dynamically determined,
>> which also comes in two flavours: a stack-only portable alloca wrapper
>
> alloca() does not seem to apply here: think about where you would put the
> call.
Of course, the alloca/'dynamic-stack-based' version would have to use a
'factory macro'...it is still possible to make it type-safe...the macro
would only allocate the raw storage in the calling function's stack frame and
pass it on to a typed factory function...
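(A rough sketch of that idea...stack_buffer and the macro are of course only
illustrative names, and alloca's header/spelling is platform-specific:)

    #include <alloca.h>   // _alloca/<malloc.h> on MSVC
    #include <cstddef>

    // Non-owning, fixed-size view over caller-provided stack storage.
    template <typename T>
    class stack_buffer
    {
    public:
        stack_buffer( void * const storage, std::size_t const n )
            : p_( static_cast<T *>( storage ) ), n_( n ) {}

        T * begin() { return p_; }
        T * end  () { return p_ + n_; }
        std::size_t size() const { return n_; }

    private:
        T         * const p_;
        std::size_t const n_;
    };

    // The macro is what performs the alloca() so that the storage lives in
    // the calling function's frame (a plain wrapper function cannot do this,
    // and alloca() must not be called inside another call's argument list);
    // only the typed factory/constructor ever sees the raw pointer.
    #define MAKE_STACK_BUFFER( name, T, n )                                  \
        void * const name##_storage( alloca( (n) * sizeof( T ) ) );          \
        stack_buffer<T> name( name##_storage, (n) )

...e.g. MAKE_STACK_BUFFER( buffer, float, n ); and then 'buffer' is used as a
fixed-size range...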
>> and a heap version...both can be modeled with a const iterator_range
>> (the only essential difference being that the heap version has a
>> non-trivial destructor that calls delete[]...)...
>
> As I said, I'm not sure we need all these variants.
I could point out that I have use for both, for example in my GIL.IO2 code,
however IMO no (other) justification is necessary considering that 'all
these variants' already exist as C counterparts (malloc and alloca) as well
as in countless in-house (C++) versions...
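(And the 'heap flavour' mentioned above really is little more than this...a
sketch only, heap_range is not an existing Boost class:)

    #include <boost/range/iterator_range.hpp>
    #include <cstddef>

    // Fixed, runtime-sized heap buffer: an iterator_range that owns its
    // storage, the only non-trivial part being the delete[] in the destructor.
    template <typename T>
    class heap_range : public boost::iterator_range<T *>
    {
    public:
        explicit heap_range( std::size_t const n )
            : boost::iterator_range<T *>( make( n ) ) {}

        ~heap_range() { delete[] this->begin(); }

    private:
        static boost::iterator_range<T *> make( std::size_t const n )
        {
            T * const p( new T[ n ] );
            return boost::iterator_range<T *>( p, p + n );
        }

        // owns a raw array: copying would cause a double delete[]
        heap_range( heap_range const & );
        heap_range & operator=( heap_range const & );
    };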
> For example, if
> the buffer does not need to be expanded, then it is likely the compiler
> can determine that the capacity_ member is not used.
From my experience I'd consider that (highly) unlikely (if any resizing
function is called)...
> As for the conjectured extra indirection, then I'm not sure it can become
> any major performance problem,
Again, sometimes it will, sometimes it will not...with a library like this we
should not go for 'rules of thumb' but rather provide a sharp set of
tools...
As hinted before, code size is (as always, at least in my book) just as much
a concern here as speed, and saving a dozen instructions on every use can be
significant when combating template bloat (for example, in GIL.IO with
certain backends like LibTIFF there are hundreds of possible format
combinations, each of which instantiates its own set of function templates
that use local temporary buffers)...
> nor do I see how one can avoid it when we need to go for the heap in
> certain cases.
Exactly, that's why I asked for a no-heap policy...however now I think it's
better to model each of the different 'variants' with a separate
type/template...
--
"What Huxley teaches is that in the age of advanced technology, spiritual
devastation is more likely to come from an enemy with a smiling face than
from one whose countenance exudes suspicion and hate." Neil Postman