
From: David White (dave_at_[hidden])
Date: 2002-04-18 17:21:57

On Fri, 2002-04-19 at 04:55, hicks wrote:
> Re 6. Re: Re: Query of Interest - vlarray (David White)
> - I don't think separate storage is necessary (see below).
> - I meant to suggest that one might want to allow heap objects to use
> the stack as well. Suppose the heap object belongs to an auto_ptr
> which is itself on the stack, but the order of destruction inside the
> object is not the same as the order of construction (even if just
> because the code got written that way and I'm too busy to fix it for
> appearance's sake now).

but if you want to do this, why not just use a general-purpose small
object allocator? I would suggest that in this case the user should use
one of those, and then, only once they can guarantee LIFO destruction
order, convert to the LIFO allocator if they think the performance gain
warrants it.
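To make the distinction concrete, here is a minimal sketch of what such a fixed-buffer LIFO allocator might look like. The class name, interface, and alignment policy are all my own assumptions for illustration, not an actual Boost component:

```cpp
#include <cassert>
#include <cstddef>

// Sketch of a fixed-buffer LIFO allocator: deallocations must occur in
// the exact reverse order of allocations, which lets deallocate()
// simply roll the stack pointer back with no bookkeeping.
class lifo_allocator {
public:
    lifo_allocator(char* buffer, std::size_t size)
        : begin_(buffer), top_(buffer), end_(buffer + size) {}

    void* allocate(std::size_t n) {
        // round up so subsequent allocations stay suitably aligned
        n = (n + alignment - 1) & ~(alignment - 1);
        if (end_ - top_ < static_cast<std::ptrdiff_t>(n)) return 0;
        void* p = top_;
        top_ += n;
        return p;
    }

    // The caller must pass the most recently allocated live block.
    void deallocate(void* p, std::size_t n) {
        n = (n + alignment - 1) & ~(alignment - 1);
        assert(static_cast<char*>(p) + n == top_ && "non-LIFO deallocation");
        top_ = static_cast<char*>(p);
    }

private:
    static const std::size_t alignment = sizeof(void*);
    char* begin_;
    char* top_;
    char* end_;
};
```

The assert is the point: with a general small-object allocator an out-of-order free is legal, while here it is a hard programming error.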

> - I also meant to suggest that out of order destruction might otherwise
> be desirable.
> Suppose you want the functionality to resize a pseudo-valarray. Then
> you have to "free" a block not at the end of the stack, and allocate
> a new block which will be freed after the others that come before it.

Well, I think if you want to do this, you should use a vector, possibly
with a custom allocator. I think disallowing resizing of vlarrays
encourages good programming practice too: use the least-flexible data
structure which does everything you want. Adding the flexibility of
vector could also lead to misunderstandings of what exactly vlarray
does, causing it to be used in inefficient patterns.

> - I did not mean to suggest that mutexes would have any use for
> pseudo-valarray.

ok, sorry; my mistake.

> I asked about the relative costs of (a) a mutex call compared to (b)
> a heap allocation call. I asked this thinking that if one was
> dissatisfied with heap allocation performance, and one knew these
> relative costs, and if the mutex call (a) was much more expensive
> than (b) overall, then simply changing to a thread-specific heap
> might be enough to achieve a satisfactory performance improvement.

These costs vary heavily, of course, but I think they are pretty
comparable: often a mutex operation is more expensive than a general
allocation. Herb Sutter has a good article comparing them (mostly in
the context of strings, but with wider implications in mind) here:

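The relative costs are easy enough to measure directly. Here is a rough micro-benchmark sketch (written in modern C++ purely for illustration; the function names are my own, and absolute numbers vary wildly by platform and contention level, so only the ratio is interesting):

```cpp
#include <chrono>
#include <mutex>
#include <utility>

// Times `iters` repetitions of an operation and returns total
// nanoseconds elapsed on a monotonic clock.
template <class F>
long long time_ns(F f, int iters) {
    using clock = std::chrono::steady_clock;
    clock::time_point start = clock::now();
    for (int i = 0; i < iters; ++i) f();
    return std::chrono::duration_cast<std::chrono::nanoseconds>(
               clock::now() - start).count();
}

// Returns {uncontended mutex lock/unlock cost, small new/delete cost}
// in nanoseconds for `iters` iterations each.
inline std::pair<long long, long long> compare_costs(int iters) {
    std::mutex m;
    long long mutex_ns = time_ns([&m] { m.lock(); m.unlock(); }, iters);
    long long alloc_ns =
        time_ns([] { char* p = new char[32]; delete[] p; }, iters);
    return std::make_pair(mutex_ns, alloc_ns);
}
```

Note this measures the uncontended case only; under contention the mutex side gets much worse, which is exactly why a thread-specific heap can pay off.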
> Really, I wish there were a standard pair of calls: thread_new and
> thread_delete. I think it's the exception rather than the rule to
> create objects which need to be deleted in a different thread. In
> general, I find the tendency for libraries (e.g., strings, streams)
> to contain secret mutexes rather awful. (At least we should have the
> option of choosing.)
> - I do agree with you about forcing errors if usage was not as
> intended. If you intend to allow LIFO only, then it should be so.

ahhh ok, I was thinking you meant that the allocator "should be
deallocated in LIFO, but if it isn't we will still handle other cases
just in case...".

> - How to implement a blocklist without using memory outside the
> stack. I would suggest one of 2 things:
> 1. Put a 32K limit on the block size, and store a short just beyond
> the end of every allocated block. In that short, the top bit would
> indicate "free", and the other 15 would give the size.
> 2. The same thing with a 128 byte limit and a byte at the end of
> every block.
> Then the list could be traversed backwards beginning from the stack
> end pointer, and no separate storage (outside of the stack) would be
> required.

yes, this would work to get rid of the need for another structure. I'm
not convinced it wouldn't involve additional time overhead; however, I
would be interested in comparing implementations of each to see.
Certainly, if your suggestion doesn't incur significant overhead then
it would be worth using. Also, the short would have to be properly
aligned, although this could probably be done without too much hassle.
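For the record, option 1 above might be encoded something like this. The layout (tag placed at the high-address end of each block, so the list can be walked backwards from the top of the stack) and all names are assumptions of mine, not code from this thread:

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// 16-bit tag stored just past each block: top bit = "free" flag,
// low 15 bits = block size (hence the 32K block-size limit).
typedef unsigned short tag_t;
const tag_t FREE_BIT = 0x8000;
const tag_t SIZE_MASK = 0x7FFF;

// Write the tag just beyond the end of a block of `size` bytes.
// memcpy avoids any alignment requirement on `block + size`.
inline void write_tag(char* block, std::size_t size, bool free_flag) {
    assert(size <= SIZE_MASK);
    tag_t t = static_cast<tag_t>(size) | (free_flag ? FREE_BIT : 0);
    std::memcpy(block + size, &t, sizeof t);
}

// Read the tag sitting immediately below `top` (the end of the
// topmost block's tag) and step down to the start of that block.
inline char* prev_block(char* top, std::size_t& size, bool& free_flag) {
    tag_t t;
    std::memcpy(&t, top - sizeof(tag_t), sizeof t);
    size = t & SIZE_MASK;
    free_flag = (t & FREE_BIT) != 0;
    return top - sizeof(tag_t) - size;
}
```

Using memcpy for the tag sidesteps the alignment concern entirely, at the possible cost of a couple of byte moves; storing the tag at an aligned offset instead would trade a little space for direct loads.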


Boost list run by bdawes at, gregod at, cpdaniel at, john at