Subject: Re: [boost] [unordered] unordered_set::erase() complexity bug?
From: Beman Dawes (bdawes_at_[hidden])
Date: 2009-11-25 08:35:58
On Tue, Nov 24, 2009 at 3:55 AM, Daniel James <daniel_james_at_[hidden]> wrote:
> > 2009/11/23 Christopher Jefferson <chris_at_[hidden]>:
> >> One way to significantly reduce this problem is to use a different hash
> >> function, which shuffles the values more.
> That will improve things, but not by much, as there'll still be a lot
> more buckets than elements. That would need to be combined with
> regularly rehashing the container manually (i.e. calling 's.rehash(0)')
> as elements are erased.
> 2009/11/23 Daniel James <daniel_james_at_[hidden]>:
> > I could apply another hash function to the hash value to make this
> > less likely, or perhaps store a pointer to the last non-empty bucket.
> > I'll think about it.
> Neither of those will work; there will still be other similar cases.
> With the current interface the only way to deal with this that I can
> think of is by changing the data structure, which will require more
> memory use. And there's been opposition to that. It could possibly use
> an iterator which only finds the next element when used, although
> that'll make a lot of other operations more expensive.
IMO, the right way to fix this is for the LWG to fix the problem they
created when they changed erase()'s interface to return an iterator
instead of void.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk