
From: Dave Abrahams (abrahams_at_[hidden])
Date: 2000-06-01 06:45:54


on 6/1/00 7:20 AM, Dave Abrahams at abrahams_at_[hidden] wrote:

>> template<class Iterator>
>> void insert_unique_x(Iterator first, Iterator last) {
>>     m_impl.insert(m_impl.end(), first, last);
>>     std::sort(m_impl.begin(), m_impl.end(),
>>               compose_f_gx_hy(m_key_compare,
>>                               KeyOfValue(),
>>                               KeyOfValue()));
>>
>>     m_impl.erase(std::unique(m_impl.begin(),
>>                              m_impl.end(),
>>                              compose_f_gx_hy(m_key_compare,
>>                                              KeyOfValue(),
>>                                              KeyOfValue())),
>>                  m_impl.end());
>> }
>>
>> would be advantageous in certain circumstances.
>
> That's a cool idea, too. But I think you'd need to use stable_sort() here.
> Just because elements are equivalent under the comparison criterion doesn't
> mean that they're identical ;)
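
A self-contained sketch of the stable variant, assuming for illustration that the keys are the elements themselves (an identity KeyOfValue) and that m_key_compare is operator<. Note that std::unique wants an equivalence predicate, so the ordering has to be wrapped as `!(a < b) && !(b < a)` rather than passed in directly:

```cpp
#include <algorithm>
#include <vector>

// Hypothetical free-function stand-in for the member shown above.
// Appends [first, last), stable-sorts (so the first of each run of
// equivalent elements is the one that survives), then erases duplicates.
template <class Vector, class Iterator>
void insert_unique_x(Vector& v, Iterator first, Iterator last) {
    v.insert(v.end(), first, last);
    std::stable_sort(v.begin(), v.end());   // stable: equivalent elements keep order
    v.erase(std::unique(v.begin(), v.end(),
                        [](const auto& a, const auto& b) {
                            // equivalence under the ordering, not equality
                            return !(a < b) && !(b < a);
                        }),
            v.end());
}
```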

Oh, and another thing: in this case you could run out of memory growing the
vector to its intermediate size, before the duplicates are erased. If there
were lots of duplicate elements, an operation that avoided the intermediate
growth might have succeeded (or might even have been a no-op). You don't
want to throw an exception in a case that could have succeeded, so I think
you'd have to approach it this way:

1. adaptively allocate a temporary buffer with malloc(), and if that
succeeds:
2. use partial_sort_copy to copy in the new elements: O(M log M)
3. count the number of unique new elements: O(N + M)
4. reserve memory in the vector
  -and here's where it gets hairy-**
5. linearly merge in the new elements: O(N + M)
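
The steps above can be sketched for a sorted vector<int> of unique elements. All names here are illustrative, not from any real library; the malloc() fallback of step 1 is modeled by an ordinary temporary vector, and step 5 merges backward in place so no second full-size buffer is needed:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Merge [first, last) into sorted, duplicate-free v, keeping it sorted
// and duplicate-free, with only O(M) extra storage.
void merge_unique(std::vector<int>& v, const int* first, const int* last) {
    // steps 1-2: sort a copy of the new elements in a temporary buffer
    std::vector<int> buf(first, last);
    std::sort(buf.begin(), buf.end());
    buf.erase(std::unique(buf.begin(), buf.end()), buf.end());

    // step 3: count new elements not already present, O(N + M) merged scan
    std::size_t fresh = 0;
    {
        auto i = v.begin();
        for (int x : buf) {
            while (i != v.end() && *i < x) ++i;
            if (i == v.end() || x < *i) ++fresh;
        }
    }
    if (fresh == 0) return;   // the no-op case: nothing can fail

    // step 4: grow once, up front, so the merge below cannot fail midway
    std::size_t old_size = v.size();
    v.resize(old_size + fresh);   // a real container would construct in place

    // step 5: merge from the back in linear time, skipping duplicates
    auto out = v.rbegin();
    auto a = v.rbegin() + static_cast<std::ptrdiff_t>(fresh);  // last old element
    auto a_end = v.rend();
    auto b = buf.rbegin(), b_end = buf.rend();
    while (b != b_end) {
        if (a != a_end && !(*a < *b) && !(*b < *a)) { ++b; continue; } // duplicate
        if (a != a_end && *b < *a) *out++ = *a++;
        else                       *out++ = *b++;
    }
    while (a != a_end) *out++ = *a++;
}
```

Because the only allocation happens before any element is moved, an out-of-memory failure leaves v untouched, which is the exception-safety property the steps are after.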

**the problem here is that you need license to do something which only the
vector implementation can do, which is to construct the elements you're
moving into the vector's uninitialized memory. We could, I suppose, do all
of this in the temporary buffer, but I'm not sure it's worth it. The more
adaptive optimizations you perform, the bigger the code gets.
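
For what the footnote calls "license": outside the vector, constructing objects directly into uninitialized storage is exactly what std::uninitialized_copy (or placement new) provides. A hypothetical sketch of step 1 plus such construction into a raw malloc() buffer:

```cpp
#include <cstdlib>
#include <memory>

// Illustrative only: adaptively allocate raw storage and construct the new
// elements into it. Returns nullptr on allocation failure, signalling the
// caller to fall back to the simple (non-adaptive) path instead of throwing.
int* make_buffer(const int* first, const int* last, std::size_t& n) {
    n = static_cast<std::size_t>(last - first);
    void* raw = std::malloc(n * sizeof(int));     // step 1: adaptive malloc()
    if (!raw) return nullptr;
    int* mem = static_cast<int*>(raw);
    std::uninitialized_copy(first, last, mem);    // construct into raw memory
    return mem;                                   // caller frees with std::free
}
```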

-Dave


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk