From: David Abrahams (dave_at_[hidden])
Date: 2006-02-03 19:45:53
Thorsten Ottosen <tottosen_at_[hidden]> writes:
> David Abrahams wrote:
>> Thorsten Ottosen <tottosen_at_[hidden]> writes:
>>>David Abrahams wrote:
>>>>Thorsten Ottosen <tottosen_at_[hidden]> writes:
>>>It is true that, e.g., std::list<T>::size() might be O(1).
>>>But how do you detect that? (A: you can't).
>> Sure you can; you just look at your implementation and if it is O(1)
>> you write a specialization of the metafunction (or whatever).
> Well, I don't personally have access to all the implementations.
> This seems like overkill.
std::list is not the only sequence out there that might have O(1) size
and no random access iterators! Consider slist, hash_set, ...
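For concreteness, here is a minimal sketch of the kind of metafunction specialization being described. The trait name `has_constant_time_size` is illustrative, not an actual Boost API; it uses C++11 `std::true_type`/`std::false_type` where the era of this thread would have used `boost::mpl::bool_`.

```cpp
#include <list>
#include <vector>
#include <type_traits>

// Hypothetical trait: defaults to "not known to be O(1)"; a user who has
// inspected their standard library writes a specialization to opt in.
template <class Sequence>
struct has_constant_time_size : std::false_type {};

// If your implementation's std::list<T>::size() is O(1), say so:
template <class T, class A>
struct has_constant_time_size<std::list<T, A>> : std::true_type {};

// vector's size() is always O(1), so this one is safe unconditionally:
template <class T, class A>
struct has_constant_time_size<std::vector<T, A>> : std::true_type {};

static_assert(has_constant_time_size<std::list<int>>::value,
              "opted in above");
```

An algorithm can then dispatch on the trait and only take the size-based fast path when the guarantee is present.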
>>>Anyway, what issue are you really talking about? I must
>>>be missing something.
>> If there's an O(1) size available you can do loop unrolling; otherwise
>> you have to test the iterators every time through the loop.
> You mean Duff's device?
You can use Duff's device to do loop unrolling, but there are other
ways, too. I'm not talking about the specific details of how you
achieve it.
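For reference, this is the classic Duff's device: a `switch` whose case labels fall inside an unrolled `do`/`while` body, so the remainder elements are handled by jumping into the middle of the body. The function name is illustrative.

```cpp
#include <cstddef>

// Unrolled copy, eight elements per pass; the switch dispatches the
// first pass into the body at the right offset to absorb count % 8.
void duff_copy(const char* src, char* dst, std::size_t count)
{
    if (count == 0) return;                // the device assumes count > 0
    std::size_t n = (count + 7) / 8;       // number of passes through the body
    switch (count % 8) {
    case 0: do { *dst++ = *src++;
    case 7:      *dst++ = *src++;
    case 6:      *dst++ = *src++;
    case 5:      *dst++ = *src++;
    case 4:      *dst++ = *src++;
    case 3:      *dst++ = *src++;
    case 2:      *dst++ = *src++;
    case 1:      *dst++ = *src++;
            } while (--n != 0);
    }
}
```

The loop-exit test runs once per eight copies instead of once per copy, which is the entire point; whether that beats what a modern optimizer would do on its own is exactly the question raised below.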
> Do we have tests on all our platforms that suggest that Duffing
> leads to faster code?
It's much deeper than that.
> I was told that Duffing could actually hurt a modern optimizer.
Loop unrolling helps unconditionally in this sort of case because you
eliminate comparisons and branches. Some compilers have loop
unrolling built in, so you could conceivably interfere with their
smart unrolling by trying to do it yourself, but in these cases they
don't have enough knowledge to do the unrolling themselves. That's
what the traits that tell you there's an O(1) size are all about:
supplying the domain knowledge that's too high-level for the compiler.
[Compiler unrolling only works for loops that have an integer counter,
and even then such factors as whether the integer is signed or not can
cause the compiler to go back to not unrolling.]
>> Look at the big picture. A generalized algorithm suite will be
>> written in terms of property maps and cursors, not in terms of
>> iterators.
> And how does it affect the return type of range-based algorithms other
> than replacing iterator_range with cursor_range?
> I still don't see the relevance until we have an established
> cursor/pm library.
It's relevant to the design space, not to the particular issue of the
return type.
--
Dave Abrahams
Boost Consulting
www.boost-consulting.com
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk