From: Howard Hinnant (hinnant_at_[hidden])
Date: 2004-01-02 21:21:32
On Jan 2, 2004, at 7:48 PM, Jeremy Maitin-Shepard wrote:
>> Why is that better? It seems overconstrained to me.
> I have a hard time seeing what a user would do with the remaining
> elements other than erase them.
Perhaps a hash container holds expensive-to-compute elements. If the
hash function throws and some of the elements are lost as a result, the
program may decide to catch the exception, correct the problem, and
then rebuild the lost elements. It could examine the container and
discover what needs to be recomputed. Why force it to start over from
an empty container?
Otoh, if the program would prefer to start over from an empty
container, it is trivial for it to do so: it just clears it.
Conclusion: A library should never needlessly throw information away.
It is always easy for your client to throw information away, but often
difficult for your client to recover lost information.
> Or, the issue could be avoided by caching the hash codes.
The client can also easily decide to cache hash codes in the key and
hash function itself. For example, here is how you could cache hash
codes for std::string:
typedef std::pair<std::size_t, std::string> Key;

struct hashfunc
{
    std::size_t operator()(const Key& x) const
        {return x.first;}
};

typedef std::tr1::unordered_set<Key, hashfunc> Set;

Set m;
Key word;
word.second = "...data...";
word.first = std::tr1::hash<std::string>()(word.second);
m.insert(word);
// now any time m needs to hash this key, it just looks up the first
// part of the pair.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk