
Boost Users:

From: Peter Dimov (pdimov_at_[hidden])
Date: 2006-12-04 10:24:45


Boris Mansencal wrote:

> It is working... but my tests show better performances when I use my
> hash function (using width & height) instead of this hash_combine
> solution.

>> return ((size_t)p.x()<<1)*P_width + (size_t)(p.y()<<1);

What type does p.x() have in your case, and why do you shift it left by one?
Shifting makes the hash value always even, which in principle discards one
bit of entropy if the bucket count is even (but not when it's prime).

Why do you multiply p.x() by the width? Isn't x() supposed to be within
0..width and y() within 0..height? It seems that you need to either multiply
y() by width or x() by height.

> Is there no solution to my question ?

You can't have a context-dependent hash_value, but you can pass a 'Hasher'
function object to unordered_map that stores your context.

Depending on the typical values of your width, it may also be possible to
just use a fixed width of, say, 65536.
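With a fixed width of 65536 (assuming x and y each fit in 16 bits), the two
coordinates can simply occupy disjoint bit ranges, and no per-map context is
needed at all. A sketch:

```cpp
#include <cstddef>

// Fixed "width" of 65536: y goes in the high bits, x in the low 16.
// Collision-free as long as x and y are each below 65536.
std::size_t fixed_width_hash(std::size_t x, std::size_t y)
{
    return (y << 16) | x;
}
```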

Unfortunately, there are no "context free" rules regarding hash functions;
you have to find the one that works best in your particular situation, even
if it doesn't make much sense. :-) One problem with this approach is that it
can tie you to a particular unordered_map implementation. We've tried to
make hash_combine work adequately for as many cases as possible, but this
makes it a bit slower to compute.


Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net