Boost :
From: Dave Harris (brangdon_at_[hidden])
Date: 2005-06-24 19:12:12
In-Reply-To: <42BC96BD.90901_at_[hidden]>
darren_at_[hidden] (Darren Cook) wrote (abridged):
> > Admittedly, hash-based checking is not 100% safe, ...
>
> I think this is unacceptable. If I have two objects A and B, and they do
> happen to hash to the same value, then B won't get saved. However good
> the hash code is, it does not matter: this would stop me using the
> serialization library in anything mission critical.
As I understand it, this hashing does not change the semantics of correct
client code, nor does it change what is stored in any way. It merely
provides a check for buggy client code: specifically, client code which
either changes an object while serialisation is in progress or, even worse,
serialises two distinct objects at the same address. Both of those things
are unsupported. The hashing doesn't detect all such mistakes, but it
should detect a high percentage of them.
When it does so, it flags a run-time error. It doesn't use the hash to
decide whether to store the object, or otherwise try to work around the
mistake. The hash value itself is never stored.
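To make that concrete, here is a minimal sketch of such a check, assuming
the archive keeps a private address-to-hash table. None of these names are
Boost.Serialization's, and the byte-wise hash is only a stand-in:

    #include <cstddef>
    #include <map>
    #include <stdexcept>

    class checking_archive {
        std::map<const void*, std::size_t> hashes_;  // in memory only

        // Stand-in FNV-1a hash over the object's raw bytes; a real
        // library would hash the serialised representation instead.
        static std::size_t hash_bytes(const void* p, std::size_t n) {
            const unsigned char* b = static_cast<const unsigned char*>(p);
            std::size_t h = 2166136261u;
            for (std::size_t i = 0; i != n; ++i)
                h = (h ^ b[i]) * 16777619u;
            return h;
        }

    public:
        template<class T>
        void save(const T& obj) {
            const std::size_t h = hash_bytes(&obj, sizeof obj);
            std::map<const void*, std::size_t>::const_iterator it =
                hashes_.find(&obj);
            if (it == hashes_.end())
                hashes_[&obj] = h;       // first time this address is seen
            else if (it->second != h)
                // Same address, different contents: the object changed
                // mid-serialisation, or a distinct object now lives here.
                throw std::runtime_error("tracked object changed");
            // ... actually write obj to the archive here ...
        }
    };

The table lives only in memory for the duration of the save, which matches
the point above that the hash value itself is never stored.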
Arguably it should assert. In fact, if I understand correctly, the hashing
framework could be conditionally compiled out with NDEBUG or similar.
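That could mirror how assert itself is disabled; hypothetical macro and
helper names:

    #include <cstddef>

    #ifndef NDEBUG
        // Debug builds: forward to the real check (defined elsewhere in
        // this hypothetical framework).
        void verify_object_hash(const void* addr, std::size_t h);
    #   define VERIFY_OBJECT_HASH(addr, h) verify_object_hash((addr), (h))
    #else
        // Release builds: the check vanishes entirely.
    #   define VERIFY_OBJECT_HASH(addr, h) ((void)0)
    #endif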
Have I got this right? I can also see some value in always calculating the
hash, storing it in the archive, and using it to check integrity when
loading it back in - but I don't think that was the original intention.
Also, it's not clear what should happen with pointer values, which probably
won't hash consistently between runs.
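If that alternative were wanted, it might look something like this
self-contained sketch, where an int stands in for the object and the text
format and function names are invented for illustration:

    #include <cstddef>
    #include <sstream>
    #include <stdexcept>
    #include <string>

    // Stand-in FNV-1a hash of the serialised text.
    std::size_t hash_text(const std::string& s) {
        std::size_t h = 2166136261u;
        for (std::size_t i = 0; i != s.size(); ++i)
            h = (h ^ static_cast<unsigned char>(s[i])) * 16777619u;
        return h;
    }

    // Save: serialise the value, then prefix the hash of that text.
    std::string save_with_checksum(int value) {
        std::ostringstream body;
        body << value;
        std::ostringstream out;
        out << hash_text(body.str()) << ' ' << body.str();
        return out.str();
    }

    // Load: re-hash the payload and compare with the stored hash.
    int load_with_checksum(const std::string& archive) {
        std::istringstream in(archive);
        std::size_t stored;
        std::string payload;
        in >> stored >> payload;
        if (hash_text(payload) != stored)
            throw std::runtime_error("archive failed integrity check");
        std::istringstream body(payload);
        int value;
        body >> value;
        return value;
    }

Pointer values would have to be left out of such a hash for exactly the
reason above: the addresses written on one run won't match the next.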
-- Dave Harris, Nottingham, UK.