From: Dave Harris (brangdon_at_[hidden])
Date: 2004-05-05 15:10:51
In-Reply-To: <95C853D0-9EAF-11D8-BFD9-000A95DC1C98_at_[hidden]>
troyer_at_[hidden] (Matthias Troyer) wrote (abridged):
> As I see it the current serialization library allows both options,
> depending on your preferences. Any archive may choose which types it
> views as fundamental, but both have their disadvantages:
I would use variable-length integers. Use as many bytes as you need for
the integer's actual value. That way the archive format is independent of
whether short, int, long or some other type was used by the outputting
program. It can also give you byte-order independence for free.
Specifically, I'd use a byte-oriented scheme where the low 7 bits of each
byte contribute to the current number, and the high bit says whether there
are more bytes to come.
    void load( uintmax_t &result ) {
        result = 0;
        while (true) {
            unsigned char byte;
            *this >> byte;
            // Accumulate the low 7 bits, most significant group first.
            result = (result << 7) | (byte & 0x7f);
            // A clear high bit marks the final byte.
            if ((byte & 0x80) == 0)
                return;
        }
    }
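For completeness, here is a sketch of the matching save side under the same
convention: 7-bit groups are written most significant first, with the high
bit set on every byte except the last so the loop above knows when to stop.
The member function name and the *this << byte output operator are my
assumptions, mirroring the load above rather than any existing interface.

    void save( uintmax_t value ) {
        // Split the value into 7-bit groups, least significant first.
        unsigned char groups[ (sizeof(uintmax_t) * 8 + 6) / 7 ];
        int count = 0;
        do {
            groups[count++] = (unsigned char)(value & 0x7f);
            value >>= 7;
        } while (value != 0);
        // Emit most significant group first; flag continuation bytes.
        while (count-- > 0) {
            unsigned char byte = groups[count];
            if (count > 0)
                byte |= 0x80; // more bytes follow
            *this << byte;
        }
    }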
Thus integers less than 128 take 1 byte, those less than 16,384 take 2
bytes, and so on. This gives a compact representation while still
supporting 64-bit ints and beyond. You can use boost::numeric_cast<> or
similar to bring the uintmax_t down to a smaller size.
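As a hedged illustration of that last point, a caller might narrow the
decoded value like this (load_integer and the archive's load() member are
names I'm assuming for the example; numeric_cast itself is the real Boost
facility and throws if the value does not fit in the target type):

    #include <stdint.h>     // uintmax_t, as used in the load above
    #include <boost/cast.hpp> // boost::numeric_cast

    // Illustration only: Archive is assumed to expose the load() member
    // sketched above. numeric_cast throws if the decoded value does not
    // fit into Integer.
    template<class Archive, class Integer>
    void load_integer( Archive & ar, Integer & value ) {
        uintmax_t wide;
        ar.load( wide );
        value = boost::numeric_cast<Integer>( wide );
    }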
-- Dave Harris, Nottingham, UK