From: Robert Ramey (ramey_at_[hidden])
Date: 2002-11-28 12:49:06
>From: Matthias Troyer <troyer_at_[hidden]>
>In any case the library user should be reminded that short, int and
>long are never portable, but that by using int*_t and appropriate
>archive formats one can achieve portable serialization.
>The base classes basic_[i|o]archive define functions to read/write
>fundamental types from/to an archive.
Fundamental types in C++ are unsigned char, signed char, unsigned
short int, signed short int, ... unsigned long, signed long. In addition
to the above, some compilers define int32_t and others as fundamental
types. It is an unfortunate accident of history that the nomenclature is
confusing. It was an unfortunate original design choice that the sizes of
int, char, etc. were not defined as a specific number of bits. However, at
the time there were machines with 9, 16, 18, 24, 32, 36 and 48 bit words
in common usage. What else were the authors to do?
It is common among programmers to define types int16_t, etc.
using the typedef facility to map integers of a specific size between
machines. This does no harm and can facilitate portability. However,
it in no way alters the fundamental types that are available on a given
platform.
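For illustration, here is a minimal sketch of that practice. The widths
chosen are assumptions for one particular platform, not something the
serialization library supplies; in practice one would use typedefs such
as those provided by <boost/cstdint.hpp>:

    // A sketch of per-platform typedefs mapping fixed-size names onto
    // the fundamental types. This assumes a platform where short is
    // 16 bits and int is 32 bits; a 16-bit platform would map int32_t
    // to long instead.
    typedef signed char int8_t;
    typedef short int16_t;
    typedef int int32_t;
    typedef unsigned char uint8_t;
    typedef unsigned short uint16_t;
    typedef unsigned int uint32_t;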
The serialization library includes three example archive implementations.
One is based on portable text while another is non-portable binary
("non-portable" refers to the archive file created, not to the
compilation and execution of the program).
In the portable text archives, output maps integers to text strings
while input maps the strings back to integers.
On Output
    int -> string
On Input
    string -> int
Even if the ints are of different sizes on the platform creating the archive
and the platform reading the archive, the values of the integers will be
preserved as long as the value of the integer does not exceed the
maximum value permitted on the reading platform. That is why the
portable text archives are in fact portable. Also, by relying upon
the int<->string mapping implemented in the stream i/o system, the
case where an integer is too large to be represented as an int
on the target machine should throw an i/o exception.
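A minimal sketch of that mapping using std::stringstream follows. This
illustrates the idea only and is not the library's actual implementation;
whether an overflowing extraction is reported through the stream's fail
state is implementation-dependent, hence the explicit check:

    #include <sstream>
    #include <string>
    #include <stdexcept>

    // save: render any integer as a decimal string
    std::string save_int(long v){
        std::ostringstream os;
        os << v;
        return os.str();
    }

    // load: convert the string back; if the value cannot be
    // represented as an int on this platform, the extraction
    // fails and we signal it with an exception
    int load_int(const std::string & s){
        std::istringstream is(s);
        int v;
        is >> v;
        if(is.fail())
            throw std::range_error("value too large for int on this platform");
        return v;
    }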
The fact that the programmer may have typedef'ed int32_t, etc. in no
way affects the operation of the system as described above. The usage
of these typedefs could, however, help avoid the exceptions described
above. For example, if the writing platform had a 32-bit int and saved
a value exceeding 2^16 to the archive, while the reading platform had
a 16-bit int, a problem would normally occur. If the user had typedef'ed
int32_t as long on the 16-bit machine and used int32_t in his code,
then reading the archive would work:
On Output
    int32_t -> int (32-bit) -> string
On Input
    string -> long -> int32_t
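A hypothetical fragment of such user code. Here oa and ia stand for the
text output and input archive objects described above, with the
stream-like << and >> operators used in the library's examples:

    // On the writing (32-bit) platform, int32_t is a typedef for int:
    int32_t x = 100000;   // too big for a 16-bit int, fine in 32 bits
    oa << x;              // stored in the archive as the string "100000"

    // On the reading (16-bit) platform, int32_t is a typedef for long:
    int32_t y;
    ia >> y;              // "100000" -> long -> int32_t, value preserved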
So archiving is perfectly compatible with the practice of typedef'ing
int32_t, etc., but doesn't require it. This is as it should be. There is
not, nor should there be, any notion of predefined typedefs in the
archive. It is not the function of the library to enforce a particular
coding style, but rather to be compatible with the widest array of
practices.
Should a user desire to build on top of the library by deriving his own
implementation of binary or some other type of archive, he is free to
enforce any practice he wants. If the implementation is to be used within
a closed group of programmers, this might be a good idea. But I doubt
it's a good idea if you want your library to get wide usage. In any case,
the basic archive interface permits implementations that accommodate
either viewpoint.
Robert Ramey