
Subject: Re: [boost] Serialisation: Is is_trivial<T> a sufficient precondition to bypass serialisation?
From: Niall Douglas (s_sourceforge_at_[hidden])
Date: 2015-02-06 09:49:27

On 6 Feb 2015 at 13:52, Olaf van der Spek wrote:

> >> So if I try to write a std::vector that is too large to fit in a
> >> scatter/gather buffer, my write will fail?
> >
> > AFIO observes the IOV_MAX batch limit, so on POSIX with pwritev()
> > support no it should never fail, though of course you lose atomicity
> > between IOV_MAX batches. On POSIX without pwritev() support AFIO
> > issues each buffer singly anyway, so your atomicity is per buffer.
> >
> > On Windows if buffered i/o is on then there is no limit and atomicity
> > is per buffer (Windows has no scatter gather file i/o functions for
> > buffered files). If buffered i/o is off, the WriteFileGather() API
> > currently has an unofficial limit of 32Mb on x64 operating systems
> > due to NT kernel structure limits. Because this limit is not
> > documented and not stable even across 32 bit vs 64 bit systems never
> > mind between Intel and ARM, AFIO passes through your request as-is,
> > and returns an error if the WriteFileGather() API does.
> What's the definition of atomicity here?

Atomicity on filing systems means that the whole of a write operation
is seen as a single unit by all readers of that file, whether in
other processes or on other machines [1]. So if you write 64Kb of
data, readers see either none of the 64Kb or all of it. You can never
observe it mid-write.

This feature lets you do fun stuff like distributed mutual exclusion
algorithms using atomic appends as the message channel, and extent
zeroing at the front of the file to stop the file growing in
physical allocation. You can see an example of such an algorithm at
c_file_io/atomic_logging.html, where performance, except on ZFS, is
very respectable. Plus the code is completely platform independent.
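A much simplified sketch of the atomic-append message channel, assuming POSIX O_APPEND semantics (the kernel serialises concurrent appends so fixed-size records from racing processes never interleave). `LockRequest`, `append_request` and `current_holder` are hypothetical names invented for this sketch; this is not the algorithm at the link above, just the underlying primitive:

```cpp
#include <fcntl.h>
#include <sys/types.h>
#include <unistd.h>

// Hypothetical fixed-size message each contender appends to the channel.
struct LockRequest {
    pid_t pid;
    char  pad[60];  // pad to a fixed record size
};

// Append one request with O_APPEND: concurrent appends are serialised
// by the kernel, so the order of records in the file yields a total
// order over contenders -- the basis of the mutual exclusion scheme.
bool append_request(const char *path, pid_t pid) {
    int fd = ::open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0) return false;
    LockRequest req{};
    req.pid = pid;
    ssize_t n = ::write(fd, &req, sizeof req);  // one atomic append
    ::close(fd);
    return n == static_cast<ssize_t>(sizeof req);
}

// The earliest record in the file identifies the lock holder; a
// contender proceeds only when its own record is that earliest one.
pid_t current_holder(const char *path) {
    int fd = ::open(path, O_RDONLY);
    if (fd < 0) return -1;
    LockRequest req{};
    ssize_t n = ::read(fd, &req, sizeof req);
    ::close(fd);
    return n == static_cast<ssize_t>(sizeof req) ? req.pid : -1;
}
```

A real implementation also has to garbage-collect old records (that is where the extent zeroing at the front of the file comes in, so the file's physical allocation stays bounded), which this sketch omits.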

[1]: Excluding mmaps.


ned Productions Limited Consulting
