Subject: Re: [boost] [AFIO] Formal review
From: Giovanni Piero Deretta (gpderetta_at_[hidden])
Date: 2015-08-31 15:31:36


On Mon, Aug 31, 2015 at 5:18 PM, Niall Douglas
<s_sourceforge_at_[hidden]> wrote:
> On 31 Aug 2015 at 5:26, Jeremy Maitin-Shepard wrote:
>
>> > 1. Should Boost.AFIO be accepted into Boost? Please state all conditions for
>> > acceptance explicitly.
>>
>> In its current state, Boost.AFIO should NOT be accepted into Boost. However, I
>> am confident that Niall will be able to deliver a future version of AFIO that
>> should be accepted into Boost after another review. There is a great need for
>> more powerful and efficient file handling functionality in C++ than is provided
>> by iostreams and Boost.Filesystem/Filesystem TS. I appreciate the enormous
>> complexity of trying to define a multi-platform filesystem interface, and it is
>> clear that Niall has done a ton of very valuable research on this topic.
>
> Thanks.
>
>> > 2. What is your evaluation of the design?
>>
>> A key strength of the design is that it provides a common interface for
>> asynchronous operations regardless of whether they are internally implemented
>> using Windows IOCP support, async IO on FreeBSD, or regular POSIX operations via
>> a thread pool.
>>
>> Another key strength is that it exposes handle-based (i.e. *at), rather than
>> path-based APIs where possible in order to minimize race conditions, which tend
>> to be difficult or impossible to avoid completely with file system operations.
>>
>> I do have a number of concerns about several aspects of the design, however:
>>
>> Batch API:
>>
>> - It is not clear what, if any, purpose the batch API serves. The significant
>> added complexity of using it would have to be justified by a significant
>> performance advantage, but it is not clear that there is any performance
>> advantage at all.
>>
>> - I suggest removing it unless there is a strong argument otherwise.
>
> The batch API may, in the end, not prove to be performance positive
> for the file:/// URI backend. But I can see it could be of great use
> for http:// or ftp:// URI backends. Even with a zip:// URI backend,
> batch write would save a lot of rebuilding the ZIP archive.
>
>> File handle API:
>>
>> - The file handle abstraction in AFIO is very heavy weight, involving a
>> shared_ptr to a structure containing a whole bunch of fields in addition to
>> the native file handle, including a mutex, numerous flags, the path used to
>> open it, a pointer to a handle to the parent directory, a chrono time_point
>> specifying the time the file was opened, and the total number of bytes read,
>> written, and written since the last fsync using this file handle.
>>
>> - Because file handles are at the core of AFIO, it is critical that users
>> precisely understand what this abstraction involves, and specifically how it
>> differs from a native file handle.
>>
>> - I'd be much more comfortable with a much leaner abstraction, like a
>> unique_handle type that just contains the bare native handle and uses unique
>> ownership semantics (a rough sketch follows at the end of this list). The
>> fat handles impose both computational/memory
>> overhead as well as mental overhead on the programmer. I understand these
>> extra fields are probably necessary to provide various functionality,
>> particularly emulation of behavior provided by other operating systems, but it
>> would be nicer to be able to opt-in to such functionality and only pay for it
>> when needed.
>>
>> - At the very least, if these fat file handles are kept, a very strong argument
>> needs to be given as a rationale. (Perhaps you have found the overhead to be
>> negligible relative to the cost of the native file handle.)
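>>
>> A rough sketch of the leaner type I have in mind (POSIX flavour only,
>> hypothetical names, nothing assumed about AFIO beyond wrapping the raw
>> descriptor with unique ownership semantics):
>>
>>   #include <unistd.h>
>>   #include <utility>
>>
>>   class unique_handle {
>>       int fd_ = -1;
>>   public:
>>       unique_handle() = default;
>>       explicit unique_handle(int fd) noexcept : fd_(fd) {}
>>       unique_handle(unique_handle &&o) noexcept
>>           : fd_(std::exchange(o.fd_, -1)) {}
>>       unique_handle &operator=(unique_handle &&o) noexcept {
>>           if (this != &o) { reset(); fd_ = std::exchange(o.fd_, -1); }
>>           return *this;
>>       }
>>       unique_handle(unique_handle const &) = delete;
>>       unique_handle &operator=(unique_handle const &) = delete;
>>       ~unique_handle() { reset(); }   // close on destruction
>>       int native_handle() const noexcept { return fd_; }
>>       void reset() noexcept { if (fd_ != -1) { ::close(fd_); fd_ = -1; } }
>>   };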
>
> I really feel the need to strenuously disagree with the notion that
> AFIO has a "fat" handle. People have no comprehension of the cost of
> the open()/CreateFile() system call relative to a shared_ptr or any
> of the other things afio::handle does, and they are fixated on
> judging AFIO's design choice completely inappropriately as a result.
>
> To show what I mean, here is a gist demonstrating how lightweight
> AFIO's handle is, relatively speaking:
>
> https://gist.github.com/ned14/affa007440f0568e4aa6
>
> This test simply compares opening and closing a file with the same
> thing plus a shared_ptr creation and destruction.
>
> On Linux: +4.76%
> On Windows: +1.8%.
>
> Before people go "aha!", remember that afio::handle has proper error
> handling and has to configure a number of other items, including
> asking the OS for what its "real" path is and other housekeeping.
>
> When I said that the overhead of shared_ptr was about 0.5%, I meant
> it. I benchmarked it before taking the decision.
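>
> For anyone not wanting to read the gist, the shape of what is being
> compared is roughly the below (the gist is the real benchmark; this is
> only to show what is and is not measured):
>
>   #include <fcntl.h>
>   #include <memory>
>   #include <unistd.h>
>
>   int main() {
>       // Baseline: open + close only.
>       for (int n = 0; n < 100000; ++n) {
>           int fd = ::open("testfile", O_RDONLY);
>           ::close(fd);
>       }
>       // The same again, plus one shared_ptr creation and destruction
>       // per iteration. Time the two loops separately to reproduce the
>       // percentages above.
>       for (int n = 0; n < 100000; ++n) {
>           int fd = ::open("testfile", O_RDONLY);
>           auto p = std::make_shared<int>(fd);
>           ::close(fd);
>       }
>   }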
>
>> - It should be possible to construct an AFIO handle from a native file
>> descriptor.
>
> That's already in there: simply subclass afio::handle with your own
> handle type. I can't allow existing afio::handle to use third party
> descriptors because they need to be configured very exactly on
> Windows for AFIO to work correctly. This is less of a problem on
> POSIX, but it still applies there too. When the handle subclass type
> is third party, AFIO can recognise the situation and handle it
> correctly.
>
>> - On POSIX, AFIO should support supplying platform-specific open flags, like
>> O_NOATIME.
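>>
>> For example, the kind of flag pass-through I mean (Linux-specific,
>> needs _GNU_SOURCE; purely illustrative, not an AFIO call):
>>
>>   #define _GNU_SOURCE
>>   #include <fcntl.h>
>>
>>   // Ask the kernel not to update atime on reads; requires owning the
>>   // file or CAP_FOWNER.
>>   int fd = open("data.bin", O_RDONLY | O_NOATIME);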
>
> Hmm, maybe. I can see pros and cons. I've logged it to
> https://github.com/BoostGSoC13/boost.afio/issues/110.
>
>> - I'd much prefer native_handle() to return the actual appropriate native handle
>> type for the platform, e.g. int on POSIX, rather than a void *. I'm guessing
>> this is done in order to support alternative dispatchers, that aren't
>> necessarily based on OS files at all, in the future, but perhaps a better way
>> of doing this that doesn't require casting the void* to an int, can be found.
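>>
>> For concreteness, the cast I mean looks something like this on POSIX,
>> assuming the descriptor really is stored by value inside the void *:
>>
>>   #include <cstdint>
>>
>>   // given some open AFIO handle h
>>   void *nh = h->native_handle();
>>   int fd = static_cast<int>(reinterpret_cast<std::intptr_t>(nh));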
>
> It's more ABI stability. Casting void * to int is fine.
>
>> - Given that, due to the nature of AFIO, user code is very likely to execute
>> from multiple threads, relying on a thread-local current dispatcher seems
>> very error prone.
>>
>> - I suggest that the thread-local current dispatcher be eliminated, and that the
>> dispatcher always be specified explicitly to any API call that requires one.
>
> Noted.
>
>> Directory handle cache:
>>
>> - A cache of open handles to directories is kept by AFIO by default. This is
>> particularly problematic because (as is documented) it makes reading directory
>> entries thread-unsafe by default. If this is to be kept, a very strong
>> rationale needs to be given, and the behavior must be changed somehow to
>> ensure that directory reading is reliable by default.
>
> The documentation explains exactly why there is this cache: it's a
> big win despite the complexity it introduces. The problem of races on
> directory enumeration on shared directory handles is eliminated if
> you make the number of entries enumerated in a single shot bigger
> than the total. In practice, this is not a problem, even on 10M item
> directories, just throw max_items at the problem.
>
>> Enumerate API:
>>
>> - The future-based API of async_enumerate seems to be unnecessarily complicated,
>> due to the need to specify maxitems and deal with making repeated calls, and
>> all of the copying and memory allocation required to return a std::vector of
>> directory_entry.
>
> If you want race free snapshots (up to the platform specific
> guarantees), simply make sure maxitems far exceeds any potential
> contents.
>
> std::vector does not copy memory in C++11 when returned by value.
>
>> I suggest instead something like passing in a
>> std::function<bool (expected<directory_entry> const &next)> callback (possibly
>> replacing expected by the new name for your monad type). An empty next value
>> means end of directory has been reached. Returning true means continue
>> listing the directory, returning false means stop listing the directory.
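>>
>> A hypothetical shape for that interface, with stand-in types (none of
>> these names are AFIO's; "expected" is whatever the monad type ends up
>> being called):
>>
>>   #include <functional>
>>   #include <string>
>>
>>   struct directory_entry { std::string name; };   // stand-in
>>   template <class T> struct expected {            // stand-in
>>       bool empty;                                 // empty == end of dir
>>       T value;
>>   };
>>
>>   using enumerate_callback =
>>       std::function<bool(expected<directory_entry> const &)>;
>>
>>   // The library would call cb once per entry; an empty value signals
>>   // end of directory, and returning false stops the listing early.
>>   void async_enumerate(/* directory handle, */ enumerate_callback cb);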
>
> I didn't do this because we actually do fetch maxitems from the
> kernel in a single shot. If you ask for 10M items, all 10M items are
> returned in a single kernel syscall as a single, race free snapshot.
> Calling some callback per item is therefore not useful.
>
> On POSIX, with the current implementation, directory enumeration isn't
> particularly race free past the inode and type, which is still quite
> useful much of the time. However, I had in mind that in the future we'd
> pull the directory inode by hand in a single shot and parse it into
> the AFIO output. FreeBSD lets you do that without any filing system
> specific code; unfortunately Linux requires you to understand ext4
> inode layouts.
>
> My point is that you need to think not in terms of conventional
> directory enumeration. AFIO will enumerate a 1M item directory faster
> than a traditional design can do 10,000 items. It's so fast you can
> afford to fetch more than you need and throw away the excess.
>
>> - Operations including rmdir, rmfile, symlink, rmsymlink (as well as async
>> variants) that normally would not necessarily even operate on file handles,
>> actually return a handle_ptr. Perhaps if on some platforms a handle has to be
>> opened, then there could be a way for the user to specify that a handle is
>> desired. Otherwise, it would make sense to just return void or
>> std::future<void> equivalent for async variants.
>
> Already logged.
>
>> - The documentation notes that on MS Windows, DOS short names are not supported.
>> I don't have much familiarity with Windows, but if programs generally accept
>> DOS short names when a filename is required, then AFIO should probably support
>> these names as well.
>
> The cost of this support is high, and the benefit very low unless you
> are likely to run original 16-bit DOS programs on files generated by
> AFIO (note that 64-bit Windows can no longer run 16-bit DOS programs).
> You'll run into long path problems on Windows long before 8.3 name
> compatibility problems.
>
>> - The documentation notes: "Don't do directory enumerations from an x86 binary
>> on a x64 Windows system. This is due to a bug in Microsoft's WOW64 syscall
>> translation layer which Microsoft have decided is wontfix. " Given that a
>> major purpose of AFIO is to provide a portable abstraction that works around
>> OS limitations, AFIO needs to support directory enumeration even in this case
>> (e.g. by detecting WOW64 and using a synchronous call via a threadpool in that
>> case).
>
> This problem is caused by how ASIO's reactor works and is due to code
> I can't modify in ASIO. It'll go away when I rewrite ASIO's reactor
> using lightweight futures because I can control the source code.
>
>> - It is noted that to do an asynchronous stat you have to use enumerate with an
>> appropriate glob specified, but constructing such a glob may be non-trivial if
>> the path contains characters that are interpreted specially by the glob
>> syntax, and impossible to do in a race-free manner.
>
> On POSIX, fnmatch(), which is used to implement glob, allows escaping
> of the metacharacters. On Windows the glob metacharacters are illegal in
> a path anyway.
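>
> For example (plain POSIX fnmatch; without FNM_NOESCAPE a backslash
> makes the following metacharacter literal):
>
>   #include <fnmatch.h>
>
>   // "report\*.txt" now matches only a literal '*':
>   int a = fnmatch("report\\*.txt", "report*.txt", 0);  // 0 == match
>   int b = fnmatch("report\\*.txt", "reportX.txt", 0);  // FNM_NOMATCH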
>
>> - The implementation might be better split up into multiple independent files.
>> There are various independent facilities, like secded_ecc, that could be
>> placed in a separate file.
>
> Already logged.
>
>> - Although this tends to be a problem with most Boost libraries, it would be
>> nice if the compile times could be reduced, particularly when not in
>> header-only mode. Some libraries like Boost Spirit have an interface
>> inherently based on template metaprogramming, such that long compile times are
>> pretty much unavoidable, but the Boost AFIO interface seems to be mostly
>> non-template based, which suggests that it may be possible to get pretty good
>> compile times.
>
> AFIO, if configured to do so, can achieve excellent compile times, as
> it's something I've worked hard to optimise:
> https://boostgsoc13.github.io/boost.afio/doc/html/afio/FAQ/slow_compile.html
>
>> > 4. What is your evaluation of the documentation?
>>
>> - The extensive documentation about race guarantees is extremely useful.
>>
>> - I appreciate the in-depth examples.
>>
>> - Much more design rationale is needed.
>>
>> - The term "filing system" is used in place of the more common "file system",
>> leading me to wonder if there is a difference in meaning.
>>
>> - Every instance where there is a platform-specific #ifdef in an example should
>> be evaluated, so that you can figure out the appropriate platform-independent
>> abstraction that could be added to AFIO in order to move that #ifdef into the
>> library and out of user code.
>>
>> - There is an attempt to document differences in behavior on different
>> platforms, but this is not done sufficiently. For example, it is stated that
>> async_write requires that the destination region does not go off the end of
>> the file, but on POSIX pwrite supports this just fine as a way to extend the
>> file. I assume that this restriction applies to Windows only.
>
> It's more that pwrite off the end of the file is inherently racy to
> concurrent read/writers. If you want safe automatic file length
> extension on writes, open a handle with the append flag and use that.
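>
> In plain POSIX terms (not AFIO's API), that advice amounts to something
> like:
>
>   #include <fcntl.h>
>   #include <unistd.h>
>
>   // O_APPEND makes every write an atomic seek-to-end-plus-write, so
>   // concurrent appenders cannot race on the file length.
>   int fd = open("log.txt", O_WRONLY | O_CREAT | O_APPEND, 0644);
>   // ... write(fd, buf, len); close(fd); ...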
>
> AFIO fatal exits the process if you try to read or write past the
> current file size; it's a logic error and your program is inherently
> broken.
>
>> - The semantics of the Batch API are not precisely documented, nor is any
>> rationale given for it. For instance, I understand that the operations
>> themselves cannot happen atomically, since that is not supported by operating
>> systems, though this is not clearly stated. It is stated: "Once a batch of
>> input ops has been verified at the point of entry as not errored, you are
>> guaranteed that the batch is atomically scheduled as a whole, unless a failure
>> to allocate memory occurs." Does this mean that if the *_req object involves
>> futures, it waits for all of them to be ready before launching any of the
>> operations in the batch? Is that the purpose of the Batch API, to do a
>> wait_all on all of the relevant futures, then to launch each operation? This
>> issue goes away though if the Batch API is eliminated, as seems to make sense.
>>
>> - It would be useful to document for each operation which underlying system
>> calls it invokes on each platform.
>>
>> - Many operations other than open (e.g. rmfile) take file_flags, but it is not
>> specified which flags are supported.
>
> Good point. Logged to
> https://github.com/BoostGSoC13/boost.afio/issues/111.
>
>> - References to historical versions of AFIO (prior to the review) should be
>> removed from the main parts of the documentation, since they only distract
>> from the useful information. This information could be kept in a different
>> section at the end for the extra curious.
>>
>> - Formatting of the "description" in the Boost Quickbook format function
>> documentation has some problems: the lack of line breaks between the general
>> description, other notes, and the common note about the use of NT kernel paths
>> make it difficult to read.
>>
>> > 5. What is your evaluation of the potential usefulness of the library?
>>
>> The library is potentially extremely useful. C++ and Boost iostreams and
>> Boost/TS Filesystem provide only very limited functionality. The C library
>> libuv is a close competitor but seems to offer less functionality and a less
>> convenient API. For achieving similar functionality, the status quo in C++ is
>> to rely on platform-specific C APIs that are much more cumbersome to use.
>>
>> > 6. Did you try to use the library? With what compiler? Did you have
>> any problems?
>>
>> I tried some very simple usage of the library with GCC 5.2.0 on Linux. There
>> were a few warnings (variable set but not used) with -Wall that would be best
>> eliminated.
>
> Logged to https://github.com/BoostGSoC13/boost.afio/issues/112
>
>> - The error messages are not ideal: for instance, in the case that there was an
>> error opening a file, this is the result:
>>
>> Dynamic exception type: boost::system::system_error std::exception::what: File
>> '/home/jbms/tmp/afio-test/test.txt' not found [Host OS Error: No such file or
>> directory (2) in
>> '/home/jbms/tmp/boost.afio/include/boost/afio/v2/detail/impl/afio.ipp':
>> async_io_handle_posix:805]: No such file or directory. Op was scheduled at:
>> 1. 0x42fb4f: ./test (+0x2fb4f) : No such file or directory
>>
>> Some of this is only useful when debugging AFIO, some only useful when debugging
>> the application, and there is a lot of redundancy; for instance, the actual error
>> "No such file or directory" is shown in 3 places. Ideally there would be a way
>> to get a simple message along the lines of "Error opening file
>> '/home/jbms/tmp/afio-test/test.txt': No such file or directory" that could be
>> shown directly to the user.
>
> Those "No such file or directory" were from the stack backtracer. I
> assume it could not figure out your debug info.
>
> Regarding your request, in the lightweight futures rewrite we will
> exclusively use std::error_code for unexceptional errors, and
> therefore you will have exactly what you ask for.
>
>> - A separate issue is that attempting to read a range that is not present in a
>> file, something that the library should handle elegantly, leads to the program
>> aborting immediately with an error message, rather than allowing the error to
>> be handled:
>
> There is no way of gracefully handling this without introducing a lot
> of complexity to IPC with concurrent readers and writers.
>
> Besides, it's a logic error to do i/o outside the file's maximum
> permitted extent, and this is enforced in all asynchronous file i/o
> applications far outside AFIO. The reason I chose a fatal exception
> is that this is not a recoverable error; it is a fundamental
> mistake in your implementation (or memory corruption) and you need to
> fix it.
>

What happens if another process truncates the file while I'm writing
to it? Also, is the 'max size' cached? What happens if another process
extends the file size, notifies me out of band of the new size, and I
write beyond the original limit?

I would certainly hope that the application won't crash in either
scenario. It must never be possible for a process to cause another
process in another address space to crash (discounting superuser
privileges or ptrace, of course).

-- gpd
