Subject: Re: [boost] [afio] Formal review of Boost.AFIO
From: Sebastian Theophil (stheophil_at_[hidden])
Date: 2015-08-27 04:25:04
On 27 Aug 2015, at 04:45, boost-request_at_[hidden] wrote:
Ok, let's revisit the original pattern code I mentioned:
EXAMPLE A:
shared_future h=async_file("niall.txt");
// Perform these in any order the OS thinks best
for(size_t n=0; n<100; n++)
async_read(h, buffer[n], 1, n*4096);
Niall,
Is parallel reading (or maybe parallel writing) the only use case where you want a shared_future?
If I understand Thomas correctly, he doubts you need the shared_future semantics because one async operation hands down a single handle to the next continuation.
Essentially something like:
async_file("niall.txt")
.async_read(buffer, length_to_read)
.async_truncate(length_to_read)
.async_close()
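As a rough sketch of that chaining idea in plain C++ (using std::async and POSIX calls as stand-ins, not AFIO's actual API; the file name and length are just placeholders), each step would receive the previous step's future and start only once it is ready:

#include <fcntl.h>
#include <unistd.h>
#include <cstddef>
#include <future>
#include <vector>

int main() {
    const std::size_t length_to_read = 4096;

    // async_file("niall.txt")
    std::future<int> opened = std::async(std::launch::async, [] {
        return ::open("niall.txt", O_RDWR);
    });

    // .async_read(buffer, length_to_read)
    std::future<int> read_done = std::async(std::launch::async,
        [f = std::move(opened), length_to_read]() mutable {
            int fd = f.get(); // the single handle is handed down the chain
            std::vector<char> buffer(length_to_read);
            if (fd >= 0)
                (void)::read(fd, buffer.data(), buffer.size());
            return fd;
        });

    // .async_truncate(length_to_read).async_close()
    std::future<void> closed = std::async(std::launch::async,
        [f = std::move(read_done), length_to_read]() mutable {
            int fd = f.get();
            if (fd >= 0) {
                (void)::ftruncate(fd, static_cast<off_t>(length_to_read));
                ::close(fd);
            }
        });

    closed.get();
}

std::async blocks a thread to wait on the previous future, so this is only an approximation of real continuations, but the point stands: exactly one handle flows linearly through the chain, so no shared_future is needed.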
Your counterexample was an asynchronous *and* parallel read, where you need to share the file handle (or rather the future<handle>) between the parallel reads. Shouldn't this be abstracted away in the API somehow? I can't think of many file operations you would want to do N times in parallel. Truncating a file in parallel several times doesn't seem to make much sense :-)
So why not make it:
async_file("niall.txt")
// Read 100 times asynchronously and in parallel; provide a lambda returning the n-th buffer and offset:
.async_parallel_read(100, [&](int n) { return std::make_pair(buffer[n], n*4096); })
.async_truncate(length_to_read)
.async_close()
The 100 reads are internally unordered, but they can only begin once the file has been opened; they consume this handle together, and only after all reads have completed can we truncate.
Is this not what you're trying to do?
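For what it's worth, here is a minimal sketch of those semantics in plain C++ over POSIX (again, the names and the one-thread-per-read dispatch are placeholders rather than AFIO's API; a real implementation would hand the reads to the OS's asynchronous I/O instead of spawning a thread per read):

#include <fcntl.h>
#include <unistd.h>
#include <cstddef>
#include <future>
#include <vector>

int main() {
    const std::size_t N = 100;
    const std::size_t chunk = 4096;
    std::vector<std::vector<char>> buffer(N, std::vector<char>(chunk));

    // One open; the resulting handle is shared by every read, hence shared_future.
    std::shared_future<int> opened = std::async(std::launch::async, [] {
        return ::open("niall.txt", O_RDWR);
    });

    // The N reads are unordered among themselves, but each gates on the open.
    std::vector<std::future<void>> reads;
    for (std::size_t n = 0; n < N; ++n)
        reads.push_back(std::async(std::launch::async, [&, n] {
            int fd = opened.get(); // wait for the shared handle
            if (fd >= 0)
                (void)::pread(fd, buffer[n].data(), chunk,
                              static_cast<off_t>(n * chunk));
        }));

    // Only after every read has completed may we truncate and close.
    for (auto& r : reads)
        r.get();
    int fd = opened.get();
    if (fd >= 0) {
        (void)::ftruncate(fd, static_cast<off_t>(N * chunk));
        ::close(fd);
    }
}

The shared_future stays confined to the inside of such a combinator: the opened handle is shared among the N reads, but the user never has to handle it explicitly.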
Regards
Sebastian