
Subject: [boost] [RPC] Boost.Asio to write *concurrency ready* code
From: microcai (microcaicai_at_[hidden])
Date: 2013-08-26 09:30:51


Hi, boost experts! This is my first mail. Although I am not a Boost
newbie, I am indeed new to the Boost community.

==

I have been using Boost.Asio for a long time for *async network I/O*,
but only recently did I make an amazing discovery: Boost.Asio is NOT
ONLY a networking library, but in fact a very *general* toolkit for
building concurrent algorithms.

I made the first discovery when a friend showed me a concurrent sum:

```

#include <iostream>
#include <vector>
#include <algorithm>
#include <numeric>
#include <future>

template <typename RAIter>
int parallel_sum(RAIter beg, RAIter end)
{
    typename RAIter::difference_type len = end - beg;
    if (len < 1000)
        return std::accumulate(beg, end, 0);
    RAIter mid = beg + len / 2;
    auto handle = std::async(std::launch::async,
        parallel_sum<RAIter>, mid, end);
    int sum = parallel_sum(beg, mid);
    return sum + handle.get();
}

int main() {
    std::vector<int> v(10000, 1);
    std::cout << "The sum is " << parallel_sum(v.begin(), v.end()) << '\n';
}

```

Here the actual calculation is done asynchronously and concurrently.

But a problem remains:

**The total number of threads this algorithm spawns is neither
predictable nor configurable.**

And that is why we would not want to use such an algorithm as-is.

Days later, while reviewing some old code that performed an *async
directory walk*, I discovered a general *concurrency ready* pattern in
which the number of threads used can be fully controlled.

The code uses *any number of threads that call io_service::run()*.

Here is the good old demo, written as a sequential recursive algorithm:

```
// the sequential recursive version
void hanoi_sync(int n, char A, char B, char C)
{
    if (n == 1)
    {
        printf("Move disk %d from %c to %c\n", n, A, C);
    }
    else
    {
        hanoi_sync (n-1, A, C, B);
        printf("Move disk %d from %c to %c\n", n, A, C);
        hanoi_sync (n-1, B, A, C);
    }
}

```

But with Boost.Asio, we can turn it into a parallel algorithm:

```

// concurrent Hanoi tower solver
void hanoi_async(boost::asio::io_service& io, int n, char A, char B, char C)
{
    if (n == 1)
    {
        // boost::thread::id is not an int, so stream it instead of printf'ing
        std::cout << "thread id " << boost::this_thread::get_id()
                  << ", Move disk " << n << " from " << A << " to " << C << '\n';
    }
    else
    {
        io.post(boost::bind(&hanoi_async, boost::ref(io), n - 1, A, C, B));
        std::cout << "thread id " << boost::this_thread::get_id()
                  << ", Move disk " << n << " from " << A << " to " << C << '\n';
        io.post(boost::bind(&hanoi_async, boost::ref(io), n - 1, B, A, C));
    }
}

```

Thanks to the amazing io_service::post, we can now turn many (if not
most) *recursive algorithms* into *parallel code*, and the number of
threads used by that parallel code is fully controlled. If one thread
calls io_service::run(), it is simply an async version of the
sequential recursive one; if more than one thread calls
io_service::run(), then you know the job is running in parallel.

And there is only one place to control how many threads are used.
Want it to be faster? Spawn more threads to call io_service::run()!
The code will *automatically* use all the threads you have allocated
to Boost.Asio.

I am *not sure* whether anyone else has made this discovery before;
in any case, comments are welcome.


Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk