
Subject: Re: [boost] Cooperative Multi-Tasking
From: Daniel Larimer (dlarimer_at_[hidden])
Date: 2010-03-04 07:56:55


On Mar 4, 2010, at 5:50 AM, Giovanni Piero Deretta wrote:

> On Thu, Mar 4, 2010 at 9:20 AM, Mathias Gaunard
> <mathias.gaunard_at_[hidden]> wrote:
>> Daniel Larimer wrote:
>>
>>> I am guessing that Boost.Asio would benefit from such a library. It seems
>>> obvious to me now that coroutines / cooperative multi-tasking is the
>>> superior approach to solving problems with a large number of actors and
>>> "locks" where heavy weight preemptive scheduling would bog down the system.
>>
>> A significant advantage of not using coroutines or fibers with Asio is that
>> you use as little memory per task as you need, while using a coroutine
>> requires having a stack and other context data.
>>
>
> GCC supports (experimentally) split stacks
> (http://gcc.gnu.org/wiki/SplitStacks ) to let a thread dynamically
> grow and shrink its stack size (in principle with frame granularity).
>
> BTW, that project has been implemented to support goroutines in go.
>

My ultimate goal would be to achieve some of what Go can do within C++. Unfortunately, Go-like compile times will be impossible.

Does anyone here know whether Boost.Coroutine supports variable (even compile-time variable?) stack sizes, or what its default stack size is? I would likely want to play with it. I remember reading somewhere that on Windows you have no choice.

It seems like it should be possible to implement a "smart stack allocation" scheme where new/delete are overloaded for some template type like

sstack<my_data> v1;
...
In theory you could get most of the speed advantages of stack-based allocation and yet offload "big objects" to the heap, without having to worry about expensive malloc/free calls except when the sstack<> detects that it needs to expand.
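
Roughly, I picture something like the sketch below (a wrapper object rather than literally overloading new/delete, but the effect is the same). All of the names (dynamic_stack, this_stack, the 64K block size) are made up for illustration, and it punts on alignment, freeing old blocks, and swapping the pointer on a coroutine context switch:

#include <cstddef>
#include <new>
#include <boost/thread/tss.hpp>

// Hypothetical per-thread (really per-coroutine) "dynamic stack": a bump
// allocator that only hits the general purpose heap when it has to grow.
// Simplified: one fixed-size block, no alignment handling, and it leaks
// the old block when it expands.
class dynamic_stack {
public:
    dynamic_stack() : base_(0), offset_(0) {}

    void* allocate(std::size_t n) {
        if (base_ == 0 || offset_ + n > block_size) {  // the per-allocation check
            base_ = static_cast<char*>(::operator new(block_size)); // expensive path
            offset_ = 0;
        }
        void* p = base_ + offset_;
        offset_ += n;
        return p;
    }
    void deallocate(std::size_t n) { offset_ -= n; }   // LIFO release, no free()

private:
    enum { block_size = 64 * 1024 };
    char*       base_;
    std::size_t offset_;
};

// The thread/coroutine-specific data pointer; a coroutine scheduler would
// swap this on every context switch.
inline dynamic_stack& this_stack() {
    static boost::thread_specific_ptr<dynamic_stack> tss;
    if (tss.get() == 0) tss.reset(new dynamic_stack());
    return *tss;
}

// sstack<T>: a "local" of type T whose storage lives in the dynamic stack
// instead of the native stack, released automatically in LIFO order.
template <class T>
class sstack {
public:
    sstack() : p_(new (this_stack().allocate(sizeof(T))) T()) {}
    ~sstack() { p_->~T(); this_stack().deallocate(sizeof(T)); }
    T* operator->() const { return p_; }
    T& operator*()  const { return *p_; }
private:
    T* p_;
    sstack(const sstack&);             // non-copyable
    sstack& operator=(const sstack&);
};

With something along those lines, the sstack<_local> local; in the example further down would cost a thread-specific pointer load plus one comparison in the common case.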

The overhead would be a thread/coroutine-specific data pointer to your dynamic stack, a conditional check on each sstack allocation, and whatever optimizations g++ would lose as a result. There would also be an 8-byte "pointer" of overhead per frame, and functions would end up looking like this:

int fun()
{
        struct _local{
                int a, b, c;
                ...
        };
        sstack<_local> local;

        local->a = ...
}

I am not sure how to estimate how much overhead something like this would have compared to the benefit of using less RAM. I have often felt that the ability to define an "auto release heap" would handle many situations where new/delete is used for objects that have a stack-based lifetime.
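
As a purely hypothetical sketch (not an existing Boost facility), such an "auto release heap" could be little more than a bump allocator that reclaims all of its blocks at once when it goes out of scope:

#include <cstddef>
#include <new>
#include <vector>

// Hypothetical "auto release heap": allocation is a cheap pointer bump and
// every block is reclaimed in one shot when the heap goes out of scope,
// much like stack locals. Destructors and alignment are ignored in this
// sketch (fine for PODs / trivially destructible types).
class auto_release_heap {
public:
    explicit auto_release_heap(std::size_t block = 64 * 1024)
        : block_size_(block), cur_(0), offset_(block) {}

    ~auto_release_heap() {
        for (std::size_t i = 0; i < blocks_.size(); ++i)
            ::operator delete(blocks_[i]);           // one bulk release at scope exit
    }

    void* allocate(std::size_t n) {
        if (offset_ + n > block_size_) {             // grow only when the block is full
            cur_ = static_cast<char*>(::operator new(block_size_));
            blocks_.push_back(cur_);
            offset_ = 0;
        }
        void* p = cur_ + offset_;
        offset_ += n;
        return p;
    }

    template <class T>
    T* create() { return new (allocate(sizeof(T))) T(); } // placement-new into the heap

private:
    std::size_t        block_size_;
    char*              cur_;
    std::size_t        offset_;
    std::vector<char*> blocks_;

    auto_release_heap(const auto_release_heap&);     // non-copyable
    auto_release_heap& operator=(const auto_release_heap&);
};

A handler could then write auto_release_heap h; my_data* d = h.create<my_data>(); and never call delete on d; everything disappears when h does.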

