
Subject: [boost] [context review] Several Questions
From: Artyom (artyomtnk_at_[hidden])
Date: 2011-03-21 04:26:31


Hello,

Before I start working with this library for review, I need
clarification of several statements in order to understand
the actual usefulness of this library:

>
> A context switch between threads costs usually thousands of CPU
> cycles on x86 compared to a user-level context switch with few
> hundreds of cycles.
>

Performance
-----------

One of the first things I did after reading this statement
and looking at the measurements was to write my own
context-switch benchmark.

I've run it on:

- Intel i5 2.5GHz CPU 2 cores 4 threads.
- Linux x86_64, Ubuntu 10.10

I compared context-switch performance using
sched_yield and jump_to. I used the default boost::context<>
settings and a default build (against Boost 1.46.0),
and added a dummy switch to measure the cost of the
operations beyond the switching itself.

There were two threads, each giving the other a time quantum, and I
measured how much time a context switch takes (with some warm-up
included, of course):

sched_yield - 377us
Boost.Context - 214us
Dummy - 10us

All tests were done on a single CPU using taskset 0x1 ./test params

So in the end I can see that Boost.Context does not perform **much**
better than an OS context switch?

I understand that I was probably using the ucontext backend by
default, but it is the default, and that is how most distributions
are going to ship it, as it is probably the safest.

I need to see the rationale, limitations, and so on
stated very explicitly.

Usefulness of N:M model (or even 1:M model)
-------------------------------------------

A long time ago, OS developers used the N:M threading
model, in which several user-space threads were mapped
onto several kernel threads:

- Solaris < 9 or 10
- Linux <= 2.4,
- FreeBSD < 8

All of these OSs have since moved to the 1:1 model as the most
efficient one, so I don't buy that an N:M or even 1:N model
would give a performance advantage in this case.

As you know, POSIX.1-2008 even removed the ucontext functions entirely.

So I would like to see a solid, well-founded rationale,
with descriptions of specific use cases and examples.

I understand that it may be a very useful paradigm, but
as far as I can see, most implementations have moved
away from the N:M model...

Why bring it back despite the huge drawbacks
user-space threads have, such as their interaction with
blocking system calls, interaction with physical CPUs,
and so on?

--------------------

Thanks; I'd like to see answers to these
topics before I continue.

Artyom



Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk