
Boost Users:

From: Stephen Nuchia (snuchia_at_[hidden])
Date: 2008-03-17 12:22:31


> So is there no way to create a thread and not have it take up all the
> CPU time until it's done without using SMP? (This is going to be for
> carputers, so it won't be in an SMP environment.)

Now that you've clarified your intended purpose, it is apparent that
your simple producer/consumer test program was not a good model of the
real program.

How the threaded serial port monitor will work in your production
environment is something you'll have to experiment with. If the target
environment has good support for threads and a fairly high-level driver
for the serial port then simply assigning a higher priority to the port
monitor thread should work. But it may be necessary to use an
interrupt-based solution instead.
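
For illustration, here is a minimal sketch of raising a monitor
thread's priority. It assumes a pthreads-based unix target and a
Boost.Thread version that exposes native_handle() (1.35 or later);
monitor_port and the priority value are placeholders, and SCHED_FIFO
typically requires elevated privileges.

    #include <boost/thread.hpp>
    #include <boost/date_time/posix_time/posix_time_types.hpp>
    #include <pthread.h>
    #include <cstdio>

    // Placeholder for the real serial-port monitor loop.
    void monitor_port()
    {
        for (int i = 0; i < 5; ++i)
            boost::this_thread::sleep(boost::posix_time::milliseconds(100));
    }

    int main()
    {
        boost::thread monitor(monitor_port);

        // Ask the OS for a fixed-priority realtime policy on the monitor
        // thread; the priority value here is illustrative.
        sched_param param;
        param.sched_priority = 10;
        int rc = pthread_setschedparam(monitor.native_handle(),
                                       SCHED_FIFO, &param);
        if (rc != 0)
            std::fprintf(stderr, "pthread_setschedparam failed (%d)\n", rc);

        monitor.join();
        return 0;
    }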

The normal paradigm for threads on a uniprocessor is that the
highest-priority runnable thread runs until it is no longer the
highest-priority runnable thread. That can happen when
1) it blocks, either for I/O or on a mutex,
2) the priority of that thread or another thread is changed, or
3) some higher-priority thread becomes runnable (its I/O completes, an
interrupting event occurs, or a mutex is freed).
Two further points about scheduling:
4) If more than one thread is runnable at the highest priority, the
system may time slice among them at some fairly low frequency (a
10-1000 ms quantum is typical).
5) In non-realtime unix environments, such as the one in which you are
experimenting, a dynamic priority adjustment scheme is at work that is
designed to give good interactive responsiveness under load. That
scheduler policy is often not suitable for realtime applications.

Time slicing more frequently would hurt efficiency: the context switch
itself takes time, and the new thread will want a different working set
in the cache, leading to a lot of unnecessary memory traffic. You may
want to pay that price for simulation purposes, but in general you
don't. See others' comments about inserting sleep calls if you want to
simulate multiprocessor behavior.
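
As a hedged sketch of that sleep-insertion idea (assuming the Boost
1.35 thread API), the loop below yields briefly after each unit of
work; do_work, the iteration counts, and the timings are all
stand-ins, not measured values.

    #include <boost/thread.hpp>
    #include <boost/date_time/posix_time/posix_time_types.hpp>
    #include <iostream>

    volatile long sink = 0;

    // Stand-in workload; the real program would be processing frames.
    void do_work()
    {
        for (int i = 0; i < 100000; ++i)
            sink += i;
    }

    void worker()
    {
        for (int n = 0; n < 100; ++n)
        {
            do_work();
            // Sleep briefly so other runnable threads get the CPU,
            // approximating SMP-style interleaving on a uniprocessor.
            boost::this_thread::sleep(boost::posix_time::milliseconds(1));
        }
    }

    int main()
    {
        boost::thread a(worker);
        boost::thread b(worker);
        a.join();
        b.join();
        std::cout << "both workers finished\n";
        return 0;
    }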

Ideally, your production program's port monitor thread would run at high
priority but spend most of its time blocked waiting for input. When a
frame becomes available the hardware should interrupt the driver,
leading to your thread becoming runnable. When the OS returns from the
driver's interrupt service routine, your input thread should then run
until it loops back to wait for more input. If you haven't signaled any
semaphores, the previously running thread resumes at that point; if you
did signal that data is available, a different thread may be scheduled.
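
Here is a minimal sketch of that structure, using a condition variable
in place of a bare semaphore (Boost.Thread does not expose one
directly). It assumes the Boost 1.35 API; read_frame_blocking stands
in for the driver-level blocking read, and the queue type and loop
counts are illustrative.

    #include <boost/thread.hpp>
    #include <boost/date_time/posix_time/posix_time_types.hpp>
    #include <deque>
    #include <iostream>
    #include <string>

    std::deque<std::string> frames;          // shared frame queue
    boost::mutex frames_mutex;
    boost::condition_variable frames_ready;

    // Stand-in for a blocking serial read (e.g. read() on the port fd);
    // the real thread would sleep inside the driver until data arrives.
    std::string read_frame_blocking()
    {
        boost::this_thread::sleep(boost::posix_time::milliseconds(50));
        return "frame";
    }

    void port_monitor()
    {
        for (int i = 0; i < 10; ++i)
        {
            std::string frame = read_frame_blocking();  // blocked most of the time
            {
                boost::lock_guard<boost::mutex> lock(frames_mutex);
                frames.push_back(frame);
            }
            frames_ready.notify_one();  // data available: a waiter may be scheduled
        }
    }

    void consumer()
    {
        for (int i = 0; i < 10; ++i)
        {
            boost::unique_lock<boost::mutex> lock(frames_mutex);
            while (frames.empty())
                frames_ready.wait(lock);  // releases the mutex while blocked
            std::cout << frames.front() << ' ' << i << '\n';
            frames.pop_front();
        }
    }

    int main()
    {
        boost::thread m(port_monitor);
        boost::thread c(consumer);
        m.join();
        c.join();
        return 0;
    }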

I've built several systems on this principle very successfully but you
have to know your OS and compiler and hardware cold to get reasonable
performance and reliability.

