
From: Carlo Wood (carlo_at_[hidden])
Date: 2004-09-13 14:31:40

On Mon, Sep 13, 2004 at 11:05:01AM -0700, Sean Kelly wrote:
> I suppose if you really wanted to you could loop over multiple calls to
> WFMO until all of your (>64) events had been processed,

Suppose an application has 640 sockets. It splits those into ten
groups of 64 sockets each and calls WFMO on the first group.
Suppose there are no events in that group. Then what about the
other 576 sockets? This solution would mean you'd need to use
a timeout for each group, let's say 10 ms - otherwise it takes
too much CPU. And then the application becomes unresponsive:
it takes 100 ms before it even LOOKS again at the first group
of sockets. As the number of sockets increases, this
becomes totally impractical. If this is the only alternative
to threading, then I will choose threading any time.

> but this wouldn't
> scale well and would risk having data back up if the number of events got
> too large. But it is just about the only way to handle 1000 simultaneous
> events in a single thread in Windows. Still, I'd wonder what the point of
> using WFMO is when MS provides completion ports.

I only learned about those when the author of asio mentioned them.
That seems to be the way to go.

> Is there a need to
> service more than just file and socket i/o, or is it just a matter of
> avoiding the complexities of multithreading?
> > It doesn't, I have to agree. My main problem is that I cannot
> > seem to figure out how to handle all possible events, including
> > socket events, in a single thread on windows. If anyone could
> > inform me how to do that then the NEED for threads would disappear
> > (especially when this method also evades the 64 limit).
> See above. Basically, I don't think there is a truly practical way to do
> this in a single-threaded Windows application.

That is what I was thinking too, hence the Subject line of this thread.

> > Ok, perhaps I am too paranoid :). The result of this impl detail
> > is that the application needs to link with boost.threads (and that
> > that must be supported on the used platform) even when the user only
> > wants to write a single threaded application. I was afraid that people
> > would object to that. It never was my intention to force a user to
> > create threads himself or be bothered with any locking or semaphores
> > or whatsoever.
> You could always have wrapper classes for the synchronization mechanisms
> and use boost.threads or skeleton code depending on whether a single or
> multithreaded app were being compiled. But the user would have to be
> aware of scaling limitations that may be inherent in a single-threaded
> version of the code.


> > Hmm, I'd like to stick to "It never was my intention to force a user to
> > create threads himself or be bothered with any locking or semaphores
> > or whatsoever."
> And it should be entirely possible to hide all of this from the user. Use
> a thread pool that grows as needed and put all the synchronization stuff
> in interface methods.

Definitely, but Aaron will oppose this idea. He is against using
threads even when they are completely hidden from the user. If it is
possible to avoid threads completely by using completion ports, then
I am all for it.

> > We'll go for what is best - they are implementation details as you say :).
> > As for the user doing synchronisation - the user will also get the option
> > to explicitly create threads and start to wait for and dispatch events
> > in his own threads; if he chooses to use the interface in that way
> > then he will have the flexibility to do what you say.
> It might be nice if you offered a means for users to get into the guts of
> the lib if they wanted to. Some folks may want to do lockless i/o.

Agreed. Personally I think it should be possible to fine-tune a library
like this down to the last bit. But that should not come at the cost of
ease of use for the beginner, nor should it force other trade-offs.

Carlo Wood <carlo_at_[hidden]>

Boost list run by bdawes at, gregod at, cpdaniel at, john at