From: Aaron W. LaFramboise (aaronrabiddog51_at_[hidden])
Date: 2004-09-13 00:19:07


Jeff Garland wrote:
> On Sun, 12 Sep 2004 22:18:05 -0500, Aaron W. LaFramboise wrote
>>Carlo Wood wrote:
>>>1) On windows we have a limitation of at most 64 'Event' objects
>>> that can be 'waited' for at a time. This is not enough
>>> for large server applications that might need thousands
>>> of TCP/IP sockets.
>>
>>In the case of sockets, Winsock has other mechanisms for scaling in this
>>respect, such as I/O completion routines. On pre-Winsock2 platforms,
>>which hopefully are dwindling, I don't think falling back to the 64
>>handle limit will be a problem.
>
> It's easy to blow this if you start monitoring any significant hardware.
> Start monitoring some serial ports and setting various timeouts associated
> with those ports and you can run into trouble easily. In fact, timers are a
> big problem -- you need to have a smart queuing implementation that keeps the
> number of timers down to the bare minimum....

I'm not sure I follow you here. The limitation is specifically this:
#define MAXIMUM_WAIT_OBJECTS 64 // Maximum number of wait objects
That is the maximum number of objects you can pass to
WaitForMultipleObjectsEx(). In the case of serial ports, you only need
one of these per port. These are reusable, and several user-visible
events may depend on a single system object. Outside of the networking
problem, which has a separate solution (APC callbacks, which, as far as
I know, are unlimited), it is difficult for me to see how you would use
more than 64 of these system objects.
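
To make that concrete, the one-event-per-port pattern looks roughly like
this (an illustrative sketch only; error handling is omitted, and the port
is assumed to have been opened with FILE_FLAG_OVERLAPPED):

#include <windows.h>

// Illustrative only: one reusable event object per serial port.  The same
// event is signalled for every overlapped read on that port, so N open
// ports consume only N of the 64 wait slots.
HANDLE start_overlapped_read(HANDLE port, OVERLAPPED& ov, char* buf, DWORD len)
{
    if (!ov.hEvent)
        ov.hEvent = ::CreateEventA(0, TRUE, FALSE, 0); // manual-reset, reused
    ::ResetEvent(ov.hEvent);
    ::ReadFile(port, buf, len, 0, &ov);  // completion signals ov.hEvent
    return ov.hEvent;                    // hand this to WaitForMultipleObjectsEx()
}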

For timers specifically, I do not think there is any need to use any of
the Win32 timer facilities at all. WaitForMultipleObjectsEx() has a
millisecond timeout parameter which hopefully is 'good enough' for
timing needs. In my implementation, timers are organized into a queue,
and the difference between now and the timer at the top of the queue is
used as the timeout value.
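
A minimal sketch of that arrangement (illustrative only, not my actual
code; deadlines are kept as GetTickCount() values in a std::priority_queue):

#include <windows.h>
#include <queue>
#include <vector>
#include <functional>

std::priority_queue<DWORD, std::vector<DWORD>,
                    std::greater<DWORD> > timers;   // nearest deadline on top

DWORD next_timeout()
{
    if (timers.empty())
        return INFINITE;                     // no timers, wait indefinitely
    DWORD now = ::GetTickCount();
    DWORD due = timers.top();
    return (due > now) ? due - now : 0;      // already expired: just poll
}

void run_once(const std::vector<HANDLE>& handles)   // assumes !handles.empty()
{
    DWORD r = ::WaitForMultipleObjectsEx(
        static_cast<DWORD>(handles.size()), &handles[0],
        FALSE, next_timeout(), TRUE);        // alertable, so APCs can run
    if (r == WAIT_TIMEOUT)
        timers.pop();                        // fire the expired timer here
    // WAIT_OBJECT_0 + n and WAIT_IO_COMPLETION dispatching omitted
}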

>>It seems unlikely to me that there are many cases where the limit
>>would be exceeded. However, in those cases, I don't think it would
>>be a problem if the multiplex user were required to create another
>>thread, and another multiplex. I don't think the multiplex should
>>do this.
>
> Well, it's ugly for the user because it's tough to predict when you are going
> to hit the 64. So I disagree; I'd like to see the user shielded from this issue.

I guess I'm thinking this will be a very rare event, rare enough that a
user will know for sure that they need an extra thread. I can't think
of how it would happen, except on an older system where APCs are
unavailable -- but those systems tend not to have good support for
things like reading from 64 files all at once anyway.

I wouldn't be opposed, though, to a higher-level multiplexor abstraction,
built on top of the core one, that spawns extra threads as needed.

> Well, I think there might need to be some interface here. For example, it
> would be nice if the multiplexor had a pool of threads and dispatched
> each event to execute in a thread. The size of that pool might be '0', in
> which case the multiplexor uses its own thread to dispatch in -- hence
> degenerating into a single-threaded arrangement.

I agree. However, I think this could best be designed by keeping the
core multiplexor thread-ignorant, with a separate wrapper component
around that core doing the thread management.
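
Something like this layering, with all names made up for illustration:

#include <boost/thread.hpp>
#include <boost/shared_ptr.hpp>
#include <boost/date_time/posix_time/posix_time_types.hpp>
#include <cstddef>
#include <vector>

class core_multiplexor {                  // thread-ignorant core
public:
    void run_event_loop(boost::posix_time::time_duration how_long);
    // registration, dispatch, etc.
};

class threaded_multiplexor {              // wrapper owns all thread concerns
public:
    explicit threaded_multiplexor(std::size_t pool_size = 0);
    // pool_size == 0: handlers run in the calling thread -- the degenerate,
    // single-threaded arrangement; otherwise they are handed to the pool.
    void run_event_loop(boost::posix_time::time_duration how_long);
private:
    core_multiplexor core_;
    boost::mutex registry_mutex_;         // guards register/remove/suspend
    std::vector<boost::shared_ptr<boost::thread> > pool_;
};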

> I'd like to see a template approach (see below) that allows new multiplexor
> and event handler types to be added in as they are developed. The core then
> just sets up and manages the core of the dispatching.

This sounds excellent.

>>2) Efficient - For many applications, performance will be paramount.
>>Many asynchronous algorithms will depend on the multiplex core having
>>negligible overhead, and Boost should not disappoint. As it may be a
>>crucial building block of nearly any real-world program, it should also
>>be storage efficient, to not rule out application in embedded areas.
>
>
> Agreed. BTW, I'd like to see an attempt to remove all virtual methods from
> the mix.

I also agree. For various reasons, I was unable to do this in my own
implementation, but I think it should be avoidable.
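
For instance (purely illustrative), the handler type could be a template
parameter, so that dispatch is a direct -- and potentially inlined -- call
with no handler base class:

#include <windows.h>
#include <vector>
#include <cstddef>

template <class Handler>
class multiplexor {
public:
    void register_handle(HANDLE h, Handler f)
    {
        handles_.push_back(h);
        handlers_.push_back(f);
    }
    void dispatch(std::size_t which)      // called when handles_[which] signals
    {
        handlers_[which](handles_[which]);
    }
private:
    std::vector<HANDLE> handles_;
    std::vector<Handler> handlers_;
};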

>>3) Compatible - See
>>http://article.gmane.org/gmane.comp.lib.boost.devel/109475
>>
>>It is my opinion, in fact, that this multiplex class should be in its
>>own library, isolated from any other particular library that would
>>depend on it. In other words, it wouldn't be any more coupled with I/O
>>than it would be with Boost.Thread or date_time.
>
> Well, I'll disagree with this one as well. I think it should be coupled with
> both thread and date_time, since you picked those two ;-)
>
> Here's why. I think the interface should look something like this:
>

>
> Note that the amount of time to run the event loop is specified as a
> time_duration, which allows things like 'pos_infinity' (run forever) to be
> specified more cleanly than the typical interface which passes '0' meaning run
> forever. Now I can write code that looks like this, and it's perfectly clear what
> it means:
>
> multiplexor m(...); // setup
>
> while (!done) {
>   m.run_event_loop(seconds(1));
>   // do other stuff like set done
> }
>
> So there's the hook to date_time. BTW, for this part of date_time you only
> need headers -- you don't need to link the lib.

You're right. date_time will need to be coupled.
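
The coupling is thin, though. Mapping a time_duration onto the millisecond
timeout the Win32 wait functions expect is only a few header-only calls
(a hypothetical helper, just to illustrate):

#include <boost/date_time/posix_time/posix_time_types.hpp>
#include <windows.h>

DWORD to_win32_timeout(const boost::posix_time::time_duration& d)
{
    if (d.is_pos_infinity())
        return INFINITE;                              // run forever
    if (d.is_negative())
        return 0;                                     // poll and return
    return static_cast<DWORD>(d.total_milliseconds());
}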

> As for boost.thread, that will be needed because the multiplexor implementation
> of register will need to manage a list of event handlers and will need to be
> capable of dealing with remove, suspend, and register running in different
> user threads. This means it will need to lock. So even if you argue away
> date_time I don't see how you avoid boost.thread.

I still think that this management should be done in a component
separable from the core demultiplexing. But yes, you're right: there
will be some dependency from the multiplexor library as a whole on
threads, too.

Aaron W. LaFramboise

