From: Aaron W. LaFramboise (aaronrabiddog51_at_[hidden])
Date: 2004-09-13 10:18:00
Carlo Wood wrote:
> On Sun, Sep 12, 2004 at 10:18:05PM -0500, Aaron W. LaFramboise wrote:
>
>>Carlo Wood wrote:
>>
>>>>2. It is unavoidable that this library uses threads.
>>
>>I disagree strongly. I think spawning additional threads is both
>>unnecessary and undesirable.
>
> I have no problem whatsoever with sticking to a single thread if you
> are right. But I am convinced it will not be possible
> to do this without threads. If you know otherwise, then I suggest
> you tell me the solution to the problems I will run into when
> trying to code this within one thread.
Nothing irritates me more than using some third-party library and
noticing it's spawning some hidden thread that I just know it doesn't
need and isn't adding any real value. Especially since I have implemented
such a demultiplexor core myself, without sophisticated built-in thread
support, and it was nonetheless very useful in conjunction with
multithreading, I know that if Boost's multiplexor uses threading, I
would not use it.
>>>1) On windows we have a limitation of at most 64 'Event' objects
>>> that can be 'waited' for at a time. This is not enough
>>> for large server applications that might need thousands
>>> of TCP/IP sockets.
>>
>>In the case of sockets, Winsock has other mechanisms for scaling in this
>>respect, such as I/O completion routines. On pre-Winsock2 platforms,
>>which hopefully are dwindling, I don't think falling back to the 64
>>handle limit will be a problem.
>
> I have no problem not supporting winsock1, or with at most 64 sockets.
> But it could be supported without the user even KNOWING it created
> additional threads - why would it be bad to do that?
I suppose it could also answer the user's email, and download
advertisements from the internet that the user might be interested in
seeing. I don't see how that could be bad...
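For concreteness, the hidden-thread workaround being proposed for the
64-handle limit would presumably look something like the sketch below;
chunk_waiter and wait_on_many are names I made up, error handling is
omitted, and getting the notification back to the user's thread is left
as a comment.

    #include <windows.h>
    #include <boost/thread/thread.hpp>
    #include <algorithm>
    #include <cstddef>
    #include <vector>

    // One hidden waiter per chunk of at most MAXIMUM_WAIT_OBJECTS (64) handles.
    struct chunk_waiter {
        std::vector<HANDLE> handles;
        void operator()() const {
            DWORD r = WaitForMultipleObjects(
                static_cast<DWORD>(handles.size()), &handles[0],
                FALSE /* wait for any */, INFINITE);
            if (r != WAIT_FAILED) {
                // handles[r - WAIT_OBJECT_0] is signalled; hand it back to
                // the user's thread for dispatching.
            }
        }
    };

    void wait_on_many(const std::vector<HANDLE>& all) {
        boost::thread_group waiters;
        for (std::size_t i = 0; i < all.size(); i += MAXIMUM_WAIT_OBJECTS) {
            std::size_t end = (std::min)(all.size(),
                                         i + MAXIMUM_WAIT_OBJECTS);
            chunk_waiter w;
            w.handles.assign(all.begin() + i, all.begin() + end);
            waiters.create_thread(w);   // the hidden thread
        }
        waiters.join_all();
    }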
I am not opposed to any of these fancy thread management features.
However, I do think that there needs to be a fundamental, minimal
demultiplexor core that does not have them. These extra features may be
implemented by a separate module on top of the core, through delegation,
inheritance, or something else.
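Something like the following is the kind of split I have in mind. Every
name here is invented purely for illustration, not a concrete interface
proposal.

    #include <boost/function.hpp>

    // The minimal core: register a handle with a completion functor and run
    // one iteration of the wait/dispatch loop.  Nothing else lives here.
    class multiplex_core {
    public:
        typedef boost::function<void ()> callback;
        virtual ~multiplex_core() {}
        virtual void add(int handle, const callback& cb) = 0;
        virtual void remove(int handle) = 0;
        virtual void run_once() = 0;   // wait, then dispatch ready handles
    };

    // Fancier policies (pooling, balancing, hidden worker threads) go in a
    // separate layer that delegates to one or more cores.
    class pooled_multiplex {
    public:
        explicit pooled_multiplex(multiplex_core& core) : core_(core) {}
        void add(int handle, const multiplex_core::callback& cb) {
            core_.add(handle, cb);   // could also balance across cores here
        }
    private:
        multiplex_core& core_;
    };

Nothing in the core forces the extra layer on anyone who does not want it.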
>>It seems unlikely to me that there are many cases where the limit would
>>be exceeded. However, in those cases, I don't think it would be a
>>problem if the multiplex user were required to create another thread,
>>and another multiplex. I don't think the multiplex should do this.
>
> Why not? This is what libACE does too; I am interested to hear why
> you think it is wrong.
I think ACE is too complicated, unnecessarily.
>>>2) On windows there are different types of handles/events. It
>>> seems to make a lot more sense to use different threads
>>> to wait for different types. For example, there is a
>>> WSAWaitForMultipleObjects (for sockets) and a WaitForMultipleObjects
>>> that allows one to wait for arbitrary events (but not socket
>>> events(?)). More generally, however, there seems to be
>>> a need to use different ways to demultiplex and handle
>>> different types - even if the handles of all different types
>>> are the same (i.e., 'int' on UNIX). Consider the major
>>> difference between a listen socket, a very busy UDP socket
>>> and a very busy memory-mapped file descriptor. It might be
>>> easier to regulate priority issues between the different types
>>> by putting their dispatchers in separate threads.
>>> Note however that I DO think that the callback functions
>>> for each event (that is, the moment we start calling IOstream
>>> functions) should happen in the same thread again; this
>>> new library should shield the user from the use of threads
>>> as much as possible!
>>
>>
>>I also don't think the multiplex should do this. Boost shouldn't
>>second-guess what the user is trying to do. If the user knows he needs
>>two separate threads to handle two separate resources, then let the user
>>create two threads and put a multiplex in each.
>
>
> But while that might be possible on GNU/Linux, it might be impossible
> on windows (for example). The demand to provide a portable interface
> therefore forces us to create (hidden) threads. If a user decides that
> he only needs one thread and the library is not allowed to implement
> the requested interface by running two or more threads internally,
> then how can I implement an interface that allows one to wait for
> events on 100 sockets, a timer, a few named pipes, a fifo and some
> large diskfile I/O at the same time? That is possible on linux,
> but I failed to figure out how this can be done on windows :/
I am confused. What feature is missing on Windows? It is my perception
that the Windows API is quite as expressive as anything Linux has.
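As far as I can tell, a socket, a timer and, say, the event from an
overlapped pipe read can all be collected into one WaitForMultipleObjects
call by attaching a WSAEVENT to the socket. A rough, Win32-only sketch
(error handling omitted; pipe_read_event is assumed to be the hEvent of
an OVERLAPPED read the caller started, and WSAStartup is assumed to have
been called already):

    #include <winsock2.h>
    #include <windows.h>

    void wait_mixed(SOCKET s, HANDLE pipe_read_event)
    {
        // Tie socket readiness to an event object.
        WSAEVENT sock_event = WSACreateEvent();
        WSAEventSelect(s, sock_event, FD_READ | FD_ACCEPT | FD_CLOSE);

        // A waitable timer stands in for timer events.
        HANDLE timer = CreateWaitableTimer(NULL, FALSE, NULL);
        LARGE_INTEGER due;
        due.QuadPart = -10000000;   // relative, 100-ns units: one second
        SetWaitableTimer(timer, &due, 0, NULL, NULL, FALSE);

        HANDLE handles[3] = { sock_event, timer, pipe_read_event };
        DWORD r = WaitForMultipleObjects(3, handles, FALSE, INFINITE);
        // r - WAIT_OBJECT_0 says which of the three fired; for the socket,
        // WSAEnumNetworkEvents() would say which network event it was.
    }

Overlapped file and pipe I/O expose an event through the OVERLAPPED
structure in much the same way.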
>>By _multiplex_ I mean the class (or whatever entity) that implements the
>>core of the demultiplexing of various resources. (I'm using this name
>>because that's what I called it in my own library.) I believe this class
>>should have these characteristics:
>>
>>1) Minimal - It should handle every sort of event that might need to be
>>handled, but nothing more. More complex logic, such as pooling and
>>balancing, should be handled elsewhere, possibly by a derived class.
>
>
> Agreed.
>
>
>>In
>>addition, the design should be as unsophisticated as possible. In
>>particular, event notification might use simple functors (no 'observer'
>>frameworks)
>
>
> How would one use functors to wait for the plethora of different events
> to be handled? Surely not as a template parameter of the multiplexor class.
> You mean as a template parameter of methods of that class? That would
> still involve a dereference (some virtual function) in the end, somewhere,
> imho; you don't seem to gain anything from this in terms of inlining
> (the main reason for functors, I thought). Templates do, however, tend
> to cause code bloat :/
Well, I'm not sure. As I've mentioned in some thread, my previous
design had a few cases of indirection. At the very least, the use of
indirection should be minimized.
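To make that concrete, the kind of thing I would aim for is roughly the
sketch below - a boost::function per handle, so dispatch costs one
indirect call and nothing more. The names multiplex and add are just for
illustration.

    #include <boost/function.hpp>
    #include <map>

    class multiplex {
    public:
        typedef boost::function<void ()> callback;

        // Register a functor for a handle; any function object converts to
        // boost::function, so no observer base class is required.
        void add(int handle, const callback& cb) { callbacks_[handle] = cb; }

        // Called by the core when a handle is reported ready: exactly one
        // indirect call, no event objects handed around.
        void dispatch(int ready_handle) {
            std::map<int, callback>::iterator i = callbacks_.find(ready_handle);
            if (i != callbacks_.end()) i->second();
        }

    private:
        std::map<int, callback> callbacks_;
    };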
On the other hand, I am not sure that I agree with the general form of
multiplexor used by ACE, or mentioned by Jeff. In particular, I do not
like the monolithic multiplexor that pokes its way into every module of
the program. I think a multiplexor should be silent, and only seen when
looked for. I also think that an 'event,' while a useful notion for
implementors, is not something that needs to exist tangibly.
The above comments are based on several reimplementations of the
demultiplexor I mentioned, which I worked on myself. I found my initial
monolithic design, more similar to ACE, to be much harder to work
with, for no particular advantage.
>>2) Efficient - For many applications, performance will be paramount.
>>Many asynchronous algorithms will depend on the multiplex core having
>>negligible overhead, and Boost should not disappoint. As it may be a
>>crucial building block of nearly any real-world program, it should also
>>be storage efficient, to not rule out application in embedded areas.
>
> Hmm. I agree with the efficiency in terms of CPU. But as always,
> storage efficiency and CPU efficiency are each other's counterparts.
> You cannot pursue both at the same time. I think that embedded applications
> need a different approach - they are a different field. It will not
> necessarily be possible to serve both high-performance, real-time
> server applications AND embedded applications at the same time. In
> that case I will choose the high-end server applications every
> time :/ (because that is where my personal interests are).
I do not feel there is any need for a demultiplexor to be large. You
could easily make one large, as with ACE, but I do not think it would
provide an advantage, even for "high-end server applications."
>>3) Compatible - See
>>http://article.gmane.org/gmane.comp.lib.boost.devel/109475
>>
>>It is my opinion, in fact, that this multiplex class should be in its
>>own library, isolated from any other particular library that would
>>depend on it. In other words, it wouldn't be any more coupled with I/O
>>than it would be with Boost.Thread or date_time.
>
> I still think we will need threads - not only internally but even
> as part of the interface to support users that WANT to write
> multi-threaded applications.
>
> Consider a user who has two threads running and he wants to wait
> for events in both threads. He will then need a 'Reactor' object
> for both threads: both threads will need their own 'main loop'.
> Supporting that (and we have to support it imho) means that the
> library is thread-aware. I think we have to depend on Boost.Thread
> as soon as threads are being used; I am not willing to duplicate
> the code from Boost.Thread inside Boost.Multiplexor just to be
> independent of Boost.Thread.
I disagree with the monolithic style used by some other libraries that
requires such a complex approach. That case you mentioned is
particularly important to me. I have had much luck handling it by
simply instantiating two demultiplexor objects--one in each thread.
I do think it is a good idea to make parts of the demultiplexor object
thread-safe, where there is a need and it does not slow critical operations.
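In code, the case you describe is about this simple (a sketch only;
multiplex stands in for whatever the eventual core class is called, and
run() for its blocking loop):

    #include <boost/thread/thread.hpp>

    // Stand-in for the eventual core class; run() would block and dispatch.
    struct multiplex {
        void run() { /* wait for and dispatch this thread's events */ }
    };

    struct loop_runner {
        void operator()() const {
            multiplex m;   // this thread's private demultiplexor
            // ... register this thread's handles and functors on m ...
            m.run();       // events are dispatched in this thread only
        }
    };

    int main() {
        boost::thread first((loop_runner()));    // one event loop per thread,
        boost::thread second((loop_runner()));   // no shared state between them
        first.join();
        second.join();
        return 0;
    }

Nothing in the core needs to know that there is more than one instance of it.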
Aaron W. LaFramboise