From: Don G (dongryphon_at_[hidden])
Date: 2005-04-23 11:28:30
>> The reason I suppose that I preferred not-yet-connected
>> as a state of stream (vs. a separate class) is that it
>> is logically a 1-to-1 relationship (and even the same
>> socket) whereas acceptor is a 1-to-N relationship (the
>> acceptor socket never becomes something else).
> I don't see connector as 1 to 1 concept. You can use a
> connector to establish several connections to the same
> address or endpoint if that is needed. The connector
> pattern can be used to model fail over between
> different endpoints providing the same service, by
> having a service_connector implementation that wraps
> and coordinates several alternate connectors to
> different end points over potentially different
> transports. This also hides/abstracts the fail-over
> strategy and new-endpoint selection in a nice way from
> the protocol handler and stream.
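The fail-over wrapper described in the quote above might be sketched like this. All the names here (`connector`, `stream`, `service_connector`) are hypothetical, chosen only to illustrate the pattern, not taken from any real library:

```cpp
#include <functional>
#include <stdexcept>
#include <string>
#include <vector>

// Illustrative stand-ins for the real abstractions under discussion.
struct stream { std::string endpoint; };
using connector = std::function<stream()>;  // throws on failure

// A service_connector wraps several alternate connectors to different
// endpoints and tries them in order, hiding the fail-over strategy
// from the protocol handler, which just receives a connected stream.
class service_connector {
public:
    explicit service_connector(std::vector<connector> alternates)
        : alternates_(std::move(alternates)) {}

    stream connect() {
        for (auto& c : alternates_) {
            try { return c(); }
            catch (const std::exception&) { /* try the next endpoint */ }
        }
        throw std::runtime_error("all endpoints failed");
    }

private:
    std::vector<connector> alternates_;
};
```

The endpoint-selection policy (ordered, round-robin, weighted) lives entirely inside the wrapper, which is the hiding the quote is after.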
I see this as something useful, but at a higher level than what I am
proposing. That may be just because I have never needed anything like
it. ;) In all my own uses of connection establishment, it has been
one-off connections. Occasionally, I need to reconnect to the same
server, but I have always done that with the address object in my
>> The other reason I went with the approach I have is
>> that there is only one object to deal with for cancel.
>> I don't need to worry about the transition from
>> connecting to connected when I want to abort.
> The exact ownership of the socket/handle is always
> clear: either the connector owns it during the
> connection phase, or, once connected, the
> ownership is transferred to the stream. So I don't
> see this as a potential problem.
The scenario I am describing goes along with the above statements on
single use: I want to do an HTTP get. I first connect, then write,
then read. If I decide to cancel, especially with threads in the mix
and using blocking style, I must know that I am blocked on the
connector, not the stream. But this is not so easy to "know". I
cannot hold a lock while I am blocked on connect. What this does is
create a race: the entity wanting to cancel cannot know if the
blocking call has just finished or not. Which is why I don't like
sync programming anymore! ;)
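The single-object alternative I prefer can be sketched as follows. This is a hypothetical `http_stream`, not a real API; the point is only that the connecting-to-connected transition and the cancel request are serialized by one mutex inside one object, so the canceler never has to guess which of two objects is currently blocked:

```cpp
#include <mutex>

// One object owns the whole lifetime, so cancel() always has a single,
// current view of the phase; there is no window where the caller's
// knowledge of "connector vs. stream" can go stale.
class http_stream {
public:
    void begin_connect() { set(phase::connecting); }
    void on_connected()  { set(phase::transferring); }

    // The phase transition and the cancel check share the same mutex,
    // so the race described in the text cannot occur here.
    void cancel() {
        std::lock_guard<std::mutex> lk(m_);
        cancelled_ = true;
        // A real implementation would abort the pending connect or the
        // pending read/write, depending on phase_.
    }

    bool cancelled() const {
        std::lock_guard<std::mutex> lk(m_);
        return cancelled_;
    }

private:
    enum class phase { idle, connecting, transferring };
    void set(phase p) { std::lock_guard<std::mutex> lk(m_); phase_ = p; }

    mutable std::mutex m_;
    phase phase_ = phase::idle;
    bool cancelled_ = false;
};
```

With separate connector and stream objects, that one mutex would have to span both, which is exactly the coordination burden I am trying to avoid.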
I see this granularity as fitting poorly with the desire to
cancel a specific operation that required a connection. This is
especially true for reuse of the connector in the environment I just
described.
I hope this clarifies my concern. I really feel that a granular
cancel feature has to be part of the design. Preferably in a way that
is MT friendly. :)
> Separating the connection establishment would make the
> stream more stateless, give it fewer concerns, and
> separate responsibilities, keeping the interfaces simpler.
> It also doesn't imply inheritance, as is the case with a
> connectable stream. You also don't have to handle
> questions such as whether a stream can be connected again
> after it is closed, and so forth (and I guess this could
> be different in different implementations).
The stream still has many states; this eliminates only one. Also, I
don't see any (good<g>) reason to reuse streams. That would be
particularly bad in an MT scenario.
>> Some platforms have socket features that aren't available
>> on other platforms. Unix has signal-driven I/O, while
>> Windows does not. Windows has event object based I/O,
>> completion ports, HWND messages, etc. which are unique
>> to it. The common intersection is blocking, non-blocking
>> and select. One can write 99% portable sockets code based
>> on that subset.
> So basically layer 0 should support this portable subset.
Well, I would mostly agree, but I don't think everyone will. One of
the goals several people have expressed is the ability to use level 0
wrappers in non-portable ways.
What I am proposing is that there be a level 1 that provides a less
bumpy, more easily handled, but conceptually very similar interface.
This was another reason I don't have a connector object; that concept
has no association with layer 0.
> You can define FD_SETSIZE to some arbitrary number. On
> my FC1 box, including select.hpp, FD_SETSIZE is 1024, so we
> will have to handle arbitrary limits in the interface.
> On Windows, select could be implemented using
> WSAEventSelect and WaitForMultipleObjects, but this will
> have to be cascaded to other threads if the set is too
> big, and in that case handles can be mixed. So either we
> have to cater for arbitrary limits, letting the user
> implement on top of the limits, or remove arbitrary
> limits at the cost of complexity and probably increased
On Windows, you cannot increase the fd_set size limit, but I think
you are correct on Unix that you can increase it. I have never tried
because I wasn't sure if that would require a kernel recompile<g> and
it wouldn't solve my problem even if it were possible.
I would not want to do the thread cascade approach (ala Cygwin). I
think a much better approach is what I am proposing: hide this detail
completely. I am still scratching around in my head for a way to
allow true single threaded use (similar to Peter's net::poll
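Hiding the limit comes down to partitioning the caller's descriptor set into chunks no larger than FD_SETSIZE inside the library, with each chunk serviced by its own select() call (and, if necessary, its own thread). Only the chunking logic is sketched here; the names are illustrative and the constant stands in for the platform's real FD_SETSIZE:

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

static const std::size_t kSetLimit = 1024;  // stand-in for FD_SETSIZE

// Split an arbitrarily large descriptor set into chunks that each fit
// in one fd_set. The caller never sees the platform limit.
std::vector<std::vector<int>> partition(const std::vector<int>& fds) {
    std::vector<std::vector<int>> chunks;
    for (std::size_t i = 0; i < fds.size(); i += kSetLimit) {
        std::size_t end = std::min(fds.size(), i + kSetLimit);
        chunks.emplace_back(fds.begin() + i, fds.begin() + end);
    }
    return chunks;
}
```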
> Yes, the rules must be really clear, and the library
> should probably never hold any kind of lock when
> calling a user-defined callback, since that is
> generally very error-prone and deadlock-creating, but
> this also makes it really hard to implement ;) but
> also interesting.
Yes, quite "interesting" :) And you can remove the "probably" between
the "should" and the "never"... ;)
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk