From: Don G (dongryphon_at_[hidden])
Date: 2005-04-24 09:12:09
Hi Michel,
Mixing two replies here, just for fun :)
>>> So basically layer 0 should support this portable
>>> subset.
>>
>> Well, I would mostly agree, but I don't think
>> everyone will. One of the goals several people
>> have expressed is the ability to use level 0
>> wrappers in non-portable ways.
>
> Isn't that just saying the handle should be
> accessible. And you should be able to attach and
> release an underlying handle from the objects at
> this level.
I believe so, at level 0 - the socket wrapper classes. As you
probably noticed, the interfaces I propose do not expose this handle,
and indeed they cannot, since the handle probably won't be what the
user expects - if it exists at all.
> I still think that at least ip/ipv6 should be
> on by default (and implemented in such a way that
> they lazily initialize themselves, so the cost of
> having them active at all times is low). I have
> no problem with users registering some odd net
> providers or their own user-defined ones. One should
> be aware that several dlls could load the library
> simultaneously, and the library should preferably
> cater for that or at least not break down. Maybe
> just set some rules.
I guess I am still focused on the user creating network objects and
using them directly, since that is how I use this library where it
was born. I will have to think further on this because, while this
textual-address-centric approach sounds reasonable, I am concerned by
so much happening without direct control. In some environments (from my
experiences with plug-ins and ActiveX mostly), it is important to
know when things can be started and at what point they must end. The
lazy init is good for starting, but the stopping is more troublesome.
I cannot tell you how frustrating it was to debug threading issues
during global destruction! :(
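Just to illustrate what I mean by direct control, something like the
sketch below is what I have in mind. It is only a sketch; net::library
and its behavior are hypothetical, not part of anything proposed so far:

#include <cstdio>

namespace net
{
    // Hypothetical RAII guard: the hosting code (exe, plug-in, ActiveX
    // control) decides exactly when the network layer starts and when
    // it stops, instead of relying on lazy init + global destruction.
    class library
    {
    public:
        library ()  { std::puts("network layer started"); /* WSAStartup, ... */ }
        ~library () { std::puts("network layer stopped"); /* WSACleanup, ... */ }

    private:
        library (const library&);             // non-copyable
        library& operator= (const library&);
    };
}

int main ()
{
    net::library scope; // started here, under the host's control

    // ... create network objects, run the app ...

}                       // stopped here, well before global destructors run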
Also, mapping from a user-entered form into some of the cryptic forms
we have discussed would be more tedious to me than just using the
right object. ;) Even in config files this seems a bit hard to
understand and document. As a user of a product, I would not react
well to this stuff. Imagine configuring your proxy server setting:
proxy=tcp:/proxy.boost.com:80
I would expect it to be:
proxy=http://proxy.boost.com
Pushing the multi-network detail to the user seems like a mixed
blessing. Like I said: more thinking...
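For what it is worth, the mapping I would rather do on the user's
behalf looks roughly like the sketch below. It is only a sketch: the
scheme-to-port table and error handling are omitted, and none of these
names are proposed interface.

#include <cstdlib>
#include <iostream>
#include <string>

// Turn the user-friendly "http://proxy.boost.com" into scheme/host/port,
// defaulting the port from the scheme.
struct proxy_setting
{
    std::string    scheme;
    std::string    host;
    unsigned short port;
};

proxy_setting parse_proxy (const std::string& text)
{
    proxy_setting p;

    std::string::size_type pos = text.find("://");
    p.scheme = (pos == std::string::npos) ? "http" : text.substr(0, pos);
    std::string rest = (pos == std::string::npos) ? text : text.substr(pos + 3);

    std::string::size_type colon = rest.find(':');
    if (colon == std::string::npos)
    {
        p.host = rest;
        p.port = (p.scheme == "http") ? 80 : 0; // real code: scheme->port table
    }
    else
    {
        p.host = rest.substr(0, colon);
        p.port = (unsigned short)std::atoi(rest.substr(colon + 1).c_str());
    }
    return p;
}

int main ()
{
    proxy_setting p = parse_proxy("http://proxy.boost.com");
    std::cout << p.scheme << " " << p.host << " " << p.port << "\n"; // http proxy.boost.com 80
}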
> So you have a switch somewhere like
> switch (transport) {
> case serial: return mSerial;
> case tcp: return mTcp;
> }
>
> Or something to that effect that corresponds to
> the map? Or maybe you don't provide the same
> services/protocols over two networks, in which
> case the context would be clear: from
> my_fancy_serial_protocol_handler use mSerial and
> from my_cool_tcp_handler use mTcp?
Most of our UIs make the context obvious for the user's benefit, which
makes the code switch-less as well. A common sequence goes like
this:
- app init creates mTcp and sets some stuff in it
- user says, dial 555-1212 on modem X, so we create
mSerial (just one modem dial at a time<g>)
- user goes to the TCP page and enters an address
and hits the "Do Foo" button.
- because of the context, we know that mTcp is the
one to use.
- user goes back to the dial page and presses the "Do
Foo" button.
- again by context we use mSerial.
The contexts are usually different methods invoked by the GUI
framework based on the current page the user is working with.
We try to avoid having the user enter arcane syntax addresses, so "Do
Foo" is where we establish the scheme/protocol as well.
The map would be important in this kind of app if the user entered
such addresses directly. It would also come up if certain things were
in config files (they haven't in my experience). Other kinds of apps
may run into this if they need to expose multiple networks at that
level.
> That was the method I was referring to when saying
> post notifications to the GUI thread. And of course
> eliminating threads isn't important for regular apps
> either. But one additional io thread and a hidden
> window and some post message calls, and all your code
> is serialized - a very simple model without
> chopping up the event loop.
Yes. The "generic async library" I keep mentioning is where this is
handled w/o the user A) being called in the wrong thread or B) the
user manually doing any posting.
In pseudo code:
class foo
{
public:
    foo (net::stream_ptr s)
        : strm_(s)
    {
        ch_.open();
        strm_->async_connect(ch_.bind_async_call(&on_connect));
    }

    // implicit: ~foo () { ch_.close(); }

private:
    net::stream_ptr strm_;
    channel         ch_;

    void on_connect () // called in same thread as ctor
    {
        strm_->async_write("Hello world!", ...,
                           ch_.bind_async_call(&on_written));
    }

    void on_written () // also called in ctor's thread
    { ... }
};
There is a lot that could be explained about channel. I posted this
some weeks ago as Asynchronicity or something like that, but I
haven't focused much on it.
The current thread (app main in this case) must cooperate for the
channel to open; the thread must declare its intent to deliver queued
messages (boost::function<> objects) by creating a different object
(I called it a nexus in my post, but that was just a first crack at
renaming the concept from the code at work).
The channel class exists to allow ~foo (or some other impulse) to
cancel delivery of any messages that might be in the queue. Imagine
if "delete foo" happened with the call to on_connect already in the
queue. Yuk. So, channel solves this problem.
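To make that a bit more concrete, here is a very rough sketch of the
core of channel. The names are from my earlier post, nothing about the
interface is settled, and boost::function/boost::mutex are just what I
would reach for first:

#include <cstddef>
#include <deque>
#include <boost/function.hpp>
#include <boost/thread/mutex.hpp>

// Callbacks are queued for the owning thread to deliver; close() throws
// away anything still queued, which is what makes "delete foo" safe even
// with the call to on_connect already pending.
class channel
{
public:
    channel () : open_(false) {}

    void open ()  { boost::mutex::scoped_lock l(m_); open_ = true; }
    void close () { boost::mutex::scoped_lock l(m_); open_ = false; q_.clear(); }

    // called by the I/O layer, possibly from another thread
    void post (const boost::function<void ()>& f)
    {
        boost::mutex::scoped_lock l(m_);
        if (open_)
            q_.push_back(f);
    }

    // called by the owning thread (the "nexus" drives this)
    void deliver ()
    {
        std::deque<boost::function<void ()> > pending;
        {
            boost::mutex::scoped_lock l(m_);
            pending.swap(q_);
        }
        for (std::size_t i = 0; i < pending.size(); ++i)
            pending[i]();
    }

private:
    boost::mutex m_;
    bool open_;
    std::deque<boost::function<void ()> > q_;
};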
The channel approach does complicate usage in this case, and it could
be improved by exposing the concept from the network layer:
class foo
{
public:
    foo (net::stream_ptr s)
        : strm_(s)
    {
        strm_->set_async_context(); // to this thread
        strm_->async_connect(&on_connect);
    }

private:
    net::stream_ptr strm_;

    void on_connect () // same thread as set_async_context
    {
        strm_->async_write("Hello world!", ...,
                           &on_written);
    }

    void on_written () // also by set_async_context thread
    { ... }
};
This just adds some (useful<g>) coupling between "net" and the
general async library.
> I don't think threads can be avoided, as I see it,
> at least not when hitting an internal limit of the
> notification mechanism chosen, e.g. 64 for Windows
> and 1024 for Linux FC3. Otherwise polling would
> have to be done in some round-robin way over the
> available handle sets, and that would introduce
> latency.
Without threads the network would have a capacity limit of
FD_SETSIZE, but even with threads the system has a limit; I found
that out trying to stress test my thread pool solution<g>. So, if the capacity
limit of FD_SETSIZE is acceptable for the application, and threads
are to be avoided, then I can see their argument. I think it is a bit
premature to form conclusions (without measuring/optimizing), but it
is a valid gut feel.
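To put a rough number on that gut feel: with a select()-style
mechanism each polling thread can watch at most FD_SETSIZE handles, so
a thread pool ends up partitioning the open sockets roughly like the
sketch below (socket_t and the 64 are placeholders; the real limit
depends on platform and mechanism).

#include <cstddef>
#include <vector>

typedef int socket_t;                    // placeholder handle type

const std::size_t max_per_thread = 64;   // e.g. default FD_SETSIZE on Windows

// One group per polling thread, each group at most FD_SETSIZE sockets.
std::vector<std::vector<socket_t> >
partition_for_polling (const std::vector<socket_t>& sockets)
{
    std::vector<std::vector<socket_t> > groups;
    for (std::size_t i = 0; i < sockets.size(); i += max_per_thread)
    {
        std::size_t end = i + max_per_thread;
        if (end > sockets.size())
            end = sockets.size();
        groups.push_back(std::vector<socket_t>(sockets.begin() + i,
                                               sockets.begin() + end));
    }
    return groups; // groups.size() == number of polling threads needed
}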
Best,
Don