From: Jeff Garland (jeff_at_[hidden])
Date: 2007-04-24 09:09:13
Stjepan Rajko wrote:
> I have uploaded some documentation and code of a prototype RPC /
> marshal library:
> It differs from the one I attached yesterday in that it actually works
> :-) Tested on Win/MSVC, *should* build on GCC (haven't tested on GCC
> after recent changes).
> The code is pretty infant but shows some functionality. The docs on
> the above website show an example and some discussion points.
> In addition to what's listed at the website, I'm wondering what the
> proper way of returning the function call results would be... For
> example, if the function is void something(int &x), should the
> modified "x" value on the remote computer appear in the local x after
> the remote call returns?
Just to add some perspective, 'full RPC' systems typically support this way of
returning values. In CORBA, parameters are characterized as 'in', 'out' or
'in-out' in the IDL method descriptions.
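To make the 'in-out' behavior concrete, here is a minimal sketch (my own illustration, not from any particular RPC library) of what a stub for void something(int &x) has to do: the reply message must carry the modified value back, and the stub copies it into the caller's variable.

```cpp
#include <cstring>
#include <vector>

// Hypothetical sketch of marshaling an 'in-out' parameter for
// void something(int& x).  The request carries x to the server;
// the reply must carry the modified value back to the caller.

// The remote implementation of something(): doubles its argument.
void something_impl(int& x) { x *= 2; }

// Server side: unmarshal the argument, execute the call, and
// marshal the new value into the reply buffer.
std::vector<char> serve(const std::vector<char>& request) {
    int x;
    std::memcpy(&x, request.data(), sizeof x);    // unmarshal 'in' value
    something_impl(x);                            // execute the call
    std::vector<char> reply(sizeof x);
    std::memcpy(reply.data(), &x, sizeof x);      // marshal 'out' value
    return reply;
}

// Client-side stub: marshal x, "send" the request, and copy the
// returned value back into the caller's variable -- which is exactly
// what 'in-out' semantics imply.
void something_stub(int& x) {
    std::vector<char> request(sizeof x);
    std::memcpy(request.data(), &x, sizeof x);
    std::vector<char> reply = serve(request);     // stand-in for the network round trip
    std::memcpy(&x, reply.data(), sizeof x);      // local x now reflects the remote change
}
```

Here serve() stands in for the whole network round trip; a real implementation would of course serialize portably rather than memcpy raw bytes.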
> I can see that as being reasonable if the RPC is synchronous, but if
Well, not necessarily.
> it is asynchronous maybe something like a future would be a good way
> of providing the modified value? (speaking of, could someone suggest
> a futures implementation out of the ones that were floating around a
> while back?)
> The alternative would be to have all modified parameter values be
> stored in a call structure (which is what happens now with the regular
> return value) and accessible from there.
> Any suggestions welcome!
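On the future idea: the shape of the interface would be something like the sketch below. I'm using std::future purely for illustration (any of the futures implementations discussed on the list would play the same role), and async_something is a hypothetical stub name -- the point is just that the asynchronous call returns a future yielding the modified value instead of writing through the caller's reference.

```cpp
#include <future>

// Stand-in for the remote side of void something(int& x):
// takes the 'in' value, returns the modified value.
int remote_something(int x) { return x * 2; }

// Hypothetical async stub: rather than mutating x in place, it
// returns a future for the new value of x.  The caller decides
// when to block on the result.
std::future<int> async_something(int x) {
    // std::async stands in for dispatching the request to the server.
    return std::async(std::launch::async, remote_something, x);
}
```

The caller would then write int x = 21; auto f = async_something(x); and later retrieve the updated value with f.get(), blocking only at that point.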
If you want to study some past experience, there's a wealth of literature on
the designs and tradeoffs. Just a couple of examples:
I have one other comment for the moment - in the doc you say:
The entire server-side code is running in only one thread. This is
probably not good. Should there be one thread per client? One thread
per function call? Is there a "best" solution or should there be options?
There's not a 'simple' best answer to this. A single thread might be
perfectly fine for something that executes a fast function and doesn't serve
many clients at the same time (say, calculating the current time). Something
that needs to execute a function that performs significant computation, thus
taking substantial time, needs a different strategy. It might spawn a
sub-process or a thread to do the actual work allowing the main thread to wait
for and process other inbound connections and requests. A typical strategy
for problems that require scalability is to use a thread pool. At any given
moment one thread from the pool is waiting for any new i/o on the network --
when it is received that thread begins processing the request and will process
it to completion. At the start of request processing another thread takes over
waiting for network i/o. This approach allows for minimal context switching
with respect to processing a request and can be tuned to the number of processors
actually available to handle requests and the nature of the processing.
Usually the number of threads in this sort of scheme is significantly less
than the number of simultaneous clients. Anyway, the 'thread per client'
approach is inherently not scalable...which is fine -- as long as you don't
need to scale.
In any case, it's an area of some significant design depth -- and one for which
boost doesn't provide all the facilities needed. We don't have the thread
pool or thread-safe queue implementations that might be needed in some of the
various strategies you might desire.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk