
From: Carlo Wood (carlo_at_[hidden])
Date: 2004-09-13 13:12:28

On Tue, Sep 14, 2004 at 12:17:20AM +1000, Christopher Kohlhoff wrote:
> > - will asio use epoll on modern linux boxes?
> >
> > - will asio use kqueue when that is available on other OS?
> At the moment, the answer to both the previous questions is no. The
> only implementation it currently uses on Linux is select-based.
> However, the API has been designed to transparently support other
> mechanisms such as epoll, for when I get the time to add them :)

Ok, no problem. It will be easy to add it.

> > - What does asio use on windows? Is that also scalable
> > to 20,000 socket descriptors, like epoll and kqueue are?
> It uses IO completion ports. I'm afraid I don't recall any numbers, but
> certainly they scale to the thousands. I believe Microsoft pushes them
> as the most scalable way to write servers on Windows.

Can someone explain to me how it is possible that libACE doesn't use
IO completion ports by default?! (Or at all) :)

Anyway - I am glad to hear that.

Next question that comes to mind: is asio creating any threads itself
in order to achieve its normal functionality (on windows)?

I am still reading your documentation, but I already have the following
questions / remarks that I'd like your reaction on:

- The timer resolution of 1 second is too low (see below). Would you be
  willing to revise/change that part of the interface?

[ A resolution of 1 second means that you can request 1 second and get
  0 seconds (i.e., you call now() 1 microsecond before it would return
  the next integer, add 1 and request the timeout with that).
  A better timer interface, one that I use myself in my networking
  library, is to always work with "times" as offsets relative to a
  function returning 'now()'. The value returned by 'now()' should
  not change between calls to the system function that actually
  waits for events (thus it is only updated once per main-loop iteration).
  When requesting a time event you would for example do:

  asio::timer t(d, seconds(5));

  instead of

  asio::timer t(d, asio::time::now() + 5);

  The advantage is that if you request 1 second this way, you will
  get a much more accurate 'second', because internally the clock
  has microsecond accuracy (i.e., select(2) on Linux) or at least
  millisecond accuracy.

  On top of that, I think that you should allow millisecond-precision
  timeouts, i.e.: asio::timer t(d, milliseconds(5));

  A timer of 100 ms, or even 10 ms, is not an imaginary need in some
  applications.

- What happens when asio::demuxer::run is running
  (and waiting/sleeping for events) and then the
  application receives a signal?
  - Is it possible to treat signals as events that
    are dispatched from the mainloop?

If not, then I think direct support for this has to be added.
A signal handler can be called at almost any moment in the code
and can therefore not use system resources, or shared resources,
of any kind. You can't even have it wait for a mutex. Basically
all a signal handler can do is set an atomic flag recording that
a signal has arrived.

The actual actions that need to be taken cannot be done from
within the signal handler but need to be done from the mainloop.
Therefore it is a requirement that a signal causes either a
callback - or at least causes a return from asio::demuxer::run()
(at which point you need to be able to detect that it returned
because of a signal, and not because 'no more work' was available).

And a last remark so far,

- asio::thread doesn't belong in the library
  (it's a handy thing, but it just doesn't belong HERE).

Carlo Wood <carlo_at_[hidden]>

Boost list run by bdawes at, gregod at, cpdaniel at, john at