
Subject: Re: [Boost-users] asio: cancelling a named pipe client
From: Stian Zeljko Vrba (vrba_at_[hidden])
Date: 2018-01-30 14:03:23


Hi, thanks for your thoughts.

> It is not valid to have more than one outstanding async read on an asio io object at a time

Although I'm not doing this, the documentation mentions this restriction only for composed operations (the free function async_read), not for the member functions on objects (async_read_some()).

> Because the handler has already been passed to the io_service for invocation.... Think of cancel() as meaning, “please cancel the last request if it’s not already completed." ... Anything posted to the io_service will happen.

So asio leaves the handling of difficult edge cases to every user instead of offering a user-friendly opt-in solution, such as: each i/o object tracks its posted but not-yet-executed handlers, and cancel() on the object traverses that list and updates the error codes. (Meaningful only if the operation completed successfully.) Handlers don't need to be immutable for this, and given that handlers take error_code by const reference (implying that the error_code must already be stored somewhere deep inside asio), I suspect they aren't.
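
Something along these lines, as a purely hypothetical sketch (invented names, not a real asio API):

    // Hypothetical sketch only -- nothing like this exists in asio today.
    // Idea: wrap each handler so a later cancel() can rewrite a "success"
    // error code before the handler actually runs.
    #include <boost/asio/error.hpp>
    #include <boost/system/error_code.hpp>
    #include <atomic>
    #include <cstddef>
    #include <memory>

    struct cancel_token {
        std::atomic<bool> cancelled{false};
    };

    template <typename Handler>
    auto make_cancellable(std::shared_ptr<cancel_token> tok, Handler h) {
        return [tok, h = std::move(h)](boost::system::error_code ec,
                                       std::size_t n) mutable {
            // cancel() ran after the operation completed but before this
            // handler executed: report it as aborted instead of success.
            if (tok->cancelled.load() && !ec)
                ec = boost::asio::error::operation_aborted;
            h(ec, n);
        };
    }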

(Unrelated: individual operations cannot be canceled, e.g., a read but not a write; this is a glaring design omission from my POV. I needed that in another project.)

> Handlers often hold lifetime-extending shared-pointers to the source of their invocations.

Yes, that's another gotcha when you have both reads and writes outstanding. It's especially tricky to discover and fix when only, say, the read fails due to a broken pipe, while there's no data to send, so write() won't fail in the foreseeable future. The io_service then just sits there, hanging, waiting for the write handler to run...

> This would indicate a design error. Think of the io_service as “the global main loop” of your program.

I have a program where the configuration cannot be changed dynamically. I have to stop the components and recreate them with the new configuration object. The amount of time I've spent figuring out how to get clean shutdown/cancellation working (and I'm probably still not there yet!) leads me to accept "design error" as a valid solution to the problem: stop(), delete the io_service, and recreate it when needed. An ugly and inelegant solution that fixes absolutely all problems (hey, that's what engineering is all about!), including "stale" handlers being invoked upon restart.
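
Concretely, the "solution" amounts to something like this sketch (component recreation and error handling elided):

    #include <boost/asio.hpp>
    #include <memory>
    #include <thread>

    std::unique_ptr<boost::asio::io_service> io;
    std::thread runner;

    void restart_with_new_config() {
        if (io) {
            io->stop();            // drop out of run() as soon as possible
            if (runner.joinable())
                runner.join();     // wait for the event-loop thread
            io.reset();            // destroying it discards stale handlers
        }
        io = std::make_unique<boost::asio::io_service>();
        // ... recreate the components against the fresh io_service ...
        runner = std::thread([] { io->run(); });
    }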

... I guess this semi-rant can be summarized as: asio needs documentation on best practices/patterns/guidelines for lifetime management of handlers.

(Right now, in my design, the "controlling" object has weak_ptrs to the "worker" objects, while the workers keep themselves alive through a shared_ptr. Each worker also has a unique ID because different instances may be recreated at the same addresses.)
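
Condensed, the controller side looks roughly like this (a sketch; the workers' i/o details are elided):

    #include <cstdint>
    #include <memory>
    #include <vector>

    // Stand-in for the i/o-owning worker described above; the real workers
    // keep themselves alive via shared_ptrs captured in pending handlers.
    struct Worker {
        std::uint64_t id;              // unique per instance, never reused
        void cancel() { /* cancel this worker's pending i/o */ }
    };

    struct Controller {
        std::vector<std::weak_ptr<Worker>> workers;  // observe, never own

        void cancel_worker(std::uint64_t id) {
            for (auto& wp : workers)
                if (auto sp = wp.lock())  // null once that instance died
                    if (sp->id == id)     // the id guards against a new
                        sp->cancel();     // instance at the same address
        }
    };

Which leads me to the following question.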

Does weak_ptr protect against an analogue of the "ABA" problem? Say I have a long-lived weak_ptr to a live shared_ptr. Then the shared_ptr gets destroyed. Then another shared_ptr of the same type gets created, and both the object and the control block get the same addresses as the previous instances (not unthinkable with caching allocators). How will lock() on the existing weak_ptr behave? Intuitively, it should return null, but will it? Does the standard say anything about this?
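
To make the scenario concrete (a minimal sketch):

    #include <cassert>
    #include <memory>

    struct Widget { int v = 0; };

    int main() {
        std::weak_ptr<Widget> w;
        {
            auto a = std::make_shared<Widget>();
            w = a;                          // w observes a's control block
        }                                   // a destroyed; w has expired
        // A caching allocator might hand out the very same addresses again:
        auto b = std::make_shared<Widget>();
        assert(w.lock() == nullptr);        // intuitively this must hold --
                                            // but does the standard say so?
    }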

-- Stian

________________________________
From: Richard Hodges <hodges.r_at_[hidden]>
Sent: Tuesday, January 30, 2018 8:45:26 AM
To: boost-users_at_[hidden]
Cc: Stian Zeljko Vrba; Gavin Lambert
Subject: Re: [Boost-users] asio: cancelling a named pipe client

Hi Stian,

Some thoughts from an ASIO veteran and fan:

> - it's an all or nothing thing, i.e., it can't be used to cancel individual I/O requests

It is not valid to have more than one outstanding async read on an asio io object at a time*. cancel() will cancel the current async operation that is in progress on that object if there is one.

You have to remember that notifications come through the io_service and therefore “happen” for the client later than they actually “happened” in reality. If you want to correlate every completion handler invocation with every read call, then you might want to consider assigning an “invocation id” to each read operation and passing that to the closure (handler).

* clarification: deadline_timers may have more than one outstanding wait, and an io object may have an outstanding read and write at the same time.
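
For example, the invocation-id idea might look something like this (a sketch; names are mine):

    #include <boost/asio.hpp>
    #include <cstdint>
    #include <iostream>

    // Tag every read with a monotonically increasing id so each completion
    // can be matched to the exact call that requested it.
    template <typename Stream, typename MutableBuffer>
    void start_read(Stream& stream, MutableBuffer buf, std::uint64_t& next_id) {
        const std::uint64_t id = next_id++;
        stream.async_read_some(buf,
            [id](boost::system::error_code ec, std::size_t n) {
                // 'id' says which read this completion belongs to, even if
                // the operation raced with a cancel or a subsequent read.
                std::cout << "read #" << id << ": " << ec.message()
                          << ", " << n << " bytes\n";
            });
    }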

> it doesn't work on completed requests

Because the handler has already been passed to the io_service for invocation. From the socket’s point of view, you’ve been notified. Sending a cancel before the execution of the handler can only meaningfully result in a NOP because it’s a crossing case. Think of cancel() as meaning, “please cancel the last request if it’s not already completed."

> EVEN THOUGH the io service has a list of outstanding requests (waiting for completion) and pending (completed) handlers

Anything posted to the io_service will happen. It’s a done deal. The io_service is (amongst other things) a multi-producer, multi-consumer queue with some clever thread marshalling. This is important. Handlers often hold lifetime-extending shared-pointers to the source of their invocations. The handler’s invocation is where the resource can be optionally released.
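
This is also why the canonical connection shape looks something like the following sketch (names are mine): the shared_ptr captured by the handler is what keeps the object alive until the completion actually runs.

    #include <boost/asio.hpp>
    #include <memory>

    class Connection : public std::enable_shared_from_this<Connection> {
    public:
        explicit Connection(boost::asio::io_service& io) : socket_(io) {}

        void start() {
            auto self = shared_from_this();  // queued handler now co-owns us
            socket_.async_read_some(boost::asio::buffer(buf_),
                [self](boost::system::error_code ec, std::size_t n) {
                    // Even if every other owner dropped this Connection
                    // while the handler sat in the io_service queue, 'self'
                    // kept it alive; letting 'self' go out of scope here is
                    // what finally releases it.
                    if (!ec)
                        self->start();
                });
        }

    private:
        boost::asio::ip::tcp::socket socket_;
        char buf_[1024];
    };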

> I could also just call stop() on the io_service…

This would indicate a design error. Think of the io_service as “the global main loop” of your program. When writing a Windows or OS X program, no one “stops” the message loop. Messages have been posted; they must be dealt with. This is the nature of the reactor-pattern world.

R

On 30 Jan 2018, at 08:26, Stian Zeljko Vrba via Boost-users <boost-users_at_[hidden]> wrote:

Ok, thanks for the suggestion.

As a side-note, cancellation/shutdown seems to be the least thought-through feature in ASIO..

- it's an all or nothing thing, i.e., it can't be used to cancel individual I/O requests

- it doesn't work on completed requests

.. EVEN THOUGH the io service has a list of outstanding requests (waiting for completion) and pending (completed) handlers.

I could also just call stop() on the io_service, but when it's started again, all the "old" handlers will be called as well. The only complete solution is probably to stop and delete the io_service, and recreate it in the next "go".

-- Stian
________________________________
From: Boost-users <boost-users-bounces_at_[hidden]> on behalf of Gavin Lambert via Boost-users <boost-users_at_[hidden]>
Sent: Tuesday, January 30, 2018 4:27:46 AM
To: boost-users_at_[hidden]
Cc: Gavin Lambert
Subject: Re: [Boost-users] asio: cancelling a named pipe client

On 27/01/2018 06:50, Stian Zeljko Vrba wrote:
> Am I correct in assuming that cancel() is an apparent noop in this case
> because of the race-condition where an I/O request completes
> successfully before cancel is invoked?

Most likely yes. It just internally calls through to the OS API, which
will have nothing to do if there isn't an outstanding OS request at that
exact moment.

ASIO can't internally consider this a permanent failure because there
may be cases where you wanted to cancel a single operation and then
start a new one that you expect to continue normally.

> If so, can you suggest a more elegant way (i.e., a way that doesn't
> induce a hard error) of exiting a loop as described here? Setting a
> member variable instead of calling cancel?

Probably the best thing to do is to do both, in this order:

   1. Set a member variable that tells your completion handler code to
not start a new operation.
   2. Call cancel() to abort any pending operation.

This covers both cases: if you miss the pending operation, the member
will tell your completion handler not to start a new one and to just
return; and if you don't miss it, the cancellation will generate an
operation_aborted error, which you can either silently ignore (returning
immediately) or fall through to the code that checks the member.

There's still a race between when you check the member and when the
operation actually starts -- but that's why you need to post your
cancellation request to the same io_service (and use explicit strands if
you have more than one worker thread).
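
Putting both steps together, the pattern might look something like this
sketch (using a Windows stream_handle for the pipe end; with more than
one worker thread you would post through a strand instead):

    #include <boost/asio.hpp>

    class PipeClient {
    public:
        explicit PipeClient(boost::asio::io_service& io)
            : io_(io), pipe_(io) {}

        // Runs the stop logic on the io_service thread, which removes the
        // race between checking the flag and starting the next read.
        void request_stop() {
            io_.post([this] {
                stopping_ = true;  // 1. tell the handler not to restart
                pipe_.cancel();    // 2. abort any operation still in flight
            });
        }

        void start_read() {
            pipe_.async_read_some(boost::asio::buffer(buf_),
                [this](boost::system::error_code ec, std::size_t n) {
                    if (stopping_)  // covers both operation_aborted and
                        return;     // "completed just before cancel()"
                    if (!ec) {
                        // ... consume n bytes ...
                        start_read();
                    }
                });
        }

    private:
        boost::asio::io_service& io_;
        boost::asio::windows::stream_handle pipe_;  // the named-pipe end
        bool stopping_ = false;
        char buf_[4096];
    };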

Omitting the cancel isn't recommended as this would prolong shutdown in
the case that the remote end isn't constantly transmitting.
