Boost :
From: JH (jupiter.hce_at_[hidden])
Date: 2019-08-22 08:16:11
On 8/22/19, Bjorn Reese via Boost <boost_at_[hidden]> wrote:
> On 8/22/19 3:55 AM, JH via Boost wrote:
>
>> Yes, I am doing an experiment to increase the process priority; it will
>> help, but I don't know by how much. I am not quite sure whether replacing
>> boost::asio::deadline_timer with high_resolution_timer will help,
>> or whether multithreading will help; the device is running a simple process
>
> It is not only a matter of process priority. Your data can also be
> delayed because the network is busy.
>
> However, I suspect that your main problem is not so much that a single
> timeout is delayed, but rather that a delay causes a shift in the
> subsequent timeouts. In that case you could measure the time between
> two timeouts, and then compensate for any deviations by making the
> next expiration time accordingly shorter or longer.
Very good point. It is almost impossible to get real-time behaviour on
Linux, but at least the compensation can mitigate the time shift.
> In other words, you want a sequence of expiration time like this:
>
> T + 1 * delta
> T + 2 * delta
> T + 3 * delta
> T + 4 * delta
>
> but because of delays you are actually getting an accumulated
> error:
>
> T + 1 * delta
> T + 2 * delta + error
> T + 3 * delta + error
> T + 4 * delta + error
>
> With compensation you will get:
>
> T + 1 * delta
> T + 2 * delta + error
> T + 3 * delta - error (this is the compensation step)
> T + 4 * delta
That was exactly what I should do.
Thanks Bjorn and Gavin.
Kind regards,
- jupiter
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk