Vicente,
If I’m reading this correctly, what you are proposing is the same as what I did, except that I used an explicit duration_cast<>. Yours is certainly more concise and readable.
This method did work, by the way.
Dan
From: Boost-users [mailto:boost-users-bounces@lists.boost.org]
On Behalf Of Vicente J. Botet Escriba
Sent: Wednesday, June 12, 2013 1:44
To: boost-users@lists.boost.org
Subject: Re: [Boost-users] [chrono] Initializing system_clock from microseconds
On 11/06/13 19:15, Kelly, Dan wrote:
I’m using 1.47, so I would expect this issue to apply. However, I don’t think this is what I am experiencing.
Right now I am testing out a fix that initially appears to work, but I still have to verify it fully.
void my_class::calculate_packet_statistics( const struct pcap_pkthdr *header ) {
    // Assumes the boost::chrono names (system_clock, duration_cast,
    // microseconds) are in scope, e.g. via "using namespace boost::chrono;".
    frame_interval_.intervalTimeStamp_ = system_clock::from_time_t( static_cast<time_t>( header->ts.tv_sec ) );
    // Cast to a duration using the system_clock's tick period.
    system_clock::duration us_duration = duration_cast< system_clock::duration >( microseconds( header->ts.tv_usec ) );
    frame_interval_.intervalTimeStamp_ += us_duration;
    …
}
I would have thought my previous implementation would have done this cast implicitly, but that doesn’t appear to be the case. I’ll let you know if this works.
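As a side note, here is a minimal standalone sketch of the conversion rule at play (using boost::chrono directly; the variable names are illustrative): chrono allows a duration to convert implicitly only when the target tick period represents the source exactly, and requires an explicit duration_cast otherwise.

#include <boost/chrono.hpp>

int main()
{
    using namespace boost::chrono;
    microseconds us( 5 );
    // Implicit conversion is allowed when no precision can be lost:
    nanoseconds ns = us;                                  // OK: 1 us == 1000 ns exactly
    // milliseconds ms = us;                              // error: would truncate
    milliseconds ms = duration_cast<milliseconds>( us );  // explicit, truncates to 0 ms
    (void)ns; (void)ms;  // silence unused-variable warnings
    return 0;
}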
What about something like
frame_interval_.intervalTimeStamp_ =
    system_clock::from_time_t( static_cast<time_t>( header->ts.tv_sec ) ) +
    microseconds( header->ts.tv_usec );  // tv_usec holds microseconds, not nanoseconds
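For reference, the same idea as a self-contained function (the name timeval_to_time_point and the standalone framing are illustrative, not from your code; it assumes a POSIX environment for <sys/time.h>, and that system_clock::duration is at least as fine-grained as a microsecond, which holds where Boost.Chrono uses nanosecond ticks):

#include <boost/chrono.hpp>
#include <sys/time.h>  // struct timeval, the type of pcap_pkthdr::ts

boost::chrono::system_clock::time_point
timeval_to_time_point( const struct timeval &tv )
{
    using namespace boost::chrono;
    // time_point + microseconds yields a time_point whose duration is the
    // common type of the two operands; it converts back to a
    // system_clock::time_point implicitly only when that conversion is exact.
    return system_clock::from_time_t( static_cast<time_t>( tv.tv_sec ) )
           + microseconds( tv.tv_usec );
}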
Best,
Vicente