Subject: Re: [Boost-testing] Much longer linux regression run times
From: Tom Kent (lists_at_[hidden])
Date: 2017-03-01 12:54:06

On Tue, Feb 28, 2017 at 9:08 PM, Rene Rivera <grafikrobot_at_[hidden]> wrote:

> On Tue, Feb 28, 2017 at 8:55 PM, Tom Kent via Boost-Testing <
> boost-testing_at_[hidden]> wrote:
>> I'm not really sure how to trace this down. Is there any way to log the
>> time it takes the various libraries to complete their test suites?
> Not currently. The individual tests can run in parallel. To give you that
> number we would have to save the information as part of the regression data
> and sum it up as part of the regression reports.
>> I'm guessing that will show one library that is using 3+ hours. Although,
>> it is possible that a change went in to a higher level library that just
>> adds a few seconds to each call and is used across many of the libraries.
>> Thoughts?
> Binary search manually? You can limit which tests you run on the command
> line by using the "--include-tests" b2 option. So start off by only
> running tests for [a-m] or [n-z], then [a-g], and so on, until you find
> the time hog.
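The bisection Rene describes can be sketched as a small shell helper. This is a hedged sketch: the option is spelled "--include-tests" as quoted in the thread, but the exact spelling and whether it belongs to b2 or the regression runner should be checked against your local docs. The helper only *prints* the commands so the sketch is self-contained; in a real run, drop the echo wrapper and let `time` measure each half.

```shell
#!/bin/sh
# Print the timed b2 invocation for each alphabetical range of libraries.
# (Hypothetical helper name; --include-tests spelling taken from the thread.)
print_bisect_step() {
    for range in "$@"; do
        echo "time b2 --include-tests=$range"
    done
}

# First step: split the library names in half alphabetically.
print_bisect_step '[a-m]' '[n-z]'
# Whichever half is slow, split it again, e.g.:
print_bisect_step '[a-g]' '[h-m]'
```

Each step halves the candidate set, so even with ~150 libraries the time hog is isolated in roughly eight timed runs.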

That sounds tough. As a shortcut, I've looked at the libraries that had
commits to develop on the 18th:

It looks like hana, chrono, ratio, and thread (with wave and core/winapi
the previous day). If I get a chance tonight I'll start checking through
those. Meanwhile, if those authors could look at their recent commits and
see if anything might be causing this, it would be appreciated.

> Other than that, maybe it's one of the libraries tested on Travis. And
> maybe check the length of those first: if they are long on Travis,
> perhaps they are also long in the regular tests.

I could find the Travis scripts for the superproject repo (no change in
build time), but I'm not clear on what they are doing. Is it testing individual

Boost-testing list run by mbergal at