Subject: Re: [boost] [testing] Need optimization help!!!
From: Peter Dimov (lists_at_[hidden])
Date: 2017-02-01 13:35:59


Rene Rivera wrote:

> And then the fun part of finding out what programs get run and what is
> slow :-)

From the look of it, most of the time is spent in "Generating links files".

The architecture looks a bit odd. If I understand it correctly, each test
runner generates one big .xml file, which is zipped and uploaded; the report
script then downloads all the zips and generates the whole report.

It would be more scalable for the test runners to do most of the work. For
instance, what immediately comes to mind is that they could generate the
so-called links files directly, instead of combining everything into one .xml
file which is then decomposed back into individual pages.
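
A minimal sketch of what that could look like on the runner side, in Python;
the element and attribute names (test-log, library, test-name, result) are
assumptions made for illustration, not the actual schema of the runner output:

# Hypothetical sketch: have the runner split its own results into small
# per-test "links" pages instead of uploading one combined .xml file.
import os
import xml.etree.ElementTree as ET

def write_links_files(results_xml, out_dir):
    # Write one small HTML page per test log found in results_xml.
    # Element and attribute names here are assumptions, not the real schema.
    os.makedirs(out_dir, exist_ok=True)
    tree = ET.parse(results_xml)
    for log in tree.getroot().iter("test-log"):
        library = log.get("library", "unknown")
        test = log.get("test-name", "unknown")
        result = log.get("result", "unknown")
        notes = (log.findtext("notes") or "").strip()
        page = os.path.join(out_dir, "%s-%s.html" % (library, test))
        with open(page, "w", encoding="utf-8") as f:
            f.write("<html><body><h1>%s / %s</h1>\n" % (library, test))
            f.write("<p>Result: %s</p>\n" % result)
            if notes:
                f.write("<pre>%s</pre>\n" % notes)
            f.write("</body></html>\n")

The runner would then upload out_dir (or a zip of it), and the report script
would only have to link to the pages instead of regenerating them.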

Longer term we could think about splitting the report into individual pages
per test runner; the current structure with a column per test runner was
indeed more convenient when the table fitted on screen, but it no longer
does, and a layout with a row per runner may be more useful:

Sandia-darwin-c++11 2017-02-01 20:31:00 102 passed, 14 failed
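
Such a row could be produced by something as simple as the following sketch;
again, the element names and the "success" result value are assumptions for
illustration:

# Hypothetical sketch: summarize one runner's results as a single row.
import xml.etree.ElementTree as ET

def summary_row(runner, timestamp, results_xml):
    # Count passed/failed test-log elements; assumes result="success"
    # marks a pass, which may not match the real schema.
    passed = failed = 0
    for log in ET.parse(results_xml).getroot().iter("test-log"):
        if log.get("result") == "success":
            passed += 1
        else:
            failed += 1
    return "%s %s %d passed, %d failed" % (runner, timestamp, passed, failed)

# e.g. summary_row("Sandia-darwin-c++11", "2017-02-01 20:31:00", "results.xml")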

For this to be useful, however, the expected-failure markup needs to be
decentralized per library, so that a maintainer can establish a zero-failures
state as a baseline and then only needs to look at the red rows following a
change.
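
One way to picture the decentralized markup: each library keeps a small file
of expected failures in its own tree, and the report (or the runner) flags
only failures not listed there. The file name and format below are purely
illustrative:

# Hypothetical sketch: per-library expected-failure markup, e.g. a
# libs/<name>/meta/expected-failures.txt with one "toolset test-name"
# pair per line.
def load_expected_failures(path):
    expected = set()
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line and not line.startswith("#"):
                toolset, test = line.split(None, 1)
                expected.add((toolset, test))
    return expected

def unexpected_failures(failures, expected):
    # failures: iterable of (toolset, test) pairs that actually failed;
    # only those not marked as expected would be shown in red.
    return [f for f in failures if f not in expected]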

