

Subject: Re: [Boost-testing] BenPope x86_64 runners + runner requirements
From: Raffi Enficiaud (raffi.enficiaud_at_[hidden])
Date: 2015-03-21 05:37:44


On 20/03/15 20:55, Rene Rivera wrote:
>
>
> The test tools are a combination of Bash, Batch, C, C++, Jam, and
> Python. And it builds the C++ tools before starting. So at least the
> *default* system C++ compiler works just fine. And that's all we can
> test at this level. It's the tested toolsets that are not OK in this
> case. Which we can't determine easily without running the regression
> tests themselves. We could possibly add a global test to the Boost tests
> and use that to check for upload or not. But at that point you've
> already run all the tests. So there's no savings for the tester as 95%
> of the work has been done. The other 5% is processing the results and
> uploading. Arguably you could save that 5% of work. Which leaves
> possibly only avoiding the upload so that users don't see results from
> broken testers. But if we spare users the slight displeasure of seeing
> those broken results we would likely never notice that the tester is
> broken. And hence never complain to the tester to fix it. And hence end
> up wasting 100% of the testing resource.
>
> So.. I'd rather see those results. For the basic reason that having some
> information is better than having none and wasting work.
>
>

I haven't had a look at this machinery, but I was thinking of a
requirement target that checks, in run.py or in the /status jam files,
that the requirements are met before proceeding with the tests.
This could look like the current build requirements in Boost.Config:

http://www.boost.org/doc/libs/1_57_0/libs/config/doc/html/boost_config/build_config.html

This would save 100% of the work, and it would also unclutter the
dashboard of the noise generated while a runner is being set up (plus
give faster feedback to the people doing that setup).
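
As a rough illustration only (this is not existing run.py code; the
helper names and the exact b2 invocation are my assumptions), such a
pre-flight check could try to build a trivial program with each
requested toolset and drop the toolsets that fail, before any
regression test is run:

    # Sketch of a pre-flight toolset check for a regression runner.
    # Assumption: "b2" is on PATH and a minimal jamroot with
    # "exe probe : probe.cpp ;" is enough to exercise the toolset.
    import os
    import subprocess
    import tempfile

    TRIVIAL_SOURCE = "int main() { return 0; }\n"

    def toolset_works(toolset):
        """Return True if b2 can build a trivial program with this toolset."""
        with tempfile.TemporaryDirectory() as work_dir:
            with open(os.path.join(work_dir, "probe.cpp"), "w") as f:
                f.write(TRIVIAL_SOURCE)
            with open(os.path.join(work_dir, "jamroot.jam"), "w") as f:
                f.write("exe probe : probe.cpp ;\n")
            result = subprocess.run(
                ["b2", "toolset=" + toolset, "-q"],
                cwd=work_dir,
                stdout=subprocess.DEVNULL,
                stderr=subprocess.DEVNULL,
            )
            return result.returncode == 0

    def preflight_toolsets(toolsets):
        """Split the requested toolsets into usable and broken ones."""
        usable, broken = [], []
        for toolset in toolsets:
            (usable if toolset_works(toolset) else broken).append(toolset)
        return usable, broken

The runner could then test only the usable toolsets, or skip the run
and the upload entirely when none of them work.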

I really do not see any added value in looking at noise.

Raffi

