

Subject: Re: [Boost-build] [Boost-testing] Additional testing prerequisites
From: Niklas Angare (li51ckf02_at_[hidden])
Date: 2016-10-20 19:27:02

"Stefan Seefeld" wrote:
> On a related note, the
> test matrix displays a disturbing number of failing test runs (runs
> where almost all tests fail, suggesting a setup problem, rather than a
> problem with individual tests), and I as the Boost.Python maintainer
> find myself unable to even try to reproduce or fix those.
> For now I have set up my own testing on travis-ci (where I only build
> and test Boost.Python using SCons, instead of Boost.Build), but
> ultimately I would like to be able to understand all the above failures.
> Ideally one could figure out a single setup issue and thus flag an
> entire test run as invalid, improving the signal-to-noise ratio of the
> tests. I believe all this would be vastly helped by using pre-defined
> containers...

For my test runner NA-QNX650-SP1-x86 which has Python 2.5.2, at least some
of the failures seem to be caused by the test code trying to use newer
Python features. The documentation for Boost.Python claims to require only
Python 2.2. Did the author of those tests forget to maintain compatibility,
or are those tests only relevant to newer versions? If it's the latter,
perhaps those tests shouldn't even be run when the Python version is too
old.
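
As a sketch of the kind of guard I mean, a test could check the interpreter
version and report a skip instead of failing (the 2.6 threshold below is a
hypothetical example, not taken from the actual Boost.Python tests):

```python
import sys

# Hypothetical minimum version for a test that exercises newer Python
# features; the real threshold would depend on the feature in question.
MIN_VERSION = (2, 6)

def run_test():
    """Run the test, or report a skip on interpreters that are too old."""
    if sys.version_info < MIN_VERSION:
        # Old interpreter: skip rather than produce a spurious failure.
        return "skipped: requires Python %d.%d or newer" % MIN_VERSION
    # ... real test body would go here ...
    return "passed"

print(run_test())
```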

My other runner NA-QNX650-SP1-ARM is cross-compiling and the target
environment doesn't have Python, so testing Boost.Python is not desirable.
Should I disable it with --bjam-options="--without-python"?

If you want more information about the configuration of the test runners,
you could add a test that simply outputs diagnostic information
unconditionally. For example config_test from Boost.System or config_info
from Boost.Config do this.
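
A minimal version of such a diagnostic test might simply print interpreter
and platform details unconditionally (a sketch in the spirit of config_info,
not the actual Boost.Config code):

```python
import platform
import sys

def print_diagnostics():
    # Unconditionally report details useful for debugging a test runner
    # setup, so they appear in the test matrix output.
    print("python version: %s" % sys.version.split()[0])
    print("platform:       %s" % platform.platform())
    print("machine:        %s" % platform.machine())
    print("byte order:     %s" % sys.byteorder)

print_diagnostics()
```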


Niklas Angare
