Subject: Re: [Boost-testing] Additional testing prerequisites
From: Stefan Seefeld (stefan_at_[hidden])
Date: 2016-10-19 18:50:09
On 19.10.2016 18:11, Tom Kent wrote:
> On Wed, Oct 19, 2016 at 9:25 AM, Stefan Seefeld <stefan_at_[hidden]
> <mailto:stefan_at_[hidden]>> wrote:
> I recently added support for NumPy to the Boost.Python module. To
> compile (and test) that, the development environment needs to include
> Python's NumPy package. As I don't see the Boost.Python NumPy tests
> being run anywhere, I suspect none of the test machines have that
> NumPy package installed. What is the right way to update these?
> I just ran "pip install numpy" on my teeks99-09 machine, let's see if
> those runners start hitting it.
Thanks, I'll keep an eye on the test URL...
> In general, I think we seriously need to update the "Running
> Regression Tests" page
> (http://www.boost.org/development/running_regression_tests.html) with
> lots more detail on how to get a runner up and running. Nowhere on
> that page does it mention that Python needs to be added to the
> user-config.jam file in order to complete these tests. If I'm not
> mistaken, there are other external dependencies needed for effective
> Boost testing (zlib, bz2 for iostreams... others?).
Yeah. An MPI implementation for Boost.MPI comes to mind, too...
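For what it's worth, here is a sketch of the user-config.jam entries
such a page could document. Every version number and path below is
hypothetical and would need adjusting per machine:

```jam
# Hypothetical user-config.jam for a test runner; adjust versions/paths locally.

# Python, needed by Boost.Python (and its NumPy tests, once NumPy is installed):
using python : 2.7 : /usr/bin/python2.7 ;

# zlib and bzip2, needed by Boost.Iostreams' compression filters:
using zlib : 1.2.8 : <include>/usr/include <search>/usr/lib ;
using bzip2 : 1.0.6 : <include>/usr/include <search>/usr/lib ;

# An MPI implementation, needed by Boost.MPI (assumes mpicxx is on PATH):
using mpi ;
```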
> Specifically for Python: since the library supports Python 2 and 3,
> should both be installed? How do we configure user-config.jam to use
> both versions, and how do we make sure that the test run hits both?
> What about 32-bit vs. 64-bit Python? If I install only 32-bit Python
> on the test runner but do a build with address-model=64, that won't
> allow the Python library to be tested, correct?
All good questions. I'm cross-posting my reply to the Boost.Build list,
as I figure people there might have some of the answers (notably how to
configure the build environment).
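To the multi-version question: as far as I understand Boost.Build,
user-config.jam can declare several Python installs side by side, and a
test run can then request both via the python feature. A sketch (the
version numbers and paths are just examples):

```jam
# Two Python installs declared side by side (hypothetical paths):
using python : 2.7 : /usr/bin/python2.7 ;
using python : 3.5 : /usr/bin/python3.5 ;
```

With that in place, something like "b2 python=2.7,3.5 address-model=64"
run from the library's test directory should exercise both versions.
And for address-model=64 to work, the selected Python installs must
themselves be 64-bit, which speaks directly to the bitness-mismatch
point above.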
I fully agree about the need for a formal document describing the setup
of a test machine. In fact, I wonder whether it wouldn't be useful to
set up a few containers with various platforms (OSes, compilers, etc.),
which contributors could then download to run tests on. That would make
contributing test runs much more convenient.
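As a sketch of what such a container could look like (the package names
are Debian's, and purely an example of one OS/compiler combination):

```dockerfile
# Hypothetical test-runner image covering the dependencies discussed above.
FROM debian:jessie
RUN apt-get update && apt-get install -y \
    g++ git python-dev python-numpy \
    zlib1g-dev libbz2-dev libopenmpi-dev openmpi-bin
# The regression runner script from the "Running Regression Tests" page
# would then be invoked inside the container, e.g.:
# CMD ["python", "run.py", "--runner=my-runner", "--toolsets=gcc"]
```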
On a related note, the
test matrix displays a disturbing number of failing test runs (runs
where almost all tests fail, suggesting a setup problem rather than a
problem with individual tests), and as the Boost.Python maintainer I
find myself unable to even try to reproduce or fix those.
For now I have set up my own testing on travis-ci (where I only build
and test Boost.Python using SCons, instead of Boost.Build), but
ultimately I would like to be able to understand all the above failures.
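For completeness, the travis-ci setup amounts to little more than
installing the prerequisites and invoking the build; a minimal sketch
(not my exact configuration):

```yaml
# Hypothetical .travis.yml fragment for building/testing Boost.Python with SCons.
language: cpp
install:
  - pip install --user numpy scons
script:
  - scons test
```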
Ideally one could identify a single setup issue and thus flag an
entire test run as invalid, improving the signal-to-noise ratio of the
tests. I believe all this would be vastly helped by using pre-defined
test environments such as the containers mentioned above.
-- ...I still have a suitcase in Berlin...