Subject: [boost] Boost Library Testing - a modest proposal - was boost.test regression or behavior change (was Re: Boost.lockfree)
From: Robert Ramey (ramey_at_[hidden])
Date: 2015-10-09 12:37:57


I believe this whole thread started from the changes in Boost.Test
which mean it can no longer be used to test C++03-compatible
libraries. That issue is totally unrelated to how we test the Boost
libraries in general.

Here is what I would like to see:

a) local testing by library developers.

Of course library developers need this in order to develop and maintain
libraries.

We currently have this, and it has worked quite well for many years.
Making Boost.Test require C++11 or later throws a monkey wrench into
things for the libraries which use it. But that's only temporary.
Libraries whose developers feel they need to maintain compatibility
with C++98 can move to the lightweight test facility with relatively
little effort.

Developers who are concerned that the develop branch is a "soup" can
easily isolate themselves from this by testing against the master branch
of all the other libraries. The Boost modularization system with git has
made this very simple and practical (thank you, Beman!).
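
For what it's worth, here is a minimal sketch of that isolation step
in Python. It assumes a modular Boost superproject checkout; the root
path and the library name are placeholders you would adjust.

import os
import subprocess

BOOST_ROOT = "/path/to/modular-boost"   # your superproject checkout
LIB_UNDER_TEST = "serialization"        # the one library you develop

def git(*args, cwd=BOOST_ROOT):
    subprocess.check_call(["git"] + list(args), cwd=cwd)

# put the superproject on master and sync every submodule to the
# revisions recorded there
git("checkout", "master")
git("submodule", "update", "--init")

# only the library under test tracks its develop branch
git("checkout", "develop",
    cwd=os.path.join(BOOST_ROOT, "libs", LIB_UNDER_TEST))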

So - not a problem.

b) Testing on other platforms.

We have a system which has worked pretty well for many years. Still,
it has some features that I'm not crazy about.

i) it doesn't scale well - as Boost gets bigger, the testing load
gets bigger.

ii) it tests the develop branch of each library against the develop
branches of all the other libraries - hence we have a testing "soup"
in which a failing test might reflect a problem not in the library
under test but in some other library. This diminishes the utility of
the test results for tracking down problems.

iii) it relies on volunteer testers to select the compilers and
platforms to test under. So the coverage is not exhaustive, and the
selection might not reflect what people are actually using.

I would like to see us encourage our users to test the libraries that
they use. This system would work in the following way.

a) a user downloads/builds Boost.

b) he decides he's going to use libraries X and Y.

c) he runs a tool which tells him which libraries he has to test.
This would be the result of a dependency analysis (sketched after
this list). We have tools which do a similar dependency analysis
already, but they would have to be slightly enhanced to distinguish
between testing, deployment, etc. I don't think this would be a huge
undertaking given the work that has already been done.

d) he runs the local testing setup on those libraries and their
dependencies (also sketched below).

e) he uploads the test results to a dashboard similar, if not
identical, to the current one (a hypothetical upload tool is sketched
below).

f) we would discourage users from using the Boost libraries without
running their own tests. We would do this by exhortation and by
declining to support users who have been unwilling to run and post
local tests.
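
To make step c) concrete, here is a sketch of the dependency analysis
in Python. The direct-dependency map would come from an existing tool
such as boostdep; the module names and data below are made up for
illustration.

# compute the set of modules a user of X and Y must test, given a
# map from each module to its direct dependencies (illustrative
# data, not real boostdep output)
DEPS = {
    "X": ["core", "config"],
    "Y": ["core", "mpl"],
    "mpl": ["config"],
    "core": [],
    "config": [],
}

def closure(modules, deps):
    # transitive closure over the direct-dependency map
    seen, stack = set(), list(modules)
    while stack:
        m = stack.pop()
        if m not in seen:
            seen.add(m)
            stack.extend(deps.get(m, []))
    return seen

print(sorted(closure(["X", "Y"], DEPS)))
# prints ['X', 'Y', 'config', 'core', 'mpl']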
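
Step d) could then be little more than invoking b2 in each selected
library's test directory. A sketch, assuming b2 is on the PATH and
that tests live under libs/<module>/test, as they do for most Boost
libraries:

import os
import subprocess

BOOST_ROOT = "/path/to/modular-boost"

def run_tests(modules):
    # run each module's test suite and record pass/fail
    results = {}
    for m in sorted(modules):
        test_dir = os.path.join(BOOST_ROOT, "libs", m, "test")
        if not os.path.isdir(test_dir):
            results[m] = "no tests"
            continue
        rc = subprocess.call(["b2"], cwd=test_dir)
        results[m] = "pass" if rc == 0 else "fail"
    return results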
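
The upload tool for e) would be small as well. The sketch below is
entirely hypothetical - no such dashboard endpoint exists today, so
the URL and payload shape are placeholders for whatever the real
dashboard would accept.

import json
import platform
from urllib.request import Request, urlopen

def upload(results, url="https://example.org/boost-dashboard/submit"):
    # results is e.g. {"X": "pass", "Y": "fail"} from the local run
    payload = {
        "platform": platform.platform(),
        "results": results,
    }
    req = Request(url,
                  data=json.dumps(payload).encode("utf-8"),
                  headers={"Content-Type": "application/json"})
    with urlopen(req) as resp:
        return resp.status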

This would give us the following:

a) a scalable testing setup which could handle a Boost containing any
number of libraries.

b) the combinations of libraries, platforms, and compilers being
tested would be exactly those actually in use, and vice versa. We
would have complete and efficient test coverage.

c) We would have statistics on which libraries are being used -
something we sorely lack now.

d) We would be encouraging better software development practices.
Some time ago someone posted that he had a problem but couldn't run
the tests because "management" wouldn't allocate the time - and this
was a safety-critical application. He escaped before I could wheedle
out of him which company he worked for.

And best of all - we're almost there! We'd only need to:

a) slightly enhance the dependency tools we've already crafted but
aren't actually using.

b) develop a tool to post the local results to a common dashboard.

c) enhance the current dashboard to accept these results.

Robert Ramey

