Subject: Re: [boost] TravisCI and Coverall usage policies in Boost
From: Niall Douglas (s_sourceforge_at_[hidden])
Date: 2014-09-18 18:46:11
On 18 Sep 2014 at 17:40, Antony Polukhin wrote:
> > > Can we enable automated test coverage using Coveralls for a Boost repo?
> > Yes. This should also be mandatory for all Boost libraries. It's
> > free, nobody has an excuse.
> > We should really add a Boost wiki page on setting this stuff up, and
> > strongly hint at it being nearly a requirement on the community
> > review page for new libraries. I can help write this if you'd like to
> > start it, Antony?
> Having such a wiki page would be good.
> I've finished writing a generic .travis.yml draft file that is suitable for
> almost any library that uses Boost:
> https://github.com/apolukhin/variant/blob/travisci/.travis.yml There's some
> 'sed' black magic to provide only current library files to coveralls site.
> Fast recursive git cloning is taken from the run.py script.
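The 'sed' filtering Antony mentions could be sketched roughly like this: keep only the coverage records whose source files live under the library's own tree, so that system and dependency headers don't pollute the Coveralls report. The file contents and the `libs/variant` path below are illustrative placeholders, not his actual script.

```shell
# Sketch only: filter an lcov-style coverage file down to the library's
# own sources before upload. The entries here are hypothetical.
printf 'SF:/usr/include/c++/4.8/vector\nSF:libs/variant/include/boost/variant.hpp\n' > coverage.info

# Keep only records for files under libs/variant/.
sed -n '/libs\/variant\//p' coverage.info > coverage.filtered.info
cat coverage.filtered.info
```

The same idea works for any path-prefixed coverage format; only the pattern changes per library.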
You may find my Travis script inspirational at
In particular, I find the time spent cloning all of Boost takes away
valuable unit testing time, especially when you're running valgrind or
the thread sanitiser, so I keep an automatically updated copy of
Boost releases at https://github.com/ned14/boost-release. What I do
then on my Jenkins CI is to extract only that release, delete from
libs/ the libraries I want trunk for, and symbolically link in the
trunk submodules for just those libraries. A quick b2 headers later
and it's good.
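That workflow might look something like the sketch below. The example library (libs/thread) and the checkout paths are assumptions for illustration, not Niall's actual Jenkins script.

```shell
# Sketch of the release-plus-trunk approach described above.

# 1. Fetch the pre-packaged Boost release instead of cloning every submodule.
git clone --depth 1 https://github.com/ned14/boost-release.git boost

# 2. Remove the released copy of the library we want trunk for...
rm -rf boost/libs/thread

# 3. ...and symbolically link in a trunk checkout of just that library.
git clone --depth 1 https://github.com/boostorg/thread.git thread-trunk
ln -s "$(pwd)/thread-trunk" boost/libs/thread

# 4. Regenerate the forwarding headers so the trunk sources are picked up.
cd boost && ./bootstrap.sh && ./b2 headers
```

Shallow clones (`--depth 1`) keep the fetch cheap, which is the point of the exercise: the CI minutes go to the tests, not to git.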
I also skip the Travis GEM and use curl :)
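Skipping the gem works because Coveralls accepts a plain JSON payload posted over HTTP. A minimal sketch, with stub coverage data (a real run would generate the source_files array from gcov/lcov output, and on Travis the job id would come from $TRAVIS_JOB_ID):

```shell
# Sketch: upload coverage to Coveralls with curl instead of the Ruby gem.
# The file names, digest, and job id below are placeholder values.
cat > coveralls.json <<'EOF'
{
  "service_name": "travis-ci",
  "service_job_id": "123456",
  "source_files": [
    { "name": "include/boost/variant.hpp",
      "source_digest": "d41d8cd98f00b204e9800998ecf8427e",
      "coverage": [1, 0, null, 3] }
  ]
}
EOF

# Coveralls expects the JSON as a multipart form field named json_file.
curl -F "json_file=@coveralls.json" https://coveralls.io/api/v1/jobs
```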
Note my script is heavily based on Daniel Pfeifer's, so it's not all my own work.
> I've also made a draft of a README.md file with results table
Nice. The only other thing is that you should really disambiguate
results by compiler version and platform. For example,
> We could start writing the wiki. Do you know where to start?
I guess go to https://svn.boost.org/trac/boost/wiki and start a page.
> It would be good to hear more opinions about the TravisCI+Coveralls before
> we start to add .travis.yml files to all the libraries.
Yes, I think that where the full unit test library is being used, a
summary of failing versus passing tests should be recorded.
Coverage can be great, but if the unit tests don't return failure for
a problem, it can get overlooked.
For example, I patched Boost.Expected to spit out unit test results,
and Jenkins makes this nice table:
I also think a valgrind pass needs to happen, plus a thread sanitiser
pass, plus a clang static analysis pass, for all libraries. Unless
they have big red fail marks all over them, libraries won't get fixed.
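Run as separate CI steps, those three passes might look like this sketch; test.cpp and the output binary names are placeholders for a library's actual test drivers.

```shell
# Sketch of the three extra passes suggested above.

# Valgrind pass: make any memory error fail the build.
g++ -g -O1 test.cpp -o test_valgrind
valgrind --error-exitcode=1 --leak-check=full ./test_valgrind

# Thread sanitiser pass (clang, or gcc 4.8 and later).
clang++ -g -fsanitize=thread test.cpp -o test_tsan
./test_tsan

# Clang static analysis pass over the translation unit.
clang++ --analyze test.cpp
```

The key detail is `--error-exitcode=1`: without it valgrind exits with the program's own status, so memory errors would never turn the build red.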
If we ever get Windows on Travis: I was surprised at how good the MSVC
static analyser has become, and we should have a pass with that too.
-- ned Productions Limited Consulting http://www.nedproductions.biz/ http://ie.linkedin.com/in/nialldouglas/