Subject: Re: [boost] What would make tool authors happier..
From: Robert Ramey (ramey_at_[hidden])
Date: 2015-06-04 01:36:39


> I've been very happy with this whole system for many years.

LOL - sorry I meant to say "unhappy"
>
>
> Thank you.. But I haven't been happy with it.

So we're in agreement again.

> It works, but it has many
> drawbacks (none of which you mentioned). The biggest being that it suffers
> from low resilience. And in an ideal world I would rather see a testing
> system in which an individual Boost library..
>
> 1) The author registers the library to be tested on a cloud CI system (like Travis-CI
> and Appveyor)
> 2) Has a configuration that is standard for such testing across all of the
> Boost libraries. Such a configuration would:
> a) Automatically clone the appropriate git branch (including develop,
> master, PRs, whatever else you want)
> b) Download the latest, small, single test script, which would be set up
> from the common configuration to run for each of the cloud CI test
> steps.
> c) Download & install required software (for example it would install
> the version of gcc, clang, etc it's going to test with).
> d) Download the latest Boost Build (and build + install it)
> e) Download (aka git clone) the dependent Boost libraries (master by
> default, but could be any branch)
> f) Run the tests with b2.
> g) As part of (f) b2 "itself" would upload test results to a cloud
> results system live (which would process the results live and present them
> live)
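
As a rough sketch, the quoted steps 2.a - 2.f amount to a single shell
script along these lines. This is an illustration under assumptions only:
the library name, the dependency list, the apt-get toolchain step, and the
use of a shallow super-project clone for the build glue are placeholders,
not the actual script.py linked further down.

  #!/bin/sh
  # Sketch only: one cloud-CI test step covering 2.a - 2.f above.
  set -e

  LIBRARY=${LIBRARY:-predef}          # library under test (assumed)
  BRANCH=${BRANCH:-develop}           # branch this CI job was triggered for
  DEPS_BRANCH=${DEPS_BRANCH:-master}  # branch used for dependency clones

  # 2.c: install the compiler to test with (Debian/Ubuntu image assumed)
  sudo apt-get install -y g++

  # 2.d: download the latest Boost.Build, then build and install b2
  git clone --depth 1 https://github.com/boostorg/build.git
  ( cd build && ./bootstrap.sh && sudo ./b2 install )

  # 2.a + 2.e: clone the super-project for its build glue (no submodules),
  # then the branch under test and its dependencies into libs/
  git clone --depth 1 https://github.com/boostorg/boost.git boost-root
  cd boost-root
  git clone -b "$BRANCH" --depth 1 \
      "https://github.com/boostorg/$LIBRARY.git" "libs/$LIBRARY"
  for dep in config core; do          # dependency list is illustrative only
      git clone -b "$DEPS_BRANCH" --depth 1 \
          "https://github.com/boostorg/$dep.git" "libs/$dep"
  done
  b2 headers                          # forwarding headers for the cloned libs

  # 2.f: run the library's tests; in 2.g, b2 itself would upload the
  # results to a cloud results system as it runs
  cd "libs/$LIBRARY/test" && b2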

How about something much simpler (sketched in shell after the list)?

1) clone or update the boost super project
2) run bootstrap.sh to create the binaries - if he hasn't already - and
run b2 headers
3) cd to any library he wants to test
4) run the test script - a little more complicated than the
library_status one - which leaves two files: a test result table and HTML text
5) ftp the test result tables to "someplace"
6) if desired, run library_status 2 to display the test result table
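
In shell terms the six steps above would be roughly the following. Only
git, bootstrap.sh, b2 headers, and b2 are the existing pieces; the test
script in step 4, the upload destination in step 5, and the library_status 2
invocation in step 6 are placeholders, and serialization is just an
example library.

  # 1) clone or update the Boost super-project
  git clone --recursive https://github.com/boostorg/boost.git boost-root
  cd boost-root

  # 2) bootstrap b2 (if not already done) and create the forwarding headers
  ./bootstrap.sh
  ./b2 headers

  # 3) cd to whichever library you want to test
  cd libs/serialization/test

  # 4) run the tests; the proposed test script would wrap this run and
  #    leave a test-result table plus HTML text next to the build output
  ../../../b2

  # 5) ftp/scp the result table to "someplace" - destination is a placeholder
  # scp test_results.html tester@example.org:incoming/

  # 6) if desired, run the proposed library_status 2 to display the table
  # library_status_2 test_results.html    (hypothetical invocation)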

This would be immensely simpler than the current system - basically
because it does less. It would:

1) Permit and encourage each user to test the libraries he's going to
use on the platforms he's going to use them on and upload the results
for those libraries.

2) Would be easy for users of non-accepted Boost libraries to use. That
is, once one cloned a non-Boost library into the right place, it could
be tested just as the Boost libraries are.

3) There would be no separate testing procedure for official testers vs
library developers - same system for everyone. Much simpler.

4) Wouldn't require much in the way of scripting and would not require Python.

> Anyway.. Test system engineering was not actually the substance of the
> thread.

I see that now. It never occurred to me that you would somehow try to
accommodate gratuitous deviations from our traditional/standard
directory structure. I think you made a mistake going down this path in
the first place. Actually I think you should stop doing that now and
only support the standard layout. Anything that doesn't get tested is
the library maintenance department's problem. (We're working on that as
a separate initiative.)

But this subject is very important to me - I'm sort of amazed that most
everyone else seems content with the current setup.

> But if you want to see 1, 2.a, 2.b, 2.c, 2.d, and 2.f in action you
> can take a look at:
>
> <https://ci.appveyor.com/project/boostorg/predef>
> <https://travis-ci.org/boostorg/predef>
> <https://github.com/boostorg/predef/blob/develop/appveyor.yml>
> <https://github.com/boostorg/predef/blob/develop/.travis.yml>
> <https://github.com/boostorg/regression/blob/develop/ci/src/script.py>
>
> Note, the script.py is going to get smaller soon, as there's extra code in
> it that I thought I needed as I implemented this over the past two weeks.

Hmmm - looks like we're on divergent paths here.

