
Subject: Re: [boost] What would make tool authors happier..
From: Rene Rivera (grafikrobot_at_[hidden])
Date: 2015-06-03 23:18:09


On Wed, Jun 3, 2015 at 5:46 PM, Robert Ramey <ramey_at_[hidden]> wrote:

> On 6/3/15 2:21 PM, Rene Rivera wrote:
>
>> No matter how bright anyone is, an automated approach, as above, can't
>> account for human nature.
>
> well, maybe I'm just buying your argument that library authors must adhere
> to some reasonable conventions if they want to get their stuff tested. I
> think that's your original point.

Yes, that's a big point.

> You pointed out as an example the multi-level directory in numeric. I
> guess that misled me.
>

Sorry about that.. It was easier to just count and post numbers than go
through the pain of enumerating individual examples.

> Sooooo - I'm going to support your contention that it's totally reasonable
> and indeed necessary that library authors adhere to some reasonable
> conventions if they want their libraries to build and test in boost. All
> we have to do is agree on these conventions. Here's my starting point:
>
> Libraries should have the following directories with jamfiles in them
>
> build
> test
> doc
>

Which is what we've already agreed to for more than a decade.
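
To make the convention concrete, a small hypothetical checker (not part of
Boost.Build or the regression tools, just an illustration of the layout being
discussed) could look like this:

# Hypothetical helper (not part of Boost.Build or the regression tools):
# checks a library checkout against the build/test/doc convention above.
import os
import sys

REQUIRED_SUBDIRS = ("build", "test", "doc")
JAMFILE_NAMES = ("Jamfile", "Jamfile.v2", "Jamfile.jam", "build.jam")

def check_library(lib_root):
    """Return a list of problems found against the directory convention."""
    problems = []
    for sub in REQUIRED_SUBDIRS:
        subdir = os.path.join(lib_root, sub)
        if not os.path.isdir(subdir):
            problems.append("missing directory: " + sub)
        elif not any(os.path.isfile(os.path.join(subdir, name))
                     for name in JAMFILE_NAMES):
            problems.append("no jamfile in: " + sub)
    return problems

if __name__ == "__main__":
    for problem in check_library(sys.argv[1]):
        print(problem)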

> Libraries can be nested if they adhere to the above convention.
>

Yep.

> ... add your own list here
>

Peter, John, and I have some more specific ones for the list.

> So we can agree on that. (I think).
>

We, as in Peter, John, you, and I, can ;-) Can't speak for others. But I
would think there's some disagreement given the current variety of
structures.

> Now I've got another issue. Why can't we just run the local testing setup
> and upload the results somewhere.

It's not impossible, or even hard.

> Right now I'd like users to be able to run b2 in the directories which
> interest them, and upload summary test results to a server which would
> display them.

You've mentioned this desire before :-)

> This would mean that testing would be much more widespread.
>

It may be more widespread. But it likely will not be more varied.

> The current system requires that one download a python script which ...
> does what?

Mostly it does a lot of error checking, fault tolerance, and handling of
options (although I keep working to remove the options that are no longer
used in the hope that things will get simpler).

> it looks like it downloads a large number of other python scripts which
> then do a whole bunch of other stuff.

It downloads 3 other Python files (I could reduce that to 2 now.. just
haven't gotten to it). But it also downloads the current Boost Build,
process_jam_log (the C++ program), and of course Boost itself. Although it
does clone the regression repo to get the related process_jam_log sources
and build files.

> My view is that this is and always has been misguided.
>
> a) It's too complex
>

Yes, but it used to be worse.

> b) too hard to understand and maintain.
>

Yes, but I'm trying to fix that. From multiple fronts, including
investigating the option of bypassing it entirely.

> c) isn't usable for the casual user
>

It's not meant to be, as it's a serious commitment of resources to be a
tester for Boost. But it's easy enough to read the brief instructions and run
it without knowing what it actually does.

> d) requires a large amount of resources by the tester
>

The test system resources are minuscule compared to the resources to run
the tests themselves (i.e. if you just ran b2 in the status dir).

> e) which he can't figure out in advance
>

We have experimentally arrived at resource numbers for full testing (the
test scripts themselves use so little they would likely run on a modern
smartphone).

> f) and does a lot of unknown stuff.
>

I gave a presentation long ago at BoostCon #2 (IIRC), yes that far back,
saying what gets done. And it does less now than it used to. But it can be
summarized as: 1) downloads the test system, 2) downloads Boost, 3) builds
the test system, 4) builds and tests Boost, 5) processes the results, 6)
uploads to a server.
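
Roughly, and only as a sketch (the commands and URLs below are illustrative,
not the actual run.py), that sequence amounts to:

# Rough outline of the six steps above; the real regression scripts do the
# same with far more error checking, option handling, and proxy support.
import subprocess

def run(*cmd, **kwargs):
    subprocess.check_call(list(cmd), **kwargs)

# 1) + 2) get the test system and Boost itself
run("git", "clone", "https://github.com/boostorg/regression.git", "regression")
run("git", "clone", "--recursive", "https://github.com/boostorg/boost.git", "boost")

# 3) build the test system (Boost.Build; building process_jam_log from the
#    regression repo is skipped in this sketch)
run("./bootstrap.sh", cwd="boost")

# 4) build and test Boost, i.e. essentially run b2 in the status dir
run("../b2", cwd="boost/status")

# 5) process the results and 6) upload them to the reporting server: this is
# where most of the real scripts' complexity lives, so it is left out here.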

> Wouldn't it be much easier to do something like the following:
>

0) download Boost (and deal with errors and proxies)

> a) pick a library

a.2) build b2
a.3) install/setup b2 and your toolset, and possibly device or simulator or
VM

> b) run b2
>

b.2) download process jam log (and deal with errors and proxies)
b.3) build process jam log

> c) run process jam log
>

c.2) download X if it's not part of Boost
c.3) build X

> d) run X which walks the tree and produces a pair of files - like
> library_status does.
>

...Would need to produce a considerably more information-laden file than
what library_status does to be useful to lib authors (like what one of
those Python scripts above currently does). But I understand your point.
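
To make that concrete, a bare-bones sketch of such an "X" (hypothetical, and
recording far less than the current reporting scripts do; it assumes the
.output/.test files that b2's testing rules leave in the bin tree) might be:

# Hypothetical "X": walk the b2 output tree and write a flat summary file.
# Real reporting needs much more: toolset, target name, log excerpts, etc.
import csv
import os
import sys

def summarize(bin_dir, out_csv):
    rows = []
    for root, dirs, files in os.walk(bin_dir):
        for name in files:
            if name.endswith(".output"):    # captured output of a run target
                marker = os.path.join(root, name[:-len(".output")] + ".test")
                result = "pass" if os.path.exists(marker) else "fail"
                rows.append((os.path.relpath(root, bin_dir), result))
    with open(out_csv, "w", newline="") as out:
        writer = csv.writer(out)
        writer.writerow(("test", "result"))
        writer.writerows(rows)

if __name__ == "__main__":
    summarize(sys.argv[1], sys.argv[2])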

> e) ftp the files to "someplace"
>

Like it does now.

> f) the reporting system would consolidate the results and display them.
>

Like it does now.

And of course don't forget to add a bunch of error handling (remember it's
the internet, errors are everywhere), and proxy options.
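
Even the "download" bullets alone imply glue along these lines (illustrative
only, not the scripts' actual code):

# Illustrative download helper with retries and optional proxy support;
# the regression scripts carry their own, more thorough, versions of this.
import time
import urllib.request

def fetch(url, dest, attempts=3, proxy=None):
    """Download url to dest, retrying with backoff on transient errors."""
    if proxy:
        handler = urllib.request.ProxyHandler({"http": proxy, "https": proxy})
        urllib.request.install_opener(urllib.request.build_opener(handler))
    last_error = None
    for attempt in range(attempts):
        try:
            urllib.request.urlretrieve(url, dest)
            return
        except OSError as error:      # URLError and friends derive from OSError
            last_error = error
            time.sleep(2 ** attempt)  # simple exponential backoff
    raise last_error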

> This would be much more flexible and easier. It would be much easier to
> maintain as well.
>

What part would be more flexible? I don't see how it would be easier to
maintain. The code would be easier to maintain? The servers easier to
maintain? The report consolidation servers and code would be easier?

> Of course this is somewhat speculative as it's not clear to me how the
> python scripts work and it's not clear how to see them without actually
> running the tests myself.
>

They pretty much work just like you described but with more automation glue
:-)

> I've been very happy with this whole system for many years.

Thank you.. But I haven't been happy with it. It works, but it has many
drawbacks (none of which you mentioned). The biggest is that it suffers
from low resilience. And in an ideal world I would rather see a testing
system in which an individual Boost library..

1) Is registered by its author to be tested on a cloud CI system (like Travis-CI
and Appveyor)
2) Has a configuration that is standard for such testing across all of the
Boost libraries. Such a configuration would:
    a) Automatically clone the appropriate git branch (including develop,
master, PRs, whatever else you want)
    b) Download the latest, small, single test script, which would be set up
from the common configuration to run for each of the cloud CI test steps.
    c) Download & install required software (for example it would install
the version of gcc, clang, etc. it's going to test with).
    d) Download the latest Boost Build (and build + install it)
    e) Download (aka git clone) the dependent Boost libraries (master by
default, but could be any branch)
    f) Run the tests with b2.
    g) As part of (f), b2 "itself" would upload test results to a cloud
results system live (which would process the results live and present them
live)
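
Stripped to its bones, the per-library test script the CI service would
invoke (steps 2.d through 2.f) could look something like the following; the
branch and dependency names are illustrative only, and the real thing is the
script.py linked below:

# Skeleton of steps 2.d through 2.f for one library; the branch and the
# dependency list are illustrative only, not a real dependency set.
import os
import subprocess

BRANCH = os.environ.get("BRANCH", "master")    # branch for the dependencies
DEPENDENCIES = ["config", "core"]              # illustrative placeholder set

def run(*cmd, **kwargs):
    subprocess.check_call(list(cmd), **kwargs)

# d) get and build the latest Boost Build
run("git", "clone", "--depth", "1",
    "https://github.com/boostorg/build.git", "boost-build")
run("./bootstrap.sh", cwd="boost-build")

# e) clone the dependent Boost libraries (master by default)
for dep in DEPENDENCIES:
    run("git", "clone", "--depth", "1", "-b", BRANCH,
        "https://github.com/boostorg/%s.git" % dep, os.path.join("libs", dep))

# f) run the library's tests with b2; step g) would have b2 itself stream
#    the results to a cloud reporting service, which is not shown here
run(os.path.abspath("boost-build/b2"), cwd="test")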

Anyway.. Test system engineering was not actually the substance of the
thread. But if you want to see 1, 2.a, 2.b, 2.c, 2.d, and 2.f in action you
can take a look at:

<https://ci.appveyor.com/project/boostorg/predef>
<https://travis-ci.org/boostorg/predef>
<https://github.com/boostorg/predef/blob/develop/appveyor.yml>
<https://github.com/boostorg/predef/blob/develop/.travis.yml>
<https://github.com/boostorg/regression/blob/develop/ci/src/script.py>

Note, the script.py is going to get smaller soon as there's extra code in it
I thought I needed as I implemented this over the past two weeks.

-- 
-- Rene Rivera
-- Grafik - Don't Assume Anything
-- Robot Dreams - http://robot-dreams.net
-- rrivera/acm.org (msn) - grafikrobot/aim,yahoo,skype,efnet,gmail
