

Subject: Re: [Boost-testing] Testing direction (was: Request for funding - Test Assets)
From: Raffi Enficiaud (raffi.enficiaud_at_[hidden])
Date: 2015-12-14 19:44:05


On 13/12/15 17:47, Rene Rivera wrote:
> On Sun, Dec 13, 2015 at 8:59 AM, Tom Kent <lists_at_[hidden]
> <mailto:lists_at_[hidden]>> wrote:
>
> On Sat, Dec 12, 2015 at 8:35 PM, Rene Rivera <grafikrobot_at_[hidden]
> <mailto:grafikrobot_at_[hidden]>> wrote:
>
> Not going to comment on the aspect of purchasing a machine. But
> will point out that the real benefit to having dedicated
> machines is that of having non-traditional setups (OS+toolset).
> I.e. dedicated machines give you coverage.
>
> On Sat, Dec 12, 2015 at 8:08 PM, 'Tom Kent' via Boost Steering
> Committee <boost-steering_at_[hidden]
> <mailto:boost-steering_at_[hidden]>> wrote:
>
>
> I also think that, like Niall said, we should move towards
> CI style testing where every commit is tested, but that is
> going to be a *huge* transition.
>
>
> I wouldn't say huge.. Maybe "big".
>
> I would love to see direction on this in general from the
> steering committee, and am encouraged that almost all new
> libraries already have this..
>
>
> I can't speak for the committee. But as testing manager I can
> say moving Boost to CI is certainly something I work on a fair
> amount.
>
> Retrofitting it onto all the existing libraries will be an
> undertaking.
>
>
> Working on that. Getting closer and closer.
>
>
> I didn't realize this was being actively pursued. How many of the
> existing libraries have been setup for this? Is there a broader
> strategy for getting the individual maintainers to take these
> changes? Any simple tasks I could help with in my (very limited)
> spare time?
>
>
> The only library so far I have is my own (Predef.. But that's an easy
> one). There are a lot of small changes needed to deal with this. You can
> look at the current functionality for this CI testing here
> <https://github.com/boostorg/regression/tree/develop/ci/src> (plus the
> .travis.yml and appveyor.yml in Predef).
>
> One in particular I did a PR for BB as it was a functionally "radical"
> change (see <https://github.com/boostorg/build/pull/83>). But I will
> likely move on without that change anyway. My plan was to start on the
> "Robert" version of isolated testing (checking out a library to a
> particular commit, but checking out the monolithic Boost to a release
> commit).
>
> My next step on that was to move to testing another more complex library
> using the CI script (and extend the script as needed).

Hi all,

I will just give my personal opinion on this, and describe what I did
for boost.test.

After all the complaints boost.test got lately, I deployed an internal
CI based on Atlassian Bamboo (https://www.atlassian.com/software/bamboo)
that:
- tests every commit that happens on boost.test only
- tests every branch of boost.test

What I do is what you call the "Robert" style of CI testing (a rough
sketch of the steps is given below):
- I clone boost at develop
- I check out a specific branch of boost.test (tolerant to force
updates, since these are topic branches that I force-push before
merging)
- Bamboo runs the boost.test unit tests on this branch against develop,
on several configurations (roughly 7 configurations: Windows, OS X and
Linux), all on exactly the same version of the code.

The benefits are:
- I test the same version of the code on several configurations, so the
feedback I get from the CI applies to that specific version, which is
something the regression dashboard currently lacks
- I have clean topic branches, and a clear status for each of them
- Forking the CI "plan" to a new branch is done automatically by
Atlassian Bamboo
- I am no longer polluting the boost.test develop branch with immature
developments, and I do not need to use a fork of the repository for
that either
- I get very fast feedback on all branches, and I can use a branch
policy that also avoids clashes between topics: I merge the different
topics onto a "next" branch that is automatically tested as well,
checking that the union of topics is still OK. Once "next" is green, it
is more or less safe to merge to develop.
- the interface is clear, and it keeps the history, the logs and
everything I need.

Atlassian Bamboo is a paid solution, but it is free for open source
projects. It needs a master server that schedules the builds on several
slave machines (agents). I used to use Jenkins a lot, and I have to say
Bamboo is far ahead of it.

The problem I can see with using this kind of solution, though, is
that the current Boost runners are asynchronous in their results: they
run whenever they can, and there is no enforcement of which revision
gets tested. It is more or less push vs. pull, and Bamboo is better
suited to a pool of runners that are highly available. This is the
current setup I have for boost.test, and I am pretty happy with it.
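
To make the push vs. pull distinction concrete, here is a hypothetical
sketch of a pull-style runner that asks a coordinator which revision to
test, instead of testing whatever happens to be current when it wakes
up. The URL and the JSON fields are invented for illustration:

    # hypothetical pull-style runner: the coordinator decides what is tested
    import json
    import time
    import urllib.request

    COORDINATOR = "https://example.org/boost-testing/next-job"  # invented URL

    def next_job():
        # the coordinator answers e.g. {"branch": "develop", "commit": "abc123"}
        with urllib.request.urlopen(COORDINATOR) as reply:
            return json.load(reply)

    def run_tests(branch, commit):
        # check out 'commit' of 'branch' and run the suite (omitted here)
        pass

    while True:
        job = next_job()
        run_tests(job["branch"], job["commit"])
        time.sleep(60)  # ask again once the runner is free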

> As for broader strategy.. At some point when I have reasonably complete
> CI support (Travis and Appveyor and a complex library) I'll just start
> making changes to all libraries. As I know that getting authors to do
> this work will likely not work. I.e. I'll take my usual "just do it"
> approach :-) As for resources.. My goal is to move the testing of the
> common toolsets/platforms all to cloud based services. Relieving our
> dedicated tester to concentrate on the not so common & bleeding edge
> toolsets (such as Android, IBM, Intel, BSD, etc configurations).
>
> [snip]
>
> Here's the idea I've been pondering for a while...curious what you
> (and others) think of it....
>
> Currently when a user starts the regression tests with run.py, they
> specify the branch that they want to run (master or develop) and
> then get the latest commit from that branch. I would like to remove
> this from the user's control. When they call run.py, they just pass
> in their configuration and their id string and run.py goes out to a
> server to see what needs to be run. By default this server could
> just alternate between giving back the latest master/develop (or
> maybe only run master 1 in 3 times). That would give us uniform
> coverage of master and develop branches.
>
>
> Interesting. I'll have to think about that some.
>
> This would also enable us to have a bit more control around release
> time. Once an RC is created, we could give each runner that commit
> to test (allowing master's latest to have changes), then we could
> get tests of what is proposed for the release (something that is a
> bit lacking right now, although the fact that we freeze the master
> branch gets close to this). After a release, we could save the
> snapshot of tests and archive that so that future users of that
> release could have something documenting its state.

Instead of doing it that way, and specifically for RCs, I would go for
a dedicated branching scheme:
- an RC goes to a release branch
- runners check that release branch first, and test it if it has not
been tested already.

This also lets people see this specific RC in the repository, and lets
them clone and test it.
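
A hypothetical sketch of that runner-side check, assuming an invented
release branch name and a small local record of already-tested commits:

    # hypothetical check: test the release branch head if not already done
    import json
    import os
    import subprocess

    RELEASE_BRANCH = "release/1.60.0"    # invented branch name
    TESTED_FILE = "already_tested.json"  # local record of tested commits

    def branch_head(repo, branch):
        out = subprocess.check_output(
            ["git", "ls-remote", repo, "refs/heads/" + branch])
        return out.split()[0].decode() if out else None

    def already_tested(commit):
        if not os.path.exists(TESTED_FILE):
            return False
        with open(TESTED_FILE) as f:
            return commit in json.load(f)

    head = branch_head("https://github.com/boostorg/boost.git", RELEASE_BRANCH)
    if head and not already_tested(head):
        print("testing release candidate", head)
        # ... check out 'head', run the suite, then record the commit ...
    else:
        print("release branch already covered, testing develop/master instead")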

> As far as processing the output, what I was envisioning was moving a
> lot more of it to each test runner and the rest to the client side
> with some javascript. To re-create the summary page, each runner
> could upload a json file with all the data for their column:
> pass/fail, percent failed, metadata. Then we could run a very
> lightweight php (or other) script on the server that keeps track of
> which json files are available (i.e. all of the ones uploaded,
> except those not white-listed on master) and whenever a user opens
> that page, their browser is given that list of json files which the
> browser then downloads, renders and displays. There would be a
> similar pattern for each of the libraries' individual result
> summaries. Which could link to separately uploaded results for each
> failure..
>
>
> That's not far from what I plan to do, and have partly working. Except
> for the aspect of doing as much on the client side as you say. I
> attempted to do that early on in my work and found that it just didn't
> work. First there wasn't enough testing time computation that could be
> done to facilitate the server/client side. As much of the computation
> cuts across various testers. Second it conflicted with one of my goals
> of making the testing side simpler to increase the number of testers
> (which is a common complaint currently).
>
> Right now what I have is: Testers upload results as they happen (each
> test would do a post to the Google cloud). When a test run is done the
> data is aggregated (again in the Google cloud) to generate the
> collective stats & structure (it's this part that I'm optimizing at the
> moment). When a person browses to the results the web client downloads
> json describing that page of results, and renders a table with client
> side C++ (emscripten currently). Note, I try and only generate on the
> server the minimum stats information possible to reduce that processing
> time and shifting as much as possible to the web client.
>

From all that, my understanding is that you want a new dashboard, and
not necessarily a new, full testing CI?
I believe mimicking a CI dashboard is a big, but not huge, development
effort.

Also, I do not know whether mixing server-side and client-side
technologies is the way to go. I would rather go server-side only,
rendering static HTML files asynchronously: those are easily cached by
the web server and the web client, and the rendering would be almost
immediate on every device. To be honest, I hate JS, and the node.js
bubble just makes me smile.
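
As a toy example of what I mean by asynchronously rendered static
pages, a periodic job could turn the uploaded runner results into a
plain HTML file that the web server then serves and caches like any
other static file. The JSON layout and file names here are made up for
illustration:

    # toy example: render a static summary page from uploaded runner results
    import glob
    import html
    import json

    rows = []
    for path in sorted(glob.glob("results/*.json")):   # one file per runner
        with open(path) as f:
            r = json.load(f)                           # invented layout
        rows.append("<tr><td>{}</td><td>{}</td><td>{}</td></tr>".format(
            html.escape(r["runner"]), r["tests"], r["failures"]))

    page = ("<html><body><table>"
            "<tr><th>runner</th><th># tests</th><th># failures</th></tr>"
            + "".join(rows) + "</table></body></html>")

    # written once by a cron-like job; the web server just serves the file
    with open("summary.html", "w") as f:
        f.write(page)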

I also think that summarizing/visualizing the information is the key
to a dashboard:
- most of the state should be rendered as a function of time, where
time is the time of the commit
- for each commit, the associated # of runners, # of tests, # of
failing tests, and the deltas w.r.t. the previous version (including
removed/added tests)
- the same for every library, with the list of tests we have now, but
without the segmented logs
- access to the full build logs instead of segmented/broken ones

This means that if, at some point, a lunatic runner wakes up, it will
push its results for a specific commit, making that point in time
richer than before (rather than removing or replacing part of the
information).
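
A rough sketch of the per-commit record I have in mind, including how a
late runner's results would enrich an existing entry and how the deltas
against the previous commit could be computed. The field names are just
for illustration:

    # rough sketch of a per-commit dashboard record (field names illustrative)
    summaries = {}   # commit sha -> accumulated record

    def add_run(commit, commit_time, runner, results):
        """results: dict mapping test name -> True (pass) / False (fail)."""
        entry = summaries.setdefault(
            commit, {"time": commit_time, "runners": set(), "tests": {}})
        entry["runners"].add(runner)     # a late runner adds to the entry
        entry["tests"].update(results)   # instead of replacing it wholesale

    def delta(previous, current):
        # added/removed tests w.r.t. the previous commit, plus current totals
        prev_tests, cur_tests = set(previous["tests"]), set(current["tests"])
        return {
            "added_tests": sorted(cur_tests - prev_tests),
            "removed_tests": sorted(prev_tests - cur_tests),
            "failing": sum(1 for ok in current["tests"].values() if not ok),
            "runners": len(current["runners"]),
        }

    # usage sketch: two runners reporting against the same commit
    add_run("abc123", "2015-12-14", "win-msvc-14",
            {"test_a": True, "test_b": False})
    add_run("abc123", "2015-12-14", "osx-clang", {"test_c": True})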

I do not know which server-side technologies you are using right now;
lately (the last 2 years) I have been using Django, and I find it
pretty cool. It's Python, it has a big community, and I think it would
benefit from contributions (including mine).
I made something that manages revisions, branches, permissions etc. for
storing documentation from a CI:
https://bitbucket.org/renficiaud/code_doc (or the about page:
https://bitbucket.org/renficiaud/code_doc/src/6fe3560284ca84a31e4379331af1edfa1e458999/code_doc/templates/code_doc/about.html?at=master&fileviewer=file-view-default)

Best,
Raffi

