Boost Testing :

Subject: Re: [Boost-testing] [testers] New/old/final run.py script location..
From: Rene Rivera (grafikrobot_at_[hidden])
Date: 2015-04-29 10:13:16


On Wed, Apr 29, 2015 at 8:50 AM, Alain Miniussi <alain.miniussi_at_oca.eu>
wrote:

> Hi,
>
> Is the testing pipeline described somewhere ?
> There is an index.html file, but it only talks about library_status..
> Also, some commands seem to have both a Python and a C++ version
> (process_jam_log for example), which one should be used ?
>

You shouldn't need to worry about that, as all the needed subprograms are
run by run.py/regression.py. But for clarification, process_jam_log.py is
an experiment in using the direct XML b2 output instead of reading the log
output. It hasn't been tested in a while (since the switch to git).

> Some issues (network performance, network authorisations, also the fact
> that I'm running on a cluster...) make it impractical for me to use run.py, so I
> need to find a way to run each step independently and then collect and
> upload the single xml report file.
>

Yep. I remember you mentioning that.. You should be able to run each "step"
individually just from run.py/regression.py (run.py is just a bootstrap
shell around regression.py). And hence you should be able to run
regression.py directly after you download it, if that's easier.

I'll be expanding the documentation, slowly. But for now.. You can only get
the "details" if you read through "regression.py" (it's straightforward
code). The core place to start is the "command_regression" method <
https://github.com/boostorg/regression/blob/develop/testing/src/regression.py#L548>.
And keep in mind that anything that starts with "command_" can be
invoked individually from the CLI.
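[To illustrate the convention described above -- methods whose names start
with "command_" being exposed as individual CLI steps -- here is a minimal
sketch of that dispatch pattern. The class and method names are purely
hypothetical; consult regression.py itself for the actual commands.]

```python
import sys


class Runner:
    # Each "command_" method corresponds to one CLI step.
    # (Illustrative names only; the real regression.py defines its own set.)
    def command_setup(self):
        return "setup done"

    def command_test(self):
        return "tests run"

    def run(self, name):
        # Map a CLI argument like "setup" to the command_setup() method.
        method = getattr(self, "command_" + name, None)
        if method is None:
            raise SystemExit("unknown command: " + name)
        return method()


if __name__ == "__main__":
    print(Runner().run(sys.argv[1]))
```

With a pattern like this, each step can be invoked on its own (e.g.
"python script.py setup"), which is what allows the pipeline's stages to
be run separately and their results collected afterwards.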

And again.. Anything we can figure out to change in the scripts to make it
work automatically for your case would be ideal.

-- 
-- Rene Rivera
-- Grafik - Don't Assume Anything
-- Robot Dreams - http://robot-dreams.net
-- rrivera/acm.org (msn) - grafikrobot/aim,yahoo,skype,efnet,gmail


Boost-testing list run by mbergal at meta-comm.com