Subject: Re: [boost] Proposed changes to page running_regression_tests.html
From: Dave Abrahams (dave_at_[hidden])
Date: 2010-12-19 15:15:59
At Sun, 19 Dec 2010 08:17:13 -0600,
Jim Bell wrote:
>
> I propose these changes to boost's web-page
> <http://www.boost.org/development/running_regression_tests.html>
>
> I've attached the full file. My change is at the beginning: how to run
> regression tests locally. The previous contents follow, under "Running
> Boost's Automated Regression and Reporting."
>
> Comments welcome.
Looks like a text file that was named with a .html extension ;-)
Otherwise, looks fine.
> Who would actually make the change?
Provided you keep it actual HTML, I'd be happy for you to make it, and
will set you up with SVN write access.
>
> [Attachment: running_regression_tests.html]
>
> Running Boost Regression Tests
>
> Running Regression Tests Locally
>
> It's easy to run regression tests on your Boost distribution.
>
> To run a library's regression tests, run Boost's [1]bjam utility from that
> library's libs/<library>/test directory.
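>
> For example, to test a single library (Boost.Regex here, purely as an
> illustration), starting from the root of your Boost distribution:
>      cd libs/regex/test
>      bjam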
>
> See the [2]Getting Started guide for details on building or downloading bjam
> for your platform, and for navigating your Boost distribution.
>
> To run every library's regression tests, run bjam from Boost's /status
> directory.
>
> To run Boost.Build's regression tests, run "python test_all.py" from Boost's
> tools/build/v2/test directory.
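>
> For example, starting from the root of your Boost distribution:
>      cd status
>      bjam
> and, for Boost.Build's own tests:
>      cd tools/build/v2/test
>      python test_all.py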
>
> Running Boost's Automated Regression and Reporting
>
> This runs all regressions and reports the results back to the Boost
> community.
>
> Requirements
>
> * Python 2.3 or later.
> * Subversion 1.4 or later (optional).
> * At least 5 gigabytes of disk space per compiler to be tested.
>
> Step by step instructions
>
> 1. Create a new directory for the branch you want to test.
> 2. Download the [3]run.py script into that directory.
> 3. Run "python run.py [options] [commands]" with at minimum the two
> options:
> + --runner - Your choice of name that identifies your results in the
> reports (see notes [4]1 and [5]2).
> + --toolsets - The toolset(s) you want to test with (see note [6]3).
> For example:
> python run.py --runner=Metacomm --toolsets=gcc-4.2.1,msvc-8.0
>
> Note: If you are behind a firewall/proxy server, everything should still
> "just work". In the rare cases when it doesn't, you can explicitly specify
> the proxy server parameters through the --proxy option, e.g.:
> python run.py ... --proxy=http://www.someproxy.com:3128
>
> Options
>
> commands: cleanup, collect-logs, get-source, get-tools, patch, regression,
> setup, show-revision, test, test-boost-build, test-clean, test-process, test-
> run, update-source, upload-logs
>
> Options:
> -h, --help show this help message and exit
> --runner=RUNNER runner ID (e.g. 'Metacomm')
> --comment=COMMENT an HTML comment file to be inserted in the reports
> --tag=TAG the tag for the results
> --toolsets=TOOLSETS comma-separated list of toolsets to test with
> --libraries=LIBRARIES
> comma separated list of libraries to test
> --incremental do incremental run (do not remove previous binaries)
> --timeout=TIMEOUT specifies the timeout, in minutes, for a single test
> run/compilation
> --bjam-options=BJAM_OPTIONS
> options to pass to the regression test
> --bjam-toolset=BJAM_TOOLSET
> bootstrap toolset for 'bjam' executable
> --pjl-toolset=PJL_TOOLSET
> bootstrap toolset for 'process_jam_log' executable
> --platform=PLATFORM
> --user=USER Boost SVN user ID
> --local=LOCAL the name of the boost tarball
> --force-update do an SVN update (if applicable) instead of a clean
> checkout, even when performing a full run
> --have-source do neither a tarball download nor an SVN update; used
> primarily for testing script changes
> --ftp=FTP FTP URL to upload results to.
> --proxy=PROXY HTTP proxy server address and port
> (e.g.'http://www.someproxy.com:3128')
> --ftp-proxy=FTP_PROXY
> FTP proxy server (e.g. 'ftpproxy')
> --dart-server=DART_SERVER
> the dart server to send results to
> --debug-level=DEBUG_LEVEL
> debugging level; controls the amount of debugging
> output printed
> --send-bjam-log send full bjam log of the regression run
> --mail=MAIL email address to send run notification to
> --smtp-login=SMTP_LOGIN
> SMTP server address/login information, in the
> following form: <user>:<password>@<host>[:<port>]
> --skip-tests do not run bjam; used for testing script changes
>
> To test the trunk, use "--tag=trunk" (the default); to test the release
> branch, use "--tag=branches/release". Or substitute any Boost tree of your
> choice.
>
> Details
>
> The regression run procedure will:
> * Download the most recent regression scripts.
> * Download the designated testing tool sources including Boost.Jam,
> Boost.Build, and the various regression programs.
> * Download the most recent sources from the [7]Boost Subversion Repository
> into the subdirectory boost.
> * Build bjam and process_jam_log if needed (process_jam_log is a
> utility that extracts the test results from the log file produced by
> Boost.Build).
> * Run regression tests, process and collect the results.
> * Upload the results to a common FTP server.
>
> A continuously running report-merger process merges all submitted test
> runs and publishes them at [8]various locations.
>
> Advanced use
>
> Providing detailed information about your environment
>
> Once you have your regression results displayed in the Boost-wide reports,
> you may consider providing a bit more information about yourself and your
> test environment. This additional information will be presented in the
> reports on a page associated with your runner ID.
>
> By default, the page's content is just a single line coming from the
> comment.html file in your run.py directory, specifying the tested platform.
> You can put online a more detailed description of your environment, such as
> your hardware configuration, compiler builds, and test schedule, by simply
> altering the file's content. Also, please consider providing your name and
> email address for cases where Boost developers have questions specific to
> your particular set of results.
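>
> For example, a comment.html along these lines (all details are purely
> illustrative) would be shown on your runner page:
>      <p>Dual Xeon, 8 GB RAM, Debian GNU/Linux; full runs nightly.</p>
>      <p>Contact: Jane Tester, jane at example.com</p>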
>
> Incremental runs
>
> You can run run.py in [9]incremental mode by passing it the identically
> named command-line flag:
> python run.py ... --incremental
>
> Getting sources from Tarball
>
> By default the sources are obtained from the [10]Boost Subversion
> Repository, and we prefer that testers use SVN. If you cannot install an
> SVN client, however, you can obtain the sources as tarballs (*.tar.gz)
> instead. To indicate this, pass an empty user to run.py:
> python run.py ... --user=
>
> Note: Both methods obtain the latest code; the tarball is built on demand
> from the SVN sources.
>
> Patching Boost sources
>
> You might encounter an occasional need to make local modifications to the
> Boost codebase before running the tests, without disturbing the automatic
> nature of the regression process. To implement this under regression.py:
> 1. Express the desired modifications to the sources located in the
> ./boost subdirectory as a single executable script named patch_boost
> (patch_boost.bat on Windows).
> 2. Place the script in the run.py directory.
>
> The driver will check for the existence of the patch_boost script, and, if
> found, execute it after obtaining the Boost sources.
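>
> For example, a minimal patch_boost on a Unix-like system might apply a
> single patch file (the file name here is hypothetical):
>      #!/bin/sh
>      # Apply a local fix to the sources checked out in ./boost.
>      patch -p0 -d boost < my_local_fix.patch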
>
> Feedback
>
> Please send all comments/suggestions regarding this document and the testing
> procedure itself to the [11]Boost Testing list.
>
> Notes
>
> [1] If you alternate regression runs between different sets of compilers
> (e.g. Intel in the morning and GCC at the end of the day), you need to
> provide a different runner ID for each of these runs, e.g. your_name-intel
> and your_name-gcc.
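>
> For example (the toolset versions are illustrative):
>      python run.py --runner=your_name-intel --toolsets=intel-11.1 ...
>      python run.py --runner=your_name-gcc --toolsets=gcc-4.2.1 ...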
>
> [2] Because of the reports' format, the space available for your runner ID
> grows with the number of compilers you test with. If you are running
> regressions for a single compiler, please choose an ID short enough not to
> disturb the reports' layout significantly. You can also use spaces in the
> runner ID to allow the reports to wrap the name to fit.
>
> [3] If the --toolsets option is not provided, the script will try to use
> the platform's default toolset (gcc for most Unix-based systems).
>
> [4] By default, the script runs in what is known as full mode: on each
> run.py invocation, all the files left in place by the previous run
> (including the binaries for the successfully built tests and libraries)
> are deleted, and everything is rebuilt from scratch. By contrast, in
> incremental mode the existing binaries are left intact, and only the
> tests and libraries whose source files have changed since the previous
> run are rebuilt and retested.
>
> The main advantage of incremental runs is a significantly shorter
> turnaround time, but unfortunately they don't always produce reliable
> results. Some types of changes to the codebase (changes to the bjam
> testing subsystem in particular) often require switching to full mode for
> one cycle in order to produce trustworthy reports.
>
> As a general guideline, if you can afford it, testing in full mode is
> preferable.
>
> Revised $Date: 2010-05-12 05:44:26 +0100 (Wed, 12 May 2010) $
>
> Copyright Rene Rivera 2007.
>
> Copyright MetaCommunications, Inc. 2004-2007.
>
> References
>
> 1. /doc/tools/build/doc/html/jam/usage.html
> 2. /doc/libs/release/more/getting_started/index.html
> 3. http://svn.boost.org/svn/boost/trunk/tools/regression/src/run.py
> 4. #runnerid1
> 5. #runnerid2
> 6. #toolsets
> 7. /users/download/#repository
> 8. testing.html#RegressionTesting
> 9. #incremental
> 10. /users/download/#repository
> 11. /community/groups.html#testing
--
Dave Abrahams
BoostPro Computing
http://www.boostpro.com