
Boost-Commit :

From: ramey_at_[hidden]
Date: 2007-08-19 23:40:49


Author: ramey
Date: 2007-08-19 23:40:48 EDT (Sun, 19 Aug 2007)
New Revision: 38776
URL: http://svn.boost.org/trac/boost/changeset/38776

Log:
initial documentation page for library status
Added:
   branches/serialization_next_release/boost/tools/regression/library_status.html (contents, props changed)

Added: branches/serialization_next_release/boost/tools/regression/library_status.html
==============================================================================
--- (empty file)
+++ branches/serialization_next_release/boost/tools/regression/library_status.html 2007-08-19 23:40:48 EDT (Sun, 19 Aug 2007)
@@ -0,0 +1,166 @@
+<html>
+
+<head>
+<meta http-equiv="Content-Language" content="en-us">
+<meta http-equiv="Content-Type"
+content="text/html; charset=iso-8859-1">
+<title>Library Status</title>
+</head>
+
+<body bgcolor="#FFFFFF">
+
+<table border="0">
+<tr>
+<td><img border="0" src="../../boost.png" width="277" height="86" alt="boost.png (6897 bytes)"></td>
+<td><h1>Generating Library Status Tables</h1></td>
+</tr>
+</table>
+
+<h3>Purpose</h3>
+Anyone considering the use of a library as large and complex
+as the Boost libraries needs a way of validating that the
+library functions in their environment. This should
+be done when the library is installed and any time questions
+arise regarding its applicability or usage.
+<p>
+The procedures described here permit a user to run any
+combination of tests on any or all libraries and generate
+a set of convenient tables which show which libraries
+pass which tests under what conditions.
+<h3>Preliminaries</h3>
+Generating these tables requires a couple of utility programs:
+<code>process_jam_log</code> and <code>library_status</code>.
+These can be built by changing to the directory <code>tools/regression/build</code>
+and invoking <code>bjam</code>. If all goes well, these utility programs
+will be found in the directory <code>dist/bin</code>. From
+there they should be moved to a directory in the current
+path.
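+<p>
+As a rough sketch, the build under a *nix shell might look like the
+following (assuming <code>$BOOST_ROOT</code> refers to your Boost root
+directory and <code>~/bin</code> is on your path; adjust both to taste):
+<pre><code>
+cd $BOOST_ROOT/tools/regression/build
+bjam
+# copy the resulting utilities to a directory on your path
+# (the location of dist/bin may differ in your setup)
+cp $BOOST_ROOT/dist/bin/process_jam_log $BOOST_ROOT/dist/bin/library_status ~/bin/
+</code></pre>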
+<p>
+<h3>Running Tests for One Library</h3>
+
+<ol>
+ <li>Start from your command line environment.
+ <li>Set the current directory to <code>../libs/&lt;library name&gt;/test</code>.
+ <li>Invoke one of the following:
+ <ul>
+ <li><code>../../../tools/regression/library_test</code> (*nix).
+ <li><code>..\..\..\tools\regression\library_test</code> (Windows).
+ </ul>
+ <li>This will display a short help message describing how to set
+ the command line arguments for the compilers and variants you want to
+ appear in the final table.
+ <li>Setting these arguments requires rudimentary knowledge of bjam
+ usage. Hopefully, if you've arrived at this page, you've gained the
+ required knowledge during the installation and library build process.
+ <li>Rerun the above command with the arguments set accordingly
+ (a complete example session follows this list).
+ <li>When the command terminates, there should be a file named
+ "library_status.html" in the current directory.
+ <li>Display this file with any web browser.
+</ol>
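+<p>
+Putting these steps together, a session for the regex library might look
+something like this (the toolset and variant values are only
+illustrative; substitute your own):
+<pre><code>
+cd libs/regex/test
+../../../tools/regression/library_test --toolset=msvc-7.1 variant=debug,release
+# when the command finishes, open library_status.html in a web browser
+</code></pre>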
+There should appear a table similar to the following for the regex
+library.
+<p>
+<table border="1" cellspacing="0" cellpadding="5">
+<tr>
+<td rowspan="4">Test Name</td>
+<td align="center" colspan="4" >msvc-7.1</td>
+</tr><tr>
+<td align="center" colspan="2" >debug</td>
+<td align="center" colspan="2" >release</td>
+</tr><tr>
+<td align="center" >link-static</td>
+<td align="center" rowspan="2" >threading-multi</td>
+<td align="center" >link-static</td>
+<td align="center" rowspan="2" >threading-multi</td>
+</tr><tr>
+<td align="center" >threading-multi</td>
+<td align="center" >threading-multi</td>
+</tr><tr><td>bad_expression_test</td><td align="right"><i>Missing</i></td><td align="right">Warn</td><td align="right"><i>Missing</i></td><td align="right">Warn</td></tr>
+<tr><td>captures</td><td align="right"><i>Missing</i></td><td align="right">Fail</td><td align="right"><i>Missing</i></td><td align="right">Fail</td></tr>
+<tr><td>captures_test</td><td align="right"><i>Missing</i></td><td align="right">Warn</td><td align="right"><i>Missing</i></td><td align="right">Warn</td></tr>
+<tr><td>concept_check</td><td align="right"><i>Missing</i></td><td align="right">Pass</td><td align="right"><i>Missing</i></td><td align="right">Pass</td></tr>
+<tr><td>icu_concept_check</td><td align="right"><i>Missing</i></td><td align="right">Pass</td><td align="right"><i>Missing</i></td><td align="right">Pass</td></tr>
+<tr><td>object_cache_test</td><td align="right"><i>Missing</i></td><td align="right">Warn</td><td align="right"><i>Missing</i></td><td align="right">Warn</td></tr>
+<tr><td>posix_api_check</td><td align="right"><i>Missing</i></td><td align="right">Warn</td><td align="right"><i>Missing</i></td><td align="right">Warn</td></tr>
+<tr><td>posix_api_check_cpp</td><td align="right"><i>Missing</i></td><td align="right">Pass</td><td align="right"><i>Missing</i></td><td align="right">Pass</td></tr>
+<tr><td>recursion_test</td><td align="right"><i>Missing</i></td><td align="right">Warn</td><td align="right"><i>Missing</i></td><td align="right">Warn</td></tr>
+<tr><td>regex_config_info</td><td align="right"><i>Missing</i></td><td align="right">Pass</td><td align="right"><i>Missing</i></td><td align="right">Pass</td></tr>
+<tr><td>regex_dll_config_info</td><td align="right"><i>Missing</i></td><td align="right">Pass</td><td align="right"><i>Missing</i></td><td align="right">Pass</td></tr>
+<tr><td>regex_regress</td><td align="right">Pass<sup>*</sup></td><td align="right"><i>Missing</i></td><td align="right">Pass<sup>*</sup></td><td align="right"><i>Missing</i></td></tr>
+<tr><td>regex_regress_dll</td><td align="right"><i>Missing</i></td><td align="right">Pass<sup>*</sup></td><td align="right"><i>Missing</i></td><td align="right">Pass<sup>*</sup></td></tr>
+<tr><td>regex_regress_threaded</td><td align="right"><i>Missing</i></td><td align="right">Pass</td><td align="right"><i>Missing</i></td><td align="right">Pass</td></tr>
+<tr><td>static_mutex_test</td><td align="right"><i>Missing</i></td><td align="right">Pass</td><td align="right"><i>Missing</i></td><td align="right">Pass</td></tr>
+<tr><td>test_collate_info</td><td align="right"><i>Missing</i></td><td align="right">Warn</td><td align="right"><i>Missing</i></td><td align="right">Warn</td></tr>
+<tr><td>unicode_iterator_test</td><td align="right"><i>Missing</i></td><td align="right">Warn</td><td align="right"><i>Missing</i></td><td align="right">Warn</td></tr>
+<tr><td>wide_posix_api_check_c</td><td align="right"><i>Missing</i></td><td align="right">Warn</td><td align="right"><i>Missing</i></td><td align="right">Warn</td></tr>
+<tr><td>wide_posix_api_check_cpp</td><td align="right"><i>Missing</i></td><td align="right">Warn</td><td align="right"><i>Missing</i></td><td align="right">Warn</td></tr>
+</table>
+<p>
+This table was generated by invoking the following command line:
+<p>
+<code>
+../../../tools/regression/library_test --toolset=msvc-7.1 variant=debug,release
+</code>
+<p>
+from within the .../libs/regex/test directory.
+<p>
+This table shows the regex test results for both debug and release
+versions of the library. It also shows that one of the
+tests is run specifically against the static-linking/multi-threading
+versions of the runtime libraries. The cells marked "Missing" correspond
+to tests that were not run for some reason. This is usually
+because the corresponding <code>Jamfile.v2</code> excludes the test
+for the given combination of compiler and build attributes. In this
+example, all tests were run with the same compiler. If additional
+compilers were used, they would appear as more columns in the table.
+<p>
+The table above is just an illustration, so the links don't actually
+point to anything. In the table you generate, the links will
+display a page describing any errors, warnings or other available
+information about the test. If a test passes, there is usually
+no additional information and hence no link.
+<p>
+The tables are cumulative. That is, if you run one set of tests
+now and tests with different attributes later, the table will
+contain all the results to date. The test results are stored
+in <code>../bin.v2/libs/test/&lt;library&gt;/...</code>.
+To reinitialize the test results to empty, delete the corresponding
+files in this directory.
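+<p>
+For example, to discard the accumulated results for the regex library one
+might remove the corresponding directory (this follows the path given
+above; adjust it to match your actual build layout):
+<pre><code>
+rm -r ../bin.v2/libs/test/regex
+</code></pre>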
+<p>
+The procedure above assumes that the tables are generated within
+the directory <code>../libs/&lt;library&gt;/test</code>. This is the
+most common case since this directory contains the
+<code>Jamfile.v2</code> as well as the source code that is
+used by official Boost testers. However, this is just a convention.
+The table can be generated for other directories within the
+library. One possibility would be to generate the table for
+all the examples in <code>../libs/&lt;library&gt;/example</code>. Or
+one might have a special directory of performance tests which
+take a long time to run and hence are not suitable for running
+by official Boost testers. Just remember that the library
+status table is generated in the directory from which the
+<code>library_test</code> command is invoked.
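+<p>
+For instance, assuming the example directory contains a suitable
+<code>Jamfile.v2</code>, a table for the regex examples might be
+generated like this:
+<pre><code>
+cd libs/regex/example
+../../../tools/regression/library_test --toolset=msvc-7.1 variant=debug,release
+# library_status.html is written to this directory
+</code></pre>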
+<p>
+<h3>Running Tests for All Libraries</h3>
+For those with *nix or Cygwin command line shells, there is a shell
+script that can be run from the Boost root directory:
+<p>
+<code> tools/regression/library_test_all</code>
+<p>
+The command line arguments are the same as for running the tests
+for one library. This script creates all the HTML files in all
+the test directories as well as an HTML page in the <code>status</code>
+directory named <code>library_status_summary.html</code>. This
+can be used to browse through all test results for all tests in
+all libraries.
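+<p>
+A sketch of such a run (again, the toolset and variant values are only
+illustrative):
+<pre><code>
+cd $BOOST_ROOT
+tools/regression/library_test_all --toolset=msvc-7.1 variant=debug,release
+# browse status/library_status_summary.html when the script completes
+</code></pre>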
+<hr>
+
+<p>
+Copyright 2007 Robert Ramey. Distributed under the Boost Software License, Version 1.0.
+(See accompanying file LICENSE_1_0.txt or http://www.boost.org/LICENSE_1_0.txt)
+<p>
+Revised <!--webbot bot="Timestamp" startspan s-type="EDITED"
+s-format="%d %B, %Y" -->14 August, 2007<!--webbot bot="Timestamp"
+i-checksum="38582" endspan --></p>
+</body>
+</html>

