
Boost-Commit :

Subject: [Boost-commit] svn:boost r62071 - in trunk: libs/test/doc libs/test/doc/src libs/test/doc/src/examples libs/test/doc/src/snippet libs/test/doc/src/xsl tools/boostbook/xsl
From: steven_at_[hidden]
Date: 2010-05-17 16:09:23


Author: steven_watanabe
Date: 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
New Revision: 62071
URL: http://svn.boost.org/trac/boost/changeset/62071

Log:
Add source for Boost.Test docs
Added:
   trunk/libs/test/doc/Jamfile.v2 (contents, props changed)
   trunk/libs/test/doc/src/Jamfile.v2 (contents, props changed)
   trunk/libs/test/doc/src/btl-toc.xml (contents, props changed)
   trunk/libs/test/doc/src/btl.xml (contents, props changed)
   trunk/libs/test/doc/src/execution-monitor.xml (contents, props changed)
   trunk/libs/test/doc/src/faq.xml (contents, props changed)
   trunk/libs/test/doc/src/minimal-testing.xml (contents, props changed)
   trunk/libs/test/doc/src/program-execution-monitor.xml (contents, props changed)
   trunk/libs/test/doc/src/snippet/
   trunk/libs/test/doc/src/snippet/const_string.hpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/const_string_test.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet1.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet10.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet11.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet12.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet13.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet14.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet15.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet16.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet17.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet18.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet2.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet3.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet4.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet5.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet6.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet7.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet8.cpp (contents, props changed)
   trunk/libs/test/doc/src/snippet/snippet9.cpp (contents, props changed)
   trunk/libs/test/doc/src/tutorial.hello-the-testing-world.xml (contents, props changed)
   trunk/libs/test/doc/src/tutorial.intro-in-testing.xml (contents, props changed)
   trunk/libs/test/doc/src/tutorial.new-year-resolution.xml (contents, props changed)
   trunk/libs/test/doc/src/utf.testing-tools.xml (contents, props changed)
   trunk/libs/test/doc/src/utf.tutorials.xml (contents, props changed)
   trunk/libs/test/doc/src/utf.usage-recommendations.xml (contents, props changed)
   trunk/libs/test/doc/src/utf.user-guide.runtime-config.xml (contents, props changed)
   trunk/libs/test/doc/src/utf.users-guide.fixture.xml (contents, props changed)
   trunk/libs/test/doc/src/utf.users-guide.test-organization.xml (contents, props changed)
   trunk/libs/test/doc/src/utf.users-guide.test-output.xml (contents, props changed)
   trunk/libs/test/doc/src/utf.users-guide.xml (contents, props changed)
   trunk/libs/test/doc/src/utf.xml (contents, props changed)
   trunk/libs/test/doc/src/xsl/
   trunk/libs/test/doc/src/xsl/docbook.xsl (contents, props changed)
   trunk/libs/test/doc/src/xsl/html.xsl (contents, props changed)
   trunk/libs/test/doc/utf-boostbook.jam (contents, props changed)
   trunk/tools/boostbook/xsl/html-base.xsl
      - copied, changed from r62041, /trunk/tools/boostbook/xsl/html.xsl
Text files modified:
   trunk/libs/test/doc/src/examples/example05.cpp | 2
   trunk/libs/test/doc/src/examples/example17.cpp | 2
   trunk/libs/test/doc/src/examples/example22.cpp | 2
   trunk/libs/test/doc/src/examples/example23.cpp | 2
   trunk/tools/boostbook/xsl/chunk-common.xsl | 3
   trunk/tools/boostbook/xsl/html-base.xsl | 23 ---
   trunk/tools/boostbook/xsl/html.xsl | 299 ---------------------------------------
   7 files changed, 8 insertions(+), 325 deletions(-)

Added: trunk/libs/test/doc/Jamfile.v2
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/Jamfile.v2 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,27 @@
+# Jamfile.v2
+#
+# Copyright (c) 2010
+# Steven Watanabe
+#
+# Distributed under the Boost Software License, Version 1.0. (See
+# accompanying file LICENSE_1_0.txt or copy at
+# http://www.boost.org/LICENSE_1_0.txt)
+
+using utf-boostbook ;
+import path ;
+
+path-constant .here : . ;
+
+here = [ path.make $(.here) ] ;
+
+boostbook standalone
+ :
+ src/btl.xml
+ :
+ <xsl:param>chunk.toc=$(here)/src/btl-toc.xml
+ <xsl:param>manual.toc=$(here)/src/btl-toc.xml
+ <xsl:param>snippet.dir=file://$(here)/src/snippet
+ <xsl:param>example.dir=file://$(here)/src/examples
+ <utf-boostbook>on
+ <xsl:path>$(.here)/src
+;

Added: trunk/libs/test/doc/src/Jamfile.v2
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/Jamfile.v2 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,5 @@
+project boost/doc ;
+import boostbook : boostbook ;
+
+boostbook test-doc : UTF.xml ;
+

Added: trunk/libs/test/doc/src/btl-toc.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/btl-toc.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,254 @@
+<?xml version="1.0" encoding="utf-8"?>
+
+<toc role="chunk-toc" last-revision="$Date$">
+ <tocentry linkend="btl">
+ <?dbhtml filename="index.html"?>
+
+ <tocentry linkend="btl.intro">
+ <?dbhtml filename="intro.html"?>
+
+ <tocentry linkend="btl.faq">
+ <?dbhtml filename="faq.html"?>
+ </tocentry>
+
+ <tocentry linkend="btl.open-issues">
+ <?dbhtml filename="open-issues.html"?>
+ </tocentry>
+
+ <tocentry linkend="btl.aknowledgements">
+ <?dbhtml filename="aknowledgements.html"?>
+ </tocentry>
+ </tocentry>
+
+ <tocentry linkend="execution-monitor">
+ <?dbhtml filename="execution-monitor.html"?>
+
+ <tocentry linkend="execution-monitor.compilation">
+ <?dbhtml filename="execution-monitor/compilation.html"?>
+ </tocentry>
+
+ <tocentry linkend="execution-monitor.user-guide">
+ <?dbhtml filename="execution-monitor/user-guide.html"?>
+ </tocentry>
+
+ <tocentry linkend="execution-monitor.reference">
+ <?dbhtml filename="execution-monitor/reference.html"?>
+ </tocentry>
+ </tocentry>
+
+ <tocentry linkend="pem">
+ <?dbhtml filename="prg-exec-monitor.html"?>
+
+ <tocentry linkend="pem.impl">
+ <?dbhtml filename="prg-exec-monitor/impl.html"?>
+ </tocentry>
+
+ <tocentry linkend="pem.compilation">
+ <?dbhtml filename="prg-exec-monitor/compilation.html"?>
+ </tocentry>
+ </tocentry>
+
+ <tocentry linkend="minimal">
+ <?dbhtml filename="minimal.html"?>
+ </tocentry>
+
+ <tocentry linkend="utf">
+ <?dbhtml filename="utf.html"?>
+ <tocentry linkend="utf.intro">
+ <?dbhtml filename="utf/intro.html"?>
+ </tocentry>
+
+ <tocentry linkend="utf.tutorials">
+ <?dbhtml filename="utf/tutorials.html"?>
+
+ <tocentry linkend="tutorial.intro-in-testing">
+ <?dbhtml filename="tutorials/intro-in-testing.html"?>
+ </tocentry>
+
+ <tocentry linkend="tutorial.hello-the-testing-world">
+ <?dbhtml filename="tutorials/hello-the-testing-world.html"?>
+ </tocentry>
+
+ <tocentry linkend="tutorial.new-year-resolution">
+ <?dbhtml filename="tutorials/new-year-resolution.html"?>
+ </tocentry>
+ </tocentry>
+
+ <tocentry linkend="utf.compilation">
+ <?dbhtml filename="utf/compilation.html"?>
+ <tocentry linkend="utf.compilation.standalone">
+ <?dbhtml filename="utf/compilation/standalone.html"?>
+ </tocentry>
+ <tocentry linkend="utf.compilation.auto-linking">
+ <?dbhtml filename="utf/compilation/auto-linking.html"?>
+ </tocentry>
+ <tocentry linkend="utf.compilation.direct-include">
+ <?dbhtml filename="utf/compilation/direct-include.html"?>
+ </tocentry>
+ </tocentry>
+
+ <tocentry linkend="utf.user-guide">
+ <?dbhtml filename="utf/user-guide.html"?>
+ <tocentry linkend="utf.user-guide.usage-variants">
+ <?dbhtml filename="utf/user-guide/usage-variants.html"?>
+ <tocentry linkend="utf.user-guide.static-lib-variant">
+ <?dbhtml filename="utf/user-guide/usage-variants/static-lib-variant.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.dynamic-lib-variant">
+ <?dbhtml filename="utf/user-guide/usage-variants/dynamic-lib-variant.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.single-header-variant">
+ <?dbhtml filename="utf/user-guide/usage-variants/single-header-variant.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.extern-test-runner-variant">
+ <?dbhtml filename="utf/user-guide/usage-variants/extern-test-runner-variant.html"?>
+ </tocentry>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-runners">
+ <?dbhtml filename="utf/user-guide/test-runners.html"?>
+ <tocentry linkend="utf.user-guide.external-test-runner">
+ <?dbhtml filename="utf/user-guide/usage-variants/extern-test-runner.html"?>
+ </tocentry>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.initialization">
+ <?dbhtml filename="utf/user-guide/initialization.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-organization">
+ <?dbhtml filename="utf/user-guide/test-organization.html"?>
+ <tocentry linkend="utf.user-guide.test-organization.nullary-test-case">
+ <?dbhtml filename="utf/user-guide/test-organization/nullary-test-case.html"?>
+ <tocentry linkend="utf.user-guide.test-organization.manual-nullary-test-case">
+ <?dbhtml filename="utf/user-guide/test-organization/manual-nullary-test-case.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-organization.auto-nullary-test-case">
+ <?dbhtml filename="utf/user-guide/test-organization/auto-nullary-test-case.html"?>
+ </tocentry>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-organization.unary-test-case">
+ <?dbhtml filename="utf/user-guide/test-organization/unary-test-case.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-organization.test-case-template">
+ <?dbhtml filename="utf/user-guide/test-organization/test-case-template.html"?>
+ <tocentry linkend="utf.user-guide.test-organization.manual-test-case-template">
+ <?dbhtml filename="utf/user-guide/test-organization/manual-test-case-template.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-organization.auto-test-case-template">
+ <?dbhtml filename="utf/user-guide/test-organization/auto-test-case-template.html"?>
+ </tocentry>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-organization.test-suite">
+ <?dbhtml filename="utf/user-guide/test-organization/test-suite.html"?>
+ <tocentry linkend="utf.user-guide.test-organization.manual-test-suite">
+ <?dbhtml filename="utf/user-guide/test-organization/manual-test-suite.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-organization.auto-test-suite">
+ <?dbhtml filename="utf/user-guide/test-organization/auto-test-suite.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-organization.master-test-suite">
+ <?dbhtml filename="utf/user-guide/test-organization/master-test-suite.html"?>
+ </tocentry>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-organization.expected-failures">
+ <?dbhtml filename="utf/user-guide/test-organization/expected-failures.html"?>
+ </tocentry>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.fixture">
+ <?dbhtml filename="utf/user-guide/fixture.html"?>
+ <tocentry linkend="utf.user-guide.fixture.model">
+ <?dbhtml filename="utf/user-guide/fixture/model.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.fixture.per-test-case">
+ <?dbhtml filename="utf/user-guide/fixture/per-test-case.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.fixture.test-suite-shared">
+ <?dbhtml filename="utf/user-guide/fixture/test-suite-shared.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.fixture.global">
+ <?dbhtml filename="utf/user-guide/fixture/global.html"?>
+ </tocentry>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-output">
+ <?dbhtml filename="utf/user-guide/test-output.html"?>
+ <tocentry linkend="utf.user-guide.test-output.log">
+ <?dbhtml filename="utf/user-guide/test-output/test-log.html"?>
+ <tocentry linkend="utf.user-guide.test-output.log.BOOST_TEST_MESSAGE">
+ <?dbhtml filename="utf/user-guide/test-output/BOOST_TEST_MESSAGE.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-output.log.BOOST_TEST_CHECKPOINT">
+ <?dbhtml filename="utf/user-guide/test-output/BOOST_TEST_CHECKPOINT.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-output.log.BOOST_TEST_PASSPOINT">
+ <?dbhtml filename="utf/user-guide/test-output/BOOST_TEST_PASSPOINT.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-output.log.FPT">
+ <?dbhtml filename="utf/user-guide/test-output/BOOST_TEST_PASSPOINT.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-output.log.human-readabe-format">
+ <?dbhtml filename="utf/user-guide/test-output/log-hr-format.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-output.log.xml-format">
+ <?dbhtml filename="utf/user-guide/test-output/log-xml-format.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-output.log.ct-config">
+ <?dbhtml filename="utf/user-guide/test-output/log-ct-config.html"?>
+ </tocentry>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-output.results-report">
+ <?dbhtml filename="utf/user-guide/test-output/results-report.html"?>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.test-output.progress">
+ <?dbhtml filename="utf/user-guide/test-output/test-progress.html"?>
+ </tocentry>
+ </tocentry>
+ <tocentry linkend="utf.user-guide.runtime-config">
+ <?dbhtml filename="utf/user-guide/runtime-config.html"?>
+
+ <tocentry linkend="utf.user-guide.runtime-config.run-by-name">
+ <?dbhtml filename="utf/user-guide/runtime-config/run-by-name.html"?>
+ </tocentry>
+
+ <tocentry linkend="utf.user-guide.runtime-config.reference">
+ <?dbhtml filename="utf/user-guide/runtime-config/reference.html"?>
+ </tocentry>
+ </tocentry>
+ </tocentry>
+
+ <tocentry linkend="utf.testing-tools">
+ <?dbhtml filename="utf/testing-tools.html"?>
+
+ <tocentry linkend="utf.testing-tools.output-test">
+ <?dbhtml filename="utf/testing-tools/output-test.html"?>
+ </tocentry>
+
+ <tocentry linkend="utf.testing-tools.custom-predicate">
+ <?dbhtml filename="utf/testing-tools/custom-predicate.html"?>
+ </tocentry>
+
+ <tocentry linkend="utf.testing-tools.fpv-comparison">
+ <?dbhtml filename="utf/testing-tools/floating_point_comparison.html"?>
+ </tocentry>
+
+ <tocentry linkend="utf.testing-tools.reference">
+ <?dbhtml filename="utf/testing-tools/reference.html"?>
+ </tocentry>
+ </tocentry>
+
+ <tocentry linkend="utf.usage-recommendations">
+ <?dbhtml filename="utf/usage-recommendations.html"?>
+
+ <tocentry linkend="utf.usage-recommendations.generic">
+ <?dbhtml filename="utf/usage-recommendations/generic.html"?>
+ </tocentry>
+
+ <tocentry linkend="utf.usage-recommendations.dot-net-specific">
+ <?dbhtml filename="utf/usage-recommendations/dot-net-specific.html"?>
+ </tocentry>
+
+ <tocentry linkend="utf.usage-recommendations.command-line-specific">
+ <?dbhtml filename="utf/usage-recommendations/command-line-specific.html"?>
+ </tocentry>
+ </tocentry>
+ </tocentry>
+ </tocentry>
+</toc>
+

Added: trunk/libs/test/doc/src/btl.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/btl.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,165 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE library PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
+ <!ENTITY utf "<acronym>UTF</acronym>">
+]>
+<library name="Test" dirname="test" id="btl" last-revision="$Date$">
+ <libraryinfo>
+ <author>
+ <firstname>Gennadiy</firstname>
+ <surname>Rozental</surname>
+ <email>boost-test =at= emailaccount =dot= com</email>
+ </author>
+ <copyright>
+ <year>2001</year>
+ <year>2002</year>
+ <year>2003</year>
+ <year>2004</year>
+ <year>2005</year>
+ <year>2006</year>
+ <year>2007</year>
+ <year>2008</year>
+ <holder>Gennadiy Rozental</holder>
+ </copyright>
+
+ <legalnotice>
+ <simpara>
+ Use, modification and distribution is subject to the Boost Software License, Version 1.0. (See accompanying file
+ <filename>LICENSE_1_0.txt</filename> or copy at
+ <ulink url="http://www.boost.org/LICENSE_1_0.txt">http://www.boost.org/LICENSE_1_0.txt</ulink>)
+ </simpara>
+ </legalnotice>
+
+ <librarypurpose>
+ The Boost Test Library provides a matched set of components for writing test programs, organizing tests into simple
+ test cases and test suites, and controlling their runtime execution.
+ </librarypurpose>
+ <librarycategory name="category:correctness-and-testing"/>
+ </libraryinfo>
+
+ <title>Boost Test Library</title> <!-- TO FIX: should be header 1 -->
+
+ <section id="btl.intro">
+ <title>Introduction</title>
+
+ <epigraph>
+ <attribution>XP maxim</attribution>
+ <simpara>Test everything that could possibly break</simpara>
+ </epigraph>
+
+ <para role="first-line-indented">
+ The Boost Test Library provides a matched set of components for writing test programs, organizing tests into
+ simple test cases and test suites, and controlling their runtime execution. The Program Execution Monitor is also
+ useful in some production (non-test) environments.
+ </para>
+
+ <section id="btl.about-docs">
+ <title>About this documentation</title>
+
+ <para role="first-line-indented">
+ This documentation is <emphasis role="bold">not</emphasis> intended to be read through from beginning to end by
+ a novice user. You can do that if you are interested in a detailed bottom-up description of all Boost.Test components.
+ Otherwise you are better off jumping directly to the subject of your interest. For example, if you are interested in
+ the unit testing framework you may go directly <link linkend="utf">there</link>, but for novice users I recommend
+ starting with the simple <link linkend="utf.tutorials">tutorials</link>. If you are looking for quick help, check the
+ <link linkend="btl.faq">FAQ</link> section for resolutions to many common issues you may face. Most pages
+ link directly to the terms you need to understand them. For help with compilation, see the compilation
+ section of the appropriate component.
+ </para>
+ </section>
+
+ <section id="btl.release-notes">
+ <title>Release notes</title>
+
+ <para role="first-line-indented">
+ For more details see complete release notes. <!-- TO FIX -->
+ </para>
+ </section>
+
+ <xi:include href="faq.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+
+ <section id="btl.portability">
+ <title>Portability</title>
+
+ <para role="first-line-indented">
+ Because the Boost Test Library is critical for porting and testing Boost libraries, it has been written to be
+ conservative in its use of C++ features, and to keep dependencies to a bare minimum.
+ </para>
+
+ <para role="first-line-indented">
+ Boost.Test supports all main Boost compilers and platforms. Confirmation of its status on core and additional
+ platforms/compilers can be seen by viewing Boost.Test's own internal regression test results on the
+ <ulink url="http://beta.boost.org/development/tests/release/user/test_release.html">release status page</ulink> or the
+ <ulink url="http://beta.boost.org/development/tests/trunk/developer/test.html">trunk status page</ulink>.
+ </para>
+ </section>
+
+ <section id="btl.open-issues">
+ <title>Open issues</title>
+
+ <itemizedlist>
+ <listitem><simpara>Finish update for the command line arguments support</simpara></listitem>
+ <listitem><simpara>Selective test cases run by name</simpara>
+ </listitem>
+ <listitem><simpara>
+ Boost.Test thread safety needs to be achieved, at least when BOOST_TEST_THREAD_SAFE is defined. This will require
+ a separate discussion
+ </simpara></listitem>
+ <listitem><simpara>
+ Some performance testing tools (i.e. a profiler), unless somebody else comes up with something like this
+ </simpara></listitem>
+ <listitem><simpara>
+ Build info feature needs to be updated: there are at least two different "build infos": library build and test
+ module build
+ </simpara></listitem>
+ <listitem><simpara>More tutorial documentation.</simpara></listitem>
+ <listitem>
+ <simpara>
+ Projects that could be very interesting, but I may not be able to do it by myself:
+ <itemizedlist>
+ <listitem>
+ <simpara>An add-on for Visual Studio to automate test case/ test modules generation</simpara>
+ </listitem>
+ <listitem>
+ <simpara>Set of Python and/or Perl scripts to automate test case/test modules generation from command line</simpara>
+ </listitem>
+ </itemizedlist>
+ </simpara>
+ </listitem>
+ <listitem><simpara>Memory usage test tools.</simpara></listitem>
+ <listitem><simpara>Time-out implementation on Win32 platform.</simpara></listitem>
+ <listitem><simpara>Make output_test_stream match the way diff does</simpara></listitem>
+ <listitem><simpara>Better Unicode support (reports and log in wostream)</simpara></listitem>
+ <listitem><simpara>Support for custom test case dependency/condition</simpara></listitem>
+ </itemizedlist>
+ </section>
+
+ <section id="btl.aknowledgements">
+ <title>Acknowledgements</title>
+
+ <simpara>Original Test Library:</simpara>
+
+ <para role="first-line-indented">
+ Ed Brey, Kevlin Henney, Ullrich Koethe, and Thomas Matelich provided very helpful comments during development.
+ Dave Abrahams, Ed Brey, William Kempf, Jens Maurer, and Wilka suggested numerous improvements during the Formal
+ Review. Jens Maurer was the review manager. Beman Dawes is the developer and maintainer.
+ </para>
+
+ <simpara>Second incarnation including the Unit Test Framework:</simpara>
+
+ <para role="first-line-indented">
+ Beman Dawes and Ullrich Koethe started the library. Fernando Cacciola, Jeremy Siek, Beman Dawes, Ullrich Koethe,
+ Dave Abrahams suggested numerous improvements during the Formal Review. Jeremy Siek was the review manager. Beman
+ Dawes was a great help in both final testing and merging the library with the rest of Boost. Gennadiy Rozental is the
+ developer and maintainer.
+ </para>
+
+ </section>
+ </section>
+
+ <xi:include href="execution-monitor.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+ <xi:include href="program-execution-monitor.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+ <xi:include href="minimal-testing.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+ <xi:include href="utf.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+
+ <!-- TO FIX: index -->
+</library>

Modified: trunk/libs/test/doc/src/examples/example05.cpp
==============================================================================
--- trunk/libs/test/doc/src/examples/example05.cpp (original)
+++ trunk/libs/test/doc/src/examples/example05.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -1,5 +1,5 @@
 #define BOOST_TEST_DYN_LINK
-#include <boost/test/included/unit_test.hpp>
+#include <boost/test/unit_test.hpp>
 #include <boost/bind.hpp>
 using namespace boost::unit_test;
 

Modified: trunk/libs/test/doc/src/examples/example17.cpp
==============================================================================
--- trunk/libs/test/doc/src/examples/example17.cpp (original)
+++ trunk/libs/test/doc/src/examples/example17.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -24,4 +24,4 @@
 
 //____________________________________________________________________________//
 
-BOOST_AUTO_TEST_SUITE_END()
+BOOST_AUTO_TEST_SUITE_END()
\ No newline at end of file

Modified: trunk/libs/test/doc/src/examples/example22.cpp
==============================================================================
--- trunk/libs/test/doc/src/examples/example22.cpp (original)
+++ trunk/libs/test/doc/src/examples/example22.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -22,4 +22,4 @@
     int j = 2/(i-1);
 }
 
-//____________________________________________________________________________//
+//____________________________________________________________________________//
\ No newline at end of file

Modified: trunk/libs/test/doc/src/examples/example23.cpp
==============================================================================
--- trunk/libs/test/doc/src/examples/example23.cpp (original)
+++ trunk/libs/test/doc/src/examples/example23.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -17,4 +17,4 @@
     int j = *p;
 }
 
-//____________________________________________________________________________//
+//____________________________________________________________________________//
\ No newline at end of file

Added: trunk/libs/test/doc/src/execution-monitor.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/execution-monitor.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,491 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE part PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
+ <!ENTITY utf "<acronym>UTF</acronym>">
+]>
+<section id="execution-monitor" last-revision="$Date$">
+ <title>Boost Test Library: The Execution Monitor</title>
+ <titleabbrev>The Execution Monitor</titleabbrev>
+
+ <section id="execution-monitor.intro">
+ <title/>
+
+ <para role="first-line-indented">
+ Sometimes we need to call a function and make sure that no user- or system-originated exceptions are thrown
+ by it. Uniform exception reporting may also be convenient. That's the purpose of Boost.Test's
+ <firstterm>Execution Monitor</firstterm>.
+ </para>
+
+ <para role="first-line-indented">
+ The Execution Monitor is a lower-level component of the Boost Test Library. It is the base for implementing all
+ other Boost.Test components, but can also be used standalone to get controlled execution of error-prone functions
+ with a uniform error notification. The Execution Monitor calls a user-supplied function in a controlled
+ environment, relieving users from messy error detection.
+ </para>
+
+ <para role="first-line-indented">
+ Usage of the Execution Monitor is demonstrated in the example exec_mon_example <!-- TO FIX: link to example -->. Additional examples are
+ in <xref linkend="pem"/> or <xref linkend="utf"/>.
+ </para>
+
+ <section id="execution-monitor.design">
+ <title>Design Rationale</title>
+
+ <para role="first-line-indented">
+ The Execution Monitor design assumes that it can be used when no (or almost no) memory is available. The
+ Execution Monitor is also intended to be portable to as many platforms as possible.
+ </para>
+ </section>
+ </section>
+
+ <section id="execution-monitor.compilation">
+ <title>The Execution Monitor compilation variants and procedures</title>
+ <titleabbrev>Compilation</titleabbrev>
+
+ <section id="execution-monitor.impl">
+ <title>Implementation</title>
+
+ <para role="first-line-indented">
+ The Execution Monitor is implemented in two modules: one header file and one source file.
+ </para>
+
+ <variablelist>
+ <?dbhtml term-separator=": "?>
+ <?dbhtml list-presentation="list"?>
+
+ <varlistentry>
+ <term><filename>boost/test/execution_monitor.hpp</filename></term>
+ <listitem>
+ <simpara>
+ defines abstract execution monitor interfaces and implements execution exception.
+ </simpara>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term><ulink url="../../../../boost/test/impl/execution_monitor.ipp"><filename>boost/test/impl/execution_monitor.ipp</filename></ulink></term>
+ <listitem>
+ <simpara>
+ provides the Execution Monitor implementation for all supported configurations, including Microsoft structured
+ exception handling and UNIX signals.
+ </simpara>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+
+ <para role="first-line-indented">
+ You may use this component in both debug and release modes, but in release mode the Execution Monitor won't
+ catch Microsoft C runtime debug events.
+ </para>
+ </section>
+
+ <section id="execution-monitor.lib-compilation">
+ <title>Standalone library compilation</title>
+
+ <para role="first-line-indented">
+ To compile the Execution Monitor as a standalone library, compose it using only
+ <filename>execution-monitor.cpp</filename> as a source file. Alternatively, you can add this file directly to the
+ list of source files for your project. Boost Test Library's components include this file as a part of their
+ compilation procedure.
+ </para>
+ </section>
+
+ <section id="execution-monitor.direct-include">
+ <title>Direct include</title>
+
+ <para role="first-line-indented">
+ In some cases you may want to include the source file along with the header file in your sources. Be aware,
+ however, that in order to catch all kinds of standard exceptions and to implement the signal-handling logic, this
+ file brings in a lot of dependencies.
+ </para>
+ </section>
+ </section>
+
+ <section id="execution-monitor.user-guide">
+ <title>The Execution Monitor user's guide</title>
+ <titleabbrev>User's guide</titleabbrev>
+
+ <para role="first-line-indented">
+ The Execution Monitor is designed to solve the problem of executing a potentially dangerous function that may
+ result in any number of error conditions, in a monitored environment that prevents any undesirable exceptions from
+ propagating out of the function call and produces a consistent result report for all
+ <link linkend="execution-monitor.user-guide.monitor-outcomes">outcomes</link>. The Execution Monitor is able to
+ produce an informative report for all standard C++ exceptions and intrinsic types. All other exceptions are reported
+ as unknown. If you prefer a different message for your exception class or need to perform any action, the Execution
+ Monitor supports <link linkend="execution-monitor.user-guide.errors-reporting">custom exception translators</link>.
+ Several other <link linkend="execution-monitor.user-guide.monitor-params">parameters</link> of the
+ monitored environment can be configured by setting appropriate properties of the Execution Monitor.
+ </para>
+
+ <para role="first-line-indented">
+ All symbols in the Execution Monitor implementation are located in the namespace boost. To use the Execution
+ Monitor you need to:
+ </para>
+
+ <using-namespace name="boost"/>
+
+ <orderedlist>
+ <listitem>
+ <simpara>
+ #include &lt;<ulink url="../../../../boost/test/execution_monitor.hpp"><filename>boost/test/execution_monitor.hpp</filename></ulink>&gt;
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>Make an instance of <classname>execution_monitor</classname></simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ Optionally register custom exception translators for exception classes which require special processing.
+ </simpara>
+ </listitem>
+ </orderedlist>
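[Editor's note: the custom exception translator mechanism mentioned in step 3 can be sketched conceptually. The following is an illustration of the rethrow-and-match technique such translators typically rely on, not Boost's actual interface; the names mini_monitor, register_translator, and my_error are hypothetical.]

```cpp
#include <functional>
#include <string>
#include <vector>

// Hypothetical user exception type, for illustration only.
struct my_error { int id; };

// Conceptual sketch of a monitor with registrable exception translators.
class mini_monitor {
    // Each stored translator rethrows the current exception and, if it
    // matches the registered type, returns a tailored message.
    std::vector<std::function<std::string()>> translators_;
public:
    template <typename E, typename Tr>
    void register_translator(Tr tr) {
        translators_.push_back([tr]() -> std::string {
            try { throw; }                   // rethrow the exception being handled
            catch (const E& e) { return tr(e); }
        });
    }

    // Run f in a controlled environment: no exception escapes, and every
    // outcome is reduced to a uniform report string.
    std::string execute(const std::function<void()>& f) {
        try { f(); return "ok"; }
        catch (...) {
            for (auto& t : translators_) {
                try { return t(); }          // wrong type: rethrown, try the next
                catch (...) {}
            }
            return "unknown exception";
        }
    }
};
```

A caller would register a translator per exception class and then invoke execute; any exception without a registered translator falls through to the uniform "unknown" report, mirroring the behavior described in the text.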
+
+ <section id="execution-monitor.user-guide.monitor-outcomes">
+ <title>Monitored function execution</title>
+
+ <para role="first-line-indented">
+ To start the monitored function, invoke the method <methodname>execution_monitor::execute</methodname> and pass
+ the monitored function as an argument. If the call succeeds, the method returns the result code produced by the
+ monitored function. If any of the following conditions occur:
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <simpara>Uncaught C++ exception.</simpara>
+ </listitem>
+ <listitem>
+ <simpara>Hardware or software signal, trap, or other exception.</simpara>
+ </listitem>
+ <listitem>
+ <simpara>Timeout reached.</simpara>
+ </listitem>
+ <listitem>
+ <simpara>Debug assert event occurred (under Microsoft Visual C++ or compatible compiler).</simpara>
+ </listitem>
+ </itemizedlist>
+
+ <simpara>
+ then the method throws an <classname>execution_exception</classname>. The exception contains a unique
+ <enumname>error_code</enumname> value identifying the error condition and a detailed message that can be used to
+ report the error.
+ </simpara>
+ </section>
+
+ <section id="execution-monitor.user-guide.monitor-params">
+ <title>The execution monitor parameters</title>
+
+ <para role="first-line-indented">
+ All parameters are implemented as public read-write properties of class <classname>execution_monitor</classname>.
+ </para>
+
+ <para role="first-line-indented">
+ The <firstterm>p_catch_system_errors</firstterm> property is a boolean flag (default value: true) specifying whether
+ or not <classname>execution_monitor</classname> should trap system-level exceptions (the second category in the list
+ above). Set this property to false, for example, if you wish to force creation of a core dump file. The Unit Test
+ Framework provides the runtime parameter --catch_system_errors=yes to alter this behavior in monitored test cases.
+ </para>
+
+ <para role="first-line-indented">
+ The <firstterm>p_auto_start_dbg</firstterm> property is a boolean flag (default value: false) specifying whether or
+ not <classname>execution_monitor</classname> should attempt to attach a debugger when a system error is caught.
+ </para>
+
+ <para role="first-line-indented">
+ The <firstterm>p_timeout</firstterm> property is an integer timeout (in seconds) for monitored function execution. Use
+ this parameter to monitor code with possible deadlocks or infinite loops. This feature is only available on some
+ operating systems (not yet on Microsoft Windows).
+ </para>
+
+ <para role="first-line-indented">
+ The <firstterm>p_use_alt_stack</firstterm> property is a boolean flag (default value: false) specifying whether or
+ not <classname>execution_monitor</classname> should use an alternative stack for
+ <functionname>sigaction</functionname>-based signal catching. When enabled, signals are delivered to the
+ <classname>execution_monitor</classname> on a stack different from the current execution stack, which is safer in
+ case the latter is corrupted by the monitored function. For more details on alternative stack handling see the appropriate
+ <ulink url="http://www.opengroup.org/onlinepubs/000095399/functions/sigaltstack.html">manuals</ulink>.
+ </para>
+
+ <para role="first-line-indented">
+ The <firstterm>p_detect_fp_exceptions</firstterm> property is a boolean flag (default value: false) specifying
+ whether or not <classname>execution_monitor</classname> should install hardware traps for floating-point
+ exceptions on platforms where this is supported.
+ </para>
+
+ </section>
+
+ <section id="execution-monitor.user-guide.errors-reporting">
+ <title>Errors reporting and translation</title>
+ <para role="first-line-indented">
+ To report an error from within the monitored function, throw an exception. Do not use
+ <classname>execution_exception</classname> for this purpose - it is not intended to be thrown by user code. The
+ simplest choice is to use one of the following C++ types as the exception:
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <simpara>C string.</simpara>
+ </listitem>
+ <listitem>
+ <simpara>std::string.</simpara>
+ </listitem>
+ <listitem>
+ <simpara>any exception class in the std::exception hierarchy.</simpara>
+ </listitem>
+ </itemizedlist>
+
+ <para role="first-line-indented">
+ If you prefer to use your own exception classes, or cannot control which exceptions are generated by the monitored
+ function, and would like to see a proper error message in the report, the Execution Monitor allows you to register a
+ translator for any exception class. You can register as many independent translators as you like. See the
+ <classname>execution_monitor</classname> specification for the requirements on the translator function, and see below
+ for a usage example.
+ </para>
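+
+ <para role="first-line-indented">
+ The translator-chaining idea can be sketched in plain C++ using the classic rethrow-and-catch technique. The
+ translator_chain class and my_error type below are hypothetical illustrations, not the Boost.Test API (the real
+ registration call is <methodname>register_exception_translator</methodname>, documented in the reference section).
+ </para>

```cpp
#include <exception>
#include <functional>
#include <string>
#include <vector>

// Hypothetical sketch of translator chaining: each registered translator
// tries to recognise "its" exception type and produce an error message.
struct translator_chain {
    std::vector<std::function<std::string()>> translators;

    // Register a translator for one exception type. The stored wrapper
    // rethrows the active exception and handles only the registered type.
    template <typename Exception>
    void add(std::function<std::string(const Exception&)> tr) {
        translators.push_back([tr]() -> std::string {
            try { throw; }                       // rethrow the active exception
            catch (const Exception& e) { return tr(e); }
        });
    }

    // Must be called from inside a catch block: walk the chain until one
    // translator accepts the currently handled exception.
    std::string translate() {
        for (const auto& t : translators) {
            try { return t(); }
            catch (...) { /* not this translator's type: try the next one */ }
        }
        return "unknown exception";
    }
};

// Hypothetical user exception type used in the usage example.
struct my_error { std::string description; };
```

+ <para role="first-line-indented">
+ The monitor would invoke such a chain from its own catch(...) handler, so each user-supplied translator sees only
+ the exception type it was registered for.
+ </para>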
+
+ <para role="first-line-indented">
+ Finally, if you need to abort the monitored function execution without reporting any errors, throw an
+ <classname>execution_aborted</classname> exception. As a result the execution is aborted and a zero result code
+ is produced by the method <methodname>execution_monitor::execute</methodname>.
+ </para>
+
+ </section>
+
+ <section id="execution-monitor.user-guide.mem-leaks-detection">
+ <title>Memory leaks detection</title>
+
+ <para role="first-line-indented">
+ The Execution Monitor provides a limited ability to detect memory leaks during program execution and to
+ break program execution at a specific memory allocation order number (1 for the first allocation in the program,
+ 2 for the second, and so on). Unfortunately, this feature is at the moment implemented only for the Microsoft family
+ of compilers (and Intel, when it employs the Microsoft C runtime library). It also cannot be tuned per monitor
+ instance: it is triggered globally and reported only after the whole program execution is done. This ought to be
+ improved in the future. The interface consists of two free functions residing in namespace boost:
+ </para>
+
+ <!-- TO FIX -->
+ <programlisting>void detect_memory_leaks( bool on_off );
+void break_memory_alloc( long mem_alloc_order_num );</programlisting>
+
+ <para role="first-line-indented">
+ Use detect_memory_leaks to switch memory leak detection on or off. Use break_memory_alloc to break
+ program execution at the allocation specified by the mem_alloc_order_num argument. The Unit Test Framework
+ provides a runtime parameter (--detect_memory_leak=yes or no) that allows you to manage this feature during monitored
+ unit tests.
+ </para>
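+
+ <para role="first-line-indented">
+ The notion of an allocation order number can be illustrated portably by replacing the global operator new with a
+ counting version, as sketched below. This is a conceptual model only; the actual Boost feature delegates to the
+ Microsoft C runtime debug heap, and the g_alloc_order and g_break_on names are hypothetical.
+ </para>

```cpp
#include <cstddef>
#include <cstdlib>
#include <new>

// Portable illustration of the "allocation order number" idea: a global
// counter incremented on every operator new, so allocation #1 is the first
// allocation in the program, #2 the second, and so on.
static long g_alloc_order = 0;
static long g_break_on = -1;   // analogue of break_memory_alloc(); -1 = never

void* operator new(std::size_t size) {
    ++g_alloc_order;
    if (g_alloc_order == g_break_on)
        std::abort();          // where a debugger breakpoint would fire
    if (void* p = std::malloc(size ? size : 1))
        return p;
    throw std::bad_alloc();
}

void operator delete(void* p) noexcept { std::free(p); }
```

+ <para role="first-line-indented">
+ A leak checker built on this idea records the order number of each live allocation at exit, so a reported leak can
+ be reproduced deterministically by breaking on that same order number in the next run.
+ </para>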
+ </section>
+ </section>
+
+ <library-reference id="execution-monitor.reference">
+ <title>The Execution Monitor reference</title>
+ <titleabbrev>Reference</titleabbrev>
+
+ <header name="boost/test/execution_monitor.hpp">
+ <namespace name="boost">
+
+ <class name="execution_monitor">
+ <purpose>
+ uniformly detects and reports the occurrence of several types of signals and exceptions, reducing various errors
+ to a uniform <classname>execution_exception</classname> that is reported to the caller
+ </purpose>
+
+ <data-member name="p_catch_system_errors">
+ <type><classname>unit_test::readwrite_property</classname>&lt;bool&gt;</type>
+
+ <!-- TO FIX: init value? -->
+
+ <!-- TO FIX -->
+ <purpose>
+ Specifies whether the monitor should try to catch system errors/exceptions that would otherwise cause the program
+ to crash.
+ </purpose>
+ </data-member>
+
+ <data-member name="p_auto_start_dbg">
+ <type><classname>unit_test::readwrite_property</classname>&lt;bool&gt;</type>
+
+ <!-- TO FIX -->
+ <purpose>
+ Specifies whether the monitor should attempt to attach a debugger when a system error is caught.
+ </purpose>
+ </data-member>
+
+ <data-member name="p_timeout">
+ <type><classname>unit_test::readwrite_property</classname>&lt;int&gt;</type>
+
+ <!-- TO FIX -->
+ <purpose>
+ Specifies the number of seconds that may elapse before a timeout_error occurs. May be ignored on some platforms.
+ </purpose>
+ </data-member>
+
+ <data-member name="p_use_alt_stack">
+ <type><classname>unit_test::readwrite_property</classname>&lt;bool&gt;</type>
+
+ <!-- TO FIX -->
+ <purpose>
+ Specifies whether the monitor should use an alternative stack for signal catching.
+ </purpose>
+ </data-member>
+
+ <data-member name="p_detect_fp_exceptions">
+ <type><classname>unit_test::readwrite_property</classname>&lt;bool&gt;</type>
+
+ <!-- TO FIX -->
+ <purpose>
+ Specifies whether or not <classname>execution_monitor</classname> should install hardware traps for floating-point
+ exceptions.
+ </purpose>
+ </data-member>
+
+ <constructor>
+ <throws><simpara>Nothing.</simpara></throws>
+
+ <effects><simpara>Constructs <classname>execution_monitor</classname> object.</simpara></effects>
+ </constructor>
+
+ <method-group name="execution">
+ <method name="execute">
+ <parameter name="F">
+ <paramtype><classname>unit_test::callback0</classname>&lt;int&gt; const&amp;</paramtype>
+ <description>zero-arity function to be monitored</description> <!-- TO FIX -->
+ </parameter>
+
+ <type>int</type>
+
+ <returns><simpara>Value returned by monitored function F call.</simpara></returns>
+
+ <throws>
+ <simpara>
+ <classname>execution_exception</classname> if an uncaught C++ exception, a hardware or software signal,
+ trap, or other premature failure of the monitored function F is detected.
+ </simpara>
+ </throws>
+
+ <notes><simpara>The execute method does not consider it an error for F to return a non-zero value.</simpara></notes>
+ </method>
+ </method-group>
+
+ <method-group name="registration">
+ <method name="register_exception_translator">
+ <template>
+ <template-type-parameter name="Exception"/>
+ <template-type-parameter name="ExceptionTranslator"/><!-- TO FIX: how to specify parameter concept? -->
+ </template>
+
+ <parameter name="tr">
+ <paramtype>ExceptionTranslator const&amp;</paramtype>
+ </parameter>
+
+ <parameter name="dummy">
+ <paramtype><classname>boost::type</classname>&lt;Exception&gt;*</paramtype>
+ <default>0</default>
+ </parameter>
+
+ <type>void</type>
+
+ <throws><simpara>Nothing.</simpara></throws>
+
+ <purpose>register custom (user supplied) exception translator</purpose> <!-- TO FIX: where it is? -->
+
+ <effects>
+ <simpara>
+ Registers the translator function tr for exceptions of type Exception. Translators get chained, so you can
+ register as many as you want. The Exception type needs to be specified explicitly as the member function
+ template argument. The translator function gets called when an exception of type Exception is thrown from
+ within the monitored function, and receives the thrown exception object as its first argument.
+ The translator's result value is ignored, and no error is reported if the translator exits normally; however,
+ you can always rethrow the exception or throw a different one.
+ </simpara>
+ </effects>
+
+ <!-- TO FIX: extra indent before template result type -->
+ </method>
+ </method-group>
+ </class>
+
+ <!-- TO FIX: separate page per class -->
+
+ <class name="execution_exception">
+ <enum name="error_code">
+ <description>
+ <para role="first-line-indented">
+ These values are sometimes used as program return codes. The particular values have been chosen to avoid
+ conflicts with commonly used program return codes: values &lt; 100 are often user assigned, values &gt; 255 are
+ sometimes used to report system errors. Gaps in values allow for orderly expansion.
+ </para>
+
+ <note>
+ <simpara>
+ Only uncaught C++ exceptions are treated as errors. If the application catches a C++ exception, it will never
+ reach the <classname>execution_monitor</classname>.
+ </simpara>
+ </note>
+
+ <note>
+ <simpara>
+ The system errors include <acronym>UNIX</acronym> signals and Windows structured exceptions. They are often
+ initiated by hardware traps.
+ </simpara>
+ </note>
+
+ <para role="first-line-indented">
+ The implementation decides what is a system_fatal_error and what is just a system_error. Fatal errors
+ are so likely to have corrupted machine state (like a stack overflow or addressing exception) that it is
+ unreasonable to continue execution.
+ </para>
+ </description>
+ <enumvalue name="no_error"/>
+ <enumvalue name="user_error"/>
+ <enumvalue name="cpp_exception_error"/>
+ <enumvalue name="system_error"/>
+ <enumvalue name="timeout_error"/>
+ <enumvalue name="user_fatal_error"/>
+ <enumvalue name="system_fatal_error"/>
+ </enum>
+
+ <rationale>
+ fear of being out (or nearly out) of memory.
+ </rationale>
+
+ <purpose>
+ uniformly reports monitored function execution problems
+ </purpose>
+
+ <description>
+ <para role="first-line-indented">
+ The class execution_exception is an exception used by the Execution Monitor to report problems detected during
+ a monitored function execution. It intentionally does not allocate any memory so as to be safe for use when
+ there is a lack of memory.
+ </para>
+ </description>
+ </class>
+
+ <class name="execution_aborted">
+ <purpose>
+ This is a trivial default-constructible class. Use it to report graceful abortion of a monitored function's
+ execution.
+ </purpose>
+ </class>
+
+
+ <class name="system_error">
+ <constructor>
+ <throws><simpara>Nothing.</simpara></throws>
+
+ <effects><simpara>Constructs <classname>system_error</classname> object.</simpara></effects>
+ </constructor>
+
+ <data-member name="p_errno">
+ <type><classname>unit_test::readonly_property</classname>&lt;long&gt;</type>
+
+ <!-- TO FIX -->
+ <purpose>
+ System errno value at the point of error.
+ </purpose>
+ </data-member>
+
+ <purpose>
+ This is a default-constructible class. Use it to report a failure in a system call invocation.
+ </purpose>
+ </class>
+ </namespace>
+ </header>
+ </library-reference>
+</section>

Added: trunk/libs/test/doc/src/faq.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/faq.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,149 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE section PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
+]>
+<section id="btl.faq">
+ <title>Frequently Asked Questions</title>
+ <titleabbrev>FAQ</titleabbrev>
+
+ <qandaset defaultlabel="none">
+ <?dbhtml label-width="0%"?>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ Where is the latest version of the Boost Test Library located?
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ The latest version of the Boost Test Library is available online at <ulink url="http://www.boost.org/libs/test"/>.
+ </para>
+ </answer>
+ </qandaentry>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ I found a bug. Where can I report it?
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ You can send a bug report to the boost users' mailing list and/or directly to
+ <ulink url="mailto:boost-test -at- emailacocunt -dot- com">Gennadiy Rozental</ulink>.
+ </para>
+ </answer>
+ </qandaentry>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ I have a request for a new feature. Where can I ask for it?
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ You can send a request to the boost developers' mailing list and/or directly to
+ <ulink url="mailto:boost-test -at- emailacocunt -dot- com">Gennadiy Rozental</ulink>.
+ </para>
+ </answer>
+ </qandaentry>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ How do I create a test case using the Unit Test Framework?
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ To create a test case use the macro BOOST_AUTO_TEST_CASE( test_function ). For more details see the Unit Test Framework
+ <link linkend="utf.user-guide.test-organization.auto-nullary-test-case">documentation</link>.
+ </para>
+ </answer>
+ </qandaentry>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ How do I create a test suite using the Unit Test Framework?
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ To create a test suite use the macro BOOST_AUTO_TEST_SUITE( suite_name ). For more details see the Unit Test Framework
+ <link linkend="utf.user-guide.test-organization.auto-test-suite">documentation</link>.
+ </para>
+ </answer>
+ </qandaentry>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ Why did I get a linker error when compiling my test program?
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ Boost Test Library components come in several usage variants: to create a test program you can either
+ link with one of the precompiled library variants or use the single-header variant. For example, to use the Unit Test
+ Framework you may either include &lt;<ulink url="../../../../boost/test/unit_test.hpp"><filename>boost/test/unit_test.hpp</filename></ulink>&gt;
+ and link with libunit_test_framework.lib, or you can include &lt;<ulink url="../../../../boost/test/included/unit_test.hpp"><filename>boost/test/included/unit_test.hpp</filename></ulink>&gt;
+ instead, in which case you do not need to link with any precompiled component. Note also that
+ you should strictly follow the specification of the initialization function; otherwise some compilers may produce a
+ linker error like this:
+ </para>
+
+ <computeroutput>Unresolved external init_unit_test_suite(int, char**).</computeroutput>
+
+ <para role="first-line-indented">
+ The reason for this error is that your implementation must declare the second argument of
+ init_unit_test_suite exactly as in the specification, i.e. char* [].
+ </para>
+ </answer>
+ </qandaentry>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ How can I redirect testing output?
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ Use unit_test_log::instance().set_log_output( std::ostream&amp; ). For more details see the Unit Test Framework
+ <link linkend="utf.user-guide.test-output.log.ct-config.output-stream">documentation</link>.
+ </para>
+ </answer>
+ </qandaentry>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ I want a different default log trace level
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ Use the environment variable BOOST_TEST_LOG_LEVEL to define the desired log trace level. You will still be able to
+ override this value from the command line. For the list of acceptable values see the Unit Test Framework
+ <link linkend="utf.user-guide.runtime-config.parameters">documentation</link>.
+ </para>
+ </answer>
+ </qandaentry>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ Is there a DLL version of the Boost.Test components available on the Win32 platform?
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ Yes. Starting with Boost 1.34.0.
+ </para>
+ </answer>
+ </qandaentry>
+
+ </qandaset>
+</section>

Added: trunk/libs/test/doc/src/minimal-testing.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/minimal-testing.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,151 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE part PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
+ <!ENTITY mtf "minimal testing facility">
+]>
+<section id="minimal" last-revision="$Date$">
+ <title>Boost Test Library: The minimal testing facility</title>
+ <titleabbrev>Minimal testing facility</titleabbrev>
+
+ <section id="minimal.intro">
+ <title>Introduction</title>
+
+ <para role="first-line-indented">
+ <firstterm>Boost.Test minimal testing facility</firstterm> provides the functionality previously implemented by the
+ original version of Boost.Test. As the name suggests, it provides only minimal basic facilities for test creation. It
+ has no configuration parameters (either command line arguments or environment variables) and it supplies
+ a limited set of <link linkend="minimal.tools">testing tools</link> that behave similarly to the ones defined among
+ the Unit Test Framework <link linkend="utf.testing-tools">Testing tools</link>. The &mtf; supplies its own function
+ main() (so it cannot be used for multi-unit testing) and executes the test program in a monitored environment.
+ </para>
+
+ <para role="first-line-indented">
+ As its name implies, this component provides only a minimal set of testing capabilities, and as a general
+ rule the Unit Test Framework should be preferred. In the majority of cases it provides a much wider set of
+ testing tools (and other goodies), while still being just as easy to set up.
+ </para>
+ </section>
+
+ <section id="minimal.usage">
+ <title>Usage</title>
+
+ <para role="first-line-indented">
+ The only change (other than including <ulink url="../../../../boost/test/minimal.hpp">
+ <filename>boost/test/minimal.hpp</filename></ulink>) you need to make to integrate your test module with the &mtf; is
+ the signature of your function main(). It should look like this:
+ </para>
+
+ <programlisting>int test_main( int argc, char* argv[] )
+{
+ ...
+}</programlisting>
+
+ <para role="first-line-indented">
+ Once you apply this change, the test automatically runs in a monitored environment. You can also start using the
+ <link linkend="minimal.tools">testing tools</link> provided by the &mtf; and get uniform error reporting.
+ </para>
+ </section>
+
+ <section id="minimal.example">
+ <title>Example</title>
+
+ <para role="first-line-indented">
+ The following example illustrates the different approaches you can employ to detect and report errors using the
+ different testing tools:
+ </para>
+
+ <btl-example name="example27">
+ <title>Minimal testing facility application</title>
+
+ <annotations>
+ <annotation id="snippet18.ann-1" coords="1">
+ <para role="first-line-indented">
+ This approach uses the BOOST_CHECK tool, which displays an error message on std::cout that includes the
+ expression that failed, the source file name, and the source file line number. It also increments the error count.
+ At program termination, the error count will be displayed automatically by the &mtf;.
+ </para>
+ </annotation>
+
+ <annotation id="snippet18.ann-2" coords="1">
+ <para role="first-line-indented">
+ This approach, using the BOOST_REQUIRE tool, is similar to #1, except that after displaying the error an
+ exception is thrown, to be caught by the &mtf;. This approach is suitable when writing an explicit test program
+ and the error would be so severe as to make further testing impractical. BOOST_REQUIRE differs from the C++
+ Standard Library's assert() macro in that it is always generated, and channels error detection into the uniform
+ reporting procedure.
+ </para>
+ </annotation>
+
+ <annotation id="snippet18.ann-3" coords="1">
+ <para role="first-line-indented">
+ This approach is similar to #1, except that the error detection is coded separately. This is most useful when
+ the specific condition being tested is not indicative of the reason for failure.
+ </para>
+ </annotation>
+
+ <annotation id="snippet18.ann-4" coords="1">
+ <para role="first-line-indented">
+ This approach is similar to #2, except that the error detection is coded separately. This is most useful when
+ the specific condition being tested is not indicative of the reason for failure.
+ </para>
+ </annotation>
+
+ <annotation id="snippet18.ann-5" coords="1">
+ <para role="first-line-indented">
+ This approach throws an exception, which will be caught and reported by the &mtf;. This approach is suitable
+ for both production and test code, in libraries or not. The error message displayed when the exception is
+ caught will be most meaningful if the exception is derived from <classname>std::exception</classname>, or is a
+ char* or <classname>std::string</classname>.
+ </para>
+ </annotation>
+
+ <annotation id="snippet18.ann-6" coords="1"> <!-- TO FIX: all the coords -->
+ <para role="first-line-indented">
+ This approach, using the BOOST_CHECK_MESSAGE tool, is similar to approach #1, except that, like approach #3,
+ it displays an alternative error message specified as the second argument.
+ </para>
+ </annotation>
+ </annotations>
+ </btl-example>
+ </section>
+
+ <section id="minimal.tools">
+ <title>Provided testing tools</title>
+
+ <para role="first-line-indented">
+ The &mtf; supplies the following four tools:
+ </para>
+
+ <inline-synopsis>
+ <macro name="BOOST_CHECK" kind="functionlike" ref-id="none">
+ <macro-parameter name="predicate"/>
+ </macro>
+ <macro name="BOOST_REQUIRE" kind="functionlike" ref-id="none">
+ <macro-parameter name="predicate"/>
+ </macro>
+ <macro name="BOOST_ERROR" kind="functionlike" ref-id="none">
+ <macro-parameter name="message"/>
+ </macro>
+ <macro name="BOOST_FAIL" kind="functionlike" ref-id="none">
+ <macro-parameter name="message"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ Their behavior is modeled after the <link linkend="utf.testing-tools.reference">similarly named tools</link>
+ implemented by the Unit Test Framework.
+ </para>
+ </section>
+
+ <section id="minimal.impl">
+ <title>Implementation</title>
+
+ <para role="first-line-indented">
+ The &mtf; is implemented inline in one header <ulink url="../../../../boost/test/minimal.hpp">
+ <filename>boost/test/minimal.hpp</filename></ulink>. There are no special compilation instructions for this component.
+ </para>
+
+ <para role="first-line-indented">
+ There is a single unit test program that validates &mtf; functionality: minimal_test <!-- TO FIX: link to test -->
+ </para>
+ </section>
+</section>

Added: trunk/libs/test/doc/src/program-execution-monitor.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/program-execution-monitor.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,375 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE part PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
+ <!ENTITY pem "Program Execution Monitor">
+]>
+<section id="pem" last-revision="$Date$">
+ <title>Boost Test Library: The &pem;</title>
+ <titleabbrev>The &pem;</titleabbrev>
+
+ <section id="pem.intro">
+ <title>Introduction</title>
+
+ <para role="first-line-indented">
+ The components of a C++ program may report user-detected errors in several ways, such as via a return value or
+ throwing an exception. System-detected errors such as dereferencing an invalid pointer are reported in other ways
+ that are entirely operating system and compiler dependent.
+ </para>
+
+ <para role="first-line-indented">
+ Yet many C++ programs, both production and test, must run in an environment where uniform reporting of errors is
+ necessary. For example, converting otherwise uncaught exceptions to non-zero program return codes allows many
+ command line, script, or batch environments to continue processing in a controlled manner. Even some
+ <acronym>GUI</acronym> environments benefit from the unification of errors into program return codes.
+ </para>
+
+ <para role="first-line-indented">
+ <firstterm>The Boost Test Library's &pem;</firstterm> relieves users from messy error
+ detection and reporting duties by providing a replacement function main() which calls a user-supplied cpp_main()
+ function within a monitored environment. The supplied main() then uniformly detects and reports the occurrence of
+ several types of errors, reducing them to a uniform return code which is returned to the host environment.
+ </para>
+
+ <para role="first-line-indented">
+ Uniform error reporting is particularly useful for programs running unattended under control of scripts or batch
+ files. Some operating systems pop up message boxes if an uncaught exception occurs, and this requires manual
+ intervention. By converting such exceptions into non-zero program return codes, the library makes the program a
+ better citizen. Uniform reporting of errors is not a benefit to some programs, particularly programs always
+ run by hand by a knowledgeable person, so the &pem; would not be worth using in such an environment.
+ </para>
+
+ <para role="first-line-indented">
+ Uniform error reporting can also be useful in test environments such as the Boost regression tests. Be aware, though,
+ that in such cases it might be preferable to use the <link linkend="utf">Unit Test Framework</link>, because it allows
+ you to use the <link linkend="utf.testing-tools">Testing tools</link> and generates more detailed error information.
+ </para>
+ </section>
+
+ <section id="pem.usage">
+ <title>Usage</title>
+
+ <para role="first-line-indented">
+ To facilitate uniform error reporting the &pem; supplies the function main() as part of its implementation. To use
+ the &pem; instead of a regular function main(), your program is required to supply a function cpp_main() with the
+ same signature.
+ </para>
+
+ <para role="first-line-indented">
+ Here is the traditional Hello World program implemented using the &pem;:
+ </para>
+
+ <btl-example name="example24">
+ <title>The &pem;: Hello World</title>
+ </btl-example>
+
+ <para role="first-line-indented">
+ It really is that simple - just change the name of your initial function from main() to cpp_main(). Do make sure
+ the argc and argv parameters are specified (although you don't have to name them if you don't use them).
+ </para>
+
+ <para role="first-line-indented">
+ The &pem; treats as errors:
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <simpara>Exceptions thrown from cpp_main().</simpara>
+ </listitem>
+ <listitem>
+ <simpara>Non-zero return from cpp_main().</simpara>
+ </listitem>
+ </itemizedlist>
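+
+ <para role="first-line-indented">
+ The error handling described above can be sketched as follows. This is an illustrative model of what the supplied
+ main() does, not the actual implementation: monitored_main is a hypothetical name, and the non-zero result codes
+ below are placeholder values (the real constants live in boost/cstdlib.hpp).
+ </para>

```cpp
#include <exception>
#include <iostream>
#include <stdexcept>

// Placeholder result codes, purely for illustration (the real library
// defines boost::exit_success and friends in <boost/cstdlib.hpp>).
const int exit_success = 0;
const int exit_failure = 1;
const int exit_exception_failure = 2;

// Sketch of what the supplied main() does: call cpp_main() in a monitored
// scope and reduce every outcome to a uniform return code.
int monitored_main(int (*cpp_main)(int, char*[]), int argc, char* argv[]) {
    try {
        int code = cpp_main(argc, argv);
        if (code == 0)
            return exit_success;
        std::cout << "\n**** error return code: " << code << '\n';
        return exit_failure;                  // non-zero return is an error
    } catch (const std::exception& e) {
        std::cout << "\n**** exception: " << e.what() << '\n';
        return exit_exception_failure;        // uncaught exception is an error
    } catch (...) {
        std::cout << "\n**** unknown exception\n";
        return exit_exception_failure;
    }
}
```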
+
+ <para role="first-line-indented">
+ So what happens if some function throws a runtime_error with the message "big trouble" that is not trapped by any
+ catch clause, as in the following example?
+ </para>
+
+ <btl-example name="example25">
+ <title>The &pem;: standard exception detection</title>
+ </btl-example>
+
+ <para role="first-line-indented">
+ Note that in both examples above we used the single-header variant of the &pem;. Alternatively, we can build and link
+ with a standalone library. In the case of the static library we are not required to include any &pem;-related headers.
+ To use the dynamic library you are required to include
+ <ulink url="../../../../boost/test/prg_exec_monitor.hpp"><filename>boost/test/prg_exec_monitor.hpp</filename></ulink>
+ and define <xref linkend="pem.flag.dyn-link" endterm="pem.flag.dyn-link"/> during program compilation. The same
+ header is required if you want to employ the <link linkend="pem.compilation.auto-linking">auto-linking</link> feature.
+ </para>
+
+ <para role="first-line-indented">
+ Let's consider an example where the function cpp_main() bubbles up a return code of 5:
+ </para>
+
+ <btl-example name="example26">
+ <title>The &pem;: error return code detection</title>
+ </btl-example>
+
+ <para role="first-line-indented">
+ The &pem; reports errors to both cout (details) and cerr (summary). The primary detailed error
+ messages appear on the standard output stream so that they are properly interlaced with other output, thus aiding
+ error analysis, while the final error notification message appears on the standard error stream. This increases the
+ visibility of the error notification if the standard output and error streams are directed to different devices or files.
+ </para>
+
+ <para role="first-line-indented">
+ The &pem;'s supplied main() will return the following result codes:
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <simpara>boost::exit_success - no errors</simpara>
+ </listitem>
+ <listitem>
+ <simpara>boost::exit_failure - non-zero and non-boost::exit_success return code from cpp_main().</simpara>
+ </listitem>
+ <listitem>
+ <simpara>boost::exit_exception_failure - cpp_main() threw an exception.</simpara>
+ </listitem>
+ </itemizedlist>
+ </section>
+
+ <section id="pem.config">
+ <title>Configuration</title>
+
+ <para role="first-line-indented">
+ There are two aspects of the &pem; behavior that you can customize at runtime. Customization is performed using
+ environment variables.
+ </para>
+
+ <table id="pem.config.flags">
+ <title>The &pem; configuration environment variables</title>
+
+ <tgroup cols="2">
+ <colspec colname="c1"/>
+ <colspec colname="c3"/>
+ <thead>
+ <row>
+ <entry>Flag</entry>
+ <entry>Usage</entry>
+ </row>
+ </thead>
+ <tbody>
+ <row>
+ <entry>BOOST_TEST_CATCH_SYSTEM_ERRORS</entry>
+ <entry>
+ allows customizing the behavior of the &pem; with regard to catching system errors. For more details about the
+ meaning of this option see the <link linkend="boost.execution_monitor">Execution Monitor</link>. If you
+ want to prevent the &pem; from catching system exceptions, set the value of this
+ variable to "no". The default value is "yes".
+ </entry>
+ </row>
+
+ <row>
+ <entry>BOOST_PRG_MON_CONFIRM</entry>
+ <entry>
+ allows suppressing the success confirmation message. Some users prefer to see a confirmation message when the
+ program executes successfully, while others dislike the clutter, or any output may be prohibited by organizational
+ standards. To suppress the message set the value of this variable to "no". The default value is "yes".
+ </entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
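A "yes"/"no" environment switch of the kind tabulated above might be interpreted as follows. This is an illustrative sketch only, not the actual Boost implementation; the function name is invented, and the only documented facts it encodes are that the default is "yes" and that an explicit "no" disables the behavior.

```cpp
#include <cassert>
#include <cstdlib>
#include <cstring>

// Treats an unset variable (the documented default) as "yes";
// only an explicit "no" disables the behavior.
bool flag_enabled( char const* name )
{
    char const* value = std::getenv( name );
    return value == 0 || std::strcmp( value, "no" ) != 0;
}
```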
+ </section>
+
+ <section id="pem.impl">
+ <title>The &pem; implementation</title>
+ <titleabbrev>Implementation</titleabbrev>
+
+ <para role="first-line-indented">
+ To monitor the execution of the user-supplied function cpp_main(), the &pem; relies on Boost.Test's
+ <link linkend="execution-monitor">Execution Monitor</link>. In addition, the &pem; supplies the function main() to
+ facilitate uniform error reporting. The following files constitute the &pem; implementation:
+ </para>
+
+ <variablelist>
+ <?dbhtml list-presentation="list"?>
+
+ <varlistentry>
+ <term><ulink url="../../../../boost/test/impl/execution_monitor.ipp"><filename>libs/test/execution_monitor.cpp</filename></ulink></term>
+ <listitem>
+ <simpara>
+ provides <link linkend="execution-monitor">Execution Monitor</link> implementation for all supported
+ configurations.
+ </simpara>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><ulink url="../../../../boost/test/impl/cpp_main.ipp"><filename>libs/test/cpp_main.cpp</filename></ulink></term>
+ <listitem>
+ <simpara>supplies the function main() for the static library build</simpara>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><ulink url="../../../../boost/test/included/prg_exec_monitor.hpp"><filename>boost/test/included/prg_exec_monitor.hpp</filename></ulink></term>
+ <listitem>
+ <simpara>combines all implementation files into a single header to be used as an inlined version of the component</simpara>
+ </listitem>
+ </varlistentry>
+
+ <varlistentry>
+ <term><ulink url="../../../../boost/test/prg_exec_monitor.hpp"><filename>boost/test/prg_exec_monitor.hpp</filename></ulink></term>
+ <listitem>
+ <simpara>
+ contains the definition of the main() function for the dynamic library build and pragmas for auto-linking feature support.
+ </simpara>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+
+ <para role="first-line-indented">
+ The &pem; implementation wraps several system headers and is intended to be used as a standalone library. While there
+ exists an alternative variant to <link linkend="pem.compilation.direct-include">include the whole implementation
+ directly</link> into your program, for long-term usage the preferable solution is to
+ <link linkend="pem.compilation.standalone">build the library once</link> and reuse it.
+ </para>
+ </section>
+
+ <section id="pem.compilation">
+ <title>The &pem; compilation</title>
+ <titleabbrev>Compilation</titleabbrev>
+
+ <para role="first-line-indented">
+ In comparison with many other Boost libraries, which are implemented entirely in header files, compiling and
+ linking with the &pem; may require additional steps. The &pem; presents you with options to either
+ <link linkend="pem.compilation.standalone">build and link with a standalone library</link> or
+ <link linkend="pem.compilation.direct-include">include the implementation directly</link> into your
+ program. If you opt to use the library, the &pem; header implements
+ <link linkend="pem.compilation.auto-linking">auto-linking support</link>, and the following flags can be used to configure
+ compilation of the &pem; library and your program:
+ </para>
+
+ <table id="pem.compilation.flags">
+ <title>&pem; compilation flags</title>
+ <tgroup cols="2">
+ <colspec colname="c1"/>
+ <colspec colname="c3"/>
+ <thead>
+ <row>
+ <entry>Variable</entry>
+ <entry>Usage</entry>
+ </row>
+ </thead>
+ <tbody>
+ <row>
+ <entry id="pem.flag.dyn-link">BOOST_TEST_DYN_LINK</entry>
+ <entry>Define this flag to build/use dynamic library.</entry>
+ </row>
+ <row>
+ <entry id="pem.flag.no-lib">BOOST_TEST_NO_LIB</entry>
+ <entry>Define this flag to prevent auto-linking.</entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+
+ <section id="pem.compilation.standalone">
+ <title>Standalone library compilation</title>
+
+ <para role="first-line-indented">
+ If you opted to link your program with the standalone library, you need to build it first. To build a standalone
+ library, all C++ files (.cpp) that constitute the &pem; <link linkend="pem.impl">implementation</link> need to be
+ listed as source files in your makefile<footnote><simpara>There is a variety of make systems that can be used. To name
+ a few: <acronym>GNU</acronym> make (and other make clones) and build systems integrated into <acronym>IDE</acronym>s
+ (for example, Microsoft Visual Studio). The Boost preferred solution is the Boost.Build system, which is based on top of
+ the bjam tool. Make systems require some kind of configuration file that lists all files that constitute the library
+ and all build options: for example, the makefile used by make, the Microsoft Visual Studio project file, or the
+ Jamfile used by Boost.Build. For the sake of simplicity let's call this file the makefile.</simpara></footnote>.
+ </para>
+
+ <para role="first-line-indented">
+ The makefile for use with the Boost.Build system is supplied in the <filename class="directory">libs/test/build</filename>
+ directory. The &pem; can be built as either a <link linkend="pem.compilation.standalone.static">static</link>
+ or a <link linkend="pem.compilation.standalone.dynamic">dynamic</link> library.
+ </para>
+
+ <section id="pem.compilation.standalone.static">
+ <title>Static library compilation</title>
+
+ <para role="first-line-indented">
+ There are no additional build defines or options required to build the static library. Using the Boost.Build system you
+ can build the static library with the following command from the <filename class="directory">libs/test/build</filename> directory:
+ </para>
+
+ <cmdsynopsis>
+ <!-- TO FIX -->
+ <command>bjam</command>
+ <arg>-sTOOLS=&lt;your-tool-name&gt;</arg>
+ <arg choice="req">-sBUILD=boost_prg_exec_monitor</arg>
+ </cmdsynopsis>
+
+ <para role="first-line-indented">
+ Alternatively, on Windows you can use the provided Microsoft Visual Studio .NET project file.
+ </para>
+ </section>
+
+ <section id="pem.compilation.standalone.dynamic">
+ <title>Dynamic library compilation</title>
+
+ <para role="first-line-indented">
+ To build the dynamic library<footnote><simpara>What is meant by the term dynamic library is a <firstterm>dynamically
+ loaded library</firstterm>, alternatively called a <firstterm>shared library</firstterm>.</simpara></footnote> you
+ need to add <xref linkend="pem.flag.dyn-link" endterm="pem.flag.dyn-link"/> to the list of macro definitions in the
+ makefile. Using the Boost.Build system you can build the dynamic library with the following command from
+ <filename class="directory">libs/test/build</filename> directory:
+ </para>
+
+ <cmdsynopsis>
+ <!-- TO FIX -->
+ <command>bjam</command>
+ <arg>-sTOOLS=&lt;your-tool-name&gt;</arg>
+ <arg choice="req">-sBUILD=boost_prg_exec_monitor</arg>
+ </cmdsynopsis>
+
+ <para role="first-line-indented">
+ Alternatively, on Windows you can use the provided Microsoft Visual Studio .NET project file.
+ </para>
+
+ <important>
+ <simpara>
+ For your program to successfully link with the dynamic library the flag
+ <xref linkend="pem.flag.dyn-link" endterm="pem.flag.dyn-link"/> needs to be defined both during dynamic library
+ build and during your program compilation.
+ </simpara>
+ </important>
+ </section>
+ </section>
+
+ <section id="pem.compilation.auto-linking">
+ <title>Support of the auto-linking feature</title>
+ <titleabbrev>Auto-linking support</titleabbrev>
+
+ <para role="first-line-indented">
+ For the Microsoft family of compilers the &pem; provides the ability to automatically select the proper library name
+ and add it to the list of objects to be linked with. To employ this feature you are required to include either the header
+ <ulink url="../../../../boost/test/prg_exec_monitor.hpp"><filename>boost/test/prg_exec_monitor.hpp</filename></ulink>
+ or the header
+ <ulink url="../../../../boost/test/included/prg_exec_monitor.hpp"><filename>boost/test/included/prg_exec_monitor.hpp</filename></ulink>.
+ The feature is enabled by default. To disable it you have to define the flag
+ <xref linkend="pem.flag.no-lib" endterm="pem.flag.no-lib"/>.
+ </para>
+
+ <para role="first-line-indented">
+ For more details on the auto-linking feature implementation and configuration you should consult the
+ <ulink url="under_construction.html">appropriate documentation</ulink>.
+ </para>
+ </section>
+
+ <section id="pem.compilation.direct-include">
+ <title>Including the &pem; directly into your program</title>
+ <titleabbrev>Direct include</titleabbrev>
+
+ <para role="first-line-indented">
+ If you prefer to avoid the standalone library compilation, you have two alternative usage variants: you can either
+ include all files that constitute the static library in your program's makefile or include them as a part of
+ your program's source file. To facilitate the latter variant the &pem; implementation presents the header
+ <ulink url="../../../../boost/test/included/prg_exec_monitor.hpp"><filename>boost/test/included/prg_exec_monitor.hpp</filename></ulink>.
+ In both variants neither <xref linkend="pem.flag.dyn-link" endterm="pem.flag.dyn-link"/> nor
+ <xref linkend="pem.flag.no-lib" endterm="pem.flag.no-lib"/> is applicable. This solution may not be the best choice
+ in the long run, since it requires recompilation of the &pem; sources for every program you use it with.
+ </para>
+ </section>
+ </section>
+</section>

Added: trunk/libs/test/doc/src/snippet/const_string.hpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/const_string.hpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,165 @@
+// (C) Copyright Gennadiy Rozental 2001-2007.
+// Distributed under the Boost Software License, Version 1.0.
+// (See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+// See http://www.boost.org/libs/test for the library home page.
+//
+// File : $RCSfile$
+//
+// Version : $Revision$
+//
+// Description : simple string class definition
+// ***************************************************************************
+
+#ifndef CONST_STRING_HPP
+#define CONST_STRING_HPP
+
+// STL
+#include <iterator> // std::reverse_iterator
+#include <string>
+using std::string;
+
+namespace common_layer {
+
+// ************************************************************************** //
+// ************** const_string ************** //
+// ************************************************************************** //
+
+class const_string {
+public:
+ // Subtypes
+ typedef char const* iterator;
+ typedef char const* const_iterator;
+ typedef std::reverse_iterator<iterator,char, char const&> reverse_iterator;
+ typedef reverse_iterator const_reverse_iterator;
+
+ // Constructor
+ const_string()
+ : m_begin( "" ), m_end( m_begin ) {}
+
+ // Copy constructor is generated by compiler
+
+ const_string( const std::string& s )
+ : m_begin( s.c_str() ),
+ m_end( m_begin + s.length() ) {}
+
+ const_string( char const* s );
+
+ const_string( char const* s, size_t length )
+ : m_begin( s ), m_end( m_begin + length ) { if( length == 0 ) erase(); }
+
+ const_string( char const* first, char const* last )
+ : m_begin( first ), m_end( last ) {}
+
+ // data access methods
+ char operator[]( size_t index ) const { return m_begin[index]; }
+ char at( size_t index ) const;
+
+ char const* data() const { return m_begin; }
+
+ // length operators
+ size_t length() const { return m_end - m_begin; }
+ bool is_empty() const { return m_end == m_begin; }
+
+ void erase() { m_begin = m_end = ""; }
+ void resize( size_t new_len ) { if( m_begin + new_len < m_end ) m_end = m_begin + new_len; }
+ void rshorten( size_t shift = 1 ) { m_end -= shift; if( m_end <= m_begin ) erase(); }
+ void lshorten( size_t shift = 1 ) { m_begin += shift; if( m_end <= m_begin ) erase(); }
+
+ // Assignment operators
+ const_string& operator=( const_string const& s );
+ const_string& operator=( string const& s ) { return *this = const_string( s ); }
+ const_string& operator=( char const* s ) { return *this = const_string( s ); }
+
+ const_string& assign( const_string const& s ) { return *this = s; }
+ const_string& assign( string const& s, size_t len ) { return *this = const_string( s.data(), len ); }
+ const_string& assign( string const& s ) { return *this = const_string( s ); }
+ const_string& assign( char const* s ) { return *this = const_string( s ); }
+ const_string& assign( char const* s, size_t len ) { return *this = const_string( s, len ); }
+ const_string& assign( char const* f, char const* l ) { return *this = const_string( f, l ); }
+
+ void swap( const_string& s ) {
+ // do not want to include algorithm
+ char const* tmp1 = m_begin;
+ char const* tmp2 = m_end;
+
+ m_begin = s.m_begin;
+ m_end = s.m_end;
+
+ s.m_begin = tmp1;
+ s.m_end = tmp2;
+ }
+
+ // Comparison operators
+ friend bool operator==( const_string const& s1, const_string const& s2 );
+ friend bool operator==( const_string const& s1, char const* s2 ) { return s1 == const_string( s2 ); }
+ friend bool operator==( const_string const& s1, const string& s2 ) { return s1 == const_string( s2 ); }
+
+ friend bool operator!=( const_string const& s1, const_string const& s2 ) { return !(s1 == s2); }
+ friend bool operator!=( const_string const& s1, char const* s2 ) { return !(s1 == s2); }
+ friend bool operator!=( const_string const& s1, const string& s2 ) { return !(s1 == s2); }
+
+ friend bool operator==( char const* s2, const_string const& s1 ) { return s1 == s2; }
+ friend bool operator==( const string& s2, const_string const& s1 ) { return s1 == s2; }
+
+ friend bool operator!=( char const* s2, const_string const& s1 ) { return !(s1 == s2); }
+ friend bool operator!=( const string& s2, const_string const& s1 ) { return !(s1 == s2); }
+
+ // Iterators
+ iterator begin() const { return m_begin; }
+ iterator end() const { return m_end; }
+ reverse_iterator rbegin() const { return m_end; }
+ reverse_iterator rend() const { return m_begin; }
+
+ // search operation
+ iterator find_first_of( char c );
+ iterator find_first_of( const_string cs );
+ iterator find_last_of( char c );
+ iterator find_last_of( const_string cs );
+
+private:
+
+ // Data members
+ char const* m_begin;
+ char const* m_end;
+};
+
+//____________________________________________________________________________//
+
+// first character
+class first_char {
+public:
+ char operator()( const_string source, char default_char = '\0' ) const {
+ return source.is_empty() ? default_char : *source.data();
+ }
+};
+
+//____________________________________________________________________________//
+
+// last character
+class last_char {
+public:
+ char operator()( const_string source, char default_char = '\0' ) const {
+ return source.is_empty() ? default_char : *source.rbegin();
+ }
+};
+
+//____________________________________________________________________________//
+
+inline const_string&
+const_string::operator=( const_string const& s ) {
+ if( &s != this ) {
+ m_begin = s.m_begin;
+ m_end = s.m_end;
+ }
+
+ return *this;
+}
+
+//____________________________________________________________________________//
+
+typedef const_string const literal;
+
+} // namespace common_layer
+
+#endif // CONST_STRING_HPP

Added: trunk/libs/test/doc/src/snippet/const_string_test.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/const_string_test.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,193 @@
+// (C) Copyright Gennadiy Rozental 2001-2005.
+// Distributed under the Boost Software License, Version 1.0.
+// (See accompanying file LICENSE_1_0.txt or copy at
+// http://www.boost.org/LICENSE_1_0.txt)
+
+// See http://www.boost.org/libs/test for the library home page.
+//
+// File : $RCSfile: const_string_test.cpp,v $
+//
+// Version : $Revision: 1.1 $
+//
+// Description : simple string class test
+// ***************************************************************************
+
+#define BOOST_TEST_MODULE const_string test
+#include <boost/test/unit_test.hpp>
+
+// standard headers used by the checks below
+#include <algorithm>
+#include <cstring>
+#include <iterator>
+
+#include <const_string.hpp>
+using common_layer::const_string;
+
+BOOST_AUTO_TEST_CASE( constructors_test )
+{
+ const_string cs0( "" );
+ BOOST_CHECK_EQUAL( cs0.length(), (size_t)0 );
+ BOOST_CHECK_EQUAL( cs0.begin(), "" );
+ BOOST_CHECK_EQUAL( cs0.end(), "" );
+ BOOST_CHECK( cs0.is_empty() );
+
+ const_string cs01( NULL );
+ BOOST_CHECK_EQUAL( cs01.length(), (size_t)0 );
+ BOOST_CHECK_EQUAL( cs01.begin(), "" );
+ BOOST_CHECK_EQUAL( cs01.end(), "" );
+ BOOST_CHECK( cs01.is_empty() );
+
+ const_string cs1( "test_string" );
+ BOOST_CHECK_EQUAL( std::strcmp( cs1.data(), "test_string" ), 0 );
+ BOOST_CHECK_EQUAL( cs1.length(), std::strlen("test_string") );
+
+ std::string s( "test_string" );
+ const_string cs2( s );
+ BOOST_CHECK_EQUAL( std::strcmp( cs2.data(), "test_string" ), 0 );
+
+ const_string cs3( cs1 );
+ BOOST_CHECK_EQUAL( std::strcmp( cs1.data(), "test_string" ), 0 );
+
+ const_string cs4( "test_string", 4 );
+ BOOST_CHECK_EQUAL( std::strncmp( cs4.data(), "test", cs4.length() ), 0 );
+
+ const_string cs5( s.data(), s.data() + s.length() );
+ BOOST_CHECK_EQUAL( std::strncmp( cs5.data(), "test_string", cs5.length() ), 0 );
+
+ const_string cs_array[] = { "str1", "str2" };
+
+ BOOST_CHECK_EQUAL( cs_array[0], "str1" );
+ BOOST_CHECK_EQUAL( cs_array[1], "str2" );
+}
+
+BOOST_AUTO_TEST_CASE( data_access_test )
+{
+ const_string cs1( "test_string" );
+ BOOST_CHECK_EQUAL( std::strcmp( cs1.data(), "test_string" ), 0 );
+ BOOST_CHECK_EQUAL( std::strcmp( cs1.data(), cs1 ), 0 );
+
+ BOOST_CHECK_EQUAL( cs1[(size_t)0], 't' );
+ BOOST_CHECK_EQUAL( cs1[(size_t)4], '_' );
+ BOOST_CHECK_EQUAL( cs1[cs1.length()-1], 'g' );
+
+ BOOST_CHECK_EQUAL( cs1[(size_t)0], cs1.at( 0 ) );
+ BOOST_CHECK_EQUAL( cs1[(size_t)2], cs1.at( 5 ) );
+ BOOST_CHECK_EQUAL( cs1.at( cs1.length() - 1 ), 'g' );
+
+ BOOST_CHECK_THROW( cs1.at( cs1.length() ), std::out_of_range );
+
+ BOOST_CHECK_EQUAL( common_layer::first_char()( cs1 ), 't' );
+ BOOST_CHECK_EQUAL( common_layer::last_char()( cs1 ) , 'g' );
+}
+
+
+BOOST_AUTO_TEST_CASE( length_test )
+{
+ const_string cs1;
+
+ BOOST_CHECK_EQUAL( cs1.length(), (size_t)0 );
+ BOOST_CHECK( cs1.is_empty() );
+
+ cs1 = "";
+ BOOST_CHECK_EQUAL( cs1.length(), (size_t)0 );
+ BOOST_CHECK( cs1.is_empty() );
+
+ cs1 = "test_string";
+ BOOST_CHECK_EQUAL( cs1.length(), (size_t)11 );
+
+ cs1.erase();
+ BOOST_CHECK_EQUAL( cs1.length(), (size_t)0 );
+ BOOST_CHECK_EQUAL( cs1.data(), "" );
+
+ cs1 = const_string( "test_string", 4 );
+ BOOST_CHECK_EQUAL( cs1.length(), (size_t)4 );
+
+ cs1.resize( 5 );
+ BOOST_CHECK_EQUAL( cs1.length(), (size_t)4 );
+
+ cs1.resize( 3 );
+ BOOST_CHECK_EQUAL( cs1.length(), (size_t)3 );
+
+ cs1.rshorten();
+ BOOST_CHECK_EQUAL( cs1.length(), (size_t)2 );
+ BOOST_CHECK_EQUAL( cs1[(size_t)0], 't' );
+
+ cs1.lshorten();
+ BOOST_CHECK_EQUAL( cs1.length(), (size_t)1 );
+ BOOST_CHECK_EQUAL( cs1[(size_t)0], 'e' );
+
+ cs1.lshorten();
+ BOOST_CHECK( cs1.is_empty() );
+ BOOST_CHECK_EQUAL( cs1.data(), "" );
+
+ cs1 = "test_string";
+ cs1.lshorten( 11 );
+ BOOST_CHECK( cs1.is_empty() );
+ BOOST_CHECK_EQUAL( cs1.data(), "" );
+}
+
+BOOST_AUTO_TEST_CASE( assignment_test )
+{
+ const_string cs1;
+ std::string s( "test_string" );
+
+ cs1 = "test";
+ BOOST_CHECK_EQUAL( std::strcmp( cs1.data(), "test" ), 0 );
+
+ cs1 = s;
+ BOOST_CHECK_EQUAL( std::strcmp( cs1.data(), "test_string" ), 0 );
+
+ cs1.assign( "test" );
+ BOOST_CHECK_EQUAL( std::strcmp( cs1.data(), "test" ), 0 );
+
+ const_string cs2( "test_string" );
+
+ cs1.swap( cs2 );
+ BOOST_CHECK_EQUAL( std::strcmp( cs1.data(), "test_string" ), 0 );
+ BOOST_CHECK_EQUAL( std::strcmp( cs2.data(), "test" ), 0 );
+}
+
+BOOST_AUTO_TEST_CASE( comparison_test )
+{
+ const_string cs1( "test_string" );
+ const_string cs2( "test_string" );
+ std::string s( "test_string" );
+
+ BOOST_CHECK_EQUAL( cs1, "test_string" );
+ BOOST_CHECK_EQUAL( "test_string", cs1 );
+ BOOST_CHECK_EQUAL( cs1, cs2 );
+ BOOST_CHECK_EQUAL( cs1, s );
+ BOOST_CHECK_EQUAL( s , cs1 );
+
+ cs1.resize( 4 );
+
+ BOOST_CHECK( cs1 != "test_string" );
+ BOOST_CHECK( "test_string" != cs1 );
+ BOOST_CHECK( cs1 != cs2 );
+ BOOST_CHECK( cs1 != s );
+ BOOST_CHECK( s != cs1 );
+
+ BOOST_CHECK_EQUAL( cs1, "test" );
+}
+
+BOOST_AUTO_TEST_CASE( iterators_test )
+{
+ const_string cs1( "test_string" );
+ std::string s;
+
+ std::copy( cs1.begin(), cs1.end(), std::back_inserter( s ) );
+ BOOST_CHECK_EQUAL( cs1, s );
+
+ s.erase();
+
+ std::copy( cs1.rbegin(), cs1.rend(), std::back_inserter( s ) );
+ BOOST_CHECK_EQUAL( const_string( s ), "gnirts_tset" );
+}
+
+BOOST_AUTO_TEST_CASE( search_test )
+{
+ const_string cs( "test_string" );
+
+ BOOST_CHECK_EQUAL( cs.find_first_of( 't' ), cs.begin() );
+ BOOST_CHECK_EQUAL( cs.find_last_of( 't' ), cs.begin() + 6 );
+
+ BOOST_CHECK_EQUAL( cs.find_first_of( "st" ), cs.begin() + 2 );
+ BOOST_CHECK_EQUAL( cs.find_last_of( "st" ), cs.begin() + 5 );
+}
+
+// EOF

Added: trunk/libs/test/doc/src/snippet/snippet1.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet1.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,11 @@
+void single_test( int i )
+{
+ BOOST_CHECK( /* test assertion */ );
+}
+
+void combined_test()
+{
+ int params[] = { 1, 2, 3, 4, 5 };
+
+ std::for_each( params, params+5, &single_test );
+}

Added: trunk/libs/test/doc/src/snippet/snippet10.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet10.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,8 @@
+#include <my_class.hpp>
+#include <cstdlib> // EXIT_SUCCESS, EXIT_FAILURE
+
+int main( int, char* [] )
+{
+ my_class test_object( "qwerty" );
+
+ return test_object.is_valid() ? EXIT_SUCCESS : EXIT_FAILURE;
+}

Added: trunk/libs/test/doc/src/snippet/snippet11.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet11.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,10 @@
+#include <my_class.hpp>
+#define BOOST_TEST_MODULE MyTest
+#include <boost/test/unit_test.hpp>
+
+BOOST_AUTO_TEST_CASE( my_test )
+{
+ my_class test_object( "qwerty" );
+
+ BOOST_CHECK( test_object.is_valid() );
+}

Added: trunk/libs/test/doc/src/snippet/snippet12.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet12.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,25 @@
+#define BOOST_TEST_MODULE MyTest
+#include <boost/test/unit_test.hpp>
+
+int add( int i, int j ) { return i+j; }
+
+BOOST_AUTO_TEST_CASE( my_test )
+{
+ // seven ways to detect and report the same error:
+ BOOST_CHECK( add( 2,2 ) == 4 ); // #1 continues on error
+
+ BOOST_REQUIRE( add( 2,2 ) == 4 ); // #2 throws on error
+
+ if( add( 2,2 ) != 4 )
+ BOOST_ERROR( "Ouch..." ); // #3 continues on error
+
+ if( add( 2,2 ) != 4 )
+ BOOST_FAIL( "Ouch..." ); // #4 throws on error
+
+ if( add( 2,2 ) != 4 ) throw "Ouch..."; // #5 throws on error
+
+ BOOST_CHECK_MESSAGE( add( 2,2 ) == 4, // #6 continues on error
+ "add(..) result: " << add( 2,2 ) );
+
+ BOOST_CHECK_EQUAL( add( 2,2 ), 4 ); // #7 continues on error
+}

Added: trunk/libs/test/doc/src/snippet/snippet13.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet13.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,4 @@
+#define BOOST_TEST_MODULE const_string test
+#include <boost/test/unit_test.hpp>
+
+// EOF

Added: trunk/libs/test/doc/src/snippet/snippet14.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet14.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,16 @@
+class const_string {
+public:
+ // Constructors
+ const_string();
+ const_string( std::string const& s );
+ const_string( char const* s );
+ const_string( char const* s, size_t length );
+ const_string( char const* begin, char const* end );
+
+ // Access methods
+ char const* data() const;
+ size_t length() const;
+ bool is_empty() const;
+
+ ...
+};

Added: trunk/libs/test/doc/src/snippet/snippet15.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet15.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,36 @@
+#define BOOST_TEST_MODULE const_string test
+#include <boost/test/unit_test.hpp>
+
+BOOST_AUTO_TEST_CASE( constructors_test )
+{
+ const_string cs0( "" ); // 1 //
+ BOOST_CHECK_EQUAL( cs0.length(), (size_t)0 );
+ BOOST_CHECK( cs0.is_empty() );
+
+ const_string cs01( NULL ); // 2 //
+ BOOST_CHECK_EQUAL( cs01.length(), (size_t)0 );
+ BOOST_CHECK( cs01.is_empty() );
+
+ const_string cs1( "test_string" ); // 3 //
+ BOOST_CHECK_EQUAL( std::strcmp( cs1.data(), "test_string" ), 0 );
+ BOOST_CHECK_EQUAL( cs1.length(), std::strlen("test_string") );
+
+ std::string s( "test_string" ); // 4 //
+ const_string cs2( s );
+ BOOST_CHECK_EQUAL( std::strcmp( cs2.data(), "test_string" ), 0 );
+
+ const_string cs3( cs1 ); // 5 //
+ BOOST_CHECK_EQUAL( std::strcmp( cs1.data(), "test_string" ), 0 );
+
+ const_string cs4( "test_string", 4 ); // 6 //
+ BOOST_CHECK_EQUAL( std::strncmp( cs4.data(), "test", cs4.length() ), 0 );
+
+ const_string cs5( s.data(), s.data() + s.length() ); // 7 //
+ BOOST_CHECK_EQUAL( std::strncmp( cs5.data(), "test_string", cs5.length() ), 0 );
+
+ const_string cs_array[] = { "str1", "str2" }; // 8 //
+ BOOST_CHECK_EQUAL( cs_array[0], "str1" );
+ BOOST_CHECK_EQUAL( cs_array[1], "str2" );
+}
+
+// EOF

Added: trunk/libs/test/doc/src/snippet/snippet16.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet16.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,7 @@
+class const_string {
+public:
+ ...
+ char operator[]( size_t index ) const;
+ char at( size_t index ) const;
+ ...
+};

Added: trunk/libs/test/doc/src/snippet/snippet17.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet17.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,23 @@
+#define BOOST_TEST_MODULE const_string test
+#include <boost/test/unit_test.hpp>
+
+BOOST_AUTO_TEST_CASE( constructors_test )
+{
+ ...
+}
+
+BOOST_AUTO_TEST_CASE( data_access_test )
+{
+ const_string cs1( "test_string" ); // 1 //
+ BOOST_CHECK_EQUAL( cs1[(size_t)0], 't' );
+ BOOST_CHECK_EQUAL( cs1[(size_t)4], '_' );
+ BOOST_CHECK_EQUAL( cs1[cs1.length()-1], 'g' );
+
+ BOOST_CHECK_EQUAL( cs1[(size_t)0], cs1.at( 0 ) ); // 2 //
+ BOOST_CHECK_EQUAL( cs1[(size_t)2], cs1.at( 5 ) );
+ BOOST_CHECK_EQUAL( cs1.at( cs1.length() - 1 ), 'g' );
+
+ BOOST_CHECK_THROW( cs1.at( cs1.length() ), std::out_of_range ); // 3 //
+}
+
+// EOF

Added: trunk/libs/test/doc/src/snippet/snippet18.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet18.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,42 @@
+#define BOOST_TEST_MODULE example
+#include <boost/test/included/unit_test.hpp>
+
+BOOST_AUTO_TEST_CASE( testA )
+{
+}
+
+BOOST_AUTO_TEST_CASE( testB )
+{
+}
+
+BOOST_AUTO_TEST_SUITE( s1 )
+
+BOOST_AUTO_TEST_CASE( test1 )
+{
+}
+
+BOOST_AUTO_TEST_CASE( test2 )
+{
+}
+
+BOOST_AUTO_TEST_SUITE_END()
+
+BOOST_AUTO_TEST_SUITE( s2 )
+
+BOOST_AUTO_TEST_CASE( test1 )
+{
+}
+
+BOOST_AUTO_TEST_CASE( test11 )
+{
+}
+
+BOOST_AUTO_TEST_SUITE( in )
+
+BOOST_AUTO_TEST_CASE( test )
+{
+}
+
+BOOST_AUTO_TEST_SUITE_END()
+
+BOOST_AUTO_TEST_SUITE_END()

Added: trunk/libs/test/doc/src/snippet/snippet2.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet2.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,12 @@
+template <typename T>
+void single_test()
+{
+ BOOST_CHECK( /* test assertion */ );
+}
+
+void combined_test()
+{
+ single_test<int>();
+ single_test<float>();
+ single_test<unsigned char>();
+}

Added: trunk/libs/test/doc/src/snippet/snippet3.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet3.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,4 @@
+BOOST_TEST_CASE_TEMPLATE_FUNCTION( test_case_name, type_name )
+{
+ // test case template body
+}

Added: trunk/libs/test/doc/src/snippet/snippet4.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet4.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,6 @@
+template<typename type_name>
+void
+test_case_name()
+{
+ // test case template body
+}

Added: trunk/libs/test/doc/src/snippet/snippet5.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet5.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,20 @@
+struct MyFixture {
+ MyFixture() { i = new int; *i = 0; }
+ ~MyFixture() { delete i; }
+
+ int* i;
+};
+
+BOOST_AUTO_TEST_CASE( test_case1 )
+{
+ MyFixture f;
+
+ // do something involving f.i
+}
+
+BOOST_AUTO_TEST_CASE( test_case2 )
+{
+ MyFixture f;
+
+ // do something involving f.i
+}

Added: trunk/libs/test/doc/src/snippet/snippet6.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet6.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,15 @@
+template<typename TestType>
+void
+specific_type_test( TestType* = 0 )
+{
+ MyComponent<TestType> c;
+ ... // here we perform actual testing
+}
+
+void my_component_test()
+{
+ specific_type_test( (int*)0 );
+ specific_type_test( (float*)0 );
+ specific_type_test( (UDT*)0 );
+ ...
+}

Added: trunk/libs/test/doc/src/snippet/snippet7.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet7.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,6 @@
+#include <iostream>
+
+int main()
+{
+ std::cout << "Hello World\n";
+}

Added: trunk/libs/test/doc/src/snippet/snippet8.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet8.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,36 @@
+#include <cmath> // std::fabs
+#include <vector>
+
+double find_root( double (*f)(double),
+ double low_guess,
+ double high_guess,
+ std::vector<double>& steps,
+ double tolerance )
+{
+ double solution;
+ bool converged = false;
+
+ while( !converged )
+ {
+ double temp = (low_guess + high_guess) / 2.0;
+ steps.push_back( temp );
+
+ double f_temp = f(temp);
+ double f_low = f(low_guess);
+
+ if( std::fabs(f_temp) < tolerance )
+ {
+ solution = temp;
+ converged = true;
+ }
+ else if( f_temp / std::fabs(f_temp) == f_low / std::fabs(f_low) )
+ {
+ low_guess = temp;
+ converged = false;
+ }
+ else
+ {
+ high_guess = temp;
+ converged = false;
+ }
+ }
+
+ return solution;
+}
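The bisection routine above can be exercised with a small self-contained driver. The following sketch is illustrative only (the test function name is invented); it restates the routine with `std::fabs` from `<cmath>` for the floating-point magnitude and sign tests, and finds the positive root of x² − 2:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Self-contained restatement of the bisection root finder.
double find_root( double (*f)(double),
                  double low_guess,
                  double high_guess,
                  std::vector<double>& steps,
                  double tolerance )
{
    double solution = low_guess;
    bool converged = false;

    while( !converged )
    {
        double temp = ( low_guess + high_guess ) / 2.0;
        steps.push_back( temp );   // record every midpoint tried

        double f_temp = f( temp );
        double f_low  = f( low_guess );

        if( std::fabs( f_temp ) < tolerance )
        {
            solution  = temp;      // |f| small enough: accept the midpoint
            converged = true;
        }
        else if( ( f_temp > 0 ) == ( f_low > 0 ) )
            low_guess = temp;      // same sign as the low end: move it up
        else
            high_guess = temp;     // sign change below the midpoint
    }

    return solution;
}

double shifted_square( double x ) { return x * x - 2.0; } // root at sqrt(2)
```

Calling `find_root( &shifted_square, 0.0, 2.0, steps, 1e-9 )` converges to √2 while `steps` records every midpoint, which is the data a unit test would later inspect.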

Added: trunk/libs/test/doc/src/snippet/snippet9.cpp
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/snippet/snippet9.cpp 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,2 @@
+if( something_bad_detected )
+ std::cout << "something bad has been detected" << std::endl;
\ No newline at end of file

Added: trunk/libs/test/doc/src/tutorial.hello-the-testing-world.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/tutorial.hello-the-testing-world.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,136 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE section PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
+ <!ENTITY utf "<acronym>UTF</acronym>">
+]>
+<section id="tutorial.hello-the-testing-world">
+ <title>Hello the testing world &hellip; or beginner's introduction into testing using the Unit Test Framework</title>
+ <titleabbrev>Hello the testing world</titleabbrev>
+
+ <para role="first-line-indented">
+ How should a test program report errors? Displaying an error message is an obvious possibility:
+ </para>
+
+ <btl-snippet name="snippet9"/>
+
+ <para role="first-line-indented">
+ But that requires inspection of the program's output after each run to determine if an error occurred. Since test
+ programs are often run as part of a regression test suite, human inspection of output to detect error messages is
+ time consuming and unreliable. Test frameworks like GNU/expect can do the inspections automatically, but are
+ overly complex for simple testing.
+ </para>
+
+ <para role="first-line-indented">
+ A simpler and better way to report errors is for the test program to return EXIT_SUCCESS (normally 0) if the test
+ program completes satisfactorily, and EXIT_FAILURE if an error is detected. This allows a simple regression test script
+ to automatically and unambiguously detect success or failure. Further appropriate actions, such as creating an HTML table
+ or emailing an alert can be taken by the script, and can be modified as desired without having to change the actual
+ C++ test programs.
+ </para>
+
+ <para role="first-line-indented">
+ A testing protocol based on a policy of test programs returning EXIT_SUCCESS or EXIT_FAILURE does not require any
+ supporting tools; the C++ language and standard library are sufficient. The programmer must remember, however, to
+ catch all exceptions and convert them to program exits with non-zero return codes. The programmer must also remember
+ to not use the standard library assert() macro for test code, because on some systems it results in undesirable side
+ effects like a message requiring manual intervention.
+ </para>
+
+ <para role="first-line-indented">
+ The Boost Test Library's Unit Test Framework is designed to automate these tasks. The library-supplied main()
+ relieves users of messy error detection and reporting duties. Users can employ the supplied testing tools to perform
+ complex validation tasks. Let's take a look at the following simple test program:
+ </para>
+
+ <btl-snippet name="snippet10"/>
+
+ <para role="first-line-indented">
+ There are several issues with the above test.
+ </para>
+
+ <orderedlist>
+ <listitem>
+ <simpara>You need to convert the is_valid() result into a proper result code.</simpara>
+ </listitem>
+ <listitem>
+ <simpara>Should an exception occur during test_object construction or the is_valid() method invocation, the program will crash.</simpara>
+ </listitem>
+ <listitem>
+ <simpara>You won't see any output should you run this test manually.</simpara>
+ </listitem>
+ </orderedlist>
+
+ <para role="first-line-indented">
+ The Unit Test Framework solves all these issues. To integrate with it, the above program needs to be changed as follows:
+ </para>
+
+ <btl-snippet name="snippet11"/>
+
+ <para role="first-line-indented">
+ Now you not only receive a uniform result code, even in case of an exception, but also nicely formatted output from
+ the BOOST_CHECK tool, should you choose to see it. Are there any other ways to perform checks? The following example
+ test program shows several different ways to detect and report an error in the add() function.
+ </para>
+
+ <btl-snippet name="snippet12">
+ <annotations>
+ <annotation id="snippet12.ann-1" coords="1">
+ <para role="first-line-indented">
+ This approach uses the BOOST_CHECK tool, which displays an error message (by default on std::cout) that includes
+ the expression that failed, the source file name, and the source file line number. It also increments the error
+ count. At program termination, the error count will be displayed automatically by the Unit Test Framework.
+ </para>
+ </annotation>
+
+ <annotation id="snippet12.ann-2" coords="1">
+ <para role="first-line-indented">
+ This approach uses the BOOST_REQUIRE tool, which is similar to approach #1, except that after displaying the error,
+ an exception is thrown, to be caught by the Unit Test Framework. This approach is suitable when writing an
+ explicit test program, and the error would be so severe as to make further testing impractical. BOOST_REQUIRE
+ differs from the C++ Standard Library's assert() macro in that it is always generated, and channels error
+ detection into the uniform Unit Test Framework reporting procedure.
+ </para>
+ </annotation>
+
+ <annotation id="snippet12.ann-3" coords="1">
+ <para role="first-line-indented">
+ This approach is similar to approach #1, except that the error detection and error reporting are coded separately.
+ This is most useful when the specific condition being tested requires several independent statements and/or is
+ not indicative of the reason for failure.
+ </para>
+ </annotation>
+
+ <annotation id="snippet12.ann-4" coords="1">
+ <para role="first-line-indented">
+ This approach is similar to approach #2, except that the error detection and error reporting are coded separately.
+ This is most useful when the specific condition being tested requires several independent statements and/or is
+ not indicative of the reason for failure.
+ </para>
+ </annotation>
+
+ <annotation id="snippet12.ann-5" coords="1">
+ <para role="first-line-indented">
+ This approach throws an exception, which will be caught and reported by the Unit Test Framework. The error
+ message displayed when the exception is caught will be most meaningful if the exception is derived from
+ std::exception, or is a char* or std::string.
+ </para>
+ </annotation>
+
+ <annotation id="snippet12.ann-6" coords="1">
+ <para role="first-line-indented">
+ This approach uses the BOOST_CHECK_MESSAGE tool, which is similar to approach #1, except that, like approach #3,
+ it displays an alternative error message specified as a second argument.
+ </para>
+ </annotation>
+
+ <annotation id="snippet12.ann-7" coords="1"> <!-- TO FIX: all the coords -->
+ <para role="first-line-indented">
+ This approach uses the BOOST_CHECK_EQUAL tool and functionally is similar to approach #1. This approach is most
+ attractive for checking equality of two variables, since in case of error it shows mismatched values.
+ </para>
+ </annotation>
+ </annotations>
+ </btl-snippet>
+
+ <para/> <!-- TO FIX: some finishing statements here -->
+
+</section>

Added: trunk/libs/test/doc/src/tutorial.intro-in-testing.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/tutorial.intro-in-testing.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,169 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE section PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
+ <!ENTITY utf "<acronym>UTF</acronym>">
+]>
+<section id="tutorial.intro-in-testing">
+ <sectioninfo>
+ <author>
+ <firstname>John</firstname>
+ <othername role="mi">R</othername>
+ <surname>Phillips</surname>
+ <email>jphillip at capital dot edu (please unobscure)</email>
+ </author>
+ <copyright>
+ <year>2006</year>
+ <holder>John R. Phillips</holder>
+ </copyright>
+
+ <legalnotice>
+ <simpara>
+ Use, modification and distribution is subject to the Boost Software License, Version 1.0. (See accompanying file
+ <filename>LICENSE_1_0.txt</filename> or copy at
+ <ulink url="http://www.boost.org/LICENSE_1_0.txt">http://www.boost.org/LICENSE_1_0.txt</ulink>)
+ </simpara>
+ </legalnotice>
+ </sectioninfo>
+
+ <title>Introduction into testing &hellip; or why testing is worth the effort</title>
+ <titleabbrev>Introduction into testing</titleabbrev>
+
+ <para role="first-line-indented">
+ For almost everyone, the first introduction to the craft of programming is a version of the simple "Hello World" program. In C++, this first example might be written as
+ </para>
+
+ <btl-snippet name="snippet7"/>
+
+ <para role="first-line-indented">
+ This is a good introduction for several reasons. One is that the program is short enough, and the logic of its
+ execution simple enough that direct inspection can show whether it is correct in all use cases known to the new
+ student programmer. If this were the complexity of all programming, there would be no need to test anything before
+ using it. In programming as a new student experiences it, testing is pointless and adds unneeded complexity.
+ </para>
+
+ <para role="first-line-indented">
+ However, no actual programs are as simple as an introductory lesson makes "Hello World" seem. Not even "Hello World".
+ In all real programs, there are decisions to be made and multiple paths of execution based on these decisions. These
+ decisions could be based on user input, streaming data, resource availability and dozens of other factors. The
+ programmer strives to control the inputs and results of these decisions, but no one can keep all of them clearly
+ in mind once the size of the project exceeds just a few hundred lines. Even "Hello World" hides complexities of
+ this sort in the simple-seeming call to std::cout.
+ </para>
+
+ <para role="first-line-indented">
+ Since the individual programmer can no longer determine the correctness of the program, there is a need for a
+ different approach. An obvious possibility is testing the program after construction. Someone develops a set of
+ test cases, where inputs are given to the program such that the behavior and outputs of a correctly performing
+ program are known. The performance of the new program is compared to known standards and the new program either
+ passes or fails. If it fails, attempts are made to fix it. If the test cases are carefully chosen, the specifics of
+ the failure give an indication of what in the program needs to be fixed.
+ </para>
+
+ <para role="first-line-indented">
+ This is an improvement over just not knowing whether the program is working properly, but it isn't a big improvement.
+ If the whole program is tested at once, it is nearly impossible to develop test cases that clearly indicate what
+ the failure is. The system is too complex, and the programmer still needs to understand almost all of the possible
+ outcomes to be able to develop tests. As always, when a problem is too big and complicated, a good idea is to try
+ splitting it into smaller and simpler pieces.
+ </para>
+
+ <para role="first-line-indented">
+ This approach leads to a layered system of testing that is similar to the layered approach to original development
+ and should be integrated into it. When writing a program, the design is factored into small units that are
+ conceptually and structurally easier to grasp. A standard rule for this is that one unit performs one job or
+ embodies one concept. These simple units are composed into larger and more complicated algorithms by passing needed
+ information into a unit and receiving the desired result out of it. The units are integrated to perform the whole
+ task. Testing should reflect this structure of development.
+ </para>
+
+ <para role="first-line-indented">
+ The simplest layer is Unit Testing. A unit is the smallest conceptually whole segment of the program. Examples of
+ basic units might be a single class or a single function. For each unit, the tester (who may or may not be the
+ programmer) attempts to determine what states the unit can encounter while executing as part of the program. These
+ states include determining the range of appropriate inputs to the unit, determining the range of possible
+ inappropriate inputs, and recognizing any ways the state of the rest of the program might affect execution in this
+ unit.
+ </para>
+
+ <para role="first-line-indented">
+ With so many general statements, an example will help clarify. Imagine the following procedural function is part of
+ a program, and the programmer wants to test it. For the sake of brevity, header includes and namespace qualifiers
+ have been suppressed.
+ </para>
+
+ <btl-snippet name="snippet8"/>
+
+ <para role="first-line-indented">
+ This code, although brief and simple, is getting long enough that it takes attention to find what is done and why.
+ It is no longer obvious at a glance what the intent of the program is, so careful naming must be used to carry that
+ intent.
+ </para>
+
+ <para role="first-line-indented">
+ Thanks to the control structures, there are some obvious execution paths in the code. However, there are also a few
+ less obvious paths. For example, if the root finder takes many steps to converge to an acceptable answer, the
+ vector that is holding the history of steps taken may need to reallocate for additional space. In this case, there
+ are many hidden steps in the single push_back command. These steps also include the chance of failure, since that
+ is always a possibility in a memory allocation.
+ </para>
+
+ <para role="first-line-indented">
+ A second example notes that the value of the function at the low guess has not been tested, so there is the chance
+ of a zero division. Also, if the value of the function at the high guess is zero, the root finder will miss that
+ root entirely. It may even fall into an infinite loop if no root lies between the low and high values.
+ </para>
+
+ <para role="first-line-indented">
+ In this unit, proper testing includes checking the behavior in each possibility. It also includes checking the
+ function by giving inputs where the correct answer is known and checking the results against that answer. Thus,
+ the unit is tested in every execution path to assure proper behavior.
+ </para>
+
+ <para role="first-line-indented">
+ Test cases are chosen to expose as many errors as possible. A defining characteristic of a good test case is that
+ the programmer knows what the unit should do if it is functioning properly. Test cases should be generated to
+ exercise each available execution path. For the above snippet, this includes the obvious and the not so obvious
+ paths. Every path should be tested, since every path is a possible outcome of program execution.
+ </para>
+
+ <para role="first-line-indented">
+ Thus, to write a good testing suite, the tester must know the structure of the code. The most dependable way to
+ accomplish this is if the original programmer writes tests as part of creating the code. In fact, it is advisable
+ that the tests are produced before the code is written, and updated whenever structure decisions are changed. This
+ way, the tests are written with a view toward how the unit should perform instead of reproducing the programmer's
+ thinking from writing the code. While black box testing is also useful, it is important that someone who knows the
+ design decisions made and the rationale for those decisions test the code unit. A programmer who can't devise
+ good tests for a unit does not yet know the problem at hand well enough to program dependably.
+ </para>
+
+ <para role="first-line-indented">
+ When a unit is completed and tested, it is ready for integration with other units in the program. This
+ integration should also be tested. At this point, the test cases focus on the interaction between the units. Tests
+ are designed to exercise each way the units can affect each other.
+ </para>
+
+ <para role="first-line-indented">
+ This is the point in development where proper unit testing really shines. If each unit is doing what it should be
+ doing and not creating unexpected side effects, any issues in testing a set of integrated units must come from how
+ they are passing information. Thus, the nearly intractable problem of finding an error while many units interact
+ becomes the less intimidating problem of finding the breakdown in communications.
+ </para>
+
+ <para role="first-line-indented">
+ At each layer of increasing complexity, new tests are run, and if the prior tests of the components are well
+ designed and all issues are fixed, new errors are isolated to the integration. This process continues, in parallel
+ with development, from the smallest units to the completed program.
+ </para>
+
+ <para role="first-line-indented">
+ This shows that there is a need to be able to check and test code snippets such as individual functions and classes
+ independently of the program of which they will become a part. That is, there is a need for a means to provide
+ predetermined inputs to the unit and check the outputs against expected results. Such a system must allow for both
+ normal operation and error conditions, and allow the programmer to produce a thorough description of the results.
+ </para>
+
+ <para role="first-line-indented">
+ This is the goal and rationale for all unit testing, and supporting testing of this sort is the purpose of the
+ Boost.Test library. As is shown below, Boost.Test provides a well-integrated set of tools to support this testing
+ effort throughout the programming and maintenance cycles of software development.
+ </para>
+</section>

Added: trunk/libs/test/doc/src/tutorial.new-year-resolution.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/tutorial.new-year-resolution.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,112 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE section PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
+ <!ENTITY utf "<acronym>UTF</acronym>">
+]>
+<section id="tutorial.new-year-resolution">
+ <title>Boost.Test driven development &hellip; or "getting started" for TDD followers</title>
+ <titleabbrev>Boost.Test driven development</titleabbrev>
+
+ <para role="first-line-indented">
+ Today is a momentous day - the first day of the new year. Today I am going to start a new life. I am going to stop
+ eating greasy food, start attending a fitness club and &hellip; today I am going to test the programs I write. I can
+ start right after the last line of a program is completed or, even better, I can write tests while I am coding. And
+ maybe next time I will write tests before the coding, during the design stage. I have read a lot of literature on how
+ to write tests, I have the unit test framework in hand and an idea for a new class. So let's get started.
+ </para>
+
+ <para role="first-line-indented">
+ Let's say I want to encapsulate an immutable C character buffer, together with its length, in a simple class
+ <code>const_string</code>. Rationale: a string class that does not allocate memory and provides convenient
+ read-only access to a preallocated character buffer. I will probably want <code>const_string</code> to have an
+ interface similar to the class std::string. What will I do first? In my new life I will start by writing a test
+ module for the future class <code>const_string</code>. It will look like this:
+ </para>
+
+ <btl-snippet name="snippet13"/>
+
+ <para role="first-line-indented">
+ Now I can compile it and link with the unit test framework. Done! I have a working test program. It is empty, so
+ when I run the program it produces the following output:
+ </para>
+
+ <screen>*** No errors detected</screen>
+
+ <para role="first-line-indented">
+ Well, now would be a good time to start work on <code>const_string</code>. The first things I imagine would be good
+ to have are constructors and trivial access methods. So the initial version of my class looks like this:
+ </para>
+
+ <simpara>const_string.hpp:</simpara>
+
+ <btl-snippet name="snippet14"/>
+
+ <para role="first-line-indented">
+ Now I am able to write a first test case - constructors testing - and add it to a test suite. My test program now
+ looks like this:
+ </para>
+
+ <simpara>const_string_test.cpp:</simpara>
+
+ <btl-snippet name="snippet15"/>
+
+ <para role="first-line-indented">
+ The constructors_test test case is intended to check a simple feature of the class <code>const_string</code>: the
+ ability to construct itself properly from different arguments. To test this feature I am using such
+ characteristics of the constructed object as the data it contains and its length. The specification of the class
+ <code>const_string</code> does not contain any expected failures, so, though the constructor can fail if I
+ pass a pointer to invalid memory, no error checking is performed (you can't require what was not promised
+ :-)). But for any valid input it should work. So I am trying to check construction from an empty string (1), a NULL
+ string (2), a regular C string (3), an STL string (4), a copy construction (5) and so on. Well, after fixing all the
+ errors in the implementation (do you write error-free programs from scratch?) I am able to pass this test case
+ and the unit test framework gives me the following report:
+ </para>
+
+ <screen>Running 1 test case&hellip;
+
+*** No errors detected</screen>
+
+ <para role="first-line-indented">
+ Encouraged, I move on and add more access methods:
+ </para>
+
+ <simpara>const_string.hpp:</simpara>
+
+ <btl-snippet name="snippet16"/>
+
+ <para role="first-line-indented">
+ I added a new feature - I need a new test case to check it. As a result my test suite now looks like this:
+ </para>
+
+ <simpara>const_string_test.cpp:</simpara>
+
+ <btl-snippet name="snippet17"/>
+
+ <para role="first-line-indented">
+ In the data_access_test test case I am trying to check the correctness of the class <code>const_string</code>
+ character access. While test (1) checks valid access using <code>const_string</code>::operator[] and test (2) checks
+ valid access using the method <code>const_string</code>::at(), there is one more thing to test. The specification of
+ the method <code>const_string</code>::at() includes validation of out-of-bounds access. That is what test (3) is
+ intended to do: check that the validation is working. Testing validation and error handling code is an
+ important part of unit testing and should not be left for the production stage. The data_access_test test case
+ passed and I am ready for the next step.
+ </para>
+
+ <para role="first-line-indented">
+ Continuing my effort I am able to complete class <code>const_string</code> (see
+ <ulink url="../src/snippet/const_string.hpp">Listing 1</ulink>) and testing module for it (see
+ <ulink url="../src/snippet/const_string_test.cpp">Listing 2</ulink>) that is checking all features that are presented
+ in the class <code>const_string</code> specification.
+ </para>
+
+ <para role="first-line-indented">
+ Well, I am a step closer to fulfilling my new year resolution (we should see about that fitness club sometime next
+ &hellip;). What about you? Your testing habits could be a little different. You could start with class/library
+ development and then at some point start writing test cases on a feature basis. Or, given a detailed
+ specification for the future product, including expected interfaces, you could immediately start with writing all the
+ test cases (or it could be a different person, while you work on the implementation at the same time). In any case you
+ should not have any problems using the facilities provided by the Boost.Test unit test framework and, let me hope,
+ will be able to write stable, bulletproof code. What is even more important is your confidence in your ability to make
+ changes of any complexity without a lengthy regression testing cycle for your whole product. Your test module and the
+ unit test framework will have your back, helping you catch any occasional errors.
+ </para>
+</section>

Added: trunk/libs/test/doc/src/utf.testing-tools.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/utf.testing-tools.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,1168 @@
+<?xml version="1.0" encoding="utf-8" ?><!DOCTYPE section PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
+ <!ENTITY utf "<acronym>UTF</acronym>">
+]>
+<section id="utf.testing-tools" last-revision="$Date$">
+ <title>The &utf; testing tools &hellip; or tester's toolbox for all occasions</title>
+ <titleabbrev>Testing tools</titleabbrev>
+
+ <section id="utf.testing-tools.intro">
+ <title>Introduction</title>
+
+ <para role="first-line-indented">
+ The &utf; supplies a toolbox of testing tools to ease the creation and maintenance of test programs and to
+ provide a uniform error reporting mechanism. The toolbox is supplied for the most part in the form of macro and
+ function declarations. While the functions can be called directly, the usual way to use the testing tools is via
+ convenience macros. All macro arguments are evaluated only once, so it is safe to pass complex expressions in their
+ place. All tools automatically supply an error location: a file name and a line number. The testing tools are intended
+ for unit test code rather than library or production code, where throwing exceptions, using assert(),
+ <classname>boost::concept_check</classname> or <macroname>BOOST_STATIC_ASSERT</macroname>() may be more suitable
+ ways to detect and report errors. For a list of all supplied testing tools and usage examples see the reference.
+ </para>
+ </section>
+
+ <section id="utf.testing-tools.flavors">
+ <title>Testing tools flavors</title>
+
+ <para role="first-line-indented">
+ All the tools are supplied in three flavors (levels): <firstterm>WARN</firstterm>, <firstterm>CHECK</firstterm> and
+ <firstterm>REQUIRE</firstterm>. For example: <macroname>BOOST_WARN_EQUAL</macroname>,
+ <macroname>BOOST_CHECK_EQUAL</macroname>, <macroname>BOOST_REQUIRE_EQUAL</macroname>. If an assertion designated by
+ the tool passes, a confirmation message can be printed in the log output<footnote><simpara>to manage what messages appear
+ in the test log stream, set the proper <link linkend="utf.user-guide.test-output.log">log
+ level</link></simpara></footnote>. If an assertion designated by the tool fails, the following happens, depending on
+ the level<footnote><simpara>in some cases the log message can be slightly different, reflecting the specifics of the
+ failed tool</simpara></footnote>:
+ </para>
+
+ <table id="utf.testing-tools.levels-diffs">
+ <title>Testing tools levels differences</title>
+
+ <tgroup cols="4">
+ <colspec colnum="1" colname="col1" />
+ <colspec colnum="2" colname="col2" />
+ <colspec colnum="3" colname="col3" />
+ <colspec colnum="4" colname="col4" />
+ <thead>
+ <row>
+ <entry>Level</entry>
+ <entry>Test log content</entry>
+ <entry>Errors counter</entry>
+ <entry>Test execution</entry>
+ </row>
+ </thead>
+
+ <tbody>
+ <row>
+ <entry>WARN</entry>
+ <entry>
+ warning in <replaceable>&lt;test case name&gt;</replaceable>: condition
+ <replaceable>&lt;assertion description&gt;</replaceable> is not satisfied
+ </entry>
+ <entry>not affected</entry>
+ <entry>continues</entry>
+ </row>
+
+ <row>
+ <entry>CHECK</entry>
+ <entry>
+ error in <replaceable>&lt;test case name&gt;</replaceable>: test
+ <replaceable>&lt;assertion description&gt;</replaceable> failed
+ </entry>
+ <entry>increased</entry>
+ <entry>continues</entry>
+ </row>
+
+ <row>
+ <entry>REQUIRE</entry>
+ <entry>
+ fatal error in <replaceable>&lt;test case name&gt;</replaceable>: critical test
+ <replaceable>&lt;assertion description&gt;</replaceable> failed
+ </entry>
+ <entry>increased</entry>
+ <entry>aborts</entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+
+ <para role="first-line-indented">
+ Normally you should use CHECK level tools to implement your assertions. You can use WARN level tools to validate
+ aspects less important than correctness: performance, portability, usability etc. You should use REQUIRE level
+ tools only if continuation of the test case doesn't make sense when the assertion fails.
+ </para>
+ </section>
+
+ <section id="utf.testing-tools.output-test">
+ <title>Output testing tool</title>
+
+ <para role="first-line-indented">
+ How do you test the correctness of <code>operator&lt;&lt;( <classname>std::ostream</classname>&amp;, ... )</code>
+ operations? You can print to the standard output stream and manually check that it matches your expectations.
+ Unfortunately, this is not really acceptable for regression testing and doesn't serve the long term purpose of a
+ unit test. You can use <classname>std::stringstream</classname> and compare the resulting output buffer with the
+ expected pattern string, but you have to perform several additional operations with every check you do, so it
+ becomes tedious very fast. The class <firstterm><classname>output_test_stream</classname></firstterm> is designed to
+ automate these tasks for you. It is a simple but powerful tool for testing standard
+ <classname>std::ostream</classname>-based output operations. The class <classname>output_test_stream</classname>
+ conforms to the <classname>std::ostream</classname> interface, so it can be used in place of any
+ <classname>std::ostream</classname> parameter. It provides several test methods to validate output content,
+ including a test for a match to expected output content and a test for expected output length. Flushing, synchronizing,
+ string comparison and error message generation are automated by the tool implementation.
+ </para>
+
+ <para role="first-line-indented">
+ All <classname>output_test_stream</classname> validation methods by default flush the stream once the check is
+ performed. If you want to perform several checks on the same output, pass the parameter
+ <firstterm>flush_stream</firstterm> with the value false. This parameter is supported by all comparison methods.
+ </para>
+
+ <para role="first-line-indented">
+ In some cases manual generation of the expected output is either too time consuming or impossible because of its
+ sheer volume. What we need in such cases is to be able to check once, manually, that the output is as expected,
+ and then to be able to check in the future that it stays the same. To support this workflow the class
+ <classname>output_test_stream</classname> allows matching output content against a specified pattern file, and can
+ produce the pattern file based on a successful test run.
+ </para>
+
+ <para role="first-line-indented">
+ The detailed specification of the class <classname>output_test_stream</classname> is covered in the reference section.
+ </para>
+
+ <section id="utf.testing-tools.output-test.usage">
+ <title>Usage</title>
+
+ <para role="first-line-indented">
+ There are two ways to employ the class <classname>output_test_stream</classname>: explicit output checks and
+ pattern file matching.
+ </para>
+ </section>
+
+ <btl-example name="example28">
+ <title>output_test_stream usage with explicit output checks</title>
+
+ <para role="first-line-indented">
+ Use an instance of the class output_test_stream as an output stream and check the output content using the tool's
+ methods. Note the use of <literal>false</literal> to prevent output flushing in the first two invocations of the
+ check functions. Unless you want to perform several different checks on the same output, you won't need it. Your
+ test will look like a series of output operators followed by one check, and so on again. Try to perform checks as
+ frequently as possible: it not only simplifies the patterns you compare against, but also allows you to pinpoint
+ the possible source of failure more closely.
+ </para>
+ </btl-example>
+
+ <btl-example name="example29">
+ <title>output_test_stream usage for pattern file matching</title>
+
+ <para role="first-line-indented">
+ Even simpler: there is no need to generate expected patterns, though you do need to keep the pattern file around
+ at all times. Your test will look like a series of output operators followed by match-pattern checks, repeated
+ several times. Try to perform checks as frequently as possible, because it allows you to pinpoint the possible
+ source of failure more closely. The content of the pattern file is:
+ </para>
+ <simpara>
+ <literallayout>i=2
+File: test.cpp Line: 14</literallayout>
+ </simpara>
+ </btl-example>
+ </section>
+
+ <section id="utf.testing-tools.custom-predicate">
+ <title>Custom predicate support</title>
+
+ <para role="first-line-indented">
+ Even though the supplied testing tools cover a wide range of possible checks and provide a detailed report on the cause
+ of error, in some cases you may want to implement and use a custom predicate that performs a complex check and produces
+ an intelligent report on failure. To satisfy this need the testing tools implement custom predicate support. There are
+ two layers of custom predicate support implemented by the testing tools toolbox: with and without custom error message
+ generation.
+ </para>
+
+ <para role="first-line-indented">
+ The first layer is supported by the BOOST_CHECK_PREDICATE family of testing tools. You can use it to check any custom
+ predicate that reports its result as a boolean value. The values of the predicate arguments are reported by the tool
+ automatically in case of failure.
+ </para>
+
+ <btl-example name="example30">
+ <title>Custom predicate support using BOOST_CHECK_PREDICATE</title>
+ </btl-example>
+
+ <para role="first-line-indented">
+ To use the second layer, your predicate has to return
+ <classname>boost::test_tools::predicate_result</classname>. This class encapsulates boolean result value along
+ with any error or information message you opt to report.
+ </para>
+
+ <para role="first-line-indented">
+ Usually you construct an instance of the class <classname>boost::test_tools::predicate_result</classname> inside your
+ predicate function and return it by value. The constructor expects one argument - the boolean result value. The
+ constructor is implicit, so you can simply return a boolean value from your predicate and a
+ <classname>boost::test_tools::predicate_result</classname> is constructed automatically to hold your value and an empty
+ message. You can also assign a boolean value to the constructed instance. You can check the current predicate value
+ using <methodname>operator!</methodname>() or by directly accessing the public read-only property p_predicate_value. The
+ error message is stored in the public read-write property p_message.
+ </para>
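The mechanics described above can be sketched in plain C++ with a minimal stand-in for predicate_result; this struct is an illustrative mimic of the interface just described, not the actual Boost.Test class:

```cpp
#include <string>

// Minimal stand-in for boost::test_tools::predicate_result: a boolean result
// plus an error message, constructible implicitly from bool.
struct predicate_result_sketch {
    predicate_result_sketch( bool v ) : p_predicate_value( v ) {}  // implicit
    bool operator!() const { return !p_predicate_value; }

    bool        p_predicate_value;  // read-only in the real class
    std::string p_message;          // read-write message property
};

// A custom predicate: checks divisibility and fills the message on failure.
predicate_result_sketch is_divisible( int value, int divisor )
{
    if( value % divisor == 0 )
        return true;                // implicit conversion, empty message

    predicate_result_sketch res( false );
    res.p_message = "remainder is " + std::to_string( value % divisor );
    return res;
}
```

The success path relies on the implicit constructor, while the failure path constructs the result explicitly so it can attach an informative message.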
+
+ <btl-example name="example31">
+ <title>Custom predicate support using class predicate_result</title>
+ </btl-example>
+ </section>
+
+ <section id="utf.testing-tools.fpv-comparison">
+ <title>Floating-point comparison algorithms</title>
+
+ <para role="first-line-indented">
+ In most cases it is unreasonable to use <code>operator==(...)</code> for a floating-point equality check.
+ The simple solution, based on an absolute value comparison, for floating-point values <varname>u</varname>,
+ <varname>v</varname> and a tolerance &egr;:
+ </para>
+
+ <btl-equation index="1">
+ |<varname>u</varname> &minus; <varname>v</varname>| &le; &egr;
+ </btl-equation>
+
+ <simpara>
+ does not produce the expected results in many circumstances - specifically for very small or very large values (see
+ <xref linkend="bbl.Squassabia"/> for examples). The &utf; implements a floating-point comparison algorithm that is
+ based on the more reliable solution first presented in <xref linkend="bbl.KnuthII"/>:
+ </simpara>
+
+ <btl-equation index="2">
+ |<varname>u</varname> &minus; <varname>v</varname>| &le; &egr; &times; |<varname>u</varname>| &and; |<varname>u</varname> &minus; <varname>v</varname>| &le; &egr; &times; |<varname>v</varname>|
+ </btl-equation>
+
+ <simpara>
+ defines a <firstterm>very close with tolerance &egr;</firstterm> relationship between <varname>u</varname> and <varname>v</varname>
+ </simpara>
+
+ <btl-equation index="3">
+ |<varname>u</varname> &minus; <varname>v</varname>| &le; &egr; &times; |<varname>u</varname>| &or; |<varname>u</varname> &minus; <varname>v</varname>| &le; &egr; &times; |<varname>v</varname>|
+ </btl-equation>
+
+ <simpara>
+ defines a <firstterm>close enough with tolerance &egr;</firstterm> relationship between <varname>u</varname> and <varname>v</varname>
+ </simpara>
+
+ <para role="first-line-indented">
+ Both relationships are commutative, but neither is transitive. The relationship defined by inequation
+ (<ulink linkend="utf.testing-tools.fpv-comparison.eq.2">2</ulink>) is stronger
+ than the relationship defined by inequation (<ulink linkend="utf.testing-tools.fpv-comparison.eq.3">3</ulink>)
+ (i.e. (<ulink linkend="utf.testing-tools.fpv-comparison.eq.2">2</ulink>) &rArr;
+ (<ulink linkend="utf.testing-tools.fpv-comparison.eq.3">3</ulink>)). Because the multiplication on the right side
+ of the inequations can cause an unwanted underflow condition, the implementation uses modified versions of
+ inequations (<ulink linkend="utf.testing-tools.fpv-comparison.eq.2">2</ulink>) and
+ (<ulink linkend="utf.testing-tools.fpv-comparison.eq.3">3</ulink>) in which all underflow and overflow conditions can
+ be guarded safely:
+ </para>
+
+ <btl-equation index="4">
+ |<varname>u</varname> &minus; <varname>v</varname>| &sol; |<varname>u</varname>| &le; &egr; &and; |<varname>u</varname> &minus; <varname>v</varname>| / |<varname>v</varname>| &le; &egr;
+ </btl-equation>
+
+ <btl-equation index="5">
+ |<varname>u</varname> &minus; <varname>v</varname>| &sol; |<varname>u</varname>| &le; &egr; &or; |<varname>u</varname> &minus; <varname>v</varname>| / |<varname>v</varname>| &le; &egr;
+ </btl-equation>
+
+ <para role="first-line-indented">
+ Checks based on equations (<ulink linkend="utf.testing-tools.fpv-comparison.eq.4">4</ulink>) and
+ (<ulink linkend="utf.testing-tools.fpv-comparison.eq.5">5</ulink>) are implemented by two predicates with
+ alternative interfaces: the binary predicate <classname>close_at_tolerance</classname><footnote><simpara>the check type
+ and tolerance value are fixed at predicate construction time</simpara></footnote> and the four-argument predicate
+ <classname>check_is_close</classname><footnote><simpara>the check type and tolerance value are arguments of the
+ predicate</simpara></footnote>.
+ </para>
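The modified inequations (4) and (5) can be sketched in portable C++; this is an illustrative implementation of the algorithm (the function names are invented for the example), not the actual Boost.Test source:

```cpp
#include <cmath>

// Guard the division used in the modified inequations: |diff| / |x| <= eps.
// If x == 0 the relative difference is only acceptable when diff is also 0.
bool safe_rel_diff_le( double diff, double x, double eps )
{
    return x != 0.0 ? diff / std::fabs( x ) <= eps : diff == 0.0;
}

// Equation (4): "very close with tolerance eps" - the strong relationship.
bool is_close_strong( double u, double v, double eps )
{
    double const diff = std::fabs( u - v );
    return safe_rel_diff_le( diff, u, eps ) && safe_rel_diff_le( diff, v, eps );
}

// Equation (5): "close enough with tolerance eps" - the weak relationship.
bool is_close_weak( double u, double v, double eps )
{
    double const diff = std::fabs( u - v );
    return safe_rel_diff_le( diff, u, eps ) || safe_rel_diff_le( diff, v, eps );
}
```

As stated above, (4) implies (5): every pair that is "very close" is also "close enough", but not vice versa.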
+
+ <para role="first-line-indented">
+ While equations (<ulink linkend="utf.testing-tools.fpv-comparison.eq.4">4</ulink>) and
+ (<ulink linkend="utf.testing-tools.fpv-comparison.eq.5">5</ulink>) are in general preferred over equation
+ (<ulink linkend="utf.testing-tools.fpv-comparison.eq.1">1</ulink>) for floating-point comparison checks, they are
+ unusable for testing closeness to zero. The latter check may still be useful in some cases, and the &utf;
+ implements an algorithm based on equation (<ulink linkend="utf.testing-tools.fpv-comparison.eq.1">1</ulink>) in the
+ binary predicate <classname>check_is_small</classname><footnote><simpara><varname>v</varname> is zero</simpara></footnote>.
+ </para>
+
+ <para role="first-line-indented">
+ On top of these generic, flexible predicates the &utf; implements the macro-based family of tools
+ <macroname>BOOST_CHECK_CLOSE</macroname> and <macroname>BOOST_CHECK_SMALL</macroname>. These tools limit the check
+ flexibility to strong-only checks, but automate the reporting of failed check arguments.
+ </para>
+
+ <section id="utf.testing-tools.fpv-comparison.tolerance-selection">
+ <title>Tolerance selection considerations</title>
+
+ <para role="first-line-indented">
+ In the absence of domain-specific requirements, the tolerance value can be chosen as a sum of the predicted
+ upper limits of the "relative rounding errors" of the compared values. "Rounding" is the operation by which a real
+ value 'x' is represented in a floating-point format with 'p' binary digits (bits) as the floating-point value 'X'.
+ The "relative rounding error" is the difference between the real and the floating-point value relative to the real
+ value: |x-X|/|x|. The discrepancy between the real and floating-point values may be caused by several reasons:
+ </para>
+
+ <itemizedlist>
+ <listitem><simpara>Type promotion</simpara></listitem>
+ <listitem><simpara>Arithmetic operations</simpara></listitem>
+ <listitem><simpara>Conversion from a decimal representation to a binary representation</simpara></listitem>
+ <listitem><simpara>Non-arithmetic operation</simpara></listitem>
+ </itemizedlist>
+
+ <para role="first-line-indented">
+ The first two operations are proven to have a relative rounding error that does not exceed &frac12; &times; the
+ "machine epsilon value" of the appropriate floating-point type (represented by
+ <classname>std::numeric_limits</classname>&lt;FPT&gt;::epsilon()). Conversion to a binary representation, sadly, carries
+ no such guarantee. So we can't assume that the float 1.1 is close to the real 1.1 with tolerance &frac12;
+ &times; the "machine epsilon value" for float (though for 11./10 we can). Non-arithmetic operations likewise do not
+ have a predicted upper limit on relative rounding errors. Note that both arithmetic and non-arithmetic operations might
+ also produce other "non-rounding" errors, such as underflow/overflow, division-by-zero or 'operation errors'.
+ </para>
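The difference between the arithmetic bound and decimal conversion can be demonstrated with a short, library-independent C++ snippet (the helper names are invented for illustration):

```cpp
#include <limits>

// Machine epsilon for double: the gap between 1.0 and the next representable
// value. Half of it bounds the relative rounding error of a single arithmetic
// operation or type promotion - but NOT of decimal-to-binary conversion.
double half_epsilon() { return std::numeric_limits<double>::epsilon() / 2; }

// 0.1 has no exact binary representation, so ten additions of the double
// literal 0.1 accumulate conversion and rounding errors: the result is not
// exactly 1.0, demonstrating why per-operation bounds don't compose freely.
double sum_of_tenths()
{
    double s = 0.0;
    for( int i = 0; i < 10; ++i )
        s += 0.1;
    return s;
}
```

This is exactly why an exact equality check on the result of such a computation is unreasonable, and a tolerance-based check is needed instead.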
+
+ <para role="first-line-indented">
+ All theorems about the upper limit of a rounding error, including that of &frac12; &times; epsilon, refer only to
+ the 'rounding' operation, nothing more. This means that the 'operation error', that is, the error incurred by the
+ operation itself, besides rounding, isn't considered. In order for numerical software to be able to actually
+ predict error bounds, the IEEE754 standard requires arithmetic operations to be 'correctly or exactly rounded'.
+ That is, it is required that the internal computation of a given operation be such that the floating point result
+ is the exact result rounded to the number of working bits. In other words, it is required that the computation used
+ by the operation itself doesn't introduce any additional errors. The IEEE754 standard does not require the same
+ behavior from most non-arithmetic operations. The underflow/overflow and division-by-zero errors may cause rounding
+ errors with unpredictable upper limits.
+ </para>
+
+ <para role="first-line-indented">
+ Finally, be aware that the &frac12; &times; epsilon rules are not transitive. In other words, a combination of two
+ arithmetic operations may produce a rounding error that significantly exceeds 2 &times; &frac12; &times; epsilon. All
+ in all, there are no generic rules on how to select the tolerance, and users need to apply common sense and domain/
+ problem-specific knowledge to decide on a tolerance value.
+ </para>
+
+ <para role="first-line-indented">
+ To simplify things in the most common usage cases, the latest version of the algorithm described below opted to use
+ percentage values for the tolerance specification (instead of fractions of the related values). In other words, you
+ now use it to check that the difference between two values does not exceed x percent.
+ </para>
+
+ <para role="first-line-indented">
+ For further reading about floating-point comparison, see the references below.
+ </para>
+ </section>
+
+ <bibliography id="bbl.fpv-comparison">
+ <title>Floating-point comparison related references</title>
+
+ <bibliodiv id="bbl.fpv-comparison.books"><title>Books</title>
+
+ <biblioentry id="bbl.KnuthII">
+ <abbrev>KnuthII</abbrev>
+
+ <title>The art of computer programming (vol II)</title>
+ <author><firstname>Donald. E.</firstname><surname>Knuth</surname></author>
+ <copyright><year>1998</year><holder>Addison-Wesley Longman, Inc.</holder></copyright>
+ <isbn>0-201-89684-2</isbn>
+ <publisher><publishername>Addison-Wesley Professional; 3 edition</publishername></publisher>
+ </biblioentry>
+
+ <biblioentry id="bbl.Kulisch">
+ <abbrev>Kulisch</abbrev>
+
+ <biblioset relation="section">
+ <title>Rounding near zero</title>
+ </biblioset>
+ <biblioset relation="book">
+ <title><ulink url="http://www.amazon.com/Advanced-Arithmetic-Digital-Computer-Kulisch/dp/3211838708">Advanced Arithmetic for the Digital Computer</ulink></title>
+
+ <author><firstname>Ulrich W</firstname><surname>Kulisch</surname></author>
+ <copyright><year>2002</year><holder>Springer, Inc.</holder></copyright>
+ <isbn>3-211-83870-8</isbn>
+ <publisher><publishername>Springer; 1 edition</publishername></publisher>
+ </biblioset>
+ </biblioentry>
+
+ </bibliodiv>
+
+ <bibliodiv id="bbl.fpv-comparison.periodicals"><title>Periodicals</title>
+
+ <biblioentry id="bbl.Squassabia">
+ <abbrev>Squassabia</abbrev>
+
+ <title><ulink url="http://www.adtmag.com/joop/carticle.aspx?ID=396">Comparing Floats: How To Determine if Floating Quantities Are Close Enough Once a Tolerance Has Been Reached</ulink></title>
+ <author><firstname>Alberto</firstname><surname>Squassabia</surname></author>
+
+ <biblioset relation="journal">
+ <title>C++ Report</title>
+ <issuenum>March 2000</issuenum>
+ </biblioset>
+ </biblioentry>
+
+ <biblioentry id="bbl.Becker">
+ <abbrev>Becker</abbrev>
+
+ <biblioset relation="article">
+ <title>The Journeyman's Shop: Trap Handlers, Sticky Bits, and Floating-Point Comparisons</title>
+ <author><firstname>Pete</firstname><surname>Becker</surname></author>
+ </biblioset>
+ <biblioset relation="journal">
+ <title>C/C++ Users Journal</title>
+ <issuenum>December 2000</issuenum>
+ </biblioset>
+ </biblioentry>
+
+ </bibliodiv>
+
+ <bibliodiv id="bbl.fpv-comparison.publications"><title>Publications</title>
+
+ <biblioentry id="bbl.Goldberg">
+ <abbrev>Goldberg</abbrev>
+
+ <biblioset relation="article">
+ <title><ulink url="http://citeseer.ist.psu.edu/goldberg91what.html">What Every Computer Scientist Should Know About Floating-Point Arithmetic</ulink></title>
+ <author><firstname>David</firstname><surname>Goldberg</surname></author>
+ <copyright><year>1991</year><holder>Association for Computing Machinery, Inc.</holder></copyright>
+ <pagenums>150-230</pagenums>
+ </biblioset>
+ <biblioset relation="journal">
+ <title>Computing Surveys</title>
+ <issuenum>March</issuenum>
+ </biblioset>
+ </biblioentry>
+
+ <biblioentry id="bbl.Langlois">
+ <abbrev>Langlois</abbrev>
+
+ <title><ulink url="http://www.inria.fr/rrrt/rr-3967.html">From Rounding Error Estimation to Automatic Correction with Automatic Differentiation</ulink></title>
+ <author><firstname>Philippe</firstname><surname>Langlois</surname></author>
+ <copyright><year>2000</year></copyright>
+ <issn>0249-6399</issn>
+ </biblioentry>
+
+ <biblioentry id="bbl.Kahan">
+ <abbrev>Kahan</abbrev>
+
+ <title><ulink url="http://www.cs.berkeley.edu/~wkahan/">Lots of information on William Kahan home page</ulink></title>
+ <author><firstname>William</firstname><surname>Kahan</surname></author>
+ </biblioentry>
+
+ </bibliodiv>
+ </bibliography>
+ </section>
+
+ <section id="utf.testing-tools.reference">
+ <title>The &utf; testing tools reference</title>
+ <titleabbrev>Reference</titleabbrev>
+
+ <inline-reference id="utf.testing-tools.reference.body">
+ <refentry name="BOOST_&lt;level&gt;">
+ <inline-synopsis>
+ <macro name="BOOST_WARN" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="predicate"/>
+ </macro>
+ <macro name="BOOST_CHECK" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="predicate"/>
+ </macro>
+ <macro name="BOOST_REQUIRE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="predicate"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ These tools are used to validate a predicate value. The only parameter for these tools is the boolean predicate
+ value that gets validated. It can be any expression that can be evaluated and converted to a boolean value. The
+ expression is evaluated only once, so it is safe to pass a complex expression for validation.
+ </para>
+
+ <btl-example name="example34">
+ <title>BOOST_&lt;level&gt; usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;_MESSAGE</ref>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_&lt;level&gt;_BITWISE_EQUAL">
+ <inline-synopsis>
+ <macro name="BOOST_WARN_BITWISE_EQUAL" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ <macro name="BOOST_CHECK_BITWISE_EQUAL" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ <macro name="BOOST_REQUIRE_BITWISE_EQUAL" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ These tools are used to perform a bitwise comparison of two values. The check shows all positions where the bits of
+ the left and right values mismatch.
+ </para>
+
+ <para role="first-line-indented">
+ The first parameter is the left compared value. The second parameter is the right compared value. The parameters are
+ not required to be of the same type, but a warning is issued if their types' sizes do not coincide.
+ </para>
+
+ <btl-example name="example33">
+ <title>BOOST_&lt;level&gt;_BITWISE_EQUAL usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;_EQUAL</ref>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_&lt;level&gt;_CLOSE">
+ <inline-synopsis>
+ <macro name="BOOST_WARN_CLOSE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ <macro-parameter name="tolerance"/>
+ </macro>
+ <macro name="BOOST_CHECK_CLOSE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ <macro-parameter name="tolerance"/>
+ </macro>
+ <macro name="BOOST_REQUIRE_CLOSE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ <macro-parameter name="tolerance"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ These tools are used to check closeness using the strong relationship defined by the predicate
+ <functionname>check_is_close</functionname>( left, right, tolerance ). To check for the weak relationship use
+ <ref>BOOST_&lt;level&gt;_PREDICATE</ref> family of tools with explicit <functionname>check_is_close</functionname>
+ invocation.
+ </para>
+
+ <para>
+ The first parameter is the <emphasis>left</emphasis> compared value. The second parameter is the
+ <emphasis>right</emphasis> compared value. The third and last parameter defines the tolerance for the comparison in
+ <link linkend="utf.testing-tools.fpv-comparison.tolerance-selection"><emphasis role="bold">percentage units</emphasis></link>.
+ </para>
+
+ <note>
+ <simpara>
+ The left and right parameters are required to be of the same floating-point type. You will need to explicitly
+ resolve any type mismatch to select which type to use for the comparison.
+ </simpara>
+ </note>
+
+ <note>
+ <simpara>
+ Note that to use these tools you need to include the additional header floating_point_comparison.hpp.
+ </simpara>
+ </note>
+
+ <btl-example name="example42">
+ <title>BOOST_&lt;level&gt;_CLOSE usage with very small values</title>
+ </btl-example>
+
+ <btl-example name="example43">
+ <title>BOOST_&lt;level&gt;_CLOSE usage with very big values</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;_CLOSE_FRACTION</ref>, <ref>BOOST_&lt;level&gt;_SMALL</ref>, <ref>BOOST_&lt;level&gt;_EQUAL</ref>,
+ <link linkend="utf.testing-tools.fpv-comparison">Floating point comparison algorithms</link>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_&lt;level&gt;_CLOSE_FRACTION">
+ <inline-synopsis>
+ <macro name="BOOST_WARN_CLOSE_FRACTION" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ <macro-parameter name="tolerance"/>
+ </macro>
+ <macro name="BOOST_CHECK_CLOSE_FRACTION" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ <macro-parameter name="tolerance"/>
+ </macro>
+ <macro name="BOOST_REQUIRE_CLOSE_FRACTION" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ <macro-parameter name="tolerance"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ These tools are used to check closeness using the strong relationship defined by the predicate
+ <functionname>check_is_close</functionname>( left, right, tolerance ). To check for the weak relationship use
+ <ref>BOOST_&lt;level&gt;_PREDICATE</ref> family of tools with explicit <functionname>check_is_close</functionname>
+ invocation.
+ </para>
+
+ <para>
+ The first parameter is the <emphasis>left</emphasis> compared value. The second parameter is the
+ <emphasis>right</emphasis> compared value. The third and last parameter defines the tolerance for the comparison as
+ <link linkend="utf.testing-tools.fpv-comparison.tolerance-selection"><emphasis role="bold">fraction of absolute
+ values being compared</emphasis></link>.
+ </para>
+
+ <note>
+ <simpara>
+ The left and right parameters are required to be of the same floating-point type. You will need to explicitly
+ resolve any type mismatch to select which type to use for the comparison.
+ </simpara>
+ </note>
+
+ <note>
+ <simpara>
+ Note that to use these tools you need to include the additional header floating_point_comparison.hpp.
+ </simpara>
+ </note>
+
+ <btl-example name="example44">
+ <title>BOOST_&lt;level&gt;_CLOSE_FRACTION usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;_CLOSE</ref>, <ref>BOOST_&lt;level&gt;_SMALL</ref>, <ref>BOOST_&lt;level&gt;_EQUAL</ref>,
+ <link linkend="utf.testing-tools.fpv-comparison">Floating point comparison algorithms</link>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_&lt;level&gt;_EQUAL">
+ <inline-synopsis>
+ <macro name="BOOST_WARN_EQUAL" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ <macro name="BOOST_CHECK_EQUAL" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ <macro name="BOOST_REQUIRE_EQUAL" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ The check performed by these tools is the same as the one performed by <ref>BOOST_&lt;level&gt;</ref>( left == right ).
+ The difference is that the mismatched values are reported as well.
+ </para>
+
+ <note>
+ <simpara>
+ It is a bad idea to use these tools to compare floating-point values. Use <ref>BOOST_&lt;level&gt;_CLOSE</ref> or
+ <ref>BOOST_&lt;level&gt;_CLOSE_FRACTION</ref> tools instead.
+ </simpara>
+ </note>
+
+ <btl-example name="example35">
+ <title>BOOST_&lt;level&gt;_EQUAL usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;</ref>, <ref>BOOST_&lt;level&gt;_CLOSE</ref>, <ref>BOOST_&lt;level&gt;_NE</ref>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_&lt;level&gt;_EQUAL_COLLECTION">
+ <inline-synopsis>
+ <macro name="BOOST_WARN_EQUAL_COLLECTION" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left_begin"/>
+ <macro-parameter name="left_end"/>
+ <macro-parameter name="right_begin"/>
+ <macro-parameter name="right_end"/>
+ </macro>
+ <macro name="BOOST_CHECK_EQUAL_COLLECTION" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left_begin"/>
+ <macro-parameter name="left_end"/>
+ <macro-parameter name="right_begin"/>
+ <macro-parameter name="right_end"/>
+ </macro>
+ <macro name="BOOST_REQUIRE_EQUAL_COLLECTION" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left_begin"/>
+ <macro-parameter name="left_end"/>
+ <macro-parameter name="right_begin"/>
+ <macro-parameter name="right_end"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ These tools are used to perform an element-by-element comparison of two collections. They print all mismatched
+ positions and the collection elements at these positions, and check that the collections have the same size. The first
+ two parameters designate the begin and end of the first collection. The last two parameters designate the begin and
+ end of the second collection.
+ </para>
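The element-by-element logic can be sketched in plain C++; this is only an illustration of what the tool checks, not the Boost.Test implementation:

```cpp
#include <cstddef>
#include <vector>

// Walks two ranges in lockstep and records each position where the elements
// differ; a size mismatch would additionally be flagged by the real tool.
template<typename It>
std::vector<std::size_t> mismatched_positions( It lb, It le, It rb, It re )
{
    std::vector<std::size_t> out;
    std::size_t pos = 0;
    while( lb != le && rb != re ) {
        if( !( *lb == *rb ) )
            out.push_back( pos );   // remember where the collections differ
        ++lb; ++rb; ++pos;
    }
    return out;
}
```

Reporting every mismatched position (rather than stopping at the first) is what makes the collection tools more informative than a single equality check on the whole container.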
+
+ <btl-example name="example36">
+ <title>BOOST_&lt;level&gt;_EQUAL_COLLECTION usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;_EQUAL</ref>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_&lt;level&gt;_EXCEPTION">
+ <inline-synopsis>
+ <macro name="BOOST_WARN_EXCEPTION" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="expression"/>
+ <macro-parameter name="exception"/>
+ <macro-parameter name="predicate"/>
+ </macro>
+ <macro name="BOOST_CHECK_EXCEPTION" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="expression"/>
+ <macro-parameter name="exception"/>
+ <macro-parameter name="predicate"/>
+ </macro>
+ <macro name="BOOST_REQUIRE_EXCEPTION" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="expression"/>
+ <macro-parameter name="exception"/>
+ <macro-parameter name="predicate"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ These tools are used to perform an exception detection and validation check. The tools execute the supplied expression
+ and validate that it throws an exception of the supplied class (or one derived from it) that complies with the
+ supplied predicate. If the expression throws any other unrelated exception, doesn't throw at all, or the
+ predicate evaluates to false, the check fails. In comparison with the <ref>BOOST_&lt;level&gt;_THROW</ref> tools these
+ allow performing more fine-grained checks - for example, making sure that an expected exception carries a specific
+ error message.
+ </para>
+
+ <btl-example name="example37">
+ <title>BOOST_&lt;level&gt;_EXCEPTION usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;_THROW</ref>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_&lt;level&gt;_GE">
+ <inline-synopsis>
+ <macro name="BOOST_WARN_GE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ <macro name="BOOST_CHECK_GE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ <macro name="BOOST_REQUIRE_GE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ The check performed by these tools is the same as the one performed by <ref>BOOST_&lt;level&gt;</ref>( left &gt;= right ).
+ The difference is that the argument values are reported as well.
+ </para>
+
+ <btl-example name="example57">
+ <title>BOOST_&lt;level&gt;_GE usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;_LE</ref>, <ref>BOOST_&lt;level&gt;_LT</ref>, <ref>BOOST_&lt;level&gt;_GT</ref>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_&lt;level&gt;_GT">
+ <inline-synopsis>
+ <macro name="BOOST_WARN_GT" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ <macro name="BOOST_CHECK_GT" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ <macro name="BOOST_REQUIRE_GT" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ The check performed by these tools is the same as the one performed by <ref>BOOST_&lt;level&gt;</ref>( left &gt; right ).
+ The difference is that the argument values are reported as well.
+ </para>
+
+ <btl-example name="example58">
+ <title>BOOST_&lt;level&gt;_GT usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;_LE</ref>, <ref>BOOST_&lt;level&gt;_LT</ref>, <ref>BOOST_&lt;level&gt;_GE</ref>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_&lt;level&gt;_LE">
+ <inline-synopsis>
+ <macro name="BOOST_WARN_LE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ <macro name="BOOST_CHECK_LE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ <macro name="BOOST_REQUIRE_LE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ The check performed by these tools is the same as the one performed by <ref>BOOST_&lt;level&gt;</ref>( left &lt;= right ).
+ The difference is that the argument values are reported as well.
+ </para>
+
+ <btl-example name="example55">
+ <title>BOOST_&lt;level&gt;_LE usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;_LT</ref>, <ref>BOOST_&lt;level&gt;_GE</ref>, <ref>BOOST_&lt;level&gt;_GT</ref>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_&lt;level&gt;_LT">
+ <inline-synopsis>
+ <macro name="BOOST_WARN_LT" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ <macro name="BOOST_CHECK_LT" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ <macro name="BOOST_REQUIRE_LT" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ The check performed by these tools is the same as the one performed by <ref>BOOST_&lt;level&gt;</ref>( left &lt; right ).
+ The difference is that the argument values are reported as well.
+ </para>
+
+ <btl-example name="example56">
+ <title>BOOST_&lt;level&gt;_LT usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;_LE</ref>, <ref>BOOST_&lt;level&gt;_GE</ref>, <ref>BOOST_&lt;level&gt;_GT</ref>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_&lt;level&gt;_MESSAGE">
+ <inline-synopsis>
+ <macro name="BOOST_WARN_MESSAGE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="predicate"/>
+ <macro-parameter name="message"/>
+ </macro>
+ <macro name="BOOST_CHECK_MESSAGE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="predicate"/>
+ <macro-parameter name="message"/>
+ </macro>
+ <macro name="BOOST_REQUIRE_MESSAGE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="predicate"/>
+ <macro-parameter name="message"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ These tools perform exactly the same check as the <ref>BOOST_&lt;level&gt;</ref> tools. The only difference is that
+ instead of generating an error/confirmation message these tools use the supplied one.
+ </para>
+
+ <para>
+ The first parameter is the boolean expression. The second parameter is the message reported in case of check
+ failure. The message argument can be constructed from components of any type supporting
+ <code>std::ostream&amp; operator&lt;&lt;(std::ostream&amp;)</code>.
+ </para>
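The streaming requirement means any type with a stream insertion operator can appear in the message. A minimal, Boost-independent sketch (the type and helper below are invented for the example):

```cpp
#include <ostream>
#include <sstream>
#include <string>

// A user type made message-friendly by providing operator<<; any type with
// such an operator can serve as a component of the message argument.
struct point { int x, y; };

std::ostream& operator<<( std::ostream& os, point const& p )
{
    return os << '(' << p.x << ',' << p.y << ')';
}

// Mimics how a message built from mixed components would be rendered.
std::string render_message( point const& p, double tolerance )
{
    std::ostringstream os;
    os << "point " << p << " outside tolerance " << tolerance;
    return os.str();
}
```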
+
+ <btl-example name="example38">
+ <title>BOOST_&lt;level&gt;_MESSAGE usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;</ref>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_&lt;level&gt;_NE">
+ <inline-synopsis>
+ <macro name="BOOST_WARN_NE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ <macro name="BOOST_CHECK_NE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ <macro name="BOOST_REQUIRE_NE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="left"/>
+ <macro-parameter name="right"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ The check performed by these tools is the same as the one performed by <ref>BOOST_&lt;level&gt;</ref>( left != right ).
+ The difference is that the matched values are reported as well.
+ </para>
+
+ <btl-example name="example54">
+ <title>BOOST_&lt;level&gt;_NE usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;_EQUAL</ref>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_&lt;level&gt;_NO_THROW">
+ <inline-synopsis>
+ <macro name="BOOST_WARN_NO_THROW" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="expression"/>
+ </macro>
+ <macro name="BOOST_CHECK_NO_THROW" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="expression"/>
+ </macro>
+ <macro name="BOOST_REQUIRE_NO_THROW" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="expression"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ These tools are used to perform a "no throw" check: they execute the supplied expression and validate that it does
+ not throw any exceptions. An error would be reported by the framework anyway if the statement appeared directly in the
+ test case body and threw an exception, but these tools allow the test case to proceed in case of failure.
+ </para>
+
+ <para>
+ If the check is successful, the tools may produce a confirmation message; otherwise they produce an error message of
+ the form "error in &lt;test case name&gt;: exception was thrown by &lt;expression&gt;".
+ </para>
+
+ <para>
+ The only parameter is the expression to execute. You can use a do-while(0) block if you want to execute more than one
+ statement.
+ </para>
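The semantics of the check itself can be sketched in plain C++; the helper below is illustrative only and is not part of the framework:

```cpp
#include <cassert>

// Illustrative sketch of the "no throw" check: run the statement(s) and
// succeed only if no exception escapes. The real tools additionally log
// the result through the framework instead of returning a bool.
template <typename F>
bool executes_without_exception(F statement) {
    try { statement(); return true; }
    catch (...)       { return false; }
}
```

Grouping several statements works the same way with the macros, e.g. `BOOST_CHECK_NO_THROW( do { first(); second(); } while(0) )`, where `first` and `second` stand for arbitrary statements of your own.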
+
+ <btl-example name="example39">
+ <title>BOOST_&lt;level&gt;_NO_THROW usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;_THROW</ref>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_&lt;level&gt;_PREDICATE">
+ <inline-synopsis>
+ <macro name="BOOST_WARN_PREDICATE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="predicate"/>
+ <macro-parameter name="arguments_list"/>
+ </macro>
+ <macro name="BOOST_CHECK_PREDICATE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="predicate"/>
+ <macro-parameter name="arguments_list"/>
+ </macro>
+ <macro name="BOOST_REQUIRE_PREDICATE" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="predicate"/>
+ <macro-parameter name="arguments_list"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ These are generic tools used to validate an arbitrary supplied predicate functor (there is a compile-time limit on
+ predicate arity defined by the configurable macro <macroname>BOOST_TEST_MAX_PREDICATE_ARITY</macroname>). To
+ validate a zero-arity predicate use the <ref>BOOST_&lt;level&gt;</ref> tools. In other cases prefer these tools. The
+ advantage of these tools is that they show the argument values in case of predicate failure.
+ </para>
+
+ <para>
+ The first parameter is the predicate itself. The second parameter is the list of predicate arguments, each wrapped
+ in round brackets (BOOST_PP sequence format).
+ </para>
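What these tools add over a plain boolean check can be sketched as follows. The helper below is hypothetical: it merely mimics how argument values could be rendered into a report on predicate failure, and is not the framework's implementation:

```cpp
#include <cassert>
#include <sstream>
#include <string>

// Hypothetical sketch: evaluate the predicate and, on failure, render the
// argument values the way the PREDICATE tools include them in the report.
template <typename Pred, typename... Args>
std::string check_predicate(Pred p, Args const&... args) {
    if (p(args...))
        return "ok";
    std::ostringstream msg;
    msg << "predicate failed for (";
    bool first = true;
    // Fold expression (C++17) streaming every argument value in turn.
    ((msg << (first ? "" : ", ") << args, first = false), ...);
    msg << ")";
    return msg.str();
}
```

Seeing the concrete arguments in the failure text is exactly the advantage described above over a plain `BOOST_<level>` check of the same condition.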
+
+ <btl-example name="example40">
+ <title>BOOST_&lt;level&gt;_PREDICATE usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;</ref>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_&lt;level&gt;_SMALL">
+ <inline-synopsis>
+ <macro name="BOOST_WARN_SMALL" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="value"/>
+ <macro-parameter name="tolerance"/>
+ </macro>
+ <macro name="BOOST_CHECK_SMALL" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="value"/>
+ <macro-parameter name="tolerance"/>
+ </macro>
+ <macro name="BOOST_REQUIRE_SMALL" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="value"/>
+ <macro-parameter name="tolerance"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ These tools are used to check that the supplied value is small enough. The "smallness" is defined by the absolute
+ value of the tolerance supplied as the second argument. Use these tools with caution: to compare two values for
+ closeness it is preferable to use the <ref>BOOST_&lt;level&gt;_CLOSE</ref> tools instead.
+ </para>
+
+ <para role="first-line-indented">
+ The first parameter is the value to check. The second parameter is the tolerance.
+ </para>
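The essence of the check, an absolute rather than relative comparison, can be sketched as follows (illustrative only; the framework's own comparison lives in floating_point_comparison.hpp):

```cpp
#include <cassert>
#include <cmath>

// Illustrative sketch: the value passes when its magnitude does not
// exceed the absolute tolerance. Contrast with BOOST_<level>_CLOSE,
// which compares two values using a relative tolerance.
inline bool is_small(double value, double tolerance) {
    return std::fabs(value) <= std::fabs(tolerance);
}
```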
+
+ <note>
+ <simpara>
+ Note that to use these tools you need to include the additional header floating_point_comparison.hpp.
+ </simpara>
+ </note>
+
+ <btl-example name="example41">
+ <title>BOOST_&lt;level&gt;_SMALL usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;_CLOSE</ref>, <ref>BOOST_&lt;level&gt;_CLOSE_FRACTION</ref>,
+ <link linkend="utf.testing-tools.fpv-comparison">Floating point comparison algorithms</link>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_&lt;level&gt;_THROW">
+ <inline-synopsis>
+ <macro name="BOOST_WARN_THROW" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="expression"/>
+ <macro-parameter name="exception"/>
+ </macro>
+ <macro name="BOOST_CHECK_THROW" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="expression"/>
+ <macro-parameter name="exception"/>
+ </macro>
+ <macro name="BOOST_REQUIRE_THROW" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="expression"/>
+ <macro-parameter name="exception"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ These tools are used to perform an exception detection check: they execute the supplied expression and validate
+ that it throws an exception of the supplied class or one derived from it. If the statement throws any other,
+ unrelated exception or doesn't throw at all, the check fails.
+ </para>
+
+ <para>
+ If the check is successful, the tool produces a confirmation message; otherwise it produces an error message of the
+ form "error in <replaceable>test case name</replaceable>: exception <replaceable>exception</replaceable> expected".
+
+ <para role="first-line-indented">
+ The first parameter is the expression to execute. Use a do-while(0) block if you want to execute more than one
+ statement. The second parameter is the expected exception type.
+ </para>
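The check these tools perform can be sketched in plain C++; this helper is illustrative and not part of the framework:

```cpp
#include <cassert>
#include <stdexcept>

// Illustrative sketch: succeed only when the statement throws the expected
// exception type (or a type derived from it); any unrelated exception or
// the absence of an exception is a failure.
template <typename Expected, typename F>
bool throws_expected(F statement) {
    try { statement(); }
    catch (Expected const&) { return true; }   // expected type or a subclass
    catch (...)             { return false; }  // unrelated exception
    return false;                              // nothing was thrown
}
```

Note how a derived exception also satisfies the check: catching by reference to the expected base class is what gives the "or one derived from it" behavior described above.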
+
+ <btl-example name="example45">
+ <title>BOOST_&lt;level&gt;_THROW usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;_NO_THROW</ref>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_ERROR">
+ <inline-synopsis>
+ <macro name="BOOST_ERROR" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="message"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ The BOOST_ERROR tool behaves the same way as <code>BOOST_CHECK_MESSAGE( false, message )</code>. This tool is used
+ to unconditionally increase the error counter and log a message.
+ </para>
+
+ <para>
+ The tool's only parameter is the error message to log.
+ </para>
+
+ <btl-example name="example46">
+ <title>BOOST_ERROR usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;</ref>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_FAIL">
+ <inline-synopsis>
+ <macro name="BOOST_FAIL" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="message"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ BOOST_FAIL behaves the same way as <code>BOOST_REQUIRE_MESSAGE( false, message )</code>. This tool is used to
+ unconditionally increase the error counter, log a message and abort the current test case.
+ </para>
+
+ <para>
+ The tool's only parameter is the error message to log.
+ </para>
+
+ <btl-example name="example47">
+ <title>BOOST_FAIL usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;</ref>
+ </seealso>
+ </refentry>
+
+ <refentry name="BOOST_IS_DEFINED">
+ <inline-synopsis>
+ <macro name="BOOST_IS_DEFINED" kind="functionlike" ref-id="utf.testing-tools.reference">
+ <macro-parameter name="symbol"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ Unlike the rest of the tools in the toolbox this tool does not perform the logging itself. Its only purpose
+ is to check at runtime whether or not the supplied preprocessor symbol is defined. Use it in combination with
+ <ref>BOOST_&lt;level&gt;</ref> to perform and log the validation. Macros of any arity can be checked. To check a
+ macro with non-zero arity, specify dummy arguments for it. See the example below.
+ </para>
+
+ <para>
+ The tool's only parameter is the preprocessor symbol to validate.
+ </para>
+
+ <btl-example name="example48">
+ <title>BOOST_IS_DEFINED usage</title>
+ </btl-example>
+
+ <seealso>
+ <ref>BOOST_&lt;level&gt;</ref>
+ </seealso>
+ </refentry>
+
+ </inline-reference>
+
+ </section>
+</section>

Added: trunk/libs/test/doc/src/utf.tutorials.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/utf.tutorials.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,18 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE chapter PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
+ <!ENTITY utf "<acronym>UTF</acronym>">
+]>
+<section id="utf.tutorials">
+ <title>The unit test framework tutorials</title>
+ <titleabbrev>Tutorials</titleabbrev>
+
+ <para role="first-line-indented">
+ You think writing tests is difficult, annoying and fruitless work? I beg to differ. Read through these tutorials
+ and I am sure you will agree.
+ </para>
+
+ <xi:include href="tutorial.intro-in-testing.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+ <xi:include href="tutorial.hello-the-testing-world.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+ <xi:include href="tutorial.new-year-resolution.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+
+</section>

Added: trunk/libs/test/doc/src/utf.usage-recommendations.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/utf.usage-recommendations.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,237 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE chapter PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
+ <!ENTITY utf "<acronym>UTF</acronym>">
+]>
+<section id="utf.usage-recommendations">
+ <title>The unit test framework usage recommendations</title>
+ <titleabbrev>Usage recommendations</titleabbrev>
+
+ <para role="first-line-indented">
+ The following pages collect tips and recommendations on how to use and apply the &utf; in your real-life practice.
+ You don't necessarily need to follow them, but I have found them handy.
+ </para>
+
+ <section id="utf.usage-recommendations.generic">
+ <title>Generic usage recommendations</title>
+ <titleabbrev>Generic</titleabbrev>
+
+ <qandaset defaultlabel="none">
+ <?dbhtml label-width="0%"?>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ Prefer offline compiled libraries to the inline included components
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ If you just want to write a quick, simple test in an environment where you have never used Boost.Test before, then
+ yes, use the included components. But if you plan to use Boost.Test on a permanent basis, the small investment of
+ time needed to build the library (if not built yet), install it and change your makefiles/project settings will soon
+ pay off in the form of shorter compilation times. Why make your compiler do the same work over and over again?
+ </para>
+ </answer>
+ </qandaentry>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ If you use only free-function-based test cases, advance to the automatic registration facility
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ It's really easy to switch to automatic registration, and you don't need to worry about forgotten test cases
+ anymore.
+ </para>
+ </answer>
+ </qandaentry>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ To find the location of the first error reported by a test tool within a reused template function, use the special
+ hook within the framework headers
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ In some cases you may be reusing the same template-based code from within one test case (actually, I recommend a
+ better solution for such cases; see below). Now if an error gets reported by a test tool within that reused
+ code, you may have difficulty locating where exactly the error occurred. To address this issue you can either add
+ <link linkend="utf.user-guide.test-output.log.BOOST_TEST_MESSAGE">BOOST_TEST_MESSAGE</link> statements in the
+ templated code that log the current type ids of the template parameters, or you can use the special hook located in
+ unit_test_result.hpp called first_failed_assertion(). If you set a breakpoint right on the line where this
+ function is defined, you will be able to unwind the stack and see where the error actually occurred.
+ </para>
+ </answer>
+ </qandaentry>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ To test a reusable template-based component with different template parameters, use the test case template facility
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ If you are writing a unit test for a generic reusable component, you may need to test it against a set of different
+ template parameter types. Most probably you will end up with code like this:
+ </para>
+
+ <btl-snippet name="snippet6"/>
+
+ <para role="first-line-indented">
+ This is exactly the situation where you would use the test case template facility. It not only simplifies this kind
+ of unit testing by automating some of the work; in addition, every argument type gets tested separately under the
+ unit test monitor. As a result, if one of the types produces an exception or a non-fatal error, you can still
+ continue and get results from testing with the other types.
+ </para>
+ </answer>
+ </qandaentry>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ Prefer BOOST_CHECK_EQUAL to BOOST_CHECK
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ Even though the BOOST_CHECK tool is the most generic and allows validating any bool-convertible expression, I would
+ recommend using, if possible, the more specific tools dedicated to the task. For example, if you need to validate
+ some variable against an expected value, use BOOST_CHECK_EQUAL instead. The main advantage is that in case of
+ failure you will see the mismatched value, the information most useful for error identification. The same
+ applies to the other tools supplied by the framework.
+ </para>
+ </answer>
+ </qandaentry>
+ </qandaset>
+ </section>
+
+ <section id="utf.usage-recommendations.dot-net-specific">
+ <title>Microsoft Visual Studio .NET usage recommendations</title>
+ <titleabbrev>Microsoft Visual Studio .NET users specific</titleabbrev>
+
+ <qandaset defaultlabel="none">
+ <?dbhtml label-width="0%"?>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ Use custom build step to automatically start test program after compilation
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ I find it most convenient to run the test program as a post-build step. To do so, use the
+ project property page:
+ </para>
+
+ <mediaobject>
+ <imageobject>
+ <imagedata format="jpg" fileref="../img/post_build_event.jpg"/>
+ </imageobject>
+ </mediaobject>
+
+ <para role="first-line-indented">
+ The full command you need in the "Command Line" field is:
+ </para>
+
+ <cmdsynopsis>
+ <command>&quot;$(TargetDir)\$(TargetName).exe&quot;</command>
+ <arg choice="plain">--result_code=no</arg>
+ <arg choice="plain">--report_level=no</arg>
+ </cmdsynopsis>
+
+ <para role="first-line-indented">
+ Note that both the report level and the result code are suppressed. This way the only output you may see from this
+ command is possible runtime errors. But the best part is that you can jump through these errors using the usual
+ keyboard shortcuts/mouse clicks you use for compilation error analysis:
+ </para>
+
+ <mediaobject>
+ <imageobject>
+ <imagedata format="jpg" fileref="../img/post_build_out.jpg"/>
+ </imageobject>
+ </mediaobject>
+
+ </answer>
+ </qandaentry>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ If you get a fatal exception somewhere within a test case, make the debugger break at the point of failure by adding
+ an extra command line argument
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ If you get a "memory access violation" message (or any other message indicating a fatal or system error) when you
+ run your test, add --catch_system_errors=no to the test run command line to get more information about the error
+ location:
+ </para>
+
+ <mediaobject>
+ <imageobject>
+ <imagedata format="jpg" fileref="../img/run_args.jpg"/>
+ </imageobject>
+ </mediaobject>
+
+ <para role="first-line-indented">
+ Now run the test again under the debugger and it will break at the point of failure.
+ </para>
+ </answer>
+ </qandaentry>
+ </qandaset>
+ </section>
+
+ <section id="utf.usage-recommendations.command-line-specific">
+ <title>Command line usage recommendations</title>
+ <titleabbrev>Command-line (non-GUI) users specific</titleabbrev>
+
+ <qandaset defaultlabel="none">
+ <?dbhtml label-width="0%"?>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ If you get a fatal exception somewhere within a test case, make the program generate a coredump by adding an extra
+ command line argument
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ If you get a "memory access violation" message (or any other message indicating a fatal or system error) when you
+ run your test, add --catch_system_errors=no to the test run command line to get more information about the error
+ location. Now run the test again and it will create a coredump you can analyze using your preferred debugger. Or run
+ it under a debugger in the first place and it will break at the point of failure.
+ </para>
+ </answer>
+ </qandaentry>
+
+ <qandaentry>
+ <question>
+ <simpara>
+ How do I use a test module built with the Boost.Test framework under the management of automated regression test facilities?
+ </simpara>
+ </question>
+ <answer>
+ <para role="first-line-indented">
+ My first recommendation is to make sure that the test framework catches all fatal errors, by adding the argument
+ --catch_system_errors=yes to all test module invocations. Otherwise the test program may produce unwanted
+ dialogs (depending on compiler and OS) that will halt your regression test run. The second recommendation is to
+ suppress the result report output by adding the --report_level=no argument and the test log output by adding the
+ --log_level=nothing argument, so that the test module won't produce undesirable output no one is going to look at
+ anyway. I recommend relying only on the result code, which is consistent for all test programs. An
+ alternative to my second recommendation is to direct both the log and the report to separate files you can analyze
+ later on. Moreover, you can make Boost.Test produce them in XML format using --output_format=XML and use some
+ automated tool to format this information as you like.
+ </para>
+ </answer>
+ </qandaentry>
+ </qandaset>
+ </section>
+</section>

Added: trunk/libs/test/doc/src/utf.user-guide.runtime-config.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/utf.user-guide.runtime-config.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,570 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE section PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
+ <!ENTITY utf "<acronym>UTF</acronym>">
+]>
+<section id="utf.user-guide.runtime-config">
+ <title>Runtime configuration &hellip; or what are the strings I can pull?</title>
+ <titleabbrev>Runtime configuration </titleabbrev>
+
+ <para role="first-line-indented">
+ The &utf; supports multiple parameters that affect test module execution. To set the parameter's value you can
+ either use a runtime configuration subsystem interface from within the test module initialization function or you can
+ specify the value at runtime during test module invocation.
+ </para>
+
+ <para role="first-line-indented">
+ The &utf; provides two ways to set a parameter at runtime: by specifying a command line argument and by setting an
+ environment variable. The command line argument always overrides the corresponding environment variable.
+ </para>
+
+ <para role="first-line-indented">
+ During test module initialization the &utf; parses the command line and excludes all parameters that belong to it,
+ and their values, from the argument list. The rest of the command line is forwarded to the test module initialization
+ function supplied by you. The command line argument format expected by the &utf; is:
+ </para>
+
+ <simpara> <!-- TO FIX -->
+ --&lt;command line argument name&gt;=&lt;argument_value&gt;.
+ </simpara>
+
+ <para role="first-line-indented">
+ The command line argument name is case sensitive and is required to exactly match the name in the parameter
+ specification. There should not be any spaces between '=' and either the command line argument name or the argument value.
+ </para>
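The accepted token shape can be sketched with a small parser; the helper below is hypothetical and is not the framework's actual parsing code:

```cpp
#include <cassert>
#include <string>
#include <utility>

// Hypothetical sketch of the "--<name>=<value>" shape described above.
// Returns an empty pair when the token is not of that form.
std::pair<std::string, std::string> parse_utf_argument(std::string const& token) {
    if (token.size() < 2 || token.compare(0, 2, "--") != 0)
        return {};
    std::string::size_type eq = token.find('=', 2);
    if (eq == std::string::npos)
        return {};
    return { token.substr(2, eq - 2), token.substr(eq + 1) };
}
```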
+
+ <para role="first-line-indented">
+ The corresponding environment variable name is also case sensitive and is required to exactly match the name in the
+ parameter specification.
+ </para>
+
+ <para role="first-line-indented">
+ All information about supported parameters is summarized below in the reference section.
+ </para>
+
+ <section id="utf.user-guide.runtime-config.run-by-name">
+ <title>Running specific test units selected by their name</title>
+ <titleabbrev>Run by name</titleabbrev>
+
+ <para role="first-line-indented">
+ Under regular circumstances, test module execution initiates testing of all test units manually or automatically
+ registered in the master test suite. The &utf; also provides the ability to run a specific set of test units: a
+ single test case, a single test suite, or some combination of test cases and suites. The test units to run are
+ selected by the runtime parameter <link linkend="utf.user-guide.runtime-config.reference">run_test</link>. In the
+ following examples I select the tests to run using command line arguments, but the same filter expression can be used
+ as the value of the corresponding environment variable.
+ </para>
+
+ <para role="first-line-indented">
+ Filter expressions are specified in the form a/b/c, where a, b and c are filters for the corresponding levels of the
+ test tree. The symbol '*' can be used at the beginning or end of a level filter, or as the entire level filter itself.
+ The symbol ',' is used to create a list of test units residing on the same level of the test tree.
+ </para>
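The single-level matching rules ('*' alone, a leading '*', a trailing '*', or both) can be sketched as follows. This is an illustrative re-implementation for clarity, not the framework's code:

```cpp
#include <cassert>
#include <string>

// Illustrative sketch of single-level filter matching: '*' alone matches any
// name; a leading/trailing '*' turns the filter into a suffix/prefix match,
// and '*' on both ends into a substring match.
bool level_matches(std::string const& filter, std::string const& name) {
    bool star_front = !filter.empty() && filter.front() == '*';
    bool star_back  = filter.size() > 1 && filter.back() == '*';
    std::string core = filter.substr(star_front ? 1 : 0,
                                     filter.size() - star_front - star_back);
    if (core.empty()) return true;                       // "*" matches any name
    if (star_front && star_back)                         // "*est*": substring
        return name.find(core) != std::string::npos;
    if (star_front)                                      // "*1": suffix
        return name.size() >= core.size()
            && name.compare(name.size() - core.size(), core.size(), core) == 0;
    if (star_back)                                       // "test*": prefix
        return name.compare(0, core.size(), core) == 0;
    return name == core;                                 // exact match
}
```

A full filter expression applies one such level filter per '/'-separated segment while walking down the test tree, as the examples below demonstrate.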
+
+ <para role="first-line-indented">
+ Let's consider the following test module, consisting of several test suites and test cases.
+ </para>
+
+ <btl-snippet name="snippet18"/>
+
+ <itemizedlist>
+ <listitem>
+ <simpara>Run a single test case by specifying its name.</simpara>
+
+ <screen html-class="test-execution-output">&gt;example --log_level=test_suite --run_test=testA
+Running 1 test case...
+Entering test suite "example"
+Entering test case "testA"
+Test case testA doesn't include any assertions
+Leaving test case "testA"
+Leaving test suite "example"
+
+*** No errors detected</screen>
+ </listitem>
+ <listitem>
+ <simpara>
+ Run multiple test cases residing within the same test suite by listing their names in a comma-separated list.
+ </simpara>
+
+ <screen html-class="test-execution-output">&gt;example --log_level=test_suite --run_test=testA,testB
+Running 2 test case...
+Entering test suite "example"
+Entering test case "testA"
+Test case testA doesn't include any assertions
+Leaving test case "testA"
+Entering test case "testB"
+Test case testB doesn't include any assertions
+Leaving test case "testB"
+Leaving test suite "example"
+
+*** No errors detected</screen>
+ </listitem>
+ <listitem>
+ <simpara>An incorrect test case name may lead to no tests being run.</simpara>
+
+ <screen html-class="test-execution-output">&gt;example --log_level=test_suite --run_test=testC
+Test setup error: no test cases matching filter</screen>
+ </listitem>
+ <listitem>
+ <simpara>
+ A test unit name can refer to a test suite as well. All test units within the referred test suite are going to be
+ run.
+ </simpara>
+
+ <screen html-class="test-execution-output">&gt;example --log_level=test_suite --run_test=s1
+Running 2 test cases...
+Entering test suite "example"
+Entering test suite "s1"
+Entering test case "test1"
+Test case test1 doesn't include any assertions
+Leaving test case "test1"
+Entering test case "lest2"
+Test case lest2 doesn't include any assertions
+Leaving test case "lest2"
+Leaving test suite "s1"
+Leaving test suite "example"
+
+*** No errors detected</screen>
+ </listitem>
+ <listitem>
+ <simpara>
+ Using '/' as the level separator, you can refer to any test unit inside the test tree.
+ </simpara>
+
+ <screen html-class="test-execution-output">&gt;example --log_level=test_suite --run_test=s2/in/test
+Running 1 test case...
+Entering test suite "example"
+Entering test suite "s2"
+Entering test suite "in"
+Entering test case "test"
+Test case test doesn't include any assertions
+Leaving test case "test"
+Leaving test suite "in"
+Leaving test suite "s2"
+Leaving test suite "example"
+
+*** No errors detected</screen>
+ </listitem>
+ <listitem>
+ <simpara>
+ The &utf; supports a simple regular-expression-like '*' wildcard. A single '*' matches any test unit name.
+ Accordingly, the expression 's1/*' is equivalent to the expression 's1' and matches all test units inside test suite
+ s1. Similarly, the expression '*/test1' refers to all test units named test1 that reside in the master test suite's
+ direct child suites.
+ </simpara>
+
+ <screen html-class="test-execution-output">&gt;example --log_level=test_suite --run_test=*/test1
+Running 2 test cases...
+Entering test suite "example"
+Entering test suite "s1"
+Entering test case "test1"
+Test case test1 doesn't include any assertions
+Leaving test case "test1"
+Leaving test suite "s1"
+Entering test suite "s2"
+Entering test case "test1"
+Test case test1 doesn't include any assertions
+Leaving test case "test1"
+Leaving test suite "s2"
+Leaving test suite "example"
+
+*** No errors detected</screen>
+ </listitem>
+ <listitem>
+ <simpara>
+ The &utf; allows matching a specific prefix in test unit names. For example, the expression 's2/test*' selects only
+ the test units in test suite s2 whose names start with 'test'. This avoids running test suite s2/in.
+ </simpara>
+
+ <screen html-class="test-execution-output">&gt;example --log_level=test_suite --run_test=s2/test*
+Running 2 test cases...
+Entering test suite "example"
+Entering test suite "s2"
+Entering test case "test1"
+Test case test1 doesn't include any assertions
+Leaving test case "test1"
+Entering test case "test11"
+Test case test11 doesn't include any assertions
+Leaving test case "test11"
+Leaving test suite "s2"
+Leaving test suite "example"
+
+*** No errors detected</screen>
+ </listitem>
+ <listitem>
+ <simpara>
+ The &utf; allows matching a specific suffix in test unit names. For example, the expression '*/*1' selects the test
+ units whose names end with '1' and that reside in the master test suite's direct child suites.
+ </simpara>
+
+ <screen html-class="test-execution-output">&gt;example --log_level=test_suite --run_test=*/*1
+Running 2 test cases...
+Entering test suite "example"
+Entering test suite "s2"
+Entering test case "test1"
+Test case test1 doesn't include any assertions
+Leaving test case "test1"
+Entering test case "test11"
+Test case test11 doesn't include any assertions
+Leaving test case "test11"
+Leaving test suite "s2"
+Leaving test suite "example"
+
+*** No errors detected</screen>
+ </listitem>
+ <listitem>
+ <simpara>
+ Finally, the &utf; allows matching a specific substring in test unit names.
+ </simpara>
+
+ <screen html-class="test-execution-output">&gt;example --log_level=test_suite --run_test=s1/*est*
+Running 2 test cases...
+Entering test suite "example"
+Entering test suite "s1"
+Entering test case "test1"
+Test case test1 doesn't include any assertions
+Leaving test case "test1"
+Entering test case "lest2"
+Test case lest2 doesn't include any assertions
+Leaving test case "lest2"
+Leaving test suite "s1"
+Leaving test suite "example"
+
+*** No errors detected</screen>
+ </listitem>
+ </itemizedlist>
+ </section>
+
+ <section id="utf.user-guide.runtime-config.reference">
+ <title>Runtime parameters reference</title>
+ <titleabbrev>Parameters reference</titleabbrev>
+
+ <para role="first-line-indented">
+ Each parameter specification includes: the full parameter name, the corresponding environment variable name, the
+ command line argument name, the acceptable values and a long description. The default value for the parameter is
+ shown in bold in the acceptable values list. All values are case sensitive and are required to exactly match the
+ parameter specification.
+ </para>
+
+ <btl-parameter-reference id="utf.user-guide.runtime-config.parameters">
+ <refentry name="auto_start_dbg">
+ <name>Automatically attach debugger in case of system failure</name>
+ <env>BOOST_TEST_AUTO_START_DBG</env>
+ <cla>auto_start_dbg</cla>
+ <vals>
+ <simplelist>
+ <member><emphasis role="bold">no</emphasis></member>
+ <member>yes</member>
+ <member>debugger identifier</member>
+ </simplelist>
+ </vals>
+ <descr>
+ <simpara>
+ specifies whether Boost.Test should try to attach a debugger in case a fatal system error occurs. If the value is
+ "yes", the default debugger configured for the platform is attempted. Alternatively, the debugger identified
+ by the parameter's argument value is used. For more details on advanced debugger support in Boost.Test check the
+ <!-- TO FIX: add link --> section dedicated to the Boost.Test debug API.
+ </simpara>
+ </descr>
+ </refentry>
+
+ <refentry name="build_info">
+ <name>Print build info</name>
+ <env>BOOST_TEST_BUILD_INFO</env>
+ <cla>build_info</cla>
+ <vals>
+ <simplelist>
+ <member><emphasis role="bold">no</emphasis></member>
+ <member>yes</member>
+ </simplelist>
+ </vals>
+ <descr>
+ <simpara>
+ makes the framework print build information that includes: platform, compiler, STL implementation in use and
+ Boost version.
+ </simpara>
+ </descr>
+ </refentry>
+
+ <refentry name="catch_system_errors">
+ <name>Catch system errors</name>
+ <env>BOOST_TEST_CATCH_SYSTEM_ERRORS</env>
+ <cla>catch_system_errors</cla>
+ <vals>
+ <simplelist>
+ <member><emphasis role="bold">yes</emphasis></member>
+ <member>no</member>
+ </simplelist>
+ </vals>
+ <descr>
+ <simpara>
+ the value "no" prohibits the framework from catching asynchronous system events. This can be used for test programs
+ executed within a GUI, or to get a coredump for stack analysis. See the usage recommendations page for more details.
+ </simpara>
+ </descr>
+ </refentry>
+
+ <refentry name="detect_memory_leak">
+ <name>Detect memory leaks</name>
+ <env>BOOST_TEST_DETECT_MEMORY_LEAK</env>
+ <cla>detect_memory_leak</cla>
+ <vals>
+ <simplelist>
+ <member>0</member>
+ <member><emphasis role="bold">1</emphasis></member>
+ <member>integer value &gt; 1</member>
+ </simplelist>
+ </vals>
+ <descr>
+ <simpara>
+ positive value tells the framework to detect the memory leaks (if any). Any value greater then 1 in addition
+ is treated as leak allocation number and setup runtime breakpoint. In other words setting this parameter to
+ the positive value N greater than 1 causes the framework to set a breakpoint at Nth memory allocation (don't
+ do that from the command line - only when you are under debugger). Note: if your test program produce memory
+ leaks notifications, they are combined with allocation number values you could use to set a breakpoint.
+ Currently only applies to MS family of compilers.
+ </simpara>
+ </descr>
+ </refentry>
+
+ <refentry name="detect_fp_exceptions">
+ <name>[Do not] trap floating point exceptions</name>
+ <env>BOOST_TEST_DETECT_FP_EXCEPTIONS</env>
+ <cla>detect_fp_exceptions</cla>
+ <vals>
+ <simplelist>
+ <member><emphasis role="bold">no</emphasis></member>
+ <member>yes</member>
+ </simplelist>
+ </vals>
+ <descr>
+ <simpara>
+ enables/disables hardware traps for floating point exceptions if they are supported. Disabled by default.
+ </simpara>
+ </descr>
+ </refentry>
+
+ <refentry name="log_format">
+ <name>The log format</name>
+ <env>BOOST_TEST_LOG_FORMAT</env>
+ <cla>log_format</cla>
+ <vals>
+ <simplelist>
+ <member><emphasis role="bold">HRF</emphasis> - human readable format</member>
+ <member>XML - XML format for automated output processing</member>
+ </simplelist>
+ </vals>
+ <descr>
+ <simpara>
+ allows selecting the &utf; log format from the list of formats supplied by the framework. To specify a custom log
+ format, use the <link linkend="utf.user-guide.test-output.log.ct-config.log-formatter">unit_test_log API</link>.
+ </simpara>
+ </descr>
+ </refentry>
+
+ <refentry name="log_level">
+ <name>The &utf; log level</name>
+ <env>BOOST_TEST_LOG_LEVEL</env>
+ <cla>log_level</cla>
+ <vals>
+ <variablelist>
+ <?dbhtml term-separator=" - "?>
+
+ <varlistentry>
+ <term>all</term>
+ <listitem><simpara>report all log messages including the passed test notification</simpara></listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>success</term>
+ <listitem><simpara>the same as all</simpara></listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>test_suite</term>
+ <listitem><simpara>show test suite messages</simpara></listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>message</term>
+ <listitem><simpara>show user messages</simpara></listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>warning</term>
+ <listitem><simpara>report warnings issued by user</simpara></listitem>
+ </varlistentry>
+ <varlistentry>
+ <term><emphasis role="bold">error</emphasis></term>
+ <listitem><simpara>report all error conditions</simpara></listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>cpp_exception</term>
+ <listitem><simpara>report uncaught C++ exceptions</simpara></listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>system_error</term>
+ <listitem><simpara>report system originated non-fatal errors (for example, timeout or floating point exception)</simpara></listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>fatal_error</term>
+ <listitem><simpara>report only user or system originated fatal errors (for example, memory access violation)</simpara></listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>nothing</term>
+ <listitem><simpara>do not report any information</simpara></listitem>
+ </varlistentry>
+ </variablelist>
+ </vals>
+ <descr>
+ <simpara>
+ allows setting the &utf; <link linkend="utf.user-guide.test-output.log">log level</link> in a range from a
+ complete log, when all successful tests are confirmed and all test suite messages are included, to an empty
+ log, when nothing is logged to the test output stream. Note that log levels are cumulative; in other words, each
+ log level also includes all the information reported by less restrictive ones.
+ </simpara>
+ </descr>
+ </refentry>
+
+ <refentry name="output_format">
+ <name>The output format</name>
+ <env>BOOST_TEST_OUTPUT_FORMAT</env>
+ <cla>output_format</cla>
+ <vals>
+ <simplelist>
+ <member><emphasis role="bold">HRF</emphasis> - human readable format</member>
+ <member>XML - XML format for automated output processing</member>
+ </simplelist>
+ </vals>
+ <descr>
+ <simpara>
+ combines the effect of the report_format and log_format parameters. If specified, it has higher priority than
+ either one of them.
+ </simpara>
+ </descr>
+ </refentry>
+
+ <refentry name="random">
+ <name>Random seed for random order of test cases</name>
+ <env>BOOST_TEST_RANDOM</env>
+ <cla>random</cla>
+ <vals>
+ <simplelist>
+ <member><emphasis role="bold">0</emphasis></member>
+ <member>1</member>
+ <member>integer value &gt; 1</member>
+ </simplelist>
+ </vals>
+ <descr>
+ <simpara>
+ a positive value makes the framework run the test cases in random order. In addition, if this value is greater
+ than 1, it's used as the random seed; otherwise the random seed is generated based on the current time.
+ </simpara>
+ </descr>
+ </refentry>
+
+ <refentry name="report_format">
+ <name>The report format</name>
+ <env>BOOST_TEST_REPORT_FORMAT</env>
+ <cla>report_format</cla>
+ <vals>
+ <simplelist>
+ <member><emphasis role="bold">HRF</emphasis> - human readable format</member>
+ <member>XML - XML format for automated output processing</member>
+ </simplelist>
+ </vals>
+ <descr>
+ <simpara>
+ allows selecting the &utf; report format from the list of formats supplied by the framework. To
+ specify a custom report format, use the unit_test_report API.
+ </simpara>
+ </descr>
+ </refentry>
+
+ <refentry name="report_level">
+ <name>The results report level</name>
+ <env>BOOST_TEST_REPORT_LEVEL</env>
+ <cla>report_level</cla>
+ <vals>
+ <simplelist>
+ <member>no</member>
+ <member><emphasis role="bold">confirm</emphasis></member>
+ <member>short</member>
+ <member>detailed</member>
+ </simplelist>
+ </vals>
+ <descr>
+ <simpara>
+ allows setting the level of detail of the testing results report generated by the framework. Use the value
+ "no" to eliminate the results report completely. See
+ <xref linkend="utf.user-guide.test-output.results-report"/> for a description of the different report formats.
+ </simpara>
+ </descr>
+ </refentry>
+
+ <refentry name="result_code">
+ <name>[Do not] return result code</name>
+ <env>BOOST_TEST_RESULT_CODE</env>
+ <cla>result_code</cla>
+ <vals>
+ <simplelist>
+ <member><emphasis role="bold">yes</emphasis></member>
+ <member>no</member>
+ </simplelist>
+ </vals>
+ <descr>
+ <simpara>
+ the value "no" forces the framework to always return a zero result code. This can be used for test programs
+ executed within a GUI. See the <link linkend="utf.usage-recommendations.dot-net-specific">usage recommendations</link>
+ section for more details.
+ </simpara>
+ </descr>
+ </refentry>
+
+ <refentry name="run_test">
+ <name>Test units to run</name>
+ <env>BOOST_TESTS_TO_RUN</env>
+ <cla>run_test</cla>
+ <vals>
+ <simplelist>
+ <member>
+ <link linkend="utf.user-guide.runtime-config.run-by-name">specification</link> of test units to run
+ </member>
+ </simplelist>
+ </vals>
+ <descr>
+ <simpara>
+ specifies which test units to run.
+ </simpara>
+ </descr>
+ </refentry>
+
+ <refentry name="show_progress">
+ <name>Show progress</name>
+ <env>BOOST_TEST_SHOW_PROGRESS</env>
+ <cla>show_progress</cla>
+ <vals>
+ <simplelist>
+ <member><emphasis role="bold">no</emphasis></member>
+ <member>yes</member>
+ </simplelist>
+ </vals>
+ <descr>
+ <simpara>makes the framework print progress information.</simpara>
+ </descr>
+ </refentry>
+
+ <refentry name="use_alt_stack">
+ <name>Use alternative stack</name>
+ <env>BOOST_TEST_USE_ALT_STACK</env>
+ <cla>use_alt_stack</cla>
+ <vals>
+ <simplelist>
+ <member><emphasis role="bold">yes</emphasis></member>
+ <member>no</member>
+ </simplelist>
+ </vals>
+ <descr>
+ <simpara>
+ specifies whether or not the Boost.Test Execution Monitor should employ an alternative stack for signal
+ processing on platforms where it is supported.
+ </simpara>
+ </descr>
+ </refentry>
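Taken together, each parameter above can be supplied either as a command line argument or through its environment variable counterpart. A hypothetical invocation sketch (the binary name my_test and the chosen values are illustrative only):

```
# command line argument form
./my_test --log_level=all --report_level=detailed --random=1

# environment variable form (Bourne-style shell)
BOOST_TEST_LOG_LEVEL=all BOOST_TEST_REPORT_LEVEL=detailed ./my_test
```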
+
+ </btl-parameter-reference>
+ </section>
+</section>

Added: trunk/libs/test/doc/src/utf.users-guide.fixture.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/utf.users-guide.fixture.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,284 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE section PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
+ <!ENTITY utf "<acronym>UTF</acronym>">
+]>
+<section id="utf.user-guide.fixture">
+ <title>Fixtures &hellip; or let me repeat myself</title>
+ <titleabbrev>Fixtures</titleabbrev>
+
+ <para role="first-line-indented">
+ In general terms, a test fixture or test context is the collection of one or more of the following items required
+ to perform the test:
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <simpara>preconditions</simpara>
+ </listitem>
+ <listitem>
+ <simpara>particular states of tested units</simpara>
+ </listitem>
+ <listitem>
+ <simpara>necessary cleanup procedures</simpara>
+ </listitem>
+ </itemizedlist>
+
+ <para role="first-line-indented">
+ Though these tasks are encountered in many if not all test cases, what makes a test fixture different is
+ repetition. Where a normal test case implementation does all preparatory and cleanup work itself, a test fixture
+ allows this to be implemented in a separate reusable unit.
+ </para>
+
+ <para role="first-line-indented">
+ With the introduction of extreme programming (XP), the testing style that requires test setup/cleanup repetition is
+ becoming more and more popular. A single XP-adopted test module may contain hundreds of single-assertion test cases,
+ many requiring very similar test setup/cleanup. This is the problem that the test fixture is designed to solve.
+ </para>
+
+ <para role="first-line-indented">
+ In practice a test fixture is usually a combination of setup and teardown functions associated with a test case.
+ The former serves the purpose of test setup; the latter is dedicated to the cleanup tasks. Ideally it's
+ preferable that a test module author is able to define variables used in fixtures on the stack and at the same time
+ is able to refer to them directly in the test case.
+ </para>
+
+ <para role="first-line-indented">
+ It's important to understand that C++ provides a way to implement a straightforward test fixture solution
+ that almost satisfies our requirements without any extra support from the test framework. This may explain why
+ test fixture support was introduced in the &utf; somewhat late in its life cycle. Here is how a simple test module
+ with such a fixture may look:
+ </para>
+
+ <btl-snippet name="snippet5"/>
+
+ <para role="first-line-indented">
+ This is a generic solution that can be used to implement any kind of shared setup or cleanup procedure. Still
+ there are several more or less minor practical issues with this pure C++ based fixture solution:
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <simpara>
+ We need to add a fixture declaration statement into each test case manually.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ Objects defined in the fixture are referenced with the &quot;&lt;fixture-instance-name&gt;.&quot; prefix.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ There is no place to execute a &quot;global&quot; fixture, which performs &quot;global&quot; setup/cleanup
+ procedures before and after testing.
+ </simpara>
+ </listitem>
+ </itemizedlist>
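The first two issues can be illustrated with a short plain C++ sketch (the names are hypothetical and not taken from the snippets above):

```cpp
#include <cassert>

// pure C++ fixture: the constructor is the setup, the destructor is the teardown
struct my_fixture {
    my_fixture() : counter(0) {}   // setup
    ~my_fixture() {}               // teardown
    int counter;
};

// issue 1: every test case has to declare the fixture instance itself
int test_case1() {
    my_fixture f;
    // issue 2: fixture members are only reachable through the "f." prefix
    f.counter++;
    assert(f.counter == 1);
    return f.counter;
}
```

The following sections describe how the &utf; removes both chores for automatically registered test units.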
+
+ <para role="first-line-indented">
+ While there is little the &utf; can do to address these issues for manually registered test units, it's
+ possible to resolve them for test units that are automatically registered. To do this the &utf; defines a
+ <link linkend="utf.user-guide.fixture.model">generic fixture model</link> - fixed interfaces that both setup and
+ teardown fixture functions should comply with. Based on the generic fixture model, the &utf; presents a solution for
+ the following tasks:
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <simpara><link linkend="utf.user-guide.fixture.per-test-case">per test case fixture automation</link></simpara>
+ </listitem>
+ <listitem>
+ <simpara><link linkend="utf.user-guide.fixture.test-suite-shared">shared test suite level fixture</link></simpara>
+ </listitem>
+ <listitem>
+ <simpara><link linkend="utf.user-guide.fixture.global">&quot;global&quot; fixture support</link></simpara>
+ </listitem>
+ </itemizedlist>
+
+ <section id="utf.user-guide.fixture.model">
+ <title>Generic fixture model</title>
+ <titleabbrev>Generic model</titleabbrev>
+
+ <para role="first-line-indented">
+ The &utf; defines the generic fixture model as follows:
+ </para>
+
+ <programlisting>struct &lt;fixture-name&gt;{
+ &lt;fixture-name&gt;(); // setup function
+ ~&lt;fixture-name&gt;(); // teardown function
+};</programlisting>
+
+ <para role="first-line-indented">
+ In other words a fixture is expected to be implemented as a class, where the class constructor serves as the setup
+ method and the class destructor serves as the teardown method. The &utf; opted to avoid explicit names for the
+ setup and teardown methods in the fixture interface, since this is considered most natural in C++ for tasks similar
+ to RAII and coincides with the pure C++ solution discussed earlier.
+ </para>
+
+ <important>
+ <simpara>
+ The above interface prevents you from reporting errors in the teardown procedure using an exception. This does make
+ sense though: if somehow more than one fixture is assigned to a test unit, you want all teardown procedures to
+ run, even if some may experience problems.
+ </simpara>
+ </important>
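The constructor/destructor mapping can be demonstrated without the &utf; at all; a minimal sketch (hypothetical names) that records the order in which the fixture pieces run:

```cpp
#include <cassert>
#include <string>
#include <vector>

std::vector<std::string> trace; // records the order of fixture events

struct ordered_fixture {
    ordered_fixture()  { trace.push_back("setup"); }    // runs before the test body
    ~ordered_fixture() { trace.push_back("teardown"); } // runs after it, even on early scope exit
};

// simulates one test run guarded by a fixture
std::vector<std::string> run_one_test() {
    trace.clear();
    {
        ordered_fixture f;
        trace.push_back("body");
    } // the destructor (teardown) fires here
    return trace;
}
```

Note that the destructor runs even if the scope is left early, which is exactly the property the teardown half of a fixture needs.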
+ </section>
+
+ <section id="utf.user-guide.fixture.per-test-case">
+ <title>Per test case fixture</title>
+ <titleabbrev>Per test case</titleabbrev>
+
+ <para role="first-line-indented">
+ To automate the task of assigning a fixture to a test case, use the macro
+ BOOST_FIXTURE_TEST_CASE for test case creation in place of the macro <macroname>BOOST_AUTO_TEST_CASE</macroname>:
+ </para>
+
+ <inline-synopsis>
+ <macro name="BOOST_FIXTURE_TEST_CASE" kind="functionlike">
+ <macro-parameter name="test_case_name"/>
+ <macro-parameter name="fixure_name"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ The only difference from the macro <macroname>BOOST_AUTO_TEST_CASE</macroname> is the presence of an extra argument
+ - the fixture name. Unlike the pure C++ solution you have direct access to the public and protected members of the
+ fixture, though you still need to refer to the fixture name in every test case.
+ </para>
+
+ <note>
+ <simpara>
+ You can't access private members of the fixture, but then why would you make anything private?
+ </simpara>
+ </note>
+
+ <btl-example name="example18">
+ <title>Per test case fixture</title>
+
+ <para role="first-line-indented">
+ In this example only test_case1 and test_case2 have the fixture F assigned. In the next section you are going to see
+ what can be done if all test cases in a test suite require the same fixture.
+ </para>
+ </btl-example>
+ </section>
+
+ <section id="utf.user-guide.fixture.test-suite-shared">
+ <title>Test suite level fixture</title>
+ <titleabbrev>Test suite shared</titleabbrev>
+
+ <para role="first-line-indented">
+ If all test cases in a test suite require the same fixture (you can group test cases into test suites based on the
+ fixture required) you can make another step toward automation of test fixture assignment. To assign the
+ same shared fixture to all test cases in a test suite, use the macro BOOST_FIXTURE_TEST_SUITE in place of the
+ macro <macroname>BOOST_AUTO_TEST_SUITE</macroname> for automated test suite creation and registration.
+ </para>
+
+ <inline-synopsis>
+ <macro name="BOOST_FIXTURE_TEST_SUITE" kind="functionlike">
+ <macro-parameter name="suite_name"/>
+ <macro-parameter name="fixure_name"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ Once again the only difference from the macro <macroname>BOOST_AUTO_TEST_SUITE</macroname> usage is the presence of
+ an extra argument - the fixture name. And now, you not only have direct access to the public and protected members
+ of the fixture, but also do not need to refer to the fixture name in the test case definition. All test cases are
+ assigned the same fixture automatically.
+ </para>
+
+ <tip>
+ <simpara>
+ If necessary you can reset the fixture for a particular test case with the use of the macro
+ <macroname>BOOST_FIXTURE_TEST_CASE</macroname>.
+ </simpara>
+ </tip>
+
+ <note>
+ <simpara>
+ The fixture assignment is &quot;deep&quot;. In other words unless reset by another
+ <macroname>BOOST_FIXTURE_TEST_SUITE</macroname> or <macroname>BOOST_FIXTURE_TEST_CASE</macroname> definition the
+ same fixture is assigned to all test cases, including the ones that belong to sub test suites.
+ </simpara>
+ </note>
+
+ <btl-example name="example19">
+ <title>Test suite level fixture</title>
+ </btl-example>
+ </section>
+
+ <section id="utf.user-guide.fixture.global">
+ <title>Global fixture</title>
+
+ <para role="first-line-indented">
+ Any global initialization that needs to be performed every time testing begins or a global cleanup that is to be
+ performed once testing is finished is called a global fixture. The &utf; global fixture design is based on the
+ generic test fixture model and is supported by the utility class boost::unit_test::global_fixture. The global
+ fixture design allows any number of global fixtures to be defined in any test file that constitutes a test module.
+ Though some initialization can be implemented in the test module initialization function, there are several
+ reasons to prefer the global fixture approach:
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <simpara>There is no place for cleanup/teardown operations in the initialization function.</simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ Unlike the initialization function, the global fixture setup method invocation is guarded by the execution
+ monitor. That means that all uncaught errors that occur during initialization are properly reported.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ Any number of different global fixtures can be defined, which allows you to split initialization code by
+ category.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ The fixture allows you to place matching setup/teardown code in close vicinity in your test module code.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ If the whole test tree is constructed automatically, the initialization function is empty and auto-generated by
+ the &utf;. Introducing an initialization function can be more work than using the global fixture facility,
+ while a global fixture is more to the point.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ Since all fixtures follow the same generic model you can easily switch from local per test case fixtures to
+ the global one.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ If you are using the interactive test runner (not supplied yet), global test fixtures are applied to every run,
+ while an initialization function is executed only once during test module startup (just make sure that
+ it's what you really want).
+ </simpara>
+ </listitem>
+ </itemizedlist>
+
+ <para role="first-line-indented">
+ To define a global test module fixture you need to implement a class that matches the generic fixture model and
+ pass it as an argument to the macro BOOST_GLOBAL_FIXTURE.
+ </para>
+
+ <inline-synopsis>
+ <macro name="BOOST_GLOBAL_FIXTURE" kind="functionlike">
+ <macro-parameter name="fixure_name"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ The statement that performs the global fixture definition has to reside at test file scope.
+ </para>
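The effect can be approximated in plain C++ by a namespace-scope object of the fixture class: its constructor runs before main() starts and its destructor runs after main() returns. A sketch with hypothetical names (the real macro additionally guards the setup with the execution monitor):

```cpp
#include <cassert>
#include <string>
#include <vector>

// shared trace; a function-local static avoids static initialization order issues
std::vector<std::string>& trace() {
    static std::vector<std::string> t;
    return t;
}

struct global_setup {
    global_setup()  { trace().push_back("global setup"); }    // before any test runs
    ~global_setup() { trace().push_back("global teardown"); } // after all tests finish
};

// a namespace-scope instance: roughly what BOOST_GLOBAL_FIXTURE(global_setup); arranges
static global_setup global_setup_instance;
```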
+
+ <btl-example name="example20">
+ <title>Global fixture</title>
+ </btl-example>
+ </section>
+</section>
\ No newline at end of file

Added: trunk/libs/test/doc/src/utf.users-guide.test-organization.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/utf.users-guide.test-organization.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,840 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE section PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
+ <!ENTITY utf "<acronym>UTF</acronym>">
+]>
+<section id="utf.user-guide.test-organization">
+ <title>Test organization &hellip; or the house that Jack built</title>
+ <titleabbrev>Test organization</titleabbrev>
+
+ <para role="first-line-indented">
+ If you look at many legacy test modules, chances are that they are implemented as one big test function that
+ consists of a mixture of check and output statements. Is there anything wrong with that? Yes. There are various
+ disadvantages to the single test function approach:
+ </para>
+
+ <itemizedlist mark="square">
+ <listitem>
+ <simpara>
+ One big function tends to become really difficult to manage if the number of checks exceeds a reasonable limit
+ (true for any large function). What is tested and where - who knows?
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ Many checks require similar preparations. This results in code repetition within the test function.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ If a fatal error or an exception is caused by any check within the test function, the rest of the tests are
+ skipped and there is no way to prevent this.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ There is no way to perform only the checks for a particular subsystem of the tested unit.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ There is no summary of how the different subsystems of the tested unit performed in the test.
+ </simpara>
+ </listitem>
+ </itemizedlist>
+
+ <para role="first-line-indented">
+ The above points should make it clear that it's preferable to split <link linkend="test-module.def">test module
+ </link> into smaller units. These units are test cases. A test case has to be constructed based on some kind of
+ function and registered in a test tree, so that the test runner knows how to invoke it. There are different
+ possible designs for the test case construction problem: inheritance from a predefined base class, a specifically
+ named member function and so on. The &utf; opted to avoid classes altogether and to use the least intrusive &quot;
+ generic callback&quot; approach. The facility the &utf; provides requires a specific function signature and allows
+ adopting any function or function object that matches the signature as a test case. The signatures the &utf;
+ supports are:
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <simpara>
+ <link linkend="utf.user-guide.test-organization.nullary-test-case">Nullary function</link>
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ <link linkend="utf.user-guide.test-organization.unary-test-case">Unary function</link>
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ <link linkend="utf.user-guide.test-organization.test-case-template">Nullary function template</link>
+ </simpara>
+ </listitem>
+ </itemizedlist>
+
+ <para role="first-line-indented">
+ To solve the test tree creation problem the &utf; provides facilities for
+ <link linkend="utf.user-guide.test-organization.test-suite">test suite creation</link>.
+ </para>
+
+ <para role="first-line-indented">
+ The generic test case construction design used by the &utf; causes the test case implementation (test function body)
+ and test case creation/registration points to be remote. As a result you may forget to register the test case
+ and it's never going to be executed, even though it's present in the test file. To alleviate this issue
+ the &utf; presents facilities for automated (in place) test case creation and registration in the test tree. These
+ facilities sacrifice some generality and work for selected test function signatures only. But the result is that
+ library users are relieved from the necessity to manually register test cases. These facilities are the most
+ user-friendly and are recommended to be used whenever possible. In addition they support automated registration
+ of test suites, thus allowing the whole test tree to be created without any use of manual registration.
+ </para>
+
+ <para role="first-line-indented">
+ A single test module may mix both automated and manual test case
+ registration. In other words, within the same test module you can have both test cases implemented remotely and
+ registered manually in the test module initialization function and test cases that are registered automatically at
+ implementation point.
+ </para>
+
+ <para role="first-line-indented">
+ In some cases it's desirable to allow some &quot;expected&quot; failures in a test case without failing the
+ test module. To support this request the &utf; allows specifying the number of
+ <link linkend="utf.user-guide.test-organization.expected-failures">expected failures</link> in a test case.
+ </para>
+
+ <section id="utf.user-guide.test-organization.nullary-test-case">
+ <title>Nullary function based test case</title>
+
+ <para role="first-line-indented">
+ The most widely used are test cases based on a nullary function. These include nullary free functions, nullary
+ function objects created with <functionname>boost::bind</functionname> and nullary <classname>boost::function</classname>
+ instances. The simplest is a free function and the &utf; provides facilities to create a free function based test
+ case that is automatically registered. Here are the two construction interfaces:
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <simpara>
+ <link linkend="utf.user-guide.test-organization.manual-nullary-test-case">Manually registered test case</link>
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ <link linkend="utf.user-guide.test-organization.auto-nullary-test-case">Test case with automated registration</link>
+ </simpara>
+ </listitem>
+ </itemizedlist>
+
+ <section id="utf.user-guide.test-organization.manual-nullary-test-case">
+ <title>Manually registered nullary function based test case</title>
+ <titleabbrev>Manual registration</titleabbrev>
+
+ <para role="first-line-indented">
+ To create a test case manually, employ the macro BOOST_TEST_CASE:
+ </para>
+
+ <inline-synopsis>
+ <macro name="BOOST_TEST_CASE" kind="functionlike">
+ <macro-parameter name="test_function"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ BOOST_TEST_CASE creates an instance of the class boost::unit_test::test_case and returns a pointer to the
+ constructed instance. The test case name is deduced from the macro argument test_function. If you prefer to
+ assign a different test case name, you have to use the underlying make_test_case interface instead. To
+ register a new test case, employ the method test_suite::add. Both test case creation and registration are
+ performed in the test module initialization function.
+ </para>
+
+ <para role="first-line-indented">
+ Here is the simplest possible manually registered test case. This example employs the original test module
+ initialization function specification. A single test case is created and registered in the master test suite.
+ Note that the free function name is passed by address to the macro BOOST_TEST_CASE.
+ </para>
+
+ <btl-example name="example01">
+ <title>Nullary free function manually registered</title>
+ </btl-example>
+
+ <para role="first-line-indented">
+ A test case can be implemented as a method of a class. In this case a pointer to the class instance has to be
+ bound to the test method to create a test case. You can use the same instance of the class for multiple test
+ cases. The &utf; doesn't take ownership of the class instance and you are required to manage the class
+ instance's lifetime yourself.
+ </para>
+
+ <warning>
+ <simpara>
+ The class instance can't be defined in the initialization function scope, since it becomes invalid as
+ soon as the test execution exits it. It needs to be either defined statically/globally or managed using a
+ shared pointer.
+ </simpara>
+ </warning>
+
+ <btl-example name="example02">
+ <title>Nullary method of a class bound to global class instance and manually registered</title>
+ </btl-example>
+
+ <btl-example name="example03">
+ <title>Nullary method of a class bound to shared class instance and manually registered</title>
+ </btl-example>
+
+ <para role="first-line-indented">
+ If you do not need to reuse the test class instance and can't or do not wish to create the test class
+ instance globally, it may be easier and safer to create an instance on the stack of a free function:
+ </para>
+
+ <btl-example name="example04">
+ <title>Nullary method of a class bound to local class instance inside free function</title>
+ </btl-example>
+
+ <para role="first-line-indented">
+ If you have to perform the same set of tests with different sets of parameters you may want to base your test
+ case on a function with arguments and bind particular parameters during test case creation.
+ </para>
+
+ <warning>
+ <simpara>
+ If you bind parameters by reference or pointer, the referenced value can't have local storage in the
+ test module initialization function.
+ </simpara>
+ </warning>
+
+ <btl-example name="example05">
+ <title>Binary free function bound to set of different parameter pairs</title>
+
+ <para role="first-line-indented">
+ This example employs the alternative test module initialization function specification.
+ </para>
+ </btl-example>
+
+ <para role="first-line-indented">
+ The &utf; also presents an alternative method for parameterized test case creation, which is covered in
+ <xref linkend="utf.user-guide.test-organization.unary-test-case"/>.
+ </para>
+ </section>
+
+ <section id="utf.user-guide.test-organization.auto-nullary-test-case">
+ <title>Nullary function based test case with automated registration</title>
+ <titleabbrev>Automated registration</titleabbrev>
+
+ <para role="first-line-indented">
+ To create a nullary free function based test case, which is registered at the point of implementation, employ the
+ macro BOOST_AUTO_TEST_CASE.
+ </para>
+
+ <inline-synopsis>
+ <macro name="BOOST_AUTO_TEST_CASE" kind="functionlike">
+ <macro-parameter name="test_case_name"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ The macro is designed to closely mimic nullary free function syntax. The changes required to register in place an
+ existing test case, implemented as a free function, are illustrated in the following
+ example (compare with <xref linkend="utf.user-guide.test-organization.manual-nullary-test-case.example01"/>):
+ </para>
+
+ <btl-example name="example06">
+ <title>Nullary function based test case with automated registration</title>
+ </btl-example>
+
+ <para role="first-line-indented">
+ With this macro you don't need to implement the initialization function at all. The macro creates and
+ registers the test case with the name free_test_function automatically.
+ </para>
+ </section>
+ </section>
+ <section id="utf.user-guide.test-organization.unary-test-case">
+ <title>Unary function based test case</title>
+
+ <para role="first-line-indented">
+ Some tests are required to be repeated for a series of different input parameters. One way to achieve this is
+ to manually register a test case for each parameter, as in the example above. You can also invoke a test function with
+ all parameters manually from within your test case, like this:
+ </para>
+
+ <btl-snippet name="snippet1"/>
+
+ <para role="first-line-indented">
+ The &utf; presents a better solution for this problem: the unary function based test case, also referred to as a
+ parameterized test case. The unary test function can be a free function, a unary functor (for example created
+ with boost::bind) or a unary method of a class with a bound test class instance. The test function is converted
+ into a test case using the macro BOOST_PARAM_TEST_CASE. The macro expects a collection of parameters (passed as
+ two input iterators) and a unary test function:
+ </para>
+
+ <inline-synopsis>
+ <macro name="BOOST_PARAM_TEST_CASE" kind="functionlike">
+ <macro-parameter name="test_function"/>
+ <macro-parameter name="params_begin"/>
+ <macro-parameter name="params_end"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ BOOST_PARAM_TEST_CASE creates an instance of the test case generator. When passed to the method test_suite::add,
+ the generator produces a separate sub test case for each parameter in the parameters collection and registers
+ it immediately in a test suite. Each test case is based on a test function with the parameter bound by value,
+ even if the test function expects a parameter by reference. Because the parameter value is stored along with the
+ bound test function, you are freed from the necessity of managing the parameters' lifetime. For example, they can be defined
+ in the test module initialization function scope.
+ </para>
+
+ <para role="first-line-indented">
+ All sub test case names are deduced from the macro argument test_function. If you prefer to assign different
+ names, you have to use the underlying make_test_case interface instead. Both test case creation and
+ registration are performed in the test module initialization function.
+ </para>
+
+ <para role="first-line-indented">
+ The parameterized test case facility is preferable to the approach in the example above, since the execution of
+ each sub test case is guarded and counted independently. It produces a better test log/results report (in the
+ example above, in case of failure, you can't say which parameter is at fault) and allows you to test against
+ all parameters even if one of them causes termination of a particular sub test case.
+ </para>
+
+ <para role="first-line-indented">
+ In comparison with registering a test case manually for each parameter, the parameterized test case
+ facility is more concise and more easily extensible.
+ </para>
+
+ <para role="first-line-indented">
+ In the following simple example the same test, implemented in <code>free_test_function</code>, is
+ performed for 5 different parameters. The parameters are defined in the test module initialization function
+ scope. The master test suite contains 5 independent test cases.
+ </para>
+
+ <btl-example name="example07">
+ <title>Unary free function based test case</title>
+ </btl-example>
+
+ <para role="first-line-indented">
+ The next example is similar, but instead of a free function it uses a method of a class. Even though the parameters are
+ passed into the test method by reference, you can still define them in the test module initialization function scope.
+ This example employs the alternative test module initialization function specification.
+ </para>
+
+ <btl-example name="example08">
+ <title>Unary class method based test case</title>
+ </btl-example>
+ </section>
+
+ <section id="utf.user-guide.test-organization.test-case-template">
+ <title>Test case template</title>
+ <para role="first-line-indented">
+ To test a template based component it's frequently necessary to perform the same set of checks for a
+ component instantiated with different template parameters. The &utf; provides the ability to create a series of
+ test cases based on a list of desired types and a function similar to a nullary function template. This facility is
+ called a test case template. Here are the two construction interfaces:
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <simpara>
+ <link linkend="utf.user-guide.test-organization.manual-test-case-template">Manually registered test case
+ template</link>
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ <link linkend="utf.user-guide.test-organization.auto-test-case-template">Test case template with automated
+ registration</link>
+ </simpara>
+ </listitem>
+ </itemizedlist>
+
+ <section id="utf.user-guide.test-organization.manual-test-case-template">
+ <title>Manually registered test case template</title>
+ <titleabbrev>Manual registration</titleabbrev>
+
+ <para role="first-line-indented">
+ One way to perform the same set of checks for a component instantiated with different template parameters is
+ illustrated in the following example:
+ </para>
+
+ <btl-snippet name="snippet2"/>
+
+ <para role="first-line-indented">
+ There are several problems/inconveniences with the above approach, including:
+ <itemizedlist>
+ <listitem>
+ <simpara>
+ A fatal error in one of the invocations will stop the whole test case and skip the invocations with the remaining types
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ You need to repeat the function invocation manually for all the parameters you are interested in
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ You need two functions to implement the test
+ </simpara>
+ </listitem>
+ </itemizedlist>
+ Ideally the test case template would be based on a nullary function template (like single_test above).
+ Unfortunately function templates are neither addressable nor usable as template parameters. To alleviate
+ the issue, the manually registered test case template facility consists of two co-working macros:
+ BOOST_TEST_CASE_TEMPLATE_FUNCTION and BOOST_TEST_CASE_TEMPLATE. The former is used to define the test case
+ template body; the latter, to create and register test cases based on it.
+ </para>
+
+ <para role="first-line-indented">
+ The macro BOOST_TEST_CASE_TEMPLATE_FUNCTION requires two arguments: the name of the test case template and the
+ name of the format type parameter:
+ </para>
+
+ <inline-synopsis>
+ <macro name="BOOST_TEST_CASE_TEMPLATE_FUNCTION" kind="functionlike">
+ <macro-parameter name="test_case_name"/>
+ <macro-parameter name="type_name"/>
+ </macro>
+ </inline-synopsis>
+
+ <btl-snippet name="snippet3"/>
+
+ <para role="first-line-indented">
+ The macro BOOST_TEST_CASE_TEMPLATE_FUNCTION is intended to be used in place of a nullary function template
+ signature:
+ </para>
+
+ <btl-snippet name="snippet4"/>
+
+ <para role="first-line-indented">
+ The only difference is that the BOOST_TEST_CASE_TEMPLATE_FUNCTION makes the test case template name usable in
+ the template argument list.
+ </para>
+
+ <para role="first-line-indented">
+ BOOST_TEST_CASE_TEMPLATE requires two arguments: the name of the test case template and a Boost.MPL-compatible
+ collection of types to instantiate it with. The names passed to both macros should be the same.
+ </para>
+
+ <inline-synopsis>
+ <macro name="BOOST_TEST_CASE_TEMPLATE" kind="functionlike">
+ <macro-parameter name="test_case_name"/>
+ <macro-parameter name="collection_of_types"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ BOOST_TEST_CASE_TEMPLATE creates an instance of the test case generator. When passed to the method
+ test_suite::add, the generator produces a separate sub test case for each type in the supplied collection of
+ types and registers it immediately in the test suite. Each test case is based on the test case template body
+ instantiated with a particular test type.
+ </para>
+
+ <para role="first-line-indented">
+ Sub test case names are deduced from the macro argument test_case_name. If you prefer to assign different test
+ case names, you need to use the underlying make_test_case interface instead. Both test case creation and
+ registration are performed in the test module initialization function.
+ </para>
+
+ <para role="first-line-indented">
+ The test case template facility is preferable to the approach in the example above, since the execution of each sub
+ test case is guarded and counted separately. It produces a better test log/results report (in the example above, in
+ case of failure, you can't say which type is at fault) and allows you to test all types even if one of
+ them causes termination of the sub test case.
+ </para>
+
+ <btl-example name="example09">
+ <title>Manually registered test case template</title>
+ </btl-example>
+ </section>
+
+ <section id="utf.user-guide.test-organization.auto-test-case-template">
+ <title>Test case template with automated registration</title>
+ <titleabbrev>Automated registration</titleabbrev>
+
+ <para role="first-line-indented">
+ To create a test case template registered at the place of its implementation, employ the macro
+ BOOST_AUTO_TEST_CASE_TEMPLATE. This facility is also called <firstterm>auto test case template</firstterm>.
+ </para>
+
+ <inline-synopsis>
+ <macro name="BOOST_AUTO_TEST_CASE_TEMPLATE" kind="functionlike">
+ <macro-parameter name="test_case_name"/>
+ <macro-parameter name="formal_type_parameter_name"/>
+ <macro-parameter name="collection_of_types"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ The macro BOOST_AUTO_TEST_CASE_TEMPLATE requires three arguments:
+ </para>
+ <variablelist>
+ <?dbhtml list-presentation="list"?>
+ <?dbhtml term-width="60%" list-width="100%"?>
+ <?dbhtml term-separator=" - "?> <!-- TO FIX: where separator? -->
+
+ <varlistentry>
+ <term>The test case template name</term>
+ <listitem>
+ <simpara>
+ a unique test case template identifier
+ </simpara>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>The name of a formal template parameter</term>
+ <listitem>
+ <simpara>
+ name of the type the test case template is instantiated with
+ </simpara>
+ </listitem>
+ </varlistentry>
+ <varlistentry>
+ <term>The collection of types to instantiate test case template with</term>
+ <listitem>
+ <simpara>
+ an arbitrary MPL sequence
+ </simpara>
+ </listitem>
+ </varlistentry>
+ </variablelist>
+
+ <btl-example name="example10">
+ <title>Test case template with automated registration</title>
+ </btl-example>
+ </section>
+ </section>
+
+ <section id="utf.user-guide.test-organization.test-suite">
+ <title>Test suite</title>
+
+ <para role="first-line-indented">
+ If you consider test cases as leaves on the test tree, a test suite can be considered a branch and the master
+ test suite the trunk. Unlike real trees though, our tree in many cases consists only of leaves attached
+ directly to the trunk. It is common for all test cases to reside directly in the master test suite. If you do
+ want to construct a hierarchical test suite structure, the &utf; provides both manual and automated
+ test suite creation and registration facilities:
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <simpara>
+ <link linkend="utf.user-guide.test-organization.manual-test-suite">Manually registered test suite</link>
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ <link linkend="utf.user-guide.test-organization.auto-test-suite">Test suite with automated registration</link>
+ </simpara>
+ </listitem>
+ </itemizedlist>
+
+ <section id="utf.user-guide.test-organization.test-suite-registration-interface">
+ <title>Test unit registration interface</title>
+
+ <para role="first-line-indented">
+ The &utf; models the notion of a test case container - a test suite - using the class boost::unit_test::test_suite. For
+ the complete class interface reference, check the advanced section of this documentation. Here you should only be
+ interested in a single test unit registration interface:
+ </para>
+
+ <programlisting>void test_suite::add( test_unit* tc, counter_t expected_failures = 0, int timeout = 0 );</programlisting>
+
+ <para role="first-line-indented">
+ The first parameter is a pointer to a newly created test unit. The second optional parameter -
+ expected_failures - defines the number of test assertions that are expected to fail within the test unit. By
+ default no errors are expected.
+ </para>
+
+ <caution>
+ <simpara>
+ Be careful when supplying a number of expected failures for test suites. By default the &utf; calculates the
+ number of expected failures in a test suite as the sum of the appropriate values in all test units that constitute
+ it, and it rarely makes sense to change this.
+ </simpara>
+ </caution>
+
+ <para role="first-line-indented">
+ The third optional parameter - timeout - defines the timeout value for the test unit. As of now the &utf;
+ isn't able to set a timeout for the test suite execution, so this parameter makes sense only for test case
+ registration. By default no timeout is set. See the method
+ <methodname>boost::execution_monitor::execute</methodname> for more details about the timeout value.
+ </para>
+
+ <para role="first-line-indented">
+ To register a group of test units in one function call, boost::unit_test::test_suite provides another add
+ interface, covered in the advanced section of this documentation.
+ </para>
+ </section>
+
+ <section id="utf.user-guide.test-organization.manual-test-suite">
+ <title>Manually registered test suites</title>
+ <titleabbrev>Manual registration</titleabbrev>
+
+ <para role="first-line-indented">
+ To create a test suite manually, employ the macro BOOST_TEST_SUITE:
+ </para>
+
+ <inline-synopsis>
+ <macro name="BOOST_TEST_SUITE" kind="functionlike">
+ <macro-parameter name="test_suite_name"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ BOOST_TEST_SUITE creates an instance of the class boost::unit_test::test_suite and returns a pointer to the
+ constructed instance. Alternatively you can create an instance of class boost::unit_test::test_suite yourself.
+ </para>
+
+ <note>
+ <simpara>
+ boost::unit_test::test_suite instances have to be allocated on the heap and the compiler won't allow you
+ to create one on the stack.
+ </simpara>
+ </note>
+
+ <para role="first-line-indented">
+ A newly created test suite has to be registered in its parent using the add interface. Both test suite creation and
+ registration are performed in the test module initialization function.
+ </para>
+
+ <btl-example name="example11">
+ <title>Manually registered test suites</title>
+ </btl-example>
+
+ <para role="first-line-indented">
+ This example creates a test tree, which can be represented by the following hierarchy:
+ </para>
+
+ <mediaobject>
+ <imageobject>
+ <imagedata format="jpg" fileref="../img/class-hier.jpg"/>
+ </imageobject>
+ </mediaobject>
+ </section>
+
+ <section id="utf.user-guide.test-organization.auto-test-suite">
+ <title>Test suites with automated registration</title>
+ <titleabbrev>Automated registration</titleabbrev>
+
+ <para role="first-line-indented">
+ The solution the &utf; presents for automated test suite creation and registration is designed to facilitate
+ multiple points of definition, arbitrary test suite depth and smooth integration with the automated test case
+ creation and registration facility. It should significantly simplify the test tree construction process in
+ comparison with the manual explicit registration case.
+ </para>
+
+ <para role="first-line-indented">
+ The implementation is based on the order of file scope variable definitions within a single compilation unit.
+ The semantics of this facility are very similar to the namespace feature of C++, including support for test suite
+ extension. To start a test suite, use the macro BOOST_AUTO_TEST_SUITE. To end a test suite, use the macro
+ BOOST_AUTO_TEST_SUITE_END. The same test suite can be restarted multiple times inside the same test file or in
+ different test files. As a result, all the test units will be part of the same test suite in the constructed test tree.
+ </para>
+
+ <inline-synopsis>
+ <macro name="BOOST_AUTO_TEST_SUITE" kind="functionlike">
+ <macro-parameter name="test_suite_name"/>
+ </macro>
+ <macro name="BOOST_AUTO_TEST_SUITE_END" kind="functionlike"/>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ Test units defined in between the test suite start and end declarations become members of the test suite. A test
+ unit always becomes a member of the closest declared test suite. Test units declared at test file scope
+ become members of the master test suite. There is no limit on the depth of test suite inclusion.
+ </para>
+
+ <btl-example name="example12">
+ <title>Test suites with automated registration</title>
+
+ <para role="first-line-indented">
+ This example creates a test tree that matches exactly the one created in the manual test suite registration
+ example. As you can see, the test tree construction in this example is more straightforward and automated.
+ </para>
+ </btl-example>
+
+ <btl-example name="example53">
+ <title>Example of test suite extension using automated registration facility</title>
+
+ <para role="first-line-indented">
+ In this example the test suite test_suite consists of two parts. Their definitions are remote and separated by another
+ test case. In fact these parts may even reside in different test files. The resulting test tree remains the same. As
+ you can see from the output, both test_case1 and test_case2 reside in the same test suite test_suite.
+ </para>
+ </btl-example>
+
+ </section>
+ <section id="utf.user-guide.test-organization.master-test-suite">
+ <title>Master Test Suite</title>
+
+ <para role="first-line-indented">
+ As defined in the introduction section, the master test suite is the root node of a test tree. Each test module built
+ with the &utf; always has the master test suite defined. The &utf; maintains the master test suite instance
+ internally. All other test units are registered as direct or indirect children of the master test suite.
+ </para>
+
+ <programlisting>namespace boost {
+namespace unit_test {
+class master_test_suite_t : public test_suite {
+public:
+ int argc;
+ char** argv;
+};
+
+} // namespace unit_test
+
+} // namespace boost</programlisting>
+
+ <para role="first-line-indented">
+ To access the single instance of the master test suite, use the following interface:
+ </para>
+
+ <programlisting>namespace boost {
+namespace unit_test {
+namespace framework {
+
+master_test_suite_t&amp; master_test_suite();
+
+} // namespace framework
+} // namespace unit_test
+} // namespace boost</programlisting>
+
+ <section id="utf.user-guide.test-organization.cla-access" >
+ <title>Command line arguments access interface</title>
+
+ <para role="first-line-indented">
+ The master test suite is implemented as an extension of the regular test suite, since it maintains references to the
+ command line arguments passed to the test module. To access the command line arguments, use:
+ </para>
+
+ <programlisting>boost::unit_test::framework::master_test_suite().argc
+boost::unit_test::framework::master_test_suite().argv</programlisting>
+
+ <para role="first-line-indented">
+ In the example below, references to the command line arguments are accessible either as initialization function
+ parameters or as members of the master test suite. Both references point to the same values. A test module that
+ uses the alternative initialization function specification can only access command line arguments through the
+ master test suite.
+ </para>
+
+ <note>
+ <simpara>
+ This interface for runtime parameter access is temporary. It's planned to be updated once runtime
+ parameters support is redesigned.
+ </simpara>
+ </note>
+
+ <para role="first-line-indented">
+ Returning to the free function example, let's modify the initialization function to check for the absence of any
+ test module arguments.
+ </para>
+
+ <btl-example name="example13">
+ <title>Command line access in initialization function</title>
+ </btl-example>
+ </section>
+
+ <section id="utf.user-guide.test-organization.master-test-suite-name">
+ <title>Naming</title>
+ <para role="first-line-indented">
+ The master test suite is created with the default name &quot;Master Test Suite&quot;. There are two methods to
+ reset the name to a different value: using the macro <xref linkend="utf.flag.module" endterm="utf.flag.module"/>
+ or from within the test module initialization function. The former is used for test modules that don't have a
+ manually implemented initialization function. The following examples illustrate these methods.
+ </para>
+
+ <btl-example name="example14">
+ <title>Naming master test suite using the macro <xref linkend="utf.flag.module" endterm="utf.flag.module"/></title>
+
+ <para role="first-line-indented">
+ If the macro <xref linkend="utf.flag.module" endterm="utf.flag.module"/> is defined, the test module initialization
+ function is <link linkend="utf.user-guide.initialization.auto-generation">automatically generated</link> and the
+ macro value becomes the name of the master test suite. The name may include spaces.
+ </para>
+ </btl-example>
+
+ <btl-example name="example15">
+ <title>Naming master test suite explicitly in the test module initialization function</title>
+
+ <para role="first-line-indented">
+ Without the <xref linkend="utf.flag.main" endterm="utf.flag.main"/> and the <xref linkend="utf.flag.module"
+ endterm="utf.flag.module"/> flags defined, the test module initialization function has to be manually implemented.
+ The master test suite name can be reset at any point within this function.
+ </para>
+ </btl-example>
+ </section>
+ </section>
+ </section>
+
+ <section id="utf.user-guide.test-organization.expected-failures">
+ <title>Expected failures specification</title>
+
+ <para role="first-line-indented">
+ While in a perfect world all test assertions should pass in order for a test module to pass, in some situations
+ it is desirable to temporarily allow particular tests to fail. For example, a particular feature may not be
+ implemented yet and one needs to prepare the library for release, or a particular test may fail on some
+ platforms. To avoid a nagging red box in the regression tests table, you can use the expected failures feature.
+ </para>
+
+ <para role="first-line-indented">
+ This feature allows specifying an expected number of failed assertions per test unit. The value is specified
+ during test tree construction, and can't be updated during test execution.
+ </para>
+
+ <para role="first-line-indented">
+ The feature is not intended to be used to check for expected functionality failures. To check that a particular
+ input causes an exception to be thrown, use the <macroname>BOOST_CHECK_THROW</macroname> family of testing
+ tools.
+ </para>
+
+ <para role="first-line-indented">
+ The usage of this feature should be limited and employed only after careful consideration. In general you should
+ only use this feature when it is necessary to force a test module to pass without actually fixing the problem.
+ Obviously, excessive usage of expected failures defeats the purpose of unit testing. In most cases it only
+ needs to be applied temporarily.
+ </para>
+
+ <para role="first-line-indented">
+ You also need to remember that the expected failure specification is per test case: any failed
+ assertion within that test case can satisfy the expected failures quota, so it is possible for an
+ unexpected failure to satisfy the quota.
+ </para>
+
+ <note>
+ <simpara>
+ If the assertion at fault is fixed and passes while the expected failures specification is still present, the test
+ case is going to fail, since the number of failures is smaller than expected.
+ </simpara>
+ </note>
+
+ <section id="utf.user-guide.test-organization.manual-expected-failures">
+ <title>Usage with manually registered test cases</title>
+
+ <para role="first-line-indented">
+ To set the number of expected failures for a manually registered test unit, pass it as the second argument to the
+ test_suite::add call during test unit registration.
+ </para>
+
+ <btl-example name="example16">
+ <title>Expected failures specification for manually registered test case</title>
+ </btl-example>
+ </section>
+ <section id="utf.user-guide.test-organization.auto-expected-failures">
+ <title>Usage with automatically registered test cases</title>
+
+ <para role="first-line-indented">
+ To set the number of expected failures for an automatically registered test unit, use the macro
+ BOOST_AUTO_TEST_CASE_EXPECTED_FAILURES before the test case definition.
+ </para>
+
+ <inline-synopsis>
+ <macro name="BOOST_AUTO_TEST_CASE_EXPECTED_FAILURES" kind="functionlike">
+ <macro-parameter name="test_case_name"/>
+ <macro-parameter name="number_of_expected_failures"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ You can use this macro both at file scope and inside a test suite. Moreover, you can use it even if the names of
+ test units coincide in different test suites. The expected failures specification applies to the test unit belonging
+ to the same test suite where BOOST_AUTO_TEST_CASE_EXPECTED_FAILURES resides.
+ </para>
+
+ <btl-example name="example17">
+ <title>Expected failures specification for automatically registered test case</title>
+ </btl-example>
+ </section>
+ </section>
+</section>
\ No newline at end of file

Added: trunk/libs/test/doc/src/utf.users-guide.test-output.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/utf.users-guide.test-output.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,638 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE section PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
+ <!ENTITY utf "<acronym>UTF</acronym>">
+]>
+<section id="utf.user-guide.test-output">
+ <title>Test Output &hellip; or let's see what you got for your money</title>
+ <titleabbrev>Test Output </titleabbrev>
+
+ <para role="first-line-indented">
+ The output produced by a test module is one of the major assets the &utf; brings to users. In comparison with any
+ kind of manual/assert based solution, the &utf; provides the following services:
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <simpara>All test errors are reported uniformly</simpara>
+ <simpara>
+ The test execution monitor, along with the standardized output from all included
+ <link linkend="utf.testing-tools">testing tools</link>, provides uniform reporting for all errors, including
+ fatal errors like memory access violations and uncaught exceptions.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>Detailed information on the source of an error</simpara>
+ <simpara>
+ The &utf;'s testing-tool-based assertions provide as much information as possible about the cause of an error,
+ usually allowing you to deduce what is wrong without entering the debugger or analyzing a core dump.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ Separation of the test errors description (test log) from the results report summary (test results report)
+ </simpara>
+ <simpara>
+ The information produced during test execution, including all error, warning and info messages from the testing
+ tools and notifications of executed test units, constitutes the test log. By default all entries in the test log
+ are directed to the standard output. Once testing is completed, the &utf; may produce a summary test report with
+ different levels of detail. The test report is by default directed to the standard error output.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>Flexibility in what is shown in the output</simpara>
+ <simpara>
+ The &utf; provides the ability to configure what is shown in both the test log and the test report. The
+ configuration is supported both at runtime, during test module invocation, and at compile time from within a
+ test module.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>Flexibility in how output is formatted</simpara>
+ <simpara>
+ The &utf; provides the ability to configure the format of the test module output. At the moment only two formats
+ are supported by the &utf; itself, but the well-defined public interface allows you to customize the output for
+ your purposes almost any way you want.
+ </simpara>
+ </listitem>
+ </itemizedlist>
+
+ <section id="utf.user-guide.test-output.log">
+ <title>Test log output</title>
+ <titleabbrev>Test log</titleabbrev>
+
+ <para role="first-line-indented">
+ The test log is produced during test execution. All entries in the test log are assigned a particular log
+ level. Only the entries with a level that exceeds the <firstterm>active log level threshold</firstterm> actually
+ appear in the test log output. Log levels are arranged by the &quot;importance&quot; of the log entries. Here is
+ the list of all levels in order of increasing &quot;importance&quot;:
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <simpara>Success information messages</simpara>
+ <simpara>
+ This category includes messages that provide information on successfully passed assertions
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>Test tree traversal notifications</simpara>
+ <simpara>
+ This category includes messages that are produced by the &utf; core and indicate which test suites/cases are
+ currently being executed or skipped
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>General information messages</simpara>
+ <simpara>
+ This category includes general information messages produced in most cases by a test module author using the
+ macro <macroname>BOOST_TEST_MESSAGE</macroname>.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>Warning messages</simpara>
+ <simpara>
+ This category includes messages produced by failed warning level assertions.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>Non fatal error messages</simpara>
+ <simpara>
+ This category includes messages produced by failed check level assertions
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>Uncaught C++ exceptions notifications</simpara>
+ <simpara>
+ This category includes messages that are produced by the &utf; and provide detailed information on the C++
+ exceptions uncaught by the test case body.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>Non-fatal system error</simpara>
+ <simpara>
+ This category includes messages that are produced by the &utf; itself and provide information about caught
+ non-fatal system errors. For example, it includes messages produced in the case of a test case timeout or when
+ floating point calculation errors are caught.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>Fatal system error</simpara>
+ <simpara>
+ This category includes messages produced by failed require level assertions and by the &utf; itself in case of
+ abnormal test case termination.
+ </simpara>
+ </listitem>
+ </itemizedlist>
+
+ <note>
+ <simpara>
+ The active log level works as a threshold, not as a selector. For a given active log level threshold, all
+ test log entries with &quot;importance&quot; higher than the threshold are enabled and all test log entries with
+ &quot;importance&quot; below the threshold are disabled.
+ </simpara>
+ </note>
+
+ <para role="first-line-indented">
+ In addition to the levels described above the test log defines two special log levels. The current log level can
+ be set to:
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <simpara>All messages</simpara>
+ <simpara>
+ If the active log level threshold is set to this value, all test log entries appear in the output. In practice
+ this is equivalent to setting the active log level threshold to &quot;success information messages&quot;
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>Nothing</simpara>
+ <simpara>
+ If the active log level threshold is set to this value, none of the test log entries appear in the output. This
+ log level is used to execute a &quot;silent&quot; test that doesn't produce any test log and only generates a
+ result code indicating whether the test failed or passed.
+ </simpara>
+ </listitem>
+ </itemizedlist>
+
+ <para role="first-line-indented">
+ By default the active log level threshold is set to &quot;non fatal error messages&quot; and the test log output
+ is generated in the human readable format. The active log level threshold and the output format can be configured
+ at runtime during a test module invocation and at compile time from within a test module using the test log
+ public interfaces. For example, for automated test module output processing it might be more convenient to use
+ the XML based format.
+ </para>
+
+ <para role="first-line-indented">
+ In most cases the &utf; can't provide the exact location where a system error occurs or where an uncaught C++
+ exception is thrown from. To be able to pinpoint it as closely as possible, the &utf; keeps track of checkpoints -
+ the locations a test module has passed through. Test case entrance and exit points and testing tool invocation
+ points are tracked by the &utf; automatically. Any other checkpoints should be entered by you manually. The test
+ log provides two macros for this purpose: <macroname>BOOST_TEST_CHECKPOINT</macroname> - to specify a
+ &quot;named&quot; checkpoint and <macroname>BOOST_TEST_PASSPOINT</macroname> - to specify an &quot;unnamed&quot; checkpoint.
+ </para>
+
+ <section id="utf.user-guide.test-output.log.testing-tool-args">
+ <title>Logging tool arguments</title>
+
+ <para role="first-line-indented">
+ Most of the <link linkend="utf.testing-tools">testing tools</link> print the values of their arguments to the output
+ stream in some form of log statement. If an argument's type does not support the <code>operator&lt;&lt;(std::ostream&amp;,
+ ArgumentType const&amp;)</code> interface, you will get a compilation error. You can either implement this
+ interface or prohibit the <link linkend="utf.testing-tools">testing tools</link> from logging argument values of the
+ specified type. To do the latter, use the following statement at file scope, before the first test case containing the
+ statement that fails to compile:
+ </para>
+
+ <inline-synopsis>
+ <macro name="BOOST_TEST_DONT_PRINT_LOG_VALUE" kind="functionlike">
+ <macro-parameter name="ArgumentType"/>
+ </macro>
+ </inline-synopsis>
+
+ <btl-example name="example32">
+ <title>BOOST_TEST_DONT_PRINT_LOG_VALUE usage</title>
+
+ <simpara>
+ If you comment out the BOOST_TEST_DONT_PRINT_LOG_VALUE statement, you end up with a compile time error.
+ </simpara>
+ </btl-example>
+ </section>
+
+ <section id="utf.user-guide.test-output.log.runtime-config">
+ <title>Runtime configuration</title>
+
+ <para role="first-line-indented">
+ The active log level threshold can be configured at runtime using the parameter
+ <link linkend="utf.user-guide.runtime-config.reference">log_level</link>. The test log output format can be
+ selected using either parameter <link linkend="utf.user-guide.runtime-config.reference">log_format</link> or the
+ parameter <link linkend="utf.user-guide.runtime-config.reference">output_format</link>.
+ </para>
+ </section>
+
+ <section id="utf.user-guide.test-output.log.BOOST_TEST_MESSAGE">
+ <title>BOOST_TEST_MESSAGE</title>
+
+ <para role="first-line-indented">
+ The macro BOOST_TEST_MESSAGE is intended to inject an additional message into the
+ &utf; test log. These messages are not intended to indicate any error or warning conditions, but rather serve as
+ information/status notifications. The macro signature is as follows:
+ </para>
+
+ <inline-synopsis>
+ <macro name="BOOST_TEST_MESSAGE" kind="functionlike">
+ <macro-parameter name="test_message"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ The test_message argument can be as simple as a C string literal or any custom expression that you can produce
+ in a manner similar to a standard iostream operation.
+ </para>
+
+ <important>
+ <simpara>
+ Messages generated by this tool do not appear in the test log output with the default value of the active log level
+ threshold. For these messages to appear, the active log level threshold has to be set to a value below or equal
+ to &quot;message&quot;.
+ </simpara>
+ </important>
+
+ <btl-example name="example21">
+ <title>BOOST_TEST_MESSAGE usage</title>
+ </btl-example>
+ </section>
+
+ <section id="utf.user-guide.test-output.log.BOOST_TEST_CHECKPOINT">
+ <title>BOOST_TEST_CHECKPOINT</title>
+
+ <para role="first-line-indented">
+ The macro BOOST_TEST_CHECKPOINT is intended to inject a &quot;named&quot; checkpoint position. The
+ macro signature is as follows:
+ </para>
+
+ <inline-synopsis>
+ <macro name="BOOST_TEST_CHECKPOINT" kind="functionlike">
+ <macro-parameter name="checkpoint_message"/>
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ The message formatted at the checkpoint position is saved and reported by the exception logging functions (if any
+ exception occurs). As with <macroname>BOOST_TEST_MESSAGE</macroname>, the message can be formatted from any standard
+ output stream compliant components.
+ </para>
+
+ <btl-example name="example22">
+ <title>BOOST_TEST_CHECKPOINT usage</title>
+ </btl-example>
+ </section>
+
+ <section id="utf.user-guide.test-output.log.BOOST_TEST_PASSPOINT">
+ <title>BOOST_TEST_PASSPOINT</title>
+
+ <para role="first-line-indented">
+ The macro BOOST_TEST_PASSPOINT is intended to inject an &quot;unnamed&quot; checkpoint position. The
+ macro signature is as follows:
+ </para>
+
+ <inline-synopsis>
+ <macro name="BOOST_TEST_PASSPOINT" kind="functionlike">
+ </macro>
+ </inline-synopsis>
+
+ <para role="first-line-indented">
+ Unlike the macro <macroname>BOOST_TEST_CHECKPOINT</macroname>, this macro doesn't require any message to be
+ supplied with it. It's just a simple &quot;been there&quot; marker that records the file name and line number
+ the code passes through.
+ </para>
+
+ <btl-example name="example23">
+ <title>BOOST_TEST_PASSPOINT usage</title>
+ </btl-example>
+ </section>
+
+ <section id="utf.user-guide.test-output.log.FPT">
+ <title>Logging floating point type numbers</title>
+
+ <para role="first-line-indented">
+ It may appear that floating-point numbers are displayed by the &utf; with an excessive number of decimal digits.
+ However, the number of digits shown is chosen to avoid apparently nonsensical displays like <code>[1.00000 != 1.00000]</code>
+ when comparing exact unity against a value increased by just one least significant binary digit, using
+ the default precision for float of just 6 decimal digits given by
+ <classname>std::numeric_limits</classname>&lt;float&gt;::digits10. The number of decimal
+ digits displayed is computed as proposed for a future C++ Standard,
+ <ulink url="http://www2.open-std.org/JTC1/SC22/WG21/docs/papers/2005/n1822.pdf">A Proposal to add a max
+ significant decimal digits value</ulink>, to be called <classname>std::numeric_limits</classname>::max_digits10.
+ For 32-bit floats, 9 decimal digits are needed to ensure that a single bit change produces a different decimal digit
+ string.
+ </para>
+
+ <para role="first-line-indented">
+ A much more helpful display using 9 decimal digits is
+ <computeroutput>[1.00000000 != 1.00000012]</computeroutput>, showing that the two values are in fact different.
+ </para>
+
+ <para role="first-line-indented">
+ For <acronym>IEEE754</acronym> 32-bit float values, 9 decimal digits are shown; for 64-bit <acronym>IEEE754</acronym> double, 17 decimal digits; for
+ <acronym>IEEE754</acronym> extended 80-bit long double, 21 decimal digits; and for <acronym>IEEE754</acronym> quadruple 128-bit long double and Sparc
+ extended 128-bit long double, 36 decimal digits. For floating-point types, a convenient formula to calculate
+ max_digits10 is: 2 + <classname>std::numeric_limits</classname>&lt;FPT&gt;::digits * 3010/10000.
+ </para>
+
+ <note>
+ <simpara>
+ Note that a user defined floating point type UDFPT must define
+ <classname>std::numeric_limits</classname>&lt;UDFPT&gt;::is_specialized = true and provide an appropriate value
+ for <classname>std::numeric_limits</classname>&lt;UDFPT&gt;::digits, the number of bits used for the significand
+ or mantissa. For example, for the Sparc extended long double 128, 113 bits are used for the significand (one of
+ which is implicit).
+ </simpara>
+ </note>
+ </section>
+
+ <section id="utf.user-guide.test-output.log.human-readabe-format">
+ <title>Human readable log output format</title>
+ <titleabbrev>Human readable format</titleabbrev>
+
+ <para role="first-line-indented">
+ The human readable log format is designed to closely match the error descriptions produced by the Microsoft family
+ of C++ compilers. This format allows jumping to the error location if the test module output is redirected into an IDE
+ output window. The rest of the log messages are designed to produce the most human friendly description of the
+ events occurring in the test module. This is the default format generated by test modules.
+
+ <para role="first-line-indented">
+ Here is the list of events along with the corresponding message and the condition that has to be satisfied for it to
+ appear in the output.
+ </para>
+
+ <segmentedlist>
+ <?dbhtml list-presentation="list"?>
+
+ <segtitle>Event</segtitle>
+ <segtitle>Condition</segtitle>
+ <segtitle>Output</segtitle>
+
+ <seglistitem>
+ <seg>On testing start</seg>
+ <seg>threshold != log_nothing</seg>
+ <seg>
+ <computeroutput><literallayout>Running <userinput>total number of test cases</userinput> test case(s) &hellip;</literallayout></computeroutput>
+ </seg>
+ </seglistitem>
+
+ <seglistitem>
+ <seg>On testing start</seg>
+ <seg>threshold != log_nothing and show_build_info is set</seg>
+ <seg>
+ <computeroutput><literallayout>Platform: $BOOST_PLATFORM
+Compiler: $BOOST_COMPILER
+STL : $BOOST_STDLIB
+Boost : $BOOST_VERSION</literallayout></computeroutput>
+ </seg>
+ </seglistitem>
+
+ <seglistitem>
+ <seg>On abnormal testing termination</seg>
+ <seg>threshold &lt;= log_messages</seg>
+ <seg>
+ <computeroutput><literallayout>Test is aborted</literallayout></computeroutput>
+ </seg>
+ </seglistitem>
+
+ <seglistitem>
+ <seg>On test unit start</seg>
+ <seg>threshold &lt;= log_test_units</seg>
+ <seg>
+ <computeroutput><literallayout>Entering test <userinput>test unit type</userinput> <userinput>test unit name</userinput></literallayout></computeroutput>
+ </seg>
+ </seglistitem>
+
+ <seglistitem>
+ <seg>On test unit end</seg>
+ <seg>threshold &lt;= log_test_units; testing time is reported only if the elapsed time exceeds 1 microsecond.</seg>
+ <seg>
+ <computeroutput><literallayout>Leaving test <userinput>test unit type</userinput> <userinput>test unit name</userinput>; testing time <userinput>value</userinput></literallayout></computeroutput>
+ </seg>
+ </seglistitem>
+
+ <seglistitem>
+ <seg>On skipped test unit</seg>
+ <seg>threshold &lt;= log_test_units</seg>
+ <seg>
+ <computeroutput><literallayout>Test <userinput>test unit type</userinput> <userinput>test unit name</userinput> is skipped</literallayout></computeroutput>
+ </seg>
+ </seglistitem>
+
+ <seglistitem>
+ <seg>On uncaught C++ exception</seg>
+ <seg>threshold &lt;= log_cpp_exception_errors. Checkpoint message is reported only if provided</seg>
+ <seg>
+ <computeroutput><literallayout>unknown location(0): fatal error in <userinput>test case name</userinput>: <userinput>explanation</userinput>
+<userinput>last checkpoint location</userinput>: last checkpoint: <userinput>checkpoint message</userinput></literallayout></computeroutput>
+ </seg>
+ </seglistitem>
+
+ <seglistitem>
+ <seg>On resumable system error</seg>
+ <seg>threshold &lt;= log_system_errors. Checkpoint message is reported only if provided</seg>
+ <seg>
+ <computeroutput><literallayout>unknown location(0): fatal error in <userinput>test case name</userinput>: <userinput>explanation</userinput>
+<userinput>last checkpoint location</userinput>: last checkpoint: <userinput>checkpoint message</userinput></literallayout></computeroutput>
+ </seg>
+ </seglistitem>
+
+ <seglistitem>
+ <seg>On fatal system error</seg>
+ <seg>threshold &lt;= log_fatal_errors. Checkpoint message is reported only if provided</seg>
+ <seg>
+ <computeroutput><literallayout>unknown location(0): fatal error in <userinput>test case name</userinput>: <userinput>explanation</userinput>
+<userinput>last checkpoint location</userinput>: last checkpoint: <userinput>checkpoint message</userinput></literallayout></computeroutput>
+ </seg>
+ </seglistitem>
+
+ <seglistitem>
+ <seg>On passed test assertion</seg>
+ <seg>threshold &lt;= log_successful_tests</seg>
+ <seg>
+ <computeroutput><literallayout><userinput>assertion location</userinput>: info: check <userinput>assertion expression</userinput> passed</literallayout></computeroutput>
+ </seg>
+ </seglistitem>
+
+ <seglistitem>
+ <seg>On failed WARNING level test assertion</seg>
+ <seg>threshold &lt;= log_warnings</seg>
+ <seg>
+ <computeroutput><literallayout><userinput>assertion location</userinput>: warning in <userinput>test case name</userinput>: condition <userinput>assertion description</userinput> is not satisfied</literallayout></computeroutput>
+ </seg>
+ </seglistitem>
+
+ <seglistitem>
+ <seg>On failed CHECK level test assertion</seg>
+ <seg>threshold &lt;= log_all_errors</seg>
+ <seg>
+ <computeroutput><literallayout><userinput>assertion location</userinput>: error in <userinput>test case name</userinput>: check <userinput>assertion description</userinput> failed</literallayout></computeroutput>
+ </seg>
+ </seglistitem>
+
+ <seglistitem>
+ <seg>On failed REQUIRE level test assertion</seg>
+ <seg>threshold &lt;= log_fatal_errors</seg>
+ <seg>
+ <computeroutput><literallayout><userinput>assertion location</userinput>: fatal error in <userinput>test case name</userinput>: critical check <userinput>assertion description</userinput> failed</literallayout></computeroutput>
+ </seg>
+ </seglistitem>
+
+ <seglistitem>
+ <seg>On test log message</seg>
+ <seg>threshold &lt;= log_messages</seg>
+ <seg>
+ <computeroutput><literallayout><userinput>Message content</userinput></literallayout></computeroutput>
+ </seg>
+ </seglistitem>
+ </segmentedlist>
+
+ <para role="first-line-indented">
+ Advanced <link linkend="utf.testing-tools">testing tools</link> may produce more complicated error messages.
+ </para>
+ </section>
+
+ <section id="utf.user-guide.test-output.log.xml-format">
+ <title>XML based log output format</title>
+
+ <para role="first-line-indented">
+ This log format is designed for automated test results processing. The test log output XML schema depends on the
+ active log level threshold.
+ </para>
+
+ <!-- TO FIX -->
+ </section>
+
+ <section id="utf.user-guide.test-output.log.ct-config">
+ <title>Compile time configuration</title>
+ <para role="first-line-indented">
+ While many test log configuration tasks can be performed at runtime using predefined framework parameters, the
+ &utf; provides a compile time interface as well. The interface gives you full control over what, where and how to
+ log. It is provided by the singleton class <classname>boost::unit_test::unit_test_log_t</classname> and is
+ accessible through a file scope reference to the single instance of this class: boost::unit_test::unit_test_log.
+ </para>
+
+ <section id="utf.user-guide.test-output.log.ct-config.output-stream">
+ <title>Log output stream redirection</title>
+
+ <para role="first-line-indented">
+ If you want to redirect the test log output stream into something other than std::cout, use the following
+ interface:
+ </para>
+
+ <programlisting>boost::unit_test::unit_test_log.set_stream( std::ostream&amp; str );</programlisting>
+
+ <para role="first-line-indented">
+ You can reset the output stream at any time, both during test module initialization and from within test
+ cases. There is no limit on the number of output stream resets.
+ </para>
+
+ <btl-example name="example50">
+ <title>Test log output redirection</title>
+
+ <warning>
+ <simpara>
+ If you redirect the test log output stream from a global fixture setup, you are
+ <emphasis role="bold">required</emphasis> to reset it back to std::cout during teardown to prevent access
+ through a dangling reference.
+ </simpara>
+ </warning>
+ </btl-example>
+ </section>
+
+ <section id="utf.user-guide.test-output.log.ct-config.log-level">
+ <title>Log level configuration</title>
+
+ <para role="first-line-indented">
+ If you need to enforce a specific log level from within your test module, use the following interface:
+ </para>
+
+ <programlisting>boost::unit_test::unit_test_log.set_threshold_level( boost::unit_test::log_level );</programlisting>
+
+ <para role="first-line-indented">
+ Under regular circumstances you shouldn't use this interface, since you override not only the default log level, but
+ also the one supplied at test execution time. Prefer to use runtime parameters for log level selection.
+ </para>
+
+ <btl-example name="example51">
+ <title>Compile time log level configuration</title>
+ </btl-example>
+ </section>
+
+ <section id="utf.user-guide.test-output.log.ct-config.log-format">
+ <title>Predefined log format selection</title>
+
+ <para role="first-line-indented">
+ To select at compile time the log format from the list of formats supplied by the &utf;, use the following interface:
+ </para>
+
+ <programlisting>boost::unit_test::unit_test_log.set_format( boost::unit_test::output_format );</programlisting>
+
+ <para role="first-line-indented">
+ Under regular circumstances you shouldn't use this interface. Prefer to use runtime parameters for predefined log
+ format selection.
+ </para>
+
+ <btl-example name="example52">
+ <title>Compile time log format selection</title>
+ </btl-example>
+ </section>
+
+ <section id="utf.user-guide.test-output.log.ct-config.log-formatter">
+ <title>Custom log format support</title>
+
+ <!-- TO FIX -->
+ </section>
+ </section>
+ </section>
+
+ <section id="utf.user-guide.test-output.results-report">
+ <title>Test report output</title>
+ <section id="utf.user-guide.test-output.results-report.runtime-config">
+ <title>Runtime configuration</title>
+ <!-- TO FIX -->
+ </section>
+
+ <section id="utf.user-guide.test-output.results-report.ct-config">
+ <title>Compile time configuration</title>
+
+ <section id="utf.user-guide.test-output.results-report.ct-config.output-stream">
+ <title>Report output stream redirection and access</title>
+ <!-- TO FIX -->
+ </section>
+
+ <section id="utf.user-guide.test-output.results-report.ct-config.report-level">
+ <title>Report level configuration</title>
+ <!-- TO FIX -->
+ </section>
+
+ <section id="utf.user-guide.test-output.results-report.ct-config.report-format">
+ <title>Predefined report format selection</title>
+ <!-- TO FIX -->
+ </section>
+
+ <section id="utf.user-guide.test-output.results-report.ct-config.report-formatter">
+ <title>Custom report format support</title>
+ <!-- TO FIX -->
+ </section>
+
+ </section>
+ </section>
+
+ <section id="utf.user-guide.test-output.progress">
+ <title>Test progress display</title>
+ <titleabbrev>Progress display</titleabbrev>
+ <para role="first-line-indented">
+ If the test module involves lengthy computation split among multiple test cases, you may be interested in a
+ progress monitor. The test runners supplied with the &utf; support a simple text progress display, implemented based
+ on <classname>boost::progress_display</classname><footnote>The &utf; interfaces allow implementing
+ an advanced GUI based test runner with arbitrary progress display controls</footnote>. The progress display output
+ is enabled using the &utf; parameter <link linkend="utf.user-guide.runtime-config.parameters">show_progress</link>.
+ </para> <!-- TO FIX: what's wrong with footnote? -->
+
+
+ <para role="first-line-indented">
+ The &utf; has no ability to estimate how long a test case execution is going to take, and manual test
+ progress updates are not supported at this point. The &utf; tracks progress at the test case level. If you want to
+ see more frequent progress updates, you need to split the test into multiple test cases.
+ </para>
+
+ <para role="first-line-indented">
+ In the default configuration both the test log and the test progress outputs are directed into the standard output
+ stream, so any test log messages will interfere with the test progress display. To prevent this you can either lower
+ the log level or redirect either the test log or the test progress output into a different stream during test module
+ initialization. Use the following interface to redirect the test progress output:
+ </para>
+
+ <programlisting>boost::unit_test::progress_monitor.set_stream( std::ostream&amp; )</programlisting>
+
+ <btl-example name="example49">
+ <title>Progress report for a test module with a large number of test cases</title>
+ </btl-example>
+ </section>
+</section>
\ No newline at end of file

Added: trunk/libs/test/doc/src/utf.users-guide.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/utf.users-guide.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,698 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE chapter PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
+ <!ENTITY utf "<acronym>UTF</acronym>">
+]>
+<section id="utf.user-guide" last-revision="$Date$">
+ <title>Unit Test Framework: User's guide</title><titleabbrev>User's guide</titleabbrev>
+
+ <section id="utf.user-guide.intro">
+ <title>Introduction &hellip; or what's your name?</title><titleabbrev>Introduction</titleabbrev>
+
+ <para role="first-line-indented">
+ Without further ado, let's define terms regularly used by the &utf;.
+ </para>
+ <variablelist>
+ <?dbhtml term-width="16%" list-width="100%"?>
+ <?dbhtml term-separator=":"?>
+ <?dbhtml table-summary="utf terms definition"?>
+
+ <varlistentry id="test-module.def">
+ <term><firstterm>The test module</firstterm></term>
+ <listitem>
+ <simpara>
+ This is a single binary that performs the test. Physically a test module consists of one or more test source files,
+ which can be built into an executable or a dynamic library. A test module that consists of a single test source
+ file is called <firstterm id="single-file-test-module.def">single-file test module</firstterm>. Otherwise
+ it's called <firstterm id="multi-file-test-module.def">multi-file test module</firstterm>. Logically a test
+ module consists of four parts: <link linkend="test-setup.def">test setup</link> (or test initialization),
+ <link linkend="test-body.def">test body</link>, <link linkend="test-cleanup.def">test cleanup</link> and
+ <link linkend="test-runner.def">test runner</link>. The test runner part is optional. If a test module is built as
+ an executable the test runner is built-in. If a test module is built as a dynamic library, it is run by an
+ external test runner.
+ </simpara>
+ </listitem>
+ </varlistentry>
+ <varlistentry id="test-body.def">
+ <term><firstterm>The test body</firstterm></term>
+ <listitem>
+ <simpara>
+ This is the part of a test module that actually performs the test.
+ Logically test body is a collection of <link linkend="test-assertion.def">test assertions</link> wrapped in
+ <link linkend="test-case.def">test cases</link>, which are organized in a <link linkend="test-tree.def">test tree
+ </link>.
+ </simpara>
+ </listitem>
+ </varlistentry>
+ <varlistentry id="test-tree.def">
+ <term><firstterm>The test tree</firstterm></term>
+ <listitem>
+ <simpara>
+ This is a hierarchical structure of <link linkend="test-suite.def">test suites</link> (non-leaf nodes) and
+ <link linkend="test-case.def">test cases</link> (leaf nodes).
+ </simpara>
+ </listitem>
+ </varlistentry>
+ <varlistentry id="test-unit.def">
+ <term><firstterm>The test unit</firstterm></term>
+ <listitem>
+ <simpara>
+ This is a collective name when referred to either <link linkend="test-suite.def">test suite</link> or
+ <link linkend="test-case.def">test case</link>
+ </simpara>
+ </listitem>
+ </varlistentry>
+ <varlistentry id="test-assertion.def">
+ <term><firstterm>Test assertion</firstterm></term>
+ <listitem>
+ <simpara>
+ This is a single binary condition (binary in the sense that it has two outcomes: pass and fail) checked
+ by a test module.
+ </simpara>
+ <simpara>
+ There are different schools of thought on how many test assertions a test case should consist of. Two polar
+ positions are the one advocated by TDD followers - one assertion per test case - and the opposite - all test
+ assertions within a single test case - advocated by those only interested in the first error in a
+ test module. The &utf; supports both approaches.
+ </simpara>
+ </listitem>
+ </varlistentry>
+ <varlistentry id="test-case.def">
+ <term><firstterm>The test case</firstterm></term>
+ <listitem>
+ <simpara>
+ This is an independently monitored function within a test module that
+ consists of one or more test assertions. The term &quot;independently monitored&quot; in the definition above is
+ used to emphasize the fact that all test cases are monitored independently. An uncaught exception or other abnormal
+ test case execution termination doesn't cause the testing to cease. Instead the error is caught by the test
+ case execution monitor, reported by the &utf;, and testing proceeds to the next test case. Later on you are going
+ to see that this is one of the primary reasons to prefer multiple small test cases to a single big test function.
+ </simpara>
+ </listitem>
+ </varlistentry>
+ <varlistentry id="test-suite.def">
+ <term><firstterm>The test suite</firstterm></term>
+ <listitem>
+ <simpara>
+ This is a container for one or more test cases. The test suite gives you the ability to group
+ test cases into a single referable entity. There are various reasons why you may opt to do so, including:
+ </simpara>
+ <itemizedlist>
+ <listitem>
+ <simpara>To group test cases per subsystems of the unit being tested.</simpara>
+ </listitem>
+ <listitem>
+ <simpara>To share test case setup/cleanup code.</simpara>
+ </listitem>
+ <listitem>
+ <simpara>To run only a selected group of test cases.</simpara>
+ </listitem>
+ <listitem>
+ <simpara>To see the test report split by groups of test cases.</simpara>
+ </listitem>
+ <listitem>
+ <simpara>To skip groups of test cases based on the result of another test unit in a test tree.</simpara>
+ </listitem>
+ </itemizedlist>
+ <simpara>
+ A test suite can also contain other test suites, thus allowing a hierarchical test tree structure to be formed.
+ The &utf; requires the test tree to contain at least one test suite with at least one test case. The top level
+ test suite - root node of the test tree - is called the master test suite.
+ </simpara>
+ </listitem>
+ </varlistentry>
+ <varlistentry id="test-setup.def">
+ <term><firstterm>The test setup</firstterm></term>
+ <listitem>
+ <simpara>
+ This is the part of a test module that is responsible for the test
+ preparation. It includes the following operations that take place prior to the start of the test:
+ </simpara>
+ <itemizedlist>
+ <listitem>
+ <simpara>
+ The &utf; initialization
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ Test tree construction
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ Global test module setup code
+ </simpara>
+ </listitem>
+ </itemizedlist>
+ <simpara>
+ &quot;Per test case&quot; setup code, invoked for every test case it's assigned to, is also attributed to the
+ test initialization, even though it's executed as a part of the test case.
+ </simpara>
+ </listitem>
+ </varlistentry>
+ <varlistentry id="test-cleanup.def">
+ <term><firstterm>The test cleanup</firstterm></term>
+ <listitem>
+ <simpara>
+ This is the part of test module that is responsible for cleanup operations.
+ </simpara>
+ </listitem>
+ </varlistentry>
+ <varlistentry id="test-fixture.def">
+ <term><firstterm>The test fixture</firstterm></term>
+ <listitem>
+ <simpara>
+ Matching setup and cleanup operations are frequently united into a single entity called test fixture.
+ </simpara>
+ </listitem>
+ </varlistentry>
+ <varlistentry id="test-runner.def">
+ <term><firstterm>The test runner</firstterm></term>
+ <listitem>
+ <simpara>
+ This is an &quot;executive manager&quot; that runs the show. The test runner's functionality should include
+ the following interfaces and operations:
+ </simpara>
+ <itemizedlist>
+ <listitem>
+ <simpara>
+ Entry point to a test module. This is usually either the function main() itself or a single function that can be
+ invoked from it to start testing.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ Initialize the &utf; based on runtime parameters
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ Select an output media for the test log and the test results report
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ Select test cases to execute based on runtime parameters
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ Execute all or selected test cases
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ Produce the test results report
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ Generate a test module result code.
+ </simpara>
+ </listitem>
+ </itemizedlist>
+ <para role="first-line-indented">
+ An advanced test runner may provide additional features, including interactive <acronym>GUI</acronym> interfaces,
+ test coverage and profiling support.
+ </para>
+ </listitem>
+ </varlistentry>
+ <varlistentry id="test-log.def">
+ <term><firstterm>The test log</firstterm></term>
+ <listitem>
+ <simpara>
+ This is the record of all events that occur during the testing.
+ </simpara>
+ </listitem>
+ </varlistentry>
+ <varlistentry id="test-results-report.def">
+ <term><firstterm>The test results report</firstterm></term>
+ <listitem>
+ <simpara>
+ This is the report produced by the &utf; after the testing is completed, that indicates which test cases/test
+ suites passed and which failed.
+ </simpara>
+ </listitem>
+ </varlistentry>
+ </variablelist >
+ </section>
+
+ <section id="utf.user-guide.usage-variants">
+ <title>The &utf; usage variants &hellip; or the <ulink url="http://en.wikipedia.org/wiki/Buridan's_ass">Buridan's donkey</ulink> parable</title>
+ <titleabbrev>Usage variants</titleabbrev>
+
+ <para role="first-line-indented">
+ The &utf; presents you with four different variants of how it can be used.
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <simpara><link linkend="utf.user-guide.static-lib-variant">The static library variant</link></simpara>
+ </listitem>
+ <listitem>
+ <simpara><link linkend="utf.user-guide.dynamic-lib-variant">The dynamic library variant</link></simpara>
+ </listitem>
+ <listitem>
+ <simpara><link linkend="utf.user-guide.single-header-variant">The single-header variant</link></simpara>
+ </listitem>
+ <listitem>
+ <simpara><link linkend="utf.user-guide.extern-test-runner-variant">The external test runner variant</link></simpara>
+ </listitem>
+ </itemizedlist>
+
+ <para role="first-line-indented">
+ Unlike Buridan's donkey, though, you shouldn't have problems deciding which one to use, since there are
+ clear reasons why you would prefer each one.
+ </para>
+
+ <para role="first-line-indented">
+ In most cases, to compile a test module based on the &utf;, all you need to include is the single header
+ <filename class="headerfile">boost/test/unit_test.hpp</filename>. This header internally includes most of the other
+ headers that contain the &utf; definitions. Some advanced features, like the floating point comparison or the
+ logged expectations testing, are defined in independent headers and need to be included explicitly.
+ </para>
+
+ <section id="utf.user-guide.static-lib-variant">
+ <title>The static library variant of the &utf;</title><titleabbrev>Static library</titleabbrev>
+ <para role="first-line-indented">
+ The &utf; can be built into a static library. If you opt to link a test module with the
+ <link linkend="utf.compilation.standalone">standalone static library</link>, this usage is called the static library
+ variant of the &utf;.
+ </para>
+
+ <para role="first-line-indented">
+ The test runner supplied with this variant requires you to implement the <link linkend="test-module.def">test
+ module</link> initialization function that matches one of the two specifications depending on the compilation flag
+ <xref linkend="utf.flag.alt-init-api" endterm="utf.flag.alt-init-api"/>. If the flag isn't defined, you are required
+ to match the original specification. If you define the flag <xref linkend="utf.flag.alt-init-api"
+ endterm="utf.flag.alt-init-api"/> during a test module compilation you are required to use the alternative
+ initialization function specification. The &utf; provides an ability to
+ <link linkend="utf.user-guide.initialization.auto-generation">automatically generate</link> an empty test module
+ initialization function with correct specification if no custom initialization is required by a test module.
+ </para>
+
+ <important>
+ <simpara>
+ If you opted to use the alternative initialization API, for a test module to be able to link with the prebuilt library,
+ the flag <xref linkend="utf.flag.alt-init-api" endterm="utf.flag.alt-init-api"/> has to be defined during both
+ library and test module compilation.
+ </simpara>
+ </important>
+ </section>
+
+ <section id="utf.user-guide.dynamic-lib-variant">
+ <title>The dynamic library variant of the &utf;</title>
+ <titleabbrev>Dynamic library</titleabbrev>
+
+ <para role="first-line-indented">
+ In a project with a large number of test modules, the <link linkend="utf.user-guide.static-lib-variant">static
+ library</link> variant of the &utf; may waste a lot of disk space, since the &utf; is linked
+ statically with every test module. The solution is to link with the &utf; built into a dynamic library. If you opt
+ to link a test module with the prebuilt dynamic library, this usage is called the dynamic library variant of the
+ &utf;. This variant requires you to define the flag <xref linkend="utf.flag.dyn-link" endterm="utf.flag.dyn-link"/>
+ either in a makefile or before the inclusion of the header
+ <filename class="headerfile">boost/test/unit_test.hpp</filename>.
+ </para>
+
+ <para role="first-line-indented">
+ The test runner supplied with this variant requires you to implement the <link linkend="test-module.def">test
+ module</link> initialization function that matches the alternative initialization function signature. The &utf;
+ can <link linkend="utf.user-guide.initialization.auto-generation">automatically generate</link>
+ an empty test module initialization function with the correct signature if no custom initialization is required
+ by a test module.
+ </para>
+
+ <note>
+ <simpara>
+ The name of the test module initialization function is not enforced, since the function is passed as an argument
+ to the test runner.
+ </simpara>
+ </note>
+ </section>
+
+ <section id="utf.user-guide.single-header-variant">
+ <title>The single-header variant of the &utf;</title>
+ <titleabbrev>Single header</titleabbrev>
+
+ <para role="first-line-indented">
+ If you prefer to avoid the <link linkend="utf.compilation.standalone">standalone library compilation</link>, you
+ should use the single-header variant of the &utf;. This variant is implemented, as its name suggests, in
+ the single header <filename class="headerfile">boost/test/included/unit_test.hpp</filename>. Including
+ this header causes the complete implementation of the &utf; to be compiled as part of the test module's
+ source file. The header <filename class="headerfile">boost/test/unit_test.hpp</filename> doesn't have to be
+ included anymore. You don't have to worry about disabling the <link linkend="utf.compilation.auto-linking">
+ auto-linking</link> feature either; that's already done in the implementation header. This variant
+ can't be used with a <xref linkend="multi-file-test-module.def" endterm="multi-file-test-module.def"/>.
+ Otherwise, from the usage perspective it's almost identical to the static library variant of the &utf;.
+ In fact, the only difference is the name of the include file:
+ <filename class="headerfile">boost/test/included/unit_test.hpp</filename> instead of
+ <filename class="headerfile">boost/test/unit_test.hpp</filename>.
+ </para>
+
+ <para role="first-line-indented">
+ The test runner supplied with this variant requires you to implement the <link linkend="test-module.def">test
+ module</link> initialization function matching one of two specifications, depending on the compilation flag
+ <xref linkend="utf.flag.alt-init-api" endterm="utf.flag.alt-init-api"/>. If the flag isn't defined, you are
+ required to match the original specification. If you define the flag
+ <xref linkend="utf.flag.alt-init-api" endterm="utf.flag.alt-init-api"/> during a test module compilation, you are
+ required to use the alternative initialization function specification. The &utf; can
+ <link linkend="utf.user-guide.initialization.auto-generation">automatically generate</link> an empty test module
+ initialization function with the correct specification if no custom initialization is required by a test module.
+ </para>
+ </section>
+
+ <section id="utf.user-guide.extern-test-runner-variant">
+ <title>The external test runner variant of the &utf;</title>
+ <titleabbrev>External test runner</titleabbrev>
+
+ <para role="first-line-indented">
+ All other usage variants employ built-in test runners. If you plan to use an external test runner with your
+ test module, you need to build the module as a dynamic library. This usage of the &utf; is called the external
+ test runner variant of the &utf;. The variant requires you to define the flag
+ <xref linkend="utf.flag.dyn-link" endterm="utf.flag.dyn-link"/> either in a makefile or before the inclusion of
+ the header <filename class="headerfile">boost/test/unit_test.hpp</filename>. An external test runner utility is
+ required to link with the dynamic library.
+ </para>
+
+ <para role="first-line-indented">
+ If an external test runner is based on the test runner built into the dynamic library (like the standalone
+ boost_test_runner utility supplied by the &utf;), it requires you to implement the <link linkend="test-module.def">
+ test module</link> initialization function that matches the alternative initialization function signature. The
+ &utf; can <link linkend="utf.user-guide.initialization.auto-generation">automatically generate
+ </link> an empty test module initialization function with the correct signature if no custom initialization is
+ required by a test module.
+ </para>
+
+ <note>
+ <simpara>
+ An advanced test runner doesn't have to be based on the built-in one and may require a different
+ test module initialization function signature and/or name.
+ </simpara>
+ </note>
+ </section>
+ </section>
+
+ <section id="utf.user-guide.test-runners">
+ <title>The supplied test runners &hellip; or where is the entrance?</title>
+ <titleabbrev>Supplied test runners</titleabbrev>
+
+ <para role="first-line-indented">
+ All usage variants of the &utf;, excluding the
+ <link linkend="utf.user-guide.external-test-runner">external test runner</link>, supply the test runner in the
+ form of a free function named unit_test_main with the following signature:
+ </para>
+
+ <programlisting>int unit_test_main( init_unit_test_func init_func, int argc, char* argv[] );</programlisting>
+
+ <para role="first-line-indented">
+ To invoke the test runner, you are required to supply a pointer to the <link linkend="test-module.def">test
+ module</link> initialization function as the first argument to the test runner function. In the majority of cases
+ this function is invoked directly from the test executable entry point - the function main(). In most usage
+ variants the &utf; can automatically generate a default function main() implementation as part of either the
+ library or the test module itself. Since the function main() needs to refer to the initialization function by
+ name, that name is predefined by the default implementation, and you are required to match both the specific
+ signature and the name when implementing the initialization function. If for any reason you prefer more
+ flexibility, you can opt to implement the function main() yourself, in which case it's your responsibility to
+ invoke the test runner, but the initialization function name is not enforced by the &utf;. See below for the
+ flags that need to be defined/undefined in each usage variant to enable this.
+ </para>
+
+ <warning>
+ <simpara>
+ Despite the syntactic similarity, the signatures of the test runner function are in fact different for different
+ usage variants. The cause is the different signature of the test module initialization function referred to by the
+ <link linkend="utf.user-guide.initialization.signature-typedef">typedef init_unit_test_func</link>. This makes the
+ static and dynamic library usage variants incompatible; they can't be easily switched on the fly.
+ </simpara>
+ </warning>
+
+ <section id="utf.user-guide.static-lib-runner">
+ <title>Static library variant of the &utf;</title>
+ <titleabbrev>Static library</titleabbrev>
+
+ <para role="first-line-indented">
+ By default this variant supplies the function main() as part of the static library. If this is for any reason
+ undesirable, you need to define the flag <xref linkend="utf.flag.no-main" endterm="utf.flag.no-main"/> during the
+ library compilation, and the function main() implementation won't be generated.
+ </para>
+
+ <para role="first-line-indented">
+ In addition to the <link linkend="utf.user-guide.static-lib-variant">initialization function signature
+ requirement</link>, the default function main() implementation assumes the name of the initialization function
+ is init_unit_test_suite.
+
+ </section>
+
+ <section id="utf.user-guide.dynamic-lib-runner">
+ <title>Dynamic library variant of the &utf;</title>
+ <titleabbrev>Dynamic library</titleabbrev>
+
+ <para role="first-line-indented">
+ Unlike in the static library variant, the function main() can't reside in the dynamic library body. Instead,
+ this variant supplies a default function main() implementation as part of the header
+ <filename class="headerfile">boost/test/unit_test.hpp</filename>, to be generated as part of your test file body.
+ The function main() is generated only if either the <xref linkend="utf.flag.main" endterm="utf.flag.main"/> or
+ the <xref linkend="utf.flag.module" endterm="utf.flag.module"/> flag is defined during a test module compilation.
+ For a <link linkend="single-file-test-module.def">single-file test module</link> the flags can be defined either
+ in the test module's makefile or before the header <filename class="headerfile">boost/test/unit_test.hpp</filename>
+ inclusion. For a <xref linkend="multi-file-test-module.def" endterm="multi-file-test-module.def"/> the flags can't
+ be defined in a makefile and have to be defined in only one of the test files, to avoid duplicate copies of the
+ function main().
+ </para>
+
+ <important>
+ <simpara>
+ The same flags also govern generation of an empty
+ <link linkend="utf.user-guide.initialization">test module initialization function</link>. This means that if you
+ need to implement either function main() or initialization function manually, you can't define the above flags
+ and are required to manually implement both of them.
+ </simpara>
+ </important>
+ </section>
+
+ <section id="utf.user-guide.single-header-runner">
+ <title>Single-header variant of the &utf;</title>
+ <titleabbrev>Single header</titleabbrev>
+
+ <para role="first-line-indented">
+ By default this variant supplies the function main() as part of the header
+ <filename class="headerfile">boost/test/included/unit_test.hpp</filename>, to be generated as part of your test
+ file body. If this is for any reason undesirable, you need to define the flag
+ <xref linkend="utf.flag.no-main" endterm="utf.flag.no-main"/> during the test module compilation, and the function
+ main() implementation won't be generated.
+ </para>
+ </section>
+
+ <section id="utf.user-guide.external-test-runner">
+ <title>External test runner variant of the &utf;</title>
+ <titleabbrev>External test runner</titleabbrev>
+
+ <para role="first-line-indented">
+ The external test runner variant of the &utf; supplies the test runner in the form of a standalone utility,
+ boost_test_runner. You are free to implement different, more advanced test runners that can be used with this
+ variant.
+ </para>
+
+ <simpara>
+ <!-- TO FIX -->
+ </simpara>
+ </section>
+
+ <section id="utf.user-guide.runners-exit-status">
+ <title>Generated exit status values</title>
+
+ <para role="first-line-indented">
+ Once testing is finished, all supplied test runners report the results and return an exit status value. Here is
+ a summary of all possible generated values:
+ </para>
+
+ <table id="utf.user-guide.runners-exit-status-summary">
+ <title>Generated exit status values</title>
+ <tgroup cols="2">
+ <colspec colname="c1"/>
+ <colspec colname="c2"/>
+ <thead>
+ <row>
+ <entry>Value</entry>
+ <entry>Meaning</entry>
+ </row>
+ </thead>
+ <tbody>
+ <row>
+ <entry>boost::exit_success</entry>
+ <entry>
+ No errors occurred during the test or the success result code was explicitly requested with the no_result_code
+ parameter.
+ </entry>
+ </row>
+ <row>
+ <entry>boost::exit_test_failure</entry>
+ <entry>
+ Non-fatal errors were detected and no uncaught exceptions were thrown during testing, or the &utf; failed
+ during initialization.
+ </entry>
+ </row>
+ <row>
+ <entry>boost::exit_exception_failure</entry>
+ <entry>
+ Fatal errors were detected or uncaught exceptions were thrown during testing.
+ </entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+ </section>
+ </section>
+
+ <section id="utf.user-guide.initialization">
+ <title>Test module initialization &hellip; or ready, set &hellip;</title>
+ <titleabbrev>Test module initialization</titleabbrev>
+
+ <para role="first-line-indented">
+ There are two tasks that you may need to perform before actual testing can start:
+ </para>
+
+ <itemizedlist>
+ <listitem>
+ <simpara>
+ The test tree needs to be built (unless you are using automated test units registration).
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ Custom test module initialization needs to be performed. This includes
+ initialization of the code under test and custom tune-up of the &utf; parameters (for example, redirection of
+ the test log or the test results report output streams).
+ </simpara>
+ </listitem>
+ </itemizedlist>
+
+ <para role="first-line-indented">
+ The function dedicated to this purpose is called the test module initialization function. Alternatively, you can
+ employ global fixtures, covered in detail, including the differences between the two approaches, in
+ <xref linkend="utf.user-guide.fixture"/>.
+ </para>
+
+ <para role="first-line-indented">
+ The &utf; requires you to implement the test module initialization function. The test runner supplied with the
+ static library or single-header variants of the &utf; requires a specific function specification (both signature
+ and name). The test runner supplied with the dynamic library variant of the &utf; requires only a specific
+ initialization function signature.
+ </para>
+
+ <para role="first-line-indented">
+ For many <link linkend="test-module.def">test modules</link> you don't need to do any custom initialization
+ and test tree construction is automated. In this case you don't really need the initialization function and
+ the &utf; provides a way to automatically generate an empty one for you.
+ </para>
+
+ <para role="first-line-indented">
+ The original design of the &utf; supported manual test tree construction only. Later versions introduced the
+ automated registration of test units, which made the original initialization function
+ specification inconvenient and unnecessarily unsafe, so the alternative initialization function specification
+ was introduced. This change is not backward compatible. The test runners supplied with the static library and
+ single-header variants of the &utf; by default still require the original initialization function specification,
+ but support <link linkend="utf.compilation.flags">compilation flags</link> that switch to the alternative one. The
+ test runner supplied with the dynamic library variant of the &utf; requires the new specification and doesn't
+ support the original one. The plan is to deprecate the original initialization function specification in one of
+ the future releases and ultimately to stop supporting it.
+ </para>
+
+ <para role="first-line-indented">
+ The initialization function invocation is monitored by the &utf; the same way as all the test cases. An unexpected
+ exception or system error detected during the initialization function invocation is treated as an initialization
+ error and is reported as such.
+ </para>
+
+ <section id="utf.user-guide.initialization.orig-signature">
+ <title>Original initialization function signature and name</title>
+ <titleabbrev>Original initialization function</titleabbrev>
+
+ <para role="first-line-indented">
+ The original design of the &utf; initialization requires you to implement a function with the following
+ specification:
+ </para>
+
+ <programlisting><classname>boost::unit_test::test_suite</classname>* init_unit_test_suite( int argc, char* argv[] );</programlisting>
+
+ <para role="first-line-indented">
+ In the original design of the &utf; this function was intended to initialize and return a master test suite, and
+ a null value was considered an initialization error. The current design of the &utf; maintains the master test
+ suite instance internally and does not treat a null result value as an initialization error. In fact, it's
+ recommended to always return a null value and to register test units in the master test suite using the regular
+ test suite add interface. The only way to indicate an initialization error is to throw the
+ <classname>boost::unit_test::framework::setup_error</classname> exception.
+ </para>
+
+ <para role="first-line-indented">
+ The initialization function parameters argc and argv provide the command line arguments specified during test
+ module invocation. It's guaranteed that any framework-specific command line arguments are excluded. To be
+ consistent with the alternative initialization function specification, though, it's recommended to access the
+ command line arguments through the master test suite interface.
+ </para>
+ </section>
+
+ <section id="utf.user-guide.initialization.alt-signature">
+ <title>Alternative initialization function signature and name</title>
+ <titleabbrev>Alternative initialization function</titleabbrev>
+
+ <para role="first-line-indented">
+ The alternative design of the &utf; initialization requires you to implement a function with the following
+ specification:
+ </para>
+
+ <programlisting>bool init_unit_test();</programlisting>
+
+ <para role="first-line-indented">
+ The result value of this function indicates whether or not initialization was successful. To register test
+ units in the master test suite, use the test suite add interface. To access the command line arguments, use the
+ master test suite interface. It's guaranteed that any framework-specific command line arguments are excluded.
+ </para>
+ </section>
+
+ <section id="utf.user-guide.initialization.signature-typedef">
+ <title>Initialization function signature access</title>
+
+ <para role="first-line-indented">
+ The test runner interface needs to refer to the initialization function signature. The &utf; provides a typedef
+ that resolves to the proper signature in all configurations:
+ </para>
+
+ <programlisting>namespace boost {
+namespace unit_test {
+#ifdef BOOST_TEST_ALTERNATIVE_INIT_API
+typedef bool (*init_unit_test_func)();
+#else
+typedef test_suite* (*init_unit_test_func)( int, char* [] );
+#endif
+}
+}</programlisting>
+
+ </section>
+
+ <section id="utf.user-guide.initialization.auto-generation">
+ <title>Automated generation of the test module initialization function</title>
+ <titleabbrev>Automated generation</titleabbrev>
+
+ <para role="first-line-indented">
+ To automatically generate an empty test module initialization function, you need to define
+ <xref linkend="utf.flag.main" endterm="utf.flag.main"/> before including the
+ <filename class="headerfile">boost/test/unit_test.hpp</filename> header. The value of this define is ignored.
+ Alternatively, you can define the macro <xref linkend="utf.flag.module" endterm="utf.flag.module"/> to be equal to
+ any string (not necessarily in quotes). This macro has the same effect as
+ <xref linkend="utf.flag.main" endterm="utf.flag.main"/>, and in addition its value becomes the name of the
+ master test suite.
+ </para>
+
+ <important>
+ <simpara>
+ For a test module consisting of multiple source files you have to define these flags in a single test file only.
+ Otherwise you end up with multiple instances of the initialization function.
+ </simpara>
+ </important>
+ </section>
+ </section>
+
+ <xi:include href="utf.users-guide.test-organization.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+ <xi:include href="utf.users-guide.fixture.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+ <xi:include href="utf.users-guide.test-output.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+ <xi:include href="utf.user-guide.runtime-config.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+</section>

Added: trunk/libs/test/doc/src/utf.xml
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/utf.xml 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,415 @@
+<?xml version="1.0" encoding="utf-8"?>
+<!DOCTYPE part PUBLIC "-//Boost//DTD BoostBook XML V1.0//EN" "../../../../tools/boostbook/dtd/boostbook.dtd" [
+ <!ENTITY utf "<acronym>UTF</acronym>">
+]>
+<section id="utf" last-revision="$Date$">
+ <title>Boost Test Library: The Unit Test Framework</title>
+ <titleabbrev>The Unit Test Framework</titleabbrev>
+
+ <section id="utf.intro">
+ <title>Introduction</title>
+
+ <epigraph>
+ <attribution>XP maxim</attribution>
+ <simpara>
+ The acceptance test makes the customer satisfied that the software provides the business value that makes them
+ willing to pay for it. The unit test makes the programmer satisfied that the software does what the programmer
+ thinks it does
+ </simpara>
+ </epigraph>
+
+ <para role="first-line-indented">
+ What is the first thing you need to do when you start working on a new library/class/program? That's right -
+ you need to start with the unit test module (I hope you all gave this answer!). An occasional, simple test may be
+ implemented using asserts, but any professional developer soon finds this approach lacking. It becomes clear that
+ it's too time-consuming and tedious for simple but repetitive unit testing tasks, and too inflexible for
+ most nontrivial ones.
+ </para>
+
+ <para role="first-line-indented">
+ <firstterm id="utf.def">The Boost Test Library Unit Test Framework</firstterm> (further in the documentation
+ referred to by the acronym <acronym id="utf.def.ref">UTF</acronym>) provides both an easy to use and flexible
+ solution to this problem domain: C++ unit test implementation and organization.
+ </para>
+
+ <para role="first-line-indented">
+ Unit testing tasks arise during many different stages of software development: from initial project implementation
+ to its maintenance and later revisions. These tasks differ in their complexity and purpose and accordingly are
+ approached differently by different developers. The wide spectrum of tasks in this problem domain causes many
+ (sometimes conflicting) requirements to be placed on a unit testing framework. These include:
+ </para>
+
+ <itemizedlist mark ="square">
+ <listitem>
+ <simpara>
+ Writing a unit test module should be simple and obvious for new users.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ The framework should allow advanced users to perform nontrivial tests.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ A test module should be able to contain many small test cases, and the developer should be able to group them
+ into test suites.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ At the beginning of development, users want to see verbose and descriptive error messages, whereas during
+ regression testing they just want to know whether any tests failed.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ For small test modules, compilation time shouldn't dominate run time: users don't want to wait a minute to
+ compile a test that takes a second to run.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ For long and complex tests users want to be able to see the test progress.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ The simplest tests shouldn't require an external library.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ For long-term usage, users of a unit test framework should be able to build it as a standalone library.
+ </simpara>
+ </listitem>
+ </itemizedlist>
+
+ <para role="first-line-indented">
+ The &utf; design is based on the above rationale and provides versatile facilities to:
+ </para>
+
+ <itemizedlist mark ="square">
+ <listitem>
+ <simpara>
+ Simplify writing test cases by using various <link linkend="utf.testing-tools.reference">testing tools</link>.
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara><link linkend="utf.user-guide.test-organization">Organize test cases</link> into a test tree.</simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ Relieve you of messy error detection and reporting duties, as well as framework runtime parameter processing.
+ </simpara>
+ </listitem>
+ </itemizedlist>
+
+ <para role="first-line-indented">
+ The &utf; keeps track of all passed and failed testing tool <link linkend="test-assertion.def">assertions</link>,
+ provides an ability to check the <link linkend="utf.user-guide.test-output.progress">test progress</link>
+ and generates a <link linkend="utf.user-guide.test-output.results-report">result report</link> in several different
+ formats. The &utf; supplies command line test runners that initialize the framework and run the requested tests.
+ Depending on the selected <link linkend="utf.compilation.flags">compilation flags</link>, a default function
+ main() implementation, which invokes the supplied test runner, can be generated automatically as well.
+ </para>
+
+ <para role="first-line-indented">
+ The &utf; is intended to be used for both simple and nontrivial testing. It is not intended to be used with
+ production code; in that case the <ulink url="under_construction.html">Program Execution Monitor</ulink> is more
+ suitable.
+ </para>
+
+ <para role="first-line-indented">
+ Given the largely differing requirements of new and advanced users, it is clear that the &utf; must provide both
+ simple, easy-to-use interfaces with limited customization options and advanced interfaces, which allow unit testing
+ to be fully customized. Accordingly the material provided in this documentation is split into two sections:
+ </para>
+
+ <itemizedlist mark ="upper-roman">
+ <listitem>
+ <simpara>
+ <link linkend="utf.user-guide">The User's Guide</link>: covers all functionality that doesn't require
+ knowledge of the &utf; internals
+ </simpara>
+ </listitem>
+ <listitem>
+ <simpara>
+ <ulink url="under_construction.html">The Advanced User's Guide</ulink>: covers all implementation details
+ required for a user to understand the advanced customization options available in the &utf;, and for a user
+ interested in extending the testing framework.
+ </simpara>
+ </listitem>
+ </itemizedlist>
+
+ <para role="first-line-indented">
+ For those interested in getting started quickly, please visit the <ulink url="example-toc.html">collection of
+ examples</ulink> presented in this documentation.
+ </para>
+ </section>
+
+ <xi:include href="utf.tutorials.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+
+ <section id="utf.compilation">
+ <title>The &utf; compilation variants and procedures</title>
+ <titleabbrev>Compilation</titleabbrev>
+
+ <section id="utf.compilation.impl">
+ <title/>
+
+ <para role="first-line-indented">
+ The &utf; is a comparatively complicated component, implemented in close to a hundred header and source files,
+ so for long term usage the preferable solution is to build the &utf; as a reusable standalone library.
+ Depending on your platform this may save you significant time during test module compilation and doesn't
+ really require that much effort.
+ <ulink url="http://boost.org/more/getting_started/index.html">Boost Getting Started</ulink> tells you how to get
+ pre-built libraries for some platforms. If available, this is the easiest option and you can ignore the standalone
+ library compilation instructions below.
+ </para>
+
+ <para role="first-line-indented">
+ The following files constitute the &utf; implementation. Each <filename>.cpp</filename> source file listed below
+ simply includes the corresponding <filename>.ipp</filename> implementation header from
+ <filename>boost/test/impl</filename>, which is what the links point to:
+ </para>
+
+ <itemizedlist html-class="files-list">
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/debug.ipp"><filename>debug.cpp</filename></ulink></simpara>
+ </listitem>
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/compiler_log_formatter.ipp"><filename>compiler_log_formatter.cpp</filename></ulink></simpara>
+ </listitem>
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/exception_safety.ipp"><filename>exception_safety.cpp</filename></ulink></simpara>
+ </listitem>
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/execution_monitor.ipp"><filename>execution_monitor.cpp</filename></ulink></simpara>
+ </listitem>
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/framework.ipp"><filename>framework.cpp</filename></ulink></simpara>
+ </listitem>
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/interaction_based.ipp"><filename>interaction_based.cpp</filename></ulink></simpara>
+ </listitem>
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/logged_expectations.ipp"><filename>logged_expectations.cpp</filename></ulink></simpara>
+ </listitem>
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/plain_report_formatter.ipp"><filename>plain_report_formatter.cpp</filename></ulink></simpara>
+ </listitem>
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/progress_monitor.ipp"><filename>progress_monitor.cpp</filename></ulink></simpara>
+ </listitem>
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/results_collector.ipp"><filename>results_collector.cpp</filename></ulink></simpara>
+ </listitem>
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/results_reporter.ipp"><filename>results_reporter.cpp</filename></ulink></simpara>
+ </listitem>
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/unit_test_log.ipp"><filename>unit_test_log.cpp</filename></ulink></simpara>
+ </listitem>
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/unit_test_main.ipp"><filename>unit_test_main.cpp</filename></ulink></simpara>
+ </listitem>
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/unit_test_monitor.ipp"><filename>unit_test_monitor.cpp</filename></ulink></simpara>
+ </listitem>
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/unit_test_parameters.ipp"><filename>unit_test_parameters.cpp</filename></ulink></simpara>
+ </listitem>
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/unit_test_suite.ipp"><filename>unit_test_suite.cpp</filename></ulink></simpara>
+ </listitem>
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/xml_log_formatter.ipp"><filename>xml_log_formatter.cpp</filename></ulink></simpara>
+ </listitem>
+ <listitem>
+ <simpara><ulink url="../../../../boost/test/impl/xml_report_formatter.ipp"><filename>xml_report_formatter.cpp</filename></ulink></simpara>
+ </listitem>
+ </itemizedlist>
+ </section>
+
+ <section id="utf.compilation.procedured">
+ <title>Compilation procedures</title>
+
+ <para role="first-line-indented">
+ In comparison with many other Boost libraries, which are implemented entirely in header files, compilation and
+ linking with the &utf; may require additional steps. The &utf; presents you with options to either
+ <link linkend="utf.compilation.standalone">build and link with a standalone library</link> or
+ <link linkend="utf.compilation.direct-include">include the implementation directly</link> into a test module.
+ If you opt to use the library, the &utf; headers implement
+ <link linkend="utf.compilation.auto-linking">auto-linking</link> support. The compilation of the &utf; library and
+ a test module can be configured using the following compilation flags.
+ </para>
+
+ <table id="utf.compilation.flags">
+ <title>The &utf; compilation flags</title>
+ <tgroup cols="2">
+ <colspec colname="c1"/>
+ <colspec colname="c2"/>
+ <thead>
+ <row>
+ <entry>Flag</entry>
+ <entry>Usage</entry>
+ </row>
+ </thead>
+ <tbody>
+ <row>
+ <entry id="utf.flag.dyn-link">BOOST_TEST_DYN_LINK</entry>
+
+ <entry>Define this flag to build/use the dynamic library.</entry>
+ </row>
+ <row>
+ <entry id="utf.flag.no-lib">BOOST_TEST_NO_LIB</entry>
+
+ <entry>Define this flag to prevent auto-linking.</entry>
+ </row>
+ <row>
+ <entry id="utf.flag.no-main">BOOST_TEST_NO_MAIN</entry>
+
+ <entry>Define this flag to prevent generation of the function main() implementation.</entry>
+ </row>
+ <row>
+ <entry id="utf.flag.main">BOOST_TEST_MAIN</entry>
+
+ <entry>
+ Define this flag to generate an empty test module initialization function and, in the case of the
+ <link linkend="utf.user-guide.dynamic-lib-runner">dynamic library variant</link>, a default function main()
+ implementation as well.
+ </entry>
+ </row>
+ <row>
+ <entry id="utf.flag.module">BOOST_TEST_MODULE</entry>
+
+ <entry>
+ Define this flag to generate the test module initialization function, which uses the defined value to name
+ the master test suite. In the case of the <link linkend="utf.user-guide.dynamic-lib-runner">dynamic library variant</link>,
+ a default function main() implementation is generated as well.
+ </entry>
+ </row>
+ <row>
+ <entry id="utf.flag.alt-init-api">BOOST_TEST_ALTERNATIVE_INIT_API</entry>
+
+ <entry>
+ Define this flag to switch to the alternative test module initialization API.
+ </entry>
+ </row>
+ </tbody>
+ </tgroup>
+ </table>
+
+ <para role="first-line-indented">
+ Later in the documentation you will see in detail when and how these flags should be used.
+ </para>
+ </section>
+
+ <section id="utf.compilation.standalone">
+ <title>Standalone library compilation</title>
+
+ <para role="first-line-indented">
+ If you opted to link your program with the standalone library, you need to build the library first. To build a
+ standalone library, all the C++ files (.cpp) that constitute the &utf; <link linkend="pem.impl">implementation</link>
+ need to be listed as source files in your makefile<footnote><simpara>A variety of make systems can be used. To name
+ a few: <acronym>GNU</acronym> make (and other make clones) and build systems integrated into <acronym>IDE</acronym>s
+ (for example, Microsoft Visual Studio). The Boost-preferred solution is the Boost.Build system, which is based on the
+ bjam tool. Make systems require some kind of configuration file that lists all the files that constitute the library
+ and all the build options: for example, the makefile used by make, the Microsoft Visual Studio project file, or the
+ Jamfile used by Boost.Build. For the sake of simplicity, let's call this file the makefile.</simpara></footnote>.
+ </para>
+
+ <para role="first-line-indented">
+ The Jamfile for use with the Boost.Build system is supplied in the <filename class="directory">libs/test/build</filename>
+ directory. The &utf; can be built as either a <link linkend="utf.compilation.standalone.static">static</link>
+ or a <link linkend="utf.compilation.standalone.dynamic">dynamic</link> library.
+ </para>
+
+ <section id="utf.compilation.standalone.static">
+ <title>Static library compilation</title>
+
+ <para role="first-line-indented">
+ No special build options or macro definitions are required to build the static library. Using the Boost.Build
+ system, you can build the static library with the following command from the
+ <filename class="directory">libs/test/build</filename> directory:
+ </para>
+
+ <cmdsynopsis>
+ <command>bjam</command>
+ <arg>toolset=&lt;your-tool-name&gt;</arg>
+ <arg>link=static</arg>
+ <arg choice="req">boost_unit_test_framework</arg>
+ </cmdsynopsis>
+
+ <para role="first-line-indented">
+ On Windows, you can also use the supplied Microsoft Visual Studio .NET project file.
+ </para>
+ </section>
+
+ <section id="utf.compilation.standalone.dynamic">
+ <title>Dynamic library compilation</title>
+
+ <para role="first-line-indented">
+ To build the dynamic library<footnote><simpara>By the term dynamic library we mean a <firstterm>dynamically
+ loaded library</firstterm>, alternatively called a <firstterm>shared library</firstterm>.</simpara></footnote> you
+ need to add <xref linkend="utf.flag.dyn-link" endterm="utf.flag.dyn-link"/> to the list of macro definitions in the
+ makefile. Using the Boost.Build system, you can build the dynamic library with the following command from the
+ <filename class="directory">libs/test/build</filename> directory:
+ </para>
+
+ <cmdsynopsis>
+ <command>bjam</command>
+ <arg>toolset=&lt;your-tool-name&gt;</arg>
+ <arg>link=shared</arg>
+ <arg choice="req">boost_unit_test_framework</arg>
+ </cmdsynopsis>
+
+ <para role="first-line-indented">
+ On Windows, you can also use the supplied Microsoft Visual Studio .NET project file.
+ </para>
+
+ <important>
+ <simpara>
+ For a test module to successfully link with the dynamic library, the flag
+ <xref linkend="utf.flag.dyn-link" endterm="utf.flag.dyn-link"/> needs to be defined both when building the dynamic
+ library and when compiling the test module.
+ </simpara>
+ </important>
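To connect the flag with actual test module code (a sketch: the module name and test case below are invented, though the macros and header are the real Boost.Test API), a module intended for the dynamic library variant defines the flag before including any framework header:

```cpp
// Sketch of a test module that links against the shared library variant.
// BOOST_TEST_DYN_LINK must match the flag used when the library was built;
// BOOST_TEST_MODULE also triggers generation of a default main().
#define BOOST_TEST_DYN_LINK
#define BOOST_TEST_MODULE dynamic_link_example  // hypothetical module name
#include <boost/test/unit_test.hpp>

BOOST_AUTO_TEST_CASE( smoke_test )
{
    BOOST_CHECK_EQUAL( 2 + 2, 4 );
}
```

The same source builds against the static library simply by dropping the BOOST_TEST_DYN_LINK definition; the rest of the module is unchanged.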
+ </section>
+ </section>
+
+ <section id="utf.compilation.auto-linking">
+ <title>Support of the auto-linking feature</title>
+ <titleabbrev>Auto-linking support</titleabbrev>
+
+ <para role="first-line-indented">
+ For the Microsoft family of compilers, the &utf; provides the ability to automatically select the proper library name
+ and add it to the list of objects to be linked with. This feature is enabled by default. To disable it, you have
+ to define the flag <xref linkend="utf.flag.no-lib" endterm="utf.flag.no-lib"/>. For more details on the implementation
+ and configuration of the auto-linking feature, consult the
+ <ulink url="under_construction.html">appropriate documentation</ulink>.
+ </para>
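The idea behind auto-linking can be modeled in a few lines (a sketch with an invented helper, not Boost's code; the real logic composes more name decorations than shown): the headers build the expected library file name from build properties and, on MSVC, hand it to `#pragma comment(lib, ...)` unless BOOST_TEST_NO_LIB is defined.

```cpp
#include <string>

// Toy model of auto-link library-name selection. autolink_name is an
// invented illustration, not part of Boost; the real mechanism also
// encodes threading, runtime and version tags into the name.
std::string autolink_name(const std::string& toolset, bool debug)
{
    return "boost_unit_test_framework-" + toolset
         + (debug ? "-gd" : "") + ".lib";
}
```

Because the name is chosen at compile time from the same macros the build uses, the headers and the prebuilt library stay consistent without the user editing linker settings.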
+ </section>
+
+ <section id="utf.compilation.direct-include">
+ <title>Including the &utf; directly into your test module</title>
+ <titleabbrev>Direct include</titleabbrev>
+
+ <para role="first-line-indented">
+ If you prefer to avoid compiling the standalone library, you can either include all the files that constitute the
+ static library in your test module's makefile or include them as part of a test module's source file.
+ To facilitate the latter variant, the &utf; provides the
+ <link linkend="utf.user-guide.single-header-variant">single-header usage variant</link>. In either case no special
+ build options or macro definitions need to be added to your compilation options by default, but the
+ same flags that can be used for the <link linkend="utf.compilation.standalone">standalone library compilation</link>
+ are applicable in this case. Though, obviously, neither <xref linkend="utf.flag.dyn-link" endterm="utf.flag.dyn-link"/>
+ nor <xref linkend="utf.flag.no-lib" endterm="utf.flag.no-lib"/> is applicable. This solution may not be the
+ best choice in the long run, since it requires recompiling the &utf; sources for every test module you use it with
+ and for every change to a test module you are working on. As a result, your testing cycle time may increase. If this
+ becomes tiresome, I recommend switching to one of the prebuilt library usage variants.
+ </para>
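For comparison (a sketch: the module name is invented, while the macros and header are the real Boost.Test API), the single-header variant compiles the whole framework implementation into the module, so no library is built or linked at all:

```cpp
// Sketch of the single-header usage variant: including
// boost/test/included/unit_test.hpp pulls the implementation sources into
// the module, trading longer compile times for a self-contained build.
#define BOOST_TEST_MODULE single_header_example  // hypothetical module name
#include <boost/test/included/unit_test.hpp>

BOOST_AUTO_TEST_CASE( sanity_check )
{
    BOOST_CHECK( 1 + 1 == 2 );
}
```

This is the variant that suffers most from the recompilation cost described above, since the framework sources are rebuilt with every compilation of the module.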
+ </section>
+ </section>
+
+ <xi:include href="utf.users-guide.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+ <xi:include href="utf.testing-tools.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+ <xi:include href="utf.usage-recommendations.xml" xmlns:xi="http://www.w3.org/2001/XInclude"/>
+</section>

Added: trunk/libs/test/doc/src/xsl/docbook.xsl
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/xsl/docbook.xsl 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,247 @@
+<?xml version="1.0" encoding="utf-8"?>
+<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
+
+ <xsl:import href="../../../../../tools/boostbook/xsl/docbook.xsl"/>
+
+ <xsl:output method="xml"
+ doctype-public="-//OASIS//DTD DocBook XML V4.2//EN"
+ doctype-system="http://www.oasis-open.org/docbook/xml/4.2/docbookx.dtd"/>
+
+ <xsl:param name="boost.root" select="'../../../..'"/>
+ <xsl:param name="snippet.dir" select="'./snippet'"/>
+ <xsl:param name="example.dir" select="'./example'"/>
+
+<!-- *********************************************************************** -->
+<!-- *********************************************************************** -->
+<!-- *********************************************************************** -->
+
+ <xsl:template match="annotations/annotation" mode="area-spec">
+ <area id="{@id}-co" linkends="{@id}" coords="{@coords}"/>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match="annotations/annotation" mode="area-descr">
+ <callout arearefs="{@id}-co" id="{@id}">
+ <xsl:apply-templates/>
+ </callout>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match="btl-example" mode="content">
+ <programlisting><xi:include href="{$example.dir}/{@name}.cpp" parse="text" xmlns:xi="http://www.w3.org/2001/XInclude">
+ <xi:fallback xmlns:xi='http://www.w3.org/2001/XInclude'>
+ <simpara><emphasis>FIXME: MISSING XINCLUDE CONTENT</emphasis></simpara>
+ </xi:fallback>
+ </xi:include></programlisting>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match="btl-example">
+ <example id="{../@id}.{@name}">
+ <xsl:copy-of select="title|para|simpara"/>
+
+ <xsl:choose>
+ <xsl:when test="annotations">
+ <programlistingco>
+ <areaspec units="linecolumn">
+ <xsl:apply-templates select ="annotations/annotation" mode="area-spec"/>
+ </areaspec>
+
+ <xsl:apply-templates select ="." mode="content"/>
+
+ <simplelist type='horiz' columns='5'>
+ <member><literal><ulink url="../src/examples/{@name}.cpp">Source code</ulink></literal></member>
+ <member> | </member>
+ <member><literal><toggle linkend="{@name}-annot" on-label="Show annotations" off-label="Hide annotations"/></literal></member>
+ <member> | </member>
+ <member><literal><toggle linkend="{@name}-output" on-label="Show output" off-label="Hide output"/></literal></member>
+ </simplelist>
+
+ <calloutlist html-id="{@name}-annot" html-class="example-annot">
+ <xsl:apply-templates select ="annotations/annotation" mode="area-descr"/>
+ </calloutlist>
+
+ </programlistingco>
+ </xsl:when>
+
+ <xsl:otherwise>
+ <xsl:apply-templates select ="." mode="content"/>
+
+ <simplelist type='horiz' columns='3'>
+ <member><literal><ulink url="../src/examples/{@name}.cpp">Source code</ulink></literal></member>
+ <member> | </member>
+ <member><literal><toggle linkend="{@name}-output" on-label="Show output" off-label="Hide output"/></literal></member>
+ </simplelist>
+ </xsl:otherwise>
+
+ </xsl:choose>
+
+ <screen html-id="{@name}-output" html-class="example-output"><xi:include href="{$example.dir}/{@name}.output" parse="text" xmlns:xi="http://www.w3.org/2001/XInclude">
+ <xi:fallback xmlns:xi='http://www.w3.org/2001/XInclude'><simpara><emphasis>FIXME: MISSING XINCLUDE CONTENT</emphasis></simpara></xi:fallback>
+ </xi:include></screen>
+ </example>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match="btl-snippet" mode="content">
+ <programlisting><xi:include href="{$snippet.dir}/{@name}.cpp" parse="text" xmlns:xi="http://www.w3.org/2001/XInclude">
+ <xi:fallback xmlns:xi='http://www.w3.org/2001/XInclude'>
+ <simpara><emphasis>FIXME: MISSING XINCLUDE CONTENT</emphasis></simpara>
+ </xi:fallback>
+ </xi:include></programlisting>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match="btl-snippet">
+ <xsl:choose>
+ <xsl:when test="annotations">
+ <programlistingco>
+ <areaspec units="linecolumn">
+ <xsl:apply-templates select ="annotations/annotation" mode="area-spec"/>
+ </areaspec>
+
+ <xsl:apply-templates select ="." mode="content"/>
+
+ <calloutlist>
+ <xsl:apply-templates select ="annotations/annotation" mode="area-descr"/>
+ </calloutlist>
+
+ </programlistingco>
+ </xsl:when>
+
+ <xsl:otherwise>
+ <xsl:apply-templates select ="." mode="content"/>
+ </xsl:otherwise>
+ </xsl:choose>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match="btl-parameter-reference">
+ <inline-reference id="{@id}" curr_entry_var="{generate-id()}">
+ <xsl:apply-templates/>
+ </inline-reference>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match="inline-reference">
+ <inline-reference id="{@id}" curr_entry_var="{generate-id()}">
+ <xsl:apply-templates select="*"/>
+ </inline-reference>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match="btl-parameter-reference/refentry">
+ <refentry name="{@name}">
+ <segmentedlist>
+ <?dbhtml list-presentation="list"?>
+
+ <segtitle>Parameter Name</segtitle>
+ <segtitle>Environment variable name</segtitle>
+ <segtitle>Command line argument name</segtitle>
+ <segtitle>Acceptable Values</segtitle>
+ <segtitle>Description</segtitle>
+
+ <seglistitem>
+ <seg><emphasis><xsl:value-of select="name"/></emphasis></seg>
+ <seg><varname><xsl:value-of select="env"/></varname></seg>
+ <seg><parameter class='command'><xsl:value-of select="cla"/></parameter></seg>
+ <seg><xsl:copy-of select="vals/*"/></seg>
+ <seg><xsl:copy-of select="descr/*"/></seg>
+ </seglistitem>
+ </segmentedlist>
+ </refentry>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match="btl-equation">
+ <informaltable tabstyle="equation" id="{../@id}.eq.{@index}">
+ <tgroup cols="2">
+ <tbody><row>
+ <entry>
+ <informalequation>
+ <mathphrase>
+ <xsl:copy-of select="node()|text()"/>
+ </mathphrase>
+ </informalequation>
+ </entry>
+ <entry role="index">(<emphasis role="bold"><xsl:value-of select="@index"/></emphasis>)</entry>
+ </row></tbody>
+ </tgroup>
+ </informaltable>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <!-- TO FIX: correct formatting? -->
+ <xsl:template match="inline-synopsis">
+ <programlisting html-class="inline-synopsis">
+ <xsl:if test="@id">
+ <xsl:attribute name="id">
+ <xsl:value-of select="@id"/>
+ </xsl:attribute>
+ </xsl:if>
+ <xsl:apply-templates select="*" mode="inline-synopsis"/>
+ </programlisting>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template name="generate.id">
+ <xsl:param name="node" select="."/>
+
+ <xsl:choose>
+ <xsl:when test="$node/@ref-id">
+ <xsl:value-of select="$node/@ref-id"/>
+ </xsl:when>
+ <xsl:when test="ancestor::class-specialization|ancestor::struct-specialization|ancestor::union-specialization">
+ <xsl:value-of select="generate-id(.)"/>
+ <xsl:text>-bb</xsl:text>
+ </xsl:when>
+ <xsl:otherwise>
+ <xsl:apply-templates select="$node" mode="generate.id"/>
+ </xsl:otherwise>
+ </xsl:choose>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match="macro" mode="inline-synopsis">
+ <xsl:text>&#10;</xsl:text>
+
+ <xsl:choose>
+ <xsl:when test="@ref-id='none'">
+ <xsl:value-of select="@name"/>
+ </xsl:when>
+ <xsl:otherwise>
+ <xsl:call-template name="link-or-anchor">
+ <xsl:with-param name="to" select="@name"/>
+ <xsl:with-param name="text" select="@name"/>
+ <xsl:with-param name="link-type" select="'anchor'"/>
+ </xsl:call-template>
+ </xsl:otherwise>
+ </xsl:choose>
+
+ <xsl:if test="@kind='functionlike'">
+ <xsl:text>(</xsl:text>
+ <xsl:for-each select="macro-parameter">
+ <xsl:if test="position() &gt; 1">
+ <xsl:text>, </xsl:text>
+ </xsl:if>
+ <emphasis><xsl:value-of select="@name"/></emphasis>
+ </xsl:for-each>
+ <xsl:text>)</xsl:text>
+ </xsl:if>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+</xsl:stylesheet>

Added: trunk/libs/test/doc/src/xsl/html.xsl
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/src/xsl/html.xsl 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,421 @@
+<?xml version="1.0" encoding="utf-8"?>
+<xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform" version="1.0">
+
+ <xsl:import
+ href="http://docbook.sourceforge.net/release/xsl/current/html/chunktoc.xsl"/>
+
+ <xsl:import
+ href="http://docbook.sourceforge.net/release/xsl/current/html/math.xsl"/>
+
+ <xsl:import href="../../../../../tools/boostbook/xsl/chunk-common.xsl"/>
+ <xsl:import href="../../../../../tools/boostbook/xsl/docbook-layout.xsl"/>
+ <xsl:import href="../../../../../tools/boostbook/xsl/navbar.xsl"/>
+ <xsl:import href="../../../../../tools/boostbook/xsl/admon.xsl"/>
+ <xsl:import href="../../../../../tools/boostbook/xsl/xref.xsl"/>
+ <xsl:import href="../../../../../tools/boostbook/xsl/relative-href.xsl"/>
+ <xsl:import href="../../../../../tools/boostbook/xsl/callout.xsl"/>
+ <xsl:import href="../../../../../tools/boostbook/xsl/html-base.xsl"/>
+
+ <xsl:param name = "boost.root" select = "'../../../..'"/>
+ <xsl:param name = "callout.graphics" select = "'0'"/>
+
+ <xsl:param name = "callout.graphics.path"
+ select = "concat($boost.root, '/doc/html/images/')"/>
+
+ <xsl:param name = "html.stylesheet" select = "'../style/style.css'"/>
+
+ <xsl:param name = "chunk.fast" select = "1"/>
+ <xsl:param name = "chunk.separate.lots" select = "1"/>
+ <xsl:param name = "chunk.toc" select = "'btl-toc.xml'"/>
+ <xsl:param name = "manual.toc" select = "'btl-toc.xml'"/>
+
+ <xsl:param name = "use.id.as.filename" select = "1"/>
+
+ <xsl:param name = "chapter.autolabel" select = "0"/>
+ <xsl:param name = "section.autolabel" select = "0"/>
+
+ <xsl:param name = "variablelist.as.table" select = "1"/>
+ <xsl:param name = "variablelist.term.break.after" select = "1"/>
+
+ <xsl:param name = "runinhead.default.title.end.punct">:<br/></xsl:param>
+
+ <xsl:param name = "generate.toc">
+ book toc,title
+ chapter toc,title
+ part toc,title
+ section toc,title
+ qandaset toc
+ </xsl:param>
+
+ <xsl:param name="generate.section.toc.level" select="2"/>
+
+ <xsl:template match="itemizedlist">
+ <div>
+ <xsl:apply-templates select="." mode="class.attribute"/>
+ <xsl:call-template name="anchor"/>
+ <xsl:if test="title">
+ <xsl:call-template name="formal.object.heading"/>
+ </xsl:if>
+
+ <!-- Preserve order of PIs and comments -->
+ <xsl:apply-templates
+ select="*[not(self::listitem
+ or self::title
+ or self::titleabbrev)]
+ |comment()[not(preceding-sibling::listitem)]
+ |processing-instruction()[not(preceding-sibling::listitem)]"/>
+
+ <ul>
+ <xsl:if test="@role">
+ <xsl:attribute name="class">
+ <xsl:value-of select="@role"/>
+ </xsl:attribute>
+ </xsl:if>
+
+ <xsl:if test="$css.decoration != 0">
+ <xsl:attribute name="type">
+ <xsl:call-template name="list.itemsymbol"/>
+ </xsl:attribute>
+ </xsl:if>
+
+ <xsl:if test="@spacing='compact'">
+ <xsl:attribute name="compact">
+ <xsl:value-of select="@spacing"/>
+ </xsl:attribute>
+ </xsl:if>
+ <xsl:apply-templates
+ select="listitem
+ |comment()[preceding-sibling::listitem]
+ |processing-instruction()[preceding-sibling::listitem]"/>
+ </ul>
+ </div>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match = "seglistitem">
+ <xsl:call-template name = "anchor"/>
+ <table class="seglistitem">
+ <xsl:apply-templates/>
+ </table>
+ <br/>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match = "seg">
+ <xsl:variable name = "segnum" select = "count(preceding-sibling::seg)+1"/>
+ <xsl:variable name = "seglist" select = "ancestor::segmentedlist"/>
+ <xsl:variable name = "segtitles" select = "$seglist/segtitle"/>
+
+ <tr class="seg">
+ <td>
+ <strong><nobr>
+ <span class="segtitle">
+ <xsl:apply-templates select = "$segtitles[$segnum=position()]" mode = "segtitle-in-seg"/>
+ </span>
+ </nobr></strong>
+ </td>
+ <td>
+ <xsl:text>: </xsl:text>
+ </td>
+ <td>
+ <xsl:apply-templates/>
+ </td>
+ </tr>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match = "userinput"><span class="userinput">&lt;<xsl:apply-templates/>&gt;</span></xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match = "toggle">
+ <xsl:variable name = "toggle-id" select = "generate-id()"/>
+
+ <a href="#" target="_top" id="{$toggle-id}" >
+ <xsl:attribute name = "onclick">toggle_element( '<xsl:value-of select = "@linkend"/>', '<xsl:value-of select = "$toggle-id"/>', '<xsl:value-of select = "@on-label"/>', '<xsl:value-of select = "@off-label"/>' ); return false;</xsl:attribute>
+ <xsl:value-of select = "@on-label"/>
+ </a>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template name = "user.head.content">
+ <xsl:param name = "node" select = "."/>
+ <script language="JavaScript1.2">
+ <xsl:attribute name = "src">
+ <xsl:call-template name = "href.target.relative">
+ <xsl:with-param name = "target" select = "'../js/boost-test.js'"/>
+ </xsl:call-template>
+ </xsl:attribute>
+ </script>
+
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match = "*" mode = "class.attribute">
+ <xsl:param name = "class" select = "local-name(.)"/>
+
+ <xsl:choose>
+
+ <xsl:when test="@html-class">
+ <xsl:attribute name = "class">
+ <xsl:value-of select = "@html-class"/>
+ </xsl:attribute>
+ </xsl:when>
+
+ <xsl:otherwise>
+ <xsl:attribute name = "class">
+ <xsl:value-of select = "$class"/>
+ </xsl:attribute>
+ </xsl:otherwise>
+
+ </xsl:choose>
+
+ <!-- TO FIX: is there a better way? -->
+ <xsl:if test="@html-id">
+ <xsl:attribute name = "id">
+ <xsl:value-of select = "@html-id"/>
+ </xsl:attribute>
+ </xsl:if>
+
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match = "inline-reference/refentry" mode = "index">
+ <xsl:variable name = "curr_entry_var">
+ <xsl:value-of select = "ancestor::inline-reference/@curr_entry_var"/>
+ </xsl:variable>
+
+ <xsl:variable name = "targ_id">
+ <xsl:value-of select = "../@id"/>.<xsl:value-of select = "@name"/>
+ </xsl:variable>
+
+ <li></li>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match = "inline-reference/refentry" mode = "entry">
+ <xsl:variable name = "targ_id">
+ <xsl:value-of select = "../@id"/>.<xsl:value-of select = "@name"/>
+ </xsl:variable>
+
+ <div class="entry" id="{$targ_id}">
+ <xsl:apply-templates select = "*"/>
+ </div>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match = "inline-reference/refentry/seealso">
+ <span class="inline-ref-see-also"><xsl:text>See also: </xsl:text></span>
+ <xsl:apply-templates select = "node()|text()"/>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match = "inline-reference//ref">
+ <xsl:variable name = "curr_entry_var">
+ <xsl:value-of select = "ancestor::inline-reference/@curr_entry_var"/>
+ </xsl:variable>
+
+ <xsl:variable name = "targ">
+ <xsl:value-of select = "."/>
+ </xsl:variable>
+
+ <xsl:variable name = "targ_id">
+ <xsl:value-of select = "ancestor::inline-reference/@id"/>.<xsl:value-of select = "$targ"/>
+ </xsl:variable>
+
+
+
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template match = "inline-reference">
+ <script language="JavaScript1.2">
+ var <xsl:value-of select = "@curr_entry_var"/>;
+ </script>
+
+ <table>
+ <xsl:apply-templates select = "." mode = "class.attribute"/>
+ <tr>
+ <td class="index" valign="top">
+ <ul>
+ <xsl:apply-templates select ="refentry" mode = "index"/>
+ </ul>
+ </td>
+
+ <td class="content" valign="top">
+ <xsl:apply-templates select ="refentry" mode = "entry"/>
+ </td>
+ </tr>
+ </table>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+ <xsl:template name = "navig.link">
+ <xsl:param name = "targ"/>
+ <xsl:param name = "direction"/>
+ <xsl:param name = "accesskey"/>
+ <xsl:param name = "context" select = "."/>
+
+ <xsl:if test = "count($targ)>0">
+ <xsl:variable name = "href-target">
+ <xsl:call-template name = "href.target">
+ <xsl:with-param name = "object" select = "$targ"/>
+ <xsl:with-param name = "context" select = "$context"/>
+ </xsl:call-template>
+ </xsl:variable>
+
+ <a>
+ <xsl:if test="$accesskey">
+ <xsl:attribute name = "accesskey"><xsl:value-of select = "$accesskey"/></xsl:attribute>
+ </xsl:if>
+ <xsl:attribute name = "href">
+ <!--xsl:message>
+0000 <xsl:value-of select = "$href-target"/>
+ </xsl:message-->
+
+ <xsl:value-of select = "$href-target"/>
+ </xsl:attribute>
+
+ <xsl:call-template name = "navig.content">
+ <xsl:with-param name = "direction" select = "$direction"/>
+ </xsl:call-template>
+ </a>
+ </xsl:if>
+ </xsl:template>
+
+ <!-- *********************************************************************** -->
+
+ <xsl:template match = "*" mode = "navig.location-path">
+ <xsl:param name = "home"/>
+ <xsl:param name = "next"/>
+ <xsl:param name = "context" select = "."/>
+ <xsl:param name = "leaf" select = "1"/>
+
+ <xsl:variable name = "node" select = "."/>
+
+ <xsl:choose>
+ <xsl:when test="($node) and ($node != $home)">
+ <xsl:apply-templates select = "parent::*" mode="navig.location-path">
+ <xsl:with-param name = "home" select = "$home"/>
+ <xsl:with-param name = "next" select = "$next"/>
+ <xsl:with-param name = "context" select = "$context"/>
+ <xsl:with-param name = "leaf" select = "0"/>
+ </xsl:apply-templates>
+
+ <xsl:variable name = "text">
+ <xsl:choose>
+ <xsl:when test="$node/titleabbrev">
+ <xsl:value-of select = "$node/titleabbrev"/>
+ </xsl:when>
+ <xsl:otherwise>
+ <xsl:value-of select = "$node/title"/>
+ </xsl:otherwise>
+ </xsl:choose>
+ </xsl:variable>
+
+ <xsl:choose>
+ <xsl:when test="$leaf">
+ <b><xsl:value-of select = "$text"/></b>
+ </xsl:when>
+ <xsl:otherwise>
+ <xsl:call-template name = "navig.link">
+ <xsl:with-param name = "direction" select = "$text"/>
+ <xsl:with-param name = "targ" select = "$node"/>
+ <xsl:with-param name = "context" select = "$context"/>
+ </xsl:call-template>
+ </xsl:otherwise>
+ </xsl:choose>
+
+ <xsl:variable name = "next-sibling" select = "following-sibling::*[1]" />
+
+ <xsl:choose>
+ <xsl:when test = "$next-sibling and ($leaf = 0 or $next-sibling != $next)">
+ <a>
+ <xsl:attribute name = "href">
+ <xsl:call-template name = "href.target">
+ <xsl:with-param name = "object" select = "$next-sibling"/>
+ <xsl:with-param name = "context" select = "$context"/>
+ </xsl:call-template>
+ </xsl:attribute>
+ &gt;
+ </a>
+ </xsl:when>
+ <xsl:when test = "$leaf = 0">
+ <xsl:text> &gt; </xsl:text>
+ </xsl:when>
+ </xsl:choose>
+
+ </xsl:when>
+ <xsl:when test="$leaf = 0"><xsl:text> &gt; </xsl:text></xsl:when>
+ </xsl:choose>
+ </xsl:template>
+
+ <!-- *********************************************************************** -->
+
+ <xsl:template name = "header.navigation">
+ <xsl:param name = "prev" select = "/foo"/>
+ <xsl:param name = "next" select = "/foo"/>
+ <xsl:param name = "nav.context"/>
+
+ <xsl:variable name = "home" select = "/*[1]"/>
+ <xsl:variable name = "up" select = "parent::*"/>
+ <xsl:variable name = "boost.test.image.src" select = "concat($boost.root, '/libs/test/docbook/img/boost.test.logo.png')"/>
+
+ <table width = "100%">
+ <tr>
+ <td width="10%">
+ <a>
+ <xsl:attribute name = "href">
+ <xsl:call-template name = "href.target">
+ <xsl:with-param name = "object" select = "$home"/>
+ </xsl:call-template>
+ </xsl:attribute>
+
+ <img alt="Home" width="229" height="61" border="0">
+ <xsl:attribute name = "src">
+ <xsl:call-template name = "href.target.relative">
+ <xsl:with-param name = "target" select = "$boost.test.image.src"/>
+ </xsl:call-template>
+ </xsl:attribute>
+ </img>
+ </a>
+ </td>
+ <td valign = "middle" align="left">
+ <xsl:apply-templates select = "." mode="navig.location-path">
+ <xsl:with-param name = "home" select = "$home"/>
+ <xsl:with-param name = "next" select = "$next"/>
+ </xsl:apply-templates>
+ </td>
+ <td>
+ <div class = "spirit-nav">
+ <xsl:call-template name = "navig.link">
+ <xsl:with-param name = "direction" select = "'prev'"/>
+ <xsl:with-param name = "targ" select = "$prev"/>
+ <xsl:with-param name = "accesskey" select = "'p'"/>
+ </xsl:call-template>
+ <xsl:call-template name = "navig.link">
+ <xsl:with-param name = "direction" select = "'next'"/>
+ <xsl:with-param name = "targ" select = "$next"/>
+ <xsl:with-param name = "accesskey" select = "'n'"/>
+ </xsl:call-template>
+ </div>
+ </td>
+ </tr>
+ </table>
+ <hr/>
+ </xsl:template>
+
+<!-- *********************************************************************** -->
+
+</xsl:stylesheet>

Added: trunk/libs/test/doc/utf-boostbook.jam
==============================================================================
--- (empty file)
+++ trunk/libs/test/doc/utf-boostbook.jam 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -0,0 +1,46 @@
+# utf-boostbook.jam
+#
+# Copyright (c) 2010
+# Steven Watanabe
+#
+# Distributed Under the Boost Software License, Version 1.0. (See
+# accompanying file LICENSE_1_0.txt or copy at
+# http://www.boost.org/LICENSE_1_0.txt)
+
+import xsltproc : xslt xslt-dir ;
+import feature ;
+import path ;
+import toolset : using ;
+import generators ;
+
+feature.feature utf-boostbook : off on : propagated ;
+
+.initialized = ;
+
+.here = [ path.root [ path.make $(__file__:D) ] [ path.pwd ] ] ;
+
+rule init ( )
+{
+ if ! $(.initialized)
+ {
+ .initialized = true ;
+ using boostbook ;
+ generators.register-standard utf-boostbook.boostbook-to-docbook : XML : DOCBOOK : <utf-boostbook>on ;
+ generators.register-standard utf-boostbook.docbook-to-htmldir : DOCBOOK : HTMLDIR : <utf-boostbook>on ;
+
+ generators.override utf-boostbook.boostbook-to-docbook : boostbook.boostbook-to-docbook ;
+ generators.override utf-boostbook.docbook-to-htmldir : boostbook.docbook-to-htmldir ;
+ }
+}
+
+rule boostbook-to-docbook ( target : source : properties * )
+{
+ local stylesheet = [ path.native $(.here)/src/xsl/docbook.xsl ] ;
+ xslt $(target) : $(source) $(stylesheet) : $(properties) ;
+}
+
+rule docbook-to-htmldir ( target : source : properties * )
+{
+ local stylesheet = [ path.native $(.here)/src/xsl/html.xsl ] ;
+ xslt-dir $(target) : $(source) $(stylesheet) : $(properties) : html ;
+}

Modified: trunk/tools/boostbook/xsl/chunk-common.xsl
==============================================================================
--- trunk/tools/boostbook/xsl/chunk-common.xsl (original)
+++ trunk/tools/boostbook/xsl/chunk-common.xsl 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -68,6 +68,9 @@
     </xsl:variable>
 
     <xsl:choose>
+ <xsl:when test="$navtext = 'xxx'">
+ <xsl:value-of select="$direction"/>
+ </xsl:when>
         <xsl:when test="$navig.graphics != 0">
             <img>
                 <xsl:attribute name="src">

Copied: trunk/tools/boostbook/xsl/html-base.xsl (from r62041, /trunk/tools/boostbook/xsl/html.xsl)
==============================================================================
--- /trunk/tools/boostbook/xsl/html.xsl (original)
+++ trunk/tools/boostbook/xsl/html-base.xsl 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -9,29 +9,6 @@
 <xsl:stylesheet xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
                 xmlns:rev="http://www.cs.rpi.edu/~gregod/boost/tools/doc/revision"
                 version="1.0">
-
- <!-- Import the HTML chunking stylesheet -->
- <xsl:import
- href="http://docbook.sourceforge.net/release/xsl/current/html/chunk.xsl"/>
- <xsl:import
- href="http://docbook.sourceforge.net/release/xsl/current/html/math.xsl"/>
-
- <!-- Bring in the fast chunking overrides. There's nothing
- that we need to override, so include instead of importing it. -->
- <xsl:include
- href="http://docbook.sourceforge.net/release/xsl/current/html/chunkfast.xsl"/>
-
- <!-- We have to make sure that our templates override all
- docbook templates. Therefore, we include our own templates
- instead of importing them. In order for this to work,
- the stylesheets included here cannot also include each other -->
- <xsl:include href="chunk-common.xsl"/>
- <xsl:include href="docbook-layout.xsl"/>
- <xsl:include href="navbar.xsl"/>
- <xsl:include href="admon.xsl"/>
- <xsl:include href="xref.xsl"/>
- <xsl:include href="relative-href.xsl"/>
- <xsl:include href="callout.xsl"/>
   
   <xsl:param name="html.stylesheet">
     <xsl:choose>

Modified: trunk/tools/boostbook/xsl/html.xsl
==============================================================================
--- trunk/tools/boostbook/xsl/html.xsl (original)
+++ trunk/tools/boostbook/xsl/html.xsl 2010-05-17 16:09:18 EDT (Mon, 17 May 2010)
@@ -32,303 +32,6 @@
   <xsl:include href="xref.xsl"/>
   <xsl:include href="relative-href.xsl"/>
   <xsl:include href="callout.xsl"/>
-
- <xsl:param name="html.stylesheet">
- <xsl:choose>
- <xsl:when test = "$boost.defaults = 'Boost'">
- <xsl:value-of select = "concat($boost.root, '/doc/src/boostbook.css')"/>
- </xsl:when>
- <xsl:otherwise>
- boostbook.css
- </xsl:otherwise>
- </xsl:choose>
- </xsl:param>
-
- <xsl:param name="admon.style"/>
- <xsl:param name="admon.graphics">1</xsl:param>
- <xsl:param name="boostbook.verbose" select="0"/>
- <xsl:param name="navig.graphics" select="1"/>
- <xsl:param name="navig.graphics.extension" select="'.png'"/>
- <xsl:param name="chapter.autolabel" select="1"/>
- <xsl:param name="use.id.as.filename" select="1"/>
- <xsl:param name="refentry.generate.name" select="0"/>
- <xsl:param name="refentry.generate.title" select="1"/>
- <xsl:param name="make.year.ranges" select="1"/>
- <xsl:param name="generate.manifest" select="1"/>
- <xsl:param name="generate.section.toc.level" select="3"/>
- <xsl:param name="doc.standalone">false</xsl:param>
- <xsl:param name="chunker.output.indent">yes</xsl:param>
- <xsl:param name="chunker.output.encoding">US-ASCII</xsl:param>
- <xsl:param name="chunk.quietly" select="not(number($boostbook.verbose))"/>
- <xsl:param name="toc.max.depth">2</xsl:param>
- <xsl:param name="callout.graphics.number.limit">15</xsl:param>
- <xsl:param name = "admon.graphics.path"
- select = "concat($boost.root, '/doc/html/images/')"/>
- <xsl:param name = "navig.graphics.path"
- select = "concat($boost.root, '/doc/html/images/')"/>
- <xsl:param name = "callout.graphics.path"
- select = "concat($boost.root, '/doc/src/images/callouts/')"/>
-
-
- <xsl:param name="admon.style">
- <!-- Remove the style. Let the CSS do the styling -->
-</xsl:param>
-
-<!-- Always have graphics -->
-<xsl:param name="admon.graphics" select="1"/>
-
- <xsl:param name="generate.toc">
-appendix toc,title
-article/appendix nop
-article toc,title
-book toc,title
-chapter toc,title
-part toc,title
-preface toc,title
-qandadiv toc
-qandaset toc
-reference toc,title
-sect1 toc
-sect2 toc
-sect3 toc
-sect4 toc
-sect5 toc
-section toc
-set toc,title
- </xsl:param>
-
-
- <xsl:template name="format.cvs.revision">
- <xsl:param name="text"/>
-
- <!-- Remove the "$Date: " -->
- <xsl:variable name="text.noprefix"
- select="substring-after($text, '$Date: ')"/>
-
- <!-- Grab the year -->
- <xsl:variable name="year" select="substring-before($text.noprefix, '/')"/>
- <xsl:variable name="text.noyear"
- select="substring-after($text.noprefix, '/')"/>
-
- <!-- Grab the month -->
- <xsl:variable name="month" select="substring-before($text.noyear, '/')"/>
- <xsl:variable name="text.nomonth"
- select="substring-after($text.noyear, '/')"/>
-
- <!-- Grab the day -->
- <xsl:variable name="day" select="substring-before($text.nomonth, ' ')"/>
- <xsl:variable name="text.noday"
- select="substring-after($text.nomonth, ' ')"/>
-
- <!-- Get the time -->
- <xsl:variable name="time" select="substring-before($text.noday, ' ')"/>
-
- <xsl:variable name="month.name">
- <xsl:choose>
- <xsl:when test="$month=1">January</xsl:when>
- <xsl:when test="$month=2">February</xsl:when>
- <xsl:when test="$month=3">March</xsl:when>
- <xsl:when test="$month=4">April</xsl:when>
- <xsl:when test="$month=5">May</xsl:when>
- <xsl:when test="$month=6">June</xsl:when>
- <xsl:when test="$month=7">July</xsl:when>
- <xsl:when test="$month=8">August</xsl:when>
- <xsl:when test="$month=9">September</xsl:when>
- <xsl:when test="$month=10">October</xsl:when>
- <xsl:when test="$month=11">November</xsl:when>
- <xsl:when test="$month=12">December</xsl:when>
- </xsl:choose>
- </xsl:variable>
-
- <xsl:value-of select="concat($month.name, ' ', $day, ', ', $year, ' at ',
- $time, ' GMT')"/>
- </xsl:template>
-
-
- <xsl:template name="format.svn.revision">
- <xsl:param name="text"/>
-
- <!-- Remove the "$Date: " -->
- <xsl:variable name="text.noprefix"
- select="substring-after($text, '$Date: ')"/>
-
- <!-- Grab the year -->
- <xsl:variable name="year" select="substring-before($text.noprefix, '-')"/>
- <xsl:variable name="text.noyear"
- select="substring-after($text.noprefix, '-')"/>
-
- <!-- Grab the month -->
- <xsl:variable name="month" select="substring-before($text.noyear, '-')"/>
- <xsl:variable name="text.nomonth"
- select="substring-after($text.noyear, '-')"/>
-
- <!-- Grab the day -->
- <xsl:variable name="day" select="substring-before($text.nomonth, ' ')"/>
- <xsl:variable name="text.noday"
- select="substring-after($text.nomonth, ' ')"/>
-
- <!-- Get the time -->
- <xsl:variable name="time" select="substring-before($text.noday, ' ')"/>
- <xsl:variable name="text.notime"
- select="substring-after($text.noday, ' ')"/>
-
- <!-- Get the timezone -->
- <xsl:variable name="timezone" select="substring-before($text.notime, ' ')"/>
-
- <xsl:variable name="month.name">
- <xsl:choose>
- <xsl:when test="$month=1">January</xsl:when>
- <xsl:when test="$month=2">February</xsl:when>
- <xsl:when test="$month=3">March</xsl:when>
- <xsl:when test="$month=4">April</xsl:when>
- <xsl:when test="$month=5">May</xsl:when>
- <xsl:when test="$month=6">June</xsl:when>
- <xsl:when test="$month=7">July</xsl:when>
- <xsl:when test="$month=8">August</xsl:when>
- <xsl:when test="$month=9">September</xsl:when>
- <xsl:when test="$month=10">October</xsl:when>
- <xsl:when test="$month=11">November</xsl:when>
- <xsl:when test="$month=12">December</xsl:when>
- </xsl:choose>
- </xsl:variable>
-
- <xsl:value-of select="concat($month.name, ' ', $day, ', ', $year, ' at ',
- $time, ' ', $timezone)"/>
- </xsl:template>
-
- <!-- Footer Copyright -->
- <xsl:template match="copyright" mode="boost.footer">
- <xsl:if test="position() &gt; 1">
- <br/>
- </xsl:if>
- <xsl:call-template name="gentext">
- <xsl:with-param name="key" select="'Copyright'"/>
- </xsl:call-template>
- <xsl:call-template name="gentext.space"/>
- <xsl:call-template name="dingbat">
- <xsl:with-param name="dingbat">copyright</xsl:with-param>
- </xsl:call-template>
- <xsl:call-template name="gentext.space"/>
- <xsl:call-template name="copyright.years">
- <xsl:with-param name="years" select="year"/>
- <xsl:with-param name="print.ranges" select="$make.year.ranges"/>
- <xsl:with-param name="single.year.ranges"
- select="$make.single.year.ranges"/>
- </xsl:call-template>
- <xsl:call-template name="gentext.space"/>
- <xsl:apply-templates select="holder" mode="titlepage.mode"/>
- </xsl:template>
-
- <!-- Footer License -->
- <xsl:template match="legalnotice" mode="boost.footer">
- <xsl:apply-templates select="para" mode="titlepage.mode" />
- </xsl:template>
-
- <xsl:template name="user.footer.content">
- <table width="100%">
- <tr>
- <td align="left">
- <xsl:variable name="revision-nodes"
- select="ancestor-or-self::*
- [not (attribute::rev:last-revision='')]"/>
- <xsl:if test="count($revision-nodes) &gt; 0">
- <xsl:variable name="revision-node"
- select="$revision-nodes[last()]"/>
- <xsl:variable name="revision-text">
- <xsl:value-of
- select="normalize-space($revision-node/attribute::rev:last-revision)"/>
- </xsl:variable>
- <xsl:if test="string-length($revision-text) &gt; 0">
- <p>
- <small>
- <xsl:text>Last revised: </xsl:text>
- <xsl:choose>
- <xsl:when test="contains($revision-text, '/')">
- <xsl:call-template name="format.cvs.revision">
- <xsl:with-param name="text" select="$revision-text"/>
- </xsl:call-template>
- </xsl:when>
- <xsl:otherwise>
- <xsl:call-template name="format.svn.revision">
- <xsl:with-param name="text" select="$revision-text"/>
- </xsl:call-template>
- </xsl:otherwise>
- </xsl:choose>
- </small>
- </p>
- </xsl:if>
- </xsl:if>
- </td>
- <td align="right">
- <div class = "copyright-footer">
- <xsl:apply-templates select="ancestor::*/*/copyright"
- mode="boost.footer"/>
- <xsl:apply-templates select="ancestor::*/*/legalnotice"
- mode="boost.footer"/>
- </div>
- </td>
- </tr>
- </table>
- </xsl:template>
-
- <!-- We don't want refentry's to show up in the TOC because they
- will merely be redundant with the synopsis. -->
- <xsl:template match="refentry" mode="toc"/>
-
- <!-- override the behaviour of some DocBook elements for better
- rendering facilities -->
-
- <xsl:template match = "programlisting[ancestor::informaltable]">
- <pre class = "table-{name(.)}"><xsl:apply-templates/></pre>
- </xsl:template>
-
- <xsl:template match = "refsynopsisdiv">
- <h2 class = "{name(.)}-title">Synopsis</h2>
- <div class = "{name(.)}">
- <xsl:apply-templates/>
- </div>
- </xsl:template>
-
- <xsl:template name="generate.html.title"/>
-
-<!-- ============================================================ -->
-
-<xsl:template name="output.html.stylesheets">
- <xsl:param name="stylesheets" select="''"/>
-
- <xsl:choose>
- <xsl:when test="contains($stylesheets, ' ')">
- <link rel="stylesheet">
- <xsl:attribute name="href">
- <xsl:call-template name="href.target.relative">
- <xsl:with-param name="target" select="substring-before($stylesheets, ' ')"/>
- </xsl:call-template>
- </xsl:attribute>
- <xsl:if test="$html.stylesheet.type != ''">
- <xsl:attribute name="type">
- <xsl:value-of select="$html.stylesheet.type"/>
- </xsl:attribute>
- </xsl:if>
- </link>
- <xsl:call-template name="output.html.stylesheets">
- <xsl:with-param name="stylesheets" select="substring-after($stylesheets, ' ')"/>
- </xsl:call-template>
- </xsl:when>
- <xsl:when test="$stylesheets != ''">
- <link rel="stylesheet">
- <xsl:attribute name="href">
- <xsl:call-template name="href.target.relative">
- <xsl:with-param name="target" select="$stylesheets"/>
- </xsl:call-template>
- </xsl:attribute>
- <xsl:if test="$html.stylesheet.type != ''">
- <xsl:attribute name="type">
- <xsl:value-of select="$html.stylesheet.type"/>
- </xsl:attribute>
- </xsl:if>
- </link>
- </xsl:when>
- </xsl:choose>
-</xsl:template>
+ <xsl:include href="html-base.xsl"/>
 
 </xsl:stylesheet>


Boost-Commit list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk