
Boost-Commit :
Subject: [Boost-commit] svn:boost r82596 - in trunk/libs/math/doc/sf_and_dist: . distributions
From: pbristow_at_[hidden]
Date: 2013-01-24 09:12:39
Author: pbristow
Date: 2013-01-24 09:12:37 EST (Thu, 24 Jan 2013)
New Revision: 82596
URL: http://svn.boost.org/trac/boost/changeset/82596
Log:
Numerous small edits to add references to multiprecision and constexpr etc constants. Roadmap updated to 1.54.
TODO: Bessel zeros.
Text files modified:
trunk/libs/math/doc/sf_and_dist/common_overviews.qbk  98 ++++++
trunk/libs/math/doc/sf_and_dist/concepts.qbk  315 ++++++++++++++++++++++++++++
trunk/libs/math/doc/sf_and_dist/constants.qbk  74 +++++
trunk/libs/math/doc/sf_and_dist/credits.qbk  14 +
trunk/libs/math/doc/sf_and_dist/distributions/distribution_construction.qbk  16 +
trunk/libs/math/doc/sf_and_dist/distributions/negative_binomial.qbk  92 +++++
trunk/libs/math/doc/sf_and_dist/distributions/rayleigh.qbk  36 ++
trunk/libs/math/doc/sf_and_dist/error.qbk  30 ++
trunk/libs/math/doc/sf_and_dist/faq.qbk  46 +++
trunk/libs/math/doc/sf_and_dist/implementation.qbk  45 ++
trunk/libs/math/doc/sf_and_dist/math.qbk  11 +
trunk/libs/math/doc/sf_and_dist/minimax.qbk  36 ++
trunk/libs/math/doc/sf_and_dist/performance.qbk  99 ++++++
trunk/libs/math/doc/sf_and_dist/result_type_calc.qbk  36 ++
trunk/libs/math/doc/sf_and_dist/roadmap.qbk  7
trunk/libs/math/doc/sf_and_dist/roots.qbk  58 +++
16 files changed, 598 insertions(+), 415 deletions(-)
Modified: trunk/libs/math/doc/sf_and_dist/common_overviews.qbk
==============================================================================
--- trunk/libs/math/doc/sf_and_dist/common_overviews.qbk (original)
+++ trunk/libs/math/doc/sf_and_dist/common_overviews.qbk 2013-01-24 09:12:37 EST (Thu, 24 Jan 2013)
@@ -4,16 +4,16 @@
Policies are a powerful fine-grain mechanism that allow you to customise the
behaviour of this library according to your needs. There is more information
-available in the [link math_toolkit.policy.pol_tutorial policy tutorial]
+available in the [link math_toolkit.policy.pol_tutorial policy tutorial]
and the [link math_toolkit.policy.pol_ref policy reference].
-Generally speaking, unless you find that the
+Generally speaking, unless you find that the
[link math_toolkit.policy.pol_tutorial.policy_tut_defaults
default policy behaviour]
when encountering 'bad' argument values does not meet your needs,
you should not need to worry about policies.
-Policies are a compile-time mechanism that allow you to change
+Policies are a compile-time mechanism that allow you to change
error-handling or calculation precision either
program wide, or at the call site.
@@ -33,19 +33,19 @@
* How many iterations a special function is permitted to perform in
a series evaluation or root finding algorithm before it gives up and raises an
__evaluation_error.
-
+
You can control policies:
-* Using [link math_toolkit.policy.pol_ref.policy_defaults macros] to
+* Using [link math_toolkit.policy.pol_ref.policy_defaults macros] to
change any default policy: this is the preferred method for installation
wide policies.
-* At your chosen [link math_toolkit.policy.pol_ref.namespace_pol
-namespace scope] for distributions and/or functions: this is the
+* At your chosen [link math_toolkit.policy.pol_ref.namespace_pol
+namespace scope] for distributions and/or functions: this is the
preferred method for project, namespace, or translation unit scope
policies.
-* In an ad-hoc manner [link math_toolkit.policy.pol_tutorial.ad_hoc_sf_policies
-by passing a specific policy to a special function], or to a
-[link math_toolkit.policy.pol_tutorial.ad_hoc_dist_policies
+* In an ad-hoc manner [link math_toolkit.policy.pol_tutorial.ad_hoc_sf_policies
+by passing a specific policy to a special function], or to a
+[link math_toolkit.policy.pol_tutorial.ad_hoc_dist_policies
statistical distribution].
]
@@ -63,7 +63,7 @@
numeric libraries are implemented in C or FORTRAN. Traditionally
languages such as C or FORTRAN are perceived as easier to optimise
than more complex languages like C++, so in a sense this library
-provides a good test of current compiler technology, and the
+provides a good test of current compiler technology, and the
"abstraction penalty" - if any - of C++ compared to other languages.
The two most important things you can do to ensure the best performance
@@ -72,10 +72,10 @@
# Turn on your compiler's optimisations: the difference between "release"
and "debug" builds can easily be a [link math_toolkit.perf.getting_best factor of 20].
# Pick your compiler carefully: [link math_toolkit.perf.comp_compilers
-performance differences of up to
+performance differences of up to
8-fold] have been found between some Windows compilers for example.
-The [link math_toolkit.perf performance section] contains more
+The [link math_toolkit.perf performance section] contains more
information on the performance
of this library, what you can do to fine tune it, and how this library
compares to some other open source alternatives.
@@ -84,111 +84,113 @@
[template compilers_overview[]
-This section contains some information about how various compilers
+This section contains some information about how various compilers
work with this library.
It is not comprehensive and updated experiences are always welcome.
-Some effort has been made to suppress unhelpful warnings but it is
+Some effort has been made to suppress unhelpful warnings but it is
difficult to achieve this on all systems.
[table Supported/Tested Compilers
[[Platform][Compiler][Has long double support][Notes]]
[[Windows][MSVC 7.1 and later][Yes]
[All tests OK.
-
+
We aim to keep our headers warning free at level 4 with
this compiler.]]
[[Windows][Intel 8.1 and later][Yes]
[All tests OK.
-
+
We aim to keep our headers warning free at level 4 with
this compiler. However, the test cases tend to generate a lot of
- warnings relating to numeric underflow of the test data: these are
+ warnings relating to numeric underflow of the test data: these are
harmless.]]
[[Windows][GNU Mingw32 C++][Yes]
[All tests OK.
-
+
We aim to keep our headers warning free with -Wall with this compiler.]]
[[Windows][GNU Cygwin C++][No]
[All tests OK.
-
+
We aim to keep our headers warning free with -Wall with this compiler.
-
+
Long double support has been disabled because there are no native
long double C std library functions available.]]
[[Windows][Borland C++ 5.8.2 (Developer studio 2006)][No]
- [We have only partial compatability with this compiler:
-
+ [We have only partial compatibility with this compiler:
+
Long double support has been disabled because the native
long double C standard library functions really only forward to the
double versions. This can result in unpredictable behaviour when
- using the long double overloads: for example `sqrtl` applied to a
+ using the long double overloads: for example `sqrtl` applied to a
finite value, can result in an infinite result.
-
+
Some functions still fail to compile, there are no known workarounds at present.]]
-
+[[Windows 7/Netbeans 7.2][Clang 3.1][Yes][Spot examples OK. Expect all tests to compile and run OK.]]
+
[[Linux][GNU C++ 3.4 and later][Yes]
[All tests OK.
-
+
We aim to keep our headers warning free with -Wall with this compiler.]]
+[[Linux][Clang 3.2][Yes][All tests OK.]]
[[Linux][Intel C++ 10.0 and later][Yes]
[All tests OK.
-
+
We aim to keep our headers warning free with -Wall with this compiler.
However, the test cases tend to generate a lot of
- warnings relating to numeric underflow of the test data: these are
+ warnings relating to numeric underflow of the test data: these are
harmless.]]
[[Linux][Intel C++ 8.1 and 9.1][No]
[All tests OK.
-
+
Long double support has been disabled with these compiler releases
because calling the standard library long double math functions
can result in a segfault. The issue is Linux distribution and
glibc version specific and is Intel bug report #409291. Fully up to date
releases of Intel 9.1 (post version l_cc_c_9.1.046)
- shouldn't have this problem. If you need long
+ shouldn't have this problem. If you need long
double support with this compiler, then comment out the define of
- BOOST_MATH_NO_LONG_DOUBLE_MATH_FUNCTIONS at line 55 of
+ BOOST_MATH_NO_LONG_DOUBLE_MATH_FUNCTIONS at line 55 of
[@../../../../../boost/math/tools/config.hpp boost/math/tools/config.hpp].
-
+
We aim to keep our headers warning free with -Wall with this compiler.
However, the test cases tend to generate a lot of
- warnings relating to numeric underflow of the test data: these are
+ warnings relating to numeric underflow of the test data: these are
harmless.]]
[[Linux][QLogic PathScale 3.0][Yes]
[Some tests involving conceptual checks fail to build, otherwise
there appear to be no issues.]]
[[Linux][Sun Studio 12][Yes]
- [Some tests involving function overload resolution fail to build,
+ [Some tests involving function overload resolution fail to build,
these issues should be rarely encountered in practice.]]
[[Solaris][Sun Studio 12][Yes]
- [Some tests involving function overload resolution fail to build,
+ [Some tests involving function overload resolution fail to build,
these issues should be rarely encountered in practice.]]
[[Solaris][GNU C++ 4.x][Yes]
[All tests OK.
-
+
We aim to keep our headers warning free with -Wall with this compiler.]]
[[HP Tru64][Compaq C++ 7.1][Yes]
[All tests OK.]]
[[HP-UX Itanium][HP aCC 6.x][Yes]
[All tests OK.
-
- Unfortunately this compiler emits quite a few warnings from libraries
+
+ Unfortunately this compiler emits quite a few warnings from libraries
upon which we depend (TR1, Array etc).]]
[[HP-UX PA-RISC][GNU C++ 3.4][No]
[All tests OK.]]
[[Apple Mac OS X, Intel][Darwin/GNU C++ 4.x][Yes][All tests OK.]]
[[Apple Mac OS X, PowerPC][Darwin/GNU C++ 4.x][No]
[All tests OK.
-
+
Long double support has been disabled on this platform due to the
rather strange nature of Darwin's 106-bit long double
implementation. It should be possible to make this work if someone
is prepared to offer assistance.]]
-[[IMB AIX][IBM xlc 5.3][Yes]
- [All tests pass except for our fpclassify tests which fail due to a
+[[Apple Mac OS X,][Clang 3.2][Yes][All tests expected to be OK.]]
+[[IBM AIX][IBM xlc 5.3][Yes]
+ [All tests pass except for our fpclassify tests which fail due to a
bug in `std::numeric_limits`, the bug affects the test code, not
- fpclassify itself. The IBM compiler group
- are aware of the problem.]]
+ fpclassify itself. The IBM compiler group are aware of the problem.]]
]
[table Unsupported Compilers
@@ -202,20 +204,20 @@
bjam mytoolset
-where "mytoolset" is the name of the
+where "mytoolset" is the name of the
[@../../../../../tools/build/index.html Boost.Build] toolset used for your
compiler. The chances are that [*many of the accuracy tests will fail
at this stage] - don't panic - the default acceptable error tolerances
are quite tight, especially for long double types with an extended
exponent range (these cause more extreme test cases to be executed
-for some functions).
+for some functions).
You will need to cast an eye over the output from
-the failing tests and make a judgement as to whether
+the failing tests and make a judgement as to whether
the error rates are acceptable or not.
]
[/ math.qbk
- Copyright 2007 John Maddock and Paul A. Bristow.
+ Copyright 2007, 2012 John Maddock and Paul A. Bristow.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
Modified: trunk/libs/math/doc/sf_and_dist/concepts.qbk
==============================================================================
--- trunk/libs/math/doc/sf_and_dist/concepts.qbk (original)
+++ trunk/libs/math/doc/sf_and_dist/concepts.qbk 2013-01-24 09:12:37 EST (Thu, 24 Jan 2013)
@@ -1,39 +1,136 @@
-[section:use_ntl Using With NTL - a High-Precision Floating-Point Library]
+[section:high_precision Using Boost.Math with High-Precision Floating-Point Libraries]
-The special functions and tools in this library can be used with
-[@http://shoup.net/ntl/doc/RR.txt NTL::RR (an arbitrary precision number type)],
-via the bindings in [@../../../../../boost/math/bindings/rr.hpp boost/math/bindings/rr.hpp].
-[@http://shoup.net/ntl/ See also NTL: A Library for doing Number Theory by
-Victor Shoup]
+The special functions, distributions, constants and tools in this library
+can be used with a number of high-precision libraries, including:
-Unfortunately `NTL::RR` doesn't quite satisfy our conceptual requirements,
-so there is a very thin wrapper class `boost::math::ntl::RR` defined in
-[@../../../../../boost/math/bindings/rr.hpp boost/math/bindings/rr.hpp] that you
-should use in place of `NTL::RR`. The class is intended to be a drop-in
-replacement for the "real" NTL::RR that adds some syntactic sugar to keep
-this library happy, plus some of the standard library functions not implemented
-in NTL.
+* __multiprecision
+* e_float
+* __NTL
+* __GMP
+* __MPFR
-For those functions that are based upon the __lanczos, the bindings
-defines a series of approximations with up to 61 terms and accuracy
-up to approximately 3e-113. This therefore sets the upper limit for accuracy
-to the majority of functions defined this library when used with `NTL::RR`.
+The last four have some license restrictions;
+only __multiprecision when using the `cpp_float` backend
+can provide an unrestricted [@http://www.boost.org/LICENSE_1_0.txt Boost] license.
-There is a concept checking test program for NTL support
-[@../../../../../libs/math/test/ntl_concept_check.cpp here].
+At present, the price of a free license is slightly lower speed.
+
+Of course, the main cost of higher precision is very much decreased
+(usually at least hundredfold) computation speed, and big increases in memory use.
+
+Some libraries offer true
+[@http://en.wikipedia.org/wiki/Arbitrary-precision_arithmetic arbitrary precision arithmetic]
+where the precision is limited only by available memory and compute time, but most are used
+at some arbitrarily-fixed precision, say 100 decimal digits.
+
+__multiprecision can operate in both ways, but the most popular choice is likely to be about a hundred
+decimal digits, though examples of computing tens of thousands of digits have been demonstrated.
+
+[section:why_high_precision Why use a high-precision library rather than built-in floating-point types?]
+
+For nearly all applications, the built-in floating-point types, `double`
+(and `long double` if this offers higher precision) offer enough precision,
+typically a dozen decimal digits.
+
+Some reasons why one would want to use a higher precision:
+
+* A much more precise result (many more digits) is just a requirement.
+* The range of the computed value exceeds the range of the type: factorials are the textbook example.
+* Using double is (or may be) too inaccurate.
+* Using long double is (or may be) too inaccurate.
+* Using an extended-precision type implemented in software as
+[@http://en.wikipedia.org/wiki/Double-double_(arithmetic)#Double-double_arithmetic double-double]
+([@http://en.wikipedia.org/wiki/Darwin_(operating_system) Darwin]) is sometimes unpredictably inaccurate.
+* Loss of precision or inaccuracy caused by extreme arguments or cancellation error.
+* An accuracy as good as possible for a chosen built-in floating-point type is required.
+* As a reference value, for example, to determine the inaccuracy
+of a value computed with a built-in floating-point type,
+(perhaps even using some quick'n'dirty algorithm).
+The accuracy of many functions and distributions in Boost.Math has been measured in this way
+from tables of very high precision (up to 1000 decimal digits).
+
+Many functions and distributions have differences from exact values
+that are only a few least significant bits - computation noise.
+Others, often those for which analytical solutions are not available,
+require approximations and iteration:
+these may lose several decimal digits of precision.
+
+Much larger loss of precision can occur for [@http://en.wikipedia.org/wiki/Boundary_case boundary]
+or [@http://en.wikipedia.org/wiki/Corner_case corner cases],
+often caused by [@http://en.wikipedia.org/wiki/Loss_of_significance cancellation errors].
+
+(Some of the worst and most common examples of
+[@http://en.wikipedia.org/wiki/Loss_of_significance cancellation error or loss of significance]
+can be avoided by using __complements: see __why_complements.)
+
+If you require a value which is as accurate as can be represented in the floating-point type,
+and is thus the closest representable value and has an error less than 1/2 a
+[@http://en.wikipedia.org/wiki/Least_significant_bit least significant bit] or
+[@http://en.wikipedia.org/wiki/Unit_in_the_last_place ulp],
+it may be useful to use a higher-precision type,
+for example, `cpp_dec_float_50`, to generate this value.
+Conversion of this value to a built-in floating-point type (`float`, `double` or `long double`)
+will not cause any further loss of precision.
+A decimal digit string will also be 'read' precisely by the compiler
+into a built-in floating-point type to the nearest representable value.
+
+[note In contrast, reading a value from an `std::istream` into a built-in floating-point type
+is [*not guaranteed] by the C++ Standard to give the nearest representable value.]
+
+William Kahan coined the term
+[@http://en.wikipedia.org/wiki/Rounding#The_table-maker.27s_dilemma Table-Maker's Dilemma]
+for the problem of correctly rounding functions.
+Using a much higher precision (50 or 100 decimal digits)
+is a practical way of generating (almost always) correctly rounded values.
+
+[endsect] [/section:why_high_precision Why use a high-precision library rather than built-in floating-point types?]
+
+[section:use_multiprecision Using Boost.Multiprecision]
+
+[*All new projects are recommended to use __multiprecision.]
+
+[import ../../example/big_seventh.cpp]
+
+[big_seventh_example_1]
+
+[import ../../example/fft_sines_table.cpp]
+
+[fft_sines_table_example_1]
+
+The table output is:
+
+[fft_sines_table_example_output]
+
+[fft_sines_table_example_check]
+
+
+[import ../../example/ibeta_mp_example.cpp]
+
+[ibeta_mp_example_1]
+
+The program output is:
-[endsect][/section:use_ntl Using With NTL - a High Precision Floating-Point Library]
+[ibeta_mp_output_1]
-[section:use_mpfr Using With MPFR / GMP - a High-Precision Floating-Point Library]
+
+
+
+[endsect] [/section:use_multiprecision Using Boost.Multiprecision]
+
+
+[section:use_mpfr Using With MPFR or GMP - High-Precision Floating-Point Library]
The special functions and tools in this library can be used with
-[@http://www.mpfr.org MPFR (an arbitrary precision number type based on the GMP library)],
+[@http://www.mpfr.org MPFR] (an arbitrary precision number type based on the __GMP),
either via the bindings in [@../../../../../boost/math/bindings/mpfr.hpp boost/math/bindings/mpfr.hpp],
or via [@../../../../../boost/math/bindings/mpfr.hpp boost/math/bindings/mpreal.hpp].
-In order to use these binings you will need to have installed [@http://www.mpfr.org MPFR]
+[*New projects are recommended to use __multiprecision with GMP/MPFR backend instead.]
+
+In order to use these bindings you will need to have installed [@http://www.mpfr.org MPFR]
plus its dependency the [@http://gmplib.org GMP library]. You will also need one of the
-two supported C++ wrappers for MPFR: [@http://math.berkeley.edu/~wilken/code/gmpfrxx/ gmpfrxx (or mpfr_class)],
+two supported C++ wrappers for MPFR:
+[@http://math.berkeley.edu/~wilken/code/gmpfrxx/ gmpfrxx (or mpfr_class)],
or [@http://www.holoborodko.com/pavel/mpfr/ mpfr C++ (mpreal)].
Unfortunately neither `mpfr_class` nor `mpreal` quite satisfy our conceptual requirements,
@@ -42,8 +139,8 @@
[@../../../../../boost/math/bindings/mpreal.hpp boost/math/bindings/mpreal.hpp]
that you
should use in place of including 'gmpfrxx.h' or 'mpreal.h' directly. The classes
-`mpfr_class` or `mpreal` are
-then usable unchanged once this header is included, so for example `mpfr_class`'s
+`mpfr_class` or `mpreal` are
+then usable unchanged once this header is included, so for example `mpfr_class`'s
performance-enhancing
expression templates are preserved and fully supported by this library:
@@ -54,15 +151,15 @@
{
mpfr_class::set_dprec(500); // 500 bit precision
//
- // Note that the argument to tgamma is an expression template,
- // that's just fine here:
+ // Note that the argument to tgamma is
+ // an expression template - that's just fine here.
//
mpfr_class v = boost::math::tgamma(sqrt(mpfr_class(2)));
std::cout << std::setprecision(50) << v << std::endl;
}
Alternatively, usage with `mpreal` would look like:
-
+
#include <boost/math/bindings/mpreal.hpp>
#include <boost/math/special_functions/gamma.hpp>
@@ -78,39 +175,81 @@
up to approximately 3e-113. This therefore sets the upper limit for accuracy
to the majority of functions defined in this library when used with either `mpfr_class` or `mpreal`.
-There is a concept checking test program for mpfr support
+There is a concept checking test program for mpfr support
[@../../../../../libs/math/test/mpfr_concept_check.cpp here] and
[@../../../../../libs/math/test/mpreal_concept_check.cpp here].
-[endsect][/section:use_mpfr Using With MPFR / GMP - a High-Precision Floating-Point Library]
+[endsect] [/section:use_mpfr Using With MPFR / GMP - a High-Precision Floating-Point Library]
+
+[section:e_float Using e_float Library]
-[section:e_float e_float Support]
+__multiprecision was a development from the e_float library [@http://calgo.acm.org/910.zip e_float (TOMS Algorithm 910)]
+by Christopher Kormanyos.
-This library can be used with [@http://calgo.acm.org/910.zip e_float (TOMS Algorithm 910)] via the header:
+e_float can still be used with the Boost.Math library via the header:
<boost/math/bindings/e_float.hpp>
-And the type `boost::math::ef::e_float`: this type is a thin wrapper class around ::e_float which provides the necessary
+And the type `boost::math::ef::e_float`:
+this type is a thin wrapper class around ::e_float which provides the necessary
syntactic sugar to make everything "just work".
There is also a concept checking test program for e_float support
[@../../../../../libs/math/test/e_float_concept_check.cpp here].
+[*New projects are recommended to use __multiprecision with `cpp_float` backend instead.]
+
+[endsect] [/section:e_float Using e_float Library]
+
+[section:use_ntl Using NTL Library]
+
+[@http://shoup.net/ntl/doc/RR.txt NTL::RR]
+(an arbitrarily-fixed precision floating-point number type),
+can be used via the bindings in
+[@../../../../../boost/math/bindings/rr.hpp boost/math/bindings/rr.hpp].
+For details, see [@http://shoup.net/ntl/ NTL: A Library for doing Number Theory by
+Victor Shoup].
+
+[*New projects are recommended to use __multiprecision instead.]
+
+Unfortunately `NTL::RR` doesn't quite satisfy our conceptual requirements,
+so there is a very thin wrapper class `boost::math::ntl::RR` defined in
+[@../../../../../boost/math/bindings/rr.hpp boost/math/bindings/rr.hpp] that you
+should use in place of `NTL::RR`. The class is intended to be a drop-in
+replacement for the "real" NTL::RR that adds some syntactic sugar to keep
+this library happy, plus some of the standard library functions not implemented
+in NTL.
+
+For those functions that are based upon the __lanczos, the bindings
+define a series of approximations with up to 61 terms and accuracy
+up to approximately 3e-113. This therefore sets the upper limit for accuracy
+to the majority of functions defined in this library when used with `NTL::RR`.
+
+There is a concept checking test program for NTL support
+[@../../../../../libs/math/test/ntl_concept_check.cpp here].
+
+
+[endsect] [/section:use_ntl Using With NTL - a High-Precision Floating-Point Library]
+
+[endsect] [/section:high_precision Using With High-Precision Floating-Point Libraries]
[endsect]
[section:concepts Conceptual Requirements for Real Number Types]
The functions and statistical distributions in this library can be used with
any type /RealType/ that meets the conceptual requirements given below. All
-the built in floating point types will meet these requirements.
-User defined types that meet the requirements can also be used. For example,
-with [link math_toolkit.using_udt.use_ntl a thin wrapper class] one of the types
-provided with [@http://shoup.net/ntl/ NTL (RR)] can be used. Submissions
-of binding to other extended precision types would also be most welcome!
+the built-in floating-point types will meet these requirements.
+User-defined types that meet the requirements can also be used.
-The guiding principal behind these requirements, is that a /RealType/
-behaves just like a built in floating point type.
+For example, with [link math_toolkit.using_udt.high_precision.use_ntl a thin wrapper class]
+one of the types provided with [@http://shoup.net/ntl/ NTL (RR)] can be used.
+But now that the __multiprecision library is available,
+this has become the reference real number type.
+
+Submissions of bindings to other extended precision types would also still be welcome.
+
+The guiding principle behind these requirements is that a /RealType/
+behaves just like a built-in floating-point type.
[h4 Basic Arithmetic Requirements]
@@ -185,11 +324,11 @@
Note that:
-# The functions `log_max_value` and `log_min_value` can be
+# The functions `log_max_value` and `log_min_value` can be
synthesised from the others, and so no explicit specialisation is required.
# The function `epsilon` can be synthesised from the others, so no
explicit specialisation is required provided the precision
-of RealType does not vary at runtime (see the header
+of RealType does not vary at runtime (see the header
[@../../../../../boost/math/bindings/rr.hpp boost/math/bindings/rr.hpp]
for an example where the precision does vary at runtime).
# The functions `digits`, `max_value` and `min_value`, all get synthesised
@@ -197,7 +336,7 @@
is not specialised for type RealType, then you will get a compiler error
when code tries to use these functions, /unless/ you explicitly specialise them.
For example if the precision of RealType varies at runtime, then
-`numeric_limits` support may not be appropriate, see
+`numeric_limits` support may not be appropriate, see
[@../../../../../boost/math/bindings/rr.hpp boost/math/bindings/rr.hpp] for examples.
[warning
@@ -214,7 +353,7 @@
Although it might seem obvious that RealType should require `std::numeric_limits`
to be specialized, this is not sensible for
-`NTL::RR` and similar classes where the number of digits is a runtime
+`NTL::RR` and similar classes where the number of digits is a runtime
parameter (whereas for `numeric_limits` it has to be fixed at compile time).
]
@@ -227,7 +366,7 @@
doubt whether a user-defined type has enough standard library
support to be usable, the best advice is to try it and see!
-In the following table /r/ is an object of type `RealType`,
+In the following table /r/ is an object of type `RealType`,
/cr1/ and /cr2/ are objects of type `const RealType`, and
/i/ is an object of type `int`.
@@ -256,7 +395,7 @@
]
Note that the table above lists only those standard library functions known to
-be used (or likely to be used in the near future) by this library.
+be used (or likely to be used in the near future) by this library.
The following functions: `acos`, `atan2`, `fmod`, `cosh`, `sinh`, `tanh`, `log10`,
`lround`, `llround`, `ltrunc`, `lltrunc` and `modf`
are not currently used, but may be if further special functions are added.
@@ -264,26 +403,26 @@
Note that the `round`, `trunc` and `modf` functions are not part of the
current C++ standard: they are part of the additions added to C99 which will
likely be in the next C++ standard. There are Boost versions of these provided
-as a backup, and the functions are always called unqualified so that
+as a backup, and the functions are always called unqualified so that
argument-dependent-lookup can take place.
In addition, for efficient and accurate results, a __lanczos is highly desirable.
-You may be able to adapt an existing approximation from
+You may be able to adapt an existing approximation from
[@../../../../../boost/math/special_functions/lanczos.hpp
boost/math/special_functions/lanczos.hpp] or
-[@../../../../../boost/math/bindings/detail/big_lanczos.hpp
-boost/math/bindings/detail/big_lanczos.hpp]:
+[@../../../../../boost/math/bindings/detail/big_lanczos.hpp
+boost/math/bindings/detail/big_lanczos.hpp]:
in the former case you will need to change
-static_cast's to lexical_cast's, and the constants to /strings/
+static_cast's to lexical_cast's, and the constants to /strings/
(in order to ensure the coefficients aren't truncated to long double)
-and then specialise `lanczos_traits` for type T. Otherwise you may have to hack
-[@../../../tools/lanczos_generator.cpp
+and then specialise `lanczos_traits` for type T. Otherwise you may have to hack
+[@../../../tools/lanczos_generator.cpp
libs/math/tools/lanczos_generator.cpp] to find a suitable
approximation for your RealType. The code will still compile if you don't do
this, but both accuracy and efficiency will be greatly compromised in any
function that makes use of the gamma\/beta\/erf family of functions.
-[endsect]
+[endsect] [/section: ]
[section:dist_concept Conceptual Requirements for Distribution Types]
@@ -291,12 +430,12 @@
requirements, and encapsulates a statistical distribution.
Please note that this documentation should not be used as a substitute
-for the
-[link math_toolkit.dist.dist_ref reference documentation], and
+for the
+[link math_toolkit.dist.dist_ref reference documentation], and
[link math_toolkit.dist.stat_tut tutorial] of the statistical
distributions.
-In the following table, /d/ is an object of type `DistributionType`,
+In the following table, /d/ is an object of type `DistributionType`,
/cd/ is an object of type `const DistributionType` and /cr/ is an
object of a type convertible to `RealType`.
@@ -311,7 +450,7 @@
[[pdf(cd, cr)][RealType][Returns the PDF of the distribution.]]
[[cdf(cd, cr)][RealType][Returns the CDF of the distribution.]]
[[cdf(complement(cd, cr))][RealType]
- [Returns the complement of the CDF of the distribution,
+ [Returns the complement of the CDF of the distribution,
the same as: `1-cdf(cd, cr)`]]
[[quantile(cd, cr)][RealType][Returns the quantile (or percentile) of the distribution.]]
[[quantile(complement(cd, cr))][RealType]
@@ -355,7 +494,7 @@
The main purpose in providing this type is to verify
that standard library functions are found via a using declaration -
-bringing those functions into the current scope -
+bringing those functions into the current scope -
and not just because they happen to be in global scope.
@@ -366,28 +505,28 @@
too easy to forget the `using` declaration, and call the double version of
the function that happens to be in the global scope by mistake.
-For example if the code calls ::pow rather than std::pow,
+For example if the code calls ::pow rather than std::pow,
the code will cleanly compile, but truncation of long doubles to
double will cause a significant loss of precision.
In contrast a template instantiated with std_real_concept will *only*
-compile if the all the standard library functions used have
+compile if all the standard library functions used have
been brought into the current scope with a using declaration.
[h6 Testing the real concept]
-There is a test program
+There is a test program
[@../../../test/std_real_concept_check.cpp libs/math/test/std_real_concept_check.cpp]
that instantiates every template in this library with type
`std_real_concept` to verify its usage of standard library functions.
``#include <boost/math/concepts/real_concept.hpp>``
- namespace boost{
- namespace math{
+ namespace boost{
+ namespace math{
namespace concepts{
-
+
class real_concept;
-
+
}}} // namespaces
`real_concept` is an archetype for
@@ 403,7 +542,7 @@
NTL RR is an example of a type meeting the requirements that this type
models, but note that use of a thin wrapper class is required: refer to
[link math_toolkit.using_udt.use_ntl "Using With NTL  a HighPrecision FloatingPoint Library"].
+[link math_toolkit.using_udt.high_precision.use_ntl "Using With NTL - a High-Precision Floating-Point Library"].
There is no specific test case for type `real_concept`, instead, since this
type is usable at runtime, each individual test case as well as testing
@@ 419,47 +558,47 @@
namespace math{
namespace concepts
{
 template <class RealType>
 class distribution_archetype;
+ template <class RealType>
+ class distribution_archetype;
+
+ template <class Distribution>
+ struct DistributionConcept;
 template <class Distribution>
 struct DistributionConcept;

}}} // namespaces

+
The class template `distribution_archetype` is a model of the
[link math_toolkit.using_udt.dist_concept Distribution concept].
The class template `DistributionConcept` is a
[@../../../../../libs/concept_check/index.html concept checking class]
+The class template `DistributionConcept` is a
+[@../../../../../libs/concept_check/index.html concept checking class]
for distribution types.
[h6 Testing the distribution concept]
The test program
+The test program
[@../../../test/compile_test/distribution_concept_check.cpp distribution_concept_check.cpp]
is responsible for using `DistributionConcept` to verify that all the
distributions in this library conform to the
+distributions in this library conform to the
[link math_toolkit.using_udt.dist_concept Distribution concept].
The class template `DistributionConcept` verifies the existence
+The class template `DistributionConcept` verifies the existence
(but not proper function) of the nonmember accessors
required by the [link math_toolkit.using_udt.dist_concept Distribution concept].
These are checked by calls like
v = pdf(dist, x); // (Result v is ignored).
And in addition, those that accept two arguments do the right thing when the
arguments are of different types (the result type is always the same as the
distribution's value_type). (This is implemented by some additional
forwardingfunctions in derived_accessors.hpp, so that there is no need for
any code changes. Likewise boilerplate versions of the
hazard\/chf\/coefficient_of_variation functions are implemented in
+And in addition, those that accept two arguments do the right thing when the
+arguments are of different types (the result type is always the same as the
+distribution's value_type). (This is implemented by some additional
+forwardingfunctions in derived_accessors.hpp, so that there is no need for
+any code changes. Likewise boilerplate versions of the
+hazard\/chf\/coefficient_of_variation functions are implemented in
there too.)
[endsect] [/section:archetypes Conceptual Archetypes for Reals and Distributions]
[/
 Copyright 2006, 2010 John Maddock and Paul A. Bristow.
+[/
+ Copyright 2006, 2010, 2012 John Maddock and Paul A. Bristow.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
Modified: trunk/libs/math/doc/sf_and_dist/constants.qbk
==============================================================================
 trunk/libs/math/doc/sf_and_dist/constants.qbk (original)
+++ trunk/libs/math/doc/sf_and_dist/constants.qbk 20130124 09:12:37 EST (Thu, 24 Jan 2013)
@@ 12,20 +12,23 @@
* Effortless - avoiding a search of reference sources.
* Usable with both built-in floating-point types, and user-defined, possibly extended-precision, types such as
NTL, MPFR/GMP, mp_float: in the latter case the constants are computed to the necessary precision and then cached.
* Accurate  ensuring that the values are as accurate as possible for the
chosen floatingpoint type
+* Accurate - ensuring that the values are as accurate as possible for the
+chosen floating-point type.
* No loss of accuracy from repeated rounding of intermediate computations.
* Result is computed with higher precision and only rounded once.
* Less risk of inaccurate result from functions pow, trig and log at [@http://en.wikipedia.org/wiki/Corner_case corner cases].
 * Less risk of [@http://docs.oracle.com/cd/E1995701/8063568/ncg_goldberg.html cancellation error].
* Faster  can avoid (re)calculation at runtime. This can be significant if:
 * Functions pow, trig or log are used.
 * Inside an inner loop.
 * Using a highprecision UDT.
 * Compiler optimizations possible with builtin types, especially `double`, are not available.
+ * Less risk of [@http://docs.oracle.com/cd/E19957-01/806-3568/ncg_goldberg.html cancellation error].
* Portable - as possible between different systems using different floating-point precisions:
see [link math_toolkit.constants.tutorial.templ use in template code].
* Tested - by comparison with other published sources, or separately computed at long double precision.
+* Faster - can avoid (re)calculation at runtime.
+ * If the value returned is a builtin type then it's returned by value as a `constexpr` (C++11 feature, if available).
+ * If the value is computed and cached (or constructed from a string representation and cached), then it's returned by constant reference.[br]
+This can be significant if:
+ * Functions pow, trig or log are used.
+ * Inside an inner loop.
+ * Using a high-precision UDT like __multiprecision.
+ * Compiler optimizations possible with built-in types, especially `double`, are not available.
[endsect] [/section:intro Introduction]
@@ 66,7 +69,7 @@
Some examples of using constants are at [@../../../example/constants_eg1.cpp constants_eg1].
[endsect]

+
[section:templ Use in template code]
When using the constants inside a function template, we need to ensure that
@@ 137,7 +140,9 @@
[endsect] [/section:templ Use in template code]
[section:user_def Use With User Defined Types]
+[section:user_def Use With User-Defined Types]
+
+The most common example of a high-precision user-defined type will probably be __multiprecision.
The syntax for using the function-call constants with user-defined types is the same
as it is in the template class, which is to say we use:
@@ 146,8 +151,14 @@
boost::math::constants::pi<UserDefinedType>();
+For example:
+
+ boost::math::constants::pi<boost::multiprecision::cpp_dec_float_50>();
+
+giving [pi] with a precision of 50 decimal digits.
+
However, since the precision of the userdefined type may be much greater than that
of the builtin floating pointer types, how the value returned is created is as follows:
+of the built-in floating-point types, how the value returned is created is as follows:
* If the precision of the type is known at compile time:
* If the precision is less than or equal to that of a `float` and the type is constructible from a `float`
@@ 159,10 +170,10 @@
* If the precision is less than or equal to that of a `long double` and the type is constructible from a `long double`
then our code returns a `long double` literal. If the user-defined type is a literal type
then the function call that returns the constant will be a `constexpr`.
 * If the precision is less than 100 decimal digits, then the constant will be constructed
+ * If the precision is less than 100 decimal digits, then the constant will be constructed
(just the once, then cached in a threadsafe manner) from a string representation of the constant.
In this case the value is returned as a const reference to the cached value.
 * Otherwise the value is computed (just once, then cached in a threadsafe manner).
+ * Otherwise the value is computed (just once, then cached in a threadsafe manner).
In this case the value is returned as a const reference to the cached value.
* If the precision is unknown at compile time then:
* If the runtime precision (obtained from a call to `boost::math::tools::digits<T>()`) is
@@ 324,7 +335,7 @@
[[[*Euler's e and related]] [] [] [] ]
[[e] [e] [2.71828] [[@http://en.wikipedia.org/wiki/E_(mathematical_constant) Euler's constant e]] ]
[[exp_minus_half] [e [super -1/2]] [0.606530] [] ]
[[e_pow_pi] [e [super [pi]]] [23.14069] [] ]
+[[e_pow_pi] [e [super [pi]]] [23.14069] [] ]
[[root_e] [[radic] e] [1.64872] [] ]
[[log10_e] [log10(e)] [0.434294] [] ]
[[one_div_log10_e] [1/log10(e)] [2.30258] [] ]
@@ 393,7 +404,7 @@
}
}}}} // namespaces

+
Then define a placeholder for the constant itself:
namespace boost{ namespace math{ namespace constants{
@@ 406,7 +417,7 @@
For example, to calculate [pi]/2, add to `boost/math/constants/calculate_constants.hpp`
template <class T>
 template<int N>
+ template<int N>
inline T constant_half_pi<T>::compute(BOOST_MATH_EXPLICIT_TEMPLATE_TYPE_SPEC(mpl::int_<N>))
{
BOOST_MATH_STD_USING
@@ 418,12 +429,12 @@
BOOST_DEFINE_MATH_CONSTANT(half_pi, 0.0, "0"); // Actual values are temporary, we'll replace them later.
[note Previously defined constants like pi and e can be used, but by *not simply calling* `pi<T>()`;
specifying the precision via the policy
+specifying the precision via the policy
`pi<T, policies::policy<policies::digits2<N> > >()`
is essential to ensure full accuracy.]
[warning Newly defined constants can only be used once they are included in
`boost/math/constants/constants.hpp`. So if you add
+`boost/math/constants/constants.hpp`. So if you add
`template <class T, class N> T constant_my_constant{...}`,
then you cannot define `constant_my_constant`
until you add the temporary `BOOST_DEFINE_MATH_CONSTANT(my_constant, 0.0, "0")`.
@@ 488,7 +499,7 @@
* Expensive to compute.
* Requested by users.
* [@http://en.wikipedia.org/wiki/Mathematical_constant Used in science and mathematics.]
* No integer values (because so cheap to construct).[br]
+* No integer values (because so cheap to construct).[br]
(You can easily define your own if found convenient, for example: `FPT one = static_cast<FPT>(42);`).
[h4 How are constants named?]
@@ 497,7 +508,7 @@
* No CamelCase.
* Underscore as _ delimiter between words.
* Numbers spelt as words rather than decimal digits (except following pow).
* Abbreviation conventions:
+* Abbreviation conventions:
* root for square root.
* cbrt for cube root.
* pow for pow function using decimal digits like pow23 for n[super 2/3].
@@ 527,7 +538,7 @@
with at least 35 decimal digits, enough to be accurate for all long double implementations.
The tolerance is usually twice `long double epsilon`.
# Comparison with calculation at long double precision.
+# Comparison with calculation at long double precision.
This often requires a slightly higher tolerance than two epsilon
because of computational noise from roundoff etc,
especially when trig and other functions are called.
@@ 565,11 +576,11 @@
[h4 What is the Internal Format of the constants, and why?]
See [link math_toolkit.constants.tutorial tutorial] above for normal use,
but this FAQ explains the internal details used for the constants.
+but this FAQ explains the internal details used for the constants.
Constants are stored as 100 decimal digit values.
However, some compilers do not accept decimal digit strings as long as this.
So the constant is split into two parts, with the first containing at least
+So the constant is split into two parts, with the first containing at least
128-bit long double precision (35 decimal digits),
and for consistency should be in scientific format with a signed exponent.
@@ 605,7 +616,7 @@
This work is based on an earlier work called e_float:
Algorithm 910: A Portable C++ Multiple-Precision System for Special-Function Calculations,
in ACM TOMS, {VOL 37, ISSUE 4, (February 2011)} (C) ACM, 2011.
+in ACM TOMS, {VOL 37, ISSUE 4, (February 2011)} (C) ACM, 2011.
[@http://doi.acm.org/10.1145/1916461.1916469]
[@https://svn.boost.org/svn/boost/sandbox/e_float/ e_float]
but is now refactored and available under the Boost license in the Boostsandbox at
@@ 644,6 +655,8 @@
[note The precision of all `double-double` floating-point types is rather odd and values given are only approximate.]
+[*New projects should use __multiprecision.]
+
[h5 NTL class RR]
Arbitrary precision floating point with NTL class RR,
@@ 651,9 +664,11 @@
used here with 300-bit precision to output 100 decimal digits,
enough for many practical non'numbertheoretic' C++ applications.
NTL is [*not licenced for commercial use].
+__NTL is [*not licensed for commercial use].
+
+This class is used in Boost.Math and is an option when using big_number projects to calculate new math constants.
This class is used in Boost.Math and an option when using big_number projects to calculate new math constants.
+[*New projects should use __multiprecision.]
[h5 GMP and MPFR]
@@ 674,6 +689,7 @@
but combined with template struct and functions to allow simultaneous use
with other nonbuiltin floatingpoint types.
+
[h4 Why do the constants (internally) have a struct rather than a simple function?]
A function mechanism was provided in previous versions of Boost.Math.
@@ 690,7 +706,7 @@
D. E. Knuth, Art of Computer Programming, Appendix A, Table 1, Vol 1, ISBN 0 201 89683 4 (1997)
# M. Abramowitz & I. A. Stegun, National Bureau of Standards, Handbook of Mathematical Functions,
a reference source for formulae now superseded by
# Frank W. Olver, Daniel W. Lozier, Ronald F. Boisvert, Charles W. Clark, NIST Handbook of Mathemetical Functions, Cambridge University Press, ISBN 9780521140638, 2010.
+# Frank W. Olver, Daniel W. Lozier, Ronald F. Boisvert, Charles W. Clark, NIST Handbook of Mathematical Functions, Cambridge University Press, ISBN 978-0521140638, 2010.
# John F Hart, Computer Approximations, Kreiger (1978) ISBN 0 88275 642 7.
# Some values from Cephes Mathematical Library, Stephen L. Moshier
and CALC100 100 decimal digit Complex Variable Calculator Program, a DOS utility.
@@ 700,7 +716,7 @@
Not here in this Boost.Math collection, because physical constants:
* Are measurements.
+* Are measurements, not truly constants.
* Are not truly constant and keep changing as mensuration technology improves.
* Have an intrinsic uncertainty.
* Mathematical constants are stored and represented at varying precision, but should never be inaccurate.
@@ 711,7 +727,7 @@
[endsect] [/section:constants Mathematical Constants]
[/
+[/
Copyright 2012 John Maddock and Paul A. Bristow.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
Modified: trunk/libs/math/doc/sf_and_dist/credits.qbk
==============================================================================
 trunk/libs/math/doc/sf_and_dist/credits.qbk (original)
+++ trunk/libs/math/doc/sf_and_dist/credits.qbk 20130124 09:12:37 EST (Thu, 24 Jan 2013)
@@ 81,7 +81,7 @@
Thanks to Mark Coleman and Georgi Boshnakov for spot test values
from __Mathematica, and of course,
to Eric Weissten for nurturing __Mathworld, an invaluable resource.
+to Eric Weisstein for nurturing __Mathworld, an invaluable resource.
The Skewnormal distribution and Owen's t function were written by Benjamin Sobotta.
@@ 90,10 +90,20 @@
and contributing to some long discussions about how to improve accuracy
for large noncentrality and/or large degrees of freedom.
+Christopher Kormanyos wrote the e_float multiprecision library __TOMS910,
+which formed the basis for the Boost.Multiprecision library,
+which can now be used to allow most functions and distributions
+to be computed up to a precision of the user's choice,
+no longer restricted to built-in floating-point types like double.
+(And thanks to Topher Cooper for bringing Christopher's e_float to our attention.)
+
+Christopher Kormanyos wrote some examples for using __multiprecision,
+and added methods for finding zeros of Bessel Functions.
+
[endsect] [/section:credits Credits and Acknowledgements]
[/
 Copyright 2006, 2007, 2008, 2009, 2010, 2012 John Maddock and Paul A. Bristow.
+ Copyright 2006, 2007, 2008, 2009, 2010, 2012, 2013 John Maddock and Paul A. Bristow.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
Modified: trunk/libs/math/doc/sf_and_dist/distributions/distribution_construction.qbk
==============================================================================
 trunk/libs/math/doc/sf_and_dist/distributions/distribution_construction.qbk (original)
+++ trunk/libs/math/doc/sf_and_dist/distributions/distribution_construction.qbk 20130124 09:12:37 EST (Thu, 24 Jan 2013)
@@ 1,15 +1,15 @@
[section:dist_construct_eg Distribution Construction Example]

See [@../../../example/distribution_construction.cpp distribution_construction.cpp] for full source code.
+[section:dist_construct_eg Distribution Construction Examples]
[import ../../../example/distribution_construction.cpp]
[distribution_construction1]
[distribution_construction2]
+[distribution_construction_1]
+[distribution_construction_2]
+
+See [@../../../example/distribution_construction.cpp distribution_construction.cpp] for full source code.
[endsect] [/section:dist_construct_eg Distribution Construction Example]
+[endsect] [/section:dist_construct_eg Distribution Construction Examples]
[/
 Copyright 2006 John Maddock and Paul A. Bristow.
+[/
+ Copyright 2006, 2012 John Maddock and Paul A. Bristow.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
Modified: trunk/libs/math/doc/sf_and_dist/distributions/negative_binomial.qbk
==============================================================================
 trunk/libs/math/doc/sf_and_dist/distributions/negative_binomial.qbk (original)
+++ trunk/libs/math/doc/sf_and_dist/distributions/negative_binomial.qbk 20130124 09:12:37 EST (Thu, 24 Jan 2013)
@@ 2,14 +2,14 @@
``#include <boost/math/distributions/negative_binomial.hpp>``
 namespace boost{ namespace math{

 template <class RealType = double,
+ namespace boost{ namespace math{
+
+ template <class RealType = double,
class ``__Policy`` = ``__policy_class`` >
class negative_binomial_distribution;

+
typedef negative_binomial_distribution<> negative_binomial;

+
template <class RealType, class ``__Policy``>
class negative_binomial_distribution
{
@@ 18,21 +18,21 @@
typedef Policy policy_type;
// Constructor from successes and success_fraction:
negative_binomial_distribution(RealType r, RealType p);

+
// Parameter accessors:
RealType success_fraction() const;
RealType successes() const;

+
// Bounds on success fraction:
static RealType find_lower_bound_on_p(
 RealType trials,
+ RealType trials,
RealType successes,
RealType probability); // alpha
static RealType find_upper_bound_on_p(
 RealType trials,
+ RealType trials,
RealType successes,
RealType probability); // alpha

+
// Estimate min/max number of trials:
static RealType find_minimum_number_of_trials(
RealType k, // Number of failures.
@@ 43,9 +43,9 @@
RealType p, // Success fraction.
RealType probability); // Probability threshold alpha.
};

+
}} // namespaces

+
The class type `negative_binomial_distribution` represents a
[@http://en.wikipedia.org/wiki/Negative_binomial_distribution negative_binomial distribution]:
it is used when there are exactly two mutually exclusive outcomes of a
@@ 53,9 +53,9 @@
these outcomes are labelled "success" and "failure".
For k + r Bernoulli trials each with success fraction p, the
negative_binomial distribution gives the probability of observing
k failures and r successes with success on the last trial.
The negative_binomial distribution
+negative_binomial distribution gives the probability of observing
+k failures and r successes with success on the last trial.
+The negative_binomial distribution
assumes that success_fraction p is fixed for all (k + r) trials.
[note The random variable for the negative binomial distribution is the number of trials,
@@ 67,7 +67,7 @@
[equation neg_binomial_ref]
The following graph illustrate how the PDF varies as the success fraction
+The following graph illustrates how the PDF varies as the success fraction
/p/ changes:
[graph negative_binomial_pdf_1]
@@ 91,7 +91,7 @@
The negative binomial distribution is a generalization of the Pascal distribution,
where the success parameter r is an integer: to obtain the Pascal
distribution you must ensure that an integer value is provided for r,
and take integer values (floor or ceiling) from functions that return
+and take integer values (floor or ceiling) from functions that return
a number of successes.
For large values of r (successes), the negative binomial distribution
@@ 107,7 +107,7 @@
poisson([lambda]) = lim [sub r [rarr] [infin]] [space] negative_binomial(r, r / ([lambda] + r))
[discrete_quantile_warning Negative Binomial]

+
[h4 Member Functions]
[h5 Construct]
@@ 122,27 +122,27 @@
[h5 Accessors]
RealType success_fraction() const; // successes / trials (0 <= p <= 1)

+
Returns the parameter /p/ from which this distribution was constructed.

+
RealType successes() const; // required successes (r > 0)

+
Returns the parameter /r/ from which this distribution was constructed.
The best method of calculation for the following functions is disputed:
see __binomial_distrib for more discussion.
+see __binomial_distrib for more discussion.
[h5 Lower Bound on Parameter p]
static RealType find_lower_bound_on_p(
 RealType failures,
+ RealType failures,
RealType successes,
RealType probability) // (0 <= alpha <= 1), 0.05 equivalent to 95% confidence.

+
Returns a *lower bound* on the success fraction:
[variablelist
[[failures][The total number of failures before the r th success.]]
+[[failures][The total number of failures before the ['r]th success.]]
[[successes][The number of successes required.]]
[[alpha][The largest acceptable probability that the true value of
the success fraction is [*less than] the value returned.]]
@@ 150,16 +150,16 @@
For example, if you observe /k/ failures and /r/ successes from /n/ = k + r trials
the best estimate for the success fraction is simply ['r/n], but if you
want to be 95% sure that the true value is [*greater than] some value,
+want to be 95% sure that the true value is [*greater than] some value,
['p[sub min]], then:
p``[sub min]`` = negative_binomial_distribution<RealType>::find_lower_bound_on_p(
failures, successes, 0.05);
[link math_toolkit.dist.stat_tut.weg.neg_binom_eg.neg_binom_conf See negative binomial confidence interval example.]

+
This function uses the ClopperPearson method of computing the lower bound on the
success fraction, whilst many texts refer to this method as giving an "exact"
+success fraction, whilst many texts refer to this method as giving an "exact"
result, in practice it produces an interval that guarantees ['at least] the
coverage required, and may produce pessimistic estimates for some combinations
of /failures/ and /successes/. See:
@@ 171,10 +171,10 @@
[h5 Upper Bound on Parameter p]
static RealType find_upper_bound_on_p(
 RealType trials,
+ RealType trials,
RealType successes,
RealType alpha); // (0 <= alpha <= 1), 0.05 equivalent to 95% confidence.

+
Returns an *upper bound* on the success fraction:
[variablelist
@@ 186,7 +186,7 @@
For example, if you observe /k/ successes from /n/ trials the
best estimate for the success fraction is simply ['k/n], but if you
want to be 95% sure that the true value is [*less than] some value,
+want to be 95% sure that the true value is [*less than] some value,
['p[sub max]], then:
p``[sub max]`` = negative_binomial_distribution<RealType>::find_upper_bound_on_p(
@@ 195,7 +195,7 @@
[link math_toolkit.dist.stat_tut.weg.neg_binom_eg.neg_binom_conf See negative binomial confidence interval example.]
This function uses the ClopperPearson method of computing the lower bound on the
success fraction, whilst many texts refer to this method as giving an "exact"
+success fraction, whilst many texts refer to this method as giving an "exact"
result, in practice it produces an interval that guarantees ['at least] the
coverage required, and may produce pessimistic estimates for some combinations
of /failures/ and /successes/. See:
@@ 210,7 +210,7 @@
RealType k, // number of failures.
RealType p, // success fraction.
RealType alpha); // probability threshold (0.05 equivalent to 95%).

+
This function estimates the number of trials required to achieve a certain
probability that [*more than k failures will be observed].
@@ 221,12 +221,12 @@
]
For example:

+
negative_binomial_distribution<RealType>::find_minimum_number_of_trials(10, 0.5, 0.05);

+
Returns the smallest number of trials we must conduct to be 95% sure
of seeing 10 failures that occur with frequency one half.

+
[link math_toolkit.dist.stat_tut.weg.neg_binom_eg.neg_binom_size_eg Worked Example.]
This function uses numeric inversion of the negative binomial distribution
@@ 240,7 +240,7 @@
RealType k, // number of failures.
RealType p, // success fraction.
RealType alpha); // probability threshold (0.05 equivalent to 95%).

+
This function estimates the maximum number of trials we can conduct and achieve
a certain probability that [*k failures or fewer will be observed].
@@ 251,12 +251,12 @@
]
For example:

+
 negative_binomial_distribution<RealType>::find_maximum_number_of_trials(0, 1.0-1.0/1000000, 0.05);

+
Returns the largest number of trials we can conduct and still be 95% sure
of seeing no failures that occur with frequency one in one million.

+
This function uses numeric inversion of the negative binomial distribution
to obtain the result: another interpretation of the result, is that it finds
the number of trials (success+failures) that will lead to an /alpha/ probability
@@ 267,7 +267,7 @@
All the [link math_toolkit.dist.dist_ref.nmp usual nonmember accessor functions]
that are generic to all distributions are supported: __usual_accessors.
However it's worth taking a moment to define what these actually mean in
+However it's worth taking a moment to define what these actually mean in
the context of this distribution:
[table Meaning of the nonmember accessors.
@@ 285,14 +285,14 @@
[[__ccdf]
[The probability of obtaining [*more than k failures] from k+r trials
with success fraction p and success on the last trial. For example:

+
``cdf(complement(negative_binomial(r, p), k))``]]
[[__quantile]
[The [*greatest] number of failures k expected to be observed from k+r trials
with success fraction p, at probability P. Note that the value returned
is a real number, and not an integer. Depending on the use case you may
want to take either the floor or ceiling of the real result. For example:

+
``quantile(negative_binomial(r, p), P)``]]
[[__quantile_c]
[The [*smallest] number of failures k expected to be observed from k+r trials
@@ 304,8 +304,8 @@
[h4 Accuracy]
This distribution is implemented using the
incomplete beta functions __ibeta and __ibetac:
+This distribution is implemented using the
+incomplete beta functions __ibeta and __ibetac:
please refer to these functions for information on accuracy.
[h4 Implementation]
@@ 326,7 +326,7 @@
just a thin wrapper around part of the internals of the incomplete
beta function.
]]
[[cdf][Using the relation:
+[[cdf][Using the relation:
cdf = I[sub p](r, k+1) = ibeta(r, k+1, p)
Modified: trunk/libs/math/doc/sf_and_dist/distributions/rayleigh.qbk
==============================================================================
 trunk/libs/math/doc/sf_and_dist/distributions/rayleigh.qbk (original)
+++ trunk/libs/math/doc/sf_and_dist/distributions/rayleigh.qbk 20130124 09:12:37 EST (Thu, 24 Jan 2013)
@@ 3,14 +3,14 @@
``#include <boost/math/distributions/rayleigh.hpp>``
 namespace boost{ namespace math{

 template <class RealType = double,
+ namespace boost{ namespace math{
+
+ template <class RealType = double,
class ``__Policy`` = ``__policy_class`` >
class rayleigh_distribution;

+
typedef rayleigh_distribution<> rayleigh;

+
template <class RealType, class ``__Policy``>
class rayleigh_distribution
{
@@ 22,11 +22,11 @@
// Accessors:
RealType sigma()const;
};

+
}} // namespaces

+
The [@http://en.wikipedia.org/wiki/Rayleigh_distribution Rayleigh distribution]
is a continuous distribution with the
+is a continuous distribution with the
[@http://en.wikipedia.org/wiki/Probability_density_function probability density function]:
f(x; sigma) = x * exp(-x[super 2]/2 [sigma][super 2]) / [sigma][super 2]
@@ 54,22 +54,22 @@
The [@http://en.wikipedia.org/wiki/Chi_distribution Chi],
[@http://en.wikipedia.org/wiki/Rice_distribution Rice]
and [@http://en.wikipedia.org/wiki/Weibull_distribution Weibull] distributions are generalizations of the
[@http://en.wikipedia.org/wiki/Rayleigh_distribution Rayleigh distribution].
+[@http://en.wikipedia.org/wiki/Rayleigh_distribution Rayleigh distribution].
[h4 Member Functions]
rayleigh_distribution(RealType sigma = 1);

Constructs a [@http://en.wikipedia.org/wiki/Rayleigh_distribution
+
+Constructs a [@http://en.wikipedia.org/wiki/Rayleigh_distribution
Rayleigh distribution] with [sigma] /sigma/.
Requires that the [sigma] parameter is greater than zero,
+Requires that the [sigma] parameter is greater than zero,
otherwise calls __domain_error.
RealType sigma()const;

+
Returns the /sigma/ parameter of this distribution.

+
[h4 Nonmember Accessors]
All the [link math_toolkit.dist.dist_ref.nmp usual nonmember accessor functions] that are generic to all
@@ 79,14 +79,14 @@
[h4 Accuracy]
The Rayleigh distribution is implemented in terms of the
+The Rayleigh distribution is implemented in terms of the
standard library `sqrt` and `exp` and as such should have very low error rates.
Some constants such as skewness and kurtosis were calculated using
NTL RR type with 150-bit accuracy, about 50 decimal digits.
[h4 Implementation]
In the following table [sigma][space] is the sigma parameter of the distribution,
+In the following table [sigma][space] is the sigma parameter of the distribution,
/x/ is the random variate, /p/ is the probability and /q = 1p/.
[table
@@ 108,9 +108,9 @@
* [@http://en.wikipedia.org/wiki/Rayleigh_distribution ]
* [@http://mathworld.wolfram.com/RayleighDistribution.html Weisstein, Eric W. "Rayleigh Distribution." From MathWorld--A Wolfram Web Resource.]
[endsect][/section:Rayleigh Rayleigh]
+[endsect] [/section:Rayleigh Rayleigh]
[/
+[/
Copyright 2006 John Maddock and Paul A. Bristow.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
Modified: trunk/libs/math/doc/sf_and_dist/error.qbk
==============================================================================
 trunk/libs/math/doc/sf_and_dist/error.qbk (original)
+++ trunk/libs/math/doc/sf_and_dist/error.qbk 20130124 09:12:37 EST (Thu, 24 Jan 2013)
@@ 9,13 +9,13 @@
[equation error1]
which measures /relative difference/ and happens to be less error
+which measures /relative difference/ and happens to be less error
prone in use since we don't have to worry which value is the "true"
result, and which is the experimental one. It guarantees to return a value
at least as large as the relative error.
Special care needs to be taken when one value is zero: we could either take the
absolute error in this case (but that's cheating as the absolute error is likely
+absolute error in this case (but that's cheating as the absolute error is likely
to be very small), or we could assign a value of either 1 or infinity to the
relative error in this special case. In the test cases for the special functions
in this library, everything below a threshold is regarded as "effectively zero",
@@ 24,20 +24,20 @@
in other words all denormalised numbers are regarded as a zero.
All the test programs calculate /quantized relative error/, whereas the graphs
in this manual are produced with the /actual error/. The difference is as
follows: in the test programs, the test data is rounded to the target real type
+in this manual are produced with the /actual error/. The difference is as
+follows: in the test programs, the test data is rounded to the target real type
under test when the program is compiled,
so the error observed will then be a whole number of /units in the last place/
either rounded up from the actual error, or rounded down (possibly to zero).
In contrast the /true error/ is obtained by extending
the precision of the calculated value, and then comparing to the actual value:
+the precision of the calculated value, and then comparing to the actual value:
in this case the calculated error may be some fraction of /units in the last place/.
Note that throughout this manual and the test programs the relative error is
usually quoted in units of epsilon. However, remember that /units in the last place/
more accurately reflect the number of contaminated digits, and that relative
error can /"wobble"/ by a factor of 2 compared to /units in the last place/.
In other words: two implementations of the same function, whose
+In other words: two implementations of the same function, whose
maximum relative errors differ by a factor of 2, can actually be accurate
to the same number of binary digits. You have been warned!
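The error measure described above can be sketched in a few lines of C++ (a simplified illustration of the idea, not the library's actual test code; the "effectively zero" threshold handling is reduced to its simplest form):

```cpp
#include <cmath>
#include <algorithm>
#include <limits>

// Relative difference between a computed value and a reference value,
// symmetric in its arguments so we need not decide which is the "true"
// result. Dividing by the smaller magnitude guarantees the returned
// value is at least as large as the conventional relative error.
double relative_difference(double a, double b)
{
    if (a == 0 && b == 0)
        return 0;
    if (a == 0 || b == 0)
        return 1; // one convention for the special case discussed above
    return std::fabs(a - b) / (std::min)(std::fabs(a), std::fabs(b));
}

// Express the error in units of machine epsilon, as quoted in this manual.
double error_in_epsilon(double a, double b)
{
    return relative_difference(a, b) / std::numeric_limits<double>::epsilon();
}
```

Remember the "wobble" caveat above when reading such numbers: the same binary accuracy can show up as relative errors differing by a factor of two.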
@@ 45,26 +45,28 @@
For many of the functions in this library, it is assumed that the error is
"effectively zero" if the computation can be done with a number of guard
digits. However it should be remembered that if the result is a
/transcendental number/
+digits. However it should be remembered that if the result is a
+/transcendental number/
then as a point of principle we can never be sure that the result is accurate
to more than 1 ulp. This is an example of /the table makers dilemma/: consider what
happens if the first guard digit is a one, and the remaining guard digits are all zero.
+to more than 1 ulp. This is an example of what
+[@http://en.wikipedia.org/wiki/William_Kahan William Kahan] called
+[@http://en.wikipedia.org/wiki/Rounding#The_table-maker.27s_dilemma the table-maker's dilemma]:
+consider what happens if the first guard digit is a one, and the remaining guard digits are all zero.
Do we have a tie or not? Since the only thing we can tell about a transcendental number
is that its digits have no particular pattern, we can never tell if we have a tie,
no matter how many guard digits we have. Therefore, we can never be completely sure
+no matter how many guard digits we have. Therefore, we can never be completely sure
that the result has been rounded in the right direction. Of course, transcendental
numbers that just happen to be a tie  for however many guard digits we have  are
extremely rare, and get rarer the more guard digits we have, but even so....
Refer to the classic text
+Refer to the classic text
[@http://docs.sun.com/source/806-3568/ncg_goldberg.html What Every Computer Scientist Should Know About Floating-Point Arithmetic]
for more information.
[endsect][/section:relative_error Relative Error]
[/
 Copyright 2006 John Maddock and Paul A. Bristow.
+[/
+ Copyright 2006, 2012 John Maddock and Paul A. Bristow.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
Modified: trunk/libs/math/doc/sf_and_dist/faq.qbk
==============================================================================
 trunk/libs/math/doc/sf_and_dist/faq.qbk (original)
+++ trunk/libs/math/doc/sf_and_dist/faq.qbk 20130124 09:12:37 EST (Thu, 24 Jan 2013)
@@ 1,19 +1,19 @@
[section:faq Frequently Asked Questions FAQ]
# ['I'm a FORTRAN/NAG/SPSS/SAS/Cephes/MathCad/R user
+# ['I'm a FORTRAN/NAG/SPSS/SAS/Cephes/MathCad/R user
and I don't see where the functions like dnorm(mean, sd) are in Boost.Math?] [br]
Nearly all are provided, and many more like mean, skewness, quantiles, complements ...
but Boost.Math makes full use of C++, and it looks a bit different.
+Nearly all are provided, and many more like mean, skewness, quantiles, complements ...
+but Boost.Math makes full use of C++, and it looks a bit different.
But do not panic! See section on construction and the many examples.
Briefly, the distribution is constructed with the parameters (like location and scale)
(things after the  in representation like P(X=kn, p) or ; in a common represention of pdf f(x; [mu][sigma][super 2]).
+Briefly, the distribution is constructed with the parameters (like location and scale)
+(things after the | in a representation like P(X=k|n, p), or after the ; in a common representation of the pdf f(x; [mu][sigma][super 2])).
Functions like pdf, cdf are called with the name of that distribution and the random variate often called x or k.
For example, `normal my_norm(0, 1); pdf(my_norm, 2.0);` [br]
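The calling convention in that example can be sketched without Boost at all; the miniature `normal` struct below is purely illustrative (it is not the library's `normal` class), but it shows the same pattern of constructor parameters plus free functions taking the distribution and the variate:

```cpp
#include <cmath>

// Illustrative miniature of the interface style: distribution
// parameters go to the constructor, the random variate goes to
// free functions such as pdf().
struct normal
{
    double mean, sd;
    explicit normal(double m = 0, double s = 1) : mean(m), sd(s) {}
};

// Density of a normal(mean, sd) distribution at x.
double pdf(const normal& d, double x)
{
    const double z = (x - d.mean) / d.sd;
    return std::exp(-0.5 * z * z) / (d.sd * std::sqrt(2 * 3.141592653589793));
}
```

With this sketch, `normal my_norm(0, 1); pdf(my_norm, 2.0);` evaluates the standard normal density at 2.0 (about 0.054), mirroring the Boost.Math call shown above.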
#I'm a user of [@http://support.sas.com/rnd/app/da/new/probabilityfunctions.html New SAS Functions for Computing Probabilities]. [br]
You will find the interface more familiar, but to be able to select a distribution (perhaps using a string)
see the Extras/Future Directions section,
and /boost/libs/math/dot_net_example/boost_math.cpp for an example that is used to create a C# utility
(that you might also find useful): see [@http://sourceforge.net/projects/distexplorer/ Statistical Distribution Explorer] [br].
+(that you might also find useful): see [@http://sourceforge.net/projects/distexplorer/ Statistical Distribution Explorer] [br].
# ['I'm allergic to reading manuals and prefer to learn from examples.][br]
Fear not  you are not alone! Many examples are available for functions and distributions.
Some are referenced directly from the text. Others can be found at \boost_latest_release\libs\math\example.
@@ 27,26 +27,26 @@
Visual Studio 2010 instead provides property sheets to assist.
You may find it convenient to create a new one adding \boostlatest_release;
to the existing include items in $(IncludePath).
# ['I'm a FORTRAN/NAG/SPSS/SAS/Cephes/MathCad/R user and
+# ['I'm a FORTRAN/NAG/SPSS/SAS/Cephes/MathCad/R user and
I don't see where the properties like mean, median, mode, variance, skewness of distributions are in Boost.Math?][br]
They are all available (if defined for the parameters with which you constructed the distribution) via __usual_accessors.
# ['I am a C programmer. Can I use Boost.Math with C?][br]
Yes you can, including all the special functions, and TR1 functions like isnan.
They appear as C functions, by being declared as `extern "C"`.
# ['I am a C# (Basic? F# FORTRAN? Other CLI?) programmer. Can I use Boost.Math with C#?] [br]
Yes you can, including all the special functions, and TR1 functions like isnan.
But you [*must build the Boost.Math as a dynamic library (.dll) and compile with the /CLI option].
See the boost/math/dot_net_example folder which contains an example that
+Yes you can, including all the special functions, and TR1 functions like isnan.
+But you [*must build Boost.Math as a dynamic library (.dll) and compile with the /CLI option].
+See the boost/math/dot_net_example folder which contains an example that
builds a simple statistical distribution app with a GUI.
See [@http://sourceforge.net/projects/distexplorer/ Statistical Distribution Explorer] [br]
# ['What are these "policies" things for?] [br]
Policies are a powerful (if necessarily complex) finegrain mechanism that
allow you to customise the behaviour of the Boost.Math library according to your precise needs.
+Policies are a powerful (if necessarily complex) fine-grain mechanism that
+allows you to customise the behaviour of the Boost.Math library according to your precise needs.
See __policy_section. But if, very probably, the default behaviour suits you, you don't need to know more.
# ['I am a C user and expect to see the global C-style `::errno` set for overflow/errors etc?] [br]
You can achieve what you want  see __error_policy and __user_error_handling and many examples.
# ['I am a C user and expect to silently return a max value for overflow?] [br]
You (and C++ users too) can return whatever you want on overflow
+You (and C++ users too) can return whatever you want on overflow
 see __overflow_error and __error_policy and several examples.
# ['I don't want any error message for overflow etc?] [br]
You can control exactly what happens for all the abnormal conditions, including the values returned.
@@ 55,7 +55,7 @@
Yes but you must customise the error handling: see __user_error_handling and __changing_policy_defaults .
# ['The docs are several hundreds of pages long! Can I read the docs offline or on paper?] [br]
Yes  you can download the Boost current release of most documentation
as a zip of pdfs (including Boost.Math) from Sourceforge, for example
+as a zip of PDFs (including Boost.Math) from SourceForge, for example
[@https://sourceforge.net/projects/boost/files/boost-docs/1.45.0/boost_pdf_1_45_0.tar.gz/download].
And you can print any pages you need (or even print all pages  but be warned that there are several hundred!).
Both html and pdf versions are highly hyperlinked.
@@ 63,14 +63,14 @@
This can often find what you seek, a partial substitute for a full index.
# ['I want a compact version for an embedded application. Can I use float precision?] [br]
Yes  by selecting RealType template parameter as float:
for example normal_distribution<float> your_normal(mean, sd);
+for example normal_distribution<float> your_normal(mean, sd);
(But double may still be used internally, so the space saving may be less than you hope for.)
You can also change the promotion policy, but accuracy might be much reduced.
# ['I seem to get somewhat different results compared to other programs. Why?]
We hope Boost.Math is more accurate: our priority is accuracy (over speed).
See the section on accuracy. But for evaluations that require iterations
there are parameters which can change the required accuracy. You might be able to
squeeze a little more accuracy at the cost of runtime.
+squeeze a little more accuracy at the cost of runtime.
# ['Will my program run more slowly compared to other math functions and statistical libraries?]
Probably, though not always, and not by too much: our priority is accuracy.
For most functions, making sure you have the latest compiler version with all optimisations switched on is the key to speed.
@@ 79,15 +79,23 @@
# ['How do I handle infinity and NaNs portably?] [br]
See __fp_facets for Facets for FloatingPoint Infinities and NaNs.
# ['Where are the prebuilt libraries?] [br]
Good news  you probably don't need any!  just #include <boost/math/distribution_you_want>.
+Good news - you probably don't need any! Just `#include <boost/math/distribution_you_want>`.
But in the unlikely event that you do, see __building.
# ['I don't see the function or distribution that I want.] [br]
You could try an email to ask the authors  but no promises!
+# ['I need more decimal digits for values/computations.] [br]
+You can use Boost.Math with __multiprecision: typically
+__cpp_dec_float is a useful user-defined type to provide a fixed number of decimal digits, usually 50 or 100.
+# ['Why can't I write something really simple like `cpp_int one(1); cpp_dec_float_50 two(2); one * two;`?] [br]
+Because `cpp_int` might be bigger than `cpp_dec_float` can hold, so you must make an [*explicit] conversion.
+See [@http://svn.boost.org/svn/boost/trunk/libs/multiprecision/doc/html/boost_multiprecision/intro.html mixed multiprecision arithmetic]
+and [@http://svn.boost.org/svn/boost/trunk/libs/multiprecision/doc/html/boost_multiprecision/tut/conversions.html conversion].
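The reason an [*explicit] conversion is required can be illustrated with toy types (these stand-ins are purely hypothetical and are not the real `cpp_int` or `cpp_dec_float_50`): when one numeric type cannot safely represent every value of another, idiomatic C++ marks the converting constructor `explicit`, so a mixed expression fails to compile until the caller converts deliberately.

```cpp
// Toy stand-ins for the two multiprecision types discussed above.
struct big_int            // plays the role of cpp_int
{
    long long v;
    explicit big_int(long long x) : v(x) {}
};

struct dec_float          // plays the role of cpp_dec_float_50
{
    double v;
    explicit dec_float(double x) : v(x) {}
    // No implicit conversion from big_int: a big_int value might not
    // fit, so the caller must convert explicitly, as in the FAQ answer.
};

inline dec_float operator*(const dec_float& a, const dec_float& b)
{
    return dec_float(a.v * b.v);
}

// `one * two` would not compile; an explicit conversion does:
inline dec_float checked_product(const big_int& one, const dec_float& two)
{
    return dec_float(static_cast<double>(one.v)) * two;
}
```

The real multiprecision types follow the same design choice for the same reason: silent narrowing between unrelated precisions would hide bugs.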
+
[endsect] [/section:faq Frequently Asked Questions]
[/
 Copyright 2010 John Maddock and Paul A. Bristow.
+[/
+ Copyright 2010, 2012 John Maddock and Paul A. Bristow.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
Modified: trunk/libs/math/doc/sf_and_dist/implementation.qbk
==============================================================================
 trunk/libs/math/doc/sf_and_dist/implementation.qbk (original)
+++ trunk/libs/math/doc/sf_and_dist/implementation.qbk 20130124 09:12:37 EST (Thu, 24 Jan 2013)
@@ 36,7 +36,7 @@
then a user may substitute his custom specialization.
For example, there are approximations dating back from times
when computation was a *lot* more expensive:
+when computation was a [*lot] more expensive:
H Goldberg and H Levine, Approximate formulas for
percentage points and normalisation of t and chi squared,
@@ 160,14 +160,14 @@
mean(cauchy<>()) will return std::numeric_limits<T>::quiet_NaN().
[warning If `std::numeric_limits<T>::has_quiet_NaN` is false
(for example T is a Userdefined type),
+(for example, if T is a user-defined type without NaN support),
then an exception will always be thrown when a domain error occurs.
Catching exceptions is therefore strongly recommended.]
[h4 Median of distributions]
There are many distributions for which we have been unable to find an analytic formula,
and this has deterred us from implementing
+and this has deterred us from implementing
[@http://en.wikipedia.org/wiki/Median median functions], the midpoint in a list of values.
However a useful numerical approximation for distribution `dist`
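The approximation in question is, as far as we know, `quantile(dist, 0.5)`: the point at which the CDF reaches one half. A minimal sketch of that idea, using bisection on a hand-written standard-normal CDF rather than the library's own `quantile` (names and interval bounds here are illustrative):

```cpp
#include <cmath>

// Standard normal CDF via the complementary error function.
double normal_cdf(double x)
{
    return 0.5 * std::erfc(-x / std::sqrt(2.0));
}

// Approximate the median as the point where the CDF reaches 0.5,
// using simple bisection on an interval known to bracket it.
double approximate_median(double lo, double hi)
{
    for (int i = 0; i < 200; ++i)
    {
        const double mid = 0.5 * (lo + hi);
        if (normal_cdf(mid) < 0.5)
            lo = mid;
        else
            hi = mid;
    }
    return 0.5 * (lo + hi);
}
```

For the standard normal this converges to 0, the exact median; the same quantile-at-one-half idea works for any distribution with an invertible CDF.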
@@ 211,13 +211,13 @@
and it is supported and tested by the distribution.
The range for these distributions is set to infinity if supported by the platform,
(by testing `std::numeric_limits<RealType>::has_infinity`)
+(by testing `std::numeric_limits<RealType>::has_infinity`)
else the maximum value provided for the `RealType` by Boost.Math.
Testing for has_infinity is obviously important for arbitrary precision types
where infinity makes much less sense than for IEEE754 floatingpoint.
So far we have not set `support()` function (only range)
+So far we have not set `support()` function (only range)
on the grounds that the PDF is uninteresting/zero for infinities.
Users who require special handling of infinity (or other specific value) can,
@@ 307,7 +307,8 @@
to provide high accuracy constants to mathematical functions and distributions,
since it is important to provide values uniformly for both builtin
float, double and long double types,
and for User Defined types like NTL::quad_float and NTL::RR.
+and for user-defined types in __multiprecision like __cpp_dec_float,
+and others like NTL::quad_float and NTL::RR.
To permit calculations in this Math ToolKit and its tests, (and elsewhere)
at about 100 decimal digits with NTL::RR type,
@@ 353,21 +354,19 @@
[h4 Thread safety]
Reporting of error by setting errno should be thread safe already
+Reporting of errors by setting `errno` should be thread-safe already
(otherwise none of the std lib math functions would be thread safe?).
If you turn on reporting of errors via exceptions, errno gets left unused anyway.
+If you turn on reporting of errors via exceptions, `errno` gets left unused anyway.
Other than that, the code is intended to be thread safe *for built in
realnumber types* : so float, double and long double are all thread safe.
+For normal C++ usage, the Boost.Math `static const` constants are now thread-safe, so
+the built-in real-number types `float`, `double` and `long double` are all thread-safe.
For non-built-in types - NTL::RR for example - initialisation of the various
constants used in the implementation is potentially *not* thread safe.
This is most undesirable, but it would be a significant challenge to fix it.
Some compilers may offer the option of having
staticconstants initialised in a thread safe manner (Commeau, and maybe
others?), if that's the case then the problem is solved. This is a topic of
hot debate for the next C++ std revision, so hopefully all compilers
will be required to do the right thing here at some point.
+For user-defined types, for example __cpp_dec_float,
+Boost.Math should also be thread-safe
+(though we are unsure how to rigorously prove this).
+
+(Thread safety has received attention in the C++11 Standard revision,
+so hopefully all compilers will do the right thing here at some point.)
[h4 Sources of Test Data]
@@ 381,7 +380,7 @@
provided a higher accuracy than
C++ double (64bit floatingpoint) and was regarded as
the mosttrusted source by far.
The __R provided the widest range of distributions,
+The __R provided the widest range of distributions,
but the usual Intel x86 distribution uses 64-bit doubles,
so our use was limited to 15 to 17 decimal digit accuracy.
@@ 418,7 +417,7 @@
Usage is `check_out_of_range< DistributionType >(listofparams);`
Where listofparams is a list of *valid* parameters from which the distribution can be constructed
 ie the same number of args are passed to the function,
as are passed to the distribution constructor.
+as are passed to the distribution constructor.
The values of the parameters are not important, but must be *valid* to pass the constructor checks;
the default values are suitable, but must be explicitly provided, for example:
@@ 628,12 +627,12 @@
[h4 Producing Graphs]
Graphs were produced in SVG format and then converted to PNG's using the same
process as the equations.
+process as the equations.
The programs
+The programs
`/libs/math/doc/sf_and_dist/graphs/dist_graphs.cpp`
and `/libs/math/doc/sf_and_dist/graphs/sf_graphs.cpp`
generate the SVG's directly using the
+generate the SVG's directly using the
[@http://code.google.com/soc/2007/boost/about.html Google Summer of Code 2007]
project of Jacob Voytko (whose work so far,
considerably enhanced and now reasonably mature and usable, by Paul A. Bristow,
Modified: trunk/libs/math/doc/sf_and_dist/math.qbk
==============================================================================
 trunk/libs/math/doc/sf_and_dist/math.qbk (original)
+++ trunk/libs/math/doc/sf_and_dist/math.qbk 20130124 09:12:37 EST (Thu, 24 Jan 2013)
@@ 1,13 +1,13 @@
[article Math Toolkit
[quickbook 1.5]
 [copyright 2006, 2007, 2008, 2009, 2010, 2012 John Maddock, Paul A. Bristow, Hubert Holin, Xiaogang Zhang, Bruno Lalande, Johan RÃ¥de, Gautam Sewani, Thijs van den Berg and Benjamin Sobotta]
+ [copyright 2006, 2007, 2008, 2009, 2010, 2012, 2013 John Maddock, Paul A. Bristow, Hubert Holin, Xiaogang Zhang, Bruno Lalande, Johan Råde, Gautam Sewani, Thijs van den Berg, Benjamin Sobotta and Christopher Kormanyos]
[/purpose ISBN 095048332X 9780950483320, Classification 519.2dc22]
[license
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
[@http://www.boost.org/LICENSE_1_0.txt])
]
 [authors [Maddock, John], [Bristow, Paul A.], [Holin, Hubert], [Zhang, Xiaogang], [Lalande, Bruno], [RÃ¥de, Johan], [Sewani, Gautam], [van den Berg, Thijs], [Sobotta, Benjamin]]
+ [authors [Maddock, John], [Bristow, Paul A.], [Holin, Hubert], [Zhang, Xiaogang], [Lalande, Bruno], [Råde, Johan], [Sewani, Gautam], [van den Berg, Thijs], [Sobotta, Benjamin], [Kormanyos, Christopher] ]
[/lastrevision $Date$]
]
@@ 310,8 +310,10 @@
[def __NTL [@http://www.shoup.net/ntl/ NTL A Library for doing Number Theory]]
[def __NTL_RR [@http://shoup.net/ntl/doc/RR.txt NTL::RR]]
[def __NTL_quad_float [@http://shoup.net/ntl/doc/quad_float.txt NTL::quad_float]]
[def __MPFR [@http://www.mpfr.org/ MPFR]]
+[def __MPFR [@http://www.mpfr.org/ GNU MPFR library]]
[def __GMP [@http://gmplib.org/ GNU Multiple Precision Arithmetic Library]]
+[def __multiprecision [@http://www.boost.org/doc/libs/1_53_0_beta1/libs/multiprecision/doc/html/index.html Boost.Multiprecision]]
+[def __cpp_dec_float [@http://www.boost.org/doc/libs/1_53_0_beta1/libs/multiprecision/doc/html/boost_multiprecision/tut/floats/cpp_dec_float.html cpp_dec_float]]
[def __R [@http://www.rproject.org/ The R Project for Statistical Computing]]
[def __godfrey [link godfrey Godfrey]]
[def __pugh [link pugh Pugh]]
@@ 319,12 +321,15 @@
[def __errno [@http://en.wikipedia.org/wiki/Errno `::errno`]]
[def __Mathworld [@http://mathworld.wolfram.com Wolfram MathWorld]]
[def __Mathematica [@http://www.wolfram.com/products/mathematica/index.html Wolfram Mathematica]]
+[def __WolframAlpha [@http://www.wolframalpha.com/ Wolfram Alpha]]
[def __TOMS748 [@http://portal.acm.org/citation.cfm?id=210111 TOMS Algorithm 748: enclosing zeros of continuous functions]]
+[def __TOMS910 [@http://portal.acm.org/citation.cfm?id=1916469 TOMS Algorithm 910: A Portable C++ MultiplePrecision System for SpecialFunction Calculations]]
[def __why_complements [link why_complements why complements?]]
[def __complements [link complements complements]]
[def __performance [link math_toolkit.perf performance]]
[def __building [link math_toolkit.main_overview.building building libraries]]
+
[/ Some composite templates]
[template super[x]'''<superscript>'''[x]'''</superscript>''']
[template sub[x]'''<subscript>'''[x]'''</subscript>''']
Modified: trunk/libs/math/doc/sf_and_dist/minimax.qbk
==============================================================================
 trunk/libs/math/doc/sf_and_dist/minimax.qbk (original)
+++ trunk/libs/math/doc/sf_and_dist/minimax.qbk 20130124 09:12:37 EST (Thu, 24 Jan 2013)
@@ 13,8 +13,8 @@
algorithm is, and the general form of the approximation you want to achieve.
Unless you are already familiar with the Remez method,
you should first read the [link math_toolkit.backgrounders.remez
brief background article explaining the principles behind the
+you should first read the [link math_toolkit.backgrounders.remez
+brief background article explaining the principles behind the
Remez algorithm].
The program consists of two parts:
@@ 29,18 +29,18 @@
the same compiled program: each as a separate variant:
NTL::RR f(const NTL::RR& x, int variant);

+
Returns the value of the function /variant/ at point /x/. So if you
wish you can just add the function to approximate as a new variant
after the existing examples.
In addition to those two files, the program needs to be linked to
a [link math_toolkit.using_udt.use_ntl patched NTL library to compile].
+a [link math_toolkit.using_udt.high_precision.use_ntl patched NTL library to compile].
Note that the function /f/ must return the rational part of the
approximation: for example if you are approximating a function
/f(x)/ then it is quite common to use:

+
f(x) = g(x)(Y + R(x))
where /g(x)/ is the dominant part of /f(x)/, /Y/ is some constant, and
@@ 50,7 +50,7 @@
In this case you would define /f/ to return ['f(x)/g(x)] and then set the
yoffset of the approximation to /Y/ (see command line options below).
Many other forms are possible, but in all cases the objective is to
+Many other forms are possible, but in all cases the objective is to
split /f(x)/ into a dominant part that you can evaluate easily using
standard math functions, and a smooth and slowly changing rational approximation
part. Refer to your favourite textbook for more examples.
@@ 62,7 +62,7 @@
that are to be approximated to be compiled into the same executable.
Defaults to 0.]]
[[range a b][Sets the domain for the approximation to the range \[a,b\], defaults
 to \[0,1\].]]
+ to \[0,1\].]]
[[relative][Sets the Remez code to optimise for relative error. This is the default
at program startup. Note that relative error can only be used
if f(x) has no roots over the range being optimised.]]
@@ 70,7 +70,7 @@
[[pin \[truefalse\]]["Pins" the code so that the rational approximation
passes through the origin. Obviously only set this to
/true/ if R(0) must be zero. This is typically used when
 trying to preserve a root at \[0,0\] while also optimising
+ trying to preserve a root at \[0,0\] while also optimising
for relative error.]]
[[order N D][Sets the order of the approximation to /N/ in the numerator and /D/
in the denominator. If /D/ is zero then the result will be a polynomial
@@ 78,7 +78,7 @@
coefficient of the numerator is zero if /pin/ was set to true, and the
first coefficient of the denominator is always one.]]
[[workingprecision N][Sets the working precision of NTL::RR to /N/ binary digits. Defaults to 250.]]
[[targetprecision N][Sets the precision of printed output to /N/ binary digits:
+[[targetprecision N][Sets the precision of printed output to /N/ binary digits:
set to the same number of digits as the type that will be used to
evaluate the approximation. Defaults to 53 (for double precision).]]
[[skew val]["Skews" the initial interpolated control points towards one
@@ 89,32 +89,32 @@
try adjusting the skew parameter until the first step yields
the smallest possible error. /val/ should be in the range
\[100,+100\], the default is zero.]]
[[brake val][Sets a brake on each step so that the change in the
+[[brake val][Sets a brake on each step so that the change in the
control points is braked by /val%/. Defaults to 50,
try a higher value if an approximation won't converge,
or a lower value to get speedier convergence.]]
[[xoffset val][Sets the xoffset to /val/: the approximation will
 be generated for `f(S * (x + X)) + Y` where /X/ is the
+ be generated for `f(S * (x + X)) + Y` where /X/ is the
xoffset, /S/ is the xscale
and /Y/ is the yoffset. Defaults to zero. To avoid
rounding errors, take care to specify a value that can
be exactly represented as a floating point number.]]
[[xscale val][Sets the xscale to /val/: the approximation will
 be generated for `f(S * (x + X)) + Y` where /S/ is the
+ be generated for `f(S * (x + X)) + Y` where /S/ is the
xscale, /X/ is the xoffset
and /Y/ is the yoffset. Defaults to one. To avoid
rounding errors, take care to specify a value that can
be exactly represented as a floating point number.]]
[[yoffset val][Sets the yoffset to /val/: the approximation will
 be generated for `f(S * (x + X)) + Y` where /X/
+ be generated for `f(S * (x + X)) + Y` where /X/
is the xoffset, /S/ is the xscale
and /Y/ is the yoffset. Defaults to zero. To avoid
rounding errors, take care to specify a value that can
be exactly represented as a floating point number.]]
[[yoffset auto][Sets the yoffset to the average value of f(x)
evaluated at the two endpoints of the range plus the midpoint
 of the range. The calculated value is deliberately truncated
 to /float/ precision (and should be stored as a /float/
+ of the range. The calculated value is deliberately truncated
+ to /float/ precision (and should be stored as a /float/
in your code). The approximation will
be generated for `f(x + X) + Y` where /X/ is the xoffset
and /Y/ is the yoffset. Defaults to zero.]]
@@ 124,7 +124,7 @@
of interest.]]
[[step N][Performs /N/ steps, or one step if /N/ is unspecified.
After each step prints: the peak error at the extrema of
 the error function of the approximation,
+ the error function of the approximation,
the theoretical error term solved for on the last step,
and the maximum relative change in the location of the
Chebyshev control points. The approximation is converged on the
@@ 154,12 +154,12 @@
[[info][Prints out the current approximation: the location of the zeros of the
error function, the location of the Chebyshev control points, the
x and y offsets, and of course the coefficients of the polynomials.]]
]
+]
[endsect][/section:minimax Minimax Approximations and the Remez Algorithm]
[/
+[/
Copyright 2006 John Maddock and Paul A. Bristow.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
Modified: trunk/libs/math/doc/sf_and_dist/performance.qbk
==============================================================================
 trunk/libs/math/doc/sf_and_dist/performance.qbk (original)
+++ trunk/libs/math/doc/sf_and_dist/performance.qbk 20130124 09:12:37 EST (Thu, 24 Jan 2013)
@@ 11,16 +11,15 @@
In all of the following tables, the best performing
result in each row, is assigned a relative value of "1" and shown
in bold, so a score of "2" means ['"twice as slow as the best
+in bold, so a score of "2" means ['"twice as slow as the best
performing result".] Actual timings in seconds per function call
are also shown in parenthesis.
+are also shown in parentheses.
Result were obtained on a system
with an Intel 2.8GHz Pentium 4 processor with 2Gb of RAM and running
either Windows XP or Mandriva Linux.
+either Windows XP or Mandriva Linux.
[caution As usual
with performance results these should be taken with a large pinch
+[caution As usual with performance results, these should be taken with a large pinch
of salt: relative performance is known to shift quite a bit depending
upon the architecture of the particular test system used. Furthermore,
our performance results were obtained using our own test data:
@@ 37,18 +36,18 @@
[section:getting_best Getting the Best Performance from this Library]
By far the most important thing you can do when using this library
is turn on your compiler's optimisation options. As the following
table shows the penalty for using the library in debug mode can be
quite large.
+is to turn on your compiler's optimisation options. As the following
+table shows, the penalty for using the library in debug mode can be
+quite large.
[table Performance Comparison of Release and Debug Settings
[[Function]
[Microsoft Visual C++ 8.0

+
Debug Settings: /Od /ZI
]
[Microsoft Visual C++ 8.0

+
Release settings: /Ox /arch:SSE2
]]
@@ 66,25 +65,25 @@
[section:comp_compilers Comparing Compilers]
After a good choice of build settings the next most important thing
+After a good choice of build settings the next most important thing
you can do, is choose your compiler
 and the standard C library it sits on top of  very carefully. GCC3.x
in particular has been found to be particularly bad at inlining code,
+in particular has been found to be particularly bad at inlining code,
and performing the kinds of high level transformations that good C++ performance
demands (thankfully GCC4.x is somewhat better in this respect).
[table Performance Comparison of Various Windows Compilers
[[Function]
[Intel C++ 10.0

+
( /Ox /Qipo /QxN )
]
[Microsoft Visual C++ 8.0

+
( /Ox /arch:SSE2 )
]
[Cygwin G++ 3.4

+
( /O3 )
]]
[[__erf][[perf intelerf..[para *1.00*][para (4.118e008s)]]][[perf msvcerf..[para *1.00*][para (1.483e007s)]]][[perf gccerf..[para 3.24][para (1.336e007s)]]]]
@@ 105,7 +104,7 @@
that are determined by configuration macros. These should be set
in boost/math/tools/user.hpp; or else reported to the Boostdevelopment
mailing list so that the appropriate option for a given compiler and
OS platform can be set automatically in our configuration setup.
+OS platform can be set automatically in our configuration setup.
[table
[[Macro][Meaning]]
@@ 127,16 +126,16 @@
[Many of the coefficients to the polynomials and rational functions
used by this library are integers. Normally these are stored as tables
as integers, but if mixed integer / floating point arithmetic is much
 slower than regular floating point arithmetic then they can be stored
+ slower than regular floating point arithmetic then they can be stored
as tables of floating point values instead. If mixed arithmetic is slow
then add:

+
#define BOOST_MATH_INT_TABLE_TYPE(RT, IT) RT

+
to boost/math/tools/user.hpp, otherwise the default of:

+
#define BOOST_MATH_INT_TABLE_TYPE(RT, IT) IT

+
Set in boost/math/config.hpp is fine, and may well result in smaller
code.
]]
@@ 148,8 +147,8 @@
[table
[[Value][Effect]]
[[0][The polynomial or rational function is evaluated using Horner's
 method, and a simple forloop.

+ method, and a simple forloop.
+
Note that if the order of the polynomial
or rational function is a runtime parameter, or the order is
greater than the value of `BOOST_MATH_MAX_POLY_ORDER`, then
@@ 179,13 +178,13 @@
than or equal to `BOOST_MATH_MAX_POLY_ORDER`.]]
]
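Horner's method, as selected by value 0 above, can be sketched as the following stand-alone loop (an illustration of the scheme, not the library's internal evaluator):

```cpp
// Evaluate c[0] + c[1]*x + ... + c[n-1]*x^(n-1) by Horner's method:
// a simple for-loop with one multiply and one add per coefficient.
double horner(const double* c, int n, double x)
{
    double result = c[n - 1];
    for (int i = n - 2; i >= 0; --i)
        result = result * x + c[i];
    return result;
}
```

As the table notes, this simple loop is also what gets used whenever the polynomial's order is only known at runtime.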
To determine which
+To determine which
of these options is best for your particular compiler/platform build
the performance test application with your usual release settings,
and run the program with the tune command line option.
In practice the difference between methods is rather small at present,
as the following table shows. However, parallelisation /vectorisation
+as the following table shows. However, parallelisation/vectorisation
is likely to become more important in the future: quite likely the methods
currently supported will need to be supplemented or replaced by ones more
suited to highly vectorisable processors in the future.
@@ 205,25 +204,25 @@
There is one final performance tuning option that is available as a compile time
[link math_toolkit.policy policy]. Normally when evaluating functions at `double`
precision, these are actually evaluated at `long double` precision internally:
this helps to ensure that as close to full `double` precision as possible is
+this helps to ensure that as close to full `double` precision as possible is
achieved, but may slow down execution in some environments. The defaults for
this policy can be changed by
[link math_toolkit.policy.pol_ref.policy_defaults
+this policy can be changed by
+[link math_toolkit.policy.pol_ref.policy_defaults
defining the macro `BOOST_MATH_PROMOTE_DOUBLE_POLICY`]
to `false`, or
[link math_toolkit.policy.pol_ref.internal_promotion
+to `false`, or
+[link math_toolkit.policy.pol_ref.internal_promotion
by specifying a specific policy] when calling the special
functions or distributions. See also the
+functions or distributions. See also the
[link math_toolkit.policy.pol_tutorial policy tutorial].
[table Performance Comparison with and Without Internal Promotion to long double
[[Function]
[GCC 4.2, Linux

+
(with internal promotion of double to long double).
]
[GCC 4.2, Linux

+
(without promotion of double).
]
]
@@ 241,15 +240,15 @@
[section:comparisons Comparisons to Other Open Source Libraries]
We've run our performance tests both for our own code, and against other
open source implementations of the same functions. The results are
+open source implementations of the same functions. The results are
presented below to give you a rough idea of how they all compare.
[caution
You should exercise extreme caution when interpreting
these results, relative performance may vary by platform, the tests use
data that gives good code coverage of /our/ code, but which may skew the
results towards the corner cases. Finally, remember that different
libraries make different choices with regard to performance versus
+data that gives good code coverage of /our/ code, but which may skew the
+results towards the corner cases. Finally, remember that different
+libraries make different choices with regard to performance versus
numerical stability.
]
@@ 325,7 +324,7 @@
All the results were measured on a 2.0GHz Intel T5800 Core 2 Duo, 4Gb RAM, Windows Vista
machine, with the test program compiled with Microsoft Visual C++ 2009, and
R2.9.2 compiled in "standalone mode" with MinGW-4.3
+R2.9.2 compiled in "standalone mode" with MinGW-4.3
(R2.9.2 appears not to be buildable with Visual C++).
[table A Comparison to the R Statistical Library on Windows XP
@@ 343,12 +342,12 @@
[[__F_distrib CDF][[perf msvcdistfisher_fcdf..[para *1.00*][para (9.556e-007s)]]][[perf msvcdistfRcdf..[para 1.34][para (1.283e-006s)]]][[perf msvcdistfdcdcdf..[para 1.24][para (1.183e-006s)]]]]
[[__F_distrib Quantile][[perf msvcdistfisher_fquantile..[para *1.00*][para (6.987e-006s)]]][[perf msvcdistfRquantile..[para 1.33][para (9.325e-006s)]]][[perf msvcdistfdcdquantile..[para 3.16][para (2.205e-005s)]]]]
[[__gamma_distrib CDF][[perf msvcdistgammacdf..[para 1.52][para (6.240e-007s)]]][[perf msvcdistgammaRcdf..[para 3.11][para (1.279e-006s)]]][[perf msvcdistgamdcdcdf..[para *1.00*][para (4.111e-007s)]]]]
[[__gamma_distrib Quantile][[perf msvcdistgammaquantile..[para 1.24][para (2.179e-006s)]]][[perf msvcdistgammaRquantile..[para 6.25][para (1.102e-005s)]]][[perf msvcdistgamdcdquantile..[para *1.00*][para (1.764e-006s)]]]]
+[[__gamma_distrib Quantile][[perf msvcdistgammaquantile..[para 1.24][para (2.179e-006s)]]][[perf msvcdistgammaRquantile..[para 6.25][para (1.102e-005s)]]][[perf msvcdistgamdcdquantile..[para *1.00*][para (1.764e-006s)]]]]
[[__hypergeometric_distrib CDF][[perf msvcdisthypergeometriccdf..[para 3.60[footnote This result is somewhat misleading: for small values of the parameters there is virtually no difference between the two libraries, but for large values the Boost implementation is /much/ slower, albeit with much improved precision.]][para (5.987e-007s)]]][[perf msvcdisthypergeoRcdf..[para *1.00*][para (1.665e-007s)]]][NA]]
[[__hypergeometric_distrib Quantile][[perf msvcdisthypergeometricquantile..[para *1.00*][para (5.684e-007s)]]][[perf msvcdisthypergeoRquantile..[para 3.53][para (2.004e-006s)]]][NA]]
+[[__hypergeometric_distrib Quantile][[perf msvcdisthypergeometricquantile..[para *1.00*][para (5.684e-007s)]]][[perf msvcdisthypergeoRquantile..[para 3.53][para (2.004e-006s)]]][NA]]
[[__logistic_distrib CDF][[perf msvcdistlogisticcdf..[para *1.00*][para (1.714e-007s)]]][[perf msvcdistlogisRcdf..[para 5.24][para (8.984e-007s)]]][NA]]
[[__logistic_distrib Quantile][[perf msvcdistlogisticquantile..[para 1.02][para (2.084e-007s)]]][[perf msvcdistlogisRquantile..[para *1.00*][para (2.043e-007s)]]][NA]]
+[[__logistic_distrib Quantile][[perf msvcdistlogisticquantile..[para 1.02][para (2.084e-007s)]]][[perf msvcdistlogisRquantile..[para *1.00*][para (2.043e-007s)]]][NA]]
[[__lognormal_distrib CDF][[perf msvcdistlognormalcdf..[para *1.00*][para (3.579e-007s)]]][[perf msvcdistlnormRcdf..[para 1.49][para (5.332e-007s)]]][NA]]
[[__lognormal_distrib Quantile][[perf msvcdistlognormalquantile..[para *1.00*][para (9.622e-007s)]]][[perf msvcdistlnormRquantile..[para 1.57][para (1.507e-006s)]]][NA]]
@@ 392,12 +391,12 @@
[[__F_distrib CDF][[perf gcc4_3_2distfisher_fcdf..[para 1.62][para (2.324e-006s)]]][[perf gcc4_3_2distfRcdf..[para 1.19][para (1.711e-006s)]]][[perf gcc4_3_2distfdcdcdf..[para *1.00*][para (1.437e-006s)]]]]
[[__F_distrib Quantile][[perf gcc4_3_2distfisher_fquantile..[para 1.53][para (1.577e-005s)]]][[perf gcc4_3_2distfRquantile..[para *1.00*][para (1.033e-005s)]]][[perf gcc4_3_2distfdcdquantile..[para 2.63][para (2.719e-005s)]]]]
[[__gamma_distrib CDF][[perf gcc4_3_2distgammacdf..[para 3.18][para (1.582e-006s)]]][[perf gcc4_3_2distgammaRcdf..[para 2.63][para (1.309e-006s)]]][[perf gcc4_3_2distgamdcdcdf..[para *1.00*][para (4.980e-007s)]]]]
[[__gamma_distrib Quantile][[perf gcc4_3_2distgammaquantile..[para 2.19][para (4.770e-006s)]]][[perf gcc4_3_2distgammaRquantile..[para 6.94][para (1.513e-005s)]]][[perf gcc4_3_2distgamdcdquantile..[para *1.00*][para (2.179e-006s)]]]]
+[[__gamma_distrib Quantile][[perf gcc4_3_2distgammaquantile..[para 2.19][para (4.770e-006s)]]][[perf gcc4_3_2distgammaRquantile..[para 6.94][para (1.513e-005s)]]][[perf gcc4_3_2distgamdcdquantile..[para *1.00*][para (2.179e-006s)]]]]
[[__hypergeometric_distrib CDF][[perf gcc4_3_2disthypergeometriccdf..[para 2.20[footnote This result is somewhat misleading: for small values of the parameters there is virtually no difference between the two libraries, but for large values the Boost implementation is /much/ slower, albeit with much improved precision.]][para (3.522e-007s)]]][[perf gcc4_3_2disthypergeoRcdf..[para *1.00*][para (1.601e-007s)]]][NA]]
[[__hypergeometric_distrib Quantile][[perf gcc4_3_2disthypergeometricquantile..[para *1.00*][para (8.279e-007s)]]][[perf gcc4_3_2disthypergeoRquantile..[para 2.57][para (2.125e-006s)]]][NA]]
+[[__hypergeometric_distrib Quantile][[perf gcc4_3_2disthypergeometricquantile..[para *1.00*][para (8.279e-007s)]]][[perf gcc4_3_2disthypergeoRquantile..[para 2.57][para (2.125e-006s)]]][NA]]
[[__logistic_distrib CDF][[perf gcc4_3_2distlogisticcdf..[para *1.00*][para (9.398e-008s)]]][[perf gcc4_3_2distlogisRcdf..[para 2.75][para (2.588e-007s)]]][NA]]
[[__logistic_distrib Quantile][[perf gcc4_3_2distlogisticquantile..[para *1.00*][para (9.893e-008s)]]][[perf gcc4_3_2distlogisRquantile..[para 1.30][para (1.285e-007s)]]][NA]]
+[[__logistic_distrib Quantile][[perf gcc4_3_2distlogisticquantile..[para *1.00*][para (9.893e-008s)]]][[perf gcc4_3_2distlogisRquantile..[para 1.30][para (1.285e-007s)]]][NA]]
[[__lognormal_distrib CDF][[perf gcc4_3_2distlognormalcdf..[para *1.00*][para (1.831e-007s)]]][[perf gcc4_3_2distlnormRcdf..[para 1.39][para (2.539e-007s)]]][NA]]
[[__lognormal_distrib Quantile][[perf gcc4_3_2distlognormalquantile..[para 1.10][para (5.551e-007s)]]][[perf gcc4_3_2distlnormRquantile..[para *1.00*][para (5.037e-007s)]]][NA]]
@@ 425,26 +424,26 @@
[section:perf_test_app The Performance Test Application]
Under ['boostpath]\/libs\/math\/performance you will find a
+Under ['boostpath]\/libs\/math\/performance you will find a
(fairly rudimentary) performance test application for this library.
To run this application yourself, build the all the .cpp files in
['boostpath]\/libs\/math\/performance into an application using
your usual release-build settings. Run the application with --help
to see a full list of options, or with --all to test everything
(which takes quite a while), or with --tune to test the
+to see a full list of options, or with --all to test everything
+(which takes quite a while), or with --tune to test the
[link math_toolkit.perf.tuning available performance tuning options].
If you want to use this application to test the effect of changing
any of the __policy_section, then you will need to build and run it twice:
once with the default __policy_section, and then a second time with the
+any of the __policy_section, then you will need to build and run it twice:
+once with the default __policy_section, and then a second time with the
__policy_section you want to test set as the default.
[endsect]
[endsect]
[/
+[/
Copyright 2006 John Maddock and Paul A. Bristow.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
Modified: trunk/libs/math/doc/sf_and_dist/result_type_calc.qbk
==============================================================================
--- trunk/libs/math/doc/sf_and_dist/result_type_calc.qbk (original)
+++ trunk/libs/math/doc/sf_and_dist/result_type_calc.qbk 2013-01-24 09:12:37 EST (Thu, 24 Jan 2013)
@@ 9,19 +9,19 @@
foo(1.0f, 2);
foo(1.0, 2L);
etc, are all valid calls, as long as "foo" is a function taking two
+etc, are all valid calls, as long as "foo" is a function taking two
floating-point arguments. But that leaves the question:
[blurb ['"Given a special function with N arguments of
types T1, T2, T3 ... TN, then what type is the result?"]]
[*If all the arguments are of the same (floating point) type then the
+[*If all the arguments are of the same (floating point) type then the
result is the same type as the arguments.]
Otherwise, the type of the result
is computed using the following logic:
# Any arguments that are not template arguments are disregarded from
+# Any arguments that are not template arguments are disregarded from
further analysis.
# For each type in the argument list, if that type is an integer type
then it is treated as if it were of type double for the purposes of
@@ 36,49 +36,49 @@
# Otherwise the result is of type `float`.
For example:

+
cyl_bessel_j(2, 3.0);

+
Returns a `double` result, as does:
cyl_bessel_j(2, 3.0f);

+
as in this case the integer first argument is treated as a `double` and takes
precedence over the `float` second argument. To get a `float` result we would need
all the arguments to be of type float:
cyl_bessel_j(2.0f, 3.0f);

+
When one or more of the arguments is not a template argument then it
doesn't affect the return type at all, for example:
sph_bessel(2, 3.0f);

+
returns a `float`, since the first argument is not a template argument and
so doesn't affect the result: without this rule functions that take
explicitly integer arguments could never return `float`.
And for user defined types, all of the following return an NTL::RR result:
+And for user-defined types, all of the following return an `NTL::RR` result:
cyl_bessel_j(0, NTL::RR(2));

+
cyl_bessel_j(NTL::RR(2), 3);

+
cyl_bessel_j(NTL::quad_float(2), NTL::RR(3));

In the last case, quad_float is convertible to RR, but not vice versa, so
the result will be an NTL::RR. Note that this assumes that you are using
a [link math_toolkit.using_udt.use_ntl patched NTL library].
These rules are chosen to be compatible with the behaviour of
+In the last case, `quad_float` is convertible to `RR`, but not vice versa, so
+the result will be an `NTL::RR`. Note that this assumes that you are using
+a [link math_toolkit.using_udt.high_precision.use_ntl patched NTL library].
+
+These rules are chosen to be compatible with the behaviour of
['ISO/IEC 9899:1999 Programming languages  C]
and with the
[@http://www.openstd.org/jtc1/sc22/wg21/docs/papers/2005/n1836.pdf Draft Technical Report on C++ Library Extensions, 20050624, section 5.2.1, paragraph 5].
[endsect]
[/
 Copyright 2006 John Maddock and Paul A. Bristow.
+[/
+ Copyright 2006, 2012 John Maddock and Paul A. Bristow.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
Modified: trunk/libs/math/doc/sf_and_dist/roadmap.qbk
==============================================================================
--- trunk/libs/math/doc/sf_and_dist/roadmap.qbk (original)
+++ trunk/libs/math/doc/sf_and_dist/roadmap.qbk 2013-01-24 09:12:37 EST (Thu, 24 Jan 2013)
@@ 8,7 +8,10 @@
[h4 Boost1.54]
* Fixed constants to use a thread safe cache of computed values when used at arbitrary precision.
+* Added many references to Boost.Multiprecision and `cpp_dec_float_50` as an example of a User-defined Type (UDT).
+* Added Clang to list of supported compilers.
+* Fixed constants to use a thread-safe cache of computed values when used at arbitrary precision.
+* Added finding zeros of Bessel functions `cyl_bessel_j_zero` and `cyl_neumann_zero` (by Christopher Kormanyos).
* More accuracy improvements to the Bessel J and Y functions from Rocco Romeo.
[h4 Boost1.53]
@@ 20,7 +23,7 @@
[@https://svn.boost.org/trac/boost/ticket/7891 #7891], [@https://svn.boost.org/trac/boost/ticket/7429 #7429].
* Fixed mistake in calculating pooled standard deviation in two-sample Student's t example
[@https://svn.boost.org/trac/boost/ticket/7402 #7402].
* Improve complex acos/asin/atan, see [@https://svn.boost.org/trac/boost/ticket/7290 #7290],
+* Improve complex acos/asin/atan, see [@https://svn.boost.org/trac/boost/ticket/7290 #7290],
[@https://svn.boost.org/trac/boost/ticket/7291 #7291].
* Improve accuracy in some corner cases of __cyl_bessel_j and __gamma_p/__gamma_q thanks to suggestions from Rocco Romeo.
* Improve accuracy of Bessel J and Y for integer orders thanks to suggestions from Rocco Romeo.
Modified: trunk/libs/math/doc/sf_and_dist/roots.qbk
==============================================================================
--- trunk/libs/math/doc/sf_and_dist/roots.qbk (original)
+++ trunk/libs/math/doc/sf_and_dist/roots.qbk 2013-01-24 09:12:37 EST (Thu, 24 Jan 2013)
@@ 1,4 +1,4 @@
[section:roots Root Finding With Derivatives: NewtonRaphson, Halley & Schroeder]
+ [section:roots Root Finding With Derivatives: NewtonRaphson, Halley & Schroeder]
[h4 Synopsis]
@@ 6,35 +6,35 @@
#include <boost/math/tools/roots.hpp>
``
 namespace boost{ namespace math{
+ namespace boost{ namespace math{
namespace tools{

+
template <class F, class T>
T newton_raphson_iterate(F f, T guess, T min, T max, int digits);

+
template <class F, class T>
T newton_raphson_iterate(F f, T guess, T min, T max, int digits, boost::uintmax_t& max_iter);

+
template <class F, class T>
T halley_iterate(F f, T guess, T min, T max, int digits);

+
template <class F, class T>
T halley_iterate(F f, T guess, T min, T max, int digits, boost::uintmax_t& max_iter);

+
template <class F, class T>
T schroeder_iterate(F f, T guess, T min, T max, int digits);

+
template <class F, class T>
T schroeder_iterate(F f, T guess, T min, T max, int digits, boost::uintmax_t& max_iter);

+
}}} // namespaces
[h4 Description]
These functions all perform iterative root finding using derivatives:
* `newton_raphson_iterate`performs second order
[link newton NewtonRaphson iteration],
+* `newton_raphson_iterate` performs second order
+[link newton NewtonRaphson iteration],
* `halley_iterate` and `schroeder_iterate` perform third order
[link halley Halley] and [link schroeder Schroeder] iteration.
@@ 42,15 +42,15 @@
The functions all take the same parameters:
[variablelist Parameters of the root finding functions
[[F f] [Type F must be a callable function object that accepts one parameter and
+[[F f] [Type F must be a callable function object that accepts one parameter and
returns a __tuple:

+
For the second order iterative methods ([@http://en.wikipedia.org/wiki/Newton_Raphson Newton Raphson])
the __tuple should have *two* elements containing the evaluation
of the function and its first derivative.

+
For the third order methods
([@http://en.wikipedia.org/wiki/Halley%27s_method Halley] and
+([@http://en.wikipedia.org/wiki/Halley%27s_method Halley] and
Schroeder)
the __tuple should have *three* elements containing the evaluation of
the function and its first and second derivatives.]]
@@ 78,22 +78,22 @@
A large first derivative leads to a very small next step, triggering the termination
condition. Derivative based iteration may not be appropriate in such cases.
* If the function is 'Really Well Behaved' (monotonic and has only one root)
the bracket bounds min and max may as well be set to the widest limits
like zero and `numeric_limits<T>::max()`.
+the bracket bounds min and max may as well be set to the widest limits
+like zero and `numeric_limits<T>::max()`.
* But if the function is more complex and may have more than one root or a pole,
the choice of bounds is protection against jumping out to seek the 'wrong' root.
* These functions fall back to bisection if the next computed step would take the
next value out of bounds. The bounds are updated after each step to ensure this leads
to convergence. However, a good initial guess backed up by asymptoticallytight
bounds will improve performance no end, rather than relying on bisection.
* The value of /digits/ is crucial to good performance of these functions,
+* The value of /digits/ is crucial to good performance of these functions,
if it is set too high then at best you will get one extra (unnecessary)
iteration, and at worst the last few steps will proceed by bisection.
Remember that the returned value can never be more accurate than f(x) can be
evaluated, and that if f(x) suffers from cancellation errors as it
tends to zero then the computed steps will be effectively random. The
value of /digits/ should be set so that iteration terminates before this point:
remember that for second and third order methods the number of correct
+remember that for second and third order methods the number of correct
digits in the result is increasing quite
substantially with each iteration, /digits/ should be set by experiment so that the final
iteration just takes the next value into the zone where f(x) becomes inaccurate.
@@ 196,11 +196,11 @@
return newton_raphson_iterate(detail::cbrt_functor<T>(z), guess, min, max, digits);
}
Using the test data in libs/math/test/cbrt_test.cpp this found the cube root
+Using the test data in `libs/math/test/cbrt_test.cpp` this found the cube root
exact to the last digit in every case, and in no more than 6 iterations at double
precision. However, you will note that a high precision was used in this
+precision. However, you will note that a high precision was used in this
example, exactly what was warned against earlier on in these docs! In this
particular case it is possible to compute f(x) exactly and without undue
+particular case it is possible to compute f(x) exactly and without undue
cancellation error, so a high limit is not too much of an issue. However,
reducing the limit to `std::numeric_limits<T>::digits * 2 / 3` gave full
precision in all but one of the test cases (and that one was out by just one bit).
@@ 210,7 +210,7 @@
and reusing it, omits error handling, and does not handle
negative values of z correctly. (These are left as an exercise for the reader!)
The boost::math::cbrt function also includes these and other improvements.
+The `boost::math::cbrt` function also includes these and other improvements.
Now let's adapt the functor slightly to return the second derivative as well:
@@ 221,7 +221,7 @@
``__tuple``<T, T, T> operator()(T const& z)
{
return boost::math::make_tuple(
 z*z*z  a,
+ z*z*z  a,
3 * z*z,
6 * z);
}
@@ 247,7 +247,7 @@
}
Note that the iterations are set to stop at just onehalf of full precision,
and yet, even so, not one of the test cases had a single bit wrong.
+and yet, even so, not one of the test cases had a single bit wrong.
What's more, the maximum number of iterations was now just 4.
Just to complete the picture, we could have called `schroeder_iterate` in the last
@@ 257,10 +257,10 @@
guess can be computed. There appear to be no generalisations that can be made
except "try them and see".
Finally, had we called cbrt with [@http://shoup.net/ntl/doc/RR.txt NTL::RR]
+Finally, had we called `cbrt` with [@http://shoup.net/ntl/doc/RR.txt NTL::RR]
set to 1000 bit precision, then full precision can be obtained with just 7 iterations.
To put that in perspective,
an increase in precision by a factor of 20 has less than doubled the number of
+an increase in precision by a factor of 20 has less than doubled the number of
iterations. That just goes to emphasise that most of the iterations are used
up getting the first few digits correct: after that these methods can churn out
further digits with remarkable efficiency.
@@ 269,8 +269,8 @@
[endsect] [/section:roots Root Finding With Derivatives]
[/
 Copyright 2006, 2010 John Maddock and Paul A. Bristow.
+[/
+ Copyright 2006, 2010, 2012 John Maddock and Paul A. Bristow.
Distributed under the Boost Software License, Version 1.0.
(See accompanying file LICENSE_1_0.txt or copy at
http://www.boost.org/LICENSE_1_0.txt).
BoostCommit list run by bdawes at acm.org, david.abrahams at rcn.com, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk