Subject: [Boost-commit] svn:boost r76653 - in sandbox/big_number/libs/multiprecision: doc doc/html doc/html/boost_multiprecision doc/html/boost_multiprecision/perf doc/html/boost_multiprecision/ref doc/html/boost_multiprecision/tut performance
From: john_at_[hidden]
Date: 2012-01-23 14:01:47
Author: johnmaddock
Date: 2012-01-23 14:01:43 EST (Mon, 23 Jan 2012)
New Revision: 76653
URL: http://svn.boost.org/trac/boost/changeset/76653
Log:
Update docs with latest performance results.
Text files modified:
sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/intro.html | 45
sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/perf/float_performance.html | 360 ++++++++--
sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/perf/integer_performance.html | 1297 +++++++++++++++++++++++++++++++++------
sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/perf/realworld.html | 17
sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/ref/mp_number.html | 4
sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/tut/ints.html | 139 ++-
sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/tut/rational.html | 62 +
sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/tut/reals.html | 127 +++
sandbox/big_number/libs/multiprecision/doc/html/index.html | 2
sandbox/big_number/libs/multiprecision/doc/multiprecision.qbk | 353 +++++++---
sandbox/big_number/libs/multiprecision/performance/performance_test-msvc-10.log | 894 ++++++++++++++++-----------
11 files changed, 2424 insertions(+), 876 deletions(-)
Modified: sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/intro.html
==============================================================================
--- sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/intro.html (original)
+++ sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/intro.html 2012-01-23 14:01:43 EST (Mon, 23 Jan 2012)
@@ -33,9 +33,9 @@
</li>
</ul></div>
<p>
- The library is often used by using one of the predefined typedefs: for example
- if you wanted an arbitrary precision integer type using GMP as the underlying
- implementation then you could use:
+ The library is often used via one of the predefined typedefs: for example if
+ you wanted an arbitrary precision integer type using GMP
+ as the underlying implementation then you could use:
</p>
<pre class="programlisting"><span class="preprocessor">#include</span> <span class="special"><</span><span class="identifier">boost</span><span class="special">/</span><span class="identifier">multiprecision</span><span class="special">/</span><span class="identifier">gmp</span><span class="special">.</span><span class="identifier">hpp</span><span class="special">></span> <span class="comment">// Defines the wrappers around the GMP library's types</span>
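For reference, a minimal sketch of the predefined-typedef usage described above, using the boost::multiprecision::mpz_int typedef that the tutorial pages in this commit document as the GMP integer wrapper (everything else here is plain standard C++):

    // Minimal sketch: arbitrary precision integer via the predefined
    // boost::multiprecision::mpz_int typedef (GMP backend).
    #include <boost/multiprecision/gmp.hpp>
    #include <iostream>

    int main()
    {
       boost::multiprecision::mpz_int v = 1;
       for (unsigned i = 1; i <= 100; ++i)
          v *= i;                      // 100! computed exactly
       std::cout << v << std::endl;
    }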
@@ -45,8 +45,9 @@
Alternatively, you can compose your own multiprecision type, by combining
<code class="computeroutput"><span class="identifier">mp_number</span></code> with one of the predefined
backend types. For example, suppose you wanted a 300 decimal digit floating-point
- type based on the MPFR library. In this case, there's no predefined typedef
- with that level of precision, so instead we compose our own:
+ type based on the MPFR library. In
+ this case, there's no predefined typedef with that level of precision, so instead
+ we compose our own:
</p>
<pre class="programlisting"><span class="preprocessor">#include</span> <span class="special"><</span><span class="identifier">boost</span><span class="special">/</span><span class="identifier">multiprecision</span><span class="special">/</span><span class="identifier">mpfr</span><span class="special">.</span><span class="identifier">hpp</span><span class="special">></span> <span class="comment">// Defines the Backend type that wraps MPFR</span>
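A minimal sketch of the composition step described above; mpfr_float_backend<Digits10> is an assumed sandbox-era backend name (only mp_number and the mpfr.hpp header are shown in this hunk), so check the reals.html page in this commit for the exact spelling:

    // Sketch: compose a 300 decimal digit type from mp_number plus an
    // MPFR backend.  mpfr_float_backend is an assumed name.
    #include <boost/multiprecision/mpfr.hpp>
    #include <iostream>
    #include <iomanip>

    typedef boost::multiprecision::mp_number<
       boost::multiprecision::mpfr_float_backend<300> > my_float;

    int main()
    {
       my_float f = 2;
       f = sqrt(f);   // evaluated at 300 decimal digits
       std::cout << std::setprecision(300) << f << std::endl;
    }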
@@ -102,11 +103,13 @@
<p>
If type <code class="computeroutput"><span class="identifier">T</span></code> is an <code class="computeroutput"><span class="identifier">mp_number</span></code>, then this expression is evaluated
<span class="emphasis"><em>without creating a single temporary value</em></span>. In contrast,
- if we were using the C++ wrapper that ships with GMP - <code class="computeroutput"><span class="identifier">mpf_class</span></code>
+ if we were using the C++ wrapper that ships with GMP
+ - mpfr_class
- then this expression would result in no less than 11 temporaries (this is
- true even though <code class="computeroutput"><span class="identifier">mpf_class</span></code>
+ true even though mpfr_class
does use expression templates to reduce the number of temporaries somewhat).
- Had we used an even simpler wrapper around GMP or MPFR like <code class="computeroutput"><span class="identifier">mpclass</span></code>
+ Had we used an even simpler wrapper around GMP
+ or MPFR like <code class="computeroutput"><span class="identifier">mpclass</span></code>
 things would have been even worse and no less than 24 temporaries are created
for this simple expression (note - we actually measure the number of memory
allocations performed rather than the number of temporaries directly).
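To make the temporary-counting claim concrete, here is an illustrative sketch only; the exact expression the intro text refers to is not shown in this hunk, and the 50 digit typedef assumes the same sandbox-era backend name as above:

    // Illustrative only: with expression templates the right hand side is
    // accumulated directly into d, whereas a naive wrapper materialises a
    // temporary for each intermediate sum.
    #include <boost/multiprecision/mpfr.hpp>

    namespace mp = boost::multiprecision;
    typedef mp::mp_number<mp::mpfr_float_backend<50> > real50;  // assumed names

    real50 sum4(const real50& a, const real50& b, const real50& c)
    {
       real50 d;
       d = a + b + c + a;   // accumulated in place when expression templates are enabled
       return d;
    }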
@@ -192,10 +195,11 @@
<p>
And finally... the performance improvements from an expression template library
like this are often not as dramatic as the reduction in number of temporaries
- would suggest. For example if we compare this library with <code class="computeroutput"><span class="identifier">mpfr_class</span></code>
- and <code class="computeroutput"><span class="identifier">mpreal</span></code>, with all three
- using the underlying MPFR library at 50 decimal digits precision then we see
- the following typical results for polynomial execution:
+ would suggest. For example if we compare this library with mpfr_class
+ and mpreal, with
+ all three using the underlying MPFR
+ library at 50 decimal digits precision then we see the following typical results
+ for polynomial execution:
</p>
<div class="table">
<a name="boost_multiprecision.intro.evaluation_of_order_6_polynomial_"></a><p class="title"><b>Table 1.1. Evaluation of Order 6 Polynomial.</b></p>
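A hypothetical sketch of what an order 6 polynomial evaluation at 50 decimal digits might look like; the coefficients are arbitrary and the actual benchmark source lives under libs/multiprecision/performance, so it may well differ:

    // Hypothetical sketch only - not the actual benchmark code.
    #include <boost/multiprecision/mpfr.hpp>

    namespace mp = boost::multiprecision;
    typedef mp::mp_number<mp::mpfr_float_backend<50> > real50;  // assumed names, 50 digits

    real50 poly6(const real50& x)
    {
       // Order 6 polynomial via Horner's scheme; coefficients are arbitrary.
       static const double a[7] = { 2.3, -1.1, 0.5, 3.7, -2.2, 1.9, 0.4 };
       real50 result = a[6];
       for (int i = 5; i >= 0; --i)
          result = result * x + a[i];
       return result;
    }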
@@ -243,7 +247,7 @@
<tr>
<td>
<p>
- mpfr_class
+ mpfr_class
</p>
</td>
<td>
@@ -260,7 +264,7 @@
<tr>
<td>
<p>
- mpreal
+ mpreal
</p>
</td>
<td>
@@ -301,8 +305,9 @@
</ul></div>
<p>
We'll conclude this section by providing some more performance comparisons
- between these three libraries, again, all are using MPFR to carry out the underlying
- arithmetic, and all are operating at the same precision (50 decimal digits):
+ between these three libraries, again, all are using MPFR
+ to carry out the underlying arithmetic, and all are operating at the same precision
+ (50 decimal digits):
</p>
<div class="table">
<a name="boost_multiprecision.intro.evaluation_of_boost_math_s_bessel_function_test_data"></a><p class="title"><b>Table 1.2. Evaluation of Boost.Math's Bessel function test data</b></p>
@@ -350,7 +355,7 @@
<tr>
<td>
<p>
- mpfr_class
+ mpfr_class
</p>
</td>
<td>
@@ -367,7 +372,7 @@
<tr>
<td>
<p>
- mpreal
+ mpreal
</p>
</td>
<td>
@@ -430,7 +435,7 @@
<tr>
<td>
<p>
- mpfr_class
+ mpfr_class
</p>
</td>
<td>
@@ -447,7 +452,7 @@
<tr>
<td>
<p>
- mpreal
+ mpreal
</p>
</td>
<td>
Modified: sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/perf/float_performance.html
==============================================================================
--- sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/perf/float_performance.html (original)
+++ sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/perf/float_performance.html 2012-01-23 14:01:43 EST (Mon, 23 Jan 2012)
@@ -29,12 +29,12 @@
</p>
<p>
Test code was compiled with Microsoft Visual Studio 2010 with all optimisations
- turned on (/Ox), and used MPIR-2.3.0 and MPFR-3.0.0. The tests were run on
- 32-bit Windows Vista machine.
+ turned on (/Ox), and used MPIR-2.3.0 and MPFR-3.0.0.
+ The tests were run on a 32-bit Windows Vista machine.
</p>
<div class="table">
-<a name="boost_multiprecision.perf.float_performance.operator__"></a><p class="title"><b>Table 1.8. Operator *</b></p>
-<div class="table-contents"><table class="table" summary="Operator *">
+<a name="boost_multiprecision.perf.float_performance.operator__"></a><p class="title"><b>Table 1.8. Operator +</b></p>
+<div class="table-contents"><table class="table" summary="Operator +">
<colgroup>
<col>
<col>
@@ -72,17 +72,17 @@
</td>
<td>
<p>
- 1.0826 (0.287216s)
+ <span class="bold"><strong>1</strong></span> (0.02382s)
</p>
</td>
<td>
<p>
- 1.48086 (0.586363s)
+ <span class="bold"><strong>1</strong></span> (0.0294619s)
</p>
</td>
<td>
<p>
- 1.57545 (5.05269s)
+ <span class="bold"><strong>1</strong></span> (0.058466s)
</p>
</td>
</tr>
@@ -94,17 +94,17 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.265302s)
+ 4.55086 (0.108402s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.395962s)
+ 3.86443 (0.113853s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (3.20714s)
+ 2.6241 (0.15342s)
</p>
</td>
</tr>
@@ -116,17 +116,17 @@
</td>
<td>
<p>
- 1.24249 (0.329636s)
+ 2.52036 (0.060035s)
</p>
</td>
<td>
<p>
- 1.15432 (0.457067s)
+ 2.1833 (0.0643242s)
</p>
</td>
<td>
<p>
- 1.16182 (3.72612s)
+ 1.37736 (0.0805287s)
</p>
</td>
</tr>
@@ -134,8 +134,8 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.float_performance.operator0"></a><p class="title"><b>Table 1.9. Operator +</b></p>
-<div class="table-contents"><table class="table" summary="Operator +">
+<a name="boost_multiprecision.perf.float_performance.operator___int_"></a><p class="title"><b>Table 1.9. Operator +(int)</b></p>
+<div class="table-contents"><table class="table" summary="Operator +(int)">
<colgroup>
<col>
<col>
@@ -173,17 +173,17 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0242151s)
+ 1.56759 (0.0527023s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.029252s)
+ 1.74629 (0.0618102s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0584099s)
+ 1.68077 (0.105927s)
</p>
</td>
</tr>
@@ -195,17 +195,17 @@
</td>
<td>
<p>
- 4.55194 (0.110226s)
+ <span class="bold"><strong>1</strong></span> (0.0336201s)
</p>
</td>
<td>
<p>
- 3.67516 (0.107506s)
+ <span class="bold"><strong>1</strong></span> (0.0353951s)
</p>
</td>
<td>
<p>
- 2.42489 (0.141638s)
+ <span class="bold"><strong>1</strong></span> (0.0630232s)
</p>
</td>
</tr>
@@ -217,17 +217,17 @@
</td>
<td>
<p>
- 2.45362 (0.0594147s)
+ 3.14875 (0.105861s)
</p>
</td>
<td>
<p>
- 2.18552 (0.0639309s)
+ 3.15499 (0.111671s)
</p>
</td>
<td>
<p>
- 1.32099 (0.0771588s)
+ 1.92831 (0.121528s)
</p>
</td>
</tr>
@@ -235,8 +235,8 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.float_performance.operator___int_"></a><p class="title"><b>Table 1.10. Operator +(int)</b></p>
-<div class="table-contents"><table class="table" summary="Operator +(int)">
+<a name="boost_multiprecision.perf.float_performance.operator0"></a><p class="title"><b>Table 1.10. Operator -</b></p>
+<div class="table-contents"><table class="table" summary="Operator -">
<colgroup>
<col>
<col>
@@ -274,17 +274,17 @@
</td>
<td>
<p>
- 1.51995 (0.0484155s)
+ <span class="bold"><strong>1</strong></span> (0.0265783s)
</p>
</td>
<td>
<p>
- 1.78781 (0.0611055s)
+ <span class="bold"><strong>1</strong></span> (0.031465s)
</p>
</td>
<td>
<p>
- 1.8309 (0.104123s)
+ <span class="bold"><strong>1</strong></span> (0.0619405s)
</p>
</td>
</tr>
@@ -296,17 +296,17 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0318533s)
+ 4.66954 (0.124108s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0341789s)
+ 3.72645 (0.117253s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0568699s)
+ 2.67536 (0.165713s)
</p>
</td>
</tr>
@@ -318,17 +318,17 @@
</td>
<td>
<p>
- 3.39055 (0.108s)
+ 2.7909 (0.0741774s)
</p>
</td>
<td>
<p>
- 3.30142 (0.112839s)
+ 2.48557 (0.0782083s)
</p>
</td>
<td>
<p>
- 2.05293 (0.11675s)
+ 1.50944 (0.0934957s)
</p>
</td>
</tr>
@@ -336,8 +336,8 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.float_performance.operator1"></a><p class="title"><b>Table 1.11. Operator -</b></p>
-<div class="table-contents"><table class="table" summary="Operator -">
+<a name="boost_multiprecision.perf.float_performance.operator_int0"></a><p class="title"><b>Table 1.11. Operator -(int)</b></p>
+<div class="table-contents"><table class="table" summary="Operator -(int)">
<colgroup>
<col>
<col>
@@ -375,17 +375,17 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0261498s)
+ <span class="bold"><strong>1</strong></span> (0.0577674s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.030946s)
+ <span class="bold"><strong>1</strong></span> (0.0633795s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0606388s)
+ <span class="bold"><strong>1</strong></span> (0.11146s)
</p>
</td>
</tr>
@@ -397,17 +397,17 @@
</td>
<td>
<p>
- 4.48753 (0.117348s)
+ 2.31811 (0.133911s)
</p>
</td>
<td>
<p>
- 3.75823 (0.116302s)
+ 2.07251 (0.131355s)
</p>
</td>
<td>
<p>
- 2.4823 (0.150524s)
+ 1.67161 (0.186319s)
</p>
</td>
</tr>
@@ -419,17 +419,17 @@
</td>
<td>
<p>
- 2.96057 (0.0774183s)
+ 2.45081 (0.141577s)
</p>
</td>
<td>
<p>
- 2.61897 (0.0810465s)
+ 2.29174 (0.145249s)
</p>
</td>
<td>
<p>
- 1.56236 (0.0947396s)
+ 1.395 (0.155487s)
</p>
</td>
</tr>
@@ -437,8 +437,8 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.float_performance.operator_int0"></a><p class="title"><b>Table 1.12. Operator -(int)</b></p>
-<div class="table-contents"><table class="table" summary="Operator -(int)">
+<a name="boost_multiprecision.perf.float_performance.operator1"></a><p class="title"><b>Table 1.12. Operator *</b></p>
+<div class="table-contents"><table class="table" summary="Operator *">
<colgroup>
<col>
<col>
@@ -476,17 +476,17 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0567601s)
+ 1.07276 (0.287898s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0626685s)
+ 1.47724 (0.584569s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.111692s)
+ 1.55145 (5.09969s)
</p>
</td>
</tr>
@@ -498,17 +498,17 @@
</td>
<td>
<p>
- 2.27932 (0.129374s)
+ <span class="bold"><strong>1</strong></span> (0.268372s)
</p>
</td>
<td>
<p>
- 2.04821 (0.128358s)
+ <span class="bold"><strong>1</strong></span> (0.395718s)
</p>
</td>
<td>
<p>
- 1.48297 (0.165635s)
+ <span class="bold"><strong>1</strong></span> (3.28705s)
</p>
</td>
</tr>
@@ -520,17 +520,17 @@
</td>
<td>
<p>
- 2.43199 (0.13804s)
+ 1.27302 (0.341642s)
</p>
</td>
<td>
<p>
- 2.32131 (0.145473s)
+ 1.17649 (0.465557s)
</p>
</td>
<td>
<p>
- 1.38152 (0.154304s)
+ 1.14029 (3.7482s)
</p>
</td>
</tr>
@@ -538,7 +538,108 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.float_performance.operator2"></a><p class="title"><b>Table 1.13. Operator /</b></p>
+<a name="boost_multiprecision.perf.float_performance.operator_int1"></a><p class="title"><b>Table 1.13. Operator *(int)</b></p>
+<div class="table-contents"><table class="table" summary="Operator *(int)">
+<colgroup>
+<col>
+<col>
+<col>
+<col>
+</colgroup>
+<thead><tr>
+<th>
+ <p>
+ Backend
+ </p>
+ </th>
+<th>
+ <p>
+ 50 Decimal Digits
+ </p>
+ </th>
+<th>
+ <p>
+ 100 Decimal Digits
+ </p>
+ </th>
+<th>
+ <p>
+ 500 Decimal Digits
+ </p>
+ </th>
+</tr></thead>
+<tbody>
+<tr>
+<td>
+ <p>
+ cpp_float
+ </p>
+ </td>
+<td>
+ <p>
+ 2.89945 (0.11959s)
+ </p>
+ </td>
+<td>
+ <p>
+ 4.56335 (0.197945s)
+ </p>
+ </td>
+<td>
+ <p>
+ 9.03602 (0.742044s)
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ gmp_float
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0412457s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0433772s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0821206s)
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ mpfr_float
+ </p>
+ </td>
+<td>
+ <p>
+ 3.6951 (0.152407s)
+ </p>
+ </td>
+<td>
+ <p>
+ 3.71977 (0.161353s)
+ </p>
+ </td>
+<td>
+ <p>
+ 3.30958 (0.271785s)
+ </p>
+ </td>
+</tr>
+</tbody>
+</table></div>
+</div>
+<br class="table-break"><div class="table">
+<a name="boost_multiprecision.perf.float_performance.operator2"></a><p class="title"><b>Table 1.14. Operator /</b></p>
<div class="table-contents"><table class="table" summary="Operator /">
<colgroup>
<col>
@@ -577,17 +678,17 @@
</td>
<td>
<p>
- 3.2662 (3.98153s)
+ 3.24327 (4.00108s)
</p>
</td>
<td>
<p>
- 5.07021 (8.11948s)
+ 5.00532 (8.12985s)
</p>
</td>
<td>
<p>
- 6.78872 (53.6099s)
+ 6.79566 (54.2796s)
</p>
</td>
</tr>
@@ -599,17 +700,17 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (1.21901s)
+ <span class="bold"><strong>1</strong></span> (1.23366s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (1.60141s)
+ <span class="bold"><strong>1</strong></span> (1.62424s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (7.89691s)
+ <span class="bold"><strong>1</strong></span> (7.9874s)
</p>
</td>
</tr>
@@ -621,17 +722,17 @@
</td>
<td>
<p>
- 1.33238 (1.62419s)
+ 1.32521 (1.63486s)
</p>
</td>
<td>
<p>
- 1.39529 (2.23443s)
+ 1.38967 (2.25716s)
</p>
</td>
<td>
<p>
- 1.70882 (13.4944s)
+ 1.72413 (13.7713s)
</p>
</td>
</tr>
@@ -639,7 +740,108 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.float_performance.operator_str"></a><p class="title"><b>Table 1.14. Operator str</b></p>
+<a name="boost_multiprecision.perf.float_performance.operator_int2"></a><p class="title"><b>Table 1.15. Operator /(int)</b></p>
+<div class="table-contents"><table class="table" summary="Operator /(int)">
+<colgroup>
+<col>
+<col>
+<col>
+<col>
+</colgroup>
+<thead><tr>
+<th>
+ <p>
+ Backend
+ </p>
+ </th>
+<th>
+ <p>
+ 50 Decimal Digits
+ </p>
+ </th>
+<th>
+ <p>
+ 100 Decimal Digits
+ </p>
+ </th>
+<th>
+ <p>
+ 500 Decimal Digits
+ </p>
+ </th>
+</tr></thead>
+<tbody>
+<tr>
+<td>
+ <p>
+ cpp_float
+ </p>
+ </td>
+<td>
+ <p>
+ 1.45093 (0.253675s)
+ </p>
+ </td>
+<td>
+ <p>
+ 1.83306 (0.419569s)
+ </p>
+ </td>
+<td>
+ <p>
+ 2.3644 (1.64187s)
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ gmp_float
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.174836s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.22889s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.694411s)
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ mpfr_float
+ </p>
+ </td>
+<td>
+ <p>
+ 1.16731 (0.204088s)
+ </p>
+ </td>
+<td>
+ <p>
+ 1.13211 (0.259127s)
+ </p>
+ </td>
+<td>
+ <p>
+ 1.02031 (0.708513s)
+ </p>
+ </td>
+</tr>
+</tbody>
+</table></div>
+</div>
+<br class="table-break"><div class="table">
+<a name="boost_multiprecision.perf.float_performance.operator_str"></a><p class="title"><b>Table 1.16. Operator str</b></p>
<div class="table-contents"><table class="table" summary="Operator str">
<colgroup>
<col>
@@ -678,17 +880,17 @@
</td>
<td>
<p>
- 1.46076 (0.0192656s)
+ 1.4585 (0.0188303s)
</p>
</td>
<td>
<p>
- 1.59438 (0.0320398s)
+ 1.55515 (0.03172s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.134302s)
+ <span class="bold"><strong>1</strong></span> (0.131962s)
</p>
</td>
</tr>
@@ -700,17 +902,17 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0131888s)
+ <span class="bold"><strong>1</strong></span> (0.0129107s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0200954s)
+ <span class="bold"><strong>1</strong></span> (0.0203967s)
</p>
</td>
<td>
<p>
- 1.01007 (0.135655s)
+ 1.04632 (0.138075s)
</p>
</td>
</tr>
@@ -722,24 +924,26 @@
</td>
<td>
<p>
- 2.19174 (0.0289065s)
+ 2.19015 (0.0282764s)
</p>
</td>
<td>
<p>
- 1.86101 (0.0373977s)
+ 1.84679 (0.0376683s)
</p>
</td>
<td>
<p>
- 1.15842 (0.155578s)
+ 1.20295 (0.158743s)
</p>
</td>
</tr>
</tbody>
</table></div>
</div>
-<br class="table-break">
+<br class="table-break"><p>
+ ]
+ </p>
</div>
<table xmlns:rev="http://www.cs.rpi.edu/~gregod/boost/tools/doc/revision" width="100%"><tr>
<td align="left"></td>
Modified: sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/perf/integer_performance.html
==============================================================================
--- sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/perf/integer_performance.html (original)
+++ sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/perf/integer_performance.html 2012-01-23 14:01:43 EST (Mon, 23 Jan 2012)
@@ -29,11 +29,16 @@
</p>
<p>
Test code was compiled with Microsoft Visual Studio 2010 with all optimisations
- turned on (/Ox), and used MPIR-2.3.0 and MPFR-3.0.0. The tests were run on
- 32-bit Windows Vista machine.
+ turned on (/Ox), and used MPIR-2.3.0 and MPFR-3.0.0.
+ The tests were run on a 32-bit Windows Vista machine.
+ </p>
+<p>
+ Note that Linux x64 tests showed significantly worse performance for <code class="computeroutput"><span class="identifier">fixed_int</span></code> division than on Win32 (or possibly
+ GMP behaves much better in that case).
+ Otherwise the results are much the same.
</p>
<div class="table">
-<a name="boost_multiprecision.perf.integer_performance.operator__"></a><p class="title"><b>Table 1.15. Operator +</b></p>
+<a name="boost_multiprecision.perf.integer_performance.operator__"></a><p class="title"><b>Table 1.17. Operator +</b></p>
<div class="table-contents"><table class="table" summary="Operator +">
<colgroup>
<col>
@@ -84,27 +89,27 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0031173s)
+ <span class="bold"><strong>1</strong></span> (0.0031291s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00696555s)
+ <span class="bold"><strong>1</strong></span> (0.00703043s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0163707s)
+ <span class="bold"><strong>1</strong></span> (0.0163669s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0314806s)
+ <span class="bold"><strong>1</strong></span> (0.0326567s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0596158s)
+ <span class="bold"><strong>1</strong></span> (0.0603087s)
</p>
</td>
</tr>
@@ -116,27 +121,27 @@
</td>
<td>
<p>
- 12.7096 (0.0396194s)
+ 12.4866 (0.0390717s)
</p>
</td>
<td>
<p>
- 5.89178 (0.0410395s)
+ 6.01034 (0.0422553s)
</p>
</td>
<td>
<p>
- 2.66402 (0.0436119s)
+ 2.65628 (0.0434751s)
</p>
</td>
<td>
<p>
- 1.59356 (0.0501664s)
+ 1.54295 (0.0503875s)
</p>
</td>
<td>
<p>
- 1.11155 (0.0662662s)
+ 1.16477 (0.0702458s)
</p>
</td>
</tr>
@@ -148,27 +153,27 @@
</td>
<td>
<p>
- 6.14357 (0.0191513s)
+ 6.03111 (0.018872s)
</p>
</td>
<td>
<p>
- 3.16177 (0.0220235s)
+ 3.08173 (0.0216659s)
</p>
</td>
<td>
<p>
- 1.85441 (0.030358s)
+ 1.84243 (0.0301548s)
</p>
</td>
<td>
<p>
- 1.45895 (0.0459287s)
+ 1.30199 (0.0425188s)
</p>
</td>
<td>
<p>
- 1.26576 (0.0754591s)
+ 1.18909 (0.0717123s)
</p>
</td>
</tr>
@@ -176,7 +181,7 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.integer_performance.operator___int_"></a><p class="title"><b>Table 1.16. Operator +(int)</b></p>
+<a name="boost_multiprecision.perf.integer_performance.operator___int_"></a><p class="title"><b>Table 1.18. Operator +(int)</b></p>
<div class="table-contents"><table class="table" summary="Operator +(int)">
<colgroup>
<col>
@@ -227,27 +232,27 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00329336s)
+ <span class="bold"><strong>1</strong></span> (0.00335294s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00370718s)
+ <span class="bold"><strong>1</strong></span> (0.00376116s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00995385s)
+ <span class="bold"><strong>1</strong></span> (0.00985174s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0117467s)
+ <span class="bold"><strong>1</strong></span> (0.0119345s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0233483s)
+ <span class="bold"><strong>1</strong></span> (0.0170918s)
</p>
</td>
</tr>
@@ -259,27 +264,27 @@
</td>
<td>
<p>
- 9.56378 (0.031497s)
+ 9.47407 (0.031766s)
</p>
</td>
<td>
<p>
- 8.0588 (0.0298754s)
+ 8.44794 (0.0317741s)
</p>
</td>
<td>
<p>
- 4.15824 (0.0413905s)
+ 4.23857 (0.0417573s)
</p>
</td>
<td>
<p>
- 5.47974 (0.0643691s)
+ 5.40856 (0.0645488s)
</p>
</td>
<td>
<p>
- 4.46265 (0.104195s)
+ 6.31314 (0.107903s)
</p>
</td>
</tr>
@@ -291,27 +296,27 @@
</td>
<td>
<p>
- 76.2624 (0.25116s)
+ 67.0025 (0.224655s)
</p>
</td>
<td>
<p>
- 71.3973 (0.264682s)
+ 60.4203 (0.22725s)
</p>
</td>
<td>
<p>
- 28.0238 (0.278945s)
+ 25.1834 (0.2481s)
</p>
</td>
<td>
<p>
- 25.9035 (0.304282s)
+ 23.2996 (0.27807s)
</p>
</td>
<td>
<p>
- 13.1635 (0.307346s)
+ 17.1743 (0.293538s)
</p>
</td>
</tr>
@@ -319,7 +324,7 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.integer_performance.operator0"></a><p class="title"><b>Table 1.17. Operator -</b></p>
+<a name="boost_multiprecision.perf.integer_performance.operator0"></a><p class="title"><b>Table 1.19. Operator -</b></p>
<div class="table-contents"><table class="table" summary="Operator -">
<colgroup>
<col>
@@ -370,27 +375,27 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00359417s)
+ <span class="bold"><strong>1</strong></span> (0.00339191s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00721041s)
+ <span class="bold"><strong>1</strong></span> (0.0073172s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0168213s)
+ <span class="bold"><strong>1</strong></span> (0.0166428s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0323563s)
+ <span class="bold"><strong>1</strong></span> (0.0349375s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.061385s)
+ <span class="bold"><strong>1</strong></span> (0.0600083s)
</p>
</td>
</tr>
@@ -402,27 +407,27 @@
</td>
<td>
<p>
- 10.6794 (0.0383836s)
+ 12.5182 (0.0424608s)
</p>
</td>
<td>
<p>
- 5.65517 (0.0407761s)
+ 5.57936 (0.0408253s)
</p>
</td>
<td>
<p>
- 2.63634 (0.0443466s)
+ 2.78496 (0.0463496s)
</p>
</td>
<td>
<p>
- 1.59979 (0.0517632s)
+ 1.48373 (0.051838s)
</p>
</td>
<td>
<p>
- 1.13379 (0.0695978s)
+ 1.29928 (0.0779673s)
</p>
</td>
</tr>
@@ -434,27 +439,27 @@
</td>
<td>
<p>
- 6.43615 (0.0231326s)
+ 7.00782 (0.0237699s)
</p>
</td>
<td>
<p>
- 3.6161 (0.0260736s)
+ 3.69919 (0.0270677s)
</p>
</td>
<td>
<p>
- 2.2585 (0.0379908s)
+ 2.29645 (0.0382195s)
</p>
</td>
<td>
<p>
- 1.52006 (0.0491835s)
+ 1.39777 (0.0488346s)
</p>
</td>
<td>
<p>
- 1.24231 (0.0762591s)
+ 1.28243 (0.0769566s)
</p>
</td>
</tr>
@@ -462,7 +467,7 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.integer_performance.operator_int0"></a><p class="title"><b>Table 1.18. Operator -(int)</b></p>
+<a name="boost_multiprecision.perf.integer_performance.operator_int0"></a><p class="title"><b>Table 1.20. Operator -(int)</b></p>
<div class="table-contents"><table class="table" summary="Operator -(int)">
<colgroup>
<col>
@@ -513,27 +518,27 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00353606s)
+ <span class="bold"><strong>1</strong></span> (0.00250933s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00577573s)
+ <span class="bold"><strong>1</strong></span> (0.00358055s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0155184s)
+ <span class="bold"><strong>1</strong></span> (0.0103282s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.029385s)
+ <span class="bold"><strong>1</strong></span> (0.0119127s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0586271s)
+ <span class="bold"><strong>1</strong></span> (0.0176089s)
</p>
</td>
</tr>
@@ -545,27 +550,27 @@
</td>
<td>
<p>
- 9.04434 (0.0319814s)
+ 12.093 (0.0303454s)
</p>
</td>
<td>
<p>
- 5.12393 (0.0295945s)
+ 8.50898 (0.0304669s)
</p>
</td>
<td>
<p>
- 2.50743 (0.0389112s)
+ 3.9284 (0.0405733s)
</p>
</td>
<td>
<p>
- 2.01898 (0.0593277s)
+ 5.03037 (0.0599252s)
</p>
</td>
<td>
<p>
- 1.68381 (0.098717s)
+ 5.96617 (0.105058s)
</p>
</td>
</tr>
@@ -577,27 +582,27 @@
</td>
<td>
<p>
- 60.2486 (0.213043s)
+ 80.8477 (0.202873s)
</p>
</td>
<td>
<p>
- 38.3032 (0.221229s)
+ 57.8371 (0.207089s)
</p>
</td>
<td>
<p>
- 15.8792 (0.24642s)
+ 21.3372 (0.220375s)
</p>
</td>
<td>
<p>
- 8.71166 (0.255992s)
+ 23.526 (0.280258s)
</p>
</td>
<td>
<p>
- 4.85236 (0.28448s)
+ 14.793 (0.260488s)
</p>
</td>
</tr>
@@ -605,7 +610,7 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.integer_performance.operator1"></a><p class="title"><b>Table 1.19. Operator *</b></p>
+<a name="boost_multiprecision.perf.integer_performance.operator1"></a><p class="title"><b>Table 1.21. Operator *</b></p>
<div class="table-contents"><table class="table" summary="Operator *">
<colgroup>
<col>
@@ -656,27 +661,170 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0175309s)
+ <span class="bold"><strong>1</strong></span> (0.0223481s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0375288s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.120353s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.439147s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (1.46969s)
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ gmp_int
+ </p>
+ </td>
+<td>
+ <p>
+ 2.50746 (0.0560369s)
+ </p>
+ </td>
+<td>
+ <p>
+ 1.76676 (0.0663044s)
+ </p>
+ </td>
+<td>
+ <p>
+ 1.06052 (0.127636s)
+ </p>
+ </td>
+<td>
+ <p>
+ 1.22558 (0.53821s)
+ </p>
+ </td>
+<td>
+ <p>
+ 1.03538 (1.52168s)
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ tommath_int
+ </p>
+ </td>
+<td>
+ <p>
+ 3.00028 (0.0670506s)
+ </p>
+ </td>
+<td>
+ <p>
+ 2.97696 (0.111722s)
+ </p>
+ </td>
+<td>
+ <p>
+ 2.86257 (0.34452s)
+ </p>
+ </td>
+<td>
+ <p>
+ 2.26661 (0.995374s)
+ </p>
+ </td>
+<td>
+ <p>
+ 2.12926 (3.12935s)
+ </p>
+ </td>
+</tr>
+</tbody>
+</table></div>
+</div>
+<br class="table-break"><div class="table">
+<a name="boost_multiprecision.perf.integer_performance.operator_int1"></a><p class="title"><b>Table 1.22. Operator *(int)</b></p>
+<div class="table-contents"><table class="table" summary="Operator *(int)">
+<colgroup>
+<col>
+<col>
+<col>
+<col>
+<col>
+<col>
+</colgroup>
+<thead><tr>
+<th>
+ <p>
+ Backend
+ </p>
+ </th>
+<th>
+ <p>
+ 64 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 128 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 256 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 512 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 1024 Bits
+ </p>
+ </th>
+</tr></thead>
+<tbody>
+<tr>
+<td>
+ <p>
+ fixed_int
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.00444316s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0388232s)
+ <span class="bold"><strong>1</strong></span> (0.0135739s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.123609s)
+ <span class="bold"><strong>1</strong></span> (0.0192615s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.427489s)
+ <span class="bold"><strong>1</strong></span> (0.0328339s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (1.46312s)
+ 1.18198 (0.0567364s)
</p>
</td>
</tr>
@@ -688,27 +836,27 @@
</td>
<td>
<p>
- 2.93263 (0.0514117s)
+ 4.57776 (0.0203397s)
</p>
</td>
<td>
<p>
- 1.70358 (0.0661383s)
+ 1.79901 (0.0244196s)
</p>
</td>
<td>
<p>
- 1.01811 (0.125848s)
+ 1.32814 (0.025582s)
</p>
</td>
<td>
<p>
- 1.20692 (0.515943s)
+ 1.01453 (0.033311s)
</p>
</td>
<td>
<p>
- 1.03248 (1.51064s)
+ <span class="bold"><strong>1</strong></span> (0.048001s)
</p>
</td>
</tr>
@@ -720,27 +868,27 @@
</td>
<td>
<p>
- 3.82476 (0.0670515s)
+ 53.8709 (0.239357s)
</p>
</td>
<td>
<p>
- 2.87425 (0.111587s)
+ 18.3773 (0.249452s)
</p>
</td>
<td>
<p>
- 2.74339 (0.339108s)
+ 14.2088 (0.273682s)
</p>
</td>
<td>
<p>
- 2.26768 (0.969408s)
+ 14.0907 (0.462652s)
</p>
</td>
<td>
<p>
- 2.1233 (3.10664s)
+ 9.10761 (0.437175s)
</p>
</td>
</tr>
@@ -748,7 +896,7 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.integer_performance.operator2"></a><p class="title"><b>Table 1.20. Operator /</b></p>
+<a name="boost_multiprecision.perf.integer_performance.operator2"></a><p class="title"><b>Table 1.23. Operator /</b></p>
<div class="table-contents"><table class="table" summary="Operator /">
<colgroup>
<col>
@@ -799,27 +947,27 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0973696s)
+ <span class="bold"><strong>1</strong></span> (0.0991632s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.260936s)
+ <span class="bold"><strong>1</strong></span> (0.172328s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.845628s)
+ <span class="bold"><strong>1</strong></span> (0.309492s)
</p>
</td>
<td>
<p>
- 2.4597 (2.51371s)
+ <span class="bold"><strong>1</strong></span> (0.573815s)
</p>
</td>
<td>
<p>
- 6.21836 (7.93136s)
+ <span class="bold"><strong>1</strong></span> (1.06356s)
</p>
</td>
</tr>
@@ -831,27 +979,27 @@
</td>
<td>
<p>
- 7.66851 (0.74668s)
+ 7.81859 (0.775316s)
</p>
</td>
<td>
<p>
- 3.17732 (0.829077s)
+ 5.11069 (0.880715s)
</p>
</td>
<td>
<p>
- 1.05006 (0.887961s)
+ 2.93514 (0.908404s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (1.02196s)
+ 1.80497 (1.03572s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (1.27547s)
+ 1.21878 (1.29625s)
</p>
</td>
</tr>
@@ -863,27 +1011,27 @@
</td>
<td>
<p>
- 18.3945 (1.79107s)
+ 18.0766 (1.79253s)
</p>
</td>
<td>
<p>
- 8.11201 (2.11671s)
+ 12.3939 (2.13582s)
</p>
</td>
<td>
<p>
- 3.49119 (2.95225s)
+ 9.80438 (3.03438s)
</p>
</td>
<td>
<p>
- 4.55727 (4.65733s)
+ 8.74047 (5.01541s)
</p>
</td>
<td>
<p>
- 9.06813 (11.5662s)
+ 10.8288 (11.517s)
</p>
</td>
</tr>
@@ -891,8 +1039,8 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.integer_performance.operator3"></a><p class="title"><b>Table 1.21. Operator %</b></p>
-<div class="table-contents"><table class="table" summary="Operator %">
+<a name="boost_multiprecision.perf.integer_performance.operator_int2"></a><p class="title"><b>Table 1.24. Operator /(int)</b></p>
+<div class="table-contents"><table class="table" summary="Operator /(int)">
<colgroup>
<col>
<col>
@@ -942,27 +1090,27 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.098458s)
+ 1.04098 (0.0443082s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.269155s)
+ 1.61317 (0.110308s)
</p>
</td>
<td>
<p>
- 1.10039 (0.849272s)
+ 2.18324 (0.229148s)
</p>
</td>
<td>
<p>
- 2.92096 (2.55909s)
+ 2.36331 (0.442167s)
</p>
</td>
<td>
<p>
- 7.47157 (7.99106s)
+ 2.45159 (0.866172s)
</p>
</td>
</tr>
@@ -974,27 +1122,27 @@
</td>
<td>
<p>
- 6.63934 (0.653697s)
+ <span class="bold"><strong>1</strong></span> (0.042564s)
</p>
</td>
<td>
<p>
- 2.6753 (0.72007s)
+ <span class="bold"><strong>1</strong></span> (0.06838s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.771794s)
+ <span class="bold"><strong>1</strong></span> (0.104957s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.87611s)
+ <span class="bold"><strong>1</strong></span> (0.187096s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (1.06953s)
+ <span class="bold"><strong>1</strong></span> (0.35331s)
</p>
</td>
</tr>
@@ -1006,27 +1154,27 @@
</td>
<td>
<p>
- 18.5522 (1.82661s)
+ 32.4072 (1.37938s)
</p>
</td>
<td>
<p>
- 8.00831 (2.15548s)
+ 23.7471 (1.62383s)
</p>
</td>
<td>
<p>
- 3.89737 (3.00797s)
+ 22.1907 (2.32908s)
</p>
</td>
<td>
<p>
- 5.38078 (4.71416s)
+ 19.9054 (3.72421s)
</p>
</td>
<td>
<p>
- 10.7885 (11.5386s)
+ 24.2219 (8.55783s)
</p>
</td>
</tr>
@@ -1034,8 +1182,8 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.integer_performance.operator___"></a><p class="title"><b>Table 1.22. Operator <<</b></p>
-<div class="table-contents"><table class="table" summary="Operator <<">
+<a name="boost_multiprecision.perf.integer_performance.operator3"></a><p class="title"><b>Table 1.25. Operator %</b></p>
+<div class="table-contents"><table class="table" summary="Operator %">
<colgroup>
<col>
<col>
@@ -1085,27 +1233,27 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0120907s)
+ <span class="bold"><strong>1</strong></span> (0.0946529s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0129147s)
+ <span class="bold"><strong>1</strong></span> (0.170561s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0214412s)
+ <span class="bold"><strong>1</strong></span> (0.328458s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0249208s)
+ <span class="bold"><strong>1</strong></span> (0.575884s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0341293s)
+ <span class="bold"><strong>1</strong></span> (1.05006s)
</p>
</td>
</tr>
@@ -1117,27 +1265,27 @@
</td>
<td>
<p>
- 1.93756 (0.0234265s)
+ 7.77525 (0.73595s)
</p>
</td>
<td>
<p>
- 1.97785 (0.0255433s)
+ 4.39387 (0.749422s)
</p>
</td>
<td>
<p>
- 1.43607 (0.0307911s)
+ 2.35075 (0.772122s)
</p>
</td>
<td>
<p>
- 1.815 (0.0452311s)
+ 1.51922 (0.874894s)
</p>
</td>
<td>
<p>
- 2.00167 (0.0683156s)
+ 1.02263 (1.07382s)
</p>
</td>
</tr>
@@ -1149,27 +1297,27 @@
</td>
<td>
<p>
- 3.42859 (0.0414542s)
+ 27.1503 (2.56986s)
</p>
</td>
<td>
<p>
- 3.04951 (0.0393836s)
+ 12.8743 (2.19585s)
</p>
</td>
<td>
<p>
- 3.04202 (0.0652246s)
+ 9.43965 (3.10053s)
</p>
</td>
<td>
<p>
- 3.81169 (0.0949903s)
+ 8.24936 (4.75068s)
</p>
</td>
<td>
<p>
- 4.93896 (0.168563s)
+ 10.9719 (11.5211s)
</p>
</td>
</tr>
@@ -1177,8 +1325,8 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.integer_performance.operator4"></a><p class="title"><b>Table 1.23. Operator >></b></p>
-<div class="table-contents"><table class="table" summary="Operator >>">
+<a name="boost_multiprecision.perf.integer_performance.operator_int3"></a><p class="title"><b>Table 1.26. Operator %(int)</b></p>
+<div class="table-contents"><table class="table" summary="Operator %(int)">
<colgroup>
<col>
<col>
@@ -1228,27 +1376,27 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0064833s)
+ 1.25034 (0.0425984s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00772857s)
+ 1.91617 (0.106226s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0186871s)
+ 2.02166 (0.195577s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0218303s)
+ 2.14437 (0.387067s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0326372s)
+ 2.23514 (0.776075s)
</p>
</td>
</tr>
@@ -1260,27 +1408,27 @@
</td>
<td>
<p>
- 4.212 (0.0273077s)
+ <span class="bold"><strong>1</strong></span> (0.0340695s)
</p>
</td>
<td>
<p>
- 3.72696 (0.0288041s)
+ <span class="bold"><strong>1</strong></span> (0.0554367s)
</p>
</td>
<td>
<p>
- 1.55046 (0.0289735s)
+ <span class="bold"><strong>1</strong></span> (0.0967406s)
</p>
</td>
<td>
<p>
- 1.51403 (0.0330518s)
+ <span class="bold"><strong>1</strong></span> (0.180504s)
</p>
</td>
<td>
<p>
- 1.13695 (0.037107s)
+ <span class="bold"><strong>1</strong></span> (0.347216s)
</p>
</td>
</tr>
@@ -1292,27 +1440,27 @@
</td>
<td>
<p>
- 33.9418 (0.220055s)
+ 42.8781 (1.46083s)
</p>
</td>
<td>
<p>
- 29.104 (0.224932s)
+ 29.879 (1.65639s)
</p>
</td>
<td>
<p>
- 13.8407 (0.258642s)
+ 23.4323 (2.26685s)
</p>
</td>
<td>
<p>
- 13.1488 (0.287043s)
+ 19.932 (3.5978s)
</p>
</td>
<td>
<p>
- 15.1741 (0.495242s)
+ 25.0046 (8.682s)
</p>
</td>
</tr>
@@ -1320,8 +1468,8 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.integer_performance.operator5"></a><p class="title"><b>Table 1.24. Operator &</b></p>
-<div class="table-contents"><table class="table" summary="Operator &">
+<a name="boost_multiprecision.perf.integer_performance.operator_str"></a><p class="title"><b>Table 1.27. Operator str</b></p>
+<div class="table-contents"><table class="table" summary="Operator str">
<colgroup>
<col>
<col>
@@ -1371,27 +1519,27 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0028732s)
+ <span class="bold"><strong>1</strong></span> (0.000465841s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00552933s)
+ <span class="bold"><strong>1</strong></span> (0.00102073s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0125148s)
+ <span class="bold"><strong>1</strong></span> (0.00207212s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.020299s)
+ 1.02618 (0.0062017s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.034856s)
+ 1.32649 (0.0190043s)
</p>
</td>
</tr>
@@ -1403,27 +1551,27 @@
</td>
<td>
<p>
- 16.3018 (0.0468383s)
+ 2.83823 (0.00132216s)
</p>
</td>
<td>
<p>
- 9.51109 (0.05259s)
+ 2.17537 (0.00222046s)
</p>
</td>
<td>
<p>
- 5.20026 (0.0650802s)
+ 1.46978 (0.00304557s)
</p>
</td>
<td>
<p>
- 4.46545 (0.0906443s)
+ <span class="bold"><strong>1</strong></span> (0.00604351s)
</p>
</td>
<td>
<p>
- 3.99377 (0.139207s)
+ <span class="bold"><strong>1</strong></span> (0.0143268s)
</p>
</td>
</tr>
@@ -1435,27 +1583,27 @@
</td>
<td>
<p>
- 42.221 (0.121309s)
+ 15.76 (0.00734164s)
</p>
</td>
<td>
<p>
- 22.2471 (0.123011s)
+ 15.9879 (0.0163193s)
</p>
</td>
<td>
<p>
- 11.3587 (0.142151s)
+ 21.7337 (0.0450349s)
</p>
</td>
<td>
<p>
- 7.3475 (0.149147s)
+ 19.7183 (0.119168s)
</p>
</td>
<td>
<p>
- 11.4043 (0.397507s)
+ 26.3445 (0.377431s)
</p>
</td>
</tr>
@@ -1463,8 +1611,8 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.integer_performance.operator6"></a><p class="title"><b>Table 1.25. Operator ^</b></p>
-<div class="table-contents"><table class="table" summary="Operator ^">
+<a name="boost_multiprecision.perf.integer_performance.operator___"></a><p class="title"><b>Table 1.28. Operator <<</b></p>
+<div class="table-contents"><table class="table" summary="Operator <<">
<colgroup>
<col>
<col>
@@ -1514,27 +1662,27 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00287983s)
+ <span class="bold"><strong>1</strong></span> (0.0119095s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00543128s)
+ <span class="bold"><strong>1</strong></span> (0.0131746s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0125726s)
+ <span class="bold"><strong>1</strong></span> (0.0213483s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.019987s)
+ <span class="bold"><strong>1</strong></span> (0.0247552s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.034697s)
+ <span class="bold"><strong>1</strong></span> (0.0339579s)
</p>
</td>
</tr>
@@ -1546,27 +1694,27 @@
</td>
<td>
<p>
- 14.938 (0.0430189s)
+ 1.9355 (0.0230509s)
</p>
</td>
<td>
<p>
- 9.00973 (0.0489344s)
+ 1.94257 (0.0255925s)
</p>
</td>
<td>
<p>
- 4.83803 (0.0608267s)
+ 1.49684 (0.031955s)
</p>
</td>
<td>
<p>
- 4.33359 (0.0866154s)
+ 1.79202 (0.0443618s)
</p>
</td>
<td>
<p>
- 3.89518 (0.135151s)
+ 2.0846 (0.0707887s)
</p>
</td>
</tr>
@@ -1578,27 +1726,27 @@
</td>
<td>
<p>
- 41.6898 (0.12006s)
+ 2.64273 (0.0314737s)
</p>
</td>
<td>
<p>
- 22.4393 (0.121874s)
+ 2.95612 (0.0389456s)
</p>
</td>
<td>
<p>
- 10.7513 (0.135172s)
+ 3.05842 (0.065292s)
</p>
</td>
<td>
<p>
- 7.2632 (0.145169s)
+ 3.79496 (0.0939451s)
</p>
</td>
<td>
<p>
- 11.5765 (0.401671s)
+ 4.82142 (0.163725s)
</p>
</td>
</tr>
@@ -1606,8 +1754,8 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.integer_performance.operator7"></a><p class="title"><b>Table 1.26. Operator |</b></p>
-<div class="table-contents"><table class="table" summary="Operator |">
+<a name="boost_multiprecision.perf.integer_performance.operator4"></a><p class="title"><b>Table 1.29. Operator >></b></p>
+<div class="table-contents"><table class="table" summary="Operator >>">
<colgroup>
<col>
<col>
@@ -1657,27 +1805,27 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00314803s)
+ <span class="bold"><strong>1</strong></span> (0.006361s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00548233s)
+ <span class="bold"><strong>1</strong></span> (0.00880189s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0125434s)
+ <span class="bold"><strong>1</strong></span> (0.0180295s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.0198161s)
+ <span class="bold"><strong>1</strong></span> (0.0220786s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.034957s)
+ <span class="bold"><strong>1</strong></span> (0.0325312s)
</p>
</td>
</tr>
@@ -1689,27 +1837,27 @@
</td>
<td>
<p>
- 13.0622 (0.0411201s)
+ 4.26889 (0.0271544s)
</p>
</td>
<td>
<p>
- 8.63936 (0.0473638s)
+ 3.14669 (0.0276968s)
</p>
</td>
<td>
<p>
- 4.6932 (0.0588688s)
+ 1.74396 (0.0314426s)
</p>
</td>
<td>
<p>
- 4.25792 (0.0843755s)
+ 1.45928 (0.0322188s)
</p>
</td>
<td>
<p>
- 3.78236 (0.13222s)
+ 1.24596 (0.0405327s)
</p>
</td>
</tr>
@@ -1721,27 +1869,27 @@
</td>
<td>
<p>
- 38.5896 (0.121481s)
+ 39.4379 (0.250865s)
</p>
</td>
<td>
<p>
- 22.3609 (0.12259s)
+ 28.6225 (0.251932s)
</p>
</td>
<td>
<p>
- 10.9015 (0.136742s)
+ 16.4543 (0.296661s)
</p>
</td>
<td>
<p>
- 7.68521 (0.152291s)
+ 14.2167 (0.313884s)
</p>
</td>
<td>
<p>
- 11.6322 (0.406628s)
+ 15.5842 (0.506974s)
</p>
</td>
</tr>
@@ -1749,8 +1897,723 @@
</table></div>
</div>
<br class="table-break"><div class="table">
-<a name="boost_multiprecision.perf.integer_performance.operator_str"></a><p class="title"><b>Table 1.27. Operator str</b></p>
-<div class="table-contents"><table class="table" summary="Operator str">
+<a name="boost_multiprecision.perf.integer_performance.operator5"></a><p class="title"><b>Table 1.30. Operator &</b></p>
+<div class="table-contents"><table class="table" summary="Operator &">
+<colgroup>
+<col>
+<col>
+<col>
+<col>
+<col>
+<col>
+</colgroup>
+<thead><tr>
+<th>
+ <p>
+ Backend
+ </p>
+ </th>
+<th>
+ <p>
+ 64 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 128 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 256 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 512 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 1024 Bits
+ </p>
+ </th>
+</tr></thead>
+<tbody>
+<tr>
+<td>
+ <p>
+ fixed_int
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.00298048s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.00546222s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0127546s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.01985s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0349286s)
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ gmp_int
+ </p>
+ </td>
+<td>
+ <p>
+ 16.0105 (0.0477189s)
+ </p>
+ </td>
+<td>
+ <p>
+ 9.67027 (0.0528211s)
+ </p>
+ </td>
+<td>
+ <p>
+ 5.12678 (0.0653902s)
+ </p>
+ </td>
+<td>
+ <p>
+ 4.62316 (0.0917698s)
+ </p>
+ </td>
+<td>
+ <p>
+ 4.00837 (0.140007s)
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ tommath_int
+ </p>
+ </td>
+<td>
+ <p>
+ 43.6665 (0.130147s)
+ </p>
+ </td>
+<td>
+ <p>
+ 23.8003 (0.130002s)
+ </p>
+ </td>
+<td>
+ <p>
+ 11.4242 (0.145711s)
+ </p>
+ </td>
+<td>
+ <p>
+ 7.83416 (0.155508s)
+ </p>
+ </td>
+<td>
+ <p>
+ 9.50103 (0.331858s)
+ </p>
+ </td>
+</tr>
+</tbody>
+</table></div>
+</div>
+<br class="table-break"><div class="table">
+<a name="boost_multiprecision.perf.integer_performance.operator_int4"></a><p class="title"><b>Table 1.31. Operator &(int)</b></p>
+<div class="table-contents"><table class="table" summary="Operator &(int)">
+<colgroup>
+<col>
+<col>
+<col>
+<col>
+<col>
+<col>
+</colgroup>
+<thead><tr>
+<th>
+ <p>
+ Backend
+ </p>
+ </th>
+<th>
+ <p>
+ 64 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 128 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 256 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 512 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 1024 Bits
+ </p>
+ </th>
+</tr></thead>
+<tbody>
+<tr>
+<td>
+ <p>
+ fixed_int
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.00222291s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0035522s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0110247s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0154281s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0275044s)
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ gmp_int
+ </p>
+ </td>
+<td>
+ <p>
+ 70.8538 (0.157502s)
+ </p>
+ </td>
+<td>
+ <p>
+ 42.1478 (0.149717s)
+ </p>
+ </td>
+<td>
+ <p>
+ 13.9023 (0.153268s)
+ </p>
+ </td>
+<td>
+ <p>
+ 10.3271 (0.159328s)
+ </p>
+ </td>
+<td>
+ <p>
+ 6.0529 (0.166481s)
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ tommath_int
+ </p>
+ </td>
+<td>
+ <p>
+ 154.134 (0.342626s)
+ </p>
+ </td>
+<td>
+ <p>
+ 93.2035 (0.331077s)
+ </p>
+ </td>
+<td>
+ <p>
+ 31.9151 (0.351853s)
+ </p>
+ </td>
+<td>
+ <p>
+ 23.6515 (0.364899s)
+ </p>
+ </td>
+<td>
+ <p>
+ 22.0042 (0.605213s)
+ </p>
+ </td>
+</tr>
+</tbody>
+</table></div>
+</div>
+<br class="table-break"><div class="table">
+<a name="boost_multiprecision.perf.integer_performance.operator6"></a><p class="title"><b>Table 1.32. Operator ^</b></p>
+<div class="table-contents"><table class="table" summary="Operator ^">
+<colgroup>
+<col>
+<col>
+<col>
+<col>
+<col>
+<col>
+</colgroup>
+<thead><tr>
+<th>
+ <p>
+ Backend
+ </p>
+ </th>
+<th>
+ <p>
+ 64 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 128 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 256 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 512 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 1024 Bits
+ </p>
+ </th>
+</tr></thead>
+<tbody>
+<tr>
+<td>
+ <p>
+ fixed_int
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.00307714s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.00538197s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0127717s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0198304s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0345822s)
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ gmp_int
+ </p>
+ </td>
+<td>
+ <p>
+ 13.9543 (0.0429392s)
+ </p>
+ </td>
+<td>
+ <p>
+ 9.92785 (0.0534314s)
+ </p>
+ </td>
+<td>
+ <p>
+ 4.80398 (0.0613552s)
+ </p>
+ </td>
+<td>
+ <p>
+ 4.35864 (0.0864335s)
+ </p>
+ </td>
+<td>
+ <p>
+ 3.887 (0.134421s)
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ tommath_int
+ </p>
+ </td>
+<td>
+ <p>
+ 41.5958 (0.127996s)
+ </p>
+ </td>
+<td>
+ <p>
+ 24.2396 (0.130457s)
+ </p>
+ </td>
+<td>
+ <p>
+ 11.3666 (0.145171s)
+ </p>
+ </td>
+<td>
+ <p>
+ 8.01016 (0.158845s)
+ </p>
+ </td>
+<td>
+ <p>
+ 9.84853 (0.340584s)
+ </p>
+ </td>
+</tr>
+</tbody>
+</table></div>
+</div>
+<br class="table-break"><div class="table">
+<a name="boost_multiprecision.perf.integer_performance.operator_int5"></a><p class="title"><b>Table 1.33. Operator ^(int)</b></p>
+<div class="table-contents"><table class="table" summary="Operator ^(int)">
+<colgroup>
+<col>
+<col>
+<col>
+<col>
+<col>
+<col>
+</colgroup>
+<thead><tr>
+<th>
+ <p>
+ Backend
+ </p>
+ </th>
+<th>
+ <p>
+ 64 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 128 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 256 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 512 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 1024 Bits
+ </p>
+ </th>
+</tr></thead>
+<tbody>
+<tr>
+<td>
+ <p>
+ fixed_int
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.00236664s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0035339s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0100442s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0155814s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0293253s)
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ gmp_int
+ </p>
+ </td>
+<td>
+ <p>
+ 61.4272 (0.145376s)
+ </p>
+ </td>
+<td>
+ <p>
+ 41.6319 (0.147123s)
+ </p>
+ </td>
+<td>
+ <p>
+ 14.9744 (0.150405s)
+ </p>
+ </td>
+<td>
+ <p>
+ 9.64857 (0.150338s)
+ </p>
+ </td>
+<td>
+ <p>
+ 5.46649 (0.160306s)
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ tommath_int
+ </p>
+ </td>
+<td>
+ <p>
+ 145.509 (0.344367s)
+ </p>
+ </td>
+<td>
+ <p>
+ 93.9055 (0.331853s)
+ </p>
+ </td>
+<td>
+ <p>
+ 35.0456 (0.352003s)
+ </p>
+ </td>
+<td>
+ <p>
+ 22.7371 (0.354275s)
+ </p>
+ </td>
+<td>
+ <p>
+ 19.1373 (0.561207s)
+ </p>
+ </td>
+</tr>
+</tbody>
+</table></div>
+</div>
+<br class="table-break"><div class="table">
+<a name="boost_multiprecision.perf.integer_performance.operator7"></a><p class="title"><b>Table 1.34. Operator |</b></p>
+<div class="table-contents"><table class="table" summary="Operator |">
+<colgroup>
+<col>
+<col>
+<col>
+<col>
+<col>
+<col>
+</colgroup>
+<thead><tr>
+<th>
+ <p>
+ Backend
+ </p>
+ </th>
+<th>
+ <p>
+ 64 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 128 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 256 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 512 Bits
+ </p>
+ </th>
+<th>
+ <p>
+ 1024 Bits
+ </p>
+ </th>
+</tr></thead>
+<tbody>
+<tr>
+<td>
+ <p>
+ fixed_int
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.00295261s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.00560832s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0127056s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.0200759s)
+ </p>
+ </td>
+<td>
+ <p>
+ <span class="bold"><strong>1</strong></span> (0.034651s)
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ gmp_int
+ </p>
+ </td>
+<td>
+ <p>
+ 14.1091 (0.0416586s)
+ </p>
+ </td>
+<td>
+ <p>
+ 8.52475 (0.0478096s)
+ </p>
+ </td>
+<td>
+ <p>
+ 4.74593 (0.0602998s)
+ </p>
+ </td>
+<td>
+ <p>
+ 4.19694 (0.0842575s)
+ </p>
+ </td>
+<td>
+ <p>
+ 3.85525 (0.133588s)
+ </p>
+ </td>
+</tr>
+<tr>
+<td>
+ <p>
+ tommath_int
+ </p>
+ </td>
+<td>
+ <p>
+ 44.8889 (0.132539s)
+ </p>
+ </td>
+<td>
+ <p>
+ 25.2503 (0.141612s)
+ </p>
+ </td>
+<td>
+ <p>
+ 11.0488 (0.140382s)
+ </p>
+ </td>
+<td>
+ <p>
+ 7.39273 (0.148416s)
+ </p>
+ </td>
+<td>
+ <p>
+ 9.75809 (0.338127s)
+ </p>
+ </td>
+</tr>
+</tbody>
+</table></div>
+</div>
+<br class="table-break"><div class="table">
+<a name="boost_multiprecision.perf.integer_performance.operator_int6"></a><p class="title"><b>Table 1.35. Operator |(int)</b></p>
+<div class="table-contents"><table class="table" summary="Operator |(int)">
<colgroup>
<col>
<col>
@@ -1800,27 +2663,27 @@
</td>
<td>
<p>
- 1.03557 (0.00143356s)
+ <span class="bold"><strong>1</strong></span> (0.00244005s)
</p>
</td>
<td>
<p>
- 1.39844 (0.00290281s)
+ <span class="bold"><strong>1</strong></span> (0.0040142s)
</p>
</td>
<td>
<p>
- 3.14081 (0.0099558s)
+ <span class="bold"><strong>1</strong></span> (0.00983777s)
</p>
</td>
<td>
<p>
- 6.28067 (0.0372769s)
+ <span class="bold"><strong>1</strong></span> (0.0155223s)
</p>
</td>
<td>
<p>
- 13.2101 (0.188878s)
+ <span class="bold"><strong>1</strong></span> (0.0293444s)
</p>
</td>
</tr>
@@ -1832,27 +2695,27 @@
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00138432s)
+ 64.6148 (0.157663s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00207575s)
+ 34.5827 (0.138822s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00316982s)
+ 14.2764 (0.140448s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.00593518s)
+ 10.3248 (0.160264s)
</p>
</td>
<td>
<p>
- <span class="bold"><strong>1</strong></span> (0.014298s)
+ 5.33565 (0.156572s)
</p>
</td>
</tr>
@@ -1864,27 +2727,27 @@
</td>
<td>
<p>
- 5.31194 (0.00735345s)
+ 137.825 (0.3363s)
</p>
</td>
<td>
<p>
- 7.90724 (0.0164135s)
+ 81.1074 (0.325581s)
</p>
</td>
<td>
<p>
- 15.8581 (0.0502673s)
+ 34.8737 (0.343079s)
</p>
</td>
<td>
<p>
- 19.7526 (0.117235s)
+ 22.3727 (0.347276s)
</p>
</td>
<td>
<p>
- 26.6031 (0.380373s)
+ 18.912 (0.554963s)
</p>
</td>
</tr>
Modified: sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/perf/realworld.html
==============================================================================
--- sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/perf/realworld.html (original)
+++ sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/perf/realworld.html 2012-01-23 14:01:43 EST (Mon, 23 Jan 2012)
@@ -22,13 +22,14 @@
cases for these functions. In each case the best performing library gets
a relative score of 1, with the total execution time given in brackets. The
first three libraries listed are the various floating point types provided
- by this library, while for comparison, two popular C++ frontends to MPFR
- (mpfr_class and mpreal) are also shown.
+ by this library, while for comparison, two popular C++ frontends to MPFR ( mpfr_class
+ and mpreal) are
+ also shown.
</p>
<p>
Test code was compiled with Microsoft Visual Studio 2010 with all optimisations
- turned on (/Ox), and used MPIR-2.3.0 and MPFR-3.0.0. The tests were run on
- 32-bit Windows Vista machine.
+ turned on (/Ox), and used MPIR-2.3.0 and MPFR-3.0.0.
+      The tests were run on a 32-bit Windows Vista machine.
</p>
<div class="table">
<a name="boost_multiprecision.perf.realworld.bessel_function_performance"></a><p class="title"><b>Table 1.6. Bessel Function Performance</b></p>
@@ -110,7 +111,7 @@
<tr>
<td>
<p>
- mpfr_class
+ mpfr_class
</p>
</td>
<td>
@@ -127,7 +128,7 @@
<tr>
<td>
<p>
- mpreal
+ mpreal
</p>
</td>
<td>
@@ -224,7 +225,7 @@
<tr>
<td>
<p>
- mpfr_class
+ mpfr_class
</p>
</td>
<td>
@@ -241,7 +242,7 @@
<tr>
<td>
<p>
- mpreal
+ mpreal
</p>
</td>
<td>
Modified: sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/ref/mp_number.html
==============================================================================
--- sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/ref/mp_number.html (original)
+++ sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/ref/mp_number.html 2012-01-23 14:01:43 EST (Mon, 23 Jan 2012)
@@ -468,8 +468,8 @@
versions are provided for Backend types that don't have native support for
these functions. Please note however, that this default support requires
the precision of the type to be a compile time constant - this means for
- example that the GMP MPF Backend will not work with these functions when
- that type is used at variable precision.
+ example that the GMP MPF Backend will
+ not work with these functions when that type is used at variable precision.
</p>
<p>
Also note that with the exception of <code class="computeroutput"><span class="identifier">abs</span></code>
Modified: sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/tut/ints.html
==============================================================================
--- sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/tut/ints.html (original)
+++ sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/tut/ints.html 2012-01-23 14:01:43 EST (Mon, 23 Jan 2012)
@@ -80,7 +80,7 @@
</td>
<td>
<p>
- GMP
+ GMP
</p>
</td>
<td>
@@ -90,7 +90,8 @@
</td>
<td>
<p>
- Dependency on GNU licenced GMP library.
+ Dependency on GNU licenced GMP
+ library.
</p>
</td>
</tr>
@@ -122,7 +123,7 @@
</td>
<td>
<p>
- Slower than GMP.
+ Slower than GMP.
</p>
</td>
</tr>
@@ -154,7 +155,7 @@
</td>
<td>
<p>
- Slower than GMP.
+ Slower than GMP.
</p>
</td>
</tr>
@@ -175,9 +176,9 @@
<p>
The <code class="computeroutput"><span class="identifier">gmp_int</span></code> backend is used
via the typedef <code class="computeroutput"><span class="identifier">boost</span><span class="special">::</span><span class="identifier">multiprecision</span><span class="special">::</span><span class="identifier">mpz_int</span></code>. It acts as a thin wrapper around
- the GMP <code class="computeroutput"><span class="identifier">mpz_t</span></code> to provide
- an integer type that is a drop-in replacement for the native C++ integer
- types, but with unlimited precision.
+ the GMP <code class="computeroutput"><span class="identifier">mpz_t</span></code>
+ to provide an integer type that is a drop-in replacement for the native C++
+ integer types, but with unlimited precision.
</p>
<p>
As well as the usual conversions from arithmetic and string types, type
@@ -186,8 +187,8 @@
</p>
<div class="itemizedlist"><ul class="itemizedlist" type="disc">
<li class="listitem">
- The GMP native types: <code class="computeroutput"><span class="identifier">mpf_t</span></code>,
- <code class="computeroutput"><span class="identifier">mpz_t</span></code>, <code class="computeroutput"><span class="identifier">mpq_t</span></code>.
+ The GMP native types: <code class="computeroutput"><span class="identifier">mpf_t</span></code>, <code class="computeroutput"><span class="identifier">mpz_t</span></code>,
+ <code class="computeroutput"><span class="identifier">mpq_t</span></code>.
</li>
<li class="listitem">
Instances of <code class="computeroutput"><span class="identifier">mp_number</span><span class="special"><</span><span class="identifier">T</span><span class="special">></span></code> that are wrappers around those types:
@@ -199,18 +200,36 @@
via the <code class="computeroutput"><span class="identifier">data</span><span class="special">()</span></code>
member function of <code class="computeroutput"><span class="identifier">gmp_int</span></code>.
</p>
-<div class="note"><table border="0" summary="Note">
-<tr>
-<td rowspan="2" align="center" valign="top" width="25"><img alt="[Note]" src="../../images/note.png"></td>
-<th align="left">Note</th>
-</tr>
-<tr><td align="left" valign="top"><p>
- Formatted IO for this type does not support octal or hexadecimal notation
- for negative values, as a result performing formatted output on this type
- when the argument is negative and either of the flags <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">ios_base</span><span class="special">::</span><span class="identifier">oct</span></code>
- or <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">ios_base</span><span class="special">::</span><span class="identifier">hex</span></code> are set, will result in a <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">runtime_error</span></code> will be thrown.
- </p></td></tr>
-</table></div>
+<p>
+ Things you should know when using this type:
+ </p>
+<div class="itemizedlist"><ul class="itemizedlist" type="disc">
+<li class="listitem">
+ No changes are made to the GMP library's global settings - so you can
+ safely mix this type with existing code that uses GMP.
+ </li>
+<li class="listitem">
+ Default constructed <code class="computeroutput"><span class="identifier">gmp_int</span></code>'s
+ have the value zero (this is GMP's default behavior).
+ </li>
+<li class="listitem">
+ Formatted IO for this type does not support octal or hexadecimal notation
+ for negative values, as a result performing formatted output on this
+ type when the argument is negative and either of the flags <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">ios_base</span><span class="special">::</span><span class="identifier">oct</span></code> or <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">ios_base</span><span class="special">::</span><span class="identifier">hex</span></code>
+ are set, will result in a <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">runtime_error</span></code>
+                  being thrown.
+ </li>
+<li class="listitem">
+ Division by zero is handled by the GMP
+ library - it will trigger a division by zero signal.
+ </li>
+<li class="listitem">
+ Although this type is a wrapper around GMP
+ it will work equally well with MPIR.
+                  Indeed use of MPIR is recommended
+ on Win32.
+ </li>
+</ul></div>
<h6>
<a name="boost_multiprecision.tut.ints.h1"></a>
<span><a name="boost_multiprecision.tut.ints.example_"></a></span><a class="link" href="ints.html#boost_multiprecision.tut.ints.example_">Example:</a>
@@ -255,34 +274,34 @@
to provide an integer type that is a drop-in replacement for the native C++
integer types, but with unlimited precision.
</p>
-<div class="caution"><table border="0" summary="Caution">
-<tr>
-<td rowspan="2" align="center" valign="top" width="25"><img alt="[Caution]" src="../../images/caution.png"></td>
-<th align="left">Caution</th>
-</tr>
-<tr><td align="left" valign="top"><p>
- Although <code class="computeroutput"><span class="identifier">mp_int</span></code> is mostly
- a drop in replacement for the builtin integer types, it should be noted
- that it is a rather strange beast as it's a signed type that is not a 2's
- complement type. As a result the bitwise operations <code class="computeroutput"><span class="special">|</span>
- <span class="special">&</span> <span class="special">^</span></code>
- will throw a <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">runtime_error</span></code> exception if either of
- the arguments is negative. Similarly the complement operator<code class="computeroutput"><span class="special">~</span></code> is deliberately not implemented for this
- type.
- </p></td></tr>
-</table></div>
-<div class="note"><table border="0" summary="Note">
-<tr>
-<td rowspan="2" align="center" valign="top" width="25"><img alt="[Note]" src="../../images/note.png"></td>
-<th align="left">Note</th>
-</tr>
-<tr><td align="left" valign="top"><p>
- Formatted IO for this type does not support octal or hexadecimal notation
- for negative values, as a result performing formatted output on this type
- when the argument is negative and either of the flags <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">ios_base</span><span class="special">::</span><span class="identifier">oct</span></code>
- or <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">ios_base</span><span class="special">::</span><span class="identifier">hex</span></code> are set, will result in a <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">runtime_error</span></code> will be thrown.
- </p></td></tr>
-</table></div>
+<p>
+ Things you should know when using this type:
+ </p>
+<div class="itemizedlist"><ul class="itemizedlist" type="disc">
+<li class="listitem">
+ Default constructed objects have the value zero (this is libtommath's
+ default behavior).
+ </li>
+<li class="listitem">
+ Although <code class="computeroutput"><span class="identifier">mp_int</span></code> is mostly
+ a drop in replacement for the builtin integer types, it should be noted
+ that it is a rather strange beast as it's a signed type that is not a
+ 2's complement type. As a result the bitwise operations <code class="computeroutput"><span class="special">|</span> <span class="special">&</span> <span class="special">^</span></code> will throw a <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">runtime_error</span></code>
+ exception if either of the arguments is negative. Similarly the complement
+ operator<code class="computeroutput"><span class="special">~</span></code> is deliberately
+ not implemented for this type.
+ </li>
+<li class="listitem">
+ Formatted IO for this type does not support octal or hexadecimal notation
+ for negative values, as a result performing formatted output on this
+ type when the argument is negative and either of the flags <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">ios_base</span><span class="special">::</span><span class="identifier">oct</span></code> or <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">ios_base</span><span class="special">::</span><span class="identifier">hex</span></code>
+ are set, will result in a <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">runtime_error</span></code>
+                  being thrown.
+ </li>
+<li class="listitem">
+ Division by zero will result in a hardware signal being raised by libtommath.
+ </li>
+</ul></div>
<h6>
<a name="boost_multiprecision.tut.ints.h3"></a>
<span><a name="boost_multiprecision.tut.ints.example0"></a></span><a class="link" href="ints.html#boost_multiprecision.tut.ints.example0">Example:</a>
@@ -351,6 +370,28 @@
designed to work just like a typical built in integer type, but with larger
precision.
</p>
+<p>
+ Things you should know when using this type:
+ </p>
+<div class="itemizedlist"><ul class="itemizedlist" type="disc">
+<li class="listitem">
+ Default constructed <code class="computeroutput"><span class="identifier">fixed_int</span></code>'s
+ have indeterminate value - just like normal built in integers.
+ </li>
+<li class="listitem">
+ Division by zero results in a <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">runtime_error</span></code>
+ being thrown.
+ </li>
+<li class="listitem">
+ Construction from a string that contains invalid non-numeric characters
+ results in a <code class="computeroutput"><span class="identifier">std</span><span class="special">::</span><span class="identifier">runtime_error</span></code> being thrown.
+ </li>
+<li class="listitem">
+ Since the precision of <code class="computeroutput"><span class="identifier">fixed_int</span></code>
+ is necessarily limited, care should be taken to avoid numeric overflow
+ when using this type unless you actually want modulo-arithmetic behavior.
+ </li>
+</ul></div>
<h6>
<a name="boost_multiprecision.tut.ints.h5"></a>
<span><a name="boost_multiprecision.tut.ints.example1"></a></span><a class="link" href="ints.html#boost_multiprecision.tut.ints.example1">Example:</a>
Modified: sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/tut/rational.html
==============================================================================
--- sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/tut/rational.html (original)
+++ sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/tut/rational.html 2012-01-23 14:01:43 EST (Mon, 23 Jan 2012)
@@ -80,7 +80,7 @@
</td>
<td>
<p>
- GMP
+ GMP
</p>
</td>
<td>
@@ -90,7 +90,8 @@
</td>
<td>
<p>
- Dependency on GNU licenced GMP library.
+ Dependency on GNU licenced GMP
+ library.
</p>
</td>
</tr>
@@ -122,7 +123,7 @@
</td>
<td>
<p>
- Slower than GMP.
+ Slower than GMP.
</p>
</td>
</tr>
@@ -210,9 +211,10 @@
<p>
The <code class="computeroutput"><span class="identifier">gmp_rational</span></code> backend
is used via the typedef <code class="computeroutput"><span class="identifier">boost</span><span class="special">::</span><span class="identifier">multiprecision</span><span class="special">::</span><span class="identifier">mpq_rational</span></code>.
- It acts as a thin wrapper around the GMP <code class="computeroutput"><span class="identifier">mpq_t</span></code>
- to provide a rational number type that is a drop-in replacement for the native
- C++ number types, but with unlimited precision.
+ It acts as a thin wrapper around the GMP
+ <code class="computeroutput"><span class="identifier">mpq_t</span></code> to provide a rational
+ number type that is a drop-in replacement for the native C++ number types,
+ but with unlimited precision.
</p>
<p>
As well as the usual conversions from arithmetic and string types, instances
@@ -221,8 +223,7 @@
</p>
<div class="itemizedlist"><ul class="itemizedlist" type="disc">
<li class="listitem">
- The GMP native types: <code class="computeroutput"><span class="identifier">mpz_t</span></code>,
- <code class="computeroutput"><span class="identifier">mpq_t</span></code>.
+ The GMP native types: <code class="computeroutput"><span class="identifier">mpz_t</span></code>, <code class="computeroutput"><span class="identifier">mpq_t</span></code>.
</li>
<li class="listitem">
<code class="computeroutput"><span class="identifier">mp_number</span><span class="special"><</span><span class="identifier">gmp_int</span><span class="special">></span></code>.
@@ -242,6 +243,28 @@
via the <code class="computeroutput"><span class="identifier">data</span><span class="special">()</span></code>
member function of <code class="computeroutput"><span class="identifier">mpq_rational</span></code>.
</p>
+<p>
+ Things you should know when using this type:
+ </p>
+<div class="itemizedlist"><ul class="itemizedlist" type="disc">
+<li class="listitem">
+ Default constructed <code class="computeroutput"><span class="identifier">mpq_rational</span></code>'s
+ have the value zero (this is the GMP
+ default behavior).
+ </li>
+<li class="listitem">
+ Division by zero results in a hardware exception inside the GMP
+ library.
+ </li>
+<li class="listitem">
+ No changes are made to the GMP
+ library's global settings, so this type can coexist with existing GMP code.
+ </li>
+<li class="listitem">
+ The code can equally be used with MPIR
+                  as the underlying library - indeed that is the preferred option on Win32.
+ </li>
+</ul></div>
<h6>
<a name="boost_multiprecision.tut.rational.h1"></a>
<span><a name="boost_multiprecision.tut.rational.example_"></a></span><a class="link" href="rational.html#boost_multiprecision.tut.rational.example_">Example:</a>
@@ -304,6 +327,29 @@
<p>
which return the numerator and denominator of the number.
</p>
+<p>
+ Things you should know when using this type:
+ </p>
+<div class="itemizedlist"><ul class="itemizedlist" type="disc">
+<li class="listitem">
+ Default constructed <code class="computeroutput"><span class="identifier">mp_rational</span></code>'s
+                  have the value zero (this is the inherited Boost.Rational behavior).
+ </li>
+<li class="listitem">
+ Division by zero results in a <code class="computeroutput"><span class="identifier">boost</span><span class="special">::</span><span class="identifier">bad_rational</span></code>
+ exception being thrown (see the rational number library's docs for more
+ information).
+ </li>
+<li class="listitem">
+ No changes are made to libtommath's global state, so this type can safely
+ coexist with other libtommath code.
+ </li>
+<li class="listitem">
+                  Performance of this type has been found to be pretty poor - this needs
+ further investigation - but it appears that Boost.Rational needs some
+ improvement in this area.
+ </li>
+</ul></div>
<h6>
<a name="boost_multiprecision.tut.rational.h3"></a>
<span><a name="boost_multiprecision.tut.rational.example0"></a></span><a class="link" href="rational.html#boost_multiprecision.tut.rational.example0">Example:</a>
Modified: sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/tut/reals.html
==============================================================================
--- sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/tut/reals.html (original)
+++ sandbox/big_number/libs/multiprecision/doc/html/boost_multiprecision/tut/reals.html 2012-01-23 14:01:43 EST (Mon, 23 Jan 2012)
@@ -80,7 +80,7 @@
</td>
<td>
<p>
- GMP
+ GMP
</p>
</td>
<td>
@@ -90,7 +90,8 @@
</td>
<td>
<p>
- Dependency on GNU licenced GMP library.
+ Dependency on GNU licenced GMP
+ library.
</p>
</td>
</tr>
@@ -112,7 +113,7 @@
</td>
<td>
<p>
- GMP and MPFR
+ GMP and MPFR
</p>
</td>
<td>
@@ -123,7 +124,8 @@
</td>
<td>
<p>
- Dependency on GNU licenced GMP and MPFR libraries.
+ Dependency on GNU licenced GMP
+ and MPFR libraries.
</p>
</td>
</tr>
@@ -155,7 +157,8 @@
</td>
<td>
<p>
- Approximately 2x slower than the MPFR or GMP libraries.
+ Approximately 2x slower than the MPFR
+ or GMP libraries.
</p>
</td>
</tr>
@@ -181,9 +184,10 @@
<p>
The <code class="computeroutput"><span class="identifier">gmp_float</span></code> backend is
used in conjunction with <code class="computeroutput"><span class="identifier">mp_number</span></code>
- : it acts as a thin wrapper around the GMP <code class="computeroutput"><span class="identifier">mpf_t</span></code>
- to provide an real-number type that is a drop-in replacement for the native
- C++ floating-point types, but with much greater precision.
+ : it acts as a thin wrapper around the GMP
+                  <code class="computeroutput"><span class="identifier">mpf_t</span></code> to provide a real-number
+ type that is a drop-in replacement for the native C++ floating-point types,
+ but with much greater precision.
</p>
<p>
Type <code class="computeroutput"><span class="identifier">gmp_float</span></code> can be used
@@ -212,8 +216,8 @@
</p>
<div class="itemizedlist"><ul class="itemizedlist" type="disc">
<li class="listitem">
- The GMP native types <code class="computeroutput"><span class="identifier">mpf_t</span></code>,
- <code class="computeroutput"><span class="identifier">mpz_t</span></code>, <code class="computeroutput"><span class="identifier">mpq_t</span></code>.
+ The GMP native types <code class="computeroutput"><span class="identifier">mpf_t</span></code>, <code class="computeroutput"><span class="identifier">mpz_t</span></code>,
+ <code class="computeroutput"><span class="identifier">mpq_t</span></code>.
</li>
<li class="listitem">
The <code class="computeroutput"><span class="identifier">mp_number</span></code> wrappers
@@ -227,10 +231,45 @@
via the <code class="computeroutput"><span class="identifier">data</span><span class="special">()</span></code>
member function of <code class="computeroutput"><span class="identifier">gmp_float</span></code>.
</p>
+<p>
+ Things you should know when using this type:
+ </p>
+<div class="itemizedlist"><ul class="itemizedlist" type="disc">
+<li class="listitem">
+ Default constructed <code class="computeroutput"><span class="identifier">gmp_float</span></code>'s
+ have the value zero (this is the GMP
+ library's default behavior).
+ </li>
+<li class="listitem">
+ No changes are made to the GMP
+ library's global settings, so this type can be safely mixed with existing
+ GMP code.
+ </li>
+<li class="listitem">
+ It is not possible to round-trip objects of this type to and from a string
+ and get back exactly the same value. This appears to be a limitation
+ of GMP.
+ </li>
+<li class="listitem">
+ Since the underlying GMP types
+ have no notion of infinities or NaN's, care should be taken to avoid
+                  numeric overflow or division by zero. The latter will trigger a hardware
+ exception, while generating excessively large exponents may result in
+ instability of the underlying GMP
+ library (in testing, converting a number with an excessively large or
+ small exponent to a string caused GMP
+ to segfault).
+ </li>
+<li class="listitem">
+ This type can equally be used with MPIR
+ as the underlying implementation - indeed that is the recommended option
+ on Win32.
+ </li>
+</ul></div>
<h6>
<a name="boost_multiprecision.tut.reals.h1"></a>
- <span><a name="boost_multiprecision.tut.reals.gmp_example_"></a></span><a class="link" href="reals.html#boost_multiprecision.tut.reals.gmp_example_">GMP
- example:</a>
+ <span><a name="boost_multiprecision.tut.reals.__ulink_url__http___gmplib_org__gmp__ulink__example_"></a></span><a class="link" href="reals.html#boost_multiprecision.tut.reals.__ulink_url__http___gmplib_org__gmp__ulink__example_">
+ GMP example:</a>
</h6>
<p>
</p>
@@ -276,9 +315,10 @@
<p>
The <code class="computeroutput"><span class="identifier">mpfr_float_backend</span></code> type
is used in conjunction with <code class="computeroutput"><span class="identifier">mp_number</span></code>:
- It acts as a thin wrapper around the MPFR <code class="computeroutput"><span class="identifier">mpfr_t</span></code>
- to provide an real-number type that is a drop-in replacement for the native
- C++ floating-point types, but with much greater precision.
+ It acts as a thin wrapper around the MPFR
+                  <code class="computeroutput"><span class="identifier">mpfr_t</span></code> to provide a real-number
+ type that is a drop-in replacement for the native C++ floating-point types,
+ but with much greater precision.
</p>
<p>
Type <code class="computeroutput"><span class="identifier">mpfr_float_backend</span></code> can
@@ -307,11 +347,11 @@
</p>
<div class="itemizedlist"><ul class="itemizedlist" type="disc">
<li class="listitem">
- The GMP native types <code class="computeroutput"><span class="identifier">mpf_t</span></code>,
- <code class="computeroutput"><span class="identifier">mpz_t</span></code>, <code class="computeroutput"><span class="identifier">mpq_t</span></code>.
+ The GMP native types <code class="computeroutput"><span class="identifier">mpf_t</span></code>, <code class="computeroutput"><span class="identifier">mpz_t</span></code>,
+ <code class="computeroutput"><span class="identifier">mpq_t</span></code>.
</li>
<li class="listitem">
- The MPFR native type <code class="computeroutput"><span class="identifier">mpfr_t</span></code>.
+ The MPFR native type <code class="computeroutput"><span class="identifier">mpfr_t</span></code>.
</li>
<li class="listitem">
The <code class="computeroutput"><span class="identifier">mp_number</span></code> wrappers
@@ -323,10 +363,34 @@
It's also possible to access the underlying <code class="computeroutput"><span class="identifier">mpf_t</span></code>
via the data() member function of <code class="computeroutput"><span class="identifier">gmp_float</span></code>.
</p>
+<p>
+ Things you should know when using this type:
+ </p>
+<div class="itemizedlist"><ul class="itemizedlist" type="disc">
+<li class="listitem">
+ A default constructed <code class="computeroutput"><span class="identifier">mpfr_float_backend</span></code>
+ is set to a NaN (this is the default MPFR
+ behavior).
+ </li>
+<li class="listitem">
+ All operations use round to nearest.
+ </li>
+<li class="listitem">
+ No changes are made to GMP or
+ MPFR global settings, so this
+ type can coexist with existing MPFR
+ or GMP code.
+ </li>
+<li class="listitem">
+ The code can equally use MPIR in
+ place of GMP - indeed that is
+                  the preferred option on Win32.
+ </li>
+</ul></div>
<h6>
<a name="boost_multiprecision.tut.reals.h3"></a>
- <span><a name="boost_multiprecision.tut.reals.mpfr_example_"></a></span><a class="link" href="reals.html#boost_multiprecision.tut.reals.mpfr_example_">MPFR
- example:</a>
+ <span><a name="boost_multiprecision.tut.reals.__ulink_url__http___www_mpfr_org__mpfr__ulink__example_"></a></span><a class="link" href="reals.html#boost_multiprecision.tut.reals.__ulink_url__http___www_mpfr_org__mpfr__ulink__example_">
+ MPFR example:</a>
</h6>
<p>
</p>
@@ -383,6 +447,29 @@
There is full standard library and <code class="computeroutput"><span class="identifier">numeric_limits</span></code>
support available for this type.
</p>
+<p>
+ Things you should know when using this type:
+ </p>
+<div class="itemizedlist"><ul class="itemizedlist" type="disc">
+<li class="listitem">
+ Default constructed <code class="computeroutput"><span class="identifier">cpp_float</span></code>'s
+ have a value of zero.
+ </li>
+<li class="listitem">
+ The radix of this type is 10. As a result it can behave subtly differently
+ from base-2 types.
+ </li>
+<li class="listitem">
+ It is not possible to round-trip this type to and from a string and get
+ back to exactly the same value (this is a result of the type having some
+ hidden internal guard digits).
+ </li>
+<li class="listitem">
+ The type has a number of internal guard digits over and above those specified
+ in the template argument. Normally these should not be visible to the
+ user.
+ </li>
+</ul></div>
<h6>
<a name="boost_multiprecision.tut.reals.h5"></a>
<span><a name="boost_multiprecision.tut.reals.cpp_float_example_"></a></span><a class="link" href="reals.html#boost_multiprecision.tut.reals.cpp_float_example_">cpp_float
Modified: sandbox/big_number/libs/multiprecision/doc/html/index.html
==============================================================================
--- sandbox/big_number/libs/multiprecision/doc/html/index.html (original)
+++ sandbox/big_number/libs/multiprecision/doc/html/index.html 2012-01-23 14:01:43 EST (Mon, 23 Jan 2012)
@@ -56,7 +56,7 @@
</div>
</div>
<table xmlns:rev="http://www.cs.rpi.edu/~gregod/boost/tools/doc/revision" width="100%"><tr>
-<td align="left"><p><small>Last revised: January 14, 2012 at 13:20:36 GMT</small></p></td>
+<td align="left"><p><small>Last revised: January 23, 2012 at 18:58:32 GMT</small></p></td>
<td align="right"><div class="copyright-footer"></div></td>
</tr></table>
<hr>
Modified: sandbox/big_number/libs/multiprecision/doc/multiprecision.qbk
==============================================================================
--- sandbox/big_number/libs/multiprecision/doc/multiprecision.qbk (original)
+++ sandbox/big_number/libs/multiprecision/doc/multiprecision.qbk 2012-01-23 14:01:43 EST (Mon, 23 Jan 2012)
@@ -24,6 +24,13 @@
[import ../example/tommath_snips.cpp]
[import ../example/fixed_int_snips.cpp]
+[template mpfr[] [@http://www.mpfr.org MPFR]]
+[template gmp[] [@http://gmplib.org GMP]]
+[template mpf_class[] [@http://gmplib.org/manual/C_002b_002b-Interface-Floats.html#C_002b_002b-Interface-Floats mpf_class]]
+[template mpfr_class[] [@http://math.berkeley.edu/~wilken/code/gmpfrxx/ mpfr_class]]
+[template mpreal[] [@http://www.holoborodko.com/pavel/mpfr/ mpreal]]
+[template mpir[] [@http://mpir.org/ MPIR]]
+
[section:intro Introduction]
The Multiprecision library comes in two distinct parts:
@@ -33,8 +40,8 @@
* A selection of backends that implement the actual arithmetic operations, and need conform only to the
reduced interface requirements of the front end.
-The library is often used by using one of the predefined typedefs: for example if you wanted an arbitrary precision
-integer type using GMP as the underlying implementation then you could use:
+The library is often used via one of the predefined typedefs: for example if you wanted an arbitrary precision
+integer type using [gmp] as the underlying implementation then you could use:
#include <boost/multiprecision/gmp.hpp> // Defines the wrappers around the GMP library's types
@@ -42,7 +49,7 @@
Alternatively, you can compose your own multiprecision type, by combining `mp_number` with one of the
predefined backend types. For example, suppose you wanted a 300 decimal digit floating-point type
-based on the MPFR library. In this case, there's no predefined typedef with that level of precision,
+based on the [mpfr] library. In this case, there's no predefined typedef with that level of precision,
so instead we compose our own:
#include <boost/multiprecision/mpfr.hpp> // Defines the Backend type that wraps MPFR
@@ -86,9 +93,9 @@
y = (((((a[6] * x + a[5]) * x + a[4]) * x + a[3]) * x + a[2]) * x + a[1]) * x + a[0];
If type `T` is an `mp_number`, then this expression is evaluated ['without creating a single temporary value]. In contrast,
-if we were using the C++ wrapper that ships with GMP - `mpf_class` - then this expression would result in no less than 11
-temporaries (this is true even though `mpf_class` does use expression templates to reduce the number of temporaries somewhat). Had
-we used an even simpler wrapper around GMP or MPFR like `mpclass` things would have been even worse and no less that 24 temporaries
+if we were using the C++ wrapper that ships with [gmp] - [mpf_class] - then this expression would result in no less than 11
+temporaries (this is true even though [mpf_class] does use expression templates to reduce the number of temporaries somewhat). Had
+we used an even simpler wrapper around [gmp] or [mpfr] like `mpclass`, things would have been even worse and no less than 24 temporaries
are created for this simple expression (note - we actually measure the number of memory allocations performed rather than
the number of temporaries directly).
@@ -147,14 +154,14 @@
And finally... the performance improvements from an expression template library like this are often not as
dramatic as the reduction in number of temporaries would suggest. For example if we compare this library with
-`mpfr_class` and `mpreal`, with all three using the underlying MPFR library at 50 decimal digits precision then
+[mpfr_class] and [mpreal], with all three using the underlying [mpfr] library at 50 decimal digits precision then
we see the following typical results for polynomial execution:
[table Evaluation of Order 6 Polynomial.
[[Library][Relative Time][Relative number of memory allocations]]
[[mp_number][1.0 (0.00793s)][1.0 (2996 total)]]
-[[mpfr_class][1.2 (0.00931s)][4.3 (12976 total)]]
-[[mpreal][1.9 (0.0148s)][9.3 (27947 total)]]
+[[[mpfr_class]][1.2 (0.00931s)][4.3 (12976 total)]]
+[[[mpreal]][1.9 (0.0148s)][9.3 (27947 total)]]
]
As you can see, the execution time increases a lot more slowly than the number of memory allocations. There are
@@ -170,21 +177,21 @@
one would hope.
We'll conclude this section by providing some more performance comparisons between these three libraries,
-again, all are using MPFR to carry out the underlying arithmetic, and all are operating at the same precision
+again, all are using [mpfr] to carry out the underlying arithmetic, and all are operating at the same precision
(50 decimal digits):
[table Evaluation of Boost.Math's Bessel function test data
[[Library][Relative Time][Relative Number of Memory Allocations]]
[[mp_number][1.0 (6.21s)][1.0 (2685469)]]
-[[mpfr_class][1.04 (6.45s)][1.47 (3946007)]]
-[[mpreal][1.53 (9.52s)][4.92 (13222940)]]
+[[[mpfr_class]][1.04 (6.45s)][1.47 (3946007)]]
+[[[mpreal]][1.53 (9.52s)][4.92 (13222940)]]
]
[table Evaluation of Boost.Math's Non-Central T distribution test data
[[Library][Relative Time][Relative Number of Memory Allocations]]
[[mp_number][1.0 (269s)][1.0 (139082551)]]
-[[mpfr_class][1.04 (278s)][1.81 (252400791)]]
-[[mpreal][1.49 (401s)][3.22 (447009280)]]
+[[[mpfr_class]][1.04 (278s)][1.81 (252400791)]]
+[[[mpreal]][1.49 (401s)][3.22 (447009280)]]
]
[endsect]
@@ -200,9 +207,9 @@
[table
[[Backend Type][Header][Radix][Dependencies][Pros][Cons]]
-[[`gmp_int`][boost/multiprecision/gmp.hpp][2][GMP][Very fast and efficient backend.][Dependency on GNU licenced GMP library.]]
-[[`mp_int`][boost/multiprecision/tommath.hpp][2][libtommath][Public domain backend with no licence restrictions.][Slower than GMP.]]
-[[`fixed_int`][boost/multiprecision/fixed_int.hpp][2][None][Boost licenced fixed precision modular arithmetic integer.][Slower than GMP.]]
+[[`gmp_int`][boost/multiprecision/gmp.hpp][2][[gmp]][Very fast and efficient backend.][Dependency on GNU licenced [gmp] library.]]
+[[`mp_int`][boost/multiprecision/tommath.hpp][2][libtommath][Public domain backend with no licence restrictions.][Slower than [gmp].]]
+[[`fixed_int`][boost/multiprecision/fixed_int.hpp][2][None][Boost licenced fixed precision modular arithmetic integer.][Slower than [gmp].]]
]
[h4 gmp_int]
@@ -215,19 +222,27 @@
}} // namespaces
-The `gmp_int` backend is used via the typedef `boost::multiprecision::mpz_int`. It acts as a thin wrapper around the GMP `mpz_t`
+The `gmp_int` backend is used via the typedef `boost::multiprecision::mpz_int`. It acts as a thin wrapper around the [gmp] `mpz_t`
to provide an integer type that is a drop-in replacement for the native C++ integer types, but with unlimited precision.
As well as the usual conversions from arithmetic and string types, type `mpz_int` is copy constructible and assignable from:
-* The GMP native types: `mpf_t`, `mpz_t`, `mpq_t`.
+* The [gmp] native types: `mpf_t`, `mpz_t`, `mpq_t`.
* Instances of `mp_number<T>` that are wrappers around those types: `mp_number<gmp_float<N> >`, `mp_number<gmp_rational>`.
It's also possible to access the underlying `mpz_t` via the `data()` member function of `gmp_int`.
-[note Formatted IO for this type does not support octal or hexadecimal notation for negative values,
+Things you should know when using this type:
+
+* No changes are made to the GMP library's global settings - so you can safely mix this type with
+existing code that uses [gmp].
+* Default constructed `gmp_int`'s have the value zero (this is GMP's default behavior).
+* Formatted IO for this type does not support octal or hexadecimal notation for negative values,
as a result performing formatted output on this type when the argument is negative and either of the flags
-`std::ios_base::oct` or `std::ios_base::hex` are set, will result in a `std::runtime_error` will be thrown.]
+`std::ios_base::oct` or `std::ios_base::hex` are set, will result in a `std::runtime_error` being thrown.
+* Division by zero is handled by the [gmp] library - it will trigger a division by zero signal.
+* Although this type is a wrapper around [gmp] it will work equally well with [mpir]. Indeed use of [mpir]
+is recommended on Win32.
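
Before the worked example below, here is a minimal sketch of the points above (an editorial sketch only, which simply assumes [gmp] or [mpir] is installed and linked in):

    #include <boost/multiprecision/gmp.hpp>
    #include <iostream>
    #include <stdexcept>

    int main()
    {
       using boost::multiprecision::mpz_int;

       mpz_int i;                    // default constructed: value is zero
       std::cout << i << std::endl;  // prints 0
       mpz_int j = 1;
       for(int k = 0; k < 256; ++k)
          j *= 2;                    // never overflows - precision is unlimited
       std::cout << j << std::endl;  // decimal output is always fine
       mpz_int neg = -j;
       try
       {
          std::cout << std::hex << neg << std::endl;   // hex output of a negative value...
       }
       catch(const std::runtime_error& e)
       {
          std::cout << std::dec << "caught: " << e.what() << std::endl;   // ...throws std::runtime_error
       }
       return 0;
    }
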
[h5 Example:]
@@ -246,14 +261,17 @@
The `tommath_int` backend is used via the typedef `boost::multiprecision::mp_int`. It acts as a thin wrapper around the libtommath `mp_int`
to provide an integer type that is a drop-in replacement for the native C++ integer types, but with unlimited precision.
-[caution Although `mp_int` is mostly a drop in replacement for the builtin integer types, it should be noted that it is a
+Things you should know when using this type:
+
+* Default constructed objects have the value zero (this is libtommath's default behavior).
+* Although `mp_int` is mostly a drop in replacement for the builtin integer types, it should be noted that it is a
rather strange beast as it's a signed type that is not a 2's complement type. As a result the bitwise operations
`| & ^` will throw a `std::runtime_error` exception if either of the arguments is negative. Similarly the complement
-operator`~` is deliberately not implemented for this type.]
-
-[note Formatted IO for this type does not support octal or hexadecimal notation for negative values,
+operator`~` is deliberately not implemented for this type.
+* Formatted IO for this type does not support octal or hexadecimal notation for negative values,
as a result performing formatted output on this type when the argument is negative and either of the flags
-`std::ios_base::oct` or `std::ios_base::hex` are set, will result in a `std::runtime_error` will be thrown.]
+`std::ios_base::oct` or `std::ios_base::hex` are set, will result in a `std::runtime_error` being thrown.
+* Division by zero will result in a hardware signal being raised by libtommath.
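
As a minimal sketch of the bitwise restriction described above (assuming libtommath is available to link against):

    #include <boost/multiprecision/tommath.hpp>
    #include <iostream>
    #include <stdexcept>

    int main()
    {
       using boost::multiprecision::mp_int;

       mp_int a;            // default constructed: value is zero
       mp_int b = -1;
       try
       {
          mp_int c = a | b;          // a bitwise operation with a negative argument...
          std::cout << c << std::endl;
       }
       catch(const std::runtime_error& e)
       {
          std::cout << "caught: " << e.what() << std::endl;   // ...throws std::runtime_error
       }
       return 0;
    }
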
[h5 Example:]
@@ -282,6 +300,14 @@
and modular arithmetic with a 2's complement representation for negative values. In other words it's designed to work just
like a typical built in integer type, but with larger precision.
+Things you should know when using this type:
+
+* Default constructed `fixed_int`'s have indeterminate value - just like normal built in integers.
+* Division by zero results in a `std::runtime_error` being thrown.
+* Construction from a string that contains invalid non-numeric characters results in a `std::runtime_error` being thrown.
+* Since the precision of `fixed_int` is necessarily limited, care should be taken to avoid numeric overflow when using this type
+unless you actually want modulo-arithmetic behavior.
+
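As a minimal sketch of the error handling described above - note that the `mp_int128_t` typedef used here is purely illustrative, substitute whichever fixed-width typedef or `mp_number<fixed_int<...> >` composition your copy of fixed_int.hpp actually provides:

    #include <boost/multiprecision/fixed_int.hpp>
    #include <iostream>
    #include <stdexcept>

    int main()
    {
       // Illustrative name only - see fixed_int.hpp for the types actually provided:
       using boost::multiprecision::mp_int128_t;

       mp_int128_t a = 1;
       for(int i = 0; i < 100; ++i)
          a *= 2;                    // fits comfortably in 128 bits
       std::cout << a << std::endl;
       try
       {
          mp_int128_t b("123xyz");   // invalid characters in the string...
          std::cout << b << std::endl;
       }
       catch(const std::runtime_error& e)
       {
          std::cout << "caught: " << e.what() << std::endl;   // ...throw std::runtime_error
       }
       return 0;
    }
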
[h5 Example:]
[fixed_int_eg]
@@ -294,9 +320,9 @@
[table
[[Backend Type][Header][Radix][Dependencies][Pros][Cons]]
-[[`mpf_float<N>`][boost/multiprecision/gmp.hpp][2][GMP][Very fast and efficient backend.][Dependency on GNU licenced GMP library.]]
-[[`mpfr_float<N>`][boost/multiprecision/mpfr.hpp][2][GMP and MPFR][Very fast and efficient backend, with its own standard library implementation.][Dependency on GNU licenced GMP and MPFR libraries.]]
-[[`cpp_float<N>`][boost/multiprecision/cpp_float.hpp][10][None][Header only, all C++ implementation. Boost licence.][Approximately 2x slower than the MPFR or GMP libraries.]]
+[[`mpf_float<N>`][boost/multiprecision/gmp.hpp][2][[gmp]][Very fast and efficient backend.][Dependency on GNU licenced [gmp] library.]]
+[[`mpfr_float<N>`][boost/multiprecision/mpfr.hpp][2][[gmp] and [mpfr]][Very fast and efficient backend, with its own standard library implementation.][Dependency on GNU licenced [gmp] and [mpfr] libraries.]]
+[[`cpp_float<N>`][boost/multiprecision/cpp_float.hpp][10][None][Header only, all C++ implementation. Boost licence.][Approximately 2x slower than the [mpfr] or [gmp] libraries.]]
]
[h4 gmp_float]
@@ -314,7 +340,7 @@
}} // namespaces
-The `gmp_float` backend is used in conjunction with `mp_number` : it acts as a thin wrapper around the GMP `mpf_t`
+The `gmp_float` backend is used in conjunction with `mp_number` : it acts as a thin wrapper around the [gmp] `mpf_t`
to provide a real-number type that is a drop-in replacement for the native C++ floating-point types, but with
much greater precision.
@@ -329,12 +355,27 @@
As well as the usual conversions from arithmetic and string types, instances of `mp_number<mpf_float<N> >` are
copy constructible and assignable from:
-* The GMP native types `mpf_t`, `mpz_t`, `mpq_t`.
+* The [gmp] native types `mpf_t`, `mpz_t`, `mpq_t`.
* The `mp_number` wrappers around those types: `mp_number<mpf_float<M> >`, `mp_number<gmp_int>`, `mp_number<gmp_rational>`.
It's also possible to access the underlying `mpf_t` via the `data()` member function of `gmp_float`.
-[h5 GMP example:]
+Things you should know when using this type:
+
+* Default constructed `gmp_float`'s have the value zero (this is the [gmp] library's default behavior).
+* No changes are made to the [gmp] library's global settings, so this type can be safely mixed with
+existing [gmp] code.
+* It is not possible to round-trip objects of this type to and from a string and get back
+exactly the same value. This appears to be a limitation of [gmp].
+* Since the underlying [gmp] types have no notion of infinities or NaN's, care should be taken
+to avoid numeric overflow or division by zero. The latter will trigger a hardware exception,
+while generating excessively large exponents may result in instability of the underlying [gmp]
+library (in testing, converting a number with an excessively large or small exponent
+to a string caused [gmp] to segfault).
+* This type can equally be used with [mpir] as the underlying implementation - indeed that is
+the recommended option on Win32.
+
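As a minimal sketch (assuming, as elsewhere in these docs, that the template parameter is the number of decimal digits, and that [gmp] or [mpir] is linked in):

    #include <boost/multiprecision/gmp.hpp>
    #include <iostream>
    #include <iomanip>

    int main()
    {
       using namespace boost::multiprecision;

       typedef mp_number<mpf_float<50> > mpf_50;   // 50 decimal digit type

       mpf_50 a;                        // default constructed: value is zero
       std::cout << a << std::endl;     // prints 0
       mpf_50 third = mpf_50(1) / 3;
       std::cout << std::setprecision(50) << third << std::endl;
       // Remember there are no infinities or NaNs: avoid dividing by zero,
       // and avoid generating excessively large exponents.
       return 0;
    }
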
+[h5 [gmp] example:]
[mpf_eg]
@@ -353,7 +394,7 @@
}} // namespaces
-The `mpfr_float_backend` type is used in conjunction with `mp_number`: It acts as a thin wrapper around the MPFR `mpfr_t`
+The `mpfr_float_backend` type is used in conjunction with `mp_number`: It acts as a thin wrapper around the [mpfr] `mpfr_t`
to provide a real-number type that is a drop-in replacement for the native C++ floating-point types, but with
much greater precision.
@@ -368,13 +409,21 @@
As well as the usual conversions from arithmetic and string types, instances of `mp_number<mpfr_float_backend<N> >` are
copy constructible and assignable from:
-* The GMP native types `mpf_t`, `mpz_t`, `mpq_t`.
-* The MPFR native type `mpfr_t`.
+* The [gmp] native types `mpf_t`, `mpz_t`, `mpq_t`.
+* The [mpfr] native type `mpfr_t`.
* The `mp_number` wrappers around those types: `mp_number<mpfr_float_backend<M> >`, `mp_number<mpf_float<M> >`, `mp_number<gmp_int>`, `mp_number<gmp_rational>`.
It's also possible to access the underlying `mpf_t` via the data() member function of `gmp_float`.
-[h5 MPFR example:]
+Things you should know when using this type:
+
+* A default constructed `mpfr_float_backend` is set to a NaN (this is the default [mpfr] behavior).
+* All operations use round to nearest.
+* No changes are made to [gmp] or [mpfr] global settings, so this type can coexist with existing
+[mpfr] or [gmp] code.
+* The code can equally use [mpir] in place of [gmp] - indeed that is the preferred option on Win32.
+
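As a minimal sketch (again assuming the template parameter is the number of decimal digits, and that [gmp] or [mpir] plus [mpfr] are linked in):

    #include <boost/multiprecision/mpfr.hpp>
    #include <iostream>
    #include <iomanip>

    int main()
    {
       using namespace boost::multiprecision;

       typedef mp_number<mpfr_float_backend<100> > mpfr_100;   // 100 decimal digit type

       mpfr_100 a;                   // default constructed: NaN (MPFR's default)
       a = 1;                        // so assign a value before use
       mpfr_100 r = a / 7;           // all operations round to nearest
       std::cout << std::setprecision(100) << r << std::endl;
       return 0;
    }
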
+[h5 [mpfr] example:]
[mpfr_eg]
@@ -400,6 +449,15 @@
There is full standard library and `numeric_limits` support available for this type.
+Things you should know when using this type:
+
+* Default constructed `cpp_float`'s have a value of zero.
+* The radix of this type is 10. As a result it can behave subtly differently from base-2 types.
+* It is not possible to round-trip this type to and from a string and get back to exactly the same value
+(this is a result of the type having some hidden internal guard digits).
+* The type has a number of internal guard digits over and above those specified in the template argument.
+Normally these should not be visible to the user.
+
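As a minimal sketch (assuming the template parameter is the number of decimal digits - no external libraries are needed as this backend is header only):

    #include <boost/multiprecision/cpp_float.hpp>
    #include <iostream>
    #include <iomanip>

    int main()
    {
       using namespace boost::multiprecision;

       typedef mp_number<cpp_float<50> > cpp_float_50;   // 50 decimal digits, radix 10

       cpp_float_50 x;                    // default constructed: value is zero
       x = cpp_float_50(1) / 3;
       std::cout << std::setprecision(50) << x << std::endl;
       // Writing x to a string and reading it back need not reproduce exactly
       // the same value, because of the hidden internal guard digits.
       return 0;
    }
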
[h5 cpp_float example:]
[cpp_float_eg]
@@ -412,8 +470,8 @@
[table
[[Backend Type][Header][Radix][Dependencies][Pros][Cons]]
-[[`gmp_rational`][boost/multiprecision/gmp.hpp][2][GMP][Very fast and efficient backend.][Dependency on GNU licenced GMP library.]]
-[[`tommath_rational`][boost/multiprecision/tommath.hpp][2][libtommath][All C/C++ implementation that's Boost Software Licence compatible.][Slower than GMP.]]
+[[`gmp_rational`][boost/multiprecision/gmp.hpp][2][[gmp]][Very fast and efficient backend.][Dependency on GNU licenced [gmp] library.]]
+[[`tommath_rational`][boost/multiprecision/tommath.hpp][2][libtommath][All C/C++ implementation that's Boost Software Licence compatible.][Slower than [gmp].]]
[[`rational_adapter`][boost/multiprecision/rational_adapter.hpp][N/A][none][All C++ adapter that allows any integer backend type to be used as a rational type.][Requires an underlying integer backend type.]]
[[`boost::rational`][boost/rational.hpp][N/A][None][A C++ rational number type that can used with any `mp_number` integer type.][The expression templates used by `mp_number` end up being "hidden" inside `boost::rational`: performance may well suffer as a result.]]
]
@@ -428,13 +486,13 @@
}} // namespaces
-The `gmp_rational` backend is used via the typedef `boost::multiprecision::mpq_rational`. It acts as a thin wrapper around the GMP `mpq_t`
+The `gmp_rational` backend is used via the typedef `boost::multiprecision::mpq_rational`. It acts as a thin wrapper around the [gmp] `mpq_t`
to provide a rational number type that is a drop-in replacement for the native C++ number types, but with unlimited precision.
As well as the usual conversions from arithmetic and string types, instances of `mp_number<gmp_rational>` are copy constructible
and assignable from:
-* The GMP native types: `mpz_t`, `mpq_t`.
+* The [gmp] native types: `mpz_t`, `mpq_t`.
* `mp_number<gmp_int>`.
There are also non-member functions:
@@ -446,6 +504,14 @@
It's also possible to access the underlying `mpq_t` via the `data()` member function of `mpq_rational`.
+Things you should know when using this type:
+
+* Default constructed `mpq_rational`'s have the value zero (this is the [gmp] default behavior).
+* Division by zero results in a hardware exception inside the [gmp] library.
+* No changes are made to the [gmp] library's global settings, so this type can coexist with existing
+[gmp] code.
+* The code can equally be used with [mpir] as the underlying library - indeed that is the preferred option on Win32.
+
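As a minimal sketch of exact rational arithmetic with this type (assuming [gmp] or [mpir] is linked in):

    #include <boost/multiprecision/gmp.hpp>
    #include <iostream>

    int main()
    {
       using boost::multiprecision::mpq_rational;
       using boost::multiprecision::mpz_int;

       mpq_rational q;               // default constructed: value is zero
       q = mpz_int(1);               // assignable from mp_number<gmp_int>
       q /= 3;                       // exact: q is now 1/3
       q += mpq_rational(1) / 6;     // exact: q is now 1/2
       std::cout << q << std::endl;
       // Do not divide by zero - that traps inside GMP rather than throwing.
       return 0;
    }
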
[h5 Example:]
[mpq_eg]
@@ -473,6 +539,14 @@
which return the numerator and denominator of the number.
+Things you should know when using this type:
+
+* Default constructed `mp_rational`'s have the value zero (this is the inherited Boost.Rational behavior).
+* Division by zero results in a `boost::bad_rational` exception being thrown (see the rational number library's docs for more information).
+* No changes are made to libtommath's global state, so this type can safely coexist with other libtommath code.
+* Performance of this type has been found to be pretty poor - this needs further investigation - but it appears that Boost.Rational
+needs some improvement in this area.
+
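As a minimal sketch (assuming the `mp_rational` typedef referred to above, and that libtommath is linked in):

    #include <boost/multiprecision/tommath.hpp>
    #include <boost/rational.hpp>          // for boost::bad_rational
    #include <iostream>

    int main()
    {
       using boost::multiprecision::mp_rational;

       mp_rational r;                 // default constructed: value is zero
       r = 1;
       r /= 3;                        // exact rational arithmetic: r is now 1/3
       std::cout << r << std::endl;
       try
       {
          r /= 0;                     // division by zero...
       }
       catch(const boost::bad_rational& e)
       {
          std::cout << "caught: " << e.what() << std::endl;   // ...throws boost::bad_rational
       }
       return 0;
    }
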
[h5 Example:]
[mp_rat_eg]
@@ -869,7 +943,7 @@
These functions are normally implemented by the Backend type. However, default versions are provided for Backend types that
don't have native support for these functions. Please note however, that this default support requires the precision of the type
-to be a compile time constant - this means for example that the GMP MPF Backend will not work with these functions when that type is
+to be a compile time constant - this means for example that the [gmp] MPF Backend will not work with these functions when that type is
used at variable precision.
Also note that with the exception of `abs` that these functions can only be used with floating-point Backend types.
@@ -1096,11 +1170,11 @@
These tests test the total time taken to execute all of Boost.Math's test cases for these functions.
In each case the best performing library gets a relative score of 1, with the total execution time
given in brackets. The first three libraries listed are the various floating point types provided
-by this library, while for comparison, two popular C++ frontends to MPFR (mpfr_class and mpreal)
+by this library, while for comparison, two popular C++ frontends to [mpfr] ([mpfr_class] and [mpreal])
are also shown.
Test code was compiled with Microsoft Visual Studio 2010 with all optimisations
-turned on (/Ox), and used MPIR-2.3.0 and MPFR-3.0.0. The tests were run on 32-bit
+turned on (/Ox), and used MPIR-2.3.0 and [mpfr]-3.0.0. The tests were run on a 32-bit
Windows Vista machine.
[table Bessel Function Performance
@@ -1108,8 +1182,8 @@
[[mpfr_float][[*1.0] (6.472s)][1.193 (10.154s)]]
[[mpf_float][1.801 (11.662s)][[*1.0](8.511s)]]
[[cpp_float][3.13 (20.285s)][2.46 (21.019s)]]
-[[mpfr_class][1.001 (6.480s)][1.15(9.805s)]]
-[[mpreal][1.542 (9.981s)][1.61 (13.702s)]]
+[[[mpfr_class]][1.001 (6.480s)][1.15(9.805s)]]
+[[[mpreal]][1.542 (9.981s)][1.61 (13.702s)]]
]
[table Non-Central T Distribution Performance
@@ -1117,8 +1191,8 @@
[[mpfr_float][1.308 (258.09s)][1.30 (516.74s)]]
[[mpf_float][[*1.0] (197.30s)][[*1.0](397.30s)]]
[[cpp_float][1.695 (334.50s)][2.68 (1064.53s)]]
-[[mpfr_class][1.35 (266.39s)][1.323 (525.74s)]]
-[[mpreal][1.75 (346.64s)][1.635 (649.94s)]]
+[[[mpfr_class]][1.35 (266.39s)][1.323 (525.74s)]]
+[[[mpreal]][1.75 (346.64s)][1.635 (649.94s)]]
]
[endsect]
@@ -1133,50 +1207,63 @@
operations.
Test code was compiled with Microsoft Visual Studio 2010 with all optimisations
-turned on (/Ox), and used MPIR-2.3.0 and MPFR-3.0.0. The tests were run on 32-bit
+turned on (/Ox), and used MPIR-2.3.0 and [mpfr]-3.0.0. The tests were run on a 32-bit
Windows Vista machine.
-[table Operator *
-[[Backend][50 Decimal Digits][100 Decimal Digits][500 Decimal Digits]]
-[[cpp_float][1.0826 (0.287216s)][1.48086 (0.586363s)][1.57545 (5.05269s)]]
-[[gmp_float][[*1] (0.265302s)][[*1] (0.395962s)][[*1] (3.20714s)]]
-[[mpfr_float][1.24249 (0.329636s)][1.15432 (0.457067s)][1.16182 (3.72612s)]]
-]
[table Operator +
[[Backend][50 Decimal Digits][100 Decimal Digits][500 Decimal Digits]]
-[[cpp_float][[*1] (0.0242151s)][[*1] (0.029252s)][[*1] (0.0584099s)]]
-[[gmp_float][4.55194 (0.110226s)][3.67516 (0.107506s)][2.42489 (0.141638s)]]
-[[mpfr_float][2.45362 (0.0594147s)][2.18552 (0.0639309s)][1.32099 (0.0771588s)]]
+[[cpp_float][[*1] (0.02382s)][[*1] (0.0294619s)][[*1] (0.058466s)]]
+[[gmp_float][4.55086 (0.108402s)][3.86443 (0.113853s)][2.6241 (0.15342s)]]
+[[mpfr_float][2.52036 (0.060035s)][2.1833 (0.0643242s)][1.37736 (0.0805287s)]]
]
[table Operator +(int)
[[Backend][50 Decimal Digits][100 Decimal Digits][500 Decimal Digits]]
-[[cpp_float][1.51995 (0.0484155s)][1.78781 (0.0611055s)][1.8309 (0.104123s)]]
-[[gmp_float][[*1] (0.0318533s)][[*1] (0.0341789s)][[*1] (0.0568699s)]]
-[[mpfr_float][3.39055 (0.108s)][3.30142 (0.112839s)][2.05293 (0.11675s)]]
+[[cpp_float][1.56759 (0.0527023s)][1.74629 (0.0618102s)][1.68077 (0.105927s)]]
+[[gmp_float][[*1] (0.0336201s)][[*1] (0.0353951s)][[*1] (0.0630232s)]]
+[[mpfr_float][3.14875 (0.105861s)][3.15499 (0.111671s)][1.92831 (0.121528s)]]
]
[table Operator -
[[Backend][50 Decimal Digits][100 Decimal Digits][500 Decimal Digits]]
-[[cpp_float][[*1] (0.0261498s)][[*1] (0.030946s)][[*1] (0.0606388s)]]
-[[gmp_float][4.48753 (0.117348s)][3.75823 (0.116302s)][2.4823 (0.150524s)]]
-[[mpfr_float][2.96057 (0.0774183s)][2.61897 (0.0810465s)][1.56236 (0.0947396s)]]
+[[cpp_float][[*1] (0.0265783s)][[*1] (0.031465s)][[*1] (0.0619405s)]]
+[[gmp_float][4.66954 (0.124108s)][3.72645 (0.117253s)][2.67536 (0.165713s)]]
+[[mpfr_float][2.7909 (0.0741774s)][2.48557 (0.0782083s)][1.50944 (0.0934957s)]]
]
[table Operator -(int)
[[Backend][50 Decimal Digits][100 Decimal Digits][500 Decimal Digits]]
-[[cpp_float][[*1] (0.0567601s)][[*1] (0.0626685s)][[*1] (0.111692s)]]
-[[gmp_float][2.27932 (0.129374s)][2.04821 (0.128358s)][1.48297 (0.165635s)]]
-[[mpfr_float][2.43199 (0.13804s)][2.32131 (0.145473s)][1.38152 (0.154304s)]]
+[[cpp_float][[*1] (0.0577674s)][[*1] (0.0633795s)][[*1] (0.11146s)]]
+[[gmp_float][2.31811 (0.133911s)][2.07251 (0.131355s)][1.67161 (0.186319s)]]
+[[mpfr_float][2.45081 (0.141577s)][2.29174 (0.145249s)][1.395 (0.155487s)]]
+]
+[table Operator *
+[[Backend][50 Decimal Digits][100 Decimal Digits][500 Decimal Digits]]
+[[cpp_float][1.07276 (0.287898s)][1.47724 (0.584569s)][1.55145 (5.09969s)]]
+[[gmp_float][[*1] (0.268372s)][[*1] (0.395718s)][[*1] (3.28705s)]]
+[[mpfr_float][1.27302 (0.341642s)][1.17649 (0.465557s)][1.14029 (3.7482s)]]
+]
+[table Operator *(int)
+[[Backend][50 Decimal Digits][100 Decimal Digits][500 Decimal Digits]]
+[[cpp_float][2.89945 (0.11959s)][4.56335 (0.197945s)][9.03602 (0.742044s)]]
+[[gmp_float][[*1] (0.0412457s)][[*1] (0.0433772s)][[*1] (0.0821206s)]]
+[[mpfr_float][3.6951 (0.152407s)][3.71977 (0.161353s)][3.30958 (0.271785s)]]
]
[table Operator /
[[Backend][50 Decimal Digits][100 Decimal Digits][500 Decimal Digits]]
-[[cpp_float][3.2662 (3.98153s)][5.07021 (8.11948s)][6.78872 (53.6099s)]]
-[[gmp_float][[*1] (1.21901s)][[*1] (1.60141s)][[*1] (7.89691s)]]
-[[mpfr_float][1.33238 (1.62419s)][1.39529 (2.23443s)][1.70882 (13.4944s)]]
+[[cpp_float][3.24327 (4.00108s)][5.00532 (8.12985s)][6.79566 (54.2796s)]]
+[[gmp_float][[*1] (1.23366s)][[*1] (1.62424s)][[*1] (7.9874s)]]
+[[mpfr_float][1.32521 (1.63486s)][1.38967 (2.25716s)][1.72413 (13.7713s)]]
+]
+[table Operator /(int)
+[[Backend][50 Decimal Digits][100 Decimal Digits][500 Decimal Digits]]
+[[cpp_float][1.45093 (0.253675s)][1.83306 (0.419569s)][2.3644 (1.64187s)]]
+[[gmp_float][[*1] (0.174836s)][[*1] (0.22889s)][[*1] (0.694411s)]]
+[[mpfr_float][1.16731 (0.204088s)][1.13211 (0.259127s)][1.02031 (0.708513s)]]
]
[table Operator str
[[Backend][50 Decimal Digits][100 Decimal Digits][500 Decimal Digits]]
-[[cpp_float][1.46076 (0.0192656s)][1.59438 (0.0320398s)][[*1] (0.134302s)]]
-[[gmp_float][[*1] (0.0131888s)][[*1] (0.0200954s)][1.01007 (0.135655s)]]
-[[mpfr_float][2.19174 (0.0289065s)][1.86101 (0.0373977s)][1.15842 (0.155578s)]]
+[[cpp_float][1.4585 (0.0188303s)][1.55515 (0.03172s)][[*1] (0.131962s)]]
+[[gmp_float][[*1] (0.0129107s)][[*1] (0.0203967s)][1.04632 (0.138075s)]]
+[[mpfr_float][2.19015 (0.0282764s)][1.84679 (0.0376683s)][1.20295 (0.158743s)]]
+]
]
[endsect]
@@ -1190,86 +1277,126 @@
operations.
Test code was compiled with Microsoft Visual Studio 2010 with all optimisations
-turned on (/Ox), and used MPIR-2.3.0 and MPFR-3.0.0. The tests were run on 32-bit
+turned on (/Ox), and used MPIR-2.3.0 and [mpfr]-3.0.0. The tests were run on a 32-bit
Windows Vista machine.
+Note that Linux x64 tests showed significantly worse performance for `fixed_int` division
+than on Win32 (or possibly [gmp] behaves much better in that case). Otherwise the results
+are much the same.
+
[table Operator +
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.0031173s)][[*1] (0.00696555s)][[*1] (0.0163707s)][[*1] (0.0314806s)][[*1] (0.0596158s)]]
-[[gmp_int][12.7096 (0.0396194s)][5.89178 (0.0410395s)][2.66402 (0.0436119s)][1.59356 (0.0501664s)][1.11155 (0.0662662s)]]
-[[tommath_int][6.14357 (0.0191513s)][3.16177 (0.0220235s)][1.85441 (0.030358s)][1.45895 (0.0459287s)][1.26576 (0.0754591s)]]
+[[fixed_int][[*1] (0.0031291s)][[*1] (0.00703043s)][[*1] (0.0163669s)][[*1] (0.0326567s)][[*1] (0.0603087s)]]
+[[gmp_int][12.4866 (0.0390717s)][6.01034 (0.0422553s)][2.65628 (0.0434751s)][1.54295 (0.0503875s)][1.16477 (0.0702458s)]]
+[[tommath_int][6.03111 (0.018872s)][3.08173 (0.0216659s)][1.84243 (0.0301548s)][1.30199 (0.0425188s)][1.18909 (0.0717123s)]]
]
[table Operator +(int)
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.00329336s)][[*1] (0.00370718s)][[*1] (0.00995385s)][[*1] (0.0117467s)][[*1] (0.0233483s)]]
-[[gmp_int][9.56378 (0.031497s)][8.0588 (0.0298754s)][4.15824 (0.0413905s)][5.47974 (0.0643691s)][4.46265 (0.104195s)]]
-[[tommath_int][76.2624 (0.25116s)][71.3973 (0.264682s)][28.0238 (0.278945s)][25.9035 (0.304282s)][13.1635 (0.307346s)]]
+[[fixed_int][[*1] (0.00335294s)][[*1] (0.00376116s)][[*1] (0.00985174s)][[*1] (0.0119345s)][[*1] (0.0170918s)]]
+[[gmp_int][9.47407 (0.031766s)][8.44794 (0.0317741s)][4.23857 (0.0417573s)][5.40856 (0.0645488s)][6.31314 (0.107903s)]]
+[[tommath_int][67.0025 (0.224655s)][60.4203 (0.22725s)][25.1834 (0.2481s)][23.2996 (0.27807s)][17.1743 (0.293538s)]]
]
[table Operator -
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.00359417s)][[*1] (0.00721041s)][[*1] (0.0168213s)][[*1] (0.0323563s)][[*1] (0.061385s)]]
-[[gmp_int][10.6794 (0.0383836s)][5.65517 (0.0407761s)][2.63634 (0.0443466s)][1.59979 (0.0517632s)][1.13379 (0.0695978s)]]
-[[tommath_int][6.43615 (0.0231326s)][3.6161 (0.0260736s)][2.2585 (0.0379908s)][1.52006 (0.0491835s)][1.24231 (0.0762591s)]]
+[[fixed_int][[*1] (0.00339191s)][[*1] (0.0073172s)][[*1] (0.0166428s)][[*1] (0.0349375s)][[*1] (0.0600083s)]]
+[[gmp_int][12.5182 (0.0424608s)][5.57936 (0.0408253s)][2.78496 (0.0463496s)][1.48373 (0.051838s)][1.29928 (0.0779673s)]]
+[[tommath_int][7.00782 (0.0237699s)][3.69919 (0.0270677s)][2.29645 (0.0382195s)][1.39777 (0.0488346s)][1.28243 (0.0769566s)]]
]
[table Operator -(int)
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.00353606s)][[*1] (0.00577573s)][[*1] (0.0155184s)][[*1] (0.029385s)][[*1] (0.0586271s)]]
-[[gmp_int][9.04434 (0.0319814s)][5.12393 (0.0295945s)][2.50743 (0.0389112s)][2.01898 (0.0593277s)][1.68381 (0.098717s)]]
-[[tommath_int][60.2486 (0.213043s)][38.3032 (0.221229s)][15.8792 (0.24642s)][8.71166 (0.255992s)][4.85236 (0.28448s)]]
+[[fixed_int][[*1] (0.00250933s)][[*1] (0.00358055s)][[*1] (0.0103282s)][[*1] (0.0119127s)][[*1] (0.0176089s)]]
+[[gmp_int][12.093 (0.0303454s)][8.50898 (0.0304669s)][3.9284 (0.0405733s)][5.03037 (0.0599252s)][5.96617 (0.105058s)]]
+[[tommath_int][80.8477 (0.202873s)][57.8371 (0.207089s)][21.3372 (0.220375s)][23.526 (0.280258s)][14.793 (0.260488s)]]
]
[table Operator *
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.0175309s)][[*1] (0.0388232s)][[*1] (0.123609s)][[*1] (0.427489s)][[*1] (1.46312s)]]
-[[gmp_int][2.93263 (0.0514117s)][1.70358 (0.0661383s)][1.01811 (0.125848s)][1.20692 (0.515943s)][1.03248 (1.51064s)]]
-[[tommath_int][3.82476 (0.0670515s)][2.87425 (0.111587s)][2.74339 (0.339108s)][2.26768 (0.969408s)][2.1233 (3.10664s)]]
+[[fixed_int][[*1] (0.0223481s)][[*1] (0.0375288s)][[*1] (0.120353s)][[*1] (0.439147s)][[*1] (1.46969s)]]
+[[gmp_int][2.50746 (0.0560369s)][1.76676 (0.0663044s)][1.06052 (0.127636s)][1.22558 (0.53821s)][1.03538 (1.52168s)]]
+[[tommath_int][3.00028 (0.0670506s)][2.97696 (0.111722s)][2.86257 (0.34452s)][2.26661 (0.995374s)][2.12926 (3.12935s)]]
+]
+[table Operator *(int)
+[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
+[[fixed_int][[*1] (0.00444316s)][[*1] (0.0135739s)][[*1] (0.0192615s)][[*1] (0.0328339s)][1.18198 (0.0567364s)]]
+[[gmp_int][4.57776 (0.0203397s)][1.79901 (0.0244196s)][1.32814 (0.025582s)][1.01453 (0.033311s)][[*1] (0.048001s)]]
+[[tommath_int][53.8709 (0.239357s)][18.3773 (0.249452s)][14.2088 (0.273682s)][14.0907 (0.462652s)][9.10761 (0.437175s)]]
]
[table Operator /
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.0973696s)][[*1] (0.260936s)][[*1] (0.845628s)][2.4597 (2.51371s)][6.21836 (7.93136s)]]
-[[gmp_int][7.66851 (0.74668s)][3.17732 (0.829077s)][1.05006 (0.887961s)][[*1] (1.02196s)][[*1] (1.27547s)]]
-[[tommath_int][18.3945 (1.79107s)][8.11201 (2.11671s)][3.49119 (2.95225s)][4.55727 (4.65733s)][9.06813 (11.5662s)]]
+[[fixed_int][[*1] (0.0991632s)][[*1] (0.172328s)][[*1] (0.309492s)][[*1] (0.573815s)][[*1] (1.06356s)]]
+[[gmp_int][7.81859 (0.775316s)][5.11069 (0.880715s)][2.93514 (0.908404s)][1.80497 (1.03572s)][1.21878 (1.29625s)]]
+[[tommath_int][18.0766 (1.79253s)][12.3939 (2.13582s)][9.80438 (3.03438s)][8.74047 (5.01541s)][10.8288 (11.517s)]]
+]
+[table Operator /(int)
+[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
+[[fixed_int][1.04098 (0.0443082s)][1.61317 (0.110308s)][2.18324 (0.229148s)][2.36331 (0.442167s)][2.45159 (0.866172s)]]
+[[gmp_int][[*1] (0.042564s)][[*1] (0.06838s)][[*1] (0.104957s)][[*1] (0.187096s)][[*1] (0.35331s)]]
+[[tommath_int][32.4072 (1.37938s)][23.7471 (1.62383s)][22.1907 (2.32908s)][19.9054 (3.72421s)][24.2219 (8.55783s)]]
]
[table Operator %
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.098458s)][[*1] (0.269155s)][1.10039 (0.849272s)][2.92096 (2.55909s)][7.47157 (7.99106s)]]
-[[gmp_int][6.63934 (0.653697s)][2.6753 (0.72007s)][[*1] (0.771794s)][[*1] (0.87611s)][[*1] (1.06953s)]]
-[[tommath_int][18.5522 (1.82661s)][8.00831 (2.15548s)][3.89737 (3.00797s)][5.38078 (4.71416s)][10.7885 (11.5386s)]]
+[[fixed_int][[*1] (0.0946529s)][[*1] (0.170561s)][[*1] (0.328458s)][[*1] (0.575884s)][[*1] (1.05006s)]]
+[[gmp_int][7.77525 (0.73595s)][4.39387 (0.749422s)][2.35075 (0.772122s)][1.51922 (0.874894s)][1.02263 (1.07382s)]]
+[[tommath_int][27.1503 (2.56986s)][12.8743 (2.19585s)][9.43965 (3.10053s)][8.24936 (4.75068s)][10.9719 (11.5211s)]]
+]
+[table Operator %(int)
+[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
+[[fixed_int][1.25034 (0.0425984s)][1.91617 (0.106226s)][2.02166 (0.195577s)][2.14437 (0.387067s)][2.23514 (0.776075s)]]
+[[gmp_int][[*1] (0.0340695s)][[*1] (0.0554367s)][[*1] (0.0967406s)][[*1] (0.180504s)][[*1] (0.347216s)]]
+[[tommath_int][42.8781 (1.46083s)][29.879 (1.65639s)][23.4323 (2.26685s)][19.932 (3.5978s)][25.0046 (8.682s)]]
+]
+[table Operator str
+[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
+[[fixed_int][[*1] (0.000465841s)][[*1] (0.00102073s)][[*1] (0.00207212s)][1.02618 (0.0062017s)][1.32649 (0.0190043s)]]
+[[gmp_int][2.83823 (0.00132216s)][2.17537 (0.00222046s)][1.46978 (0.00304557s)][[*1] (0.00604351s)][[*1] (0.0143268s)]]
+[[tommath_int][15.76 (0.00734164s)][15.9879 (0.0163193s)][21.7337 (0.0450349s)][19.7183 (0.119168s)][26.3445 (0.377431s)]]
]
[table Operator <<
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.0120907s)][[*1] (0.0129147s)][[*1] (0.0214412s)][[*1] (0.0249208s)][[*1] (0.0341293s)]]
-[[gmp_int][1.93756 (0.0234265s)][1.97785 (0.0255433s)][1.43607 (0.0307911s)][1.815 (0.0452311s)][2.00167 (0.0683156s)]]
-[[tommath_int][3.42859 (0.0414542s)][3.04951 (0.0393836s)][3.04202 (0.0652246s)][3.81169 (0.0949903s)][4.93896 (0.168563s)]]
+[[fixed_int][[*1] (0.0119095s)][[*1] (0.0131746s)][[*1] (0.0213483s)][[*1] (0.0247552s)][[*1] (0.0339579s)]]
+[[gmp_int][1.9355 (0.0230509s)][1.94257 (0.0255925s)][1.49684 (0.031955s)][1.79202 (0.0443618s)][2.0846 (0.0707887s)]]
+[[tommath_int][2.64273 (0.0314737s)][2.95612 (0.0389456s)][3.05842 (0.065292s)][3.79496 (0.0939451s)][4.82142 (0.163725s)]]
]
[table Operator >>
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.0064833s)][[*1] (0.00772857s)][[*1] (0.0186871s)][[*1] (0.0218303s)][[*1] (0.0326372s)]]
-[[gmp_int][4.212 (0.0273077s)][3.72696 (0.0288041s)][1.55046 (0.0289735s)][1.51403 (0.0330518s)][1.13695 (0.037107s)]]
-[[tommath_int][33.9418 (0.220055s)][29.104 (0.224932s)][13.8407 (0.258642s)][13.1488 (0.287043s)][15.1741 (0.495242s)]]
+[[fixed_int][[*1] (0.006361s)][[*1] (0.00880189s)][[*1] (0.0180295s)][[*1] (0.0220786s)][[*1] (0.0325312s)]]
+[[gmp_int][4.26889 (0.0271544s)][3.14669 (0.0276968s)][1.74396 (0.0314426s)][1.45928 (0.0322188s)][1.24596 (0.0405327s)]]
+[[tommath_int][39.4379 (0.250865s)][28.6225 (0.251932s)][16.4543 (0.296661s)][14.2167 (0.313884s)][15.5842 (0.506974s)]]
]
[table Operator &
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.0028732s)][[*1] (0.00552933s)][[*1] (0.0125148s)][[*1] (0.020299s)][[*1] (0.034856s)]]
-[[gmp_int][16.3018 (0.0468383s)][9.51109 (0.05259s)][5.20026 (0.0650802s)][4.46545 (0.0906443s)][3.99377 (0.139207s)]]
-[[tommath_int][42.221 (0.121309s)][22.2471 (0.123011s)][11.3587 (0.142151s)][7.3475 (0.149147s)][11.4043 (0.397507s)]]
+[[fixed_int][[*1] (0.00298048s)][[*1] (0.00546222s)][[*1] (0.0127546s)][[*1] (0.01985s)][[*1] (0.0349286s)]]
+[[gmp_int][16.0105 (0.0477189s)][9.67027 (0.0528211s)][5.12678 (0.0653902s)][4.62316 (0.0917698s)][4.00837 (0.140007s)]]
+[[tommath_int][43.6665 (0.130147s)][23.8003 (0.130002s)][11.4242 (0.145711s)][7.83416 (0.155508s)][9.50103 (0.331858s)]]
+]
+[table Operator &(int)
+[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
+[[fixed_int][[*1] (0.00222291s)][[*1] (0.0035522s)][[*1] (0.0110247s)][[*1] (0.0154281s)][[*1] (0.0275044s)]]
+[[gmp_int][70.8538 (0.157502s)][42.1478 (0.149717s)][13.9023 (0.153268s)][10.3271 (0.159328s)][6.0529 (0.166481s)]]
+[[tommath_int][154.134 (0.342626s)][93.2035 (0.331077s)][31.9151 (0.351853s)][23.6515 (0.364899s)][22.0042 (0.605213s)]]
]
[table Operator ^
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.00287983s)][[*1] (0.00543128s)][[*1] (0.0125726s)][[*1] (0.019987s)][[*1] (0.034697s)]]
-[[gmp_int][14.938 (0.0430189s)][9.00973 (0.0489344s)][4.83803 (0.0608267s)][4.33359 (0.0866154s)][3.89518 (0.135151s)]]
-[[tommath_int][41.6898 (0.12006s)][22.4393 (0.121874s)][10.7513 (0.135172s)][7.2632 (0.145169s)][11.5765 (0.401671s)]]
+[[fixed_int][[*1] (0.00307714s)][[*1] (0.00538197s)][[*1] (0.0127717s)][[*1] (0.0198304s)][[*1] (0.0345822s)]]
+[[gmp_int][13.9543 (0.0429392s)][9.92785 (0.0534314s)][4.80398 (0.0613552s)][4.35864 (0.0864335s)][3.887 (0.134421s)]]
+[[tommath_int][41.5958 (0.127996s)][24.2396 (0.130457s)][11.3666 (0.145171s)][8.01016 (0.158845s)][9.84853 (0.340584s)]]
+]
+[table Operator ^(int)
+[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
+[[fixed_int][[*1] (0.00236664s)][[*1] (0.0035339s)][[*1] (0.0100442s)][[*1] (0.0155814s)][[*1] (0.0293253s)]]
+[[gmp_int][61.4272 (0.145376s)][41.6319 (0.147123s)][14.9744 (0.150405s)][9.64857 (0.150338s)][5.46649 (0.160306s)]]
+[[tommath_int][145.509 (0.344367s)][93.9055 (0.331853s)][35.0456 (0.352003s)][22.7371 (0.354275s)][19.1373 (0.561207s)]]
]
[table Operator |
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.00314803s)][[*1] (0.00548233s)][[*1] (0.0125434s)][[*1] (0.0198161s)][[*1] (0.034957s)]]
-[[gmp_int][13.0622 (0.0411201s)][8.63936 (0.0473638s)][4.6932 (0.0588688s)][4.25792 (0.0843755s)][3.78236 (0.13222s)]]
-[[tommath_int][38.5896 (0.121481s)][22.3609 (0.12259s)][10.9015 (0.136742s)][7.68521 (0.152291s)][11.6322 (0.406628s)]]
+[[fixed_int][[*1] (0.00295261s)][[*1] (0.00560832s)][[*1] (0.0127056s)][[*1] (0.0200759s)][[*1] (0.034651s)]]
+[[gmp_int][14.1091 (0.0416586s)][8.52475 (0.0478096s)][4.74593 (0.0602998s)][4.19694 (0.0842575s)][3.85525 (0.133588s)]]
+[[tommath_int][44.8889 (0.132539s)][25.2503 (0.141612s)][11.0488 (0.140382s)][7.39273 (0.148416s)][9.75809 (0.338127s)]]
]
-[table Operator str
+[table Operator |(int)
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][1.03557 (0.00143356s)][1.39844 (0.00290281s)][3.14081 (0.0099558s)][6.28067 (0.0372769s)][13.2101 (0.188878s)]]
-[[gmp_int][[*1] (0.00138432s)][[*1] (0.00207575s)][[*1] (0.00316982s)][[*1] (0.00593518s)][[*1] (0.014298s)]]
-[[tommath_int][5.31194 (0.00735345s)][7.90724 (0.0164135s)][15.8581 (0.0502673s)][19.7526 (0.117235s)][26.6031 (0.380373s)]]
+[[fixed_int][[*1] (0.00244005s)][[*1] (0.0040142s)][[*1] (0.00983777s)][[*1] (0.0155223s)][[*1] (0.0293444s)]]
+[[gmp_int][64.6148 (0.157663s)][34.5827 (0.138822s)][14.2764 (0.140448s)][10.3248 (0.160264s)][5.33565 (0.156572s)]]
+[[tommath_int][137.825 (0.3363s)][81.1074 (0.325581s)][34.8737 (0.343079s)][22.3727 (0.347276s)][18.912 (0.554963s)]]
]
[endsect]
Modified: sandbox/big_number/libs/multiprecision/performance/performance_test-msvc-10.log
==============================================================================
--- sandbox/big_number/libs/multiprecision/performance/performance_test-msvc-10.log (original)
+++ sandbox/big_number/libs/multiprecision/performance/performance_test-msvc-10.log 2012-01-23 14:01:43 EST (Mon, 23 Jan 2012)
@@ -1,447 +1,621 @@
-gmp_float 50 + 0.110226
-gmp_float 50 - 0.117348
-gmp_float 50 +(int)0.0318533
-gmp_float 50 -(int)0.129374
-gmp_float 50 * 0.265302
-gmp_float 50 / 1.21901
-gmp_float 50 str 0.0131888
-gmp_float 100 + 0.107506
-gmp_float 100 - 0.116302
-gmp_float 100 +(int)0.0341789
-gmp_float 100 -(int)0.128358
-gmp_float 100 * 0.395962
-gmp_float 100 / 1.60141
-gmp_float 100 str 0.0200954
-gmp_float 500 + 0.141638
-gmp_float 500 - 0.150524
-gmp_float 500 +(int)0.0568699
-gmp_float 500 -(int)0.165635
-gmp_float 500 * 3.20714
-gmp_float 500 / 7.89691
-gmp_float 500 str 0.135655
-gmp_int 64 + 0.0396194
-gmp_int 64 - 0.0383836
-gmp_int 64 +(int)0.031497
-gmp_int 64 -(int)0.0319814
-gmp_int 64 * 0.0514117
-gmp_int 64 / 0.74668
-gmp_int 64 str 0.00138432
-gmp_int 64 % 0.653697
-gmp_int 64 | 0.0411201
-gmp_int 64 & 0.0468383
-gmp_int 64 ^ 0.0430189
-gmp_int 64 << 0.0234265
-gmp_int 64 >> 0.0273077
-gmp_int 128 + 0.0410395
-gmp_int 128 - 0.0407761
-gmp_int 128 +(int)0.0298754
-gmp_int 128 -(int)0.0295945
-gmp_int 128 * 0.0661383
-gmp_int 128 / 0.829077
-gmp_int 128 str 0.00207575
-gmp_int 128 % 0.72007
-gmp_int 128 | 0.0473638
-gmp_int 128 & 0.05259
-gmp_int 128 ^ 0.0489344
-gmp_int 128 << 0.0255433
-gmp_int 128 >> 0.0288041
-gmp_int 256 + 0.0436119
-gmp_int 256 - 0.0443466
-gmp_int 256 +(int)0.0413905
-gmp_int 256 -(int)0.0389112
-gmp_int 256 * 0.125848
-gmp_int 256 / 0.887961
-gmp_int 256 str 0.00316982
-gmp_int 256 % 0.771794
-gmp_int 256 | 0.0588688
-gmp_int 256 & 0.0650802
-gmp_int 256 ^ 0.0608267
-gmp_int 256 << 0.0307911
-gmp_int 256 >> 0.0289735
-gmp_int 512 + 0.0501664
-gmp_int 512 - 0.0517632
-gmp_int 512 +(int)0.0643691
-gmp_int 512 -(int)0.0593277
-gmp_int 512 * 0.515943
-gmp_int 512 / 1.02196
-gmp_int 512 str 0.00593518
-gmp_int 512 % 0.87611
-gmp_int 512 | 0.0843755
-gmp_int 512 & 0.0906443
-gmp_int 512 ^ 0.0866154
-gmp_int 512 << 0.0452311
-gmp_int 512 >> 0.0330518
-gmp_int 1024 + 0.0662662
-gmp_int 1024 - 0.0695978
-gmp_int 1024 +(int)0.104195
-gmp_int 1024 -(int)0.098717
-gmp_int 1024 * 1.51064
-gmp_int 1024 / 1.27547
-gmp_int 1024 str 0.014298
-gmp_int 1024 % 1.06953
-gmp_int 1024 | 0.13222
-gmp_int 1024 & 0.139207
-gmp_int 1024 ^ 0.135151
-gmp_int 1024 << 0.0683156
-gmp_int 1024 >> 0.037107
-mpq_rational 64 + 1.47058
-mpq_rational 64 - 1.47214
-mpq_rational 64 +(int)0.654198
-mpq_rational 64 -(int)0.646626
-mpq_rational 64 * 2.63854
-mpq_rational 64 / 9.24093
-mpq_rational 64 str 0.00265152
-mpq_rational 128 + 3.18485
-mpq_rational 128 - 3.1691
-mpq_rational 128 +(int)0.66159
-mpq_rational 128 -(int)0.657712
-mpq_rational 128 * 5.8426
-mpq_rational 128 / 14.6415
-mpq_rational 128 str 0.00372512
-mpq_rational 256 + 6.67422
-mpq_rational 256 - 6.67532
-mpq_rational 256 +(int)0.705687
-mpq_rational 256 -(int)0.704919
-mpq_rational 256 * 12.6093
-mpq_rational 256 / 26.2341
-mpq_rational 256 str 0.00573139
-mpq_rational 512 + 15.5117
-mpq_rational 512 - 15.6279
-mpq_rational 512 +(int)0.817714
-mpq_rational 512 -(int)0.818141
-mpq_rational 512 * 28.654
-mpq_rational 512 / 52.808
-mpq_rational 512 str 0.011966
-mpq_rational 1024 + 37.688
-mpq_rational 1024 - 37.6616
-mpq_rational 1024 +(int)0.925526
-mpq_rational 1024 -(int)0.935657
-mpq_rational 1024 * 68.7938
-mpq_rational 1024 / 119.722
-mpq_rational 1024 str 0.0288116
-tommath_int 64 + 0.0191513
-tommath_int 64 - 0.0231326
-tommath_int 64 +(int)0.25116
-tommath_int 64 -(int)0.213043
-tommath_int 64 * 0.0670515
-tommath_int 64 / 1.79107
-tommath_int 64 str 0.00735345
-tommath_int 64 % 1.82661
-tommath_int 64 | 0.121481
-tommath_int 64 & 0.121309
-tommath_int 64 ^ 0.12006
-tommath_int 64 << 0.0414542
-tommath_int 64 >> 0.220055
-tommath_int 128 + 0.0220235
-tommath_int 128 - 0.0260736
-tommath_int 128 +(int)0.264682
-tommath_int 128 -(int)0.221229
-tommath_int 128 * 0.111587
-tommath_int 128 / 2.11671
-tommath_int 128 str 0.0164135
-tommath_int 128 % 2.15548
-tommath_int 128 | 0.12259
-tommath_int 128 & 0.123011
-tommath_int 128 ^ 0.121874
-tommath_int 128 << 0.0393836
-tommath_int 128 >> 0.224932
-tommath_int 256 + 0.030358
-tommath_int 256 - 0.0379908
-tommath_int 256 +(int)0.278945
-tommath_int 256 -(int)0.24642
-tommath_int 256 * 0.339108
-tommath_int 256 / 2.95225
-tommath_int 256 str 0.0502673
-tommath_int 256 % 3.00797
-tommath_int 256 | 0.136742
-tommath_int 256 & 0.142151
-tommath_int 256 ^ 0.135172
-tommath_int 256 << 0.0652246
-tommath_int 256 >> 0.258642
-tommath_int 512 + 0.0459287
-tommath_int 512 - 0.0491835
-tommath_int 512 +(int)0.304282
-tommath_int 512 -(int)0.255992
-tommath_int 512 * 0.969408
-tommath_int 512 / 4.65733
-tommath_int 512 str 0.117235
-tommath_int 512 % 4.71416
-tommath_int 512 | 0.152291
-tommath_int 512 & 0.149147
-tommath_int 512 ^ 0.145169
-tommath_int 512 << 0.0949903
-tommath_int 512 >> 0.287043
-tommath_int 1024 + 0.0754591
-tommath_int 1024 - 0.0762591
-tommath_int 1024 +(int)0.307346
-tommath_int 1024 -(int)0.28448
-tommath_int 1024 * 3.10664
-tommath_int 1024 / 11.5662
-tommath_int 1024 str 0.380373
-tommath_int 1024 % 11.5386
-tommath_int 1024 | 0.406628
-tommath_int 1024 & 0.397507
-tommath_int 1024 ^ 0.401671
-tommath_int 1024 << 0.168563
-tommath_int 1024 >> 0.495242
-fixed_int 64 + 0.0031173
-fixed_int 64 - 0.00359417
-fixed_int 64 +(int)0.00329336
-fixed_int 64 -(int)0.00353606
-fixed_int 64 * 0.0175309
-fixed_int 64 / 0.0973696
-fixed_int 64 str 0.00143356
-fixed_int 64 % 0.098458
-fixed_int 64 | 0.00314803
-fixed_int 64 & 0.0028732
-fixed_int 64 ^ 0.00287983
-fixed_int 64 << 0.0120907
-fixed_int 64 >> 0.0064833
-fixed_int 128 + 0.00696555
-fixed_int 128 - 0.00721041
-fixed_int 128 +(int)0.00370718
-fixed_int 128 -(int)0.00577573
-fixed_int 128 * 0.0388232
-fixed_int 128 / 0.260936
-fixed_int 128 str 0.00290281
-fixed_int 128 % 0.269155
-fixed_int 128 | 0.00548233
-fixed_int 128 & 0.00552933
-fixed_int 128 ^ 0.00543128
-fixed_int 128 << 0.0129147
-fixed_int 128 >> 0.00772857
-fixed_int 256 + 0.0163707
-fixed_int 256 - 0.0168213
-fixed_int 256 +(int)0.00995385
-fixed_int 256 -(int)0.0155184
-fixed_int 256 * 0.123609
-fixed_int 256 / 0.845628
-fixed_int 256 str 0.0099558
-fixed_int 256 % 0.849272
-fixed_int 256 | 0.0125434
-fixed_int 256 & 0.0125148
-fixed_int 256 ^ 0.0125726
-fixed_int 256 << 0.0214412
-fixed_int 256 >> 0.0186871
-fixed_int 512 + 0.0314806
-fixed_int 512 - 0.0323563
-fixed_int 512 +(int)0.0117467
-fixed_int 512 -(int)0.029385
-fixed_int 512 * 0.427489
-fixed_int 512 / 2.51371
-fixed_int 512 str 0.0372769
-fixed_int 512 % 2.55909
-fixed_int 512 | 0.0198161
-fixed_int 512 & 0.020299
-fixed_int 512 ^ 0.019987
-fixed_int 512 << 0.0249208
-fixed_int 512 >> 0.0218303
-fixed_int 1024 + 0.0596158
-fixed_int 1024 - 0.061385
-fixed_int 1024 +(int)0.0233483
-fixed_int 1024 -(int)0.0586271
-fixed_int 1024 * 1.46312
-fixed_int 1024 / 7.93136
-fixed_int 1024 str 0.188878
-fixed_int 1024 % 7.99106
-fixed_int 1024 | 0.034957
-fixed_int 1024 & 0.034856
-fixed_int 1024 ^ 0.034697
-fixed_int 1024 << 0.0341293
-fixed_int 1024 >> 0.0326372
-cpp_float 50 + 0.0242151
-cpp_float 50 - 0.0261498
-cpp_float 50 +(int)0.0484155
-cpp_float 50 -(int)0.0567601
-cpp_float 50 * 0.287216
-cpp_float 50 / 3.98153
-cpp_float 50 str 0.0192656
-cpp_float 100 + 0.029252
-cpp_float 100 - 0.030946
-cpp_float 100 +(int)0.0611055
-cpp_float 100 -(int)0.0626685
-cpp_float 100 * 0.586363
-cpp_float 100 / 8.11948
-cpp_float 100 str 0.0320398
-cpp_float 500 + 0.0584099
-cpp_float 500 - 0.0606388
-cpp_float 500 +(int)0.104123
-cpp_float 500 -(int)0.111692
-cpp_float 500 * 5.05269
-cpp_float 500 / 53.6099
-cpp_float 500 str 0.134302
-mpfr_float 50 + 0.0594147
-mpfr_float 50 - 0.0774183
-mpfr_float 50 +(int)0.108
-mpfr_float 50 -(int)0.13804
-mpfr_float 50 * 0.329636
-mpfr_float 50 / 1.62419
-mpfr_float 50 str 0.0289065
-mpfr_float 100 + 0.0639309
-mpfr_float 100 - 0.0810465
-mpfr_float 100 +(int)0.112839
-mpfr_float 100 -(int)0.145473
-mpfr_float 100 * 0.457067
-mpfr_float 100 / 2.23443
-mpfr_float 100 str 0.0373977
-mpfr_float 500 + 0.0771588
-mpfr_float 500 - 0.0947396
-mpfr_float 500 +(int)0.11675
-mpfr_float 500 -(int)0.154304
-mpfr_float 500 * 3.72612
-mpfr_float 500 / 13.4944
-mpfr_float 500 str 0.155578
+gmp_float 50 + 0.108402
+gmp_float 50 - 0.124108
+gmp_float 50 * 0.268372
+gmp_float 50 / 1.23366
+gmp_float 50 str 0.0129107
+gmp_float 50 +(int)0.0336201
+gmp_float 50 -(int)0.133911
+gmp_float 50 *(int)0.0412457
+gmp_float 50 /(int)0.174836
+gmp_float 100 + 0.113853
+gmp_float 100 - 0.117253
+gmp_float 100 * 0.395718
+gmp_float 100 / 1.62424
+gmp_float 100 str 0.0203967
+gmp_float 100 +(int)0.0353951
+gmp_float 100 -(int)0.131355
+gmp_float 100 *(int)0.0433772
+gmp_float 100 /(int)0.22889
+gmp_float 500 + 0.15342
+gmp_float 500 - 0.165713
+gmp_float 500 * 3.28705
+gmp_float 500 / 7.9874
+gmp_float 500 str 0.138075
+gmp_float 500 +(int)0.0630232
+gmp_float 500 -(int)0.186319
+gmp_float 500 *(int)0.0821206
+gmp_float 500 /(int)0.694411
+gmp_int 64 + 0.0390717
+gmp_int 64 - 0.0424608
+gmp_int 64 * 0.0560369
+gmp_int 64 / 0.775316
+gmp_int 64 str 0.00132216
+gmp_int 64 +(int)0.031766
+gmp_int 64 -(int)0.0303454
+gmp_int 64 *(int)0.0203397
+gmp_int 64 /(int)0.042564
+gmp_int 64 % 0.73595
+gmp_int 64 | 0.0416586
+gmp_int 64 & 0.0477189
+gmp_int 64 ^ 0.0429392
+gmp_int 64 << 0.0230509
+gmp_int 64 >> 0.0271544
+gmp_int 64 %(int)0.0340695
+gmp_int 64 |(int)0.157663
+gmp_int 64 &(int)0.157502
+gmp_int 64 ^(int)0.145376
+gmp_int 128 + 0.0422553
+gmp_int 128 - 0.0408253
+gmp_int 128 * 0.0663044
+gmp_int 128 / 0.880715
+gmp_int 128 str 0.00222046
+gmp_int 128 +(int)0.0317741
+gmp_int 128 -(int)0.0304669
+gmp_int 128 *(int)0.0244196
+gmp_int 128 /(int)0.06838
+gmp_int 128 % 0.749422
+gmp_int 128 | 0.0478096
+gmp_int 128 & 0.0528211
+gmp_int 128 ^ 0.0534314
+gmp_int 128 << 0.0255925
+gmp_int 128 >> 0.0276968
+gmp_int 128 %(int)0.0554367
+gmp_int 128 |(int)0.138822
+gmp_int 128 &(int)0.149717
+gmp_int 128 ^(int)0.147123
+gmp_int 256 + 0.0434751
+gmp_int 256 - 0.0463496
+gmp_int 256 * 0.127636
+gmp_int 256 / 0.908404
+gmp_int 256 str 0.00304557
+gmp_int 256 +(int)0.0417573
+gmp_int 256 -(int)0.0405733
+gmp_int 256 *(int)0.025582
+gmp_int 256 /(int)0.104957
+gmp_int 256 % 0.772122
+gmp_int 256 | 0.0602998
+gmp_int 256 & 0.0653902
+gmp_int 256 ^ 0.0613552
+gmp_int 256 << 0.031955
+gmp_int 256 >> 0.0314426
+gmp_int 256 %(int)0.0967406
+gmp_int 256 |(int)0.140448
+gmp_int 256 &(int)0.153268
+gmp_int 256 ^(int)0.150405
+gmp_int 512 + 0.0503875
+gmp_int 512 - 0.051838
+gmp_int 512 * 0.53821
+gmp_int 512 / 1.03572
+gmp_int 512 str 0.00604351
+gmp_int 512 +(int)0.0645488
+gmp_int 512 -(int)0.0599252
+gmp_int 512 *(int)0.033311
+gmp_int 512 /(int)0.187096
+gmp_int 512 % 0.874894
+gmp_int 512 | 0.0842575
+gmp_int 512 & 0.0917698
+gmp_int 512 ^ 0.0864335
+gmp_int 512 << 0.0443618
+gmp_int 512 >> 0.0322188
+gmp_int 512 %(int)0.180504
+gmp_int 512 |(int)0.160264
+gmp_int 512 &(int)0.159328
+gmp_int 512 ^(int)0.150338
+gmp_int 1024 + 0.0702458
+gmp_int 1024 - 0.0779673
+gmp_int 1024 * 1.52168
+gmp_int 1024 / 1.29625
+gmp_int 1024 str 0.0143268
+gmp_int 1024 +(int)0.107903
+gmp_int 1024 -(int)0.105058
+gmp_int 1024 *(int)0.048001
+gmp_int 1024 /(int)0.35331
+gmp_int 1024 % 1.07382
+gmp_int 1024 | 0.133588
+gmp_int 1024 & 0.140007
+gmp_int 1024 ^ 0.134421
+gmp_int 1024 << 0.0707887
+gmp_int 1024 >> 0.0405327
+gmp_int 1024 %(int)0.347216
+gmp_int 1024 |(int)0.156572
+gmp_int 1024 &(int)0.166481
+gmp_int 1024 ^(int)0.160306
+mpq_rational 64 + 1.47032
+mpq_rational 64 - 1.48169
+mpq_rational 64 * 2.67402
+mpq_rational 64 / 9.26405
+mpq_rational 64 str 0.00265159
+mpq_rational 64 +(int)0.639153
+mpq_rational 64 -(int)0.650283
+mpq_rational 64 *(int)1.15673
+mpq_rational 64 /(int)1.40596
+mpq_rational 128 + 3.1599
+mpq_rational 128 - 3.17694
+mpq_rational 128 * 5.89806
+mpq_rational 128 / 14.7461
+mpq_rational 128 str 0.00363964
+mpq_rational 128 +(int)0.683218
+mpq_rational 128 -(int)0.669498
+mpq_rational 128 *(int)1.20559
+mpq_rational 128 /(int)1.42215
+mpq_rational 256 + 6.74926
+mpq_rational 256 - 6.75317
+mpq_rational 256 * 12.7524
+mpq_rational 256 / 26.4939
+mpq_rational 256 str 0.00677698
+mpq_rational 256 +(int)0.725686
+mpq_rational 256 -(int)0.707811
+mpq_rational 256 *(int)1.25776
+mpq_rational 256 /(int)1.50342
+mpq_rational 512 + 16.7334
+mpq_rational 512 - 15.9167
+mpq_rational 512 * 29.4044
+mpq_rational 512 / 54.0641
+mpq_rational 512 str 0.0117945
+mpq_rational 512 +(int)0.895283
+mpq_rational 512 -(int)0.83232
+mpq_rational 512 *(int)1.41413
+mpq_rational 512 /(int)1.62326
+mpq_rational 1024 + 38.5739
+mpq_rational 1024 - 39.0541
+mpq_rational 1024 * 70.1615
+mpq_rational 1024 / 126.261
+mpq_rational 1024 str 0.0283447
+mpq_rational 1024 +(int)0.931053
+mpq_rational 1024 -(int)1.06134
+mpq_rational 1024 *(int)1.59151
+mpq_rational 1024 /(int)1.7796
+tommath_int 64 + 0.018872
+tommath_int 64 - 0.0237699
+tommath_int 64 * 0.0670506
+tommath_int 64 / 1.79253
+tommath_int 64 str 0.00734164
+tommath_int 64 +(int)0.224655
+tommath_int 64 -(int)0.202873
+tommath_int 64 *(int)0.239357
+tommath_int 64 /(int)1.37938
+tommath_int 64 % 2.56986
+tommath_int 64 | 0.132539
+tommath_int 64 & 0.130147
+tommath_int 64 ^ 0.127996
+tommath_int 64 << 0.0314737
+tommath_int 64 >> 0.250865
+tommath_int 64 %(int)1.46083
+tommath_int 64 |(int)0.3363
+tommath_int 64 &(int)0.342626
+tommath_int 64 ^(int)0.344367
+tommath_int 128 + 0.0216659
+tommath_int 128 - 0.0270677
+tommath_int 128 * 0.111722
+tommath_int 128 / 2.13582
+tommath_int 128 str 0.0163193
+tommath_int 128 +(int)0.22725
+tommath_int 128 -(int)0.207089
+tommath_int 128 *(int)0.249452
+tommath_int 128 /(int)1.62383
+tommath_int 128 % 2.19585
+tommath_int 128 | 0.141612
+tommath_int 128 & 0.130002
+tommath_int 128 ^ 0.130457
+tommath_int 128 << 0.0389456
+tommath_int 128 >> 0.251932
+tommath_int 128 %(int)1.65639
+tommath_int 128 |(int)0.325581
+tommath_int 128 &(int)0.331077
+tommath_int 128 ^(int)0.331853
+tommath_int 256 + 0.0301548
+tommath_int 256 - 0.0382195
+tommath_int 256 * 0.34452
+tommath_int 256 / 3.03438
+tommath_int 256 str 0.0450349
+tommath_int 256 +(int)0.2481
+tommath_int 256 -(int)0.220375
+tommath_int 256 *(int)0.273682
+tommath_int 256 /(int)2.32908
+tommath_int 256 % 3.10053
+tommath_int 256 | 0.140382
+tommath_int 256 & 0.145711
+tommath_int 256 ^ 0.145171
+tommath_int 256 << 0.065292
+tommath_int 256 >> 0.296661
+tommath_int 256 %(int)2.26685
+tommath_int 256 |(int)0.343079
+tommath_int 256 &(int)0.351853
+tommath_int 256 ^(int)0.352003
+tommath_int 512 + 0.0425188
+tommath_int 512 - 0.0488346
+tommath_int 512 * 0.995374
+tommath_int 512 / 5.01541
+tommath_int 512 str 0.119168
+tommath_int 512 +(int)0.27807
+tommath_int 512 -(int)0.280258
+tommath_int 512 *(int)0.462652
+tommath_int 512 /(int)3.72421
+tommath_int 512 % 4.75068
+tommath_int 512 | 0.148416
+tommath_int 512 & 0.155508
+tommath_int 512 ^ 0.158845
+tommath_int 512 << 0.0939451
+tommath_int 512 >> 0.313884
+tommath_int 512 %(int)3.5978
+tommath_int 512 |(int)0.347276
+tommath_int 512 &(int)0.364899
+tommath_int 512 ^(int)0.354275
+tommath_int 1024 + 0.0717123
+tommath_int 1024 - 0.0769566
+tommath_int 1024 * 3.12935
+tommath_int 1024 / 11.517
+tommath_int 1024 str 0.377431
+tommath_int 1024 +(int)0.293538
+tommath_int 1024 -(int)0.260488
+tommath_int 1024 *(int)0.437175
+tommath_int 1024 /(int)8.55783
+tommath_int 1024 % 11.5211
+tommath_int 1024 | 0.338127
+tommath_int 1024 & 0.331858
+tommath_int 1024 ^ 0.340584
+tommath_int 1024 << 0.163725
+tommath_int 1024 >> 0.506974
+tommath_int 1024 %(int)8.682
+tommath_int 1024 |(int)0.554963
+tommath_int 1024 &(int)0.605213
+tommath_int 1024 ^(int)0.561207
+fixed_int 64 + 0.0031291
+fixed_int 64 - 0.00339191
+fixed_int 64 * 0.0223481
+fixed_int 64 / 0.0991632
+fixed_int 64 str 0.000465841
+fixed_int 64 +(int)0.00335294
+fixed_int 64 -(int)0.00250933
+fixed_int 64 *(int)0.00444316
+fixed_int 64 /(int)0.0443082
+fixed_int 64 % 0.0946529
+fixed_int 64 | 0.00295261
+fixed_int 64 & 0.00298048
+fixed_int 64 ^ 0.00307714
+fixed_int 64 << 0.0119095
+fixed_int 64 >> 0.006361
+fixed_int 64 %(int)0.0425984
+fixed_int 64 |(int)0.00244005
+fixed_int 64 &(int)0.00222291
+fixed_int 64 ^(int)0.00236664
+fixed_int 128 + 0.00703043
+fixed_int 128 - 0.0073172
+fixed_int 128 * 0.0375288
+fixed_int 128 / 0.172328
+fixed_int 128 str 0.00102073
+fixed_int 128 +(int)0.00376116
+fixed_int 128 -(int)0.00358055
+fixed_int 128 *(int)0.0135739
+fixed_int 128 /(int)0.110308
+fixed_int 128 % 0.170561
+fixed_int 128 | 0.00560832
+fixed_int 128 & 0.00546222
+fixed_int 128 ^ 0.00538197
+fixed_int 128 << 0.0131746
+fixed_int 128 >> 0.00880189
+fixed_int 128 %(int)0.106226
+fixed_int 128 |(int)0.0040142
+fixed_int 128 &(int)0.0035522
+fixed_int 128 ^(int)0.0035339
+fixed_int 256 + 0.0163669
+fixed_int 256 - 0.0166428
+fixed_int 256 * 0.120353
+fixed_int 256 / 0.309492
+fixed_int 256 str 0.00207212
+fixed_int 256 +(int)0.00985174
+fixed_int 256 -(int)0.0103282
+fixed_int 256 *(int)0.0192615
+fixed_int 256 /(int)0.229148
+fixed_int 256 % 0.328458
+fixed_int 256 | 0.0127056
+fixed_int 256 & 0.0127546
+fixed_int 256 ^ 0.0127717
+fixed_int 256 << 0.0213483
+fixed_int 256 >> 0.0180295
+fixed_int 256 %(int)0.195577
+fixed_int 256 |(int)0.00983777
+fixed_int 256 &(int)0.0110247
+fixed_int 256 ^(int)0.0100442
+fixed_int 512 + 0.0326567
+fixed_int 512 - 0.0349375
+fixed_int 512 * 0.439147
+fixed_int 512 / 0.573815
+fixed_int 512 str 0.0062017
+fixed_int 512 +(int)0.0119345
+fixed_int 512 -(int)0.0119127
+fixed_int 512 *(int)0.0328339
+fixed_int 512 /(int)0.442167
+fixed_int 512 % 0.575884
+fixed_int 512 | 0.0200759
+fixed_int 512 & 0.01985
+fixed_int 512 ^ 0.0198304
+fixed_int 512 << 0.0247552
+fixed_int 512 >> 0.0220786
+fixed_int 512 %(int)0.387067
+fixed_int 512 |(int)0.0155223
+fixed_int 512 &(int)0.0154281
+fixed_int 512 ^(int)0.0155814
+fixed_int 1024 + 0.0603087
+fixed_int 1024 - 0.0600083
+fixed_int 1024 * 1.46969
+fixed_int 1024 / 1.06356
+fixed_int 1024 str 0.0190043
+fixed_int 1024 +(int)0.0170918
+fixed_int 1024 -(int)0.0176089
+fixed_int 1024 *(int)0.0567364
+fixed_int 1024 /(int)0.866172
+fixed_int 1024 % 1.05006
+fixed_int 1024 | 0.034651
+fixed_int 1024 & 0.0349286
+fixed_int 1024 ^ 0.0345822
+fixed_int 1024 << 0.0339579
+fixed_int 1024 >> 0.0325312
+fixed_int 1024 %(int)0.776075
+fixed_int 1024 |(int)0.0293444
+fixed_int 1024 &(int)0.0275044
+fixed_int 1024 ^(int)0.0293253
+cpp_float 50 + 0.02382
+cpp_float 50 - 0.0265783
+cpp_float 50 * 0.287898
+cpp_float 50 / 4.00108
+cpp_float 50 str 0.0188303
+cpp_float 50 +(int)0.0527023
+cpp_float 50 -(int)0.0577674
+cpp_float 50 *(int)0.11959
+cpp_float 50 /(int)0.253675
+cpp_float 100 + 0.0294619
+cpp_float 100 - 0.031465
+cpp_float 100 * 0.584569
+cpp_float 100 / 8.12985
+cpp_float 100 str 0.03172
+cpp_float 100 +(int)0.0618102
+cpp_float 100 -(int)0.0633795
+cpp_float 100 *(int)0.197945
+cpp_float 100 /(int)0.419569
+cpp_float 500 + 0.058466
+cpp_float 500 - 0.0619405
+cpp_float 500 * 5.09969
+cpp_float 500 / 54.2796
+cpp_float 500 str 0.131962
+cpp_float 500 +(int)0.105927
+cpp_float 500 -(int)0.11146
+cpp_float 500 *(int)0.742044
+cpp_float 500 /(int)1.64187
+mpfr_float 50 + 0.060035
+mpfr_float 50 - 0.0741774
+mpfr_float 50 * 0.341642
+mpfr_float 50 / 1.63486
+mpfr_float 50 str 0.0282764
+mpfr_float 50 +(int)0.105861
+mpfr_float 50 -(int)0.141577
+mpfr_float 50 *(int)0.152407
+mpfr_float 50 /(int)0.204088
+mpfr_float 100 + 0.0643242
+mpfr_float 100 - 0.0782083
+mpfr_float 100 * 0.465557
+mpfr_float 100 / 2.25716
+mpfr_float 100 str 0.0376683
+mpfr_float 100 +(int)0.111671
+mpfr_float 100 -(int)0.145249
+mpfr_float 100 *(int)0.161353
+mpfr_float 100 /(int)0.259127
+mpfr_float 500 + 0.0805287
+mpfr_float 500 - 0.0934957
+mpfr_float 500 * 3.7482
+mpfr_float 500 / 13.7713
+mpfr_float 500 str 0.158743
+mpfr_float 500 +(int)0.121528
+mpfr_float 500 -(int)0.155487
+mpfr_float 500 *(int)0.271785
+mpfr_float 500 /(int)0.708513
[section:float_performance Float Type Performance]
[table Operator *
[[Backend][50 Bits][100 Bits][500 Bits]]
-[[cpp_float][1.0826 (0.287216s)][1.48086 (0.586363s)][1.57545 (5.05269s)]]
-[[gmp_float][[*1] (0.265302s)][[*1] (0.395962s)][[*1] (3.20714s)]]
-[[mpfr_float][1.24249 (0.329636s)][1.15432 (0.457067s)][1.16182 (3.72612s)]]
+[[cpp_float][1.07276 (0.287898s)][1.47724 (0.584569s)][1.55145 (5.09969s)]]
+[[gmp_float][[*1] (0.268372s)][[*1] (0.395718s)][[*1] (3.28705s)]]
+[[mpfr_float][1.27302 (0.341642s)][1.17649 (0.465557s)][1.14029 (3.7482s)]]
+]
+[table Operator *(int)
+[[Backend][50 Bits][100 Bits][500 Bits]]
+[[cpp_float][2.89945 (0.11959s)][4.56335 (0.197945s)][9.03602 (0.742044s)]]
+[[gmp_float][[*1] (0.0412457s)][[*1] (0.0433772s)][[*1] (0.0821206s)]]
+[[mpfr_float][3.6951 (0.152407s)][3.71977 (0.161353s)][3.30958 (0.271785s)]]
]
[table Operator +
[[Backend][50 Bits][100 Bits][500 Bits]]
-[[cpp_float][[*1] (0.0242151s)][[*1] (0.029252s)][[*1] (0.0584099s)]]
-[[gmp_float][4.55194 (0.110226s)][3.67516 (0.107506s)][2.42489 (0.141638s)]]
-[[mpfr_float][2.45362 (0.0594147s)][2.18552 (0.0639309s)][1.32099 (0.0771588s)]]
+[[cpp_float][[*1] (0.02382s)][[*1] (0.0294619s)][[*1] (0.058466s)]]
+[[gmp_float][4.55086 (0.108402s)][3.86443 (0.113853s)][2.6241 (0.15342s)]]
+[[mpfr_float][2.52036 (0.060035s)][2.1833 (0.0643242s)][1.37736 (0.0805287s)]]
]
[table Operator +(int)
[[Backend][50 Bits][100 Bits][500 Bits]]
-[[cpp_float][1.51995 (0.0484155s)][1.78781 (0.0611055s)][1.8309 (0.104123s)]]
-[[gmp_float][[*1] (0.0318533s)][[*1] (0.0341789s)][[*1] (0.0568699s)]]
-[[mpfr_float][3.39055 (0.108s)][3.30142 (0.112839s)][2.05293 (0.11675s)]]
+[[cpp_float][1.56759 (0.0527023s)][1.74629 (0.0618102s)][1.68077 (0.105927s)]]
+[[gmp_float][[*1] (0.0336201s)][[*1] (0.0353951s)][[*1] (0.0630232s)]]
+[[mpfr_float][3.14875 (0.105861s)][3.15499 (0.111671s)][1.92831 (0.121528s)]]
]
[table Operator -
[[Backend][50 Bits][100 Bits][500 Bits]]
-[[cpp_float][[*1] (0.0261498s)][[*1] (0.030946s)][[*1] (0.0606388s)]]
-[[gmp_float][4.48753 (0.117348s)][3.75823 (0.116302s)][2.4823 (0.150524s)]]
-[[mpfr_float][2.96057 (0.0774183s)][2.61897 (0.0810465s)][1.56236 (0.0947396s)]]
+[[cpp_float][[*1] (0.0265783s)][[*1] (0.031465s)][[*1] (0.0619405s)]]
+[[gmp_float][4.66954 (0.124108s)][3.72645 (0.117253s)][2.67536 (0.165713s)]]
+[[mpfr_float][2.7909 (0.0741774s)][2.48557 (0.0782083s)][1.50944 (0.0934957s)]]
]
[table Operator -(int)
[[Backend][50 Bits][100 Bits][500 Bits]]
-[[cpp_float][[*1] (0.0567601s)][[*1] (0.0626685s)][[*1] (0.111692s)]]
-[[gmp_float][2.27932 (0.129374s)][2.04821 (0.128358s)][1.48297 (0.165635s)]]
-[[mpfr_float][2.43199 (0.13804s)][2.32131 (0.145473s)][1.38152 (0.154304s)]]
+[[cpp_float][[*1] (0.0577674s)][[*1] (0.0633795s)][[*1] (0.11146s)]]
+[[gmp_float][2.31811 (0.133911s)][2.07251 (0.131355s)][1.67161 (0.186319s)]]
+[[mpfr_float][2.45081 (0.141577s)][2.29174 (0.145249s)][1.395 (0.155487s)]]
]
[table Operator /
[[Backend][50 Bits][100 Bits][500 Bits]]
-[[cpp_float][3.2662 (3.98153s)][5.07021 (8.11948s)][6.78872 (53.6099s)]]
-[[gmp_float][[*1] (1.21901s)][[*1] (1.60141s)][[*1] (7.89691s)]]
-[[mpfr_float][1.33238 (1.62419s)][1.39529 (2.23443s)][1.70882 (13.4944s)]]
+[[cpp_float][3.24327 (4.00108s)][5.00532 (8.12985s)][6.79566 (54.2796s)]]
+[[gmp_float][[*1] (1.23366s)][[*1] (1.62424s)][[*1] (7.9874s)]]
+[[mpfr_float][1.32521 (1.63486s)][1.38967 (2.25716s)][1.72413 (13.7713s)]]
+]
+[table Operator /(int)
+[[Backend][50 Bits][100 Bits][500 Bits]]
+[[cpp_float][1.45093 (0.253675s)][1.83306 (0.419569s)][2.3644 (1.64187s)]]
+[[gmp_float][[*1] (0.174836s)][[*1] (0.22889s)][[*1] (0.694411s)]]
+[[mpfr_float][1.16731 (0.204088s)][1.13211 (0.259127s)][1.02031 (0.708513s)]]
]
[table Operator str
[[Backend][50 Bits][100 Bits][500 Bits]]
-[[cpp_float][1.46076 (0.0192656s)][1.59438 (0.0320398s)][[*1] (0.134302s)]]
-[[gmp_float][[*1] (0.0131888s)][[*1] (0.0200954s)][1.01007 (0.135655s)]]
-[[mpfr_float][2.19174 (0.0289065s)][1.86101 (0.0373977s)][1.15842 (0.155578s)]]
+[[cpp_float][1.4585 (0.0188303s)][1.55515 (0.03172s)][[*1] (0.131962s)]]
+[[gmp_float][[*1] (0.0129107s)][[*1] (0.0203967s)][1.04632 (0.138075s)]]
+[[mpfr_float][2.19015 (0.0282764s)][1.84679 (0.0376683s)][1.20295 (0.158743s)]]
]
[endsect]
[section:integer_performance Integer Type Performance]
[table Operator %
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.098458s)][[*1] (0.269155s)][1.10039 (0.849272s)][2.92096 (2.55909s)][7.47157 (7.99106s)]]
-[[gmp_int][6.63934 (0.653697s)][2.6753 (0.72007s)][[*1] (0.771794s)][[*1] (0.87611s)][[*1] (1.06953s)]]
-[[tommath_int][18.5522 (1.82661s)][8.00831 (2.15548s)][3.89737 (3.00797s)][5.38078 (4.71416s)][10.7885 (11.5386s)]]
+[[fixed_int][[*1] (0.0946529s)][[*1] (0.170561s)][[*1] (0.328458s)][[*1] (0.575884s)][[*1] (1.05006s)]]
+[[gmp_int][7.77525 (0.73595s)][4.39387 (0.749422s)][2.35075 (0.772122s)][1.51922 (0.874894s)][1.02263 (1.07382s)]]
+[[tommath_int][27.1503 (2.56986s)][12.8743 (2.19585s)][9.43965 (3.10053s)][8.24936 (4.75068s)][10.9719 (11.5211s)]]
+]
+[table Operator %(int)
+[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
+[[fixed_int][1.25034 (0.0425984s)][1.91617 (0.106226s)][2.02166 (0.195577s)][2.14437 (0.387067s)][2.23514 (0.776075s)]]
+[[gmp_int][[*1] (0.0340695s)][[*1] (0.0554367s)][[*1] (0.0967406s)][[*1] (0.180504s)][[*1] (0.347216s)]]
+[[tommath_int][42.8781 (1.46083s)][29.879 (1.65639s)][23.4323 (2.26685s)][19.932 (3.5978s)][25.0046 (8.682s)]]
]
[table Operator &
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.0028732s)][[*1] (0.00552933s)][[*1] (0.0125148s)][[*1] (0.020299s)][[*1] (0.034856s)]]
-[[gmp_int][16.3018 (0.0468383s)][9.51109 (0.05259s)][5.20026 (0.0650802s)][4.46545 (0.0906443s)][3.99377 (0.139207s)]]
-[[tommath_int][42.221 (0.121309s)][22.2471 (0.123011s)][11.3587 (0.142151s)][7.3475 (0.149147s)][11.4043 (0.397507s)]]
+[[fixed_int][[*1] (0.00298048s)][[*1] (0.00546222s)][[*1] (0.0127546s)][[*1] (0.01985s)][[*1] (0.0349286s)]]
+[[gmp_int][16.0105 (0.0477189s)][9.67027 (0.0528211s)][5.12678 (0.0653902s)][4.62316 (0.0917698s)][4.00837 (0.140007s)]]
+[[tommath_int][43.6665 (0.130147s)][23.8003 (0.130002s)][11.4242 (0.145711s)][7.83416 (0.155508s)][9.50103 (0.331858s)]]
+]
+[table Operator &(int)
+[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
+[[fixed_int][[*1] (0.00222291s)][[*1] (0.0035522s)][[*1] (0.0110247s)][[*1] (0.0154281s)][[*1] (0.0275044s)]]
+[[gmp_int][70.8538 (0.157502s)][42.1478 (0.149717s)][13.9023 (0.153268s)][10.3271 (0.159328s)][6.0529 (0.166481s)]]
+[[tommath_int][154.134 (0.342626s)][93.2035 (0.331077s)][31.9151 (0.351853s)][23.6515 (0.364899s)][22.0042 (0.605213s)]]
]
[table Operator *
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.0175309s)][[*1] (0.0388232s)][[*1] (0.123609s)][[*1] (0.427489s)][[*1] (1.46312s)]]
-[[gmp_int][2.93263 (0.0514117s)][1.70358 (0.0661383s)][1.01811 (0.125848s)][1.20692 (0.515943s)][1.03248 (1.51064s)]]
-[[tommath_int][3.82476 (0.0670515s)][2.87425 (0.111587s)][2.74339 (0.339108s)][2.26768 (0.969408s)][2.1233 (3.10664s)]]
+[[fixed_int][[*1] (0.0223481s)][[*1] (0.0375288s)][[*1] (0.120353s)][[*1] (0.439147s)][[*1] (1.46969s)]]
+[[gmp_int][2.50746 (0.0560369s)][1.76676 (0.0663044s)][1.06052 (0.127636s)][1.22558 (0.53821s)][1.03538 (1.52168s)]]
+[[tommath_int][3.00028 (0.0670506s)][2.97696 (0.111722s)][2.86257 (0.34452s)][2.26661 (0.995374s)][2.12926 (3.12935s)]]
+]
+[table Operator *(int)
+[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
+[[fixed_int][[*1] (0.00444316s)][[*1] (0.0135739s)][[*1] (0.0192615s)][[*1] (0.0328339s)][1.18198 (0.0567364s)]]
+[[gmp_int][4.57776 (0.0203397s)][1.79901 (0.0244196s)][1.32814 (0.025582s)][1.01453 (0.033311s)][[*1] (0.048001s)]]
+[[tommath_int][53.8709 (0.239357s)][18.3773 (0.249452s)][14.2088 (0.273682s)][14.0907 (0.462652s)][9.10761 (0.437175s)]]
]
[table Operator +
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.0031173s)][[*1] (0.00696555s)][[*1] (0.0163707s)][[*1] (0.0314806s)][[*1] (0.0596158s)]]
-[[gmp_int][12.7096 (0.0396194s)][5.89178 (0.0410395s)][2.66402 (0.0436119s)][1.59356 (0.0501664s)][1.11155 (0.0662662s)]]
-[[tommath_int][6.14357 (0.0191513s)][3.16177 (0.0220235s)][1.85441 (0.030358s)][1.45895 (0.0459287s)][1.26576 (0.0754591s)]]
+[[fixed_int][[*1] (0.0031291s)][[*1] (0.00703043s)][[*1] (0.0163669s)][[*1] (0.0326567s)][[*1] (0.0603087s)]]
+[[gmp_int][12.4866 (0.0390717s)][6.01034 (0.0422553s)][2.65628 (0.0434751s)][1.54295 (0.0503875s)][1.16477 (0.0702458s)]]
+[[tommath_int][6.03111 (0.018872s)][3.08173 (0.0216659s)][1.84243 (0.0301548s)][1.30199 (0.0425188s)][1.18909 (0.0717123s)]]
]
[table Operator +(int)
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.00329336s)][[*1] (0.00370718s)][[*1] (0.00995385s)][[*1] (0.0117467s)][[*1] (0.0233483s)]]
-[[gmp_int][9.56378 (0.031497s)][8.0588 (0.0298754s)][4.15824 (0.0413905s)][5.47974 (0.0643691s)][4.46265 (0.104195s)]]
-[[tommath_int][76.2624 (0.25116s)][71.3973 (0.264682s)][28.0238 (0.278945s)][25.9035 (0.304282s)][13.1635 (0.307346s)]]
+[[fixed_int][[*1] (0.00335294s)][[*1] (0.00376116s)][[*1] (0.00985174s)][[*1] (0.0119345s)][[*1] (0.0170918s)]]
+[[gmp_int][9.47407 (0.031766s)][8.44794 (0.0317741s)][4.23857 (0.0417573s)][5.40856 (0.0645488s)][6.31314 (0.107903s)]]
+[[tommath_int][67.0025 (0.224655s)][60.4203 (0.22725s)][25.1834 (0.2481s)][23.2996 (0.27807s)][17.1743 (0.293538s)]]
]
[table Operator -
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.00359417s)][[*1] (0.00721041s)][[*1] (0.0168213s)][[*1] (0.0323563s)][[*1] (0.061385s)]]
-[[gmp_int][10.6794 (0.0383836s)][5.65517 (0.0407761s)][2.63634 (0.0443466s)][1.59979 (0.0517632s)][1.13379 (0.0695978s)]]
-[[tommath_int][6.43615 (0.0231326s)][3.6161 (0.0260736s)][2.2585 (0.0379908s)][1.52006 (0.0491835s)][1.24231 (0.0762591s)]]
+[[fixed_int][[*1] (0.00339191s)][[*1] (0.0073172s)][[*1] (0.0166428s)][[*1] (0.0349375s)][[*1] (0.0600083s)]]
+[[gmp_int][12.5182 (0.0424608s)][5.57936 (0.0408253s)][2.78496 (0.0463496s)][1.48373 (0.051838s)][1.29928 (0.0779673s)]]
+[[tommath_int][7.00782 (0.0237699s)][3.69919 (0.0270677s)][2.29645 (0.0382195s)][1.39777 (0.0488346s)][1.28243 (0.0769566s)]]
]
[table Operator -(int)
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.00353606s)][[*1] (0.00577573s)][[*1] (0.0155184s)][[*1] (0.029385s)][[*1] (0.0586271s)]]
-[[gmp_int][9.04434 (0.0319814s)][5.12393 (0.0295945s)][2.50743 (0.0389112s)][2.01898 (0.0593277s)][1.68381 (0.098717s)]]
-[[tommath_int][60.2486 (0.213043s)][38.3032 (0.221229s)][15.8792 (0.24642s)][8.71166 (0.255992s)][4.85236 (0.28448s)]]
+[[fixed_int][[*1] (0.00250933s)][[*1] (0.00358055s)][[*1] (0.0103282s)][[*1] (0.0119127s)][[*1] (0.0176089s)]]
+[[gmp_int][12.093 (0.0303454s)][8.50898 (0.0304669s)][3.9284 (0.0405733s)][5.03037 (0.0599252s)][5.96617 (0.105058s)]]
+[[tommath_int][80.8477 (0.202873s)][57.8371 (0.207089s)][21.3372 (0.220375s)][23.526 (0.280258s)][14.793 (0.260488s)]]
]
[table Operator /
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.0973696s)][[*1] (0.260936s)][[*1] (0.845628s)][2.4597 (2.51371s)][6.21836 (7.93136s)]]
-[[gmp_int][7.66851 (0.74668s)][3.17732 (0.829077s)][1.05006 (0.887961s)][[*1] (1.02196s)][[*1] (1.27547s)]]
-[[tommath_int][18.3945 (1.79107s)][8.11201 (2.11671s)][3.49119 (2.95225s)][4.55727 (4.65733s)][9.06813 (11.5662s)]]
+[[fixed_int][[*1] (0.0991632s)][[*1] (0.172328s)][[*1] (0.309492s)][[*1] (0.573815s)][[*1] (1.06356s)]]
+[[gmp_int][7.81859 (0.775316s)][5.11069 (0.880715s)][2.93514 (0.908404s)][1.80497 (1.03572s)][1.21878 (1.29625s)]]
+[[tommath_int][18.0766 (1.79253s)][12.3939 (2.13582s)][9.80438 (3.03438s)][8.74047 (5.01541s)][10.8288 (11.517s)]]
+]
+[table Operator /(int)
+[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
+[[fixed_int][1.04098 (0.0443082s)][1.61317 (0.110308s)][2.18324 (0.229148s)][2.36331 (0.442167s)][2.45159 (0.866172s)]]
+[[gmp_int][[*1] (0.042564s)][[*1] (0.06838s)][[*1] (0.104957s)][[*1] (0.187096s)][[*1] (0.35331s)]]
+[[tommath_int][32.4072 (1.37938s)][23.7471 (1.62383s)][22.1907 (2.32908s)][19.9054 (3.72421s)][24.2219 (8.55783s)]]
]
[table Operator <<
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.0120907s)][[*1] (0.0129147s)][[*1] (0.0214412s)][[*1] (0.0249208s)][[*1] (0.0341293s)]]
-[[gmp_int][1.93756 (0.0234265s)][1.97785 (0.0255433s)][1.43607 (0.0307911s)][1.815 (0.0452311s)][2.00167 (0.0683156s)]]
-[[tommath_int][3.42859 (0.0414542s)][3.04951 (0.0393836s)][3.04202 (0.0652246s)][3.81169 (0.0949903s)][4.93896 (0.168563s)]]
+[[fixed_int][[*1] (0.0119095s)][[*1] (0.0131746s)][[*1] (0.0213483s)][[*1] (0.0247552s)][[*1] (0.0339579s)]]
+[[gmp_int][1.9355 (0.0230509s)][1.94257 (0.0255925s)][1.49684 (0.031955s)][1.79202 (0.0443618s)][2.0846 (0.0707887s)]]
+[[tommath_int][2.64273 (0.0314737s)][2.95612 (0.0389456s)][3.05842 (0.065292s)][3.79496 (0.0939451s)][4.82142 (0.163725s)]]
]
[table Operator >>
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.0064833s)][[*1] (0.00772857s)][[*1] (0.0186871s)][[*1] (0.0218303s)][[*1] (0.0326372s)]]
-[[gmp_int][4.212 (0.0273077s)][3.72696 (0.0288041s)][1.55046 (0.0289735s)][1.51403 (0.0330518s)][1.13695 (0.037107s)]]
-[[tommath_int][33.9418 (0.220055s)][29.104 (0.224932s)][13.8407 (0.258642s)][13.1488 (0.287043s)][15.1741 (0.495242s)]]
+[[fixed_int][[*1] (0.006361s)][[*1] (0.00880189s)][[*1] (0.0180295s)][[*1] (0.0220786s)][[*1] (0.0325312s)]]
+[[gmp_int][4.26889 (0.0271544s)][3.14669 (0.0276968s)][1.74396 (0.0314426s)][1.45928 (0.0322188s)][1.24596 (0.0405327s)]]
+[[tommath_int][39.4379 (0.250865s)][28.6225 (0.251932s)][16.4543 (0.296661s)][14.2167 (0.313884s)][15.5842 (0.506974s)]]
]
[table Operator ^
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.00287983s)][[*1] (0.00543128s)][[*1] (0.0125726s)][[*1] (0.019987s)][[*1] (0.034697s)]]
-[[gmp_int][14.938 (0.0430189s)][9.00973 (0.0489344s)][4.83803 (0.0608267s)][4.33359 (0.0866154s)][3.89518 (0.135151s)]]
-[[tommath_int][41.6898 (0.12006s)][22.4393 (0.121874s)][10.7513 (0.135172s)][7.2632 (0.145169s)][11.5765 (0.401671s)]]
+[[fixed_int][[*1] (0.00307714s)][[*1] (0.00538197s)][[*1] (0.0127717s)][[*1] (0.0198304s)][[*1] (0.0345822s)]]
+[[gmp_int][13.9543 (0.0429392s)][9.92785 (0.0534314s)][4.80398 (0.0613552s)][4.35864 (0.0864335s)][3.887 (0.134421s)]]
+[[tommath_int][41.5958 (0.127996s)][24.2396 (0.130457s)][11.3666 (0.145171s)][8.01016 (0.158845s)][9.84853 (0.340584s)]]
+]
+[table Operator ^(int)
+[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
+[[fixed_int][[*1] (0.00236664s)][[*1] (0.0035339s)][[*1] (0.0100442s)][[*1] (0.0155814s)][[*1] (0.0293253s)]]
+[[gmp_int][61.4272 (0.145376s)][41.6319 (0.147123s)][14.9744 (0.150405s)][9.64857 (0.150338s)][5.46649 (0.160306s)]]
+[[tommath_int][145.509 (0.344367s)][93.9055 (0.331853s)][35.0456 (0.352003s)][22.7371 (0.354275s)][19.1373 (0.561207s)]]
]
[table Operator str
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][1.03557 (0.00143356s)][1.39844 (0.00290281s)][3.14081 (0.0099558s)][6.28067 (0.0372769s)][13.2101 (0.188878s)]]
-[[gmp_int][[*1] (0.00138432s)][[*1] (0.00207575s)][[*1] (0.00316982s)][[*1] (0.00593518s)][[*1] (0.014298s)]]
-[[tommath_int][5.31194 (0.00735345s)][7.90724 (0.0164135s)][15.8581 (0.0502673s)][19.7526 (0.117235s)][26.6031 (0.380373s)]]
+[[fixed_int][[*1] (0.000465841s)][[*1] (0.00102073s)][[*1] (0.00207212s)][1.02618 (0.0062017s)][1.32649 (0.0190043s)]]
+[[gmp_int][2.83823 (0.00132216s)][2.17537 (0.00222046s)][1.46978 (0.00304557s)][[*1] (0.00604351s)][[*1] (0.0143268s)]]
+[[tommath_int][15.76 (0.00734164s)][15.9879 (0.0163193s)][21.7337 (0.0450349s)][19.7183 (0.119168s)][26.3445 (0.377431s)]]
]
[table Operator |
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[fixed_int][[*1] (0.00314803s)][[*1] (0.00548233s)][[*1] (0.0125434s)][[*1] (0.0198161s)][[*1] (0.034957s)]]
-[[gmp_int][13.0622 (0.0411201s)][8.63936 (0.0473638s)][4.6932 (0.0588688s)][4.25792 (0.0843755s)][3.78236 (0.13222s)]]
-[[tommath_int][38.5896 (0.121481s)][22.3609 (0.12259s)][10.9015 (0.136742s)][7.68521 (0.152291s)][11.6322 (0.406628s)]]
+[[fixed_int][[*1] (0.00295261s)][[*1] (0.00560832s)][[*1] (0.0127056s)][[*1] (0.0200759s)][[*1] (0.034651s)]]
+[[gmp_int][14.1091 (0.0416586s)][8.52475 (0.0478096s)][4.74593 (0.0602998s)][4.19694 (0.0842575s)][3.85525 (0.133588s)]]
+[[tommath_int][44.8889 (0.132539s)][25.2503 (0.141612s)][11.0488 (0.140382s)][7.39273 (0.148416s)][9.75809 (0.338127s)]]
+]
+[table Operator |(int)
+[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
+[[fixed_int][[*1] (0.00244005s)][[*1] (0.0040142s)][[*1] (0.00983777s)][[*1] (0.0155223s)][[*1] (0.0293444s)]]
+[[gmp_int][64.6148 (0.157663s)][34.5827 (0.138822s)][14.2764 (0.140448s)][10.3248 (0.160264s)][5.33565 (0.156572s)]]
+[[tommath_int][137.825 (0.3363s)][81.1074 (0.325581s)][34.8737 (0.343079s)][22.3727 (0.347276s)][18.912 (0.554963s)]]
]
[endsect]
[section:rational_performance Rational Type Performance]
[table Operator *
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[mpq_rational][[*1] (2.63854s)][[*1] (5.8426s)][[*1] (12.6093s)][[*1] (28.654s)][[*1] (68.7938s)]]
+[[mpq_rational][[*1] (2.67402s)][[*1] (5.89806s)][[*1] (12.7524s)][[*1] (29.4044s)][[*1] (70.1615s)]]
+]
+[table Operator *(int)
+[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
+[[mpq_rational][[*1] (1.15673s)][[*1] (1.20559s)][[*1] (1.25776s)][[*1] (1.41413s)][[*1] (1.59151s)]]
]
[table Operator +
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[mpq_rational][[*1] (1.47058s)][[*1] (3.18485s)][[*1] (6.67422s)][[*1] (15.5117s)][[*1] (37.688s)]]
+[[mpq_rational][[*1] (1.47032s)][[*1] (3.1599s)][[*1] (6.74926s)][[*1] (16.7334s)][[*1] (38.5739s)]]
]
[table Operator +(int)
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[mpq_rational][[*1] (0.654198s)][[*1] (0.66159s)][[*1] (0.705687s)][[*1] (0.817714s)][[*1] (0.925526s)]]
+[[mpq_rational][[*1] (0.639153s)][[*1] (0.683218s)][[*1] (0.725686s)][[*1] (0.895283s)][[*1] (0.931053s)]]
]
[table Operator -
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[mpq_rational][[*1] (1.47214s)][[*1] (3.1691s)][[*1] (6.67532s)][[*1] (15.6279s)][[*1] (37.6616s)]]
+[[mpq_rational][[*1] (1.48169s)][[*1] (3.17694s)][[*1] (6.75317s)][[*1] (15.9167s)][[*1] (39.0541s)]]
]
[table Operator -(int)
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[mpq_rational][[*1] (0.646626s)][[*1] (0.657712s)][[*1] (0.704919s)][[*1] (0.818141s)][[*1] (0.935657s)]]
+[[mpq_rational][[*1] (0.650283s)][[*1] (0.669498s)][[*1] (0.707811s)][[*1] (0.83232s)][[*1] (1.06134s)]]
]
[table Operator /
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[mpq_rational][[*1] (9.24093s)][[*1] (14.6415s)][[*1] (26.2341s)][[*1] (52.808s)][[*1] (119.722s)]]
+[[mpq_rational][[*1] (9.26405s)][[*1] (14.7461s)][[*1] (26.4939s)][[*1] (54.0641s)][[*1] (126.261s)]]
+]
+[table Operator /(int)
+[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
+[[mpq_rational][[*1] (1.40596s)][[*1] (1.42215s)][[*1] (1.50342s)][[*1] (1.62326s)][[*1] (1.7796s)]]
]
[table Operator str
[[Backend][64 Bits][128 Bits][256 Bits][512 Bits][1024 Bits]]
-[[mpq_rational][[*1] (0.00265152s)][[*1] (0.00372512s)][[*1] (0.00573139s)][[*1] (0.011966s)][[*1] (0.0288116s)]]
+[[mpq_rational][[*1] (0.00265159s)][[*1] (0.00363964s)][[*1] (0.00677698s)][[*1] (0.0117945s)][[*1] (0.0283447s)]]
]
[endsect]