[boost][decimal] Andrzej's review
Hi Everyone,

This is a review of the Boost.Decimal library.

My knowledge of decimal floats is very superficial. In my job I need to deal with prices in all the world's currencies (string<->numeric conversion and simple additions and scaling), and for that I use a home-brew fixed-point implementation.

I participated in the previous review, spent a couple of days studying the documentation, took a superficial look at the tests and small pieces of the implementation, ran simple examples, and filed bug reports.

I would like to thank Matt and Chris for the huge effort to write and maintain this library, and for undergoing the painstaking process of a Boost review. The community needs this library, and I am happy that it is proposed for Boost.

The nature of this library is very unusual: it implements types that belong in processors. We know it is needed for applications where accuracy and correctness are absolutely critical. Compared with other Boost libraries, it requires extreme measures for verifying correctness across the whole spectrum of values and cases. In a similar vein, in my opinion, it requires extremely detailed and comprehensive documentation. The bar should be higher than for an average Boost library.

In this context, I have some concerns. While playing with the library I found three corner cases where it produces incorrect results:

* https://github.com/cppalliance/decimal/issues/1091
* https://github.com/cppalliance/decimal/issues/1094
* https://github.com/cppalliance/decimal/issues/1119

Two are already fixed, and Matt has been very responsive, but I feel it is too easy for a novice to find bugs in the library after a couple of minutes' testing. It looks to me like the unit-test "sieve" is still too coarse. Some test files (like test_decimal32.cpp or test_decimal64_basis.cpp) are commented out. The fuzz tests for decimal comparisons in random_decimal32_comp.cpp seem to test only decimals created from integer values (no fractions). I know that other Boost candidate libraries pass with fewer tests, but I feel this one needs to have a lot of them. I am not even sure how one tests user-defined literals across the spectrum. Maybe the unit tests should be written outside of C++.

One can see that a lot of work went into preparing the documentation. It is helpful and already provides a lot of useful information, also about the background and ideas. It has a "Design Rationale" section, which reminds me of the high-quality documentation that the original Boost libraries used to have. But I still find it insufficient. The API reference for the key types, like `decimal32_t`, only lists the class members; there is no detailed description of the behavior of these members. What are the preconditions of the decimal constructors? Is any combination of values considered invalid? Or does it degrade nicely to subnormals and infinities? I think the contract of the library (how cohorts are treated in comparisons, and whether they are an implementation detail or observable) needs to be spelled out more clearly. Again, Matt has been very responsive, the documentation has improved a lot during the review period, and I know it will improve further.

Regarding the design, it looks like it has been done by IEEE, so there is not much to review here. I only had a short glimpse at the implementation, so I cannot say much. One thing that I found surprising is that operator<=> is implemented in terms of operators <, > and ==. Operator<=> was designed for class authors to go the opposite way: define operator<=> and you get all the others for free. So I wonder what the reason is for the current implementation.

I do not feel comfortable giving an accept or reject recommendation. I clearly have a higher acceptance bar for this library than for any other I have seen reviewed in Boost, and this may appear unfair.

Do I want to see this library in Boost? -- Yes. It is useful, and IEEE seems to agree. Are the authors the right people to implement it? -- Definitely! But I would feel more comfortable accepting it if it went through another review iteration. Could this be considered a conditional acceptance? As for the test-sieve issue I brought up, I do not think it is a question of writing more tests; it is a question of designing an approach to testing that would give a guarantee of correctness.

This is requesting a lot of work from the authors, who have already put a lot of effort into getting the library to where it is. It is huge (both the library and the effort)!

Again, this cannot be overstated. Matt and Chris: you did great work! Thank you! I know that sooner or later we will be able to use Boost.Decimal from the official distribution.

Also, thanks to John Maddock for managing the review.

Regards,
&rzej;
> This is a review of the Boost.Decimal library.
Thank you, Andrzej, for your in-depth, insightful review.
- Chris
> Two are already fixed, and Matt has been very responsive, but I feel it is too easy for a novice to find bugs in the library after a couple of minutes' testing. It looks to me like the unit-test "sieve" is still too coarse. Some test files (like test_decimal32.cpp or test_decimal64_basis.cpp) are commented out. The fuzz tests for decimal comparisons in random_decimal32_comp.cpp seem to test only decimals created from integer values (no fractions). I know that other Boost candidate libraries pass with fewer tests, but I feel this one needs to have a lot of them. I am not even sure how one tests user-defined literals across the spectrum. Maybe the unit tests should be written outside of C++.
Something we've learned from test design in Multiprecision is the value of iterative steps. As you note, many of the early tests seem trivial, because they are. The really aggressive numerical-correctness testing that finds bugs, both here and in Multiprecision, is running the special functions (e.g. the gamma functions). But yes, to your point, there are always more things, or more thorough ways, to test.
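As an illustration of the fractional-value coverage Andrzej asks about, a randomized comparison check might look roughly like the sketch below. This is hypothetical code, not the library's random_decimal32_comp.cpp: the <boost/decimal.hpp> header, the boost::decimal namespace, and the (coefficient, exponent) constructor are assumptions here, and the reference result is computed in exact 64-bit integer arithmetic.

```cpp
// Hypothetical sketch only: assumes the boost::decimal namespace, the
// <boost/decimal.hpp> convenience header, and a (coefficient, exponent)
// constructor for decimal32_t. Adjust to the library's actual API.
#include <boost/decimal.hpp>

#include <cassert>
#include <cstdint>
#include <random>

int main()
{
    using boost::decimal::decimal32_t;

    std::mt19937_64 rng {42};
    std::uniform_int_distribution<std::int64_t> coeff_dist {0, 9'999'999}; // up to 7 decimal digits
    std::uniform_int_distribution<int>          exp_dist   {-5, 5};        // keeps the reference in 64 bits

    for (int i = 0; i < 1'000'000; ++i)
    {
        const std::int64_t c1 = coeff_dist(rng);
        const std::int64_t c2 = coeff_dist(rng);
        const int          e1 = exp_dist(rng);
        const int          e2 = exp_dist(rng);

        const decimal32_t lhs(c1, e1);
        const decimal32_t rhs(c2, e2);

        // Exact reference: compare c1*10^e1 with c2*10^e2 by rescaling the
        // operand that has the larger exponent; products stay below 10^17.
        std::int64_t r1 = c1;
        std::int64_t r2 = c2;
        for (int e = e2; e < e1; ++e) { r1 *= 10; }
        for (int e = e1; e < e2; ++e) { r2 *= 10; }

        assert((lhs <  rhs) == (r1 <  r2));
        assert((lhs == rhs) == (r1 == r2));
        assert((lhs >  rhs) == (r1 >  r2));
    }

    return 0;
}
```

Keeping the coefficients within 7 digits and the exponents within a small window keeps the exact reference free of overflow, so any mismatch the assertions catch is a genuine comparison bug rather than a limitation of the reference.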
> Regarding the design, it looks like it has been done by IEEE, so there is not much to review here. I only had a short glimpse at the implementation, so I cannot say much. One thing that I found surprising is that operator<=> is implemented in terms of operators <, > and ==. Operator<=> was designed for class authors to go the opposite way: define operator<=> and you get all the others for free. So I wonder what the reason is for the current implementation.
The library is C++14 with optional support for C++20. Defining operator<=> in terms of the existing operators avoids implementing everything twice.

Thanks for your detailed review, bug reports, and doc issues.

Matt
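For readers unfamiliar with that trade-off, a minimal sketch of the pattern might look as follows. The toy_decimal type and its bit-wise comparison are placeholders, not the library's actual code; the point is only the layering: the C++14 operators carry the comparison logic, and operator<=> is an optional C++20 layer that reuses them.

```cpp
// Sketch of the pattern described above, using a hypothetical toy type.
// The C++14 operators hold the real comparison semantics; operator<=> is
// only compiled when the compiler supports it and simply reuses them.
#include <cstdint>

#if defined(__cpp_impl_three_way_comparison) && (__cpp_impl_three_way_comparison >= 201907L)
#  include <compare>
#  define TOY_HAS_SPACESHIP 1
#endif

struct toy_decimal
{
    std::uint32_t bits;  // placeholder encoding; irrelevant to the pattern
};

// C++14 path: in a real decimal type these would implement the IEEE 754
// comparison rules (NaN handling, cohorts, and so on).
inline bool operator==(toy_decimal lhs, toy_decimal rhs) { return lhs.bits == rhs.bits; }
inline bool operator!=(toy_decimal lhs, toy_decimal rhs) { return !(lhs == rhs); }
inline bool operator< (toy_decimal lhs, toy_decimal rhs) { return lhs.bits <  rhs.bits; }
inline bool operator> (toy_decimal lhs, toy_decimal rhs) { return rhs < lhs; }
inline bool operator<=(toy_decimal lhs, toy_decimal rhs) { return !(rhs < lhs); }
inline bool operator>=(toy_decimal lhs, toy_decimal rhs) { return !(lhs < rhs); }

#ifdef TOY_HAS_SPACESHIP
// Optional C++20 layer: reuse the operators above rather than duplicating
// the logic. partial_ordering leaves room for unordered results (NaNs).
inline std::partial_ordering operator<=>(toy_decimal lhs, toy_decimal rhs)
{
    if (lhs < rhs)  { return std::partial_ordering::less; }
    if (lhs > rhs)  { return std::partial_ordering::greater; }
    if (lhs == rhs) { return std::partial_ordering::equivalent; }
    return std::partial_ordering::unordered;
}
#endif
```

The C++20-first approach (write operator<=> and ==, let the compiler synthesize <, >, <=, >=) would indeed be shorter, but it would leave a C++14-only build with no comparison operators at all, which presumably is the motivation for the direction taken here.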
> The really aggressive numerical-correctness testing that finds bugs, both here and in Multiprecision, is running the special functions (e.g. the gamma functions)
All of specfun, actually.

Yes. Boost.Math's special function tests will pick up edge cases.

But I would definitely place this lofty goal for Decimal in the area of library evolution instead of immediate goals.

I recently ran a new Multiprecision backend through specfun and it was very helpful, but it took a year or more. Admittedly, Matt tends to work much faster than I do. But running a new type through specfun requires a dedicated effort.
- Chris
participants (3)
- Andrzej Krzemienski
- Christopher Kormanyos
- Matt Borland