The second review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 15th Oct.

You will find the documentation here: https://develop.decimal.cpp.al/decimal/overview.html

And the code repository is here: https://github.com/cppalliance/decimal/

Boost.Decimal is an implementation of IEEE 754 (https://standards.ieee.org/ieee/754/6210/) and ISO/IEC DTR 24733 (https://www.open-std.org/JTC1/SC22/WG21/docs/papers/2009/n2849.pdf) Decimal Floating Point numbers. The library is header-only, has no dependencies, and requires C++14.

This re-review is happening because the original review had an indeterminate result (see my summary at https://lists.boost.org/archives/list/boost@lists.boost.org/message/2Q7UBOTE...) and resulted in my filing a number of issues against the library based on the reviewers' comments. Now that Matt and Chris have been busy fixing these and addressing the reviewers' concerns, the library is back for a second look; see https://github.com/cppalliance/decimal/issues?q=is%3Aissue%20state%3Aclosed%... for a complete list of issues addressed from the last review.

I hope both the original reviewers and new ones will come back for a second look. Please feel free to amend your original review, or, if you're starting from scratch, please provide feedback on the following general topics:

- What is your evaluation of the design?
- What is your evaluation of the implementation?
- What is your evaluation of the documentation?
- What is your evaluation of the potential usefulness of the library?
- Do you already use it in industry?
- Did you try to use the library? With which compiler(s)? Did you have any problems?
- How much effort did you put into your evaluation? A glance? A quick reading? In-depth study?
- Are you knowledgeable about the problem domain?

Ensure to explicitly include with your review: ACCEPT, REJECT, or CONDITIONAL ACCEPT (with acceptance conditions).

Best, John Maddock (review manager).
In addition to the issues labeled "Boost Review" (https://github.com/cppalliance/decimal/issues?q=is%3Aissue%20state%3Aclosed%20label%3A%22Boost%20Review%22), here's a summary of the activities from the last 9 months.

Breaking Changes:

- Based on bitwise comparisons with other similar libraries and database software, we have changed the internal encoding of our IEEE 754-compliant types.
- We spent about 3 months optimizing the back-end integer types that are now used throughout the library, and as the internals of decimal128_t.
- We have changed the type names to better match conventions: `decimalXX` is now `decimalXX_t`, and `decimalXX_fast` is now `decimal_fastXX_t`.
- The headers have been renamed to match (e.g. decimal32.hpp -> decimal32_t.hpp), and, based on feedback in the review, they can now be used independently instead of requiring the monolithic header.
- Constructors have been simplified to reduce confusion (no more double-negative logic).
- The default rounding mode has changed to align with IEEE 754, and rounding bugs have been squashed across the other modes as well.

Other Changes:

- The documentation content has been overhauled, thanks to feedback from Peter Turcan and others during the first review.
- The docs are no longer a single long page of Asciidoc; we have moved to Antora (https://develop.decimal.cpp.al/). Thanks to Joaquín and Christian for making it trivial to copy the setup from Unordered.
- We now support formatting with {fmt}.
- Benchmarks have been expanded to include GCC `_DecimalXX` types and Intel's libbid. I think people should be pleased with the results now, since performance was a huge point of contention at the end of the review.
- We have added support for CMake package config for ease of use.

Matt
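For reviewers porting code from the first review, the renames map roughly like this (the decimal64 header name below is extrapolated from the decimal32 example above, so double-check it against the docs):

    // types
    decimal64        ->  decimal64_t
    decimal64_fast   ->  decimal_fast64_t

    // headers, now usable independently
    #include <boost/decimal/decimal64_t.hpp>  // instead of only <boost/decimal.hpp>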
On Mon, 6 Oct 2025 at 17:47, John Maddock via Boost <boost@lists.boost.org> wrote:
Thank you for the update, and thank you for the authors' efforts to deliver the second incarnation of the library. A couple of quick questions.

Regarding ISO/IEC DTR 24733: shall we treat Boost.Decimal as a literal implementation of that TR down to the last detail, or rather as one that aims to be as close as reasonable, but no closer? For instance, N2849 has a different initialization interface for the decimal types.

From the documentation on rounding modes (https://develop.decimal.cpp.al/decimal/examples.html#examples_rounding_mode), I cannot figure out how I actually get to control it for my operations. Do I just create the object, and as long as it is in scope all my operations have the altered mode? Or do I need to pass it to operations? If the former, does it have all the problems that globals have, for instance the interactions with coroutines, where the same lexical scope can start on one thread and finish on another? Either way, documentation on this aspect of the rounding mode is missing.

The link points to the branch `develop` on GitHub. Is it the branch `develop` that is the subject of the review?

Regards, &rzej;
Thank you for the update, and thank you for the authors' efforts to deliver the second incarnation of the library.
Thank you for taking another look. Your comments and issues the first time around were helpful.
Regarding ISO/IEC DTR 24733: shall we treat Boost.Decimal as a literal implementation of that TR down to the last detail, or rather as one that aims to be as close as reasonable, but no closer? For instance, N2849 has a different initialization interface for the decimal types.
Consider ISO/IEC DTR 24733 to be a starting point. It was written in 2009, so much has progressed since then. For example, I don't see the need to offer the make_decimalXX or decimalXX_to_* functions, since the classes already provide those facilities. The discussion of required functions, rounding modes, etc. is derived from IEEE 754, so that is absolutely valid (and more readable than the IEEE version).
From the documentation on rounding modes (https://develop.decimal.cpp.al/decimal/examples.html#examples_rounding_mode), I cannot figure out how I actually get to control it for my operations. Do I just create the object, and as long as it is in scope all my operations have the altered mode? Or do I need to pass it to operations? If the former, does it have all the problems that globals have, for instance the interactions with coroutines, where the same lexical scope can start on one thread and finish on another? Either way, documentation on this aspect of the rounding mode is missing.
There's a global rounding mode flag[1], set at compile time, that you can query with fegetround() and set with fesetround(rounding_mode)[2][3]. These are globals and in theory will have the same problems you described. In practice, binary floating point has the same issues, because you are reading/writing an FPU flag. Unlike binary floating point, we do let you set the initial rounding mode via a macro. This was one of the concerns in the constexpr <cmath> paper, where you could get different results because of divergent rounding modes between compile time and run time [4].
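To make that concrete, here is a minimal save/set/restore sketch against the cfenv-style interface in [3]; the enumerator name is illustrative rather than taken from the docs:

    #include <boost/decimal.hpp>

    void accumulate_rounded_up()
    {
        using namespace boost::decimal;

        const auto old_mode = fegetround();        // query the current global mode
        fesetround(rounding_mode::fe_dec_upward);  // illustrative enumerator name
        // ... decimal arithmetic that should use the altered mode ...
        fesetround(old_mode);                      // restore before leaving scope
    }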
The link points to the branch `develop` on GitHub. Is it the branch `develop` that is the subject of the review?
Yes, the docs match develop, and the website is automatically updated on every documentation change committed to develop.

Matt

[1] https://github.com/cppalliance/decimal/blob/develop/include/boost/decimal/cf...
[2] https://en.cppreference.com/w/cpp/numeric/fenv/feround.html
[3] https://develop.decimal.cpp.al/decimal/cfenv.html
[4] https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2021/p0533r9.pdf
On Tue, 7 Oct 2025 at 07:50, Matt Borland <matt@mattborland.com> wrote:
Yes, the docs match develop, and the website is automatically updated on every documentation change committed to develop.
I can see changes being committed to branch `develop`. I do not think this is desirable. I think the shape of the library (source code, docs) should remain frozen for the time of the review. All the reviewers need to be able to look at and talk about the same thing. Either provide a branch dedicated for the review, or refrain from doing changes to `develop` during the review period. Regards, &rzej;
The only changes have been for clarity: you asked for an improved error message, and someone on Reddit asked for clarification on underflow/overflow of constructors to be added to the Docs. I'm fine with freezing the branch, but I thought it would make sense to address clarity issues along the way so they don't confuse potentially multiple reviewers when it could be dealt with on the spot. Matt
I recall that this was brought up as an issue in other reviews. I personally do not mind this. (In fact, your changes helped me today when I made the same programming mistake as yesterday and got a cleaner response.) If everyone is ok with it, then I withdraw my concern. Regards, &rzej;
- What is your evaluation of the design?
There seems to be no way to construct a boost.decimal from a string representing a decimal number. Is that correct, or have I missed it?

Without a constructor or conversion mechanism from string, when using decimal to store numbers received off the wire, one would either have to round-trip through (lossy) strtod or write a conversion function, which is tiresome. Almost all financial systems (an obvious candidate for a class such as this) spend most of their time converting decimals to strings and strings to decimals. I see that serialisation is covered by {fmt} integration, but what about parsing?

R
On Wednesday, October 8th, 2025 at 9:53 AM, Richard Hodges via Boost <boost@lists.boost.org> wrote:
There seems to be no way to construct a boost.decimal from a string representing a decimal number.
The best way to do this (and how current users do it) is via the <charconv> functions[1]. There are also literals if you prefer to go that route [2].
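For other reviewers, a minimal parsing sketch against the <charconv>-style interface in [1], assuming it mirrors std::from_chars (a from_chars_result with ptr/ec members):

    #include <boost/decimal.hpp>
    #include <string_view>
    #include <system_error>

    boost::decimal::decimal64_t parse_price(std::string_view s)
    {
        boost::decimal::decimal64_t d {};
        const auto r = boost::decimal::from_chars(s.data(), s.data() + s.size(), d);
        if (r.ec != std::errc())
        {
            // parse failure: reject the input, log, etc.
        }
        return d;
    }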
Almost all financial systems (an obvious candidate for a class such as this) spend most of their time converting decimals to strings and strings to decimals. I see that serialisation is covered by {fmt} integration, but what about parsing?
Does {fmt} even support parsing beyond parsing of the format context? If it does, I am not aware of it.

Matt

[1] https://develop.decimal.cpp.al/decimal/charconv.html
[2] https://develop.decimal.cpp.al/decimal/literals.html
On Wed, 8 Oct 2025 at 10:09, Matt Borland <matt@mattborland.com> wrote:
The best way to do this (and how current users do it) is via the <charconv> functions[1]. There are also literals if you prefer to go that route [2].
This is a terrible user experience, and the opposite of "make simple things simple". Why not just have a constructor that constructs the decimal from a string? I noted the literals, but these are irrelevant as the pain will come when accepting text inputs and building decimals from them. As things stand I think boost.multiprecision remains a better solution because of this one feature. I appreciate that John's intent is to implement the standard verbatim, but this has frankly never been a good idea.
Does {fmt} even support parsing beyond parsing of the format context? If it does, I am not aware of it.
I did not mean to imply that {fmt} includes parsing. Merely that serialisation is covered by {fmt}, but that parsing is not covered (other than by the from_chars complication).
This is a terrible user experience, and the opposite of "make simple things simple". Why not just have a constructor that constructs the decimal from a string?
The main reason there isn't a string constructor is that there's been no demand for one. Would you want the behavior of a theoretical constructor to match the behavior of chars_format::general, much like from_chars without a specified format? That would be nothing more than a thin wrapper, which is easy to implement.
I did not mean to imply that {fmt} includes parsing. Merely that serialisation is covered by {fmt}, but that parsing is not covered (other than by the from_chars complication).
Something that makes from_chars slightly easier (and is divergent from the STL) is that, by popular demand both here and in Boost.Charconv, from_chars accepts std::string and std::string_view in addition to a pointer pair.

Matt
On Wed, Oct 8, 2025 at 11:57 AM Matt Borland via Boost <boost@lists.boost.org> wrote:
The main reason there isn't a string constructor is that there's been no demand for one. Would you want the behavior of a theoretical constructor to match the behavior of chars_format::general, much like from_chars without a specified format? That would be nothing more than a thin wrapper, which is easy to implement.
If you are talking about current users of the library, it may be true that no demand exists, but I did bring that up in the original review <https://listarchives.boost.org/Archives/boost/2025/01/259117.php>. Not that my opinion is super important, but Richard is not the only person to bring this up. Copy/paste from my original review:

API Usability: Missing from-string/char-array constructor is problematic. I find this API quite natural to use, e.g.

    let from_string = Decimal::from_str("1.1").unwrap();
This is a terrible user experience, and the opposite of "make simple things simple". Why not just have a constructor that constructs the decimal from a string?
While having the ability to convert from string to a decimal is a valid expectation, I do not think it is justified to require that this must be done via a constructor. If nothing else, the author may want to keep the headers smaller and not couple the representation with string parsing.

You may consider from_chars too clumsy, but how do you propose to signal parsing failures, given that this library is designed to service environments without exceptions?

Regards, &rzej;
If this constructor is to follow the behavior of charconv, I think something like this could work:

    decimal_t(const std::string& str)
    {
        decimal_t x {};
        const auto r {from_chars(str, x)};
        if (!r)
        {
            *this = decimal_t{NAN}; // parse failure -> NaN
        }
        else
        {
            *this = x;
        }
    }

No exceptions is definitely a large concern as I know a number of the users run exception free environments.

Matt
Please don't. Constructors fail by throwing, not by silently constructing arbitrary values.
No exceptions is definitely a large concern as I know a number of the users run exception free environments.
People who don't want exceptions wouldn't use the constructor.
You can disable those lines of code if exceptions are disabled, or replace the exception with a terminate. Or it can just live in a separate header, so no-exception environments simply can't include that one.

I agree that the library could use a simple to-and-from conversion. The charconv interface is great to have, but is clumsy to use. I think a one-line conversion operation would be great, although the constructor might not be the right place. After all, `double("3.142")` isn't valid C++ (Richard might say "well, it should be"). But adding `std::string to_string(decimalX_t)` and `decimalX_t stodX(const std::string&)` (or string_view) wrapper functions for charconv probably wouldn't hurt anyone. Either of those would be fine for me:

    decimal64_t dec("3.142");
    // or
    auto dec = stod64("3.142");

And it's fine if these functions throw; non-exception environments just can't use them.
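One possible shape for the throwing wrapper, layered over the library's from_chars (the function itself is hypothetical, not something the library provides):

    #include <boost/decimal.hpp>
    #include <stdexcept>
    #include <string>
    #include <system_error>

    // Hypothetical stod-style wrapper: throws on failure like std::stod.
    boost::decimal::decimal64_t stod64(const std::string& s)
    {
        boost::decimal::decimal64_t d {};
        const auto r = boost::decimal::from_chars(s.data(), s.data() + s.size(), d);
        if (r.ec != std::errc())
        {
            throw std::invalid_argument("stod64: invalid decimal string: " + s);
        }
        return d;
    }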
That basically already exists with the strtod functions:

    decimal64_t dec = strtod64("3.142", nullptr);

This also handles locales like you would expect, which is useful if you have for example German formatting of currency.

Matt
This also handles locales like you would expect, which is useful if you have for example German formatting of currency.
And not useful at all if you don't. :-) Most of the time, the input values are in the C locale. <cstdio> functions being locale-aware has caused many programmer-years of trouble. E.g. your example may produce 3142 because the current locale is Italian. This is very rarely accurately described as "handles locales as you would expect". In the typical case where the input comes over the wire, it's never a good idea to assume that the sender has magically consulted your current locale and tailored the output to it. TL;DR being locale-aware by default causes many more problems than it solves (if it has ever solved any), and <cstdio> is basically useless if your locale isn't guaranteed "C".
The best way to do this (and how current users do it) is via the <charconv> functions[1].
I'd like it to allow a leading plus sign. Wouldn't it be more convenient to use the `strtod` family (e.g. `strtod64`) instead of `from_chars`? Regards, Michel
I think Peter covers the argument against that above [1]. If we use issues as a proxy for how much a feature is used, <charconv> followed by {fmt} are used far more extensively than the strtod family.

Matt

[1] https://lists.boost.org/archives/list/boost@lists.boost.org/message/SK43DRVG...
I've seen that all classes (decimal32_t, decimal64_t, and decimal128_t) can be implicitly constructed from an integer, but not from a floating-point number. What is the reason for this design choice? And if there is one, wouldn't it be better to explain it in the documentation?

Best regards, LoS
We discuss the rationale for the absence of any operation besides explicit conversion between binary floating point and decimal floating point on the design decisions page [1]. I don't entirely remember why integers are allowed implicit conversion, but that should probably be made explicit too. All the normal operators between decimal floating point and integers are allowed because the results should not be surprising, unlike with binary floats.

Matt

[1] https://develop.decimal.cpp.al/decimal/design.html
I do not understand why construction from an integer should be explicit for types that can perfectly represent every value of that integer type. I do understand that decimal32 can *not* represent all values of int64 without rounding, so implicit is bad there. But if somebody wants to construct decimal128 from int64, or decimal64 from uint32_t, do we really want to burden that user with spam? I know conditional explicit requires C++20 for a clean solution, but other, less nice solutions <https://devblogs.microsoft.com/cppblog/c20s-conditionally-explicit-constructors/> are possible in older standards. So I strongly suggest allowing implicit construction when no rounding can occur.
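For illustration, the pre-C++20 version of the trick is a pair of SFINAE-constrained constructor overloads. A toy sketch, where the size-based condition merely stands in for the library's real "no rounding possible" test:

    #include <type_traits>

    struct toy_decimal
    {
        // Implicit when the conversion can never round (stand-in condition).
        template <typename Int, std::enable_if_t<(sizeof(Int) < 8), bool> = true>
        toy_decimal(Int) {}

        // Explicit when some values could round.
        template <typename Int, std::enable_if_t<(sizeof(Int) >= 8), bool> = true>
        explicit toy_decimal(Int) {}
    };

    toy_decimal a = 42;       // OK: implicit from int
    // toy_decimal b = 42LL;  // error: long long needs explicit conversion
    toy_decimal c {42LL};     // OK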
I think Peter covers the argument against that above [1]. If we use issues as a proxy for how much a feature is used, <charconv> followed by {fmt} are used far more extensively than the strtod family.
Ah, right! What I really want is a `from_chars` that allows a leading plus sign. Regards, Michel
What about something like the following:

    decimal64_t dec = make_decimal64(const std::string&)

or

    decimal64_t dec = make_decimal64(std::string_view)

This could assume the C locale like from_chars to avoid strtod surprises, be more ergonomic than from_chars, and allow a leading "+" like strtod.

Matt
How do you report failure? Shouldn't it return an expected-like type for that purpose?

Regards, Julien
I maintain NaN is the way to go for reporting failure here, and this way I won't be scolded for returning it from a constructor. NaN is designed to represent undefined or unrepresentable values. I would argue that an unparseable or otherwise bad string is an undefined/unrepresentable value, and thus a NaN.

Matt
There are at least two reasons to not do that. First, NaN is a possible legitimate return value, so you have no way to reliably determine failure. Second, it makes it very easy to forget to check for errors. This is not the same as 0.0 / 0.0 returning NaN. We do that for the practical reason that we don't want, after several hours of computation, element 17108 out of 1048576 in total causing the loss of all the results because of a floating point exception terminating the process. This constructor / factory function is used to convert _input_ values, at the outer perimeter, and should therefore validate them.
I fully agree; libraries should avoid atoi-style problems <https://stackoverflow.com/questions/1640720/how-do-i-tell-if-the-c-function-atoi-failed-or-if-it-was-a-string-of-zeros>. But users are also often averse to error checking, for verbosity reasons. Maybe (I am not sure) a nice API would be to return NaN on parse failure (for people "certain" the input is fine), but also provide an overload that takes an error code as an out argument? I personally prefer an optional-returning API, but those are not that common in C++, the library is C++14, it may be slower... so I will not suggest that. :)
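For concreteness, the suggested API shape might look like this (entirely hypothetical; make_decimal64 is only the name floated earlier in the thread):

    #include <string_view>
    #include <system_error>
    #include <boost/decimal.hpp>

    // Returns NaN on parse failure, for callers "certain" the input is fine.
    boost::decimal::decimal64_t make_decimal64(std::string_view s);

    // Sets ec on failure, for callers that must detect errors without exceptions.
    boost::decimal::decimal64_t make_decimal64(std::string_view s, std::error_code& ec);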
I maintain NaN is the way to go for reporting failure here, and this way I won't be scolded for returning it from a constructor. NaN is designed to represent undefined or unrepresentable values. I would argue that an unparseable or otherwise bad string is an undefined/unrepresentable value, and thus a NaN.
That makes sense, but it also loses the information about why the parsing failed. Given that boost::from_chars in Charconv had to diverge from the standard to be usable by Boost.JSON for precisely that reason (handling different ERANGE errors, IIRC), I think there's definitely some value in not losing that information.

Regards, Julien
Matt Borland wrote:
What about something like the following:
decimal64_t dec = make_decimal64(const std::string&)
or decimal64_t dec = make_decimal64(std::string_view)
I think that's fine. If that function behaves like the UDL operator (assuming this will be modified to accept a leading plus), it would be convenient as a lightweight conversion function for casual use. As for naming, considering consistency with the bit conversion functions could be one possible idea. Regards, Michel
On Wed, Oct 8, 2025 at 9:04 PM Matt Borland wrote:
<charconv> followed by {fmt}
Does the default format `{}` output ("shortest") round-trip representation, like float and double? It'd be clearer if this is written in the documentation. Also, it seems that round-tripping sometimes fails with the current implementation; `std::println("{}", "0.99999999"_DD);` outputs `1` instead of `0.99999999`. Regards, Michel
The format "{}" is the general format with 6 digits of precision, so 1 would be correct. I'll update the docs accordingly.

Matt
That's not what {} does by convention. It should output the shortest round-trip representation.
That's my mistake then, and I will fix it accordingly. The issue is up on the tracker.

Matt
That probably means round-tripping of values (i.e. ignoring cohorts). It would be nice to support a cohort-aware formatter. For example, "%Da" in C23 printf provides a round-trip representation that distinguishes cohorts (though not the shortest one). Another cohort-related topic: Finite values have cohorts. Additionally, zero has both +0 and -0, and infinity has multiple bit representations. So we cannot hash decimal FP numbers simply by delegating to hash functions for unsigned integer types. We may need to canonicalize numbers before hashing. Regards, Michel
In the general case should an entire equivalence class hash to the same value? I think no. Hash functions are designed to operate on bytes, and the members of an equivalence class are not bit-wise equal. We offer a normalization function[1] for those who want to remove these effects[2].

Matt

[1] https://develop.decimal.cpp.al/decimal/cmath.html#cmath_normalize
[2] https://en.wikipedia.org/wiki/Hash_function#Data_normalization
See note 1 on https://eel.is/c++draft/hash.requirements

Equal values MUST produce the same hash value; if two values of a given type that compare equal could produce different hash values, that would render such a type effectively unusable as a key in hash-based associative containers.
On 12 Oct 2025, at 11:18, Peter Dimov via Boost <boost@lists.boost.org> wrote:
Matt Borland wrote:
In the general case should an entire equivalence class hash to the same value?
If they compare equal, yes. That's a hash function requirement.
I will make the changes. https://github.com/cppalliance/decimal/issues/1120 Matt
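A sketch of what the fix could look like, assuming normalize() from [1] maps every member of an equivalence class to a single canonical member; the bit-extraction helper name (to_bid) is an assumption standing in for whichever of the library's bit-conversion functions applies:

    #include <boost/decimal.hpp>
    #include <cstdint>
    #include <functional>

    struct decimal64_hash
    {
        std::size_t operator()(boost::decimal::decimal64_t d) const
        {
            // Canonicalize first so all members of a cohort hash identically;
            // -0/+0 and non-canonical infinities may need extra special-casing.
            const auto canonical = boost::decimal::normalize(d);
            const std::uint64_t bits = boost::decimal::to_bid(canonical); // assumed helper
            return std::hash<std::uint64_t>{}(bits);
        }
    };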
On Sun, 12 Oct 2025 at 11:49, Michel Morin via Boost <boost@lists.boost.org> wrote:
That probably means round-tripping of values (i.e. ignoring cohorts). It would be nice to support a cohort-aware formatter. For example, "%Da" in C23 printf provides a round-trip representation that distinguishes cohorts (though not the shortest one).
Hashing and equality aside, do users need to know which cohort-member is used for representation while formatting the value? Why would you like to know this? I imagine that this could be left as an implementation detail. This has performance implications, but the contract of the types should be talking about the performance trade-offs (like between fast and non-fast implementations) directly, not cohorts. Regards, &rzej;
Andrzej Krzemienski wrote:
Hashing and equality aside, do users need to know which cohort-member is used for representation while formatting the value? Why would you like to know this?
The decarith ("decimal arithmetic specification"), at https://speleotrove.com/decimal/, discusses this in Appendix B "Design concepts":
For example, people expect trailing zeros to be indicated conventionally in a result: the sum 1.57 + 2.03 is expected to result in 3.60, not 3.6; however, if the positional information has been lost during the operation it is no longer possible to show the expected result. For some applications the loss of trailing zeros is materially significant.
IEEE 754 also requires this (IEEE 754-2019, Clause 5.12.2). My own use case is debugging. Regards, Michel
Precision and cohort retention are mutually exclusive, so to_chars could be extended with something to the effect of:

    enum class quantum_retention { on, off };

    to_chars_result to_chars(char* first, char* last, Decimal value,
                             chars_format fmt, quantum_retention retention_mode);

Then the D format parameter could be an inherited decision from C, and be used with {fmt}/<format> since the quoted clause specifies this as a language level decision for implementation.

Matt
Are there also IEEE 754 requirements or recommendations on what exponent a resulting decimal should have when you are adding decimals with different exponents? Regards, &rzej;
On Sun, 12 Oct 2025 at 17:32, Matt Borland <matt@mattborland.com> wrote:
Yes, Section 5.2 "Decimal Exponent Calculation" is what you're looking for.
Thank you. Anyway, I do not have access to IEEE 754. (Does anyone know if there are publicly available versions with similar content, like the drafts of the C++ standard?) I do have access to decarith. It makes sense to expect and provide the behavior where the default textual representation preserves precision: this is compatible with the main goal of the type.

Precision and cohort retention are mutually exclusive

Can you explain why? Anyway, this is such an important topic in the context of decimal types that it deserves an entry in the design decisions of the library. Whatever the decision is, it needs to be explicit.

Regards, &rzej;
The easy answer is that IEEE 754 section 5.12.2 says so. Consider decimal32_t, which has precision 7. Imagine you specify to_chars with a precision of 2 and cohort retention, but the value is in a cohort that needs 4 digits. What would you print?

Matt
Hmm, I may have misunderstood the previous conversation. So let me present the use case that I have in mind.

I need to represent prices in national currencies, and different currencies have a different number of "cents" in a "dollar". US Dollars have 100 cents in a dollar, so I need to reflect this by always displaying two digits after the decimal point, even if they are zeros. Japanese Yen uses no equivalent of "cents", so I need to display it without the decimal separator or the fractional part. An Iraqi Dinar is composed of 1000 fils, so I need to always display three decimal places. My numeric values do not have to know the currency they represent, but they need to know the magnitude of the currency's subunit. They can obtain this number during initialization.

Until today, I thought that my use case could not be handled by a decimal float. But I now hear that it could be a valid use case. I am looking for the following behaviour:

    Decimal d{100, -2}; // "1.00"
    cout << d;          // no formatting flags

I expect this to display "1.00": I provided enough information in the initialization that the right cohort should be used, so that the exact "1.00" can be recreated during printing.

My question: is the above a use case intended or possible to be handled by Boost.Decimal?

Irrespective of the answer, if there are any justified use cases for observing different members of a cohort, then we have an existential conflict. According to a popular concept of "regular", if two objects compare equal then any (regular) function using either of them should return the same result. Type `double` already breaks this for -0 and +0. Cohorts look like another such breakage.

Regards, &rzej;
Andrzej Krzemienski wrote:
Irrespective of the answer, if there are any justified use cases for observing different members of a cohort, then we have an existential conflict. According to a popular concept of "regular", if two objects compare equal then any (regular) function using either of them should return the same result. Type `double` already breaks this for -0 and +0. Cohorts look like another such breakage.
+0 and -0 only "break" this if you divide by zero. Cohorts don't "break" it ever, I think.
I think in your case, where you never want the decimal point to move, fixed-point arithmetic is the solution. Theoretically, you could perform a decimal operation like multiplication where you would need additional decimal places to represent the fraction.

Matt
This looks like it intersects with localization rather than common representation. Different countries use different formats for their numbers and currencies. But this only matters at the input/output boundary and should be handled there. This is the same as with e.g. float values: you have to specify somehow how you want them displayed. In this case I could also imagine that a solution like the following is better suited:

    cout << as_dollar(d); // $1.00
    cout << as_yen(d);    // 1Y
    cout << as_dinar(d);  // 1.000D

Or a strong type `Dollar` containing a `decimal_t`, which has the benefit of not mixing calculations on different currencies.

TL;DR: For specific output you need format flags, as with other number types, which I find reasonable.
Matt Borland wrote:
Then the D format parameter could be an inherited decision from C, and be used with {fmt}/<format> since the quoted clause specifies this as a language level decision for implementation.
`D` specifies `_Decimal64`, `H` specifies `_Decimal32`, and `DD` specifies `_Decimal128`. The `a` specifier for decimal floating-point types indicates quantum-preserving form and stands for "actual" (WG14's n1247). So, `a` might be a better fit for a {fmt}/<format> specifier. Regards, Michel
a and A are already reserved for hexfloats. q and Q (as in quantum-preserving) or c and C (as in cohort-preserving) are other options.

Matt
One might argue that the (a)ctual representation for binary floats is the hex one, because it accurately represents what's in the bits. But hex makes significantly less sense for decimal floats; for them, the (a)ctual representation is decimal. And if printf uses %a for decimal floats for cohort preserving, it wouldn't make much sense for std::format to do something entirely different. If you want to force hex for some reason there's always `x`, but I doubt anyone would find that particularly useful.
I bring it up because, per {fmt}'s "available presentation types for floating-point values", they don't have x for hexfloat and a for actual like printf; they only have a for hexfloat. Yes, legally I can inject whatever I want into the fmt namespace, but I would rather strictly add to existing meanings rather than re-interpret them.

Matt
I don't understand what's there to inject. `a` for binary floats in libfmt and C++20 prints what %a in printf prints for binary floats. It prints binary floats in hex because that's what printf does. From that it in no way follows that it should print decimal floats in hex, entirely unlike what printf does.
John Maddock wrote:
Boost.Decimal is an implementation of IEEE 754 <https://standards.ieee.org/ieee/754/6210/> and ISO/IEC DTR 24733 <https://www.open-std.org/JTC1/SC22/WG21/docs/papers/2009/n2849.pdf> Decimal Floating Point numbers.
For a Boost-qualified library that implements IEEE 754 decimal FP, I expect "the basics to work correctly." To achieve that, I believe being rigorously tested is essential. As a baseline, I'd suggest running the dectest (for decimal32, 64, 128) at https://speleotrove.com/decimal/, although its tests for decimal32 and for BID encoding are weak and would need to be supplemented. I've tried running some of those tests (`ddAdd.decTest`, `ddEncode.decTest`, ....) myself, and some of the test cases failed. Regards, Michel
Michel Morin wrote:
As a baseline, I'd suggest running the dectest (for decimal32, 64, 128) at https://speleotrove.com/decimal/
decTest also indicates which status flags are expected to be raised (though not all of them correspond to the IEEE 754 exception flags). I might just be missing something, but it seems that Boost.Decimal doesn't support FP exceptions. There may not be enough use cases for FP exceptions, but their support is mandatory for IEEE 754. Are there any plans regarding FP exceptions? Possible implementation approaches include global/thread-local variables (e.g. C23), flag arguments (e.g. the Intel library), or using a "context" argument (e.g. the decNumber library). Regards, Michel
On Thursday, October 9th, 2025 at 12:58 PM, Michel Morin via Boost <boost@lists.boost.org> wrote:
Michel Morin wrote:
As a baseline, I'd suggest running the dectest (for decimal32, 64, 128) at https://speleotrove.com/decimal/
decTest also indicates which status flags are expected to be raised (though not all of them correspond to the IEEE 754 exception flags). I might just be missing something, but it seems that Boost.Decimal doesn't support FP exceptions. There may not be enough use cases for FP exceptions, but their support is mandatory for IEEE 754.
Are there any plans regarding FP exceptions?
The reason there is no support for them at this time is that it's a tradeoff with the C++14 standard; I can't support std::feraiseexcept and constexpr at the same time. The latter is significantly more useful than the former. I believe the only time I worry about FP exceptions is when binding Boost.Math into Python, because Pybind11 will turn FP exceptions into hard errors. This also requires that the compilers and platforms support #pragma STDC FENV_ACCESS ON.
Possible implementation approaches include global/thread-local variables (e.g. C23), flag arguments (e.g. the Intel library), or using a "context" argument (e.g. the decNumber library).
Intel and decNumber aren't particularly ergonomic since they are C libs. For example, Decimal64 + Decimal64 in Intel's lib becomes bid64_add(Decimal64, Decimal64, ROUNDING_MODE, &flag). Theoretically the same workaround for consteval contexts (__builtin_is_constant_evaluated()) that is used to set/query the rounding mode could be used when fenv access is allowed. Tracking my own global is another possibility, but then you lose the information in use cases such as with Pybind11 above. Matt
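(As an illustration of the global/thread-local approach combined with the constant-evaluation guard Matt mentions, here is a sketch. Every name in it is hypothetical; it only shows why the technique composes with constexpr where std::feraiseexcept cannot.)

    // All names here are hypothetical. Flags accumulate per thread at run
    // time (in the spirit of C23's decimal FE_* flags) and are skipped
    // entirely during constant evaluation, which is what keeps the
    // arithmetic constexpr - std::feraiseexcept offers no such escape.
    namespace sketch {

    enum : unsigned { flag_inexact = 1u, flag_overflow = 2u, flag_invalid = 4u };

    thread_local unsigned decimal_status = 0;

    constexpr void raise(unsigned f) noexcept
    {
    #if defined(__GNUC__) || defined(__clang__)
        if (!__builtin_is_constant_evaluated())
        {
            decimal_status |= f; // runtime only; constant evaluation skips this
        }
    #else
        (void)f; // portable fallback elided in this sketch
    #endif
    }

    inline unsigned test_and_clear_flags() noexcept
    {
        const unsigned f = decimal_status;
        decimal_status = 0;
        return f;
    }

    } // namespace sketch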
On Thu, 9 Oct 2025 at 13:28, Matt Borland via Boost <boost@lists.boost.org> wrote:

Boost.Decimal has the following statement in the front matter of its documentation.

    Boost.Decimal is an implementation of IEEE 754 and ISO/IEC DTR 24733 Decimal Floating Point numbers.

It looks like conscious decisions were made to depart from both IEEE 754 and ISO/IEC DTR 24733 where full conformance is deemed impractical. This decision seems the right thing to do, but it *HAS TO* be reflected in the docs. Either list all the aspects where you depart from the two standards, or say up front that the library is just "inspired" by those standards. Regards, &rzej;
Boost.Decimal has the following statement in the front matter of its documentation.
Boost.Decimal is an implementation of IEEE 754 and ISO/IEC DTR 24733 Decimal Floating Point numbers.
It looks like conscious decisions were made to depart from both IEEE 754 and ISO/IEC DTR 24733 where full conformance is deemed impractical. This decision seems the right thing to do, but it *HAS TO* be reflected in the docs. Either list all the aspects where you depart from the two standards, or say up front that the library is just "inspired" by those standards.
I think the best move here is to add deviations into the design decisions page, which becomes "Design Decisions and Standards Deviations". Right now there are some, but not all, sprinkled throughout the documentation. They could all be consolidated in one place, since the rationales for deviations are design decisions. An example: at the top of the <cmath> page it notes that none of the functions actually meets the IEEE standard of 0.5 ULP precision, since this is an unreasonable expectation [1]. Matt [1] https://develop.decimal.cpp.al/decimal/cmath.html
Thanks for the explanation. It makes sense to document the status of FP exception support and wait until SG6 comes up with a reasonable solution. Matt Borland wrote:
An example: at the top of the <cmath> page it notes that none of the functions actually meets the IEEE standard of 0.5 ULP precision, since this is an unreasonable expectation [1].
I'd suggest splitting that statement on CR (correct rounding) into a mandatory-operation part (IEEE 754 Clause 5, e.g. conversions, arithmetic operations, fma, sqrt, …) and an optional-operation part (Clause 9). This would help clarify which parts are required for conformance and which are outside the standard's mandatory scope. Regards, Michel
Matt Borland wrote:
I think the best move here is to add deviations into the design decisions page, which becomes "Design Decisions and Standards Deviations". Right now there are some, but not all, sprinkled throughout the documentation. They could all be consolidated in one place, since the rationales for deviations are design decisions.
While C's strtod, strtod32, etc. respect the rounding mode (at least in environments that define __STDC_IEC_60559_*__), std::from_chars does not. It is a reasonable and practical deviation from the standard that Boost.Decimal's from_chars does respect the rounding mode. Regards, Michel
Michel Morin wrote:
As a baseline, I'd suggest running the dectest (for decimal32, 64, 128) at https://speleotrove.com/decimal/
To ensure that the appropriate cohort is selected, it would be better to compare the actual and expected results by bit patterns rather than numeric values. It seems that, for operations like add, the correct cohort is not being used. Regards, Michel
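(A sketch of the kind of harness helper Michel is suggesting; the helper is mine, not part of the library or dectest.)

    #include <cstring>

    // Compare two decimal values by encoding rather than numeric value, so
    // that a result with the right value but the wrong cohort member
    // (wrong exponent / trailing-zero count) still registers as a failure.
    template <typename Decimal>
    bool same_bits(const Decimal& actual, const Decimal& expected) noexcept
    {
        unsigned char a[sizeof(Decimal)];
        unsigned char e[sizeof(Decimal)];
        std::memcpy(a, &actual, sizeof(Decimal));
        std::memcpy(e, &expected, sizeof(Decimal));
        return std::memcmp(a, e, sizeof(Decimal)) == 0;
    }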
On Mon, Oct 6, 2025 at 5:47 PM John Maddock via Boost <boost@lists.boost.org> wrote:
The second review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 15th Oct.
General question for the authors, although this is not obviously a reason for accept or reject. Did you consider bumping the version up to C++17? I was reading this code and it really made me think how much simpler it would be in C++17. I know Boost developers are not average developers and can read and write more complex code than the huge majority of developers, but still: if constexpr, _v instead of ::value... There would be no need for macros here, I believe.

    BOOST_DECIMAL_IF_CONSTEXPR (std::numeric_limits<typename Decimal::significand_type>::digits10 > std::numeric_limits<std::uint64_t>::digits10)
    {
        new_sig = detail::shrink_significand<std::uint64_t>(sig, exp);
    }
    else
    {
        new_sig = static_cast<std::uint64_t>(sig);
    }

    BOOST_DECIMAL_IF_CONSTEXPR (std::is_same<TargetType, float>::value)
    {
        result = static_cast<TargetType>(detail::fast_float::compute_float32(exp, new_sig, val.isneg(), success));
    }
    else BOOST_DECIMAL_IF_CONSTEXPR (std::is_same<TargetType, double>::value)
    {
        result = static_cast<TargetType>(detail::fast_float::compute_float64(exp, new_sig, val.isneg(), success));
    }
    else BOOST_DECIMAL_IF_CONSTEXPR (std::is_same<TargetType, long double>::value)
    {
    #if BOOST_DECIMAL_LDBL_BITS == 64
        result = static_cast<TargetType>(detail::fast_float::compute_float64(exp, new_sig, val.isneg(), success));
    #elif BOOST_DECIMAL_LDBL_BITS == 80
        result = static_cast<TargetType>(detail::fast_float::compute_float80_128(exp, new_sig, val.isneg(), success));
    #else
        static_cast<void>(new_sig);
        result = static_cast<TargetType>(detail::fast_float::compute_float80_128(exp, sig, val.isneg(), success));
    #endif
    }
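(To make the suggestion concrete, here is a self-contained C++17 toy of the same dispatch shape, with stand-ins for the library internals; it is not the proposed refactoring itself.)

    #include <cstdint>
    #include <type_traits>

    // Toy stand-ins for the library internals; only the dispatch shape is
    // the point here.
    namespace toy {

    inline float  compute_float32(std::uint64_t sig) { return static_cast<float>(sig); }
    inline double compute_float64(std::uint64_t sig) { return static_cast<double>(sig); }

    template <typename TargetType>
    TargetType convert(std::uint64_t sig)
    {
        if constexpr (std::is_same_v<TargetType, float>)
        {
            return compute_float32(sig);
        }
        else
        {
            return compute_float64(sig); // double and long double collapse here
        }
    }

    } // namespace toy

    int main()
    {
        return static_cast<int>(toy::convert<float>(42) + toy::convert<double>(42)) == 84 ? 0 : 1;
    }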
General question for the authors, although this is not obviously a reason for accept or reject. Did you consider bumping the version up to C++17?
We have, and if I remember correctly several people brought it up last review. The C++ developer survey for 2025 [1] shows we probably wouldn't alienate too many people with the change. Right now I don't think removing the IF_CONSTEXPR macro, _v, etc. is sufficiently compelling to change to C++17 and refactor. C++20 would be more beneficial, but the adoption rate just isn't there right now. Matt [1] https://isocpp.org/files/papers/CppDevSurvey-2025-summary.pdf
On Thu, Oct 9, 2025 at 4:44 PM Matt Borland <matt@mattborland.com> wrote:
We have, and if I remember correctly several people brought it up last review. The C++ developer survey for 2025 [1] shows we probably wouldn't alienate too many people with the change. Right now I don't think removing the IF_CONSTEXPR macro, _v, etc. is sufficiently compelling to change to C++17 and refactor. C++20 would be more beneficial, but the adoption rate just isn't there right now.
You could also nuke the macros for when the function return type is different, since if constexpr "understands" that and plain if does not. In any case, your choice. :)

Another question about documentation; I think this may have been asked during the last review but I am not sure. IIRC fast types do not support subnormal numbers, e.g. this prints different values:

    std::print("{:.20e}\n", std::numeric_limits<decimal32_t>::denorm_min());
    std::print("{:.20e}\n", std::numeric_limits<decimal_fast32_t>::denorm_min());

but the documentation claims they offer the same results as the non-fast ones. Is this a documentation issue, or do you think the documentation is correct? I would say that the fact that a value X in the non-fast type can be divided and produce a subnormal result (greater than 0) is quite different from getting rounding to 0 for the fast type.

    Now that we have seen the three basic types as specified in IEEE-754 there are three additional adjacent types: decimal_fast32_t, decimal_fast64_t, and decimal_fast128_t. These types yield identical computational results but with faster performance.

Another question about digit count: could this be implemented without the checks on estimated_digits, i.e. by making the array larger or some trick like that to avoid checking estimated_digits < 10 and estimated_digits > 1? I do not claim I benchmarked this, but I know people usually try to replace branches in code like this with "clever" array access.

    constexpr auto num_digits(T init_x) noexcept -> int
    {
        // Use the most significant bit position to approximate log10
        // log10(x) ~= log2(x) / log2(10) ~= log2(x) / 3.32
        const auto x {static_cast<std::uint32_t>(init_x)};
        const auto msb {32 - int128::detail::impl::countl_impl(x)};

        // Approximate log10
        const auto estimated_digits {(msb * 1000) / 3322 + 1}; // 1000/3322 ~= 1/log2(10)

        if (estimated_digits < 10 && x >= impl::powers_of_10_u32[estimated_digits])
        {
            return estimated_digits + 1;
        }

        if (estimated_digits > 1 && x < impl::powers_of_10_u32[estimated_digits - 1])
        {
            return estimated_digits - 1;
        }

        return estimated_digits;
    }
but the documentation claims they offer the same results as the non-fast ones. Is this a documentation issue, or do you think the documentation is correct? I would say that the fact that a value X in the non-fast type can be divided and produce a subnormal result (greater than 0) is quite different from getting rounding to 0 for the fast type.
I can add a blurb to the effect of "within its domain". Subnormals aren't particularly useful, so I would not argue it's markedly different mathematical support. There are plenty of compilers, optimizers, hardware platforms, etc. that will flush your binary floating point subnormals to 0.
Another question about digit count: could this be implemented without the checks on estimated_digits, i.e. by making the array larger or some trick like that to avoid checking estimated_digits < 10 and estimated_digits > 1? I do not claim I benchmarked this, but I know people usually try to replace branches in code like this with "clever" array access.
I have benchmarked a number of different methods inside the library. In a Lemire blog post on counting digits he found that sometimes more instructions do not mean a worse runtime [1]. Yes, I have tried his methods. Matt [1] https://lemire.me/blog/2025/01/07/counting-the-digits-of-64-bit-integers/
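(For the curious, a sketch of the branchless table-lookup count from that line of work, often attributed to Kendall Willets; shown for 32-bit values with the table generated at startup. It is an illustration, not the method Boost.Decimal settled on.)

    #include <array>
    #include <cstdint>

    // digits(x) == (x + table[floor(log2(x))]) >> 32, with the table
    // generated once at startup rather than hand-written.
    inline int digit_count(std::uint32_t x) noexcept
    {
        static const std::array<std::uint64_t, 32> table = [] {
            std::array<std::uint64_t, 32> t {};
            std::uint64_t p10 = 10; // smallest power of ten above 2^j
            std::uint64_t d = 1;    // decimal digits of 2^j
            for (int j = 0; j < 32; ++j)
            {
                while (p10 <= (std::uint64_t(1) << j)) { p10 *= 10; ++d; }
                const std::uint64_t hi = (std::uint64_t(1) << (j + 1)) - 1;
                // Bias so the carry out of bit 32 bumps the result from d
                // to d + 1 exactly when x reaches p10 within [2^j, hi].
                t[j] = (p10 <= hi) ? (((d + 1) << 32) - p10) : (d << 32);
            }
            return t;
        }();

        const int bit = 31 - __builtin_clz(x | 1); // GCC/Clang builtin; x|1 avoids clz(0)
        return static_cast<int>((x + table[bit]) >> 32);
    }

    int main()
    {
        return digit_count(0) == 1 && digit_count(9) == 1 &&
               digit_count(10) == 2 && digit_count(4294967295u) == 10 ? 0 : 1;
    }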
On Sat, Oct 11, 2025 at 9:13 AM Matt Borland <matt@mattborland.com> wrote:
but the documentation claims they offer the same results as the non-fast ones. Is this a documentation issue, or do you think the documentation is correct? I would say that the fact that a value X in the non-fast type can be divided and produce a subnormal result (greater than 0) is quite different from getting rounding to 0 for the fast type.
I can add a blurb to the effect of "within its domain". Subnormals aren't particularly useful, so I would not argue it's markedly different mathematical support. There are plenty of compilers, optimizers, hardware platforms, etc. that will flush your binary floating point subnormals to 0.
I am not that familiar with this, but I would think that is a separate issue. Behavior of one type that is controlled by the compiler and CPU is dependent on the compiler and CPU. OK, there is some documentation somewhere about flags, and CPU vendors probably say what happens during FP computation. But if software types behave differently then that should be documented. From reading the documentation I would expect bitwise-same results (obviously ignoring that those bits are differently packed in the fast and non-fast types; I am talking about extracting the sign, exp, and sig and comparing them). If some other reviewer disagrees please write here or write in your review. :)

Additional question about numbers.hpp; I am quite confused so I thought to ask before digging deeper.

    std::println("pi is {:.36e}", T{boost::decimal::numbers::pi});
    std::println("pi is {:.36e}", T{boost::decimal::numbers::detail::pi_v<T>()});

prints different values for decimal128 since numbers::pi is decimal64. Is this intentional? std::numbers::pi is double, not long double, so on my machine using numbers::pi for long double is the *wrong* way to initialize the variable. I have no idea why std:: did this (I presume because long double is 64 bit on some platforms, or because double is much more common, or...). In any case it seems like quite a footgun in Decimal, as users might init decimal128 with a decimal64 constant... Am I missing something?
But if software types behave differently then that should be documented. From reading the documentation I would expect bitwise-same results (obviously ignoring that those bits are differently packed in the fast and non-fast types; I am talking about extracting the sign, exp, and sig and comparing them). If some other reviewer disagrees please write here or write in your review. :)
Yes, I will better document the differences between the two.
std::numbers::pi is double, not long double, so on my machine using numbers::pi for long double is the *wrong* way to initialize the variable. I have no idea why std:: did this (I presume because long double is 64 bit on some platforms, or because double is much more common, or...). In any case it seems like quite a footgun in Decimal, as users might init decimal128 with a decimal64 constant... Am I missing something?
The non-template variables for numbers::pi, log2e, etc. are all double, so we went with decimal64_t for consistency [1]. pi_v<decimal128_t> is available in your case. Matt [1] https://develop.decimal.cpp.al/decimal/numbers.html
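(In code, assuming the documented variable templates; a sketch, not taken from the docs verbatim:)

    #include <boost/decimal.hpp>

    int main()
    {
        using boost::decimal::decimal128_t;

        // Full 34-digit precision via the variable template:
        const auto pi128 = boost::decimal::numbers::pi_v<decimal128_t>;

        // The non-template constant is decimal64_t, so this widens a
        // 16-digit constant instead:
        const decimal128_t widened {boost::decimal::numbers::pi};

        return pi128 == widened ? 1 : 0; // returns 0: the widened value lost digits
    }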
On Mon, Oct 13, 2025 at 6:35 PM Matt Borland <matt@mattborland.com> wrote:
The non-template variables for numbers::pi, log2e, etc. are all double, so we went with decimal64_t for consistency [1]. pi_v<decimal128_t> is available in your case.
My intuition is that it is still a footgun. std:: cannot magically make all compilers change long double to 64 or 80 bit, but you control the Decimal code, so I think making them 128-bit constants would be better. But to be honest I have not found any discussion of double vs long double in the paper that proposed the <numbers> header, so I may be missing something. I presume narrower types will not be easily constructible from wider ones, so making the constants d128 will make code more verbose. But I am not sure being concise is worth the potential issues.

I have a question about hash:

    template <>
    struct hash<boost::decimal::decimal_fast128_t>
    {
        // Take the xor of the two words and hash that
        auto operator()(const boost::decimal::decimal_fast128_t& v) const noexcept -> std::size_t
        {
            boost::decimal::decimal128_t v_128 {v};
            boost::int128::uint128_t bits;
            std::memcpy(&bits, &v_128, sizeof(boost::int128::uint128_t));
            return std::hash<std::uint64_t>{}(bits.high ^ bits.low);
        }
    };

Usually people say that you should not use xor because swapped operands will hash the same. Is there a reason besides performance why xor is used here? I presume because 128 bits (a bit less, since some patterns are invalid or encode the same value, but definitely more than 64 bits) must collide anyway?
My intuition is that it is still a footgun. std:: cannot magically make all compilers change long double to 64 or 80 bit, but you control the Decimal code, so I think making them 128-bit constants would be better. But to be honest I have not found any discussion of double vs long double in the paper that proposed the <numbers> header, so I may be missing something. I presume narrower types will not be easily constructible from wider ones, so making the constants d128 will make code more verbose. But I am not sure being concise is worth the potential issues.
Since the C days, math functions have used double as the default (e.g. sqrt vs sqrtf vs sqrtl), so I'd be willing to bet that's where the numbers default came from. You are correct; narrowing is explicit and widening is implicit, which was an output of the first review. I will consider deprecation/removal.
Usually people say that you should not use xor because swapped operands will hash the same. Is there a reason besides performance why xor is used here? I presume because 128 bits (a bit less, since some patterns are invalid or encode the same value, but definitely more than 64 bits) must collide anyway?
Yes, the pigeonhole principle tells us there must be collisions here, but in the case where the function is mapping 2^128 -> 2^(32 or 64), it only takes 2^(16 or 32) operations on average to find a collision. If we assume your consumer-grade computer performs 10^9 operations/second, that means we can generate a collision in 2^32/10^9 ~= 4.3 seconds. std::hash is also commonly the identity function, so the output here is going to be however we decided to combine the two words. I don't see any real reason to try and make this more clever. Matt
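(A toy demonstrating both halves of this exchange: the guaranteed xor collision under operand swap, and an order-sensitive boost::hash_combine-style mix. The mixing constant is the usual 64-bit golden-ratio one; none of this is a proposal for the library.)

    #include <cassert>
    #include <cstdint>

    // xor is symmetric, so swapping the two words collides by construction.
    inline std::uint64_t hash_xor(std::uint64_t hi, std::uint64_t lo) { return hi ^ lo; }

    // An order-sensitive mix in the style of boost::hash_combine.
    inline std::uint64_t hash_mix(std::uint64_t hi, std::uint64_t lo)
    {
        std::uint64_t seed = hi;
        seed ^= lo + 0x9e3779b97f4a7c15ULL + (seed << 6) + (seed >> 2);
        return seed;
    }

    int main()
    {
        assert(hash_xor(1, 2) == hash_xor(2, 1)); // guaranteed collision
        assert(hash_mix(1, 2) != hash_mix(2, 1)); // order now matters
        return 0;
    }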
On Mon, Oct 6, 2025 at 11:47 AM John Maddock via Boost <boost@lists.boost.org> wrote:
The second review of the proposed Decimal Number library by Matt Borland
I have a question: is it possible at all that on some platform the "fast" types are slower than the IEEE 754 types?
I have a question: is it possible at all that on some platform the "fast" types are slower than the IEEE 754 types?
For basic operations, no. The two slowest parts of any operation are decoding the value and normalization to remove the effects of cohorts. The fast types are a struct that stores the value always normalized and decoded. The <charconv> operations can be a bit slower for the fast types: from_chars has to go through the normalization steps for the fast types only, and to_chars has to strip off all the trailing zeros that normalization adds. If we assume you do from_chars and to_chars once each per value, the extra expense gets amortized quickly into actually performing calculations with the value. In the last review someone brought up that there exist hardware platforms in the compile farm that have decimal floating point units. I have been working on a wrapper around this native type because that should offer the best performance. This wrapper type could (should) exceed the performance of the fast types for those that have that system (POWER10 in my case). Matt
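(As a mental model only - this is a hypothetical layout, not the library's actual struct - "stored normalized and decoded" might look like:)

    #include <cstdint>

    // Hypothetical sketch of the layout described above: the three fields
    // are kept unpacked ("decoded"), and the significand is kept in its
    // canonical form ("normalized"), so arithmetic can begin immediately
    // instead of first unpacking bits and reconciling cohorts.
    struct fast32_sketch
    {
        std::uint32_t significand; // canonical 7-digit significand
        std::int32_t  exponent;    // already unbiased
        bool          sign;
    };

    // An IEEE 754 interchange type instead packs sign, exponent, and
    // significand into 32 bits, so every operation pays to decode them.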
In the last review someone brought up that there exist hardware platforms in the compile farm that have decimal floating point units. I have been working on a wrapper around this native type because that should offer the best performance. This wrapper type could (should) exceed the performance of the fast types for those that have that system (POWER10 in my case).
What's your idea for exposing this? Would it be a separate type, or would the fast types default to this hardware-backed implementation if available?
What's your idea for exposing this? Would it be a separate type, or would the fast types default to this hardware-backed implementation if available?
I planned on making it its own type, and then enabling or disabling its availability based on platform detection. The design is fundamentally the same as the decimalXX_t types. Since the encoding is different and basic ops (comp, add, sub, mul, div) are provided, there's no internal code overlap between, for example, decimal32_t and hardware_decimal32_t. Matt
On Sat, 11 Oct 2025 at 09:30, Matt Borland via Boost <boost@lists.boost.org> wrote:
I have a question: is it possible at all that on some platform the "fast" types are slower than the IEEE 754 types?
For basic operations, no. The two slowest parts of any operation are decoding the value and normalization to remove the effects of cohorts. The fast types are a struct that stores the value always normalized and decoded. The <charconv> operations can be a bit slower for the fast types: from_chars has to go through the normalization steps for the fast types only, and to_chars has to strip off all the trailing zeros that normalization adds. If we assume you do from_chars and to_chars once each per value, the extra expense gets amortized quickly into actually performing calculations with the value.
This is very important information and it belongs in the library docs. I would also request that you describe what happens upon addition, when a new number is being produced: which item from the cohort is chosen? Does that matter for performance? Regards, &rzej;
In the last review someone brought up that there exist hardware platforms in the compile farm that have decimal floating point units. I have been working on a wrapper around this native type because that should offer the best performance. This wrapper type could (should) exceed the performance of the fast types for those that have that system (POWER10 in my case).
This is very important information and it belongs in the library docs.
https://github.com/cppalliance/decimal/issues/1114
I would also request that you describe what happens upon addition,
when a new number is being produced: which item from the cohort is chosen? Does that matter for performance?
Yes, having non-normalized numbers affects performance. Generally, with the regular types one significand has to get shifted around to perform the operation (i.e. they have different exponents). After the rounding step (as applicable) you're left with the minimum representation, which was one value's initial cohort, or it's the minimum exponent required to represent the value. With the fast types this logic is easier because you know you're always starting with significands within a one-decimal-digit range. Matt
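(A worked toy of that alignment using plain integers as significands; the preferred-exponent rule shown is IEEE 754's for exact sums, not a transcription of the library's internals.)

    #include <cassert>
    #include <cstdint>

    int main()
    {
        // 1.20 is (120, -2); 2.3 is (23, -1). Different exponents, so one
        // significand must be shifted before the digits can be added.
        std::int64_t sig_a = 120; int exp_a = -2;
        std::int64_t sig_b = 23;  int exp_b = -1;

        // Align to the smaller exponent, which is also the preferred
        // exponent of an exact sum under IEEE 754.
        while (exp_b > exp_a) { sig_b *= 10; --exp_b; }

        assert(sig_b == 230 && exp_b == -2);
        assert(sig_a + sig_b == 350); // (350, -2): the cohort member "3.50"
        return 0;
    }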
On Mon, 6 Oct 2025 at 17:46, John Maddock via Boost <boost@lists.boost.org> wrote:
The second review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 15th Oct.
You will find documentation here: https://develop.decimal.cpp.al/decimal/overview.html
And the code repository is here: https://github.com/cppalliance/decimal/
Boost.Decimal is an implementation of IEEE 754 <https://standards.ieee.org/ieee/754/6210/> and ISO/IEC DTR 24733 <https://www.open-std.org/JTC1/SC22/WG21/docs/papers/2009/n2849.pdf> Decimal Floating Point numbers.
Hi all,

This is my re-review of the proposed Boost.Decimal library. Thanks again to Matt for submitting the library, and to John for managing the review.

TL;DR: my recommendation is to ACCEPT the library into Boost. No conditions.

Most of my concerns from my last review [1] [2] have been addressed, the library has a growing user base, and it's useful. My most pressing concern during the last review was performance [2], and IMO this has been addressed. Most of my concerns regarding API cleanness have also been fixed. Fuzz testing has increased coverage, which is great.

I like having fmt support. The way the optional dependency is handled in the header seems a bit esoteric though - the fmt_format.hpp header has the following:

    #if __has_include(<fmt/format.h>) && __has_include(<fmt/base.h>)

And then gets included in <boost/decimal.hpp>. By convention, headers with optional dependencies don't get included in the convenience header (e.g. see Asio with OpenSSL). As a user, I'd prefer a "can't find include <fmt/format.h>" error over "X function is not defined" when the dependency can't be found for whatever reason.

I've refreshed my MySQL DECIMAL compatibility prototype [9] and it's worked correctly. The new library version seems to trigger an ICE under MSVC 14.33 [10], but that CI is running a slightly outdated compiler version.

Some points:

* BOOST_DECIMAL_REDUCE_TEST_DEPTH, debug_pattern and bit_string are still defined in public headers, when they belong to tests.
* The example on literals [3] hasn't updated the using namespace clause (should be boost::decimal::literals rather than boost::decimal). You might find it useful to build and run the examples on CI, and include the code in the docs by using asciidoc tagged regions [4].
* boost::decimal::to_string seems to still be public but not documented.
* The docs list an entry for Rounding Mode that looks like a dead link [5].
* I'd try to avoid using the <boost/decimal.hpp> include at all in the examples. We then get complaints about "Boost makes your builds slow". Especially, please try to avoid this recommendation in the tutorial [6].
* The examples section topics are great, but the content seems more like unit tests than examples [7]. I miss text explaining what the example is trying to do, some comments, etc.
* The example on charconv uses an assert to check the return value, and this makes me very uncomfortable [8]. I've found real-life projects with people "checking" return values with assert like this (there are developers that haven't heard about NDEBUG, apparently). So I'd be more comfortable with an "if (r_from) { /* handle failure */ }" (a sketch follows after the references).
* There are still places marked by LCOV_EXCL_START/LCOV_EXCL_STOP that correspond to valid code paths. Please try to avoid this. It's okay not having 100% coverage, but marking corner cases as "won't happen" makes the metric less reliable.

Affiliation disclosure: I'm currently affiliated with the C++ Alliance, as is Matt Borland.

Regards,
Ruben.

[1] https://lists.boost.org/archives/list/boost@lists.boost.org/message/T63BOOVG...
[2] https://lists.boost.org/archives/list/boost@lists.boost.org/message/M247DGXG...
[3] https://develop.decimal.cpp.al/decimal/examples.html#examples_literals_const...
[4] https://docs.asciidoctor.org/asciidoc/latest/directives/include-tagged-regio...
[5] https://develop.decimal.cpp.al/decimal/examples.html#examples_rounding_mode
[6] https://develop.decimal.cpp.al/decimal/basics.html#basics_using_the_library
[7] https://develop.decimal.cpp.al/decimal/examples.html
[8] https://develop.decimal.cpp.al/decimal/examples.html#examples_charconv
[9] https://github.com/boostorg/mysql/pull/399
[10] https://drone.cpp.al/boostorg/mysql/1130/51/2
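(For illustration, the explicit check Ruben prefers could look like the sketch below; the from_chars result is assumed to mirror std::from_chars, as the charconv docs suggest, and the include is the convenience header rather than a guess at the exact charconv header.)

    #include <boost/decimal.hpp>
    #include <cstring>
    #include <system_error>

    int main()
    {
        const char buf[] = "1.23";
        boost::decimal::decimal64_t value {};

        const auto r = boost::decimal::from_chars(buf, buf + std::strlen(buf), value);
        if (r.ec != std::errc())
        {
            // Handle the failure explicitly; unlike assert, this branch
            // survives NDEBUG builds.
            return 1;
        }
        return 0;
    }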
I like having fmt support. The way the optional dependency is handled in the header seems a bit esoteric though - the fmt_format.hpp header has the following:
#if __has_include(<fmt/format.h>) && __has_include(<fmt/base.h>)
And then gets included in <boost/decimal.hpp>. By convention, headers
with optional dependencies don't get included in the convenience header (e.g. see Asio with OpenSSL). As a user, I'd prefer a "can't find include <fmt/format.h>" error over "X function is not defined"
when the dependency can't be found for whatever reason.
Fair enough. I already have fmt separated out in the case that you consume Decimal as a module, so that's an easy change.
Some points:
* BOOST_DECIMAL_REDUCE_TEST_DEPTH, debug_pattern and bit_string are still defined in public headers, when they belong to tests.
* The example on literals [3] hasn't updated the using namespace clause (should be boost::decimal::literals rather than boost::decimal). You might find it useful to build and run the examples on CI, and include the code in the docs by using asciidoc tagged regions [4].
* boost::decimal::to_string seems to still be public but not documented.
* The docs list an entry for Rounding Mode that looks like a dead link [5].
* I'd try to avoid using the <boost/decimal.hpp> include at all in the examples. We then get complaints about "Boost makes your builds slow". Especially, please try to avoid this recommendation in the tutorial [6].
* The examples section topics are great, but the content seems more like unit tests than examples [7]. I miss text explaining what the example is trying to do, some comments, etc.
* The example on charconv uses an assert to check the return value, and this makes me very uncomfortable [8]. I've found real-life projects with people "checking" return values with assert like this (there are developers that haven't heard about NDEBUG, apparently). So I'd be more comfortable with an "if (r_from) { /* handle failure */ }".
* There are still places marked by LCOV_EXCL_START/LCOV_EXCL_STOP that correspond to valid code paths. Please try to avoid this. It's okay not having 100% coverage, but marking corner cases as "won't happen" makes the metric less reliable.
Thank you for your detailed points and review. They all seem sound to me. Matt
On 10/6/2025 5:46 PM, John Maddock via Boost wrote:
The second review of the proposed Decimal Number library by Matt Borland and Chris Kormanyos begins today and runs until 15th Oct.
Hi, should reviews be posted in this thread? I see that some people have gone the "one review per thread" way, instead. -- Gennaro Prota <https://prota.dev>
Hi everyone, here's my review of the proposed Decimal library.

First off, I'd like to thank Matt and Chris for the substantial work they've put into this submission - it's no small feat to bring such a comprehensive decimal implementation to the table. And thanks to John for stepping up as review manager.

I spent some time exploring the documentation and found it generally well-structured and effective at onboarding users. That said, a few facilities appear to be undocumented (see below), and I spotted some minor issues in the code that - while not critical - could benefit from cleanup or clarification. Here's a rundown of my findings:

API Design

- Considering e.g. decimal32_t, I'm unsure about the rationale behind having both of these constructors:

    template <typename UnsignedInteger, typename Integer>
    constexpr decimal32_t(UnsignedInteger coefficient, Integer exponent, bool sign = false) noexcept;

    template <typename SignedInteger, typename Integer>
    constexpr decimal32_t(SignedInteger coefficient, Integer exponent) noexcept;

  The second seems sufficient. What's the intended use case for the first?

Documentation

- In the section titled "Fundamental Operations", the phrase "The fundamental operations of numerical type (e.g. >, ==, +, etc.) are overloaded." should actually say "numerical types".
- In Examples -> Construction -> Literals and Constants, the statement "numeric_limits is overloaded for all decimal types" should say "specialized" instead of "overloaded".
- The sentence "This example shows how to parse historical stock data from file and use it" would read more naturally as "from a file".
- The navigation panel contains a typo: "Formating support" should be "Formatting support".
- Title case in the navigation panel is used inconsistently (see the entries ending with "support").
- On the six pages dedicated to the six decimal types, there are notes saying "This support has been removed in v6.0.0". I think they would be better as "This name has been removed". You might omit the notes altogether if the library is accepted into Boost.

Implementation details

- The specializations of `std::numeric_limits` use conditional compilation to select between `class` and `struct`, and to specify `public:` access:

    template <>
    #ifdef _MSC_VER
    class numeric_limits<boost::decimal::decimal64_t>
    #else
    struct numeric_limits<boost::decimal::decimal64_t>
    #endif
    {
    #ifdef _MSC_VER
    public:
    #endif

  I think just using `class` and `public:` across all compilers is simpler and effective.

- In decimal128_t.hpp, the file "detail/int128.hpp" is included twice. I suggest ordering the header names alphabetically to prevent these kinds of oversights.
- Bitwise operators are defined but don't seem to be documented. A rationale for their inclusion would be nice too.
- Several binary literals throughout the codebase are difficult to read. For instance:

    UINT64_C(0b10000000000000000000000000000000000000000000000000)

  might become:

    UINT64_C(1) << 49

  increasing readability and reducing visual noise.

- In decimal128_t.hpp, the following diagnostic suppression block appears:

    #if defined(__GNUC__) && __GNUC__ >= 6
    #  pragma GCC diagnostic push
    #  pragma GCC diagnostic ignored "-Wduplicated-branches"
    #  pragma GCC diagnostic ignored "-Wconversion"
    #endif

  The presence of `-Wduplicated-branches` raises questions. Are there actual duplicated branches in the code? Or are you just suppressing some false positives? Clarifying this in a comment, or restructuring the code to avoid the warning, would be IMHO useful.

Conclusion

Overall, I found only minor issues in the documentation and code. I haven't reviewed the core decimal logic in depth, as I'm not deeply familiar with the decimal aspects of IEEE 754 - but I trust others will scrutinize that part more thoroughly.

Given the quality of the work and the minor nature of the issues I encountered, I believe Decimal deserves to be accepted into Boost.

Thanks again to the authors and reviewers for their efforts.

--
Gennaro Prota <https://prota.dev>
- Considering e.g. decimal32_t, I'm unsure about the rationale behind having both of these constructors:
template <typename UnsignedInteger, typename Integer>
constexpr decimal32_t(UnsignedInteger coefficient, Integer exponent, bool sign = false) noexcept;
template <typename SignedInteger, typename Integer>
constexpr decimal32_t(SignedInteger coefficient, Integer exponent) noexcept;
The second seems sufficient. What's the intended use case for the first?
We use the unsigned overload extensively for numeric constants in the implementation of <cmath> functions. Not all platforms have (un)signed __int128, so we have a software u128 type that is allowed as an UnsignedInteger. This is important for decimal128_t constants.
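(A small usage sketch of the two quoted overloads; the values are illustrative.)

    #include <boost/decimal.hpp>
    #include <cstdint>

    int main()
    {
        using boost::decimal::decimal32_t;

        // Signed overload: the sign rides on the coefficient.
        constexpr decimal32_t a {-314159, -5};                 // -3.14159

        // Unsigned overload: the sign is a separate flag, which still
        // works when the coefficient type has no signed counterpart
        // (e.g. the software u128 used to spell decimal128_t constants).
        constexpr decimal32_t b {UINT32_C(314159), -5, true};  // -3.14159

        return a == b ? 0 : 1;
    }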
- Several binary literals throughout the codebase are difficult to read. For instance:
UINT64_C(0b10000000000000000000000000000000000000000000000000)
might become:
UINT64_C(1) << 49
increasing readability and reducing visual noise.
I can probably go through and simplify a bunch of these now. In the formative stages it made better logical sense to me to use binary literal masks, even though they tend to be ugly. My comments are written out bit by bit, which is what I was going off of [1].
- In decimal128_t.hpp, the following diagnostic suppression block appears:
    #if defined(__GNUC__) && __GNUC__ >= 6
    #  pragma GCC diagnostic push
    #  pragma GCC diagnostic ignored "-Wduplicated-branches"
    #  pragma GCC diagnostic ignored "-Wconversion"
    #endif
The presence of `-Wduplicated-branches` raises questions. Are there actual duplicated branches in the code? Or are you just suppressing some false positives? Clarifying this in a comment, or restructuring the code to avoid the warning, would be IMHO useful.
I can't remember why I added it, but the build is clean without it, so I'll just delete it.
Conclusion
Thank you for your review and comments. Matt [1] https://github.com/cppalliance/decimal/blob/7d44a84c3fc271e48b6850a6b5a39f83...
participants (15)

- Alexander Grund
- Andrzej Krzemienski
- Emil Dotchevski
- Gennaro Prota
- Ivan Matek
- John Maddock
- Julien Blanc
- Klemens Morgenstern
- LoS
- Matt Borland
- Michel Morin
- Mungo Gill
- Peter Dimov
- Richard Hodges
- Ruben Perez