Dear All,

I am seeking endorsement for review of my library int128 [1]. The library requires only C++14, is header-only, and has no mandatory dependencies.

What does int128 provide? Two portable and performant types: a 128-bit signed integer and a 128-bit unsigned integer, as well as a standard library for them. The performance of these types has been tuned and tested on a variety of architectures, including x64, x32, s390x, ARM64, ARM32, and PPC64LE. Both types and most of their library functions also work inside CUDA kernels. Benchmarks against Boost.Multiprecision, Absl, MSVC's software 128-bit integers, and built-ins are available [2].

A couple of questions that I anticipate:

Why do we need this if we already have Boost.Multiprecision?

An old complaint against Boost.Multiprecision is that its 128-bit integer types are not 16 bytes [3]. The 128-bit integer types are also incidental to the arbitrary-precision type rather than a dedicated implementation. This allows int128 to improve performance in places that Multiprecision can't or shouldn't, which is reflected in the benchmarks [2].

Should this go in Core (or other existing lib)?

I talked with Peter about this a while back, but int128 was already getting too big at the time. Now int128's include/ directory has a higher sloccount than Core's, so it makes even less sense. I would rather it not go into Multiprecision, as int128 would have a module weight of up to 5 (optional dependencies), whereas Multiprecision has a module weight of 25 [4]. The design is also fundamentally different from the types used in Multiprecision (there, all types are backends into a master template called number for compatibility).

Why does the library have a low number of stars?

This library started life as the backend for Boost.Decimal, since we needed a cross-platform 128-bit integer for the representation of decimal128_t as well as to perform fundamental operations with decimal64_t. You can still find an int128/ folder in Decimal that will be removed if this lib is accepted (it's a vendored version of the library now). I find these types useful for my own purposes, so I expect others will too.

Please let me know if you have any questions as you take a look at the library.

Thanks for your time,
Matt

[1] https://github.com/cppalliance/int128
[2] https://develop.int128.cpp.al/u128_benchmarks.html
[3] https://stackoverflow.com/questions/41876253/boostmultiprecisionuint128-t-si...
[4] https://pdimov.github.io/boostdep-report/develop/module-weights.html
Matt Borland wrote:
Dear All,
I am seeking endorsement for review of my library int128 [1]. The library requires only C++14, is header-only, and has no mandatory dependencies. What does int128 provide? Two portable and performant types: a 128-bit signed integer and a 128-bit unsigned integer, as well as a standard library for them. The performance of these types has been tuned and tested on a variety of architectures, including x64, x32, s390x, ARM64, ARM32, and PPC64LE. Both types and most of their library functions also work inside CUDA kernels. Benchmarks against Boost.Multiprecision, Absl, MSVC's software 128-bit integers, and built-ins are available [2].
How does this relate to / interact with __(u)int128_t?
How does this relate to / interact with __(u)int128_t?
If __(u)int128_t exists, then all operators between those types and the boost.int128 types are defined: construction, conversion, add, sub, mul, div, etc. Same case if you have MSVC's std::_Unsigned128 or std::_Signed128 as the "builtin" 128-bit type. The alignment of the boost.int128 types is also set to match the builtin types when they exist. Internally, the boost.int128 types are a struct of two 64-bit integers regardless of the platform. Matt
Matt Borland wrote:
How does this relate to / interact with __(u)int128_t?
If __(u)int128_t exists, then all operators between those types and the boost.int128 types are defined: construction, conversion, add, sub, mul, div, etc. Same case if you have MSVC's std::_Unsigned128 or std::_Signed128 as the "builtin" 128-bit type. The alignment of the boost.int128 types is also set to match the builtin types when they exist. Internally, the boost.int128 types are a struct of two 64-bit integers regardless of the platform.
What do the operations actually do, though? Does adding boost::uint128, when __uint128_t exists, use __uint128_t addition, or does it continue to use the library implementation? Similarly, for mixed mode; what does adding boost::uint128 and __uint128_t do under the hood? (And what is the return type? I assume boost::uint128?)
Vinnie Falco wrote:
On Tue, Apr 28, 2026 at 11:07 AM Peter Dimov via Boost <boost@lists.boost.org> wrote:
...
Is there already a C++ Standards proposal for int128?
There is, yes. https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p3140r0.html
There is, yes.
https://www.open-std.org/jtc1/sc22/wg21/docs/papers/2024/p3140r0.html
This is not active anymore. Jan's most recent proposal is the inclusion of C's _BitInt which would then reduce a 128-bit integer implementation to _BitInt(128). Matt
On Tuesday, April 28th, 2026 at 2:04 PM, Peter Dimov <pdimov@gmail.com> wrote:
Matt Borland wrote:
How does this relate to / interact with __(u)int128_t?
If __(u)int128_t exists, then all operators between those types and the boost.int128 types are defined: construction, conversion, add, sub, mul, div, etc. Same case if you have MSVC's std::_Unsigned128 or std::_Signed128 as the "builtin" 128-bit type. The alignment of the boost.int128 types is also set to match the builtin types when they exist. Internally, the boost.int128 types are a struct of two 64-bit integers regardless of the platform.
What do the operations actually do, though? Does adding boost::uint128, when __uint128_t exists, use __uint128_t addition, or does it continue to use the library implementation?
Similarly, for mixed mode; what does adding boost::uint128 and __uint128_t do under the hood? (And what is the return type? I assume boost::uint128?)
In all cases the return type is a boost::uint128 for consistency. Yes, this is typically implemented by casting both types to builtin and then casting the result back to uint128. Matt
On Tue, Apr 28, 2026 at 5:05 PM Matt Borland via Boost < boost@lists.boost.org> wrote:
Dear All,
I am seeking endorsement for review of my library int128 [1]. The library requires only C++14, is header-only, and has no mandatory dependencies. What does int128 provide? Two portable and performant types: a 128-bit signed integer and a 128-bit unsigned integer, as well as a standard library for them. The performance of these types has been tuned and tested on a variety of architectures, including x64, x32, s390x, ARM64, ARM32, and PPC64LE. Both types and most of their library functions also work inside CUDA kernels. Benchmarks against Boost.Multiprecision, Absl, MSVC's software 128-bit integers, and built-ins are available [2].
Which Multiprecision backend did you use for the benchmarks (cpp, gmp, tom)?
A couple of questions that I anticipate:
Why do we need this if we already have Boost.Multiprecision?
An old complaint against Boost.Multiprecision is that its 128-bit integer types are not 16 bytes [3]. The 128-bit integer types are also incidental to the arbitrary-precision type rather than a dedicated implementation. This allows int128 to improve performance in places that Multiprecision can't or shouldn't, which is reflected in the benchmarks [2].
Should this go in Core (or other existing lib)?
I talked with Peter about this a while back, but int128 was already getting too big at the time. Now int128's include/ directory has a higher sloccount than Core's, so it makes even less sense. I would rather it not go into Multiprecision, as int128 would have a module weight of up to 5 (optional dependencies), whereas Multiprecision has a module weight of 25 [4]. The design is also fundamentally different from the types used in Multiprecision (there, all types are backends into a master template called number for compatibility).
I appreciate the detailed explanations here. However, I can imagine users of Multiprecision might grumble about having to use two different libraries to get, for example, extended float and int128. I would emphasize the benefits in the docs to try to mitigate this.
Why does the library have a low number of stars?
This library started life as the backend for Boost.Decimal, since we needed a cross-platform 128-bit integer for the representation of decimal128_t as well as to perform fundamental operations with decimal64_t. You can still find an int128/ folder in Decimal that will be removed if this lib is accepted (it's a vendored version of the library now). I find these types useful for my own purposes, so I expect others will too.
Please let me know if you have any questions as you take a look at the library.
Thanks for your time, Matt
[1] https://github.com/cppalliance/int128
[2] https://develop.int128.cpp.al/u128_benchmarks.html
[3] https://stackoverflow.com/questions/41876253/boostmultiprecisionuint128-t-si...
[4] https://pdimov.github.io/boostdep-report/develop/module-weights.html
Which Multiprecision backend did you use for the benchmarks (cpp, gmp, tom)?
I used the fixed-precision cpp_int types as they are the most closely analogous, and they are portable for those who want to run the benchmarks themselves.
A couple of questions that I anticipate:
Why do we need this if we already have Boost.Multiprecision?
An old complaint against Boost.Multiprecision is that its 128-bit integer types are not 16 bytes [3]. The 128-bit integer types are also incidental to the arbitrary-precision type rather than a dedicated implementation. This allows int128 to improve performance in places that Multiprecision can't or shouldn't, which is reflected in the benchmarks [2].
Should this go in Core (or other existing lib)?
I talked with Peter about this a while back, but int128 was already getting too big at the time. Now int128's include/ directory has a higher sloccount than Core's, so it makes even less sense. I would rather it not go into Multiprecision, as int128 would have a module weight of up to 5 (optional dependencies), whereas Multiprecision has a module weight of 25 [4]. The design is also fundamentally different from the types used in Multiprecision (there, all types are backends into a master template called number for compatibility).
I appreciate the detailed explanations here. However, I can imagine users of Multiprecision might grumble about having to use two different libraries to get, for example, extended float and int128. I would emphasize the benefits in the docs to try to mitigate this.
Can do. I actually have one user who converted from Multiprecision to int128 after a recent iteration of the issue asking about cpp_int being 24 bytes. Matt
Matt Borland wrote:
Dear All,
I am seeking endorsement for review of my library int128 [1]. The library requires only C++14, is header-only, and has no mandatory dependencies. What does int128 provide? Two portable and performant types: a 128-bit signed integer and a 128-bit unsigned integer, as well as a standard library for them. The performance of these types has been tuned and tested on a variety of architectures, including x64, x32, s390x, ARM64, ARM32, and PPC64LE. Both types and most of their library functions also work inside CUDA kernels. Benchmarks against Boost.Multiprecision, Absl, MSVC's software 128-bit integers, and built-ins are available [2].
I endorse this library.
On Tuesday, April 28th, 2026 at 4:04 PM, Peter Dimov via Boost <boost@lists.boost.org> wrote:
Matt Borland wrote:
Dear All,
I am seeking endorsement for review of my library int128 [1]. The library requires only C++14, is header-only, and has no mandatory dependencies. What does int128 provide? Two portable and performant types: a 128-bit signed integer and a 128-bit unsigned integer, as well as a standard library for them. The performance of these types has been tuned and tested on a variety of architectures, including x64, x32, s390x, ARM64, ARM32, and PPC64LE. Both types and most of their library functions also work inside CUDA kernels. Benchmarks against Boost.Multiprecision, Absl, MSVC's software 128-bit integers, and built-ins are available [2].
I endorse this library.
Thank you Peter. Arnaud Bechler has graciously offered to manage the review, so we will get this on the calendar in short order. Matt
> I am seeking endorsement for review of my library int128 [1].
I endorse (u)int128.
It should be a library, on its own, and be separate from Boost.Core.
Its purpose is to provide (u)int128_t for all platforms, and is seamless if the platform already has 128.
It is not a Multiprecision type. In fact, it much more closely resembles (u)int64_t (which is also kind of new in my world).
Clients of Boost.Multiprecision will hopefully not confuse this with a synthesized Multiprecision type. If confusion exists, it can easily be cleared up in ensuing chats or issues.
This type (these types) is/are intended to provide a native-like type.
- Christopher
On Tuesday, April 28, 2026 at 07:04:28 PM GMT+2, Matt Borland via Boost <boost@lists.boost.org> wrote:
Dear All,
I am seeking endorsement for review of my library int128 [1]. The library requires only C++14, is header-only, and has no mandatory dependencies. What does int128 provide? Two portable and performant types: a 128-bit signed integer and a 128-bit unsigned integer, as well as a standard library for them. The performance of these types has been tuned and tested on a variety of architectures, including x64, x32, s390x, ARM64, ARM32, and PPC64LE. Both types and most of their library functions also work inside CUDA kernels. Benchmarks against Boost.Multiprecision, Absl, MSVC's software 128-bit integers, and built-ins are available [2].
A couple of questions that I anticipate:
Why do we need this if we already have Boost.Multiprecision?
An old complaint against Boost.Multiprecision is that its 128-bit integer types are not 16 bytes [3]. The 128-bit integer types are also incidental to the arbitrary-precision type rather than a dedicated implementation. This allows int128 to improve performance in places that Multiprecision can't or shouldn't, which is reflected in the benchmarks [2].
Should this go in Core (or other existing lib)?
I talked with Peter about this a while back, but int128 was already getting too big at the time. Now int128's include/ directory has a higher sloccount than Core's, so it makes even less sense. I would rather it not go into Multiprecision, as int128 would have a module weight of up to 5 (optional dependencies), whereas Multiprecision has a module weight of 25 [4]. The design is also fundamentally different from the types used in Multiprecision (there, all types are backends into a master template called number for compatibility).
Why does the library have a low number of stars?
This library started life as the backend for Boost.Decimal, since we needed a cross-platform 128-bit integer for the representation of decimal128_t as well as to perform fundamental operations with decimal64_t. You can still find an int128/ folder in Decimal that will be removed if this lib is accepted (it's a vendored version of the library now). I find these types useful for my own purposes, so I expect others will too.
Please let me know if you have any questions as you take a look at the library.
Thanks for your time,
Matt
[1] https://github.com/cppalliance/int128
[2] https://develop.int128.cpp.al/u128_benchmarks.html
[3] https://stackoverflow.com/questions/41876253/boostmultiprecisionuint128-t-sizeof-is-24
[4] https://pdimov.github.io/boostdep-report/develop/module-weights.html
I endorse (u)int128.
Thank you Chris.
It should be a library, on its own, and be separate from Boost.Core.
Its purpose is to provide (u)int128_t for all platforms, and is seamless if the platform already has 128.
It is not a Multiprecision type. In fact, it much more closely resembles (u)int64_t (which is also kind of new in my world)
Clients of Boost.Multiprecision will hopefully not confuse this with a synthesized Multiprecision type. If confusion exists, it can easily be cleared up in ensuing chats or issues.
I've opened an issue to explain why this is separate, and should stay separate from Multiprecision. Matt
On 28 Apr 2026 20:02, Matt Borland via Boost wrote:
Dear All,
I am seeking endorsement for review of my library int128 [1]. The library requires only C++14, is header-only, and has no mandatory dependencies. What does int128 provide? Two portable and performant types: a 128-bit signed integer and a 128-bit unsigned integer, as well as a standard library for them. The performance of these types has been tuned and tested on a variety of architectures, including x64, x32, s390x, ARM64, ARM32, and PPC64LE. Both types and most of their library functions also work inside CUDA kernels. Benchmarks against Boost.Multiprecision, Absl, MSVC's software 128-bit integers, and built-ins are available [2].
I would find this library useful.

I have doubts regarding the signed/unsigned comparison behavior deviation described here: https://develop.int128.cpp.al/uint128_t.html#uint128_t_sign_compare_behavior...

While I understand the reasons for wanting to implement comparison this way, I think I would prefer to keep the behavior consistent with the rest of the integer types. That is, make it the same as for built-in integer types. Yes, I understand this may produce surprising results to some, but IMHO, consistency is more important, as it eliminates a special case in the user's mental model. And the result is no longer surprising once you learn how comparisons work in C++, which you must learn anyway for built-in integer types.

Also, I don't quite understand what you are saying in these two sections: https://develop.int128.cpp.al/uint128_t.html#u128_operator_behavior https://develop.int128.cpp.al/int128_t.html#i128_operator_behavior

In one place you're saying you convert signed to unsigned to perform the operation, and in the other, the other way around. Which way is it? I would expect the operators to work the same way regardless of the order of signed/unsigned arguments, and to follow the rules for the built-in integer types (i.e. to convert signed to unsigned).
I have doubts regarding the signed/unsigned comparison behavior deviation described here:
https://develop.int128.cpp.al/uint128_t.html#uint128_t_sign_compare_behavior...
While I understand the reasons for wanting to implement comparison this way, I think I would prefer to keep the behavior consistent with the rest of the integer types. That is, make it the same as for built-in integer types.
It is the same behavior the built-in types have when you set -Werror -Wsign-conversion -Wconversion, which I think is uncontroversial to call best practice. If this behavior is removed from the library, then when a user does set the aforementioned flags they will not apply to mixed library-type/builtin-type operations, which I think is the bigger risk. All of the analysis is static; there is zero runtime overhead introduced by these checks.
Also, I don't quite understand what you are saying in these two sections:
https://develop.int128.cpp.al/uint128_t.html#u128_operator_behavior https://develop.int128.cpp.al/int128_t.html#i128_operator_behavior
In one place you're saying you convert signed to unsigned to perform the operation, in the other - the other way around. Which way is it?
For an operation between a builtin type and an int128 type of opposite sign, the builtin will be cast to the int128 type. These sign conversions match the behavior of __(u)int128_t. I will edit these sections to make them clearer. Matt
On 2026-04-28 19:02, Matt Borland via Boost wrote:
Dear All,
I am seeking endorsement for review of my library int128 [1]. The library requires only C++14, is header-only, and has no mandatory dependencies. What does int128 provide? Two portable and performant types: a 128-bit signed integer and a 128-bit unsigned integer, as well as a standard library for them. The performance of these types has been tuned and tested on a variety of architectures, including x64, x32, s390x, ARM64, ARM32, and PPC64LE. Both types and most of their library functions also work inside CUDA kernels. Benchmarks against Boost.Multiprecision, Absl, MSVC's software 128-bit integers, and built-ins are available [2].
A couple of questions that I anticipate:
Why do we need this if we already have Boost.Multiprecision?
An old complaint against Boost.Multiprecision is that its 128-bit integer types are not 16 bytes [3]. The 128-bit integer types are also incidental to the arbitrary-precision type rather than a dedicated implementation. This allows int128 to improve performance in places that Multiprecision can't or shouldn't, which is reflected in the benchmarks [2].
It would be good to have a more performant 128-bit integer type. What is needed to use GMP mpz_powm() modular exponentiation on int128? Conversion to Boost.Multiprecision gmp_int, or directly to GMP? Would adding an mpz_powm()-like function to the new library make sense? If such an int128 powm would be faster than GMP mpz_powm, I would say yes. Regards, Hermann.
What is needed to use GMP mpz_powm() modular exponentiation on int128? Conversion to Boost.Multiprecision gmp_int, or directly to GMP?
Would adding an mpz_powm()-like function to the new library make sense? If such an int128 powm would be faster than GMP mpz_powm, I would say yes.
To convert to mpz_t you could do something like:

    #include <gmp.h>

    boost::int128::uint128_t val;
    mpz_t z;
    mpz_init(z);
    uint64_t words[2] = { val.high, val.low };
    // 2 words, most-significant word first, native endianness, no nail bits
    mpz_import(z, 2, 1, sizeof(uint64_t), 0, 0, words);

I don't have any special bindings for easy conversions to Multiprecision types. Without measuring, I will assume that a special case of a two-word modular exponentiation in the library would be a good bit faster. I already have a way for 128-by-128 mul to produce a 256-bit array in the library (I use this for mul in decimal128_t), and the same for the 256-bit by 128-bit reduction. I believe GMP's small-value optimization is 128 bits, so it would need to make at least one heap allocation for this operation. I can look into this in the next few days for you. Matt
participants (7)

- Andrey Semashev
- Christopher Kormanyos
- hermann@stamm-wilbrandt.de
- Matt Borland
- Peter Dimov
- Tim Haines
- Vinnie Falco