Which Multiprecision backend did you use for the benchmarks (cpp, gmp, tom)?
I used the fixed-precision cpp_int types, as they are the most closely analogous and are portable for those who want to run the benchmarks themselves.
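For anyone unfamiliar with those aliases, here is a minimal sketch (not the actual benchmark harness) of the kind of loop being timed, using boost::multiprecision::int128_t, the fixed-precision alias that cpp_int provides:

// Sketch only: a fixed-precision cpp_int alias used as the comparison baseline.
#include <boost/multiprecision/cpp_int.hpp>
#include <chrono>
#include <iostream>
#include <vector>

int main()
{
    using mp_int128 = boost::multiprecision::int128_t; // fixed-precision cpp_int alias

    std::vector<mp_int128> values(1000000, mp_int128{123456789});

    const auto start = std::chrono::steady_clock::now();
    mp_int128 sum = 0;
    for (const auto& v : values)
    {
        sum += v * 3 - 1; // simple mixed arithmetic in the hot loop
    }
    const auto stop = std::chrono::steady_clock::now();

    std::cout << "sum = " << sum << '\n'
              << "time = "
              << std::chrono::duration_cast<std::chrono::milliseconds>(stop - start).count()
              << " ms\n";
}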
A couple of questions that I anticipate:
Why do we need this if we already have Boost.Multiprecision?
An old complaint against Boost.Multiprecision is that the 128-bit integer types are not 16 bytes [3]. Those types are also built on top of the arbitrary-precision cpp_int machinery rather than being a dedicated implementation. A dedicated type lets int128 improve performance in places where Multiprecision can't or shouldn't, which is reflected in the benchmarks [2].
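As a quick illustration of the size complaint (exact sizes are platform-dependent, but on typical 64-bit targets the fixed-precision cpp_int alias comes out to 24 bytes, while __int128, where available, is 16):

// Illustration only: cpp_int's fixed-precision aliases carry sign/limb
// bookkeeping on top of the two 64-bit limbs, so they exceed 16 bytes.
#include <boost/multiprecision/cpp_int.hpp>
#include <iostream>

int main()
{
    std::cout << "sizeof(boost::multiprecision::int128_t) = "
              << sizeof(boost::multiprecision::int128_t) << '\n'; // commonly 24
#ifdef __SIZEOF_INT128__
    std::cout << "sizeof(__int128)                        = "
              << sizeof(__int128) << '\n';                        // 16
#endif
}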
Should this go in Core (or other existing lib)?
I talked with Peter about this a while back, but int128 was already getting too big at the time. Now int128's include/ directory has a higher sloccount than Core's, so it makes even less sense. I would rather it not go into Multiprecision either, as int128 would have a module weight of at most 5 (optional dependencies), whereas Multiprecision has a module weight of 25 [4]. The design is also fundamentally different from the types used in Multiprecision (there, all types are backends plugged into a master template called number for compatibility).
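To make the design difference concrete, here is a rough sketch, not the proposed library's actual implementation: Multiprecision's 128-bit aliases are the number<> front end wrapped around a cpp_int backend, whereas a dedicated 128-bit type can simply be two machine words:

// Sketch of the design difference, not int128's actual implementation.
#include <boost/multiprecision/cpp_int.hpp>
#include <cstdint>
#include <type_traits>

// Multiprecision: every type is the number<> front end over a backend;
// int128_t/uint128_t are cpp_int backends plugged into number<>.
using mp_int128 = boost::multiprecision::int128_t;

// Dedicated design: a 128-bit integer can be a plain two-word struct,
// exactly 16 bytes and trivially copyable, with operators written directly
// against the two limbs.
struct standalone_int128
{
    std::uint64_t lo;
    std::int64_t  hi;
};

static_assert(sizeof(standalone_int128) == 16, "two 64-bit words, no bookkeeping");
static_assert(std::is_trivially_copyable<standalone_int128>::value,
              "eligible for memcpy-style and register-passing optimizations");

int main() {}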
I appreciate the detailed explanations here. However, I can imagine users of Multiprecision might grumble about having to use two different libraries to get, for example, extended floats and int128. I would emphasize the benefits in the docs to try to mitigate this.
Can do. I actually already have one user who converted from Multiprecision to int128 after a recent iteration of the issue asking about cpp_int being 24 bytes.

Matt