My intuition is that this is still a footgun. std:: cannot magically make every compiler agree on whether long double is 64 or 80 bits, but you control the Decimal code, so I think making these 128-bit constants would be better. To be honest, though, I have not found any discussion of double vs long double in the paper that proposed the <numbers> header, so I may be missing something. I presume narrower types will not be easily constructible from wider ones, so making the constants d128 will make code more verbose, but I am not sure being concise is worth the potential issues.
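For concreteness, here is a quick way to see the footgun on any given target (the values in the comment are the typical ones I am aware of, not guarantees):

```cpp
#include <cfloat>
#include <cstdio>

int main()
{
    // LDBL_MANT_DIG is typically 53 on MSVC (long double == double),
    // 64 on x86-64 Linux/GCC (80-bit extended), and 113 on targets
    // where long double is IEEE binary128.
    std::printf("sizeof(long double) = %zu, mantissa bits = %d\n",
                sizeof(long double), LDBL_MANT_DIG);
}
```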
Since the C days, math functions have used double as the default (e.g. sqrt vs sqrtf vs sqrtl), so I'd be willing to bet that's where the <numbers> default came from. You are correct: narrowing is explicit and widening is implicit, which was an outcome of the first review. I will consider deprecation/removal.
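A minimal sketch of what that trade-off looks like at call sites, assuming hypothetical decimal64/decimal128 stand-ins with implicit widening and explicit narrowing as described (these are not the library's real types):

```cpp
#include <iostream>

// Hypothetical stand-ins for the library's decimal types.
struct decimal64  { double v; };

struct decimal128
{
    long double v; // placeholder storage for the sketch
    decimal128(decimal64 d) : v(d.v) {}             // widening: implicit
    explicit decimal128(long double x) : v(x) {}
    explicit operator decimal64() const             // narrowing: explicit
    {
        return decimal64{static_cast<double>(v)};
    }
};

// Constant defined at the widest precision, as suggested above.
inline const decimal128 pi_d128{3.141592653589793238L};

int main()
{
    decimal128 a = pi_d128;                          // no cast needed
    decimal64  b = static_cast<decimal64>(pi_d128);  // must be spelled out
    std::cout << a.v << ' ' << b.v << '\n';
}
```

So a d128 constant costs one explicit cast per narrower use, which is the verbosity being weighed against the long double portability issue.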
Usually people say that you should not use xor because swapped operands will hash the same. Is there a reason besides performance why xor is used here? I presume it is because 128 bits (a bit fewer in practice, since some patterns are invalid or encode the same value, but definitely more than 64 bits) must collide anyway?
Yes, the pigeonhole principle tells us there must be collisions here, but in the case where the function maps 2^128 -> 2^(32 or 64), the birthday bound says it only takes about 2^(16 or 32) operations to find a collision on average. If we assume your consumer-grade computer performs 10^9 operations/second, that means we can generate a collision in 2^32/10^9 ≈ 4.3 seconds. std::hash is also commonly the identity function, so the output here is going to be however we decide to combine the two words. I don't see any real reason to try to make this more clever.

Matt
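Concretely, the combine in question is something like this (a sketch only; the `decimal128` struct and its hi/lo word layout are stand-ins for the real type):

```cpp
#include <cstdint>
#include <functional>

// Stand-in for the real 128-bit decimal: two 64-bit words.
struct decimal128
{
    std::uint64_t hi;
    std::uint64_t lo;
};

// Specialization for the stand-in type (valid at namespace scope
// since C++17).
template <>
struct std::hash<decimal128>
{
    std::size_t operator()(const decimal128& d) const noexcept
    {
        // Plain xor of the two word hashes: fast, and the extra
        // collisions from swapped hi/lo are negligible next to the
        // collisions the 128 -> 64 bit mapping forces anyway.
        return std::hash<std::uint64_t>{}(d.hi) ^
               std::hash<std::uint64_t>{}(d.lo);
    }
};
```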