Matt Borland wrote:
`H` is the length modifier for `_Decimal32`, `D` for `_Decimal64`, and `DD` for `_Decimal128`. The `a` conversion specifier for decimal floating-point types indicates quantum-preserving form and stands for "actual" (WG14's n1247). So, `a` might be a better fit for a {fmt}/<format> specifier.
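For reference, here is what those conversions look like in use; a minimal sketch, assuming a C23 toolchain whose C library actually implements the decimal conversions (e.g. GCC with libdfp; most default C libraries do not):

```c
/* Sketch of the WG14 printf conversions described above.
   Assumes C23 decimal floating-point support in both the compiler
   and the C library. */
#include <stdio.h>

int main(void)
{
    _Decimal32  d32  = 1.20DF;
    _Decimal64  d64  = 1.20DD;
    _Decimal128 d128 = 1.20DL;

    printf("%Ha\n", d32);   /* H  -> _Decimal32  */
    printf("%Da\n", d64);   /* D  -> _Decimal64  */
    printf("%DDa\n", d128); /* DD -> _Decimal128 */

    /* 'a' is quantum-preserving, so the trailing zero of 1.20
       should survive, unlike "%g" on a binary double. */
    return 0;
}
```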
`a` and `A` are already reserved for hexfloats.
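You can see that with ordinary binary floats today:

```cpp
// 'a'/'A' already mean hexfloat for binary floating-point in <format>.
#include <format>
#include <iostream>

int main()
{
    std::cout << std::format("{:a}\n", 1.0);    // 1p+0
    std::cout << std::format("{:A}\n", 255.5);  // 1.FFP+7
    std::cout << std::format("{:.3a}\n", 0.1);  // 1.99ap-4
}
```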
One might argue that the (a)ctual representation for binary floats is the hex one, because it accurately represents what's in the bits.
But hex makes significantly less sense for decimal floats; for them, the (a)ctual representation is decimal.
And if printf uses %a for cohort-preserving output of decimal floats, it wouldn't make much sense for std::format to do something entirely different.
If you want to force hex for some reason, there's always `x`, but I doubt anyone would find that particularly useful.
I bring it up because {fmt}'s table of available presentation types for floating-point values doesn't have `x` for hexfloat and `a` for actual like printf; it only has `a` for hexfloat. Yes, legally I can inject whatever I want into the fmt namespace, but I would rather strictly add to existing meanings than re-interpret them.

Matt
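For what it's worth, the purely additive route is mechanical; here is a minimal sketch, assuming a hypothetical decimal64 value type (the coefficient/exponent layout and the to_actual_string helper are illustrative stand-ins, not part of {fmt} or any real decimal library):

```cpp
// Purely additive sketch: give a user-defined decimal type its own 'a'
// ("actual", quantum-preserving) presentation by specializing
// fmt::formatter. decimal64 and to_actual_string are hypothetical.
#include <fmt/format.h>
#include <cstddef>
#include <string>

struct decimal64 {
    long long coeff; // decimal coefficient
    int exp;         // quantum exponent: value = coeff * 10^exp
};

// Render coeff * 10^exp without normalizing, so {1200, -3} stays "1.200".
std::string to_actual_string(decimal64 d)
{
    bool neg = d.coeff < 0;
    std::string digits = std::to_string(neg ? -d.coeff : d.coeff);
    std::string out;
    if (d.exp >= 0) {
        out = digits + std::string(static_cast<std::size_t>(d.exp), '0');
    } else {
        std::size_t frac = static_cast<std::size_t>(-d.exp);
        if (frac >= digits.size())
            out = "0." + std::string(frac - digits.size(), '0') + digits;
        else
            out = digits.substr(0, digits.size() - frac) + '.' +
                  digits.substr(digits.size() - frac);
    }
    return neg ? '-' + out : out;
}

template <>
struct fmt::formatter<decimal64> {
    constexpr auto parse(format_parse_context& ctx) {
        auto it = ctx.begin();
        if (it != ctx.end() && *it == 'a') ++it; // accept 'a' = actual form
        if (it != ctx.end() && *it != '}')
            throw fmt::format_error("invalid format for decimal64");
        return it;
    }
    auto format(decimal64 d, format_context& ctx) const {
        return fmt::format_to(ctx.out(), "{}", to_actual_string(d));
    }
};

int main()
{
    fmt::print("{:a}\n", decimal64{1200, -3}); // 1.200 -- quantum preserved
    fmt::print("{:a}\n", decimal64{12, -1});   // 1.2   -- another cohort member
}
```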