Two are already fixed, and Matt has been very responsive, but I feel it is too easy for a novice to find bugs in the library after a couple of minutes' testing. It looks to me like the unit-test "sieve" is still too coarse. Some test files (like test_decimal32.cpp or test_decimal64_basis.cpp) are commented out. The fuzz tests for decimal comparisons in random_decimal32_comp.cpp appear to only test decimals created from integer values (no fractional parts). I know that other Boost candidate libraries pass with fewer tests, but I feel this one needs a lot of them. I am not even sure how one tests user-defined literals across the whole spectrum. Maybe the unit tests should be written outside of C++.
Something we've learned from test design in Multiprecision is that it proceeds in iterative steps. As you note, many of the early tests seem trivial, because they are. The really aggressive numerical-correctness testing that finds bugs, both here and in Multiprecision, comes from exercising the special functions (e.g. the gamma functions). But yes, to your point, there are always more things, and more thorough ways, to test.
Regarding the design, it looks like it was essentially done by IEEE, so there is not much to review here. I only had a brief look at the implementation, so I cannot say much. One thing I found surprising is that operator<=> is implemented in terms of operators <, > and ==. operator<=> was designed for class authors to go the opposite way: define operator<=> and you get all the others for free. So I wonder what the reason for the current implementation is.
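To illustrate, the usual C++20 pattern I have in mind is roughly this (a toy type of my own, nothing from the library): define operator<=> once, here defaulted, and the compiler derives the remaining comparisons from it.

    #include <compare>

    struct toy_value
    {
        long long v;

        // A defaulted operator<=> also provides operator==, and the compiler
        // rewrites <, >, <= and >= as calls to <=>.
        auto operator<=>(const toy_value&) const = default;
    };

    static_assert(toy_value{1} < toy_value{2});  // rewritten as (lhs <=> rhs) < 0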
The library is C++14 with optional support for C++20. Defining operator<=> in terms of the existing operators avoids implementing everything twice (a rough sketch of the idea is below). Thanks for your detailed review, bug reports, and doc issues. Matt
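P.S. A rough sketch of what I mean, with a toy type rather than the actual library code: the C++14 operators carry the comparison logic, and the C++20 operator<=> is only compiled when the compiler supports it and simply forwards to them.

    struct toy_decimal
    {
        int value;
    };

    // C++14 operators: all of the comparison logic lives here.
    inline bool operator==(toy_decimal lhs, toy_decimal rhs) { return lhs.value == rhs.value; }
    inline bool operator< (toy_decimal lhs, toy_decimal rhs) { return lhs.value <  rhs.value; }
    inline bool operator> (toy_decimal lhs, toy_decimal rhs) { return rhs < lhs; }

    #ifdef __cpp_impl_three_way_comparison
    #include <compare>

    // Optional C++20 layer: operator<=> is a thin wrapper over the operators
    // above, so nothing is implemented twice.
    inline std::partial_ordering operator<=>(toy_decimal lhs, toy_decimal rhs)
    {
        if (lhs == rhs) { return std::partial_ordering::equivalent; }
        if (lhs <  rhs) { return std::partial_ordering::less; }
        if (lhs >  rhs) { return std::partial_ordering::greater; }
        // Unreachable for this toy type; a real decimal type needs it for NaN operands.
        return std::partial_ordering::unordered;
    }
    #endif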