Subject: Re: [boost] [review] Multiprecision review scheduled for June 8th - 17th, 2012
From: John Maddock (boost.regex_at_[hidden])
Date: 2012-05-31 14:17:31


As per Jeff's comments I'm replying to the boost-list not boost-users.

>I have spent some hours reading the documentation. Here are some comments
>and a lot of questions.
>
>* As all the classes are at the multi-precision namespace, why name the
>main class mp_number and not just number?
>
>typedef mp::number<mp::mpfr_float_backend<300> > my_float;

Good question :)

I don't have a particularly strong view on whether it's "number" or "mp_number",
but would like to know what others think.

>* I think that the fact that operands of different backends can not be
>mixed on the same operation limits some interesting operations:
>
>I would expect the result of unary operator-() always signed? Is this
>operation defined for signed backends?

It is, but I'm not sure it's useful.

Currently there's only one unsigned backend, and it does the equivalent of a
two's complement negate - i.e. unary minus is equivalent to (~i + 1). It does
this because this is used to implement some of the operations (at both
frontend and backend level), so it's hard to change. It might be possible
to poison the unary minus operator at the top level so it doesn't compile
for unsigned integer types, but I'd have to investigate that.
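
For example, the current behaviour is (untested sketch - the header path is
from memory, and mp_uint128_t is the fixed-width typedef referred to below):

   #include <boost/multiprecision/cpp_int.hpp>
   #include <cassert>

   int main()
   {
      using boost::multiprecision::mp_uint128_t;

      mp_uint128_t i = 42;
      // Unary minus on the unsigned backend behaves like a two's
      // complement negate, so these two agree:
      mp_uint128_t a = -i;
      mp_uint128_t b = ~i + 1;
      assert(a == b);
   }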

Basically unsigned types are frankly horrible :(

>I would expect the result of binary operator-() always signed? Is this
>operation defined for signed backends? what is the behavior of
>mp_uint128_t(0) - mp_uint128_t(1)?

It's an mp_uint128_t, and the result is the same as you would get from a
built-in 128-bit unsigned type that does two's complement arithmetic. This is
intentional, as the intended use for fixed-precision cpp_int's is as a
replacement for the built-in types.
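
i.e. (untested sketch, header path from memory):

   #include <boost/multiprecision/cpp_int.hpp>
   #include <iostream>

   int main()
   {
      using boost::multiprecision::mp_uint128_t;

      // Wraps around just as a built-in 128-bit unsigned type would:
      // the result is the largest 128-bit unsigned value.
      mp_uint128_t r = mp_uint128_t(0) - mp_uint128_t(1);
      std::cout << r << std::endl;
   }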

>It would be great if the tutorial could show that it is possible however to
>add a mp_uint128_t and a mp_int256_t, or isn't it possible?
>I guess this is possible, but a conversion is needed before adding the
>operands. I don't know if this behavior is not hiding some possible
>optimizations.

Not currently possible (compiler error).

I thought about mixed operations early on and decided it was such a can of
worms that I wouldn't go there at this time. Basically there are enough
design issues to argue about already ;-)

One option would be to have a further review for that specific issue at a
later date.

However, consider this: in almost any non-trivial scenario I can think of, if
mixed operations are allowed, then expression template enabled operations
will yield a different result to non-expression template operations. In
fact it's basically impossible for the user to reason about what expression
templates might do in the face of mixed precision operations, and when/if
promotions might occur. For that reason I'm basically against them, even
if, as you say, it might allow for some optimisations in some cases.
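
So for now the user has to convert one operand explicitly, something like
this (untested sketch - it assumes one fixed-width cpp_int can be constructed
from another, and the header path is from memory):

   #include <boost/multiprecision/cpp_int.hpp>

   int main()
   {
      using namespace boost::multiprecision;

      mp_uint128_t a = 1;
      mp_int256_t  b = 2;

      // mp_int256_t c = a + b;            // mixed backends: compiler error
      // Convert one operand explicitly first (assumes a 256-bit cpp_int
      // can be constructed from a 128-bit one):
      mp_int256_t c = mp_int256_t(a) + b;
   }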

>* Anyway, if the library authors don't want to open to this feature, the
>limitation should be stated more clearly, e.g in the reference
>documentation
>"The arguments to these functions must contain at least one of the
>following:
>
> An mp_number.
> An expression template type derived from mp_number.
>"
>there is nothing that let think mixing backend is not provided.

Nod, will fix.

>* What about replacing the second bool template parameter by an enum class
>expression_template {disabled, enabled}; which will be more explicit. That
>is
>
> typedef mp::mp_number<mp::mpfr_float_backend<300>, false> my_float;
>
>versus
>
> typedef mp::mp_number<mp::mpfr_float_backend<300>,
> mp::expression_template::disabled> my_float;

Not a bad idea actually, I'd like to know what others think.

>* As I posted in this ML already I think that allocators and precision are
>orthogonal concepts and the library should allow to associate one for fixed
>precision. What about adding a 3rd parameter to state if it is fixed or
>arbitrary precision?

I could do that yes.

>* Why cpp_dec_float doesn't have a template parameter to give the integral
>digits? or as the C++ standard proposal from Lawrence Crow
>(http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html), take
>the range and resolution as template parameters?

I don't understand, how is that different from the number of decimal digits?

>* What about adding Throws specification on the mp_number and backend
>requirements operations documentation?

Well, mostly it would be empty ;-) But yes, there are a few situations where
throwing is acceptable, but it's never a requirement.

>* Can the user define a backend for fixed int types that needs to manage
>with overflow?

For sure, just flag an error (throw for example) for any operation that
overflows.
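
As a sketch (hypothetical backend, not part of the library - only the eval_
naming convention, which comes up again below, is taken from the backend
requirements):

   #include <limits>
   #include <stdexcept>

   // Hypothetical fixed-width integer backend that flags overflow.
   struct checked_int64_backend
   {
      long long value;
      // ... the rest of the backend requirements would go here ...
   };

   // The backend's eval_add could detect and report overflow like this:
   inline void eval_add(checked_int64_backend& result, const checked_int64_backend& o)
   {
      long long a = result.value, b = o.value;
      if((b > 0 && a > (std::numeric_limits<long long>::max)() - b) ||
         (b < 0 && a < (std::numeric_limits<long long>::min)() - b))
         throw std::overflow_error("checked_int64_backend: addition overflowed");
      result.value = a + b;
   }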

>* Why bit_set is a free function?

Why not?

At the time, that seemed the natural way to go, but now you mention it, I
guess it could be an enable_if'ed member function.

I guess I have no strong views either way.
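
At the moment the free-function form is used something like this (untested
sketch - header path and exact argument order from memory):

   #include <boost/multiprecision/cpp_int.hpp>

   int main()
   {
      boost::multiprecision::mp_uint128_t n = 0;
      // Set bit 100 - found via ADL as a free function; the argument
      // order here is from memory.
      bit_set(n, 100);
   }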

>* I don't see nothing about overflow for cpp_dec_float backend operations.
>I guess it is up to the user to avoid overflow as for integers. what would
>be the result on overflow? Could this be added to the documentation?

It supports infinities and NaNs - that should be mentioned somewhere, so I'll
add it to the reference section. So basically the behaviour is the same as for
double/float/long double.

>* can we convert from a cpp_dec_float_100 to a cpp_dec_float_50? if yes,
>which rounding policy is applied? Do you plan to let the user configure the
>rounding policy?

Yes you can convert, and the rounding is currently poorly defined :-(

I'll let Chris answer about rounding policies, but basically it's a whole
lot of work. The aim is not to try and compete with, say, MPFR, but to be
"good enough" for most purposes. For some suitable definition of "good
enough", obviously ;-)
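
The conversion itself is straightforward (untested sketch, header path from
memory):

   #include <boost/multiprecision/cpp_dec_float.hpp>

   int main()
   {
      using namespace boost::multiprecision;

      cpp_dec_float_100 hi = 1;
      hi /= 3;
      // Converting down to 50 digits compiles fine; exactly how the
      // discarded digits are rounded is, as noted above, not tightly
      // specified yet.
      cpp_dec_float_50 lo(hi);
      (void)lo;
   }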

>BTW, I see in the reference "Type mp_number is default constructible, and
>both copy constructible and assignable from: ... Any type that the Backend
>is constructible or assignable from. "
>I would expect to have this information in some way on the tutorial.

It should be in the "Constructing and Interconverting Between Number Types"
section of the tutorial, but will check.

>I will appreciate also if section "Constructing and Interconverting Between
>Number Types" says something about convert_to<T> member function.

Nod, will do.
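
i.e. something like (untested sketch, header path from memory):

   #include <boost/multiprecision/cpp_dec_float.hpp>

   int main()
   {
      boost::multiprecision::cpp_dec_float_50 x = 2.5;
      // Explicit extraction to a built-in type via the convert_to<> member:
      double d = x.convert_to<double>();
      (void)d;
   }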

>If not, what about a mp_number_cast function taking as parameter a rounding
>policy?

I think it would be very hard to define a coherent set of rounding policies
that were applicable to all backends... including third-party ones that
haven't been thought of yet. Basically I'm ducking that issue at present :-(

>* Does the cpp_dec_float back end satisfies any of the Optional
>Requirements? The same question for the other backends?

Yes, but it's irrelevant / an implementation detail. The optional
requirements are there for optimisations; the user shouldn't be able to
detect which ones a backend chooses to support.

>* Is there a difference between implicit and explicit construction?

Not currently.

>* On c++11 compilers providing explicit conversion, couldn't the convert_to
>function be replaced by a explicit conversion operator?

I don't know, I'd have to think about that, what compilers support that now?

>* Are implicit conversion possible?

To an mp_number, never from it.
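
For example (untested sketch, header path from memory):

   #include <boost/multiprecision/cpp_dec_float.hpp>

   int main()
   {
      using boost::multiprecision::cpp_dec_float_50;

      cpp_dec_float_50 x = 3;            // implicit conversion *to* mp_number: fine
      // double d = x;                   // implicit conversion *from* it: won't compile
      double d = x.convert_to<double>(); // has to be explicit
      (void)d;
   }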

>* Do you plan to add constexpr and noexcept to the interface? After
>thinking a little bit I'm wondering if this is this possible when using 3pp
>libraries backends that don't provide them?

I'm also not sure if it's possible, or even what we would gain - I can't
offhand think of any interfaces that could use constexpr, for example.

>* Why do you allow the argument of left and right shift operations to be
>signed and throw an exception when negative? Why don't just forbid it for
>signed types?

Good question, although:

* I think it's pretty common to write "mynumber << 4" and expect it to
compile.
* I don't want implicit conversions from signed to unsigned in this case as
it can lead to hard to track down errors if the signed value really is
negative.
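
To illustrate both points (untested sketch - header path from memory, and
I've deliberately not named the exception type here):

   #include <boost/multiprecision/cpp_int.hpp>
   #include <iostream>

   int main()
   {
      boost::multiprecision::mp_uint128_t n = 1;

      n = n << 4;          // signed literal accepted, as above

      try
      {
         int shift = -1;
         n = n << shift;   // negative count: throws at runtime
      }
      catch(...)
      {
         std::cout << "negative shift rejected" << std::endl;
      }
   }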

>* Why the "Non-member standard library function support" can be used only
>with floating-point Backend types? Why not with fixed-point types?

Because we don't currently have any to test this with.

Is supporting pow or exp with a fixed point type a good idea?
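
For floating-point backends, by the way, usage is just the natural one
(untested sketch, header path from memory):

   #include <boost/multiprecision/cpp_dec_float.hpp>

   int main()
   {
      using boost::multiprecision::cpp_dec_float_50;

      cpp_dec_float_50 x = 2;
      // The non-member functions are found via ADL for floating-point backends:
      cpp_dec_float_50 r = sqrt(x) * exp(x);
      (void)r;
   }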

>* What is the type of boost::multiprecision::number_category<B>::type for
>all the provided backends? Could the specialization for
>boost::multiprecision::number_category<B>::type be added in the
>documentation of each backend? and why not add also B::signed_types,
>B::unsigned_types, B::float_types, B::exponent_type?

OK.

>* Why have you chosen the following requirements for the backend?
>- negate instead of operator-()
>- eval_op instead of operator op=()
>- eval_convert_to instead of explicit operator T()
>- eval_floor instead of floor

* non-member functions are required if defaults are to be provided for the
optional requirements.
* There are some non-member functions that can't be written as overloaded
operators but can be written as named free functions (sorry, I forget which
ones, but I remember seeing one or two along the way).
* explicit conversions aren't well supported at present.
* Compiler bug workaround (older GCC versions), there's a note at the
requirements section: "The non-member functions are all named with an
"eval_" prefix to avoid conflicts with template classes of the same name -
in point of fact this naming convention shouldn't be necessary, but rather
works around some compiler bugs. "
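
To illustrate the naming scheme (hypothetical backend - the exact required
signatures are in the requirements section, this only shows the shape):

   #include <cmath>

   // Hypothetical backend, just to show the shape of the customisation points:
   struct my_backend
   {
      double value;   // stand-in for the real representation
      // ... plus the required typedefs, constructors etc. ...
   };

   // Named free functions rather than operators / conversion operators:
   inline void eval_add(my_backend& result, const my_backend& o)    { result.value += o.value; }
   inline void negate(my_backend& b)                                { b.value = -b.value; }
   inline void eval_convert_to(double* result, const my_backend& b) { *result = b.value; }
   inline void eval_floor(my_backend& result, const my_backend& o)  { result.value = std::floor(o.value); }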

>Optimization? Is this optimization valid for short types (e.g. up to 4/8
>bytes)?

What optimisation?

>* As the developer needs to define a class with some constraints to be a
>model of backend, which are the advantages of requiring free functions
>instead of member functions?

Easier for the library to provide default versions for the optional
requirements.

>* Couldn't these be optional if the backend defines the usual operations?

Well you can meta-program around anything I guess, doesn't mean I want to
though...

>* Or could the library provide a trivial backend adaptor that requires the
>backend just to provide the usual operations instead of the eval_xxx?

There is such a backend (undocumented) in SVN - it's called
arithmetic_backend.

However it's not nearly as useful as you might think - there are still a
bunch of things that have to be written specifically for each backend type.
That's why it's not part of the library submission.
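
Usage would be something like this (untested sketch - the header path and
template signature are guesses, since it's only in SVN at present):

   // Header name assumed - arithmetic_backend is undocumented and only in SVN:
   #include <boost/multiprecision/arithmetic_backend.hpp>

   int main()
   {
      using namespace boost::multiprecision;

      // Wrap a built-in type so it can be used through the mp_number front
      // end, here with expression templates disabled:
      mp_number<arithmetic_backend<float>, false> f = 1.5f;
      f *= 2;
   }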

>* How the performances of mp_number<this_trivial_adaptor<float>, false>
>will compare with float?

No idea, might be interesting to find out, will investigate.

>* I don't see in the reference section the relation between files and what
>is provided by them. Could this be added?

Nod.

>* And last, I don't see anything related to rvalue references and move
>semantics. Have you analyzed if its use could improve the performances of
>the library?

Analyzed no, but rvalue references are supported for copying if the backend
also supports it.

I do seem to recall seeing different compilers, both of which claim to
support rvalue refs, doing different things with the code though - if I
remember rightly gcc is much more willing to use rvalue-based move semantics
than VC++.

Thanks for the comments, John.

