Subject: Re: [boost] [review] Multiprecision review scheduled for June 8th - 17th, 2012
From: Vicente J. Botet Escriba (vicente.botet_at_[hidden])
Date: 2012-06-02 16:33:32
On 31/05/12 20:17, John Maddock wrote:
>
>> * I think that the fact that operands of different backends cannot
>> be mixed in the same operation limits some interesting operations:
>>
>> I would expect the result of unary operator-() to always be signed. Is
>> this operation defined for signed backends?
>
> It is, but I'm not sure it's useful.
I can't manage to find it now in the documentation for mp_number, nor
in the code. Could you point me to where it is defined?
>
> Currently there's only one unsigned backend, and it does the
> equivalent of a two's complement negate - ie unary minus is equivalent
> to (~i + 1). It does this because this is used to implement some of
> the operations (at both frontend and backend level), so it's hard to
> change. It might be possible to poison the unary minus operator at
> the top level so it doesn't compile for unsigned integer types, but
> I'd have to investigate that.
>
> Basically unsigned types are frankly horrible :(
I agree, but they are also quite useful ;-)
>
>> I would expect the result of binary operator-() to always be signed. Is
>> this operation defined for signed backends? What is the behavior of
>> mp_uint128_t(0) - mp_uint128_t(1)?
>
> It's a mp_uint128_t, and the result is the same as you would get for a
> built in 128 bit unsigned type that does 2's complement arithmetic.
> This is intentional, as the intended use for fixed precision cpp_int's
> is as a replacement for built in types.
I can understand that you want the class cpp_int to behave like the
built-in types, but I can also understand that others expect a high-level
numeric class not to suffer from the inconveniences of the built-in types
and to be closer to the mathematical model. I expected mp_number to
accommodate these different expectations via different backends, but
maybe my expectations are wrong.
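To make sure I understand the intended semantics, here is a minimal sketch
using built-in unsigned arithmetic; the comment about mp_uint128_t at the
end is only my assumption about what the library intends, not something I
have verified against the review code:

  #include <cassert>
  #include <climits>

  int main()
  {
      // Unary minus on a built-in unsigned type is the two's complement
      // negate described above, i.e. -i == ~i + 1.
      unsigned int i = 5;
      assert(-i == ~i + 1u);

      // Binary minus wraps modulo 2^N, so 0 - 1 yields the maximum value.
      unsigned int zero = 0, one = 1;
      assert(zero - one == UINT_MAX);

      // My assumption: mp_uint128_t(0) - mp_uint128_t(1) is intended to
      // behave the same way, wrapping to the maximum 128-bit value.
      return 0;
  }

If that is indeed the intended behaviour, a short note like this in the
tutorial would already answer my question.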
>> It would be great if the tutorial could show that it is nevertheless
>> possible to add an mp_uint128_t and an mp_int256_t, or is it not possible?
>> I guess this is possible, but a conversion is needed before adding
>> the operands. I don't know whether this behavior hides some
>> possible optimizations.
>
> Not currently possible (compiler error).
Why? Isn't mp_uint128_t convertible to mp_int256_t?
>
> I thought about mixed operations early on and decided it was such a
> can of worms that I wouldn't go there at this time. Basically there
> are enough design issues to argue about already ;-)
Such as?
>
> One option would be to have a further review for that specific issue
> at a later date.
Maybe this is a good compromise.
>
> However, consider this: in almost any non-trivial scenario I can think
> of, if mixed operations are allowed, then expression template enabled
> operations will yield a different result to non-expression template
> operations.
Why? Could you clarify?
> In fact it's basically impossible for the user to reason about what
> expression templates might do in the face of mixed precision
> operations, and when/if promotions might occur. For that reason I'm
> basically against them, even if, as you say, it might allow for some
> optimisations in some cases.
It is not only a matter of optimization. When working with fixed precision,
it is important to know the precision of the result type of an arithmetic
operation, so that no information is lost through overflow or loss of
resolution.
>
>> * What about replacing the second bool template parameter with an enum
>> class expression_template {disabled, enabled}, which would be more
>> explicit? That is
>>
>> typedef mp::mp_number<mp::mpfr_float_backend<300>, false> my_float;
>>
>> versus
>>
>> typedef mp::mp_number<mp::mpfr_float_backend<300>,
>> mp::expression_template::disabled> my_float;
>
> Not a bad idea actually, I'd like to know what others think.
The same applies to the sign parameter.
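Just to make the suggestion concrete, here is a hypothetical sketch; the
enum name and the mp_number signature below are my invention, not the
library's current interface:

  namespace mp {

  // Hypothetical replacement for the bool parameter.
  enum class expression_template { disabled, enabled };

  template <class Backend,
            expression_template ET = expression_template::enabled>
  class mp_number { /* ... */ };

  } // namespace mp

  // The call site then documents itself:
  // typedef mp::mp_number<mp::mpfr_float_backend<300>,
  //                       mp::expression_template::disabled> my_float;

The same kind of enumeration could presumably replace the bool that selects
signed versus unsigned in cpp_int.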
>
>> * Why doesn't cpp_dec_float have a template parameter to give the
>> integral digits? Or, as in the C++ standard proposal from Lawrence Crowl
>> (http://www.open-std.org/jtc1/sc22/wg21/docs/papers/2012/n3352.html),
>> take the range and resolution as template parameters?
>
> I don't understand, how is that different from the number of decimal
> digits?
Oh, I get it now. The decimal digits concern the mantissa and not the
digits of the fractional part?
>
>> * What about adding Throws specification on the mp_number and backend
>> requirements operations documentation?
>
> Well mostly it would be empty ;-) But yes, there are a few situations
> where throwing is acceptable, but it's never a requirement.
By empty, do you mean that the operation throws nothing? If so, this is
an important feature and/or requirement.
>
>> * Can the user define a backend for fixed int types that needs to
>> deal with overflow?
>
> For sure, just flag an error (throw for example) for any operation
> that overflows.
I guess then that most of the operations could throw if the backend
throws, so the Throws specification should take care of this.
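To illustrate the kind of check I have in mind (on a built-in type, since
the exact backend hook signatures are beside the point here):

  #include <limits>
  #include <stdexcept>

  // The sort of test a "checked" fixed-width backend could perform inside
  // its eval_ routines before committing a result.
  unsigned int checked_add(unsigned int a, unsigned int b)
  {
      if (b > std::numeric_limits<unsigned int>::max() - a)
          throw std::overflow_error("fixed-width addition overflowed");
      return a + b;
  }

Every mp_number operation that forwards to such a backend would then
inherit this throwing behaviour, which is why I think the Throws
specification matters.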
>
>> * Why is bit_set a free function?
>
> Why not?
>
> At the time, that seemed the natural way to go, but now you mention it
> I guess it could be an enable_if'ed member function.
>
> I guess I have no strong views either way.
I just wanted to know if there were some strong reasons. An
alternative could be to follow the std::bitset<> or dynamic_bitset
interfaces.
>
>> * I don't see anything about overflow for cpp_dec_float backend
>> operations. I guess it is up to the user to avoid overflow, as for
>> integers. What would be the result on overflow? Could this be added
>> to the documentation?
>
> It supports infinities and NaN's - should be mentioned somewhere, but
> I'll add to the reference section. So basically behaviour is the same
> as for double/float/long double.
OK. I see.
>
>> * Can we convert from a cpp_dec_float_100 to a cpp_dec_float_50? If
>> yes, which rounding policy is applied? Do you plan to let the user
>> configure the rounding policy?
>
> Yes you can convert, and the rounding is currently poorly defined :-(
>
> I'll let Chris answer about rounding policies, but basically it's a
> whole lot of work. The aim is not to try and compete with say MPFR,
> but be "good enough" for most purposes. For some suitable definition
> of "good enough" obviously ;-)
>
>> BTW, I see in the reference "Type mp_number is default constructible,
>> and both copy constructible and assignable from: ... Any type that
>> the Backend is constructible or assignable from. "
>> I would expect to have this information in some way on the tutorial.
>
> It should be in the "Constructing and Interconverting Between Number
> Types" section of the tutorial, but will check.
I didn't find it there.
>
>
>> If not, what about an mp_number_cast function taking a rounding
>> policy as a parameter?
>
> I think it would be very hard to define a coherent set of rounding policies
> that were applicable to all backends... including third party ones
> that haven't been thought of yet. Basically ducking that issue at
> present :-(
Could we expect this as an improvement in a future release?
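For reference, what I have in mind is something along these lines; the
function name, the policy types and the whole interface are purely
hypothetical:

  // Purely hypothetical sketch of an mp_number_cast taking a rounding
  // policy; nothing like this exists in the reviewed library.
  struct round_to_nearest { /* ... */ };
  struct round_toward_zero { /* ... */ };

  template <class To, class RoundingPolicy, class From>
  To mp_number_cast(const From& from)
  {
      // A real implementation would ask RoundingPolicy how to drop the
      // extra digits; for now this is just today's unspecified conversion.
      return To(from);
  }

  // Usage (my_float_50 being some narrower type):
  // my_float_50 x = mp_number_cast<my_float_50, round_to_nearest>(y);

Even a small, documented set of policies (nearest, toward zero) would make
conversions between precisions easier to reason about.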
>
>> * Does the cpp_dec_float backend satisfy any of the Optional
>> Requirements? The same question for the other backends?
>
> Yes, but it's irrelevant / an implementation detail. The optional
> requirements are there for optimisations, the user shouldn't be able
> to detect which ones a backend chooses to support.
Even the conversion constructors?
>
>> * Is there a difference between implicit and explicit construction?
>
> Not currently.
So I guess that only implicit construction is supported. I really think
that mp_number should provide both constructors if the backend provides them.
>
>> * On C++11 compilers providing explicit conversion, couldn't the
>> convert_to function be replaced by an explicit conversion operator?
>
> I don't know, I'd have to think about that, what compilers support
> that now?
GCC and Clang at least. Does MSVC 11?
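What I was thinking of is along these lines; my_number below is a
hypothetical stand-in, not the actual mp_number code:

  // Hypothetical illustration: how convert_to<T>() could be expressed as a
  // C++11 explicit conversion operator on compilers that support them.
  class my_number
  {
      double value_; // stand-in for the real backend state
  public:
      explicit my_number(double v) : value_(v) {}

      explicit operator double() const { return value_; } // instead of convert_to<double>()
  };

  // double d = n;                      // still forbidden: no implicit conversion
  // double d = static_cast<double>(n); // explicit, like convert_to<double>()

The explicit operator keeps your "to an mp_number, never from it" rule for
implicit conversions, while giving the standard syntax for explicit ones.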
>
>> * Are implicit conversions possible?
>
> To an mp_number, never from it.
Do you mean that there is no implicit conversion from mp_number to a
built-in type?
>
>> * Do you plan to add constexpr and noexcept to the interface? After
>> thinking about it a little, I'm wondering whether this is possible when
>> using third-party library backends that don't provide them.
>
> I'm also not sure if it's possible, or even what we would gain - I
> can't offhand think of any interfaces that could use constexpr for
> example.
It depends on the backend. But construction from built-ins and most of
the arithmetic operations could be constexpr.
Note that noexcept is a different matter.
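As a trivial example of what I mean, for a backend that stores its value
inline (this is a hypothetical backend, not cpp_int or cpp_dec_float):

  // Hypothetical: a backend holding its state in a plain member could offer
  // constexpr construction from a built-in integer, and mp_number could
  // forward it.
  struct tiny_backend
  {
      unsigned long long limb;
      constexpr tiny_backend(unsigned long long v) : limb(v) {}
  };

  constexpr tiny_backend the_answer(42); // evaluated at compile time

A backend wrapping a third-party C library (GMP, MPFR) obviously could not
do this, which is why I say it depends on the backend.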
>
>> * Why do you allow the argument of left and right shift operations to
>> be signed and throw an exception when it is negative? Why not just forbid
>> it for signed types?
>
> Good question, although:
>
> * I think it's pretty common to write "mynumber << 4" and expect it to
> compile.
Is it so hard to write "mynumber << 4u"?
> * I don't want implicit conversions from signed to unsigned in this
> case as it can lead to hard to track down errors if the signed value
> really is negative.
I agree. Unsigned can be converted to signed, but not the opposite.
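To make the alternative concrete, forbidding signed shift counts could look
roughly like this; the operator below is a hypothetical sketch, not the
library's:

  #include <type_traits>

  struct big_uint { /* ... */ };

  // Hypothetical: accept only unsigned shift counts, so a negative count
  // becomes a compile-time error instead of a runtime exception.
  template <class U>
  typename std::enable_if<std::is_unsigned<U>::value, big_uint>::type
  operator<<(const big_uint& x, U /*shift*/)
  {
      return x; // the actual shifting is omitted here
  }

  // big_uint n;
  // n << 4u;  // OK
  // n << 4;   // does not compile: int is signed

I accept that it breaks the habit of writing "mynumber << 4", but at least
the error shows up at compile time.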
>
>> * Why can the "Non-member standard library function support" be used
>> only with floating-point backend types? Why not with fixed-point types?
>
> Because we don't currently have any to test this with.
Well, you could update the documentation to say just that.
>
> Is supporting pow or exp with a fixed point type a good idea?
I don't know if they will be used often. I'm just working on a logarithm
for fixed-point types.
>
>
>> * Why have you chosen the following requirements for the backend?
>> - negate instead of operator-()
>> - eval_op instead of operator op=()
>> - eval_convert_to instead of explicit operator T()
>> - eval_floor instead of floor
>
> * non-member functions are required if defaults are to be provided for
> the optional requirements.
> * There are some non-members that can't be written as overloaded
> non-member operators but can be named free functions (sorry I forget
> which ones, but I remember seeing one or two along the way).
> * explicit conversions aren't well supported at present.
> * Compiler bug workaround (older GCC versions), there's a note at the
> requirements section: "The non-member functions are all named with an
> "eval_" prefix to avoid conflicts with template classes of the same
> name - in point of fact this naming convention shouldn't be necessary,
> but rather works around some compiler bugs. "
>
OK. I understand now why.
>> Optimization? Is this optimization valid for short types (e.g. up to
>> 4/8 bytes)?
>
> What optimisation?
I thought this was related to optimization of the expression templates.
>> * Or could the library provide a trivial backend adaptor that requires
>> the backend just to provide the usual operations instead of the eval_xxx?
>
> There is such a backend (undocumented) in SVN - it's called
> arithmetic_backend.
>
> However it's not nearly as useful as you might think - there are still
> a bunch of things that have to be written specifically for each
> backend type. That's why it's not part of the library submission.
I will take a look.
>> * And last, I don't see anything related to rvalue references and move
>> semantics. Have you analyzed whether their use could improve the
>> performance of the library?
>
> Analyzed no, but rvalue references are supported for copying if the
> backend also supports it.
>
> I do seem to recall seeing different compilers which both claim to
> support rvalue refs doing different things with the code though - if I
> remember rightly gcc is much more willing to use rvalue based move
> semantics than VC++.
>
I don't know why, but I thought rvalue references and move semantics
should help with optimization in this domain. Maybe some experts could
say a word on this.
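For what it is worth, the case I had in mind is a backend that owns heap
storage, where returning temporaries could steal the buffer instead of
reallocating it; a hypothetical sketch:

  #include <utility>
  #include <vector>

  // Hypothetical heap-backed backend: copying reallocates the limb buffer,
  // moving just steals it.
  struct heap_backend
  {
      std::vector<unsigned> limbs;

      heap_backend() = default;
      heap_backend(const heap_backend& other) : limbs(other.limbs) {}                  // O(n)
      heap_backend(heap_backend&& other) noexcept : limbs(std::move(other.limbs)) {}   // O(1)
  };

If the expression template machinery already avoids most temporaries, the
gain may indeed be small; it would be interesting to measure.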
Best,
Vicente