Subject: Re: [boost] [safe_numerics] Review Part 2 (Implementation)
From: Steven Watanabe (watanabesj_at_[hidden])
Date: 2017-03-07 19:40:28
AMDG
On 03/07/2017 10:32 AM, Robert Ramey via Boost wrote:
>
> On 3/6/17 7:35 AM, Steven Watanabe via Boost wrote:
>>
>> automatic.hpp
>>
>> 97:
>> // clause 4 - otherwise use unsigned version of the signed type
>> std::uintmax_t
>> It isn't clear to me why uintmax_t is preferable
>> to intmax_t when neither is able to represent every
>> possible result.
>
> If both operands are signed - then uintmax_t could hold
> one more bit than intmax_t can. In these cases the
> operand result can be twice as large before runtime
> checking is necessary.
>
Right, but uintmax_t can't hold
negative values which /also/ require
runtime checking.
>
>>
>> 247:
>> using result_base_type = typename boost::mpl::if_c<
>> std::numeric_limits<t_base_type>::is_signed
>> || std::numeric_limits<u_base_type>::is_signed,
>> std::intmax_t,
>> std::uintmax_t
>> >::type;
>> Why division doesn't use calculate_max_t like multiplication
>> deserves some explanation. Division can overflow
>> too (numeric_limits<uintmax_t>::max() / 1).
>
> Hmmm - I'm sure it's because I've presumed that the result
> can always be contained in the same size type as the number
> being divided. That is, t / x <= t - and divide by zero is
> checked separately. Am I wrong about this?
>
Yes. There are two specific cases that can
cause problems:
- the example I gave above returns intmax_t which cannot
represent the maximum value of uintmax_t.
- std::numeric_limits<intmax_t>::min() / -1, which is
also out of range.
>
>>
>> checked.hpp:
>>
>> 220:
>> template<class R, class T, class U>
>> constexpr checked_result<R> add(
>> Consider the following (assuming 2's complement):
>> checked::add<int>(INT_MIN, UINT_MAX)
>> The result should be INT_MAX,
>
> I'm not seeing this. With 8 bit ints we've got
> INT_MIN = x80 = -128
> UINT_MAX = x80 = 128
> adding together x100
> truncating = 0
>
UINT_MAX (not INT_MAX) = 0xFF = 255
> which might be OK on current machines, but it does violate the
> standard in that it is undefined behavior. Given that compiler optimizers
> can do anything they want with UB, I don't think we can do this
>
>> but your implementation
>> will generate an error when converting UINT_MAX
>> to int.
>
> As I believe it should -
>
> UINT_MAX = 0x80
> convert to int - nothing changes
> x80 interpreted as an int is -128
>
> So this would transform an arithmetic value of 128 to
> a value of -128. Not a good thing.
>
> What I've done is applied the standard. Convert each
> type to the result type then do the addition. On this
> conversion we change the value - game over - invoke
> error.
>
My mental model for the correct behavior of checked::add is:
- add the values as if using infinite precision arithmetic
- truncate the result to result_type
- if the value doesn't fit, then return an error
I notice that safe_compare does extra work
to allow comparisons to give the correct
mathematical result even if casts would
fail. I believe the checked::xxx should
do the same. In addition, *not* following
this rule is precisely the reason for the
problems that you've encountered with divide.
>>
>> 230:
>> return detail::add<R>(t, u);
>> This relies on implicit conversion and may generate
>> spurious warnings.
>
> conversion - I don't see it. detail::add<R>(t, u)
> returns a checked_result<R> which is returned as another
> checked_result<R>. No conversion. Probably not even
> a copy, with copy elision.
I meant implicit conversion of t and u.
(Is the PromotionPolicy forbidden from
returning a type smaller than the arguments?)
>>
>>
>> 583:
>> constexpr divide_automatic(
>> The difference between divide and divide_automatic
>> could use some explanation. It looks like the only
>> difference is that divide_automatic doesn't cast
>> the divisor, but it isn't clear why. Also, why
>> do you need two checked divide functions in the
>> first place? There should really just be one
>> that always works correctly.
>
> LOL - that's what I thought. I couldn't make one which worked
> for both native and automatic promotions.
>
> The case which is giving me fits is dividing INT_MIN / -1
>
> INT_MIN = 0x80 = -128
> -128 / -1 = + 128 = 0x80 = INT_MIN again.
>
> So we have INT_MIN / -1 -> INT_MIN an incorrect result.
>
> Now I forget - but this is only a problem with automatic
> promotions.
>
Try this way:
if (u == 0) error;
auto uresult = checked::abs(t)/checked::abs(u);
if (sign(t) != sign(u))
return checked::negate<R>(uresult);
else
return cast<R>(uresult);
This should work correctly in all cases
as long as you handle all the sticky cases
in checked::abs and a hypothetical checked::negate.
>>
>>
>> interval.hpp:
>>
>> 101:
>> namespace {
>> Please don't use the unnamed namespace in headers.
>
> OK - but I forgot why it's not recommended
Code in the unnamed namespace is distinct
in every translation unit. Using the unnamed
namespace in a header means that every translation
unit that #includes it gets its own copy of the
code, and the linker will not merge the duplicates.
>>
>> 453:
>> template<>
>> std::ostream & operator<<(std::ostream & os, const
>> boost::numeric::interval<unsigned char> & i)
>> - This specialization is not a template, and can
>> only appear in one translation unit.
> Please expand upon this
>
If you #include this header from multiple
translation units, it will fail to link,
with a multiple definition error.
>> - The implementation requires <ostream>, but you only #include <iosfwd>
> Right - but any other code which invokes this function will already have
> included <ostream>. That's why I included <iosfwd> rather than <ostream>,
> since this operator is used almost exclusively for debugging/examples.
This operator requires ostream to be
complete /at point of definition/, not when
it is called. I.e.
#include "interval.hpp"
#include <ostream>
will fail. I'm assuming that interval.hpp
doesn't indirectly #include <ostream>.
>>
>> safe_literal.hpp:
>>
>> 111:
>> constexpr safe_literal_impl operator-() const { // unary minus
>> Err, what? This will work iff N is 0.
> that's out
>>
>> 140:
>> #define safe_literal(n) \
>> This is not acceptable. It totally violates
>> the naming rules for macros.
>
> I'm still struggling with this. What do you suggest?
I suggest leaving it out. Any name that can
legitimately be used as a macro would be more
cumbersome than writing out safe_signed_literal
or safe_unsigned_literal.
In Christ,
Steven Watanabe
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk