
Subject: Re: [boost] [safe_numerics] Review Part 2 (Implementation)
From: Steven Watanabe (watanabesj_at_[hidden])
Date: 2017-03-06 15:35:36


All comments are as of 3bcfabe1cafb8ef6edf08d3bd8730a1555b1c45d


// policy which creates results types equal to that of C++ promotions.
// Using the policy will permit the program to build and run in release
// mode which is identical to that in debug mode except for the fact
// that errors aren't trapped.
Copy/paste from native?

        // clause 2 - otherwise if the rank of the unsigned type exceeds
        // the rank of the of the maximum signed type
How is this possible? The rank of std::intmax_t
is the maximum possible rank. If the unsigned type
is larger, then rank will return void which gives
a compile error when accessing ::value.
Actually it looks like calculate_max_t was originally
written to return T or U, but was then altered to
use [u]intmax_t.

        // clause 4 - otherwise use unsigned version of the signed type
It isn't clear to me why uintmax_t is preferable
to intmax_t when neither is able to represent every
possible result.

        using t_base_type = typename base_type<T>::type;
This is called from safe_base_operations:259, which
has already called base_type.

  From the documentation, I had assumed that automatic
used the range from safe_base. However this range
is not actually passed to the PromotionPolicy, so
automatic always bumps the type to the next largest
integer type, until it reaches [u]intmax_t. This
means that any expression with more than a few
terms will always be carried out using [u]intmax_t,
which is potentially inefficient, e.g. intmax_t may
require software implementation if it's larger
than the hardware register size. A greater
problem arises from the fact that mixed signed/unsigned
arithmetic eventually chooses unsigned when the
range is too large. If we have a slightly complex
expression with both signed and unsigned terms, the
result type will eventually be selected as uintmax_t,
which may be incapable of representing the result,
thus causing an error. (I know that this scenario
can also fail with builtin integer types, but
it seems to me that the whole point of 'automatic'
is to make things "just work" without having to
worry about pesky things like signedness)

        using result_base_type = typename boost::mpl::if_c<
            || std::numeric_limits<u_base_type>::is_signed,
Why division doesn't use calculate_max_t like multiplication
deserves some explanation. Division can overflow
too (numeric_limits<uintmax_t>::max() / 1).

    static constexpr divide(
This function appears to be necessary, but is
not documented as part of the PromotionPolicy
concept. The same goes for modulus.

    template<typename T, typename U>
    struct left_shift_result
Inconsistent with the documentation of PromotionPolicy
which says: PP::left_shift_result<T>::type. Same
for right_shift_result, and bitwise_result.

        using type = typename boost::mpl::if_c<
            (sizeof(t_base_type) > sizeof(u_base_type)),
Is it actually correct to combine &, |, and ^
in the same class? The result of & will always
fit in the smaller type, but | and ^ require
the larger type.


            // INT32-C Ensure that operations on signed
            // integers do not overflow
This is the unsigned case.

  template<class R, class T, class U>
  constexpr checked_result<R> add(
Consider the following (assuming 2's complement):
checked::add<int>(INT_MIN, UINT_MAX)
The result should be INT_MAX, but your implementation
will generate an error when converting UINT_MAX
to int.

    if(! rt.no_exception() )
The double negative makes this confusing to read.

    return detail::add<R>(t, u);
This relies on implicit conversion and may generate
spurious warnings.

Everything I said about add also applies to subtract.

This should be overflow_error.

Again as with add. This time the problematic case is
multiply<unsigned>(-1, -1)

    return detail::divide<R>(tx.m_r, ux.m_r);
Just because m_r is public doesn't mean you should
access it directly.

  constexpr divide_automatic(
The difference between divide and divide_automatic
could use some explanation. It looks like the only
difference is that divide_automatic doesn't cast
the divisor, but it isn't clear why. Also, why
do you need two checked divide functions in the
first place? There should really just be one
that always works correctly.

    return cast<R>(abs(t) % abs(u));
This likely differs from the builtin % and should be documented
as such. Also I prefer % to have the property that
t / u * u + t % u = t, which this implementation
does not satisfy.

    // INT34-C C++ standard paragraph 5.8
    if(u > std::numeric_limits<T>::digits){
You're off-by-one
"The behavior is undefined if the right operand
is negative, or greater than *or equal to* the length in
bits of the promoted left operand" (emphasis added)

left_shift and right_shift (625-792) have several problems:
- 640: You're checking the width of T even though the
       actual shift is done after casting to R.
- 696, 708: These overloads serve no purpose. The
       only difference between them is checks that
       are already handled by check_shift.
- 785: This test duplicates work done by check_shift


36: // can't select constructor based on the current status of another
    // checked_result object. So no copy constructor
Saying that there is no copy constructor is misleading,
as the default copy constructor exists and is used.

Why is this commented out? Reading a
not-currently-active member of a union
is undefined behavior.

    constexpr boost::logic::tribool
    operator<=(const checked_result<T> & t) const {
        return ! operator>(t) && ! operator<(t);
This is operator==. The correct implementation is just ! operator>(t).


    using rank =
Rank is only used for comparisons. I don't see
any reason to use it instead of sizeof directly.
This applies to automatic::rank as well.
(Note: rank matters for builtin integer promotion,
because types with the same size can have different
ranks. Your implementation of rank, however, is
strictly based on size, making it essentially useless.)


// contains operations for doing checked aritmetic on NATIVE
// C++ types.
Wrong file. Also s/aritmetic/arithmetic/ for
wherever this was pasted from.




#include <cstdlib> // quick_exit
I don't see quick_exit anywhere in this file.

// account for the fact that for floats and doubles
There's also long double, you know.

namespace {
Please don't use the unnamed namespace in headers.


template<typename R>
constexpr bool less_than(
    const checked_result<R> & lhs,
    const checked_result<R> & rhs
Why is this here and not in checked_result.hpp?
Also this can be implemented easily using operator<:
  return lhs < rhs;
(The implicit conversion from tribool to bool
does exactly what you want here.)

            checked::modulus<R>(t.l, u.l),
            checked::modulus<R>(t.l, u.u),
            checked::modulus<R>(t.u, u.l),
            checked::modulus<R>(t.u, u.u)
This appears to be copied from divide, but it's totally
wrong. modulus is not monotonic, so you can't get
away with checking only the boundaries.

    const R rl = safe_compare::greater_than(t.l, u.l) ? t.l : u.l;
The implicit cast to R may not be safe.
(same at 340, 356, and 357)

    if(safe_compare::greater_than(rl, ru)){
This check isn't necessary. The union of two non-empty
intervals can't be empty.

std::ostream & operator<<(std::ostream & os, const
boost::numeric::interval<unsigned char> & i)
- This specialization is not a template, and can
  only appear in one translation unit.
- The implementation requires <ostream>, but you only #include <iosfwd>


    // Standard C++ type promotion for expressions doesn't depend
    // on the operation being performed so we can just as well
    // use any operation to determine it. We choose + for this
    // purpose.
Comment outdated. The code (somewhat) uses decltype on the
correct operators.


    safe_base operator++(){ // pre increment
pre-increment and pre-decrement should return by reference.
post-increment and post-decrement should return by value.
You currently have everything returning by value
except post-decrement (which returns a dangling

    constexpr safe_base operator-() const { // unary minus
should unary minus allow unsigned to become signed? (subject to
the PromotionPolicy, of course).

    constexpr safe_base operator~() const {
This will fail if Min/Max are anything other than
the full range of the Stored type.


        indeterminate(t_interval < this_interval),
This works, but it would be a bit more intuitive
if interval had an 'intersects' function.

// Note: the following global operators will be only found via
// argument dependent lookup. So they won't conflict any
// other global operators for types in namespaces other than
// boost::numeric
Unfortunately, ADL casts a very wide net.
What makes these operators (mostly) safe
is the use of enable_if.

    // Use base_value(T) ( which equals MIN ) to create a new interval. Same
    // for MAX. Now
Now ... what?

    constexpr static const interval<result_base_type> type_interval =
        exception_possible() ?
This can overestimate the result interval if the
interval only overflows on one side.

    using type_helper = typename boost::mpl::if_c<
This is what? Support for floating point in the future?

        // when we add the temporary intervals above, we'll get a new
        // with the correct range for the sum !
I think you mean multiply and product, not add and sum.

        constexpr static const interval<result_base_type> type_interval =
            exception_possible() ?
At this point, you lose any benefit of divide_nz over divide.
You should check r_interval.exception() instead of exception_possible().
The same goes for modulus.

    // argument dependent lookup should guarentee that we only get here
I don't understand this comment.


            % static_cast<result_base_type>(base_value(u))
Okay, we have a problem here: checked::modulus does /not/
have the same behavior as the builtin % when the divisor
is negative, which means that treating them as interchangeable
here will result in weird and inconsistent behavior (not to
mention silently producing values outside the expected interval.)

typename boost::lazy_enable_if_c<
enable_if_c with bool should work just as well.

        // if the ranges don't overlap
        (! boost::logic::indeterminate(r)) ?
            // values in those ranges can't be equal
This is operator<, not operator==. You should return r, not false here.
Same at 970 in operator>. Also, you're already implementing
operator<= and >= in terms of < and >. Is there a reason
to implement > separately? (t > u) == (u < t)

    // handle safe<T> << int, int << safe<U>, safe<T> << safe<U>
    // exclude std::ostream << ...
Copy/paste. Should be >> and istream.


This doesn't take the input range into account and
can drastically overestimate the result range.

    using bwr = bitwise_or_result<T, U>;
I think it would be slightly better to create an alias
template<class T, class U>
using bitwise_xor_result = bitwise_or_result<T, U>;
instead of using bitwise_or_result directly in
the implementation of xor. (Note that the range
of ^ can be larger than that of | if the lower bound
of the parameters is greater than 0.)

1432, 1474:
    class P, // promotion polic
s/polic/policy/

std::ostream & operator<<(
    std::ostream & os,
Should this use std::basic_ostream<CharT, Traits>?





    constexpr safe_literal_impl operator-() const { // unary minus
Err, what? This will only work if N is 0.

#define safe_literal(n) \
This is not acceptable. It totally violates
the naming rules for macros.



    constexpr log(T x){
A function with a common name, that is not in a private
namespace, and can match almost anything is a really
bad idea.


The reason that the check is commented out deserves
explanation. (Missing functions are acceptable and
will cause a compile-time error instead of checking
at runtime.)

concept/*.hpp: no comments

In Christ,
Steven Watanabe
