
Boost Users :

From: Neill Clift (NeillClift_at_[hidden])
Date: 2021-08-19 22:02:44

The architecture of cpp_int being built on 64-bit arithmetic using a 128-bit double_limb_type is interesting.
I have a question about the large-integer divide (divide_unsigned_helper). It uses the upper portions of the large integers to estimate the quotient, subtracts out a multiple of that estimate, and repeats.
It does this in 128 bit values if available from the compiler:

         double_limb_type a = (static_cast<double_limb_type>(prem[r_order]) << CppInt1::limb_bits) | prem[r_order - 1];
         double_limb_type b = py[y_order];
         double_limb_type v = a / b;
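This is not the Boost source, but a minimal sketch of the same layering idea: dividing a little-endian vector of 64-bit limbs by a single limb, where each step forms a 128-bit value from the running remainder and the next limb, just as the snippet above does with prem[r_order] and prem[r_order - 1]. It assumes a compiler that provides unsigned __int128.

```cpp
#include <cstdint>
#include <vector>

// A sketch only, assuming GCC/Clang-style unsigned __int128 support.
using limb_type = std::uint64_t;
using double_limb_type = unsigned __int128;
constexpr unsigned limb_bits = 64;

// Divide x (little-endian limbs) in place by a single limb d; return the
// remainder. Each 128-by-64-bit divide below is exactly the kind of
// operation the compiler lowers to a runtime helper such as __udivmodti4.
limb_type divide_by_limb(std::vector<limb_type>& x, limb_type d)
{
    limb_type rem = 0;
    for (std::size_t i = x.size(); i-- > 0;)
    {
        double_limb_type a = (double_limb_type(rem) << limb_bits) | x[i];
        x[i] = limb_type(a / d);
        rem  = limb_type(a % d);
    }
    return rem;
}
```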

The compiler emulates this operation in the routine __udivmodti4, which itself uses an iterative approach.
It seems to use a pretty basic shift-and-subtract algorithm, mind you.
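For illustration, a shift-and-subtract divide of the kind __udivmodti4 appears to use might look like the following, shown here on 64-bit values for brevity (this is a sketch of the general technique, not the libgcc source). The key cost is that it produces one quotient bit per iteration, so a 128-bit divide can take on the order of 128 rounds:

```cpp
#include <cassert>
#include <cstdint>

// Classic binary long division: bring down one dividend bit per step and
// subtract the divisor whenever the partial remainder allows it.
std::uint64_t shift_subtract_div(std::uint64_t n, std::uint64_t d)
{
    assert(d != 0);
    std::uint64_t q = 0, r = 0;
    for (int i = 63; i >= 0; --i)
    {
        r = (r << 1) | ((n >> i) & 1); // next dividend bit
        if (r >= d)
        {
            r -= d;                     // subtract succeeds:
            q |= std::uint64_t(1) << i; // this quotient bit is 1
        }
    }
    return q;
}
```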

As a general rule for multiprecision, is it OK to layer Knuth-like algorithm D implementations on top of each other this way?
I have no idea myself, but I wonder if this is a known issue. I would have guessed that it made sense to do a 64-by-64-bit divide to guess the quotient and repeat.
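The alternative being suggested is essentially Knuth's quotient-guess step (TAOCP vol. 2, 4.3.1, Algorithm D, step D3): estimate qhat from the top limbs with a single small divide, then correct it. A hedged sketch, scaled down to 32-bit limbs so that a plain 64-bit divide plays the role of the hardware-supported division (with 64-bit limbs the same step needs a 128-by-64-bit divide, which x86-64's div instruction supports directly):

```cpp
#include <cstdint>

using limb  = std::uint32_t;
using dlimb = std::uint64_t;
constexpr unsigned limb_bits = 32;
constexpr dlimb base = dlimb(1) << limb_bits;

// Guess the next quotient limb for u2:u1:u0 / v1:v0, where v1 is assumed
// normalised (top bit set). The initial estimate uses only one small
// divide and is at most 2 too large; the loop corrects it downward.
limb guess_qhat(limb u2, limb u1, limb u0, limb v1, limb v0)
{
    dlimb num  = (dlimb(u2) << limb_bits) | u1;
    dlimb qhat = num / v1;
    dlimb rhat = num % v1;
    while (qhat >= base ||
           qhat * v0 > ((rhat << limb_bits) | u0))
    {
        --qhat;
        rhat += v1;
        if (rhat >= base)
            break; // rhat no longer fits a limb: qhat cannot still be too big
    }
    return limb(qhat);
}
```

The point of the approach is that the estimate-and-correct loop replaces a full wide division with one native divide plus a couple of multiplies and compares per quotient limb.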
