Subject: Re: [boost] Name and Namespace for Potential Boost Extended Floating-Point
From: Simonson, Lucanus J (lucanus.j.simonson_at_[hidden])
Date: 2011-09-02 15:02:00


John Maddock wrote:
>>>> For me the rational data type is very important and should wrap the
>>>> gmp mpq_type. It would be nice if the library could use mp_int with
>>>> Boost.Rational to implement its own standalone mp_rational
>>>> datatype, but I would prefer to use the gmp type whenever possible.
>>>
>>> Support for mpq_t is on my (long) todo list.
>>
>> I'm glad to hear it. In theory a fixed precision floating point type
>> with 128 bits or more in the mantissa would also satisfy my needs,
>> but I have only tested with multi-precision rationals.
>
> Nod. In theory it should be possible to compose the low level GMP
> functions to create a fixed-precision (modulus arithmetic) integer
> type... but that's one for the future.
>
> I do have a rational/mpq backend for big_number (or whatever we call
> it)
> now, but I'm away for a week now, so it might not get committed for a
> while :-(
>
>> Gmp mpq_class expressions allocate for temporaries in compound
>> expressions.
>
> Nod, one thing I've worked hard on is eliminating temporaries inside
> expressions whenever possible, so if you write say:
>
> a = b + c + d + e + f;
>
> Then you get no temporaries at all - everything gets accumulated inside
> variable a.
>
> However, something like:
>
> a = b*c + d*e;
>
> does require one temporary for *one* of the multiplications (the
> other uses "a" as working space).
>
> One thing I've thought of is walking the expression tree at compile
> time to calculate the largest number of temporaries in the longest
> "branch", then creating these as an array at the outermost level of
> the expression evaluation and passing them down, so that:
>
> a = b*c + b*d + b*e + b*f;
>
> still only needs one temporary - which gets reused for each branch.
>
> This needs some more thought though - to make sure I don't shoot
> myself in the foot and reuse something that shouldn't be!
>
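
(As an aside on the expression template point above, here is a minimal,
generic sketch of the idea - not big_number's actual code. The names
bignum, add_expr and accumulate_into are purely illustrative, and the
"bignum" is a plain long long stand-in so the sketch is self-contained.
Addition only builds a tree of reference pairs; nothing is computed
until assignment, at which point everything is accumulated directly
into the target, so a chain of + produces no temporaries.)

  #include <iostream>

  // Stand-in "bignum": wraps a plain long long so the sketch compiles on
  // its own; a real multiprecision type would hold limbs and allocate.
  struct bignum {
    long long v;
    bignum(long long x = 0) : v(x) {}
    void add_assign(const bignum& rhs) { v += rhs.v; }

    // Evaluate a whole expression tree into *this.
    // (Note: this naive version would mishandle a = a + b.)
    template <class Expr>
    bignum& operator=(const Expr& e) {
      v = 0;
      e.accumulate_into(*this);
      return *this;
    }
  };

  // Node representing l + r; nothing is computed until assignment.
  template <class L, class R>
  struct add_expr {
    const L& l;
    const R& r;
    add_expr(const L& l_, const R& r_) : l(l_), r(r_) {}
    void accumulate_into(bignum& target) const {
      accumulate(target, l);
      accumulate(target, r);
    }
   private:
    static void accumulate(bignum& t, const bignum& leaf) { t.add_assign(leaf); }
    template <class A, class B>
    static void accumulate(bignum& t, const add_expr<A, B>& e) { e.accumulate_into(t); }
  };

  inline add_expr<bignum, bignum> operator+(const bignum& a, const bignum& b) {
    return add_expr<bignum, bignum>(a, b);
  }
  template <class L, class R>
  add_expr<add_expr<L, R>, bignum> operator+(const add_expr<L, R>& e, const bignum& b) {
    return add_expr<add_expr<L, R>, bignum>(e, b);
  }

  int main() {
    bignum a, b(1), c(2), d(3), e(4), f(5);
    a = b + c + d + e + f;     // accumulated directly into a, no bignum temporaries
    std::cout << a.v << "\n";  // prints 15
    return 0;
  }
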
>>> * Is there a concept check program anywhere that I can plug my
>>> types into and verify they meet the libraries conceptual
>>> requirements?
>>
>> Not exactly. I considered writing one, but basically what you need
>> to do is make sure the traits for your type are defined and that it
>> is registered as the right concept type, then write a test that
>> exercises the traits through the free function of the same name that
>> expects the concept type. For example, the point concept has get and
>> set free functions that exercise the get and set traits. If those two
>> functions (and construct()) work correctly then your point is a
>> model of the concept. If you copy-paste the example code for
>> mapping a user defined type to the concept and replace my type with
>> yours and compile it, the test comes along for free, because the
>> example code already exercises the traits.
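
(Roughly, following the custom point example from the docs, the mapping
plus a test that exercises the traits through the free functions looks
like the sketch below. MyPoint is just an illustrative stand-in; check
the exact trait signatures against the library headers when you adapt it.)

  #include <cassert>
  #include <boost/polygon/polygon.hpp>

  // A user defined point that knows nothing about Boost.Polygon.
  struct MyPoint { int x; int y; };

  namespace boost { namespace polygon {
    // Register MyPoint as a model of point_concept.
    template <>
    struct geometry_concept<MyPoint> { typedef point_concept type; };

    // Read-access trait.
    template <>
    struct point_traits<MyPoint> {
      typedef int coordinate_type;
      static coordinate_type get(const MyPoint& p, orientation_2d orient) {
        return orient == HORIZONTAL ? p.x : p.y;
      }
    };

    // Write-access and construction trait.
    template <>
    struct point_mutable_traits<MyPoint> {
      typedef int coordinate_type;
      static void set(MyPoint& p, orientation_2d orient, int value) {
        if (orient == HORIZONTAL) p.x = value; else p.y = value;
      }
      static MyPoint construct(int x, int y) {
        MyPoint p; p.x = x; p.y = y; return p;
      }
    };
  }}

  int main() {
    using namespace boost::polygon;
    // Exercise the traits through the generic free functions.
    MyPoint p = construct<MyPoint>(1, 2);
    assert(get(p, HORIZONTAL) == 1);
    set(p, VERTICAL, 7);
    assert(get(p, VERTICAL) == 7);
    return 0;
  }
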
>>
>>> * Is there a good program for testing "big number" performance
>>> within your library (for either large integers or rationals)?
>>
>> If you look at gmp_override.hpp you will see how I abstract away
>> mpq_class behind a high_precision_type<> metafunction lookup. You
>> can replace mpq_class with anything you want and benchmark.
>> However, you will find that the performance of the high precision
>> type is almost irrelevant to the performance of the larger algorithm,
>> because lazy-exact arithmetic implies that it is very rarely, if
>> ever, used. What you can do instead is dig into my
>> polygon_arbitrary_formation.hpp in details and find the function
>> that computes the intersection point of two line segments using the
>> high precision type. You can then write a benchmark that generates
>> random line segments and intersects them. Once you have that
>> working you can swap in different high precision types by replacing
>> gmp_override.hpp with your own override header file that you include
>> just after polygon.hpp and before your own benchmark code. I have a
>> "fitness" test for intersection points so you can also use the
>> benchmark code as a stress test for the numerical data type chosen
>> to see if it would provide robust execution of my algorithm. You
>> will see the fitness test applied at the end of the lazy version of
>> line segment intersection just before it conditionally calls the
>> exact version.
>
> OK sounds like that might not be as simple as I'd hoped, will
> investigate as I get time,

Actually, it turns out it is simple if you use the trunk instead of the release version of Polygon. I have a directed_line_segment_concept in the trunk with a generic function:

  // set point to the intersection of segment and b
  template <typename point_type, typename segment_type, typename segment_type_2>
    typename enable_if< typename gtl_and_4<y_dls_intersect, typename is_mutable_point_concept<typename geometry_concept<point_type>::type>::type,
      typename is_directed_line_segment_concept<typename geometry_concept<segment_type>::type>::type,
      typename is_directed_line_segment_concept<typename geometry_concept<segment_type_2>::type>::type>::type,
                       bool>::type
  intersection(point_type& intersection, const segment_type& segment, const segment_type_2& b,
               bool projected = false, bool round_closest = false) {
    typedef polygon_arbitrary_formation<typename directed_line_segment_traits<segment_type>::coordinate_type> paf;
    typename paf::Point pt;
    typename paf::Point l, h, l2, h2;
    assign(l, low(segment));
    assign(h, high(segment));
    assign(l2, low(b));
    assign(h2, high(b));
    typename paf::half_edge he1(l, h), he2(l2, h2);
    typename paf::compute_intersection_pack pack;
    if(pack.compute_intersection(pt, he1, he2, projected, round_closest)) {
      assign(intersection, pt);
      return true;
    }
    return false;
  }

That computes the intersection point.
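
Calling it from user code looks roughly like the sketch below. point_data<> is the stock point model; the segment model name and its constructor are written from memory of the trunk, so treat them as assumptions and substitute whatever type you have registered as a directed_line_segment_concept:

  #include <boost/polygon/polygon.hpp>
  // plus the trunk header that defines the directed line segment concept/model

  int main() {
    using namespace boost::polygon;
    typedef point_data<int> point_t;
    // Assumed trunk model of directed_line_segment_concept.
    typedef directed_line_segment_data<int> segment_t;

    segment_t s1(point_t(0, 0), point_t(10, 10));
    segment_t s2(point_t(0, 10), point_t(10, 0));
    point_t pt;
    if (intersection(pt, s1, s2)) {
      // pt now holds the (snapped) intersection point, here (5, 5)
      return 0;
    }
    return 1;  // segments do not intersect
  }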

If you replace pack.compute_intersection with pack.compute_exact_intersection it will skip the lazy intersection computation and you will get maximal usage of your bignum type. You still need to #include gmp_override.hpp, or provide your own override header for other high precision types.

The only relevant code in gmp_override.hpp is:

  template <>
  struct high_precision_type<int> {
    typedef mpq_class type;
  };

  template <>
  int convert_high_precision_type<int>(const mpq_class& v) {
    mpz_class num = v.get_num();
    mpz_class den = v.get_den();
    num /= den;  // truncate the rational back down to an integer
    return num.get_si();
  }

Just specify your own bignum as the specialization of high_precision_type for coordinate type int, and provide your own conversion specialization to convert it back to int.
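
For example, an override header that plugs in boost::rational<long long> instead of mpq_class (purely as an illustration; put the specializations in the same namespace the originals live in, and #include the header right after polygon.hpp in place of gmp_override.hpp) would look roughly like this:

  #include <boost/rational.hpp>

  // my_override.hpp -- hypothetical replacement for gmp_override.hpp
  template <>
  struct high_precision_type<int> {
    typedef boost::rational<long long> type;
  };

  template <>
  int convert_high_precision_type<int>(const boost::rational<long long>& v) {
    // integer division of numerator by denominator, as in the mpq_class version
    return static_cast<int>(v.numerator() / v.denominator());
  }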

Here are the expressions I use for computing the intersection point:
        x_num = (x11 * dy1 * dx2 - x21 * dy2 * dx1 + y21 * dx1 * dx2 - y11 * dx1 * dx2);
        x_den = (dy1 * dx2 - dy2 * dx1);
        y_num = (y11 * dx1 * dy2 - y21 * dx2 * dy1 + x21 * dy1 * dy2 - x11 * dy1 * dy2);
        y_den = (dx1 * dy2 - dx2 * dy1);
        x = x_num / x_den;
        y = y_num / y_den;
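
To make those expressions concrete, here is a small standalone check (my own toy example, with plain long long standing in for the high precision type): segment one starts at (0,0) with delta (10,10), segment two at (0,10) with delta (10,-10), and the formulas recover the crossing point (5,5).

  #include <cassert>

  int main() {
    long long x11 = 0, y11 = 0, dx1 = 10, dy1 = 10;   // segment 1: (0,0) -> (10,10)
    long long x21 = 0, y21 = 10, dx2 = 10, dy2 = -10; // segment 2: (0,10) -> (10,0)

    long long x_num = x11 * dy1 * dx2 - x21 * dy2 * dx1 + y21 * dx1 * dx2 - y11 * dx1 * dx2;
    long long x_den = dy1 * dx2 - dy2 * dx1;
    long long y_num = y11 * dx1 * dy2 - y21 * dx2 * dy1 + x21 * dy1 * dy2 - x11 * dy1 * dy2;
    long long y_den = dx1 * dy2 - dx2 * dy1;

    assert(x_num / x_den == 5);  // x = 1000 / 200
    assert(y_num / y_den == 5);  // y = -1000 / -200
    return 0;
  }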

As you can see, I'm not caching my temporaries, despite being concerned about that sort of thing. The reason is that lazy exact evaluation pushes the use of the high precision type down to odds of one in a thousand or lower, so I don't really care about the cost of getting the right answer in the unlikely event that long double gave me the wrong answer; it just needs to be right. My concern about efficiency is more about Boost getting the best bignum library than about my own use case.

Regards,
Luke

