From: Maurizio Vitale (mav_at_[hidden])
Date: 2007-04-28 18:17:14


"Phil Endecott" <spam_from_boost_dev_at_[hidden]> writes:

>> I like the ability to specify the number of bits either side of the decimal
>> point, but this does get in the way for doing the math functions --- you need
>> a table with magic numbers in at the appropriate precision for exp and
>> sin/cos.

For my library I'm specifying the number of bits in the significand and the position
of the decimal point. This allows for some interesting scenarios (needed for my
SystemC application). If you call W the width of the significand and S (for scale)
the position of the decimal point, you get:
    0 <= S <= W : typical fixed-point numbers (integers when S=0) with some
                  number of bits in the integer part (possibly 0) and some
                  number of bits in the fractional part (again, possibly 0)
    S > W : the decimal point is at the left of the significand; the missing
                  bits are the zero extension (for unsigned) or sign extension
                  (for signed numbers) of the significand
    S < 0 : the decimal point is to the right of the significand; the missing
                  bits are 0s.
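
To make the interpretation concrete, here is a minimal sketch (hypothetical names,
not my actual interface): whatever the relation between S and W, the value is
simply significand * 2^-S, with W only bounding the range of the stored significand.

    #include <cmath>
    #include <cstdio>

    // value of a W-bit significand 'raw' with the point at position S
    template<int W, int S>
    double value_of(long long raw)     // raw is assumed to fit in W bits (sign-extended)
    {
      return std::ldexp(static_cast<double>(raw), -S);   // raw * 2^-S
    }

    int main()
    {
      std::printf("%g\n", value_of<8, 3>(20));   // 0 <= S <= W : 20 * 2^-3 = 2.5
      std::printf("%g\n", value_of<4, 6>(-1));   // S > W       : -1 * 2^-6 = -0.015625
      std::printf("%g\n", value_of<8, -4>(3));   // S < 0       : 3 * 2^4   = 48
    }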

A similar effect could be obtained by allowing the number of integer and fractional
bits to be negative, but I feel that in that case many of the checks and
computations you have to perform in order to do arithmetic are more complex.
It wouldn't matter much (still linear stuff), but in my application there are cases where
these computations happen at run-time, so the simpler the better.

A user layer can easily adapt one representation into another (and indeed, mine is not
exactly the SystemC one).
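
For instance, a wrapper exposing the usual integer-bits/fraction-bits parameterization
on top of the W/S one is just a renaming (again hypothetical, not actual library code):

    // IWL integer bits, FWL fractional bits, mapped onto the W/S scheme above
    template<int IWL, int FWL>
    struct iwl_fwl
    {
      static const int W = IWL + FWL;   // width of the significand
      static const int S = FWL;         // position of the point
      // a real user layer would typedef the underlying W/S fixed-point type here
    };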

> John Femiani wrote:
>> I wonder if either of you are
>> willing to consider non power-of-two scaled values. For instance images
>> are commonly stored using 8bit integers, but with the assumption that
>> 255 is actually 'one'.
>
> That's an interesting problem, and one that I have previously had to
> deal with myself. However, I'm really keen to focus on the core idea
> of a fixed-point type.
>
> I am a bit concerned that the nature of the Boost process could lead to
> a sort of "design by committee". If I were to resist all suggestions
> for "feature creep", would I eventually find my offering rejected for
> "lack of features"? Would only a "bloated" library be accepted?

>
> When I was integrating my safe_int<> type with fixed<>, I encountered
> an interesting problem. Recall that fixed has three template
> parameters giving the number of bits before and after the point and the
> implementation type. The implementation type defaults to an integer
> provided by Boost.Integer, however you can provide an alternative such
> as safe_int. My problem is what type should be used in the result of a
> function such as:
>
> template<int XWB, int XFB, typename XIMPL, int YWB, int YFB, typename YIMPL>
> fixed<XWB+YWB, XFB+YFB, ????> operator*(fixed<XWB,XFB,XIMPL> x, fixed<YWB,YFB,YIMPL>);
>
> Putting aside the question of what to do when X and Y have different
> implementation types (e.g. one is overflow checking and one is not),
> what I need in the return type is something that is "like XIMPL (or
> YIMPL) but with n bits". For example, if XIMPL is safe_int<XWB+XFB>,
> the return type should be safe_int<XWB+YWB+XFB+YFB>; if it's
> boost::int_t<XWB+XFB>::least, the return type should be
> boost::int_t<XWB+YWB+XFB+YFB>. Any idea how to do that?

I'm doing things a bit differently, so the rest of this may or may not be of use.

I completely separate the functionality of storing values from the overflow and
quantization behaviour, so all I really have to do in arithmetic operations is
to find an appropriate container for the value.

Containers are classified into families. That's user definable, but for SystemC I'll
have: things that fit in at most 64 bits, things that are larger than 64 bits
(but still of fixed size) and things that have infinite precision. The latter two will
be implemented with the GNU multiprecision library (GMP).

When doing arithmetic, what happens is user definable. While the natural size of
a multiplication would be W1+W2 : S1+S2, the user may decide that some families lock in.
In that case, if for instance you are in a family that allows up to 64 bits, everything
goes as you described and you get the natural size until you hit the 64-bit wall.
After that you invoke the overflow handling. It is user definable whether overflow
is managed only upon creation/assignment or for every subexpression.
This is again required for implementing SystemC,
but it is also possible to allow for family promotion.
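
As a rough compile-time illustration (hypothetical, ignoring the run-time case and the
choice of overflow handler), sizing a product and clamping it to a 64-bit family could
look like this:

    // natural size of a product is W1+W2 : S1+S2; a family locked at 64 bits
    // clamps the width, and overflow handling has to take over from there
    template<int W1, int S1, int W2, int S2, int FamilyMax = 64>
    struct product_size
    {
      static const int  natural_width = W1 + W2;
      static const int  natural_scale = S1 + S2;
      static const bool fits          = natural_width <= FamilyMax;
      static const int  width         = fits ? natural_width : FamilyMax;  // clamped: overflow policy applies
      static const int  scale         = natural_scale;
    };

For example, product_size<40, 10, 30, 5>::fits is false in the 64-bit family, so the
result either promotes to a GMP-backed family or goes through the overflow handler,
depending on what the user selected.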

I have a metafunction that computes the concrete type used for representing values.
Arguments are:
          Size: the desired size in bits
          Representation: the logical layout (signed/unsigned; see the code at the end)
          Families: a description of classes of types to be used
          Family: the desired family, defaulting to unspecified

Families is an (mpl) vector of family descriptions, each having three components:
         - the maximum number of bits (0 means unbounded)
         - the raw type to be used (this can be a lambda expression, in which case it is
           called with Size and Representation, basically signed or unsigned; this is
           where it blends with boost::(u)int_t)
         - whether the representation allows for infinite precision or not.

I include the relevant portion of my code that does this at the end.

It doesn't really work for you because I assume that families are non-overlapping (kind
of; I have the exception of 0 for the unbounded family) and that the first family found
is the one you want to use to represent your number.
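
As a usage sketch (relying on the definitions in the code at the end, and assuming
when_unspecified simply picks the family found when Family is left unspecified):

    // 24 bits, signed: the 'limited' family is the first that can hold them,
    // so the raw type comes from boost::int_t<24>::fast
    typedef concrete_type<mpl::int_<24>, two_complement_c,
                          default_families, unspecified>::type small_signed;

    // 200 bits only fit the GMP-backed family, so the raw type is mpz_class
    typedef concrete_type<mpl::int_<200>, magnitude_c,
                          default_families, unspecified>::type big_unsigned;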

I'm still thinking about how to address the second part of what you
achieve by allowing normal ints and safe ints together. At present
I'm leaning towards a receiver-decides policy, combined with a user-
selectable policy of overflow/quantization on assignment only or also
on subexpressions. To clarify, if O_1..O_n are variables that would
throw on overflow when something too large is assigned to them and
I_1..I_n are variables that would behave like C++ builtins and ignore
overflow, I'm thinking of having:

    I_1 = I_0 + I_1 + I_2 - never throws
    O_1 = I_0 + I_1       - throws on assignment if I_0+I_1 is too large for O_1
    O_1 = O_2 + O_2       - throws on + (because O_1 is the target) if:
                              - throw on subexpression is selected
                              - either no family exists to hold the sum or the user
                                has disabled family promotion
                            throws on = as in the previous case if you ever get there
    I_1 = O_1 + I_1       - this is probably more debatable, but I'm presently
                            thinking it should never throw.
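
A stripped-down sketch of the receiver-decides idea (hypothetical types, overflow
checked on assignment only, with plain integers standing in for the fixed-point
values):

    #include <limits>
    #include <stdexcept>

    struct ignore_overflow {};      // behaves like a C++ builtin: the I_k above
    struct throw_on_overflow {};    // throws when a value does not fit: the O_k above

    template<typename Policy>
    class var
    {
    public:
      explicit var(int v = 0) : value_(v) {}

      // the receiver's policy decides what an out-of-range assignment does
      var& operator=(long long v)
      {
        value_ = static_cast<int>(check(v, Policy()));
        return *this;
      }
      operator long long() const { return value_; }

    private:
      static long long check(long long v, ignore_overflow) { return v; }   // never throws
      static long long check(long long v, throw_on_overflow)
      {
        if (v > std::numeric_limits<int>::max() || v < std::numeric_limits<int>::min())
          throw std::overflow_error("too large for the target");
        return v;
      }

      int value_;
    };

    // var<ignore_overflow>   i0, i1, i2;  i1 = i0 + i1 + i2;  // never throws
    // var<throw_on_overflow> o1;          o1 = i0 + i1;       // throws if the sum does not fit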

I'd appreciate it if people would contribute to the discussion by stating what they'd
like for combining overflow and quantization handling.

Regards,

        Maurizio

---------------------------------------------------
template<int N, typename raw_type, bool bounded_>
struct family {
  typedef mpl::int_<N> bits;
  typedef raw_type type;
  typedef mpl::bool_<bounded_> bounded;
};

template<typename Family> struct bits { typedef typename Family::bits type; };
template<typename Family> struct raw_type { typedef typename Family::type type; };

//using mpl::placeholders::_;
using mpl::placeholders::_1;
using mpl::placeholders::_2;

// Not really infinity, but good enough for a number of bits which is larger than anybody will ever
// need. Hey, it is even more than what fits in 640KB. Be happy. Peace, ok?
// FIX-ME why can't we use const_max? 1L<<16 is not even 640KB in size.
//static const long infinity = boost::integer_traits<long>::const_max;
static const long infinity = 1L << 16;

// This is used for the limited precision family, and we assume that boost::int_t and boost::uint_t cover
// sizes up to 64 bits. This is only true on 64 bit machines, as (unsigned) long long is not supported yet.
template<typename Size, typename Representation> struct builtin_numeric { typedef typename boost::int_t<Size::value>::fast type; };
template<typename Size> struct builtin_numeric<Size, magnitude_c> { typedef typename boost::uint_t<Size::value>::fast type; };

// FIX-ME we actually want to say mpz_class directly when defining families, but I haven't found a way
// for detecting whether something is a binary metafunction or a plain type. Till then close your eyes...
template<typename Size, typename Representation> struct gmp_number { typedef mpz_class type; };

struct unbounded : family<0, gmp_number<_1,_2>, false> {};
struct limited : family<64, builtin_numeric<_1,_2>, true> {};
struct unlimited : family<infinity, gmp_number<_1,_2>, true> {};

//typedef mpl::vector2<unbounded, limited/*, unlimited*/> default_families;
typedef mpl::vector3<unbounded, limited, unlimited> default_families;

//-- Compute the concrete type we use for representing values.
//
// SIZE is the number of bits in the significand. The sign bit is included in SIZE.
//
// REPRESENTATION is the _logical_ layout of numbers. This includes:
// - magnitude_c, for unsigned numbers
// - two_complement_c, for two's complement signed numbers
// we also define, but not support for now
// - one_complement_c, for ones complement signed numbers
// - sign_magnitude_c, for signed numbers represented as sign and magnitude.
// we do not define biased numbers. It would complicate things (it would be the
// first representation requiring a parameter, although we could do without, as
// frequently the bias is derived from the size). If there's a desperate need for these
// we will reconsider.
//
// FAMILIES is a type describing the allowed families among which to choose.
//
// FAMILY is a tag for specifying the desired family. When left unspecified, the first
// family that allows for SIZE bits with REPRESENTATION is selected.
// If the user explicitely requests a family, it is required that this family is
// actually capable of holding SIZE bits with REPRESENTATION.
//
// this metafunction returns the concrete type as its primary result, available as concrete_type::type
// In addition it defines:
// family, the family returned
// bits, the maximum number of bits in the significand, as an mpl::int_<>

template<typename Size, typename Representation, typename Families, typename Family>
struct concrete_type {
  typedef typename mpl::end<Families>::type not_found;
  typedef typename mpl::find_if<Families, mpl::less_equal<Size, bits<_1> > >::type found;

  BOOST_MPL_ASSERT_NOT ((boost::is_same<found, not_found>));

  typedef typename when_unspecified<Family, typename mpl::deref<found> >::type::type family;

  // FIX-ME here we should find a way for detecting whether raw_type returns a binary metafunction or
  // something else. In the first case you want to apply the metafunction to SIZE and REPRESENTATION,
  // otherwise you want to return the type unmodified.
  // FIX-ME it might be useful to have another special case: if type is a pair, consider the first element
  // to define the type for unsigned numbers and the second element of the pair for signed numbers.
  typedef typename mpl::apply<typename raw_type<family>::type, Size, Representation>::type type;
};

template<typename Representation, typename Families, typename Family>
struct concrete_type<run_time, Representation, Families, Family> {
  BOOST_MPL_ASSERT_NOT ((boost::is_same<Family,unspecified>));
  // FIX-ME here we should find a way for detecting whether raw_type returns a binary metafunction or
  // something else. In the first case you want to apply the metafunction to SIZE and REPRESENTATION,
  // otherwise you want to return the type unmodified.
  // FIX-ME it might be useful to have another special case: if type is a pair, consider the first element
  // to define the type for unsigned numbers and the second element of the pair for signed numbers.
  typedef typename mpl::apply<typename raw_type<Family>::type, typename bits<Family>::type, Representation>::type type;
};

