
Subject: Re: [Boost-users] [boost] [review][constrained_value] Review of Constrained Value Library begins today
From: Jesse Perla (jesseperla_at_[hidden])
Date: 2008-12-09 09:21:57


>This too is already solved if any predicate is acceptable: just use a
>predicate functor that forwards via a pointer or reference to another
>global predicate. Or (somewhat less efficient but easier) use the
>dynamic boost::function predicate that is already included and
>boost::bind/lambda/phoenix to a method on this global object.
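
For concreteness, the forwarding-predicate idea quoted above might look like
the following sketch; global_check and forwarding_predicate are names I made
up, and std::function stands in for boost::function:

    #include <functional>
    #include <iostream>

    // Hypothetical global predicate that can be swapped at runtime.
    std::function<bool(double)> global_check = [](double x) { return x >= 0.0; };

    // A stateless predicate functor that forwards to the global one, so
    // every constrained object picks up changes to global_check.
    struct forwarding_predicate {
        bool operator()(double x) const { return global_check(x); }
    };

    int main() {
        forwarding_predicate pred;
        std::cout << pred(1.0) << '\n';                   // 1: passes current rule
        global_check = [](double x) { return x > 10.0; };
        std::cout << pred(1.0) << '\n';                   // 0: same functor, new rule
    }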

Gordon:
I absolutely agree that if this library were purely for bounds testing,
managing invariants, etc., then predicates would be sufficient. But for
high-performance numerics, that is not how it would be used. First of all,
you would turn off automatic testing at runtime. But one of the key values
of this library, in both release and debug builds, for those examples is to
get information about the bounds and intervals from the type itself for
generic algorithms. A predicate, as a function, can test an assignment, but
it can't give back structured data for these algorithms. If you can show me
how the testing predicate can be easily queried within this framework for
the type metadata itself (e.g., a list of intervals), then I will shut up.
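
To make this concrete, here is a rough sketch, not the library's actual
interface (bounded, lower(), and upper() are names I invented), of what I
mean by a generic algorithm reading the interval off the type itself, which
no opaque bool(double) predicate can provide:

    #include <iostream>

    // Hypothetical bounded type that carries its interval as static metadata.
    template <int Lo, int Hi>
    struct bounded {
        static constexpr double lower() { return Lo; }
        static constexpr double upper() { return Hi; }
        double value;
    };

    // A generic algorithm can query the interval from the type alone,
    // e.g. to pick a midpoint as the starting guess for a search.
    template <class Bounded>
    double midpoint_guess() {
        return (Bounded::lower() + Bounded::upper()) / 2.0;
    }

    int main() {
        std::cout << midpoint_guess< bounded<0, 10> >() << '\n';   // prints 5
    }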

I really think people need to think through this alternative use case of
this powerful library.

And Robert:
There are a lot of discussions on the boost mailing list (which I can't post
on) about NaN and the other numeric_limits values. In virtually every case in
high-performance computing, you would turn off predicate testing on
assignment for release builds because it is too expensive. Then you would,
judiciously, test inclusion of the value. So the model here is that if
bounded_float is a subset of float, a value of this class is always in the
float superset, but I can test inclusion within the other subset that I have
defined. Here, predicates work fine as long as there is a manual way to test
set inclusion. And numeric libraries frequently use NaN, infinity, etc. as
signals for algorithms, so these values should be associated with the
superset in the model as a default approach. So what I am saying is that
people should be given the option (a preprocessor directive would be
sufficient) to have the value always within the bounds vs. treating this as
a "set" whose inclusion they can test at their own leisure. This is a
pragmatic approach to ensure consistency of this library with existing
generic numerical algorithms, which is just as important for the numerics as
solving the floating-point problem.
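
A sketch of the switch I have in mind, with made-up names (CHECK_ON_ASSIGN,
within_bounds); this is not the library's actual configuration mechanism:

    #include <cassert>

    // Define this for debug builds; leave it out for high-performance
    // release builds where the assignment check is too expensive.
    // #define CHECK_ON_ASSIGN

    struct bounded_float {
        double lo, hi, value;

        bounded_float(double lo_, double hi_, double v)
            : lo(lo_), hi(hi_), value(v) {
    #ifdef CHECK_ON_ASSIGN
            assert(within_bounds());   // checked model: value stays in [lo, hi]
    #endif
        }

        // "Set" model: the value may be anywhere in the float superset
        // (including NaN or infinity used as algorithm signals); inclusion
        // in the subset is tested manually and judiciously.
        bool within_bounds() const { return value >= lo && value <= hi; }
    };

    int main() {
        bounded_float x(0.0, 1.0, 2.0);     // no check in this configuration
        return x.within_bounds() ? 0 : 1;   // manual inclusion test
    }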

Also, I am not familiar with the internals of numeric_limits, but I doubt it
is written to be virtual/subclassed, for efficiency reasons. What this means
is that you would probably have to write your own traits class if we don't
make this automatic, and you can't pick and choose which existing traits to
override. And these numeric traits are impossible for normal developers to
write since they are platform-specific.
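
For what it's worth, a specialization can at least forward to the underlying
float's limits instead of rewriting the platform-specific values by hand; a
rough sketch for a hypothetical bounded_float wrapper (note the inherited
functions return double, not bounded_float, so generic code may still need
care):

    #include <limits>

    struct bounded_float { double value; };   // hypothetical wrapper

    namespace std {
        // Inherit the platform-specific members (epsilon(), quiet_NaN(),
        // infinity(), is_iec559, ...) from the underlying double's limits
        // rather than writing them by hand.
        template <>
        struct numeric_limits<bounded_float> : numeric_limits<double> {};
    }

    int main() {
        // Queries now resolve via the double specialization.
        return std::numeric_limits<bounded_float>::is_iec559 ? 0 : 1;
    }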

Last, and this is not something I know enough about because I am not a
numerical analyst, but the epsilon method discussed on the mailing list may
have a flaw for numeric work. What we are effectively doing here is creating
an epsilon neighborhood for testing equality. As we know from set
theory/topology, this does not yield a strict weak ordering because the
induced equality fails transitivity. Now, one might say that this is
irrelevant because it is a reasonable approximation for pragmatic reasons,
but you run into a problem in numerics. The reason is that many numerical
algorithms are already testing within an epsilon neighborhood for
convergence of algorithms, derivatives, etc. So we need to be REALLY careful
that this epsilon ball is well within those balls, or we may seriously mess
up algorithms that depend on a comparison operator. This may or may not be a
problem, but I would pose it on the boost list to ensure that people are
thinking about it. One solution may be to rely on the
numeric_limits::epsilon trait, which would always be available, would be
platform-specific, and, I believe, is the smallest possible neighborhood of
a point. Whether this is the maximum amount of truncation possible, I do not
know.
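
A tiny demonstration of the transitivity failure, with a deliberately large
eps of 0.1 for readability:

    #include <iostream>

    // Epsilon-neighborhood "equality": true when |a - b| <= eps.
    bool approx_equal(double a, double b, double eps) {
        double d = a - b;
        return (d < 0 ? -d : d) <= eps;
    }

    int main() {
        const double eps = 0.1;
        double a = 0.0, b = 0.09, c = 0.18;
        // a ~ b and b ~ c, but not a ~ c: the relation is not transitive,
        // so it cannot serve as the equivalence of a strict weak ordering.
        std::cout << approx_equal(a, b, eps) << ' '
                  << approx_equal(b, c, eps) << ' '
                  << approx_equal(a, c, eps) << '\n';   // prints: 1 1 0
    }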

-Jesse


