Boost :

From: Andrey Semashev (andysem_at_[hidden])
Date: 2005-09-21 15:32:01

Rob Stewart wrote:
>> I think we are talking about the same thing but from different
>> angles, consider this simple case:
>>
>> typedef constrained_value<0, 10, policy> MyCV;
>> MyCV a = 6;
>> MyCV b = 8;
>> MyCV c = a + b; // Should fail
>>
>> If the user wants "c" to be able to hold the result of "a + b" he
>> should define a new type that would accommodate it.
>
>
> The real difference then, arises with how you use the result of a
> computation. If you pass it to a function template, for example,
> our approaches would result in different instantiations. My
> approach would retain the original range checking (0-10), whereas
> Michael's approach would have a new range (0-20). That's where I
> question the validity of his approach.

From my point of view, passing the result of the operation to a template
function is the only difference (please correct me if I'm forgetting
something). And since we are talking about runtime operations (no
metaprogramming at this point), such an issue can easily be solved with an
additional template function, "constrain" for example, that would ensure
that the result is in the specified range:

template< typename T >
void foo (T const& cv);

typedef constrained_value<0, 10, policy> MyCV;
MyCV a = 6;
MyCV b = 8;
MyCV c = a + b; // Should fail

typedef constrained_value<0, 20, policy> MyCV2;
MyCV2 d = a + b; // Should be ok
foo(a + b); // Shall instantiate
// foo< constrained_value<0, 20, policy> >()
foo(constrain< 0, 10 >(a + b)); // Should fail, otherwise it would
// instantiate foo< constrained_value<0, 10, policy> >()

To my mind such an approach is quite sufficient, since (IMHO) in most cases
the extended range would be the more natural result.