From: Fernando Cacciola (fernando_cacciola_at_[hidden])
Date: 2003-12-20 20:08:26
"Andy Little" <andy_at_[hidden]> wrote in message
> a.. What is your evaluation of the design?
> My evaluation is based on a very quick read of the documentation (well, if
> you call 6 hours quick... it's complicated!). So I hope you accept my
> comments in that light.
> I am very disappointed that I could not get the converter to compile "out
> of the box" on VC 7.1.
Yea, I know the feeling.
> I am not sure what the policy on code correctness at Boost is, but in my
> book that means I would have to give an automatic
> "reject" vote to any design that doesn't actually work. (I do understand
> that the problems are in dependent libs... but... c'mon guys... get it
> working for VC 7.1 please!) (As I am a "newbie" I'll just abstain... even
> though that's not allowed.)
I'll work on it pretty soon.
> With no easy/lazy ability to find out about the converter by my
> preferred method of "messing about" with it to see what it does, I was
> forced into the docs.
> My main interest in the converter is in promotion rather than conversion
> (yes, I want to just promote<A,B>::type for UDTs... dream on, buddy), and
> apparently it is possible to use the
> supertype mechanism for this for inbuilt types.
> I have had problems with the whole syntax. Why is a subtype or supertype
> direction dependent?
> I would assume one range either fits in another or it doesn't.
When you mix unsigned/signed types, the ranges overlap with a shift; that
is, neither range fits entirely into the other.
In this case, there is no real supertype at all.
If the definition of these traits were direction independent, you couldn't
tell this case apart. As it is, you can assert that a supertype is genuinely
super by checking that it comes out the same in both directions.
> Would not set-theory naming
> be better, i.e. union, intersection, subset, etc.?
An "additional" value like this would probably describe the situation
better.
However, super/subtypes classify S/T, so with respect to such a type
classification the names look OK to me (these are the usual type-theoretic
terms).
> The whole business, grammar etc. is complex enough... is this
> direction-dependent sub/supertype thing really the best way to achieve this?
It is to me... What _exactly_ would you use?
> These problems of comprehension aside, and despite the fact that it didn't
> compile, the docs are very interesting indeed.
> It will however take me a good few months (and a working implementation
> :-) ) to fully appreciate what is going on here.
You'll get a working implementation soon. :-)
> As well as conversion of
> inbuilt types there are hints of possible ways to convert to/from User
> Defined Types and some attempts at classification of User Defined Types...
> with a view to generic conversions from/to inbuilt/UDT.
> This I have dreamed about but looks as if someone is actually getting down
> to it in a serious way.
That's the idea... :-)
> Here too I have problems with things such as the density of a type. It's a
> nice idea but the implementation seems non-intuitive.
> I would prefer a more conceptual (less mathematical) approach to the
> business of UDT classification,
> e.g. (say) some numerics are trying to be analog values, while others are
> better at counting, etc.
I've been working on a numeric type categorization scheme to make the
classification system more general than it is today.
Since I never finished it, I left it out of the library.
The problems I found are related to the following:
As far as conversion goes, it is the possible loss of data that matters.
Loss of data can occur in two forms: loss of range or loss of precision.
AFAICT, attempting to detect loss of precision would require sophisticated
methods that would involve a significant runtime penalty (compared to a
simple range check); so this library does not try to do that (using an
error-tracking numeric type is probably the answer for applications that
need to detect loss of precision).
Therefore, this library is only concerned with possible loss of range and
its classification scheme should be aimed at that, not more.
However, most intuitive categorization systems actually fall short of this
because the resulting taxonomy simply does not reflect the necessary facts.
For example, whether a numeric type represents a continuous (analog) or
discrete (integral) set makes no difference as far as relative ranges are
concerned.
> That said, this is going to be a very complex business to get right...
> On bounds, which did compile... There are some oddities, for instance
> bounds<>::smallest returns 0 for an int but non-zero for a float.
> I would have expected either 1 for an int, or 0 for both (a bit redundant).
> The bounds class claims to try to sort out the differences between the
> numeric_limits for real and int, but this doesn't seem a good way to go
> about it.
Yes, this was pointed out by Guillaume too.
The result should be 1 for integers.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk