From: Paul Mensonides (pmenso57_at_[hidden])
Date: 2002-11-19 16:44:23
----- Original Message -----
From: "Douglas Gregor" <gregod_at_[hidden]>
> On Wednesday 13 November 2002 04:50 am, Aleksey Gurtovoy wrote:
> > Paul Mensonides wrote:
> > > This is unacceptable. What really needs to happen is that the
> > > rules regarding declaration instantiation of template class
> > > members need to be merged with type-deduction failure. I.e.
> > > If the result is semantically invalid, the function is removed
> > > from the overload set (excepting only a few things such as
> > > applying 'const' to a type that is already 'const' and possibly
> > > the reference to reference issue). Period. This is the only
> > > safe way to go, and conveniently, would allow for all sorts of
> > > traits tests that currently rely on compiler extensions.
> > Agreed, that's basically what is wanted. The challenge is to execute
> > something along the lines of Doug's plan successfully ;).
> > Aleksey
> Oops, I dropped the ball on this one.
> Last week I got the opportunity to ask Daveed Vandevoorde about this.
> Specifically, I mentioned the gap in the type deduction failure rules
> introduced by sizeof and asked (a) whether the committee knew about this when
> they drafted the clause and (b) what type of resolution we could expect if we
> asked the committee to clarify this clause with respect to sizeof.
Clever type traits are one thing, but they are a side-effect of the solution
to a more general problem. Specifically, one template function declaration
can permanently break an overload set--even if it is not selected. The
solution to this is obvious: if type deduction yields nonsense, it should
fail. It just so happens that we could exploit that solution in significant
ways with traits and expression validity checking.
> Daveed said that the intent of the clause is to ensure that the compiler is
> never forced to create a type that is not well-formed. The committee did
> understand the problem sizeof caused when they drafted this clause, and
> they went through various forms of specification: listing all cases that
> could succeed, creating a blanket statement that says that deduction fails if
> there is any failure, or listing all cases that could fail. The impression I
> got was that for any case that isn't well-formed and doesn't fall into one of
> the categories for type-deduction failure, the program is ill-formed and the
> compiler should emit a diagnostic.
As it is now, yes. One single template function declaration can break an
entire overload set permanently. As I said before, this is unacceptable.
My primary interest here is to get rid of that hole in the system. My
secondary interest is that the range of available traits-like constructs
would be greatly expanded.
> (Aside: we should file a bug report with
> EDG with Paul's example, because it falls into this case and should emit an
> error instead of silently choosing the "wrong" one). We might still want to
> submit a DR to get this clarified for sure.
Before 4.3, sometimes Comeau did this right, sometimes wrong, and sometimes
ICE'd (crashed with an internal compiler error). It looks like they "fixed"
this by just sealing off the whole area to prevent the ICE.
> What's this mean for us? Well, I think it kills the idea that a resolution to
> a DR will give us the ability to check the compilability of any expression.
What exactly are the reasons for specifying a list of possible failures? I
can't imagine any reasoning that would validate such a list as the only way
the type deduction can fail. Because of that list, what is supposedly a
two-state type deduction with a pass-or-fail result becomes a three-state
type deduction with a pass-fail-or-error result--where a compiler has to go
out of its way to catch the few errors in that list and "fail" type
deduction instead of error. This just over-complicates the mechanism (and
prevents us from exploiting it).
> This brings me to a comment Daveed made during his talk (paraphrased): when
> creating extensions for C++, don't forget what you actually want. We're
> saying that we'd like to fix the type-deduction failure rules to act in a
> certain way that makes our expression-checking hack work. We can't do this in
> the old C++ (and no DR is going to change that), so we should step back and
> ask "what exactly do we want to be able to do?"
From my perspective, this is a secondary concern. I consider this a major
hole in the overloading/template mechanism. Expression checking is only an
extremely nice side-effect of the *only* worthwhile solution to this problem.
> I _think_ we want to be able to ask "what happens when we try to compile this
> expression?". Then I think the result should be one of:
> - ambiguous (the expression contains at least one ambiguity)
> - access control violation (the expression will compile, but there is an
> access violation that would cause a diagnostic)
> - ill-formed (the expression is ill-formed, but not because of an ambiguity
> or an access control violation)
> - well-formed (the expression will compile. this does not guarantee that the
> definitions of functions called in the expression can be instantiated, but
> only guarantees that the declarations can be instantiated)
This kind of fine-grained control would be nice, and it is true that no
"expression-checking-hack" will yield this kind of detail. However, that
isn't the fundamental problem, and I don't think that it really matters
*why* something is invalid in the general sense. Also, the "sizeof" trick
is open-ended. Any possible future language primitives will only give us a
few of the things we need, and we'll still have to do the rest anyway.
Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk