From: Brian McNamara (lorgon_at_[hidden])
Date: 2003-11-14 11:54:40


On Fri, Nov 14, 2003 at 08:15:03AM -0800, Mat Marcus wrote:
> Named conformance worries me, though I haven't programmed much in
> Haskell yet. One question I have is: who writes the type classes (or
> whatever you want to call these things)? I can't imagine it is the
> library author since the generic (meta-)algorithms are meant to be
> used with an unbounded set of types(, metafunctions, etc.). But a

The library author writes them. Current library authors already "define
the concepts"; the only difference now is that they would also reify
them into "type classes" (or whatever we'd call them).

> library's usability would seem to suffer if the user has to manually
> assert conformance to various concepts before calling generic
> algorithms. A hybrid approach where the library author provides the
> named conformance for "common types" doesn't seem to help. I want the
> generic algorithm to be written in such a way that the compiler can
> check conformance for me. Are you really worried about accidental
> structural conformance? How do things work in practice in Haskell?

To answer the last two questions:

  (1) Somewhat. Accidental conformance is more of a problem "in theory"
      than "in practice", I think. However, it's analogous to the
      argument for static typing (dynamic typers will tell you that
      type errors are more of a problem "in theory" than "in
      practice"). See below.

  (2) All conformance must be declared. Everything, even a builtin like
      "int", must be declared to be "EqualityComparable" and
      "LessThanComparable". It is not as bad as it sounds.

I suppose one of the main motivations for using any conformance-
detecting approach is to get better compiler error messages. We all
know how bad compiler error messages can be when you instantiate a
template with a type which doesn't match the right concept. Using a
conformance-detecting approach, this error can be automatically
detected "at the entrance" to the library (rather than deep within its
implementation) and the framework can help dispatch a "custom error
message" if this is desired (or else just use SFINAE to make the
library function "invisible" to non-conforming types).

The advantage to named conformance over structural conformance with
respect to detection is that detecting named conformance is easy.
OTOH, detecting structural conformance was largely impossible prior to
the discovery of SFINAE, and I believe it is still quite difficult to do
well (see, e.g., Brock's recent comments, or much of the discussion in
"Static Interfaces in C++", esp. stuff like "HeroicProxy").

The pre-existence of structural conformance as the C++ norm for concept
checking creates a psychological barrier to the acceptance of named
conformance. I do believe that, on the whole, the "named" approach is
the best way to go. It's actually totally analogous to "static typing"
versus "dynamic typing". (Structural conformance is like a dynamic
typecheck: things don't "blow up" until the implementation tries to use
a T in a way that T doesn't actually support. Named conformance is
like a static typecheck: the conformance is checked at the template
interface (the instantiation point), and possibly rejected with regard
to the concept. The difference between "concept checking" and "type
checking" (which makes this an "analogy" rather than an "equality") is
that in concept checking, everything happens at compile time.)

In my experience, trying to convince C++ template programmers that named
conformance is good for them is like trying to convince LISP programmers
that static typing is good for them. :) In the end, few people can see
"both sides", and you end up with two separate camps who can't co-exist
well within the confines of one arena/language/something.

-- 
-Brian McNamara (lorgon_at_[hidden])

Boost list run by bdawes at acm.org, gregod at cs.rpi.edu, cpdaniel at pacbell.net, john at johnmaddock.co.uk