
From: Brian McNamara (lorgon_at_[hidden])
Date: 2003-11-14 14:29:59

On Fri, Nov 14, 2003 at 09:39:40AM -0800, Mat Marcus wrote:
> --On Friday, November 14, 2003 11:54 AM -0500 Brian McNamara wrote:
> >On Fri, Nov 14, 2003 at 08:15:03AM -0800, Mat Marcus wrote:
> >>Haskell yet. One question I have is: who writes the type classes (or
> >>whatever you want to call these things)? I can't imagine it is the
> >
> >The library author writes them. Current library authors already
> >"define the concepts"; the only difference now is that they would
> >also reify them into "type classes" (or whatever we'd call them).
> Oops, I misstated my question. I understand that the type classes
> correspond to the concepts, and it is clear that some library author
> must write them. Please replace the question in the paragraph above
> with: whose job is it to manually declare conformance to a given
> concept? I see that you have addressed this below somewhat, so I will
> move on.

I'm not sure if I understand your question, so I'm going to use a
pseudo-Haskell example (and its terminology) to try to nail things down.

In Haskell, you might define a "type class" like

   class Clonable t where -- (1)
      clone :: t -> t -- (2)

and then you declare "instances" like

   instance Clonable Foo where -- (3)
      clone aFoo = {- implementation... -} -- (4)

I think what you are asking is: "whose responsibility is it to write
lines (3) and (4) -- the author of Clonable or the author of Foo?"

This question does not have a general answer; in "real life" it depends
on the timeline/interaction/co-evolution of the two. Here are a few
possible scenarios:

 - "Clonable comes before Foo"

Imagine that Clonable is a type class in the "standard library". Today
you write Foo. If Foo is a Clonable, then you (the author of Foo) also
write the instance declaration.

 - "Foo comes before Clonable"

Take the specific case where Foo is "Int", for example. The author of
Clonable, who "invents" the type class some time after a number of
instances have already been invented, should do his best to write
instance declarations for existing types (like Int) which could easily
conform. (As more time passes, the "gaps" will get filled in.)

 - "Clonable and Foo were developed separately"

Suppose they come from two different vendor libraries, and today you
decide to use those two libraries together. You, the client of these
libraries, will have to write the instance declaration. In the
implementation you may have to "smooth over" the interface: we can
imagine that perhaps Foo natively provides a "duplicate" method, so you
have to write

   instance Clonable Foo where
      clone aFoo = aFoo.duplicate() // using C++ syntax to illustrate the idea

Etc. There are probably other scenarios, too.

> > (2) All conformance must be declared. Everything, even stuff like
> > "int", is an "EqualityComparable" and a "LessThanComparable".
> >It is not as bad as it sounds.
> Not sure about that. A user can call the plain old function foo(int)
> without making any additional declarations. But with named conformance
> it would seem that the user of a generic foo(T) may incur an
> additional responsibility to manually assert that int conforms to
> whatever concept T must model.

Unless the "user" is one of

 - the author of T

 - the author of foo()

 - the first client to ever try to use two separate libraries (one which
   defines foo(), and the other which defines T) together

he will not, because someone "upstream" will have already done it for him.

It sounds (to me from your wording) like you are concerned with the
specific case of being the author of foo(). You are presumably already
"doing the work" of thinking about what concepts T must model to be a
successful argument to foo(). In most cases, this will (hopefully) just
amount to changing C++ code like

   // Concept requirements are implicit, suggested by names
   template <class ForwardIterator>
   void foo( ForwardIterator i ) { ... }

to code like

   // Concept requirements are reified via "isa" framework
   template <class I>
   typename enable_if<isa<I,ForwardIteratorConcept>,void>::type
   foo( I i ) { ... }

or something.

> In practice this would seem to lead to users avoiding generic functions,
> just as the need to write helper functions to use STL today is one
> barrier to acceptance (which is why we like lambda).

Indeed, "instance declarations" must be as succinct as possible. The
framework I'm envisioning succeeds in the cases where the concept
precedes the model. In those cases, the author of the model declares
conformance to the concept using inheritance:

   // To say that "list_iterator models ForwardIteratorConcept",
   // write code along the lines of
   class list_iterator : public forward_iterator_tag { ... };

In the cases where either the model precedes the concept or the model
is a 3rd-party or built-in type, you have to use specialization, which
is unfortunately more verbose:

   // To say that "int* models ForwardIteratorConcept",
   // write code along the lines of
   template <> struct isa<int*,ForwardIteratorConcept>
   { static const bool value = true; };

I've chosen iterator categories as my example because I think they are
one place where C++ already uses a "named conformance" framework.

> I should have given a little more background. We spent a some time in
> Kona going over early drafts of papers proposing the addition of
> concepts to C++. These papers should appear as part of the post-Kona
> mailing any day now at the bottom of
> <>. Of course

I shall await them with bated breath. :)

> Yes. See my last paragraph.

(Probable-nitpick: I assume here by "last" you actually meant "previous".)

> >The pre-existence of structural conformance as the C++ norm for
> >concept checking creates a psychological barrier to the acceptance
> >of named conformance. I do believe that, on the whole, the "named"
> >approach is the best way to go. It's actually totally analogous to
> >"static typing" versus "dynamic typing".
> Not sure how well the analogy holds since concepts are multi-sorted
> notions and I've always preferred Goguen/Burstall's "loose semantics"
> over "initial semantics/ADT" based approaches. But I need to think
> about this some more.

You've gone past the limits of my vocabulary in this area. Is this the
kind of stuff I would know about if I read that types book and all the
articles on your mailing list? I'd appreciate any pointers
you can give me to best help me quickly "get up to speed" so that we
can continue to communicate more on this topic.

-Brian McNamara (lorgon_at_[hidden])

Boost list run by bdawes at, gregod at, cpdaniel at, john at