
From: Fernando Cacciola (fernando_cacciola_at_[hidden])
Date: 2006-10-25 16:00:13

OK. It took me all of the extended review period to go through all of GIL
and then through all of the review messages posted to date.

First, I vote to accept GIL into boost.

There are a number of issues, which I'll address below, but none of them are
serious enough to change my vote.

Having said that, I must say that I like Vigra a lot, but I won't reject GIL
on the grounds of the elements in Vigra which can be considered to be better
or which are missing in GIL.
However, I do think that if GIL is accepted, it would be invaluable if
Ullrich could bring in all his expertise and experience into the GIL team,
not only regarding the many Vigra algorithms but the GIL core as well. Of
course, I understand that getting into a different boat after having crafted
(literally) your own out of raw wood is probably too much to ask, but well,
the result would be awesome.

- What is your evaluation of the design?

I have a number of issues with most of the concepts:

(*) ValueType concept:

I don't quite get it.
It seems that its purpose is to draw a distinction between
references/pointers and values.
That's why there is a ChannelConcept but also a ChannelValueConcept.
But it turns out that, unless I am too sleepy (it is 2:20am already), pointers
are "ValueTypes" as far as the concept specification goes. And references
are not value types only because they are not default constructible.
But is that the crux of the distinction? That value types can be default
constructed? If so, it must be better documented.

But even then: why do you need different concepts to draw the distinction
between objects (values in this case) and references/pointers to it?
I found that quite uncommon and surely GIL is not the only library
that uses references as well as values.

It is mentioned that it allows proxy channels to be used...
how do differing concepts allow that, exactly?

(*) Point concept:

Since it is used not only to specify the coordinates of a pixel but also to
describe the dimensions of an image, I would have called it Vector, not
Point. But that's minor.

In the way it is specified, a model of this concept must be a class
specifically designed for that.
Yet that is unnecessary. External adaptation (see the BGL) can be used to
allow "classic" point classes to be valid models.
This comment goes for many of the other concepts.

Also, is there any real compelling reason to allow each dimension to have
its own coordinate type? If not, then I would say this is being
over-engineered.

I believe x and y (and red,green,blue,etc) shouldn't be member variables but
"properties", that is, x() and y() (etc)
That allows a model to have an array as the internal representation
(I know you _can_ have a property interface that uses a named-like syntax
as if it were a member variable, but I really don't like that at all)

Why is ColorSpaceTypeConcept spelled with "Type" in it?

(*) ChannelsCompatibleConcept:

Why is this a concept? Isn't it just a metafunction on channels?
(the same goes for the other "compatible" concepts)

(*) ColorSpaceConcept:

Why isn't it spelled "ColorFormatConcept" instead?
IMO, this isn't modelling a "space" (a set) but a format.
In fact, in reality, models of this are just tags that confer semantics to
the "channel container" that is a pixel.

The first time I read the docs I found it totally confusing. Then I figured
out what a color-space was, and I re-read it mentally replacing each
occurrence of "color_space" with "color_format", and it all became evident.

(*) Pixel concept:

The fact that the library lacks a color concept was already discussed.
It was also said that such a concept would be exactly the same as the pixel
concept, and that, logically, it is senseless to have both.

But then I would rename PixelConcept as ColorConcept.

I like the use of the "color-format" tag (sorry, I can't spell it
color-space) to convey meaning to the collection of channel values stored in
a pixel (which I would call "color" rather than "pixel").

(*) color_base<>:

It is introduced as the class that represents the "color-space-specific
aspect of a pixel".
I have no idea what that means :)

I can see that it is the base class for the pixel class.

Why is it called "color_base" instead of "pixel_base"? Because it only
captures the "color-space-specific" aspect of a pixel? That sentence implies
there is another aspect to it, but I can't see it in the definition of the
pixel class.

(*) pixel<> and planar_ref<>:

Why are planar pixels called planar? what is planar about them, or, more
precisely, what isn't planar about any other pixel type?

I know what you mean by that, but I found the term totally confusing. I
would have called them "layered" pixel.

Also, I wouldn't call "pixel" just pixel, given that it represents an
interleaved pixel and there are pixels which are not interleaved,
so I would have called it "interleaved_pixel".

What does the "_ref" suffix mean? A reference? OK, to what?

I know it refers to the channels... but why does that have to be put in the
name? Can't interleaved pixels provide references to the channels? Of course
they can.

So, these two classes are misnamed IMO. I would call them
"interleaved_pixel" and "layered_pixel". Or else I totally missed the point
of planar_ref.

I can understand the need for a "planar_ptr<>" class, basically to move the
individual layer pointers at once. But then again I would call it
"layered_pixel_ptr", and I would make clear why a class is needed at all
when a C++ pointer would seem to suffice (as it does for interleaved pixels).

... well ...

I can actually go on... but I have to work too :) and I'd rather post half a
review than none.

- What is your evaluation of the implementation?

It saved the library :)

I mean, the documentation is poor, confusing, and even mismatched with the
code.

But I can read source code :)
And this library is much better than its documentation shows.

- What is your evaluation of the documentation?


I found one annoying issue with the documentation that I think is a mistake:
It mixes Concept Checking with Concepts.
For example:

I read in the design guide that a model for Point2DConcept must have an
integral constant named num_dimensions.
OK, how does a model provide that? as a nested type? as an "associated" type
provided by a particular traits class? Can't tell.
It also requires the type to have an "axis_value()" template function. Is
that a member function of the model? A free function in the GIL namespace?
Can't tell again.

So I look into the doxygen docs, and I found that I can't tell either, but I
can tell something: PointNDConcept is a concept "checking" class. That's not
the same as a Concept even if it comes close. For example, the "public
attribute" "P point" listed by doxygen is specific to the "checking" class.
It's not part of the Concept.

So I look into the code.
There I see the light: most requirements MUST be nested in the model, except
axis_value<>, which is a free function in the GIL namespace.

The same problem happens with all of the other "concepts".
Without looking into the code you can't tell how the requirements must be
fulfilled by models. Traits classes? Nested types? Etc.


The discussion of compatible color spaces in the design guide mentions that
two color spaces (which as I said should be called color formats)
are compatible if they have the same base.
Maybe it's me, but I found that to be an odd way to say that color-space
models form inheritance diagrams.
If you just say that, it follows by the LSP when they are compatible.

I would also note in the documentation of color-spaces that they specify
format but not values, and that they are used as tags in the template
classes that model the values.


Just an example of a problem found through all of the documentation:

It's next to impossible to understand what the "value_type" of a
HeterogeneousPixelConcept is.
It's documented as HeterogeneousPixelValueConcept<pixel_value_type>, but
HeterogeneousPixelValueConcept is a refinement of HeterogeneousPixelConcept,
so the definition is recursive, and pixel_value_type is itself undefined.

I could only really understand GIL from the source code.

IMO, the documentation needs some serious work.

The following paragraph in the design guide is unparsable to me:

"Interleaved pixels have the built-in reference (pixel<T,C>&)"

Have? Where? What is *the* built-in reference? (OK, the parenthesis makes it
just a little bit more clear.)

"Planar pixels, however, have channels that are not together in memory and
therefore a proxy class is required to represent a planar reference."

OK, until I read "a planar reference"... what's that?

"A reference to a planar pixel is defined as a set of references to each
corresponding channel"

If 'reference' in that sentence means a C++ reference then, no, it is not
defined that way.
But I actually know you are not talking about a C++ reference, so, what do
you mean by "reference to a planar pixel"?
Well, I do know the answer, because eventually I came around to understand
your use of the term reference. But I think you should decide on a new term
for that to keep it away from a C++ reference and avoid confusion.

- What is your evaluation of the potential usefulness of the library?

It represents a solid ground to build on top, so very useful.

 - Did you try to use the library? With what compiler? Did you have
any problems?

No, I didn't. I only studied the docs and the source code.

 - How much effort did you put into your evaluation? A glance? A quick
reading? In-depth study?

In-depth study.

 - Are you knowledgeable about the problem domain?

I had my share of image processing, mainly segmentation and feature
extraction.
And as is typically the case with me, I had to implement the tools myself,
from scratch, both in C++ and C#, a number of times.
So I have some experience designing and implementing this kind of library.


Fernando Cacciola
