
From: Fernando Cacciola (fernando_cacciola_at_[hidden])
Date: 2006-10-26 18:38:04

"Lubomir Bourdev" <lbourdev_at_[hidden]> wrote in message
> Fernando,
> Wow! You (and other reviewers) spent quite a lot of time reviewing GIL.
> We really appreciate that!

You're welcome :)

And BTW, I forgot to thank you and Adobe for GIL!

>> (*) ValueType concept:
>> I don't quite get it.
>> It seems that its purpose is to draw a distinction between
>> references/pointers and values.
>> That's why there is a ChannelConcept but also a ChannelValueConcept.
>> But it turns out that, unless I am too sleepy (it is 2:20am already),
>> pointers are "ValueTypes" as far as the concept
>> specification goes, and references are not value types only
>> because they are not default constructible.
>> But is that the crux of the distinction? That value types can
>> be default constructed? If so, it must be better documented.
> Yes, values have a default constructor and a proper copy constructor.
> References do not. As stated in the concept definitions.
Just for the record, references do have copy constructors. It's just that
they are shallow.
But OK, I can see why a distinction between values and
references/pointers at the concept level is useful.

>> But even then: why do you need different concepts to draw the
>> distinction between objects (values in this case) and
>> references/pointers to it?
>> I found that quite uncommon and surely GIL is not the only
>> library that uses references as well as values.
> Our concept definitions are driven by the minimum requirements needed
> when the concepts are used in algorithms.
> For example, consider an algorithm that takes two Pixels and determines,
> say, if they are equal:
> template <typename Pixel1, typename Pixel2>
> bool equal_pixels(const Pixel1& p1, const Pixel2& p2) {
> ...
> }
> In this case, all we need from the arguments is to be able to get the
> values of their channels.
> We don't need to be able to default-construct a pixel. This means that
> Pixel1 and Pixel2 should only model HeterogeneousPixelConcept.
> This allows us to pass not just pixel values, but also planar references
> as arguments:
> bool eq=equal_pixels(planar_view[4], rgb8_pixel_t(3,4,5));
> On the other hand, if we have an algorithm that requires us to construct
> and return a new pixel, then we need more than getting/setting the
> values of the channels. In that case we would need PixelValueConcept.
So a ValueType and the related concepts are there to add the additional
requirement of constructing these values, where needed.
OK, I can see how those additional requirements need not be enforced in all
cases.
I guess the docs could be clearer on that.
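For concreteness, here is a toy sketch of the idea in Lubomir's example (the
class names and members below are mine for illustration, not GIL's actual
interfaces): an equal_pixels that requires nothing from its arguments except
read access to channels, so both a pixel value and a planar reference proxy
satisfy the requirement, and neither needs to be default constructible.

```cpp
#include <cstddef>

// Requires only channel read access from both arguments.
template <typename Pixel1, typename Pixel2>
bool equal_pixels(const Pixel1& p1, const Pixel2& p2) {
    if (p1.num_channels() != p2.num_channels()) return false;
    for (std::size_t i = 0; i < p1.num_channels(); ++i)
        if (p1[i] != p2[i]) return false;
    return true;
}

// Toy pixel value: owns its channels, default constructible.
struct rgb8_pixel {
    unsigned char c[3];
    std::size_t num_channels() const { return 3; }
    unsigned char operator[](std::size_t i) const { return c[i]; }
};

// Toy planar reference: channels live in separate planes; no default
// constructor is needed to use it with equal_pixels.
struct planar_rgb8_cref {
    const unsigned char* plane[3];
    std::size_t num_channels() const { return 3; }
    unsigned char operator[](std::size_t i) const { return *plane[i]; }
};
```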

>> In the way it is specified, a model of this concept must be a
>> class specifically designed for that.
>> Yet that is unnecessary. External adaptation (see the BGL) can
>> be used to allow "classic" point classes to be valid models.
>> This comment goes for many of the other concepts.
> Yes, we are aware of this and are torn between the desire to allow full
> external adaptation and to make the code that uses the concept simpler
> and shorter.
> "p.x" is much nicer, in my opinion, than "p.x()" and especially
> "get_x(p)". The latter becomes especially unreadable in longer
> expressions.
> Unfortunately the most verbose version is also the one that most allows
> external adaptation. To ease external adaptation we would have to move
> everything from a method to a global function taking the object. We have
> done it for some of the methods, but are hesitating to do it for the
> rest. Not to mention some operations simply cannot be moved outside the
> objects, so full external adaptation is compromised anyway...
> Suggestions/opinions/votes are most welcome.

I guess I'd like to see external adaptation in the particular case of
the Point concept because Point classes are ubiquitous, yet none of them
would fit the concept the way it is.

>> Also, is there any real compelling reason to allow each
>> dimension to have its own coordinate type? If not then I
>> would say this is being over-specified.
> This comes from the fact that every dimension must have a different
> iterator. The type of the dimension is the difference_type of its
> iterator.
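A hedged sketch of that rationale (point2 and its template parameters are
illustrative names of mine, not GIL's): each dimension has its own iterator
type, and the coordinate type of that dimension is simply the iterator's
difference_type, so the two axes may legitimately differ.

```cpp
#include <iterator>
#include <vector>

// Each dimension's coordinate type comes from the difference_type of
// that dimension's iterator.
template <typename XIterator, typename YIterator>
struct point2 {
    typedef typename std::iterator_traits<XIterator>::difference_type x_coord_t;
    typedef typename std::iterator_traits<YIterator>::difference_type y_coord_t;
    x_coord_t x;
    y_coord_t y;
};
```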

>> I believe x and y (and red,green,blue,etc) shouldn't be
>> member variables but "properties", that is, x() and y() (etc)
>> That allows a model to have an array as the internal
>> representation (I know you _can_ have a property interface
>> that uses a named-like syntax as if it were a member
>> variable, but I really don't like that at all)
> This is exactly the same issue regarding external adaptability, as
> discussed above.
> Note also that "red", "green", etc. are not part of the pixel concept.
> Therefore you should not use the channel names directly in generic code.

Ah, OK.
But x,y are part of the Point concept, right?

> We could provide global functions like get_red(p), set_red(p, x), etc...
> As of now, you can use<0>(), or p.semantic_channel<0>(), or,
> for homogeneous pixels, p[0]
Anyway, Joel explained how Fusion would solve all these.

>> (*) ChannelsCompatibleConcept:
>> Why is this a concept? Isn't it just a metafunction on channels?
>> (the same goes for the other "compatible" concepts)
> Because it is often the case that algorithms require the types of the
> two channels they take be compatible.
> So a nice way to say this is that the channels must model the
> ChannelsCompatibleConcept.
> And yes, for each concept and each type, one could make a metafunction
> determining whether the type satisfies the concept. We have one in cases
> we need it, channels_are_compatible in this case.
Well, OK, I can see how the concept makes the documentation easier, and even
prescribes a proper check.
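As an illustration of the metafunction side of this (the quoted text names
channels_are_compatible; the simplified definition and channels_equal below
are my own sketch, not GIL's actual code), an algorithm can state the
requirement as a compile-time check:

```cpp
#include <type_traits>

// Simplified "compatibility" metafunction: here just same-type equality.
template <typename C1, typename C2>
struct channels_are_compatible : std::is_same<C1, C2> {};

// An algorithm stating the requirement via a compile-time check.
template <typename C1, typename C2>
bool channels_equal(C1 a, C2 b) {
    static_assert(channels_are_compatible<C1, C2>::value,
                  "channels must be compatible");
    return a == b;
}
```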

>> (*) ColorSpaceConcept:
>> Why isn't it spelled "ColorFormatConcept" instead?
> That is a good suggestion. We will think about it some more and may very
> well change the name accordingly.

Cool. I like it that way better.

>> (*) Pixel concept:
>> But then I would rename PixelConcept as ColorConcept.
> Well, here the best name is not so evident.
> As you know, the term Pixel is very widely used in graphics.
> If we change this to Color, then there will be a similar (justified)
> criticism that our image library has no notion of a Pixel. People think
> of images as 2D grids of pixels, not of colors...
Well, the imaging libraries that come to mind now all call that color, not
pixel.
So do the rendering libraries, which share this particular element (they
also manipulate pixels).
So I would be surprised if the opposite argument were raised.

>> (*) color_base<>:
>> It is introduced as the class that represents the
>> "color-space-specific aspect of a pixel".
>> I have no idea what that means :)
>> I can see that is the base class for the pixel class.
>> Why is it called "color_base" instead of "pixel_base"?
>> Because it only captures the "color-space-specific" aspect of
>> a pixel? That sentence implies there is another aspect to it,
>> but I can't see it in the definition of the pixel class.
> You can classify operations on pixels, pixel references, and pixel
> iterators,

<<<< nit-picking on <<<<

I can think of operations on pixels.
A pixel reference, OTOH, is just an "alias" for the pixel, so I don't think
the operation is on the pixel "reference" but on the "pixel" itself, which
just happens to be denoted by a reference variable.
Similarly, the only operation on pixel iterators I can think of is
traversal. Any operation on a dereferenced pixel iterator is on the pixel
itself, not the iterator.

<<<< nit-picking off <<<<

> into two categories - ones that depend on the specific color
> format and ones that are color-format agnostic and treat all the
> elements (channels) the same way.

> Examples of the first are return the first semantic channel, or color
> convert.
> Examples of the second are assignment and equality.
OK. I get it, you are drawing a distinction between semantics and
structure.
> Color base contains the "color-essence" of the pixel/reference/iterator.

The color semantics. OK.

> The color-format-agnostic operations are implemented outside, in
> pixel_algorithm.hpp. This is briefly mentioned in the design guide -
> grep for "agnostic"

The pixel structure. OK.

>> (*) pixel<> and planar_ref<>:
>> Why are planar pixels called planar? what is planar about
>> them, or, more precisely, what isn't planar about any other
>> pixel type?
> We would rather keep the name "planar" than change it to "layered". This
> is because "planar" is a well-established terminology for the image
> format in which color channels are in separate images (planes).
I know that images store color components in planes, but I can't relate that
to a "planar" pixel. :)
But then my main line of work is geometry, not graphics or imaging, so the
word planar rings too strong a bell for me.

>> I know what you mean by that, but I found the term totally
>> confusing. I would have called them "layered" pixel.
>> Also, I wouldn't call "pixel" just pixel given that it
>> represents and interleaved pixel, and there are pixels which
>> are not interleaved, so I would have called them "interleaved_pixel".
> "pixel" is the name for the color value of a pixel, whether or not it
> came from interleaved or planar image.
> It just so happens that a C++ reference to this class is also used to
> represent a reference to a pixel inside an interleaved image.
> Pixels have one value type "pixel" but may have different reference
> types - such as "pixel&" and "planar_ref".
> The value type is only concerned about representing the color, whereas
> references are concerned with representing the color at a specific
> location in memory.
>> What does the "_ref" suffix means? A reference? OK, to what?
> To a pixel. We didn't want to add this to the type and thus make it
> longer. Similarly "_ptr" is an iterator to a pixel.
>> I know is to the channels.. but why does that have to be put
>> in the name?
> planar_ref is a pixel _reference_ whose channels may be located at
> disjoint places in memory, such as in a planar image.
Ah, OK.
Well, this is what I understood based on your explanations:


Images have pixels, and pixels have values, namely, color.

A pixel can be interleaved or planar.

If it is interleaved, the channels are stored contiguously in memory, so
there can be a C++ object *directly* representing it.
If it is planar, the channels, being in separate planes, are not stored
contiguously in memory, so there cannot be a C++ object *directly*
representing it.

OTOH, both interleaved and planar pixels have a value, its color, which has
some fixed structure which is independent of the disposition of the pixel
channels in the image buffer. Hence, there can be a C++ object representing
the color of a pixel, whether interleaved or planar.

A natural requirement of an imaging framework is the ability to address
pixels in an image.
For interleaved images, since its pixels can be directly represented as C++
objects, a C++ pointer can be used for that.
However, for planar images, a *single* C++ pointer cannot be used to address
the *entire* planar pixel, so the addressing itself must be carried out by
an object (specifically, that wraps pointers to each channel).
Thus, there is the concept of a pixel pointer, or "iterator" to avoid
overloading the term pointer. This concept is modeled by a C++ pointer if
the pixel is interleaved (because in that case the pixel is itself a C++
object), or by a wrapper object if the pixel is planar, because in that case
the pixel is not a *single* C++ object. Such a wrapper simultaneously points
to all of the planar pixel channels.
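A toy sketch of such a wrapper (this is my illustration of the idea, not
GIL's actual planar_ptr class): a "pointer" to a planar RGB pixel bundles
one C++ pointer per channel plane, and traversal advances all of them in
lockstep, since no single pointer can address the whole pixel.

```cpp
#include <cstddef>

// One pointer per channel plane; increment moves all planes together.
struct planar_rgb8_ptr {
    unsigned char* r;
    unsigned char* g;
    unsigned char* b;
    planar_rgb8_ptr& operator++() { ++r; ++g; ++b; return *this; }
    planar_rgb8_ptr& operator+=(std::ptrdiff_t n) {
        r += n; g += n; b += n; return *this;
    }
};
```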

A pointer (or iterator) can be dereferenced, giving value access to the
object it points to (through a reference).

A pointer to an interleaved pixel, which is a C++ pointer, when dereferenced
gives access to *the* image pixel, as physically stored in the buffer. Such
a pixel can be an rvalue or an lvalue (it is, after all, a C++ object), so
you get to use a pixel pointer not only to read but also to mutate the
color of a pixel right there in the image buffer.

However, a pointer to a planar pixel, not being a C++ pointer, cannot offer
both rvalue and lvalue access to *the pixel* (a pseudo-object if you like)
unless another object with reference semantics is used as well.

Thus, there is also the concept of a pixel reference, or "handle" to avoid
overloading the term reference. This concept is modeled by a C++ reference
if the pixel is interleaved (because in that case the pixel is itself a C++
object), or by a wrapper object if the pixel is planar, because in that case
the pixel is not a *single* C++ object. Such a wrapper simultaneously
references all the planar pixel channels.
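A toy sketch of such a reference wrapper (again my illustration, not GIL's
actual planar_ref class): a proxy holds one C++ reference per channel plane;
reading it yields a deep copy of the color (the rvalue side), while
assigning to it writes each channel back to its own plane (the lvalue side).

```cpp
// Plain color value, independent of where the channels live.
struct rgb8_value {
    unsigned char r, g, b;
};

// Proxy reference to a planar pixel: one reference per channel plane.
struct planar_rgb8_ref {
    unsigned char& r;
    unsigned char& g;
    unsigned char& b;
    planar_rgb8_ref(unsigned char& r_, unsigned char& g_, unsigned char& b_)
        : r(r_), g(g_), b(b_) {}
    // rvalue use: deep-copy the channels into a color value.
    operator rgb8_value() const { return rgb8_value{r, g, b}; }
    // lvalue use: write each channel through to its plane.
    planar_rgb8_ref& operator=(const rgb8_value& v) {
        r = v.r; g = v.g; b = v.b; return *this;
    }
};
```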

A dereferenced pointer gives you a reference which provides the value
semantics on the object pointed to.

In the case of a pixel, the value in question is its color, which happens to
have the same structure for both interleaved and planar pixels.

When a pixel reference (or handle, as defined above) is modeled as a C++
reference, the rvalue/lvalue semantics both correspond to the pixel's value,
that is, its color, since an interleaved pixel and its color are the same
thing.
However, when a pixel is planar, its pixel reference (handle) provides
rvalue semantics that correspond to its color but lvalue semantics that
correspond to the pixel's channels as stored in the buffer (through proxy
objects).
Thus, pixel references (for both types of pixels) yield "pixel<>" objects
as the closest match to a "pixel rvalue", even though in the case of a
planar pixel such an object is really a deep copy of the pixel's color.


If all that is correct, or mostly correct, then I would suggest:

(1) rename the pixel<> class as color<>.

The reason is that objects of this class hold the *value* of both kinds of
pixels but do not represent the pixels themselves.
The fact that a "pixel<>" object from a planar image is really holding a
deep copy of the channels might be more evident if the class is spelled
"color<>".
This goes hand in hand with the discussion of the Pixel vs Color concept.

(2) rename the planar_ptr<> class as planar_pixel_iterator<>

The reason for "iterator" is that overloading the term ptr can be confusing.
It was for me.
The reason for "_pixel_" is that I believe it is generally better to never
sacrifice clarity for identifier length, unless the length is actually too
long.
(3) rename the planar_ref<> class as planar_pixel_handle<>

The reasons are analogous to those for point (2)

> Thanks for your valuable suggestions. We would like to hear the rest of
> your comments whenever you find the time.

I will :)

> It doesn't have to be part of
> a formal review, or even in the form of a posted message on the boost
> list. You could just email them if you prefer, or we could talk.
OK. I will post them to the list, but off the review to allow Tom to finally
work on it.

>> It also requires the type to have an "axis_value()" template
>> function. Is
>> that a member function of the model? A free function in the
>> GIL namespace?
>> Can't tell again.
> A nested constant.
> Some properties are explicitly stated

OK, I guess that's the problem: some are, but not all, not even in the
doxygen docs.
Just systematically check all the documentation to see if it is clear how
each type/function is to be implemented.

> that must be
> provided in associated traits class, pixel_traits and
> pixel_iterator_traits. Unless explicitly specified in the design guide,
> assume nested constants.

> I know the same argument about adaptability applies here as well. We
> could have a traits class associated with every GIL concept and move
> types and constants there. But then refering to the type/constant
> becomes much longer and uglier expression.
> This is why, as of now, we use nested typedefs, unless the model can be
> a type that cannot have nested typedefs.
That's OK.
My complaint was about the choice not being clearly stated in the docs in
all cases.
>> So I look into the doxygen docs, and I found that I can't
>> tell either, but I
>> can tell something: PointNDConcept is a concept "checking"
>> class. That's not
>> the same as a Concept even if it comes close. For example, the "public
>> attribute" "P point" listed by doxygen is specific to the
>> "checking" class.
>> It's not part of the Concept.
> Yes. What happened there is that we have classes that enforce concepts
> via boost::concept_check. We Doxygen-tag those classes to belong to the
> associated concept, which made Doxygen provide "Public Member Functions"
> and "Public Attributes" section, which could be quite confusing. We
> should see how to suppress these from the Doxygen documentation.
OK, that would help.
Or else add a section, somewhere, explaining how to interpret that doxygen
output.
>> The same problem happens with all of the other "concepts".
>> Without looking into the code you can't tell how the
>> requirements must be
>> fulfilled by models. traits_classes? nested types? etc
> I am still not sure how exactly to represent this using the new concept
> syntax. We will look into this.
> Right now to find this you need to read the associated design guide
> section. Unless it states there is an associated traits class, the types
> must be nested.
The problem is that there is some mismatch between the doxygen-generated
docs + the design guide and the actual code.
I found places where I realized that traits classes were used only after
looking at the code.

>> The discussion of compatible color spaces in the design guide
>> mentions that
>> two color spaces (which as I said should be called color formats)
>> are compatible if they have the same base.
>> Maybe it's me, but I found that to be an odd way to say that
>> color-space
>> models form inheritance diagrams.
>> If you just say that, it follows by LSP when are they compatible.
> No they don't have to use inheritance to be compatible. They just need
> to have the same base, i.e.
> boost::is_same<typename CS1::base, typename CS2::base>
Ah, OK.
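The quoted rule can be sketched directly (the color-space tag names below
are illustrative, not GIL's actual types): compatibility means "same base",
checked with is_same, with no inheritance involved.

```cpp
#include <type_traits>

// Two tags sharing a base (same channels, different layout) and one that
// does not.
struct rgb_t { typedef rgb_t base; };
struct bgr_t { typedef rgb_t base; };
struct lab_t { typedef lab_t base; };

// Compatible iff the bases are the same type.
template <typename CS1, typename CS2>
struct color_spaces_compatible
    : std::is_same<typename CS1::base, typename CS2::base> {};
```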

>> Just an example of a problem found through all of the documentation:
>> It's next to impossible to understand what the "value_type" of a
>> HeterogeneousPixelConcept is.
>> It's documented as:
>> HeterogeneousPixelValueConcept<pixel_value_type>, but:
>> HeterogeneousPixelValueConcept is a refinement of
>> HeterogeneousPixelConcept
>> so the definition is recursive.
> Yes indeed. But there are many other places where the definition is
> recursive.
> For example, in the definition of a ViewConcept there are types of
> derived views, which also must model ViewConcept.
> A pixel iterator must have in its traits a "const_t" which must also
> model (an immutable) PixelIteratorConcept.
> In the concept check classes, we simply don't follow the circular
> branches.
> Suggestions on how to resolve this are welcome.
Hmm, I see.
Well, if I can come up with something I'll tell you.


Fernando Cacciola

Boost list run by bdawes at, gregod at, cpdaniel at, john at