
From: Lubomir Bourdev (lbourdev_at_[hidden])
Date: 2006-06-22 19:44:01


>Ok. You've pointed out how to make a new permutation of any colorspace,
>but don't you think it's kind of too low-level stuff and should be
>automated?

Yes, that would be nice, and if people believe it is important we can
make adding a new permutation require even less new code. But there are
diminishing returns here: it is already easy to create a new permutation,
and the number of color space permutations that people ever use in
practice is very small. We already provide the vast majority of them, so
people will rarely have to create their own.
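
To make this concrete, here is a minimal sketch of the idea - not GIL's
actual interface; the names channel_order and pixel are purely
illustrative - showing that a "permutation" amounts to a one-line
compile-time mapping from semantic channel index to storage position:

    // A permutation color space is just a compile-time reordering of the
    // channels; defining a new ordering is a single typedef.
    #include <array>
    #include <cstdint>
    #include <iostream>

    template <int R, int G, int B>
    struct channel_order {            // storage slot of each semantic channel
        static constexpr int red   = R;
        static constexpr int green = G;
        static constexpr int blue  = B;
    };

    using rgb_order = channel_order<0, 1, 2>;  // R, G, B in memory
    using bgr_order = channel_order<2, 1, 0>;  // B, G, R in memory (the permutation)

    template <typename Order>
    struct pixel {
        std::array<std::uint8_t, 3> data{};                  // raw storage
        std::uint8_t& red()   { return data[Order::red];   } // semantic accessors
        std::uint8_t& green() { return data[Order::green]; }
        std::uint8_t& blue()  { return data[Order::blue];  }
    };

    int main() {
        pixel<bgr_order> p;                   // stored as B, G, R
        p.red() = 255;                        // code only talks to semantic channels
        std::cout << int(p.data[2]) << "\n";  // prints 255: red lives in slot 2
    }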

>I still think your approach is less flexible:
>1. I can hardly imagine how you would support all YCbCr 4:4:4, 4:2:2,
>4:2:1, 4:2:0, etc. (especially the 4:2:0 one ;) without redundant data)
>2. I can hardly imagine how you would support all RGB565, RGB555,
>RGB888, etc. with your RGB class or BGR?

Pavel - you are confusing the concept of a GIL pixel with a specific
design of a particular model. One of the reasons GIL is flexible is that
it allows you to use multiple models of the same concept. This is why it
is able to abstract away the planar vs. interleaved structure of the
image - it has different models for a planar and an interleaved pixel
reference and iterator. This is also why the same GIL algorithm will work
for a subsampled view of the image, for a color-converted view, for a
fully synthetic image view, or even for a run-time-specified view -
because these are all just different models of an image view.
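
To illustrate the concept/model separation, here is a hedged,
self-contained sketch - again not GIL code; memory_view, gradient_view
and count_above are made-up names - in which one algorithm template runs
unchanged over a memory-backed view and over a fully synthetic view,
because both model the same minimal "image view" interface:

    #include <cstdint>
    #include <iostream>
    #include <vector>

    struct memory_view {                       // model 1: pixels in a buffer
        int w, h;
        std::vector<std::uint8_t> buf;         // one 8-bit channel, row-major
        std::uint8_t operator()(int x, int y) const { return buf[y * w + x]; }
        int width()  const { return w; }
        int height() const { return h; }
    };

    struct gradient_view {                     // model 2: values computed on the fly
        int w, h;
        std::uint8_t operator()(int x, int y) const {
            return static_cast<std::uint8_t>((x + y) % 256);
        }
        int width()  const { return w; }
        int height() const { return h; }
    };

    template <typename View>                   // works for any model of the concept
    long count_above(const View& v, std::uint8_t threshold) {
        long n = 0;
        for (int y = 0; y < v.height(); ++y)
            for (int x = 0; x < v.width(); ++x)
                if (v(x, y) > threshold) ++n;
        return n;
    }

    int main() {
        memory_view m{4, 4, std::vector<std::uint8_t>(16, 200)};
        gradient_view g{4, 4};
        std::cout << count_above(m, 128) << " " << count_above(g, 128) << "\n";
    }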

We are working on heterogeneous pixel models and models of sub-byte
channels. All of your examples above can be modeled in GIL. Some require
providing a new pixel model, others may require providing a new image
view model. I don't think it is impossible to create a GIL pixel model
that allows you to enumerate channels similar to the way you have it in
your example.
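
As one example of what a heterogeneous, sub-byte-channel pixel model
could look like - a sketch only, not GIL's implementation - RGB565 packs
red and blue into 5 bits each and green into 6 bits of a single 16-bit
word, and the model hides the bit twiddling behind channel accessors:

    #include <cstdint>
    #include <iostream>

    struct rgb565_pixel {
        std::uint16_t bits = 0;                         // rrrrrggggggbbbbb

        std::uint8_t red()   const { return (bits >> 11) & 0x1F; }  // 5 bits
        std::uint8_t green() const { return (bits >> 5)  & 0x3F; }  // 6 bits
        std::uint8_t blue()  const { return  bits        & 0x1F; }  // 5 bits

        void set_red(std::uint8_t v)   { bits = (bits & 0x07FF) | (std::uint16_t(v & 0x1F) << 11); }
        void set_green(std::uint8_t v) { bits = (bits & 0xF81F) | (std::uint16_t(v & 0x3F) << 5);  }
        void set_blue(std::uint8_t v)  { bits = (bits & 0xFFE0) |  std::uint16_t(v & 0x1F);        }
    };

    int main() {
        rgb565_pixel p;
        p.set_red(31); p.set_green(63); p.set_blue(0);  // saturated yellow
        std::cout << std::hex << p.bits << "\n";        // prints ffe0
    }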

There is a huge number of ways an image may be represented and new ones
come up over time. There is an unlimited number of imaging algorithms as
well. It is not GIL's objective to have a model for every possible image
representation and imaging algorithm. What is important is that we have
good concepts that allow people to make extensions to create whatever
they need and know that their models will work with the rest of GIL.

>2. Perhaps with your "semantic channels" you can create an interleaved
>RGB image, but do you really think that mixing "colorspace" stuff with
>how it is organized in memory is good?

I see your point, but I think it is a matter of terminology. What we
call "base color space" could simply be "color space", and "derived color
space" could be called something else - "layout"? "channel permutation"?

>You could use T& semantic_channel(boost::mpl::int_<0>) { return red; }

That is a good idea! Someone asked us why we want to put GIL in Boost,
since it is already a part of a high-profile library. This is an example
- we are already getting good ideas to help us make GIL even better.
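
For reference, here is a small self-contained sketch of how that
suggestion could look, using std::integral_constant as a stand-in for
boost::mpl::int_ (the bgr_pixel and get_semantic names are just for
illustration): each overload maps a compile-time semantic index to the
member holding that channel, so generic code can ask for "channel 0" and
always get red, regardless of storage order:

    #include <cstdint>
    #include <iostream>
    #include <type_traits>

    struct bgr_pixel {
        std::uint8_t blue, green, red;   // storage order: B, G, R

        // semantic index 0 = red, 1 = green, 2 = blue
        std::uint8_t& semantic_channel(std::integral_constant<int, 0>) { return red;   }
        std::uint8_t& semantic_channel(std::integral_constant<int, 1>) { return green; }
        std::uint8_t& semantic_channel(std::integral_constant<int, 2>) { return blue;  }
    };

    template <int N, typename Pixel>
    std::uint8_t& get_semantic(Pixel& p) {         // generic, index-based access
        return p.semantic_channel(std::integral_constant<int, N>{});
    }

    int main() {
        bgr_pixel p{0, 0, 0};
        get_semantic<0>(p) = 255;                  // set "red" without knowing the layout
        std::cout << int(p.red) << "\n";           // prints 255
    }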

>About IO: If I understand correctly, you at Adobe have "not open-source"
>GIL IO stuff? Is that correct, and would the two versions be the same
>one day?

That is incorrect. We don't have a more advanced internal GIL I/O
extension. We are working on a higher-level I/O layer that will allow
you to register image format modules at run-time. It will use the base
GIL I/O extension that we have released; it will just be on top of that.
But it will have some dependencies on the Adobe Source Libraries and
thus we cannot include it in our Boost proposal. That higher-level I/O
layer will be fully open-sourced as well. (We are not using GIL's I/O
module internally because our products already know how to read/write
image files.)

That said, we do have some internal GIL extensions that we are not at
liberty to open-source. For example, Photoshop Elements 4.0 (shipped
last Fall) has the ability to automatically extract human faces in your
photographs and present them to you to speed up tagging. It uses an
early version of GIL for the resampling and all face detection stuff.
The next version of Photoshop will also use GIL but unfortunately I
cannot share with you how and where without losing my job :-)

Thank you, Pavel, for all of your suggestions on how to make GIL better.
Apart from a few minor comments, we have not heard from anyone else. Are
other people reviewing/trying out the library?

Lubomir

