From: Pavel Chikulaev (pavel.chikulaev_at_[hidden])
Date: 2006-06-22 23:24:15
Lubomir Bourdev wrote:
>>Ok. You've pointed out how to make a new permutation of any colorspace,
>>but don't you think it's kind of too low-level stuff and should be
>>automated?
> Yes, it would be nice, and if people believe it is important, we can make
> adding a new permutation require even less new code. But there are
> diminishing returns here - it is already easy to create a new permutation,
> and the number of color space permutations that people ever use in
> practice is very small; we already provide the vast majority of them, so
> people will rarely have to create their own permutation.
Actually, you came up with the word "permutation" :) I think that there
shouldn't be any permutations or any derived colorspaces. My point is
that we should define a specific colorspace (e.g. RGB) independently,
without any information about how precisely we store the channel
information, and we should not make any assumption such as all channels
being the same size (your RGB class, for example). This is what I call a
colorspace, and it is what belongs in the concept of an image. Every
image with such a colorspace should be treated the same way, no matter
whether it is BGR, RGB or any other kind of layout (e.g. interleaved).
For example:
template<typename Image>  // or view, actually
void foo_on_rgb_image_algorithm(Image const & image,
    where<Image, is_rgb888_colorspace>* = 0);  // 888 just as an example
But how a specific image (or view) is organized in memory (or is even
synthetic, as in your example) - that is what I call layout. But you mix
the two from the very beginning (when you define the RGB class) to the
very end (the member functions of the templated image class, with
arguments that specify the row alignment). I think that's no good at all.
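To make this concrete, here is a minimal, self-contained sketch of the
separation I mean. The names (rgb_colorspace, is_rgb_colorspace, where,
and the two layout structs) are made up purely for illustration - this is
not GIL code:

#include <iostream>

struct rgb_colorspace {};                       // says *what* the channels mean

// Two different layouts of the *same* colorspace.
struct rgb888_interleaved {                     // 8-8-8, R G B adjacent in memory
    typedef rgb_colorspace colorspace;
    unsigned char r, g, b;
};
struct rgb565_packed {                          // 5-6-5, packed into 16 bits
    typedef rgb_colorspace colorspace;
    unsigned short bits;
};

// A predicate over images: does the image's colorspace tag equal rgb_colorspace?
template <typename A, typename B> struct is_same       { enum { value = 0 }; };
template <typename A>             struct is_same<A, A> { enum { value = 1 }; };

template <typename Image>
struct is_rgb_colorspace
    : is_same<typename Image::colorspace, rgb_colorspace> {};

// Poor man's enable_if: "where" exposes a type only when the predicate holds.
template <bool> struct enabler {};
template <>     struct enabler<true> { typedef void* type; };

template <typename Image, template <class> class Pred>
struct where : enabler<(Pred<Image>::value != 0)> {};

// The algorithm is constrained on the colorspace only; both layouts above
// are accepted, and anything non-RGB is rejected at compile time.
template <typename Image>
void foo_on_rgb_image_algorithm(Image const& image,
                                typename where<Image, is_rgb_colorspace>::type = 0)
{
    (void)image;                                // the real work would go here
    std::cout << "works for any RGB layout\n";
}

int main() {
    foo_on_rgb_image_algorithm(rgb888_interleaved());    // interleaved 8-8-8
    foo_on_rgb_image_algorithm(rgb565_packed());          // packed 5-6-5
    return 0;
}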
>>I still think your approach is less flexible:
>>1. I can hardly imagine how you would support all of YCbCr 4:4:4, 4:2:2,
>>4:2:1, 4:2:0, etc. (especially the 4:2:0 one ;) without redundant data.
>>1. I can hardly imagine how you would support all of RGB565, RGB555,
>>RGB888, etc. with your RGB class, or BGR?
> Pavel - you are confusing the concept of GIL pixel with a specific
> design of a particular model. One of the reasons GIL is flexible is that
> it allows you to use multiple models of the same concept. This is why it
> is able to abstract away the planar vs. interleaved structure of the
> image - it has different models for a planar and interleaved pixel
> reference and iterator. This is why the same GIL algorithm will work for
> a subsampled view of the image, for a color converted view, for a fully
> synthetic image view, or even for a run-time specified view - because
> these are just different models of an image view.
Ok, that's not strange to me.
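Just so we are talking about the same thing, this is how I read the
concept/model point above - a toy sketch with made-up types
(interleaved_ref, planar_ref), not GIL's actual classes: one algorithm
written once against a pixel-reference concept, working over both an
interleaved and a planar model:

#include <cassert>

// Model 1: reference to a pixel stored interleaved (R, G, B adjacent in memory).
struct interleaved_ref {
    unsigned char* p;                                    // points at the R byte
    unsigned char& channel(int i) const { return p[i]; }
};

// Model 2: reference to a pixel stored planar (each channel in its own plane).
struct planar_ref {
    unsigned char* plane[3];                             // one pointer per channel
    unsigned char& channel(int i) const { return *plane[i]; }
};

// One algorithm, written once against the "pixel reference" concept.
template <typename PixelRef>
void invert(PixelRef px) {
    for (int i = 0; i < 3; ++i)
        px.channel(i) = static_cast<unsigned char>(255 - px.channel(i));
}

int main() {
    unsigned char interleaved[3] = { 10, 20, 30 };
    interleaved_ref ir = { interleaved };
    invert(ir);                                          // works on interleaved data
    assert(interleaved[0] == 245);

    unsigned char r = 10, g = 20, b = 30;
    planar_ref pr = { { &r, &g, &b } };
    invert(pr);                                          // and on planar data
    assert(r == 245 && g == 235 && b == 225);
    return 0;
}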
> We are working on heterogeneous pixel models and models of sub-byte
> channels. All of your examples above can be modeled in GIL. Some require
> providing a new pixel model, others may require providing a new image
> view model. I don't think it is impossible to create a GIL pixel model
> that allows you to enumerate channels similar to the way you have it in
> your example.
So you are now working on heterogeneous pixel models; that is one more
proof to me that I'm on the right track. I don't even need to bother
doing so - I already have it.
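For what it's worth, this is the kind of heterogeneous packed-pixel model
I have in mind - again only an illustrative sketch (rgb565_pixel is a
made-up name): channels of different bit widths sharing a single 16-bit
word:

#include <cstdio>

// Heterogeneous packed pixel: 5-bit red, 6-bit green, 5-bit blue in one word.
struct rgb565_pixel {
    unsigned short bits;                               // layout: rrrrrggggggbbbbb

    unsigned red()   const { return (bits >> 11) & 0x1F; }   // 5 bits
    unsigned green() const { return (bits >> 5)  & 0x3F; }   // 6 bits
    unsigned blue()  const { return  bits        & 0x1F; }   // 5 bits

    void set(unsigned r, unsigned g, unsigned b) {
        bits = static_cast<unsigned short>(((r & 0x1F) << 11) |
                                           ((g & 0x3F) << 5)  |
                                            (b & 0x1F));
    }
};

int main() {
    rgb565_pixel p;
    p.set(31, 63, 0);                                  // full-intensity yellow
    std::printf("r=%u g=%u b=%u\n", p.red(), p.green(), p.blue());
    return 0;
}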
> There is a huge number of ways an image may be represented and new ones
> come up over time. There is an unlimited number of imaging algorithms as
> well. It is not GIL's objective to have a model for every possible image
> representation and imaging algorithm. What is important is that we have
> good concepts that allow people to make extensions to create whatever
> they need and know that their models will work with the rest of GIL.
I have always wanted to implement most of those algorithms, but right now
I just don't feel GIL is the platform where I want to do it, sorry :(
>>2. Perhaps with your "semantic channels" you can create an interleaved RGB
>>image, but do you really think that mixing the "colorspace" stuff with how
>>it is organized in memory is good?
> I see your point, but I think it is a matter of terminology. What we
> call "base color space" could be simply "color space", and "derived color
> space" could be called something different: "layout"? "channel permutation"?
No, it's not about terminology; it's about mixing two absolutely
different things into one.
-- Pavel Chikulaev