
From: Lubomir Bourdev (lbourdev_at_[hidden])
Date: 2007-04-28 16:18:09

On 4/26/07 12:33 PM, "Christian Henning" <chhenning_at_[hidden]> wrote:
> I cannot quite follow you here. Could you elaborate on what you mean
> with scaling up the histogram to the full range?

Suppose you have a 16-bit grayscale image that is dark. Most of the colors
will be close to black. Let's say they vary from 0/65535 to 300/65535.
Your algorithm will scale their values so that 300/65535 will become
65535/65535, i.e. fully bright. Instead of using just 300 possible values,
the image will use the full range.

I suspect you scale the histogram so that you can decrease the loss of
precision when converting the values to 8-bit range (because they now map to
the full 0..255 range, instead of just the first few values).

>> While this algorithm is useful for increasing the contrast of the image,
>> I wouldn't use it as the default when saving higher bit depth images to
>> 8-bit formats. If you want to preserve as much as possible the original
>> look of the image I suggest using color_converted_view to a form
>> supported by the I/O format.
> Again, useful for increasing the contrast? Seems my limited knowledge
> is getting back on me, here. Could you also please describe what you
> mean.

Because the values cover the full range 0..65535, the distance between two
neighboring intensities is larger, and thus the contrast of the image
increases.

Unfortunately one side effect is that the brightness of the image changes.
So your very dark image may look a lot brighter after the transformation.
This is why I suggested that by default we use color_converted_view, which
will preserve the original brightness and contrast of the image.

>> Other suggestions regarding your code:
>> 1. A better histogram spread computes the image histogram and uses the
>> top/bottom N% value as the min/max value, instead of the absolute
>> minimum and maximum. This is more robust to noise.
> How would you implement it? I started my own implementation that would
> create a vector<channel_t> for each channel. But I was running into
> problems all over.

I haven't thought much about this, but this is the k-th order statistic
problem (in an array of numbers, find the K-th smallest one). There is an
algorithm to do this in O(N). This would be rather useful; perhaps someone
should put it in Boost.

> I tried creating an mpl::vector that contains a std::vector<channel_t>
> for each channel using the mpl::transform1 algorithm. That works
> flawlessly for an rgb8_view_t; it creates something like:
> mpl::vector< std::vector<uint8>, std::vector<uint8>, std::vector<uint8> >
>
> template <typename T>
> struct add_vector
> {
>     typedef std::vector<T> type;
> };
>
> template< class PIXEL
>         , class VIEW >
> inline PIXEL min_channel_values( const VIEW& view
>                                , const size_t percent = 1 )
> {
>     typedef typename mpl::transform1<
>         typename color_space_type<VIEW>::type,
>         add_vector<mpl::_1> >::type channels_t;
>     return PIXEL();
> }
> Right now, I don't understand how to create such an object where all
> vectors are resized to the correct max values. Do you have any ideas
> on how to do that?

My suggestion is to ignore the fact that there are multiple channels in a
pixel and just write the algorithm to operate on a grayscale image. It is
then trivial to extend it by using nth_channel_view.

>> You can then wrap it into a function object per pixel and use
>> gil::for_each_pixel
> I tried it but static_for_each only supports up to three pixels as
> parameters. But I think I need four. One for the current src pixel,
> current dst, src_min, and src_diff.

You don't need four. min and diff don't change per channel, so you can keep
them in the state of your function object.

Your algorithm makes the most sense for grayscale images, and will do a
reasonable job for RGB as well. However, in general, for many algorithms you
don't want
to treat the channels of a pixel as a simple vector of channels. In
particular, for color spaces whose basis vectors are not orthogonal (like
CMYK) such linear interpolation per channel will produce undesired effects.

I suggest that you make your algorithm operate on grayscale images for now.
Ideally, the algorithm would operate on the luminosity and leave the hue
alone.


Boost list run by bdawes at, gregod at, cpdaniel at, john at