
From: Kevin Wheatley (hxpro_at_[hidden])
Date: 2006-10-12 12:03:15


Andy Little wrote:
> For theatre lighting (and for lighting a 3d CGI scene) one uses lights with
> colour filters and mixes colours. Lights can be dimmed or brightened or switched
> on and off.
>
> These are quite simple operations and seem to me to apply to most models of
> colour.

They certainly apply to linear additive light models, but they don't
always apply to non-linear subtractive models (well, at some level of
detail these become linear light in the physical world, but
users/artists don't view it like that).
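
To make the distinction concrete, here is a rough sketch (the names and
the choice of the sRGB curve are just mine for illustration, not a
proposed interface): dimming and mixing are plain arithmetic on
linear-light values, but the same arithmetic on the encoded values
gives a different, wrong answer.

#include <cmath>
#include <cstdio>

struct LinearRGB { double r, g, b; };          // proportional to light energy

LinearRGB operator+(LinearRGB a, LinearRGB b)  // mixing two light sources
{ return { a.r + b.r, a.g + b.g, a.b + b.b }; }

LinearRGB operator*(double k, LinearRGB c)     // dimming / brightening
{ return { k * c.r, k * c.g, k * c.b }; }

double srgb_encode(double v)                   // linear -> display encoding
{ return v <= 0.0031308 ? 12.92 * v : 1.055 * std::pow(v, 1.0 / 2.4) - 0.055; }

double srgb_decode(double v)                   // display encoding -> linear
{ return v <= 0.04045 ? v / 12.92 : std::pow((v + 0.055) / 1.055, 2.4); }

int main()
{
    LinearRGB lamp{0.18, 0.18, 0.18};
    LinearRGB dimmed = 0.5 * lamp;             // correct: halves the energy

    // halving the *encoded* value instead does not halve the light
    double wrong = srgb_decode(0.5 * srgb_encode(0.18));
    std::printf("half in linear light: %.4f   half of encoded value: %.4f\n",
                dimmed.r, wrong);
    return 0;
}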

> The other aspect of colour ( but not being an expert) is that it seems to be
> like a 3d vector in many respects, where instead of x,y,z you have the colours
> red, green and blue (as I saw on a Wiki somewhere). This gives you a basis
> for some mathematical operations, like addition, subtraction and multiplication
> by a scalar. That is assuming the RGB model, but the RGB model appears to be the
> closest hardware counterpart to the physical phenomena.

Colour is a human concept; it only exists in the brain. In the
physical world you have an infinite space of wavelengths, and spectral
colour, as used for instance when modelling the spectral reproduction
of things, uses more than 3 basis vectors: for example you may take
10nm intervals across the visible spectrum (you may also need UV and
IR bands if you're dealing with things like whiteners added to paper
etc.).
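
A spectral representation is then just a vector of samples over
wavelength, something like this sketch (the names and band count are
mine, purely illustrative):

#include <array>
#include <cstddef>

template <std::size_t Bands, int StartNm = 380, int StepNm = 10>
struct SpectralPower
{
    std::array<double, Bands> samples{};       // energy per wavelength band

    static int wavelength(std::size_t i)       // centre of band i, in nm
    { return StartNm + StepNm * static_cast<int>(i); }
};

// 36 bands of 10nm cover 380-730nm; widen the range (and Bands) if
// UV/IR response matters, e.g. for paper whiteners.
using VisibleSpectrum = SpectralPower<36>;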

If you needed to work with unusual eye conditions, e.g.
http://en.wikipedia.org/wiki/Tetrachromat, you may need 5 basis
functions for your colour representation. To truly understand the
cinema-style experience you need 4 (rods and cones).

But these are all based upon additive mixtures of linear light.

They don't quite work in subtractive systems (like printing or print film)
in the same way, nor in non-linearly encoded colour spaces.
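
In a subtractive system the medium removes light, so stacking filters
multiplies per-band transmittances rather than adding anything.
Roughly (again just an illustrative sketch, not an interface I'm
proposing):

#include <array>
#include <cstddef>

// fraction of light passed in each of 36 wavelength bands (0.0 - 1.0)
using Transmittance = std::array<double, 36>;

// stacking two filters multiplies their transmittances band by band
Transmittance stack(const Transmittance& a, const Transmittance& b)
{
    Transmittance out{};
    for (std::size_t i = 0; i < out.size(); ++i)
        out[i] = a[i] * b[i];                  // stays within 0..1
    return out;
}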

> That doesn't give you much of a basis for comparing two images from different
> sources. Say for comparing a CCTV image of a criminal to a mugshot. Presumably
> there are standards around which try to address this problem.

Well, yes and no. There is often a set of viewing conditions
associated with reproducing an image in the correct way; what is not
yet complete is how to map an image out of this set of conditions. For
instance, the size of the image affects how you perceive it - this is
not something non-specialists would guess, I imagine, although they
will have observed, at least in a coarse way, that changing the light
changes how printed media looks; try looking at a photo under a sodium
vapor street lamp, for instance.

>>In terms of the library, I disagree. The library should model not the
>>physical concept of colour, because that is tricky to define, hard to
>>understand, and complicated to work with.
>
>
> That is up to the designer of the Concept. The phenomenon may be complex but the
> means of using it, its operations, seem fairly simple to me.

Generally, in the CGI end of the business we have metadata that
describes how to interpret the data, separate from the data
representation itself. I'd want a library to understand that
separation. Floating point numbers do go >1.0 (and <0.0 once you enter
abstract mathematical representations).
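
Something like the following sketch is the kind of separation I mean
(all names are made up for illustration, not an existing API): the
pixel data is plain floating point, free to go outside [0,1], and a
separate description says how to interpret it.

#include <cstddef>
#include <string>
#include <vector>

struct ColourDescription                 // how to interpret the numbers
{
    std::string primaries;               // e.g. "Rec.709", "ACES"
    std::string transfer;                // e.g. "linear", "sRGB", "log"
    std::string white_point;             // e.g. "D65"
};

struct Image
{
    std::size_t width = 0, height = 0;
    std::vector<float> pixels;           // RGB triples; values may be >1.0 or <0.0
    ColourDescription description;       // carried alongside, not baked into the pixels
};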

Personally, if I ever see an HSV/HSL/etc. colour space I avoid it like
the plague, as generally they are ill-defined by some 'graphics'
textbook rather than being a true 'colour' space. This is because the
approximation they make is *too* approximate for the level of
adjustments we need to make.

Kevin

-- 
| Kevin Wheatley, Cinesite (Europe) Ltd | Nobody thinks this      |
| Senior Technology                     | My employer for certain |
| And Network Systems Architect         | Not even myself         |
