Color and Light – Linear and Log >> Human vs Video

What We See

Human vision is complex: not only do we have a varying capacity to see colour and light, we also process what we see through our brains, which add layers of interpretation to colour and light.

Our eyes contain two types of photoreceptor cells: cones, which detect colour but need bright light, and rods, which work in dim light. These two types of cells do not exist in equal proportions, nor are they distributed evenly in our eyes. The cones, which see colour and require bright light, are fewer in number and are concentrated in the centre of our vision. The rods, which see in dim light, are more numerous and are concentrated primarily around the edges of our vision.

Whether the light gets darker or brighter, the decline in what we see is very gradual. We can see details in bright light, and will see colour, if not fine detail, into the very brightest of highlights. Our ability to distinguish colours and details declines gradually as the light fades, but we are able to detect motion and see shapes into very deep shadow.

What the Camera Gets

What a camera “sees” can be described simply: a camera’s sensor records a narrow range of light and colour, and its photo receptors respond uniformly across the field of view. Photo receptors do not desaturate colour in shadows, nor do they record more detail as light gets brighter. Similarly, photo receptors do not record more colour in the centre of the field of view. Each photo receptor, regardless of its location on the sensor, records colour and light as they exist within the sensor’s range of luminance. Further, a sensor’s ability to record colour and detail simply ends at either end of that range: highlights clip to white and shadows clip to black.

Trichromatic Theory

Trichromatic theory holds that our colour vision comes from three types of cone cells, each most sensitive to red, green, or blue light. It shouldn’t be a surprise, then, that all colours in luminance output devices (cameras, computer monitors, projectors, and so on) are composed of varying combinations of red, green, and blue. Because RGB are the colours of light, if you add all three colours together, you get white. Subtract all three colours and you get black. That is the basis of the RGB colour model.

The print colour model—CMY—is the inverse of the RGB model and, thus, also based on the trichromatic theory. CMY are the colours of print. Ink absorbs certain wavelengths of light, and reflects others, to create colour. If you subtract each of red, green, and blue from white, you get the colour opposites: cyan, magenta, and yellow, or CMY. If you add all three colours (CMY) together, you get (almost) black.
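
To make that complement relationship concrete, here is a minimal sketch in Python; the function name and the normalised 0 to 1 scale are just for this illustration, not part of any particular colour library.

```python
# Minimal sketch: CMY as the complement of RGB on a normalised 0-1 scale.
def rgb_to_cmy(r, g, b):
    """Subtract each RGB component from white (1.0) to get its print opposite."""
    return (1.0 - r, 1.0 - g, 1.0 - b)

print(rgb_to_cmy(1.0, 0.0, 0.0))  # (0.0, 1.0, 1.0): pure red light -> cyan ink
print(rgb_to_cmy(1.0, 1.0, 1.0))  # (0.0, 0.0, 0.0): white -> no ink at all
print(rgb_to_cmy(0.0, 0.0, 0.0))  # (1.0, 1.0, 1.0): black -> full C + M + Y
```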

Opponent Process Theory

The opponent process theory suggests that the cone cells of our eyes are neurally linked to form three opposing pairs of colour: blue versus yellow, red versus green, and black versus white. When one of the pair is activated, activity is suppressed in the other. For example, as red is activated, we see less green, and as green is activated, we see less red.

If you stare at a patch of red for a minute, then switch to look at an even patch of white, you’ll see an afterimage of green in the middle of the white. This is the opponent process at work in your vision. The reason we see green after staring at red is because by staring we have fatigued the neural response for red. This allows the neural response for green to increase.

See more on tutsplus

Colour on computers is a minefield. Our eyes perceive light in a different way to a camera, and settings like gamma can completely change the way mixed colours are displayed.

LIGHT IS LINEAR

Basically, if you double the energy emitted from a light source, and the distance from that light source stays constant, the light intensity at that point will also double. Easy, right?

THE INVERSE SQUARE RULE

Imagine a single light source in a massive dark room. Standing right next to the light, you’ll experience the highest light intensity possible. Moving to the far end of the room, you’ll experience the least intensity in the room, because the light intensity diminishes over distance.

However, it doesn’t diminish linearly as distance increases. Stand halfway between the light source and the far end of the room: the light at the far end won’t be half as bright as it is where you stand, it will be roughly a quarter as intense, because doubling the distance quarters the intensity. The light intensity is inversely proportional to the square of the distance from the light source.
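
A tiny sketch of that fall-off, with arbitrary units for the source power and room size (the specific numbers are just for illustration):

```python
# Inverse-square fall-off: intensity is proportional to power / distance^2.
def intensity(power, distance):
    return power / distance ** 2

near = intensity(power=100.0, distance=1.0)   # right next to the light
half = intensity(power=100.0, distance=5.0)   # halfway across a 10 m room
far  = intensity(power=100.0, distance=10.0)  # at the far end of the room

print(far / half)   # 0.25 -> doubling the distance leaves a quarter of the intensity
print(half / near)  # 0.04 -> nowhere near "half as bright" halfway across the room
```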

HUMAN PERCEPTION IS NOT LINEAR

This is the real world physics of light. However, our perception of luminance is quite different and that is important when it comes to how we map real world linear luminance values to perceived brightness. We are more sensitive to small changes in luminance at the low end of the scale than the high end.

THE GAMMA CURVE – LINEAR VS LOG

By encoding luminance non-linearly, using a more or less logarithmic curve, we can assign a larger number of smaller increments to the low and mid parts of the brightness scale, and fewer, larger increments to the highlights and extended highlights.

An idealized “gamma” of 1.0 is just a straight line: such a linear mapping divides values perfectly evenly between 0 and 1023 across the scale of linear luminance, so the midpoint of 512 sits exactly halfway between black and white, which should be 50% grey, right? Wrong. A value of 512 will actually look like about 75% grey, because our perception of brightness is not linear. With linearly mapped values, far fewer code values end up covering the dark end of the scale than the bright end.
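
To put numbers on that claim, here is a small sketch that assumes a simple 2.2 power law as a stand-in for perceived lightness (a rough assumption, not an exact model of either the eye or any specific display):

```python
BITS = 10
MAX_CODE = 2 ** BITS - 1   # 1023
GAMMA = 2.2                # assumed perceptual/display power law

linear_mid = 512 / MAX_CODE                 # ~0.5 of linear luminance
perceived = linear_mid ** (1 / GAMMA)       # approximate perceived lightness
print(f"code 512 -> linear {linear_mid:.2f}, looks like ~{perceived:.0%} grey")
# -> looks like ~73% grey, nowhere near mid-grey

# How many of the 1024 linear code values cover the darkest quarter of
# perceived lightness?
dark = sum(1 for c in range(MAX_CODE + 1)
           if (c / MAX_CODE) ** (1 / GAMMA) < 0.25)
print(f"{dark} of {MAX_CODE + 1} linear codes cover the darkest 25% of perceived lightness")
# -> only ~49 codes: the dark end gets very few values under linear mapping
```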

See more on dcinema

In digital photography we are fundamentally concerned with brightness (luminance) in a scene that needs to be converted into a coded value (dependent on bit-depth) of video signal strength (sometimes represented in millivolts: mV) in order to reproduce an image. To make it simple, we can say that a digital camera will assign a number to a specific amount of brightness in a scene and that number will be output as voltage. On set we can view the intensity of this voltage by running our video signal through a waveform monitor and noting its IRE value. A digital camera’s ability to interpret variations in light intensity within a scene is directly related to its bit-depth. The bigger the bit-depth, the more luminance values a camera can discern. An 8-bit camera can discern 256 intensity values per pixel per color channel (RGB). A 10-bit camera can discern 1024 values. A 12-bit: 4096. And a 14-bit sensor: 16,384. It’s easy to see why bit-depth has a huge role in a camera’s dynamic range.
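
The arithmetic behind those figures is just powers of two; a one-line sketch:

```python
# Discrete luminance values per pixel per channel = 2 ** bit_depth
for bits in (8, 10, 12, 14):
    print(f"{bits}-bit -> {2 ** bits} values")
# 8-bit -> 256, 10-bit -> 1024, 12-bit -> 4096, 14-bit -> 16384
```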

A digital camera encodes these luminance values linearly. That is, for every discrete step of difference in luma, the camera will output an equal step of difference in voltage or video signal.

The human eye is sensitive to relative, not discrete, steps of difference in luma. For example, during a full moon your eye will have no problem discerning your immediate surroundings. If you were to light a bonfire, the relative illumination coming from the flames would certainly overpower the moonlight. Conversely, if you were to light that same bonfire at high noon, you would be hard pressed to notice any discernible increase in illumination. This is why we use f-stops (the doubling or halving of light) to interpret changes in exposure.
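
A small sketch of that relative response, expressing the change in stops as log2 of the luminance ratio; the illuminance figures below are made up purely for illustration:

```python
import math

moonlight = 0.25        # hypothetical illuminance values (lux), just for the example
bonfire   = 50.0
noon_sun  = 100_000.0

def added_stops(extra, ambient):
    """Exposure increase in f-stops when 'extra' light is added to 'ambient'."""
    return math.log2((ambient + extra) / ambient)

print(f"bonfire at night: +{added_stops(bonfire, moonlight):.1f} stops")   # ~ +7.7 stops
print(f"bonfire at noon:  +{added_stops(bonfire, noon_sun):.4f} stops")    # ~ +0.0007 stops
```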

What we can learn from the difference between linear and logarithmic responses to luminance is that a linear approach will be able to discern more discrete values in the highlights of an image, while a logarithmic approach can discern more subtleties in the shadows. This is because a digital camera only has a finite number of bits in which to store a scene’s dynamic range, and with linear encoding most of those code values are used up capturing the brightest portions of the scene.
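
To illustrate, here is a sketch that counts how many 10-bit code values land in each stop of a 10-stop scene for a purely linear encoding versus a simple log2 encoding (an idealised curve, not any particular camera’s log format):

```python
import math

BITS = 10
MAX_CODE = 2 ** BITS - 1
STOPS = 10                       # assume a 10-stop scene range for this sketch

def stop_of(linear):
    """Which stop below maximum a linear luminance value falls in (0 = brightest)."""
    return min(STOPS - 1, int(-math.log2(max(linear, 2.0 ** -STOPS))))

linear_counts = [0] * STOPS
log_counts = [0] * STOPS

for code in range(MAX_CODE + 1):
    # Linear encoding: code value is directly proportional to scene luminance.
    linear_counts[stop_of(code / MAX_CODE)] += 1
    # Simple log encoding: code value is proportional to log2 of scene luminance.
    log_counts[stop_of(2.0 ** (STOPS * (code / MAX_CODE - 1)))] += 1

print("linear codes per stop (brightest first):", linear_counts)
print("log codes per stop (brightest first):   ", log_counts)
# Linear spends roughly half of all code values on the single brightest stop,
# while the log curve spreads them almost evenly (~102 per stop) across all ten.
```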

See more on thedigitalparade
