Archive for arri alexa

Color and Light – Linear and Log >> Human vs Video

Posted in cinema on 13 October 2015 by realuca

What We See

Human vision is complex: not only do we have a varying capacity to see colour and light, we also process what we see through our brains, which add layers of interpretation to colour and light.

Our eyes contain two types of photoreceptor cells: cones, which see colour and require bright light, and rods, which see in dim light. These two types of cells do not exist in equal proportions, nor are they distributed evenly in our eyes. The cones are fewer in number and are concentrated in the centre of our vision; the rods are more numerous and are concentrated primarily around the edges of our vision.

Whether the light gets darker or brighter, the change in what we see is very gradual. We can see details in bright light, and will see colour, if not fine detail, into the very brightest of highlights. Our ability to distinguish colours and details declines gradually as the light fades, but we are able to detect motion and see shapes into very deep shadow.

What the Camera Gets

What a camera “sees” can be described simply: a camera’s sensor records a narrow range of light and colour, and the photo receptors respond uniformly across the field of view. Photo receptors do not desaturate colour in shadows, nor do they record more detail as light gets brighter. Similarly, photo receptors do not record more colour in the centre of the field of view. Each photo receptor, regardless of its location on the sensor, will record colour and light as they exist within the sensor’s range of luminance. Further, a sensor’s ability to record colour and detail simply ends at either end of that range: highlights clip to white and shadows clip to black.
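To make that behaviour concrete, here is a minimal sketch in Python of a sensor that treats every photosite the same way and simply clips anything outside its luminance range. The `black_level` and `clip_level` values are arbitrary illustrations, not specifications of any real camera.

```python
# Minimal sketch: a sensor records light uniformly and clips outside its range.
# The numbers are arbitrary illustrations, not specs of a real sensor.

def sensor_response(scene_luminance, black_level=0.001, clip_level=1.0):
    """Return the recorded value for one photosite.

    Every photosite behaves identically, wherever it sits on the sensor:
    values below the black level clip to black, values above the clip
    level clip to white, and everything in between is recorded as-is.
    """
    if scene_luminance <= black_level:
        return 0.0                # shadows clip to black
    if scene_luminance >= clip_level:
        return 1.0                # highlights clip to white
    return scene_luminance        # recorded as-is in between

for lum in (0.0005, 0.01, 0.5, 0.999, 5.0):
    print(f"scene {lum:>6} -> recorded {sensor_response(lum)}")
```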

Trichromatic Theory

It shouldn’t be a surprise, then, that all colours in luminance output devices (cameras, computer monitors, projectors, and so on) are composed with varying combinations of red, blue, and green. Because RGB are the colours of light, if you add all three colours together, you get white. Subtract all three colours and you get black. That is the basis of the RGB colour model.

The print colour model—CMY—is the inverse of the RGB model and, thus, also based on the trichromatic theory. CMY are the colours of print. Ink absorbs certain wavelengths of light, and reflects others, to create colour. If you subtract each of red, green, and blue from white, you get the colour opposites: cyan, magenta, and yellow, or CMY. If you add all three colours (CMY) together, you get (almost) black.
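As a quick illustration of that inverse relationship, here is a small Python sketch that converts between normalised RGB and CMY by subtracting from white. It ignores the separate black (K) ink used in real CMYK printing and is only a sketch of the idea.

```python
# Sketch of the RGB/CMY relationship: CMY is simply "white minus RGB"
# (real printing adds a separate black ink, K, which is ignored here).

def rgb_to_cmy(r, g, b):
    """Convert normalised RGB (0..1) to CMY by subtracting from white."""
    return (1.0 - r, 1.0 - g, 1.0 - b)

def cmy_to_rgb(c, m, y):
    """Convert CMY back to RGB the same way."""
    return (1.0 - c, 1.0 - m, 1.0 - y)

print(rgb_to_cmy(1.0, 0.0, 0.0))   # pure red -> (0.0, 1.0, 1.0): no cyan, magenta + yellow
print(rgb_to_cmy(1.0, 1.0, 1.0))   # white    -> (0.0, 0.0, 0.0): no ink at all
print(rgb_to_cmy(0.0, 0.0, 0.0))   # black    -> (1.0, 1.0, 1.0): all three inks, (almost) black
```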

Opponent Process Theory

The opponent process theory suggests that the cone cells of our eyes are neurally linked to form three opposing pairs of colour: blue versus yellow, red versus green, and black versus white. When one of the pair is activated, activity is suppressed in the other. For example, as red is activated, we see less green, and as green is activated, we see less red.

If you stare at a patch of red for a minute, then switch to look at an even patch of white, you’ll see an afterimage of green in the middle of the white. This is the opponent process at work in your vision. The reason we see green after staring at red is because by staring we have fatigued the neural response for red. This allows the neural response for green to increase.

See more on tutsplus

Colour on computers is a minefield. Our eyes perceive light in a different way from a camera, and settings like gamma can completely change the way mixed colours are displayed.

LIGHT IS LINEAR

Basically, if you double the energy emitted from a light source while the distance from that light source stays constant, the light intensity at that point will also double. Easy, right?

THE INVERSE SQUARE RULE

Imagine a single light source in a massive dark room. Standing right next to the light, you’ll experience the highest light intensity possible. Moving to the far end of the room, you’ll experience the least intensity in the room, because the light intensity diminishes over distance.

However, it doesn’t diminish linearly as distance increases. If you stand halfway between the light source and the far end of the room, the light at the far end won’t be half as bright as where you are standing; it will be roughly a quarter as intense, because the distance from the source has doubled. The light intensity is inversely proportional to the square of the distance from the light source.
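Both behaviours, the linear response to source power and the inverse-square fall-off with distance, are easy to verify numerically. The sketch below uses an ideal point source and arbitrary illustrative values for power and distance.

```python
import math

def intensity(power, distance):
    """Intensity of an ideal point source spread over a sphere of radius `distance`.

    The 1 / (4 * pi * d^2) factor is what produces the inverse-square fall-off.
    """
    return power / (4 * math.pi * distance ** 2)

p, d = 100.0, 2.0                  # arbitrary illustrative values
print(intensity(p, d))             # baseline
print(intensity(2 * p, d))         # double the power    -> double the intensity
print(intensity(p, 2 * d))         # double the distance -> a quarter of the intensity
```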

HUMAN PERCEPTION IS NOT LINEAR

This is the real-world physics of light. However, our perception of luminance is quite different, and that matters when it comes to how we map real-world linear luminance values to perceived brightness. We are more sensitive to small changes in luminance at the low end of the scale than at the high end.

THE GAMMA CURVE – LINEAR VS LOG

By encoding luminance non-linearly, using a more or less logarithmic curve, we can assign a larger number of smaller increments to the low and mid portions of the brightness scale, and fewer, larger increments higher up, extending into the highlights.

An idealized gamma 1.0 curve is actually a straight line: this linear mapping divides values perfectly evenly between 0 and 1023 across the scale of linear luminance, so the midpoint of 512 sits exactly halfway between black and white in terms of light. That should be 50% grey, right? Wrong. A value of 512 will actually look like roughly 75% grey, because our perception of brightness is not linear. With linearly mapped values there are far fewer code values devoted to the dark end of the scale than to the bright end.
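A rough way to check that 75% figure is to map the 10-bit midpoint back to linear light and estimate how bright that light looks, here using the CIE L* approximation of perceived lightness. This is a sketch of the idea, not the exact number any particular display standard would give.

```python
# Sketch: code value 512 in a purely linear 10-bit encoding is ~50% of the light,
# but 50% of the light is *perceived* as roughly 75% grey (CIE L* approximation).

def perceived_lightness(linear_luminance):
    """CIE L* (0..100) for a relative linear luminance Y in 0..1."""
    y = linear_luminance
    if y > (6 / 29) ** 3:
        f = y ** (1 / 3)
    else:
        f = y / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16

code_value = 512
linear = code_value / 1023                 # linear mapping: code 512 is ~50% of the light
print(round(linear, 3))                    # ~0.5
print(round(perceived_lightness(linear)))  # ~76 -> roughly three-quarters of the way to white
```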

See more on dcinema

In digital photography we are fundamentally concerned with brightness (luminance) in a scene, which needs to be converted into a coded value (dependent on bit-depth) of video signal strength (sometimes represented in millivolts: mV) in order to reproduce an image. To keep it simple, we can say that a digital camera assigns a number to a specific amount of brightness in a scene, and that number is output as a voltage. On set we can view the intensity of this voltage by running our video signal through a waveform monitor and noting its IRE value. A digital camera’s ability to interpret variations in light intensity within a scene is directly related to its bit-depth: the greater the bit-depth, the more luminance values a camera can discern. An 8-bit camera can discern 256 intensity values per pixel per color channel (RGB). A 10-bit camera can discern 1,024 values; a 12-bit camera, 4,096; and a 14-bit sensor, 16,384. It’s easy to see why bit-depth plays a huge role in a camera’s dynamic range.
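Those figures are simply 2 raised to the bit-depth, which a one-loop Python check confirms:

```python
# Number of discrete values per pixel per colour channel is 2 ** bit_depth.
for bits in (8, 10, 12, 14):
    print(f"{bits}-bit: {2 ** bits} values per channel")
# 8-bit: 256, 10-bit: 1024, 12-bit: 4096, 14-bit: 16384
```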

A digital camera encodes these luminance values linearly. That is, for every discrete step of difference in luma, the camera will output an equal step of difference in voltage or video signal.

The human eye is sensitive to relative, not absolute, steps of difference in luma. For example, during a full moon your eye will have no problem discerning your immediate surroundings. If you were to light a bonfire, the relative illumination coming from the flames would certainly overpower the moonlight. Conversely, if you were to light that same bonfire at high noon, you would be hard pressed to notice any discernible increase in illumination. This is why we use f-stops (the doubling or halving of light) to interpret changes in exposure.
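That intuition can be put in terms of stops, since a stop is a doubling or halving of light. The illuminance figures below (moonlight, a nearby fire, midday sun) are rough, illustrative assumptions only, but they show why the same added light is several stops’ worth at night and a negligible fraction of a stop at noon.

```python
import math

def stops_of_change(before, after):
    """Change in exposure, in stops: log2 of the ratio of the two levels."""
    return math.log2(after / before)

moonlight = 0.25        # rough, illustrative lux values - not measured data
bonfire   = 10.0
noon_sun  = 100_000.0

# Adding the bonfire's light to a moonlit scene vs. to a sunlit scene:
print(round(stops_of_change(moonlight, moonlight + bonfire), 2))   # ~ +5.4 stops: obvious
print(round(stops_of_change(noon_sun,  noon_sun + bonfire), 5))    # ~ +0.00014 stops: invisible
```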

What we can learn from the difference between linear and logarithmic responses to luminance is that a linear approach can discern more discrete values in the highlights of an image, while a logarithmic approach can discern more subtleties in the shadows. This is because a digital camera has only a finite number of code values in which to store a scene’s dynamic range, and with a linear encoding most of those values are used up capturing the brightest portions of the scene.
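The sketch below makes that imbalance visible: with a purely linear 10-bit encoding of a scene spanning ten stops, the brightest stop alone consumes about half of all code values while the darkest stops get only a handful, which is exactly why log curves redistribute them. The ten-stop range is an assumption for illustration.

```python
# Sketch: how many 10-bit code values a purely linear encoding spends on each stop
# of an assumed 10-stop scene (white = code 1023). Each stop down halves the light.

max_code = 1023
stops = 10
for stop in range(stops):
    upper = max_code / (2 ** stop)        # top of this stop, in code values
    lower = max_code / (2 ** (stop + 1))  # bottom of this stop
    count = int(upper) - int(lower)
    print(f"stop {stop + 1} below white: ~{count} code values")
# The first stop below white gets ~512 values; the tenth gets ~1.
```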

See more on thedigitalparade


Lens Coverage and Crop Factor on 35mm >> Film and Digital Format

Posted in Cinema e Fotografia on 4 October 2015 by realuca

The FOV calculator lets you plug in lenses and shows what the shot will look like depending on the size of the sensor. The chart covers lenses and sensors ranging from Super 35mm motion picture to ¼-inch HD, all HDSLRs included. This is a really useful tool for explaining FOV and crop sensors.

Cameramen brought up on film might only use the word ’35mm’ in one of its cinematography avatars (just showing the horizontal sizes and aspect ratio for simplicity):

Academy – 22mm (1.375:1)
Widescreen – 21.95mm (1.85:1)
Cinemascope – 21.95mm (2.39:1)
Super 35mm 3-perf – 24.89mm (16:9, 2.39:1)
Super 35mm 4-perf – 24.89mm (4:3)

There are people who think one should use Super 35mm when calculating the 35mm equivalent. After all, for video, why should anyone consider a standard photographic sensor size anyway?

Fair enough. Which one of the above five should one pick? You see, Super 35mm is a relatively new standard (if it can be called that) even in the film world. In the digital world, even if sensor manufacturers use the term ‘Super 35mm’, they don’t actually mean the film size that it refers to. For example (horizontal sensor widths only; see the sketch after the list):

Arri Alexa – 23.76mm
Red Epic – 27.7mm
Red Epic Dragon – 30.7mm
Blackmagic Production Camera 4K – 21.12mm
Canon C500 – 24.6mm
Sony F55 – 24mm
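For a feel of how those widths compare, here is a small Python sketch that computes a horizontal “crop factor” for each sensor relative to the 24.89mm width of Super 35mm film quoted earlier; treating the film width as the reference is just one possible convention.

```python
# Horizontal crop factors relative to Super 35mm film (24.89mm wide), using the
# sensor widths quoted above. The choice of reference width is a convention.

SUPER_35_FILM_WIDTH = 24.89  # mm

sensor_widths_mm = {
    "Arri Alexa": 23.76,
    "Red Epic": 27.7,
    "Red Epic Dragon": 30.7,
    "Blackmagic Production Camera 4K": 21.12,
    "Canon C500": 24.6,
    "Sony F55": 24.0,
}

for name, width in sensor_widths_mm.items():
    crop = SUPER_35_FILM_WIDTH / width
    print(f"{name}: {crop:.2f}x")
# Values below 1.0 mean the sensor is wider than the film gate, above 1.0 narrower.
```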

See more on wolfcrow

Angle of view across different formats is where it gets tricky, and we need some simple math. For the same focal length, a lens on the 16mm format covers about half the angle of view it does on the 35mm format. If you want to shoot the same angle, say 62 degrees, you’d select a 12mm lens for the 16mm camera and a 24mm lens for your 35mm camera.
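Where do figures like that come from? The angle of view across a gate dimension is 2·atan(dimension / (2 × focal length)). The sketch below uses assumed, approximate gate diagonals (about 14.5mm for Super 16 and about 31mm for a full 35mm frame); the exact angle depends on which dimensions you plug in, but the 12mm/24mm pairing lands in the same ballpark as the 62 degrees quoted above.

```python
import math

def angle_of_view(dimension_mm, focal_length_mm):
    """Angle of view (degrees) across a gate dimension for a given focal length."""
    return math.degrees(2 * math.atan(dimension_mm / (2 * focal_length_mm)))

# Assumed gate diagonals (mm) - approximate figures, used only for illustration.
SUPER16_DIAGONAL = 14.5
SUPER35_DIAGONAL = 31.1

print(round(angle_of_view(SUPER16_DIAGONAL, 12)))   # ~62 degrees on Super 16 with a 12mm
print(round(angle_of_view(SUPER35_DIAGONAL, 24)))   # ~66 degrees on Super 35 with a 24mm
```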

Super 35 is a very variable size, depending on the company and on who’s making the groundglass or frame markings. That’s why I will receive emails from Denny Clairmont and Mitch Gross as soon as they read this, because these specs require many footnotes and further explanations. I’ve rounded the dimensions to one decimal place for clarity.

Since there are so many aspect ratios and dimensions, lens manufacturers use a diagonal measurement and try to cover the most area they can.

Math: to calculate comparable angles of view:
new lens mm = (new format diagonal / old format diagonal) × old lens mm

If we’re using a 40mm 2/3″ format lens and want the equivalent for the full 35mm film frame, here’s the math: new lens focal length = (31 / 11) × 40. That’s because the diagonal of the NEW full 35mm film frame is 31mm, the diagonal of the OLD 2/3″ CCD is 11mm, and the OLD lens is 40mm.

For those of us whose math is rusty: 31 divided by 11 is about 2.8. Multiply 2.8 by 40 to get about 113. So, the 40mm lens in 2/3″ format has a comparable angle of view to a 113mm lens in 35mm format.
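The same arithmetic, wrapped up as a tiny Python helper using the 31mm and 11mm diagonals quoted above:

```python
def comparable_focal_length(old_focal_mm, old_diagonal_mm, new_diagonal_mm):
    """Focal length on the new format with a comparable angle of view."""
    return (new_diagonal_mm / old_diagonal_mm) * old_focal_mm

# 40mm lens on a 2/3" sensor (11mm diagonal) vs. a full 35mm film frame (31mm diagonal):
print(round(comparable_focal_length(40, old_diagonal_mm=11, new_diagonal_mm=31)))  # ~113
```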

See more on fdtimes

The size of the sensor affects the image’s angle of view, so a 25mm prime lens will look different when used on a variety of image formats.
Thus a 25mm lens will give a mid shot when used on 35mm film (HDTV 1.78:1, 16:9 aspect ratio), a close-up on HD camcorders, and a big close-up on smaller semi-professional ‘prosumer’ camcorders such as the EX-3 and Z1, which have 1/2″ and 1/4″ sensors respectively.
This explains why, if you mount an HD lens onto a ½″ camcorder (such as the Sony PDW-F355) without an optically corrected mount, a wide angle acts like a telephoto, and why the focal lengths of ½″ wide-angle lenses are always smaller than those used for 2/3″ lenses.
Here is a useful sensor size chart:
For 1/3-inch CCDs: H = 4.8 mm, V = 3.6 mm
For 1/2-inch CCDs: H = 6.4 mm, V = 4.8 mm
For 2/3-inch CCDs: H = 8.8 mm, V = 6.6 mm
For 1-inch CCDs: H = 12.7 mm, V = 9.5 mm

Even though lenses are designed to work with different negative and sensor sizes, the convention is that lenses are still referred to by their focal lengths and, in the case of zoom lenses, by their zoom ratios.
A very useful trick to know is that to convert 35mm focal lengths to 2/3″, just divide by 2.5 and to convert Super16 to 2/3″, divide by 1.6.
Thus a 25mm PL mount film lens has the same field of view as a 10mm 2/3″ lens.
The Super16 equivalent is a 16mm lens, which likewise has the same field of view as a 10mm 2/3″ lens.
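Those two rules of thumb are easy to wrap in a small Python helper; the 2.5 and 1.6 divisors are the approximate ones quoted above, not exact optical constants.

```python
# Rule-of-thumb focal length conversions quoted above (approximate divisors).

def to_two_thirds_from_35mm(focal_mm):
    """35mm cine focal length -> 2/3-inch equivalent (divide by ~2.5)."""
    return focal_mm / 2.5

def to_two_thirds_from_super16(focal_mm):
    """Super16 focal length -> 2/3-inch equivalent (divide by ~1.6)."""
    return focal_mm / 1.6

print(to_two_thirds_from_35mm(25))      # 10.0 -> a 25mm PL film lens ~ a 10mm 2/3" lens
print(to_two_thirds_from_super16(16))   # 10.0 -> a 16mm Super16 lens ~ a 10mm 2/3" lens
```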

See more on vmi

3 videos to understand Video

Posted in Cinema e Fotografia on 31 October 2011 by realuca
