To understand log video, we first need to understand how video information is captured by your camera.
As light enters the lens and hits the sensor, every photosite (pixel) measures the discrete amount of light it receives. These measurements are linear, where a doubling of light intensity (number of photons) equals a doubling in signal output. Each pixel’s signal is then sent from the sensor to an image processor, where it is encoded into digital values (more on that in a moment).
But this isn’t how humans perceive light. Our eyes don’t detect light in terms of absolute, discrete measurements. Instead, we perceive brightness in a relative, nonlinear fashion. In fact, human vision is approximately logarithmic in how it perceives light. What exactly does that mean?
To illustrate, imagine you are in a completely dark room. Obviously, since there is no light, you cannot see anything. Now imagine you light a candle. This will let you see a little bit, and if you had a camera with you in the room, it probably could as well.
But let's say you want more light, so you add a second candle. Physically speaking, this will double the number of photons being emitted in the room, and a camera's sensor will detect and measure exactly that doubling. Your eyes, however, will only perceive a relative doubling in brightness (i.e. a movement from one step to a step twice as bright).
Again, imagine you light another candle. This is where our eyes and cameras diverge. Three candles emit three times the number of photons of a single candle. That is what your camera's image sensor detects, so it has three times the brightness information as before. But your eyes do not. Rather than seeing three times the brightness of a single candle, they perceive only about 2.5 times. Why? Because the move from two candles to three is only a 50 percent relative increase, half the size of the jump from one candle to two.
By now you can probably see the difference between linear and logarithmic perception of light. This is the difference between how our eyes and cameras see. To continue the above example, a fourth candle would be four times the amount of light of a single candle from the camera’s perspective, but would only be three times as bright to our eyes. Eight candles would be eight times the number of photons hitting the image sensor, but only four times as bright as perceived by our eyes.
Adding a 17th candle would barely make a difference to our eyes’ perception of relative brightness, but the camera would continue to measure a linear increase in the number of photons. And if our eyes barely notice any change in brightness between 16 and 17 candles, imagine how much smaller the perceivable difference is between 99 and 100 candles. You’d barely notice any difference in brightness at all.
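To put numbers on the candle example, here is a minimal Python sketch. The formula `1 + log2(n)` is an illustrative toy model of logarithmic perception, not a vision-science standard:

```python
import math

def perceived_brightness(candles):
    # Toy model: perception grows with the logarithm of the light,
    # normalized so that one candle equals one "step" of brightness.
    return 1 + math.log2(candles)

for n in (1, 2, 3, 4, 8, 16, 17):
    print(f"{n:2d} candles -> sensor: {n:5.1f}x   eyes: ~{perceived_brightness(n):.2f}x")
```

Three candles land at roughly 2.6 perceived steps, eight candles at 4, and the jump from 16 to 17 candles is less than a tenth of a step, even though the sensor registers a full candle's worth of extra photons.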
So what has all this got to do with log footage?
The disparity between our eyes and cameras means that not all light information is created equal. We simply don’t perceive variations in the brightest parts of an image as much as we do variations in the darkest parts. Knowing that, why should a camera allocate the same amount of space to all light information equally? In reality, it doesn’t.
Since storage space is limited, engineers took advantage of the situation. They developed a method that preserves image quality across the parts of the image our eyes are most sensitive to (like shadows), but without increasing bandwidth or storage requirements.
This process is called gamma encoding, and it takes place inside our cameras. After the analog light signals are measured by the image sensor, that information is sent to an image processor which converts it to a series of digital values. During this conversion, a mathematical function translates the camera’s linear light sensitivity to a more logarithmic scale. This is gamma encoding, and it has the effect of stretching the darker tonal ranges of an image across more bits of information, while compressing the brighter tonal ranges into fewer bits.
This is great, because it allocates digital information more efficiently across tonal ranges (in a way that is similar to how our eyes perceive them). More bits are used for tonal ranges our eyes are sensitive to, and fewer bits are given to the tonal ranges we perceive less.
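A rough sketch of this bit allocation, assuming a simple power-law gamma of 2.2 (real standards like sRGB and Rec.709 use slightly different, piecewise curves):

```python
def gamma_encode(linear, gamma=2.2):
    # Assumed simple power-law curve; sRGB and Rec.709 use
    # slightly different (piecewise) transfer functions.
    return linear ** (1 / gamma)

# 8-bit code value at each stop below maximum white, with and
# without gamma encoding. The shadows get far more codes with it.
for stops_down in range(6):
    linear = 1 / 2 ** stops_down
    print(f"{stops_down} stops down: linear code {round(linear * 255):3d}, "
          f"gamma code {round(gamma_encode(linear) * 255):3d}")
```

With a straight linear mapping, everything five stops below white is squeezed into the bottom 8 of the 256 code values; after gamma encoding, that same shadow range gets more than 50 of them.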
However, a gamma-encoded image won’t look the way the actual scene did to our eyes. If we want the image to look the way we saw it in the real world, we need to restore its linear brightness.
This process is called gamma correction, and it takes place in our computers and displays. Gamma correction is simply the opposite of gamma encoding: it applies the inverse of the encoding function, returning the image to a linear brightness scale.
This is also great, because it represents tonal ranges in an image as our eyes expect to see them.
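The round trip can be sketched as follows; `gamma_correct` is just the mathematical inverse of the (assumed) 2.2 power-law encoding curve:

```python
def gamma_encode(linear, gamma=2.2):
    # Assumed simple power-law encoding curve.
    return linear ** (1 / gamma)

def gamma_correct(encoded, gamma=2.2):
    # The inverse curve, applied by the display to restore
    # linear brightness.
    return encoded ** gamma

middle_gray = 0.18  # a common reference value for scene-linear middle gray
restored = gamma_correct(gamma_encode(middle_gray))
assert abs(restored - middle_gray) < 1e-9  # round trip is exact in pure math
```

In pure math the round trip is lossless; the space savings (and losses) come from quantizing the encoded values to a limited number of digital codes in between.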
In short, gamma encoding saves space by throwing away some bright parts of an image our eyes won't really notice, and then uses that available space to store the dark parts we will notice. Gamma correction then displays that preserved tonal information in an image that is more representative of what the scene actually looked like to the human eye.
This is where log comes into play. Log image profiles apply a mathematical function to the output of an image sensor, before the information is encoded by the image processor. If this sounds like gamma encoding to you, then you’re paying attention. The difference is that the mathematical function for log encoding is a logarithmic curve (hence the name). This is a much more aggressive curve, which stretches out brightness information over a much wider set of tonal ranges. That means that the image is much flatter than standard gamma encoding.
This approach uses far more digital information to store the shadows and midtones of an image, which preserves the lower stops with much greater detail. This enables greater precision for manipulation in post, but doesn’t increase the storage requirements above those of standard gamma-encoded video. Log provides more subtle steps in brightness (shades of gray) on screen, so you have much more flexibility to adjust the look of an image to be just the way you want.
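To see how much more aggressively a log curve lifts the shadows, here is a comparison against a simple gamma curve. The formula and the `scale=800` constant are illustrative assumptions; real log profiles (S-Log3, C-Log, V-Log, and so on) each define their own math:

```python
import math

def gamma_encode(linear, gamma=2.2):
    # Assumed simple power-law gamma curve for comparison.
    return linear ** (1 / gamma)

def log_encode(linear, scale=800):
    # Generic logarithmic curve, illustrative only; real log profiles
    # each publish their own transfer functions.
    return math.log2(1 + scale * linear) / math.log2(1 + scale)

# Deep shadows, five stops below white:
shadow = 1 / 32
print(f"gamma code: {gamma_encode(shadow):.2f}   log code: {log_encode(shadow):.2f}")
```

Five stops down lands near 0.21 on the gamma curve but near 0.49 on this log curve: the shadows are pushed much higher up the code range, which is exactly why log footage looks flat and washed out when displayed without correction.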
Of course, this also means that when you view the image on most displays it will not look the way it did in the real world. Why is that? Because the standard gamma correction your monitor uses isn’t logarithmic, so the image will still look washed out. However, this is easily fixed by applying the corresponding logarithmic function in reverse, which is exactly what some LUTs do. This is basically gamma correction for log video, but you have the ability to tweak it in post.
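Conceptually, such a correction applies the log curve's exact inverse. A sketch, again using a generic illustrative curve rather than any real camera's formula:

```python
import math

SCALE = 800  # arbitrary illustrative constant, not from any real profile

def log_encode(linear, scale=SCALE):
    # Generic illustrative log curve (not a real camera's formula).
    return math.log2(1 + scale * linear) / math.log2(1 + scale)

def log_decode(encoded, scale=SCALE):
    # Exact inverse of log_encode -- conceptually what a technical
    # LUT does when mapping log footage back toward a display gamma.
    return (2 ** (encoded * math.log2(1 + scale)) - 1) / scale

x = 0.18
assert abs(log_decode(log_encode(x)) - x) < 1e-9
```

In practice a LUT is a precomputed lookup table approximating a function like `log_decode` (often combined with creative color adjustments), rather than the formula evaluated live.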
To summarize, shooting in log preserves more detail across the dynamic range of an image. This preserved detail gives you more creative control over an image, similar to raw footage, but without the enormous file sizes. That means you can make finer adjustments in post-production, but with the same hardware as standard video files. That said, log isn’t meant for final delivery/viewing by itself, so you will need to add steps in your workflow to correct the image (using a LUT or otherwise).