We all harp on about the latest 4K this or 8K that. But do we really know what we’re saying? Most of the time, probably not. It all comes down to how the camera’s sensor actually records each of those pixels in the image, and much of that is guesswork. In this video, Cooke Optics interviews cinematographer Geoff Boyle, who explains that it’s basically all down to the nature of a Bayer pattern filter array: what’s really happening when your sensor sees an image, and why your camera’s resolution is lying to you.

The whole problem essentially boils down to the fact that a Bayer pattern filter array just doesn’t see every colour at every pixel. Its very design makes this impossible. Out of every four pixels, two see green, one sees red, and one sees blue. The camera then uses information from the surrounding pixels to fill in the gaps.
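To make that concrete, here’s a minimal sketch of a Bayer mosaic and naive bilinear demosaicing. The RGGB tile ordering and the use of NumPy/SciPy are assumptions for the sake of the example; real cameras use far more sophisticated interpolation, so treat this as an illustration rather than any manufacturer’s actual pipeline.

```python
# Toy illustration of a Bayer (RGGB) mosaic and naive bilinear demosaicing.
import numpy as np
from scipy.ndimage import convolve

def bayer_mosaic(rgb):
    """Simulate what a single-chip sensor records: one colour per photosite."""
    h, w, _ = rgb.shape
    mosaic = np.zeros((h, w), dtype=float)
    mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # red   -> 1 of every 4 pixels
    mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # green -> 2 of every 4 pixels
    mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # green
    mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # blue  -> 1 of every 4 pixels
    return mosaic

def demosaic_bilinear(mosaic):
    """Guess the two missing colours at every pixel by averaging neighbours."""
    h, w = mosaic.shape
    r_mask = np.zeros((h, w)); r_mask[0::2, 0::2] = 1
    b_mask = np.zeros((h, w)); b_mask[1::2, 1::2] = 1
    g_mask = 1 - r_mask - b_mask

    # Standard bilinear interpolation kernels for the sparse colour planes.
    k_rb = np.array([[0.25, 0.5, 0.25],
                     [0.5,  1.0, 0.5 ],
                     [0.25, 0.5, 0.25]])
    k_g  = np.array([[0.0,  0.25, 0.0 ],
                     [0.25, 1.0,  0.25],
                     [0.0,  0.25, 0.0 ]])

    r = convolve(mosaic * r_mask, k_rb)
    g = convolve(mosaic * g_mask, k_g)
    b = convolve(mosaic * b_mask, k_rb)
    return np.dstack([r, g, b])  # a "full colour" image, two thirds of it guessed
```

Running a test image through bayer_mosaic() and then demosaic_bilinear() makes it obvious how much of the final “full colour” frame is reconstructed rather than actually recorded.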

So, as Geoff describes, a 4K Bayer sensor doesn’t actually record a true 4K image. It records 2x2K in green, 2x1K in red, and 2x1K in blue, then interpolates the missing data. Fuji attempts to improve upon the Bayer pattern with its X-Trans filter, but it still suffers from the same problem. It does seem to produce a better result, if the wave of adoring Fuji fans is anything to go by, but it’s still not seeing the whole picture.
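To put some rough numbers on that, here’s a quick back-of-the-envelope count for a UHD (3840x2160) sensor. The UHD frame size is an assumption here, since “true” cinema 4K is 4096 photosites wide, but the proportions work out the same either way.

```python
# Rough count of real colour samples on a "4K" Bayer sensor (UHD assumed).
width, height = 3840, 2160
total = width * height          # 8,294,400 photosites in total

green = total // 2              # half of the photosites sit under green filters
red   = total // 4              # a quarter sit under red filters
blue  = total // 4              # a quarter sit under blue filters

print(f"green: {green:,}")      # 4,147,200
print(f"red:   {red:,}")        # 2,073,600
print(f"blue:  {blue:,}")       # 2,073,600
# The delivered RGB frame contains 3 x 8,294,400 values,
# so roughly two thirds of them are interpolated rather than measured.
```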

It’s still guessing the colour of the pixels to help create a complete red, green and blue image. Back in the standard definition days, I used to shoot with Sony DSR-500 cameras: great big shoulder-mount cameras containing three separate sensors. This was common to many higher end systems then, although not so much today.

When the image entered through the lens, it was split up into its red, green and blue parts. One sensor would see the entire red image, one would see the entire green image and another would see the entire blue image.

Then the signals from these three separate sensors were brought together to create the final image. This is why these cameras were so much more expensive than lower end single chip cameras: they were essentially three cameras in one unit.

With today’s cameras, even many of the high end ones, the single chip Bayer sensor is the standard. And to get genuinely detailed 1080p footage, you actually need a sensor of around 2.7K to extract the best quality from it. As Geoff mentions in the video, this is why Arri produced a 2.7K camera for shooting HD. It’s probably why DJI’s drones also shoot 2.7K. It’s also why companies like Panasonic produce 5.7K cameras for producing 4K footage. And it’s why many people shoot 4K even though they only want to deliver a 1080p end product: the increased detail from scaling 4K (or UHD) down to 1080p is noticeable. So, whenever you see a camera with an odd resolution, or one that seems to be way more than people are actually able to watch, now you know why.
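As a final illustration, here’s a minimal sketch of that downscaling step, assuming NumPy and an already demosaiced UHD frame. A simple 2x2 box average is used for clarity; real software would typically use a better resampling filter such as Lanczos.

```python
# Averaging each 2x2 block of a UHD frame into a single 1080p pixel combines
# four (partly interpolated) samples per output pixel, which is why the
# downscaled image looks more detailed than footage shot natively at 1080p.
import numpy as np

def downscale_uhd_to_1080p(frame):
    """frame: (2160, 3840, 3) float array -> (1080, 1920, 3) float array."""
    h, w, c = frame.shape
    assert (h, w) == (2160, 3840), "expects a UHD frame"
    return frame.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

uhd = np.random.rand(2160, 3840, 3)        # stand-in for a demosaiced UHD frame
print(downscale_uhd_to_1080p(uhd).shape)   # (1080, 1920, 3)
```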
