Google has shown its latest attempt to make cameras smarter. The company has announced the Pixel 4 and Pixel 4 XL, new versions of its popular smartphone that come in two screen sizes. The devices include new hardware features, such as an extra camera lens and an infrared face scanner to unlock the phone. Google emphasized the phones’ use of so-called computational photography, which automatically processes images to make them look more professional. Among the Pixel 4’s new features is a mode for shooting the night sky and capturing images of stars. By adding the extra lens, Google augmented a software feature called Super Res Zoom, which allows users to zoom in more closely on images without losing much detail.
Notably, Apple also highlighted computational photography last month when it presented three new iPhones. Its yet-to-be-released Deep Fusion feature will process images with an extreme amount of detail. When you take a digital photo, you are not really shooting a single photo anymore. Ren Ng, a computer science professor at the University of California, Berkeley, said, “Most photos you take these days are not a photo where you click the photo and get one shot. These days it takes a burst of images and computes all of that data into a final photograph.”
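The burst-and-compute process Ng describes can be illustrated with a toy sketch. This is not Google’s or Apple’s actual pipeline (real systems also align frames and reject outliers); it is a minimal, hypothetical example of the core idea that averaging many noisy short-exposure frames recovers a cleaner image than any single frame:

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of aligned frames to suppress sensor noise.

    Toy sketch of burst merging: each frame is noisy on its own,
    but the per-pixel mean of many frames converges on the true
    scene brightness.
    """
    stack = np.stack([np.asarray(f, dtype=np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate a burst: the same scene plus independent noise per frame.
rng = np.random.default_rng(0)
scene = np.linspace(0.2, 0.8, 16).reshape(4, 4)
burst = [scene + rng.normal(0, 0.1, scene.shape) for _ in range(32)]

merged = merge_burst(burst)
single_err = np.abs(burst[0] - scene).mean()  # error of one noisy frame
merged_err = np.abs(merged - scene).mean()    # error after merging
```

Here `merged_err` comes out far smaller than `single_err`, which is why a burst of quick exposures can stand in for one long, blur-prone exposure.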
Computational photography has been around for years. One of the earliest forms was HDR (high dynamic range), which involves taking a burst of photos at different exposures and blending the best parts of them into one optimal image. More sophisticated computational photography has rapidly improved the photos taken on our phones over the last few years, and Google provided a preview of its new Pixel phones last week. Last year, Google introduced Night Sight, which made photos taken in low light look as though they had been shot in normal conditions, without a flash. The technique took a burst of photos with short exposures and reassembled them into one image. With the Pixel 4, Google is applying a similar technique to photos of the night sky.
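The HDR blending described above can be sketched in a few lines. This is a simplified, hypothetical exposure-fusion example, not any vendor’s actual algorithm: each pixel is weighted by how well-exposed it is (close to mid-gray), so the fused image takes shadows from the bright exposure and highlights from the dark one:

```python
import numpy as np

def fuse_exposures(exposures, sigma=0.2):
    """Blend a bracketed burst into one image (toy exposure fusion).

    Each pixel gets a "well-exposedness" weight that peaks at
    mid-gray (0.5) and falls off toward crushed shadows (0.0) and
    blown highlights (1.0); the result is the weighted average
    across the burst. Real HDR pipelines also align frames and
    apply tone mapping.
    """
    stack = np.stack([np.asarray(e, dtype=np.float64) for e in exposures])
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0)  # normalize per pixel across frames
    return (weights * stack).sum(axis=0)

# A dark and a bright exposure of the same two-pixel "scene".
dark = np.array([0.05, 0.45])    # shadows crushed, highlight usable
bright = np.array([0.55, 0.95])  # shadows usable, highlight blown
fused = fuse_exposures([dark, bright])
```

In the fused result, each pixel lands near whichever exposure captured it best, which is the “best parts of each frame” behavior HDR aims for.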