Google’s Pixel 7 cameras focus on ‘accessible creativity’ with impressive new features


In the smartphone industry, it’s commonly held that you should buy the Google Pixel for its computational photography and post-processing prowess. The hardware is secondary. 

Real-time image processing and HDR+ are still the bread and butter of the Pixel camera experience, but it’s the side dishes like Photo Unblur, Guided Frame, and Magic Eraser that will resonate with most users — professional or not. And Google continues to bet on making the most inclusive cameras in the business. One example is putting its machine learning to work in optimizing for a wide variety of skin tones. But there are other examples, too.

Also: I went to the zoo with a $2,500 camera and a Pixel 7 Pro. The results surprised me

I recently caught up with Navin Sarma, a product manager at Google Research, to discuss the thought process behind the latest software features on the Pixel 7 and Pixel 7 Pro, as well as how the company is further differentiating its cameras from the rest of the market.

The “best camera setting”

Google’s camera team is largely composed of professional photographers and videographers, including Sarma, who specializes in landscape photography. If, like Sarma, you’ve shown even an ounce of knowledge in the field, you’ve probably been asked, “What is the best camera setting?” He chuckles at the thought. There is no “best” camera setting.

Instead, Sarma and his team lean into the idea of one-tap capture: a set of user-friendly tools and features for every shooting scenario. Building such a flexible smartphone camera is what excited Sarma about scaling the Pixel experience. “We aren’t catering to a specific demographic with the Pixel camera. The general philosophy is that if you have any inclination to take a picture, then this camera’s for you.” Night Sight and Top Shot are among the tools that Sarma files under “accessible creativity.”

Finding light in pervasive problems

Accessible creativity extends to Photo Unblur, Magic Eraser, and Portrait Light, features that differ from the real-time settings in the Google Camera app. Rather than guiding your shooting experience, they live in the Photos app, built to correct the past and double as a safety net for bad photos. This is where Google’s AI and machine learning really come into play.

More: How to use Pixel’s Magic Eraser

“There are a bunch of issues that the team faces on a day-to-day basis. Naturally, as photographers in and outside of work, there’s an intuition of common challenges and roadblocks, such as blurriness and the need for, say, a lightbox to capture evenly-lit portraits,” Sarma said. “We consider these pervasive problems from a professional level, democratize them from the context of general users, and then validate the problems that are worth solving.” 

Photo Unblur on the Pixel 7 Pro lets you dial in the sharpness of previously captured images. (Image: June Wan/ZDNET)

That’s why Photo Unblur and Magic Eraser, while not as robust as professional software like Adobe Photoshop, exist. In both cases, Google’s computational system, with the help of its in-house Tensor silicon, studies your images at the pixel level to identify consistent patterns, lines, and color profiles. It can then distinguish foreground objects from the background and make the changes you want.
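Google hasn’t published the models behind either feature, but classical image-processing analogues make the ideas concrete. Below is a minimal Python sketch using OpenCV: an unsharp-mask “sharpness dial” in the spirit of Photo Unblur, and diffusion-based inpainting in the spirit of Magic Eraser. The file names and mask coordinates are hypothetical, and Google’s actual pipelines are learned models running on Tensor, not these textbook techniques.

```python
# Illustrative sketches only: Photo Unblur and Magic Eraser use proprietary
# ML models. These classical analogues just demonstrate the two concepts:
# boosting lost detail, and filling an erased region from its surroundings.
import cv2
import numpy as np

def dial_sharpness(image: np.ndarray, strength: float = 0.8,
                   sigma: float = 2.0) -> np.ndarray:
    """Unsharp mask: re-add the detail a Gaussian blur removes,
    scaled by `strength` (the 'dial')."""
    blurred = cv2.GaussianBlur(image, (0, 0), sigma)
    # out = image * (1 + strength) - blurred * strength, saturated to [0, 255]
    return cv2.addWeighted(image, 1.0 + strength, blurred, -strength, 0)

def erase_object(image: np.ndarray, mask: np.ndarray) -> np.ndarray:
    """Fill the masked region (nonzero pixels) from surrounding pixels."""
    return cv2.inpaint(image, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

img = cv2.imread("photo.jpg")             # hypothetical input file
sharp = dial_sharpness(img, strength=0.8)

mask = np.zeros(img.shape[:2], np.uint8)
mask[120:260, 300:420] = 255              # hypothetical box around a distraction
clean = erase_object(img, mask)

cv2.imwrite("photo_sharp.jpg", sharp)
cv2.imwrite("photo_clean.jpg", clean)
```

The sketch mirrors the division of labor described above: analyze pixels for structure, separate the region of interest, then rewrite only the pixels that need to change.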

Also: How to use Pixel’s Photo Unblur

Ultimately, Google is encouraging users to look forward to the images they capture, as well as to look back at the ones from the past. 

Building the most inclusive camera experience

Representation matters, and building for inclusivity is central to Sarma and the Google Pixel team. Last year’s Pixel 6, for instance, introduced Real Tone, a camera-tuning feature that corrects white balance and highlights when photographing people of color. Before then, images of darker skin tones would often appear washed out by brighter backgrounds, and overall skin brightness looked unnatural. Real Tone addressed that problem.

Also: Google Pixel 7 Pro vs Pixel 6 Pro: Should you upgrade?

Google’s Real Tone feature in action. (Image: Google)

With the Pixel 7, Google introduced Guided Frame, a feature that leverages Android’s TalkBack screen reader and haptic feedback to help blind and low-vision users take selfies. It’s a handy tool, and one that brings meaningful innovation to smartphone cameras. “Our job isn’t done yet,” Sarma added. “It’s an endless goal to make everyone feel represented. Again, accessible creativity.”
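To make the idea concrete, here is a hedged sketch of the framing-guidance logic at the heart of a Guided Frame-like feature: detect a face, measure its offset from the center of the frame, and turn that into a directional cue. This is an illustration only; Google hasn’t published Guided Frame’s implementation, the Haar-cascade detector below stands in for Google’s far stronger face models, and on a real device the cue would be spoken through TalkBack and paired with haptics.

```python
# Conceptual sketch of framing guidance, not Google's implementation.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def framing_hint(frame) -> str:
    """Return a spoken-style cue for centering a face in the frame."""
    h, w = frame.shape[:2]
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return "No face detected; move the phone slowly."
    x, y, fw, fh = faces[0]
    dx = (x + fw / 2) - w / 2   # face offset from frame center, horizontal
    dy = (y + fh / 2) - h / 2   # face offset from frame center, vertical
    hints = []
    # Moving the phone toward the face shifts the face toward center.
    # (A mirrored front-camera preview would flip the horizontal cue.)
    if abs(dx) > w * 0.1:
        hints.append("left" if dx < 0 else "right")
    if abs(dy) > h * 0.1:
        hints.append("up" if dy < 0 else "down")
    if not hints:
        return "Face centered; hold still."
    return "Move the phone " + " and ".join(hints) + "."
```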

More: Smartphones with interchangeable camera lenses: Hardware chaos or pure genius?

It’s not about numbers. It’s about authenticity

Google’s machine learning and AI-powered camera features are like cheat codes: shortcuts that sidestep hardware limitations while simplifying even the most tedious photography tasks, such as capturing long exposures and balancing dynamic range.

Also: Google commits to building AI model that supports 1,000 languages

Megapixel counts and sensor sizes aside, Sarma said, “If I had to describe the Pixel camera with one word, it would be authentic. It’s all about you; from the images you capture to how your subjects are portrayed to what you’re able to do with them after the fact.”
