Personalized Spatial Audio for AirPods looks good on paper – but I want more
Anyone who’s ever listened to Apple’s proprietary spatial audio with dynamic head tracking will tell you, quite forcefully, that movie content sent from an iPhone, iPad, Apple silicon Mac or Apple TV 4K to their AirPods Max, AirPods Pro or AirPods 3 is vastly transformed.
Yet to give it a whirl? You’re in for a treat. Try the opening scene from Gravity, the shoot-out in the most recent James Bond epic, No Time To Die, or practically any scene from Venom, ideally from an iPad Pro 12.9 to a set of AirPods Max. But we digress…
At its recent WWDC 2022 event, Apple divulged an update for spatial audio coming with iOS 16. The news? This time, it’s personal.
The announcement was simple enough: Craig Federighi, Apple’s senior vice president of software engineering, said that with the iOS 16 rollout, users will be able to create a ‘Personalized Spatial Audio’ profile using an iPhone’s TrueDepth camera.
So, nobody’s asking you to go out and get impressions of your ears from an audiologist – your iPhone camera has got this one covered. In truth, it’s not the first time we’ve seen this kind of approach from headphone manufacturers.
Sony’s Headphones Connect app, for example, has been guiding users through photo-shoot-style ear scans using phone cameras for a little while now, across two generations of its best-selling over-ears, the WH-1000XM4 and WH-1000XM5. And of course, Sony has its own spatial audio format to make the best of it all: Sony 360 Reality Audio…
How does spatial audio work now – and how will it get better?
Both Sony 360 Reality Audio and Apple’s head-tracked spatial audio use something called a Head-Related Transfer Function (HRTF). It’s a sort of formula describing the physical traits of a given listener (the shape of the ears, the shape of the head, the distance between the ears, whether there’s anything actually between them… OK, the last one is a joke) that affect how that listener receives sound from a given point plotted in space – Sony is very clear that its immersive solution works in a sphere around the listener.
By processing data taken from thousands of individuals, an HRTF closest to the average person’s perception and response can be created – i.e. immersive, head-tracked spatial audio that’ll wow almost anyone.
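If you’re curious what an HRTF amounts to in practice, here’s a minimal sketch of binaural rendering – our own illustration of the principle, not Apple’s or Sony’s actual pipeline. Convolve a mono source with a measured pair of head-related impulse responses (HRIRs, the time-domain form of an HRTF) and the per-ear filtering is what your brain decodes as direction. The `load_hrir` helper is a hypothetical placeholder for a real HRIR dataset.

```python
# Minimal sketch of HRTF-based binaural rendering – an illustration of
# the principle, not Apple's or Sony's actual pipeline.
import numpy as np
from scipy.signal import fftconvolve

def render_binaural(mono, hrir_left, hrir_right):
    """Place a mono source at the direction the HRIR pair was measured for.

    Convolving the dry signal with each ear's impulse response applies the
    per-ear frequency and timing cues (the HRTF) that the brain decodes as
    direction. Assumes both HRIRs have the same length.
    """
    left = fftconvolve(mono, hrir_left)     # 'full' convolution by default
    right = fftconvolve(mono, hrir_right)
    stereo = np.stack([left, right], axis=1)
    return stereo / np.max(np.abs(stereo))  # normalize to avoid clipping

# Hypothetical usage: a one-second 1kHz tone, placed wherever the loaded
# HRIR pair was measured (load_hrir is a placeholder, not a real API).
fs = 48_000
tone = np.sin(2 * np.pi * 1000 * np.arange(fs) / fs)
# hrir_l, hrir_r = load_hrir(azimuth=90, elevation=0)
# out = render_binaural(tone, hrir_l, hrir_r)
```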
So, why the need to personalize? Well, Sony already does it, and by mentioning the iPhone’s TrueDepth camera, Apple implies that its snapper may also do some 3D mapping of your ears, though the company hasn’t explicitly said as much.
It’s also unclear whether Apple’s personalized spatial audio will involve a more robust fit test than the one currently featured in the AirPods Pro, which plays sounds and uses the in-ear mic to check the efficacy of the seal you’ve achieved betwixt ear canal and AirPod, all in a bid for the best possible audio quality and noise cancellation. Will the new personalizing process involve a hearing test too, as seen in products such as the ultra-customizable NuraTrue earbuds?
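For illustration only – Apple hasn’t published how its fit test actually works – one plausible version of such a seal check is to play a test tone, record it with the in-ear mic, and compare the captured bass energy against a known-good reference, since a leaky seal bleeds low frequencies first. Everything below, from the function names to the 50% threshold, is our own hypothetical sketch:

```python
# Toy sketch of a seal-quality check in the spirit of the AirPods Pro fit
# test – our guess at the principle, not Apple's algorithm.
import numpy as np

def band_energy(signal, fs, lo=20.0, hi=200.0):
    """Sum of spectral energy between lo and hi Hz."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    return float(spectrum[(freqs >= lo) & (freqs <= hi)].sum())

def seal_ok(mic_capture, sealed_reference, fs=48_000, threshold=0.5):
    """Flag a poor fit if the in-ear mic captures less than half the bass
    energy of a known-good-seal reference recording. The 0.5 threshold is
    an arbitrary placeholder, not a real tuning value."""
    return band_energy(mic_capture, fs) >= threshold * band_energy(sealed_reference, fs)
```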
But – and this is a compliment, because Apple’s spatial audio is already excellent – will any of this really make it better? It remains to be seen. The iOS 16 beta is with a select group of testers now, a public beta will arrive in July, and the full rollout is scheduled for late 2022 for anyone with an iPhone 8 or later. If you’re feeling brave, you can have at it on your iPhone now, although we’re not sure we recommend that course of action just yet.
Opinion: Apple’s spatial audio will achieve great things – but not with this particular update
However Apple ‘personalizes’ its truly wonderful head-tracked spatial audio, I doubt this is the update we all really want to see – and, most importantly, hear.
You see, spatial audio takes surround-sound Dolby Atmos signals and adds directional audio filters on top, tweaking the frequencies and volume levels each of your ears hears so that sounds can be placed practically anywhere around your person. And when you’re using Apple’s top-tier AirPods with an Apple device and head tracking deployed, the device itself is tracked and treated as the source of the sound – watch any of the movies recommended at the beginning of this piece and simply take a few steps away from your device. Now turn slowly around. See?
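To make the head-tracking trick concrete, here’s a toy sketch – ours, not Apple’s implementation. The renderer takes the device’s bearing in the room and subtracts your current head yaw, so the rendered direction updates as you turn and the sound stays pinned to the device rather than to your head:

```python
# Toy illustration (not Apple's code) of device-anchored head tracking.
def relative_azimuth(source_bearing_deg, head_yaw_deg):
    """Direction to render, relative to the listener's nose.

    Both angles are in the room's frame with clockwise-positive yaw; the
    result is wrapped to [-180, 180), where 0 is straight ahead and
    negative angles are to the listener's left.
    """
    return (source_bearing_deg - head_yaw_deg + 180.0) % 360.0 - 180.0

# The iPad sits dead ahead (bearing 0). Turn your head 90 degrees to the
# right and the renderer now places the sound 90 degrees to your left –
# exactly the "walk away and turn around" effect described above.
assert relative_azimuth(0.0, 90.0) == -90.0
```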
What would be truly mind-blowing where spatial audio is concerned is the ability to physically walk through a symphony orchestra, pausing beside the bassoon player or the second violins, perhaps. Currently, your device is still the source, so while the effect is immersive, you can’t get that truly advanced level of personalization; you can’t home in on the timpani by actually walking over to it.
From an in-app perspective, the option to switch head-tracked content from ‘device as source’ to ‘in situ’ – letting you live out a virtual stroll through the Sydney Opera House in your local community center, say, if you had the space – would truly upgrade spatial audio. And that may be coming, but it’s not here yet.
As with all such advancements, it’s when the technology is truly advanced and malleable – when the end user is able to push it to its limits, break it, and put it back together incorrectly but in a way that they feel is an improvement – that spatial audio will achieve its full potential.
I’m not so sure taking a picture of your ear to optimize spatial audio for AirPods achieves this, but equally, I don’t want to rain on Apple’s parade. It’s certainly a step in the right direction – and I’m truly excited to see what this award-winning technology can achieve further down the road. After all, we’re such converts that we even curated 10 albums we wish were available in Spatial Audio.