There’s an age-old take when it comes to Apple and hot new technologies: if the company hasn’t shipped whatever everybody else in the industry is currently focusing on, it must be behind.
This is rarely the truth.
Apple’s business is like the proverbial iceberg: we only see the tip of what the company’s doing, while the vast majority of its research and development efforts are looming beneath the surface. Just look at its finances in its most recent quarter: it spent $7.7 billion on R&D, accounting for more than half of all of its operating expenses.
The latest technology to feature in this storyline is, of course, artificial intelligence. How can the company compete in this burgeoning market if it doesn't come out with a chatbot or image generator posthaste? (Never mind that it still hasn't shipped its virtual reality headset, which was the last market where the company was clearly falling behind.)
But, as is always the case with this canard, the truth is that Apple has been doing AI in its own way, and that way isn't about chasing the market.
Get your learning on
One reason that Apple’s AI work is sometimes overlooked is simply one of terminology. While the company doesn’t often talk about “artificial intelligence,” it does spend a lot of time discussing “machine learning” (ML), which is a critical underpinning to a lot of Apple’s latest technologies.
Though machine learning may technically be only a subset of artificial intelligence (and there's some disagreement even on that), the two terms are often used interchangeably, at least colloquially. The large language models behind tools like ChatGPT, Google's Bard, and Microsoft's new Bing chatbot take advantage of machine learning technologies, as do image generators like DALL-E and Stable Diffusion. Broadly speaking, these are all technologies built on algorithms that use data to learn and improve.
Apple's investment in machine learning is clear: in 2018, the company hired Google's head of machine intelligence, John Giannandrea, as its Senior Vice President of Machine Learning and AI Strategy, reporting directly to Tim Cook. It also maintains a public-facing site, a rarity for the famously secretive company, where it publishes a large amount of its machine-learning research, and it actively sponsors Ph.D. fellowships, recruits interns, and offers residencies. It even contributes to open-source machine learning projects, optimizing the Stable Diffusion software to run on its hardware, for example.
The cores of machine learning
You might concede that Apple is interested in ML but still ask what it has actually produced with all of this investment.
Plenty. If you're a fan of Live Text, which lets you select any text in photos or video; the ability to search your Photos library for the word "dog" and actually see all the pictures of dogs you've taken; or the beta Live Captions feature that can subtitle any video or audio playing on your device, you're benefiting from Apple's machine learning research.
The company has also created an entire framework, Core ML, to make it easy for developers to integrate machine learning into their products. And if that's not enough, remember that every Apple-made processor dating back to 2017's A11 Bionic has featured a dedicated Neural Engine optimized for running machine learning algorithms. In its most recent 16-core iteration, it can run an astounding 17 trillion operations per second, allowing machine learning models to run, in typical Apple fashion, privately on your device instead of relying on a cloud service.
If Apple is willing to devote a significant portion of its processors to machine learning, then it is most certainly putting its money where its mouth is.
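To make that less abstract, here's a minimal sketch of what on-device machine learning looks like for a third-party developer using Apple's Vision framework, which sits on top of Core ML. This is illustrative only, not how Apple's Photos app actually works; the file path and confidence threshold are assumptions.

```swift
import Foundation
import Vision

// A minimal, hypothetical sketch: label a photo on-device using Vision's
// built-in image classifier. This is not Apple's Photos implementation;
// the confidence threshold of 0.3 is an illustrative assumption.
func labels(forImageAt url: URL) throws -> [String] {
    let request = VNClassifyImageRequest()          // Apple's built-in classifier
    let handler = VNImageRequestHandler(url: url)   // runs entirely on the device
    try handler.perform([request])
    return (request.results ?? [])
        .filter { $0.confidence > 0.3 }             // keep reasonably confident labels
        .map { $0.identifier }                      // e.g. "dog", "beach", "sunset"
}

// Hypothetical usage:
// let tags = try labels(forImageAt: URL(fileURLWithPath: "/path/to/photo.jpg"))
// print(tags)
```

The point is the one Apple keeps making: all of this work happens on your device, with no photos sent to a server.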
Tip of the iceberg
Are there more opportunities for Apple to increase its machine-learning footprint? Certainly. The glib answer is that Siri’s often lackluster performance could be enhanced by the kind of AI you see in recent chatbots—though, given the truly bizarre nature of some of the conversations with Microsoft’s recent foray, it seems likely Apple isn’t going to immediately jump into the deep (learning) end of the pool.
But the ability of those systems to retain context and communicate in a more fluid, human way does have advantages that could, and should, work their way into Apple's virtual assistant, even if in a more limited and controlled manner.
Likewise, the speech-to-text capabilities of Live Captions and Apple's dictation systems could be enhanced if Apple applied the same kind of optimization it did for Stable Diffusion to the very impressive Whisper speech recognition model.
Again, these aren’t as flashy as what many of Apple’s competitors are doing in the market, but neither does Apple have to chase that functionality in the same way. Google and Microsoft, for example, are using AI to duke it out in search, a market that Apple doesn’t really play in (though I would hardly be opposed to Apple using some of the underlying technology to improve its own on-device search capabilities).
In the end, Apple’s use of machine learning remains driven more by the idea of how it can enhance what users do, rather than just existing for its own sake. And while that may not capture the imagination in the same way, it may ultimately have a bigger impact on users’ lives. Which, to my mind, puts Apple ahead in the game, not behind.