Apple’s Child-Protection Features and the Question of Who Controls Our Smartphones

It certainly wasn’t an average week of tech questions from my friends and family. And I don’t blame them for the freakout. Apple’s announcement last week of two distinct child-protection measures for iOS confused—and creeped out—even the most tech-savvy.

For those catching up: One initiative is software intended to identify child pornography—also known as “child sexual abuse material”—when it is stored using Apple’s iCloud Photos. The other is a parental control enabling iPhones, iPads and Macs to blur out sexually explicit photos in the Messages app, and warn children about sending or receiving them. It could also alert parents of children 12 and under that they are sending or receiving such images.

“Wonderful! Apple wants to protect the children!” Right? Except Apple flubbed the explanation.

“I grant you, in hindsight, introducing these two features at the same time was a recipe for this kind of confusion,” Craig Federighi, Apple’s senior vice president of software engineering, told me in an exclusive video interview, which you can watch here—and read more about here.

This wasn’t a Mapsgate or Batterygate or Antennagate situation. The outcry wasn’t over a demonstrable technical issue. Instead, it was focused on the interpretation—and in some cases, the conflation—of two very different technologies intended to solve two very different problems. And at the center of it all? Giant questions about user privacy and the power that the world’s biggest companies have over our lives and personal data.

Mr. Federighi drove a lot of this work at the company. I used my brief time with him to understand the two new initiatives, then ask about Apple’s power and how the software could be abused. It’s important that we understand these two features, how they work and can be controlled—and how the leap of faith we must make when buying Apple products seems to get longer every year.

Child Pornography Detection

How does this work? Some basics: The National Center for Missing and Exploited Children (aka NCMEC) maintains a database of known illegal child pornography. Other big tech companies—Google, Facebook, Microsoft, etc.—have methods of scanning photos you upload to their servers to see if any match the images in the NCMEC repository. The fact that Apple does some of this on the phone, ostensibly to protect user privacy, is where the controversy lies.

The illegal photos collected by NCMEC and other similar child-safety organizations have been converted into cryptographic codes—strings of numbers called “neural hashes”—that identify signature characteristics of images. Once this feature arrives on your phone, in an iOS update due sometime this year, the software will generate hashes for your own photos as they’re prepared for upload to iCloud. The device would then cross-reference your image hashes against the hashes from the child-pornography database. This is why you shouldn’t have to worry about some picture of your kid in the bathtub being flagged. The system is designed to match only fingerprints of known illegal images.
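To make that matching step concrete, here is a minimal sketch in Swift. It is not Apple’s implementation: the real system uses Apple’s NeuralHash perceptual hash and a blinded cryptographic matching protocol, while this example substitutes a plain SHA-256 digest, an invented hash list and hypothetical function names so it stays self-contained.

```swift
// Hypothetical sketch of on-device hash matching, not Apple's implementation.
// A plain SHA-256 digest stands in for the perceptual "neural hash," and the
// known-hash set below is an invented placeholder.
import Foundation
import CryptoKit

// Hashes of known images, as they might ship inside the OS (placeholder values).
let knownImageHashes: Set<String> = [
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
]

// Hash a photo that is queued for iCloud upload and check it against the known set.
func photoMatchesKnownImage(_ photoData: Data) -> Bool {
    let digest = SHA256.hash(data: photoData)
    let hex = digest.map { String(format: "%02x", $0) }.joined()
    return knownImageHashes.contains(hex)
}
```

One design note: an exact hash like SHA-256 would miss a resized or re-encoded copy of an image, whereas a perceptual hash is built to survive those changes. That resilience is the point of the “fingerprint” analogy above.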

Still with me? Each uploaded photo gets a “safety voucher,” an encrypted code that says whether that photo matches an illegal one. Even if there is a positive match, no alarm bells ring at this point. However, if an account collects around 30 safety vouchers corresponding to illegal images, according to Mr. Federighi, the account gets flagged to Apple’s human moderators. They review the vouchers (and no other images) to see if they actually contain potentially illegal images. If they do, Apple reports the account to NCMEC.
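As a rough illustration of the threshold Mr. Federighi describes, here is a hypothetical Swift sketch: nothing is surfaced to human reviewers until an account accumulates roughly 30 matching vouchers. The types and the exact constant are invented for the example; Apple hasn’t published its real threshold logic.

```swift
// Hypothetical sketch of the review threshold; the type and constant are illustrative.
struct SafetyVoucher {
    let matchesKnownImage: Bool
}

// "Around 30" matches, per Mr. Federighi; the real value and logic are Apple's.
let reviewThreshold = 30

func shouldFlagForHumanReview(_ vouchers: [SafetyVoucher]) -> Bool {
    let matchCount = vouchers.filter { $0.matchesKnownImage }.count
    return matchCount >= reviewThreshold
}
```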

What are the big concerns? Among many, the biggest is that this is a “back door” for Apple and other entities to scan the private contents of your iPhone. Child pornography is abhorrent. Full stop. But what if the technology were used to spot other types of photos? Say, an authoritarian government looking for satirical photos of its leader? The Electronic Frontier Foundation, a digital-rights watchdog group, maps out this argument here.

“In no way is this a back door. I really don’t understand that characterization,” Mr. Federighi said. He said Apple would say no to any government asking to add its own image hashes to the software. He also said the image databases are audited by outside parties, and that the hash data that Apple sends to iPhones would be the same no matter what country the device is in. He said that the human review that happens before alerting authorities would catch any discrepancies.

You can check to see if you use iCloud Photos by going to Settings > Photos and seeing if this switch is turned on.
Photo: Apple

What control do we have? This only applies to users of iCloud Photos. If you don’t use the service, this doesn’t happen on your device. While there isn’t an on/off switch for this, you can disable iCloud Photos in settings. Then you’d have to back up your photos some other way.

Communication Safety in Messages

How does this work? The better name for this might have been “texted nudes detection.” If an account designated as a child in iCloud Family Sharing receives or prepares to send a sexually explicit photo in the Messages app, the photo will appear blank. A warning message will appear, giving the child a choice to view or skip. In accounts of children 12 and under, parents can opt to receive notifications when a child views or sends such an image.

The technology here is totally different from the hashing system for child pornography. Apple uses on-device machine learning to determine whether an image contains nudity. It’s similar to the tech that lets you search your photos for “dogs” or “beaches.” Mr. Federighi said that while it can make a mistake, the company has had a hard time fooling it in testing.
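Apple hasn’t published the Communication Safety classifier as an API, so the sketch below only illustrates the general family of on-device image classification it belongs to, using Apple’s Vision framework. The “explicit” label and the 0.9 confidence bar are invented for the example and aren’t identifiers in Apple’s actual model.

```swift
// Illustrative only: an on-device classifier gating a blur decision.
// VNClassifyImageRequest is a real Vision API for general image classification,
// but the "explicit" label and the threshold below are hypothetical.
import Vision
import CoreGraphics

func shouldBlur(_ image: CGImage) -> Bool {
    let request = VNClassifyImageRequest()
    let handler = VNImageRequestHandler(cgImage: image, options: [:])
    guard (try? handler.perform([request])) != nil,
          let observations = request.results else {
        return false
    }
    // Blur if any sensitive label scores above a high confidence bar (hypothetical).
    return observations.contains { $0.identifier == "explicit" && $0.confidence > 0.9 }
}
```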

Apple-provided screenshots of its announced Communication Safety tool for Messages show what it will look like when a child receives photos the system deems explicit.
Photo: Apple

What are the big concerns? Apple has long touted the privacy of its messaging service, including that it is end-to-end encrypted so no one but the sender and receiver(s) is privy to the actual chats. Privacy advocates such as the EFF argue that by introducing a third party—the parent of a child age 12 or under—Apple is nearing a “slippery slope.” Governments and other powerful entities might want to be notified of other sorts of messaging activity among Apple users. Apple says this feature doesn’t break end-to-end encryption, and the company doesn’t gain access to the communications as a result of it.


While in general parents will likely appreciate such a tool, there’s some concern it might invade children’s privacy. It could potentially out children who are questioning their sexuality. Mr. Federighi said the feature was designed so that cannot happen. The child will be warned before viewing an image that would trigger the parental alert, and the actual image isn’t shown to the parents when they are alerted.

What control do we have? This is a feature of Apple’s Family Sharing that parents can choose to turn on. For children under 12, parents can set up those notifications. For kids 13 or older, parents aren’t notified.

In Apple We Trust?

So back to the freakout of the week. A big part of it was the confusion around the side-by-side announcement of two completely different initiatives. But even if you now get that you—and all those pictures you took of Junior at the pool or in the tub—are safe, these tools and technologies raise big questions about control over the phones in our pockets. At the very least, they require a level of trust in Apple that goes far beyond the normal iOS software update.

We have to trust that these features really do work as described. We have to trust Apple won’t use any of these child-protection tools for less-noble reasons. And if we trust all the reasons the company laid out for this to be safe, we also have to trust its technology won’t land in the wrong hands, by hacking or coercion.

In a message posted in Las Vegas in 2019, Apple made a promise about privacy.
Photo: Andrej Sokolow/DPA/ZUMA PRESS

We also have to accept that Apple’s walled garden isn’t just about keeping us in blue messaging bubbles or subscribing to more “Ted Lasso.” The company is now making deeper decisions about what can and can’t happen on the phones it sells us.

“Our customers own their phones, for sure,” Mr. Federighi said when I asked who owned my phone. “I hope a customer sees when they buy an iPhone their iPhone does keep getting better, and it changes in ways that they find important and enriching,” he said, adding that customers don’t have to install new software updates if they don’t want to. “We give customers control over adopting new capabilities they might find disruptive.”

I want to believe that, but, besides a few iPhone 6 holdouts, most people I know feel compelled to regularly update their software and hardware. And they usually don’t go digging into any deeply buried settings. These controls are no longer about switching on or off Bluetooth or a battery saver; the stakes are our autonomy to choose what happens to our most personal data. It should be made much clearer and simpler—just as Apple should have made the rollout of these features.


Write to Joanna Stern at [email protected]

Copyright ©2021 Dow Jones & Company, Inc. All Rights Reserved.
