Apple confirms it will start scanning iCloud and Messages to detect child abuse images
Apple will begin using a system that detects sexually explicit photos sent through Messages and compares photos uploaded to iCloud against a database of known Child Sexual Abuse Material (CSAM), helping point law enforcement to potential predators.
The announcement (via Reuters) says these new child safety measures will go into place with the release of iOS 15, watchOS 8 and macOS Monterey later this year. The move comes years after Google, Facebook and Microsoft put similar systems in place: Google has used hash matching to detect this material since 2008, Microsoft developed its “PhotoDNA” system in 2009, and Facebook and Twitter have had comparable systems in place since 2011 and 2013, respectively.
The Messages app will begin warning children, as well as their parents, when sexually explicit photos are sent or received. The app will blur such images and display a message that reads, “It’s not your fault, but sensitive photos and videos can be used to hurt you.”
The feature will use on-device machine learning to analyze images; photos deemed sexually explicit will be blurred.
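In rough terms, the flow could look like the hypothetical Swift sketch below: an on-device check decides whether an image is sensitive, and the image is blurred before it is displayed. Apple has not published its model or code, so the `SensitiveImageClassifier` type and its `isSensitive` check are illustrative stand-ins, not Apple’s implementation.

```swift
import CoreImage
import CoreImage.CIFilterBuiltins

// A minimal sketch of the flow described above, with a hypothetical
// stand-in for Apple's unpublished on-device model.
struct SensitiveImageClassifier {
    // Placeholder decision so the example runs; a real implementation
    // would evaluate an on-device machine learning model on the image.
    func isSensitive(_ image: CIImage) -> Bool { false }
}

// Blur a received image before display if the classifier flags it,
// mirroring how Messages obscures sensitive photos until the child
// chooses to view them.
func imageForDisplay(_ image: CIImage, classifier: SensitiveImageClassifier) -> CIImage {
    guard classifier.isSensitive(image) else { return image }
    let blur = CIFilter.gaussianBlur()
    blur.inputImage = image
    blur.radius = 40
    return blur.outputImage ?? image
}
```

The key point, per Apple, is that this analysis happens entirely on the device itself.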
“iOS and iPadOS will use new applications of cryptography to help limit the spread of CSAM online, while designing for user privacy,” per Apple’s child safety webpage. “CSAM detection will help Apple provide valuable information to law enforcement on collections of CSAM in iCloud Photos.”
The system will allow Apple to detect CSAM stored in iCloud Photos and then send a report to the National Center for Missing & Exploited Children (NCMEC).
According to MacRumors, Apple is using a “NeuralHash” system that compares hashes of photos on a user’s iPhone or iPad against the database of known CSAM hashes before they are uploaded to iCloud. If the system finds CSAM being uploaded, the case will be escalated for human review.
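A simplified way to picture that pre-upload, threshold-based matching is the Swift sketch below. Apple’s actual NeuralHash is a perceptual hash designed to survive resizing and recompression, and the known-hash database is stored on devices in a blinded, unreadable form (as described further down), so the SHA-256 stand-in and the `CSAMMatcher` type here are assumptions made purely for illustration.

```swift
import Foundation
import CryptoKit

// A minimal sketch of threshold-based hash matching, assuming a heavily
// simplified model of the system described above.
struct CSAMMatcher {
    /// Known CSAM hashes (in Apple's system, supplied by NCMEC and other
    /// child safety organizations).
    let knownHashes: Set<Data>
    /// Number of matches required before the account is escalated for human review.
    let reviewThreshold: Int
    var matchCount = 0

    /// Stand-in for a perceptual hash of the image data; SHA-256 is used
    /// here only so the example is self-contained.
    func hash(of imageData: Data) -> Data {
        Data(SHA256.hash(data: imageData))
    }

    /// Called for each photo before it is uploaded to iCloud Photos.
    /// Returns true once enough matches have accumulated to warrant review.
    mutating func checkBeforeUpload(_ imageData: Data) -> Bool {
        if knownHashes.contains(hash(of: imageData)) {
            matchCount += 1
        }
        return matchCount >= reviewThreshold
    }
}
```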
Apple will also update Siri and Search to help children and parents report CSAM. Essentially, when someone searches for something related to CSAM, a pop-up will appear pointing them to resources.
Of course, with a system like this, there invariably will be privacy concerns. Apple aims to address this as well.
“Apple’s method of detecting known CSAM is designed with user privacy in mind. Instead of scanning images in the cloud, the system performs on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations,” per Apple’s child safety webpage. “Apple further transforms this database into an unreadable set of hashes that is securely stored on users’ devices.”
According to Apple, there is “less than a one in one trillion chance per year of incorrectly flagging a given account.”
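A figure like that follows from requiring multiple independent matches before an account is ever flagged. The Swift back-of-the-envelope below uses invented numbers, not Apple’s actual parameters, just to show how quickly the odds of hitting a match threshold by accident shrink.

```swift
import Foundation

// Back-of-the-envelope illustration of how a match threshold drives a tiny
// per-account error bound. These inputs are invented for the example;
// Apple has not published its actual parameters.
let perImageFalseMatchRate = 1e-6   // hypothetical chance one photo false-matches
let photosPerYear = 10_000.0        // hypothetical photos uploaded per account per year
let threshold = 10.0                // hypothetical matches needed before flagging

// For small rates, P(at least t false matches out of n) is roughly C(n, t) * p^t.
// Work in log10 to avoid floating-point underflow.
func log10Choose(_ n: Double, _ k: Double) -> Double {
    (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)) / log(10.0)
}

let log10FlagProbability = log10Choose(photosPerYear, threshold) +
    threshold * log10(perImageFalseMatchRate)
print("Roughly 1 in 10^\(Int(-log10FlagProbability)) per account per year")
// With these made-up inputs the bound lands far below 1 in 1 trillion (10^12);
// the point is that requiring many independent matches shrinks the odds rapidly.
```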
Even with the purportedly low false-positive rate, some fear that this type of technology could be used in other ways, such as targeting anti-government protesters who upload imagery critical of their governments.
“These are bad things. I don’t particularly want to be on the side of child porn and I’m not a terrorist. But the problem is that encryption is a powerful tool that provides privacy, and you can’t really have strong privacy while also surveilling every image anyone sends,” one critic wrote in a tweet on August 5, 2021.
Despite the potential privacy concerns, John Clark, chief executive of the National Center for Missing & Exploited Children, believes what Apple is doing is more beneficial than harmful.
“With so many people using Apple products, these new safety measures have lifesaving potential for children who are being enticed online and whose horrific images are being circulated in child sexual abuse material,” said Clark in a statement. “The reality is that privacy and child protection can co-exist.”