Apple’s CSAM scanning is a dangerous weapon, say researchers
A couple of weeks ago, Apple announced a new measure for all iCloud-enabled iPhones, intended to help catch predators in possession of child sexual abuse imagery. This has unfortunately long been a prevalent problem in our society, and there is only so much we have been able to do to fight it.
Once a photo is scanned, its hash value is checked against a database of hashes of known CSAM (Child Sexual Abuse Material). If an identical or even a close match is found, the photo is flagged. For an account itself to be flagged and reported to the National Center for Missing and Exploited Children, however, around 30 CSAM matches would need to be found on that account.
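As a rough sketch of the matching flow described above (not Apple’s actual NeuralHash implementation, and using purely illustrative thresholds and hash values), the per-photo check and the per-account reporting threshold might look something like this:

```python
# Illustrative sketch only: real perceptual hashes, the "close match" distance,
# and the exact reporting rules are Apple's and are not public in this form.

HAMMING_THRESHOLD = 4    # assumed bit-distance for a "close" match (illustrative)
REPORT_THRESHOLD = 30    # approximate match count Apple says triggers a report

def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits between two perceptual hash values."""
    return bin(a ^ b).count("1")

def is_match(photo_hash: int, known_hashes: set[int]) -> bool:
    """A photo matches if its hash is identical or very close to a known hash."""
    return any(hamming_distance(photo_hash, h) <= HAMMING_THRESHOLD
               for h in known_hashes)

def should_report(photo_hashes: list[int], known_hashes: set[int]) -> bool:
    """The account is reported only once enough individual photos have matched."""
    matches = sum(is_match(h, known_hashes) for h in photo_hashes)
    return matches >= REPORT_THRESHOLD
```

The key design point is that a single flagged photo is not enough on its own; the decision to report hinges on the accumulated match count for the account.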
Widespread backlash came immediately
Apple’s reputation for integrity and privacy, on which the company is founded, could be compromised if it weakens its industry-standard end-to-end encryption in order to follow through with this plan. People and governments across the globe have every right to be concerned at the prospect of such a huge back door opening up to illicit surveillance, and a whole Pandora’s box of possible evils.
Princeton joins the fray, calling out the dangers in Apple’s system
“Platforms could, for example, use the constructions we describe to implement censorship or illegitimate surveillance—and might be compelled by a government that is not committed to free speech and the rule of law,” the Princeton researchers warn.
In other words, a system that allows private communications to be monitored and handed over to authorities at any given time could become an easy tool for dictatorship. In the end, how a perceptual hash matching (PHM) system is implemented and leveraged lies entirely at the discretion of its human controller, who “will have to curate and validate [that hash set B does exclusively contain harmful media].”
What constitutes harmful media also lacks an objective definition, and the category could be expanded and twisted to include anything the powers that be choose to deem “harmful media,” compromising people’s right to free speech, or further suppressing citizens in countries where that freedom is not recognized as a human right.
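The censorship concern comes down to the fact that the hash set is opaque: a client device cannot tell what any entry in it was derived from. The sketch below is purely hypothetical (the perceptual_hash helper and the sample entries are placeholders, not anything from Apple’s system), but it shows how an entry for a politically inconvenient image is indistinguishable from an entry for abuse imagery.

```python
import hashlib

def perceptual_hash(image_bytes: bytes) -> str:
    # Stand-in for a real perceptual hash such as Apple's NeuralHash; a
    # cryptographic hash is used here only to produce an opaque value.
    return hashlib.sha256(image_bytes).hexdigest()[:16]

# To the client, the operator's database is just a set of opaque values.
# Nothing in the entries reveals whether they came from abuse imagery or
# from, say, a protest poster a government wants tracked.
hash_set_b = {
    perceptual_hash(b"known abuse image"),      # entry the system is meant for
    perceptual_hash(b"banned protest poster"),  # entry quietly added later
}

client_photo = b"banned protest poster"
print(perceptual_hash(client_photo) in hash_set_b)  # True: flagged all the same
```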
Apple’s “one-in-a-trillion” false detection rate is also called into question: while the tech giant makes false detection sound like a near impossibility, the Princeton researchers consider false positives a legitimate, if low-probability, possibility that could compromise innocent individuals’ safety. “A Client’s media [could] match a value in the hash set even though there is no perceptual similarity,” they write.
“In these instances, an innocent E2EE Client may—depending on the content moderation response—lose communications confidentiality, have their account terminated, or become the subject of a law enforcement inquiry.” Needless to say, this is a deeply undesirable scenario, and one whose risk should not exist at all.
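Some back-of-the-envelope arithmetic makes the researchers’ point concrete. Using Apple’s own one-in-a-trillion figure together with assumed, purely illustrative numbers for the account count and time horizon:

```python
# Rough arithmetic only: the account count and time horizon below are
# assumptions chosen for illustration, not figures from Apple or Princeton.

false_flag_rate = 1e-12      # Apple's stated chance of falsely flagging an account per year
accounts = 1_000_000_000     # assumed number of active iCloud accounts (illustrative)
years = 10                   # assumed time horizon (illustrative)

expected_false_flags = false_flag_rate * accounts * years
print(f"Expected falsely flagged accounts over {years} years: {expected_false_flags:.0e}")
# Prints 1e-02: a small number, but not zero, and a single false flag can cost
# an innocent user confidentiality or trigger a law enforcement inquiry.
```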
Even Apple’s own employees have voiced concern about the dangers inherent in such a system. Apple has already responded to the criticism once, promising that it will never allow the photo surveillance system to fall under the control of any government or external entity, and that it will be used only to detect child abuse material.
Naturally, this promise alone is far from satisfactory, and the backlash seems unlikely to slow down until Apple commits to end-to-end encryption, with no one other than the owner having access to the photos and data stored on their iPhone or in iCloud, which at this point has become an extension of nearly every iPhone in circulation.