Apple scheme to detect child abuse creates serious privacy and security risks, say scientists

Apple’s proposal to compel iPhone users to accept updates that would automatically and covertly search shared images for possible abuse material, and send reports to Apple or law enforcement agencies, is today condemned as unworkable, vulnerable to abuse, and a threat to safety and security by the world’s top cryptographic experts and internet pioneers.

The detailed technical assessment by 14 top computer scientists of why Apple’s ideas are foolish and dangerous in principle and in practice, Bugs in our pockets: The risks of client-side scanning, was published this morning by Columbia University and on arXiv.

Apple’s plan, unveiled in August, is called client-side scanning (CSS). The panel acknowledges that “Apple has devoted a major engineering effort and employed top technical talent in an attempt to build a safe and secure CSS system”, but finds it a complete failure, citing more than 15 ways in which states, malicious actors and even targeted abusers could turn the technology around to harm others or society.

Apple has “not produced a secure and trustworthy design”, they say. “CSS neither guarantees efficacious crime prevention nor prevents surveillance. The effect is the opposite… CSS by its nature creates serious security and privacy risks for all society.”

The report’s signatories include Ron Rivest and Whit Diffie, whose pioneering 1970s mathematical inventions underpin much of the cryptography in use today; Steve Bellovin of Columbia University, one of the originators of Usenet; security gurus Bruce Schneier and Ross Anderson, of Cambridge University; Matt Blaze of Georgetown University, a director of the Tor project; and Susan Landau, Peter G Neumann, Jeffrey Schiller, Hal Abelson and four others, all giants in the field.

Apple’s plan “crosses a red line”, they say. “The proposal to pre-emptively scan all user devices for targeted content is far more insidious than earlier proposals for key escrow and exceptional access. In a world where our personal information lies in bits carried on powerful communication and storage devices in our pockets, both technology and laws must be designed to protect our privacy and security, not intrude upon it.”

Pressure from intelligence agencies

Apple’s summer announcement appears to be the first time a major IT player in the west has been ready to give in to such government pressure. Pressure from intelligence agencies and repressive governments to block, subvert or legally prohibit effective cryptography in digital communications has been incessant for over 40 years. But, faced with increasingly effective and ever more widely used cryptographic systems, these actors have shifted to attacks on endpoints and infrastructure instead, using methods including legally authorised hacking.

“The proposal to pre-emptively scan all user devices for targeted content is far more insidious than earlier proposals for key escrow and exceptional access”
Bugs in our pockets report

“The move highlights a decisive shift in the latest battle by intelligence agencies to subvert modern and effective cryptography,” Abelson and colleagues say today. “Instead of having targeted capabilities, such as to wiretap communications with a warrant and to perform forensics on seized devices, the agencies’ direction of travel is the bulk scanning of everyone’s private data, all the time, without warrant or suspicion.”

The principle of CSS is that high-quality cryptography would be permitted, but material matching government-supplied templates loaded onto the device would be flagged and secretly exported.
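In rough outline, the flow the experts describe can be sketched in a few lines of Python: content is fingerprinted and checked against a provider-supplied target list on the device, and only then encrypted for upload. This is a minimal sketch for illustration; the hash function, the reporting hook and the sample data below are assumptions, not Apple’s actual protocol.

```python
# Minimal sketch of a client-side scanning (CSS) flow: local matching against
# supplied fingerprints happens *before* end-to-end encryption. Illustrative
# only; the hash, reporting hook and sample data are assumptions.
import hashlib

def fingerprint(image_bytes: bytes) -> str:
    # Stand-in for a perceptual hash; a real CSS system would use a
    # similarity-preserving hash rather than an exact cryptographic one.
    return hashlib.sha256(image_bytes).hexdigest()

# Fingerprints of targeted material, supplied to and stored on the device.
TARGET_HASHES = {fingerprint(b"example targeted image bytes")}

def report_to_provider(image_bytes: bytes) -> None:
    print("flagged: a report would leave the device here")

def send_end_to_end_encrypted(image_bytes: bytes) -> None:
    print("uploaded: encrypted, but only after local inspection")

def upload(image_bytes: bytes) -> None:
    # The scan runs on the client, so encryption no longer protects the
    # content from the party who supplied the target list.
    if fingerprint(image_bytes) in TARGET_HASHES:
        report_to_provider(image_bytes)
    send_end_to_end_encrypted(image_bytes)

upload(b"an ordinary holiday photo")      # uploaded, no report
upload(b"example targeted image bytes")   # flagged, then uploaded
```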

“Technically, CSS allows end-to-end encryption, but this is moot if the message has already been scanned for targeted content,” they note. “In reality, CSS is bulk intercept, albeit automated and distributed. As CSS gives government agencies access to private content, it must be treated like wiretapping.

“Once capabilities are built, reasons will be found to make use of them,” they add.

Bulk surveillance

The authors criticise not only Apple’s incompetence in applying basic security principles, but also its culpable naivety in suggesting that such a system, once deployed, would not immediately be repurposed. Even if deployed initially to scan for illegal and publicly condemned child sex material, “there would be enormous pressure to expand its scope” – and there would be no way to rein in the privacy- and safety-destroying tool it had created.

The “promise of a technologically limited surveillance system is in many ways illusory”, they caution. Since the targeted terms or images would be secret, and secretly managed, how could Apple or any user prevent other material being added to the list once the system was introduced, including information that was lawful but displeased the government of the day in a powerful state?

Apple has already yielded to such pressures, such as by moving the iCloud data of its Chinese users to datacentres under the control of a Chinese state-owned company, and more recently by removing the Navalny voting app from its Russian app store.

The security experts also highlight the fatal error of placing powerful systems like CSS onto client devices, exposing them to repurposing, gaming, misdirection and deception by every class of bad actor – from powerful nation-states to criminal drug and murder gangs, to cyber-smart teenagers trying to set one another up.

Flawed technology

As proposed by Apple, the first CSS system would use “perceptual hashing” to match images being copied to iCloud to a library of government-supplied image “fingerprints”.

Perceptual hashing does not test for an exact bit-for-bit match but for image similarity. 
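As an illustration of the general idea (a minimal “difference hash”, not Apple’s NeuralHash, which is derived from a neural network), a perceptual hash reduces an image to a short fingerprint of its structure, and two images are treated as matching when their fingerprints differ in only a few bits. The file names and threshold here are assumptions for the example.

```python
# Minimal perceptual-hash sketch using a "difference hash" (dHash).
# Illustrates similarity hashing in general; this is not NeuralHash.
from PIL import Image

def dhash(path: str, size: int = 8) -> int:
    """Shrink and grey-scale the image, then record whether each pixel is
    brighter than its right-hand neighbour: a 64-bit structural fingerprint."""
    img = Image.open(path).convert("L").resize((size + 1, size))
    pixels = list(img.getdata())
    bits = 0
    for row in range(size):
        for col in range(size):
            left = pixels[row * (size + 1) + col]
            right = pixels[row * (size + 1) + col + 1]
            bits = (bits << 1) | int(left > right)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of bits in which two fingerprints differ."""
    return bin(a ^ b).count("1")

# Hashes within a few bits are treated as the "same" image, so re-compression
# or slight cropping still matches; the flip side is that unrelated images can
# also land within the threshold, which is where false positives come from.
h1, h2 = dhash("original.jpg"), dhash("recompressed.jpg")  # hypothetical files
print("match" if hamming(h1, h2) <= 5 else "no match")
```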

Apple’s latest version of perceptual hashing, called NeuralHash, was launched in August and promoted as a way of securely and reliably detecting abuse images. Critics quickly demonstrated that the system produced false positives and could be reverse-engineered and then exploited.

Researchers took barely two weeks to reverse-engineer the version of the NeuralHash algorithm built into iOS 14. This led to immediate exploits, including engineered evasion and engineered false positives. The system’s reputation plummeted when one team showed that it matched two utterly unlike real-world images. Apple withdrew NeuralHash a month later.

Apple’s NeuralHash algorithm’s reputation plummeted when one team showed that it matched two utterly unlike real-world images

Another “perceptual hash” technique, Microsoft’s PhotoDNA, has also been reverse-engineered to rebuild target photos, yielding very low-resolution but sometimes recognisable images.

Machine learning techniques would be even more vulnerable, as the model and the training engine would necessarily be exposed on vast numbers of devices. Adversaries could seek to “poison” learning algorithms with specially configured datasets.
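As a toy illustration of that poisoning concern (a deliberately simplified example built around a nearest-centroid classifier, not any deployed scanning model), flipping the labels of a handful of well-chosen training points is enough to move the decision boundary so that a previously benign sample can be flagged:

```python
# Toy data-poisoning illustration: a nearest-centroid classifier is trained on
# clean data, then on the same data with a few labels flipped by an adversary.
# The poisoned model's decision boundary shifts towards the benign cluster.
import numpy as np

rng = np.random.default_rng(0)

# Class 0 ("benign") clustered near the origin, class 1 ("targeted") near (3, 3).
X = np.vstack([rng.normal(0.0, 0.5, (50, 2)), rng.normal(3.0, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

def fit_centroids(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def predict(centroids, x):
    return min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))

clean = fit_centroids(X, y)

# The adversary relabels the ten benign points nearest the targeted centroid,
# dragging both centroids and moving the boundary into benign territory.
benign_idx = np.where(y == 0)[0]
nearest = benign_idx[np.argsort(np.linalg.norm(X[benign_idx] - clean[1], axis=1))[:10]]
y_poisoned = y.copy()
y_poisoned[nearest] = 1
poisoned = fit_centroids(X, y_poisoned)

# A borderline but benign probe: the clean model typically classifies it as 0,
# while the poisoned model typically flags it as 1.
probe = np.array([1.4, 1.4])
print("clean model:   ", predict(clean, probe))
print("poisoned model:", predict(poisoned, probe))
```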

For these reasons, the experts agree: “we find no design space for solutions that provide substantial benefits to law enforcement without unduly risking the privacy and security of law-abiding citizens”.

As a “phase change” in information technology, client-side scanning “would gravely undermine [protection], making us all less safe and less secure”, they say, concluding that “it is a dangerous technology”.
