Cybersecurity chiefs back client-side mobile scanning for CSAM


Client-side scanning of mobile devices, which in effect provides a backdoor to look at content on people’s phones and other devices, has been backed by two UK cybersecurity chiefs, who say the method could be used to find images of child abuse. In a research paper, Ian Levy, technical director of the National Cyber Security Centre (NCSC), and Crispin Robinson, technical director for cryptanalysis at GCHQ, write that they see no reason why the controversial technique cannot be implemented securely.

In response to the document, academics and researchers expressed concern that the use of client-side scanning could create a culture of mass surveillance and violate people’s privacy, as well as create security risks. Others argue that it will simply force criminals to use other methods to share child sexual abuse material (CSAM).

GCHQ and NCSC chiefs have backed client-side scanning to identify images relating to child abuse. (Photo by gorodenkoff/iStock)

Levy and Robinson wrote their research paper on how client-side scanning techniques could help combat child abuse if implemented safely. They say that while more work is needed, there are “clear paths to implementation” with the required effective privacy and security properties. The paper analyses two archetypes of harm which, the authors claim, show that it is possible to provide strong protection for users and to ensure “that privacy and security are maintained for all”.

The authors wrote that while security concerns are often raised by those opposed to client-side scanning, “we do not believe that the techniques necessary to provide user security will inevitably lead to these outcomes.”

They use the example of hash matching, where abusive material is given a unique identifier so that it can be automatically detected and removed from online platforms. “Hash matching and other related technologies will identify exact or, in the case of perceptual hashes, close matches to previously seen content (typically images or video) that a trusted source (typically one or more NGOs) has classified as illegal,” the authors write. They go on to say that this type of approach has very high accuracy and that matches are usually subject to human review before being sent to authorities.
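To make the mechanism concrete, here is a minimal Python sketch of on-device hash matching, assuming a hypothetical database of hashes distributed by a trusted source; the placeholder entry, threshold and function names are invented for illustration and are not drawn from the Levy-Robinson paper.

import hashlib

# Hypothetical database of SHA-256 digests of known illegal images,
# distributed to the device by a trusted source (e.g. an NGO).
KNOWN_EXACT_HASHES = {"0" * 64}  # placeholder entry, not a real digest

def exact_match(image_bytes: bytes) -> bool:
    # Exact hash matching catches only byte-identical copies.
    return hashlib.sha256(image_bytes).hexdigest() in KNOWN_EXACT_HASHES

def hamming_distance(a: int, b: int) -> int:
    # Number of bits in which two perceptual hashes differ.
    return bin(a ^ b).count("1")

def perceptual_match(image_hash: int, known_hashes: set, max_distance: int = 8) -> bool:
    # Perceptual matching also catches close matches: re-encoded or
    # lightly edited copies whose hashes differ in only a few bits.
    return any(hamming_distance(image_hash, h) <= max_distance for h in known_hashes)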

“There is a small risk that the child safety NGO may have misclassified the image, but the human review step mitigates the consequences of this, along with the impact of false positives from the detection algorithm,” they say.

In addition, Levy and Robinson say machine learning can be used to classify content, identifying previously unseen CSAM or conversations likely to relate to child sexual abuse, whether between offenders or between an offender and a child. In practice, they write, such classifiers are deployed with parameters tuned for high precision, but because the technique “will always produce significant false positives”, human moderation is required.
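The precision trade-off the authors describe can be illustrated with a short sketch: scores from a hypothetical classifier are escalated to human review only above a conservative threshold. The threshold value and field names are invented for the example.

from dataclasses import dataclass

@dataclass
class Detection:
    item_id: str
    score: float  # classifier's estimated probability that the item is CSAM

# A deliberately high threshold trades recall for precision: fewer items
# are flagged, but a larger share of the flags are true positives.
REVIEW_THRESHOLD = 0.98  # invented value for illustration

def triage(detections):
    # Queue only high-confidence detections for human moderation;
    # nothing is passed to the authorities without that review step.
    return [d for d in detections if d.score >= REVIEW_THRESHOLD]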

However, in an end-to-end encrypted environment, vendors would have to scan the content on the user’s device itself, experts say – this is known as client-side scanning.


What is client-side scanning?

Client-side scanning is a broad term that refers to systems that scan message content, such as images, videos, text and other files, against a database of known objectionable material before the message is sent to the intended person or device. A similar approach is used by antivirus software to find and disable malware on computers.
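Schematically, the defining property is where the check sits in the sending path: on the device, before encryption. The sketch below is a toy illustration of that ordering; every name in it is a placeholder rather than any vendor’s real API.

import hashlib

KNOWN_HASHES: set = set()  # hash database assumed to be shipped to the device

def matches_known_database(content: bytes) -> bool:
    return hashlib.sha256(content).hexdigest() in KNOWN_HASHES

def e2e_encrypt(content: bytes, recipient: str) -> bytes:
    return content  # stand-in for a real end-to-end encryption layer

def send_attachment(content: bytes, recipient: str):
    # The scan happens on the device, before the content is encrypted
    # for transport: this ordering is what makes it "client-side".
    if matches_known_database(content):
        return None  # flagged content is withheld and escalated instead
    return e2e_encrypt(content, recipient)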

Law enforcement agencies claim this method is needed to access messages and other data to help identify and prevent the sharing of objectionable content. But opponents say its implementation would render end-to-end encryption, which offers a higher degree of privacy, ineffective.

In the summer of 2021, Apple announced an iOS feature called ‘NeuralHash’ that attempted to detect known child sexual abuse images by running on the user’s device rather than on the company’s servers. The algorithm used a convolutional neural network (CNN) to compute a perceptual hash of each image, so that copies of known abuse images could be identified even after being cropped, rotated or otherwise altered.
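NeuralHash itself relied on a learned CNN embedding, which is beyond a short example, so the sketch below uses the much simpler classical ‘average hash’ only to illustrate what any perceptual hash is for: small changes to an image should change only a few bits of its hash. It assumes the Pillow imaging library is installed.

from PIL import Image  # assumes the Pillow library

def average_hash(path: str, size: int = 8) -> int:
    # Downscale to a small greyscale thumbnail, then set one bit per
    # pixel according to whether it is brighter than the mean.
    img = Image.open(path).convert("L").resize((size, size))
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    # Re-encoding or resizing flips only a handful of bits; a CNN-based
    # hash such as NeuralHash aims to survive heavier edits such as
    # crops and rotations as well.
    return bits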

NeuralHash drew a backlash from privacy activists, and when the technical details of the system became available, researchers published scripts demonstrating hash collisions that could be used to attack devices with false matches. Apple subsequently announced that it was delaying the rollout of the technology.

The Internet Society, a nonprofit organization that says it’s dedicated to building an open, secure and trusted Internet, believes that client-side scanning would threaten the privacy and security that users assume and rely on. “By making message content no longer private between sender and receiver, client-side scanning breaks the trust model of end-to-end encryption,” it said.

Will vendors implement client-side scanning?

Apple’s quick move away from NeuralHash following the criticism is an example of vendors’ reluctance to push through something that would make their devices less secure and thus damage their reputation and profits, says Professor Alan Woodward of the University of Surrey’s Surrey Centre for Cyber Security.

He told Tech Monitor that companies like Google and Apple won’t implement client-side scanning on their devices if the wider ecosystem doesn’t want to accept it. He says Google and Facebook, for example, have opted for end-to-end encryption, so they can’t give the content of messages to government agencies, even with a warrant, because that would be bad for business.


He went on to say that while citizens might want child abuse addressed, he doesn’t think they would want Big Tech companies scanning their phones, because they would not know what would be reported, to whom, or whether the system might be misused. He adds that if Apple and Google decided to build client-side scanning into their iOS and Android mobile operating systems, they would lose customers and so “wouldn’t go down that road”.

Woodward added that there are other ways to track CSAM, such as using metadata. He explained that Facebook uses this method to monitor behavior patterns associated with child sexual abuse. “Instead of general surveillance, you can actually start zeroing in and you can do a more targeted form of surveillance on people who look like they’re going to be of interest,” he said.
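As a toy illustration of that metadata-only approach, the sketch below scores an account on behavioural signals without reading any message content; the features, weights and threshold are all invented for the example and do not describe Facebook’s actual system.

from dataclasses import dataclass

@dataclass
class AccountActivity:
    # Features drawn from metadata only; message content is never read.
    minors_contacted: int
    new_contacts_per_week: int
    user_reports_received: int

def risk_score(a: AccountActivity) -> int:
    # Crude hand-weighted score; a real system would be statistical.
    return a.minors_contacted * 3 + a.new_contacts_per_week + a.user_reports_received * 5

def flag_for_targeted_review(a: AccountActivity, threshold: int = 50) -> bool:
    # Accounts above the threshold get closer, targeted scrutiny
    # instead of blanket surveillance of everyone.
    return risk_score(a) >= threshold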

Read more: The war on end-to-end encryption
