Report: Client-Side Scanning Is An Insecure Nightmare Just Waiting To Be Exploited By Governments
from the 14-out-of-14-cybersecurity-experts-agree dept
In August, Apple declared that combating the spread of CSAM (child sexual abuse material) was more important than protecting millions of users who've never used their devices to store or share illegal material. While encryption would still protect users' data and communications (in transit and at rest), Apple had given itself permission to inspect data residing on people's devices before allowing it to be sent to others.
This is not a backdoor in the traditional sense. But it can be exploited just like an encryption backdoor if government agencies want access to devices' contents, or if they mandate that companies like Apple do more to halt the spread of other content governments have declared troublesome or illegal.
Apple may have implemented its client-side scanning carefully after weighing the pros and cons of introducing a security flaw, but there's simply no way to engage in this sort of scanning without creating a very slippery slope, one capable of accommodating plenty of unwanted (and unwarranted) government intervention.
Apple has put this program on hold for the time being, citing concerns raised by pretty much everyone who knows anything about client-side scanning and encryption. The conclusions that prompted Apple to step away from the precipice of this slope (at least momentarily) have been compiled in a report [PDF] on the negative side effects of client-side scanning, written by a large group of cybersecurity and encryption experts (Hal Abelson, Ross Anderson, Steven M. Bellovin, Josh Benaloh, Matt Blaze, Jon Callas, Whitfield Diffie, Susan Landau, Peter G. Neumann, Ronald L. Rivest, Jeffrey I. Schiller, Bruce Schneier, Vanessa Teague, and Carmela Troncoso). (via The Register)
Here's how that slippery slope looks. Apple's client-side scanning may be targeted, utilizing hashes of known CSAM images, but once the process is in place, it can easily be repurposed.
Only policy decisions prevent the scanning expanding from illegal abuse images to other material of interest to governments; and only the lack of a software update prevents the scanning expanding from static images to content stored in other formats, such as voice, text, or video.
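To see why only policy stands in the way, it helps to notice that, structurally, a hash-matching scanner is just a membership test against an opaque list. Here's a minimal sketch of that structure in Python -- hypothetical code, with an ordinary SHA-256 digest standing in for Apple's proprietary NeuralHash perceptual hash and its private set intersection protocol -- showing that nothing in the matching logic knows or cares what the hashes represent:

```python
import hashlib

# Hypothetical sketch only. Apple's actual system uses NeuralHash (a
# perceptual image hash) plus a private set intersection protocol; a plain
# SHA-256 blocklist stands in here to show the shape of the matching step.

def scan_before_upload(content: bytes, blocklist: set[str]) -> bool:
    """Return True if the content's digest appears in the blocklist."""
    return hashlib.sha256(content).hexdigest() in blocklist

# The "targeted content" is just an opaque set of digests. Nothing in the
# matching code constrains what they were derived from: swapping CSAM
# hashes for hashes of leaked documents or banned political imagery
# requires no code change at all -- only a different list pushed to devices.
csam_hashes = {hashlib.sha256(b"known abuse image").hexdigest()}
censorship_hashes = {hashlib.sha256(b"banned pamphlet").hexdigest()}

photo = b"banned pamphlet"
print(scan_before_upload(photo, csam_hashes))        # False
print(scan_before_upload(photo, censorship_hashes))  # True
```

The point of the toy example is the report's point: the scanner and the target list are separable, the list is the easy part to change, and the same matching step works on any bytes -- voice, text, or video included.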
And if people don't think governments will demand more than Apple's proactive CSAM efforts, they haven't been paying attention. CSAM is only the beginning of the list of content governments would like to see tech companies target and control.
While the Five Eyes governments and Apple have been talking about child sex-abuse material (CSAM)—specifically images—in their push for CSS, the European Union has included terrorism and organized crime along with sex abuse. In the EU’s view, targeted content extends from still images through videos to text, as text can be used for both sexual solicitation and terrorist recruitment. We cannot talk merely of “illegal” content, because proposed UK laws would require the blocking online of speech that is legal but that some actors find upsetting.
Once capabilities are built, reasons will be found to make use of them. Once there are mechanisms to perform on-device censorship at scale, court orders may require blocking of nonconsensual intimate imagery, also known as revenge porn. Then copyright owners may bring suit to block allegedly infringing material.
That's just the policy and law side. And it's only a brief overview of the clearly foreseeable expansions of CSS to other content -- expansions that also raise concerns about CSS being used as a tool for government censorship. Apple has already made concessions to notoriously censorial governments like China's in order to continue to sell products and services there. Additional demands will obviously be made if Apple implements scanning that can be exploited to locate and censor critics of those governments.
There's plenty of bad stuff on the technical side, too. CSS is pretty much malware, the report says (a short sketch after the excerpt makes its least-privilege point concrete):
CSS is at odds with the least-privilege principle. Even if it runs in middleware, its scope depends on multiple parties in the targeting chain, so it cannot be claimed to use least-privilege in terms of the scanning scope. If the CSS system is a component used by many apps, then this also violates the least-privilege principle in terms of scope. If it runs at the OS level, things are worse still, as it can completely compromise any user’s device, accessing all their data, performing live intercept, and even turning the device into a room bug.
CSS has difficulty meeting the open-design principle, particularly when the CSS is for CSAM, which has secrecy requirements for the targeted content. As a result, it is not possible to publicly establish what the system actually does, or to be sure that fixes done in response to attacks are comprehensive. Even a meaningful audit must trust that the targeted content is what it purports to be, and so cannot completely test the system and all its failure modes.
Finally, CSS breaks the psychological-acceptability principle by introducing a spy in the owner’s private digital space. A tool that they thought was theirs alone, an intimate device to guard and curate their private life, is suddenly doing surveillance on behalf of the police. At the very least, this takes the chilling effect of surveillance and brings it directly to the owner’s fingertips and very thoughts.
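The least-privilege failure is easy to make concrete. Purely as a hypothetical illustration (none of this is Apple's code), compare a scanner scoped to the sharing boundary with one granted OS-level file access: the first sees exactly what the user chose to send, the second can enumerate everything the user owns.

```python
import os

# Hypothetical contrast, not real CSS code: what "scanning scope" means at
# two different privilege levels.

def scan_upload_only(outgoing_file: str) -> list[str]:
    """Scoped to the sharing boundary: sees only the one file the user
    explicitly chose to upload."""
    return [outgoing_file]

def scan_os_level(home_dir: str) -> list[str]:
    """OS-level scope: can walk the user's entire filesystem subtree,
    shared or not -- the reach the report warns about."""
    found: list[str] = []
    for root, _dirs, files in os.walk(home_dir):
        found.extend(os.path.join(root, name) for name in files)
    return found
```

Everything else the report warns about -- live intercepts, the room-bug scenario -- flows from that second scope combined with the access to microphones and radios that an OS-level component typically has.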
While the report does offer some suggestions on how to make scanning less exploitable, the downsides are too numerous to conclude this can somehow be done safely. Given how many intrusive surveillance programs have already been justified by concerns about terrorism or the spread of illicit material, CSS -- no matter how it's implemented -- will become a tempting tool for governments to exploit.
In a world where our personal information lies in bits carried on powerful communication and storage devices in our pockets, both technology and laws must be designed to protect our privacy and security, not intrude upon it. Robust protection requires technology and law to complement each other. Client-side scanning would gravely undermine this, making us all less safe and less secure.
Despite this comprehensive report warning against the implementation of client-side scanning, there's a chance Apple may still roll its version out. And if it does, the pressure will be on other companies to do at least as much as Apple is doing to combat CSAM. The only upside is that if governments decide the scanning should be used for purposes other than those Apple intends, Apple retains the power to shut the system down.
Filed Under: client side scanning, csam, iphones, security
Companies: apple