from the this-is-dangerously-dumb dept
Apple is well known for its approach to security on its iPhones. Most famously, it went to court to fight the FBI's demand that it effectively insert a backdoor into its on-device encryption (by being forced to push a custom update to the phone). Apple earned tons of goodwill in the security community (and with the public) because of that, though not in the law enforcement community. Unfortunately, it appears that Apple is now throwing away much of that goodwill and has decided to undermine the security of its phones... "for the children" (of course).
This week, Apple announced what it refers to as "expanded protections for children." The company has been receiving lots of pressure from certain corners (including law enforcement groups who hate encryption), which claim that its encryption helps hide child sexual abuse material (CSAM) on phones (and in iCloud accounts). So Apple's plan is to introduce what's generally called "client-side scanning" to search for CSAM on phones, as well as a system that scans iCloud content for potentially problematic material. Apple claims that it's doing this in a manner that is protective of privacy. And, to be fair, this clearly isn't something that Apple rolled out willy-nilly without considering the trade-offs. It's clear from Apple's detailed explanations of the new "safety" features that it is trying to balance the competing interests at play here. And, obviously, stopping the abuse of children is an important goal.
The problem is that, even with all of the balancing Apple has done here, it has definitely moved down a very dangerous and very slippery slope toward using this approach for other things.
Apple's brief description of its new offerings is as follows:
Apple is introducing new child safety features in three areas, developed in collaboration with child safety experts. First, new communication tools will enable parents to play a more informed role in helping their children navigate communication online. The Messages app will use on-device machine learning to warn about sensitive content, while keeping private communications unreadable by Apple.
Next, iOS and iPadOS will use new applications of cryptography to help limit the spread of CSAM online, while designing for user privacy. CSAM detection will help Apple provide valuable information to law enforcement on collections of CSAM in iCloud Photos.
Finally, updates to Siri and Search provide parents and children expanded information and help if they encounter unsafe situations. Siri and Search will also intervene when users try to search for CSAM-related topics.
Some of the initial concerns about these descriptions -- including fears that, say, LGBTQ+ children might be outed to their parents -- have been somewhat (though not entirely) alleviated with the more detailed explanation. But that doesn't mean there aren't still very serious concerns about how this plays out in practice and what this means for Apple's security.
First, there's the issue of client-side scanning. As an EFF post from 2019 explains, client-side scanning breaks end-to-end encryption. The EFF's latest post about Apple's announcement includes a quick description of how this introduces a backdoor:
We’ve said it before, and we’ll say it again now: it’s impossible to build a client-side scanning system that can only be used for sexually explicit images sent or received by children. As a consequence, even a well-intentioned effort to build such a system will break key promises of the messenger’s encryption itself and open the door to broader abuses.
All it would take to widen the narrow backdoor that Apple is building is an expansion of the machine learning parameters to look for additional types of content, or a tweak of the configuration flags to scan, not just children’s, but anyone’s accounts. That’s not a slippery slope; that’s a fully built system just waiting for external pressure to make the slightest change. Take the example of India, where recently passed rules include dangerous requirements for platforms to identify the origins of messages and pre-screen content. New laws in Ethiopia requiring content takedowns of “misinformation” in 24 hours may apply to messaging services. And many other countries—often those with authoritarian governments—have passed similar laws. Apple’s changes would enable such screening, takedown, and reporting in its end-to-end messaging. The abuse cases are easy to imagine: governments that outlaw homosexuality might require the classifier to be trained to restrict apparent LGBTQ+ content, or an authoritarian regime might demand the classifier be able to spot popular satirical images or protest flyers.
We’ve already seen this mission creep in action. One of the technologies originally built to scan and hash child sexual abuse imagery has been repurposed to create a database of “terrorist” content that companies can contribute to and access for the purpose of banning such content. The database, managed by the Global Internet Forum to Counter Terrorism (GIFCT), is troublingly without external oversight, despite calls from civil society. While it’s therefore impossible to know whether the database has overreached, we do know that platforms regularly flag critical content as “terrorism,” including documentation of violence and repression, counterspeech, art, and satire.
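To make EFF's point concrete, here is a minimal sketch (in Python) of a generic client-side hash-matching scanner. This is not Apple's actual design -- Apple describes a NeuralHash perceptual hash plus cryptographic blinding of the database -- and every name and value below is hypothetical. The point is simply that the matching logic itself is content-agnostic: it flags whatever is on the list it is handed.

# Minimal sketch of a generic client-side hash scanner, for illustration only.
# NOT Apple's actual NeuralHash / blinded-database design; names are hypothetical.
import hashlib

def perceptual_hash(image_bytes: bytes) -> str:
    # Stand-in for a real perceptual hash; a cryptographic hash keeps the sketch
    # runnable, though unlike a perceptual hash it won't match near-duplicates.
    return hashlib.sha256(image_bytes).hexdigest()

def scan_photos(photos: list[bytes], flag_list: set[str]) -> list[str]:
    """Return the hashes of any photos that appear on the supplied list."""
    matches = []
    for photo in photos:
        h = perceptual_hash(photo)
        if h in flag_list:  # the scanner has no idea what the list represents
            matches.append(h)
    return matches

# The code is identical whether the list contains CSAM hashes or hashes of
# protest flyers; only the party supplying the list changes.
csam_hashes = {"hash-supplied-by-a-child-safety-org"}   # placeholder values
dissident_hashes = {"hash-supplied-by-a-government"}    # placeholder values
print(scan_photos([b"example photo bytes"], csam_hashes))

Swap in a different hash list and the exact same scanner looks for satirical images, protest flyers, or anything else a government supplies -- which is why EFF calls this "a fully built system just waiting for external pressure."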
It's actually difficult to find any security experts who support Apple's approach here. Alec Muffett summed it up in a single tweet.
This is the very slippery slope. If you somehow believe that governments won't demand Apple cave on a wide variety of other types of content, you haven't been paying attention. Of course, Apple can claim that it will stand strong against such demands, but now we're back to being entirely dependent on trusting Apple.
As noted above, there were some initial concerns about the parent notifications. As EFF's description notes, the rollout does include some level of user consent before parents are notified (the age-tiered flow is sketched below the excerpt), but it's still quite problematic:
In these new processes, if an account held by a child under 13 wishes to send an image that the on-device machine learning classifier determines is a sexually explicit image, a notification will pop up, telling the under-13 child that their parent will be notified of this content. If the under-13 child still chooses to send the content, they have to accept that the “parent” will be notified, and the image will be irrevocably saved to the parental controls section of their phone for the parent to view later. For users between the ages of 13 and 17, a similar warning notification will pop up, though without the parental notification.
Similarly, if the under-13 child receives an image that iMessage deems to be “sexually explicit”, before being allowed to view the photo, a notification will pop up that tells the under-13 child that their parent will be notified that they are receiving a sexually explicit image. Again, if the under-13 user accepts the image, the parent is notified and the image is saved to the phone. Users between 13 and 17 years old will similarly receive a warning notification, but a notification about this action will not be sent to their parent’s device.
This means that if—for instance—a minor using an iPhone without these features turned on sends a photo to another minor who does have the features enabled, they do not receive a notification that iMessage considers their image to be “explicit” or that the recipient’s parent will be notified. The recipient’s parents will be informed of the content without the sender consenting to their involvement. Additionally, once sent or received, the “sexually explicit image” cannot be deleted from the under-13 user’s device.
Whether sending or receiving such content, the under-13 user has the option to decline without the parent being notified. Nevertheless, these notifications give the sense that Apple is watching over the user’s shoulder—and in the case of under-13s, that’s essentially what Apple has given parents the ability to do.
It is also important to note that Apple has chosen to use the notoriously difficult-to-audit technology of machine learning classifiers to determine what constitutes a sexually explicit image. We know from years of documentation and research that machine-learning technologies, used without human oversight, have a habit of wrongfully classifying content, including supposedly “sexually explicit” content. When blogging platform Tumblr instituted a filter for sexual content in 2018, it famously caught all sorts of other imagery in the net, including pictures of Pomeranian puppies, selfies of fully-clothed individuals, and more. Facebook’s attempts to police nudity have resulted in the removal of pictures of famous statues such as Copenhagen’s Little Mermaid. These filters have a history of chilling expression, and there’s plenty of reason to believe that Apple’s will do the same.
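For clarity, here is a rough sketch (in Python) of the age-tiered flow EFF describes above. It is not Apple's code, just the decision logic as described, with hypothetical names, and it assumes the on-device classifier has already flagged an image as "sexually explicit."

# Rough sketch of the age-tiered notification flow described above; not Apple's
# code. Assumes the on-device classifier has already flagged the image.
def handle_flagged_image(age: int, child_proceeds: bool) -> dict:
    """Decision logic, per EFF's description, for an image flagged as explicit."""
    if age < 13:
        if not child_proceeds:
            # The child can decline, and no notification goes to the parent.
            return {"delivered": False, "parent_notified": False, "saved_for_parent": False}
        # If the child proceeds anyway, the parent is notified and the image is
        # saved to the parental-controls section of the device.
        return {"delivered": True, "parent_notified": True, "saved_for_parent": True}
    if 13 <= age <= 17:
        # Teens see the warning, but nothing is sent to the parent.
        return {"delivered": child_proceeds, "parent_notified": False, "saved_for_parent": False}
    # Adult accounts are untouched by this feature.
    return {"delivered": True, "parent_notified": False, "saved_for_parent": False}

print(handle_flagged_image(age=12, child_proceeds=True))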
There remains a real risk of false positives in this kind of system. There's a blog post well worth reading that explains how automated matching technologies fail, often in catastrophic ways. You really need to read that entire post, as brief excerpts wouldn't do it justice -- but as it notes, the risk of false positives here is very high, and the cost of such false positives can be catastrophic. Obviously, CSAM is also catastrophic, so you can see the real challenge in balancing those interests, but there are legitimate concerns that the balance here is not properly calibrated.
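As a toy illustration of why false positives are baked into this kind of matching, consider a perceptual-hash comparison with a "close enough" threshold. The hash size, threshold, and error rates below are made-up numbers for the sketch, not Apple's actual parameters; the point is that any non-zero per-photo error rate multiplies out at iCloud scale.

# Toy illustration of threshold-based hash matching and false positives.
# The threshold and rates are assumptions, not Apple's parameters.
def hamming_distance(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

MATCH_THRESHOLD = 10  # hypothetical: hashes this close count as "the same image"

def is_match(photo_hash: int, database_hash: int) -> bool:
    # Perceptual hashes of completely unrelated images occasionally land within
    # the threshold, which is where false positives come from.
    return hamming_distance(photo_hash, database_hash) <= MATCH_THRESHOLD

# Back-of-the-envelope: even a tiny per-photo error rate adds up at scale.
false_positive_rate = 1e-7              # made-up rate for illustration
photos_scanned_per_day = 1_000_000_000  # made-up volume for illustration
print(false_positive_rate * photos_scanned_per_day)  # ~100 innocent flags per day

At that kind of scale, the question of what happens to a falsely flagged account stops being hypothetical.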
Obviously, Apple is trying to walk a fine line here. No one wants to be supporting CSAM distribution. But, once again, all the pressure on Apple feels like people blaming the tool for (serious) abuses by its users, and, in demanding a "solution," opening up a very dangerous situation. If there were some way to guarantee that these technologies wouldn't be abused or wouldn't make mistakes, you could kind of see how this makes sense. But history has shown time and time again that neither guarantee is really possible. Opening up this hole in Apple's famous security means more demands are coming.
Filed Under: client side scanning, csam, encryption, for the children, parent notification, privacy, security
Companies: apple