from the another-hard-case dept
Foes of Section 230 are always happy to see a case where a court denies a platform its protection. What's alarming about Lemmon v. Snap is how comfortable so many of the statute's frequent defenders seem to be with the Ninth Circuit reversing the district court to deny Snapchat this defense. They mistakenly believe that this case raises a form of liability Section 230 was never intended to reach. On the contrary: the entire theory of the case is predicated on the idea that Snapchat let people talk about something they were doing. This expressive conduct is at the heart of what Section 230 was intended to protect, and denying the statute's protection here invites exactly the sort of harm to expression the law was passed to prevent.
The trouble with this case, like so many other cases with horrible facts, is that those facts can make it hard for courts to see the bigger picture. As we wrote in an amicus brief in the Armslist case, another Section 230 case where nightmarish facts obscured the important speech issues in play:
"Tragic events like the one at the heart of this case can often challenge the proper adjudication of litigation brought against Internet platforms. Justice would seem to call for a remedy, and if it appears that some twenty-year old federal statute is all that stands between a worthy plaintiff and a remedy, it can be tempting for courts to ignore it in order to find a way to grant that relief."
Here some teenagers were killed in a horrific high-speed car crash, and of course the tragedy of the situation creates an enormous temptation to find someone to blame. But while we can be sympathetic to the court's instinct, we can't condone the facile reasoning it employed to look past the speech issues in play because acknowledging them would have interfered with the conclusion the court was determined to reach. This is especially so because at one point the court even recognized that this was a case about user speech, before continuing on with an analysis that ignored its import:
Shortly before the crash, Landen opened Snapchat, a smartphone application, to document how fast the boys were going. [p. 5] (emphasis added)
This sentence, noting that the boys were trying to document how fast they were going, captures the crux of the case: the users were using the service to express themselves, albeit in a way that was harmful. But that's exactly what Section 230 is built for: to insulate service providers from liability when people use their services to express themselves in harmful ways, because, let's face it, people do it all the time. The court here wants us to believe that this case is somehow different from the sort of matter where Section 230 would apply, and that this "negligent design" claim involves a sort of harm Section 230 was never intended to reach. Unfortunately, that view is supported by neither the statutory text nor the weight of precedent, and for good reason: as explained below, it would eviscerate Section 230's critical protection for everyone.
As it had done in the HomeAway case, the court repeatedly tried to split an invisible hair to pretend it wasn't trying to impose liability arising out of the users' own speech. [See, e.g., p. 10, misapplying Barnes v. Yahoo]. Of course, a claim that a service for facilitating expression was negligently designed is inherently premised on the idea that there was a problem with the resulting expression. And the fact that the case was not about a specific form of legal liability manifest in the users' speech did not put it outside of Section 230. Section 230 is a purposefully broadly worded law ("No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider."), and here the court wants the platform to take responsibility for how its users used its services to express themselves. [p. 15, misapplying the Roommates.com case].
Section 230 also covers everything that could be wrong with expression unless the defect happens to fall into one of the few exceptions the statute enumerates: the expression involves an intellectual property right, violates federal criminal law, or otherwise implicates FOSTA. None of those exceptions applies here, and, in fact, in the same section of the law where these few exceptions are set forth there is also a pre-emption provision explicitly barring any state law from becoming the basis of any new exceptions. Yet that is exactly what the Ninth Circuit has now caused to happen by giving the go-ahead to a state law-based tort claim of "negligent design."
It hurts online speech if courts can carve out new exceptions. If judges can look post hoc at a situation where expressive activity has led to harm and decide that the degree of harm warrants stripping service providers of their Section 230 protection, then there is basically no point in having Section 230 on the books. If platforms have to litigate over whether it protects them, then it doesn't really matter whether it does or not, because they'll already have lost much of the value the protection was supposed to afford them: making it possible for them to facilitate others' expression in the first place. The inevitable consequence of this functional loss of statutory protection is that there will be fewer service providers available to facilitate as much user expression, if any at all.
But even if some limiting principle could be derived from this case to constrain courts from inventing other new exceptions, just having this particular "negligent design" exception will still harm plenty of speech. To begin with, one troubling aspect of the decision is that it is not particularly coherent, and one area of confusion is what the court actually thinks the negligent design is. [See, e.g., p. 15]. The court spends time complaining that Snapchat somehow deliberately encourages users to drive at unsafe speeds, even though the court itself acknowledged that while Snapchat apparently rewards users with "trophies, streaks, and social recognitions" to encourage them to keep using the service [p. 5], it "does not tell its users how to earn these various achievements" [p. 5]. It is a leap to say that Snap is somehow wrongfully encouraging users to do anything when it is not actually saying anything of the kind. [See p. 6 ("Many of Snapchat’s users suspect, if not actually 'believe,' that Snapchat will reward them for 'recording a 100-MPH or faster [s]nap' using the Speed Filter.")]. In fact, as the decision itself notes, Snapchat actually cautioned against reckless posting behavior. [See p. 6 with the screenshot including the text, "Don't snap and drive."] If the case were actually about Snap explicitly encouraging dangerous behavior ("Drive 100 mph and win a prize!"), then there might legitimately be a claim predicated on the platform's own harmful speech, to which Section 230 wouldn't apply. But the record does not support that sort of theory; the theory of liability was predicated on a user's apparently harmful speech, and in any case the alleged encouragement wasn't what the plaintiffs charged was negligently designed anyway.
Instead, what was at issue was the "speed filter," a tool that helped users document how fast they were traveling. Unlike the district court, the Ninth Circuit seemed unable to fathom that a tool that helped document speed could be used for anything other than unsafe purposes. But of course it can. Whether traveling at speed is dangerous depends entirely on context: a user in a plane could easily document traveling at significant speed perfectly safely, while a user on a bike documenting travel at a much slower speed could still be in tremendous peril. One reason we have Section 230 is that it is impossible for a service provider to effectively police all the uses of its platform, and even if it could, it would be unlikely to know whether the speeding was safe or not. But in denying Snapchat Section 230 protection based on the presumption that such speech is always unsafe, the court has effectively decided that no one can ever document that they are traveling quickly, even in a safe way, because it is now too legally risky for the platform to give users the tools to do it.
Furthermore, if a platform could lose its Section 230 protection because the design of its services enabled speech that was harmful, it would eviscerate Section 230, because there are few platforms, if any, whose design would not. For example, Twitter's design lets people post harmful expression. Perhaps one might even argue it encourages them to do so by making it so easy to post such garbage. Of course, Twitter also makes it easy to post things that are not harmful, but the Ninth Circuit's decision here does not seem to care that a design eliciting user expression might be used for both good and bad ends. Per this decision, which asserts a state law-created "duty to design a reasonably safe product" [see p. 13, misapplying the Doe 14 v. Internet Brands case], even a product that meets the definition of an "interactive computer service" set forth in Section 230 (along with its pre-emption provision) loses Section 230's protection if its design could be used to induce bad expression. But that would effectively mean that everyone could always plead around Section 230, because nearly every Section 230 case arises from someone having used the service in a harmful way the service enabled. It is unfortunate that the Ninth Circuit has now opened the door to such litigation, as the consequences stand to be chilling to all kinds of online speech and services Section 230 was designed to protect.
Filed Under: 9th circuit, intermediary liability, lemmon, negligence, product liability, section 230, speech, speed filter
Companies: snap