Netflix Files Anti-SLAPP Motion To Dismiss Lawsuit Claiming One Of Its Series Caused A Teen To Commit Suicide
from the even-an-algorithm-is-protected-speech dept
Because Netflix is big, it draws lawsuits. It has been sued for defamation, for copyright infringement, and, oddly, for defamation via the use of a private prison's logo in a fictional TV show. It has also been sued for supposedly contributing to a teen's suicide with its series "13 Reasons Why," which dealt with a lot of disturbing subject matter teens face daily, like bullying, sexual assault, and -- most relevant here -- suicide. The final episode of the first season contained a suicide scene, one Netflix removed two years after the show debuted.
While the teen's death is undeniably a tragedy, the attempt to blame Netflix for it is severely misguided. The lawsuit filed by the teen's survivors alleges that Netflix had a duty to warn viewers about the content (content warnings were added to the show a year after its release) and that it failed to do so, making it indirectly liable for the death.
Netflix is now trying to get the lawsuit dismissed using California's anti-SLAPP law because, as it persuasively argues, this case is all about protected speech, however much the plaintiffs try to recast it as a consumer protection issue. (h/t Reason)
Netflix's anti-SLAPP motion [PDF] points out this isn't the first time teen suicide has been depicted in pop culture, nor is it the first time people have tried to sue creators over the content of their creations. None of those lawsuits have been successful.
13 Reasons Why is not the first work to tell a story about teen suicide. The subject has been explored in countless literary works, motion pictures, songs, and TV shows—everything from Romeo and Juliet to Dead Poets Society. And this is not the first lawsuit that has claimed that the media’s depiction of suicide and violence is to blame, and should be held legally liable, for real-life suicides and other tragic events. Courts, however, have repeatedly rejected such suits.
Since this lawsuit directly implicates creative expression, Netflix says California's anti-SLAPP law applies.
Without question, 13 Reasons Why is protected expression under the First Amendment. And the FAC itself, which is replete with citations to articles about 13 Reasons Why and its subject matter, makes plain that 13 Reasons Why’s speech is in “connection with a public issue.” Plaintiffs try to sidestep the anti-SLAPP statute and the First Amendment by insisting that their claims are not based on the content of 13 Reasons Why, but on an alleged “failure to warn” or breach of a purported duty to protect “vulnerable populations” from the content. But without the allegations that the show’s content is “dangerous,” Plaintiffs’ theories fall apart. Not only do those theories strike at the free expression embodied in the show, they target Netflix’s conduct “in furtherance” of the distribution of the show, and therefore bring this lawsuit squarely within the ambit of the anti-SLAPP statute.
Not only is the content of the series protected expression, but so is the algorithm that may have recommended the show to the teen. The plaintiffs' "failure to protect" theory argues that Netflix's recommendation algorithm is itself reckless and dangerous because it prompts users to select titles that may contain disturbing subject matter. But whether a human or an algorithm makes the recommendation is irrelevant: either way, it's an exercise of editorial control, which is protected by the First Amendment.
The recommendations system, and the display of suggested titles, is speech. It evinces “[a]n intent to convey a particularized message,” Spence v. Washington, 418 U.S. 405, 410–11 (1974)—namely, a message about what shows and movies a viewer might choose from to watch. The recommendations fall within the well-recognized right to exercise “editorial control and judgment.” Miami Herald Pub. Co. v. Tornillo, 418 U.S. 241, 258 (1974). Plaintiffs allege that the recommendations here are different because they are dictated by an algorithm. But the fact that the recommendations “may be produced algorithmically” makes no difference to the analysis.
[...]
The suggestion that a viewer watch a particular show falls within the broad scope of conduct in furtherance of the right to free speech. The subject of the recommendation—13 Reasons Why—is itself protected speech, and the recommendations facilitate Netflix’s protected distribution and dissemination of that speech.
However the plaintiffs frame them, their claims can't be separated from the show's creative expression, which is exactly why the state's anti-SLAPP law should apply.
Finally, like Plaintiffs’ failure-to-warn theory, Plaintiffs’ recommendation theory cannot be disentangled from Plaintiffs’ allegation that the underlying content is “dangerous”: the entire premise of the claim is that Netflix had a duty to identify which viewers are “vulnerable,” and ensure that the algorithm does not recommend 13 Reasons Why (or other content Plaintiffs deem unsuitable) to such viewers.
If, as the plaintiffs argue, creators and producers of disturbing content could be held directly liable for the actions of viewers, the end result would be self-censorship and fewer outlets willing to distribute such work. Suicide has been the subject of countless creative works for hundreds of years. A lack of content warnings ahead of viewing doesn't make a viewer's death Netflix's fault. Viewers control what they watch and, given the vast amount of information available online about movies and TV series, there's little reason to believe anyone has to go into a show "blind."
As tragic as this situation is, the blame (if there is any to assign) doesn't lie with Netflix. Netflix is no more responsible for this death than the creators of the series or the writer of the novel that inspired it.
Filed Under: 13 reasons why, 1st amendment, anti-slapp, blame, california, liability, protected speech, suicide
Companies: netflix