Ninth Circuit Appeals Court Says Some Disturbing Stuff About Section 230 While Dumping Another Two 'Sue Twitter For Terrorism' Lawsuits
from the not-a-lot-of-good-news-in-the-167-pages dept
In a very dense and somewhat counterintuitive opinion [PDF], the Ninth Circuit Court of Appeals has dumped two more of the dozens of bogus "sue social media companies for acts committed by terrorists" lawsuits. But it has kept one alive. Worse, the 167-page ruling comes with concurring opinions that suggest Ninth Circuit judges think Section 230 immunity is, on the whole, letting social media companies get away with too much bad stuff.
The two lawsuits whose dismissals were affirmed deal with the San Bernardino shooting and the terrorist attacks in Paris, France. The one kept alive deals with a terrorist attack in Istanbul, Turkey. The first case (Gonzalez v. Google) deals with allegations that Google's revenue sharing with terrorist organizations amounted to material support. But the allegations aren't strong enough to sustain the lawsuit.
Although monetary support is undoubtedly important to ISIS’s terrorism campaign, the TAC is devoid of any allegations about how much assistance Google provided. As such, it does not allow the conclusion that Google’s assistance was substantial. Nor do the allegations in the TAC suggest that Google intended to assist ISIS. Accordingly, we conclude the Gonzalez Plaintiffs failed to state a claim for aiding-and-abetting liability under the ATA. We do not consider whether the identified defects in the Gonzalez Plaintiffs’ revenue-sharing claims—principally, the absence of any allegation regarding the amount of the shared revenue—could be cured by further amendment because the Gonzalez Plaintiffs were given leave to amend those claims and declined to do so.
The third case (Clayborn v. Twitter) involves several of the same allegations, but deals with the San Bernardino shooting. The claims here fail mainly because there was no evidence ISIS directed or was directly involved in the attack.
Even if Congress intended “authorized” to include acts ratified by terrorist organizations after the fact, ISIS’s statement after the San Bernardino Attack fell short of ratification. The complaint alleges that ISIS stated, “Two followers of Islamic State attacked several days ago a center in San Bernardino in California, we pray to God to accept them as Martyrs.” This clearly alleges that ISIS found the San Bernardino Attack praiseworthy, but not that ISIS adopted Farook’s and Malik’s actions as its own.
The second case (Taamneh v. Twitter) stays alive. The allegations are pretty much identical to those in Gonzalez v. Google, with the main difference being what issues the lower court reached. In this case, there was no Section 230 discussion. And, as the Ninth Circuit sees it, that difference allows the claims it dismissed in Gonzalez to survive in Taamneh.
Because the bulk of the Gonzalez Plaintiffs’ claims were properly dismissed on the basis of § 230 immunity, our decision in Gonzalez principally focuses on whether the Gonzalez Plaintiffs’ revenue-sharing theory sufficed to state a claim under the ATA. In contrast, the district court in Taamneh did not reach § 230; it only addressed whether the Taamneh Plaintiffs plausibly alleged violations of the ATA for purposes of Rule 12(b)(6). The Taamneh appeal is further limited by the fact that the Taamneh Plaintiffs only appealed the dismissal of their aiding-and-abetting claim.
Hesitantly, the Ninth says this single lawsuit can proceed, mainly because it specified more direct forms of support, like the use of Google's AdSense and other revenue-sharing arrangements.
We also recognize the need for caution in imputing aiding-and-abetting liability in the context of an arms-length transactional relationship of the sort defendants have with users of their platforms. Not every transaction with a designated terrorist organization will sufficiently state a claim for aiding-and-abetting liability under the ATA. But given the facts alleged here, we conclude the Taamneh Plaintiffs adequately state a claim for aiding-and-abetting liability.
Now, here comes the bad stuff. The concurring opinions don't deal much with the facts of the case, but rather with some judges' view that Section 230 is too broad and should be trimmed back. If Congress won't do it, maybe the Ninth Circuit will.
Here's Judge Marsha Berzon's take:
I concur in the majority opinion in full. I write separately to explain that, although we are bound by Ninth Circuit precedent compelling the outcome in this case, I join the growing chorus of voices calling for a more limited reading of the scope of section 230 immunity. For the reasons compellingly given by Judge Katzmann in his partial dissent in Force v. Facebook, 934 F.3d 53 (2d Cir. 2019), cert. denied, 140 S. Ct. 2761 (2020), if not bound by Circuit precedent I would hold that the term “publisher” under section 230 reaches only traditional activities of publication and distribution—such as deciding whether to publish, withdraw, or alter content—and does not include activities that promote or recommend content or connect content users to each other. I urge this Court to reconsider our precedent en banc to the extent that it holds that section 230 extends to the use of machine-learning algorithms to recommend content and connections to users.
The judge has problems with the recommendation algorithms used by social media companies -- ones that naturally tend to show people the sorts of things they appear to be interested in. In most cases, this is innocuous. But in some cases, the algorithms can send people down rabbit holes.
If viewers start down a path of watching videos that the algorithms link to interest in terrorist content, their immersive universe can easily become one filled with ISIS propaganda and recruitment. Even if the algorithm is based on content-neutral factors, such as recommending videos most likely to keep the targeted viewers watching longer, the platform’s recommendations of what to watch send a message to the user. And that message—“you may be interested in watching these videos or connecting to these people”—can radicalize users into extremist behavior and contribute to deadly terrorist attacks like these.
This is a really weird, really dangerous place to start drawing the line in Section 230 lawsuits. Algorithms react to input from users. If YouTube can't be held directly responsible for videos uploaded by users, it makes sense that it also can't be held responsible for algorithmically suggesting content based on users' actions and preferences. The algorithm does next to nothing on its own without input from content viewers. It takes a user to really get it moving.
Judge Ronald Gould's concurrence contains many of the same complaints about social media recommendation algorithms. And he similarly believes Section 230 shouldn't cover these, apparently for the simple reason that terrorists can benefit from recommendations made to users who've expressed an affinity for content allegedly created by terrorists.
The majority ultimately concludes that Section 230 shields Google from liability for its content-generating algorithms. I disagree. I would hold that Plaintiffs’ claims do not fall within the ambit of Section 230 because Plaintiffs do not seek to treat Google as a publisher or speaker of the ISIS video propaganda, and the same is true as to the content-generating methods and devices of Facebook and Twitter.
Accepting plausible complaint allegations as true, as we must, Google, through YouTube, and Facebook and Twitter through their various platforms and programs, acted affirmatively to amplify and direct ISIS content, repeatedly putting it in the eyes and ears of persons who were susceptible to acting upon it.
And if Congress won't act fast enough for Judge Gould, then the courts should step in and regulate social media companies.
I further urge that regulation of social media companies would best be handled by the political branches of our government, the Congress and the Executive Branch, but that in the case of sustained inaction by them, the federal courts are able to provide a forum responding to injustices that need to be addressed by our justice system. Here, that means to me that the courts should be able to assess whether certain procedures and methods of the social media companies have created an unreasonably dangerous social media product that proximately caused damages, and here, the death of many.
Judge Gould says this should be easy to do correctly and without collateral damage to legitimate content.
The record shows that despite extensive media coverage, legal warnings, and congressional hearings, social media companies continued to provide a platform and communication services to ISIS before the Paris attacks, and these resources and services went heedlessly to ISIS and its affiliates, as the social media companies refused to actively identify ISIS YouTube accounts, and only reviewed accounts reported by other YouTube users. If, for example, a social media company must take down within a reasonable time sites identified as infringing copyrights, it follows with stronger logic that social media companies should take down propaganda sites of ISIS, once identified, within a reasonable time to avoid death and destruction to the public, which may be victimized by ISIS supporters. Moreover, if social media companies can ban certain speakers who flout their rules by conveying lies or inciting violence, as was widely reported in the aftermath of tweets and posts relating to the recent “insurrection” of January 6, 2021, then it is hard to see why such companies could not police and prohibit the transmission of violent ISIS propaganda videos, in the periods preceding a terrorist attack.
This ignores the fact that the DMCA process is pretty much an ongoing train wreck, one that's abused to silence speech and often mistakenly targets non-infringing content. And social media companies' attempts to stop the spread of disinformation or deal with harassing/threatening content have rarely been viewed as competent, much less exemplary. The whole spiel also overlooks that a lot of the moderation Gould considers easy or successful is heavily reliant on reports from site users.
Finally, not only does Judge Gould suggest Section 230 should be narrowed, he thinks another course of legal action should be made available to plaintiffs to sue tech companies not just for the content they host, but for the actions of terrorist organizations all over the world.
As a matter of federal common law, I would hold that when social media companies in their platforms use systems or procedures that are unreasonably dangerous to the public—as in the case where their systems line up repeated messages in aid of terrorists like ISIS—or when they omit to act to avoid harm when omitting the act is unreasonably dangerous to the public—as in the case where they fail to review and self-regulate their websites adequately to notice and remove propaganda videos from ISIS that are likely to cause harm—then there should be a federal common law claim available against them.
That's Gould's "product liability" theory. In his view, the algorithms are defective because some users turn towards terrorism. And if the product is defective, the manufacturer can be sued. Gould really has to stretch the analogy to make it fit.
Here and similarly, social media companies should be viewed as making and “selling” their social media products through the device of forced advertising under the eyes of users.
Huh. But what if users use ad blockers? I mean, that's just one of several questions this raises. The product is access to site users and their attention. That's what's being sold to advertisers. If that's the case, the only parties with standing to bring lawsuits under this theory would be dissatisfied companies who feel their ads are being placed alongside content served up by algorithms that are possibly radicalizing some users into committing terrorist acts. That's more than a little attenuated from the actual harm, especially if no one working for the aggrieved companies has been a victim of a terrorist attack. Sure, users could try to complain the product is defective, but the product -- according to this judge's own take -- isn't the social media platform.
Look, moderation is far from perfect and will likely always cause some sort of collateral damage as adjustments are made. If everyone would like to see less moderation and fewer social media options, they should definitely allow the courts and Congress to start creating a bunch of exceptions to Section 230 immunity. In these three lawsuits, plaintiffs suffered tragedies and were encouraged by questionable law firms to sue third parties with no link to the terrorists and their acts other than some hosted content. If these claims had any merit, we'd be seeing more wins. But we haven't, because these claims are weak and seem propelled more by the search for the largest, easiest target to hit than by any true desire to see justice done.
Filed Under: 9th circuit, ata, intermediary liability, section 230, terrorism
Companies: twitter