Ninth Circuit Appeals Court Says Some Disturbing Stuff About Section 230 While Dumping Another Two 'Sue Twitter For Terrorism' Lawsuits
from the not-a-lot-of-good-news-in-the-167-pages dept
In a very dense and somewhat counterintuitive opinion [PDF], the Ninth Circuit Court of Appeals has dumped two more of the dozens of bogus "sue social media companies for acts committed by terrorists" lawsuits. But it has kept one alive. Worse, the 167-page ruling comes with concurring opinions that suggest Ninth Circuit judges think Section 230 immunity is, on the whole, letting social media companies get away with too much bad stuff.
The two lawsuits whose dismissals were affirmed arise from the San Bernardino shooting and the terrorist attacks in Paris, France. The one kept alive concerns a terrorist attack in Istanbul, Turkey. The first case (Gonzalez v. Google) involves allegations that Google's revenue sharing with terrorist organizations amounted to material support. But the allegations aren't strong enough to sustain the lawsuit.
Although monetary support is undoubtedly important to ISIS’s terrorism campaign, the TAC is devoid of any allegations about how much assistance Google provided. As such, it does not allow the conclusion that Google’s assistance was substantial. Nor do the allegations in the TAC suggest that Google intended to assist ISIS. Accordingly, we conclude the Gonzalez Plaintiffs failed to state a claim for aiding-and-abetting liability under the ATA. We do not consider whether the identified defects in the Gonzalez Plaintiffs’ revenue-sharing claims—principally, the absence of any allegation regarding the amount of the shared revenue—could be cured by further amendment because the Gonzalez Plaintiffs were given leave to amend those claims and declined to do so.
The third case (Clayborn v. Twitter) involves several of the same allegations, but deals with the San Bernardino shooting. The claims here fail mainly because there was no evidence ISIS directed, or was directly involved in, the attack.
Even if Congress intended “authorized” to include acts ratified by terrorist organizations after the fact, ISIS’s statement after the San Bernardino Attack fell short of ratification. The complaint alleges that ISIS stated, “Two followers of Islamic State attacked several days ago a center in San Bernardino in California, we pray to God to accept them as Martyrs.” This clearly alleges that ISIS found the San Bernardino Attack praiseworthy, but not that ISIS adopted Farook’s and Malik’s actions as its own.
The second case (Taamneh v. Twitter) stays alive. The allegations are pretty much identical to those in Gonzalez v. Google, with the main difference being what issues the lower court reached. In this case, there was no Section 230 discussion. And, as the Ninth Circuit sees it, that difference allows the claims it dismissed in Gonzalez to survive in Taamneh.
Because the bulk of the Gonzalez Plaintiffs’ claims were properly dismissed on the basis of § 230 immunity, our decision in Gonzalez principally focuses on whether the Gonzalez Plaintiffs’ revenue-sharing theory sufficed to state a claim under the ATA. In contrast, the district court in Taamneh did not reach § 230; it only addressed whether the Taamneh Plaintiffs plausibly alleged violations of the ATA for purposes of Rule 12(b)(6). The Taamneh appeal is further limited by the fact that the Taamneh Plaintiffs only appealed the dismissal of their aiding-and-abetting claim.
Hesitantly, the Ninth says this single lawsuit can proceed, mainly because the complaint specified more direct support, like the use of Google's AdSense and other revenue sharing.
We also recognize the need for caution in imputing aiding-and-abetting liability in the context of an arms-length transactional relationship of the sort defendants have with users of their platforms. Not every transaction with a designated terrorist organization will sufficiently state a claim for aiding-and-abetting liability under the ATA. But given the facts alleged here, we conclude the Taamneh Plaintiffs adequately state a claim for aiding-and-abetting liability.
Now, here comes the bad stuff. The concurring opinions don't deal much with the facts of the case, but rather with some judges' view that Section 230 is too broad and should be trimmed back. If Congress won't do it, maybe the Ninth Circuit will.
Here's Judge Marsha Berzon's take:
I concur in the majority opinion in full. I write separately to explain that, although we are bound by Ninth Circuit precedent compelling the outcome in this case, I join the growing chorus of voices calling for a more limited reading of the scope of section 230 immunity. For the reasons compellingly given by Judge Katzmann in his partial dissent in Force v. Facebook, 934 F.3d 53 (2d Cir. 2019), cert. denied, 140 S. Ct. 2761 (2020), if not bound by Circuit precedent I would hold that the term “publisher” under section 230 reaches only traditional activities of publication and distribution—such as deciding whether to publish, withdraw, or alter content—and does not include activities that promote or recommend content or connect content users to each other. I urge this Court to reconsider our precedent en banc to the extent that it holds that section 230 extends to the use of machine-learning algorithms to recommend content and connections to users.
The judge has problems with the recommendation algorithms used by social media companies -- ones that naturally tend to show people the sorts of things they appear to be interested in. In most cases, this is innocuous. But in some cases, the algorithms can send people down rabbit holes.
If viewers start down a path of watching videos that the algorithms link to interest in terrorist content, their immersive universe can easily become one filled with ISIS propaganda and recruitment. Even if the algorithm is based on content-neutral factors, such as recommending videos most likely to keep the targeted viewers watching longer, the platform’s recommendations of what to watch send a message to the user. And that message—“you may be interested in watching these videos or connecting to these people”—can radicalize users into extremist behavior and contribute to deadly terrorist attacks like these.
This is a really weird, really dangerous place to start drawing the line in Section 230 lawsuits. Algorithms react to input from users. If YouTube can't be held directly responsible for videos uploaded by users, it makes sense that it would be immunized against algorithmically suggesting content based on users' actions and preferences. The algorithm does next to nothing on its own without input from content viewers. It takes a user to really get it moving.
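To see why, consider what a recommendation algorithm actually does. Below is a minimal sketch (a hypothetical toy, not any platform's actual system; every name in it is invented for illustration) of a content-based recommender. The output is purely a function of the watch history the user supplies; with no history, it has nothing to rank on.

```python
from collections import Counter

def recommend(watch_history, catalog, k=2):
    """Toy recommender: score each unseen video by how often its topic tags
    overlap with topics already in the user's watch history. With an empty
    history every score is zero; the algorithm has nothing to act on until
    a user supplies input."""
    seen = Counter(tag for video in watch_history for tag in video["tags"])
    unseen = [v for v in catalog if v not in watch_history]
    return sorted(unseen, key=lambda v: sum(seen[t] for t in v["tags"]),
                  reverse=True)[:k]

catalog = [
    {"title": "Cat compilation", "tags": ["cats", "funny"]},
    {"title": "Kitten rescue", "tags": ["cats", "wholesome"]},
    {"title": "Pasta tutorial", "tags": ["cooking"]},
]

print(recommend([], catalog))            # no input, no signal: arbitrary order
print(recommend([catalog[0]], catalog))  # one cat video watched: cat content rises
```

This is also the rabbit hole in miniature: each view reweights the next round of suggestions, so the loop amplifies whatever the user started feeding it, which is why it seems strange to treat that echo as the platform's own speech rather than the user's input reflected back.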
Judge Ronald Gould's concurrence contains many of the same complaints about social media recommendation algorithms. And he similarly believes Section 230 shouldn't cover these, apparently for the simple reason that terrorists can benefit from recommendations made to users who've expressed an affinity for content allegedly created by terrorists.
The majority ultimately concludes that Section 230 shields Google from liability for its content-generating algorithms. I disagree. I would hold that Plaintiffs’ claims do not fall within the ambit of Section 230 because Plaintiffs do not seek to treat Google as a publisher or speaker of the ISIS video propaganda, and the same is true as to the content-generating methods and devices of Facebook and Twitter.
Accepting plausible complaint allegations as true, as we must, Google, through YouTube, and Facebook and Twitter through their various platforms and programs, acted affirmatively to amplify and direct ISIS content, repeatedly putting it in the eyes and ears of persons who were susceptible to acting upon it.
And if Congress won't act fast enough for Judge Gould, then the courts should step in and regulate social media companies.
I further urge that regulation of social media companies would best be handled by the political branches of our government, the Congress and the Executive Branch, but that in the case of sustained inaction by them, the federal courts are able to provide a forum responding to injustices that need to be addressed by our justice system. Here, that means to me that the courts should be able to assess whether certain procedures and methods of the social media companies have created an unreasonably dangerous social media product that proximately caused damages, and here, the death of many.
Judge Gould says this should be easy to do correctly and without collateral damage to legitimate content.
The record shows that despite extensive media coverage, legal warnings, and congressional hearings, social media companies continued to provide a platform and communication services to ISIS before the Paris attacks, and these resources and services went heedlessly to ISIS and its affiliates, as the social media companies refused to actively identify ISIS YouTube accounts, and only reviewed accounts reported by other YouTube users. If, for example, a social media company must take down within a reasonable time sites identified as infringing copyrights, it follows with stronger logic that social media companies should take down propaganda sites of ISIS, once identified, within a reasonable time to avoid death and destruction to the public, which may be victimized by ISIS supporters. Moreover, if social media companies can ban certain speakers who flout their rules by conveying lies or inciting violence, as was widely reported in the aftermath of tweets and posts relating to the recent “insurrection” of January 6, 2021, then it is hard to see why such companies could not police and prohibit the transmission of violent ISIS propaganda videos, in the periods preceding a terrorist attack.
This ignores the fact that the DMCA process is pretty much an ongoing train wreck, one that's abused to silence speech and often mistakenly targets non-infringing content. And social media companies' attempts to stop the spread of disinformation or deal with harassing/threatening content have rarely been viewed as competent, much less exemplary. This whole spiel ignores the fact that a lot of the moderation Gould considers easy or successful is also heavily reliant on reports by site users.
Finally, not only does Judge Gould suggest Section 230 should be narrowed, he thinks another course of legal action should be made available to plaintiffs to sue tech companies not just for the content they host, but for the actions of terrorist organizations all over the world.
As a matter of federal common law, I would hold that when social media companies in their platforms use systems or procedures that are unreasonably dangerous to the public—as in the case where their systems line up repeated messages in aid of terrorists like ISIS—or when they omit to act to avoid harm when omitting the act is unreasonably dangerous to the public—as in the case where they fail to review and self-regulate their websites adequately to notice and remove propaganda videos from ISIS that are likely to cause harm—then there should be a federal common law claim available against them.
That's Gould's "product liability" theory. In his view, the algorithms are defective because some users turn towards terrorism. And if the product is defective, the manufacturer can be sued. Gould really has to stretch the analogy to make it fit.
Here and similarly, social media companies should be viewed as making and “selling” their social media products through the device of forced advertising under the eyes of users.
Huh. But what if users use ad blockers? I mean, that's just one of several questions this raises. The product is access to site users and their attention. That's what's being sold to advertisers. If that's the case, the only parties who would have standing to bring lawsuits under this theory would be dissatisfied companies who feel their ads are being placed alongside content that's being served up by algorithms that are possibly radicalizing some users into committing terrorist acts. That's more than a little attenuated from the actual harm, especially if no one working for the aggrieved companies has been a victim of a terrorist attack. Sure, users could try to complain the product is defective, but the product, according to this judge's own take, isn't the social media platform.
Look, moderation is far from perfect and will likely always cause some sort of collateral damage as adjustments are made. If everyone would like to see less moderation and fewer social media options, they should definitely allow the courts and Congress to start creating a bunch of exceptions to Section 230 immunity. In these three lawsuits, plaintiffs suffered tragedies and were encouraged by questionable law firms to sue third parties with no link to terrorists and their acts other than some hosted content. If these claims had any merit, we'd see more wins. But we haven't, because these claims are weak and seem propelled more by the search for the largest, easiest target to hit than by any true desire to see justice done.
Filed Under: 9th circuit, ata, intermediary liability, section 230, terrorism
Companies: twitter
Reader Comments
I generally disagree with this/these statements:
"Algorithms react to input from users. If YouTube can't be held directly responsible for videos uploaded by users, it makes sense it would be immunized against algorithmically suggesting content based on users' actions and preferences. The algorithm does next to nothing on its own without input from content viewers."
I think the judges, and much of the general feeling out there in this regard, are reacting to the reality that the AI being deployed by FB, Google (YT), and the like is SO tuned to maximize engagement that it quickly leads users down a dark and often unintended or undesired path. This was well portrayed by a recent simple experiment that two editors from Wired or something like that did, where one took a "conservative" or right angle and the other a "liberal/progressive" or left slant. They documented how long it took to plummet down the rabbit hole....
Although I cannot find that article, the spoiler is it didn't take long, especially on the right side. REAL quick the echo chamber went into effect and the results were quite scary. I think people are recognizing this and wanting to "do something."
But there are all kinds of examples where AI and algorithms are doing "stuff" without users or contrary to users' intent.
https://theintercept.com/2021/04/09/facebook-algorithm-gender-discrimination/
https://news.virginia.edu/content/study-how-facebook-pushes-users-especially-conservative-users-echo-chambers
https://arstechnica.com/tech-policy/2021/06/amazon-is-firing-flex-workers-using-algorithms-with-little-human-intervention/
https://www.wired.com/story/how-algorithm-blocked-kidney-transplants-black-patients/
https://arstechnica.com/tech-policy/2019/01/yes-algorithms-can-be-biased-heres-why/
Re:
How exactly can one "plummet down the rabbit hole" if it is done "contrary to users' intent"? It seems obvious to me that clicking a related video demonstrates intent to watch the video. Is it just a problem because their INITIAL intent wasn't to watch extreme videos? Why? Intent changes all the time. I intend to watch a video, then intend to watch a different one. It seems to me the real thing people are upset about is the very fact that people intend to consume extreme content. The sites give people what they want, but they shouldn't want what they want.
Re: Re:
Plummeting down rabbit hole iceberg explained. Try this one trick to not believe what happens next.
Re: Re:
Never heard the term "garbage in, gospel out"? You seem to be subjecting yourself to the fallacy that users are in fact in control. They certainly have some, and even much, input, but there is a TON of input coming from the engagement AI that you are clearly discounting.
I'm guessing you didn't read the articles and haven't performed your own "Studies" on this, as the algorithms are wrong an awful lot. And that has consequences. 1/6/21
"machine-learning systems—"algorithms"—produce outputs that reflect the training data over time"
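To illustrate the loop (a toy simulation of my own, not any platform's real code): once the system's own past picks feed the engagement numbers it optimizes for, an early random skew compounds all by itself, no user "intent" required.

```python
import random

random.seed(1)
engagement = {"mainstream": 1.0, "fringe": 1.0}  # start perfectly neutral

for _ in range(1000):
    topics = list(engagement)
    total = sum(engagement.values())
    # Serve whichever topic the current engagement stats favor...
    pick = random.choices(topics, weights=[engagement[t] / total for t in topics])[0]
    # ...and the served topic then accumulates more engagement data.
    engagement[pick] += 1.0

print(engagement)  # the split typically drifts far from 50/50, from the loop alone
```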
Re: Re: Re:
If the algos are wrong, people won't engage. I fail to see the problem. If anything, they fail to suggest interesting content most of the time.
Re:
If it's unwanted, you search for different stuff, watch different stuff, and guess what won't be recommended then? Are you claiming that people aren't interested in the things they are interested in? That "teh algorithmz" are sucking people in like a cult and programming them?
Re: Re:
Yes, I am saying that this is in fact a problem. And although judges are frequently technically illiterate, they aren't all stupid. And while almost all politicians are to be questioned on intent and motive, they also tend to be decent readers of the winds.
But yes, CLEARLY people are being sucked into things they may or may not have initially been trying to explore. I mean is this not freaking completely OBVIOUS by now?
Not every right-leaning human wanted to get sucked into the vortex of dystopia that 30-40% of Americans now inhabit......
Re: Re: Re:
"But yes, CLEARLY people are being sucked into things they may or may not have initially been trying to explore"
Well, it works both ways - I've found a lot through recommendations that has had material benefit to me that I may not have found otherwise. But I tend to stay away from anything overtly political in nature; I'd rather keep my feed full of movie reviews, music, and video game discussion that I can drift in and out of if I have a bout of insomnia, and I make sure autoplay is turned off in case I do fall asleep.
I think that the problem with some people is that they're not educated enough in how to do things like recognise bullshit and evaluate sources, and they may be too lazy to properly curate their viewing history.
By that, I mean that if someone (for example) views some video talking about upcoming Marvel movies and then starts getting recommended videos of alt-right people whining about how it's not all straight white men being cast in certain roles, where you go from there depends on your reaction. Mine would be "not interested," and if I flag a few videos and still get recommendations based on that video, I go into my viewing history and delete it. The problems come when someone starts going "huh, that sounds like a good point," and then all of a sudden their most recent history is alt-right propaganda, and rather than recognising this they keep watching...
That's not to say that those are the only problematic videos, but if you're going to keep uncritically watching such things, then YouTube will assume that's what you want to watch - since, you know, that's what you are watching. There are many problems, but ultimately, unless you want to insist that these platforms do no recommendation at all, the onus is on the user to curate their own experience. You might prefer the time when all the TV anyone watched was scheduled by a handful of channels for an entire city/state/country and not individually tailored, but that time is gone.
Re: Re: Re: Re:
Good points. Perhaps there is middle ground between mid-1960s broadcast TV choices and hardcore 2020-style max-engagement profit slurping.....
Re:
Since I watch a mixture of things on YouTube, including educational videos, I sometimes get offered Prager U videos. I know that Prager U is pretty far to the right and that I generally don't agree with their content, so I don't click on their videos. The algorithm offers me fewer Prager U videos since I've been not clicking them. Sometimes, if I watch a video that is specifically arguing against the content in a specific video (I went down a bit of a rabbit hole watching anti-flat-Earth videos a few months ago), then I'll see a bunch of "pro" that-topic videos in my recommendations, but part of media literacy in the 21st century is knowing when you're being led astray by content.
Re: Re:
Sure, perhaps you are media literate. And I too endeavor as such.
However, my Google News feed is a steaming pile of garbage sometimes, no matter how hard I attempt to curate it. Without going completely heavy-handed and blocking sources, the thing is constantly tossing garbage at me. It's merely an irritant, tho.
But we have like 30-40% of complete media illiterates on ONE side of the tribal isle. I'm SURE a similar number exists on the other side of the tribal isle. That puts the number of media illiterates in the US at like 60-80%.
Democracy may not survive such odds....
Re: Re: Re:
Then, the question is how you solve the problem of media illiteracy. Because, while the problems being described here might mainly be focussed on social media and other online platforms, it's definitely not exclusively there. Crippling online services does not mean that the same people won't just take their ideas from TV news, talk radio and tabloid newspapers instead.
Re: Re: Re: Re:
True, alas, I do not know the answer to that. We have so many education problems, and media illiteracy is a thorny nugget for sure. My original point was my disagreement that algorithmic outcomes are solely the input of end users, and I don't believe that is true. There is much that goes into an algorithm aside from user choices.
Beyond that, I think we have some problems on our hands with both social media AND cable-news-type media sewage that we are not dealing with adequately, and there are some real ramifications to our society in this regard.
Re: Re: Re: Re: Re:
I'm not sure that anyone is saying that they're solely the result of end users, but at the end of the day being presented with dodgy material does not mean that you have to watch or believe it. I and many others have been presented with the propaganda in the past, and our reaction was to block it and take steps to ensure that it stops appearing. It would be nice if it didn't come up to begin with, but there is a personal element involved in how you deal with it when it does.
It's like spam - sure, the best solution for everyone is that it doesn't exist, then the next best thing is that a working spam filter gets rid of it before you read it. But, if you read spam and your reaction to it is to wire money to that nice Nigerian prince, that's on you.
Re: cited evidence not feelz
Lotta disagrees with my opinion, but none with citations and studies and, you know, "facts" stuff.....
Up yo game
Re: Re: cited evidence not feelz
Ok, fact: You don't even know what "isle" means.
Am I the only one who finds it hard to follow the distinction between "activities that promote or recommend content" and "traditional activities of publication"? Like, book reviews sound pretty "traditional" to me. So does printing a Top 40 list of radio hits, or a list of bestselling novels, or a list of movies in descending order of weekend box-office. What distinction is the judge actually trying to get at here, and how much does it depend upon a warped impression of what a "machine-learning algorithm" is?
Re:
It is a stupid, fundamentally illiterate "distinction," both technically and in the traditional sense. If that were applied as "promoting," then a simple, pure-facts automated listing of "most viewed videos" would qualify despite being a fully automated process of straightforward math, and requirements to stop it because of supposed impact would be a straight-up chilling effect and de facto censorship.
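For instance (a hypothetical sketch, not YouTube's actual code), the entire "algorithm" behind a most-viewed list can be a one-line sort over counts the users themselves generated:

```python
videos = [
    {"title": "A", "views": 120},
    {"title": "B", "views": 980},
    {"title": "C", "views": 45},
]

# "Most viewed" is straightforward math over user-generated view counts.
most_viewed = sorted(videos, key=lambda v: v["views"], reverse=True)
print([v["title"] for v in most_viewed])  # ['B', 'A', 'C']
```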
Yeah, and we all know the justice system never put an innocent man in jail because he never did enough to stop something.
Once again, there's no criticism about Section 230 that does not misstate its function.
This ignores the fact that the DMCA process is pretty much an ongoing train wreck,
It also ignores the fact that they are asking for "moderation" of unknown stuff before some unknown thing might happen, by comparing it to moderation that happened after 4 years of egregious bullshit and a half-assed coup attempt by morons.
Different fucking animals, Your Honor.
Am I the only one who read this:
... as this:
https://dilbert.com/strip/1997-01-28
"From now on, I want advanced notice of any unplanned outages. ... And I need it yesterday."
Re:
Ah, for the days when Adams wrote amusing truths about the stupidness of other people instead of being a prime example of stupidity himself...
Re: Re:
I first found out about Adams's dementia when I learned of his reality-averse beliefs about climate science (that there's no hard evidence for it; it's just scientists making up what they want), and he's only deteriorated from there.