A Case Where The Courts Got Section 230 Right Because It Turns Out Section 230 Is Not Really All That Hard
from the helpful-precedent dept
Having just criticized the Second Circuit for getting Section 230 (among other things) very wrong, it's worth pointing out an occasion where it got it very right. The decision in Force v. Facebook came out last year, but the Supreme Court recently denied any further review, so the time is ripe to talk about how this case could, and should, bear on future Section 230 litigation.
It is a notable decision, not just for its result upholding Section 230 but for how it cut through much of the confusion that tends to plague discussion of Section 230. It brought the focus back to the essential question at the heart of the statute: who imbued the content at issue with its allegedly wrongful quality? That question really is the only thing that matters when it comes to figuring out whether Section 230 applies.
This case was one of the many seeking to hold social media platforms liable for terrorists using them. None of them has succeeded, although for varying reasons. For instance, in Fields v. Twitter, in which we wrote an amicus brief, the claims failed, but not for Section 230 reasons. In this case, however, the dismissal of the complaint was upheld on Section 230 grounds.
The plaintiffs put forth several theories about why Facebook should not have been protected by Section 230. Most of them tried to construe Facebook as the information content provider of the terrorists' content, and thus not entitled to the immunity. But the Second Circuit rejected them all.
Ultimately the statute is simple: whoever created the wrongful content is responsible for it, not the party who simply enabled its expression. The only question is who created the wrongful content, and per the court, "[A] defendant will not be considered to have developed third-party content unless the defendant directly and 'materially' contributed to what made the content itself 'unlawful.'" [p. 68].
Section 230 really isn't any more complicated than that. And the Second Circuit clearly rejected some of the ways people often try to make it more complicated.
For one thing, it does not matter that the platform exercised editorial judgment over which user content it displayed. After all, even the very decision to host third-party content at all is an editorial one, and Section 230 has obviously always applied in the shadow of that sort of decision.
The services have always decided, for example, where on their sites (or other digital property) particular third-party content should reside and to whom it should be shown. Placing certain third-party content on a homepage, for example, tends to recommend that content to users more than if it were located elsewhere on a website. Internet services have also long been able to target the third-party content displayed to users based on, among other things, users' geolocation, language of choice, and registration information. And, of course, the services must also decide what type and format of third-party content they will display, whether that be a chat forum for classic car lovers, a platform for blogging, a feed of recent articles from news sources frequently visited by the user, a map or directory of local businesses, or a dating service to find romantic partners. All of these decisions, like the decision to host third-party content in the first place, result in "connections" or "matches" of information and individuals, which would have not occurred but for the internet services' particular editorial choices regarding the display of third-party content. We, again, are unaware of case law denying Section 230(c)(1) immunity because of the "matchmaking" results of such editorial decisions. [p. 66-67]
Nor does it matter that the platforms use algorithms to help automate editorial decisions.
[P]laintiffs argue, in effect, that Facebook's use of algorithms is outside the scope of publishing because the algorithms automate Facebook's editorial decision-making. That argument, too, fails because "so long as a third party willingly provides the essential published content, the interactive service provider receives full immunity regardless of the specific edit[orial] or selection process." [p. 67]
Even if the platform uses algorithms to decide whether to make certain content more "visible," "available," and "usable," that does not count as developing the content. [p. 70]. Nor does simply letting terrorists use the platform make it a partner in the creation of their content. [p. 65]. The court notes that in the cases where courts have found platforms liable as co-creators of problematic content, the platforms had played a much more active role in the development of specific instances of problematic expression than simply enabling it.
Employing this "material contribution" test, we held in FTC v. LeadClick that the defendant LeadClick had "developed" third parties' content by giving specific instructions to those parties on how to edit "fake news" that they were using in their ads to encourage consumers to purchase their weight-loss products. LeadClick's suggestions included adjusting weight-loss claims and providing legitimate-appearing news endorsements, thus "materially contributing to [the content's] alleged unlawfulness." [We] also concluded that a defendant may, in some circumstances, be a developer of its users' content if it encourages or advises users to provide the specific actionable content that forms the basis for the claim. Similarly, in Fair Housing Council v. Roommates.Com, the Ninth Circuit determined that—in the context of the Fair Housing Act, which prohibits discrimination on the basis of sex, family status, sexual orientation, and other protected classes in activities related to housing—the defendant website's practice of requiring users to use pre-populated responses to answer inherently discriminatory questions about membership in those protected classes amounted to developing the actionable information for purposes of the plaintiffs' discrimination claim. [p. 69]
Of course, as the court noted, even in Roommates.com, the platform was not liable for any and all potentially discriminatory content supplied by its users.
[I]t concluded only that the site's conduct in requiring users to select from "a limited set of pre-populated answers" to respond to particular "discriminatory questions" had a content-development effect that was actionable in the context of the Fair Housing Act. [p. 70]
Woven throughout the decision, [see, e.g., p. 65-68], is an extensive discussion of that perpetual red herring: the term "publisher," which keeps creating confusion about the scope of the law. One of the most common misconceptions about Section 230 is that it hinges on some sort of "platform v. publisher" distinction, immunizing only "neutral platforms" and not anyone who would qualify as a "publisher." People often mistakenly believe that a "publisher" is the developer of the content, and thus not protected by Section 230. In reality, however, for purposes of Section 230 platforms and publishers are one and the same, and therefore both protected by it. As the court explains, the term "publisher" stems from the understanding of the word as "one that makes public," [p. 65], which is the essential function a platform performs in distributing others' speech, and that distribution is not the same thing as the creation of the offending content. Not even if the platform has made editorial decisions with respect to that distribution. Being a publisher has always entailed exercising editorial judgment over what content to distribute and how, and, as the court makes clear, exercising that judgment is not suddenly a basis for denying platforms Section 230 protection.
Filed Under: 2nd circuit, force v. facebook, material support, section 230, terrorism
Companies: facebook
Reader Comments
Now, let’s see how 230 detractors manage to misinterpret all of this.
Re:
I assume the same ways they've been misinterpreting it for the past 25 years.
Re:
That's easy: they'll just do what they've been doing the entire gorram time and ignore any parts of the ruling/law/argument that contradict what they want/incorrectly believe the law to say.
It always comes down to the same argument: it's impossible for ME to do the job of finding plagiarism of MY OWN work; I want SOMEONE to do it, because I profit; therefore YOU must do it for me, even though you don't even have as much information as I do.
Any sociopath would reason the same way. "You must remake the world to my insane vision, because I can't. And you don't have the right to have a vision at all, because I am the center of the universe."
…what in the blue hell does plagiarism have to do with a Section 230 case involving the speech of terrorists?
Re:
Because per the rules of copyright maximalists, plagiarism is copyright, and copyright is everything.
Re: Re:
How often have we seen an attempt to extend copyright dressed up as an attack on "terrorism" or an attempt to "protect children"?
**IA lawyer to deputy attorney general: "grommet.com is making it too easy to upload our sacred bloviating content: get them for SOMETHING. We don't care what. Just make sure it's expensive to defend against, and easy to drum up PR against, from the tame media supported by our advertising."
To take this further: a newspaper delivery boy should not be held liable for delivering a newspaper that contains illegal content... even if the newspaper delivery boy usually goes through the paper ahead of delivery and redacts anything the particular recipient may find objectionable.
Or, even more tersely: "don't shoot the messenger."
Not hard without an axe to grind
It's amazing how easy the law is to understand when you aren't trying to twist it so you can go after the easier/richer target: you only get to hold the actual creator of content accountable for what they said/posted; you don't get to blame the platform they used to post it.
'Blame goes to the guilty party, not whoever's easiest to find' is really not a difficult concept, so the fact that people keep getting it wrong shows just how lazy and/or greedy they really are.
Re: Not hard without an axe to grind
Beyond that, I still find the entire "terroristic content" thing a bullshit issue in the first place. The people who committed the terrorism and caused your loved ones harm are the people to blame. Not some asshat posting "pro-terror" stuff on a website, and most certainly not the website. Maybe the terrorism-related content should be taken down, but then, there is an awful lot of similar content with respect to "radicalization" which somehow doesn't fit the terrorism model because it is culturally acceptable, or even "patriotic".
Free speech, something something whatever
The Internet has been turned into a cesspool of defamation by CDA Section 230. You have sites like Pissed Consumer which provide a safe haven for nuts wanting to falsely accuse business people of felonies they never committed. They ask for no proof: just post it, edit it, and then monetize it. They shake down the business owner through some fake "mediation" system and split the take with the "consumer." If you sue them for publishing defamatory content they run and hide behind their precious, undeserved Section 230 immunity. Fight them and they ratchet up the fake postings till you don't want to live any longer from the bullying. How many more innocent lives, families and businesses need to be senselessly ruined by this thoughtless law?