Lessons From Making Internet Companies Liable For Users' Speech: You Get Less Speech, Less Security And Less Innovation
from the not-good dept
Stanford's Daphne Keller is one of the world's foremost experts on intermediary liability protections and someone we've mentioned on the website many times in the past (and have had her on the podcast a few times as well). She's just published a fantastic paper presenting lessons from making internet platforms liable for the speech of their users. As she makes clear, she is not arguing that platforms should do no moderation at all; that's a silly idea that no one with any real understanding of these issues takes seriously. The concern is that as many people (including regulators) keep pushing to pin liability on internet companies for the activities of their users, it creates some pretty damaging side effects. Specifically, the paper details how it harms speech, makes us less safe, and harms the innovation economy. It's actually kind of hard to see what the benefit side is on this particular cost-benefit equation.
As the paper notes, the demands from people about what platforms should do keep changing. Some keep demanding that certain content be removed, while others freak out that too much content is being removed. And sometimes it's the same people (they want the "bad" stuff -- i.e., stuff they don't like -- removed, but get really angry when the stuff they do like is removed). Perhaps even more importantly, the questions behind why certain content may get taken down are the same ones that courts work through in long and complex cases, with lots of nuance and detailed arguments going back and forth. And yet, many people seem to think that private companies are somehow equipped to credibly replicate that entire judicial process, without the time, knowledge or resources to do so:
As a society, we are far from consensus about legal or social speech rules. There are still enough novel and disputed questions surrounding even long-standing legal doctrines, like copyright and defamation, to keep law firms in business. If democratic processes and court rulings leave us with such unclear guidance, we cannot reasonably expect private platforms to do much better. However they interpret the law, and whatever other ethical rules they set, the outcome will be wrong by many people’s standards.
Keller then looked at a variety of examples involving intermediary liability to see what the evidence says would happen if we legally deputize private internet platforms as speech police. It doesn't look good. Free speech will suffer greatly:
The first cost of strict platform removal obligations is to internet users’ free expression rights. We should expect over-removal to be increasingly common under laws that ratchet up platforms’ incentives to err on the side of taking things down. Germany’s new NetzDG law, for example, threatens platforms with fines of up to €50 million for failure to remove “obviously” unlawful content within twenty-four hours’ notice. This has already led to embarrassing mistakes. Twitter suspended a German satirical magazine for mocking a politician, and Facebook took down a photo of a bikini top artfully draped over a double speed bump sign. We cannot know what other unnecessary deletions have passed unnoticed.
From there, the paper explores the issue of security. Attempts to stifle terrorists' use of online services by pressuring platforms to remove terrorist content may seem like a good idea (assuming we agree that terrorism is bad), but the actual impact goes way beyond just having certain content removed. The paper looks at what the real-world impact of these programs has been in the realm of trying to "counter violent extremism."
The second cost I will discuss is to security. Online content removal is only one of many tools experts have identified for fighting terrorism. Singular focus on the internet, and overreliance on content purges as tools against real-world violence, may miss out on or even undermine other interventions and policing efforts.
The cost-benefit analysis behind CVE campaigns holds that we must accept certain downsides because the upside—preventing terrorist attacks—is so crucial. I will argue that the upsides of these campaigns are unclear at best, and their downsides are significant. Over-removal drives extremists into echo chambers in darker corners of the internet, chills important public conversations, and may silence moderate voices. It also builds mistrust and anger among entire communities. Platforms straining to go “faster and further” in taking down Islamist extremist content in particular will systematically and unfairly burden innocent internet users who happened to be speaking Arabic, discussing Middle Eastern politics, or talking about Islam. Such policies add fuel to existing frustrations with governments that enforce these policies, or platforms that appear to act as state proxies. Lawmakers engaged in serious calculations about ways to counter real-world violence—not just online speech—need to factor in these unintended consequences if they are to set wise policies.
Finally, the paper looks at the impact on innovation and the economy and, again, notes that putting liability on platforms for user speech can have profound negative impacts.
The third cost is to the economy. There is a reason why the technology-driven economic boom of recent decades happened in the United States. As publications with titles like “How Law Made Silicon Valley” point out, our platform liability laws had a lot to do with it. These laws also affect the economic health of ordinary businesses that find customers through internet platforms—which, in the age of Yelp, Grubhub, and eBay, could be almost any business. Small commercial operations are especially vulnerable when intermediary liability laws encourage over-removal, because unscrupulous rivals routinely misuse notice and takedown to target their competitors.
The entire paper weighs in at a neat 44 pages and it's chock-full of useful information and analysis on this very important question. It should be required reading for anyone who thinks there are easy answers to the question of what to do about "bad" content online. And it highlights that we already have plenty of data and evidence to answer these questions, yet many legislators seem to be regulating based on how they "think" the world would work, rather than how the world actually works.
Current attitudes toward intermediary liability, particularly in Europe, verge on “regulate first, ask questions later.” I have suggested here that some of the most important questions that should inform policy in this area already have answers. We have twenty years of experience to tell us how intermediary liability laws affect, not just platforms themselves, but the general public that relies on them. We also have valuable analysis and sources of law from pre-internet sources, like the Supreme Court bookstore cases. The internet raises new issues in many areas—from competition to privacy to free expression—but none are as novel as we are sometimes told. Lawmakers and courts are not drafting on a blank slate for any of them.
Demands for platforms to get rid of all content in a particular category, such as “extremism,” do not translate to meaningful policy making—unless the policy is a shotgun approach to online speech, taking down the good with the bad. To “go further and faster” in eliminating prohibited material, platforms can only adopt actual standards (more or less clear, and more or less speech-protective) about the content they will allow, and establish procedures (more or less fair to users, and more or less cumbersome for companies) for enforcing them.
On internet speech platforms, just like anywhere else, only implementable things happen. To make sound policy, we must take account of what real-world implementation will look like. This includes being realistic about the capabilities of technical filters and about the motivations and likely choices of platforms that review user content under threat of liability.
This is an important contribution to the discussion, and highly recommended. Go check it out.
Filed Under: cda 230, daphne keller, dmca 512, free speech, innovation, intermediary liability, section 230, security
Reader Comments
It's sad that we need to impose such burdens on free speech and on the Internet in order to have concrete examples of the stuff people like TD, EFF and others have been pointing out for years.
Re: Real Examples
3... 2... 1...
Exactly...
Well, this is the result of the "silence hate speech" movement. The best answer to speech you hate will only ever be more speech.
All efforts to silence someone tend to only make them louder, much like the Streisand effect. It's funny how TD can understand this in the context of keeping things out of the media but cannot for the life of them see how an equivalent effect occurs inside of their own "regulate all the things" mentality.
Let them keep ratcheting down on speech... things have to get worse before they get better. People have to suffer in large quantities before they start killing each other in numbers great enough for those seeking liberty to succeed in creating any peace. And when enough peace has been had, someone will come along and elevate minor offenses to great offenses, and the cycle starts again.
The ebb and flow of the human tide of hypocrisy, a human trait without boundary!
Re: Exactly...
What is funny is your odd obsession with not actually reading anything on Techdirt.
Re: Re: Real Examples
Any form of speech suppression is nothing more than a tool of thought policing, plain and simple. Life is a messy affair; those who think they can make it less messy are deluding themselves and others. Everyone knows it, too, but they just can't resist continuing to try, regardless of the damage they cause!
Re: Re: Exactly...
The same mentality that justifies one thing often justifies the other. For you, the default response to someone who has said something you didn't agree with is to offer your ignorant version of a psych evaluation.
The point is to try to draw the lines between all of these relationships for you. If you disagree, how about putting up some substance, other than telling us that your feelings are hurt and that we need to seek help.
Re: Exactly...
It's funny how TD can understand this in the context of keeping things out of the media but cannot for the life of them see how an equivalent effect occurs inside of their own "regulate all the things" mentality.
I've asked you many, many times to stop lying about us. So, let's try again: where do we support "regulate all the things"? Go on. Point it out. And don't tap dance out of it. Where? Point us to the posts where we ask for regulation of "all the things"?
You can't, of course. Because for the most part -- as has been explained to you multiple times -- we are very much against regulation of new technologies. There is one area of exception, which is where we are VERY NARROWLY for regulation of the infrastructure level of broadband access solely in order to create an open market of services above that layer. And we've even explained, in great detail, why this one exception is okay -- and bizarrely, you yourself admit that you, too, are in favor of regulation in certain limited circumstances, like us, when there is not sufficient competition.
And yet, you continue to brazenly lie and insist we celebrate regulation of all things.
Stop it.
Obvious upside
I think that government fear is pushing this kind of thing, because governments all over the world are tired of hearing they're fucking up ad infinitum.
They never imagined the entire world, even those in other countries, would be critiquing them like this.
The emperors are all naked and they don't like it being pointed out 24-7. So just stop it by removing anything critical of anything or mildly offensive.
Overbroad takedowns? What's that? The critics all over the world are silent and life is good. It has always been good, correct?
When people talk, they wish to be listened to..
Idiots, assholes, Bitches, bastards, Smart, Dumb, Those with no history background, Those that had Little idea that something had already been done..and MANY other types of people..
We had GREAT conversations, ideas, and Assholes..
As long as we had 1 rule..the site was great.
NO PERSONAL INSULTS.. If you have something to say, say it ABOUT THE TOPIC ONLY.. If you want to create another topic, go ahead.
Assholes that DIDN'T stick to topic hated us..
The topics, ideas, concepts we discussed were GREAT..and we all LEARNED THINGS..
Then come all the conspiracies.. ever fully talk thru and compare the concepts of a conspiracy?? It's fun pulling the strings of it and showing it can't be done.. (then you see it working)
Unregulated monopolies [was Re: Re: Exactly...]
Perhaps you see that as a “very narrow” exception. But, in general, very broadly, it's my belief that unregulated monopolies are inimical to the public interest.
In some cases, they simply ought to be broken up.
Less Speech, Less Security, Less Innovation: 2 outta 3 ain't bad
Re: Unregulated monopolies [was Re: Re: Exactly...]
I'm a bit surprised that Mike and the rest of the Techdirt gang would not be in favor of government regulation of the investment environment. There probably should be more regulation rather than less, because whenever people are asking for money it practically invites fraud and abuse. Friday's news of Elizabeth Holmes, Silicon Valley's version of Bernard Madoff, being indicted on federal fraud charges for running that high-tech billion-dollar Ponzi scheme known as Theranos should serve as a wake-up call for anyone not aware of the extreme lack of regulation of startup companies and other speculative investments.
Also, Theranos filed and was granted hundreds of bogus patents that were used as a tool to trick investors into thinking that the company was legitimate. How's that for an even more lucrative type of patent abuse than patent trolling?
Working as designed
Wealthy and powerful people will still have outlets for their speech. This is a suppression of speech by ordinary people. Stop thinking of it as a side effect and realize that it is the primary objective.
Re: Unregulated monopolies [was Re: Re: Exactly...]
Now that the base technology has converged on digital networks, providers are consolidating locally, because duplicating fiber runs for cable and phone is very expensive and a waste of resources; or they are abandoning phone lines for mobile where that is the cheaper, but less reliable, option for delivering a digital network to areas of low population density.
Infrastructure always becomes a local monopoly, and in most places it is regulated so as to ensure competition at the higher levels. The US problem is not regulation as such, but rather a regulatory agency structure that is easy for industries to capture; the way the agencies are structured, and the way their senior management is appointed, is where the solution to the regulatory problems lies.
You need to solve the political and regulatory capture problems, as the destruction of regulatory power is what is making the current problems worse.
Re: Re: Unregulated monopolies [was Re: Re: Exactly...]
What regulation do you believe is missing to prevent things like that? As you say, she's already being indicted for fraud, meaning she (allegedly) broke existing laws/regulations. I do wonder whether she knew the stuff was all bogus, or had deluded herself along with the other investors. Did she sell many of her shares? (Madoff knew all along his thing was a scam.)
As to the bogus patents, they're public, and major investors would have been wise to ask scientists and engineers to use that information to evaluate the reproducibility of these results.
basic rules
- no intermediary liability, as long as they don't proactively push illegal content. (This would need to be very strictly and narrowly defined, as the very few exceptions to free speech are supposed to be.)
- no automatic takedown required, ever. Automated detection of "bad content" can only lead to notification, either to a moderation team or to the potential "victim" of the content. Mandating automated takedown will definitely lead to abuse. (A rough sketch of what I mean is below.)
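To make that second rule concrete, here's a minimal sketch (in Python, with hypothetical names; not from the paper or any real platform's API) of a detection pipeline that can only notify humans, never remove content on its own:

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Flag:
    content_id: str
    reason: str
    score: float  # classifier confidence -- informational only, never a removal trigger

def handle_detection(flag: Flag,
                     notify_moderators: Callable[[Flag], None],
                     notify_victim: Optional[Callable[[Flag], None]] = None) -> None:
    """Automated detection may only notify people; it never removes content."""
    notify_moderators(flag)        # route the flag to a human review queue
    if notify_victim is not None:
        notify_victim(flag)        # optionally alert the affected party
    # Deliberately absent: any call like remove_content(flag.content_id).
    # Takedown, if it happens at all, comes only after human review.
```

The design choice is the whole point: the classifier's output is advisory, so a false positive costs someone a notification rather than their speech.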
GOOD
@ "platform liability laws had a lot to do with it"
To a certain extent that would be okay. We've now found that extent.
But masnicks want to go all the way to ABSOLUTE IMMUNITY for "platforms" while at the same time they want those "platforms" (which are intended to be FOR The Public to use freely) to be de facto censors with arbitrary power to suppress what any given member of The Public might want to say, even over that person's First Amendment right. In other words, masnicks have NO problem with "private" censorship, except the false attempt to say that it's different from gov't censorship. -- But no: your voice removed from a "platform", which the Sandvig decision says are now Public Forums, IS flat out fascistic censorship as any Nazi would do, period.
Re: Re: Re:
If you hate any form of speech suppression, how would you feel if Section 230 went away and encouraged more suppression?
Re:
Just to be clear: By saying less speech is “good” because a lot of it is “garbage”, does this mean you would approve of censorship based on arbitrary standards of quality that will never be in your control?
Re:
Platforms such as Twitter, Tumblr, Facebook, and 4chan are not public utilities, nor were they created entirely through public funding. While they may be de facto public forums, they do not have any legal, moral, or ethical obligation to remain neutral in regards to speech. The owners and operators of those privately-owned platforms have every right to moderate those platforms however they see fit—up to and including banning certain kinds of speech.
If you do not like how one platform treats your speech, find a different platform. If no one else’s platform will have you, make your own. If you cannot do that, well…good luck.
Re: basic rules
“Intermediary” is a large word.
ISPs lobby for the idea that they should be considered no differently than Facebook, Google, and Twitter. Indeed, the ISP “intermediaries” can point to § 230 of the Communications Act of 1934 (as amended by the Telecommunications Act of 1996) for protection.
But before the '96 amendment, cases like O'Brien v. Western Union (2d Cir. 1940) talked about the legal duty of common carriers not to block messages — even potentially objectionable messages.
Let that sink in: The telegraph company, back then, under some circumstances, had a legal duty to accept and transmit a telegram, even though defamatory or objectionable.
In this regard, note also § 201(a) of the Communications Act, and further § 206, “Carriers’ liability for damages.”
So, “intermediary” is a large word. And occasionally, we do want “intermediaries” held liable.
Public funding [was Re: Re: ]
Neither Ollie's Barbecue nor the Heart of Atlanta Motel was publicly funded.
Nor was the company town in Marsh (1946), nor the grain elevator in Munn (1877).
Some people may make a religion or fetish out of “private” property, but it's not really all that much of a consideration when it comes down to obeying the law. Some people even make a religion out of not serving black people in their restaurants, and well, that's Ollie's Barbecue, and Ollie's Barbecue was not “created entirely through public funding”.
Re:
How much control do you believe the government should have over what can be said and who can say it on privately-owned platforms such as Twitter or, say, the Techdirt comments section?
Re: Re: Re: Unregulated monopolies [was Re: Re: Exactly...]
In the case of outright fraud, there's probably not much the government can do upfront to prevent lying, forgery, "cooking the books," and other aspects of a carefully-crafted scam operation. That probably goes for any subversion of the law. The IRS might be the best federal agency at catching white-collar crooks, which probably shouldn't surprise us, since it's in the government's own self-interest to catch tax cheats above all other kinds of fraud. But whether the audits, subpoenas, and general blood-hounding that the IRS is known for would be too intrusive to business if similarly practiced by the FTC, SEC, or other federal oversight agencies is something to be considered.
Regulation tends to be rearward-looking, trying to prevent the last major scandal after it's already happened. Hopefully something can be learned from the case of Theranos that can be applied in the future to prevent similar situations from repeating so easily, or at least being on guard for identifying the warning signs earlier.
If only more investors had practiced the Warren Buffett Rule, which goes something like ... "invest only in things you actually know something about."
Re: Re:
The problem is that the current internet economy tends to be dominated by a single giant "platform" in many types of service, and getting kicked off can be catastrophic because there are no comparable substitutes.
Let's say that you're an antique/junk dealer who just got kicked off eBay. You might as well shut down and go out of business, because all the other online auction sites are so tiny that you will likely never find a buyer.
Private Platforms [was Re: Re: ]
Where the line gets really blurry is on “privately-owned platforms” such as DNS.
If someone wants to operate a domain name such as TheSlants.com, then should Verisign Inc. (or ICANN), be required to host that “disparaging” speech on Verisign's private DNS servers?
Re: Private Platforms [was Re: Re: ]
DNS/domain name services are important to how we use the Internet; they should lean in favor of more speech instead of less, even if that speech is offensive or “disparaging”. I am loath to suggest regulating them in a way that forces them to host content which they do not want on their servers, though.
Re: Re: Re:
I would tell that dealer to stop relying on a single fixed point of income that could crash and burn at any moment. If your business can be destroyed by an eBay ban, it deserves to be destroyed by that ban.
Im calling bullsh!t
And worse, the brutal online bullying of speakers by the deep state online mobs working from Fort Benning, GA, Israel, or any of many NGOs that are currently waging domestic chaos on America's constitution.
Here are topics that are openly censored online:
1. criticism of what Norman Finkelstein calls the Holocaust industry, or the ADL spy rings of 1993 that targeted, tracked, and harassed peace activists, and ratted them out to police.
~ This contributed entirely to our current speech policing climate, and the total surveillance state we live in today.
https://www.counterpunch.org/2002/02/25/the-adl-spying-case-is-over-but-the-struggle-continues/
2. criticism of any associated repression that has occurred since then, and the direct link between speech policing and today's total surveillance state via the NSA-FVEY~Israel data theft pipeline
3. criticism of any other group or organization that has adopted the tactics of speech policing.
4. criticism of American police training under a zionified script in Israel.
5. discussions of media ownership, and the gatekeeping mechanisms of media~censorship.
All of these topics are censored/taboo/prohibited/targeted by military-grade influence operations
Even on this exact forum.
I am not worried about clumsy alt-right propaganda online, or David Duke, or even the Russians ~ because none of those are currently censoring, cyber stalking, offline stalking, or murdering American activists.
Re: @ "platform liability laws had a lot to do with it"
IS flat out fascistic censorship as any Nazi would do
Which your President Trump supports. You wouldn't want Techdirt to be non-Trumpish, now would you, blue boy?
I'll just wait for the dust to clear from the mental meltdown you're about to have...
Re: Im calling bullsh!t
Please take your anti-Semitic rhetoric to a more welcoming forum.
Re: @ "platform liability laws had a lot to do with it"
If you want the prior laws of publishing to apply, you will turn the platforms into publishers, and have to accept that the platform will vet your speech before publication, and demand transfer of copyright for significant works as well.
Facebook, Twitter, etc. are not publishers; they are platforms that let people publish for themselves, and that exercise post-publication removal of speech that breaches their terms of service.
In other words, without section 230 or similar protection for platforms, you would not be able to post freely here, or on any other site, but rather only have your speech published after a delay and if some editor approved it.
Im calling bullsh!t
This. Just this.
Feel free to make my point for me.
Re: Re: Private Platforms [was Re: Re: ]
DoJ has been loath to accept the idea that the publication of DNS resource records (NS and A RRs) is protected speech. See “Operation In Our Sites”, and recall particularly the arguments made in and around the Rojadirecta case.
If Puerto80's RRs aren't protected expression, then Verisign's carriage of those RRs isn't protected either — and any mandate to carry similar RR content on Verisign's DNS servers just doesn't involve compelled speech.
Re: Re: Private Platforms [was Re: Re: ]
Circa 1623 the justices sitting in King's Bench made an apt observation on quashing an indictment: Hill ou Pasch (2 Rolle 345; 81 Eng. Rep. 842). The report is in that Anglo-Norman bastard “Law French”.
The upshot being that a common innkeeper can take down his sign — quit — and thus discharge his burden of giving lodging to all travellers.
(Via David S. Bogen, “The Innkeeper's Tale: The Legal Development of a Public Calling”, p. 88 (p. 38 in PDF).)
Re: Re: Im calling bullsh!t
Please stop conflating my dislike for racist gang stalking scum with dislike of any one race, class, or ethnicity.
What a stupid useful idiot you are.
I am guessing you are goy, with a desperate need for Kabbalah credits on your own race-baiting account? Hope you use the blockchain!
I don't need those kinds of desperation points. Nor do I need racists and their useful idiots defining dialectic space for my country.
....
Tracking extremists
The ADL keeps track of the activities of various extremist groups and movements.[19] According to ADL Director Abe Foxman, "Our mission is to monitor and expose those who are anti-Jewish, racist, anti-democratic, and violence-prone, and we monitor them primarily by reading publications and attending public meetings …. Because extremist organizations are highly secretive, sometimes ADL can learn of their activities only by using undercover sources … [who] function in a manner directly analogous to investigative journalists. Some have performed great service to the American people—for example, by uncovering the existence of right-wing extremist paramilitary training camps—with no recognition and at considerable personal risk."[20] A person apprehended in connection with the 2002 white supremacist terror plot had drawn a cartoon of himself blowing up the Boston offices of the ADL.[21]
The ADL regularly releases reports on anti-Semitism and extremist activities on both the far left and the far right. For instance, as part of its Law Enforcement Agency Resource Network (L.E.A.R.N.), the ADL has published information about the Militia Movement[22] in America and a guide for law enforcement officials titled Officer Safety and Extremists.[23] An archive of "The Militia Watchdog" research on U.S. right-wing extremism (including groups not specifically cited as anti-Semitic) from 1995 to 2000 is also available on the ADL website.[22]
In the 1990s, some details of the ADL's monitoring activities became public and controversial, including the fact that the ADL had gathered information about some non-extremist groups. In 2013, J.M. Berger, a former nonresident fellow of the Brookings Institution, wrote that media organizations should be more cautious when citing the Southern Poverty Law Center and ADL, arguing that they are "not objective purveyors of data".[24]
In July 2017, the ADL announced that they would be developing profiles on 36 alt-right and alt-lite leaders.[25][26]