Don't Repeat FOSTA's Mistakes
from the learn-how-the-internet-works dept
Some of the most fruitful conversations we can have are about nuanced, sensitive, and political topics, and no matter who or where we are, the Internet has given us the space to do that. Across the world, an unrestricted Internet connection allows us to gather in online communities to talk about everything from the mundane to the most important and controversial, and together, to confront and consider our societies' pressing problems. But a growing chorus of U.S. politicians is considering dangerous new policies that would limit our ability to have those complex conversations online.
The Chair of the U.S. House Homeland Security Committee, Bennie Thompson, is urging tech companies to prioritize the removal of “sensitive, violent content” from their online platforms. But, as we feared might happen, the Chair didn’t stop there—he’s also threatening new legislation if the companies don’t move quickly.
In a letter written shortly after the heartbreaking shooting in New Zealand, which the shooter had livestreamed on multiple platforms, Rep. Thompson told Google, Facebook, Microsoft, and Twitter that if they don’t act, “Congress must consider policies to ensure that terrorist content is not distributed on your platforms, including by studying the examples being set by other countries.” Calling for more aggressive moderation policies in the face of horrifying crimes is understandable, particularly when the major online platforms have failed to address how they can be exploited by individuals who broadcast or amplify hate and violence to unsuspecting users. Some might even argue that more aggressive moderation is a lamentable but needed shift in the online landscape.
But the desire to hold platforms legally accountable for the content that users post often backfires, expanding to silence legitimate voices, especially those that have long sought to overcome marginalization. These policies reward platforms for their censorship rather than for their ability to distinguish bad speech from good, or for meaningfully updating their business models to address how they’re feeding into this behavior. That’s to say nothing of how the high technical bar required to implement the policies reinforces the dominance of the major platforms, which have the resources to comply with new regulation, while newer, innovative competitors do not. And if those policies are enacted into law—as has happened in other countries—the results are magnified, as platforms move to censor normal, everyday speech to protect themselves from liability.
FOSTA Provides Clear Evidence Of How These Regulations Fail
Congress doesn’t need to look at other countries for examples of how these sorts of policies might play out. Less than a year ago, it passed FOSTA, ostensibly to fight sex trafficking. Digital rights advocates, including EFF, fought against FOSTA in Congress because they feared its passage would threaten free expression online by criminalizing large portions of online speech and targeting sex workers and their allies. Groups that work closely with sex workers and sex trafficking victims warned Congress that the bill could put both consensual sex workers and sex trafficking victims in even more danger. Horribly, these warnings appear to have come true, as sex workers have reported being subject to violence while also being shut out of online platforms that they relied on to obtain health and safety resources, build communities, and advocate for their human rights.
FOSTA sent a wider shock wave through cyberspace, prompting takedowns of content and censorship that many wouldn’t expect from such a law. Although a wide range of plaintiffs are fighting the law in court, some of the damage is already done. Some websites made changes explicitly as a result: Craigslist, for example, shut down its entire personals section, citing the risk the law created for it. Other small, community-based platforms shut down entirely rather than deal with FOSTA’s crippling criminal and civil liability. And although we cannot be certain that the recent policy changes at platforms such as Tumblr and Facebook were a direct result of the law, they certainly appear to be. Tumblr banned all sexual content; Facebook created a new “sexual solicitation” policy that makes discussion of consensual, adult sex taboo.
Regardless of a direct link to FOSTA, however, it’s readily apparent that digital rights advocates’ worst fears are coming true: when platforms face immense liability for hosting certain types of user speech, they become so cautious that they over-correct and ban a vast range of discussions about sex, sexuality, and other important topics, because they need to steer well clear of anything that might lead to legal liability. Given the chilling effect that FOSTA has had on the Internet and on the community of sex workers and their allies who relied on online platforms, Internet users need to make sure Congress understands the damage that any law shifting liability for “terrorist” content onto platforms would cause.
A bill that makes platforms legally responsible for “terrorist content”—even one that seems like it would only impact a small range of speech—would force platforms to over-censor, and could affect a range of people, from activists discussing strategies and journalists covering newsworthy events to individuals simply voicing their opinions about the real and terrible things that happen in our world. Banishing topics from the Internet stunts our ability to grow and to solve problems that are real and worthy of our full attention. These types of regulations would not just limit the conversation—they would prevent us from engaging with the world's difficulties and tragedies. Just as an automated filter cannot grasp the nuanced difference between actual online sex trafficking and a discussion about sex trafficking, requiring platforms to distinguish discussion of terrorism from terrorist content itself—or face severe liability—would inevitably lead to an over-reliance on filters that silence the wrong people, and, as with FOSTA, would likely fall hardest on those most affected by terrorist acts.
Online platforms have the right to set their own policies, and to remove content that violates their community standards. Facebook, for example, has made clear that it will take down even segments of the horrendous video that are shared as part of a news report, or posts in which users “actually intended to highlight and denounce the violence.” It’s also updated its policy on removing content that refers to white nationalism and white separatism. But formally criminalizing the online publication of even a narrowly defined category of “terrorist content” essentially forces platforms to shift the balance in one direction, resulting in them heavily policing user content or barring certain topics from being discussed at all—and potentially silencing journalists, researchers, advocates, and other important voices in the process.
Remember: without careful—and expensive—scrutiny from moderators, platforms can’t tell the difference between hyperbole and hate speech, sarcasm and serious discussion, or pointing out violence and inciting it. As we’ve seen across the globe, users who engage in counter-speech against terrorism often find themselves on the wrong side of the rules. Facebook has deactivated the personal accounts of Palestinian journalists, Chechen independence activists, and even a journalist from the United Arab Emirates who posted a photograph of Hezbollah leader Hassan Nasrallah with an LGBTQ pride flag overlaid on it—a clear case of parody counter-speech that Facebook’s filters and content moderators failed to grasp.
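To make the problem concrete, here is a minimal sketch of why context-blind filtering over-censors. It is purely illustrative: the blocklist and posts are invented, and no real platform is assumed to work this simply. The point is that a keyword match alone cannot tell incitement apart from the reporting or parody that condemns it.

```python
# Hypothetical illustration only: a context-blind keyword filter.
# The blocklist and the example posts below are invented for this sketch.
BLOCKLIST = {"terrorist", "attack", "hezbollah"}

def naive_filter(post: str) -> bool:
    """Flag a post for removal if it contains any blocklisted term."""
    words = {word.strip(".,!?:;\"'").lower() for word in post.split()}
    return bool(words & BLOCKLIST)

posts = [
    "Join the attack tomorrow.",                                         # incitement
    "Journalists condemn yesterday's terrorist attack on worshippers.",  # news report
    "Parody image: Hezbollah leader under a rainbow pride flag.",        # counter-speech
]

for post in posts:
    print(naive_filter(post), post)
# All three posts are flagged. Telling them apart requires the careful,
# expensive human judgment described above--exactly what liability-driven
# rules push platforms to skip in favor of blanket removal.
```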
Creating Liability for Violent Content Would Be Unconstitutional
If members of Congress make good on their threat to impose legal liability on platforms that host “sensitive, violent content,” any resulting law would be plainly unconstitutional. The First Amendment sharply limits the government’s ability to punish or prohibit speech based on its content, especially when the regulation targets an undefined and amorphous category such as “sensitive, violent content.” Put simply: there is no exception to the First Amendment for that category of content, much less one for extremist or terrorist content, even though the public and members of Congress may believe such speech has little social value or that its dissemination may be harmful. As the Supreme Court has recognized, the “guarantee of free speech does not extend only to categories of speech that survive an ad hoc balancing of relative social costs and benefits.” Yet that kind of ad hoc balancing is precisely what Chairman Thompson proposes.
Moreover, although certain types of violent speech may be unprotected by the First Amendment, such as true threats and speech directly inciting imminent lawless action, the vast majority of the speech Chairman Thompson objects to is fully protected. And even if online platforms hosted unprotected speech such as direct incitement of violent acts, the First Amendment would bar imposing liability on the platforms unless they intended to encourage the violent acts and provided specific direction to commit them.
The First Amendment also protects the public’s ability to listen to or otherwise access others’ speech, because the ability to receive information is often the first step in exercising one’s own free speech rights. Because platforms will likely react to the threat of legal liability by simply not publishing any speech about terrorism—not merely speech directly inciting imminent terrorist attacks or expressing true threats—this would deprive platform users of the ability to decide for themselves whether to receive speech on certain topics. That runs directly counter to the First Amendment: imposing liability on platforms for hosting “sensitive, violent content” would also violate Internet users’ First Amendment rights.
Around the World, Laws Aimed At Curbing Extremist Speech Do More Harm Than Good
If Congress truly wants to look to other countries for examples of how such policies might be enacted, it should also look at whether those countries’ policies have been successful. By and large, regulations that require platforms to limit speech have failed, much as FOSTA has.
In France, an anti-terrorism law passed after the Charlie Hebdo shooting “leaves too much room for interpretation and could be used to censor a wider range of content, including news sites,” according to the Committee to Protect Journalists. Germany’s NetzDG, which requires companies to respond to reports of illegal speech within 24 hours, has resulted in the removal of lawful speech. And when democratic countries enact such regulations, more authoritarian governments are often inspired to do the same. For example, cybercrime laws implemented throughout the Middle East and North Africa often contain anti-terrorism provisions that have enabled governments to silence their critics.
The EU’s recently proposed regulation—which would require companies to take down “terrorist content” within one hour—might sound politically popular, but it would be poisonous to online speech. Along with dozens of other organizations, we’ve asked MEPs to consider the serious consequences that passing this regulation could have for human rights defenders and for freedom of expression. Requiring companies to remove content within an hour of its being posted essentially forces them to bypass due process and implement filters that censor first and ask questions later.
If anyone should think that our government would somehow overcome the tendency to abuse these sorts of regulations, take note: Just this month, the Center for Media Justice and the ACLU sued the FBI for refusing to hand over documents related to its surveillance of “Black Identity Extremists,” a “new domestic terror threat” that, for all intents and purposes, it seems to have made up. Government agencies have a history of defining threats without offering transparency about how they arrive at those definitions, giving them the ability to decide with impunity who to surveil. We should not give them the ability to decide who to censor on online platforms as well. While allowing Internet companies to self-moderate may not be a perfect solution, the government should be extremely careful in considering any new regulations that would limit speech—or else it will be wading into ineffective, dangerous, and unconstitutional territory.
Reposted from the EFF's Deeplinks blog