Don't Repeat FOSTA's Mistakes

from the learn-how-the-internet-works dept

Some of the most fruitful conversations we can have are about nuanced, sensitive, and political topics, and no matter who or where we are, the Internet has given us the space to have them. Across the world, an unrestricted Internet connection allows us to gather in online communities to talk about everything from the mundane to the most important and controversial, and together, to confront and consider our societies' pressing problems. But a growing chorus of U.S. politicians is considering dangerous new policies that would limit our ability to have those complex conversations online.

The Chair of the U.S. House Homeland Security Committee, Bennie Thompson, is urging tech companies to prioritize the removal of “sensitive, violent content” from their online platforms. But, as we feared might happen, the Chair didn’t stop there—he’s also threatening new legislation if the companies don’t move quickly.

In a letter written shortly after the heartbreaking shooting in New Zealand, which the shooter had livestreamed on multiple platforms, Rep. Thompson told Google, Facebook, Microsoft, and Twitter that if they don’t act, “Congress must consider policies to ensure that terrorist content is not distributed on your platforms, including by studying the examples being set by other countries." Calling for more aggressive moderation policies in the face of horrifying crimes is understandable, particularly when the major online platforms have failed to address how they can be exploited by individuals who broadcast or amplify hate and violence to unsuspecting users. Some might even argue that more aggressive moderation is a lamentable but needed shift in the online landscape.

But the desire to hold platforms legally accountable for the content that users post often backfires, expanding to silence legitimate voices, especially those that have long sought to overcome marginalization. These policies reward platforms for their censorship rather than for their ability to distinguish bad speech from good, or for meaningfully updating their business models to address how they’re feeding into this behavior. Nor is that all: the high technical bar required to implement these policies reinforces the dominance of the major platforms, which have the resources to comply with new regulation while new, innovative competitors do not. And if those policies are enacted into law—as has happened in other countries—the results are magnified, as platforms move to censor normal, everyday speech to protect themselves from liability.

FOSTA Provides Clear Evidence Of How These Regulations Fail

Congress doesn’t need to look at other countries for examples of how these sorts of policies might play out. Less than a year ago, it passed FOSTA, ostensibly to fight sex trafficking. Digital rights advocates, including EFF, fought against FOSTA in Congress because they feared its passage would threaten free expression online by criminalizing large portions of online speech and targeting sex workers and their allies. Groups that work closely with sex workers and sex trafficking victims warned Congress that the bill could put both consensual sex workers and sex trafficking victims in even more danger. Horribly, these warnings appear to have come true: sex workers have reported being subject to violence while also being shut out of the online platforms they relied on to obtain health and safety resources, build communities, and advocate for their human rights.

FOSTA sent a wider shock wave through cyberspace, resulting in takedowns of content and censorship that many wouldn’t expect to result from such a law. Although a wide range of plaintiffs are fighting the law in court, some of the damage is already done. Some websites made changes explicitly as a result: Craigslist, for example, shut down its entire personals section, citing the risk the law created for it. Other small, community-based platforms shut down entirely rather than deal with FOSTA’s crippling criminal and civil liability. And although we cannot be certain that recent policy changes at platforms such as Tumblr and Facebook were a direct result of the law, they certainly appear to be: Tumblr banned all sexual content, and Facebook created a new “sexual solicitation” policy that makes discussion of consensual, adult sex taboo.

Regardless of a direct link to FOSTA, however, it’s readily apparent that digital rights advocates’ worst fears are coming true: when platforms face immense liability for hosting certain types of user speech, they become so cautious that they over-correct, banning a vast range of discussions about sex, sexuality, and other important topics in order to steer well clear of anything that might lead to legal liability. Given the incredible chilling effect that FOSTA has had on the Internet and on the community of sex workers and their allies who relied on online platforms, Internet users need to ensure that Congress understands the damage any law aimed at shifting liability for “terrorist” content onto platforms would cause.

A bill that makes platforms legally responsible for “terrorist content”—even one that seems like it would only impact a small range of speech—would force platforms to over-censor, and could affect a range of people, from activists discussing strategies and journalists discussing newsworthy events to individuals simply voicing their opinions about the real and terrible things that happen in our world. Banishing topics from the Internet stunts our ability to grow and solve issues that are real and worthy of our full attention. These types of regulations would not just limit the conversation—they would prevent us from engaging with the world's difficulties and tragedies. Just as an automated filter cannot determine the nuanced difference between actual online sex trafficking and a discussion about sex trafficking, requiring platforms to decide—or face severe liability—whether a discussion of terrorist content is itself terrorist content would inevitably lead to an over-reliance on filters that silence the wrong people, and, as with FOSTA, would likely harm those most affected by terrorist acts.

Online platforms have the right to set their own policies, and to remove content that violates their community standards. Facebook, for example, has made clear that it will take down even segments of the horrendous video that are shared as part of a news report, or posts in which users “actually intended to highlight and denounce the violence.” It’s also updated its policy on removing content that refers to white nationalism and white separatism. But formally criminalizing the online publication of even a narrow definition of “terrorist content” essentially forces platforms to shift the balance in one direction, resulting in them heavily policing user content or barring certain topics from being discussed at all—and potentially silencing journalists, researchers, advocates, and other important voices in the process.

Remember: without careful—and expensive—scrutiny from moderators, platforms can’t tell the difference between hyperbole and hate speech, sarcasm and serious discussion, or pointing out violence and inciting it. As we’ve seen across the globe, users who engage in counter-speech against terrorism often find themselves on the wrong side of the rules. Facebook has deactivated the personal accounts of Palestinian journalists, Chechen independence activists, and even a journalist from the United Arab Emirates who posted a photograph of Hezbollah leader Hassan Nasrallah with an LGBTQ pride flag overlaid on it—a clear case of parody counter-speech that Facebook’s filters and content moderators failed to grasp.
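To make the problem concrete, here is a minimal sketch of the kind of keyword matching that liability pressure pushes platforms toward. It is a hypothetical illustration in Python, with an invented term list and invented example posts, not a description of any real platform's system: because the filter matches words rather than meaning, it flags journalism, parody, and research right alongside actual incitement.

    # Hypothetical keyword filter of the sort strict liability encourages.
    # Purely illustrative; not any real platform's moderation system.

    BLOCKED_TERMS = {"terrorist", "attack", "hezbollah"}

    def naive_filter(post: str) -> bool:
        """Return True if the post would be removed."""
        words = set(post.lower().split())
        return bool(words & BLOCKED_TERMS)

    posts = [
        "join the attack tomorrow",                        # incitement
        "breaking: reporters cover the terrorist attack",  # journalism
        "parody photo mocking a hezbollah leader",         # counter-speech
        "my thesis asks why terrorist propaganda spreads", # research
    ]

    for post in posts:
        print(naive_filter(post), "->", post)

    # All four posts come back True: the filter sees words, not meaning,
    # so news, parody, and research are censored alongside incitement.

Telling these cases apart requires exactly the careful, expensive human judgment described above, which is what liability-driven moderation at scale prices out.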

Creating Liability for Violent Content Would Be Unconstitutional

If members of Congress make good on their threat to impose legal liability on platforms that host “sensitive, violent content,” the resulting law would be plainly unconstitutional. The First Amendment sharply limits the government’s ability to punish or prohibit speech based on its content, especially when the regulation targets an undefined and amorphous category such as “sensitive, violent content.” Put simply: there isn’t an exception to the First Amendment for that category of content, much less one for extremist or terrorist content, even though the public and members of Congress may believe such speech has little social value or that its dissemination may be harmful. As the Supreme Court has recognized, the “guarantee of free speech does not extend only to categories of speech that survive an ad hoc balancing of relative social costs and benefits.” Yet precisely that kind of ad hoc balancing is what Chairman Thompson proposes.

Moreover, although certain types of violent speech may be unprotected by the First Amendment, such as true threats and speech directly inciting imminent lawless activities, the vast majority of the speech Chairman Thompson objects to is fully protected. And even if online platforms hosted unprotected speech such as direct incitement of violent acts, the First Amendment would bar imposing liability on the platforms unless they intended to encourage the violent acts and provided specific direction to commit them.

The First Amendment also protects the public’s ability to listen to or otherwise access others’ speech, because receiving information is often the first step to exercising one’s own free speech rights. Because platforms would likely react to the threat of legal liability by simply not publishing any speech about terrorism—not merely speech directly inciting imminent terrorist attacks or expressing true threats—this would deprive platform users of the ability to decide for themselves whether to receive speech on certain topics. This runs directly counter to the First Amendment, and imposing liability on platforms for hosting “sensitive, violent content” would thus also violate Internet users’ First Amendment rights.

Around the World, Laws Aimed At Curbing Extremist Speech Do More Harm Than Good

If Congress truly wants to look to other countries as examples of how such policies might be enacted, it should also look at whether those policies have been successful. By and large, requiring platforms to limit speech through similar regulations has failed, much as FOSTA has.

In France, an anti-terrorism law passed after the Charlie Hebdo shooting “leaves too much room for interpretation and could be used to censor a wider range of content, including news sites,” according to the Committee to Protect Journalists. Germany’s NetzDG, which requires companies to respond to reports of illegal speech within 24 hours, has resulted in the removal of lawful speech. And when democratic countries enact such regulations, more authoritarian governments are often inspired to do the same. For example, cybercrime laws implemented throughout the Middle East and North Africa often contain anti-terrorism provisions that have enabled governments to silence their critics.

The EU’s recently proposed regulation—which would require companies to take down “terrorist content” within one hour—might sound politically popular, but it would be poisonous to online speech. Along with dozens of other organizations, we’ve asked MEPs to consider the serious consequences that passing this regulation could have for human rights defenders and for freedom of expression. Requiring companies to remove content within an hour of its being posted essentially forces them to bypass due process and implement filters that censor first and ask questions later.

If anyone thinks our government would somehow resist the temptation to abuse these sorts of regulations, take note: just this month, the Center for Media Justice and the ACLU sued the FBI for refusing to hand over documents related to its surveillance of “Black Identity Extremists,” a “new domestic terror threat” that, for all intents and purposes, it seems to have made up. Government agencies have a history of defining threats without offering transparency about how they arrive at those definitions, giving them the ability to decide whom to surveil with impunity. We should not give them the ability to decide whom to censor on online platforms as well. While allowing Internet companies to self-moderate may not be a perfect solution, the government should be extremely careful in considering any new regulation that would limit speech—or else it will be wading into ineffective, dangerous, and unconstitutional territory.

Reposted from the EFF's Deeplinks blog

Filed Under: fosta, internet


Reader Comments



  • Anonymous Coward, 2 Apr 2019 @ 1:14pm

    It's the Politician's syllogism:

    1. We must do something
    2. This is something
    3. Therefore, we must do this.


  • ECA (profile), 2 Apr 2019 @ 2:52pm

    Why is it that logical ideas are forbidden??

    Lets see...
    Where do you install a SERVICE that can help Prostitutes...
    Where would you install a Service to HELP STUDENTS with what's happening IN SCHOOL?
    Where would you install ANY service to help, and control, and Assist those in need???

    WHERE THEY'RE MOST NEEDED...not way over there, someplace in a one-room apartment...

    Even a portable unit that runs through the area, waiting for KIDS HIJACKED into the underground...to have ACCESS and a CHANCE to get out...

    A place for students to COME AND COMPLAIN, BITCH, MOAN and groan...and have someone listen, and take action AS NEEDED....

    The Church USED to do this, but got stuck in the middle of a SHIT load of controversy..

    WHO pays for this???
    you donate to church, you pay taxes, you TELL THE SCHOOL, you DO the work yourself...
    What happened to the SAFE PLACE SYSTEM, where parents would give shelter to those KIDS in need???

    THE OTHER SUBJECT....................................
    Anyone should be able to CREATE their OWN site....NOT ADVERTISED...
    (which is a great idea as the cops and feds can watch and monitor it)
    ANY IDEAS can be posted and said, in that CLOSED area...
    A closed AREA that can Admin the CRAP out of itself and regulate anything said...
    THAT IS THE CHOICE OF THE SITE...it's an IDEAL of anything they wish, NOT TO SAY...(wow, I love religion)
    .......ASide
    I don't care if it's FB, YT, or any other site....
    Let them make the restriction,
    BUT....
    They have to stay in their OWN AREA....and if the service DEMANDS IT...
    They can't EDIT what others may say in THAT GROUP...
    .................................................
    Let people have Anything to say and DEBATE as they wish, but you have to go find it...it may not be public, and not advertised..IT IS STILL THERE...


  • Anonymous Coward, 2 Apr 2019 @ 7:54pm

    They're not mistakes. They were perfectly intended desirable outcomes.

    FOSTA was a demonstration to prove that politicians could get laws written despite public backlash. Same for the net neutrality repeal. All while doing jack all to solve the problems such laws were supposed to solve. Because the denizens of the ivory tower won't be affected - it's a feature, not a bug.


  • Anonymous Coward, 3 Apr 2019 @ 6:11am

    The whole aim is to take from the public the ability to discover all the lies and BS that politicians, the rich, the powerful, and the famous spin, and to pass it on. Those mentioned above have had centuries of being able to do and say exactly what they want, and they don't want it to stop. On the other hand, those same people want to be able to know everything possible about everyone on the Planet, with a view to being able to prosecute everyone who had the audacity to make their escapades public. The way to do this is exactly what is happening: removing this ability by changing old laws and introducing new ones, such as the 'Right to be forgotten' and the new laws in the EU (touted as requiring no filters until passed; now filters are required everywhere, resulting in literally millions of sites closing because they are unable to pay for filters, and are even less able to pay the damages when, not if, they are found liable for breaking the laws).

    The public must not be allowed to hold to account those who want to be able to do anything they want while holding the rest of the planet under slavery terms! Wars have been fought to stop the desires of one nation being inflicted on others, and to stop many countries being ruled by one, but how the hell do you stop those of the same wealth, power, and mindset, spread across the Planet, in every country, in positions of power in every government, making laws and rules that everyone but them have to follow, with severe consequences if not? The Internet basically set the people free, and the elite must stop it! And they are doing everything possible to do so. It's more important than stopping aliens from being acknowledged! The damage to the Planet, however, is catastrophic, but is immaterial compared to being able to carry on, themselves, being 'above the law'!


  • ECA (profile), 3 Apr 2019 @ 12:21pm

    The best thing we did...

    was leave the EU and come to the Americas.
    We settled and created a new world with great ideas and designs..
    THEN for some reason we invited those that we LEFT BEHIND to follow us.. I think WE forgot why/who we left behind..

    NOW those SAME reasons are back again.. and we don't have SHIP(S) to leave them behind.


  • Anonymous Coward, 3 Apr 2019 @ 10:48pm

    Of course, if one country ever becomes a reality, such laws will be unenforceable.

    The Republic Of Silicon Valley, which could happen in the event of a CalExit, could become the Internet equivalent of Swiss banking.

    The Siliconian government could merely tell the remaining USA that it will not cooperate with the United States, and there is nothing the USA could do about it.

    Companies located in the Republic Of Silicon Valley would no longer have to comply with United States laws. They would only be subject to Siliconian laws; the laws of the California Republic, the USA, the EU, and others would not apply.

    The EU's Article 13 would also be unenforceable in the Republic Of Silicon Valley.


