May And Macron's Ridiculous Adventure In Censoring The Internet
from the these-are-bad-ideas,-marc dept
For some observers, struggling UK Prime Minister Theresa May and triumphant French President Emmanuel Macron may seem to sit at somewhat opposite ends of the current political spectrum. But... apparently they agree on one really, really bad idea: that it's time to massively censor the internet and to blame tech companies if they don't censor enough. We've been explaining for many years why this is a bad idea, but apparently we need to do so again. First, the plan:
The prime minister and Emmanuel Macron will launch a joint campaign on Tuesday to tackle online radicalisation, a personal priority of the prime minister from her time as home secretary and a comfortable agenda for the pair to agree upon before Brexit negotiations begin next week.
In particular, the two say they intend to create a new legal liability for tech companies if they fail to remove inflammatory content, which could include penalties such as fines.
It's no surprise that May is pushing for this. She's been pushing to regulate the internet for quite some time, and it's a core part of her platform (which is a bit "weak and wobbly," as they say these days). But Macron... well, he's been held up repeatedly as a "friend" to the tech industry, so this has to be seen as a bit of a surprise in the internet world. Of course, there were hints that he might not really be all that well versed in how technology works when he appeared to support encryption backdoors. This latest move just confirms an unfortunate ignorance about the technology and internet landscape.
Creating a new legal liability for companies that fail to remove inflammatory content is going to be a massive disaster in many, many ways. It will damage the internet economy in Europe. It will create massive harms to free speech. And, it won't do what they seem to think it will do: it won't stop terrorists from posting propaganda online.
First, a regime that fines companies for failing to remove "inflammatory content" will lead companies to censor broadly, out of fear that any borderline content they leave up may open them up to massive liability. This is exactly how the Great Firewall of China works. The Chinese government doesn't just say "censor bad stuff"; it tells ISPs that they'll be fined if they allow bad stuff through. And thus, the ISPs over-censor, to avoid leaving anything online that might put them at risk. And, when it comes to free speech, doing something "the way the Chinese do things" tends not to be the best idea.
Second, related to that, once they open up this can of worms, they may not be happy with how it turns out. It's great to say that you don't think "inflammatory content" should be allowed online, but who gets to define "inflammatory" makes a pretty big difference. As we've noted, you always want to design regulations as if the people you trust the least are in power. This is not to say that May or Macron themselves would do this, but would you put it past some politicians in power to argue that online content from political opponents is too "inflammatory" and thus must be removed? What about if the press reveals corruption? That could be considered "inflammatory" as well.
Third, one person's "inflammatory content" is another's "useful evidence." We see this all the time in other censorship cases. I've written before about how YouTube was pressured to take down inflammatory "terrorist videos" in the past, and ended up taking down the account of a human rights group documenting atrocities in Syria. It's easy to say "take down terrorist content!" but it's not always easy to recognize what's terrorist propaganda versus what's people documenting the horrors that the terrorists are committing.
Fourth, time and time again, we've seen the intelligence community come out and argue against this kind of censorship, noting that terrorists posting inflammatory content online is a really useful way to figure out what they're up to. Demanding that platforms take down these useful sources of open source intelligence will actually harm the intelligence community's ability to monitor and stop plans of attack.
Fifth, this move will almost certainly be used by autocratic and dictatorial regimes to justify their own widespread crackdowns on free speech. And, sure, they might do that already, but giving up the moral high ground can be deeply problematic in diplomatic situations. How can UK or French diplomats push for more freedom of expression in, say, China or Iran, if they're actively putting this in place back home? Sure, you can say the situations are different, but officials from those countries will argue it's the exact same thing: you're censoring the internet to "protect" people from "dangerous content." Well, they'll argue, that's the same thing we do -- it's just that we have different threats to protect against.
Sixth, this will inevitably be bad for innovation and the economy in both countries. Time and time again, we've seen that leaving internet platforms free from liability for the actions of their users is what has helped those companies develop, provide useful services, employ lots of people and generally help create new economic opportunities. With this plan, sure, Google and Facebook can likely figure out some way to censor some content -- and can probably stand the risk of some liability. But pretty much every other smaller platform? Good luck. If I were running a platform company in either country, I'd be looking to move elsewhere, because the cost of complying and the risk of failing to take down content would simply be too much.
Seventh, and finally, it won't work. The "problem" is not that this content exists. The problem is that lots of people out there are susceptible to such content and are interested in and/or swayed by it. That's a much more fundamental problem, and censoring such content doesn't do much good. Instead, it tends only to rile up those who were already susceptible to it. They see that the powers-that-be -- whom they already don't trust -- find this content "too dangerous," and that draws them in even closer to it. And, of course, that content will find many other places to live online.
Censoring "bad" content always seems like an easy solution if you haven't actually thought through the issues. It's not a surprise that May hasn't -- but we had hopes that perhaps Macron wouldn't be swayed by the same weak arguments.
Filed Under: censorship, emmanuel macron, filtering, france, free speech, inflammatory content, intermediary liability, terrorism, terrorist content, theresa may, uk
Companies: facebook, google