from the this-is-a-bad-idea dept
Look, let's just start with the basics: there are some bad people out there. Even if the majority of people are nice and well-meaning, there are always going to be some who are not. And sometimes, those people are going to use the internet. Given that as a starting point, you'd think we could at least deal with it calmly and rationally, and recognize that maybe we shouldn't blame the tools for the fact that some not very nice people happen to use them. Unfortunately, it appears to be asking a lot these days to expect our politicians to do this. Instead, they (and many others) rush out immediately to assign blame for the fact that these "not nice" people exist, and rather than pointing the finger at the not nice people themselves, they point at... the internet services those people use.
The latest example of this comes from the UK Parliament, which has released a report on "hate crime" that effectively blames internet companies and suggests they should be fined because not nice people use them. Seriously. From the report:
Here in the UK we have easily found repeated examples of social media companies failing to remove illegal content when asked to do so—including dangerous terrorist recruitment material, promotion of sexual abuse of children and incitement to racial hatred. The biggest companies have been repeatedly urged by Governments, police forces, community leaders and the public, to clean up their act, and to respond quickly and proactively to identify and remove illegal content. They have repeatedly failed to do so. That should not be accepted any longer. Social media is too important to everyone—to communities, individuals, the economy and public life—to continue with such a lax approach to dangerous content that can wreck lives. And the major social media companies are big enough, rich enough and clever enough to sort this problem out—as they have proved they can do in relation to advertising or copyright. It is shameful that they have failed to use the same ingenuity to protect public safety and abide by the law as they have to protect their own income.
Social media companies currently face almost no penalties for failing to remove illegal content. There are too many examples of social media companies being made aware of illegal material yet failing to remove it, or to do so in a timely way. We recommend that the Government consult on a system of escalating sanctions to include meaningful fines for social media companies which fail to remove illegal content within a strict timeframe.
This is the kind of thing that sounds good to people who (a) don't understand how these things actually work and (b) don't spend any time thinking through the consequences of such actions.
First off, it's easy for politicians and others to sit there and assume that "bad" content is obviously bad. The problem here is twofold: first, there is so much content showing up that spotting the "bad" stuff is not nearly as easy as people assume, and second, because there's so much content, it's often difficult to understand the context enough to recognize if something is truly "bad." People who think this stuff is obvious or easy are ignorant. They may be well-meaning, but they're ignorant.
So, for example, they say that these are cases where such content has been "reported," on the assumption that this means the companies must now "know" that the content is bad and should remove it. The reality is much more difficult. Do they recognize how many such reports these companies receive? Do they realize that before companies start taking down content willy-nilly, they have to actually understand what's going on? Do they realize that it's not always easy to figure out what's really happening?
Let's go through the examples given: "dangerous terrorist recruitment material." Okay, seems obvious. But how do you distinguish terrorist recruitment videos from documenting terrorist atrocities? It's not as easy as you might think. Remember how a video of a European Parliament debate on anti-torture was taken down because the system or a reviewer thought it was promoting terrorism? People think this stuff is black and white, but it's not. It's all gray. And the shades of gray are very difficult to distinguish. And the shades of gray may differ greatly from one person to another.
Sexual abuse of children. Yes, clearly horrible. Clearly things need to be done. There are already systems through which government-associated organizations and social media platforms share hashes of photos deemed to be problematic, and matching images get blocked. But, again, edge cases are tricky. Remember how, not that long ago, Facebook got mocked for taking down the famed Napalm Girl photo? Here's a rule that seems black and white: no naked children. Seems reasonable. Except... this naked child is an iconic photo that demonstrates the horrors of war. That doesn't mean we should allow all pictures of naked children online -- far from it, obviously. But the point is that it's not always so black and white, and any policy proposal that assumes it is (as the UK Parliament seems to be suggesting) is going to create a mess.
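To make that hash-sharing mechanism concrete, here's a very rough sketch of how this kind of hash-list blocking works. This is my own illustration, not anything from the report: real deployments use perceptual hashes (e.g. PhotoDNA) distributed by organizations like the IWF or NCMEC, while this toy version just does a plain SHA-256 lookup against a placeholder list.

```python
# Toy sketch of hash-list blocking. Real systems use perceptual hashes
# shared by child-safety organizations; the plain SHA-256 and the empty
# placeholder set here are purely illustrative.
import hashlib

# In practice this would be populated from a shared, vetted hash list.
KNOWN_BAD_HASHES: set[str] = set()

def should_block(image_bytes: bytes) -> bool:
    """Return True if the upload's hash matches the shared block list."""
    digest = hashlib.sha256(image_bytes).hexdigest()
    return digest in KNOWN_BAD_HASHES
```

The catch, of course, is that this only blocks images that have already been identified and hashed. It says nothing about novel images or about context, which is exactly where the hard cases live.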
Next on the list: "incitement to racial hatred." This would be so-called "hate speech." But, as we've noted time and time again, this kind of thinking always ends up turning into authoritarian abuse. Over and over again we see governments punish people they don't like by claiming that what they're saying is "hate speech." But, you say, "incitement to racial hatred" is clearly over the line. And, sure, I agree. But be careful about who gets to define both "incitement" and "racial hatred." You might not be so happy. Here in the US, there are people who (ridiculously, in my opinion) argue that groups like Black Lives Matter are a form of "incitement to racial hatred." Now, you might think that's crazy, but there are lots of people who disagree with you. And some of them are in power. Are you happy about handing them the tools to demand that all social media sites take down their content or face fines? Or, how do you expect Google and Facebook to instantly determine whether a video is a clip from a Hollywood movie rather than "incitement to racial hatred"? There are plenty of powerful scenes in movies that none of us would consider "polite speech," but we don't think they should be taken down as "incitement to racial hatred."
Then the report notes that "the major social media companies are big enough, rich enough and clever enough to sort this problem out." First off, that's not true. As noted above, companies make mistakes about this stuff all the time. They take down stuff that should be left up. They leave up stuff that people think they should take down. You have no idea how many of these decisions these companies have to make each and every day. Sometimes they get it right. Sometimes they don't. Punishing them for being too slow is a near guarantee that they'll take down a ton of legitimate stuff just to avoid punishment.
Separately, who decides who counts as a "major social media company" that has to do this? If rules are passed saying social media companies have to block this stuff, congrats: you've just guaranteed that Facebook and Google/YouTube are the last such companies. No new entrant will be able to take on the burden and liability of censoring all content. And if you try to somehow, magically, carve out only "major" social media companies, how do you set those boundaries without creating massive unintended consequences?
The report falsely claims that these companies have successfully created filters that can deal with advertising and copyright, which is laughable and, once again, ignorant. The ad filter systems on these platforms are terrible. We use Google ads for some of our ad serving, and on a near-constant basis we're weeding out terrible ads, because no filter catches everything and awful people are getting their ads into the system all the time. And copyright? Really? If that's the case, why are the RIAA/MPAA still whining about Google daily? These things are much harder than people think, and it's quite clear that whoever prepared this report has no clue and hasn't spoken to anyone who understands this stuff.
Social media companies currently face almost no penalties for failing to remove illegal content.
What a load of hogwash. They face tremendous "penalties" in the form of public anger. Whenever these stories come out, the companies in question talk about how much more they need to do, and how many people they're hiring to help and all that. They wouldn't be doing that if there were "no penalties." The "penalties" don't need to be legal or fines. It's much more powerful when the actual users of the services make it clear what they don't like and won't stand for. Adding an additional legal threat doesn't change or help with that. It just leads to more problems.
And that's just looking at two awful paragraphs. There's much more like that. As Alec Muffett points out, the report has some really crazy ideas, like saying that the services need to block "probably illegal content" that has "similar names" to illegal content:
Despite us consistently reporting the presence of videos promoting National Action, a proscribed far-right group, examples of this material can still be found simply by searching for the name of that organisation. So too can similar videos with different names. As well as probably being illegal, we regard it as completely irresponsible and indefensible.
So, not only do the authors of this report want Google to remove any video that is reported, no questions asked (despite a long history of such systems being widely abused), they also want it to magically find all "similar" content that is "probably illegal," even under "different names." Do they have any idea what they're asking for? And immediately after that, they again insist that this must be possible because of copyright filters. Of course, these would be the same copyright filters that tried to take down Cory Doctorow's book Homeland because it had a "similar name" to the Fox TV show "Homeland." "Similar names" is a horrific way to build a censorship system. It will not work.
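To see how badly that goes, here's a toy example (mine, not the report's) of what a "similar name" matcher would look like. By name alone, Cory Doctorow's novel and the TV show are literally the same string, so any such filter flags them both:

```python
# Toy "similar names" matcher, purely illustrative. The threshold value
# and function name are made up for this sketch.
import difflib

def similar_name(title: str, blocked_title: str, threshold: float = 0.8) -> bool:
    """Flag a title whose string similarity to a blocked title exceeds the threshold."""
    ratio = difflib.SequenceMatcher(None, title.lower(), blocked_title.lower()).ratio()
    return ratio >= threshold

book_title = "Homeland"   # Cory Doctorow's novel
show_title = "Homeland"   # the TV series the copyright filter is protecting
print(similar_name(book_title, show_title))  # True: the filter cannot tell them apart
```

No amount of tuning the threshold saves you here, because the titles are identical. The only way to tell the two apart is context, which is precisely what a name-based filter doesn't have.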
What's so frustrating about this kind of nonsense is that it keeps popping up again and again, often from people with real power, in large part because they simply do not comprehend the actual consequences of what they're proposing, or the nature of the underlying problem. There are not nice people doing not nice things online. We can all agree (hopefully) that we don't like these not nice people and especially don't like the not nice things they do online. But to assume that the answer is to blame the platforms they use for not censoring them fast enough misses the point completely. It will create tremendous collateral damage for tons of people, often including the most vulnerable, while doing absolutely nothing to deal with the not nice people and the not nice things they are doing.
Filed Under: censorship, europe, hate crime, hate speech, intermediary liability, parliament, platforms, uk
Companies: facebook, google, twitter, youtube