If You Think Big Internet Companies Are Somehow To Blame For The New Zealand Massacre, You're Wrong
from the big-tech-derangement-syndrome dept
I know, I know, it's cool these days to hate on the big internet companies, but people keep getting carried away with accusations that don't reflect reality. We should be able to agree that there are problems with the big internet companies (and to suggest ways to deal with them), without falling prey to easy attacks on those companies that don't make sense once you understand the details. The latest example of this "Big Tech Derangement Syndrome" came in response to last week's absolutely horrific massacre in Christchurch, New Zealand. As many have noted, the attack was almost perfectly planned to play to a certain corner of the shitposting, trolltastic parts of the internet. Indeed, the use of social media by the attacker appeared to follow a very similar model to the one that had been perfected by ISIS a few years ago.
This has, of course, resulted in a lot of hand-wringing about the role of the internet in all of this. And I do think that there are many worthwhile conversations to be had about how internet platforms promote or highlight certain content over other content (though, of course, any attempt to really deal with that will almost certainly lead to more bogus accusations of "conservative bias" on these platforms).
But what was disturbing to me was how many people focused on the fact that the various internet platforms had a tough time getting rid of all of the copies of the livestream video that was posted of the attack. I saw multiple variations on the theme that "if the internet platforms really cared about it, they could block those videos," with the clear implication being that the platforms don't take this issue seriously. A particularly ill-informed variant on this was "if this video were covered by copyright, it would have been blocked." Of course, those of us who spend lots of time talking about the failures of using filters to try to block copyright-covered content know that's not even close to true.
What surprised and disappointed me was that this even came from people who should know better. The Washington Post's Drew Harwell led the charge with a widely retweeted thread that lamented how many internet platforms carried aspects of the attack and questioned what "responsibility" the platforms had:
The New Zealand massacre was livestreamed on Facebook, announced on 8chan, reposted on YouTube, commentated about on Reddit, and mirrored around the world before the tech companies could even react.
— Drew Harwell (@drewharwell) March 15, 2019
What responsibility do we want these companies to have? On Reddit, one of the most popular sites on the Internet, people have been narrating the video on a forum called "watchpeopledie." After more than an hour, this was posted: pic.twitter.com/C8nmt7CZgh
— Drew Harwell (@drewharwell) March 15, 2019
It's been eight hours and you can literally still watch this video on YouTube.
— Drew Harwell (@drewharwell) March 15, 2019
Harwell, along with multiple other Washington Post reporters, then put out an article entitled The New Zealand shooting shows how YouTube and Facebook spread hate and violent images -- yet again. It seems to lay the blame squarely at the feet of the tech platforms:
Friday’s slaughter in two New Zealand mosques played out as a dystopian reality show delivered by some of America’s biggest technology companies. YouTube, Facebook, Reddit and Twitter all had roles in publicizing the violence and, by extension, the hate-filled ideology behind it.
These companies — some of the richest, most technologically advanced in the world — failed to rapidly quell the spread of troubling content as it metastasized across platforms, bringing horrific images to internet users worldwide.
This is literally blaming the messenger, and distracting from those actually responsible (the horrible, despicable excuse for a human being who carried out the attack and the awful people who cheered him on) to instead blame the tools of communication that all of us use. We use them because they are convenient and powerful. And that includes using them for horrific messages as well as nicer ones.
Harwell's colleague at the Washington Post, Margaret Sullivan, whose views I almost always find myself nodding in agreement with, also seemed strangely out of touch on this one, insisting the platforms need to "get serious."
Editorial judgment, often flawed, is not only possible. It’s necessary.
The scale and speed of the digital world obviously complicates that immensely. But saying, in essence, “we can’t help it” and “that’s not our job” are not acceptable answers.
Friday’s massacre should force the major platforms — which are really media companies, though they don’t want to admit it — to get serious.
As violence goes more and more viral, tech companies need to deal with the crisis that they have helped create.
They must figure out ways to be responsible global citizens as well as profitmaking machines.
Another tweet, from Alex Hern, a technology reporter at the Guardian, literally suggested that YouTube and Facebook need to hire a single person to keep doing searches to delete videos. The tweet said:
it is days like today that I just do not understand why YouTube and Facebook don't hire one person--just one--to sit there searching for "New Zealand terror attack" and just delete the obvious reposts that keep popping up on that search term.
There were probably a million tweets trying to make this point in a similar way. The general theme is that the internet platforms don't care about this stuff, and that they optimize for profits over the good of society. And, while that may have been an accurate description a decade ago, it has not been true in a long, long time. The problem, as we've been discussing here on Techdirt for a while, is that content moderation at scale is impossible to get right. It is not just "more difficult"; it is impossible in the sense that the results will never be acceptable to the people who are complaining.
Part of that is because human beings are flawed. And some humans are awful people. And they will do awful things. But we don't blame "radio" for Hitler (Godwin'd!) just because it was a tool the Nazis used. We recognize that, in every generation, there may be terrible people who do terrible things, using the technologies of the day.
But the bigger issue is that both the scale of and the challenges in moderating content like this present a much more difficult problem than most people (including, apparently, tech reporters) understand. Over the weekend, Facebook noted that it had removed 1.5 million copies of the video, with 1.2 million blocked at upload entirely. That means another 300,000 copies had to be found, reviewed, and a call made on whether to delete them.
In the first 24 hours we removed 1.5 million videos of the attack globally, of which over 1.2 million were blocked at upload...
— Facebook Newsroom (@fbnewsroom) March 17, 2019
One person hitting search over and over again is not going to track down over a million copies (or even 300,000) of a video. That's just not how it works. Also, think about this for a second: if there were 1.5 million attempts to upload the video (just on Facebook), think how many (despicable) people are out there trying to spread this content. That is far more people than these companies could reasonably hire, or should want to hire, solely to police the speech of those despicable individuals.
And, even if they were doing the searching, it raises other challenges. Motherboard has a good overview of how Facebook handles moderation in these circumstances, which already shows how difficult some of the challenges are. But even more interesting is a piece by Julia Alexander at the Verge explaining that there's a lot more to deleting those videos than just flagging and deleting. Alexander specifically looks at YouTube and how it handles these things:
Exact re-uploads of the video will be banned by YouTube, but videos that contain clips of the footage have to be sent to human moderators for review, The Verge has learned. Part of that is to ensure that news videos that use a portion of the video for their segments aren’t removed in the process.
YouTube’s safety team thinks of it as a balancing act, according to sources familiar with their thinking. For major news events like yesterday’s shooting, YouTube’s team uses a system that’s similar to its copyright tool, Content ID, but not exactly the same. It searches re-uploaded versions of the original video for similar metadata and imagery. If it’s an unedited re-upload, it’s removed. If it’s edited, the tool flags it to a team of human moderators, both full-time employees at YouTube and contractors, who determine if the video violates the company’s policies.
This makes sense -- and it's something we talk about in the copyright context all the time. It's one thing to flag and block exact replica videos, but if they're somewhat edited, they need to be reviewed. They could be news reporting or commentary or some other perfectly reasonable use of the video. There are also, potentially, questions about whether the videos are evidence or documentation of a crime that need to be considered before just deleting things willy-nilly.
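To make that balancing act concrete, here's a minimal sketch of a two-tier matching pipeline of the kind described above. This is not YouTube's actual system: the difference-hash scheme and the thresholds are invented for illustration. The idea is that an exact fingerprint can remove byte-identical re-uploads automatically, while a perceptual fingerprint scores near-duplicates and routes the ambiguous middle band, where news clips and commentary tend to live, to a human reviewer.

```python
# Illustrative sketch only -- not any platform's real pipeline.
import hashlib

def exact_fingerprint(video_bytes: bytes) -> str:
    """Cryptographic hash: only catches byte-for-byte identical re-uploads."""
    return hashlib.sha256(video_bytes).hexdigest()

def dhash_frame(pixels, width=9, height=8) -> int:
    """Toy difference hash over a width x height grayscale thumbnail
    (a list of rows of 0-255 values). Comparing adjacent pixels yields a
    64-bit fingerprint that tolerates re-encoding and minor noise."""
    bits = 0
    for y in range(height):
        for x in range(width - 1):
            bits = (bits << 1) | (1 if pixels[y][x] > pixels[y][x + 1] else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def triage(candidate: int, known_hashes, auto_remove_at: int = 4, review_at: int = 12) -> str:
    """Remove obvious re-uploads, queue edited near-matches for humans,
    leave everything else alone. Thresholds are made up for illustration;
    real systems tune them constantly."""
    best = min((hamming(candidate, h) for h in known_hashes), default=999)
    if best <= auto_remove_at:
        return "remove"        # effectively an unedited re-upload
    if best <= review_at:
        return "human_review"  # edited or excerpted: could be news or commentary
    return "leave_up"
```

That middle band is where the judgment calls pile up, which is why this was never going to be a one-person, one-search-box job.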
And, of course, today (many days after mocking YouTube for not taking down these videos) the Washington Post comes back with some actual reporting on how much effort YouTube actually put into stopping the video and how difficult it was. Suffice it to say, they had more than one person working on this.
[Neal] Mohan, YouTube’s chief product officer, had assembled his war room — a group of senior executives known internally as “incident commanders” who jump into crises, such as when footage of a suicide or shooting spreads online.
The team worked through the night, trying to identify and remove tens of thousands of videos — many repackaged or recut versions of the original footage that showed the horrific murders. As soon as the group took down one, another would appear, as quickly as one per second in the hours after the shooting, Mohan said in an interview.
As its efforts faltered, the team finally took unprecedented steps — including temporarily disabling several search functions and cutting off human review features to speed the removal of videos flagged by automated systems. Many of the new clips were altered in ways that outsmarted the company’s detection systems.
But, either way, contrary to the arguments that the companies don't care or don't prioritize this, they absolutely do. It's just that they are dealing with an incredible level of scale, with many of the videos (at least 300,000 in the case of Facebook, and as many as one per second in the case of YouTube) likely needing human review and a judgment call on whether each version should stay up or come down. The idea that either company could just snap their fingers and fix this is pure fantasy.
Alex Stamos, Facebook's former Chief Security Officer put together a thoughtful thread on how impossible this situation is for the companies:
This is actually the wrong framing. This isn't about the video "going viral" in the traditional sense, where a piece of content explodes on social media *because* of engagement on that platform.
TL;DR It isn't going to get a lot better than this.
— Alex Stamos (@alexstamos) March 15, 2019
Millions of people are being told online and on TV that there is a video and a document that are too dangerous for them to see, so they are looking for it in all the normal places.
Look at Google searches for the video. "Beto" added as a current event search term to give scale. pic.twitter.com/ehoLo0OVVp
— Alex Stamos (@alexstamos) March 15, 2019
At the same time, this shooter was an active member of a rather horrible online community (which I will not amplify) that encourages this kind of behavior. He posted the FB Live link and mirrors to his manifesto right before, so thousands of people got copies in real-time.
— Alex Stamos (@alexstamos) March 15, 2019
So now we have tens of millions of consumers wanting something and tens of thousands of people willing to supply it, with the tech companies in between.
YouTube and Facebook/Instagram have perceptual hashing built during the ISIS crisis to deal with this and teams looking.
— Alex Stamos (@alexstamos) March 15, 2019
Two challenges:
1) What amount of video is ok for reporting purposes? This seems to be one of the questions from last night, where excerpts of the video were allowed and also included in legitimate media footage (most of that seems to be over).
— Alex Stamos (@alexstamos) March 15, 2019
Also, my Twitter last night was full of legitimate journalists screenshotting the manifesto and commenting. Should those accounts be censored or shut down by Twitter? Again, that has slowed but there are no agreed upon guidelines here.
— Alex Stamos (@alexstamos) March 15, 2019
2) Perceptual hashes and audio fingerprinting are both fragile, and a lot of these same kinds of people have experience beating them to upload copyrighted content. Each time this happens, the companies have to spot it and create a new fingerprint.
— Alex Stamos (@alexstamos) March 15, 2019
So this isn't about virality, this is about "How much control do the 2-3 largest tech companies have to block millions of people in free societies from trading relatively small amounts of data?"
The answer is: less than you think.
— Alex Stamos (@alexstamos) March 15, 2019
There's more in the thread, which is worth reading, but the short version is that this is not nearly as easy a problem to solve as many people seem to think.
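Stamos's second point, about how fragile these fingerprints are, is easy to see with a toy example (again invented for illustration, not any platform's real system): a cryptographic hash is defeated by changing a single byte, while a perceptual hash tolerates small edits but drifts past any fixed threshold once a clip is mirrored, re-cropped, or overlaid, at which point the platform has to fingerprint the new variant all over again.

```python
# Toy demonstration of fingerprint fragility -- purely illustrative.
import hashlib
import random

random.seed(0)
frame = [[random.randrange(256) for _ in range(9)] for _ in range(8)]  # fake 9x8 thumbnail

def dhash(pixels):
    """Same toy perceptual hash idea as above: compare adjacent pixels in each row."""
    bits = 0
    for row in pixels:
        for x in range(len(row) - 1):
            bits = (bits << 1) | (1 if row[x] > row[x + 1] else 0)
    return bits

def hamming(a, b):
    return bin(a ^ b).count("1")

original = bytes(v for row in frame for v in row)
tweaked = bytes([original[0] ^ 1]) + original[1:]          # flip one bit of one pixel
print(hashlib.sha256(original).digest() == hashlib.sha256(tweaked).digest())
# -> False: exact matching is beaten by any change at all

small_edit = [row[:] for row in frame]
small_edit[0][0] = min(255, small_edit[0][0] + 2)          # a barely visible tweak
mirrored = [list(reversed(row)) for row in frame]          # a heavier edit

print(hamming(dhash(frame), dhash(small_edit)))  # small distance: still matched
print(hamming(dhash(frame), dhash(mirrored)))    # large distance: slips past the filter
```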
And, as Stamos hints at in his thread, even as everyone was complaining about this content showing up on internet platforms, people don't seem to have had the same reaction to the fact that many in the media were spreading the same stuff. Rupert Murdoch's Sky News aired the video. Or as media analyst Thomas Baekdal noted:
Newspapers: YouTube and Facebook need to get their act together in moderating their content to prevent would-be terrorist to get inspired.
Also newspapers: Here is the terrorist's manifesto. Here is who he was inspired by. Here is a profile of him. Here is a bunch of pictures. pic.twitter.com/AyeQAiVjJp
— Thomas Baekdal (@baekdal) March 15, 2019
Baekdal later wrote a very good thread about how wrong this attack on social media has been. He highlights how many news orgs -- at the same time they were criticizing YouTube and Facebook -- were posting screenshots or snippets of the livestream themselves, and promoting the details of the attacker's mad rantings. The whole thread is worth reading, but the conclusion is the key. By focusing the blame on the messengers -- the internet platforms -- we distract from solving real problems:
I'm sorry media people. As a media analyst, I love you. I want the media to have the best future possible. But this constant one-sided and often completely distorted form of anti-tech lobbyism is simply dishonest.

Worse, by making this all about YouTube and Facebook, we mislead people into thinking that this entire problem is something that is just easily solved by just having YouTube use their copyright algorithms ... and then all the terrorist and hate speech would go away. It won't.

This makes people passive, because it allows them to think that this problem is just an external one by the tech companies, so we (in society or in the media) don't have to do anything ourselves. But we do. This *is* a societal problem. We all have to step up here.

Stop this one-sided anti-tech lobbyism. It's incredibly dishonest. It misleads the public, it makes the problem much harder to solve, and it tries to hide our own role. It's so frustrating to look at every day. Yes, the tech companies need to do better, but...my god...so do we!
As for the arguments, such as Sullivan's, that the "answer" to this is that social media platforms need to retain "editorial control" and review content -- as demonstrated above, that's nonsense given the scale. It's difficult enough to try to block a single video, but imagine having to go through absolutely everything. As Aaron Ross Powell noted in another good thread, just because editorial review works for newspapers doesn't mean it makes sense for platforms where billions of people communicate.
The Washington Post exercises editorial control, yes. But how much does it publish? A hundred new things a day? Maybe? Compared to: "Every 60 seconds on Facebook: 510,000 comments are posted, 293,000 statuses are updated, and 136,000 photos are uploaded."

If those numbers are correct, that's 422 *million* status updates alone every day. How many editors would it take to exercise the Washington Post's level of control over those? Facebook's big, but not that big.

You can filter profanity, yes. And other key words. You can use machine learning to help. But bad actors are good at figuring out what gets through filters. The only way to be sure is to do what the Post does: Look at everything. And at 422 million posts a day, even if only a tiny fraction of those are ultimately violence promoting, and even if only a tiny fraction of *those* make it through editorial, you've still got more than enough to radicalize the occasional murderous madman.
Social media is not traditional media. It's so much bigger and more open that you can't analogize from one to the other. It would be impossible to run social media like the Washington Post.
No matter how much people who don't grasp the difference tell us "to get serious."
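To put Powell's numbers in perspective, here's a rough back-of-the-envelope calculation. The per-post review time and shift length are assumptions, chosen purely for illustration:

```python
# Back-of-the-envelope: Washington Post-style review of Facebook status updates alone.
# The review time and shift length are invented; the per-minute figure comes from the
# thread quoted above.
statuses_per_minute = 293_000
statuses_per_day = statuses_per_minute * 60 * 24    # ~422 million

seconds_per_review = 10                             # assumption: one quick glance per post
reviewer_seconds_per_shift = 8 * 60 * 60            # assumption: one 8-hour shift

reviewers_needed = statuses_per_day * seconds_per_review / reviewer_seconds_per_shift
print(f"{statuses_per_day:,} statuses/day -> ~{reviewers_needed:,.0f} full-time reviewers")
# 421,920,000 statuses/day -> ~146,500 full-time reviewers, before counting
# comments, photos, or video, and before anyone sleeps or takes a weekend.
```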
And that gets to one final point I'd like to make on this. Beyond the difficulty of taking down the video/madman rantings after the fact, a few complained that the content was even allowed to be posted in the first place. The attacker clearly planned a media strategy, and that meant releasing some of the content before the attack to eager followers who cheered him on live. But if you think it's easy to spot that content in real time, you haven't spent much time on the internet. And, all of this is made even worse by the fact that a lot of online behavior is performative, rather than serious. Lots of shitposting is just that: shitposting. Recognizing where it's going to cross over into real behavior is not nearly as easy as some seem to think.
And if you argue that it shouldn't matter and we should just start shutting down people saying crazy stuff online, well, I'd suggest listening to the latest episode of NPR's Invisibilia podcast called Post, Shoot, which focuses on violence in Wilmington, Delaware, and how some of it crossed over from kids trash talking each other on Instagram. But then the story goes further, and notes that, in response, police have basically started locking up black and brown kids, claiming evidence of gang activity, for merely posting Instagram photos with guns or money. In most cases, the kids are just showing off. It's kind of a thing kids do. They're not actually gangsters, they're just pretending to be gangsters, because kids do that. But they're being locked up for posing as gangsters. And that's not helping anyone.
Social media is a reflection of reality and reality is hellishly messy. People are flawed to varying degrees, and a certain percentage are despicable, horrible people. I'd like to believe it's a small percentage, but they do exist. And we shouldn't blame the technology they use.
Filed Under: attacker, blame, christchurch, massacre, mosque, new zealand, social media
Companies: facebook, google, reddit, youtube