Facebook, Twitter Consistently Fail At Distinguishing Abuse From Calling Out Abuse
from the the-wrong-approach dept
Time and time again, we see that everyone who doesn't work in the field of trust and safety for an internet platform seems to think that it's somehow "easy" to filter out "bad" content and leave up "good" content. It's not. This doesn't mean that platforms shouldn't try to deal with the issue. They have perfectly good business reasons to want to limit people from using their systems to abuse, harass, and threaten other users. But when you demand that they be legally responsible -- as Germany (and then Russia) recently did -- bad things happen, and quite frequently those bad things happen to the victims of abuse or harassment or threats.
We just wrote about Twitter's big failure in temporarily suspending Popehat's account, after he posted a screenshot of a threat he'd received from a lawyer who's been acting like an internet tough guy for a few years now. In that case, the person who reviewed the tweet keyed in on the fact that Ken White had failed to redact the contact information of the guy threatening him -- which at the very least raises the question of whether Twitter considers a threat to destroy someone's life to be less of an issue than revealing that guy's contact information, which was already publicly available from a variety of sources.
But it's important to note that this is not an isolated case. In just the past few days, we've seen two other major examples of social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than punishing the perpetrators. The first is the story of Francie Latour, as told in a recent Washington Post article, where she explains how she went on Facebook to vent about a man in a Boston grocery store loudly using the n-word to describe her and her two children, and Facebook's response was to ban her from the platform.
But within 20 minutes, Facebook deleted her post, sending Latour a cursory message that her content had violated company standards. Only two friends had gotten the chance to voice their disbelief and outrage.
The second story comes from Ijeoma Oluo, who posted to Medium about a strikingly similar situation. In this case, she made what seems to me to be a perfectly innocuous joke about feeling nervous for her safety as a black woman in a place with many white people. But a bunch of rabid, angry people online got mad at her about it and started sending all sorts of abusive tweets and hateful messages to her on Facebook. She actually says that Twitter was pretty good at responding to reports of abusive content. But, as in the Latour story, Facebook responded by banning Oluo for talking about the harassment she was receiving.
And finally, facebook decided to take action. What did they do? Did they suspend any of the people who threatened me? No. Did they take down Twitchy’s post that was sending hundreds of hate-filled commenters my way? No.
They suspended me for three days for posting screenshots of the abuse they have refused to do anything about.
That, of course, is a ridiculous response by Facebook. And Oluo is right to call them out on it, just as Latour and White were right to point out the absurdity of their situations.
But, unfortunately, the response of many people to this kind of thing is just "do better Facebook" or "do better Twitter." Or, in some cases, they even go so far as to argue that these companies should be legally mandated to take down some of the content. But this will backfire for the exact same reason that these ridiculous situations happened in the first place. When you run a platform and you need to make thousands or hundreds of thousands or millions of these kinds of decisions a day, you're going to make mistakes. And that's not because the platforms are "bad" at this; it's just the nature of the beast. With that many decisions -- many of which involve people demanding immediate action -- there's no easy way to have someone drop in and figure out all of the context in the short period of time they have to make a decision.
On top of that, because this has to be done at scale, you can't have a team in which everyone is skilled in understanding context and nuance and culture. Nor can you have people who can spend the necessary time to dig deeper to figure out and understand the context. Instead, you end up with a ruleset, and it has to be standardized so that non-experts are able to make judgments on this stuff in a relatively quick timeframe. That's why, about a month ago, there was a kerfuffle when Facebook's "hate speech rule book" was leaked, showing how its rules could lead to situations where "white men" ended up being a protected category.
And when you throw the potential of legal liability into this equation, a la Germany (and what a large group of people are pushing for in the US), things will get much, much worse. That's because when there's legal liability on the line, companies will be much faster to delete/suspend/ban, just to avoid the liability. And many of the people calling for such things will be impacted themselves. None of the people in the stories above could have reasonably expected to get banned by these platforms. But when people demand that platforms "take responsibility," that's what's going to happen.
Again, this is not in any way to suggest that online platforms should be a free-for-all. That would be ridiculous and counterproductive. It would lead to everything being overrun by spam, in addition to abusive/harassing behavior. Instead, I think the real answer is that we need to stop putting the burden on platforms to make all the decisions, and figure out alternative approaches. I've suggested in the past that one possible solution is turning the tools around: give end users much more granular control over how they can ban or block or silence content they don't want to see, rather than leaving it up to a crew of people who have to make snap decisions about who's at fault when people get angry online.
Of course, there are problems with my suggestion as well -- it could certainly accelerate the problem of self-contained bubbles of thought, and it could also result in plenty of incorrect blocking. But the larger point is that this isn't easy, and every single magic bullet solution has serious consequences, and often those consequences fall on the people who are facing the most abuse and harassment, rather than on those doing the abusing and harassing. So, yes, platforms need to do better. The three stories above are all ridiculous, and ended up harming people who were highlighting harassing behavior. But continuing to rely on platforms and teams of people to weed out content someone deems "bad" is not a workable solution, and it's one that will only lead to more of these kinds of stories.
And, worst of all, the abusers and harassers know this and thrive on it. The guy who got Ken White's account suspended gloated about it on Twitter. I'm sure the same was true of the folks who went after Oluo and likely "reported" her to Facebook. Any time you rely on the platform to be the arbiter, remember that the people who want to harass others quickly learn that they can use that very process as a tool for further harassment.
Filed Under: abuse, free speech, harassment, intermediary liability, moderation, platforms, policing
Companies: facebook, twitter