Facebook, Twitter Consistently Fail At Distinguishing Abuse From Calling Out Abuse
from the the-wrong-approach dept
Time and time again, we see that everyone who doesn't work in the field of trust and safety for an internet platform seems to think that it's somehow "easy" to filter out "bad" content and leave up "good" content. It's not. This doesn't mean that platforms shouldn't try to deal with the issue. They have perfectly good business reasons to want to limit people using their systems to abuse and harass and threaten other users. But when you demand that they be legally responsible -- as Germany (and then Russia) recently did -- bad things happen, and quite frequently those bad things happen to the victims of abuse or harassment or threats.
We just wrote about Twitter's big failure in suspending Popehat's account temporarily, after he posted a screenshot of a threat he'd received from a lawyer who's been acting like an internet tough guy for a few years now. In that case, the person who reviewed the tweet keyed in on the fact that Ken White had failed to redact the contact information of the guy threatening him -- which at the very least raises the question of whether Twitter considers threats of destroying someone's life to be less of an issue than revealing that guy's contact information, which was already publicly available via a variety of sources.
But, it's important to note that this is not an isolated case. In just the past few days, we've seen two other major examples of social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators. The first is the story of Francie Latour, as told in a recent Washington Post article, where she explains how she went on Facebook to vent about a man in a Boston grocery store loudly using the n-word to describe her and her two children, and Facebook's response was to ban her from Facebook.
But within 20 minutes, Facebook deleted her post, sending Latour a cursory message that her content had violated company standards. Only two friends had gotten the chance to voice their disbelief and outrage.
The second story comes from Ijeoma Oluo, who posted to Medium about a strikingly similar situation. In this case, she made what seems to me to be a perfectly innocuous joke about feeling nervous for her safety as a black woman in a place with many white people. But a bunch of rabid, angry people online got mad at her about it and started sending all sorts of abusive tweets and hateful messages to her on Facebook. She actually says that Twitter was pretty good at responding to reports of abusive content. But, as in the Latour story, Facebook responded by banning Oluo for talking about the harassment she was receiving.
And finally, facebook decided to take action. What did they do? Did they suspend any of the people who threatened me? No. Did they take down Twitchy’s post that was sending hundreds of hate-filled commenters my way? No.
They suspended me for three days for posting screenshots of the abuse they have refused to do anything about.
That, of course, is a ridiculous response by Facebook. And Oluo is right to call them out on it, just as Latour and White were right to point out the absurdity of their situations.
But, unfortunately, the response of many people to this kind of thing is just "do better, Facebook" or "do better, Twitter." Or, in some cases, they even go so far as to argue that these companies should be legally mandated to take down some of the content. But this will backfire for the exact same reason that these ridiculous situations happened in the first place. When you run a platform and you need to make thousands or hundreds of thousands or millions of these kinds of decisions a day, you're going to make mistakes. And that's not because they're "bad" at this, it's just the nature of the beast. With that many decisions -- many of which involve people demanding immediate action -- there's no easy way to have someone drop in and figure out all of the context in the short period of time they have to make a decision.
On top of that, because this has to be done at scale, you can't have a team that is all skilled in understanding context and nuance and culture. Nor can you have people who can spend the necessary time to dig deeper to figure out and understand the context. Instead, you end up with a ruleset. And it has to be standardized so that non-experts are able to make judgments on this stuff in a relatively quick timeframe. That's why, about a month ago, there was a kerfuffle when Facebook's "hate speech rule book" was leaked, and it showed how it could lead to situations where "white men" would be protected.
And when you throw into this equation the potential of legal liability, a la Germany (and what a large group of people are pushing for in the US), things will get much, much worse. That's because when there's legal liability on the line, companies will be much faster to delete/suspend/ban, just to avoid the liability. And many people calling for such things will be impacted themselves. None of the people in the stories above could have reasonably expected to get banned by these platforms. But, when people demand that platforms "take responsibility" that's what's going to happen.
Again, this is not in any way to suggest that online platforms should be a free-for-all. That would be ridiculous and counterproductive. It would lead to everything being overrun by spam, in addition to abusive/harassing behavior. Instead, I think the real answer is that we need to stop putting the burden on platforms to make all the decisions, and instead figure out alternative approaches. I've suggested in the past that one possible solution is turning the tools around. Give end users much more granular control over how they can ban or block or silence content they don't want to see, rather than leaving it up to a crew of people who have to make snap decisions on who's at fault when people get angry online.
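To make the "turning the tools around" idea concrete, here's a minimal sketch of reader-side filtering, where each user's own rules decide what appears in their own feed and nothing gets deleted centrally. Everything here -- the names, the rule types, the fields -- is a hypothetical illustration, not any real platform's API:

```python
# Sketch of per-user, reader-side filtering rules instead of centralized
# takedown decisions. All names here are hypothetical -- no real
# platform API is implied.

from dataclasses import dataclass, field


@dataclass
class FilterRules:
    """One reader's personal moderation preferences."""
    muted_users: set = field(default_factory=set)
    blocked_keywords: set = field(default_factory=set)
    min_account_age_days: int = 0  # hide posts from brand-new accounts


@dataclass
class Post:
    author: str
    text: str
    author_account_age_days: int


def visible(post: Post, rules: FilterRules) -> bool:
    """Decide, per reader, whether a post should be shown.

    The post itself is never deleted; each reader's own rules
    determine what appears in their feed.
    """
    if post.author in rules.muted_users:
        return False
    if post.author_account_age_days < rules.min_account_age_days:
        return False
    lowered = post.text.lower()
    return not any(kw in lowered for kw in rules.blocked_keywords)


rules = FilterRules(muted_users={"harasser42"},
                    blocked_keywords={"slur"},
                    min_account_age_days=7)

feed = [
    Post("friend", "lunch tomorrow?", 900),
    Post("harasser42", "you should watch your back", 3),
    Post("newbie", "hello world", 1),
]
filtered = [p for p in feed if visible(p, rules)]
```

The key design point is that `visible()` runs per reader: the same post can be hidden for one user and shown to another, so no central moderator has to adjudicate context under time pressure.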
Of course, there are problems with my suggestion as well -- it could certainly accelerate the issues of self-contained bubbles of thought. And it could also result in plenty of incorrect blocking as well. But the larger point is that this isn't easy, and every single magic bullet solution has serious consequences, and often those consequences fall on the people who are facing the most abuse and harassment, rather than on those doing the abuse and harassment. So, yes, platforms need to do better. The three stories above are all ridiculous, and ended up harming people who were highlighting harassing behavior. But continuing to rely on platforms and teams of people to weed out content someone deems "bad" is not a workable solution, and it's one that will only lead to more of these kinds of stories.
And, worst of all, the abusers and harassers know and thrive on this. The guy who got Ken White's account banned gloated about it on Twitter. I'm sure the same was true of the folks who went after Oluo and likely "reported" her to Facebook. Any time you rely on the platform to be the arbiter, remember that the people who want to harass others quickly learn that they can use that as a tool for further harassment themselves.
Filed Under: abuse, free speech, harassment, intermediary liability, moderation, platforms, policing
Companies: facebook, twitter
Reader Comments
It's an incredibly complex problem
You see the blatant racism. You see the death threats. You see the rape threats. Of course you'd mark it as offensive! The reviewers don't have the time to get the context that Oluo is posting to shame the original posters (exactly the same as the PopeHat situation). They just see the hateful messages and mark them bad.
Re: It's an incredibly complex problem
A first rank weeds out the obvious yes/no calls.
Then a second rank, with more time, considers things more carefully. Maybe a third?
Scan/look/consider, in effect.
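That multi-rank idea could be sketched roughly like this, with made-up confidence scores and labels standing in for whatever signals a real review queue would actually use:

```python
# Hypothetical sketch of tiered review: a fast first pass rules on the
# obvious cases, and anything uncertain escalates to slower reviewers
# with more context. Thresholds and labels are illustrative only.

def first_pass(report: dict) -> str:
    """Quick scan: only rule on the obvious yes/no cases."""
    if report["score"] >= 0.9:
        return "remove"
    if report["score"] <= 0.1:
        return "keep"
    return "escalate"


def second_pass(report: dict) -> str:
    """Slower review with context; may escalate again to a third rank."""
    if report.get("context") == "victim quoting abuse":
        return "keep"
    return "escalate"  # hand off to a human specialist


def triage(report: dict) -> str:
    decision = first_pass(report)
    if decision != "escalate":
        return decision
    return second_pass(report)
```

In practice the escalation budget is the hard part: the second and third ranks are exactly the expensive, context-aware reviewers the article argues platforms can't staff at scale.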
Re: Re: It's an incredibly complex problem
I turned in one report on July 27th; I finally got a response on August 7th.
Re: It's an incredibly complex problem
All of the posts that Oluo's account was banned for are posts that were reported to Facebook as being abusive...and Facebook declined to take action because they didn't violate any rules.
So Oluo posted the abusive comments that, according to Facebook, didn't break any rules. And was banned for breaking the rules against threatening and abusing people.
Do you see the problem yet?
Legal System
(1) both sides present their case
(2) someone decides
(3) there is an "appeals" process where one (or more) layers can review the decisions for fairness
Maybe those companies (Facebook, Twitter, etc) trying to set up a review system should take inspiration from this deep historical source.
Re: Legal System
The legal system in the U.S. has a basic approach that starts like this:
Re: Re: Legal System
Is everyone here still really, firmly opposed to “business method” patents? Yeah, I kinda vaguely comprehend that there's arguably a millennium and more of soi-disant prior art here… But that's just arguable… …right?
Look, even after eBay, business method patents are still the law.
And this one is ON A COMPUTER. WITH A SOCIAL NETWORK. FOR DISPUTE RESOLUTION.
"social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."
**Just read through TODAY'S comments a couple pieces ago!**
https://www.techdirt.com/articles/20170808/15450037961/techdirt-now-with-more-free-speech-reporting.shtml
**I say ME and the others who complained there EXACTLY fit the topic of this piece.** You have taken no action in the 8 years I've been complaining here, that those who use words like THIS are the problem, NOT those of us on-topic and civil:
"There are white people, and then there are ignorant motherfuckers like you...."
http://www.techdirt.com/articles/20110621/16071614792/misconceptions-free-abound-why-do-brains-stop-zero.shtml#c1869
But of course YOU, Michael Masnick, HIRE that person to re-write here! Explain that in light of this piece.
So, Techdirt: the "community standard" that I always exceed is to NOT make completely unprovoked, racist-tinged, vile, insulting, vulgar, off-topic one-liners. -- Oh, and Geigner never apologized, but instead tried to dodge with classic abuser tactic of making a deal: he'll stop if I don't raise the topic again. Just read a couple after that link, then try to tell ME I'm a "troll". Phooey on you kids. You're uncivil, indecent, and liars.
It's NOT how said, it's WHAT. YOU are banning viewpoints.
---
13th attempt starting from 11 Pacific! This topic seems locked down with each comment approved, another hidden censorship tactic here.
Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."
Made me notice this: "remember that the people who want to harass others quickly learn that they can use that as a tool for further harassment themselves." -- They can ALSO use a "report" or "flag" to harass. There's only ONE reason that's done here, and it's to reduce impact of some comments. When a site continually colludes with a faction and never punishes comments such as the one I link to, it's not due any favorable regard, to say the least.
Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."
lolwut? Paranoid much?
Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."
or maybe they're not potent enough based on your totally incomprehensible ranting
Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."
Is that You?
Re: Re:
Anything is possible.
Re: Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."
And now he thinks no one knows who he is despite the same troll tactics.
Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."
Try resubmitting after an hour, not after less than a minute. Worked for me.
Repeatedly posting the same comment over again is the hallmark of either a spambot, or somebody with the patience of one.
Re: Re: "social media platforms banning or punishing the victims of harassment and abuse for posting about it, rather than the perpetrators."
lol at that.
White men *are* protected by virtue of the laws preventing discrimination against anyone for their sex or race, amongst other attributes. Fuck anyone who thinks discrimination is hateful but it's ok so long as the victim is white and male.
Response to: White man are protected by Anonymous Coward on Aug 9th,2017 @ 12:47pm
Re: Response to: White man are protected by Anonymous Coward on Aug 9th,2017 @ 12:47pm
Facebook didn't prioritize protecting white men. They prioritized protecting those who were targeted for two or more categories that Facebook was watching for.
Black men get equal protection with white men or asian women under that system. White drivers get no protection, the same way black drivers don't and women drivers don't -- but black women drivers do get protected.
White men got protected because gender and race are two protected categories under their system. Quit being so racist, it's not about white people.
Re:
> “wypipo,” to evade being flagged by the platform
In my experience, that isn't why that term is used. Every time I've seen it used on Twitter, it's been in the context of a racial slur for whites.
My Hypocritical Solution: Verified Non-Anonymous Accounts Only
Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only
Re: Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only
So would magical unicorns
There used to be plenty, but they've all been grabbed up by the golden-crypto-key press gangs.
Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only
I don't see why. We haven't solved that problem irl either, and it's pretty much impossible to use a pseudonym there. Honestly, there are much more likely to be consequences doing it in person than doing it with your real name online, but that hasn't stopped anyone irl.
Oh, a few might get tracked down, but anyone worried about that can just get an account under their real name just to post dumb stuff on and not include location information on it. After all, there are likely thousands of people with your name so it's not like the average racist(sexist, etc.) git could actually be tracked down by anyone.
Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only
We've actually discussed this before, and it's not true for a variety of reasons. First, Facebook already requires real names and there's a ton of abuse there. Second, multiple studies on the topic have shown the "abuse" levels between anonymous and real names is really no different. Third, being anonymous has tremendous benefits that shouldn't be tossed out just because some people abuse it.
Re: Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only
Well, they *say* they do, but I don't think it's ever been enforced except in cases where they're using it as a reason to kick people off after abuse has happened. Unless something's changed recently, I don't believe they've ever pre-vetted anyone.
Re: Re: Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only
Re: Re: Re: Re: My Hypocritical Solution: Verified Non-Anonymous Accounts Only
In my Facebook feed I have as "friends" 3 dogs, one building, a number of completely made up people (test accounts from when I used to work for a company that produced Facebook games) and a couple of businesses (from before Facebook introduced pages). There are also a few people I know using very obvious pseudonyms, and they've never had any issues despite being regular users. In fact, all of these accounts are still active despite them obviously not relating to a real name.
So, since Facebook really don't do any active vetting of whether people are using their real names, it doesn't seem right to say that Facebook are already forcing people not to be anonymous when using it.
Protection
> could lead to situations where "white men" were going to be protected.
You say that as if white men don't deserve the same protection as everyone else.
Re: Protection
Pick any race you care to name, any religion, any wealth level, any gender, and you will find that they have been the target of bigotry, etc. Jews, blacks, asians, native Americans, none of them are unique in this.
White people have never been exempt, you can find quite a few places around the world where being white gets you abused and discriminated against.
And in our Western societies, the Social Justice Warriors have decided that being white makes you exempt from having human rights, or deserving to be treated fairly. If any other race is proud of their heritage, it's good and pure -- but heaven help the white kid who is proud of his heritage, because he will be told that being proud of his heritage makes him evil.
Re: Re: Protection
"he will be told that being proud of his heritage makes him evil."
Define "proud of his heritage". There might be something in the definition which gives you a clue. Generally speaking, there's nothing wrong with being proud of your heritage, but there does seem to be a correlation between certain type of "pride" and white nationalism - that correlation might be something you're inadvertently referencing.
For example - being an Englishman, there's generally nothing wrong with people being proud to be English. However, the white nationalists have tended to throw around the St George flag as a symbol of their violent racial hatred, and this has led to it being tarnished somewhat as a symbol. I've never seen anyone being told that they can't be proud to be English/British, but it does tend to send a certain type of message if a person chooses the St George flag instead of the Union Flag to broadcast that.
It's a shame, but the reason it's objectionable to some is not because people are being told they can't be "proud of their heritage". It's because people flying that flag have beaten and murdered people in its name.
some "joke"