from the you-make-the-call dept
We've talked a lot in the past about the impossibility of doing content moderation well at scale, but it's sometimes difficult for people to grasp just what we mean by "impossible." They often assume, incorrectly, that we're just saying it's difficult to do well. But it goes way beyond that. The point is that no matter what choices are made, they will lead to some seriously negative outcomes, and that includes doing no moderation at all. In short, there are serious trade-offs to every single choice.
Probably without meaning to, the NY Times recently published a pretty good article exploring this issue by looking at what Facebook is trying to do to prevent suicides. We had actually touched on this subject a year ago, when there were reports that Facebook might stop trying to prevent suicides, as doing so had the potential to violate the GDPR.
However, as the NY Times article makes clear, Facebook really is in a damned if you do, damned if you don't position on this. As the Times points out, Facebook "ramped up" its efforts to prevent suicides after a few people streamed their suicides live on Facebook. Of course, what that significantly underplays is how much crap Facebook got because these suicides were appearing on its platform. Tabloids, like the Sun in the UK, ran entire lists of people who died while streaming on Facebook and demanded to know "what Mark Zuckerberg will do" in response. When the NY Post wrote about one man whose suicide was streamed online... it also asked Facebook for comment (I'm curious whether reporters ask Ford for comment when someone commits suicide by leaving their car engine running in a garage). Then there were the various studies, which the press used to suggest social media leads to suicides (even if that's not what the studies actually said). Or the articles that merely "asked the question" of whether or not social media "is to blame" for suicides. If every new study leads to reports asking if social media is to blame for suicides, and every story about a suicide streamed online demands comment from Facebook, the company is clearly put under pressure to "do something."
And that "do something" has been to hire a ton of people and point its AI chops at trying to spot people who are potentially suicidal, and then trying to do something about it. But, of course, as the NY Times piece notes, that decision is also fraught with all sorts of huge challenges:
But other mental health experts said Facebook’s calls to the police could also cause harm — such as unintentionally precipitating suicide, compelling nonsuicidal people to undergo psychiatric evaluations, or prompting arrests or shootings.
And, they said, it is unclear whether the company’s approach is accurate, effective or safe. Facebook said that, for privacy reasons, it did not track the outcomes of its calls to the police. And it has not disclosed exactly how its reviewers decide whether to call emergency responders. Facebook, critics said, has assumed the authority of a public health agency while protecting its process as if it were a corporate secret.
And... that's also true and also problematic. As with so many things, context is key. We've seen how, in some cases, police respond to reports of possible suicidal ideation by showing up with guns drawn, or even helping the process along. And yet, how is Facebook supposed to know, even if someone is suicidal, whether or not it's appropriate to call the police in that particular circumstance? (This would be helped a lot if the police didn't respond to so many things by shooting people, but... that's a tangent.)
The concerns in the NY Times piece are perfectly on point. We should be wary when a large company is suddenly thrust into the role of a public health agency. But, at the same time, we should recognize that this is exactly what tons of people were demanding when they blamed Facebook for any suicides that were announced or streamed on its platform. And, of course, if Facebook actually can help prevent a suicide, hopefully most people recognize that's a good thing.
The end result here is that there aren't any easy answers, and there are massive (life-altering) trade-offs involved in each of these decisions or non-decisions. Facebook could continue to do nothing, and then lots of people (and reporters and politicians) would certainly scream about how it's enabling suicides and not caring about the lives of people at risk. Or it can do what it is doing and try to spot suicidal ideation on its platform, and reach out to officials to try to get help to the right place... and receive criticism for taking on a public health role as a private company.
“While our efforts are not perfect, we have decided to err on the side of providing people who need help with resources as soon as possible,” Emily Cain, a Facebook spokeswoman, said in a statement.
The article also details a bunch of attempts by Facebook to alert police to suicide attempts streaming on its platform, with fairly mixed results. Sometimes the police were able to intervene in time; in other cases, they arrived too late. Oh, and for what it's worth, the article does note in an aside that Facebook does not provide this service in the EU... thanks to the GDPR.
In the end, this really does demonstrate one aspect of the damned if you do, damned if you don't situation that Facebook and other platforms are put into on a wide range of issues. If users do something bad via your platform, people immediately want to blame the platform for it and demand "action." But deciding what kind of "action" to take then leads to all sorts of other questions and huge trade-offs, and to more criticism (sometimes from the same people). This is why expecting any platform to magically "stop all bad stuff" is a fool's errand that will only create more problems. We should recognize that these are nearly impossible challenges. Yes, everyone should work to improve the overall results, but expecting perfection is silly, because there is no perfection and every choice will have some negative consequences. Understanding what those consequences actually are, and being able to discuss them openly without being shouted down, would be helpful.
Filed Under: content moderation, dilemma, privacy, public health, social media, suicide, suicide prevention
Companies: facebook