Content Moderation At Scale Remains Impossible: Vaccines Edition
from the the-way-of-the-world dept
Last week a story started to blow up that the media used, once again, to beat up on Facebook. The headline, from the Daily Beast, says it all: Facebook Axed Pro-Vaccine Ads From Hospitals and Health Orgs, Let Anti-Vaxxer Ads Slip Through. As the story notes, Facebook has (smartly) decided not to allow anti-vax nonsense advertising. It will, of course, allow important pro-vaccination awareness advertising. It does this for a pretty good reason: anti-vax nonsense is killing people. Vaccinations save lives (and I know some anti-vaxxers reading this are foaming at the mouth to scream at us in the comments, so let's just be clear: you're wrong, and you should stop before you kill more people). Anyway, here's what went down:
This month, the Idaho Department of Health and Welfare, the state’s official health department, bought 14 ads to promote a statewide program providing free pediatric vaccinations. Facebook removed all of them.
During the same time period, Children’s Health Defense, an anti-vaccine nonprofit founded and chaired by the nation’s most prominent vaccine conspiracy theorist, Robert F. Kennedy Jr., successfully placed more than 10 ads stoking unfounded fear about vaccines and other medical conspiracy theories.
I saw some people on Twitter using this to attack Facebook, but it actually just highlights the same point we've been making for a few years now: content moderation at scale is impossible to do well, and you will always, always make mistakes. This is one of many kinds of mistakes that happen all the time. Unless someone is deeply, deeply engaged in these issues, distinguishing anti-vax, anti-science quackery from legitimate public health messaging can sometimes be difficult. And if moderators are taught to be wary of "vaccine" advertisements, they may just start to key in on anything that mentions vaccines -- including something from a government Department of Health. In some cases it appears that automated systems are to blame:
“It’s our understanding that auto-blocking software flagged these ads, since the text resembles ads that appear to be spreading vaccine misinformation,” said Emily Lowther, a spokeswoman for the Minnesota Hospital Association, who expressed frustration at the phenomenon.
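To see why this kind of false positive is almost baked in, consider what a naive keyword-based flagger looks like. This is a purely hypothetical sketch -- Facebook's actual systems aren't public, and the term list and function here are invented for illustration -- but it shows how keying in on vaccine-related words sweeps up a health department's outreach right alongside the quackery:

```python
# Hypothetical sketch of a naive keyword-based ad flagger.
# NOT Facebook's actual system (those details aren't public);
# SUSPECT_TERMS and flag_ad are invented for illustration.

SUSPECT_TERMS = {"vaccine", "vaccination", "vaccinate"}

def flag_ad(ad_text: str) -> bool:
    """Flag any ad whose text mentions a suspect term."""
    text = ad_text.lower()
    return any(term in text for term in SUSPECT_TERMS)

# The filter can't tell public-health outreach from conspiracy-mongering:
anti_vax_ad = "The truth they won't tell you about vaccine injuries..."
health_dept_ad = "Free pediatric vaccinations statewide -- protect your kids."

print(flag_ad(anti_vax_ad))     # True -- a correct catch
print(flag_ad(health_dept_ad))  # True -- a false positive
```

Tighten the term list and the anti-vax ads slip through; loosen it and more health departments get blocked. The tradeoff doesn't go away, it just moves.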
Of course, perhaps what's more interesting is that the Daily Beast was able to write this story in the first place partly because of Facebook's own transparency around advertisements via its Ad Library.
You could say that Facebook must do a better job at this kind of thing, but that would require focusing even more attention on these ads, which inevitably means some other set of ads will end up getting messed up instead. Content moderation at scale is impossible to do well, and that's not a Facebook issue; it's a societal one. Some people are going to push bad information, and they're always going to try to make it look as legit as possible. That's a problem, but expecting that one company can magically fix it is a silly thing to do.
Filed Under: ads, anti-vax, content moderation, vaccines
Companies: facebook