The Most Important Part Of The Facebook / Oversight Board Interaction Happened Last Week And Almost No One Cared
from the pay-attention dept
The whole dynamic between Facebook and the Oversight Board has received lots of attention -- with many people insisting that the Board's lack of official power makes it effectively useless. The specifics, again, for those of you not deep in the weeds on this: Facebook has only agreed to be bound by the Oversight Board's decisions on a very narrow set of issues: whether a specific piece of content that was taken down should have been left up. Beyond that, the Oversight Board can make recommendations on policy issues, but the company doesn't need to follow them. I think this is a legitimate criticism and concern, but it's also a case where, if Facebook actually does follow through on the policy recommendations, and everybody involved acts as if the Board has real power... then the norms around it might mean that it does have that power (at least until there's a conflict, and you end up in the equivalent of a Constitutional crisis).
And while there's been a tremendous amount of attention paid to the Oversight Board's first set of rulings, and to the fact that Facebook asked it to review the Trump suspension, last week something potentially much more important and interesting happened. With those initial rulings on the "up/down" question, the Oversight Board also suggested a pretty long list of policy recommendations for Facebook. Again, under the setup of the Board, Facebook only needed to consider these, but was not bound to enact them.
Last week Facebook officially responded to those recommendations, saying that it had agreed to take action on 11 of the 17 recommendations, was assessing the feasibility of another five, and was taking no action on just one. The company summarized those decisions in that link above, and put out a much more detailed PDF exploring the recommendations and Facebook's responses. It's actually interesting reading (or at least it is for someone like me, who likes to dig deep into the nuances of content moderation).
Since I'm sure it's most people's first question: the one "no further action" was in response to a policy recommendation regarding COVID-19 misinformation. The Board had recommended that when a user posts information that disagrees with advice from health authorities, but where the "potential for physical harm is identified but is not imminent," "Facebook should adopt a range of less intrusive measures." Basically, removing such information may not always make sense, especially if it's not clear that the information disagreeing with health authorities is actively harmful. As per usual, there's a lot of nuance here. As we discussed, early in the pandemic, the suggestions from "health authorities" later turned out to be inaccurate (like the WHO and CDC telling people not to wear masks in many cases). That makes relying on those health authorities as the be-all, end-all of content moderation for disinformation inherently difficult.
The Oversight Board's recommendation on this issue more or less tried to walk that line, recognizing that health authorities' advice may adapt over time as more information becomes clear, and that automatically silencing those who push back on the official suggestions from health officials may lead to over-blocking. But, obviously, this is a hellishly nuanced and complex topic. Part of the issue is that -- especially in a rapidly changing situation, where our knowledge base starts out with little information and is constantly correcting -- it's difficult to tell who is pushing back against official advice for good reasons and who is doing so for conspiracy-theory nonsense reasons (and there's a very wide spectrum between those two things). That creates (yet again) an impossible situation. The Oversight Board was suggesting that Facebook should be at least somewhat more forgiving in such situations, as long as it sees no "imminent" harm from those disagreeing with health officials.
Facebook's response isn't so much a pushback against the Board's recommendation as an argument that it already takes a "less intrusive" approach. The company also argued that it and the Oversight Board basically disagree on the definition of "imminent danger" from bad medical advice (the specific issue came up in the context of someone in France recommending hydroxychloroquine as a treatment for COVID). Facebook said that, contrary to the Board's finding, it did think this represented imminent danger:
Our global expert stakeholder consultations have made it clear that, in the context of a health emergency, the harm from certain types of health misinformation does lead to imminent physical harm. That is why we remove this content from the platform. We use a wide variety of proportionate measures to support the distribution of authoritative health information. We also partner with independent third-party fact-checkers and label other kinds of health misinformation.
We know from our work with the World Health Organization (WHO) and other public health authorities that if people think there is a cure for COVID-19 they are less likely to follow safe health practices, like social distancing or mask-wearing. Exponential viral replication rates mean one person’s behavior can transmit the virus to thousands of others within a few days.
We also note that one reason the board decided to allow this content was that the person who posted the content was based in France, and in France, it is not possible to obtain hydroxychloroquine without a prescription. However, readers of French content may be anywhere in the world, and cross-border flows for medication are well established. The fact that a particular pharmaceutical item is only available via prescription in France should not be a determinative element in decision-making.
As a bit of a tangent, I'll just note the interesting dynamic here: despite "the narrative" claiming that Facebook has no incentive to moderate content due to things like Section 230, here the company is arguing for the ability to be more heavy-handed in its moderation to protect the public from danger, against an Oversight Board that is asking the company to be more permissive.
As for the items that Facebook "took action" on, a lot of them are sort of bland commitments to do "something" rather than concrete changes. For example, at the top of the list were recommendations to clear up confusion between the Instagram community guidelines and the Facebook community guidelines, and to be more transparent about how those are enforced. Facebook says that it's "committed to action" on this, but I'm not sure I can tell you what actions it has actually taken.
We’ll continue to explore how best to provide transparency to people about enforcement actions, within the limits of what is technologically feasible. We’ll start with ensuring consistent communication across Facebook and Instagram to build on our commitment above to clarify the overall relationship between Facebook’s Community Standards and Instagram’s Community Guidelines.
Um... great? But what does that actually mean? I have no idea.
Evelyn Douek, who studies this issue basically more than anyone else, notes that many of these commitments from Facebook are kind of weak:
Some of the “commitments” are likely things that Facebook had in train already; others are broad and vague. And while the dialogue between the FOB and Facebook has shed some light on previously opaque parts of Facebook’s content moderation processes, Facebook can do much better.
As Douek notes, some of the answers do reveal some pretty interesting things that weren't publicly known before -- such as how its AI deals with nudity, and how it tries to distinguish the nudity it doesn't want from things like nudity around breast cancer awareness:
Facebook explained the error choice calculation it has to make when using automated tools to detect adult nudity while trying to avoid taking down images raising awareness about breast cancer (something at issue in one of the initial FOB cases). Facebook detailed that its tools can recognize the words "breast cancer," but users have used these words to evade nudity detection systems, so Facebook can't just leave up every post that says "breast cancer." Facebook has committed to providing its models with more negative samples to decrease error rates.
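To make that trade-off concrete, here's a minimal toy sketch (entirely made-up scores, posts, and function names -- not Facebook's actual system) of why, once evasive posts and legitimate awareness posts overlap in classifier score, no single removal threshold can avoid both kinds of error, and why better training data is the fix rather than a cleverer threshold:

```python
# Toy illustration of the "error choice" Douek describes: any removal
# threshold trades false positives (awareness posts wrongly removed)
# against false negatives (evasive explicit posts wrongly left up).
# All scores and examples below are hypothetical.

scored_posts = [
    # (classifier_score, actually_violating)
    (0.95, True),   # explicit image, scored high
    (0.55, True),   # explicit image captioned "breast cancer" to evade
    (0.65, False),  # breast cancer awareness post with medical imagery
    (0.20, False),  # text-only awareness post
]

def count_errors(threshold):
    """Return (wrongly_removed, wrongly_kept) at a given removal threshold."""
    wrongly_removed = sum(1 for score, bad in scored_posts
                          if score >= threshold and not bad)
    wrongly_kept = sum(1 for score, bad in scored_posts
                       if score < threshold and bad)
    return wrongly_removed, wrongly_kept

for threshold in (0.5, 0.6, 0.7):
    fp, fn = count_errors(threshold)
    print(f"threshold {threshold}: {fp} wrongly removed, {fn} wrongly kept")

# Because the evasive post (0.55) scores *below* the innocent awareness
# post (0.65), no threshold gets both error counts to zero. "More negative
# samples" means retraining so awareness posts score lower and violating
# posts score higher -- separating the classes, which is the commitment
# Facebook actually made to the board.
```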
Douek also notes that some of Facebook's claims to be implementing the Board's recommendations are... misleading. In some cases the company is actually rejecting the Board's full recommendation:
In response to the FOB’s request for a specific transparency report about Community Standard enforcement during the COVID-19 pandemic, Facebook said it was “committed to action.” Great! What “action,” you might ask? It says that it had already been sharing metrics throughout the pandemic and would continue to do so. Oh. This is actually a rejection of the FOB’s recommendation. The FOB knows about Facebook’s ongoing reporting and found it inadequate. It recommended a specific report, with a range of details, about how the pandemic had affected Facebook’s content moderation. The pandemic provided a natural experiment and a learning opportunity: Because of remote work restrictions, Facebook had to rely on automated moderation more than normal. The FOB was not the first to note that Facebook’s current transparency reporting is not sufficient to meaningfully assess the results of this experiment.
Still, what's amazing to me is that these issues, which might actually change key aspects of Facebook's moderation, got next to zero public attention last week compared to the simple decisions on specific takedowns (and the massive flood of attention the Trump account suspension decision will inevitably get).
Filed Under: content moderation, policies, recommendations
Companies: facebook, oversight board
Reader Comments
So, Facebook will enforce the LATEST "official" information...
... and never be held liable for yesterday's enforcement of what turned out to be WRONG.
Now, I'm not getting into arguing about COVID beyond saying that what you claim is just simply WRONG too, but I do hope that you're wearing two masks, staying in your house except to get the shot -- which is a mutagen, NOT a vaccine -- and so on, and not making yourself a hypocrite on this topic like your governor Newsom.
ANYHOO, since "the science" in emerging areas manifestly isn't / wasn't nailed down:
A) Why should Facebook appoint itself as censor?
B) Isn't Facebook then the Publisher and potentially liable?
Re: So, Facebook will enforce the LATEST "official" information.
A) They shouldn't... and can't; they obviously have no power to censor. They're just some guys running a website you have no right to use to begin with, and they have no authority over anything except themselves.
B) Nope. Whether you call them a publisher or not is meaningless. Hit someone in the head with a hammer? Liable. Made the hammer? Not liable.
I'm still trying to figure out how boobs are so evil they must be hidden online.
I mean once upon a time there was this level of freakout over an exposed ankle & the ensuing claims that it would lead to the fall of good men.
I mean, like, the TOS says everyone has to be an adult, so it's not a 'think of the children' situation.
Like, a smart response would be having a setting that says "don't show me adult pictures" & letting the AI just flag things, rather than trying to turn the platform into a wasteland where you can't even mention breast cancer without setting off all of the alarm bells...
While BC gets all of the air in the room, testicular cancer is a thing too. How could anyone attempt to raise awareness about that in an age where someone thinks they saw a pubic hair in an image & it has to be taken down??
People want to abdicate responsibility for their comfort online to the platforms & make them only display the world in the way they like... while forcing everyone else to accept their comfort level as the baseline.
Platforms should honestly stop trying to win the unwinnable war & find other ways to please people.
If the AI flagged some content as graphic for x reasons, you could say "don't just display that, let me decide case by case" rather than having it nuke the image, only to discover it was an old Coppertone ad & the cartoon's exposed upper buttock drove the AI crazy.
But "forcing" people to take responsibility for what they see is treated like a cardinal sin, I don't want to see breasts... then why is your freinds list populated with nothing but THOTs who post their latest bandaid & dental floss creations?
The computer can NEVER make the world fit your desires; take control & responsibility for yourself.
The entire time I've been on Twitter I think I've posted exactly 1 thing with an actual exposed penis, and I debated doing it. I clearly labeled it, but it was the best punchline to a joke... iirc dude got nailed in his exposed junk by a snowball.
I know people who follow me aren't looking for porn content & I don't post that sort of thing; the trade-off is I tell them to never look at my likes. :D
If I was posting stuff they hated they'd leave rather than demand that Twitter check every picture I post & make sure they would never get offended again.
I've only had 1 follower be offended by something I posted in a response to them. They weren't bitchy or threatening; they just disliked a meme about the NYT that I used a lot. I deleted that tweet, the 2nd tweet I've ever deleted.
When I can use Twitter I see naked men; that happens when you follow a bunch of porn stars & accounts that RT hot guys that I find hot as well. No one forces me to follow them or see their pictures. I don't need an AI making sure I don't see a female breast; I just don't interact with those that post that content.
Platforms are really, really shitty parents. Stop asking them to be your parent & filter the internet to your comfort level. They can't always make sure you never see a boob on FB, but you're perfectly okay with never looking at your kid's FB account...
Re:
Not that I want to validate the idea that TOS have any real meaning whatsoever,
but I don't think Facebook's TOS requires you to be an adult, just 13 or something?
At any rate, it's not that people want to "abdicate responsibility for their comfort"; it's the opposite. Facebook knows people will take that responsibility upon themselves and will potentially leave if they aren't comfortable on Facebook. They only care about keeping customers' eyes on ads, so they target the lowest common comfort denominator, and that's no nudity.
Re: So, Facebook will enforce the LATEST "official" information.
The problem is that you have to look at context. The "don't wear a mask" advice wasn't "wrong" per se. It was stated out of a fear that everyone would buy up all the masks that health care providers needed, which was a legitimate fear. It was also stated before the virus was widespread. Some people interpreted "don't wear a mask at this time" as "you don't need a mask ever" or "masks don't work." And even that's actually true in some cases: some people rightly said "masks don't work," but with the caveat that "masks don't work if you're still touching your face and not washing your hands, etc." The simplified version is what gets repeated by people who want to support their own biases.
There's so much nuance in what was said and what was rephrased and what was repeated out of context that saying what was said was "WRONG" is functionally meaningless. And pointing to that as some kind of proof that you should never listen to the medical community because they changed their advice is misguided and absurd.
If one doctor is guilty of malpractice, it doesn't make every doctor guilty of malpractice anymore than having an MD means everything you say is good medical advice.
Re: Re:
And this is what has hurt Tumblr. They lost a lot of traffic when they banned adult content. The company went from selling for a billion dollars to selling for $3 million in about 6 years. Some people don't know their audience.
Re: Re:
I think a large portion of FB users would really enjoy nudity... as long as their mom isn't looking over their shoulder.
We have so many groups telling us the evils of seeing a boob/nipple/woman's ankle, all pretending women's body parts are magical and turn all men into rapists, while deciding for women that they can't choose whether to show a little side boob or not.
The superb owl wasn't that heavily attended by actual people this year (a pandemic or something, I heard), yet the same groups that do it every year claimed 3 trillion underage kids snatched from Walmart parking lots were trucked in to service 300 men a day.
These are the same groups who think seeing a boob too early in life (ignoring that whole first-few-years-of-life portion) breaks kids forever. We need to start requiring more facts for these things to be decided.
FB sold people's most private details to anyone who paid for years, and its users were targeted by psyops to swing votes... yet they stay on the platform... and you think a tit will make them leave?
Re: Re: Re:
There is a time & a place for everything...
It's one thing to see GQ vs Maxim vs Playboy vs Hustler vs some of the more hardcore material out there.
FB allowed private groups talking about overthrowing the government & people are still more worried they might see a boob on the platform.
Humans love to focus on things that attract attention... even if they aren't true.
I saw Janet Jackson's nipple on the TV & it did NOT cause me to run out and molest women. But hearing people still talk about it as one of the most horrible things on TV that we need to prevent makes me wonder if they've seen the footage of the aftermath of a drone strike on a wedding that was mistakenly targeted.
Re: So, Facebook will enforce the LATEST "official" information.
"I do hope that you're wearing two masks, staying in your house, except to get the shot -- which is a mutagen, NOT a vaccine -- and so on"
Well, I'm not, but my life has generally been normal for months except for some travel restrictions and early closing of bars and restaurants, which are being mostly lifted next week.
But, then, I'm not surrounded by idiots who believe their right to infect and kill 500k people is justified because they can't take slight discomfort for a few weeks -- which is why this has gone on way longer than it should have. Until a spike around Christmas largely caused by people travelling from other areas, my town had barely been affected, whereas people are dropping like flies in the "free" places where incompetent leaders manage to infect half their own staff.
"Why should Facebook appoint itself as censor?"
Because they own the property they are moderating. They do not have any power to moderate anyone else's property, nor are they attempting to do so.
"Isn't Facebook then the Publisher and potentially liable?"
Not according to Section 230 or the existing laws in any other country on the planet. Which, unless you get your wish (and the rest of us are unfortunately unable to partake in the schadenfreude of you experiencing the negative consequences of what you wish for), are still the law of the land.
Re: Re: So, Facebook will enforce the LATEST "official" information.
"The problem is that you have to look at context."
He has nothing if he sticks to pesky things like facts and context.
"It was stated out of a fear that everyone would buy up all the masks that health care providers needed, which was a legitimate fear"
There was some doubt early on about how the virus spread, and genuine evidence that it was more due to surface contact than being airborne. But, yeah, given the fact that some people were apparently trying to build home extensions out of toilet rolls (despite the disease not causing issues that would require more of them), and there was a genuine shortage of N95 masks and other PPE, that was the correct advice at the time. Which was quickly altered (not overturned) when the effectiveness of non-N95 masks became clear.
The only good thing about these people is that they all repeat the same misinformation rather than making up their own, so it's relatively easy to identify and defuse, even if some platforms have taken too long to do so.
Re: Re: Re:
"I think a large portion of FB users would really enjoy nudity"
They would. They can also go to PornHub or any of millions of other sites if they really want it.
Re: Re: Re: Re:
Except that the prude militias have set their sights on making sure there is no nudity anywhere online by being loud, lying that it causes all of society's ills, & pretending that they actually have enough people to boycott to hurt a corporation... but it's just the optics that matter.
See also: Congress passing a law to allow the 9/11 families to sue The Kingdom for their loss, while knowing full well international law bars that sort of action. Why did a pointless bill pass??
Because no one dared let it appear that they didn't fully support the families; it would have been political suicide. So instead we wasted time, money, and effort passing a law that would do nothing it promised (because it couldn't), hurting the families again by giving them false hope that they might get some closure, all to keep political opponents from spinning a vote against a pointless bill as hating America.
FB spent more time trying to detect & block boobs than stopping white supremacists...
Re: Re: Re:
I think it's not that a large number aren't fine with nudity or don't even enjoy it; it's that they aren't as offended by not having nudity on Facebook as the prudes are offended by having it.
I think Facebook thinks a tit has more of a chance of causing some prudes to leave (or causing a fuss in the news that makes others leave) than not having it has of driving off the "fine with nudity" side. Same reason they are caving to Republicans asking them to manipulate discussion towards bullshit and conspiracy theories for them. The whiny wheel gets the grease.
Re: Re: Re: Re: Re:
"Except that the prude militias have set their sites on making sure there is no nudity anywhere online"
...something which they have failed at and will continue to fail at.
"Because no one dared have it appear they didn't fully support the families, it would have been political suicide"
Or, because it didn't affect anyone personally and nobody outside of the "for the children" crowd they targeted noticed or cared about anything other than the 9/11 angle. Try that with telling people they can't look at boobs anywhere and I guarantee there will be a different response.
"FB spent more time trying to detect & block boobs than stopping white supremacists..."
That is a problem, but nobody forces you to use them if you disagree with that stance.