from the content-moderation-mess dept
Yesterday was certainly an interesting day in the content moderation realm. Early in the morning, the NY Post released a story that I won't link to, mainly because it's hot garbage. If you want a concise, thorough, non-hyperbolic, and non-partisan analysis of why the story is problematic, I suggest reading Thomas Rid's analysis, which highlights the many questions the story raises and why it should not be reported on until certain details are confirmed.
The story began to spread on social media, though. There were many people (like Thomas) raising questions, or in some cases debunking various aspects of the story (or just putting it into perspective). However, among supporters of President Trump, the story was (misleadingly) being used to claim there was a "smoking gun" of malfeasance on the part of Joe Biden's son, Hunter. The actual details of the story don't seem to support the claims being made. At the very least, without further confirmation, the story had all the hallmarks of a disinformation campaign.
As the story spread, both Facebook and Twitter decided to take steps to limit it, in both cases making use of relatively recently introduced policies. Facebook simply limited the story's spread while its fact-checkers reviewed it.
The company put that policy in place a year ago, after facing widespread complaints that by the time Facebook completed a thorough fact check on certain information, that information would have already spread widely. So Facebook set up a sort of two-step process: if something is going viral and there are concerns that it is disinformation, the company first slows the spread of the information, allowing the full fact check to take place, and then makes a further decision based on the results of that fact check. This is basically a "let's slow this down while we check it out and then decide what to do" policy.
Twitter's move was a bit stronger and more controversial. It blocked people from sharing the link altogether, though not for the reasons most people thought. Twitter says it blocked the sharing of the link because the article pointed to "hacked" materials. We've discussed Twitter's policy against hacked material in the past, including over the (controversial) decision to shut down the @DDoSecrets account, which posts newsworthy leaked documents, including the "BlueLeaks" trove of law enforcement documents.
As seen in the shutdown of @DDoSecrets, Twitter takes a pretty strict definition of "hacked material," and under that definition the Post's reporting on emails taken from a laptop would qualify. That is, even if you disagree with the specific policy Twitter has in place, it's silly to claim that the takedown was made for political purposes.
That said, this policy remains a silly and dangerous one. As we have noted in past discussions of it, journalists regularly report on leaked or "hacked" materials, and in many cases those materials are newsworthy. In this case, if the information is accurate, then it is newsworthy.
But, even more to the point, it was not difficult to guess where this decision was going to end up -- which is that the usual crew of Trump sycophants immediately insisted that this was "censorship" on the part of Twitter and Facebook, and an attempt to stifle a story that, if accurate, could be seen by some to paint Joe Biden in a bad light. Indeed, Senator Josh Hawley -- who seems to have found his niche in creating absolutely bullshit panics about this kind of stuff -- quickly got out his pen to send stupid letters. First he sent a nonsense letter to Facebook, claiming that Facebook following the policy it put in place last year was "selective" and "suggests partiality on the part of Facebook."
It seems odd that, if that's his concern, he hasn't sent such letters to the NY Post or Fox News or Breitbart -- all of whom have made editorial choices that show "partiality" to one candidate. And, of course, the letter fails to acknowledge that this has been Facebook's policy for a year now, or that there may be legitimate reasons to try to slow the spread of possible disinformation. Instead, he tries to cook up a conspiracy.
His letter to Twitter is more of the same. Here, we have a US Senator demanding information regarding a private company's editorial policies. That's a HUGE 1st Amendment problem.
This statement raises questions about the applicability of your policy, especially because such a pre-emptive removal of a news story on such grounds—and the additional scrutiny you have applied—appears to be an unusual intervention that is not universally applied to all content.
I ask that you immediately answer these questions and provide the requisite justifications so that your users can feel confident that you are not seeking to influence the outcome of the presidential election with your content removal decisions.
Politicians should not be able to demand that any private company explain its editorial decision-making process, especially under the threat of legislation. It's an obvious intimidation tactic, and it raises significant 1st Amendment questions.
Finally, Hawley sent an even more ridiculous letter to the Federal Election Commission, claiming that these moderation decisions violate campaign finance law. This is wrong on many, many, many levels. But first I'll note, in passing, that Republicans like Senator Hawley have made sure that the FEC doesn't even have enough commissioners to meet, because the Republican Senate has refused to confirm new ones.
As for Hawley's claims about campaign finance, they are also hot garbage:
This conduct does not merely censor the core political speech of ordinary Americans, though it certainly does that. Twitter’s and Facebook’s conduct also appears to constitute a clear violation of federal campaign-finance law. Federal law prohibits any corporation from making a contribution to a federal candidate for office. 52 U.S.C. § 30118(a). Twitter and Facebook are both corporations. A “contribution” includes “anything of value . . . for the purpose of influencing any election for Federal office.” 52 U.S.C. § 30101(8)(A)(i). Twitter’s and Facebook’s active suppression of public speech about the New York Post article appears to constitute contributions under federal law. There can be no serious doubt that the Biden campaign derives extraordinary value from depriving voters access to information that, if true, would link the former Vice President to corrupt Ukrainian oligarchs. And this censorship manifestly will influence the presidential election.
This is ridiculous. Again, if that interpretation held, then Fox News, Breitbart, and the NY Post would all "violate federal campaign finance law." Hell, under this interpretation, the NY Post "violated campaign finance law" by running the article. So would any newspaper that ran an endorsement. Or any newspaper that (quite reasonably) refused to run this sketchy garbage story from Rudy Giuliani in the first place. This is not how any of this works. And Hawley doesn't want it to work that way either, because it would destroy the media companies that prop up his own nonsense.
All that said, I still think that Twitter's decision to block the sharing of the link was hamfisted. Coming just after I praised the company for focusing on friction, rather than suppression, this move seemed more about suppression. And, of course, it was not effective. Lots of people started coming up with ways to get around the block, and at least one account tweeted out every sentence of the article as individual tweets. The Senate Republicans' Twitter account tweeted out a video of the article. And "Streisand Effect" trended on Twitter as Trump supporters noted that the block would likely only serve to draw more attention to the original story.
But it also did something worse. It played right into the bullshit narrative that Twitter engages in "anti-conservative bias" in its moderation practices, and it became a perfect (if inaccurate) talking point for Republicans like Josh Hawley and Donald Trump, whose entire schtick is playing the victim. It also took attention away from just how sketchy the original story was, shifting the focus to questions about social media companies and their moderation practices. That's not the story here, and it shouldn't be. But these decisions made it so.
Filed Under: content moderation, disinformation, misinformation
Companies: facebook, ny post, twitter