Social Media Can Apply COVID-19 Policies To Reduce the Spread of Election Disinformation
from the do-more,-do-better dept
With less than eighty days until Election Day and a pandemic surging across the country, disinformation continues to spread across social media platforms, posing dangers to public health, voting rights, and our democracy. Time is short, and social media platforms need to ramp up their efforts to combat election disinformation and online voter suppression, just as they have with COVID-19 disinformation.
Social media platforms have content moderation policies in place to counter both COVID-19 disinformation and election disinformation. However, platforms seem to be taking a more proactive approach to combating COVID-19 disinformation by building new tools, devoting significant resources, and, most importantly, updating their content moderation policies to reflect the evolving nature of inaccurate information about the virus.
To be clear, COVID-19 disinformation is still rapidly spreading online. However, the platforms’ actions on the pandemic demonstrate that they can develop specific policies to address and remove this harmful content. Platforms’ efforts to mitigate election disinformation, on the other hand, are falling short because of the significant gaps that remain in their content moderation policies. Platforms should seriously examine how their COVID-19 disinformation policies can be applied to reduce the spread of election disinformation and online voter suppression.
Disinformation on social media can spread in a variety of ways, including (1) a failure to prioritize authoritative sources of information and third-party fact-checking; (2) algorithmic amplification and targeting; and (3) the monetization of disinformation on the platforms. Social media platforms have revised their COVID-19 content moderation policies to address many of the ways disinformation about the pandemic can spread.
For example, Facebook, Twitter, and YouTube all direct their users to authoritative sources of COVID-19 information. In addition, Facebook works with fact-checking organizations to review and rate pandemic-related content; YouTube uses fact-checking information panels; and Twitter has begun adding fact-checked warning labels. Twitter has also taken the further step of expanding its definition of what it considers harmful content in order to capture and remove more inaccurate content related to the pandemic. To reduce the harms of algorithmic amplification, Facebook uses automated tools to downrank COVID-19 disinformation. And to stop the monetization of pandemic-related disinformation, Facebook restricts its advertising policies to prevent the sale of fraudulent medical equipment and prohibits ads that use exploitative tactics to create panic over the pandemic.
These content moderation policies have resulted in social media platforms taking down significant amounts of COVID-19 disinformation, including recent posts from President Trump. Again, disinformation about the pandemic persists on social media. But these actions show that platforms are willing to act to reduce the spread of this content.
By comparison, social media platforms have not been as proactive in enforcing existing policies or developing new ones to respond to the spread of election disinformation. Platforms’ civic integrity policies are primarily limited to prohibiting inaccurate information about voting procedures (e.g., misrepresentations of the dates and times people can vote). But even these limited policies are not being consistently enforced.
For example, Twitter placed a warning label on one of Trump’s inaccurate tweets about mail-in voting procedures but has taken no action on other, similar tweets from the president. Further, social media platforms’ current policies may not be broad enough to account for emerging voter suppression narratives about voter fraud and election rigging. Indeed, Trump has pushed inaccurate content about mail-in voting across social media platforms, falsely claiming it will lead to voter fraud and election rigging. With many states expanding their mail-in voting procedures due to the pandemic, Trump’s continued inaccurate attacks on this method of voting threaten to confuse and discourage eligible voters from casting their ballots.
Platform content moderation policies also contain significant holes that bad actors continue to exploit to spread online voter suppression. For example, Facebook refuses to fact-check political ads even when they contain demonstrably false information that discourages people from voting. President Trump’s campaign has taken advantage of this by flooding the platform with hundreds of ads spreading disproven claims about voter fraud. Political ads containing election disinformation can be algorithmically amplified or micro-targeted at specific communities to suppress their vote.
Social media platforms, including Facebook and Twitter, have recently announced new policies they will roll out to fight online voter suppression. As outlined above, there are lessons platforms can learn from their efforts in combating COVID-19 disinformation.
First, social media platforms should prioritize directing users to authoritative sources of information about the election, such as state and local election officials. Second, platforms must consistently enforce their content moderation policies and expand them as appropriate to remove election disinformation. As with their COVID-19 disinformation policies, platforms should build better tools and expand their definitions of harmful content when it comes to online voter suppression. Finally, platforms must address the structural problems that allow bad actors to engage in online voter suppression tactics, including algorithmic amplification and targeted advertising.
COVID-19, as dangerous and terrifying an experience as it has been, has at least proven that when platforms want to step up their efforts to stop the spread of disinformation, they can. If we want authentic civic engagement and a healthy democracy in which everyone’s voice can be heard, then we need digital platforms to ramp up their fight against online voter suppression, too. Our voices, and the voices of those in marginalized communities, depend on it.
Just as combating COVID-19 disinformation is vital to our public health, reducing the spread of election disinformation is critical to our democracy. As part of our efforts to stop the spread of online voter suppression, Common Cause will continue to monitor social media platforms for election disinformation, and we encourage readers to report any inaccurate content to our tip line. At the end of the day, the platforms themselves must step up their fight against new online voter suppression efforts.
Yosef Getachew serves as the Media & Democracy Program Director for Common Cause. Prior to joining Common Cause, Yosef served as a Policy Fellow at Public Knowledge, where he worked on a variety of technology and communications issues. His work focused on broadband privacy, broadband access and affordability, and other consumer issues.
Filed Under: content moderation, covid-19, disinformation, election disinformation