Content Moderation Case Study: Dealing With 'Cheap Fake' Modified Political Videos (2020)
from the political-moderation dept
Summary: For years, concerns have been raised about the possibility of “deep fake” videos impacting an election. Deep fakes are videos that have been digitally altered, often by inserting someone’s face onto another person’s body, to make it appear that the person was somewhere they were not or did something they did not do. To date, the more sophisticated deep fake videos have been created mainly for entertainment purposes, but there has been concern that they could be used to fabricate accusations against politicians or other public figures. So far, however, there has been little evidence of deep fake videos being used in elections. This may be because the technology is not yet good enough, or because such videos have been easy to debunk through other evidence.
Meanwhile, there has been increasing concern about something slightly different: “cheap fake” or “shallow fake” videos, which involve only slight modifications to real videos. These are less technically sophisticated than deep fakes, but potentially harder to combat.
One of the highest-profile examples was a series of videos of House Speaker Nancy Pelosi that went viral on social media after being slowed to 75% of their original speed. The modified videos were spread with false claims that they showed Pelosi slurring her words, possibly indicating intoxication. Various media organizations fact-checked the claims, noting that the videos had been altered and therefore presented a highly inaccurate picture of Pelosi and her speech patterns.
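To illustrate just how little sophistication a cheap fake of this kind requires, the sketch below shows one plausible way such a slowdown could be produced using the widely available ffmpeg tool. The filenames and exact filter settings are illustrative assumptions, not a reconstruction of how the actual videos were made:

```python
# Minimal sketch: producing a 75%-speed "cheap fake" with ffmpeg.
# Filenames are hypothetical; assumes ffmpeg is installed.
import subprocess

SPEED = 0.75  # play back at 75% of the original speed

subprocess.run([
    "ffmpeg", "-i", "original.mp4",
    # setpts stretches the video timestamps to slow playback;
    # atempo slows the audio while roughly preserving its pitch,
    # which helps mask the manipulation.
    "-filter_complex",
    f"[0:v]setpts=PTS/{SPEED}[v];[0:a]atempo={SPEED}[a]",
    "-map", "[v]", "-map", "[a]",
    "slowed.mp4",
], check=True)
```

A single command-line invocation, with no machine learning involved, is the entire “attack”: that low barrier to entry is what distinguishes cheap fakes from deep fakes.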
Some, including Speaker Pelosi herself, urged social media companies to delete these videos; Pelosi argued that Facebook in particular should remove them. Both Facebook and Twitter declined to take the videos down, saying they did not violate their policies. YouTube removed them.
In response to the concerns Pelosi raised, some argued that social media platforms could not realistically be expected to remove every misleading political statement that takes an opponent's words out of context or presents them deceptively, while others suggested that there is a clear difference between manipulated video and manipulated text.
Still others noted that it would be difficult to distinguish video manipulated to mislead from satire or other attempts at humor.
Company Considerations:
- Where should companies draw the line between misleading political content and deliberate misinformation?
- Under what conditions would a misleading cheap fake video separately violate other policies, such as harassment?
- What should the standards be for removing political content that could be deemed misleading?
- Does medium matter? Should there be different rules for manipulated videos as compared to other types of content, such as taking statements out of context?
- Should there be exceptions for parody/satire?
- Are there effective ways to distinguish videos manipulated to mislead from those manipulated for humor or commentary?
- Should the company have different standards if the subject of a cheap fake video was not a political or public figure?
- What alternative approaches, beyond blocking, could be used to address manipulated political videos?
- What are the possible unintended consequences if all “manipulated video” is deemed a policy violation?
- What, if any, is the value of not removing videos of political or public figures that are clearly misleading? Would there be any unintended consequences of such a policy?
- What are the implications for democracy if manipulated political videos are allowed to remain on a platform, where they may spread virally?
- Are misleading “cheap fake” videos about politicians considered political speech?
- Who should decide when “cheap fake” political speech is inaccurate and inappropriate: social media platforms, general public consensus, or an independent third party?
- How might “cheap fake” videos be used for harassment and bullying of non-public figures, and what are the potential implications for real life harm?
- If a cheap fake video originated not from a public source (as the Pelosi video did) but from a private video, how could a company determine that it had been manipulated? (See the sketch after this list.)
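On that last question: when a trusted original does exist, as with the publicly broadcast Pelosi footage, a platform can in principle compare an upload against it. The hypothetical sketch below, in which the library choices, threshold values, and filenames are all assumptions for illustration, flags an upload whose frames match a reference but whose duration has been stretched. The point is that this kind of check is unavailable when the source video is private and no reference exists.

```python
# Hypothetical sketch of reference-based checking: frames that look alike
# plus a stretched duration suggest a re-timed "cheap fake".
# Assumes the opencv-python and imagehash packages; filenames are illustrative.
import cv2
import imagehash
from PIL import Image

def sampled_hashes(path, samples=10):
    """Perceptual hashes of `samples` frames spaced evenly through the video."""
    cap = cv2.VideoCapture(path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    hashes = []
    for i in range(samples):
        cap.set(cv2.CAP_PROP_POS_FRAMES, i * total // samples)
        ok, frame = cap.read()
        if ok:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
    cap.release()
    return hashes

def duration(path):
    """Approximate duration in seconds, from frame count and frame rate."""
    cap = cv2.VideoCapture(path)
    secs = cap.get(cv2.CAP_PROP_FRAME_COUNT) / cap.get(cv2.CAP_PROP_FPS)
    cap.release()
    return secs

orig_h = sampled_hashes("original.mp4")
up_h = sampled_hashes("upload.mp4")
# Small hash distances mean the frames look alike even if the clip is re-timed.
similar = sum(a - b <= 10 for a, b in zip(orig_h, up_h)) / len(orig_h)
stretch = duration("upload.mp4") / duration("original.mp4")
if similar > 0.8 and abs(stretch - 1.0) > 0.05:
    print(f"Possible re-timed copy: plays at {1 / stretch:.0%} of original speed")
```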
Resolution:
Twitter left the Pelosi video up. However, by the time a very similar incident occurred a few months later, Twitter had announced a new plan for dealing with such content, saying that it would begin adding labels to “manipulated media,” offering context to people who came across such videos so they would understand that the video was not being shown in its original context. One of the first examples of Twitter applying this “manipulated media” label was to a comedy segment by late-night TV host Jimmy Kimmel, who used some manipulated video to make fun of then-Vice President Mike Pence.
Originally posted to the Trust & Safety Foundation website.
Filed Under: cheap fakes, content moderation, deep fakes, modified content, nancy pelosi, politics, shallow fakes, social media
Companies: facebook, twitter, youtube