Content Moderation Case Study: Twitter Experiences Problems Moderating Audio Tweets (2020)
from the content-moderation-in-new-areas dept
Summary: Since its debut in 2006, Twitter hasn't changed much about its formula, except for expanding its character limit from 140 to 280 characters in 2017 and adding useful features such as lists, trending topics, and polls. Twitter embraced images and videos, adding them to its original text-only formula, but seemed to have little use for audio. That changed in June 2020, when Twitter announced it would allow users to upload audio-only tweets. Remaining true to the original formula, audio tweets were limited to 140 seconds, although Twitter would automatically add new audio tweets to a thread if the user's recording ran long.
With Twitter already engaged in day-to-day struggles moderating millions of tweets, critics and analysts expressed concern the platform would be unable to adequately monitor tweets whose content couldn't be immediately discerned by other users. The content couldn't be pre-screened by moderators -- at least not without significant AI assistance. But that assistance might prove problematic if it caused more problems than it solved by overblocking.
There was also the potential for harassment. Since moderation of audio tweets relied heavily on reports from other Twitter users, abusive audio tweets could be posted and remain up until someone noticed and reported them. Another issue audio tweets raised wasn't about proactively flagging and removing unwanted content, but that the new offering excluded certain Twitter users from being a part of the conversation.
“Within hours of the first voice tweets being posted, deaf and hard-of-hearing users began to criticize the tool, saying that Twitter had failed to provide a way to make the audio clips accessible for anyone who can’t physically hear them.”
-- Kiera Frazier, YR Media
The new feature debuted without auto-captioning or any other options that would have made the content more accessible to Deaf or hard of hearing users.
There were other potential problems, such as users being exposed to possibly disturbing content with no heads up from the platform.
“‘You can Tweet a Tweet. But now you can Tweet your voice!’ This was how Twitter introduced last week its new audio-tweet option. In the replies to the announcement [another user asked], ‘Is this what y’all want?’ ... reposting another user’s audio tweet, which used the new feature to record the sounds of… porn.”
-- Hanna Kozlowska, OneZero
Unlike other adult content on Twitter, the recording of porn sounds was not labelled as sensitive by Twitter or hidden from users whose account settings requested they not be shown this sort of content.
Company considerations:
- Is it possible to proactively filter audio content to be flagged, prevented from being posted, or quickly removed?
- If an audio tweet is reported, should Twitter immediately suspend the user's access to the feature or wait until the report is reviewed? If the tweet violates Twitter's content policy, should the feature be taken away from the user temporarily or permanently?
- Should reported but unreviewed audio tweets be labelled as sensitive until cleared by moderators or AI?
- Should users be given the option to hide/block all audio tweets?
- What makes audio-only moderation different from moderating text, images, or video? Is audio-only moderation more challenging?
- What are other proactive methods of moderating audio content? Would they be more or less effective than relying on users flagging abusive content?
- Is AI reliable enough to handle most instances of unwanted content without the assistance of human moderators?
- How can platforms ensure that audio-only or visual-only content is accessible to users with hearing or visual disabilities or impairments?
“We're sorry about testing voice Tweets without support for people who are visually impaired, deaf, or hard of hearing. It was a miss to introduce this experiment without this support.
Accessibility should not be an afterthought.”
-- Twitter
The platform fixed some issues with visual accessibility and said it was implementing a combination of automated and human captioning to give Deaf and hard-of-hearing users a way to access this content.
As for the porn-audio tweet, Twitter flagged it after it was reported but did not appear to have any other approach to dealing with questions around adult content in audio tweets. It appears sensitive content is not as easy to detect when it's in audio form, which means that for now, it's up to users to report unwanted or abusive content so that Twitter can take action.
Originally published on the Trust & Safety Foundation website.
Filed Under: audio, audio tweets, content moderation
Companies: twitter