Content Moderation Case Study: Twitter's Self-Deleting Tweets Feature Creates New Moderation Problems
from the content-moderation-at-the-fleeting-level dept
Summary: In its 15 years as a micro-blogging service, Twitter has given users more characters per tweet, reaction GIFs, multiple UI options, and the occasional random resorting of their timelines.
The most recent offering was to give users the option to create posts designed to be swept away by the digital sands of time. Early in 2020, Twitter announced it would be rolling out "Fleets" — self-deleting tweets with a lifespan of only 24 hours. This put Twitter on equal footing with Instagram's "Stories" feature, which allows users to post content with a built-in expiration date.
In the initial, limited rollout of Fleets, Twitter reported that the feature showed advantages over the platform's standard offering. Twitter Comms tweeted that initial testing looked promising, stating that it was seeing "less abuse with Fleets" with only a "small percentage" of Fleets being reported each day.
Whether this early indicator was a symptom of the limited rollout or of users viewing self-deleting abuse as a problem that solves itself, the wider rollout wasn't nearly as smooth as those early indicators suggested, nor was it relatively abuse-free. Fleets' full debut arrived in the wake of an incredibly contentious U.S. presidential election, one marred by election interference accusations and a constant barrage of misinformation. The full rollout also came after nearly a year of a worldwide pandemic, which had produced a constant flow of misinformation across multiple social media platforms globally.
While amplification of misinformation contained in Fleets was somewhat tempered by their innate ephemerality, as well as very limited interaction options, it was unclear how, or how well, Twitter was handling moderation of misinformation spread by the new communication option. Extremism researcher Marc-Andre Argentino was able to send out a series of "fleets" containing misinformation and banned URLs, noting that Twitter only flagged one that asserted a link between the virus and cell phone towers.
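For illustration only, here is a minimal sketch of the kind of pre-publication check a platform might run on an ephemeral post to catch links to blocklisted domains. The domain list, the function name, and the assumption that Fleets were screened this way are inventions for the example, not a description of Twitter's actual systems.

```python
import re
from urllib.parse import urlparse

# Placeholder blocklist; a real system would pull this from a maintained service.
BANNED_DOMAINS = {"example-misinfo.test", "banned-source.test"}

URL_PATTERN = re.compile(r"https?://\S+")

def screen_fleet(text: str) -> bool:
    """Return True if the ephemeral post should be held for review
    because it links to a blocklisted domain."""
    for raw_url in URL_PATTERN.findall(text):
        domain = urlparse(raw_url).netloc.lower()
        if domain in BANNED_DOMAINS:
            return True
    return False

# Usage
print(screen_fleet("check this out https://example-misinfo.test/5g-towers"))  # True
print(screen_fleet("just having coffee, no links here"))                      # False
```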
Samantha Cole reported other Fleet moderation issues. Writing for Motherboard, Cole noted that apparent glitches were allowing users to see Fleets from people they had blocked, as well as Fleets from people who had blocked them. Failing to maintain settings that users set up to block or mute others created more avenues for abuse. Cole also pointed out that users weren't being notified when their tweets were added to Fleets, providing abusive users with another option to harass while the targets of abuse remain unaware.
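As a hypothetical sketch of the architectural issue Cole described: if every surface (timeline, ephemeral stories, search) routes its visibility decisions through one shared block/mute check, a new feature cannot quietly bypass an existing block. The class and data shapes below are illustrative assumptions, not Twitter's actual code.

```python
from dataclasses import dataclass, field

@dataclass
class SocialGraph:
    # (blocker, blocked) and (muter, muted) pairs; illustrative storage only.
    blocks: set = field(default_factory=set)
    mutes: set = field(default_factory=set)

    def can_view(self, viewer: str, author: str) -> bool:
        """Hide content if either account has blocked the other,
        or if the viewer has muted the author."""
        if (viewer, author) in self.blocks or (author, viewer) in self.blocks:
            return False
        if (viewer, author) in self.mutes:
            return False
        return True

# Every surface should call the same check, so a new feature
# cannot accidentally show content across a block.
graph = SocialGraph(blocks={("alice", "bob")})
print(graph.can_view("alice", "bob"))  # False: alice blocked bob
print(graph.can_view("bob", "alice"))  # False: blocks hide content in both directions
```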
Company Considerations:
- How can Twitter prevent new features from duplicating existing moderation problems?
- How can companies test a feature's initial rollout to better detect possible abuses, and thereby reduce moderation needs in the wider rollout?
- How does ephemeral content affect moderation efforts and moderation response time?
- If issues remain unsolved or poorly addressed, who has the power to shut down or temporarily disable a new feature?
- How much time should moderation teams be given to adjust to new responsibilities and new inputs when a new feature is rolled out? What metrics would be useful to determine whether moderation responses are successfully addressing new abuses and problems?
Issue Considerations:
- What processes should companies have in place to mitigate damage if a feature doesn't perform in the expected way and/or creates unforeseen problems?
- Does "fleeting" content have the potential to cause moderators to view abusive posts as problems that will solve themselves? How can this mindset be discouraged or counteracted?
Resolution: Twitter's immediate response to the issues during the full rollout was to temporarily slow the deployment of the feature to users. While the issues that impacted moderation never really dissipated, the feature itself did. Twitter noted that Fleets did not have the uptake it expected. Although Fleets was supposed to encourage more engagement from Twitter users who lurked more than posted, observers noted that the feature appeared to be used mostly by users who were already heavily engaged with the platform.
With the feature never being much more than a novelty for Twitter die-hards, Twitter killed off the feature on August 3, 2021, taking with it the moderation problems the self-killing Fleets had created.
Originally posted to the Trust & Safety Foundation website
Filed Under: content moderation, fleets, transitory content
Companies: twitter
Reader Comments
If only we knew how it would work/fail
Okay, the problem is as old as the web/internet. As old as big business having servers.
To solve it you need two things. First, a big test environment that is not connected to the production environment.
Second (and this part is a bit busy): you must give test/beta access to, and only to, this list of testers: engineers; race car drivers/mechanics/engineers; parents of teens, and a separate group for pre-teens (or your youngest perceived audience).
After those people have finished trashing it, then you can let actual teens in as well. They will trash it further.
Why these people? Engineers and the Racers are always working with not enough resources, and too many rules (which are limitations). They spend a lot of energy on circumventing restrictions.
Parents: do I really have to explain what teenagers and preteens are capable of? Breaking the rules without breaking them completely is their forte.
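A minimal sketch of how that staged access might be wired up, assuming a simple feature-flag-plus-cohort model (the class, cohort names, and environment labels are made up for illustration, not any platform's real setup):

```python
from dataclasses import dataclass, field

@dataclass
class FeatureFlag:
    name: str
    environment: str            # e.g. "test"; stays off in "production" during the beta
    enabled_cohorts: set = field(default_factory=set)

    def is_enabled_for(self, user_cohort: str, current_env: str) -> bool:
        # On only in the intended environment, and only for listed tester cohorts.
        return current_env == self.environment and user_cohort in self.enabled_cohorts

fleets_beta = FeatureFlag(
    name="fleets_beta",
    environment="test",
    enabled_cohorts={"engineers", "race_teams", "parents_of_teens"},
)

print(fleets_beta.is_enabled_for("engineers", "test"))        # True
print(fleets_beta.is_enabled_for("engineers", "production"))  # False
print(fleets_beta.is_enabled_for("general_public", "test"))   # False
```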
Of course, a lot of expense could be spared if they would just read "It is impossible to moderate at scale" by Mike Masnick. But companies are driven to make fools of themselves, so the reading will be handed off to a subordinate, who will give a two-bullet summary at the next exec meeting.