Content Moderation Case Study: Twitter Acts To Remove Accounts For Violating The Terms Of Service By Buying/Selling Engagement (March 2018)
from the fake-followers dept
Summary: After an investigation by BuzzFeed uncovered several accounts trafficking in paid access to "decks" -- TweetDeck accounts from which buyers could mass-retweet their own tweets to make them go "viral" -- Twitter acted to shut down the abusive accounts.
Most of the accounts were run by teens who used the tools provided by Twitter-owned TweetDeck to give paying customers' tweets mass exposure. Until Twitter acted, users who saw their tweets go viral under other users' names tried to police the problem themselves by naming the paid accounts and adding them to blocklists.
Twitter's Rules expressly forbid users from "artificially inflating account interactions". But most accounts were apparently removed under Twitter's anti-spam policy -- one it beefed up after BuzzFeed published its investigation. The biggest change was removing the ability to retweet the same tweet simultaneously from several different accounts, rendering the "decks" built by "Tweetdeckers" mostly useless. Tweetdeckers responded by taking a manual approach to faux virality, sending direct messages requesting mutual retweets of posted content.
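A minimal sketch of how that kind of coordination might be detected is below: flag pairs of accounts that repeatedly retweet the same tweets within seconds of one another. The data layout, thresholds, and function names are illustrative assumptions made for this case study, not a description of Twitter's actual anti-spam pipeline.

```python
# Hypothetical coordination check: accounts that repeatedly retweet the
# same tweets within a narrow time window look like a "deck", not like
# organic virality. Records and thresholds below are invented for
# illustration.
from collections import defaultdict
from itertools import combinations

# Each record: (account_id, tweet_id, retweet timestamp in seconds)
retweets = [
    ("deck_a", "t1", 100.0), ("deck_b", "t1", 100.4), ("deck_c", "t1", 100.9),
    ("deck_a", "t2", 500.0), ("deck_b", "t2", 500.3), ("deck_c", "t2", 500.6),
    ("organic_user", "t1", 4000.0),  # retweeted much later, not flagged
]

WINDOW_SECONDS = 2.0   # how close two retweets must be to count as "simultaneous"
MIN_SHARED_TWEETS = 2  # how many co-timed retweets before a pair is suspect

def suspicious_pairs(records):
    """Return account pairs that repeatedly retweet the same tweets
    within WINDOW_SECONDS of each other."""
    by_tweet = defaultdict(list)
    for account, tweet, ts in records:
        by_tweet[tweet].append((account, ts))

    pair_hits = defaultdict(int)
    for actions in by_tweet.values():
        for (acc1, t1), (acc2, t2) in combinations(actions, 2):
            if abs(t1 - t2) <= WINDOW_SECONDS:
                pair_hits[tuple(sorted((acc1, acc2)))] += 1

    return {pair for pair, hits in pair_hits.items() if hits >= MIN_SHARED_TWEETS}

print(suspicious_pairs(retweets))
# {('deck_a', 'deck_b'), ('deck_a', 'deck_c'), ('deck_b', 'deck_c')}
```

A real system would need to weigh many more signals (account age, follower overlap, payment patterns) to avoid flagging genuinely viral moments -- exactly the false-positive risk raised in the questions below.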
Unlike other corrective actions taken by Twitter in response to mass abuse, this cleanup process appears to have resulted in almost no collateral damage. Some users complained their follower counts had dropped, but this was likely the result of near-simultaneous moderation efforts targeting bot accounts.
Decisions to be made by Twitter:
- Do additional moderation efforts -- AI or otherwise -- need to be deployed to detect abuse of Twitter Rules?
- How often do these efforts mistakenly target legitimately "viral" content?
- Will altering TweetDeck features harm users who aren't engaged in the buying and selling of "engagement"?
- Will power users or those seeking to abuse the rules move to other third-party offerings to avoid moderation efforts?
- Is there any way to neutralize "retweet for retweet" requests in direct messages without raising concerns about user privacy? (One metadata-only approach is sketched after this list.)
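One possible answer to that last question, sketched below under stated assumptions, is to rely on public retweet metadata rather than message content: pairs of accounts whose retweets of each other are heavily mutual can be flagged without ever reading a direct message. This is a speculative illustration, not anything Twitter has confirmed deploying.

```python
# Hypothetical privacy-preserving check: infer "retweet for retweet"
# deals from public retweet metadata alone, never touching DM content.
# Account names and the threshold are invented for illustration.
from collections import defaultdict

# Each record: (retweeter, original_author)
retweets = [
    ("alice", "bob"), ("bob", "alice"),
    ("alice", "bob"), ("bob", "alice"),
    ("alice", "bob"), ("bob", "alice"),
    ("carol", "dave"),  # one-way retweeting, likely organic
]

MIN_MUTUAL = 3  # retweets required in *each* direction before flagging

def reciprocal_rings(records):
    """Flag account pairs whose retweets of each other are heavily mutual."""
    counts = defaultdict(int)
    for retweeter, author in records:
        counts[(retweeter, author)] += 1

    flagged = set()
    for (a, b), n in counts.items():
        if n >= MIN_MUTUAL and counts.get((b, a), 0) >= MIN_MUTUAL:
            flagged.add(tuple(sorted((a, b))))
    return flagged

print(reciprocal_rings(retweets))  # {('alice', 'bob')}
```

The tradeoff is sensitivity: mutual retweeting is also how genuine communities behave, so a metadata-only signal like this would need corroborating evidence before any enforcement action.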
Questions and policy implications to consider:
- Does targeting spam more aggressively risk alienating advertisers who rely on repetitive/scheduled posts and active user engagement?
- Does spam (in whatever form -- including the manufactured virality seen above) still provide some value for Twitter as a company, considering it relies on active users and engagement to secure funding and/or sell ad space to companies?
- Do viral posts still add value for Twitter users, even if the source of the virality is illegitimate?
- Will increased moderation of spam reduce user engagement during events where advertising efforts and user engagement are routinely expected to increase (elections, sporting events, etc.)?
Filed Under: buying engagement, case study, content moderation, fake followers, tweetdeck
Companies: twitter
Reader Comments
Why pay to re-tweet?
I’m unable to divine why Person A would pay money to Person B so that Person B would perform the service of taking a tweet written by Person C (who has no previous relationship with Person A, and whose tweets would not seem to have any applicability to the goals of Person A) and sharing that tweet with a large audience.
If Person C had written a tweet that had said “Send money to my bank account #1234567” and Person A happened to have access to that bank account, then I could see Person A wanting to broadcast that tweet as widely as possible. But it’s unlikely such a situation would arise.
Is the explanation that the tweet is some re-usable cash-grab instrument? I can imagine something such as "If you like this tweet, then send $50 to bank account #8901234," but no tweet is such an effective tool that it cannot be imitated. So why would there be a need for Person A to steal a tweet from Person C, for dissemination by Person B?
It doesn’t seem to make sense.