Content Moderation Case Study: Lyft Blocks Users From Using Their Real Names To Sign Up (2019)
from the scunthorpe-again? dept
Summary: Users attempting to sign up for a new ride-sharing program ran into a problem dating back to the earliest days of content moderation. The "Scunthorpe problem" got its name in 1996, when AOL refused to let residents of Scunthorpe, England register accounts with the online service. The service's blocklist of "offensive" words matched four of the first five letters of the town's name and served up a blanket ban to residents.
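The failure mode is a naive substring match: the filter scans the whole name for any blocklisted sequence of letters, with no notion of word boundaries. A minimal sketch of that logic (the blocklist entries and function name here are illustrative, not AOL's or Lyft's actual code):

```python
# Naive substring-based name filter -- the root of the Scunthorpe problem.
# Blocklist entries are illustrative only.
BLOCKLIST = {"cum", "cunt", "dick", "poon"}

def is_name_allowed(name: str) -> bool:
    """Reject any name containing a blocklisted substring, anywhere."""
    lowered = name.lower()
    return not any(bad in lowered for bad in BLOCKLIST)

for name in ["Nicole Cumming", "Dick DeBartolo", "Candace Poon", "Scunthorpe"]:
    print(name, "->", "allowed" if is_name_allowed(name) else "rejected")
```

Every one of those legitimate names is rejected, because the check has no way to distinguish an offensive word from a harmless name that happens to contain the same letters.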
Flash forward twenty-three years and services still aren't much closer to solving this problem.
Users attempting to sign up for Lyft found themselves booted from the service for "violating community guidelines" simply for attempting to create accounts using their real names. Some of the users affected were Nicole Cumming, Cara Dick, Dick DeBartolo, and Candace Poon.
These users were asked to "update their names," as though such a thing were even possible with a service that ties names to payment systems and to internal efforts to ensure driver and passenger safety.
Decisions to be made by Lyft:
- Should names triggering Community Guidelines violations be reviewed by human moderators, rather than automatically rejected?
- Is the cross-verification process enough to deter pranksters and trolls from activating accounts with actually offensive names?
- Considering the identification system is backstopped by credit cards and payment services that require real names, does deploying a blocklist actually serve any useful purpose?
- Given that potential users are likely to abandon a service that generates too much friction at sign up, does a blocklist like this do damage to company growth?
- Does global growth create a larger problem by adding other languages and possible names that will trigger rejections of more potential users? Can this be mitigated by backstopping more automatic processes with human moderators?
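One mitigation raised above -- backstopping the automatic filter with human moderators -- would mean routing flagged names into a review queue instead of auto-rejecting them. A sketch of that flow, under the assumption of a simple in-memory queue (the function and queue names are hypothetical, not Lyft's actual system):

```python
from typing import Literal

# Illustrative blocklist entries only.
BLOCKLIST = {"cum", "dick", "poon"}

review_queue: list[str] = []  # stand-in for a real human-review workflow

def screen_name(name: str) -> Literal["approved", "needs_review"]:
    """Instead of auto-rejecting a flagged name, queue it for a human."""
    if any(bad in name.lower() for bad in BLOCKLIST):
        review_queue.append(name)
        return "needs_review"
    return "approved"

print(screen_name("Candace Poon"))  # flagged -> a human makes the final call
print(screen_name("Alice Smith"))   # passes the filter, approved immediately
```

The trade-off is latency and moderation cost: a flagged user waits for a human decision rather than being blocked outright, but legitimate names are no longer rejected automatically.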
Unfortunately, the problem still hasn't been solved. Candace Poon -- whose first attempt to sign up for Lyft was rejected -- just ran into the same issue while attempting to create an account on the new social media platform Clubhouse.
Originally posted to the Trust & Safety Foundation website.
Filed Under: content moderation, filtering, keywords, names, scunthorpe problem
Companies: lyft
Reader Comments
And in case anybody else recognized his name: yes, the Dick DeBartolo mentioned in this story is Dick DeBartolo the prolific writer for Mad Magazine.
People who could have been affected.
I have relatives whose last name is Weiner, and they could've been affected by this ban. It just shows you: never outsource naughty-word moderation to machines, because those "naughty words" can be people's actual names.
A new example of the Scunthorpe problem: Facebook yanked down the page for the French city of Bitche. (The page has since been restored, but still.)
Relevant article about the issue from the perspective of a programmer:
Falsehoods Programmers Believe About Names
The only thing automatic profanity filters are good for is amusing "pranksters and trolls" intentionally playing with them. Everyone else finds them irrelevant at best, and disastrous at worst.
AirBNB also has silly filters
If you list any of the following words as your employer or mention them in your bio, AirBNB won't allow it: Google, Twitter, Facebook (there are others, including competitors like VRBO).
It's not a well-constructed filter, either: adding Unicode zero-width spaces between the letters fools it.
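The zero-width-space trick works because a substring check compares raw code points, so an invisible character in the middle of a word breaks the match. A minimal sketch of the evasion and one way to close the hole (the filter here is hypothetical, not AirBNB's actual code):

```python
# Illustrative blocked-word list only.
BLOCKED_EMPLOYERS = {"google", "twitter", "facebook"}

def bio_allowed(text: str) -> bool:
    """Naive check: reject bios mentioning a blocked employer."""
    lowered = text.lower()
    return not any(word in lowered for word in BLOCKED_EMPLOYERS)

plain = "I work at Google"
evaded = "I work at Go\u200bogle"  # U+200B ZERO WIDTH SPACE breaks the match

print(bio_allowed(plain))   # blocked
print(bio_allowed(evaded))  # slips through: the invisible character fools it

# Stripping zero-width characters before matching closes this particular hole:
def bio_allowed_robust(text: str) -> bool:
    cleaned = text.lower().replace("\u200b", "")
    return not any(word in cleaned for word in BLOCKED_EMPLOYERS)

print(bio_allowed_robust(evaded))  # blocked again
```

Even the "robust" version only handles one evasion character; homoglyphs and other default-ignorable code points would need similar normalization, which is part of why these filters are so hard to get right.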