Content Moderation Case Study: Kik Tries To Get Abuse Under Control (2017)
from the kids-will-be-kids dept
Summary: The messaging service Kik was founded in 2009 and has gone through multiple iterations over the years. It built a large following for mostly anonymous communication, allowing users to create multiple usernames not linked to a phone number and to establish private connections via those usernames. This privacy feature has been applauded by some as important for journalists, activists, and at-risk populations.
However, the service has also been decried by many as being used in dangerous and abusive ways. NetNanny ranks it as the most dangerous messaging app for kids, saying that it “has had a problem with child exploitation” and highlighting the many “inappropriate chat rooms” accessible to kids on the app. Others have noted that, while the service is used by many teenagers, many of those teens feel it is unsafe and full of sexual content and harassment.
Indeed, in 2017, a Forbes report detailed that Kik had a huge “child exploitation problem.” It described multiple cases of child exploitation found on the app and claimed that the company did not appear to be doing much to deal with the problem, which seemed especially concerning given that over half of its user base was under 24 years of age.
Soon after that article, Kik began to announce changes to its content moderation efforts. It teamed up with Microsoft to improve its moderation practices, announced a $10 million effort to improve safety on the service, and named several high-profile individuals to its new Safety Advisory Board.
A few months later, the company announced updated community standards with a focus on safety, along with a partnership with Crisis Text Line. However, that appeared to do little to stem the concerns. A 2018 report said that, among law enforcement officers, the app that concerned them most was Kik, with nearly all saying they had come across child exploitation cases on the app and that the company was difficult to deal with.
In response, the company argued that while it was constantly improving its trust & safety practices, it also wanted to protect the privacy of its users.
Decisions to be made by Kik:
- How can a company that promotes the privacy-protective nature of its messaging also limit and prevent serious and dangerous abusive practices?
- How closely should Kik work with law enforcement when it finds evidence of crimes on the platform?
- Are there additional tools and features that can be implemented that would discourage those looking to use the platform in abusive ways?
- Are there ways to retain the benefits for journalists, activists, and at-risk groups that do not put others -- especially children -- at risk?
- What are the tradeoffs between enabling useful private communications and making sure such tools are not used in abusive or dangerous ways?
After announcing in late 2019 that it would shut down the messaging service to focus on a new cryptocurrency plan, the company reversed course soon after and sold the messenger product to a new owner. In the year and a half since the sale, Kik has not added any new content to its safety portal, and more recent articles still highlight how frequently child predators are found on the service.
Originally published on the Trust & Safety Foundation website.
Filed Under: content moderation, messaging, privacy, safety
Companies: kik