Content Moderation Case Study: Newsletter Platform Substack Lets Users Make Most Of The Moderation Calls (2020)
from the newsletter-moderation dept
Summary: Substack launched in 2018, offering writers a place to engage in independent journalism and commentary. Looking to fill a perceived void in newsletter services, Substack gave writers an easy-to-use platform they could monetize through paid subscriptions.
As Substack began to attract popular writers, concerns over published content began to increase. The perception was that Substack attracted an inordinate number of creators who had either been de-platformed elsewhere or embraced views not welcome on other platforms. High-profile writers who found themselves jobless after crafting controversial content appeared to gravitate to Substack (including big names like Glenn Greenwald of The Intercept and New York magazine's Andrew Sullivan), giving the platform the appearance of endorsing those views by providing a home for writers unwelcome pretty much everywhere else.
A few months before the current controversy over Substack's content reached critical mass, the platform attempted to address questions about content moderation with a blog post arguing that most content decisions could be made by readers, rather than by Substack itself. The post made clear that users were in charge at all times: readers had no obligation to subscribe to content they didn't like, and writers were free to leave at any time if they disagreed with Substack's decisions.
But even then, the platform's moderation policies weren't completely hands off. As its post pointed out, the platform would take its own steps to remove spam, porn, doxxing, and harassment. Of course, the counterargument raised was that Substack's embrace of controversial contributors provided a home for people who'd engaged in harassment on other platforms (and who were often no longer welcome there).
Decisions to be made by Substack:
- Does offloading moderation to users increase the amount of potentially-objectionable content hosted by Substack?
- Does this form of moderation give Substack the appearance it approves of controversial content contributed by others?
- Is the company prepared to take a more hands-on approach if the amount of objectionable content hosted by Substack increases?
- Does a policy that relies heavily on users and writers for enforcement allow those users and contributors to shape Substack's "identity"?
- Does limiting moderation by Substack attract the sort of contributors Substack desires to host and/or believes will make it more profitable?
- Does the sharing of content off-platform undermine Substack's belief that readers have complete control over the kind of content they're seeing?
Most significantly, Substack announced it would not allow "hate speech" on its platform, although its definition was narrower than the policies of other social media services. Attacks on people based on race, ethnicity, religion, gender, and similar characteristics would not be permitted. However, Substack would continue to host attacks on "ideas, ideologies, organizations, or individuals for other reasons, even if those attacks are cruel and unfair."
Originally posted to the Trust & Safety Foundation website.
Filed Under: content moderation, controversy, email, newsletters
Companies: substack