The Copia Institute To The Oversight Board Regarding Facebook's Trump Suspension: There Was No Wrong Decision
from the context-driven-coin-flip dept
The following is the Copia Institute's submission to the Oversight Board as it evaluates Facebook's decision to remove some of Trump's posts and his ability to post. While addressed to the Board, it's written for everyone thinking about how platforms moderate content.
The Copia Institute has advocated for social media platforms to permit the greatest amount of speech possible, even when that speech is unpopular. At the same time, we have also defended the right of social media platforms to exercise editorial and associative discretion over the user expression they permit on their services. This case illustrates why we have done both. We therefore take no position on whether Facebook's decision to remove former President Trump's posts and disable his ability to make further posts was the right decision for Facebook to make, because either choosing to do so or choosing not to would be defensible. Instead our goal is to explain why.
Reasons to be wary of taking content down. We have long held the view that the reflex to remove online content, even odious content, is generally not a healthy one. Not only can it backfire and lead to the removal of content undeserving of deletion, but it can have the effect of preserving a false monoculture in online expression. Social media is richer and more valuable when it can reflect the full fabric of humanity, even when that means enabling speech that is provocative or threatening to hegemony. Perhaps especially then, because so much important, valid, and necessary speech can so easily be labeled that way. Preserving different ideas, even when controversial, ensures that there will be space for new and even better ones, whereas policing content for compliance with current norms only distorts those norms' development.
Being too willing to remove content also has the effect of teaching the public that the way to respond to speech that provokes is to demand its suppression. Instead of a marketplace of ideas, this burgeoning tendency means that discourse becomes a battlefield, where the view that prevails is the one that can amass enough censorial pressure to remove its opponent, even when that opponent is the view with the most merit. The more Facebook feeds this unfortunate instinct by removing user speech, the more vulnerable it will be to further pressure demanding still more removals, even of speech society would benefit from. The reality is that there will always be disagreements over the worth of certain speech. As long as Facebook assumes the role of arbitrator, it will always find itself in the middle of an unwinnable tug-of-war between conflicting views. To break this cycle, removals should be made with reluctance and only on the basis of limited, specific, identifiable, and objective criteria justifying the exception. It may be hard to apply such criteria consistently at scale, but more restraint will in the long run mean less error.
Reasons to be wary of leaving content up. The unique challenge presented in this case is that the Facebook user at the time of the posts in question was the President of the United States. This fact cuts in multiple ways: as the holder of the highest political office in the country, Trump's speech was of particular relevance to the public, and thus particularly worth facilitating. After all, even if Trump's posts were debauched, they were the views of the President, and it would not have served the public for him to be of this character without the public knowing it.
On the other hand, as the then-President of the United States his words had greater impact than any other user's. They could do, and did, more harm, thanks to the weight of authority they acquired from the imprimatur of his office. And those real-world effects provided a perfectly legitimate basis for Facebook to take steps to (a) mitigate that damage by removing posts and (b) end the association that had allowed him to leverage Facebook for those destructive ends.
If Facebook concludes that anyone's use of its services is not in its interests, the interests of its user community, or the interests of the wider world Facebook and its users inhabit, it can absolutely decide to refuse that user continued access. And it can reach that conclusion based on wider context, beyond platform use. Facebook could, for instance, deny access to a confessed serial killer who only uses Facebook to publish poetry if it felt that the association ultimately served to enable the bad actor's bad acts. As with speech removals, and given the impact of such terminations, these decisions should be made with reluctance and based on limited, specific, identifiable, and objective criteria. Just as continued access to Facebook may be unduly empowering for users, denying it can be equally disempowering. But Trump, as President, did not need Facebook to communicate with the public. He had access to other channels, and Facebook had no obligation to be conscripted into enabling his mischief. Indeed, Facebook has no obligation to enable anyone's mischief, political leader or otherwise.
Potential middle grounds. When it comes to deciding whether to continue to provide Facebook's services to users and their expression, there is a certain amount of baby-splitting that can be done in response to the sorts of challenges raised by this case. For instance, Facebook does more than simply host speech that can be read by others; it provides tools for engagement such as comments and sharing, amplification through privileged display, and in some instances monetization. Withdrawing any or all of these additional user benefits is a viable option that may go a long way toward minimizing the problems of continuing to host problematic speech or a problematic user, without the platform needing to resort to removing either entirely.
Conclusion. Whether removing Trump's posts and further posting ability was the right decision or not depends on what sort of service Facebook wants to be and which choice it believes best serves that purpose. Facebook can make these decisions any way it wants, but to minimize public criticism and maximize public cooperation, how it makes them is what matters. These decisions should be transparent to the user community, scalable to future situations, and predictable in how they will be made, to the extent they can be, since circumstances and judgment will inevitably evolve. Every choice will have consequences, some good and some bad. The task for Facebook is to affirmatively choose which ones it wants to favor. There may not be any one right answer, or even any truly right answer. In the end, the best decision may have less to do with the actual choice that results than with the process used to get there.
Filed Under: appeals, content moderation, donald trump, facebook supreme court, free speech, oversight, review
Companies: facebook, oversight board