We Shouldn't Want Internet Giants Deciding Who To Silence; But They Should Let Users Decide Who To Hear
from the rethinking-moderation dept
A few weeks back I wrote a big piece on internet platforms and their controversial content moderation efforts. As I've pointed out more recently, part of the reason they do this so badly is that it is effectively impossible to do it well at the scale they operate at. Even at 99% accuracy, given the amount of content on these sites, they will still take down a ton of legitimate material while leaving up an awful lot of awful stuff. This doesn't mean they shouldn't do anything -- but my own proposal is for them to rethink the issue entirely and move moderation out from the center to the ends of the network. Let third parties create their own filters and rules, and let anyone else not just use them, but adjust, modify, and reshare them as well. Then let users not just "opt in" to the kind of experience they want, but further tweak it to their own liking (a rough sketch of what that might look like follows below).
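To make that concrete, here is a minimal, hypothetical sketch of the idea. None of these names (Rule, FilterList, apply_filters) come from any real platform API; they are just one illustration of what shareable, user-tweakable rule lists applied at the edge could look like:

```python
# Hypothetical sketch (not any platform's real API): third parties publish
# shareable rule lists, users subscribe to the ones they trust, override
# individual rules, and the filtering happens on the user's side.
from dataclasses import dataclass, field


@dataclass(frozen=True)
class Rule:
    """One moderation rule: match a post by keyword or author, then act on it."""
    rule_id: str
    keyword: str | None = None   # case-insensitive substring to match in post text
    author: str | None = None    # account handle to match
    action: str = "hide"         # "hide", "label", or "allow" (an explicit override)

    def matches(self, post: dict) -> bool:
        if self.keyword and self.keyword.lower() in post["text"].lower():
            return True
        return bool(self.author and self.author == post["author"])


@dataclass
class FilterList:
    """A shareable bundle of rules published by some third party."""
    name: str
    rules: list[Rule] = field(default_factory=list)

    def fork(self, new_name: str) -> "FilterList":
        """Copy the list so a user (or another curator) can modify and reshare it."""
        return FilterList(new_name, list(self.rules))


def apply_filters(posts: list[dict],
                  subscriptions: list[FilterList],
                  overrides: dict[str, Rule] | None = None) -> tuple[list[dict], list[dict]]:
    """Split posts into (visible, hidden) using the user's subscribed lists.

    `overrides` maps a rule_id to a user-tweaked replacement, so someone can
    keep a shared list while softening or disabling a rule they disagree with.
    Later subscriptions win ties, which is itself a choice the user controls.
    """
    overrides = overrides or {}
    visible, hidden = [], []
    for post in posts:
        verdict = "show"
        for filter_list in subscriptions:
            for rule in filter_list.rules:
                rule = overrides.get(rule.rule_id, rule)
                if rule.matches(post):
                    verdict = "show" if rule.action == "allow" else rule.action
        (hidden if verdict == "hide" else visible).append(post)
    return visible, hidden
```

The point of the sketch is that the platform just delivers posts; which rule lists to trust, and how to adjust them, stays entirely in the user's hands.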
I've seen some pushback on this idea, but it seems much more viable than the alternatives of doing nothing at all (which just leads to platforms overwhelmed with spam, trolls and hatred) or continuing to focus on a centralized moderation system. A number of recent articles have done a nice job highlighting the problems of having Silicon Valley companies decide who shall speak and who shall not. EFF's Jillian York highlights the problems that occur when there's no accountability, even if platforms have every legal right to kick people off their services.
This is one major reason why, historically, so many have fought for freedom of expression: The idea that a given authority could ever be neutral or fair in creating or applying rules about speech is one that gives many pause. In Europe’s democracies, we nevertheless accept that there will be some restrictions – acceptable within the framework of the Universal Declaration of Human Rights and intended to prevent real harm. And, most importantly, decided upon by democratically-elected representatives.
When it comes to private censorship, of course, that isn’t the case. Policies are created by executives, sometimes with additional consultations with external experts, but are nonetheless top-down and authoritarian in nature. And so, when Twitter makes a decision about what constitutes ‘healthy public conversation’ or a ‘bad-faith actor,’ we should question those definitions and how those decisions are made, even when we agree with them.
We should push them to be transparent about how their policies are created, how they moderate content using machines or human labor, and we should ensure that users have a path for recourse when decisions are made that contradict a given set of rules (a problem which happens all too often).
Jillian's colleague at EFF, David Greene, also had an excellent piece in the Washington Post about how having just a few giant companies decide these things should worry us:
We should be extremely careful before rushing to embrace an Internet that is moderated by a few private companies by default, one where the platforms that control so much public discourse routinely remove posts and deactivate accounts because of objections to the content. Once systems like content moderation become the norm, those in power inevitably exploit them. Time and time again, platforms have capitulated to censorship demands from authoritarian regimes, and powerful actors have manipulated flagging procedures to effectively censor their political opponents. Given this practical reality, and the sad history of political censorship in the United States, let's not cheer one decision that we might agree with.
Even beyond content moderation's vulnerability to censorship, the moderating process itself, whether undertaken by humans or, increasingly, by software using machine-learning algorithms, is extremely difficult. Awful mistakes are commonplace, and rules are applied unevenly. Company executives regularly reshape their rules in response to governmental and other pressure, and they do so without significant input from the public. Ambiguous "community standards" result in the removal of some content deemed to have violated the rules, while content that seems equally offensive is okay.
Vera Eidelman of the ACLU similarly warns that the pressures increasingly put on tech companies will inevitably lead to the silencing of marginalized voices:
Given the enormous amount of speech uploaded every day to Facebook’s platform, attempting to filter out “bad” speech is a nearly impossible task. The use of algorithms and other artificial intelligence to try to deal with the volume is only likely to exacerbate the problem.
If Facebook gives itself broader censorship powers, it will inevitably take down important speech and silence already marginalized voices. We’ve seen this before. Last year, when activists of color shared their experiences of police violence, Facebook chose to shut down their livestreams. The ACLU’s own Facebook post about censorship of a public statue was also inappropriately censored by Facebook.
Facebook has shown us that it does a bad job of moderating “hateful” or “offensive” posts, even when its intentions are good. Facebook will do no better at serving as the arbiter of truth versus misinformation, and we should remain wary of its power to deprioritize certain posts or to moderate content in other ways that fall short of censorship.
Finally, over at Rolling Stone, Matt Taibbi makes a similar point: what starts out as kicking off people we nearly all agree are awful ends up in places we probably won't like:
Now that we’ve opened the door for ordinary users, politicians, ex-security-state creeps, foreign governments and companies like Raytheon to influence the removal of content, the future is obvious: an endless merry-go-round of political tattling, in which each tribe will push for bans of political enemies.
In about 10 minutes, someone will start arguing that Alex Jones is not so different from, say, millennial conservative Ben Shapiro, and demand his removal. That will be followed by calls from furious conservatives to wipe out the Torch Network or Anti-Fascist News, with Jacobin on the way.
We’ve already seen Facebook overcompensate when faced with complaints of anti-conservative bias. Assuming this continues, “community standards” will turn into a ceaseless parody of Cold War spy trades: one of ours for one of yours.
This is the nuance people are missing. It’s not that people like Jones shouldn’t be punished; it’s the means of punishment that has changed radically.
This is why I think it's so important that the framework be shifted. People have long pointed out that "just because you have free speech doesn't mean I need to listen," but the way social media networks are constructed, it's not always easy to avoid listening. The very limited block/mute toolset that Twitter provides is not nearly enough. The more platforms can push moderation decision-making out to the ends of the network, including by allowing third parties to create different "views" into those networks, the better off we are. That way, it's no longer the internet giants making these decisions. It also increases "competition" on the moderation side itself, while increasing the transparency with which such systems operate.
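Building on the hypothetical sketch above, a user's "view" of the network might be composed like this -- again, the lists, rules and handles here are made up purely for illustration:

```python
# Continuing the hypothetical sketch: a user composes their own "view"
# out of third-party filter lists plus personal tweaks.
spam_list = FilterList("community-spam", [
    Rule("spam-1", keyword="crypto giveaway"),
])
civility_list = FilterList("civility-project", [
    Rule("civ-1", keyword="idiot", action="label"),
    Rule("civ-2", author="@notorious_troll"),
])

# The user keeps both lists, but decides @notorious_troll is worth hearing.
my_overrides = {"civ-2": Rule("civ-2", author="@notorious_troll", action="allow")}

posts = [
    {"author": "@friend", "text": "Lunch?"},
    {"author": "@spammer", "text": "Free crypto giveaway!!!"},
    {"author": "@notorious_troll", "text": "Hot take incoming."},
]

visible, hidden = apply_filters(posts, [spam_list, civility_list], my_overrides)
# visible -> the posts from @friend and @notorious_troll; hidden -> the spam post
```

Nothing in this arrangement requires the platform to decide what anyone sees; it only has to expose enough hooks for subscribed lists and personal overrides to run at the edge.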
So, really, it's time we stopped focusing on who the platforms should silence, and started giving end users more power to decide who they wish to hear.
Filed Under: censorship, centralization, decentralization, filters, free speech, human rights, intermediary liability, platforms, silence, social media
Companies: facebook, google, twitter