from the many,-many-issues dept
I'm going to try to do something that's generally not recommended on the internet: discuss a complicated issue that has many nuances and gray areas. That often fails, because all too often people online leap straight to black-or-white positions -- it's easy to miss the nuance when arguing about an emotionally potent issue. In this case, I want to discuss an issue that's already received plenty of attention: how various platforms -- starting with GoDaddy and Google, but with much of the attention placed on Cloudflare -- decided to stop serving the neo-Nazi site the Daily Stormer. Now, I'll note that as all that went down, I was focused on a multi-day drive out to (and then back from) the middle of absolute nowhere (a beautiful place) to watch the solar eclipse thing that everyone was talking about -- meaning that for the past week I've been disconnected from the internet quite a bit, which meant that I (a) missed many of the quick takes on this and (b) had plenty of time to really think about it. And the simple fact is that it is a complicated issue, no matter what anyone says. So let's dig in.
Let's start with the basics: Nazis -- both the old kind and the new kind -- are bad. My grandfather fought Nazis in Europe and Northern Africa during WWII, and I have no interest in seeing Nazis in America of all places. But even if you believe that Nazis and whoever else uses the Daily Stormer are the worst of the absolute worst, there are many other issues at play here beyond just "don't provide them service." Of course, lots of services are choosing not to. Indeed, both the Washington Post and Quartz are keeping running tallies of all the services that have been booting Nazis and other racist groups. And, I think it's fairly important to state that these platforms have their own First Amendment rights, which allow them to deny service to anyone. There's certainly no fundamental First Amendment right for people to use any service they want. That's not how free speech works.
A second complicating factor is that there are different levels of services, and their decisions can have very different impacts. So, for example, if some blog doesn't allow you to comment, that's not a big deal on the free speech front, since there are millions of other places you can comment online. But if no one will provide you any access to the internet at all, then there are larger questions about your right to access the network that everyone uses to speak. And there's a spectrum between those two end points. There are only a few ISPs, so if Comcast and Verizon decide you can't be online, you may not be online at all. There are multiple places where you can register domains, but if all the registrars blacklist a site, it can effectively be banned from the open internet entirely. It's harder to say where things like Facebook, Google and even Cloudflare fall along that spectrum. Some might argue that you don't need any of those services -- while others might say that Google and Facebook are so central to everyday life that being forced off of them puts people at a serious disadvantage. Cloudflare is even more complicated, since it's just a middleman CDN/DDoS protection/security provider. But, as the company's CEO admitted in kicking off the Daily Stormer, there are very few other services online that could protect a site like that from the kinds of DDoS attacks it regularly gets (the fact that the Daily Stormer briefly popped up on DreamHost this week, and nearly all of DreamHost was then hit with massive, debilitating DDoS attacks, just emphasizes that point).
But this issue is key: not all internet services are the same, and no single rule should apply across all of them. It simply wouldn't make sense.
Recognize: this is more complicated than you think
As many experts in the field have noted, these things are complicated. And while I know many people have been cheering on each and every service that kicks off these users, we should be careful about what that could lead to. Asking platforms to be the arbiters of what speech is good and what speech is bad is fraught with serious problems. As Jillian York eloquently put it:
I’m not so worried about companies censoring Nazis, but I am worried about the implications it has for everyone else. I’m worried about the unelected bros of Silicon Valley being the judge and jury, and thinking that mere censorship solves the problem. I’m worried that, just like Cloudflare CEO Matthew Prince woke up one morning and decided he’d had enough of the Daily Stormer, some other CEO might wake up and do the same for Black Lives Matter or antifa. I’m worried that we’re not thinking about this problem holistically.
Kate Knibbs, over at The Ringer, also has a nuanced article about this, pointing out how relying on internet platforms to "police hate" results in all sorts of potential problems and contradictions. Even if we all agree that Nazi propaganda is bad, there's a big question about whether or not this (censorship by platforms) is the proper response:
The world will be a better place if technology companies are able to disrupt the spread of propaganda. But while their post-Charlottesville efforts are an encouraging sign that technology companies are finally treating the prospect of domestic right-wing extremist groups as a serious threat, the way these companies have chosen to address that threat is an unsettling reminder that they are near-unfettered gatekeepers of speech. We are online at the whim and for the profit of a few extremely wealthy multinational corporations with faulty track records for moderating content. As overdue and appreciated as their efforts to root out hate groups from the digital world are, their efforts to preserve an open internet should be undertaken with equal urgency.
This, in fact, is the same very public struggle that Cloudflare's CEO, Matthew Prince, has been having over the issue. As he explained in his original statement, he's not really comfortable with the fact that one person -- even himself -- basically has the power to kick someone off the internet entirely. A few days later, in a (possibly paywalled) piece at the Wall Street Journal, he's still second-guessing himself.
Your black-and-white quick take on this misses the point:
Yes, I know that some of you are angrily getting ready to scream one of two (contradictory) things in the comments: (1) free speech should mean that all these sites should be allowed to remain up, or (2) oh, come on: Nazis are obviously bad and there's no slippery slope in denying them internet services. But there are strong responses to both of those extreme viewpoints, which come from opposite ends of the spectrum. Again, free speech also means that platforms have the right to choose what speech they host and what speech they don't. Don't like it? Start your own platform. Similarly, no one truly believes that all content must be allowed on all platforms at all times. For anyone who claims otherwise, I'll just point to the email filter you use to show you're wrong. We accept filtering decisions in our email because we know that a completely unfiltered experience is so filled with garbage as to be unusable. The question then becomes where we draw the lines for moderation.
As for potential comment (2): yes, Nazis are obviously bad. But here's the problem: there are plenty of people (including some of those who are desperately typing out argument (1) above) who will argue that other groups -- antifa, BLM, the SPLC -- are just as bad. And then... you're just left with a fight on your hands about who's bad. And that doesn't solve anything. Even worse, it puts tremendous subjective power into the hands of those in charge. And, specifically for those who are making this "Nazis are obviously bad, so there's no slippery slope" argument, think about who's in charge right now. Do you really want them defining who's "bad" and who's "good"?
On top of that, we're constantly pointing to example after example after example of platforms being really bad at determining what's actually bad and what's good. Doing so requires time and context -- two things that don't come easily on the internet.
At the very least, putting the onus on internet platforms to make these kinds of calls means that you're trusting a very small number of self-appointed people -- with very different incentives -- to be the world's speech police. And that should be concerning. Some argue there's no slippery slope in banning Nazis because they're Nazis. But there is a different slippery slope: appointing private, for-profit platforms as the speech police and the arbiters of what's good speech and what's bad speech. Yes, as noted, those platforms have every right to determine what they don't want on their own platforms, but as we move along the spectrum discussed above -- where the power of a centralized platform can mean cutting people off entirely -- the overall impact of these decisions becomes greater and greater. And rushing headlong into a world where we trust private companies to make speech determinations just because they built a scalable platform seems like the wrong way to go about things. Just because you can build a big platform doesn't mean you're good at determining who should be allowed to speak.
Merely censoring doesn't solve the problem
This is a key point that hasn't been brought up very much, but as the coiner of "The Streisand Effect," I'm kind of obliged to do so: a common gut reaction to really awful content is to assume that the best (and sometimes "only") option is to silence it. And there may be some narrow cases where that actually works. But all too often, attempts to silence or censor content only lead to more attention being paid to that content. And, in the case of Nazis, it has a reinforcing effect that isn't widely considered. Many of the ignorant folks who jump on board with these groups (and, yes, they are ignorant) believe that they're being "edgy" and "contrarian" and "outside the norm." Pulling down their websites reinforces this view. It doesn't make them rethink their ignorant hate. It makes them think they're on to something. They interpret it as "the establishment" or "the swamp" or whoever not being able to handle the truth that they're bringing.
It certainly doesn't do much to educate the ignorant about why their beliefs are ignorant. This is why we often talk about the importance of counterspeech, which can be surprisingly effective, even in dealing with Nazis. But counterspeech isn't always the answer and isn't always effective. There is no counterspeech for spam, for example. That's why we've developed a system of tools and filters to deal with spam -- but we don't legally mandate that, say, domain registrars stomp out spammers.
This is why it's complicated:
Up top, I noted that this whole thing is more complicated than many people are willing to recognize. And it's because of the competing factors discussed above. Some level of moderation is fundamental, necessary and right. Your email spam filter reveals that you know this is true. And platforms do have every right (including a First Amendment right) to refuse service to assholes. But, at the same time, we should be concerned about a few centralized powers, or even individuals, being in a position to make these decisions on an ad hoc basis. This may not apply to smaller platforms, but the big guys that are often seen as "necessary" for participating in public life certainly raise some questions.
So, how the hell do you weigh these seemingly competing factors? Some moderation is necessary, but expecting platforms to police speech opens up a whole host of problems, from arbitrariness to the powerful silencing the less powerful -- and more.
Towards a (still complicated) solution:
Not surprisingly, EFF's take on the whole situation brings us closer to a framework for thinking about this issue. In fact, while they don't state this directly, in much of the world we already have some history with a system that has faced similar complications and has a process. That system is the existing judicial system, and that process is due process. It is, of course, far from perfect. But there may be lessons we can learn from it. EFF suggests pulling in some of its features, including transparency and a right of appeal.
Other elements of the Net risk less when they are selective about who they host. But even for hosts, there’s always a risk that others—including governments—will use the opaqueness of the takedown process to silence legitimate voices. For any content hosts that do reject content as part of the enforcement of their terms of service, or are pressured by states to secretly censor, we have long recommended that they implement procedural protections to mitigate mistakes—specifically, the Manila Principles on Intermediary Liability. The principles state, in part:
- Before any content is restricted on the basis of an order or a request, the intermediary and the user content provider must be provided an effective right to be heard except in exceptional circumstances, in which case a post facto review of the order and its implementation must take place as soon as practicable.
- Intermediaries should provide user content providers with mechanisms to review decisions to restrict content in violation of the intermediary’s content restriction policies.
- Intermediaries should publish their content restriction policies online, in clear language and accessible formats, and keep them updated as they evolve, and notify users of changes when applicable.
In other words, for these core, centralized chokepoints, there needs to be transparency and due process.
Of course, there are dangers in that as well. Last year, in hosting a panel on just this subject at RightsCon, we discussed the idea of internal corporate "due process" for moderating content. Medium's Alex Feerst discussed how they argue these issues out, as if they're in court, with someone representing each side. But when I asked whether that "internal case law" would ever be made public, the answer was likely no. And you can understand why: there are certainly some individuals and groups who are specifically seeking to game the system (think: spammers and trolls). Revealing the exact policies upfront gives them extra ammo on how to game the system -- violating the spirit of those rules while staying within the letter. In other words, some would argue (compellingly) that some aspects of transparency here could make the problems even worse.
So while I'm certainly all for more due process, and some associated transparency, I worry that the requirements of transparency are not entirely realistic either -- especially in areas with rapidly changing activities and norms.
Can we rethink the internet?
To me, this keeps coming back to an article I wrote two years ago about why we should be looking at protocols, not platforms. The early internet was built on protocols -- and the power was in its end-to-end nature. Anyone could build their own implementations and software to work with those protocols, so the power sat at the ends: individuals could choose how they interacted with the protocols and could implement their own solutions without being completely cut off. You could filter out the content you didn't want -- but the choice was yours. Over the last decade, especially, we've moved far away from that ideal (in part because there appears to be more money in locked-in, centralized platforms than in more distributed protocols). But opening things up offers some opportunity to allow good things to happen.
Let the ignorant Nazis gather -- they're going to figure out a way to do so anyway. But have widely available (and recommended) filters to allow most decent people to ignore them. Or, let others focus on using counterspeech against them. Let various attempts at responding to and diffusing the power of ignorant propaganda bloom, rather than assuming that the best response is to just make it all disappear entirely. This, of course, does not solve everything. But it certainly seems like a better solution than hoping a few giant companies magically figure out how to become benevolent dictators over what content is allowed online.
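To make that "power at the ends" idea a bit more concrete, here's a minimal, purely illustrative sketch (every domain, keyword and name below is made up) of what a user-controlled filter over an open, protocol-based feed might look like. The content still exists upstream; each user simply decides what reaches their own client:

# A hypothetical sketch of client-side filtering over an open feed (e.g. RSS).
# Nothing here is anyone's actual blocklist or API -- it's just an illustration
# of moderation decisions living at the edge, with the user, not a central platform.

from dataclasses import dataclass

@dataclass
class FeedItem:
    source_domain: str  # where the post was published (hypothetical)
    author: str
    text: str

# Each user (or community) maintains their own lists -- no central gatekeeper.
BLOCKED_DOMAINS = {"hate-forum.example"}
BLOCKED_KEYWORDS = {"recruiting rally"}  # placeholder terms chosen by the user

def user_filter(items):
    """Drop items this user has chosen not to see. The content still exists
    upstream; it just never reaches this particular client."""
    visible = []
    for item in items:
        if item.source_domain in BLOCKED_DOMAINS:
            continue
        if any(word in item.text.lower() for word in BLOCKED_KEYWORDS):
            continue
        visible.append(item)
    return visible

# Example run over a small, made-up feed:
feed = [
    FeedItem("hate-forum.example", "troll", "come to our recruiting rally"),
    FeedItem("news-site.example", "reporter", "ordinary local news item"),
]
for item in user_filter(feed):
    print(f"{item.author}@{item.source_domain}: {item.text}")

The point isn't the handful of lines of Python; it's that the filtering decision lives with the user -- who could also subscribe to shared, community-maintained blocklists -- rather than with a single company deciding for everyone at once.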
In the end, there isn't an "easy" solution to any of this, and anyone pitching one is almost certainly selling snake oil. Expecting to solve "hate" by allowing a small number of internet platforms to censor "bad" people is a fool's errand. First, it's likely to be ineffective, and second, it will inevitably lead to bad results, with content you don't think should be blocked getting blocked. Platforms may have the right to police and moderate their own content, but demanding that they do so in all cases is going to lead to bad results. Ultimately, some of it needs to come down to a recognition of the different levels of service along the spectrum. At the lower end, with smaller services on the network, any moderation should be seen as a choice those platforms make. But as you move up the chain, at some point we need to be a lot more careful about the power of certain players to completely cut people off from the internet. This is the problem of an internet that has become too centralized in some areas. And, to me, it still feels like the better solution isn't putting more power in the hands of massive centralized "infrastructure" providers, but pushing the power out to the ends, in the spirit of the original, open, end-to-end internet. Give the ends of the network the power. Let them share tools and filters with each other, but let's not rush to demand that a few key centralized players be the final arbiters of speech online.
Filed Under: daily stormer, free speech, infrastructure, moderation, nazis, platforms, policing, slippery slope
Companies: cloudflare, facebook, godaddy, google