from the that-would-be-nice dept
Over a year ago, Tim Karr wrote an interesting and important post about openness on the internet. While much of it, quite reasonably, focuses on authoritarian governments trying to stomp out dissent online, he makes a key point towards the end: the fact that content online is governed by the various "terms of service" of different private entities, rather than by things like the First Amendment, raises serious concerns:
And the threat isn't entirely at the hands of governments. In last week's New Republic, Jeffrey Rosen reported on a cadre of twentysomething "Deciders" employed by Facebook, Twitter and YouTube to determine what content is appropriate for those platforms -- and what content should get blocked.
While they seem earnest in their regard for free speech, they often make decisions on issues that are way beyond their depth, affecting people in parts of the world they've never been to.
And they're often just plain wrong, as Facebook demonstrated last week. They blocked a political ad from progressive group CREDO Action that criticized Facebook founder Mark Zuckerberg's support of the Keystone XL pipeline.
This case is just one of several instances where allegedly well-intentioned social media companies cross the line that separates Internet freedom from Internet repression.
And it actually goes beyond that. However well-intentioned these companies might be, the example above shows how readily they'll cave under pressure.
"Hosting your political movement on YouTube is a little like trying to hold a rally in a shopping mall. It looks like a public space, but it's not -- it's a private space," writes Ethan Zuckerman of MIT's Center for Civic Media. "And your use of it is governed by an agreement that works harder to protect YouTube's fiscal viability than to protect your rights of free speech."
Zuckerman compares the social media executives to "benevolent despots" who use their corporate terms of service -- not the First Amendment -- to govern their decision making about content.
In many ways, it may be even more complicated than Karr and the people he quotes describe. First off, even if a company claims it will respect a right to free expression, the decision isn't its alone to make. As we saw, for example, with Wikileaks, when there's strong pressure to silence a site, upstream providers can get antsy and pull the plug. Hosting firms, data centers and bandwidth providers can all be pressured or even threatened legally, and usually someone somewhere along the line will cave to such threats. In such cases, it doesn't matter how strongly the end service provider believes in free speech; if someone else along the chain can pull things down, then promises of supporting free speech are meaningless.
The other issue is that most sites are pretty much legally compelled to have such terms of use, which give them greater flexibility in deciding to stifle forms of speech they don't appreciate. In many ways, you have to respect the way the First Amendment is structured: even if courts have conveniently chipped away at parts of it at times (while at other times making it much stronger), there's a clear pillar that all of this is based around. Terms of service are nothing like the Constitution; they can be inherently wishy-washy, and can be changed whenever circumstances warrant.
This issue keeps coming up. A few months ago, Jillian York wrote a powerful piece about how we run a risk in treating private social media spaces as if they're public:
The trouble with private companies controlling our speech is that they are subject not only to shareholders, but also to governments. Many of the most popular social media companies – most notably Twitter, which once called itself “the free speech wing of the free speech party” – profess a commitment to free expression. But in their efforts to provide access to their services to users around the world, these companies often face an unfortunate choice: to avoid being blocked by a government’s censorship apparatus, they must sometimes agree to take down content, at least in a given country.
[....]
In any case, when a company unnecessarily complies with censorship orders from a foreign government, it sends the message to users that profit is more important than free speech, something that all of the aforementioned companies count amongst their values. Furthermore, by making the company – and not the government issuing the orders – the “bad guy,” it becomes harder for users within a country to fight back, and less clear to users that the governments seeking censorship are the real enemy.
And now this issue is coming up again in a slightly different context, with the decision of various social platforms this week to block the video of James Foley (or even links to it). Glenn Greenwald has now chimed in on the subject as well, making the key point that, even if you understand why these companies chose to do it (and it might not even be "valuing profit over free speech"), it creates a real challenge for free speech when someone (anyone) gets to decide what is and is not allowed:
Given the savagery of the Foley video, it’s easy in isolation to cheer for its banning on Twitter. But that’s always how censorship functions: it invariably starts with the suppression of viewpoints which are so widely hated that the emotional response they produce drowns out any consideration of the principle being endorsed.
It’s tempting to support criminalization of, say, racist views as long as one focuses on one’s contempt for those views and ignores the serious dangers of vesting the state with the general power to create lists of prohibited ideas. That’s why free speech defenders such as the ACLU so often represent and defend racists and others with heinous views in free speech cases: because that’s where free speech erosions become legitimized in the first instance when endorsed or acquiesced to.
The question posed by Twitter’s announcement is not whether you think it’s a good idea for people to see the Foley video. Instead, the relevant question is whether you want Twitter, Facebook and Google executives exercising vast power over what can be seen and read.
Given all of this, it seems like it would be good to have some sort of safer, truly "public space" online. Karr, in his piece from last year, suggests that companies that want to support an open internet be much more transparent about their moderation decisions, allowing for public review:
To be more accountable to users, these platforms should adopt publicly transparent processes allowing a full view of every decision to block content. And these sites should invite feedback from users as a check against abuses.
I like that idea, though I can see how it would be difficult to implement in practice. But, really, an even bigger question is how to set up a space on the internet that isn't prone to such issues. I'd hate to think it would need to be hidden away on the "dark web" like the infamous Silk Road market, but I'm not sure how else one would create a truly safe harbor that is impervious to outside attempts to block it.
York hopes that companies will "stand up" against such censorship requests, but there always seems to be a weak link somewhere in the chain. It would be great if everyone agreed to protect the speech, but when complainants can go to ISPs asking for filters, to upstream providers, to server hosting companies, to domain registrars and more, you would need a top-to-bottom wall of organizations totally committed to free speech. I'm not sure that's possible.
And that leaves us with quite a conundrum if you're looking for a true venue for free speech online. It may be all but technically impossible.
Filed Under: blocking, censorship, first amendment, free speech, james foley, terms of service, upstream providers