Free Speech, Censorship, Moderation And Community: The Copia Discussion
from the not-an-easy-issue dept
As I noted earlier this week, at the launch of the Copia Institute a couple of weeks ago, we had a bunch of really fascinating discussions. I've already posted the opening video and explained some of the philosophy behind this effort, and today I wanted to share with you the discussion we had about free expression and the internet, led by three of the best people to talk about this issue: Michelle Paulson from Wikimedia; Sarah Jeong, a well-known lawyer and writer; and Dave Willner, who heads up "Safety, Privacy & Support" at Secret after holding a similar role at Facebook. I strongly recommend watching the full discussion before jumping into the comments with your assumptions about what was said, because for the most part it's probably not what you think:

I think we would be better served as a tech community in acknowledging that we do moderate and control. Everyone moderates and controls user behavior. And even the platforms that are famously held up as examples... Twitter, "the free speech wing of the free speech party," moderates spam. And it's very easy to say "oh, some spam is malware and that's obviously harmful," but two things: one, you've allowed that "harm" is a legitimate reason to moderate speech, and two, there's plenty of spam that's actually just advertising that people find irritating. And once we're in that place, the sort of reflexive "no restrictions based on the content of speech" defense that people go to? It fails. And while still believing in free speech ideals, I think we need to acknowledge that that Rubicon has been crossed -- and that it was crossed in the 90s, if not earlier. And the defense of not overly moderating content for political reasons needs to be articulated in a more sophisticated way that takes into account the fact that these technologies need good moderation to be functional.
But that doesn't mean that all moderation is good.

This is an extremely important, but nuanced, point that you don't often hear in these discussions. Just today, over at Index on Censorship, there's an interesting article by Padraig Reidy that makes a somewhat similar point, noting that there are many free speech issues where it is silly to deny that they're free speech issues, but plenty of people do. The argument, then, is that we'd be able to have a much more useful conversation if people would admit:
Don't say "this isn't a free speech issue"; rather, "this is a free speech issue, and I'm OK with this amount of censorship, for this reason." Then we can talk.

Soon after this, Sarah Jeong makes another equally important, if equally nuanced, point about the reflexive response by some to behavior they don't like: automatically calling for the blocking of speech, when they are often confusing speech with behavior. She discusses how harassment, for example, is an obvious and very real problem with serious and damaging real-world consequences (for everyone, beyond just those being harassed), but that it's wrong to think we should just immediately look for ways to shut people up:
Harassment actually exists and is actually a problem -- and actually skews heavily along gender lines and race lines. People are targeted for their sexuality. And it's not just words online. It can seem like an innocuous, or rather "non-real," manifestation, when in fact it's linked to real-world stalking or other kinds of abuse, even amounting to physical assault, death threats, and so on and so forth. And there's a real cost. You get less participation from people of marginalized communities -- and when you get less participation from marginalized communities, you get a serious loss in culture and value for society. For instance, Wikipedia just has fewer articles about women -- and its editors also just happen to skew overwhelmingly male. When you have greater equality on online platforms, you have better social value for the entire world.

She then noted that this was a major concern, because there's a big push among many people who aren't arguing for better free speech protections:
That said, there's a huge problem... and it's entering the same policy stage that was prepped and primed by the DMCA, essentially. We're thinking about harassment as content when harassment is behavior. And we're jumping from "there's a problem, we have to solve it" and the only solution we can think of is the one that we've been doling out for copyright infringement since the aughties, and that's just take it down, take it down, take it down. And that means people on the other end take a look at it and take it down. Some people are proposing ContentID, which is not a good solution. And I hope I don't have to spell out why to this room in particular, but essentially people have looked at the regime of copyright enforcement online and said "why can't we do that for harassment" without looking at all the problems that copyright enforcement has run into.
And I think what's really troubling is that copyright is a specific exception to CDA 230 and in order to expand a regime of copyright enforcement for harassment you're going to have to attack CDA 230 and blow a hole in it.
That's a huge viewpoint out right now: it's not that "free speech is great and we need to protect against repressive governments" but that "we need better content removal mechanisms in order to protect women and minorities."

From there, the discussion went in a number of different important directions, looking at other alternatives and ways to deal with bad behavior online that get beyond just "take it down, take it down," and also discussing the importance of platforms being able to make decisions about how to handle these issues without facing legal liability. CDA 230, not surprisingly, was a big topic -- and one whose protections, people admitted, were unlikely to spread to other countries, and the concepts behind which are actually under attack in many places.
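Jeong's distinction between harassment-as-behavior and harassment-as-content can be made concrete with a toy sketch. The snippet below is entirely hypothetical (the `fingerprint` function, blocklist, and messages are invented for illustration): it mimics a ContentID-style system that blocks by matching normalized text, and shows why pure content matching cannot tell an abusive message apart from the same words quoted by the target asking for help.

```python
import hashlib

def fingerprint(text: str) -> str:
    """Normalize whitespace and case, then hash -- a crude ContentID-style match key."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode()).hexdigest()

# A message reported as harassment is added to a takedown blocklist...
abusive = "you should quit the internet"
blocklist = {fingerprint(abusive)}

# ...but the target quoting those same words in a plea for support
# produces an identical fingerprint.
report = "You should quit the internet"  # quoted by the victim, different intent

def is_blocked(text: str) -> bool:
    """Content-only matching: no sender, no context, no history."""
    return fingerprint(text) in blocklist

print(is_blocked(abusive))  # True
print(is_blocked(report))   # True -- the victim's quote is removed too
```

The point of the sketch is that the matcher sees only content, while the thing being regulated (harassment) is a pattern of behavior between people, which no text fingerprint captures.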
That's why I also think this is a good time to point to a new project from the EFF and others, known as the Manila Principles -- highlighting the importance of protecting intermediaries from liability for the speech of their users. As that project explains:
All communication over the Internet is facilitated by intermediaries such as Internet access providers, social networks, and search engines. The policies governing the legal liability of intermediaries for the content of these communications have an impact on users' rights, including freedom of expression, freedom of association and the right to privacy.

With the aim of protecting freedom of expression and creating an enabling environment for innovation, which balances the needs of governments and other stakeholders, civil society groups from around the world have come together to propose this framework of baseline safeguards and best practices. These are based on international human rights instruments and other international legal frameworks.

In short, it's important to recognize that these are difficult issues -- but that freedom of expression is extremely important. And we should recognize that while pretty much all platforms contain some form of moderation (even in how they are designed), we need to be wary of reflexive responses to just "take it down, take it down, take it down" in dealing with real problems. Instead, we should be looking for more reasonable approaches to many of these issues -- not denying that there are issues to be dealt with, and not just saying "anything goes and shut up if you don't like it," but recognizing that there are real tradeoffs to the decisions that tech companies (and governments) make concerning how these platforms are run.
Filed Under: censorship, community, copia, free expression, moderation, section 230
Companies: copia, copia institute
Reader Comments
Believing in free speech means facing some nauseating examples of free speech and still believing in free speech even after you have seen the full horror of what some people will say when given free speech.
Imagine the most horrible person saying the most horrible things. Now, picture yourself standing next to that person saying that he has the right to express himself. That is what is required of those that want to say they believe in free speech. Anything else is lip service.
Spam, business and People
As an example, take the promo pieces you've run: clearly identified as such, and they even seem to interest some people. In that context, links to different places to buy, reviews, pricing, even actual vendor posts with local contact info, etc., would seem to be appropriate -- not off topic and not spam. But if my post consisted of BUYMYPENISENLARGMENT, that's clearly not relevant to the discussion at hand. Modding for some semblance of focus, and not allowing commercial entities to drown out the actual speech of people, is not censorship. No matter what SCOTUS says, corporations are NOT people and should not have the same rights AS people.
Re: Spam, business and People
"Modding for some semblance of focus" may or may not meet the definition of censorship, but it is inescapably a violation of "pure" free speech.
Re: Re: Spam, business and People
If the CEO of company X comes out and says "I personally XXX," that is different: he or she is responsible, and it is his or her actual personal assertion. But to include spam as free speech takes things down the wrong track, since it assigns rights to things rather than people. There is no way that can work out -- ISDS comes to mind. It's all part of the same narrative: things have rights, people don't. Any discussion that doesn't start with the assertion that people have rights and things don't cannot lead anywhere but wrong.
information and noise
In that context the spam contains no information, only noise. The problem is that the noise drowns out all the information.
Offensive speech contains information, even if we disagree with it. Filtering out the noise is useful. The trick is not to confuse objectionable speech with noise, which is not always easy.
Whatever forum you are in will have some constraints.
On the street corner you are constrained by your vocal limitations. In an auditorium there are a limited number of microphones. In an online forum you are limited by whatever rules the forum creator sets.
If those rules are too restrictive go elsewhere or create your own.
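The noise-versus-information framing above suggests moderation heuristics that look at a message's form rather than its viewpoint. Here is a rough, purely hypothetical sketch (the scoring weights and the `noise_score` heuristic are arbitrary choices, not anyone's real filter) that scores messages by link density and repetition -- properties of noise -- while leaving an unpopular opinion untouched:

```python
def noise_score(message: str, recent_messages: list[str]) -> float:
    """Crude noise heuristic: link density plus repetition, ignoring viewpoint.

    Returns a value in [0.0, 1.0]; higher means more noise-like.
    """
    words = message.lower().split()
    if not words:
        return 1.0  # an empty message carries no information
    link_ratio = sum(w.startswith("http") for w in words) / len(words)
    repeats = sum(message == m for m in recent_messages)
    return min(1.0, link_ratio + 0.25 * repeats)

recent = ["BUY NOW http://x.example"] * 3

print(noise_score("BUY NOW http://x.example", recent))            # 1.0: repeated and linky
print(noise_score("I strongly disagree with this article", recent))  # 0.0: opinion, not noise
```

Note that the heuristic never inspects what the message claims -- which is exactly the property the commenter is asking moderation to have.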
For a long time, there was no record of such deletions; they just disappeared. This has gone on so long that there are now several bots watching the front page for such deletions, in what appears to be an attempt at controlling the discussion by removing it -- especially on certain seemingly forbidden subjects -- or at sliding it down in importance.

While most mods do not take responsibility for the deletions, they are now starting to show up in the deletion records to try and justify those actions they won't do in their own domains.

Many have left for another clone site called Voat. More and more seem to show up there each day, leaving behind what they describe as severe censorship and mod abuse of power.
Reddit
Re: Facebork
Facebork Doxing/Stalker central
Modding
1. I have never been a moderator, so: opinions, like everyone has, and all that.
2. Read the article, watched the discourse.
3. Long-time reader, but for reasons most can understand I do not have a login to distinguish my voice.
4. My proposal is off the cuff; more time to ponder could refine it.

Now that that is out of the way, I would like to try and catch Mike while he is still working.
What if we could have a middle ground: chan style, but with a twist. You have an article, write-up, blog post, etc. with commenting enabled. Why not take an approach somewhat similar to what Jeff Atwood and crew have done over at the Discourse sites, and blend it with chan-style boards? In my hypothetical world, the mods would, when deemed necessary (spam, harassment, etc.), create a mirror "board". You would then copy over the article and comment stream and allow the discourse to progress there, modded separately. You are not taking down or blocking, just moving. The Discourse crew does something similar with posts that are made in the wrong section, e.g. a hardware question in the programming section. Just do chan-board-style replication to allow for different flows. This way you do not stifle speech; you simply redirect the flow. Does this make sense?
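The mirror-board proposal above is, at bottom, a data-structure choice: relocate instead of delete. This sketch is purely illustrative (the `Board` class, `move_comment` helper, and board names are invented, not anything Discourse or any chan actually implements), but it shows the key property -- the comment survives on the mirror board and a pointer is left in its place:

```python
from dataclasses import dataclass, field

@dataclass
class Board:
    """A minimal comment board: just a name and an ordered list of comments."""
    name: str
    comments: list[str] = field(default_factory=list)

def move_comment(src: Board, mirror: Board, index: int) -> None:
    """Move (not delete) a comment to the mirror board, leaving a pointer behind."""
    comment = src.comments[index]
    mirror.comments.append(comment)
    src.comments[index] = f"[moved to /{mirror.name}]"

main = Board("article-42", ["great post", "BUY PILLS NOW"])
mirror = Board("article-42-offtopic")
move_comment(main, mirror, 1)

print(main.comments)    # ['great post', '[moved to /article-42-offtopic]']
print(mirror.comments)  # ['BUY PILLS NOW']
```

Nothing is destroyed: readers who want the redirected thread can follow the pointer, which is the "redirect the flow, don't stifle it" property the comment is after.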
Where this is actually happening
But it's important to differentiate between real and virtual. People are assholes -- I don't think this is a surprise to anyone (or at least it should not be) -- but they are not in your living room. If you really want to punch them and shut them up, go there. If it's not worth it to do that, then maybe it's just people being assholes on the internet. If people are assholes IRL, chances are there are people near them who will deal with the situation. The Facebook suicides demonstrated that people hundreds and thousands of miles away cannot help with real problems in a timely manner. If people's speech is so objectionable that it needs to be opposed, there are probably people nearer by who will oppose it.

I have had people from elsewhere come to try to help with anti-fa work, and they didn't help; they were in the way. It's the same with the internet in general: you're a long way away, and people mostly don't mean the insane madness that comes out of their keyboards -- and if they do, chances are there are people nearby dealing with it.
Blah...
4chan is not the internet; it's not even the most threatening part of the internet. Worry about your bank, employer, FBI, NSA, local cops...
Hmm. Them's some fine words, but am I still blocked? Test in 1.. 2.. 3...
And with enough real text to maybe not be automatically tossed.
#13 Al
And yet you posted it anyway.
And from there it's a short slide down the slippery slope...
But that doesn't mean we should let harassment fly, either. This is an amazing opportunity to open up the debate on freedom of speech online and I look forward to seeing more Copia videos on important issues like this.