Incentivizing Better Speech, Rather Than Censoring 'Bad' Speech
from the there-are-other-solutions dept
This has gone on for a while, but in the last year especially, the complaints about "bad" speech online have gotten louder and louder. While we have serious concerns with the idea that so-called "hate speech" should be illegal -- in large part because any such laws are almost inevitably used against those the government wishes to silence -- that doesn't mean we condone or support speech designed to intimidate, harass or abuse people. We recognize that some speech can, indeed, create negative outcomes, and even chill the speech of others. However, we're increasingly concerned that people think the only possible way to respond to such speech is through outright censorship (often to the point of requiring online services, like Facebook and Twitter, to silence any speech that is deemed "bad").
As we've discussed before, we believe that there are alternatives. Sometimes that involves counterspeech -- including a wide spectrum of ideas, from making jokes, to community shaming, to simple point-for-point factual refutation. But that's on the community side. On the platform side -- for some reason -- many people seem to think there are only two options: censorship or free-for-all. That's simply not true, and focusing on just those two solutions (neither of which tends to be that effective) shows a real failure of imagination, and often leads to unproductive conversations.
Thankfully, some people are finally starting to think through the larger spectrum of possibilities. On the "fake news" front, we've seen more and more suggestions that the best "pro-speech" way to deal with such things is with more speech as well (though there are at least some concerns about how effective this can be). Over at Quartz, reporter Karen Hao recently put together a nice article about how some platforms are thinking about this from a design perspective... using Techdirt as one example of how we've created small incentives in our comment system for better comments. The system is far from perfect, and we certainly don't suggest that every comment we receive is fantastic. But I think that we do a pretty good job of having generally good discussions in our comments that are interesting to read. Certainly a lot more interesting than on many other sites.
The article also discusses how Medium has experimented with different design ideas to encourage more thoughtful comments as well, and quotes professor Susan Benesch (who we've mentioned many times in the past), discussing some other creative efforts to encourage better conversations online, including Parlio (which sadly was shut down after being purchased by Quora) and League of Legends -- which used some feedback loops to deal with abusive behavior:
In one experiment, Lin measured the impact of giving players who engaged in toxic behavior specific feedback. Previously, if a player received a suspension for making racist, homophobic, sexist, or harassing comments, they were given an error message during login with no specifics on why the punishment had occurred. Consequently, players often got angry and engaged in worse behavior once they returned to the game.
As a response, Lin implemented “reformation cards” to tell players exactly what they had said or done to earn their suspension and included evidence of the player engaging in that behavior. This time, if a player got angry and posted complaints about their reformation card on the community forum, other members of the community would reinforce the card with comments like, “You deserve every ban you got with language like that.” The team saw a 70% increase in their success with avoiding repeat offenses from suspended users.
However, the key thing, as Benesch notes, is getting past the idea that the only responses to speech that a large majority of people think is "bad" are to take it down and/or punish the individual who made it:
“There is often the assumption in public discourse and in government policymaking and so forth that there are only two things you can do to respond to harmful speech online,” says Benesch. “One of those is to censor the speech, and the other is to punish the person who has said or distributed it.” Instead, she says, we could be persuading people not to post the content in the first place, rank it lower in a feed, or even convince people to take it down and apologize for it themselves.
Obviously, there are limits on all of these options -- and anything can and will be abused over time. But by at least thinking through a wider range of possibilities than "censor" or "leave everything exactly as is" we can hopefully get to a better overall solution for many internet discussion platforms.
Meanwhile, Josh Constine, at TechCrunch recently had some good suggestions as well specifically for Twitter and Facebook for ways that they can encourage more civility, without resorting to censorship. Here's one example:
Practically, Twitter needs to change how replies work, as they are the primary vector of abuse. Abusers can @ reply you and show up in your notifications, even if you don’t follow them. If you block or mute them, they can create a new throwaway account and continue the abuse. If you block all notifications from people you don’t follow, you sever your connection to considerate discussion with strangers or potential friends — what was supposed to be a core value-add of these services.
A powerful way to prevent this @ reply abuse would be to prevent accounts that aren’t completely registered with a valid phone number, haven’t demonstrated enough rule-abiding behavior or have been reported for policy violations from having their replies appear in recipients’ notifications.
This would at least make it harder for harassers to continue their abuse, and to create new throwaway accounts that circumvent previous blocks and bans in order to spread hatred.
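Constine's proposal boils down to a gating predicate on reply notifications. A minimal sketch of that idea in Python (the account fields and threshold below are invented for illustration, not anything from Twitter's actual systems):

```python
from dataclasses import dataclass

@dataclass
class Account:
    """Hypothetical account record; the field names are illustrative only."""
    phone_verified: bool
    days_of_clean_history: int
    open_policy_reports: int

def reply_reaches_notifications(sender: Account, min_history_days: int = 14) -> bool:
    """Gate @-reply notifications along the lines Constine suggests:
    unverified, brand-new, or reported accounts can still reply publicly,
    but their replies don't land in the recipient's notifications."""
    if not sender.phone_verified:
        return False
    if sender.days_of_clean_history < min_history_days:
        return False
    if sender.open_policy_reports > 0:
        return False
    return True

# A throwaway account created to dodge a block fails the gate;
# an established account passes it.
throwaway = Account(phone_verified=False, days_of_clean_history=0, open_policy_reports=0)
established = Account(phone_verified=True, days_of_clean_history=400, open_policy_reports=0)
print(reply_reaches_notifications(throwaway))    # False
print(reply_reaches_notifications(established))  # True
```

Note the design choice: nothing is deleted or censored here. The reply still exists; it just doesn't get pushed into the target's attention until the sender has some history.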
There may be concerns with that as well, but it's encouraging that more people are thinking about ways that design decisions can make things better, rather than resorting to just out-and-out censorship.
Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.
Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.
While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.
–The Techdirt Team
Filed Under: abuse, comments, design, free speech, harassment
Companies: facebook, medium, twitter
Reader Comments
That's an un-American approach... it's neither divisive nor irrational. I like it.
[ link to this | view in chronology ]
Blocking them just sends them a sign that you are on to them. All that does is make them create yet another new anonymous account and start spouting off again and again and again. Sounds like the same solution that the MPAA and RIAA are employing with copyright censorship, and we all know how well that is going....
I was pondering the idea of creating 'echo chambers' where members of a given vitriolic nature can interact in their own little 'walled garden', if you will. Your comments still show up in your feed, and your posts aren't taken down; they just might not be seen on anyone else's account.
Who cares what you spout, once we identify you as a troublemaker (e.g. troll/racist/abuser), when only you can see it? If you aren't in a community where you can correct the behavior, the silent treatment might be a great way to keep the community whole. With things like Facebook and Twitter being so vast that you can't possibly see every possible message someone posts, they already make decisions about what you want to see. They can just weight these troll accounts to the bottom and no one would be the wiser.
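The "weight these troll accounts to the bottom" idea is just a ranking penalty applied at feed-sort time. A toy sketch (the scoring scheme and penalty value are invented for illustration):

```python
def rank_feed(posts, flagged_authors, penalty=1000.0):
    """Sort posts by engagement score, pushing posts from flagged
    ('troublemaker') accounts to the bottom without deleting them.
    The posts remain visible if you scroll far enough -- nothing is
    taken down, it just stops being surfaced."""
    def key(post):
        author, score = post
        return score - (penalty if author in flagged_authors else 0.0)
    return sorted(posts, key=key, reverse=True)

feed = [("alice", 12.0), ("troll99", 55.0), ("bob", 3.0)]
# troll99's high-engagement post sinks below alice and bob:
print(rank_feed(feed, flagged_authors={"troll99"}))
```

This is the soft end of the spectrum the article describes: the speech stays up, but the platform stops amplifying it.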
[ link to this | view in chronology ]
Re:
Please note, muting only works if you stop responding prior to doing so or they'll realise what you've done.
[ link to this | view in chronology ]
It's too late for Twitter and Facebook
Those of us who've been around for a while know -- from the first-hand experience of our own failures -- that you can't retrofit abuse prevention. It has to be designed-in before the first line of code is written or the first server plugged in. Belatedly trying to slap band-aids on after the fact has never worked, it's not working, and it's not going to work.
Not that this will stop them from trying, or that it'll stop them from claiming success when it's obvious to everyone that they've failed. But when they take the podium and make those claims, ignore what they say and look instead at what they've done: if it's just another set of tweaks to a system design that was fatally flawed before it was built, then what you're listening to isn't progress: it's just bullshit.
[ link to this | view in chronology ]
Yep...
Ha ha... now you have to "work/put in your time" to have a voice. I like this, you start off without a say at all, just exactly what democracy is all about!
This will only create a more powerful platform of bullying than you can imagine. Black Mirror "Nosedive" anyone?
Your entire article seems to be promoting a Global Public rating system for everyone as the cure for "bad speech"!
How is this a good suggestion? I saw right through it from the get go!
[ link to this | view in chronology ]
Re: Yep...
The suggestion still allows those without completely-registered accounts or with no prior history to judge them on to have their say. It just doesn't let them shove their way into my notification feed and my attention. That's the way the world works: when you're a newcomer to a community with no history in it, people don't pay nearly as much attention to what you say as they do to a long-time member with a rich history of making good points. If you intend to be a long-term member of the community, the lack of history remedies itself in relatively short order. If all you want is to have other people notice you screaming and react to you... sooooo not my problem.
[ link to this | view in chronology ]
Re:
huh??? they are not saying that. where do you get that from?
Are you one of those that whined about the NFL going after knee takers and saying they had a right to kneel during the anthem but brow beat all of the posters on this forum whining about getting their posts flagged?
Puleeze!
[ link to this | view in chronology ]
Re: Re:
Also, getting the ban hammer or having your posts flagged isn't a free speech issue, for the same reason the NFL can choose to regulate the behavior of its players: both are PRIVATE PLATFORMS.
It's really weird that folks like you want to create a Fairness Doctrine for the Internet when the courts already ruled it doesn't work for the airwaves. Like it or not, if I host a website I don't have to give you a comment section to screech at me. The freedom to speak or to express yourself doesn't mean you get a subsidy on its dissemination, nor a built-in audience for it to be received. Once you admit to those two points, then we can have a fruitful discussion. Until then, you're no better than those alt-right clowns that try to force everyone to "debate" them.
[ link to this | view in chronology ]
Re: Re: Re:
…what
[ link to this | view in chronology ]
Re: Re:
[ link to this | view in chronology ]
Re:
I don't get why Tech Dirt keeps pushing the idea that moderating on a site is a free speech violation when in practice we can all set up our own sites, forums, and the like to discuss matters. It would be better for Tech Dirt, in the pursuit of free speech, to promote more distributed and decentralized hosting/sharing of people's views.
These two things are not mutually exclusive. Indeed, we have REGULARLY advocated for a more distributed and decentralized system, just as you say. But the fact is, as it stands today, most people still use these services. And it's perfectly reasonable to argue that they should be designed better.
[ link to this | view in chronology ]
Re: Re:
IMO, the better option is to not use the services and build our own. It's slow and it takes time to develop, but the fact that Mastodon continues to grow despite its flaws is a good example of this. It does so without pandering to political views; it just promises federated social networks and self-hosting options.
[ link to this | view in chronology ]
Yeah, Twitter's not getting my phone number.
Perhaps a better way for Twitter to handle it is to use /dev/null.
Rather than show the ZOMG YOUR BLOCKED message... show them nothing.
Let them think the target is just ignoring them.
You tell them they are blocked, they rev up the attacks, because they got a cookie for the bad behavior. The reward is "I was so awful they blocked me."
Much of the "hate speech" I see people bitching about on Twitter really is OMG THEY CALLED ME A NAME I CALL OTHER PEOPLE ALL THE TIME!!!!
We have the professional "victims" who thrive on showing the world how poor innocent them is under attack, while they keep doing things to provoke people.
We have the jackholes who say horrible things trying to get a response.
Twitter has given people the ability to have a list of 'forbidden words' for their time line, yet some people are still screaming that someone said a bad word.
Twitter has made the system so stupid trying to please everyone. I saw a RT of someone who reported 3 accounts who said exactly the same threats, and she got 3 different responses from Twitter ranging from suck it up to we removed it & banned them.
Twitter is inconsistent in what they do.
Someone with a blue check has WAY more leeway in what they say to people & if someone uses the exact same words back they end up with a timeout.
Elevating some users over others ALWAYS ends well.
Their insane idea for a safety team to protect people was hyper one sided and enraged people more.
Twitter needs to decide do they just want a happy place for blue checks to decide who is worthy of speaking to them or a platform for all.
So much could be handled if instead of playing fairy godmother they said use the tools we gave you before demanding new ones.
Look at the happy happy psychos we have here: while they like to talk about the conspiracy keeping them from commenting... it's the community who looks at the rant, decides it's of no value, presses the button and waits for it to take its course. Occasionally one of us has snorted one too many pixie stix and responds, even though we know it's a mistake... because we think we might say the thing that finally stops the insanity. More often than not we read one line and hit the report button... and life is better.
[ link to this | view in chronology ]
Re:
"Let them think the target is just ignoring them."
ghost blocking? insidious.... I like it!
I very much like the ignore option over the block option.
[ link to this | view in chronology ]
Re: Re:
They called me a mean name, I can't let this stand and I must slap them with my glove. I need to report them, call them out, send my team after them, and keep doing things until I win.
No one wants to be the grown-up first, they want to clap back and win... most of them aren't that creative.
Dude is bothering you, hit the button... don't obsessively refresh their feed to see if they are still talking about you. Don't keep talking about them after you block them trying to rile up people to be your attack dogs & get ahead.
More than once I've said bye felecia and blocked someone, and not looked back. I'm sure other people have blocked me for not living up to their expectations, my heart bleeds. I'm not always nice people, but I'm pretty open about that fact.
[ link to this | view in chronology ]
https://plus.google.com/u/0/+MarcCalvert/posts/fftw7Cwv9Rr
An interesting response to hateful speech.
[ link to this | view in chronology ]
more important
[ link to this | view in chronology ]
Re: more important
Why yes, it is important that you start taking those pills the nice doctor gave you.
[ link to this | view in chronology ]
Re: Re: more important
[ link to this | view in chronology ]
Re: Re: Re: more important
[ link to this | view in chronology ]
Re: Re: Re: Re: more important
And you’re failing. And I’m sorry.
Go volunteer at a soup kitchen.
[ link to this | view in chronology ]
Re: Re: Re: Re: Re: more important
[ link to this | view in chronology ]
Re: Re: Re: Re: Re: Re: more important
If the North Korea situation had any actual relevance to the article at hand, maybe you would have a point. But it does not. So you do not.
And if you want to save lives, get off your ass, then do something tangible and productive. Complaining here is as productive as a date with Rosey Palms—and it is equally as self-serving.
[ link to this | view in chronology ]
Re: Re: Re: Re: Re: Re: Re: more important
[ link to this | view in chronology ]
Re: Re: Re: Re: Re: Re: Re: Re: more important
I cannot recall any such legislation. Even if it were, the hatefulness of a given expression of ideas and thoughts does not make it illegal, even in a public setting. “Hate speech”, like “obscenity”, lacks a narrow definition that would allow for the suppression of such speech without collateral damage to protected speech. This is why “hate speech” is still protected speech under the First Amendment.
This has nothing to do with the original article. It has barely anything to do with the comment that started this chain. Please try to stay on a specific track of thought.
Unless and until you can put an actual gun to my actual head and give me that order, I will remain non-compliant. Do what you must; I have no fear of you.
[ link to this | view in chronology ]
Oh, and one more thing...
“A date with Rosey Palms” is a synonym for masturbation. You can jerk yourself off all you want, but doing so will only ever make you feel good for all of a few seconds—just like all the whining you are doing here.
[ link to this | view in chronology ]
Re: Oh, and one more thing...
[ link to this | view in chronology ]
Re: Oh, and one more thing...
[ link to this | view in chronology ]
Re: Re: Oh, and one more thing...
Rosey Palms. It’s Rosey Palms. If you are going to feign outrage at a euphemism older than I am, at least have the goddamn decency to spell it right.
Unless you happen to be on a date with her while you talk about…whatever the hell it is you were talking about in this thread. I wanna say “Korean cuisine”—was it something like that?
[ link to this | view in chronology ]
Re: Re: Re: Oh, and one more thing...
[ link to this | view in chronology ]
Re: Re: Re: Oh, and one more thing...
[ link to this | view in chronology ]
Re: Re: Re: more important
[ link to this | view in chronology ]
I'm curious as to how you would apply this to the argument against strong encryption. It's easy to treat encryption as a binary issue (there's either strong encryption, or broken, i.e., no encryption).
But what if strongly-encrypted comms involve speech that creates negative outcomes and chills the speech of others?
Should freedom of speech be a binary issue? And if not, can we/should we use the 1st Amendment as an argument in favor of strong encryption?
[ link to this | view in chronology ]
Re:
Then there is the whole sharing intel issue, you know, the one where some agencies knew about people taking flight training but didn't want to learn how to land, but didn't tell the other agencies because it's 'our' intel?
[ link to this | view in chronology ]
Re: Rectifying encryption with nuanced free speech
Let's see...
Encryption is a binary issue, and the experts waste a lot of breath trying to explain this to politicians. I understand politicians being skeptical of it being a binary issue, as most things aren't, but that's what it is. But just because a computer receives an encrypted communication (or downloads a publicly published message) doesn't mean that communication needs to be shown.
As for freedom of speech, that means everyone should be able to say pretty much whatever they want, but no one is obliged to listen. Basically this is handled exactly the same way as with encryption.
So there's nothing to remedy between their stances on encryption and free-speech.
[ link to this | view in chronology ]
Re:
We punish the outcomes instead of either the speech itself or the way it was delivered. We allow encryption to remain intact instead of outlawing or weakening it because “bad guys” use it. We recognize that hateful or offensive speech may create negative outcomes without outlawing such speech.
[ link to this | view in chronology ]
Re: Re:
[ link to this | view in chronology ]
Re: Re: Re:
[ link to this | view in chronology ]
Re: Re: Re: Re:
Hey!
I do not have the mouth of a duck.
[ link to this | view in chronology ]
Re: Re: Re: Re: Re:
[ link to this | view in chronology ]
How TechDirt Can Do Better
I've basically stopped commenting here for one very simple reason - you guys trash comments from IP addresses you don't like. What's worse is that you do not give any indication ahead of time that you are going to trash a comment.
I use a VPN service and there are obviously spammers also using this VPN service. It's reasonable (but not necessarily the best option) to give extra scrutiny to posts from IP addresses used by this VPN. But my experience is that Techdirt doesn't just give extra scrutiny; you guys frequently just blackhole those posts.
So I spend a lot of time writing a post, only to get a message that it is going to be reviewed by editors. I come back the next day and my completely appropriate comment still hasn't been made public. That's the kind of behavioral conditioning that quickly teaches people that their contributions are not valued, and so only a fool would continue making contributions that have an unknown chance of being tossed aside.
The very least you could do is tell your users up front if they might be wasting their time.
And yes, I recognize the irony of making this comment, which will probably also be tossed aside. It's the first one I've written in about half a year, and I'm only chancing it because it's highly relevant and maybe, just maybe, you guys will take note and do something about it.
[ link to this | view in chronology ]
Re: How TechDirt Can Do Better
If you use a system known to get caught in a spamfilter, try turning off that system for the length of time you need to post a comment here.
[ link to this | view in chronology ]
Re: How TechDirt Can Do Better
Unless you comment on a friday night or weekend they're generally pretty good about going through the 'Held for moderation' comments fairly quickly, so I find it kinda hard to believe that they are blackholing 'completely appropriate' comments simply because they were caught by the spam filter.
I've seen the kinds of comments that make it through moderation, so you'll excuse me if I'm hesitant to accept the 'my completely appropriate comments were completely blocked' claim simply on your word.
[ link to this | view in chronology ]
Re: Re: How TechDirt Can Do Better
Also, with the number of inappropriate posts that make it through, I seriously doubt the issue is the 'appropriateness' of your post.
[ link to this | view in chronology ]
Re: How TechDirt Can Do Better
But my experience is that Techdirt doesn't just give extra scrutiny, you guys frequently just blackhole those posts.
We don't do this. Over weekends it may take more time to review posts, but we review comments caught by the spam filter and release them if they are not spam.
[ link to this | view in chronology ]
Re: Re: How TechDirt Can Do Better
For the sake of argument, I'll accept that is true. But even so, when it takes days for a post to be published, that significantly reduces the utility of that post, because after a few days everybody has moved on. Nobody is paying attention anymore.
Imagine what it's like to expend the time and energy to contribute, only to have your contributions effectively ignored. It's not obvious from the editorial side, but it is extremely discouraging to put in that effort for naught. It would be one thing if it were published and nobody responded, but being forced to be invisible until nobody GAF anymore is disheartening in the extreme.
Maybe it is impractical on your end to work through all comments in a timely fashion, but when the process treats actual people as just another cog in the wheel, it is inevitable that anyone with self-respect will simply opt out. The least you can do is alert people up front that the work they put in won't get the same treatment as everyone else's.
I like to say that the "war on terror" is really a war on dignity. The same thing applies to the war on comment spammers/trolls. Any actual people who get caught in the crossfire get treated as less than human and more like bots. If that happens enough (where "enough" is a pretty low threshold) then any self-respecting person will just stop trying. At which point they won't even show up in your site metrics.
If you can, try putting yourself in 'our' shoes, send all of your comments to an admin and have them randomly wait hours or days before making them visible. I think the experience of being ignored despite your best intentions will be eye-opening. It is literally dehumanizing.
[ link to this | view in chronology ]
Re: Re: Re: How TechDirt Can Do Better
Well, I voted for Hillary Clinton, so…
[ link to this | view in chronology ]
Re: Re: Re: Re: How TechDirt Can Do Better
[ link to this | view in chronology ]
Re: Re: Re: How TechDirt Can Do Better
If you are using a method of submitting comments that you know is likely to cause your comments to be caught by the spam filters (VPN, Tor), then they shouldn't need to 'alert' you at all; you should know already that your comments stand a good chance of being flagged and caught. What more do you want them to do, check every IP address pre-post before allowing you to post so they can give an additional warning?
Regarding your 'try it from my perspective', I'd suggest doing the same thing. Picture, if you will, what the comment section would look like without a spam filter in place. I can tell you that the one time I saw it fail on a notable level, it was bad enough that I do not even want to think about how bad it would be without it in place, such that having some comments caught by the spam filter is very much the lesser of the two evils.
[ link to this | view in chronology ]
Re: Re: Re: How TechDirt Can Do Better
But even so, when it takes days for a post to be published that significantly reduces the utility of that post because after a few days everybody has moved on. Nobody is paying attention anymore.
It rarely, if ever, takes "days." We check the filter every few hours during weekdays, and we try to check once a day on weekends. If stuff gets busy we don't always get that far.
But trust me, the system is much better than if we just allowed all those posts to go through. These days, the spam filter probably catches 1000 spam comments per day... and maybe catches anywhere from 2 to 10 legit comments per day. That's about it. On a cost-benefit basis, there's really no other solution.
Many other sites refuse to allow VPN or Tor posting at all. That seems like a worse solution to us. And not all VPNs or Tor are blocked. But if the system senses a service that has a history of spamming...
[ link to this | view in chronology ]
Re: How TechDirt Can Do Better
After I dropped them a mail asking why, they looked into it and found there was some problem with my account. They fixed it, and since then all my posts go through without being held for review.
[ link to this | view in chronology ]
If social media networks were protocols with a client, usable by anybody on any OS and curated by inclusion instead of a general morass, it'd be a little more like the dying IM clients, but not bad.
Opt-ins for social networking, instead of exposing every user to the general morass of people, are a net gain, and you only have yourself to blame for letting the wrong users in. Strangers need permission to message you personally, period.
It may sound wrong or exclusionary, but when it's the client users' prerogative who gets to see what, it can be much easier to manage.
As for trolls, well... I try not to care about them. I believe that 99.999% of death/harm/rape threats are from people eager to send you pizzas more than bullets, and just get a laugh out of making somebody squirm or cry because of typed characters on a screen.
If those threats were all or even somewhat legitimate, we'd be having a very different kind of conversation.
On the other hand, if you've said something "wrong", and they show proof that they know who you are and what your workplace number is, you bet your job might be in jeopardy.
**That** I have no solution for yet. :P
[ link to this | view in chronology ]
Stack Overflow
[ link to this | view in chronology ]
Hide comments from the trolls
The idea is that if a user gets too many downvotes from the community then all of his posts are hidden from the community. The user can still post comments and he'll still see his posts, but no one else will.
This way, he can post all the insults that he wants *and* he can't play the victim by saying "ohmergerd, my right to free speech is being censored!".
After a while, he'll get bored because his insults aren't having any effect and no one's responding to him.
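This is the classic "hellban" pattern: the author still sees their own posts, but nobody else does. A minimal sketch of the visibility rule described above (the downvote threshold and data shapes are invented for illustration):

```python
HIDE_THRESHOLD = 10  # downvotes before a user's posts go invisible (arbitrary)

def visible_posts(posts, viewer, downvotes):
    """Return the posts a given viewer should see. A heavily downvoted
    author keeps seeing their own posts (so they can't easily tell
    they've been hidden), but those posts are filtered out for
    everyone else."""
    out = []
    for author, text in posts:
        hidden = downvotes.get(author, 0) >= HIDE_THRESHOLD
        if not hidden or author == viewer:
            out.append((author, text))
    return out

posts = [("carol", "hello"), ("troll", "insult"), ("dave", "hi")]
votes = {"troll": 25}
print(visible_posts(posts, viewer="carol", downvotes=votes))  # troll's post hidden
print(visible_posts(posts, viewer="troll", downvotes=votes))  # troll sees everything
```

The key property, as the comment says, is that the hidden user has nothing to scream censorship about: from their side, nothing appears to have been removed.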
[ link to this | view in chronology ]
The problem is, it's really hard to even have a civil public conversation
To really have a productive online conversation, it should be mostly private (and potentially anonymous), so that you're conversing with just one person, not 1,000. To encourage civil behavior, there can be a rating system in which the participants rate each other, and third parties can occasionally "rate the raters" to keep participants honest.
For all the communication tools on the Internet, there is a gaping hole: there is no way to choose the characteristics of who you want to communicate with. If I'm a liberal but pro-life male Democrat from the midwest, I should be able to choose to "converse" about abortion with a pro-choice female Republican Trump supporter from the northeast. Sadly, there's no good way to arrange such a conversation, yet it would certainly foster more productive outcomes than you typically get from even a moderated group discussion on today's social media offerings.
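The pairing the commenter wants is essentially attribute-based matchmaking: pick a partner whose profile matches the characteristics you asked for. A toy sketch of such a matcher (the attribute names and values are invented for illustration):

```python
def find_partner(me, pool, want):
    """Pick a conversation partner whose profile matches every
    requested attribute. 'want' maps attribute -> desired value,
    e.g. the opposite stance and party from your own."""
    for candidate in pool:
        if candidate is me:
            continue
        if all(candidate.get(k) == v for k, v in want.items()):
            return candidate
    return None  # no one in the pool fits the request

me = {"stance": "pro-life", "party": "Democrat", "region": "midwest"}
pool = [
    {"stance": "pro-choice", "party": "Republican", "region": "northeast"},
    {"stance": "pro-life", "party": "Republican", "region": "south"},
]
match = find_partner(me, pool, want={"stance": "pro-choice", "party": "Republican"})
print(match["region"])  # northeast
```

A real service would also need consent from both sides and some anti-abuse vetting of the pool, but the core lookup is this simple.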
[ link to this | view in chronology ]
Re: The problem is, it's really hard to even have a civil public conversation
[ link to this | view in chronology ]
Re: Re: The problem is, it's really hard to even have a civil public conversation
I've accidentally voted comments funny that I wanted to vote insightful. Clicking Insightful again removed the vote.
[ link to this | view in chronology ]