We Shouldn't Want Internet Giants Deciding Who To Silence; But They Should Let Users Decide Who To Hear
from the rethinking-moderation dept
A few weeks back I wrote a big piece on internet platforms and their controversial content moderation efforts. As I've pointed out more recently, part of the reason they do this so badly is that it is literally impossible to do it well at the scale they operate. Even at 99% accuracy, given the amount of content on these sites, they will still take down a ton of legitimate stuff while leaving up an awful lot of awful stuff. This doesn't mean they shouldn't do anything -- but my own proposal is for them to shift the way they think about this issue entirely, and move the moderation out from the center to the ends. Let third parties create their own filters/rules and allow anyone else to not just use them, but to adjust, modify and reshare them as well. Then allow users not just to "opt in" to the kind of experience they want, but to tweak it further to their own liking.
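To make that concrete, here's a rough sketch of the kind of thing I mean (the rule format, the names, and the keyword matching are all made-up illustration, not any platform's actual API): a filter list is just ordinary, shareable data that users can subscribe to, fork, and reshare.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str

@dataclass
class FilterList:
    """A shareable, remixable moderation rule set (hypothetical format)."""
    name: str
    blocked_authors: set = field(default_factory=set)
    blocked_keywords: set = field(default_factory=set)

    def allows(self, post: Post) -> bool:
        if post.author in self.blocked_authors:
            return False
        text = post.text.lower()
        return not any(kw in text for kw in self.blocked_keywords)

def fork(base: FilterList, name: str, extra_keywords=(), unblock_authors=()) -> FilterList:
    """Copy a third party's list, then adjust it to your own liking."""
    return FilterList(
        name,
        set(base.blocked_authors) - set(unblock_authors),
        set(base.blocked_keywords) | set(extra_keywords),
    )

# Opt in to a shared list, tweak a personal fork, and apply it locally.
shared = FilterList("no-spam", blocked_keywords={"greencard lottery"})
mine = fork(shared, "my-view", extra_keywords={"miracle cure"})
feed = [Post("alice", "Interesting piece on moderation"),
        Post("spambot", "Greencard Lottery -- apply now!")]
print([p.author for p in feed if mine.allows(p)])  # ['alice']
```

The point of the fork step is that moderation choices become remixable data rather than a single centralized policy.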
I've seen some pushback on this idea, but it seems much more viable than the alternatives: doing nothing at all (which just leads to platforms overwhelmed with spam, trolls and hatred) or continuing to focus on a centralized moderation system. There have been a number of articles recently that have done a nice job highlighting the problems of having Silicon Valley companies decide who shall speak and who shall not. EFF's Jillian York highlights the problems that occur when there's no accountability, even if platforms have every legal right to kick people off their platforms.
This is one major reason why, historically, so many have fought for freedom of expression: The idea that a given authority could ever be neutral or fair in creating or applying rules about speech is one that gives many pause. In Europe’s democracies, we nevertheless accept that there will be some restrictions – acceptable within the framework of the Universal Declaration of Human Rights and intended to prevent real harm. And, most importantly, decided upon by democratically-elected representatives.
When it comes to private censorship, of course, that isn’t the case. Policies are created by executives, sometimes with additional consultations with external experts, but are nonetheless top-down and authoritarian in nature. And so, when Twitter makes a decision about what constitutes ‘healthy public conversation’ or a ‘bad-faith actor,’ we should question those definitions and how those decisions are made, even when we agree with them.
We should push them to be transparent about how their policies are created, how they moderate content using machines or human labor, and we should ensure that users have a path for recourse when decisions are made that contradict a given set of rules (a problem which happens all too often).
Jillian's colleague at EFF, David Greene, also had an excellent piece in the Washington Post about how having just a few giant companies decide these things should worry us:
We should be extremely careful before rushing to embrace an Internet that is moderated by a few private companies by default, one where the platforms that control so much public discourse routinely remove posts and deactivate accounts because of objections to the content. Once systems like content moderation become the norm, those in power inevitably exploit them. Time and time again, platforms have capitulated to censorship demands from authoritarian regimes, and powerful actors have manipulated flagging procedures to effectively censor their political opponents. Given this practical reality, and the sad history of political censorship in the United States, let's not cheer one decision that we might agree with.
Even beyond content moderation's vulnerability to censorship, the moderating process itself, whether undertaken by humans or, increasingly, by software using machine-learning algorithms, is extremely difficult. Awful mistakes are commonplace, and rules are applied unevenly. Company executives regularly reshape their rules in response to governmental and other pressure, and they do so without significant input from the public. Ambiguous "community standards" result in the removal of some content deemed to have violated the rules, while content that seems equally offensive is okay.
Vera Eidelman of the ACLU similarly warns of the pressures increasingly put on tech companies, which will inevitably lead to the silencing of marginalized voices:
Given the enormous amount of speech uploaded every day to Facebook’s platform, attempting to filter out “bad” speech is a nearly impossible task. The use of algorithms and other artificial intelligence to try to deal with the volume is only likely to exacerbate the problem.
If Facebook gives itself broader censorship powers, it will inevitably take down important speech and silence already marginalized voices. We’ve seen this before. Last year, when activists of color used Facebook to document their experiences of police violence, Facebook chose to shut down their livestreams. The ACLU’s own Facebook post about censorship of a public statue was also inappropriately censored by Facebook.
Facebook has shown us that it does a bad job of moderating “hateful” or “offensive” posts, even when its intentions are good. Facebook will do no better at serving as the arbiter of truth versus misinformation, and we should remain wary of its power to deprioritize certain posts or to moderate content in other ways that fall short of censorship.
Finally, over at Rolling Stone, Matt Taibbi makes a similar point. What starts out as kicking off people we generally all agree are awful will lead somewhere we probably won't like in the end:
Now that we’ve opened the door for ordinary users, politicians, ex-security-state creeps, foreign governments and companies like Raytheon to influence the removal of content, the future is obvious: an endless merry-go-round of political tattling, in which each tribe will push for bans of political enemies.
In about 10 minutes, someone will start arguing that Alex Jones is not so different from, say, millennial conservative Ben Shapiro, and demand his removal. That will be followed by calls from furious conservatives to wipe out the Torch Network or Anti-Fascist News, with Jacobin on the way.
We’ve already seen Facebook overcompensate when faced with complaints of anti-conservative bias. Assuming this continues, “community standards” will turn into a ceaseless parody of Cold War spy trades: one of ours for one of yours.
This is the nuance people are missing. It’s not that people like Jones shouldn’t be punished; it’s the means of punishment that has changed radically.
This is why I think it's so important that the framework be shifted. People have long pointed out that "just because you have free speech doesn't mean I need to listen," but the way social media networks are constructed, it's not always so easy not to listen. The very limited block/mute toolset that Twitter provides is not nearly enough. The more platforms can push the moderation decision-making out to the ends of the network, including by allowing third parties to create different "views" into those networks, the better off we are. Then it's no longer the internet giants making these decisions. It also increases "competition" on the moderation side itself, while increasing the transparency with which such systems operate.
So, really, it's time we stopped focusing on who the platforms should silence, and started giving end users more power to decide whom they wish to hear.
Filed Under: censorship, centralization, decentralization, filters, free speech, human rights, intermediary liability, platforms, silence, social media
Companies: facebook, google, twitter
Reader Comments
The easy fix
Other than a complete shutdown, light-handed moderation is necessary to keep things working. No moderation hasn't been a workable answer since the "Greencard Lottery" advertisements went out over Usenet.
If anyone really wants an unmoderated experience as a matter of principle, they really don't understand what spam is.
Mass-blocking astroturfers, spammers, bots and Nazis is fine, but it's hard to stop there.
TD puts some of that control in the hands of the users, and seems to do an acceptable job balancing that. So thanks for the article, Mike, and for the forum to discuss it.
Re: The easy fix
However, I think it's going to get pushback from the people at the social media giants who enjoy having the power they have, AND from the people who want one place (or a small handful of places) to lay all the blame/accountability on. I suspect that those two groups make up a major portion of the people best able to influence the situation.
Just your ongoing way to avoid responsibility, lying that
it's "the community" with a "voting system".
As evidenced here, that alleged "vote" always works the way you want to suppress criticism and you give no details such as whether an Administrator has final approval.
And now, censor this and give all the evidence needed that Techdirt simply advocates CENSORING.
Re: Re: Just your ongoing way to avoid responsibility, lying that
That's what I've been saying, and you approve, just don't call it CENSORSHIP (with the word trick that it's not gov't).
But disadvantage of viewpoints IS CENSORSHIP enough for a "free speech" site that claims to want the margins kept broad. FACT IS that you kids can't stand my mild-mannered dissent, but prattle about how virtuous you are.
Re: Re: Re: Just your ongoing way to avoid responsibility, lying that
And again, community moderation as supported on this site is not censorship. Your posts are still visible to the world. They're merely hidden behind a "show" link. If we could vote you off the island then you'd have censorship.
The government isn't the only entity that can block something and have it be called censorship. The community can do that, too. The difference is that when the government does it, it's usually unconstitutional. When the community does it, it's culling the herd, and totally legal.
Re: Re: Re: Just your ongoing way to avoid responsibility, lying that
Sorry wait...
FACT you are NOT CENSORED HERE. At any time EVERY USER can click UNHIDE and see your DRIVEL. ;)
Your “dissent”, such as it is, relies on calling Techdirt names and bashing the commenter community for flagging your posts. You do not dissent on the basis of a fundamental disagreement with a central point being made. You do not refute the central point with which you disagree. All you do is whine and complain like a spoiled brat who keeps getting told “no” when he really wants to hear “yes”. If we thought you were offering anything of substance and nothing that sounded like you having a years-long grudge over something as trivial and meaningless as a perceived slight on a tech blog comments section, we would stop flagging your comments.
Feel free to give us actual dissent and discussion, if you can do so. We welcome the opportunity to disagree—and to discuss our points of disagreement. But right now, all you deliver are the baseless complaints of a man-child who thinks he commands respect but only ever demands attention. Respect is earned, and you have a lot of earning to do, so either get to work or stop masturbating to the idea that you are some radical counter-culture hero of society because you act like an attention whoring asshole on a tech blog’s comment section.
Re:
But right now, all you deliver are the baseless complaints of a man-child who thinks he commands respect but only ever demands attention.
"I am Rip Va- Anonymous Coward, and I command your respect!"
"No, you demand my attention."
Re:
Basically put: "Nothing you say will dissuade me from censoring your words."
A conversation I do not wish to repeat with you hypocrites.
I've said it before, and I'll say it again:
You can censor those you do not like, but you do not have the right to censor them for me.
This is "pitchfork" mentality, where Techdirt advocates putting censorship into the hands of the public.
That'll work out. /sarcasm.
Until Techdirt repairs the broken system by giving you hypocrites the power to block my content, I will not support this site financially despite the good its authors do.
I don't give a damn if 99.99999% of you flag the content. None of you has the right to block it from me.
CENSORSHIP is CENSORSHIP no matter who's in control.
Why can't you hypocrites understand this.
Re: Re:
CENSORSHIP is CENSORSHIP no matter who's in control.
Re: Re: Re:
My, my. You're consuming content without paying for it.
Dirty pirate.
Re: Re:
No, it means "provide reasonable words, and we will treat them reasonably". But, even simply stating that is enough to trigger your persecution complex, it seems. I don't believe you've ever attempted to be a reasonable person, however, so I doubt this will ever be tested.
"You can censor those you do not like, but you do not have the right to censor them for me."
We absolutely do. I've said this before, but when you have a drunk asshole in the room, the fact that he gets kicked out after annoying too many people is not infringing on his rights. Stop being that drunk asshole if you don't like being told to shut up. We don't even kick you out of the room here, we just sit you in the corner and warn people that you might be an obnoxious drunken asshole.
I wonder if you're as dead set on whining about these things on sites that actually do kick you out for dissent, or if you're just too stupid to understand the freedom you have on this site, which never blocks anyone.
Also, I wonder what your alternative is. You steadfastly refuse to create a handle, let alone an account, so you provide no method by which TD can know what you wish to see, even if the option is provided. They therefore have to go with a default that appeases most users, which is always going to be "we'd rather not see spam and trolls".
"I don't give a damn if 99.99999% of you flag the content"
Obviously. But, the fact that literally everyone in the room is telling you to shut up would give an intelligent person at least pause to consider that maybe it's him who's the problem. If you meet 100 people in a day and 99 tell you you're an asshole, perhaps the problem is not that you happened to meet 99 other assholes?
"None of you has the right to block it from me."
Yes, we actually do. Part of freedom of speech is freedom of association and we are telling you we do not wish to associate with you. That is our right, and people choose to exercise it.
The right that does NOT exist, except in the minds of delusional idiots such as yourself, is the right to demand to use privately owned space to say whatever you wish without recourse. If you go into a vegan restaurant and start shouting about the benefits of a good steak, your rights are not being infringed upon when everyone tells you to shut up or leave.
Re: Re: Re:
Or just to leave, period.
No one deserves a second chance, but most times a kind proprietor will give them one anyways.
Or several hundred, if Techdirt is any example.
Re: Re: Re: Re:
That's part of the reason I answer back - it's a strange specimen that I don't come across all that often, and it does demand a little prodding to see how it reacts.
Re: Re: Re: Re: Re:
It's all coming from one or two people, maybe three, with a massive chip on their shoulder.
Were I to make some guesses, there are a few contributing factors:
A) First and foremost, Techdirt is a fairly small community of active participants. I've no way to measure readership, but there is what I would call a "handful" of regular commenters.
What this does is give the bitter boys targets. They have somewhere to aim their vitriol and expect it to reach someone.
The smaller number of comments also means everyone looking in the comment section is going to see what they have to say - or at least, will see that they have said something. Despite the claims of censorship, even when flagged by the community, these posts are highly visible. In fact, hiding the post actually draws the eye and makes you wonder what kind of nonsense was posted that got it flagged.
B) In line with the above, the article writers on occasion respond to comments - or mostly, I've observed Mike doing so. So again, despite all the claims of censorship and etc., they are getting responses. They are being heard. This is all validating in some fashion, even if everybody is telling them that what they are saying is insane bullshit.
I'd say it's partly because of the attention paid to them. They think they can actually accomplish something, or at least they can go about their day feeling self-righteous in having once again told those TD people what's what, even if they never actually listen.
Other sites don't have quite the same style of community engagement, or if they do, the community is a lot less welcoming to the vocal assholes, and is less welcoming in a way that gets them to leave.
This is all of course a bunch of guessing, but because I came up with it, I like it, and will treat it as true.
Re: Re: Re:
I don't think this is (necessarily) the same person, repeating the same "I'm being censored!" complaint.
I think this is a second person, complaining that people flagging the first person blocks the second person from seeing the first person's comments (without having to allow scripts and click through the "flagged by the community" link).
Parts of your reply would still apply equally well to that, but other parts read to me as if you're responding to "I'm being censored!" rather than to "the content I might want to read is being censored!", so I'm not sure whether I'm parsing things correctly.
Re: Just your ongoing way to avoid responsibility, lying that
I think I shall build one of these, and give your comments the attention they deserve.
It also requires that every user create an account in order to post, so that their content can be labeled. I know anonymous posting is strongly favored here at TD, but an "Anonymous Coward" isn't really much more anonymous than any given profile created here; requiring a profile in order to post doesn't make one less anonymous.
Re:
That's fine on a system where you can trust the community to moderate.
On Twitter, I think it would last about five minutes before somebody set up a botnet to affix fake tags to various tweets.
Re: Re:
You need some weighting system. E.g.: if some people mark a post as "left-wing", show it as a tentative tag and let other people vote against the tag. People whose tags you agree with should have more effect on your own filters. If the bots only ever agree with each other, their tags can be ignored by other users.
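A hypothetical sketch of that weighting scheme (every name and number here is made up, not any platform's real design): each reader keeps a personal trust weight per tagger, and taggers earn weight only when their votes match verdicts the reader later confirms, so a botnet that only agrees with itself never earns enough weight to confirm a tag.

```python
from collections import defaultdict

class TagFilter:
    """Agreement-weighted tag voting, per reader (illustrative design only)."""

    def __init__(self, confirm_at=2.0):
        self.confirm_at = confirm_at
        self.trust = defaultdict(lambda: 0.25)   # my weight for each tagger
        self.votes = defaultdict(dict)           # (post, tag) -> {tagger: +1/-1}

    def vote(self, post, tag, tagger, agree=True):
        self.votes[(post, tag)][tagger] = 1 if agree else -1

    def confirm(self, post, tag, i_agree):
        """When I weigh in myself, re-weight everyone who voted on this tag."""
        for tagger, v in self.votes[(post, tag)].items():
            matched = (v == 1) == i_agree
            self.trust[tagger] *= 1.5 if matched else 0.5

    def status(self, post, tag):
        score = sum(v * self.trust[t] for t, v in self.votes[(post, tag)].items())
        if score >= self.confirm_at:
            return "confirmed"
        return "tentative" if score > 0 else "rejected"

f = TagFilter()
for bot in ("bot1", "bot2", "bot3"):
    f.vote("post-9", "left-wing", bot)
print(f.status("post-9", "left-wing"))    # tentative: unearned trust stays low
f.confirm("post-9", "left-wing", i_agree=False)
print(round(f.trust["bot1"], 3))          # 0.125: mismatched my verdict, halved
```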
Re: Re: Re:
That's a good system -- similar to the old meta-moderation system that Slashdot used to use -- but it's also complex, and might be difficult for a typical end user to understand. See Tarleton Gillespie's story today on claims of search engine bias and how they're fueled by people simply not understanding how search algorithms work.
Re: Re: Re: Re:
I'd be sure to add some randomness too—maybe 1% of the time, a comment will get a random tag added or removed on its way to the user, or get displayed when normally filtered, to see if people reach the same conclusions.
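Sketching that audit idea (all names and structure are illustrative assumptions; the 1% rate is from the suggestion above): the delivery layer occasionally perturbs what a reader sees and logs that it did, so later verdicts can be compared against the unperturbed baseline.

```python
import random

AUDIT_RATE = 0.01  # the "1% of the time" suggested above

def deliver(tags, hidden, audit_log, comment_id):
    """Possibly perturb a comment's presentation, logging it for later comparison."""
    tags, show = set(tags), not hidden
    if random.random() < AUDIT_RATE:
        choices = ["add_tag", "unhide"] + (["drop_tag"] if tags else [])
        action = random.choice(choices)
        if action == "add_tag":
            tags.add("audit-probe")
        elif action == "drop_tag":
            tags.remove(random.choice(sorted(tags)))
        else:                      # "unhide": show a normally filtered comment
            show = True
        audit_log.append((comment_id, action))
    return tags, show

log = []
print(deliver({"left-wing"}, hidden=True, audit_log=log, comment_id=42))
```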
Re: Re:
Interestingly, IIRC it was also implemented for the same reasons - right-wingers whining endlessly about too many articles being approved that didn't conform to what they wanted to hear.
How will The Public know it's themselves rather than them?
Since corporations control the code and policies without the least accountability?
Sheesh. Only point transparent about you is how feeble your tricks are.
"It’s not that people like Jones shouldn’t be punished;"
Punished for what? Being stupid? If only. I say let the village idiot speak. If we end up with a few more idiots in the village, then so be it. Chances are we won't.
"So, really, it's time we stopped focusing on who the platforms should silence, and give more power to help the end users decide who they wish to hear."
It's amazing to me how everyone apparently lets FB, Twitter, etc. shape their online experience. I don't do any social media myself, so I couldn't say for certain, but it appears that way.
Re:
You mean besides the ones harassing grieving parents and shooting up pizza parlors, right?
"We Shouldn't Want Internet Giants Deciding" - WE don't; YOU DO!
Masnick is for corporations CONTROLLING the speech and outlets of "natural" persons. He repeats it often, can't be mistaken. From last year:
"And, I think it's fairly important to state that these platforms have their own First Amendment rights, which allow them to deny service to anyone."
https://www.techdirt.com/articles/20170825/01300738081/nazis-internet-policing-content-free-speech.shtml
Masnick is not hedging "lawyers say and I don't entirely agree", or "that isn't what I call serving The Public", but STATES FLATLY. Masnick wants a few corporations to have complete and arbitrary control of ALL MAJOR outlets for The Public! He claims that YOUR Constitutional First Amendment Right in Public Forums are over-arched by what MERE STATUTE lays out!
Re: "We Shouldn't Want Internet Giants Deciding" - WE don't; YOU DO!
Your elected representatives have seen fit to give corporations certain constitutional rights. That includes the 1st Amendment and gives them the right to moderate/censor what appears on their platforms. It even includes you, if you were to put up a blog that allowed comments; you can decide which comments appear and which do not. All perfectly legal.
Just because FB, Twitter, Instagram and others have become super popular places for people to post and communicate doesn't make them government owned and/or controlled and thus required to allow everyone on the planet a voice to say whatever they like. Those platforms are still non-government and may do as they please with their platforms, just as you would with your blog.
Why is this so hard for you?
Re: Re: "We Shouldn't Want Internet Giants Deciding" - WE don't; YOU DO!
He can't seem to grasp the fact that corporations are made of people, and that they, collectively, are guaranteed many of the same rights that are recognized individually, by virtue of the First Amendment right of free assembly, and its recognized (COMMON LAW!) corollary right of association.
Which, in turn, has a recognized (COMMON LAW!) corollary right - to not associate. It is by that right that private individuals and businesses can tell him to fuck off completely, and why making his statements merely hidden is a very kind way of dealing with him.
I keep saying that he confuses private with privately-owned; he has yet to refute my assumption.
Personal filter setups
Now, let's take Techdirt as an example of where moderating is failing... every time *moderation* comes up as a topic, the trolls roll out and there are a few hundred comments, which is more than the roughly 50 I personally would like to read.
Let's assume Stephen P Stone likes my approach to TD comments...how would I share that with him in an automagic way???
Note that we aren't really solving the accountability problem here, just distributing responsibility more widely. And even though I agree with TD's current moderation practice, it's not *terribly* accountable.
Re: Personal filter setups
Let's assume Stephen P Stone likes my approach to TD comments...how would I share that with him in an automagic way???
I can think of a few ways off-hand, including a "meta-comment" section which includes all comments made by specific people that you "follow" regardless of the article it was made in, or a sort function allowing you to sort comments by a similar list (or perhaps, a list tracking which people you have upvoted in the past) etc.
Note that we aren't really solving the accountability problem here, just distributing responsibility more widely.
And once you have distributed the problem far enough, the "accountability problem" is solved (at least, as much as any accountability problem can be solved when humans would rather not be accountable for themselves). Or in other words, once the problem has been distributed, it ceases to be a problem related to any real application, and simply starts being a part of the human condition, joining the "accountability problems" in all other aspects of human society, which are caused not by those aspects, but by the fact that it is humans that are using those aspects.
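For what it's worth, here's a minimal sketch of the "follow list" sorting floated above; the tiering, the names, and the upvote-history idea are illustrative assumptions, not an existing Techdirt feature.

```python
def sort_comments(comments, followed, previously_upvoted):
    """Rank comments from followed authors first, then authors I've upvoted
    before, then everyone else; the sort is stable, so thread order is kept
    within each tier."""
    def tier(comment):
        author = comment["author"]
        if author in followed:
            return 0
        if author in previously_upvoted:
            return 1
        return 2
    return sorted(comments, key=tier)

comments = [
    {"author": "anon", "text": "first!"},
    {"author": "carol", "text": "a point about filters"},
    {"author": "bob", "text": "replying to the article"},
]
print([c["author"] for c in sort_comments(comments, followed={"bob"},
                                          previously_upvoted={"carol"})])
# ['bob', 'carol', 'anon']
```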
Why should it be more accountable? If the community believes a comment should be flagged, no matter the reason why, the community is the one whose votes are counted. You can appeal to the community for an unflagging if you so wish; we can then decide whether you deserve one.
I do not flag on the basis of ideological dissent or disagreement with a central point of discussion. I flag on the basis of a comment being an outright troll, a spam comment, or offering only insults or bullshit in lieu of a discussion (or a good joke). I even flag my own comments sometimes, simply because I know I make bullshit comments that deserve a flagging. (And you have no idea how many comments I have drafted but deleted because I wanted to unload on a troll but thought better of it before I submitted those comments.)
Besides, you are forgetting about someone who should also be held accountable for the flagging: the person who made the comment. They are responsible for posting soon-to-be-flagged bullshit; they are accountable, at a minimum, for the reception it gets.
Re:
Is that a real thing? I see the "flag", lightbulb, and LOL buttons. Are the latter two counted as "unflag" votes, or do you mean something else?
Part of being "unaccountable" is that it's not described how this works. How many people have to click that flag button before the comment disappears? If it gets hidden, is that for everyone—in which case few people will ever see it to give it a chance at unflagging—or does someone have to "validate" it? Is there any indication why a comment is hidden?
Sometimes when I click "post" my comments are held for moderation. I don't know why. They appear hours or days later.
Re: Re:
If you flag a comment, you can change your mind and unflag it later. It's the same button. The flag button.
We counted once. I think it's five.
The only people who can remove their flag vote are people who have already flagged the post.
Generally speaking, they have probably read the post.
No. When it hits the set number of flag votes (which, again, I believe is 5), it gets hidden.
It's usually pretty clear in context. It's usually spam, or one of the regular trolls getting downvoted for spouting the same old talking points they always spout.
That's a spam filter. Typically if a post gets caught in the spam filter, it's because it has too many links in it.
The mods check the filter; if the comment isn't spam, they let it through.
All this has been discussed in the comments at some considerable length; Mike, Leigh, et al. have been very forthcoming with how it works. But I understand if you haven't been around for those conversations, it all might seem a little opaque to you. I can see how putting up a moderation FAQ might be useful for newcomers.
Then again, I think a lot of this stuff is pretty obvious or easy to figure out just by observing.
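Purely as illustration, the behavior described in this thread might look something like the sketch below; the threshold of 5 is the recollection above (believed, not confirmed), the link cutoff is an assumption, and none of this is Techdirt's actual code.

```python
import re

FLAG_THRESHOLD = 5   # the number recalled above, not a confirmed value
MAX_LINKS = 2        # assumed cutoff for the "too many links" spam hold

URL_RE = re.compile(r"https?://")

def display_state(text, flag_count):
    """Classify a comment: held for a human mod, hidden-but-showable, or shown."""
    if len(URL_RE.findall(text)) > MAX_LINKS:
        return "held for moderation"       # mods review the queue, then release
    if flag_count >= FLAG_THRESHOLD:
        return "hidden (click to show)"    # still readable by anyone, not deleted
    return "shown"

print(display_state("same old talking points", flag_count=6))
# -> hidden (click to show)
print(display_state("http://a http://b http://c", flag_count=0))
# -> held for moderation
```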
Re: Re: Re:
That would be great. I've complained, too, about how Mike sometimes says his writings are in the public domain, but hasn't written it anywhere "official". So of course we shouldn't be surprised that people, in general, don't know that.
A fixed flag count of 5 like you describe could easily be abused. Get 4 friends (or 4 other accounts) and flag the messages of people you don't like. I don't believe we've seen serious abuse but can see why people call the system unaccountable. You'd have no idea which 5 people flagged it or what they considered objectionable. Sure the reason's usually going to be clear, but you never know how people will (or did) misread stuff.
Re: Re: Re: Re:
But I think that distinction is inherent to any conversation about online moderation -- because appropriate moderation will depend on the platform.
Yes, someone could abuse the flag feature as a bullying tactic (and indeed the Report mechanism is often misused on large platforms like Twitter). But, as you say, that doesn't appear to be an issue here. Techdirt's a somewhat popular site but has a relatively small and manageable community; the honor system has worked so far, and if it ever breaks down, the admins can reevaluate the flag system then.
Re: Re: Re:
It seems related to Tor. I get it without any links, or with internal Techdirt links. I could switch IPs and repost, and it will usually be posted immediately. But I don't know who the mods are or whether they're online, and it might cause duplicate messages, so I rarely do it.
Re: Re: Re: Re:
It seems related to Tor.
Using Tor will significantly increase the odds of a comment being caught by the spam filter, yes.
Re: Re: Personal filter setups
Well, no; many places have moderators who ban the trolls outright, or at least have tools for blocking them.
I'm with Christenson: the trolls, and people feeding them, have a tendency to drown out productive conversation. I don't read Techdirt because I want to see the same damn "Blue whines about how he's being censored; other people explain why he's wrong; this goes on for 50 replies" cycle a half-dozen times a day. But apparently some people do.
This place has a great community and can be a great place to come for insightful, intelligent, nuanced conversation on a variety of technical and political issues. When they aren't spending all day getting caught up in Blue's schtick for the seven thousandth time instead.
Good discussions can come from replies to Blue, Hamilton, and the rest of the troll brigade. And besides: When a good point can be made by replying to one of those schmucks, why pass up that opportunity?
Re:
That's fine. More commonly, I see pointless sarcastic replies along the lines of "hi, I'm (NAME) and I completely misunderstand Mike's views on copyrights". The original comment gets flagged and hidden while the replies stick around to clutter things up. Let's not do that.
Re: Re: Re:
It's not the most user-friendly thing; you need to install Greasemonkey, Tampermonkey, or similar. And it doesn't have the "click to show" feature you're suggesting; it just removes the replies entirely. But I've provided the source and you're welcome to tweak it however you like.
Re:
Can, but, in my opinion, seldom are.
Blue doesn't know what the First Amendment is; Chip doesn't know what a natural monopoly is; Hamilton doesn't know what common law is; MyNameHere just hates it when due process is enforced; over and over again, ad infinitum. There's not a whole lot of new ground to tread, just the same points over and over again.
Because the point's already been made a hundred times and there's no value in repeating it for the hundred-and-first.
Because you could be having another, better conversation with somebody else instead.
Because there are other, better ways to spend your time than saying the same things to the same trolls, post after post, day after day, and year after year.
Because there's more to life than scoring points in online arguments.
Re: Personal filter setups
Seems to me that most people here are pretty happy with the moderation system as-is, and aren't interested in anything more granular.
Or at any rate I've been sharing my code for blocking abusive posters for over a year and, as far as I know, nobody else but me has ever used it.
We discussed this in another thread, but I think it's an interesting idea so I'll mention it again: a simple Bayesian analysis of every comment on Techdirt that's ever been hidden due to flagging would probably produce a pretty accurate filter for abusive posters -- at least, the regular ones.
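A bare-bones sketch of that idea, with loud caveats: toy data, naive whitespace tokenization, add-one smoothing, and no tuning. A real filter would train on the actual corpus of hidden vs. visible comments; this only shows the shape of the computation.

```python
import math
from collections import Counter

def train(hidden_texts, visible_texts):
    """Word counts per class; the classes are 'hidden by flags' vs 'left visible'."""
    h, v = Counter(), Counter()
    for t in hidden_texts:
        h.update(t.lower().split())
    for t in visible_texts:
        v.update(t.lower().split())
    return h, v, len(hidden_texts), len(visible_texts)

def flagged_log_odds(text, model):
    """Naive Bayes log-odds that a comment belongs with the hidden pile,
    smoothed so unseen words don't zero anything out."""
    h, v, nh, nv = model
    vocab = len(set(h) | set(v)) or 1
    score = math.log((nh + 1) / (nv + 1))          # class prior
    for w in text.lower().split():
        p_h = (h[w] + 1) / (sum(h.values()) + vocab)
        p_v = (v[w] + 1) / (sum(v.values()) + vocab)
        score += math.log(p_h / p_v)
    return score

model = train(
    ["you censoring hypocrites", "CENSORSHIP is CENSORSHIP no matter who"],
    ["interesting point about filters", "thanks for the article mike"],
)
print(flagged_log_odds("stop censoring me you hypocrites", model) > 0)  # True
```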
Re: Re: Personal filter setups
That only means it's not so bad that people will leave or start hacking up alternate systems. Not a very high bar, particularly when Mike's vaguely proposing an alternate system. Facebook and Twitter have bigger budgets to figure out things like "allowing third parties to create different 'views' into those networks", but I'd be interested to see what it might look like on Techdirt.
Re: Re: Re: Personal filter setups
I think it's more than that; I think most people think the flag system works fine and no additional moderation tools are desired.
A lot of people seem to actually enjoy talking in circles with the same two or three trolls day in and day out.
Re: Re: Re: Re: Re: Personal filter setups
Respectfully, I suggest you read the comments on the other articles under the content moderation tag.
Re: Re: Re: Re: Re: Re: Personal filter setups
In general I see a lot of people bitching about the Techdirt moderation system and a lot of people supporting it. I would not presume to know which represents the majority.
Re: Re: Re: Re: Re: Re: Re: Personal filter setups
I think if you look closer, you'll find that you don't see a lot of people bitching about the Techdirt moderation system, you see one person bitching about the Techdirt moderation system a lot.
Including your own.
Re:
Now, how do you share that with them? (Techdirt says push moderation out to the edge, that means you and me, not TD management)
ShareholdersDefineSpeech
In fact, the opposite could be said of the right-to-work laws in several states, which declare that unions do not represent the voices of all employees... or go back to the George W. Bush era, when chain-link fences were deployed to create "free speech zones" away from main events.
We're in an era where laws and accompanying regulations haven't caught up with technology.
A technology platform that drops a user because of their speech isn't the same as arresting citizens who go out into a public space and yell their opinions.
The corporate liability is to the shareholder, who is affected by the position/stance/brand of the corporation, and that is what drives the corporation to control speech: any speech that impacts its bottom line.
Citizen liability is different: for a citizen, unlike a corporation, speech can and often does lead to jail.
It's not about marginalizing some speech.
People can still go into the public square and shout at the world.
The issue is society's reliance on technology to speak - which is much more frightening than being locked up in jail or put behind a chain-link free speech zone.
__Shareholders__ in a very real sense __define__ what __speech__ is acceptable or not and until we can regulate the profit motive, speech will be censored.
This is the broader—and much more pertinent—issue that a lot of the “MODERATION IS CENSORSHIP” crowd could bring up if they were less focused on starting flame wars about Twitter and more focused on starting a coherent dialogue about the role of technology in our daily lives.
Google is notorious for altering search results based on whatever their goals are this week. Yet the end user has little way to know that the reason they are getting so few results this week for X isn't that X is any less popular, but is due solely to Google changing the search algorithm. The only way the end user can discover this is by using several search engines and comparing the results. IMO, when Google does this, they are lying to the end user about which results are popular or pertain to the user's search request.
Re:
Really? I wouldn't rule it out, but "notorious" would mean they'd be head-deep in a shitstorm with regulators. So far I haven't seen anything remotely resembling a shitstorm. Grandstanding, yes, but nothing of substance.
Relative Pricing
I had multiple tabs open to follow different leads in searching for the product. Evidently, each of the leads I was following must have accessed the product's webpage differently.
This, of course, raises significant public concerns about product pricing and even your personal information. The company knows your zip code and personal wealth, and modifies the product price accordingly.
Re:
What are our goals this week Google?
Same as every week, more ads!
Are the Search Algorithms Biased Against Conservatives?
The social media industry tends to be very "left" leaning and those on the "right" have claimed that their conservative viewpoints are being repressed, because those "leftists" write the algorithms. Seems to be a reasonable explanation on the surface. But I suspect that the apparent suppression of conservative viewpoints may be more complex than simply pointing a finger of blame at an abstract algorithm. I would like to see more research into how the apparent suppression of conservative comments is occurring.
For example, the apparent suppression of conservative comments may not be due to a biased algorithm but could, in reality, be a side-effect of how conservatives use social media compared to liberals.
If anything, they stay in the centre with occasional dips into “left” or “right” that depend on whatever flaming bag of dog crap lands on their doorstep in a given day.
Re: Re: Re:
How exactly do they use it? Generally speaking you have a timeline of all the posts from people you follow, with perhaps some promoted content (Twitter does this, I don't really use FB so I don't know about it). So you read through your timeline to see what's going on in the world (according to the people you follow). If that's how you use social media, the implicit bias is your own.
Okay, let's talk about accounts being shut down. If you break the TOS you get shadowbanned or kicked off. If this happens more on the right than on the left, what is the content generally being posted by those accounts? There's the clue. If I'm right that egregious douchebaggery is likely to get you kicked off a platform, then one of two things is true:
1) there are more hateful douchebags on the right than on the left/liberal/progressive side, or
2) some selectively censorious douchebag is kicking off right-wing douchebags only.
So... if people are using social media to behave badly and to foment hatred against [$group] and get kicked off for it, is this a -wing thing or a "don't be such a douchebag" thing? Serious question.
Re: Re: Re: Re:
Facebook too. Your wall has a feed of the people & groups you follow (not necessarily in the order they were posted, see below), peppered occasionally with promoted content.
"So you read through your timeline to see what's going on in the world (according to the people you follow)"
Well... yes and no. The biggest current annoyance with FB is that it tries to guess which posts you're most interested in. You can affect this to some degree by marking people as "close friends" (thus always seeing their recent posts at the top of your feed) or by unfollowing users you disagree with (i.e., you're still "friends" for other purposes but you don't automatically see what they post). But Facebook's algorithm still tries to guess for you.
It can be a major annoyance for various reasons - for example, I know I've missed events I would have liked to attend, but because I hadn't communicated directly for a while with the friend who posted about them it didn't show in my feed until after the event was over. It's not hard to imagine that people are unwittingly being drawn into echo chambers because FB chose not to show them the dissenting voices in the argument.
Every social media platform acts differently, but they all seem to behave in the same sort of way - they try to make decisions about what you're likely to be most interested in, and this tends to put blinders on less savvy readers. If people depend on them as their primary or even only source of news, they end up with skewed ideas about what other people as a whole think.
"So... if people are using social media to behave badly and to foment hatred against [$group] and get kicked off for it, is this a -wing thing or a "don't be such a douchebag" thing?"
It's kind of both. There are douchebags everywhere, no matter which part of the political spectrum you reside upon. However, certain types of douchebaggery will tend to congregate on certain parts of the spectrum. When a certain rule is applied, therefore, it may seem to address one side more than the other, even though realistically it's only because that particular type of douchebag is simply represented more on one side. It just depends on the type of rule that is applied.
For example, white supremacists will often be more right-wing, while militant vegans tend more toward the left. If the "don't be a hateful douchebag" standard is applied, then people promoting attacks on meat eaters will be banned just as readily as those promoting attacks on black people. However, if the standard applied is "don't be a racist", the vegans will be free to continue. To the racist, that might appear to be unequal treatment, but all it really means is that the site decided that the racism issue was more important to it than the meat-eating one.
A person may disagree with that choice, but the correct answer is to either stop being a racist douchebag or to go to a platform where racism is acceptable, not to whine that they should be able to be as racist as they want on a platform that finds it abhorrent.
Re: Are the Search Algorithms Biased Against Conservatives?
There was a comment about that on Ars. Conservatives tend to be old people yelling at the cloud. The guy in question gave his grandparent as an example. The old man hadn't even heard about internet-based conservative outlets. It seems the apparent bias towards liberal content is actually a result of liberal people being more fluent in tech and more active on the internet.
Re: Are the Search Algorithms Biased Against Conservatives?
I have heard this claim many times now but have not seen any examples. Have any of these individuals stated clearly, exactly what happened and why they think they are the victim of censorship?
Re:
That depends. During the height of GamerGate, how many Twitter accounts were suspended that belonged to Gaters making death threats and harassing certain “targets”, and how many accounts were suspended that belonged to Gater targets shooting back insulting rhetoric to their harassers? (And yes, the second one happened multiple times.)
Twitter does not care about specific views; they care about who is making the bigger tirefire by way of reportbombing.
Any woman who spoke out about #metoo before it became fashionable could have been banned as a "misogynist troll." One female anti-feminist lost her social media account for things far less than what feminists had done.
I still think the market solves these problems naturally, since censorship has always proven to be bad for business. While totalitarianism is easy to enforce, doing so while pretending to be open and democratic is not.
Re:
Or are you using TOR again?
Re:
And the result will be the end of the internet. The powers that wanted this might be shocked when everyone stops using the POS after it has been screwed up.
And the birth of internet 2.0 will hit mainstream. The big wig fat boys will have to start all over again - LOL.
As far as it goes, but what about amplification
In the end..
If your country is a Fair and decent place...OPINION is only opinion. NOT an actionable thing. it is 1 person having a problem, that is for the .1% that have a problem that the gov. has no reason to worry about.
Once an opinion has an effect on the society, THINGS need to be looked at and Adjusted to the Major parts of society.. NOT for 1 group, but for fairness to all..
On the internet...IF someone wants to have a SAY, let them..but NOT to spam other persons..IF we want to see/listen to them..WE CAN GO TO THEIR CHANNEL, and listen.
Its the same as with Trolls..THEY CAN have a good reason/comment to say. you can even give them a Room of their OWN for their OWN opinions..
There are many reasons to listen to EVERYONE.. because you ARE NEVER 100% accurate..there are Many sides to anything. Seeing opinions, can help you see what you MAY HAVE MISSED..
How many games are perfect? NONE (not even Windows 10).
PS...THAT IS MY OPINION.
Re: In the end..
Voting for Trump has brought this problem to the surface but it is not unique to politics.
Our general tolerance for bullying renders free speech concerns moot. We have none.
Re: Re: In the end..
ASK them to prove their foundation of what they are saying...
Its not the idea to talk random, its the idea that someone HAS an opinion..
Get them to explain their side..And if you find them a threat, call the cops.
thats called SELF POLICING..something we SHOULD do..
The only difficulty in all of this tends to be ANON, and finding someone you have Little knowledge about.. I've even said it to my chat buddies.. It would take you 15-30 minutes to remember WHERE I LIVE, Find the local police/emergency, get them to my home to ASSIST ME..
In 15 minutes Anything could have happened.
The reason this won't work? It will do no more than reinforce one's opinions, which will stem from what one reads, and accepts, as fact(s).
The fact that no post ever published, anywhere, at any time, on any medium, has stated "This is my opinion, you should make up your own mind.", or words to that effect, tells me that we internet users, all of us, are pretty much looking to be part of the herd, and not very willing to stand out for our own selves.
Stated another way, this idea will lead to confirmation bias on a level never before dreamed of. I predict that somewhere along the line, the repercussions of "us versus them" is going to become very ugly.
Please note that I am not saying that we should be forced to read other's opinions that we find to be repugnant, but simply that we should not wander willy-nilly down a path of "I don't like what 45 is saying, so I'm going to tune him out, just ignore him and all of his cronies." At some point in the future, when they're pretty sure that we're all ignoring them, they're going to come for us. And it won't be with pitchforks and firebrands, either. The survivors will be lamenting that we could've had prior warning, if we were not so close-minded, and had been paying at least a little attention to their rantings and ravings.
Uncharacteristically, I have no "better solution" to the problem. Sorry, I'll keep thinking on it, but in the meantime....
sumgai
p.s. Sorry Mike, hope your Cheerios still taste OK!
Disclaimer: See the quote in my second paragraph.
Re:
Stated another way, this idea will lead to confirmation bias on a level never before dreamed of.
A few people have suggested this, but I don't buy it for a variety of reasons.
People have been complaining that Twitter/Facebook/YouTube are ALREADY a giant filter bubble. You already choose who you follow/subscribe to on those sites. So I don't see this as changing that.
The whole concept of the filter bubble is exaggerated anyway. Most people don't care that much, but DO want to avoid assholes/trolls. That's not being in a filter bubble. It's just getting rid of assholes and trolls.
Re: Re:
Allow me to both agree and disagree on some of your points, please.
1) Alleged filters on social media are something about which I know nothing - I refuse to give up my last shards of privacy, so I don't participate in any way. I limit my on-line presence to Fora such as yours, where I usually find evidence of the median participant IQ to be somewhat higher than that of a raw carrot.
2) The fact that "most people don't care that much" is concerning, at least to me. It says a whale of a lot about apathy, and opens a large diorama of reasons for that willingness to not participate in one's own government, even at a visceral level. Wanting to avoid assholes and/or trolls of various kinds is laudable, but in point of fact, if we avoid them, then we are just pretending to ourselves that they aren't there in reality.
Compare this to the SESTA/FOSTA crap - "If we remove this (already criminal) activity from the internet, then it will go away entirely". I'm sure you can think of other examples near and dear to your heart.
To quote Sgt Springer, from my Boot Camp days: "If you stick your head in the sand, then your ass is exposed, with a couple of nice big red rings painted on it." Better to know where they are, what they're doing, and how to keep them from causing ever greater harm.
3) I must defer to my statements in 1) above. However, I do hold out hope that you're correct, that people will become less inured to the actions of their fellow citizens, particularly those in the business of governing others. I'm not looking for outrage and a desire for retribution; I'll be happy to see merely a high degree of concern and a willingness to express a thoughtful opinion. (Read that last as: not an emotional outpouring, devoid of any rationality.)
Thanks Mike. ;)
sumgai
"of ...anchor markup--p-anchor" rel="nofollow noopener noopener" target="_blank">experiences of police violence"
[ link to this | view in chronology ]
So... the end game has been broached...
But it's unlikely to satisfy the "do something" people.
Because they really do want to silence people they disagree with, not just avoid reading them themselves. Complaints about bias in censorship, real or imagined, are often directed not so much at the existence of a bias, but rather at the fact that the bias isn't in their favor.