The Good Censor Document Shows Google Struggling With The Challenges Of Content Moderation
from the thoughtful-analysis dept
Last week, the extreme Trump-supporting media sites went positively ballistic when Breitbart released a leaked internal presentation entitled "The Good Censor." According to Breitbart and the other Trumpkin media, this is somehow "proof" that Google is censoring conservatives, giving up on free speech and planning to silence people like themselves. To put this into a context those sites would understand, this is "fake news." I finally had the time to read through the 85-page presentation and, uh, it paints a wholly different picture than the one that Breitbart and such sites have been painting.
Instead, it pretty clearly lays out why content moderation is impossible to do well at scale and that it will always result in decisions that upset a lot of people (no matter what they do). It also discusses how "bad actors" have effectively weaponized open platforms to silence people.
It does not, as some sites have suggested, show a Google eager to censor anyone. Indeed, the report repeatedly highlights the difficult choices the company faces, and how any move toward increased censorship can and will be abused by governments to stamp out dissent. It is also pretty self-critical, highlighting how the tech companies themselves have mismanaged all of this and made things worse (just one example of the much more thorough analysis in the document).
The presentation actually spends quite a lot of time talking about the problems of any censorship regime, while also noting that various governments around the globe are effectively requiring censorship. It's also quite obviously not recommending a particular path, but explaining why companies have gotten more aggressive about moderating content of late (and, no, it's not because "Trump won"). It notes how bad behavior has driven away users, how governments have increasingly used regulatory and other attacks against tech companies, and how advertisers have been pressured to drop platforms that allow bad behavior.
The final five slides are also absolutely worth reading. It notes that "The answer is not to 'find the right amount of censorship' and stick to it..." because that would never work. It acknowledges that there are no right answers, and then sets up nine principles -- in four categories -- which make an awful lot of sense.
- Don't take sides
- Police tone instead of content
- Justify global positions
- Enforce standards and policies clearly
- Explain the technology
- Improve communications
- Take problems seriously
- Positive guidelines
- Better signposts
So while this document is being used to attack Google, it actually is yet another useful tool in showing (1) just how impossible it is to do these things right and (2) how carefully companies are thinking about this issue (rather than just ignoring it, as many insist). I recognize that's not as "fun" a story as slagging the big bad tech giant for its plan to silence people, but... it's a more accurate story.
Filed Under: content moderation, free speech, government pressure, market of ideas, social media, the good censor
Companies: google
Reader Comments
SO after a week: IF "impossible", then Google IS doing it WRONG.
Your #2 is FLAT LIE: NO ONE said Google is "ignoring it"! In fact, the complaint -- ADMITTED RIGHT THERE BY GOOGLE -- is that Google / Facebook / Twitter ARE CENSORING, with the further point also substantiated in that PDF that they focus on "conservatives" or Republicans as the problem. -- YES, there are some "leftists" or "terrorist" examples, but in practice it says Trump stole the election.
[ link to this | view in thread ]
Re: SO after a week: IF "impossible", then Google IS doing it WRONG.
Google is just trying to find a cover story for its attacks on free speech of political opponents -- and for its intent to gain money in Communist China REGARDLESS.
And then, Google tries to push all blame off onto "governments", even though no examples of that are given.
That's why legislation and court cases are in the works. -- My bet is that Kavanaugh strongly affirms my opinion that "platforms" ARE the new Public Forums and that corporations ARE violating First Amendment Rights of "natural" persons, NOT free to "moderate" as wish which Masnick constantly tries to put over. -- IF NOT the current "MNN" case, in another.
Feeble defense of your "sponsor", a week late, and FLAT LYING, the best one can expect from a Google shill.
New readers, if any, substance for the charge is:
https://copia.is/wp-content/uploads/2015/06/sponsors.png
NOTE ALSO that Masnick NEVER mentions that "sponsorship" here, as any journalist is ethically required to: Masnick is NOT a journalist, and has only Google's interest for his ethics.
[ link to this | view in thread ]
Re: SO after a what?
Your #2 is FLAT LIE: NO ONE said Google is "ignoring it"! In fact, the complaint
Your accusation is a flat-out lie. It is not untruthful, or in any way incorrect, to say people have accused Google of ignoring problems. Clearly, many people have said this!
Google/Twitter/FB absolutely cannot operate without moderation. As the article states, moderation is imposed by various legal and extralegal factors.
What your comment fails to do is show that this moderation is designed to silence your faction. (Please cite.)
[ link to this | view in thread ]
Re: SO after a week: IF "impossible", then Google IS doing it WRONG.
(You didn't read it, of course.)
[ link to this | view in thread ]
In which case, you can say goodbye to virtually every comment section on a blog (including this one), every social interaction network, and every other kind of website that allows third-party submissions. No “platform” will ever allow third-party submissions if they cannot moderate the platform as they see fit.
[ link to this | view in thread ]
If the straitjacket fits…
[ link to this | view in thread ]
Not only Trump-supporting media sites.
As a Democrat voter I firmly argue for anti-trust breakup of the major tech companies in response to their efforts to censor free speech of American citizens.
This goes beyond the invasion of a plain language reading of the Fourth Amendment online. What has transpired in the UK is absolutely Orwellian beyond what we have thus far in the US. Reading through The Good Censor, the central argument is that Google wants to apply the UK system of censorship in America.
That hill I deem worth dying on.
The loud, censorship-happy portion of the Democrat voter base is an over-vocal minority. The much larger silent majority strongly disagrees with censorship. On that point I'm happy to agree with the current administration.
Win through words and argument. When you can't win through argument of your words, you've lost to the better idea. Revise and improve on the platform.
[ link to this | view in thread ]
That's just the tip of the iceberg. Shadowbanning is rampant online, but even then, the market can handle it. New companies will always spring up to monetize the audiences ignored by the more established outfits. Twitter was once such a bastion of free speech, until it got big. This story has played out dozens of times since it first happened on AOL, where censorship of conservatives inadvertently led to what would become the alt.right.
Whatever this site's agenda, if what it says is true, or false, that will ultimately come to light. The internet detects censorship and damage and routes around it. Always has, always will. That's the nature of a decentralized communications medium that was designed to survive a nuclear war.
[ link to this | view in thread ]
Re:
If people wanted free speech online, USENET would still be thriving, but it's not. Every time a company tries to censor people, the market fixes the problem long before regulators ever could. Censorship is what destroyed AOL and it took all of four years.
[ link to this | view in thread ]
Re:
What efforts have Twitter, Google, etc. undertaken to prevent the average person from using their voice? I mean, what have they done to stop you, me, or anyone else from spinning up a one-user personal Mastodon instance, self-hosting a personal blog, or literally anything else that will allow people to express themselves without relying on the privilege of using platforms owned by Twitter, Google, etc.?
And yes, use of those platforms is a privilege, not a right. You are not entitled to force Twitter into hosting your speech; the same goes for Google, Facebook, and any other platform which you do not personally own. By the same token, those platforms cannot legally prevent you from jumping onto another platform—i.e., Google cannot stop me from using Twitter, Tumblr, or Mastodon to bitch about Google’s moral and ethical shortcomings. If you have a law, statute, or court ruling that says otherwise on any of those points, now would be the time to present it.
That explains why Republicans have done their best to disenfranchise voters, gerrymander voting districts, and make voting a much harder process.
[ link to this | view in thread ]
Re: Re: SO after a week: IF "impossible", then Google IS doing it WRONG.
I still say the stupid people in the audience need to be sterilized, and until we do that, it's pointless to try to save these slugs from themselves. We're a world overrun by idiotic slugs who shouldn't be allowed to breed, and who are poisoning the gene pool to the point where extinction is threatened.
Yes, I'm being hyperbolic. I do not actually support Eugenics...for now.
[ link to this | view in thread ]
Re: Re:
Were it proven, would you call it bias?
There are enough public examples not to need mine.
[ link to this | view in thread ]
Re: Re:
If people want free speech, let's all go back to USENET, or boycott companies which censor. Perhaps we can apply USENET's SPAM rules and "throttle" spammers without censoring anyone or let people opt-in to "total free speech." Some company will jump profitably into this vacuum if it's a real problem.
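Purely as a thought experiment, here is a minimal sketch of what behavior-based throttling in that spirit might look like -- scoring posters by volume and cross-posting (loosely inspired by USENET's Breidbart Index) rather than by what they say. All class names and thresholds here are hypothetical, not anything an existing platform actually uses:

```python
import math
import time
from collections import defaultdict

# Hypothetical sketch: throttle posting *behavior* (volume and cross-posting),
# not message content -- loosely modeled on USENET's Breidbart Index, where a
# poster's score grows with how widely identical messages are spread.

class BehaviorThrottle:
    def __init__(self, score_limit=20.0, window_seconds=3600):
        self.score_limit = score_limit      # invented threshold for illustration
        self.window = window_seconds
        self.events = defaultdict(list)     # poster -> [(timestamp, score), ...]

    def record_post(self, poster, num_groups_crossposted):
        """Score a post by sqrt(number of groups it was sent to)."""
        score = math.sqrt(max(1, num_groups_crossposted))
        self.events[poster].append((time.time(), score))

    def is_throttled(self, poster):
        """True if the poster's recent behavior score exceeds the limit."""
        cutoff = time.time() - self.window
        recent = [(t, s) for t, s in self.events[poster] if t >= cutoff]
        self.events[poster] = recent        # drop expired events
        return sum(s for _, s in recent) > self.score_limit
```

A poster who sends one message to one group barely registers, while blasting identical copies across a hundred groups racks up a score fast, so the limit trips on repetition rather than viewpoint -- which is the "throttle spammers without censoring anyone" idea.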
YouTube bans very few videos, and Vimeo bans even fewer. Search engines do index controversial material. People who search for news using specific search terms will generally get unbiased results.
[ link to this | view in thread ]
Re:
Damn, but you love to otherword people.
No, it is unequal moderation. The whole point of this article, like others in its vein, is that moderation does not properly scale when a platform grows as big as Twitter. Mistakes will be made, hopefully (but not always) in good faith. When those mistakes hit you, you can either “route around” them—more on that in a bit—or you can get irrationally angry about a platform denying you a privilege you thought was an entitlement. Your choice.
(By the by: Those other fuckers should have been banned on principle, while you should have received only a metaphorical ass-kicking in your mentions.)
What, then, makes people like you feel the need to get all pissy about Google when y’all can “route around it” with ProtonMail, the Mastodon and PeerTube protocols, DuckDuckGo, and any non-Google service or protocol that competently recreates the functionality of existing Google services?
[ link to this | view in thread ]
It is if you refuse to offer any proof that it happened, up to and including the Twitter usernames of everyone involved as well as uncensored, undoctored screenshots of the tweets in question and (if possible) direct links to those tweets. Provide the proof and we will judge it for ourselves. Until then, your anecdote is bullshit and we will continue to call it such.
[ link to this | view in thread ]
Re:
So when someone isn't banned from Twitter for threatening to come to my home to kill me, and another claims to be standing outside my home, ready to shoot me, but when I get a ban for suggesting #metoo is hypocritical, that's not bias, just "good censorship."
This is a strawman. We never said that's "good censorship." Indeed, even if we take your version of the events that happened to you as an accurate depiction (which I find unlikely, but let's take it), that only serves to further prove the point: content moderation at scale is impossible to do well. It will always lead to mistakes. No one is saying that's "good censorship."
But we should be encouraging platforms to be thoughtful and careful in how they manage these things -- because they're under tremendous pressure to "do something." So mocking companies for having a thoughtful approach is just as ridiculous as saying they "shouldn't do anything."
[ link to this | view in thread ]
Re: Re: Re:
Pray tell, how has Twitter’s uneven moderation prevented anyone from using any other platform besides Twitter?
[ link to this | view in thread ]
Re: Re: SO after a what?
Saying Google has ignored "problems" does not imply they're ignoring "this issue". "Problems" could refer to specific ignored instances of an issue they're not, in general, ignoring.
[ link to this | view in thread ]
Re:
What? Of course it's bias. Censorship is always biased; any act of dividing texts into those that should be or shouldn't be censored is bias (as is the general decision of whether any censorship should exist, e.g., I'm biased against censorship even if others think it's "good").
[ link to this | view in thread ]
Here we go again!
I've been reading about how Instagram also enables abuse, noticing comments about irrelevant garbage showing up in the @mmasnick Twitter feed, and thinking:
What should be censored is *very* contextual...same content is OK or not depending on presentation. So garbage for *YOU* is fine in *MY* abuse collection. Oh, and at the moment, there are no consequences to anyone, hardly, for abuse. The difficulty also arises from "free" platforms...moderation has to come from somewhere, and 20K paid employees just aren't going to cut it for a billion users.
Therefore:
1) Push decision-making to the end users, crowdsourced. (See the "flag" button on Techdirt; it seems to work pretty well. A rough sketch of how this might fit together follows after this list.)
2) Every area/channel has an "owner" -- for YouTube it is whoever posts the video, for Twitter it is the person whose feed it is, etc. Give OWNERS good tools for the comments, including filtering out "burner" accounts, closing comments, etc.
(Yes, I've got a content farm of long-running burner accounts with lots of followers for sale, but anonymous horribleness has to have a cost somewhere)
3) Allow for multiple rating bodies...half the US is panicking about "pornography" while the other half is wondering when the first half will wake up to reality...and of course wants to ensure that their prissy employer's computers don't access anything NSFW so as to stay out of Title IX trouble!
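To make points 1 and 2 concrete, here's a rough sketch of how crowdsourced flagging plus owner-level controls might fit together. The class names, thresholds, and fields are all invented for illustration, not how Techdirt, YouTube, or Twitter actually implement anything:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of crowdsourced flagging (point 1) plus per-channel
# owner controls (point 2). Thresholds and field names are made up.

@dataclass
class Comment:
    author: str
    text: str
    author_is_burner: bool = False          # e.g. brand-new account, no history
    flags: set = field(default_factory=set)

    def flag(self, reader):
        self.flags.add(reader)              # one flag per reader, so a single
                                            # account cannot stack votes

@dataclass
class Channel:
    owner: str
    comments_closed: bool = False
    hide_burner_accounts: bool = False
    flag_threshold: int = 5                 # flags needed before a comment hides
    comments: list = field(default_factory=list)

    def post(self, comment):
        if self.comments_closed:
            return False
        if self.hide_burner_accounts and comment.author_is_burner:
            return False
        self.comments.append(comment)
        return True

    def visible_comments(self):
        # Hidden, not deleted: readers can still click through, as on Techdirt.
        return [c for c in self.comments if len(c.flags) < self.flag_threshold]
```

Point 3 would then layer on top as independent, reader-selected rating bodies whose labels filter what each reader sees, rather than one global ruling on what stays up.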
[ link to this | view in thread ]
Which means you support it, but you are unwilling to own your position until the Overton Window moves close enough to said position that it becomes acceptable.
Just say you want to sterilize “the poors” (or whatever segment of the population you want to start sterilizing) and get it over with. I promise, no one here will think any less of you than they already do.
[ link to this | view in thread ]
Re: Re: Re:
So yes, we don't need your anecdote (still presented with zero evidence, of course), or any anecdotes.
[ link to this | view in thread ]
Re:
Yeah, the tone-policing thing reeks of bullshit to me. It all but says that a White supremacist arguing for ethnic genocide in a civil and polite manner legitimately deserves more leeway than a Black man arguing for his right to live with less-than-polite language.
[ link to this | view in thread ]
Re: Re:
Got suspended for tweeting anger at the non-Jewish journalists involved in that hit-piece. One of the journalists even had a history of making casual antisemitic jokes.
[ link to this | view in thread ]
Re: Re:
Instead of using a paid USENET provider, a lot of folks switched to things like Yahoo Groups.
If the politicians really get involved in this fight, I wonder if we will see a repeat.
[ link to this | view in thread ]
Re:
As I said, it's the tip of the iceberg, and the extreme case is Alex Jones anyway. Back in the 1990s it was 2 Live Crew. In the 1960s it was Lenny Bruce.
In the 1970s, people wanted Three's Company banned for being too risqué, and now they show reruns of it on Nick At Nite.
I've seen enough of Twitter's enforcement to conclude bias, and enough of Google's to conclude that it is not. Other people obviously have different opinions.
I've also said that censorship is a problem that tends to solve itself. AOL was clearly biased in the 1990s and cost itself its position of dominance.
[ link to this | view in thread ]
Google didn't seem to find it very difficult to decide to build a censored search engine with queries linked to individuals' phone numbers to help the Chinese government stamp out dissent, though.
[ link to this | view in thread ]
Re: Re: Re: Re:
They compiled a video and claimed a random clip of him pointing at a corner was a Sieg Heil, or that him dressing up as an SS officer while playing a World War 2 game and mocking Nazis was evidence that he supported Nazis.
Many of the clips they pulled claiming he was a Nazi were, in context, impossible to mistake as having any relation to Nazis whatsoever. They forced that association in.
[ link to this | view in thread ]
So what?
Twitter is legally allowed to show bias in its moderation; any site that moderates third-party submissions is allowed that same right. To say that Twitter does not have that right is to say that YouTube, Archive of Our Own, FurAffinity, and even Stormfront—just for a few quick examples—have no right to show bias in favoring or disfavoring certain content. It would amount to saying a site like Twitter, AO3, etc. must host any form of legal speech no matter what. How would you feel if you opened a blog and you were forced to host a wholly unmoderated comments section because the law says you cannot show bias in your moderation?
Bias is not just “left vs. right”. Bias is pro-LGBT vs. anti-LGBT, pro-choice vs. anti-choice, pro-racism vs. anti-racism, pro-Google vs. anti-Google—in other words, it is the decision to favor a specific opinion or point of view over opposite-yet-similar opinions/views. If you hold the opinion that foul language has no place on your blog, you have the right to moderate your blog’s comments section in a way that conforms to your bias against words like “fuck”, “shit”, and “Barbra Streisand”. No law, statute, or court ruling says you must be forced to host such speech on your blog; if someone wants to use it, they can use it anywhere else that will accept it—but they cannot force it upon your blog.
I would think a notion such as “you can’t be forced to host speech you don’t wanna host” is non-controversial. Then again, people like you seem to think Twitter should be forced to host the speech of disinformation peddlers such as InfoWars, White supremacists such as Richard “I got alt-highfived and became a living joke” Spencer, absolute lunatics such as Donald Trump, or even just a worthless pissant furry with lots of free time on his hands.
[ link to this | view in thread ]
Re:
Especially when the criteria can be self-conflicting, even from one person. Nazi insults are one thing when they are being used as an example of abuse, another if they are being directed at someone.
[ link to this | view in thread ]
Re: Re: Re:
Were it proven, I'd call it a miracle that you proved anything.
_There are enough public examples not to need mine._
And this is exactly why you have no credibility, bobmail.
Shouldn't you be writing a self-help book on how to get all the pussy you can grab?
[ link to this | view in thread ]
Re: Re:
And I suppose you have a wholly objective set of criteria for fair moderation that can scale to handle a service the size of Twitter?
[ link to this | view in thread ]
Be careful what you wish for
It never ceases to be funny watching them arguing for a position that stands to screw them over if they actually 'win' and get it. It's like watching someone arguing for the immediate destruction of a bridge they are currently standing on; you almost want to see them get what they claim to want, just to see the look on their face when they realize what it means for them.
[ link to this | view in thread ]
'The commentor who cried 'death threats!'(among other things)'
Coulda swore there was some old story about that, something about a young shepherd and a wolf that didn't exist?
Memory's a little fuzzy, but I seem to recall some sorta moral about how if you make a habit of lying and/or get a reputation for dishonesty then even if you do happen to tell the truth at some point no-one will have any reason to believe you, and you'll have no-one but yourself to blame for that.
Completely unrelated of course, no idea why it even came to mind after reading your comment.
[ link to this | view in thread ]
'They said it, not me.'
'Don't take sides, moderate based upon tone, and crack down on abusive content.'
If that is supposed to be a smoking gun that Google is going after conservatives/Breitbart fans then that's a pretty damning picture they are painting of their own side. They are all but saying that conservatives are more likely than others to post content that would trip a troll/abusive content filter, such that they're basically their own critics.
[ link to this | view in thread ]
Re:
It sounds like they're moving towards it, but for the record, this report was written by Insights Labs. It contains recommendations from outsiders who were contracted, presumably, to inform Google about what they feel has been going on with the public perception of censorship online.
I used to think it was a sure sign of things moving in the direction of censorship, but after watching this video, I'm not so sure.
It could go either way. If this document, written by people Google itself contracted to give it recommendations, is listened to, then Google may step back from fiddling with search results and YouTube videos as much.
If not, it's as good as pissing in the wind.
I wonder if this isn't a PR stunt to "out" observers' own reactions. That's useful data to Google, too.
[ link to this | view in thread ]
Re: Re: SO after a week: IF "impossible", then Google IS doing it WRONG.
In politics, accepting donations and lobbying and pre-written bills to be signed into law is just free speech. (For one side of the aisle anyway.)
Do you really expect Copia Institute sponsors to be listed at the head of every article? No, no you don't. You wouldn't list such things repeatedly, either, when they are clearly available at a permanent and obvious (conspiratorially secret) link. And it would blow all the fun out of the water for you and your ilk.
Also, if Google owns Masnick, they should fire him for doing a really awful job of promoting teh Goog propaganda.
As for flat lying, see Breitbart, and also Fox News, who has a court ruling under their belt saying that they can lie to the public under the guise of "news" all they want. They proudly fought for their right to lie. (And they both censor the fuck out of anything "left" or which they otherwise don't like.)
Of course, y'all do have the right to generate fake news and play your fake martyr game. Have fun with that.
[ link to this | view in thread ]
Re:
Of course, if some elements of this are true, and the threats were true (as opposed to protected hyperbolic speech), then I would be calling the fooken cops, not Twitter. Then send Twit the police report.
Claims of directed bias are like other belief and pattern seeking effects of the unexamined human mind. In this case, as if _one being_ were looking at _all the things_ and particularly singled out a TwitUser with an agenda.
I am not saying it can't happen, that a lame employee could conceivably have it in for a particular TwitUser and somehow handles all the complaint material associated with that person, but for it to happen at scale is, frankly, ridiculous. And since not all anti-#me-too comments or accounts, for example, get deleted/banned, not even a sizable fraction of them, we may assume the claims of bias are brain farts or direct bullshit. (Yeah. They really do censor one out of a billion comments with their left-wing (? or whatever) agenda. It is really putting the kibosh on "conservatives". @@ )
[ link to this | view in thread ]
Re: Re: Breitbart
Like the black woman whom they claimed gleefully watched a white couple lose their farm instead of doing her job and helping them (the media found the white couple from the story; they said the black woman had saved their farm and that they were eternally grateful to her).
[ link to this | view in thread ]
Re: Re: Re:
Handing control (and an impossible problem) over to the Chinese Communist Party is much easier than solving an impossible problem yourself!
Remember, even my own personal criteria for fairness are self-contradictory! I know it when I see it, just like porn!
[ link to this | view in thread ]
Re: Re: Re: SO after a week: IF "impossible", then Google IS doing it WRONG.
> •Improve communications
Improve communications? Google wins the Understatement of the Year award with that one. I suppose anything would be a step up from 'nothing'. When someone is banned or suspended on any of these platforms, they're sent a vague computer-generated notice and given no ability to speak to anyone about it.
[ link to this | view in thread ]
Re: Re: 'The commentor who cried 'death threats!'(among other things)'
He might have (or at least claimed to have done so); after having warned him twice to stop with the unsupported claims and watching him continue to make them, I filed him under 'dishonest troll' and flag him by default, ignoring anything he says as a waste of time.
[ link to this | view in thread ]
Re: Re: Re: Re: Re: Re:
But that wasn't even the most damning part. The most damning part was all the examples where the Wall Street Journal made up the context out of thin air. As in, there was absolutely no relation whatsoever that any reasonable person could've made on their own; it took the Wall Street Journal claiming that him randomly pointing at a corner in one of his vids was a Sieg Heil.
[ link to this | view in thread ]
Re: Re: Re: Breitbart
White person here; I would never behave in such an abhorrent way. This is why we can't all get along: let's stop assuming the worst about the people we haven't seen "here" before.
[ link to this | view in thread ]
Re:
I really want to dismiss this claim as specious on the basis of the fact that the only people who use "Democrat" as an adjective seriously (outside of contexts like "Democrat [so-and-so's name]") are kneejerk right-wingers.
Unfortunately, that fact seems to be becoming increasingly obsolete and inaccurate these days...
[ link to this | view in thread ]
Re: Re:
I would add one more conditional on there - not only can they not legally prevent you from speaking by other channels [1], they also cannot practically prevent you from doing so.
If they could do (and were doing) the latter, you'd have a hard time convincing me that that would not still be a violation of the freedom of speech, although I readily grant that it would not be a violation of the First Amendment.
[1] I'm parsing "legally" here as "with the force of law", not "without violating the law". I.e., not that it would be illegal for them to prevent you from doing this, but merely that an attempt by them to prevent you from doing it would not be backed by government enforcement. If there are laws which would in fact make such prevention an illegal act on their part, I'm not able to think of what those laws are.
[ link to this | view in thread ]
Re: Re:
It can be a "lame employee" or a whole division that, instead of doing their jobs, ends up being a censorship center.
Your argument that "not all accounts using the same speech are banned, therefore censorship is nonexistent" is beyond stupid.
[ link to this | view in thread ]