Before You Talk About How Easy Content Moderation Is, You Should Listen To This
from the radiolab-explains dept
For quite some time now, we've been trying to demonstrate just how impossible it is to expect internet platforms to do a consistent or error-free job of moderating content. At the scale they operate, it's an impossible request, not least because so much of what goes into content moderation decisions rests on entirely subjective judgments about what's good and what's bad, and not everyone agrees on those. It's why I've been advocating for moving controls out to the end users, rather than expecting platforms to be the final arbiters. It's also part of the reason why we ran that content moderation game at a conference a few months ago, in which no one could fully agree on what to do about the content examples we presented (for every single one, at least some people argued for keeping the content up, and at least some for taking it down).
On Twitter, I recently joked that anyone with opinions on content moderation should first have to read Professor Kate Klonick's recent Harvard Law Review paper on The New Governors: The People, Rules and Processes Governing Online Speech, as it's one of the most thorough and comprehensive explanations of the realities and history of content moderation. But, if reading a 73-page law review article isn't your cup of tea, my next recommendation is to spend an hour listening to the new Radiolab podcast, entitled Post No Evil.
I think it provides the best picture yet of just how impossible it is to moderate this kind of content at scale. It covers the history of content moderation, and deftly shows that it can't be done at scale with any sort of consistency without creating new problems. I won't ruin it for you entirely, but it does a brilliant job highlighting how, as the scale increases, the only reasonable way to deal with things is to create a set of rules that everyone can follow. And then you suddenly realize that the rules don't work. You have thousands of people who need to follow those rules, and they each have a few seconds to decide before moving on. As such, there's not only no time to understand context, there's also little time to recognize that (1) content has a funny way of not falling within the rules nicely, and (2) no matter what you do, you'll end up with horrible results (one of the examples in the podcast is one we talked about last year, explaining the ridiculous results, but logical reasons, for why Facebook had a rule that you couldn't say mean things about white men, but could about black boys).
The most telling part of the podcast is the comparison between two situations in which the content moderation team at Facebook struggled over what to do. One was a photo that went viral during the Boston Marathon bombings a few years ago, showing some of the carnage created by the bombs. In the Facebook rulebook was a rule against "gore" that basically said you couldn't show a person's "insides on the outside." And yet, these photos did that. The moderation team said that they should take it down to follow the rules (even though there was vigorous debate). But they were overruled by execs who said "that's newsworthy."
But this was then contrasted with another viral video in Mexico of a woman being beheaded. Many people in Mexico wanted it shown, in order to document and alert the world of the brutality and violence that was happening there, which the government and media were mostly hiding. But... immediately people around the world freaked out about the possibility that "children" might accidentally come across such a video and be scarred for life. The Facebook content moderation team said leave it up, because it's newsworthy... and the press crushed Facebook for being so callous in pushing gore and violence... so top execs stepped in again to say that video could no longer be shown.
As the podcast does a nice job showing, these are basically impossible situations, in part because there are all different reasons why some people may want to see some content, and others should not see it. And we already have enough trouble understanding the context of the content, let alone the context of the viewer in relation to the content.
I've been seeing a microcosm of this myself in the last few days. After my last post about platforms and content moderation around the Alex Jones question, Twitter's Jack Dorsey was kind enough to tweet about it (even though I questioned his response to the whole mess). And so, for the past week or so, I've been getting notified of every response to that tweet, which seems pretty evenly divided between people who hate Alex Jones screaming about how Jack is an idiot for not banning Jones and how he's enabling hate mongers, and people who love Alex Jones screaming about how Jack is silencing dissent and how he's a liberal asshole silencing conservatives.
And no matter where on the spectrum of responses you may fall (or even totally outside of that spectrum), it should come down to this: we shouldn't be leaving these decisions up to Jack. Or Mark. Yes, those companies can and must do a better job, but what people fail to realize is that the job we're asking them to do is literally an impossible one. And that's why we really should be looking to move away from the situation in which they even need to be doing it. My solution is to move the controls outwards to the ends, allowing individuals and third parties to make their own calls. But there may be other solutions as well.
But something that is not a solution is merely expecting that these platforms can magically "get it right."
Filed Under: content moderation, radiolab
Companies: facebook
Reader Comments
YET you tacitly state that Google, Facebook, Twitter colluded
rightly to "de-platform" Alex Jones.
Again, this is just you having everything your way; it's not in the least consistent.
Anyone, state and link to something out-of-bounds from Alex Jones. -- And which isn't easily matched by my finding a piece on Techdirt repeating utterly unfounded "conspiracy" of Trump-Russia collusion. Comparatively, Jones is WELL within both visible evidence and likelihoods. It's ONLY that you neo-liberal partisan and netwits want to believe.
Yeah, as I predicted: irresolute weenies.
Look. What's acceptable is KNOWN to prosecutors and when in a court, else there'd be vastly more cases brought. Doesn't take any real acumen to study cases and state a simple policy aligned with common law. -- Dang near every web-site EXCEPT Techdirt has such stated clearly, AND no other web-site that I know of resorts to the hidden cheating of claiming that "the community" with a "voting system" is responsible. YOU won't even state the numbers of alleged clicks, or whether an Administrator has final approval. -- Though we all know one does, as proven by the sudden change on MY comments in August-September. -- The way YOU are doing "moderation" isn't acceptable under common law.
Re: YET you tacitly state that Google, Facebook, Twitter colluded
I think Mike is pointing out that at scale, inconsistency is *inevitable*.
I absolutely refuse to give Alex Jones traffic via a link, but his whole “the Sandy Hook shooting was a false flag operation with crisis actors playing the roles of dead kids and grieving parents” schtick seems pretty fucking out-of-bounds to me.
More evidence exists to corroborate that alleged collusion than will ever exist to corroborate Jones’s claims about the Sandy Hook massacre.
What specific evidence proves true, beyond a reasonable doubt, Jones’s claim that the Sandy Hook massacre was a false flag operation carried out by crisis actors?
“Legally acceptable” is not always the same thing as “morally acceptable”. To wit: Alex Jones’s claims about the Sandy Hook shooting. (And the “legally acceptable” status of those claims are in question.)
Reddit immediately and prominently comes to mind.
Why should they?
If you really believed Techdirt’s moderation is illegal, you would have already filed a lawsuit over it.
Weren’t you people just saying collusions not a crime?
Re: YET you tacitly state that Google, Facebook, Twitter colluded
So...you didn't listen to it.
Is this a corollary of your inability to read?
Re: YET you tacitly state that Google, Facebook, Twitter colluded
I'll assume you mean common law for the US, as this is a US based site and is thus under US law.
Common law is derived from judicial precedent. Given this knowledge... You're contradicting your own argument, as case law regarding comments often brings up CDA 230 (a statutory law).
CDA 230 does not require a voting system. Logically, this also means it does not require the site to say how the "voting" (in this case, by means of flagging) system works.
Re: YET you tacitly state that Google, Facebook, Twitter colluded
Newly leaked confidential Media Matters / Soros policy memos reveal that the recent de-platforming of Alex Jones and others are just a small part of a larger effort to destroy free speech and replace it with a bastardized corporate controlled shit show:
"In the next four years, Media Matters will continue its core mission of disarming right-wing misinformation...
"....Internet and social media platforms, like Google and Facebook, will no longer uncritically and without consequence host and enrich fake news sites and propagandists..."
https://www.scribd.com/document/337535680/Full-David-Brock-Confidential-Memo-On-Fighting-Trump#from_embed
Oh good, now I know I can add you to the list of regular trolls.
Re: Re:
Uses Soros bogeyman without any sense of irony.
Re: Re: YET you tacitly state that Google, Facebook, Twitter colluded
Lol. A researcher who uses themselves as the primary source of information is always going to be a very poor one.
"Media Matters "
Huh, I've never seen those on the usual lists of boogeymen you people use. Did they uncover something embarrassing about someone recently?
Re:
He has a platform all of his own - Infowars (among others). He does not have any inherent right to use anybody else's space.
"Anyone, state and link to something out-of-bounds from Alex Jones"
As mentioned before, the Sandy Hook false flag theory was way, way out of bounds, and that one has led to the parents of victims being harassed in real life. Popular platforms do not want or need this kind of toxic lunatic infecting them, and they do not have to accept him if they don't want.
"no other web-site that I know of resorts to the hidden cheating of claiming that "the community" with a "voting system" is responsible"
Off the top of my head: Slashdot, Ars Technica and Reddit all do this, although there are certainly others. However, I've never seen anybody so consistently moan and bring up stupid conspiracy theories about those systems; it's only here I see such a thing.
You definitely need to brush up on your reading. Maybe try something outside of right-wing nutjob sites known to lie to their readership? You might accidentally catch a glimpse of the reality the rest of us live in.
Although, ironically, such sites never seem to have a problem outright deleting comments they disagree with, so this site is definitely better than the ones you've admitted to reading, even if your dumb theory was close to reality.
Unicorns...or not
To my mind, we need more competition...so Twitter and Facebook don't have quite as much influence. That way Jack and Mark can be as megalomaniac as they like on their personal platforms.
To get the diversity competition brings, Twitter and Facebook (hell, Craigslist too??) should not have any copyright in the user-generated content. Maybe some of the other barriers to entry can also be removed.
Re: Unicorns...or not
I think small communities inherently work better. Only by being a member of a community can a moderator understand context.
Remember the story a few months back about Twitter suspending That Anonymous Coward for using an offensive term in reference to himself? A moderator who knew him would have understood the context of his comment and would not have punished him for it.
Even in small communities, there's going to be controversy over moderation action, or a lack thereof. No community is ever going to agree 100% on everything, unless it's a "community" of just one person. There's no perfect system. But community moderation is a better system than moderation by faceless employees of some megacorporation.
Re: Re: Unicorns...or not
I think small communities inherently work better. Only by being a member of a community can a moderator understand context.
I agree with this in general... but, on the flip side, I also see the massive benefits of enabling large communities to communicate with one another. The number of people I've met via Twitter has been incredibly useful. So I struggle with how to put those two issues together.
I know that Mastodon is one attempt to do so, where you could have multiple smaller communities that "federate" and allow inter-instance communication, but that has some other issues as well.
And those issues are not lost on both developers and users of Mastodon/the Fediverse, given how some Masto instances are forking the software and adding features they believe will improve it.
Re: Re: Unicorns...or not
Throwing a request for help into the Twitter ocean can get a response from a combination of people to solve the problem, or give the answer, far faster than any bureaucracy can route it through all its chains of command.
It is the ability for people to retweet that makes Twitter a powerful tool for both good and evil. Because of this, I agree with Mike: give users the tools to tailor what they see, and the APIs, so that a group of people can route through a common filter if that is what they desire.
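The "tools and APIs" idea above can be made concrete. Here's a minimal sketch of what user-side moderation might look like (all names here are invented for illustration, not any platform's actual API): the platform serves the raw timeline, and each user composes whatever filters they choose, whether their own rules or lists published by third parties they trust.

```python
from dataclasses import dataclass
from typing import Callable, List, Set


@dataclass
class Post:
    author: str
    text: str


# A "filter" is just a predicate: True means "show this post".
Filter = Callable[[Post], bool]


def blocklist(authors: Set[str]) -> Filter:
    """Hide posts from specific accounts (e.g. a shared third-party list)."""
    return lambda post: post.author not in authors


def keyword_mute(words: Set[str]) -> Filter:
    """Hide posts containing any muted keyword."""
    return lambda post: not any(w in post.text.lower() for w in words)


def apply_filters(timeline: List[Post], filters: List[Filter]) -> List[Post]:
    """A post is shown only if every subscribed filter allows it."""
    return [post for post in timeline if all(f(post) for f in filters)]
```

The point is architectural: the platform never decides what anyone may say. It exposes the raw feed, and the judgment calls move out to the edge, where users can swap filters as easily as they swap browser extensions.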
Re: Re: Re: Unicorns...or not
It's godawful for saying anything complex or nuanced.
Re: Re: Re: Re: Unicorns...or not
The way Twitter works makes it a good tool for top level notification, responses, and providing an overview of what is happening.
Re: Re: Unicorns...or not
The problem always fixes itself when hungry competition exploits the censorship. Slowly, the web caught up to AOL, and by 2002, AOL was irrelevant. Best thing to do here would be to short the stocks of the censors as they will eventually crash as twitter did from the 50s to the teens.
It's not backlash but irrelevance that harms the censors. The discussions become bland, and people seek the truth. In one controversial self-help area, AOL censored everyone, as did Prodigy, and everyone landed on USENET, their ideas caught fire, and literally took over the world.
USENET still exists, btw. If people wanted free speech they'd be using that. YouTube is actually quite permissive, much more so than Twitter or Facebook, which is why I believe Google will ultimately win the FANG battle.
Re: Unicorns...or not
This is likely why the current major social media platforms are doing well in spite of their flaws.
Re: Re: Unicorns...or not
There's a twitter ocean out there, and the only barrier to sharing, say, the "techdirt mastodon feed" (which would also let you search the ocean, because twitter has no legal barriers to just that) would be whether Mike Masnick felt like getting it programmed and provisioned.
Re: Re: Re: Unicorns...or not
Eventually another service will siphon their audience by allowing true free speech. Google does this pretty much with YouTube, perhaps because they know if they chase people off, while they can afford to lose the revenue, they can NOT afford to have a competitor grow strong enough to challenge them.
Google also protects the intellectual property of its creators so well that it's the only place one can count on not being pirated, though "copycat" videos are more of an issue (but that's not a copyright law issue).
Amazon has hosted files for major piracy sites and destroyed the market for e-books with their "royalty" structure that pays 70 percent if you charge between $2.99-9.99 a copy. This is almost price-fixing, and it forces authors to conform to a vertical constraint which is generally not tolerated (it's the kind of thing that broke up the movie studios and theaters). I'd rather put a book on video on YouTube for people to watch as I scroll through the pages than sell it on Amazon. Another thing Google does is eliminate advertising conflicts of interest. Videos on the law are not always sponsored by law firms, for example, so there's no holding back on the content the way there would be if free material were used to market premium material.
To divert to copyright for a moment, I agree that middlemen stink, that corporations exploit artists, and that the audience has a right to demand free samples, but I do not agree that anyone should use any of the above (or lack thereof) as a rationale to justify piracy or weakening protection, unless you want to just put everything in the public domain and let the best techie win.
Content moderation is a nonstarter. Every time there has been censorship someone has risen up from it. Even Twitter had a no-holds-barred approach when it was building its audience, and something like Gab could easily catch fire.
I've been tempted to pitch a defamation-free search engine that also does not allow links to infringing material even if the engine is immune, as an alternative to what we have now. How about an opt-in search engine, or a link-based portal that is constructed by humans, creating more jobs and spreading the wealth?
The internet is too decentralized for censorship to be an issue. Too many ways around it, even if they seem inconvenient. Anyone who wants an audience will certainly find a way to build one, and any platform that wants money will find a way to accommodate them. The system really does work. This website is a good example of that, even if the guy who runs it uses slanted language, something frowned upon in journalism.
Re: Re: Re: Re: Unicorns...or not
I agree that middlemen stink, that corporations exploit artists, and that the audience has a right to demand free samples
That's a first.
but I do not agree that anyone should use any of the above (or lack thereof) as a rationale to justify piracy or weakening protection
Sure, because the alternative of throwing more money at the corporations you claim to loathe so much, as well as letting invasive software ruin our machines as a penalty for purchasing legal products like you wanted, has clearly proven to be the constructive solution to solving the issues that you purportedly hate.
Oh, wait, no it hasn't. It's just far easier for you to demand a pound of flesh from the rest of the planet because your corporate masters ripped you off. And after you sucked their cocks and everything! My heart bleeds, truly.
Er...
Your two examples, at least as stated, seem fairly possible to resolve. In both instances I'd say the newsworthy "verdict" is quite accurate. The only distinction seems to be the press losing their minds (my words, not yours and, yes, hyperbolic) over the second example, prompting a change in response to their pressure. That is my take on that, at least with the information you provided.
Re: Er...
Your two examples, at least as stated, seem fairly possible to resolve.
Now resolve a million of those a day, with 5 seconds to decide. And don't make any mistakes or you get slammed in the press. Or don't make any that upset Republicans, or Democrats, or minority groups, or majority groups. Good luck.
Because taking any longer would leave many other reports sitting in the inbox. Inefficiency gets people fired, after all.
Re: Re:
Two questions:
1) How much time(on average) would be 'enough' to fairly judge a particular piece, consider history of the poster for stuff like implied sarcasm/parody vs seriousness, consider context of the piece within that account and in general, decide how 'newsworthy' it might be, and other factors what could result in an otherwise 'obvious' violation being allowed?
2) Assuming doing the above one million times a day, how many people would you estimate would be needed to properly vet said content in a timely manner?
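The arithmetic behind those two questions is worth actually running. A rough back-of-the-envelope sketch (the shift length and review times are illustrative assumptions, not Facebook's actual figures):

```python
import math

REPORTS_PER_DAY = 1_000_000   # the hypothetical volume discussed above
SHIFT_HOURS = 8               # assumed productive review hours per moderator per day


def moderators_needed(seconds_per_review: float) -> int:
    """Full-time reviewers required to clear one day's report queue."""
    total_hours = REPORTS_PER_DAY * seconds_per_review / 3600
    # Round up: you can't staff a fraction of a reviewer.
    return math.ceil(total_hours / SHIFT_HOURS)
```

At 5 seconds per item, that works out to roughly 174 full-time reviewers; give each report a full minute and you need over 2,000; a leisurely 5 minutes per item pushes the headcount past 10,000, before accounting for time zones, languages, appeals, or reviewer burnout.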
Re: Re: Re: Er...
Why only 5 seconds to decide?
Listen to the podcast... That's all the time that people have to review stuff because there are so many pieces of reported information. In short: the content keeps flowing and flowing. And, no, the answer isn't just "hire more people." They're doing that. But the content and report clicks are coming faster.
News Flash: Facebook creates "trustworthiness" score
That seems like a good force multiplier for Facebook moderation.
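One hedged guess at how such a score could act as a "force multiplier" (this is speculation, not Facebook's documented system): weight the review queue by the reporter's track record, so reports from people who have historically flagged genuine violations reach human reviewers first.

```python
from typing import Dict, List, Tuple

# A report pairs the reporter's id with the reported content's id.
Report = Tuple[str, str]


def triage(reports: List[Report], trust: Dict[str, float]) -> List[Report]:
    """Order the queue so reports from high-trust reporters are reviewed first.

    Unknown reporters get a neutral 0.5 score rather than 0, so new users
    aren't silently ignored; habitual false-flaggers sink to the bottom.
    """
    return sorted(reports, key=lambda r: trust.get(r[0], 0.5), reverse=True)
```

This doesn't make any individual decision easier, of course; it just spends scarce reviewer-seconds where they're statistically most likely to matter.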
Children, not boys
The line:
"you couldn't say mean things about white men, but could about black boys"
should read
"you couldn't say mean things about white men, but could about black children".
The linked article talks about children, not boys. You could never say mean things about black boys on Facebook (two protected descriptors), although the identifier of "children" wasn't protected.
And I think that I recall that in the Radiolab story, it was mentioned that this rule was changed, as it should have been, so that you can no longer say mean things about black children.
What needs to happen...
Amazing that people need to keep going back to see what else can offend them just so they can biatch about it.
No one NEEDS social media. If all it does is upset you then you're doing it wrong
The modern "information superhighway" is built on a house of cards created by "tolltakers" who declared themselves essential to its existence (they're not), and set up toll booths which enabled them to steal hundreds of billions of dollars.
As they say here about copyright, if a business model is unsustainable, it deserves to perish. That goes for Big Internet (and the monopolistic practice of law) as much as it does for copyright.
HEY, why weren't "platforms" controlling speech from the start?
Answer: because they deliberately kept away from controversy until they were big and influential enough that they believed they could make their move to gain total power.
However, legally, that puts them in the wrong through deliberate nonfeasance: breaking the explicit deal, and having previously accepted everything without the least caution, so that now there is a huge and apparently causeless change.
Alex Jones is indeed a good example, has been ranting about Sandy Hook for 5 years. SO WHY NOW are these globalist corporations cracking down?
In every way, these corporations have proved themselves lurking evils, just waiting for right time to attack. -- What else are they planning?
Re: HEY, why weren't "platforms" controlling speech from the start?
Do...do you smell toast?
Re: Re: HEY, why weren't "platforms" controlling speech from the start?
The fact that they don't enforce their terms equally AND the fact that they did not enforce their terms AT ALL for YEARS means that their current de-platforming event could be illegal.
Show me the law that Google, Facebook, Twitter, etc. broke by kicking Alex Jones off of their respective platforms.
Re: Re: Re: HEY, why weren't "platforms" controlling speech from the start?
It's embarrassing.
We get it, you’re an anti-Semite.
I agree. But let's talk about what really happened
Now I'm not jacking your topic, no pun intended, but let's talk about the big obese elephant in the room first, and that's the recent coordinated de-platforming of Alex Jones.
Newly leaked confidential Media Matters / Soros policy memos reveal that the recent de-platforming of Alex Jones and others are just a small part of a larger effort to destroy free speech and replace it with a bastardized corporate controlled shit show:
"In the next four years, Media Matters will continue its core mission of disarming right-wing misinformation...
"....Internet and social media platforms, like Google and Facebook, will no longer uncritically and without consequence host and enrich fake news sites and propagandists..."
https://www.scribd.com/document/337535680/Full-David-Brock-Confidential-Memo-On-Fighting-Trump#from_embed
So basically we already have a conspiracy led by a corrupt un-elected billionaire to control and direct speech in this country through coordinated effort of several of the largest internet companies.
My Point?!?!?
The amount of pressure, effort, internal apparatchiks, and political gymnastics involved in pulling this off is again nothing short of brilliant. Again, I have nothing but admiration for George Soros and his "open" foundation organizations' ability to subvert and manipulate society to his own twisted will.
HOWEVER, what you propose is basically, consolidating and centralizing a system which could be a shared objective resource which these companies would defer to in the future when it comes to content moderation. I admire you Mike, and your idealistic disregard of reality. Unfortunately, intentional or not, all you are going to achieve is to make it easier to destroy practical access to speech and information.
Sorry to rain on your gay parade, but history shows that centralization of power never increases freedom.
Please provide proof of your claim and the necessary citations required for verifying your evidence. FYI: That Scribd link is not proof; nothing in that document shows anything about any sort of plan to censor or outright control the speech of others. (Saying “we want to fight disinformation” is not the same thing as saying “we want to shut up Fox News and Donald Trump forever”.)
How is the idea to “move the controls outwards to the ends, allowing individuals and third parties to make their own calls” anything close to the idea of “consolidating and centralizing a system which […] these companies would defer to in the future when it comes to content moderation”?
…which is likely one reason why Mike advocated for the exact opposite of giving too much power to the social media companies.
Re:
I provided proof. Your bizarre retort only fools the most stupid and ignorant.
The so-called "scribd link" is actually a link to a pdf file from Media Matters. Scribd is a document hosting company. Nice try though.
You've lost all credibility by attacking 'the link' and there is no point in continuing any further discourse.
Re: Re:
I like your writing, and hope you will continue. If you get ruffled by Stephen, you are only playing into his (slimy) hands.
Continue to share your opinions, please. You are an interesting writer and seem to have skin thick enough to hang out here, such individuals are few and far between.
Publius
Re: Re: Re: Re:
Those nutjobs don't represent anything remotely Rand-ian...
Where does that document say, explicitly and unambiguously, that Media Matters is trying to outright control or censor the speech of others?
I did not attack “the link” (or the website it leads to), I attacked the idea that the document sitting behind that link says what you claim it says. If you were confident enough in your claims to back them up without bullshitting your way into “credibility” (or offering an anti-Semitic conspiracy theory about George Soros), you could give me a straight answer to that question I just posed.
Re: I agree. But let's talk about what really happened
We call it 'civilization'. It's a whole group of us that get together, and decide what's acceptable and what's not, based on social norms.
Quite "surprisingly", a self-admitted fantasist, creating items he's stated under oath to be "fictional" and "invented for the purposes of entertainment", doesn't get the privileges of factual reporting. How shocking that this "civilization" conspiracy has foiled his efforts to dump that skanky shit-peddler where his too-shit-for-spam-email pills deserve to be: nowhere.
Soros! drink!
Re: I agree. But let's talk about what really happened
Newly leaked confidential Media Matters / Soros policy memos reveal that the recent de-platforming of Alex Jones and others are just a small part of a larger effort to destroy free speech and replace it with a bastardized corporate controlled shit show:
Why bring up Soros?
Last I checked, Media Matters controls neither Google nor Facebook (and, is more or less considered a partisan joke).
So basically we already have a conspiracy led by a corrupt un-elected billionaire to control and direct speech in this country through coordinated effort of several of the largest internet companies.
No. You have a document put together by a small partisan operation expressing its own goals, which have exactly as much impact on the operation of large platforms as the demands of, say, Breitbart.
HOWEVER, what you propose is basically, consolidating and centralizing a system which could be a shared objective resource which these companies would defer to in the future when it comes to content moderation.
What? I proposed literally exactly the opposite. I have proposed pushing everything out to the ends in a more distributed system. I advocate for the end users getting control over all of their own data and content, so that the platforms don't control it. I've advocated for the platforms taking a hands off approach on content moderation and instead providing tools and APIs so that others can either provide their own filters or interfaces, or that end users can design their own.
So why would you claim I'm advocating for literally the exact opposite of what I actually advocate for, other than being a total and utter troll?
history shows that centralization of power never increases freedom
I agree. Which is why I've long advocated for more decentralization, entirely contrary to your claims.
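The decentralized approach described above — platforms handing out raw feeds plus filter hooks so end users choose their own moderation — could be sketched roughly like this. Every name in this sketch is hypothetical; no real platform exposes this API, it only illustrates the idea of user-chosen filters replacing a single central policy:

```python
# A minimal sketch of user-side moderation: the platform returns raw
# posts, and each user plugs in their own filters rather than relying
# on one central policy. All names here are hypothetical.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str

# A user-defined filter is just a predicate over posts.
Filter = Callable[[Post], bool]

def apply_user_filters(posts: List[Post], filters: List[Filter]) -> List[Post]:
    """Keep only posts that pass every filter this user has chosen."""
    return [p for p in posts if all(f(p) for f in filters)]

# Example: one filter blocks certain authors, another blocks keywords.
blocklist = {"spammer123"}
banned_words = {"miracle pills"}

my_filters = [
    lambda p: p.author not in blocklist,
    lambda p: not any(w in p.text.lower() for w in banned_words),
]

feed = [
    Post("alice", "Interesting article on moderation"),
    Post("spammer123", "Buy miracle pills now!"),
]
print([p.author for p in apply_user_filters(feed, my_filters)])  # → ['alice']
```

The point of the design is that `my_filters` belongs to the user, not the platform: two users reading the same raw feed can see entirely different results, which sidesteps the one-rulebook-for-everyone problem the article describes.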
CDA 230?
On the one hand you happily dismiss any lawsuit that threatens platforms by pointing to CDA 230; on the other hand, now you eschew the burden behind keeping the CDA 230 loophole open, by being required to at least moderate user content?
I smell a little bit of hypocrisy here.
Cheers, Oliver
Re: CDA 230?
But what do you know, you’re almost certainly a troll.
Re: CDA 230?
On the one hand you happily dismiss any lawsuit that threatens platforms by pointing to CDA 230; on the other hand, now you eschew the burden behind keeping the CDA 230 loophole open, by being required to at least moderate user content?
CDA 230 does not require moderation of content. So I'm not sure where the hypocrisy is. CDA 230 does enable moderation of content by saying a service provider is not liable for the moderation choices. But it does not "require" it.
So... what hypocrisy are you talking about? My position is entirely consistent.
It's not impossible -- or even hard
And overlaying all of that is their myopic, simplistic view that more is better. It's not. Twitter would be vastly more useful at 10% of its current size, a goal easily achieved by permanently blacklisting the people, domains, and networks responsible for abuse.
This isn't difficult. It's a lot of hard work, but it's not difficult. It's much easier for snivelling coward Jack Dorsey to whine and whine than it is to actually roll up his sleeves and do what needs to be done.
If you say so
Well damn, in that case sounds like there's a grand opportunity just waiting for some brilliant go-getter to jump on it, and as someone claiming that it's an easy thing to do sounds like you're just the kind of person to do it.
As such I'm sure many will be waiting with eager anticipation for you to roll out the competing platform that will utterly demolish both Facebook and Twitter and show them how they should have been doing it.
Re: It's not impossible -- or even hard
Twitter would be vastly more useful at 10% of its current size, a goal easily achieved by permanently blacklisting the people, domains, and networks responsible for abuse.
They did. Then you pissed and moaned after Jared Taylor got his ass kicked off Twitter. There's no fucking pleasing you, Hamilton.
"Well MY standard says you're off, and since I run the place..."
I don't think that's Hamilton, but whoever they are, you do bring up a good point, and one I wish I'd caught. They claim that the services could be vastly improved by simply giving the boot to those abusing them. Great, abusing the platform according to whom? Maybe that's exactly what the platforms are starting to do, and in that case would they still be for it?
Obvious
On either side are absolutists. Any absolute position, such as "You can't moderate anything because it's a violation of my natural rights," is obviously unworkable. The answer has to sit somewhere in the middle.
The best comments always start with "It's easy, just..." because I know damn well none of those people actually have a working solution to this, or they would be shouting the praises of their site instead of sniping from their anonymous bunkers.
Or "Mike is doing it all wrong..." while screaming about some ideology divorced from actual law. (Again, show me how your service works with absolute lack of moderation before you criticize the light hand of TD.)
If there was an easy way this wouldn't generate so much heated debate.
Re: Obvious
Not only that, but the problem is complex, so complex in fact that it finds itself having to constantly balance on two or more conflicting horns of the dilemma: newsworthy? too gory? too spammy? encourages fistfights? off topic? false? too un-funny? threats?
I take Mike Masnick's position on this: decentralizing and diversifying the decisionmaking is best. The question is whether legal changes over CDA230 (such as not allowing copyright on user-generated content) make sense or not.
"the job we're asking them to do"
"We" want our content moderated because "we" cannot handle the idea of doing it for ourselves; it's too fccking hard. And if "we" did it ourselves, we'd have no-one to put the blame on, because it simply cannot be "my" fault.
And that is just sad.
If you don't like something, don't look at or read it. If you don't feel it is appropriate for your children, then take the time to BE A PARENT: curate what they have access to and take the time to EDUCATE THEM.
And it's none of your fccking business how others do their parenting, as long as they are being parents.
Re: "the job we're asking them to do"
It is more about a certain fringe of others that get taken in by the honey-tongued devils and do evil of various degrees. For example, the poor soul that believed in Pizzagate and is now rotting in prison. For example #gamergate. For example doxxing and harassing the Sandy Hook survivors.
Or how about the Slenderman girl, the one that tried to murder her best friend?
Re: Re: "the job we're asking them to do"
That is the same reasoning that has led to the security theater following 9/11. In other words it is well into the realm of good intentions that pave the road to hell. Those on the fringe need a safety net, and are not helped by a clamp down on the rest of society to try and protect them.
Re: Re: Re: "the job we're asking them to do"
The problem is, the fringe is all kinds of shades of gray, and what are we supposed to do about my acquaintance, otherwise functioning, who believes in this #QANON crap and wants to act on it?
I claim that decentralization is a reasonable direction. No, it won't address the panic in many that they are losing their position in the hierarchy as normalcy is upended and the purity of their childhood is lost (what are all these different-looking people, why can't I beat them up, and how can two men get married???)
I don't know how to address the panic...sorry.
Re: Re: Re: Re: "the job we're asking them to do"
Decentralization works for some things, while centralization works for others. Pushing fringe and extremist groups into their own little enclaves is the best way of allowing them to become more extreme and weird in their views as you remove any moderating influences and increase their sense of isolation.
Also, a problem with federated systems is that they do not scale particularly well, which is one of the reasons that RSS, for example, is not as useful as it should be. A popular node can end up with excessive traffic, especially when compared to its direct user base, who are paying for the resources. It is also more difficult to make connections in a federated system, as it lacks the reach of a centralized system's untargeted open messages.
A federated system has advantages when it comes to communications that should only reach a limited audience, such as family and smaller social groups.
Therefore, in many ways federated and centralized systems are complementary, with different strengths and weaknesses. Federated would be a good choice for an extended family to stay in touch, while a centralized system is better when a creator wants to announce new content or inform fans of a delay. Federated is better for local politics and issues, but centralized for larger-scale politics.
That should be: what are you and your friends and acquaintances going to do about it? There is nothing I, as a stranger, can do about the problem. However, if you are looking for something to debunk a conspiracy theory, a centralized system is going to be more useful to you, as it gives the widest coverage of debunking efforts and a better chance of finding a counter-argument that resonates with your acquaintance.
Re: Re: Re: Re: Re: "the job we're asking them to do"
Hmmm...seems I'm fresh out of ideas and having a *moral panic* (TM Techdirt!)
That's right, twitter and facebook are guilty by association....just like in those ridiculous "material support of terrorism" lawsuits that keep getting dismissed.
It's not like this is the first time in history that desperate, panicked people like my friend have reacted irrationally and cruelly!
Re:
Couple that with politicians, who do not accept that doing nothing is often the best solution at their level, and more paving slabs get laid on the road to hell.
Convert HTML to Feed
Google Search? Facebook? Twitter? I convert all this and much more. Forums? Search pages on e-commerce sites? Yes, yes, yes!
This way I can use my feed reader to filter everything with a blacklist of regular expressions, then send the updates as e-mail messages. Doing this I don't waste time with content I already know I don't want to see, as it is all filtered automatically. Plus some keywords are highlighted to make it easier to eyeball some categories of content.
I can easily check 1000+ pages multiple times per day and not see a single advertisement.
These companies don't need to know what I like and what I don't like. I do my own computing and don't need their black box algorithms.
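The setup this commenter describes — running every feed item through a blacklist of regular expressions, then highlighting a few keywords in what survives — could be sketched like this. The patterns and keywords here are invented examples, not the commenter's actual lists:

```python
# A rough sketch of the commenter's pipeline: drop feed items matching
# a regex blacklist, then highlight keywords in whatever survives.
# All patterns below are illustrative stand-ins.

import re

BLACKLIST = [re.compile(p, re.IGNORECASE) for p in [
    r"\bsponsored\b",
    r"\bgiveaway\b",
]]

HIGHLIGHT = [re.compile(p, re.IGNORECASE) for p in [
    r"moderation",
    r"CDA\s*230",
]]

def wanted(item_text: str) -> bool:
    """Drop any item matching a blacklist pattern."""
    return not any(rx.search(item_text) for rx in BLACKLIST)

def highlight(item_text: str) -> str:
    """Wrap matched keywords in ** ** so they stand out in an email digest."""
    for rx in HIGHLIGHT:
        item_text = rx.sub(lambda m: f"**{m.group(0)}**", item_text)
    return item_text

items = [
    "Sponsored post: win a giveaway!",
    "New ruling touches on CDA 230 and moderation",
]
kept = [highlight(t) for t in items if wanted(t)]
print(kept)  # → ['New ruling touches on **CDA 230** and **moderation**']
```

Since everything runs locally in the reader, the filtering rules stay entirely under the user's control — exactly the "do my own computing" point the comment is making.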