Robert Reich Loses The Plot: Gets Basically Everything Wrong About Section 230, Fairness Doctrine & The 1st Amendment
from the dude,-no dept
I still find myself somewhat amazed at how otherwise intelligent people seem to lose their entire minds over the fact that there's a fair bit of misinformation out there. Robert Reich is not a dumb guy, but like so many these days, he seems to work himself into a lather about things he doesn't understand. He has an opinion piece at the Guardian about his suggestions to restore American democracy... and apparently part of that is throwing out the 1st Amendment. Which is, you know, quite a choice. I won't comment on his first and third suggestions (voting rights and money/politics) because those aren't my areas of expertise, but when he dips his toe into Section 230 and misinformation, I have to point out that everything Reich writes in this piece is so far beyond wrong that it would need to ask directions just to get back into the vicinity of "just kinda wrong."
The second step is to constrain big anger emanating from social media, Fox News, and other outlets. There are two ways to do this without undermining freedom of speech.
Hmm. So, I've been among those who have pointed out that the actual evidence has repeatedly shown that Fox News is a much more important vector of misinformation than social media, but at least I recognize you can't do anything about that legally, because of the 1st Amendment. So I'm curious how Reich thinks he's going to do this without infringing on the 1st Amendment (spoiler alert: he's not going to do that, he's just going to ignore the 1st Amendment). Also, it's kinda weird how he lumps social media and Fox News together, since they're very different and, as such, require very, very different strategies to counteract. If Reich spoke to an expert rather than pontificating out of ignorance, he might have learned that.
Revoke Section 230 of the Communications Act, which now protects digital media providers from liability for content posted by their users even if that content is harmful, hateful or misleading. There is no continuing justification for this legal protection, particularly at a time when the largest of these providers have become vast monopolies.
I mean, this is just beyond clueless. Perhaps he missed the famed NY Times correction (that it has run verbatim multiple times)?
I mean, anyone with even a passing familiarity with the 1st Amendment knows that "harmful, hateful, or misleading" speech is protected by the 1st Amendment (there is a tiny, tiny, tiny exception, for a very, very narrow form of "harmful" speech -- that which meets the Brandenburg standard -- but that wouldn't apply here and 230 wouldn't help either for that exception). All 230 does is help get these frivolous lawsuits tossed out procedurally earlier, which is a huge benefit.
Also, that last line is particularly stupid: Section 230 protects all websites that host 3rd party content and users who share that content. The fact that a very small number of those companies are large changes nothing about that.
Even worse, if Reich knew even the first thing about the history of Section 230, he'd realize how totally blinkered and backwards this suggestion is. Section 230 is what allows websites to experiment with better ways to stop disinformation. Section 230 came about because of a lawsuit whereby a website was found liable for information it left up because it moderated other content. So if you get rid of 230, sites that moderate face more legal liability. That means some of them will decide the better approach is not to moderate at all, which would better protect those sites given the history of the law in this area.
Reich's second suggestion is no better.
Create a new “fairness doctrine” requiring that all broadcasters, including cable, cover issues of public importance in ways that present opposing perspectives. This will be difficult to enforce, to be sure, but it would at least affirm the nation’s commitment to holding broadcasters to a higher standard than merely making money.
Again this shows a total ignorance of the fairness doctrine, including how it worked in practice and the legal basis under which it was allowed to exist. It can't cover cable. That's not even an open question. Under the doctrine that (narrowly!) allowed the fairness doctrine to apply to broadcast TV, the Supreme Court made it pretty clear that this was only constitutional because broadcasters were using limited spectrum that came from the government. That's not true of cable.
And, again, Reich should maybe learn something about the history of the fairness doctrine -- which scared broadcasters away from taking on controversial topics -- and meant less coverage of things like civil rights.
Also, would this mean CNN would now need to present the Trumpian side of every story? Why would anyone want that? Any story about the benefits of COVID vaccines needs to be balanced with a vaccine denier? That's ridiculous. And unconstitutional.
What's incredible is that with both of these, Reich insists that neither of these would "undermine freedom of speech." Except both would do exactly that. Removing the ability of websites to moderate freely is an attack on free speech rights, and in the case of the fairness doctrine, the Red Lion case explicitly highlights how applying it to something like cable is a blatant attack on the 1st Amendment.
Maybe Reich's ideas on the other suggestions in the piece are better -- but they're so fundamentally flawed on the area I know about, it leads me to believe that Reich has turned into one of those guys who spreads misinformation. Ironically, in Reich's own fantasy world, he should probably be held liable for that.
Filed Under: 1st amendment, cable news, democracy, fairness doctrine, free speech, misinformation, robert reich, section 230, social media
Reader Comments
The First Word
“How to Kill A Democracy
1- Freak out about people doing dumb people things.
2- Demand we repeal basic rights and freedoms in the name of "doing something" to "protect our basic rights and freedoms"
3- Surprised Pikachu face as the truly evil person comes along and uses those "do something" measures to take control and turn democracy into a dictatorship.
Will you post an essay about how to reform hate on the internet? This blog, which I like, has just become a "why this other person is wrong" blog. Would you be willing to share more good reform proposals too? Sorry if I've missed them.
Re:
How about you do that? "Reform hate", whatever that even means.
The post, quite rightfully, is saying "here's another person suggesting we make everything worse", and points out the very basic flaws underpinning Reich's "argument".
If you have any idea on how to make people better human beings, the last 10,000 years or more are waiting with bated breath.
Re:
Reform hate on the internet? I think you have it a tad backwards: it isn't the internet that is the real problem, it's the hateful people who are the problem.
Re: Re:
This argument reeks of the same bogus “guns don’t kill people, people kill people” shit that 2nd Amendment wingnuts use to defend their right to roll back or even prevent the most reasonable of gun control laws and own as many assault weapons as they want.
I think that disarming hateful people of the ability to spread hateful ideas online as easily as they’re able to do right now would go a long way toward ensuring that we have less hateful people in the future.
Re: Re: Re:
And how do you plan on doing that without destroying either the First Amendment rights of others or Section 230?
Re: Re: Re: Re:
I suspect they secretly are one of those "hateful people" they are so adamant we disarm.
Also: Got to admit, I rather hate it when people are out to strip others of their right to speak [1]. I suppose that makes me one of those people they really want silenced.
[1] This is totally different from people not being listened to. I'm all for helping people choose which voices they want to hear, or not.
Re: Re: Re:
bogus “guns don’t kill people, people kill people” shit
Though I share your disdain for how that point is frequently snidely raised and abused with regards to gun regulations, if we're going to use it as an analogy for something like speech, then it is important to remember that it is, at core, true.
Re: Re: Re: Re:
Social media platforms and the overall Internet regulatory landscape as they currently stand compound into an issue where people have the ability to spread hate and lies with so few consequences in ways that help create more hateful people. Gun culture and lax gun control laws compound into an issue where people who otherwise shouldn’t be able to get a gun are easily able to get guns and use them for all the bad shit that entails.
I firmly believe that the Internet is part of the problem, and that we can’t just jump to “hateful people are the problem” and “governments need to address societal problems to produce fewer hateful people”. This is because, to address societal problems, we need a functional government. And we can’t have a functional government if hateful people are able to spread lies and hate with next to no consequences via the Internet in ways that influence people to vote for shitty candidates who don’t want those societal problems solved. Therefore, I’d argue, the current fucked-up structure of the Internet and the way folks can spread hate and lies with impunity to stop societal problems from getting solved, is itself a societal problem.
Re: Re: Re: Re: Re:
Social media platforms and the overall Internet regulatory landscape as they currently stand compound into an issue where people have the ability to spread hate and lies with so few consequences in ways that help create more hateful people. Gun culture and lax gun control laws compound into an issue where people who otherwise shouldn’t be able to get a gun are easily able to get guns and use them for all the bad shit that entails.
Social media platforms and the internet have also compounded the ability of people to gain massive audiences for all sorts of good reasons - quality independent reporting on vital issues, long-dismissed perspectives from marginalized and oppressed groups, exposure of horrific injustices in our systems of law and policing, democratization of massive knowledge bases, and the creation of all kinds of content that is just entertaining or useful or fun.
Is there an equivalent for that aspect with regards to guns?
If not, I suggest we discard the analogy for being too limited to be useful in this discussion.
Re: Re: Re: Re: Re:
This is because, to address societal problems, we need a functional government. And we can’t have a functional government if hateful people are able to spread lies and hate with next to no consequences via the Internet in ways that influence people to vote for shitty candidates who don’t want those societal problems solved. Therefore, I’d argue, the current fucked-up structure of the Internet and the way folks can spread hate and lies with impunity to stop societal problems from getting solved, is itself a societal problem.
You DO see how you're describing this as an impossible chicken-egg scenario, right?
1 We can't fix societal problems without a functional government
2 We can't have a functional government without fixing the current state of the internet
3 The current state of the internet is a societal problem
4 GOTO 1
Re: Re: Re: Re: Re: Re:
That’s true. Which also means that when Masnick talks about how government needs to solve societal problems instead of complaining about Internet companies, Masnick is engaging in the same chicken and egg argument.
The only real hope I have at this point is that the current bipartisan push for Internet regulation leads us to a space where the Dems can trick the Repubs into putting forward a new set of regulations and rules targeted toward ensuring that assholes and liars on social media, as well as the social media platforms themselves that continually shrug and say that it’s oh so hard while turning a blind eye to their most profitable bad-faith actors, can face some sort of legal repercussions.
Re: Re: Re: Re: Re: Re: Re:
Masnick is engaging in the same chicken and egg argument
No, not if you reject the premise that social media and lax internet regulation is the primary cause of government dysfunction. You're the one with an argument that rests on that premise.
Re: Re: Re: Re: Re: Re: Re:
"Which also means that when Masnick talks about how government needs to solve societal problems instead of complaining about Internet companies, Masnick is engaging in the same chicken and egg argument."
Not really. The problem as it exists right now is that some people falsely conflate private vs. public entities, and believe that any restriction on speech is unacceptable. At the end of the day most of the things being complained about were not even remotely controversial when they happened in physical spaces, and they only matter because of a fake conflation between large private spaces and the old public square.
"The only real hope I have at this point is that the current bipartisan push for Internet regulation leads us to a space where the Dems can trick the Repubs into putting forward a new set of regulations and rule"
If your hopes are pinned on people you agree with abusing rules to get a result you want, I think I have some bad news about what happens when the people you disagree with are in charge.
"while turning a blind eye to their most profitable bad-faith actors can face some sort of legal repercussions."
I certainly agree that this is bad, but it's a very different argument as to whether or not Facebook and Twitter allow Nazis to harass their users. They perhaps should face consequences when they demonstrably allow abusive users because they get more money, but that's a different conversation as to whether they should face consequences for allowing Alex Jones to harass Sandy Hook parents.
Re: Re: Re:
Then disarm the hateful people, if you can find a way. A social method would be sensible. Like, one that doesn't violate laws, especially the core founding principles of a country. Bonus points for accomplishing that without fucking everyone else.
Re: Re: Re: Re:
The hateful people are abusing the core founding principles of the country in an attempt to seize power, and then use that power to dismantle the country and reshape it, stripping it of a lot of the freedoms that were expanded and offered to people well after the founding of the country.
What “social” methods would you deem “sensible”? Do you mean dunking on them on social media to let others know how wrong they are? Because if it’s that, then over the last 5 years that’s barely put a dent in, well, anything.
Do you mean getting them banned from social media? One of the main issues is that the platforms where hateful people have the most reach and audience have no incentive to ban those high-profile hateful people unless they’ve outlived their usefulness. The global scale of said platforms makes boycotting them nigh-on impossible.
At some point, some sort of legal consequences need to come into play. If the founding principles of a country both 1) Enable hateful individuals to foment a political base to seize power in a way so that they can dismantle said country and 2) Serve as a roadblock to people who want to keep the country from being dismantled by said hateful individuals and their political base, then I think that those founding principles need a strong do-over. The Boston Globe had a recent series called “Editing The Constitution” on this subject, and I think that they did a good job with it.
Re: Re: Re:
No, it's not the same. A gun's primary use is to kill; the internet's primary use is to communicate. If you want to conflate those contexts, go ahead, but it only makes you look like a fool. If you want to regulate speech on the internet, say goodbye to the first amendment, which will inevitably lead to more regulation of more speech deemed harmful by some politician with an axe to grind.
And how do you propose to do that? How do you identify who to ban or not? Internet companies have literally spent billions of dollars trying to tackle the problem already with limited success and collateral damage littering the internet.
"Disarming hateful people online" doesn't actually stop them from being hateful, and they will find an outlet for that hate somewhere.
Re: Re: Re: Re:
And those same companies don’t have the balls to ban the high-profile shitheads that serve as the core of the issue because they’re too profitable. They tiptoe around the real issue, they know what the issue is, and refuse to do anything about it.
Yes, that’s true. But it prolly won’t be a place where they have a huge audience, and their hate will have minimal spread. Deplatforming works. Disarming hateful people of the tools they can use to easily spread hate works. Legal consequences have to come into play for social media platforms and their most hateful users somewhere, sometime, for the sake of sanity and reason to prevail.
The only thing that you and Masnick and Co. seem to want is for this fucked-up status-quo to continue, for us to keep doing the same shit year after year within the confines of a system that clearly hasn’t been working.
You are free to point out where anyone has said such a thing, but that would be a futile exercise. What you fail to grasp is that if a person tells you that it's against the first amendment to silence someone using the law, it doesn't mean that it's a defense of said person's speech. When you start rationalizing that it's okay to do it because of the assholes, what other rights will be rationalized away down the line?
What you want is a quick and easy solution for a complex problem, but history has proven over and over again that quick and easy solutions for those kinds of problems always come with huge collateral damage and new, even more complex problems, especially when it comes to societal problems. And when we point out that there are no easy solutions, you somehow think we want the "status quo" to continue. The leap of reasoning there seems to be a tad emotional and lacking logic.
Re:
Masnick and Co. have had plenty of time over the last half-decade to help others come up with solutions to complex problems. They’ve spent that time telling everyone else, time and time again, that they’re wrong or stupid (or both) and to do nothing because doing anything will have severe repercussions.
Mike’s best solution that he came up with on his own is to trust the same folks (Mike really got a stiffy when Jack Dorsey started up BlueSky) that created web2 which has been the source of so many problems to create web3 out of “Protocols, Not Platforms” like nothing could possibly go wrong.
Re: Re:
What’s your solution then, genius? You’ve had five years to implement it but you’re out here just whining like a bitch.
Re: Re: Re:
Well, at least you are consistent in your disdain for constitutional rights you feel threatened by.
Re: Re: Re: Re:
Yes, I feel threatened by the ability of shitheads to exercise their First Amendment rights without consequence in ways that foment bigots and liars to go out and vote and take other actions that would strip me and others of our own rights. If you don’t feel threatened by that and the way shit has been going over the last 5 years, then I’m not sure what to tell you, other than something like “Where the fuck have you been all this time?”
Those assholes may come for my rights, therefore I shall deprive them of their rights first!
Those assholes may come and steal my money, therefore I shall steal their money first!
Those assholes may come and kill me, therefore I shall kill them first!
Re:
Yes, that’s what you tend to do when you’re facing down a fascist regime trying to seize power, you fight back.
Re: Re:
Yes, you fight back. But eroding the rights for everyone is like drilling holes in a boat because you think some of your fellow passengers are dangerous: you'll all just end up in the water together.
Re: Re:
Last I checked, America finally managed to vote the white supremacists out.
That does mean you have slightly more options than taking up arms and murdering the 73 million who voted for Trump.
Re:
They have nothing, it seems. When the authoritarians take over and destroy the constitution (which could be very soon), they will make China's internet look open and free in comparison. When they are knocking on Techdirt's door, they will be yelling "but, but section 230!" It needs to be reformed. How? I don't know, but as of now it gives a very small vocal minority way too much influence over a (properly so) lazy majority.
Re: Re:
If it is proper for the majority to be "lazy" doesn't that mean it will always be possible for vocal minorities to have outsized influence?
Re: Re:
All of this. The current status-quo where people are able to spread lies and hate online with relatively little (if any) consequence, it isn’t working. There’s a major consequence deficit at play.
The only solution that Masnick/Techdirt seems to have come up with is “Protocols, Not Platforms”. Re-decentralizing the Internet sounded intriguing at the time that he wrote it. But I had my concerns mainly around how it felt like asking people to “nerd harder” to solve the problems and issues with it to get users on board with how complex it might make social media, much the same way that governments wrongly ask people to “nerd harder” to come up with magical impossible encryption that bad guys can’t crack but leaves the governments with a backdoor. And now, with cryptoshit and NFTs having come to dominate the talk of decentralizing the web and web3 in general, and with Masnick hopping on the NFT and cryptoshit bandwagon, I don’t trust that that’s a good solution.
Re: Re: Re:
I mean... aren't you both kind of demanding that Mike "nerd harder" to come up with a solution for online hate right now?
What if "communications systems that liars and bigots can't use to spread hate and disinfo, but people can still use to engage in free and robust public speech" is just as magical and impossible as safely backdoored encryption?
Re: Re: Re: Re:
I fully understand that preventing hate and misinfo entirely is impossible. My main points are 1) That it’s still way, way too easy for lies and hate to spread on social media, and 2) If more people faced clear-cut consequences (especially high-profile spreaders of lies and hate) such as deplatforming, it would improve the quality of those platforms quite a lot and prevent the spread of a lot of lies and hate.
When Reddit banned a wide swath of its most toxic subreddits years back, it was shown that across the site, the quality of discussion went up and toxicity went down. Deplatforming works. But platforms right now don’t have enough incentive to actually deplatform their most high-profile bad actors unless it’s clear they’ve “outlived their usefulness” (e.g. Twitter banning Trump after years of him having free rein on the site). The biggest companies having their platforms be global with countless users means mounting an effective boycott is damn near impossible unless you have people with bargaining power to help out (like the various owners, operators, and mods of the subreddits that make up Reddit; they’ve been a huge help in pressuring Reddit to back down from stupid decisions or force their hand to make positive ones).
I’d like to see platforms such as Facebook and Twitter face clear-cut legal consequences and penalties for failing to take action against the most blatantly hateful and incendiary individuals on their platforms. Facebook shouldn’t have been allowed to keep Steve Bannon on the site after he called for the beheading of Fauci and more, as an example. Gosar tweeting an altered anime video showing him killing AOC and attacking Biden and being able to keep his account, as another example. Elon Musk spreading lies about COVID-19 and calling for an end to needed lockdowns, knowing that his drooling fanboys treat what he says as truth is yet another example. When FB and Twitter make excuses for assholes and give them high levels of leeway, FB and Twitter become part of the problem.
Re: Re: Re: Re: Re:
I’d like to see platforms such as Facebook and Twitter face clear-cut legal consequences and penalties for failing to take action against the most blatantly hateful and incendiary individuals on their platforms. Facebook shouldn’t have been allowed to keep Steve Bannon on the site after he called for the beheading of Fauci and more, as an example. Gosar tweeting an altered anime video showing him killing AOC and attacking Biden and being able to keep his account, as another example. Elon Musk spreading lies about COVID-19 and calling for an end to needed lockdowns, knowing that his drooling fanboys treat what he says as truth is yet another example.
Can I ask: are you limiting this to online platforms, or is it also a case where you don't think (for example) news networks should be allowed to interview those people anymore either?
Because it seems like all you're saying is you want tighter restrictions on speech in general - like perhaps a somewhat broader definition of what counts as incitement to violence (to cover the Bannon and Gosar examples) and some sort of new category of unprotected speech for mis/disinformation (to cover the Musk example).
Re: Re: Re: Re: Re: Re:
I guess I am. And I have no problem with that. Setting higher standards and raising the bar for discourse and debate through ensuring that more assholes and liars suffer actual legal consequences would be a marked improvement over what we have now.
Re: Re: Re: Re: Re: Re: Re:
In almost every one of your comments, you've delineated your proposals and your complaints by referring to the "hateful" or "liars" or "assholes" as though these are clearly defined objective categories.
I get it - it's really easy to focus on the most obvious, egregious people. I suspect I'd agree with you on the vast majority of people you apply these labels to.
Nevertheless, you need to acknowledge that these labels are subjective. That many of the statements you've made are in fact contradictory or hypocritical if you remove the presupposition that it's always clear who falls into these categories (e.g. "hateful people shouldn't be able to reshape the country and strip its freedoms" vs. "we need to revisit the founding principles of the country and add new limitations to freedom of speech"). And that there will always, always be edge cases (likely vastly outnumbering the obvious cases) and the potential for massive collateral damage, as well as abuse and manipulation by a government that you yourself bemoan for its seemingly intractable dysfunction.
I think it would behoove you to see if you can phrase some version of your proposal in a way that doesn't rely on a presupposed clear delineation between good and bad people.
Re: Re: Re: Re: Re: Re: Re: Re:
"In almost every one of your comments, you've delineated your proposals and your complaints by referring to the "hateful" or "liars" or "assholes" as though these are clearly defined objective categories"
They're subjective categories, but I'm yet to see an example of someone negatively affected by the rights allowed by 230 who doesn't fit into one of those categories quite clearly.
"Nevertheless, you need to acknowledge that these labels are subjective"
I don't see anyone saying these things are anything other than subjective. Just that if your opinion differs from others in the room, they're allowed to tell you to quiet down or leave, as happens in any physical room.
"I think it would behoove you to see if you can phrase some version of your proposal in a way that doesn't rely on a presupposed clear delineation between good and bad people."
There doesn't even need to be such a thing. Platforms decide who they want on their platform, and they make that decision based on their community. This protects a right-wing anti-vaxxer subreddit as much as it does the community here. The only problem comes when the AV group decides to post disinformation here, then pretend they've been wronged because they got told to leave the room.
Re: Re: Re: Re: Re: Re: Re: Re: Re:
If you think I'm arguing against section 230, you have misunderstood.
The commenter I'm replying to is the one arguing against 230 - and, indeed, against certain free speech protections themselves.
Re: Re: Re: Re: Re: Re: Re: Re: Re: Re:
I apologise if I misunderstood; when there's a long thread of AC commenters it can be easy to misplace something, and they appear as the same colour to me right now.
Re: Re: Re: Re: Re: Re: Re: Re: Re:
I would disagree with the second part of that statement, and I think that's part of the problem.
Re: Re: Re: Re: Re: Re: Re:
"And I have no problem with that. Setting higher standards and raising the bar for discourse and debate through ensuring that more assholes and liars suffer actual legal consequences would be a marked improvement over what we have now."
Numerous countries in the EU have hate speech laws, like Sweden and Germany with a blanket ban on public discourse judged as incitement against a demographic.
Problem is, this shit works in Europe but can not work in the US where the execution of law is based on precedent and one bad judge can turn a measured legal response to hate speech made by nazis and bigots into a complete shit-show which targets anything a suitably law-savvy grifter feels is convenient.
[ link to this | view in chronology ]
Re: Re: Re: Re: Re:
There are a few problems with bringing in legal penalties for a site not removing hate speech and misinformation. Apart from it being a direct First Amendment violation - the amendment that says the government shall make no law abridging freedom of speech or of the press - who decides what is hate speech and misinformation, and what would the previous president have banned?
[ link to this | view in chronology ]
Re: Re: Re:
Nice job dragging in unrelated shit. Got anything in that soapbox to clean it up with?
Again, propose something, then, to do something about the hate, or the hateful people. Don't screw everyone else, and put burdens on the platforms, which are not the problem.
P.S., Consequences are meant to be fit to the problem at hand. What do you want, arrests?
[ link to this | view in chronology ]
Re: Re:
"When they are knocking on Techdirts door they will be yelling "but, but section 230!""
Well, yeah... if that's not changed then there will be real problems with the knocking on doors.
"I don't know but as of now it gives a very small vocal minority way to much influence over a (properly so) lazy majority."
It really doesn't. Section 230 means that TD can't be held legally liable for things that the usual gaggle of mental patients say here. They're loud, but they don't have any more power than the rest of us, which would only happen if TD were to be held liable for their words. Until then, they can say what they say, the rest of us can tell them to GTFO and TD can moderate as they see fit.
[ link to this | view in chronology ]
Re:
What means will work, other than banning the current Republican party? Hatred has become baked into US politics, as has misinformation.
[ link to this | view in chronology ]
Re:
Impossible.
One person's "hate" is another person's religion.
[ link to this | view in chronology ]
Re:
"Will you post an essay about how to reform hate on the internet?"
Will you post an essay on how to perform cold fusion? That seems just as possible, after all.
Criticising a bad idea does not mean you have an answer to the alternative, it just means you know it's a bad idea. Hate is, unfortunately, a part of human behaviour that exists so long as free speech exists. The question is how to moderate it without affecting other forms of speech. Section 230 seems to be a decent answer to that question that's not without its flaws, but the response to that is definitely not to just allow hate speech.
[ link to this | view in chronology ]
Re:
I have a paper coming out soon on how to fix the internet...
[ link to this | view in chronology ]
Is it the one with the title "How to pull a plug" ?
[ link to this | view in chronology ]
Re: Re:
Is it the one where you talk about how jpegs of monkeys and decentralized money will somehow save the world?
[ link to this | view in chronology ]
Re: Re: Re:
Um. No.
[ link to this | view in chronology ]
just add him to the list of those who supposedly know all but know fuck all!
[ link to this | view in chronology ]
Internet of monopolies?
Not really.
A lot of them got closed down, restricted, couldn't compete, didn't do a good job designing anything, college-based sites that never got updated. There is so much in the past that happened, but it shows HOW this country works.
We have corps that think they know everything, but can't even figure out that posting all your movies to the net is going to cost a lot of bandwidth.
Opposing Viewpoints?
MAKE IT SIMPLE. If you are reporting news, it has to be FACTUAL. NO OPINIONS. Let others decide, NOT THEM.
How many do you want? I can see a debate running DAYS LONG, to decide the Chicken or the egg, and 5+ ways they will demand Aliens did it.(not the S. American ones)
[ link to this | view in chronology ]
Re: Internet of monopolies?
PS.
There are problems with it.
They already patched it 1-2 times.
It was SIMPLE, and to the point.
[ link to this | view in chronology ]
How to Kill A Democracy
1- Freak out about people doing dumb people things.
2- Demand we repeal basic rights and freedoms in the name of "doing something" to "protect our basic rights and freedoms"
3- Surprised Pikachu face as the truly evil person comes along and uses those "do something" measures to take control and turn democracy into a dictatorship.
[ link to this | view in chronology ]
Don't forget to mention the lies and disinformation that come out of MSNBC.
[ link to this | view in chronology ]
Re:
Which would be what, exactly?
[ link to this | view in chronology ]
Re:
[Projects facts not evidence]
Bets on this bot being programmed to spit out the "No collusion" lie like all the others?
[ link to this | view in chronology ]
Re:
You've spelled FAUX news wrong.
[ link to this | view in chronology ]
Old people and ice floes...
I swear the old ideas were the best ideas.
[ link to this | view in chronology ]
One day an honest argument might be made, just not today
And the 'the only arguments against 230 are dishonest ones' streak continues. At this point you might as well call it a scientific theory, right alongside gravity and evolution; it's been tested and confirmed that often.
[ link to this | view in chronology ]
This isn't rocket science
We do notice and take down for copyright violations. There is no reason that we cannot implement a similar regime for false information. Re the First Amendment, you can't test the boundaries of the Constitution without a test case. You can't get a test case without passing a law. Throwing up our hands before trying anything won't get us anywhere. The nut jobs are willing to pack the courts and pass blatantly unconstitutional laws. Meanwhile, the normies are just sitting on their hands.
[ link to this | view in chronology ]
Re: This isn't rocket science
The boundaries are quite well established by now, it's just that some people are very eager to move far past the boundaries.
Here's the problem with your "this isn't rocket science", if the system is in such shambles that the courts can be packed with people who are willing to pass unconstitutional laws, passing other unconstitutional laws as a defense against that is kind of stupid because the end result is still the same.
Rocket science is dead easy when compared to rocket engineering, just like how suggestions on how to fix societal problems are easy to make, but ridiculously hard to actually implement to get the intended result without blowing up said society.
[ link to this | view in chronology ]
Re: Re: This isn't rocket science
This isn't a complete ban on speech. This is a time, place, and manner restriction. People are still free to make false statements outside the social network. This would just restrict the manner in which they get to make their false statements. The person would be denied the amplification that the social network provides. There is no well established rule against this.
[ link to this | view in chronology ]
Re: Re: Re: This isn't rocket science
Ever heard of something called prior restraint ?
If you want to make suggestions, it behooves you to actually have some rudimentary knowledge about the subject at hand.
[ link to this | view in chronology ]
Re: Re: Re: Re: This isn't rocket science
The statement here has already been published before it is challenged. Prior restraint involves preventing a statement from ever being published. If you want to make critiques, it behooves you to actually have some rudimentary knowledge about the subject at hand.
[ link to this | view in chronology ]
Re: Re: Re: Re: Re: This isn't rocket science
Limiting what people are allowed to say on social media is prior restraint, and if enforced by the law it is censorship. You would also put in place the tools needed for an authoritarian takeover, by enabling control of the information available to the public.
[ link to this | view in chronology ]
If you set up rules to restrict what people are allowed to say beforehand and then use the law to enforce them, that is prior restraint, and that's what you suggested.
And if you set up rules about what people are allowed to say and then use the law to enforce them after the fact, you have still infringed their 1A rights.
[ link to this | view in chronology ]
Re:
Your comment is a great example of why we need a notice and take down system for false information.
[ link to this | view in chronology ]
Re: Re:
You are entirely free to explain in detail why you think what I wrote was false information. If you can't it only means you had a hissy fit because someone pointed out your lack of knowledge.
So what is it, did you just make shit up or can you detail what the false information is?
[ link to this | view in chronology ]
Re: Re: Re:
"In First Amendment law, prior restraint is government action that prohibits speech or other expression before the speech happens."
As I previously explained, prior restraint does not apply here because the false statement isn't challenged until after it has already been published. I have wasted time out of my life repeatedly explaining this to you. Many others have done the same thing on the interwebz. It would be great if we had an actual information economy that rewarded people for making sure that statements are true.
https://www.law.cornell.edu/wex/prior_restraint#
[ link to this | view in chronology ]
Re: Re: Re: Re:
If you have a law that says you can't say xyz, it specifically prohibits that type of speech before it happens. The moment such a law comes into existence, prior restraint exists, regardless of whether any speech has been done to trigger it. This shouldn't be hard to understand: a law prohibits some types of speech -> prior restraint. This is always true regardless of when or if the speech happens and when it's "restricted".
What you have wasted your time on is being wrong.
[ link to this | view in chronology ]
Re: Re:
And your comment is the perfect example of why notice and take down is so dangerous, it will be abused by those who wish to impose their rules on society.
[ link to this | view in chronology ]
Re: This isn't rocket science
We do notice and take down for copyright violations. There is no reason that we cannot implement a similar regime for false information.
There are many problems with the copyright takedown comparison, but I'll focus on just one:
DMCA notices can only be sent by a single party: the copyright holder (or their representative) for the piece of content in question.
Who is the equivalent party for sending "false information" takedowns?
[ link to this | view in chronology ]
Re: Re: This isn't rocket science
Anyone who is willing to claim that it is false? There could be some nominal dollar amount paid by the challenger to discourage frivolous claims. If the person making the statement does not request review within some time limit, then the statement is taken down and the challenger gets their money back. If the person requests review, then they put up the same nominal amount. Half of the fees go to a neutral third party that reviews the statement. The other half goes to the winner - or more than half, if you want to incentivize true statements and challenges. There could be a system of strikes with escalating suspension periods, and a system of appeals on top of that. There are plenty of creative people who could come up with some kind of solution like this.
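Spelling that proposal out as a toy model (the $5 fee and the even split are hypothetical placeholders for illustration, not part of any real system):

```python
FEE = 5.00  # hypothetical nominal challenge fee, in dollars

def settle(author_requested_review, author_won=None):
    """Toy model of the proposed fee flow.
    Returns (challenger_gets, author_gets, reviewer_gets, taken_down)."""
    if not author_requested_review:
        # No review requested within the time limit:
        # statement comes down and the challenger is refunded.
        return (FEE, 0.0, 0.0, True)
    pot = 2 * FEE            # both sides stake the same nominal amount
    reviewer_cut = pot / 2   # half the fees pay the neutral reviewer
    winner_cut = pot / 2     # the remainder goes to whoever prevails
    if author_won:
        return (0.0, winner_cut, reviewer_cut, False)
    return (winner_cut, 0.0, reviewer_cut, True)
```

Even in this toy form, the incentive problem the replies below point out is visible: whoever can afford to keep posting fees can keep forcing the other side into review.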
[ link to this | view in chronology ]
Re: Re: Re: This isn't rocket science
Your ideas would enable the rich to censor the poor, since, as with DMCA takedowns, the poor could not afford to challenge notices. Besides which, how are impartial reviewers selected, and how do you ensure the reviewers selected have the knowledge to judge the truth?
[ link to this | view in chronology ]
Re: Re: Re: Re: This isn't rocket science
Heck, let's even say there was some magical way to select trustworthy, neutral, knowledgeable reviewers!
There are about 500 hours of video uploaded to YouTube every minute. If just 1% of that content gets a contested review, you'd need 300 active reviewers just to be able to keep up with watching it. If they work 12-hour shifts then that's 600 people total. Just to cover one popular site, under the generous assumption that only 1% of content gets reviewed.
Now let's throw in Twitch (around 1500 hours streamed every minute - so with the same generous math, we need another 1800 reviewers there). Then we've got TikTok and Facebook Video and Twitter Video and Instagram Video and Snapchat (upload stats not available, but it's many more hours-per-minute). And those are just the big platforms - there are many, many, many others. Plus every single podcast (who even knows how much).
So we're going to need thousands upon thousands of reviewers just to keep up with audiovisual content. And of course, this is just to watch it - it presumes no additional time for research or making a determination. Plus, every reviewer can't be qualified to cover every topic, or able to speak every language - so you're going to need significant padding in the numbers for that too. And we haven't even gotten to text content or image content.
So, cool, we'd just need to recruit and maintain a team of tens of thousands of reviewers that span every language and every field of expertise (to the level where they never need to do research and can always make accurate determinations off the top of their head, instantly), all working 12-hour shifts 7 days a week.
Yup, sounds totally doable and like it will definitely work great and not be riddled with errors. lol
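The back-of-envelope arithmetic above can be sketched out directly (the upload rates are the rough public figures quoted in this comment, not authoritative statistics):

```python
SHIFT_HOURS = 12           # hours each reviewer works per day
CONTESTED_FRACTION = 0.01  # generous assumption: only 1% of content reviewed

def reviewers_needed(upload_hours_per_minute):
    """Full-time reviewers needed just to *watch* contested video in real time."""
    # Hours of contested content arriving per wall-clock hour:
    content_per_hour = upload_hours_per_minute * 60 * CONTESTED_FRACTION
    # One reviewer watches one hour of content per hour, so that many
    # must be on shift at any moment; 12-hour shifts need two crews a day.
    shifts_per_day = 24 / SHIFT_HOURS
    return round(content_per_hour * shifts_per_day)

print(reviewers_needed(500))   # YouTube (~500 h/min uploaded): 600 reviewers
print(reviewers_needed(1500))  # Twitch (~1500 h/min streamed): 1800 reviewers
```

And that is before research time, language coverage, subject expertise, or any text and image content at all.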
[ link to this | view in chronology ]
Re: Re: Re: This isn't rocket science
lol okay good joke, you got me, I thought you were serious at first
[ link to this | view in chronology ]
Re: Re: Re: This isn't rocket science
So not only are you proposing that we rely on a heavily broken system of moderation for social media, your plan for protecting against abuse is pay-to-win?
You know, the DMCA already has those exact same problems. Introducing paid vested interests into the equation is not going to have the improvements you fervently believe will take place.
[ link to this | view in chronology ]
Re: This isn't rocket science
"We do notice and take down for copyright violations. There is no reason that we cannot implement a similar regime for false information."
You mean other than the fact that the DMCA system has no actual punishments for people who file false takedowns, which thus incentivizes people to abuse the system as a means of taking down protected speech?
[ link to this | view in chronology ]
Re: This isn't rocket science
"We do notice and take down for copyright violations. There is no reason that we cannot implement a similar regime for false information."
Pointing at the worst shit-show in modern legislation, worldwide, and saying "Let's do it like that" might not be the cure we're looking for, just sayin'. Having thousands of bots spamming YouTube with fake DMCA requests is bad enough, really.
"The nut jobs are willing to pack the courts and pass blatantly unconstitutional laws. Meanwhile, the normies are just sitting on their hands."
The problem here being that the US system is built specifically to be unstable. In Europe we do have hate speech laws which have been implemented without massive amounts of collateral damage.
But then again, in Europe I also don't have to worry a police officer would rob me of my wallet, car and desktop PC in civil forfeiture or shoot up my lawn and call it "regrettable, but hey, QI". We don't have 2000 companies specializing in union busting, or workplaces comparable to indentured serfdom. No one will even dream of starting a tort over the dumbest possible of reasons and SLAPP is unheard of - because our courts of law aren't performance theater.
The problem is that US law execution and procedure relies on principles which are absent any sanity checks beyond the wallet depth of the person with the biggest team of lawyers.
Hence any exception you care to write into your constitutional rights for good, proportional and valid reason will be abused to hit you a hundred times as hard, anywhere. Especially with a lot of lucre riding on every successful tort.
"The nut jobs are willing to pack the courts and pass blatantly unconstitutional laws. Meanwhile, the normies are just sitting on their hands."
Yeah, and the democrats are really going to have to learn to fight dirty. Like it or not this is a war, because when the GOP base sees it as a war that's what it's become. And the aggressor always sets the minimum bar of what is acceptable. You either match their methods or you cave. And so far the democrats just keep caving.
That means they need to pack the courts and start hammering the Republicans right back. Like that Californian Democratic lawmaker trying to pass a gun control law using the same methodology as the new Texas anti-abortion law, for instance.
[ link to this | view in chronology ]
Re: This isn't rocket science
Oh look, an entire article on why that's a stupid idea:
https://www.techdirt.com/articles/20211209/22513048090/tanzanias-abuse-us-copyright-law-to-silence-critics-twitter-should-be-warning-regulators-looking-to-mess-with-content-moderation.shtml
[ link to this | view in chronology ]
"That means some of them will decide the better approach is not to moderate at all"
A dream come true.
[ link to this | view in chronology ]
Re:
Only to the insane, ignorant or deranged who look at such charming platforms as 4chan and think 'This is nice I suppose but how good would it be with less moderation?'
[ link to this | view in chronology ]
Re: Re:
It's funny how all the Trumpfluffers are all for no moderation after their beloved Jan 6th figurehead got his ass handed to him by Twitter, but ask them if they'd be alright if leftists making fun of them should be allowed on Parler without moderation, and they all go eerily silent.
Well, not so much "funny" as in "ha ha" but more like "funny" as in "pathetic and predictable".
[ link to this | view in chronology ]
Re: Re: Re:
I support no moderation for anyone, regardless of their political opinion. All I care about is that people with stupid arguments are called out for it publicly and there's a record left of their stupidity.
If you delete someone's post or comment, no matter how stupid or deranged it was, at least some portion of people will go "Hmm, what are they afraid of, what if he/she is right?" And the stupidity will spread.
[ link to this | view in chronology ]
Re: Re: Re: Re:
In which case you hand the hecklers the right to poison every online conversation, and make reasoned discussion online all but impossible.
[ link to this | view in chronology ]
Re: Re: Re: Re:
“If you delete someone's post or comment, no matter how stupid or deranged it was, at least some portion of people will go "Hmm, what are they afraid of, what if he/she is right?" And the stupidity will spread.”
That has to be the dumbest argument in this entire post.
Congratulations?
[ link to this | view in chronology ]
Re: Re: Re: Re:
Nobody makes the above argument in good faith, as the sole result of giving a platform to radicalization is to maximise the number of those influenced by it - the true goal of all those wanting it kept up.
[ link to this | view in chronology ]
Re: Re: Re: Re:
"I support no moderation for anyone, regardless of their political option. All I care about is that people with stupid arguments are called out for it publicly and there's record left of their stupidity."
No you don't. Not in any place where you yourself will be visiting.
The proof of that will be any open house party where that one guy decided to puke in the buffet, take a leak on the hortensias, or spend half an hour railing in an ever increasing voice about <insert inflammatory, often hilariously wrongfully recounted topic here> while the rest of the party tries to pretend the guy doesn't exist.
The response has never been to leave the offensive person be to spoil everyone else's evening. It's always been to show that personage the door and invite him/her to leave.
And that's really the same as on social media. Those places you normally frequent to meet with friends and relatives should not allow visitors who don't mind the most basic of manners. Because the very second FB stops moderating is the very second it loses - instantly - 90% of the audience.
And that is why even if the alt-right get everything they wish for all they end up with is the same tumbleweed-strewn landscape they have on Gab and Parler and an instant reimplementation of 230.
Notably worse, even. You want the first amendment gone for good? Force everyone to put up with the benighted fuckwits following Q and Dear Leader for one month. The next congress voted in will consist exclusively of people with a mandate from the people to add "...except when it comes to racism, bigotry and misinformation in which case none of this amendment applies." tacked to the end of 1A.
[ link to this | view in chronology ]
Re: Re: Re:
Afraid, my ass.
[ link to this | view in chronology ]
Re:
Only to trash
[ link to this | view in chronology ]
Ecpectatoion
What do you expect coming from the Moron who immediately claimed that Milo Y. had hired antifa to disrupt his own speech at UC Berkeley?
I mean after that, how could you think he had anything at all to say of any relevance, logic, or value?
[ link to this | view in chronology ]
Re: Ecpectatoion
Fam, if you're going to shitpost from your mobile, at least wipe the rabid froth from your mouth before you pull off some big brain typo like "Ecpectatoion".
[ link to this | view in chronology ]
Re: Re: Ecpectatoion
That's Greek for "expected", you idiot.
[ link to this | view in chronology ]
Re: Re: Re: Ecpectatoion
Nice try, ROGS.
[ link to this | view in chronology ]
Re: Ecpectatoion
“ I mean after that, how could you think he had anything at all to say of any relevance, logic, or value?”
That’s quite the incisive and accurate self description bro.
[ link to this | view in chronology ]