For what reason should the burden of moderating racial slurs on Twitter fall upon the targets of those slurs? For what reason should a Black Twitter user subject themselves to racialized harassment (including racial slurs) for the sake of being able to block/mute other users when Twitter could prevent much of that by moderating racist assholes off the platform? What gives a racist any more right to use Twitter than the target of that racism?
Why use a site where people have different opinions when you can use facetwit and be coddled?
You’re not looking hard enough if you sincerely think Facebook or Twitter (you can stop saying “Facetwit”, by the by) aren’t filled with “different opinions” from all sides of the political spectrum. That neither service regards racism, anti-queer propaganda, and COVID disinformation as speech worth hosting is irrelevant — and 100% legal, no matter how much that fact hurts your feelings.
I argue moderation is speech of the platform. This is the case where the platform should not have immunity.
In which case, you’re arguing for compelled association with speech a service doesn’t want to host. Your underlying reasoning is essentially that moderation — “the speech of the platform” — would block legally protected (yet morally heinous) speech from being on the platform. Under that reasoning, no platform could block or delete spam, porn, spam porn (those sick fucks…), racial slurs, anti-queer slurs, Klan propaganda, pro-“conversion ‘therapy’ ” propaganda, COVID-19 mis- and disinformation, and anyone who says “Yoko Kanno sucks”. Your changes to the law wouldn’t — couldn’t! — allow that to happen.
If your moderation itself (not others' speech) violates any law, you should bear the consequence.
Moderation can’t violate the law unless it is legitimately (and provably) biased against a protected class of citizens as outlined by either state or federal law. Twitter admins can’t ban Black users from Twitter for being Black — but Black users can be banned for violating the Terms of Service.
And I hate to break this to you, but “conservative” (or “liberal”, for that matter) is not a protected class.
Can you explain more?
While the First Amendment doesn’t explicitly mention the right of association, Supreme Court rulings have determined that such a right exists. The government generally cannot compel or deny association between anyone without a damned good reason for doing so. That goes for social media, too: Nobody has a right to make someone give them a platform or an audience.
How is allowing a person to be sued under other laws (for something they did) a “threat” or forcing them to do anything?
By repealing 230, you would be forcing services into one of three corners: Overmoderation, undermoderation, or shutdown. A service would have to either forgo association with most speech in favor of the most inoffensive content, associate with the most heinous and offensive speech, or refuse association with all speech. A refusal by a service to do any of the three would result in a death by 1,000 cuts…er, lawsuits. How is that not using the force of law to compel an association (or disassociation)?
Just like when a bar discriminates against someone by race, it can be sued.
Moderating speech is not, per se, unlawful discrimination — no matter how much your feelings might tell you otherwise.
If "moderation" is an action you do, it should be something I can sue you for.
The First Amendment protects your rights to speak freely and associate with whomever you want. It doesn’t give you the right to make others listen. It doesn’t give you the right to make others give you access to an audience. And it doesn’t give you the right to make a personal soapbox out of private property you don’t own.
Nobody is entitled to a platform or an audience at the expense of someone else — and that includes you. No lawsuit will ever change that fact.
Why don’t you give me proof that moderation is so harmless that nothing you do to moderate will ever harm anyone?
How about no. You made the claim first; now let’s see you back it up.
Imagine a 230-ish law that says "no one can sue you if you sing at home". While singing usually doesn't harm anyone, what if it does? Now the victim has no way to get justice.
For what reason can’t — shouldn’t — Twitter delete it? For what reason should Twitter admins be denied the right to decide whether they will host racial slurs and other bigoted speech on Twitter?
That the likes of GE and Prodigy died off quickly while the move-don’t-delete AOL and the DGAF Compuserve are still around in some form tells you where the general public was on the decisions.
No, it doesn’t. GE and Prodigy died off for reasons that are most likely unrelated to their moderation decisions. Same goes for AOL and Compuserve in re: how they’re still around.
That isn’t to say a moderation decision can’t change the fortunes of an interactive web service, though. Tumblr got rid of a good chunk of porn and became a far less trafficked service as a result. But I doubt a service willing to let just about anything fly will ever get a mass audience on the level of Twitter. I mean, who would want to deal with the Worst People Problem other than, y’know, those “worst people”?
Eh, not all conspiracy theories are harmful, in the sense that believing and investigating them could harm oneself or others. Anything beneath the Science Denial level is mostly benign — not entirely, but mostly.

(And Epstein didn’t kill himself 🤫)
For what reason should a Black Twitter user have to shoulder the responsibility of filtering racist garbage out of their timelines when Twitter could just as easily refuse to let racist garbage on Twitter?
If you agree with me that there are limited situations where platforms could be liable due to their moderation practices…
Therein lies your problem: We don’t. The only liability a platform should have for third-party speech is if it knowingly and intentionally refuses to delete illegal speech (e.g., CSAM, true threats of violence) — and that’s less a “moderation” issue than it is a “breaking state/federal laws” issue. In any other situation, the platform should have immunity from liability for speech it didn’t publish itself.
Except it does. Choose to accept racist speech on your platform, regardless of how you feel about such speech, and your platform will be thought of as friendly to racist speech (and to racists). People will associate your platform with racism, even if you never wanted that to happen. And if you don’t believe that could happen, look at what people think of Gab and Parler (coughworstpeopleproblemcough).
230 makes platforms immune from liability for their moderation decisions because — and I think you’re gonna love this — the lawmakers who authored 230 wanted to give family-friendly platforms the right to moderate without facing lawsuits for speech they didn’t moderate. Like, that’s literally on the Congressional record thanks to Chris Cox:
We want to encourage people like Prodigy, like CompuServe, like America Online, like the new Microsoft network, to do everything possible for us, the customer, to help us control, at the portals of our computer, at the front door of our house, what comes in and what our children see.
…
[O]ur amendment will do two basic things: First, it will protect computer Good Samaritans, online service providers, anyone who provides a front end to the Internet, let us say, who takes steps to screen indecency and offensive material for their customers. It will protect them from taking on liability such as occurred in the Prodigy case in New York that they should not face for helping us and for helping us solve this problem. Second, it will establish as the policy of the United States that we do not wish to have content regulation by the Federal Government of what is on the Internet, that we do not wish to have a Federal Computer Commission with an army of bureaucrats regulating the Internet because frankly the Internet has grown up to be what it is without that kind of help from the Government. In this fashion we can encourage what is right now the most energetic technological revolution that any of us has ever witnessed. We can make it better. We can make sure that it operates more quickly to solve our problem of keeping pornography away from our kids, keeping offensive material away from our kids, and I am very excited about it.
“Big Tech” has to be aware of the “Worst People” Problem. The question, then, is whether they care enough to keep it from being their problem instead of, say, Gab’s.
“Fair?”

No, it isn’t fair.

The primary issue with your suggestion lies in the “notified of a user’s speech” stipulation. Under what condition is a site “notified” — when a user submits a report, when an automated moderation bot acts on the report, or when a human moderator receives the report? Not every service will be able to process reports as quickly as you or I can snap our fingers. Potentially harmful and heinous speech could remain on the service indefinitely if the delay between a report and a human moderator eyeballing the report is “too long” under your stipulation. After all, no service would risk losing its 230 immunity by moderating such speech after the “editorial control” limits you suggested.
But let’s put this into practical perspective so you get a real idea of what I’m talking about. Let’s say that your stipulation says a service has 24 hours, upon the filing of a report by a user, to have a human moderator act upon that report — and any act of moderation after that 24-hour period, done by either human hands or automation, would result in the loss of 230 immunity.
A bigot posts a racial slur on Twitter. Someone reports the tweet at 11:54am on a Wednesday. Under your stipulation (with the conditions laid out above), Twitter’s moderation staff has until 11:54am the next day to moderate that speech. Their missing that deadline — which is exactly what happens, for reasons that don’t need exploring at this juncture — means the speech must remain up or else Twitter loses its 230 immunity.
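To spell out the failure mode, here’s a toy Python sketch of that hypothetical rule. The 24-hour window comes from the stipulation above; the function name and the exact time the moderator finally gets to the report are my own inventions for illustration.

```python
from datetime import datetime, timedelta

# Hypothetical stipulation: a human moderator must act on a report
# within 24 hours; any act of moderation after that window costs the
# service its 230 immunity.
MODERATION_WINDOW = timedelta(hours=24)

def keeps_230_immunity(reported_at: datetime, acted_at: datetime) -> bool:
    """True if moderating at acted_at falls inside the 24-hour window
    opened by the report at reported_at."""
    return acted_at - reported_at <= MODERATION_WINDOW

# The example above: reported Wednesday at 11:54am; a human moderator
# only reaches the report on Thursday at 1:10pm (an assumed time).
reported = datetime(2021, 6, 2, 11, 54)  # Wednesday
acted = datetime(2021, 6, 3, 13, 10)     # Thursday, 25 hours 16 minutes later

print(keeps_230_immunity(reported, acted))  # False: moderating now
# forfeits immunity, so the "safe" move is to leave the slur up.
```

The perverse incentive is right there in the return value: once the deadline passes, the only move that preserves immunity is to do nothing.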
So I have One Simple Question for you once you process this. Yes or no: Twitter, Facebook, etc. being unable to moderate bigotry because “we didn’t get to it fast enough” — is that the exact outcome you want your suggestion to enable?
Facebook has (unfortunately, probably correctly) realized that if it undermines 230, it can do so in a manner that Facebook can survive, and its smaller competitors cannot.
Hey, all y’all anti-230 advocates: Does it hurt to know you’re in bed with Facebook on this?
On the post: Changing Section 230 Won't Make The Internet A Kinder, Gentler Place
Please provide evidence that shows how moderation has ever harmed you.
I’ll wait.
On the post: Changing Section 230 Won't Make The Internet A Kinder, Gentler Place
I…
…I just…
…fucking what
On the post: Former Trump Lawyer Facing Sanctions In Michigan Now Saying The Things She Said Were Opinions Are Actually Facts
Ah yes, Rudy “I killed my career between a cock and a charred place” Giuliani. Is he even still a thing?
On the post: Former Trump Lawyer Facing Sanctions In Michigan Now Saying The Things She Said Were Opinions Are Actually Facts
You might have a point if Sidney Powell hadn’t been claiming that she had evidence of widescale voter fraud — which is a claim of fact, not opinion, and a bullshit claim to boot.
On the post: Changing Section 230 Won't Make The Internet A Kinder, Gentler Place
Newspapers have editors who pick and choose what speech to publish. Twitter doesn’t.
On the post: Changing Section 230 Won't Make The Internet A Kinder, Gentler Place
…says someone who obviously has no experience in moderating a platform or curating a community. Not everyone revels in vice signalling, y’know.
On the post: Changing Section 230 Won't Make The Internet A Kinder, Gentler Place
shut up, Meg
On the post: As Predicted, Smaller Media Outlets Are Getting Screwed By Australia's Link Tax
Oh, if only there were a popular meme to pair this text with… 😁