On the post: FTC Decides Maybe It's Time To Start Asking Why McDonalds Ice Cream Machines Are Broken All The Damn Time
Hire The Modders Model
There's a way that Taylor could earn the public's respect: buy out Kytch and its employees to make diagnostic devices of its own. But we've seen that Taylor earns around 25% of its income from service calls, so my guess is that it will opt for the money over the respectable solution.
On the post: Lessons Learned From Creating Good Faith Debate In A Sea Of Garbage Disinformation
Fake It Until You Make It
This sounds like a way for government to control debate. Take, for example, the BS FCC wireless coverage maps. Many knew they were bogus, but providing a citation to refute them within 24 hours might not have been possible. Sometimes we "officially" prove what we thought was true only months afterward.
On the post: The Challenge In Content Moderation And Politics: How Do You Deal With Bad Faith Actors?
Bad Faith Works Both Ways
This reminds me of the tradeoffs made in the U.S. legal system. A cop might genuinely believe that a crook ran into a residence, or the cop may just be fishing for evidence. How do you stop the abuse? Require a warrant. A defense lawyer might genuinely know that his client is guilty, because the defendant told him what actually happened, and now the lawyer is going to try his best to defend the client anyway. How do you prevent prosecutors from exploiting self-incrimination? Attorney-client privilege rules.
It's an adversarial system; it seems inefficient, and it opens itself to exploitation. But it beats all the alternatives. The tradeoff for the better system is the whole "eternal vigilance" thing. A better system might not be an easier system.
On the post: Where Texas' Social Media Law & Abortion Law Collide: Facebook Must Keep Up AND Take Down Info On Abortion
Re: Re: Platforms Can't Induce
Terrorist death threats and CSAM are not opinions; they can be reported to the authorities and prosecuted. I don't want FB to leave everything up. Rather, social media should return to good-faith moderation whereby objectively identifiable content is removed: profanity, commercial spam, financial scams, or any other types of illegal speech that you mentioned. Stop trying to police differences of opinion.
On the post: Where Texas' Social Media Law & Abortion Law Collide: Facebook Must Keep Up AND Take Down Info On Abortion
Re: Re: Platforms Can't Induce
"Please point to the text in section 230 that makes a distinction between platform and publisher?"
As I mentioned, Cubby v. CompuServe is a good case. The platform wasn't aware of the speech and didn't control it. That's the hallmark of a platform.
"Also, while you're at it, why don't you want to tell us which conservative opinions are being censored?"
This week's censorship highlight is Shawna Chappell, mother of one of the American servicemen killed in the Kabul bombing last week. On Sunday, she posted an apolitical story on Instagram showing the last photograph she had of her son alive, along with some words detailing the grief she was experiencing. Both the post and her account were censored.
After the censorship was publicized in the media, Instagram restored the account and, of course, explained that it was taken down in error due to a mysterious "glitch" that shall remain unexplained. But it's clear that she got censored for her past comments, particularly those on other platforms, which were anti-government and critical of Joe Biden. Folks who express opposition to the current administration cannot be allowed to gain notoriety.
Four years ago, Gold Star families who denounced the sitting president were hailed as heroes. This week, they get flushed down the memory hole until the techlash embarrasses the platforms back into normalcy.
On the post: Where Texas' Social Media Law & Abortion Law Collide: Facebook Must Keep Up AND Take Down Info On Abortion
Platforms Can't Induce
"However, under Texas' anti-choice law -- remember, anyone can sue anyone for 'inducing' an abortion -- Facebook theoretically faces liability for leaving that information up."
By that logic, a paper company could be found liable when a bank robber writes a demand note and hands it to a teller. The more logical decision would be that, if FB is just acting as a platform and not a publisher, it isn't inducing anything. In fact, if FB didn't do any moderation, it could probably escape all liability because it wasn't even aware of the post, akin to Cubby v. CompuServe.
On the post: GOP Hollowly Threatens To 'Shut Down' Telecom Companies For Cooperating With Legal January 6 Inquiries
We've seen this before: folks within the government leak the information, and then no one is held accountable. The one time a leak was actually prosecuted, in the Libby-Plame affair, the prosecutor railroaded the wrong guy while letting the actual leaker off the hook.
On the post: AT&T's 911, Cellular Networks Face Plant In Wake Of Hurricane Ida
No Alternative
If a carrier's services worked after a disaster while its competition failed, it would be one of the greatest possible advertisements for the company and would cause a ton of customers to switch. Unfortunately, thanks to regional monopolies, no one can escape to a different telecom that hardens its infrastructure.
On the post: House Committee Investigating January 6th Capitol Invasion Goes On Social Media Fishing Expedition; Companies Should Resist
Re: Re: You Are The Product, Not Them
Insider whistleblowers have already done so. But you're right: not tracking the evidence makes it difficult to prove in lawsuits.
On the post: House Committee Investigating January 6th Capitol Invasion Goes On Social Media Fishing Expedition; Companies Should Resist
You Are The Product, Not Them
Most social media companies track everything you do, primarily for marketing purposes. But I'm skeptical that they apply the same track-everything mentality to their own behavior. For example, most people who get banned are simply given a generic message; exact lists with corresponding reasoning may not exist. My prediction is that these companies will respond by saying that they would love to help, and that they can recall some high-profile incidents, but that they don't keep that sort of information and so can't provide it.
On the post: Texas Legislature Says You Can't Teach About Racism In Schools, But Social Media Sites Must Host Holocaust Denialism
Re: Re: Save It For The Water Cooler
But it was right there in the title, and discussed in the second-to-last paragraph. It was presented as a counterargument to the bill, so it's fair for us to talk about it. Determining classroom curriculum is not an attack on free speech, while mandating must-carry provisions is an extension of free speech. Many states currently provide greater protections than those listed in the Constitution.
"Why is your right to speech more important than my right not to associate with your speech?"
One of the nifty things about social media is that you get to choose whom you follow. If you don't like what someone is saying, you can choose not to listen. Deplatforming, on the other hand, prevents others who DO want to associate with a speaker from doing so. Censorship still interferes with the association aspect.
On the post: Texas Legislature Says You Can't Teach About Racism In Schools, But Social Media Sites Must Host Holocaust Denialism
Save It For The Water Cooler
If you work for someone else as a speaker, your job is not a free-speech platform. If you work at a tech support call center, you will probably get fired for proselytizing religion instead of fixing customers' problems. Employers get to set the curriculum in the classroom.
On the post: Most Information About Disinformation Is Misinformation
It Can All Make Sense
"Of course, disinformation doesn't have a clear definition..."
Disinformation is any political opinion that you would prefer to censor rather than explain.
"...but it's not a problem you solve by sweeping it under the rug. It's a mirror on the real underlying societal problems the world faces -- which we should be talking about and trying to come up with better solutions for, rather than insisting that Facebook can make it all go away."
You're definitely correct, and we should. But some folks don't have good answers or solutions, particularly when previously respected institutions have been given a chance and lost credibility. So the fallback is to create gatekeepers and stamp out competing opinions.
On the post: States Wouldn't Be Pushing Inconsistent Tech Laws If Congress Wasn't So Corrupt
Term Limits
Now more than ever, we need congressional term limits. Half of the incumbents win reelection on little more than name recognition. A new batch of legislators every 4 or 12 years would let newcomers campaign on policies instead of branding.
On the post: Academic: Problems Created By Undermining Section 230 Can Be Solved... By Undermining Section 230?
Maybe A Different Target
"...to open up another hole in Section 230 so that those sex workers can then sue OnlyFans for tortious interference?!? Um, what?"
It's a little ambiguous, but the author may not be talking about interactive service providers on the internet, but rather payment processors:
"Perhaps, she said, those changes might even allow sex workers who feel their businesses have been harmed by payment processors to sue them for tortious interference."
I think the "them" refers to payment processors, not internet platforms. Again, it's a little ambiguous. But most of the article rails against banks deciding which internet sites are and are not permitted to host. The payment processors are perhaps gaining too much power to decide what legal activities people can pay for with their own money, and have become a primary source of a moral panic.
On the post: Trumpist Gettr Social Network Continues To Speed Run Content Moderation Learning Curve: Bans, Then Unbans, Roger Stone
Re: Re: Probably Not
"Getting censored proves that your opinion is the strongest."
While there can be a fine line between parody and impersonation, those on the impersonation side do not get that benefit, since impersonation is not a political opinion.
On the post: Trumpist Gettr Social Network Continues To Speed Run Content Moderation Learning Curve: Bans, Then Unbans, Roger Stone
Probably Not
Proposals have been made that whenever someone gets banned from a site, a specific reason should be issued for the ban. That would definitely have been useful in this circumstance. At least GETTR provided a valid reason after the fact: they banned the account under the (mis)understanding that they were removing an impostor. Most of the monopolist social media networks never provide any such reasoning, at most reporting that it was just a mysterious "technical glitch." Banning gets done by deliberate human action; the question is whether it was an understandable mistake or politically motivated.
On the post: OnlyFans: Oops, Just Kidding; Keep Posting Sexually Explicit Material
Too Big To Shut Down
There has been a significant outcry since the announcement that the content would be limited. Perhaps the collective outcry of the content producers was loud enough that the investors and banking service providers realized any anger over an OnlyFans shutdown could be redirected at them. That was the assurance needed that OF cannot be shut down without serious consequences.
On the post: Congressional Lawmaker Give Up Attempt To Dump Qualified Immunity In Police Reform Efforts
Bad Target
"But that wouldn't solve the problem. It would only relocate it."
Worse than relocating the problem, it defeats the purpose. One important benefit of finding that an individual officer violated a citizen's rights is that the bad officer has to pay a price out of his own pocket. If the cost gets offloaded onto a city department, then the taxpayers pay the price of the officer's mistake.
On the post: NY Times And Washington Post Criticize Facebook Because The Chicago Tribune Had A Terrible Headline
Re: Re: Clamoring For A Gatekeeper
Perhaps. However, I have a fairly simple explanation for why things are the way they are, whereas others are perplexed and confused by these seemingly insane events and behaviors. It reinforces the idea that I am biased, and correct.