On the post: As Predicted: Parler Is Banning Users It Doesn't Like
Re: Reasons
Update: Curious about the reasons people might get banned, I can see that Parler's posted Community Guidelines disallow impersonation accounts. With Parler being a new app, there is currently a "land grab" phase in which a whole lot of account names are not yet taken. It appears that a number of folks have been attempting to register themselves as public officials, or as websites they dislike. As an example, the Thor Benson guy cited above attempted to register himself as the official account for The Federalist. Others attempted to register as Donald Trump.
So yeah, probably joining a community and immediately breaking the rules isn't such a great idea. Pretty clear violations.
On the post: As Predicted: Parler Is Banning Users It Doesn't Like
Reasons
What I'm more interested in is: why were they banned? If they broke some sort of clearly established rules, then it's fine that they got banned. But if they were banned simply for disagreeing with others, then that seems unfair. Discussing the reasons why is what separates a free speech platform from a biased platform.
On the post: GOOGLE THREATENS TO DEFUND TECHDIRT? Where Are All The Politicians Complaining?
Weaponized Policies
As I remember, there was a companion to The Federalist in the original reporting: a site called Zerohedge. It looks like Zerohedge got completely banned, while The Federalist's only recourse to stay monetized was to take down its entire comments section.
But back to the original reporting, done by NBC: it appears that NBC reporter Adele-Momoko Fraser collaborated with an outside group, the CCDH, to complain to Google and bring about the ban.
All of this appears to be different from the Techdirt experience. For Techdirt, there was no media tweeting out the ban with glee, no sitewide demonetization, and the comments section is still very much in operation. This is exactly why we know there is a bias at the major big tech corporations. If you say the right things, Google will hold your hand and make it okay. But if you're not politically correct enough, then a site's policies will be used as a weapon against you.
I guess maybe we should all switch to the Brave browser. Then these complaints about malicious moderation versus merely incompetent moderation could all just disappear.
On the post: Just Like Every Other Platform, Parler Will Take Down Content And Face Impossible Content Moderation Choices
Re: Re: Re: Re:
Just because you disagree with the politics of the content does not make it untrue. Watch the video.
Update: Facebook HR executive Leslie Brown, who was seen in the video making biased comments, has been fired. Still think it's just a hoax?
On the post: Just Like Every Other Platform, Parler Will Take Down Content And Face Impossible Content Moderation Choices
Re: Re:
The real question is: What political speech has been banned?
In today's example of social media bias against conservatives, an undercover watchdog group infiltrated Facebook's content moderation division and videotaped numerous admissions of bias:
"One of the content moderators was asked if she deleted every Republican item that came up in her queue. She said: “Yes! I don’t give no f*cks, I’ll delete it.”
The same moderator said she does not take down anti-Trump content, even if it violates policy.
“You gotta take it down but I leave it up,” she said. “If you see something that’s not supposed to be up, it’s probably me.”
Another content moderator, Lara Kontakos, was asked what she did when she saw posts supporting the president: “If someone is wearing a MAGA hat, I am going to delete them for terrorism.”
Then, Kontakos looked around at her colleagues: “I think we are all doing that.”
Steve Grimmett, a content review lead, said it was Facebook’s culture to target the president and his supporters: “It’s a very progressive company, who’s very anti-MAGA.”"
On the post: Just Like Every Other Platform, Parler Will Take Down Content And Face Impossible Content Moderation Choices
And while Parler's Community Guidelines are written in a manner that makes it look like they're mimicking 1st Amendment jurisprudence, that's a trick they're playing, because the specifics do not match the reality.
For many of us who see problems with social media censorship, we aren't particularly interested in a platform that allows all speech deemed permissible by the 1st Amendment.
We don't care about repetitive spam. We don't care about commercial spam. We don't care about obscenity or pornography. Go ahead and ban that stuff.
Mostly what we don't want banned is political speech. Simply because you disagree with something does not make it bannable. Being offended ought not allow you to take down the speech of others.
The real question is: can Parler moderate without demonstrating political bias? Will they allow political speech, and show no favoritism to one side or the other when rhetoric becomes heated? I don't know, and breaking the Twitter monopoly is quite the proverbial mountain to climb. But even paying lip service to the idea is a step in the right direction, and defining offensive material in the TOS does not, by itself, dissuade me.
On the post: John Bolton Doesn't Need Copyright Protection
If Bolton were ever to claim that the book is fictional, then those who copied from it would be totally screwed without a commentary fair use defense.
On the post: Another Day, Another Bad Bill To Reform Section 230 That Will Do More Harm Than Good
Re: Re: Re:
Your wish to destroy any sites ability to moderate as they see fit, which includes a good faith effort to remove or block content they think the majority of their users would find objectionable...
I don't primarily object to sites doing moderation, especially since I am open to user-based moderation, as well as moderation of profanity. Rather, I object to corporations engaging in political bias and censorship, and then hiding behind the "objectionable" argument (especially since most social media feeds are opt-in!).
...would destroy the Internet's usefulness for many users.
This is hyperbole. Seeing the occasional comment you disagree with will not destroy the internet. It's a loss of credibility similar to that of those who claimed the internet would cease to function after June 11th, 2018.
On the post: Another Day, Another Bad Bill To Reform Section 230 That Will Do More Harm Than Good
Re:
What good will come from exempting people from liability...
First, if the platform is exempted, then the platform won't need to handle all moderation decisions through a call center; costs could be greatly reduced.
Second, corporations would not be the primary deciders of what speech is allowed to be discussed.
Third, many user-based moderation systems that I've seen currently in use allow anyone to un-hide the moderated text, if they desire. Since the speech isn't completely eliminated, total censorship does not occur.
On the post: Another Day, Another Bad Bill To Reform Section 230 That Will Do More Harm Than Good
There's no fucking way we can afford or staff a live call center to handle every troll who gets upset that users voted down his comment as trollish.
I have heard of a potential fix: exempting user-based moderation. Often, this has the added benefit of not making certain speech completely inaccessible. With one mouse click you can turn off user-based moderation and see what's being said anyhow. It's likely that a lot of truly objectionable content would quickly be flagged and fall into this category, cutting costs considerably.
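The mechanism described above can be sketched as a tiny model: user votes collapse a comment below some threshold, but the text is never deleted, and any reader can reveal it. This is a hypothetical illustration (the class, threshold, and placeholder text are all my own inventions, not any real site's code):

```python
# Sketch of user-based moderation: downvotes hide a comment, but the
# underlying text survives and can be revealed with one click.

HIDE_THRESHOLD = -3  # collapse once net votes fall at or below this

class Comment:
    def __init__(self, text):
        self.text = text
        self.score = 0  # net of user up/down votes; staff never touch it

    def vote(self, delta):
        self.score += delta

    def render(self, show_hidden=False):
        # Collapsed, not removed: the reader's click flips show_hidden.
        if self.score <= HIDE_THRESHOLD and not show_hidden:
            return "[hidden by user votes -- click to show]"
        return self.text

c = Comment("an unpopular opinion")
for _ in range(4):
    c.vote(-1)
print(c.render())                  # the collapsed placeholder
print(c.render(show_hidden=True))  # the original text, still accessible
```

Since the moderation decision is made by users in aggregate and is reversible by any reader, no call center and no staff judgment call is involved.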
On the post: T-Mobile Is Already Trying To Wiggle Out Of Its Sprint Merger Conditions
Vouch
Given T-Mobile told regulators repeatedly that the merger would dramatically expand 5G deployment and jobs by default, neither should have been a problem.
Were there any public hearings? I would love to see some company executives charged with lying under oath during testimony.
On the post: UK Information Commissioner Says Police Are Grabbing Too Much Data From Phones Owned By Crime Victims
Over-Sight
If I have a home video security system, and my house gets broken into, I kind of expect the police investigators to take ALL of the footage. Are the police going to see me entering and exiting my house, deliveries, friends and neighbors? Yup.
I suppose the biggest concern for me is public disclosure. Just as your doctor will know about your medical situation in order to provide care, the police are going to investigate your life in order to secure a conviction against a perp. And the police have limited time and resources, so I expect them to prioritize cases with the best chances, not the worst chances.
So perhaps we need something similar to HIPAA laws, where it is understood that access to information is needed for an investigation, but confidentiality is also expected.
On the post: Further Thoughts On Moderation v. Discretion v. Censorship
I Dont Want You Saying That Anywhere
In this day and age of voluntary "follows" and internet corporation monopolies, "we don't do that here" really breaks down. "We" attempts to minimize away a million followers, and "here" attempts to minimize away a 99% market share, beyond the point of believability.
On the post: Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act
Re:
Section 230 does not refer to either “publishers” or “platforms”. That dichotomy has no bearing on this discussion.
It certainly has bearing, in that Section 230 reformers see this as a deficiency in the current law and desire a change. We understand that the current law does not make this distinction, but we want that distinction to be made.
On the post: Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act
Re: Re: Re: Re: Reform
I think we should allow spam filters, and disallow corporations from building an open speech platform while engaging in political bias.
On the post: Cheez-It Issues A Bogus DMCA Notice To Nuke A Picture It Didn't Like, Receives Dozens Of Offensive Images In Response
Food Copyright
While I could understand copyright claims on the box art, I am somewhat skeptical that any individual Cheez-It was made with any artistic design whatsoever. Can you imagine being able to copyright the look of a pretzel, or a cookie?
On the post: Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act
Re:
If those places can “censor” speech based on not wanting that speech on their property, for what reason should Twitter not be extended the same courtesy?
Because then there are consequences for choosing to be a publisher. For example, if they were to declare "Our service is for Democrats only", it would perhaps be very honest of them. But then they would lose a large portion of their user base, and break the ubiquity and monopoly they might currently enjoy.
On the post: Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act
Re:
If I say in public, "Apples are my favorite fruit", I would not want a prosecutor to file charges against me. But if I had previously signed a contract as a promoter for an orange company, then I would expect the company to file a court complaint against me if I violated my contract.
Government prosecutors and regulators are not the same thing as a court adjudicating a private contract violation.
On the post: Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act
Re:
And courts adjudicate contract violations and defamation cases. It's perfectly fine.
On the post: Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act
Re: Re: Reform
Does that include party political platforms, or political campaign sites?
I have little doubt that political sites would choose to consider themselves publishers, not open platforms. The same as how a political convention hall is not open to any and all speech, and would probably kick out protesters.