On the post: Just As The Copyright Office Tries To Ignore The Problem Of Bad Takedowns, NBC & Disney Take Down NASA's Public Domain Space Launch
Automated Garbage
Don't these takedown notices need to be approved by a human at some point? I don't understand how anyone can accept a ban from an automated system as legitimate. A notice from a robotic takedown system ought to be ignored until a human has reviewed it, so that someone can be held to account for mistakes like this.
On the post: New Study Finds No Evidence Of Anti-Conservative Bias In Facebook Moderation (If Anything, It's The Opposite)
Re: Re: Re: Re: Performance Measure
"The article you are responding to has done such a study."
Again, it measured numbers of interactions (a dubious measure of success, let alone of anything else) and did not measure bias. The closest supporting hypothesis might be "conservatives are successful at getting attention on the platform, therefore there must not be any bias by administrators". Shaky logic at best.
On the post: New Study Finds No Evidence Of Anti-Conservative Bias In Facebook Moderation (If Anything, It's The Opposite)
Re:
In today's news, the NY Times caved to the leftists:
https://www.google.com/amp/s/nypost.com/2020/06/02/new-york-times-changes-headline-following-pressure-from-democrats/amp/
On the post: New Study Finds No Evidence Of Anti-Conservative Bias In Facebook Moderation (If Anything, It's The Opposite)
Re: Re: Performance Measure
"Please provide evidence of statistically significant..."
No such study has been done to date. Instead, we get to witness biased behavior: when a conservative speaker says something that (maybe) violates the rules, that speech is censored. But when a leftist says something that violates the rules, no action is taken.
Today's example is Congressman Matt Gaetz:
https://www.google.com/amp/s/www.nytimes.com/2020/06/01/technology/twitter-matt-gaetz-warning.amp.html
Meanwhile, people on the favored side of the aisle who openly coordinate riot activity go scot-free:
https://www.breitbart.com/tech/2020/05/31/twitter-allows-looters-to-coordinate-criminal-behavior-while-it-declares-blacklivesmatter/
On the post: New Study Finds No Evidence Of Anti-Conservative Bias In Facebook Moderation (If Anything, It's The Opposite)
Performance Measure
If you take a 100-question test and answer every question correctly, but the prof wrongly marks 10% of your answers as incorrect, you still did pretty well on the test with a 90%. And the prof acted scummy.
Measuring the absolute performance of social media commentators in terms of interactions cannot measure bias. It just means that conservative commentators are active on social media and get a lot of interactions. And the corporate censors still acted scummy.
You can attract attention and be a target for censorship at the same time.
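To make that concrete, here is a toy sketch (the numbers are invented purely for illustration, not taken from any study): a group can lead in total interactions and still have its posts removed at a higher rate, which is exactly the distinction an engagement-based study cannot capture.

    # Toy illustration with made-up numbers: engagement totals and removal rates
    # answer different questions, so one cannot stand in for the other.
    groups = {
        # group: (total interactions, posts removed, posts reviewed)
        "A": (1_000_000, 300, 1_000),  # high engagement, 30% of reviewed posts removed
        "B": (400_000, 100, 1_000),    # lower engagement, 10% of reviewed posts removed
    }

    for name, (interactions, removed, reviewed) in groups.items():
        print(f"Group {name}: {interactions:,} interactions, "
              f"{removed / reviewed:.0%} removal rate")

    # Group A "wins" on interactions while facing three times the removal rate.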
On the post: Joe Biden Wastes A Huge Opportunity To Support Free Speech; Still Wants To 'Revoke' Section 230
Re:
One side preaches “Black Lives Matter”. The other side preaches “White Power”. Do you really believe Twitter should be forced to host both “sides” of that “debate”?
Yes. I may disagree with your viewpoint, but I will defend to the death your right to say it.
On the post: Joe Biden Wastes A Huge Opportunity To Support Free Speech; Still Wants To 'Revoke' Section 230
Re: Re: Re: Re: Re: Re: Re:
"I would point you to my comment below, titled 'Put up or shut up', which lists a way you can demonstrate just how dedicated you are to the idea."
I read that post again and considered what I would get out of it if I did what it asks.
"Do this at least once a day (say, at the first opportunity you have for free time) so you can get a good feel for the kind of content you are trying to foist on others and then come back with the same argument and I might take you seriously,"
I do not want the ability to foist any content upon anyone else. I want people to have the ability to voluntarily hear whatever speech is offered, even if it is unpopular. I don't care if some authority doesn't want me to read The Anarchist Cookbook, or watch a video about someone growing weed in their own backyard. If I want to view it, then I ought to be able to view it.
When you say "not on this platform", you are almost always trying to block people who want to hear a message from hearing it. Don't you find it a worthy endeavor to accommodate both sides, especially with the advent of technology? Allow those who want to hear the message to receive it, while it gets blocked for those who don't?
On the post: Joe Biden Wastes A Huge Opportunity To Support Free Speech; Still Wants To 'Revoke' Section 230
Re:
"You're calling for the work of moderators to be done by people who aren't prepared for what that work entails."
I find this to be quite an elitist view: that only some kind of trained, professional moderator should be permitted to moderate, and that therefore the only way moderation can be achieved is at the behest of a corporation. Given that most moderators between 1995 and 2015 were perfectly fine despite zero training, I reject the claim that because some moderators in recent history were exposed to awful material and had bad experiences, user moderation ought never be attempted.
But no, I am not calling for this work to be done by users, because I do not support the revocation of Section 230, as I originally mentioned. I am saying that if revocation were to occur, free speech platforms could still exist.
On the post: Joe Biden Wastes A Huge Opportunity To Support Free Speech; Still Wants To 'Revoke' Section 230
Re: Re: Re: Re: Re:
"Yes, even then, as to moderate something you first have to see it."
User-based moderation tools would allow people to copy their moderation settings from someone else, and copying data is something computers are very good at. I anticipate that well over 99% of users would choose to copy the moderation settings of others, so the vast majority of people would never see such material.
"And for that you would inflict horrors untold, simply because you don't like the fact that platforms are allowed to have biases."
I would not, because, as I mentioned above, I do not support a revocation of 230. I'm just saying that a revocation of 230 would still result in free speech platforms.
On the post: Joe Biden Wastes A Huge Opportunity To Support Free Speech; Still Wants To 'Revoke' Section 230
Re:
"You're still saying that individual users should have to do the kind of work that sends Facebook moderators into therapy."
No, I would not say that. Individual moderation tools would allow people to take advantage of one of the things computers are great at: copying data. Most users would undoubtedly copy their moderation settings from others who are more inclined to do that work, and whom they also trust.
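As a rough sketch of what such tools could look like (purely hypothetical; no existing platform's API is assumed, and every name below is made up for illustration): each user keeps a local filter list, "copying" someone else's settings is just importing their list, and all filtering happens on the user's side.

    # Hypothetical sketch of user-side moderation settings that can be copied
    # from a trusted user; the platform itself filters nothing.
    from dataclasses import dataclass, field

    @dataclass
    class ModerationSettings:
        blocked_authors: set = field(default_factory=set)
        blocked_keywords: set = field(default_factory=set)

        def copy_from(self, other: "ModerationSettings") -> None:
            # "Copying data" is the easy part: merge the trusted user's lists.
            self.blocked_authors |= other.blocked_authors
            self.blocked_keywords |= other.blocked_keywords

        def allows(self, author: str, text: str) -> bool:
            # The platform hosts everything; hiding happens client-side.
            if author in self.blocked_authors:
                return False
            return not any(word in text.lower() for word in self.blocked_keywords)

    # A new user adopts the settings of someone they trust.
    trusted = ModerationSettings({"spammer42"}, {"example_slur"})
    mine = ModerationSettings()
    mine.copy_from(trusted)
    print(mine.allows("spammer42", "hello"))  # False: filtered on the user's side
    print(mine.allows("alice", "hello"))      # True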
On the post: Joe Biden Wastes A Huge Opportunity To Support Free Speech; Still Wants To 'Revoke' Section 230
Re: Re: Re:
"That is what you would inflict on the public, from children to adults, having everyone wading through that, simply because you don't like the idea of a platform engaging in moderation."
Not if you gave individual users moderation tools. I'm not in favor of "no moderation". Primarily, I'm opposed to politically biased mass moderation by corporations. Giving the moderation tools to users would filter out that type of undesired content, while also satisfying my desire to end the censorship of legitimate free speech.
On the post: Joe Biden Wastes A Huge Opportunity To Support Free Speech; Still Wants To 'Revoke' Section 230
Re:
"The outcome of the first one can be seen already. We call it 4chan, 8chan, or any other service like it."
I can't say I'm familiar with how moderation works on either of those two platforms, since I have never perused them, so I could be wrong on this. However, it is my understanding that neither of them offers moderation tools to individual users. The solution to a world where 230 is revoked is to provide moderation tools to each individual, which should alleviate your concerns.
On the post: Joe Biden Wastes A Huge Opportunity To Support Free Speech; Still Wants To 'Revoke' Section 230
Re:
Ahh, but you overlook the third possibility from my earlier post: user-based moderation. Give users the tools to choose for themselves what gets banned and what does not. The platform itself avoids liability by performing no moderation, à la Cubby v. CompuServe.
On the post: Joe Biden Wastes A Huge Opportunity To Support Free Speech; Still Wants To 'Revoke' Section 230
Re: "free speech online would not be wiped out by revoking 230"
"Well, the last time 230 was dented by FOSTA we lost a ton of porn. That sounds a lot like the chilling of free speech on the internet."
Certainly. The sites that lost that content wanted to continue moderating, and that is what forced them to moderate and censor. But what happens when you cannot moderate at all without incurring liability, à la Stratton Oakmont v. Prodigy?
Still, it does make quite a mess, which is why I don't support revocation. I'm just saying that if revocation happens, a new paradigm will emerge, one that, although very different, will eventually return to a free speech system.
On the post: Joe Biden Wastes A Huge Opportunity To Support Free Speech; Still Wants To 'Revoke' Section 230
"In other words, he wants to go even further than Trump and literally wipe out free speech online."
While I don't support a complete revocation, everyone should know that free speech online would not be wiped out by revoking 230. Online platforms would simply be unable to moderate an interactive computer service themselves without assuming liability. Thus, platforms would need to allow all speech by default and put the moderation tools into the hands of the users themselves. Let the people decide what to ban and what not to ban.
On the post: Arizona AG Sues Google For Location Data Failures, After Telecom Got A Wrist Slap For Far Worse Behavior
Local Resident
Some police departments near where I live deliberately target out-of-towners for speeding tickets. Writing piles of citations against the locals is a good way for the mayor to lose the next election, but out-of-towners don't get to vote.
So I wonder whether the enforcement actions are related to a company's presence in the state. Go easy on the local telecom companies, because they provide local jobs and campaign contributions, and they've got lobbyists. But an out-of-state big tech corporation looks like a big payday. How much of a local presence has Google established in Arizona? Could they open local branch offices and hire lobbyists to stave off enforcement?
On the post: No, Twitter Fact Checking The President Is Not Evidence Of Anti-Conservative Bias
Daily Proof of Bias
The bias happens nearly daily. For today's example, Twitter is working to censor Trump over this tweet, supposedly for "glorifying violence", even though he was denouncing violence.
https://twitter.com/WhiteHouse/status/1266342941649506304?s=20
Meanwhile, liberal YouTuber Ajia G very much glorified violence and was reported to Twitter. Twitter has refused to take any action.
https://twitter.com/QueenOfGeele/status/1265634003585044486?ref_src=twsrc%5Etfw
It is very clear that Twitter is biased against conservatives and will censor them, but not liberals.
On the post: Can You Protect Privacy If There's No Real Enforcement Mechanism?
The Deception Option
One nefarious possibility might be to make it legal to provide false identity information to some corporations. People might be able to pay (in cash!) to get a fake ID card made and buy a cell phone. The carrier is still building a profile, but for whom? Continue to pay the subscription with one of those prepaid debit cards, then burn the phone after about a year and get a new one. Of course, law enforcement won't like this, because they are also taking advantage of the lack of privacy. But without an enforcement mechanism, we might at least have an option to fight back, if some minor tweaks to existing law are made.
On the post: Two Cheers For Unfiltered Information
Is it?
"Because the Daily Stormer is specifically curated to highlight neo-Nazi speech, we can safely assume that it won't host valuable information. Its gatekeepers explicitly select fascistic speech for publication before the content goes live and are unlikely to grant a platform to anything else."
The Daily Stormer doesn't sound to me like an unfiltered platform. It sounds like a very filtered platform, designed to publish and promote the viewpoint of its owner(s), a view that most of us detest. I consider open and unfiltered platforms to be far superior to it.
On the post: Trump's Final Executive Order On Social Media Deliberately Removed Reference To Importance Of Newspapers To Democracy
Platform or Publisher
Newspapers are publishers, not platforms. They do not enjoy Section 230 protection, and they very much have been held liable in court for publishing problematic articles.