On the post: Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act
Re: Re: Reform
Because corporations that engage in censorship are just as dangerous as governments that engage in censorship.
On the post: Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act
Re: Re: Reform
Do you really want the government saying you're a) a "platform" or "publisher" in one area and whether or not you're b) "biased" or "unbiased" in another?

No, the internet service itself would choose. They could declare things such as "We are a free speech platform. We will not censor messages based upon a political bias", or perhaps they could say "We're a bunch of hardcore Democrats. We will gladly publish those with a left wing viewpoint, and we will ban anyone who sounds like a Republican".

Also, the government wouldn't determine whether there is bias or not. Instead, private court action would allow aggrieved parties to present their case if they believe that the internet service violated its own terms of service.
On the post: Hello! You've Been Referred Here Because You're Wrong About Section 230 Of The Communications Decency Act
Reform
An excellent dissertation on how Section 230 currently works. However, many of us who want to see Section 230 reforms already know what it says, and can see how it operates. That's why we want reforms.
Once a company starts moderating content, it OUGHT to choose between being a platform, or a publisher.
A platform that has political bias is not neutral, and thus OUGHT to lose its Section 230 protections.
Section 230 requires all moderation to be in "good faith", and if moderation is "biased", then you SHOULDN'T get 230 protections.
On the post: It's Long Past Time To Encrypt The Entire DNS
Re: Re:
If you use a DNS service other than the one from your ISP, and the DNS traffic is not encrypted, then I believe an unscrupulous ISP could still monitor, collect, and sell the data. While not perfect, DNS over HTTPS is a step in the right direction.
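To make the mechanics concrete, here's a minimal sketch of a DNS-over-HTTPS lookup against Cloudflare's public resolver, using its documented JSON API; the choice of resolver and the `requests` dependency are just illustration, not an endorsement:

```python
import requests

# Minimal DNS-over-HTTPS (DoH) lookup via Cloudflare's public JSON API.
# Because the query travels inside an ordinary HTTPS connection, an ISP
# on the path sees only TLS traffic to the resolver, not which hostname
# was looked up.
resp = requests.get(
    "https://cloudflare-dns.com/dns-query",
    params={"name": "example.com", "type": "A"},
    headers={"accept": "application/dns-json"},
    timeout=5,
)
resp.raise_for_status()
for answer in resp.json().get("Answer", []):
    print(answer["name"], answer["type"], answer["data"])
```

Of course, the ISP can still see which IP addresses you ultimately connect to, which is part of why this is a step in the right direction rather than a complete fix.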
I would just like for there to be more competition. I'd say it would begin to solve a lot of problems, like DNS privacy. Without competition, outsiders like Mozilla will be the most disruptive factor in this space.
On the post: It's Long Past Time To Encrypt The Entire DNS
ISPs that prioritize data privacy can distinguish themselves with customers, partners and civil society.

If there's competition, then yes. But in many areas with an ISP monopoly or duopoly, rollout is going to be slow, or perhaps nonexistent. This is why Mozilla is taking the lead over ISPs.
On the post: AT&T Has Now Eliminated 41,000 Jobs Since Its $42 Billion Trump Tax Cut
Deficits
Recent reports indicate that AT&T currently has around $200 billion in debt, the result of too many unprofitable mergers and acquisitions. Now they're facing pressure to unload some of those acquisitions to pay down the debt. Last week they announced that they're attempting to sell their Warner Brothers gaming unit. Now they're attempting to lay off employees.
The tax cuts don't have anything to do with the layoffs. Last I checked, the United States bases taxes on income, and there's no such thing as a tax break for layoffs. The layoffs are due to AT&T's poor business decisions. Even with tax cuts, AT&T couldn't overcome its own mistakes.
On the post: Trump's Plan To Turn US Global Media Operations Into State-Sponsored Breitbart... Could Threaten The Open (And Encrypted) Internet
Re:
Perhaps we could shine a little sunshine on the topic. Similar to how we have nutrition labels on food, or energy consumption labels on appliances, there could be labels on "news" organizations, so that readers/viewers could understand just how radical and biased some of the reporting is.
Of course, some of this would be difficult to measure. While measuring campaign contributions might be easy, there are a ton of political ideology tests out there online, and I'm unsure of their scientific methodology or accuracy. And how could you compel news hosts or editors to answer a questionnaire truthfully? It's a lot harder than counting calories or measuring wattage. Perhaps other methodologies could be developed, such as using airtime spent on certain subjects as an indicator of a network attempting to influence its audience.
On the post: Appeals Court Judge: Supreme Court Needs To Unfuck The Public By Rolling Back The Qualified Immunity Doctrine
Underpinning
Ordinarily, I'm tempted to say that the legislative branch should get to decide, but not this time. The Supreme Court created this problem without the legislative branch; it ought to undo it without the legislative branch as well. If the legislature does move to eliminate Qualified Immunity, then who can say that the Supreme Court won't invent some other new obstacle for those who seek redress in court?
On the post: Federal Court Says ICE Can No Longer Enter New York Courthouses Just To Arrest Alleged Undocumented Immigrants
To say that there is no rationale for ICE agents intercepting illegal immigrants on the way to court proceedings isn't exactly true. In April 2019, Massachusetts judge Shelley Richmond Joseph was charged with aiding and abetting the escape of an illegal immigrant from her courthouse. ICE agents announced their presence beforehand, and waited outside the courtroom for the proceedings to conclude. During the proceedings, the judge turned off the tape recorder against district court rules, had a 52-second discussion before turning it back on, released the defendant from custody, had the defendant and his attorney escorted out a back door, then lied about it later. If federal agents can't trust judges to obey the law, taking defendants into custody before the proceedings seems to be the reasonable thing to do.
On the post: Yet More Layoffs Hit Sprint/T-Mobile, Despite Promises This Assuredly Wouldn't Happen
Take It Back
Now that the basis for the merger has been revealed to be untrue, the unemployed workers ought to have recourse to undo the merger and get their jobs back.
Sadly, there probably isn't any legal mechanism to roll back a merger based on faulty premises.
The only solution I can think of is that in the future, employee unions should attempt to sign a long-term contract with the merging companies, and when the corporations refuse, use that refusal as evidence during the merger approval process.
On the post: Content Moderation At Scale Is Impossible: Facebook Kicks Off Anti-Racist Skinheads/Musicians While Trying To Block Racists
Re: Re: Re: Re:
Do you still think human review is reasonably possible, and note that more, smaller sites increases the number of people required overall?

I found a statistic from omnicoreagency dot com which claims there are 5 billion comments posted to Facebook monthly. That may not be as much as 3 billion posts per day, and I'd bet the flagged posts are actually fewer than 0.01%. We're getting into some Drake Equation stuff here.
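To put rough numbers on that Drake Equation hand-waving, here's a quick back-of-the-envelope sketch; the flag rate and per-moderator throughput are assumptions I'm inventing for illustration, not measured figures:

```python
# Back-of-the-envelope moderation math. The comment volume comes from
# the omnicoreagency figure above; the flag rate and reviewer
# throughput are assumed values, chosen only to show the shape of it.
comments_per_month = 5_000_000_000   # ~5 billion Facebook comments/month
flag_rate = 0.0001                   # assume 0.01% of comments get flagged
reviews_per_mod_per_day = 200        # assume one moderator clears 200/day

flagged_per_day = comments_per_month * flag_rate / 30
moderators_needed = flagged_per_day / reviews_per_mod_per_day
print(f"~{flagged_per_day:,.0f} flagged comments/day")
print(f"~{moderators_needed:,.0f} moderators to human-review every flag")
# ~16,667 flagged/day -> ~83 moderators, at these assumed rates
```

The point is only that the answer swings wildly with the assumed flag rate, which is exactly the Drake Equation problem.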
But I get your point. You're probably right that human review of all flagged posts is still too difficult to achieve. A reasonable number of moderators would probably be overwhelmed by some kind of emergency or crisis event. And there are probably a whole lot of other problems with a paid comment model, such as privacy. I'm not saying that it would work.
Mostly what I'm trying to get at is that there might be a way to make life easier for moderators, or to decrease the chances of problems like the one in the main story, by increasing the human-review to automated-scrub ratio. It will never be perfect, and so the Masnick Impossibility Theorem will still hold true. I'm just trying to point out the tradeoffs of the current system, and explain how our current free comment model involves a relatively high moderation cost compared to the relatively low cost to bad actors of creating a new account to abuse. A better system design would probably require a significant model overhaul.
On the post: Content Moderation At Scale Is Impossible: Facebook Kicks Off Anti-Racist Skinheads/Musicians While Trying To Block Racists
Re: Re:
I'm not saying that a paid comment model would succeed. Free commenting DOES have its advantages. I'm just trying to say that the Masnick Impossibility Theorem is correct under the current environment.
First, the question: Would you have created an account and posted here if you had to pay to do so, with all that would entail?

Perhaps not for just one website, but I could imagine a system where someone could pay to have their identity verified, and then this login might work for comments or forum discussions across a number of affiliated websites. I might go for that.
Second, the counter-point: A one-time $20 per user cost would only even possibly pay for moderation fees if you took into account...

You're right, it couldn't pay for ongoing moderation. My hope would be for such a system to prevent human moderators from being overwhelmed by bad actors creating a new account and misbehaving on their first post.
Those that deliberately break the rules seldom find themselves facing significant penalties for it might be a part of it, but the biggest part is simply scale and context, where there's simply too much content...

And this is also why the Masnick Impossibility Theorem is always going to be correct. However, part of the theorem is that we ought to try to improve our existing moderation systems, even if we never achieve a perfect system. Can a system with paid access to forum discussions be developed that is superior to a free access system filled with spam and bots and trolls? I'm not sure, but I'd like to throw out an idea for consideration. Even if it's not practical, it at least explains the cost-shifting phenomenon which makes life miserable for the moderators.
On the post: Content Moderation At Scale Is Impossible: Facebook Kicks Off Anti-Racist Skinheads/Musicians While Trying To Block Racists
Re: Re:
What you are proposing is that every post by every user is moderated,

No, I wouldn't do that. I would only have human review for posts which get flagged by an automated system.
Human review might greatly improve moderation at scale and prevent false positives such as the one in the story. But the number of flagged posts seems impossible to handle due to the overwhelming volume. If there were a cost to the bad actors, then the amount of moderation necessary could decrease across the board. As an example, few people get kicked out of a nightclub within the first few minutes after paying a cover charge. Not many folks are obtuse enough to misbehave immediately after paying $20 at the door.
On the post: Content Moderation At Scale Is Impossible: Facebook Kicks Off Anti-Racist Skinheads/Musicians While Trying To Block Racists
It simply can't manually review every page like that, because while you can figure out why each of these is absurd, or why there's confusion, for every bit of time you spend doing that, you have thousands of trolls, bots, and actual racists signing up and creating mayhem as well.

This seems to be a downside of the Facebook model -- free (there are upsides, too, of course). If it were a paid software product, the purchase cost could become a revenue source for Facebook. I'm sure that even at a one-time upfront price point of, let's say, $20, it would easily pay for a human to do manual reviews. How fast could a moderator ban spam bot accounts at $20 per pop? I would love to try a business model like that.
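As a rough sketch of why a one-time fee might cover that initial review (the fully loaded moderator cost is purely an assumption on my part):

```python
# How much human review time does a one-time $20 signup fee buy?
# The loaded cost per moderator-hour below is an assumed figure for
# illustration; the $20 price point comes from the comment above.
signup_fee = 20.00
loaded_cost_per_hour = 25.00  # assumed wage + overhead per moderator-hour

minutes_bought = signup_fee / loaded_cost_per_hour * 60
print(f"${signup_fee:.0f} buys about {minutes_bought:.0f} minutes of review")
# -> about 48 minutes: enough to vet a new account's first posts,
#    though not enough to fund moderation of that account forever.
```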
Alas, the reality is that people on the internet are anonymous, and social media products are "free". Because we can't push the costs of moderation onto the rule-breakers, it probably is always going to be impossible to moderate at scale.
On the post: Copyright Gets In The Way Of Chef Andres' 'Recipes For The People'; Because The DMCA Takedown System Is Still Broken
Re: Re: Time to shut down Twitter
I am confused as to how this is a Twitter problem? This is a copyright problem. Any other platform would (and does) face the same issues.

Two thoughts:
1.) As a system becomes larger, more automation is needed. What if smaller companies were to use human reviewers, who could provide an actual explanation of what happened, instead of a useless automated form page?
2.) If there wasn't so much of a monopoly, perhaps a platform that gets a reputation for unwarranted takedowns would lose customers. A platform that fights on behalf of its users, sees a bogus takedown, leaves the content up, and fires off a nastygram to the copyright troll might earn a lot of community goodwill. Other platforms might then feel compelled to follow suit.
On the post: AT&T Says Being Misleading About 'Unlimited' Data Plans Was Ok, Because Reporters Told Consumers It Was Being Misleading
Advantage
Still, you'd think that after so many run-ins with regulators and consumers, wireless carriers would simply stop using the word "unlimited" entirely, instead focusing on other metrics like speed or reliability.

In other, unrelated industries, commercial laws have been written describing what sales associates can and cannot say. The bottom line is that using certain language gets results, which translates to money. Companies that were more ethical and voluntarily chose not to use the problematic language were at a clear competitive disadvantage. Some companies were even willing to absorb fines and penalties, justifying them as the cost of doing business for continuing to use the problematic language. Until the fines add up to more than the benefit of the deceptive marketing language, it will continue to be used.
On the post: Senator Hawley's Section 230 Reform Even Dumber Than We Expected; Would Launch A Ton Of Vexatious Lawsuits
Re:
You're right, I think that it's MORE important, and therefore in more need of contractual protection.
On the post: Senator Hawley's Section 230 Reform Even Dumber Than We Expected; Would Launch A Ton Of Vexatious Lawsuits
Re:
I'll give you an example -- if you see enough videos depicting police brutality against citizens, you might know that there's a problem. Even if the police don't release the data about how often this occurs, you still see a problem, and you want something to be done about it.
On the post: Senator Hawley's Section 230 Reform Even Dumber Than We Expected; Would Launch A Ton Of Vexatious Lawsuits
Re: Re: Re:
Under the legislation, they would be permitted to define racist and bigoted speech, and then they could continue to moderate and ban.
On the post: Senator Hawley's Section 230 Reform Even Dumber Than We Expected; Would Launch A Ton Of Vexatious Lawsuits
Re:
1) For things like rental agreements, credit cards, etc, people stand to lose money or a place to stay.

There are a number of personalities and celebrities that have made big bucks by promoting themselves through social media. Aside from the money, you are right that rules are established when something is very important. That we also consider rules to guarantee freedom of speech alongside expensive housing or vehicle purchases shows just how important free speech is.