On the post: [UPDATE] Elizabeth Warren Is NOT Cosponsoring A Bill To Repeal 230
Re:
Social media companies currently moderate based on a form of community standards—i.e., standards they believe will net them the largest possible community.
You would think that's the case, but without more transparency into the companies' decision-making processes, we don't really know for sure (legally mandated transparency is probably a bad idea, of course, but I'm not calling for that). One of my issues with social media platforms is that their policies and actions often don't seem to align with user preferences at all. Other times they do, but only by coincidence. After all, we keep saying "platforms have the right to moderate however they want," but few stop to think about what users want.
On the post: Weeks After Blasting Twitter For 'Strangling Free Expression' GETTR Bans The Term 'Groyper' In Effort To Stop White Nationalist Spam
Re: Re:
Private companies don't have to "support" free speech.
I think that's what "discretion" means. That said, if a company claims to "support free speech," I think it's fair to evaluate whether that's actually true. Like you said, they don't have to support free speech, so if they don't, I would prefer they be honest about it.
On the post: Weeks After Blasting Twitter For 'Strangling Free Expression' GETTR Bans The Term 'Groyper' In Effort To Stop White Nationalist Spam
Re: Re: Refreshing Honesty
Fb and Twitter management love conservatives. It's the rank-and-file employees that "hate" conservatives (assuming we can call it that), and even that is largely because of the culture that arises from being based in California.
On the post: Weeks After Blasting Twitter For 'Strangling Free Expression' GETTR Bans The Term 'Groyper' In Effort To Stop White Nationalist Spam
Re: Re:
Even Reddit was moderated. Otherwise, it would fill up with Child Pornography and copyright infringement and things that would hold it liable.
Yes and no. Most of what is referred to as "moderation" on Reddit is not what people usually think of when discussing content moderation: stuff like removing duplicate threads or "low-quality" posts. What we usually think of as moderation would be Reddit staff ("admins") banning subreddits and/or users for violating the Content Policy, and Reddit was indeed relatively moderation-free in that regard. In the past, Reddit didn't even have a content policy, and (IIRC) a staff member once said something to the effect of "we would be doing something very wrong if we ever have to cite the TOS as a justification for an action." Of course, this was something like 10 years ago, when Reddit had around 30-50 employees. There are many subreddits from that time that would not be allowed to exist today.
On the post: Tanzania's Abuse Of US Copyright Law To Silence Critics On Twitter Should Be A Warning For Regulators Looking To Mess With Content Moderation
Re:
Ehh I think there’s a distinction between “troll” and “person with a bad take”. Trolls don’t genuinely believe the things they say.
On the post: Robert Reich Loses The Plot: Gets Basically Everything Wrong About Section 230, Fairness Doctrine & The 1st Amendment
Re: Re: Re: Re: Re: Re: Re: Re: Re:
I would disagree with the second part of that statement, and I think that's part of the problem.
On the post: No, The Arguments Against Florida's & Texas' Content Moderation Bills Would Not Block All Internet Regulations
Re: Re:
Now, now, let's not call Evelyn Douek and Casey Newton idiots...
On the post: No, The Arguments Against Florida's & Texas' Content Moderation Bills Would Not Block All Internet Regulations
Yes, social media companies and newspapers exercise editorial judgment in different ways. And that's what's so important about the 1st Amendment protections afforded to both. It's because of the 1st Amendment, preventing government from getting involved in such editorial choices that allows things as diverse as newspapers in one area and social media websites in another area to exist. They are exercising their editorial judgment in different ways because the 1st Amendment allows them too -- and suggesting that social media shouldn't qualify for the same level of protection seems totally antithetical to the entire point of the 1st Amendment itself.
I think what the article is hinting at is that the 1st Amendment would look a lot different if it had been drafted with social media in mind.
The argument about privacy laws seems confusing as well. Obviously, there could be some privacy laws that have a serious impact on the 1st Amendment. In the EU, obviously, we've seen the rollout of the GDPR with its "right to be forgotten," which would never pass 1st Amendment scrutiny in the US. But, not every privacy law would include such restrictions on speech, and none of this means that it would be effectively impossible to pass privacy laws in the US. It would only do so if those privacy laws clearly intruded on 1st Amendment protected editorial discretion and rights.
But this does limit the policy options available, such as the "right to be forgotten" that you just mentioned. I'm sure this is not your intention, but once again it makes the 1st Amendment sound like a burden, as if it is a constitutional obligation rather than a good thing in itself.
Ultimately, from a user perspective, I think it comes down to who you trust more to represent your interests. Yes, the laws are unconstitutional, but in their arguments, both the states of Florida and Texas and the internet company trade groups that oppose them portray themselves as standing up for regular social media users. As most people here would argue (and with which I completely agree), the governments of Florida and Texas certainly aren't standing up for users. But this doesn't necessarily mean that the platforms are.
On the post: Twitter's New 'Private Information' Policy Takes Impossible Content Moderation Challenges To New, Ridiculous Levels
I'm honestly perplexed at why Twitter implemented such a broad policy, so difficult to enforce, and so open to abuse. It seems extremely unlike the more thoughtful trust & safety moves the company has made over the past few years.
And that's exactly why I think more transparency is a good thing. Even if a government mandate is unconstitutional, there's nothing stopping platforms from doing it of their own accord.
On the post: Texas Court Gets It Right: Dumps Texas's Social Media Moderation Law As Clearly Unconstitutional
This wouldn't be the Court's place to decide, but two major issues for me would be:
1) Should the policymaking process of social media platforms more closely resemble the policymaking process of governments?
2) Should principles of the criminal justice system (things like due process and the right to appeal) apply to social media moderation?
I would lean towards "yes" on both, but as usual, the problem is scale. Facebook/Meta says "Of course...we can’t meaningfully engage with billions of people," but right now, there is no way for users to directly submit feedback regarding policy to social media platforms. If you have an issue with a law, you can write your Congressman. If you have an issue with a platform's policy, you're stuck whining about it or leaving the platform entirely. Platforms have started to highlight how they work with experts and activist groups to shape their policies, but I would like to see them bring regular users into the mix as well.
There seems to be widespread support for more transparency and a better appeals process (just not as a government mandate), but even as the judgment notes, it's a matter of scale. And honestly I'm not sure how to overcome that.
On the post: Disney Yanks China-Mocking Simpsons Episode From Its Hong Kong Streaming Service
Self-censorship
Disney could have ignored demands to remove the episode, if there actually were any demands. It could be a proactive move by Disney which makes it even worse.
It was most likely a proactive move, which also gives government officials an easy out when confronted on this ("It wasn't our call. Go talk to them"). And this is why I disagree vehemently (but respectfully) when some users here say self-censorship isn't a thing. Yes, you could say that Disney exercised "discretion" to decide "We won't show this here" instead of the Chinese government saying "You can't show this here" but as you can see, there is little practical difference.
This is a perfect example of self-censorship.
On the post: Facebook (Again) Tells Law Enforcement That Setting Up Fake Accounts Violates Its Terms Of Use
Re: If Facebook wanted to discourage cops making fake users...
ban all the personal accounts of employees
IIRC there was a bit of backlash to this because they literally banned every employee, and almost everyone thought that was a bit excessive. Sucks to be the guy mopping floors at NSO Group headquarters, I guess.
On the post: Washington Post Forgets It Fought (And Won) Legal Battle Against Mandatory Transparency; Now Demands Internet Co's Face The Same
I think the issue is that opponents of mandatory transparency tend to focus on how it's unconstitutional instead of how it's a bad idea. Now, Techdirt has written on this topic before, and I'm sure Mike thinks it's a bad idea in addition to being unconstitutional, but without going back to the articles I honestly cannot recall his reasoning. And IMO this just makes the 1st Amendment sound like a burden to those less familiar with US constitutional law: as if mandatory transparency were a great idea, but we can't do it because it's unconstitutional.
On the post: Washington Post Forgets It Fought (And Won) Legal Battle Against Mandatory Transparency; Now Demands Internet Co's Face The Same
Re:
as well as produce standardized archives of the material they remove or otherwise moderate.
i mean, sure. Mandated? Not so much.
What do they mean by this? Something like the Internet Archive/Wayback Machine? If so, this is something I support, but I consider it mainly a historical preservation issue at this point rather than a content moderation, transparency, or 1A issue.
On the post: Facebook Banning & Threatening People For Making Facebook Better Is Everything That's Wrong With Facebook
Re: Re: Re: Re:
You know, there are people who think that corporate personhood shouldn't be a thing. Not saying that I'm one of them, but I spent my student years during a time when "The Corporation" - both the book and the documentary - were popular.
On the post: Facebook Banning & Threatening People For Making Facebook Better Is Everything That's Wrong With Facebook
Re: Re:
And now for the serious part: This is the logical conclusion of allowing too much moderation freedom.
No, it's not. Don't be foolish.
Of course, freedom includes the freedom to make bad decisions. I do wonder, though, whether platforms can be guided to make better decisions. Maybe if we start thinking of them more as communities as opposed to businesses? I'm starting to think that maybe a service like Facebook is simply not suited to be a private for-profit business.
On the post: Facebook Banning & Threatening People For Making Facebook Better Is Everything That's Wrong With Facebook
Re: Re: It's No Wonder
You know what they say about broken clocks.
On the post: Bad Faith Politicians Are Using Social Media Suspension To Boost Their Own Profiles
Re:
How does this fit into dealing with climate change denial, which I expect the platforms to come under scrutiny for sooner or later? Should those who deny climate science be treated the same as anti-vaxxers or neo-Nazis? They arguably cause harm with their disinformation, but aren't quite as obnoxious to regular users or community-disrupting as the latter two groups.
On the post: Bad Faith Politicians Are Using Social Media Suspension To Boost Their Own Profiles
Re: Re:
So what should platforms do about, say, climate change denial (an issue I expect to come up shortly)? Specifically, should they give those who deny climate science the same treatment as anti-vaxxers and neo-Nazis?
On the post: Bad Faith Politicians Are Using Social Media Suspension To Boost Their Own Profiles
Re:
Yes, but content moderation is difficult and expensive, and the big platforms are all for-profit corporations. So you'll need to reconcile this with the fact that, business-wise, it is in their best interests to moderate as little as possible.