On the post: Louisiana & Alabama Attorneys General Set Up Silly Hotline To Report 'Social Media Censorship' They Can't Do Anything About
Re:
"You don’t get, or deserve, a right to vote on the policies of a given platform. You may not even get the chance to have a say in whether a given policy sucks."

Like I said, I'm not going to pretend that I'm entitled to this. What I want to discuss is whether it's a good idea to give users more of a say in policy and moderation decisions. If it is, then there should be ways to convince platform owners that doing so is in their best interests. Public pressure clearly already has some influence on platforms' policy and moderation decisions, but only in a diffuse, indirect way. And I'm not calling for platforms to be democracies: that would (perhaps unfortunately) be unworkable in an online context. Even Wikipedia is explicitly not a democracy.

"But doing so means you live by the rules of the “landlords”"

That's not quite accurate: if it were, I'd be entitled to tenant protection laws. Nor would I expect to be, since I don't pay "rent" to a social media platform.
On the post: Louisiana & Alabama Attorneys General Set Up Silly Hotline To Report 'Social Media Censorship' They Can't Do Anything About
Re: Re:
"Platforms are not democracies. Whoever has the money (the owner(s)) makes the rules, and they aren't up for discussion."

I find that the best-moderated communities solicit feedback from their users. What they do with that feedback is another matter, of course, but at least the opportunity for discussion is there.

"How do you appeal something you actually did, the proof being right out in the open for all to see? It comes down to a matter of interpretation. Need I remind you about 'he who has the gold is the judge, jury and executioner'?"

I'm also thinking about the times when it's something you didn't actually do. With so much moderation being automated, that happens fairly often. My thinking is Facebook's Oversight Board, but turned into an industry group and expanded to cover more platforms. But as you suggested, the problems are scale (how many cases could it actually process?) and "he who has the gold..." (Facebook, and its $175 million's worth).
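To put the scale problem in numbers: even generous assumptions leave a board-style body hearing only a vanishing fraction of appeals. A rough sketch, where both figures are illustrative assumptions rather than reported statistics:

```python
# Back-of-the-envelope arithmetic on the scale problem. Both figures are
# assumptions for illustration, not reported statistics.
appeals_per_year = 10_000_000  # assumed volume of automated-moderation appeals
cases_heard_per_year = 50      # assumed docket of a deliberative, board-style body

print(f"Fraction heard: {cases_heard_per_year / appeals_per_year:.7f}")
print(f"That is 1 in {appeals_per_year // cases_heard_per_year:,} appeals")
# Fraction heard: 0.0000050
# That is 1 in 200,000 appeals
```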
On the post: Louisiana & Alabama Attorneys General Set Up Silly Hotline To Report 'Social Media Censorship' They Can't Do Anything About
Re:
"Given how conservative assholes would try their hardest to rig such polls in favor of policies that would favor conservative voices and spit in the face of marginalized voices? Yes, it would be a bad idea."

I brought this up less as a serious proposal and more to see if anyone else remembers this (or was I just hallucinating?).

"Besides, the final say on such matters should always be up to the people who own the property, not the people who are given the privilege of using it."

Legally and technically, social media platforms are for-profit businesses, but they sell themselves as communities. Their rules are often called "Community Guidelines" or something to that effect. And I don't think it's so ridiculous to say "I live on Facebook/Twitter/YouTube" - such people already exist: you may have heard them referred to as the "Extremely Online". With things like the "metaverse" and tech playing an ever-increasing role in people's lives, the idea of platforms as virtual places where people "live" (as opposed to businesses that people patronize) might soon not seem so far-fetched. Social media platforms are already called "virtual communities", and if the creators of those communities want me to "live" there, then I sure do hope I get to do so as a "citizen" of a democracy, just as I do in my real, physical community.

I mean, in my ideal world, social media companies would be co-ops, or maybe run under a Wikipedia-like model. But that's not really a wish - that's a fantasy.
On the post: Bad Faith Politicians Are Using Social Media Suspension To Boost Their Own Profiles
Maybe Twitter should take user sentiment into account when making a decision like this. It wouldn't even need to ask users directly: Twitter surely has analytics that can classify the tweets reacting to a suspension as positive, negative, or neutral.
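For illustration, a minimal sketch of that kind of tally - assuming nothing about Twitter's internal tooling, using NLTK's off-the-shelf VADER sentiment model, and feeding it a hypothetical list of reply texts:

```python
# Minimal sketch: bucket reaction tweets by sentiment. Uses NLTK's VADER
# model; the `replies` list is hypothetical stand-in data, and nothing
# here reflects Twitter's actual internal analytics.
from collections import Counter

import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

replies = [
    "Good riddance, this suspension was long overdue.",
    "This is outrageous censorship, reinstate the account!",
    "Interesting decision either way.",
]

def label(text: str) -> str:
    """Bucket a tweet by VADER's compound score, using the usual cutoffs."""
    score = sia.polarity_scores(text)["compound"]
    if score >= 0.05:
        return "positive"
    if score <= -0.05:
        return "negative"
    return "neutral"

print(Counter(label(t) for t in replies))
# e.g. Counter({'positive': 1, 'negative': 1, 'neutral': 1})
```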
And I think the answer also varies depending on what you consider the primary goal of content moderation to be. Is it to limit the reach of harmful content? To make the site more user-friendly (and thus attract more users)? More advertiser-friendly? To take a moral stand? To respond to pressure from employees, activists, and/or regulators? These goals aren't mutually exclusive, of course, but which ones you prioritize will shape your strategy.
On the post: Louisiana & Alabama Attorneys General Set Up Silly Hotline To Report 'Social Media Censorship' They Can't Do Anything About
Re:
Well, that's obviously up for discussion.
I distinctly remember Facebook holding a vote on TOS changes back in 2009. Of course, it was set up to fail: they set a quorum (30 percent of all active users had to vote for the result to be binding) and then announced the vote only on its "Facebook Site Governance" page, which you had to seek out and follow, so naturally most people never heard about it.

That would be one way of giving users more of a say, although I'm not sure how wise it would be in this day and age.
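That quorum is doing all the work, so here's a minimal sketch of a quorum-gated vote. The 30% threshold matches Facebook's documented rule; the ballot counts below are hypothetical, merely in the ballpark of the 2009 turnout:

```python
# Minimal sketch of a quorum-gated vote: below the turnout threshold the
# result is merely advisory, which is how Facebook's governance votes were
# structured. The ballot counts are approximate/hypothetical.
QUORUM = 0.30  # Facebook's rule: 30% of active users for a binding result

def tally(votes_for: int, votes_against: int, active_users: int) -> str:
    turnout = (votes_for + votes_against) / active_users
    force = "binding" if turnout >= QUORUM else "advisory only"
    outcome = "passes" if votes_for > votes_against else "fails"
    return f"{force} ({turnout:.2%} turnout): proposal {outcome}"

# Roughly the 2009 situation: ~600k ballots cast, ~200M active users.
print(tally(votes_for=450_000, votes_against=150_000, active_users=200_000_000))
# advisory only (0.30% turnout): proposal passes
```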
On the post: Louisiana & Alabama Attorneys General Set Up Silly Hotline To Report 'Social Media Censorship' They Can't Do Anything About
Agree with this article, but I do wish that platforms would offer users more of a direct say in content moderation decisions, a better appeals process, and a way to contact someone for support/feedback. The difference, of course, is that I'm not going to pretend that I'm in any way entitled to these things (it's a wish, not a demand) because, as Mike wrote, "websites are allowed to do whatever the hell they want on their own websites".