It's Time to Talk About Internet Companies' Content Moderation Operations
from the transparency dept
As discussed in an earlier post, on February 2nd, Santa Clara University is hosting a gathering of tech platform companies to discuss how they actually handle content moderation questions. Many of the participants have written short essays about the issues that will be discussed at the event -- and over the next few weeks we'll be publishing those essays. This first one comes from Professor Eric Goldman, who put together the conference, explaining the rationale behind the event and this series of essays.
Many user-generated content (UGC) services aspire to build scalable businesses where usage and revenues grow without increasing headcount. Even with advances in automated filtering and artificial intelligence, this goal is not realistic. Large UGC databases require substantial human intervention to moderate anti-social and otherwise unwanted content and activities. Despite the often-misguided assumptions by policymakers, problematic content usually does not have flashing neon signs saying "FILTER ME!" Instead, humans must find and remove that content—especially with borderline cases, where machines can't make sufficiently nuanced judgments.
At the largest UGC services, the number of people working on content moderation is eye-popping. By 2018, YouTube will have 10,000 people on its "trust & safety teams." Facebook's "safety and security team" will grow to 20,000 people in 2018.
Who are these people? What exactly do they do? How are they trained? Who sets the policies about what content the service considers acceptable?
We have surprisingly few answers to these questions. Occasionally, companies have discussed these topics in closed-door events, but very little of this information has been made public.
This silence is unfortunate. A UGC service's decision to publish or remove content can have substantial implications for individuals and the community, yet we lack the information to understand how those decisions are made and by whom. Furthermore, the silence has inhibited the development of industry-wide "best practices." UGC services can learn a lot from each other—if they start sharing information publicly.
On Friday, a conference called "Content Moderation and Removal at Scale" will take place at Santa Clara University. (The conference is sold out, but we will post recordings of the proceedings, and we hope to make a live-stream available). Ten UGC services will present "facts and figures" about their content moderation operations, and five panels will discuss cutting-edge content moderation issues. For some services, this conference will be the first time they've publicly revealed details about their content moderation operations. Ideally, the conference will end the industry's norm of silence.
In anticipation of the conference, we assembled ten essays from conference speakers discussing various aspects of content moderation. These essays provide a sample of the conversation we anticipate at the conference. Expect to hear a lot more about content moderation operational issues in the coming months and years.
Eric Goldman is a Professor of Law, and Co-Director of the High Tech Law Institute, at Santa Clara University School of Law. He has researched and taught Internet Law for over 20 years, and he blogs on the topic at the Technology & Marketing Law Blog.
Thank you for reading this Techdirt post. With so many things competing for everyone’s attention these days, we really appreciate you giving us your time. We work hard every day to put quality content out there for our community.
Techdirt is one of the few remaining truly independent media outlets. We do not have a giant corporation behind us, and we rely heavily on our community to support us, in an age when advertisers are increasingly uninterested in sponsoring small, independent sites — especially a site like ours that is unwilling to pull punches in its reporting and analysis.
While other websites have resorted to paywalls, registration requirements, and increasingly annoying/intrusive advertising, we have always kept Techdirt open and available to anyone. But in order to continue doing so, we need your support. We offer a variety of ways for our readers to support us, from direct donations to special subscriptions and cool merchandise — and every little bit helps. Thank you.
–The Techdirt Team
Filed Under: companies, content moderation, filtering, intermediary liability, internet platforms, moderation
Reader Comments
I received an app to do moderation jobs through lionbridge
That being said, accuracy was subjective; the system basically relied on human tuning, and they were clearly aiming to pay people a very low income to do the moderation.
"UGC service's decision to publish" -- No, "platforms" neither make the decision nor PUBLISH, or they'd be liable: "platforms" are practically IMMUNE under Section 230, CDA.
When you get a BASIC fact wrong, the rest can't be right. -- And yeah, goes on as below to include "remove", but at best it's lousy phrasing.
Now, The Masnick often sez that "platforms" such as mega-corporations Facebook, Google, and Twitter, have a First Amendment Right to arbitrarily control the speech of "natural" persons, even to stop the "service". The second tier of weenies say persons will just have to find alternatives, but separate in some tiny venue is clearly not equal. So I say that's un-American and that these supra-national corporations need good clear cause under common law to do ANY regulating. -- Sure, for a while they'll pick targets vaguely justified under common law, but soon as everyone is used to speech controlled by corporations, it'll be used against anyone not a rabid globalist.
And proving that this is just more of the same nattering is that there's nothing solid, just announcing yet another "conference" where self-aggrandizing weenies (who believe themselves "insiders") will variously gloss over facts and hide the ongoing censorship by globalist corporations of "conservative" and pro-American persons.
Re: "UGC service's decision to publish" -- No, "platforms" neither make the decision nor PUBLISH, or they'd be liable: "platforms" are practically IMMUNE under Section 230, CDA.
BTW: I note that the interim limit on the length of both subject line and body seems to be removed, so, since it's generally more convenient for all, I'm back to prior practice.
Re: Re: "UGC service's decision to publish" -- No, "platforms" neither make the decision nor PUBLISH, or they'd be liable: "platforms" are practically IMMUNE under Section 230, CDA.
You're far worse than wrong; you're INEFFECTIVE.
This is a typical AC one-liner -- probably by a Techdirt "administrator" -- unable to think of more substance than gainsaying.
Re: "UGC service's decision to publish" -- No, "platforms" neither make the decision nor PUBLISH, or they'd be liable: "platforms" are practically IMMUNE under Section 230, CDA.
Will you next start claiming that a fringe on the flag means all judges are presiding over maritime court and therefore have no jurisdiction? Are you a sovereign citizen who doesn't need a driver's license and can write off your debts with the secret bank account encoded in your birth certificate?
Re: Re: "UGC service's decision to publish" -- No, "platforms" neither make the decision nor PUBLISH, or they'd be liable: "platforms" are practically IMMUNE under Section 230, CDA.
Oh, it's likely "Stephen T Stone", then -- and AS IF you want me to make a point!
Do you even grasp that lawyers invented "corporate" persons out of the blue? They've forced use of the term "natural", not me.
Those who understand will appreciate the accuracy, while those who oppose the Constitution will try to minimize use of the terms.
Re: Re: Re: Re: "UGC service's decision to publish" -- No, "platforms" neither make the decision nor PUBLISH, or they'd be liable: "platforms" are practically IMMUNE under Section 230, CDA.
So who are you, AC, to assert that a rule here is that all questions must be answered? (Including this one.)
Are yet more off-topic one-liners all you have? (It's likely the same one AC, with the magic of Tor Browser.)
Next you'll be typing chicken noises.
Re: Re: Re: Re: Re: Re: "UGC service's decision to publish" -- No, "platforms" neither make the decision nor PUBLISH, or they'd be liable: "platforms" are practically IMMUNE und
No, not THE goddamn Batman! -- I think you're just a goddamn idiot. But thanks for showing the heights of Techdirt discourse.
Re: Re: Re: Re: Re: "UGC service's decision to publish" -- No, "platforms" neither make the decision nor PUBLISH, or they'd be liable: "platforms" are practically IMMUNE under Section 230, CDA.
Seriously, out_of_the_blue, let your fuck buddies up for oxygen every once in a while. Oxygen's not copyrighted, I promise there's no infringement.
Re: Re: "UGC service's decision to publish" -- No, "platforms" neither make the decision nor PUBLISH, or they'd be liable: "platforms" are practically IMMUNE under Section 230, CDA.
hey, cool info, ac! want to use this myself, so would you state what's true, and give a really authoritative source?
or are you just making stuff up?
Re: "UGC service's decision to publish" -- No, "platforms" neither make the decision nor PUBLISH, or they'd be liable: "platforms" are practically IMMUNE under Section 230, CDA.
I am not going to do a reply to each of your individual posts because of course not. Consider this a catch-all for all of your bullshit in this entire conversation thread up to now.
They do. They absolutely do. Google could literally shut down YouTube tomorrow at noon and nobody outside of Google’s management team could stop them.
(Also: “Natural persons” is not a magic catchphrase, Mr. SovCit. You do not strengthen your argument by using that or other SovCit lingo as if it means something to anyone other than you.)
That is correct. If a service boots you from its platform or shuts down, you will have to find an alternative. You cannot force a platform to either host your speech or remain active.
Given how the American socioeconomic system has allowed a handful of major corporations to control the production and distribution of nearly all of our media, calling this system “un-American” seems dishonest.
Again: SovCit buzzwords that you either cannot or will not explain or define do not bolster your argument. As for the “good clear cause” issue, I would think the right of a company to “regulate” speech on any platform it owns is cause enough.
You mean like the people who get suspended from Twitter because they dared to snap back at a harasser and got reported en masse by that harasser’s followers? Or the queer people who had their videos blocked within YouTube search simply because they had LGBT-friendly titles/content? Corporations already fuck this up. There is no perfect moderation. That said, we can accept the imperfection of Internet moderation while working to improve it.
(And for a third time: “Globalist”, like “natural person” and “common law”, is not the magic word you think it is.)
Question! Do you believe companies such as Google should be forced to host advertisements from anyone with the cash to afford an ad spot?
Uh, two things: Twitter reported that his account was legitimately hacked, and Sean Hannity has a television show on which he can complain. And yes, you cannot complain publicly on that platform, but there are plenty of other platforms out there to use. If Twitter dumps you, you can use any given Mastodon or GNU Social instance, Tumblr, YouTube, Blogger, Ello, or any other site that allows for publicly-viewable posts to complain about how Twitter silenced all of your speech forever.
So what?
The average person calls those things “corporations”. Lawyers, reporters, and SovCits are the only assholes who use the phrase “corporate persons” with sincerity. To which one of the three groups do you belong?
"The company I don't like should be required to host that!" "What about the one you DO like?"
Question! Do you believe companies such as Google should be forced to host advertisements from anyone with the cash to afford an ad spot?
A more apt question might be whether they believe that the sites they listed should be forced to host advertisements from anyone with the cash to pay for the spots, no matter the ideological difference or violations in any TOS they might have.
I mean it's one thing to say that the big bad mega-corp should be required to host content no matter the source, but when that gets turned around such that people they are siding with might have to host content they don't agree with, that can be quite another matter if they aren't willing to apply the rule equally.
Re: Re: "UGC service's decision to publish" -- No, "platforms" neither make the decision nor PUBLISH, or they'd be liable: "platforms" are practically IMMUNE under Section 230, CDA.
Absolutely disgusting display of intellectual dishonesty here. You should be ashamed.
Re: Re: Re: Re: "UGC service's decision to publish" -- No, "platforms" neither make the decision nor PUBLISH, or they'd be liable: "platforms" are practically IMMUNE under Section 230, CDA.
Now that he's realized everyone who has a brain is going to call him out on his bullshit he's decided to opt for troll apologetics, soothing the butthurt of MyNameHere and out_of_the_blue. Authoritarian fellatio artists gotta stick together.
Re: "UGC service's decision to publish" -- No, "platforms" neither make the decision nor PUBLISH, or they'd be liable: "platforms" are practically IMMUNE under Section 230, CDA.
Demanding that others carry your speech is as much an affront to their freedom as demanding that others censor their own or others' speech; both demands are one person trying to force others to bend to their will.
This ignores history: people will figure out the black box and bypass it, another round of secrets will be stuffed into the box, and on it goes in an unending spiral of "we can fix this, if only we spend more!"
They face pressure from politicians & 'well-meaning' groups who want to wrap the world in nerf & pretend bad things don't exist if we can hide them. Corporations are now responsible for what children see & must be punished if they fail to protect 'the children'!!
I didn't get weird because I saw the word nazi online. I've said some horrible cutting things & used bad words; thankfully I've avoided the Moral Moderation League so far. Heh, I've triggered moderation here a couple of times, but magically it isn't for my content, it's for how it looked to code... and it's reviewed & approved quickly. They are trying to stop spammers, & the trade-off is that sometimes if I sound like a bot I get caught too.
We cannot keep blaming Google, FB, Twitter, etc. because you saw something that offended you. The internet is vast, & it's still going to be out there, somewhere. Politicians of a certain age believe Google = Internet. While they are large, they aren't the entire internet.
No code can replace human review, but humans can only handle so much. Without clear training you end up with silly enforcement. See Also: A verified Twitter account can say things worse than unverified & not get the same timeout.
We need to admit, there is bad shit out there in the world.
It is OUR responsibility to prepare ourselves & kids for seeing it rather than just expect someone else once again can be legislated to do the heavy lifting for us.
Oooh and because filters are fun...
Has anyone Seen Kyle? He's about this tall! Have you Seen Kyle? (If you aren't laughing, say it out loud or google "have you seen kyle".)
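(As a toy illustration of why those filters misfire, here's a sketch in Python. This is not any platform's actual filter; the blocklist and the sample messages are made up for the example.)

    # Naive blocklist filter: the classic "Scunthorpe problem".
    # A bare substring match flags innocent text that merely CONTAINS
    # a banned term, and treats a mention as if it were an endorsement.
    BLOCKLIST = ["ass", "nazi"]  # hypothetical blocklist, for illustration only

    def naive_filter(text: str) -> bool:
        """Return True if the text would be blocked."""
        lowered = text.lower()
        return any(term in lowered for term in BLOCKLIST)

    print(naive_filter("Let me finish the class assignment"))  # True: "ass" hides inside "class"
    print(naive_filter("I saw the word nazi online"))          # True: a mention, not an endorsement

Word-boundary matching and allowlists cut down on these misfires, but the borderline cases still land on a human reviewer's desk.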
"Racist Russian trolls are weaponizing freedom of speech to censor marginalized persons!"
vs
"Interest groups are drowning conversation threads on social media with off-topic or otherwise intentionally inflammatory rhetoric"
If you want productive discussion, it starts with you. You *know* there are certain demographics that claim partisan social media management is censoring Conservatives. Why use rhetoric that's likely to provoke them rather than presenting the issue neutrally? You'll get fewer trolls that way.
Re:
Because those same people will weaponize a rhetorically-neutral response against you.
Re: Re:
I am disappointed that you assume all who could possibly disagree are beyond reasonable discourse. The idea behind rhetorical neutrality isn't to eliminate hecklers - it's to reduce their numbers and to clearly present the problem.
I've been seeing quite a lot of reasonable complaints about absurd social media moderation in recent months. Enough complaints that I am predisposed to think partisan ideologues are using the cover of the "moderation" topic to moderate reasonable, non-inflammatory speech they don't like. I'm not saying this as a partisan ideologue - I don't care for tribal politics and I prefer focusing on issues.
If you're serious about furthering what amounts to public policy discourse while minimizing hecklers and trolls, please keep inflammatory rhetoric to a minimum.
If you insist on using inflammatory rhetoric, then you shouldn't be surprised when you get inflammatory responses.
Re: Re: Re:
I assume anyone is capable of reasonable discourse unless they give me reason to think otherwise. Mr. SovCit, for example, does that when he uses SovCit voodoo phrases like “natural persons” and “common law” as if they are legitimate arguments.
Such arguments will not always be addressed by people who can and will argue in good faith. Rhetorical neutrality can still be weaponized against those who would use it. After all, which one sounds like a more effective phrasing for mainstream adoption: “family reunification” or “chain migration”? (Hint: The former is the actual language used in immigration laws; the latter is the phrasing adopted by conservatives and right-wingers.)
Well, yeah. Dogmatic partisan thinking is a hell of a thing.
Or we can tell the trolls and hecklers to fuck off.
Re: Re: Re: Re:
If you truly think anyone is capable of reasonable discourse, why provoke? I'm not saying the nut you're referring to is reasonable, but that using inflammatory rhetoric poisons the well from the get-go and gives ammo to alt-right ideologues to recruit non-partisan trolls to join in their campaigns.
It's unnecessary and escalates conflict for no discernible gain.
I always found that framing to be of the "terrorists vs freedom fighters" variety. Both are emotional and easily co-opted by people looking to add their pet issue to the bogeyman list (see: "ecoterrorism" or most of the War On Terror). It obscures rational cost/benefit analysis and is mainly an excuse to gain mob support behind an issue that the Big Boys And Girls already decided behind closed doors.
Yep. Take the recent Twitch fiasco, where a woman whose mother is from Africa was banned, with moderators calling her racist, over an ethnic cooking video featuring actual food eaten in Africa, not racist caricatures. This woman is sending in her DNA test to Twitch to get this resolved.
I find someone sending in a DNA test to a social platform to be absolutely absurd, yet it's the dogmatic political environment we're in with tech that led to this.
My disappointment remains. If this is an omen of how the debate will play out in the coming months, I expect it to devolve into little more than name calling.
Re: Re: Re: Re: Re:
I provoke only when someone has proven themselves either incapable or unwilling to debate in good faith. If I disagree with someone here, I give them the opportunity to debate me in good faith, which is why I try to hold off on using personal insults unless they demonstrate a lack of good faith in debating me. My experience is my own; your mileage may vary.
“Alt-right” ideology runs on inflammatory rhetoric; if anything, the “alt-right” revels in using it. If they can “own the libs” by provoking someone with liberal/progressive/left-wing political beliefs into the political equivalent of a “Yo’ Momma” battle, the “alt-right” will do exactly that.
That is, in a sense, my point: Whereas “family reunification” is an accurate phrasing of that immigration policy, “chain migration” is the phrase that seeped into the mainstream because it is “punchier”. The latter phrase obscures the actual policy (reuniting families through a legal immigration process) and creates a more frightening image of immigrants, related or not, coming into the country one after the other in a long “chain” that makes the US less safe.
This kind of obscurement tends to come from conservatives/right-wingers far more than it does from liberals/left-wingers, by the way. Look at the whole uproar over “death panels” as another example.
It is. It absolutely is. And Twitch higher-ups should be ashamed of themselves if they do not immediately overturn her ban.
You say that like I’ve never heard it before.
As stated above, I am more than willing to debate in sincere good faith. But when I think someone is trying to take advantage of that, be it by “otherwording” me or by resorting to insults of their own or some other bad-faith debate tactic, I let loose with the venom and bile. I insult Mr. SovCit right off the bat, for example, because they have proven time and time again that they have more interest in trolling this site than in having a good-faith debate on the subject at hand.
If someone avoids the vitriol, I treat them in kind. If they spit in my face, I treat them in kind. You only get what you give.
Re: Re: Re: Re: Re: Re:
That is, in a sense, my point: Whereas “family reunification” is an accurate phrasing of that immigration policy, “chain migration” is the phrase that seeped into the mainstream because it is “punchier”.
Family reunification is the laudable purpose that was in the minds of those who framed the law.
Chain migration is an unintended consequence that can happen when people take advantage of the law.
Unfortunately it is very hard to frame a law in such a way as to allow the former whilst preventing the latter.
We all know of hard cases that occur even under current, relatively relaxed, immigration rules - so the unintended consequences do work both ways.
The real problem is that it will be impractical to have reasonable immigration rules whilst the countries that source immigration are in such a messed up state.
In some places this will probably fix itself within a couple of generations (eg most of Eastern Europe).
In other places it is a matter of better governance and a certain amount of aid/inward investment. (eg Sub Saharan Africa, Latin America).
In still others there are cultural/religious issues that make genuine progress difficult - several of these countries have actually regressed in recent years (eg Iran, Turkey, Pakistan and most of the Middle East/North Africa).
In reality the rhetoric of neither the alt-right nor the left on its own will solve things like this. The left have to admit that the conservatives, even the alt-right, are actually correct on some issues, and the right have to admit that the underlying principles of the left are correct - even though they have messed up in some areas.
Unfortunately both sides are moving apart into their own spaces. The reason that google/facebook etc should not censor the conservative voices is not for the sake of the conservatives - it is for the sake of the left - who need to hear those voices - in order to correct their own mistakes.
Re: Re: Re: Re: Re: Re: Re:
So the racists have gone from opposing merit-based immigration and pushing "family reunification"-based immigration as a way to keep out the darkies, to pushing merit-based immigration and opposing "family reunification" as a way to keep out the darkies.
The party affiliations of the racists may have shifted somewhat, and the exact policies they push to achieve their goals may be different, but the goals themselves seem more or less unchanged.
Re: Re: Re: Re: Re: Re:
I should clarify my stance:
I hope discourse on the topic of moderation moves away from proactive antagonization.
You conflate ideology with culture.
What I'm saying is that, for the past several years, I've seen alt-right propagandists successfully winning the "hearts and minds" of relatively non-partisan persons by focusing on the political insanity within tech and the associated censorship. Responding to an antagonist with antagonism is a-ok, but very often I find the antagonism is proactive.
While I'm a-ok with migrants bringing their families, I find the choices of framing you present to be silly. It's like comparing a cute kitten and a man-eating lion.
Re: Re: Re: Re: Re: Re: Re:
And how do they frame their arguments—in “neutral” terms or in the carefully-chosen phrasing used by “alt-right” ideologues to stoke maximum outrage?
No, my comparison would be like referring to lions as “felines” and someone on the “alt-right” referring to lions as “man-eaters”. The former is intentionally neutral; the latter is used with the intent of scaring people.
Re: Re: Re: Re: Re: Re: Re: Re:
Almost exclusively neutral terms, except when painting right-wing persons as victims or painting the left or left-wing persons as unhinged, cartoonishly bigoted bullies.
They take great pains to avoid mentioning alt-right politics but frequently valorize the ones that have been targeted by activists as victims.
"[Visual novel author] is being censored for his opinions! The Left pressured his publisher into dropping him!"
(I go to see what his views are and find his Visual Novel is labelled alt-Hero)
I admit I find some of their complaints with merit where the antagonists have just been pointlessly cruel to someone who really was minding their own business. But I don't know what has merit vs what is horseshit until I look for myself.
Re: Re: Re:
I am disappointed that you assume all who could possibly disagree are beyond reasonable discourse.
Possibly not. But from the ones who frequent this site?
Pick one, from average_joe/antidirt, out_of_the_blue, Richard Bennett, Hamilton/Shiva Ayyadurai, MyNameHere/horse with no name/Just Sayin', darryl, Technopolitical, and Tara Carreon.
We'll wait. Take as much time as you need.
Re: Re:
Respond to something I actually said next time.
Re:
Care to substantiate these allegations?
Re:
Content moderation on a given platform is neither generalized nor governmental censorship. You can argue that it is censorship for that specific platform, but that is the best argument you can make in that regard. If Twitter bans you, dozens of other services, both free and paid, can and will host your speech—and the government, at least in the United States, cannot stop you from making use of those services.
You cannot force a platform to host your speech. I have no idea why you would even want to try.
Re: Re:
You cannot force a platform to host your speech. I have no idea why you would even want to try.
Because some people apparently believe that they are owed a platform to speak from, consequence free, and that the fact that a non-government individual/company owns that platform should not be any bar to their 'right' to use it.
Or put another way, while they want people to respect their free speech rights, even the 'rights' they don't actually have, they apparently don't believe that those running platforms should enjoy free speech rights of their own, and/or be allowed to exercise any control over their platform in ways that might infringe the non-existent 'rights' they are claiming.
Re: Re: Re: Re:
I want to test the limits of your stance here. If Twitter’s management team decided to shut down the entire service next week, should any government in the world have the right to force a privately owned/operated Internet service into staying open as if it were a public utility?
Re: Re: Re: Re: Re:
That said, I'd think it prudent for there to be plans in place were Facebook to ever fall.
Re: Re: Re: Re: Re: Re:
You referred to “a private company [taking] over the commons”. This phrasing implies a view of services such as Twitter as a public utility, insofar as “the public square” can be considered a public utility. If I am otherwording you, I apologize for the misinterpretation of your comment.
(That said: You technically did not answer the question I raised.)
That sucks for someone who gets banned. It still does not explain why Twitter or Facebook should have “extra obligations” just because a service is widely-used and popular. Give me an actual argument for such “obligations”—what they are and why you believe them to be necessary—and I will have more to say on the matter.
The only plans that need to be made in this regard are the plans by Facebook to destroy every single byte of user data after the service shuts down for good, such that no one can ever use or abuse that data in any way. Other than that, I cannot think of any “plans” that must be made by anyone in the event of a Facebook shutdown.
Re: Re: Re: Re: Re: Re:
It already holds. Twitter is a privately-owned service (“space”) where people congregate, and Twitter management has the right to control who gets to use the service (“come through their doors”). They also have the right to set “standards of behavior” for people who are on the service (“inside their premises”). That Twitter is a “club” with a membership of several million people, spambots, Russian disinformation bots, and James Woods does not change those facts.
Re: Re:
The social commons is whatever everyone congregates at. If no one congregates there, it's no longer the commons.
The reason it's not a utility is because the social media site itself isn't anything special. The infrastructure isn't special. What's special is access to the people.
To stretch your utility comparison further, a power company that doesn't provide power isn't an equitable alternative to one that does. That they both have power lines running to your residence and that both have the ability to carry power to your house doesn't matter if, at the end of the day, only one of them is actually able to deliver electricity.
Do you truly not understand the value of social interaction from a professional and personal standpoint?
The obligations I speak of are transparency and consistency in how moderation is carried out. Less of this ad hoc stuff where pressure groups effectively dictate changes to moderation on a double-standard basis.
An enormous number of people rely on Facebook for social and business purposes. If Facebook were to suddenly disappear, there'd be chaos. To many people, Facebook is the Internet. There are many easily foreseeable consequences that would likely result if Facebook were to shut down. It'd be irresponsible for civil society and governments to not have contingency plans.
Re: Re: Re:
(Psst. You replied to the wrong reply.)
In reference to the idea of the social commons, you brought up the notion of “the public square”. That phrase conjures the image of an actual public square—that is, a public place where members of the general public can legally congregate and speak their minds. Twitter is not such a space, no matter how many people use the service.
This analogy would make more sense if you could show me what “power” is supposed to be in the Twitter-to-power company comparison. And just for the record, any competitor to Twitter is an alternative. Whether it is an “equitable” one really does not matter, since any given Twitter competitor is the same thing as Twitter: a privately owned/operated service.
I do. That still does not explain why Twitter has any responsibility or obligation to function as if it were a true “public square”.
If you had said that from the start, you might have saved yourself quite a bit of typing.
None of those plans, whatever they may be, should involve forcing Facebook management to keep the service open indefinitely because it is a supposed “lynchpin of society”. Besides, relying on one company or service for a narrowly specific purpose, such as people relying on Facebook to be “the Internet” for them, is a disaster waiting to happen.
Re: Re: Re: Re:
The modern day choices are Twitter and Facebook and 4chan.
Social media has consolidated enormously. The options for a well populated speech platform aren't particularly plentiful these days.
"Power" in this context is access to other people. The value in social media isn't to speak, but to converse.
I've made my stance clear that a company which has taken over a large part of the commons should be subject to additional obligations accordingly. I'm not of the mind that social media companies will always behave magnanimously. Why so many people think they will baffles me.
That goes both ways. You reflexively dismissed the concept of social media companies having any degree of obligation in regards to their moderation policy because, presumably, their current enforcement is ostensibly focused against repulsive people.
I said nothing to the effect of forcing a service's management to keep that service open indefinitely. What are you even responding to?
It already is a disaster waiting to happen. That's why I'm saying it'd be prudent to have contingency plans.
Re: Re: Re: Re: Re:
Hence why lots of people are trying to break down the major Internet silos with the Open Web and decentralization efforts. The Mastodon protocol is a good example: Rather than focusing on making a single silo to compete with Twitter, the Masto devs made a protocol that anyone could host and customize to their liking. The federation part of the protocol allows for Masto instances to connect with each other and form a larger network without forcing users to remain on a single instance. For all the guff people might give it, Mastodon is a great idea, and I would love to see its primary idea of decentralized federation become the driving force behind new protocols and services—like, say, a protocol that could compete with Tumblr.
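(As a rough sketch of how that federation works under the hood, assuming the ActivityPub vocabulary Mastodon adopted: when a user on one server follows an account on another, the first server delivers a small JSON activity to the second server's inbox. The instance domains below are hypothetical, and real deliveries are also HTTP-signed, which is omitted here.)

    import json
    import urllib.request

    # Sketch of the "Follow" activity one instance POSTs to another
    # instance's inbox -- no central server sits in the middle.
    follow_activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Follow",
        "actor": "https://instance-a.example/users/alice",  # the follower
        "object": "https://instance-b.example/users/bob",   # the account being followed
    }

    request = urllib.request.Request(
        "https://instance-b.example/users/bob/inbox",  # the remote user's inbox
        data=json.dumps(follow_activity).encode("utf-8"),
        headers={"Content-Type": "application/activity+json"},
        method="POST",
    )
    # urllib.request.urlopen(request)  # delivery step; left disabled since the domains are fake

Because any server that speaks the protocol can exchange activities like this, no single instance has to host, or moderate, everyone.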
Besides the aforementioned obligation to be transparent and consistent with moderation, what other obligations should be thrust upon services like Twitter?
You will find no disagreement here. Corporations are inherently sociopathic and should be treated as such. Ditto for “brands”.
I am not dismissive of the notion. (Twitter would benefit from both transparency and moderation consistency.) My objection comes from the word “obligation” and the context in which it is used here. Twitter is still a privately owned company regardless of its supposed “commons” status. Someone who says it must have further obligations pushed upon it beyond what is prescribed by law tends to skeeve me out.
My wariness of the word “obligation” in this context, your references to “the public commons”, and your talk of “making plans” in the case of a shutdown of a service like Twitter might have led me to think that you believe the government should have at least some say in whether such a service gets to close. If I went overboard with my thinking, I apologize for my fuck-up.
Re: Re: Re: Re: Re: Re:
Nice in theory, but the implementation left much to be desired.
The people that run Mastodon have tried using their "biggest instance" influence to shame and, in the case of the biggest Japanese instance, attempt blackmail to get other instances to adopt their specific moderation and content policies.
I was livid when I saw this in action. It was an attempt to build a censorship-resistant platform and then censor large chunks of it. They said it was to ban Nazis but just like much content moderation these days it went very far beyond that.
I'd say some degree of due process. There are currently double standards with regard to moderation that shouldn't be seen as acceptable.
The desire to make money is definitely a strong one for corporations, but it hardly operates in a vacuum from the personal sentiments of a given corporation's leadership. Just my personal view.
Twitter and Facebook took overt actions to replace countless smaller social media outlets. I don't think this should come without a cost beyond the added infrastructure.
No worries. Any governmental planning I was thinking about would be emergency services whose primary outreach medium right now is Facebook.
Re: Re: Re: Re: Re: Re: Re:
When it comes to that specific instance—on which I have an account, for full disclosure—their rules and code of conduct align with what I want from a social media service. But Mastodon is a protocol, not a service, and it is not limited to mastodon.social alone. The whole point of developing it as a protocol was to avoid siloing the software and its users into one site. Don’t like m.s’s policies? Find an instance that you do like and encourage others to join you. You do not have to give up on Mastodon because of one “bad” instance.
Therein lies a big issue: How would you design due process for moderation of a site with millions of users, billions of posts, and nowhere near enough actual humans to work behind the scenes on fair and consistent moderation?
Not to say I disagree with your notion. It is an ideal to work toward. Getting there, however, would be a logistical nightmare.
Their potential obligations for being the major players must be weighed against their rights. Punishing the major players just for being the major players reeks of petty vengeance.
Ah, see, now that is an interesting notion.
Re: Re: Re: Re: Re: Re: Re: Re:
The social reality is that most people won't go to an instance cut off from the one instance with 80%+ of the English-speaking Mastodon population.
> Therein lies a big issue: How would you design due process for moderation of a site with millions of users, billions of posts, and nowhere near enough actual humans to work behind the scenes on fair and consistent moderation?
> Not to say I disagree with your notion. It is an ideal to work toward. Getting there, however, would be a logistical nightmare.
Perhaps start with making their "case law" public? I get there's concern with the system being gamed but the alternative also has noxious externalities.
>Their potential obligations for being the major players must be weighed against their rights. Punishing the major players just for being the major players reeks of petty vengeance.
I care far more about individual liberty than I do business liberty. That's not to say I want businesses regulated a bunch, but I opt for individual liberty when individual liberty and business liberty are at odds.
Re: Re: Re: Re: Re: Re: Re: Re: Re:
I have neither the energy nor the intellectual wherewithal to continue this discussion past this point. I do, however, have it in me to thank you for this line of discussion. It has given me several things to think about for future discussion on these subjects, and that is about as good as it gets.
It happened in the past..
THEY did it to TV...
NOW they're trying to get the net to do it..
It's a backdoor world.