If Lawmakers Don't Like Platforms' Speech Rules, Here's What They Can Do About It. Spoiler: The Options Aren't Great.
from the no-easy-answers dept
What should platforms like Facebook or YouTube do when users post speech that is technically legal, but widely abhorred? In the U.S. that has included things like the horrific video of the 2019 massacre in Christchurch. What about harder calls – like posts that some people see as anti-immigrant hate speech, and others see as important political discourse?
Some of the biggest questions about potential new platform regulation today involve content of this sort: material that does not violate the law, but potentially does violate platforms’ private Terms of Service (TOS). This speech may be protected from government interference under the First Amendment or other human rights instruments around the world. But private platforms generally have discretion to take it down.
The one-size-fits-all TOS rules that Facebook and others apply to speech are clumsy and unpopular, with critics on all sides. Some advocates believe that platforms should take down less content, others that they should take down more. Both groups have turned to courts and legislatures in recent years, seeking to tie platforms’ hands with either “must-remove” laws (requiring platforms to remove, demote, or otherwise disfavor currently lawful speech) or “must-carry” laws (preventing platforms from removing or disfavoring lawful speech).
This post lays out what laws like that might actually look like, and what issues they would raise. It is adapted from my “Who Do You Sue” article, which focuses on must-carry arguments.
Must-carry claims have consistently been rejected in U.S. courts. The Ninth Circuit’s Prager ruling, for example, said that a conservative speaker couldn’t compel YouTube to host or monetize his videos. But must-carry claims have been upheld in Poland, Italy, Germany, and Brazil. Must-remove claims, which would require platforms to remove or disfavor currently legal speech on the theory that such content is uniquely harmful in the online environment, have had their most prominent airing in debates about the UK’s Online Harms White Paper.
The idea that major, ubiquitous platforms that serve as channels for third party speech might face both must-carry and must-remove obligations is not new. We have long had such rules for older communications channels, including telephone, radio, television, and cable. Those rules were always controversial, though, and in the U.S. were heavily litigated.
On the must-remove side, the FCC and other regulators have prohibited content in broadcast that would be constitutionally protected speech in a private home or the public square. On the must-carry side, the Supreme Court has approved some carriage obligations, including for broadcasters and cable TV owners.
Those older communications channels were very different from today’s internet platforms. In particular, factors like broadcast “spectrum scarcity” or cable “bottleneck” power, which justified older regulations, do not have direct analogs in the internet context. But the Communications law debates remain highly relevant because, like today’s arguments about platform regulation, they focus on the nexus of speech questions and competition questions that arise when private entities own major forums for speech. As we think through possible changes in platform regulation, we can learn a lot from this history.
In this post, I will summarize some possible regulatory regimes for platforms’ management of lawful but disfavored user content, like the material often restricted now under Terms of Service. I will also point out connections to Communications precedent.
To be clear, many of the possible regimes strike me as both unconstitutional (in the U.S.) and unwise. But spelling out the options so we can kick the tires on them is important. And in some ways, I find the overall discussion in this post encouraging. It suggests to me that we are at the very beginning of thinking through possible legal approaches.
Many models discussed today are bad ones. But many other models remain almost completely unexplored. There is vast and under-examined territory at the intersection of speech laws and competition laws, in particular. Precedent from older Communications law can help us think that through. This post only begins to mine that rich vein of legal and policy ore.
In the first section of this post, I will discuss five possible approaches that would change the rules platforms apply to their users’ legal speech. In the second (and to me, more interesting) section I will talk about proposals that would instead change the rulemakers – taking decisions out of platforms’ hands, and putting them somewhere else. These ideas are often animated by thinking grounded in competition policy.
Changing the Rules
Here are five possible approaches that lawmakers could use to change the rules platforms apply to speech that is currently legal, but potentially prohibited under Terms of Service. Many of these models have both must-carry versions (preventing platform removal, demotion, demonetization, etc.) and must-remove versions (requiring those things). Most could be combined with others – in particular, by applying new rules only to the very largest internet platforms.
Ultimately, I am pessimistic about any of these approaches. Some would clearly be unconstitutional in the U.S., because of the new role they would give to the government in regulating protected speech. But I know that others disagree, since all of these models have been raised in recent discussions of platform regulation.
- “Common Carriage” Rules for Major Platforms
Some must-carry proponents suggest that edge platforms like Facebook or YouTube should be treated like government-operated public forums, or like privately-operated common carriers. On this model, platforms would be bound to deliver, and give equal treatment to, any lawful user speech. This must-carry idea doesn’t really have an analog on the must-remove side.
I’m teeing it up first because I think it is a major red herring, particularly in the U.S. discussion. Plenty of platform critics act like they want common carriage rules, but I don’t think many of them actually do, for reasons I’ll discuss here. In any case, such a massive legal change has little future as a political or constitutional matter. That leaves “common carriage” proponents likely, in practice, supporting one of the other models I’ll discuss next, like “indecency” rules or “fairness” rules.
The argument for common carriage is most compelling when applied to the internet ecosystem as a whole. Huge components of the internet, including major infrastructure services like CloudFlare or DNS, are in the hands of private companies. If all of them can exclude legal but unpopular speech – and especially if all of them face pressure from customers, advertisers, and investors to do so – where is that speech supposed to go? Is the public interest served, and are users’ rights adequately protected, if “lawful but awful” speech is driven offline entirely? The right answers to these questions probably come from more layer-conscious internet regulation, treating edge platforms differently from infrastructure – as Annemarie Bridy discusses here. To me, concerns about the overall internet ecosystem at most support extending something like net neutrality rules higher up the internet’s technical “stack” to more infrastructure providers like CloudFlare.
If popular platforms like Facebook or YouTube were required to carry every legal post, not many people would like it. Most obviously, turning these popular forums into free speech mosh pits, and confronting users with material they consider obnoxious, immoral, or dangerous, would increase the real-world damage done by offensive or harmful speech.
Examples of legal online speech that have attracted widespread outrage include “History of why Jews ruin the world” and, horrifically, “How to burn Jews.” Examples of speech protected under recent First Amendment case law include signs held by picketers near a soldier’s funeral saying “thank God for IEDs” and “you’re going to hell.” This speech may be legally protected, but it’s not harmless. Very few people want to see it, and few believe that “responsible” platforms should leave it up. There is a reason public-interest groups and internet users typically urge platforms to take down more legal-but-offensive speech—not less.
Requiring platforms to carry speech that most users don’t want to see would also have serious economic consequences. Among other things, platforms would lose revenue from advertisers who do not want their brands associated with hateful or offensive content. Converting platforms from their current, curated state to free-for-alls for any speech not banned under law would be seen by some as tantamount to nationalization. Platforms would almost certainly challenge it as an unconstitutional taking of property.
Finally, the consequences of a pure common carriage regime for free expression aren’t clear-cut. There are speech rights on all sides of the issue. For one thing, platforms have their own First Amendment rights to set editorial policy and exclude content. (Eugene Volokh’s take on this is here, mine is in “Who Do You Sue.”)
For another, platforms sometimes silence one aggressive user—or many—in order to help another user speak. Without the platform’s thumb on the scales, some speakers, like female journalists barraged with not-quite-illegal threats of rape and violence, might be driven offline entirely.
Even many users who want platforms to reinstate their speech are unlikely to want the entire platform turned into an unmoderated free-for-all. After all, those speakers would lose their own online audience if platforms became so unattractive that listeners departed, or advertisers ceased funding platform operations.
If making platforms carry all legal expression seems too extreme, must-carry proponents might argue that platforms should have to carry some of it, or carry all legal expression but only amplify some of it, or otherwise face new laws for content currently regulated by private Terms of Service. This is where must-carry regulatory proposals begin to converge with must-remove proposals. As the next examples show, both depend on making new legal distinctions between kinds of speech, or setting detailed new rules for platforms’ speech management.
- Indecency Rules
One possible approach would be to create new legally enforced prohibitions, or carriage obligations, for content that is currently governed only by platforms’ Terms of Service. The must-carry version of such a rule would allow platforms to take down highly offensive or indecent content, while requiring them to tolerate more civil or broadly socially acceptable speech. (This seemed to be white supremacist Jared Taylor’s theory when he sued Twitter, saying the platform could remove truly abusive posts, but not “civil” and “respectful” ones like his.) The must-remove version would, like some versions of the UK’s Online Harms proposal, require platforms to take down certain legal but “harmful” material.
Addressing “lawful but awful” speech through new legal indecency rules would be reminiscent of older regulation of broadcast content. But it would be a deeply troubling solution for online speech for several reasons.
First, we are far from a national or even local consensus about what legal content is highly offensive, dangerous, indecent, or otherwise objectionable in the internet context.
Second, rules of this sort would, like their TV and radio equivalents, require substantial and ongoing regulatory intervention and rulemaking to determine which theory of offensive and dangerous speech should prevail. Any attempt to apply such rules to platforms’ rapidly evolving and technically complex ranking algorithms would be particularly challenging.
Third, unlike broadcast regulation, rules limiting online speech would put major new restrictions on ordinary people’s daily communications. That last point is the kicker, as a constitutional matter. In fact, a must-remove rule for indecent speech online would look a lot like the law struck down by the Supreme Court in its seminal internet First Amendment case, Reno v. ACLU. A must-carry rule for decent speech would be little better. It would use state power to pick winners and losers among legal speech – burdening both the speech rights of affected users, and the editorial rights of affected platforms. In the U.S., that would be possible only after a massive re-litigation of current First Amendment law. It is hard to imagine the current Supreme Court blessing “indecency” rules of either sort.
- Fairness Rules
To avoid new content-based speech regulation, lawmakers might instead let platforms enforce any TOS rules as long as they are “fair” or “neutral.” That has real problems, too. Substantive fairness rules, requiring fair treatment of all viewpoints, are very hard to imagine. Would Twitter, for example, have to give equal treatment to Democrats, Republicans, Socialists, Monarchists, and Anarchists? To people who like creamy peanut butter and people who like crunchy? To people urging us to invest in sketchy tech start-ups and people urging us not to?
At the extreme, this would simply become a common carriage regime. Other potential rules here would be reminiscent of the ones the FCC applied to older communications channels. The equal-time doctrine, for example, required broadcasters to give equal airtime to qualified candidates for public office. And the fairness doctrine required “fair” coverage for issues ranging from workers’ rights to nuclear power plant construction. Critics – mostly on the political right – charged that the doctrine was unworkable and that it effectively enabled selective enforcement by an unaccountable bureaucracy. The FCC itself eventually decided the doctrine was unconstitutional, and President Reagan vetoed a bill that would have brought it back.
Could fairness rules be content-neutral, and instead simply require platforms’ rules and processes to be transparent and consistently enforced? That’s the basis of a lot of recent proposals in the U.S. It’s hard to imagine how most such rules could be content-neutral in practice, though. If the FTC or FCC got to review platforms’ decisions about individual pieces of content, assessing whether the company applied its own Terms of Service fairly or consistently, the scope for de facto government intervention in speech rules would be enormous.
The operational burden of reviewing even just Facebook’s or Twitter’s daily torrent of appeals would be crushing. In the fantasy world where a government agency assumed that job, we’d just move from politically polarized anger at how Facebook manages that impossible task to polarized anger at how the government manages it.
At the extreme, a fairness-based model could be entirely procedural – requiring platforms to publish clear content rules and let users appeal takedown decisions, without having any government review of the appeals’ outcome. (In the U.S., that’s the PACT Act’s model right now.) That’s unlikely to reassure critics who believe platforms are biased, excluding important voices, or simply lying about their rules, though. And even a purely disclosure-based standard like this one would face real First Amendment pushback, as an interference with editorial discretion or form of compelled speech.
- Amplification Rules
Another model might target recommended, ranked, or amplified content, applying different rules to different aspects of platforms’ operations. As Tim Lee put it, we could “think of Facebook as being two separate products: a hosting product and a recommendation product (the Newsfeed).” Applying different rules to hosting on the one hand and amplification on the other would allow for somewhat more nuanced legal regimes.
A must-remove (or must-demote, or must-disfavor) version of this rule might prohibit amplification of extreme or offensive, but lawful, content. As a legal matter in the U.S., that would face the same constitutional barriers as any other government rule disfavoring lawful speech. As the Supreme Court put it in U.S. v. Playboy, “[t]he distinction between laws burdening and laws banning speech is but a matter of degree. The Government’s content‐based burdens must satisfy the same rigorous scrutiny as its content‐based bans.”
A must-carry version of this rule might require platforms to keep hosting all legal speech, but let them apply their own rules to ranking, recommendation, or amplification systems. Disfavored speakers would thus not be banished entirely and could in principle be found by other users. This idea has a Communications law flavor as well, as an update of the traditional distinction between infrastructure “network layer” intermediaries, such as ISPs, and user-facing edge or “application layer” services like Facebook or Google. Users of edge services typically want content curation and don’t want must-carry rules—or didn’t use to. The increase in must-carry claims could be taken as a call to rethink the role of major platforms and to start treating them more like essential, network-layer internet infrastructure.
A must-carry rule that effectively applied a common carriage requirement for hosting but left platforms free to set their own amplification rules would have less extreme constitutional problems than some of the other models. Unlike pure common carriage, it would not take away platforms’ editorial discretion entirely. And unlike “indecency” models, it would avoid creating new state-sponsored speech rules.
But it would still require extensive and ongoing regulation, with resulting distortion of market incentives and innovation, to decide what counts as the “infrastructure” and “edge” aspects of any given platform. Would your racist uncle’s tweets simply never appear in your feed but be visible on his profile page, for example? Be hidden from the casual visitor to his profile page but findable through search? If Google dislikes the only web page that has the text string “fifty purple pigeons eat potatoes,” could it rank a hundred other pages above it when users search for those exact words? There are a thousand variants to these questions, all of which will matter to someone. I don’t see clear or easy answers.
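To make the hosting-versus-amplification distinction more concrete, here is a minimal sketch, in Python, of how the two separate questions might be represented: whether a post stays up at all, and on which surfaces it may appear. The surface names and the toy policy function are my own inventions for illustration; they do not describe any real platform's systems.

```python
# Hypothetical sketch of the "host vs. amplify" distinction discussed above.
# Surface names and policy logic are illustrative, not any platform's actual API.
from dataclasses import dataclass
from enum import Enum

class Surface(Enum):
    PROFILE_PAGE = "profile_page"    # shown on the speaker's own page
    FOLLOWER_FEED = "follower_feed"  # pushed into followers' feeds
    SEARCH = "search"                # returned for matching queries
    RECOMMEND = "recommend"          # suggested to non-followers

@dataclass
class VisibilityDecision:
    hosted: bool                     # the must-carry question: does the post stay up at all?
    allowed_surfaces: set[Surface]   # the amplification question: where may it appear?

def decide_visibility(is_lawful: bool, violates_amplification_policy: bool) -> VisibilityDecision:
    """Toy policy: lawful content stays hosted, but the platform's own rules
    decide whether it is ranked, recommended, or surfaced in search."""
    if not is_lawful:
        return VisibilityDecision(hosted=False, allowed_surfaces=set())
    if violates_amplification_policy:
        # Carried but not amplified: visible only to someone who seeks out the profile page.
        return VisibilityDecision(hosted=True, allowed_surfaces={Surface.PROFILE_PAGE})
    return VisibilityDecision(hosted=True, allowed_surfaces=set(Surface))
```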
- Rules for Dominant Platforms
As a final variant, any of these approaches might be applied only to the largest platforms. That wouldn’t avoid the problems identified here entirely. But it could change some of the calculations – both about platforms’ ability to comply with burdensome rules, and about the public interest in making them do so.
Special rules for mega-platforms could also align, at least conceptually, with some Communications law precedent about competition and the First Amendment, by imposing obligations only on those who control access to “scarce” communications channels or audiences.
Lawmakers outside the United States have experimented somewhat with setting different rules for hosting platforms depending on their size. Germany’s NetzDG, for example, holds social networks with more than two million German users to stringent content-removal timelines, as well as higher standards of public transparency. The EU’s 2019 Copyright Directive also includes special obligations for entities hosting “large amounts” of user-generated content, and a (very limited) carve-out for start-ups.
Setting different restrictions depending on size would create problematic incentives for growing start-ups and is generally not a common approach in American law, though it has cropped up in some recent proposals. It is also hard to identify a workable definition of “bigness” that would not inadvertently sweep in complex entities like the thinly staffed, user-managed Wikipedia. (The EU Copyright Directive solves this particular problem, but not the larger need for flexible rules to protect innovation, with a targeted carve-out for “not-for-profit online encyclopedias[.]”)
Changing the Rulemakers
A set of potentially more interesting regulatory models would abandon the effort to dictate platforms’ speech rules, and instead give users more choices among competing rulesets or rulemakers. These approaches are generally grounded more in competition and user autonomy, and less in the idea that we can arrive at better, mutually agreeable rules. Proposals to change the rulemakers will likely appeal more if your primary concern about platforms is their economic dominance, their gatekeeper role in shaping public conversation, or their capacity to “push” unwanted content to users who do not wish to see it. These ideas will probably appeal less if what you want is for platforms to get rid of legal but offensive or dangerous content, or if you are concerned about filter bubbles.
- Empowering Users
For many proponents of online civil liberties (including myself), one go-to solution for problems of platform content moderation is to give users themselves more control over what they see. Settings on YouTube or Twitter, for example, could include dials and knobs to signal our individual tolerance for violence, nudity, or hateful speech. This isn’t a cure-all, but it’s still a great idea. It’s been around at least since the 1990s, when technologies like the Platform for Internet Content Selection (PICS) were supposed to allow users to choose what web content appeared in their browsers. Both the Supreme Court in Reno and Congress in passing CDA 230 relied in part on the expectation that such technologies would empower consumers.
Today, there remains much to be done to give users more control over their information diet. There is perhaps a chicken-and-egg question about the paucity of end-user content controls today and the rise of major, centralized platforms. Did internet users stop demanding these tools once they found search engines and curated hosting platforms to protect them from the worst of the web? Or did they flock to centralized platforms because good tools for true end-user control did not exist?
It may be that such tools have only limited promise as a technical matter, because they depend on accurate content labeling. A user who wanted to block most racial epithets but retain access to rap lyrics, historical documents, and news reporting, for example, could do so only if people or algorithms first correctly identified content in these categories. That’s more work than humans could do at internet scale, and algorithmic filters have so far proven highly unreliable at tasks of this sort.
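For illustration only, here is a minimal sketch of what PICS-style end-user controls might look like in code: per-category tolerance “dials” checked against per-item labels. The category names, severity scale, and thresholds are hypothetical, and the sketch simply assumes the labels exist and are accurate, which, as just noted, is the hard part.

```python
# Hypothetical sketch of user-controlled content settings: per-category "dials"
# compared against per-item labels. Categories and thresholds are invented here.
from dataclasses import dataclass, field

@dataclass
class UserPreferences:
    # 0 = show nothing in this category, 10 = show everything
    tolerance: dict[str, int] = field(default_factory=lambda: {
        "violence": 3, "nudity": 0, "hateful_speech": 1,
    })

def should_show(item_labels: dict[str, int], prefs: UserPreferences) -> bool:
    """Show an item only if every labeled category falls within the user's tolerance.
    This only works if the labels themselves are accurate and comprehensive."""
    return all(
        severity <= prefs.tolerance.get(category, 10)
        for category, severity in item_labels.items()
    )

# Example: a mildly violent news item passes; graphic content does not.
prefs = UserPreferences()
print(should_show({"violence": 2}, prefs))  # True
print(should_show({"violence": 8}, prefs))  # False
```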
Could or should lawmakers actually mandate something like this? I haven’t checked, but I bet that Larry Lessig, Jack Balkin, James Boyle, Julie Cohen, or others of their cohort wrote something smart about this decades ago. My quick take is that truly granular user content controls would be an incredible amount of work to implement, still might not work very well, and could well wind up being ignored as most users simply accept a platform’s default rules. And any law dictating content classifications for user-controlled settings would tread uncomfortably close to creating state-created content mandates, raising inevitable First Amendment questions.
Still, creating end user controls strikes me as a much better approach than replacing Mark Zuckerberg’s speech rules with Mitch McConnell’s or Nancy Pelosi’s. And at least some of the practical limitations – like users not bothering to change complicated settings – might be addressed by introducing competing rulemakers into the system. That’s part of the thinking behind the last idea I’ll discuss: what I call the “Magic APIs” approach.
- Magic APIs
A final model, and the most ambitious one, is what I think of as the Magic APIs approach. APIs, or application program interfaces, are technical tools that allow one internet service to connect with and retrieve information from another. In this model, platforms would open APIs so that competing providers, or cooperating ones, could offer users alternate rule sets for managing the content held by major platforms. This model has a lot of overlap with the ideas Mike Masnick has spelled out, in much greater detail, in his Protocols Not Platforms piece. Jonathan Zittrain has also written about ideas like this in the past, suggesting that “Facebook should allow anyone to write an algorithm to populate someone’s feed.” I’ve heard other people kick these ideas around for at least a decade. So the idea isn’t entirely new, but it’s also never had a really thorough public airing.
The Magic APIs approach would be broadly analogous to telecommunications “unbundling” requirements. These aim to insert competition into markets subject to network effects by requiring incumbents to license hard-to-duplicate resources to newcomers. In the platform context, this would mean that Google or Facebook opens up access to the uncurated version of its service, including all legal user-generated content, as the foundation for competing user-facing curation services. Competitors would then offer users some or all of the same content, with their own new content ranking and removal policies, and potentially a whole new user interface. Users might choose a G-rated version of Twitter from Disney or a racial justice-oriented lens on YouTube from a Black Lives Matter-affiliated group, for example. As Mike Masnick put it in 2018:
Ideally, Facebook (and others) should open up so that third party tools can provide their own experiences—and then each person could choose the service or filtering setup that they want. People who want to suck in the firehose, including all the garbage, could do so. Others could choose other filters or other experiences. Move the power down to the ends of the network, which is what the internet was supposed to be good at in the first place.
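To make that a bit more concrete, here is a minimal, purely hypothetical sketch of what such an interface could look like: the platform exposes an uncurated firehose of lawful posts, and competing third-party curators supply the ranking and removal rules. Every name here is invented for illustration; no platform actually offers an API like this today.

```python
# Hypothetical sketch of a "Magic API" contract: the platform supplies the
# uncurated content, and a user-chosen curator applies its own rules.
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Post:
    post_id: str
    author: str
    text: str

class Curator(Protocol):
    """Interface a third-party curation service would implement."""
    def curate(self, firehose: list[Post]) -> list[Post]:
        """Return the posts to show, in ranked order, under this curator's rules."""
        ...

class FamilyFriendlyCurator:
    """Toy example: a G-rated 'flavor' that drops posts containing blocked terms."""
    BLOCKED_TERMS = {"example_blocked_term"}  # placeholder; a real policy would be far richer

    def curate(self, firehose: list[Post]) -> list[Post]:
        return [post for post in firehose
                if not any(term in post.text.lower() for term in self.BLOCKED_TERMS)]

def render_feed(firehose: list[Post], curator: Curator) -> list[Post]:
    # The platform hosts everything lawful; the user's chosen curator decides
    # what that user actually sees, and in what order.
    return curator.curate(firehose)
```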
This, too, has some 1990s precedent. Back then, the Internet Content Rating Association (ICRA) proposed something like it for web content. In addition to user-controlled content settings at the browser level, ICRA planned to let users subscribe to add- or block-lists from trusted third parties. That would have spared users the work of setting detailed content preferences, and webmasters the work of creating detailed labels. But “federated” models along these lines, including Mastodon and a content sharing protocol plan announced by Twitter, have prompted renewed interest of late. Cory Doctorow’s recent work on “adversarial interoperability” is relevant here, too. It tackles the problem from the other end, proposing reforms of laws like the Computer Fraud and Abuse Act (CFAA) and anti-circumvention provisions of the DMCA so that new market entrants can proceed directly to offering users services of this sort – with or without platforms’ permission.
Letting users choose among competing “flavors” of today’s mega-platforms would solve some First Amendment problems by leaving platforms’ own editorial decisions undisturbed, while permitting competing editors to offer alternate versions and include speakers who would otherwise be excluded. In old school Competition terms, this would serve a media pluralism goal. But many platforms would object to compulsory Magic APIs on innumerable grounds, including the Constitution’s prohibition on most state takings of property. And some governments might not actually like decentralizing platform power in this way. After all, a more diversified Internet would lack the chokepoints for regulation and informal pressure that incumbents like Google and Facebook provide today. In any case, this approach would also create a slew of new problems—beyond the ordinary downsides of regulatory intervention.
The technology required to make Magic APIs work would be difficult, perhaps impossible, to build well; that’s the “magic” part. (But previously magical things happen all the time. So I’m always curious what platform engineers would say about this if their lawyers weren’t watching.) There are also serious questions about how such a system would interact with the complex, multiplayer technical infrastructure behind online advertising. I haven’t seen any published work addressing this issue, but I suspect it’s a pretty big one.
Having multiple content-rating efforts for the same material would also be massively inefficient. Few competitors will emerge if they all must make redundant investments to translate the same Kurdish-language post, for example, or to identify the local significance of a particular flag, song, or slang term. As I recently told Slate, in an ideal world that more “objective” part of the work could be centralized or conclusions could be shared. But the subsequent exercise of judgment about whether to exclude the content would be spread out, subject to different rules on different services.
Perhaps most dauntingly, the Magic APIs model has real privacy issues. After all, the entire Cambridge Analytica scandal was created by Facebook APIs passing information about users to a third party service provider. As a technical matter, Magic APIs would be pretty much the same thing. What would need to be (very) different for Magic APIs to succeed would be user consent and privacy protection. The hardest part would be defining the right rules for data about a user’s friends. Users aren’t likely to migrate to competing services or “flavors” of existing services if that means losing touch with, and not seeing content from, the people they know. Unless we can identify the legal and technical framework to reconcile this kind of data sharing with laws like the GDPR, Magic APIs and similar ideas will provide no way forward.
The good news is that lots of smart people are already wrangling with this problem. Discussions about data portability and interoperability have brought together experts in privacy and competition law to sort out issues at the intersection of their fields. The Magic APIs model just adds another layer of complexity by incorporating speech concerns.
Conclusion
It is far from clear to me that any of these regulatory approaches have upsides that outweigh their downsides. But the really interesting ones have not yet been given a good tire-kicking by technical and legal experts, either. I find that encouraging. We have a lot to talk about, especially at the intersections of competition and speech law. (Some thinkers worth namechecking at that intersection are Harold Feld, Barbara van Schewick, Tim Wu, Blake Reid, and Berin Szoka. Importantly, they don’t all agree with each other.)
For the most ambitious proposals, like the Magic APIs model, we need to add privacy law to the mix. And realistically, we can’t stop there. We’ll make bad laws if we don’t factor in issues like equal protection and anti-discrimination. We’ll fail to solve real-world problems if we don’t address laws like the CFAA. And if lawyers have these conversations without technologists in the room, we’re in for a bad time. Getting experts out of our silos to have those conversations won’t be easy, but it may lead to real ways forward. At the risk of going very off-brand, that prospect makes me cautiously optimistic.
Daphne Keller directs the Program on Platform Regulation at Stanford's Cyber Policy Center, and was formerly the Director of Intermediary Liability at CIS. Her work focuses on platform regulation and Internet users' rights. She has published both academically and in popular press; testified and participated in legislative processes; and taught and lectured extensively. Her recent work focuses on legal protections for users’ free expression rights when state and private power intersect, particularly through platforms’ enforcement of Terms of Service or use of algorithmic ranking and recommendations. Until 2015 Daphne was Associate General Counsel for Google, where she had primary responsibility for the company’s search products. She worked on groundbreaking Intermediary Liability litigation and legislation around the world and counseled both overall product development and individual content takedown decisions.
Filed Under: content moderation, intermediary liability, regulations, section 230, speech, speech rules
Reader Comments
Rule #1
Do nothing.
Both sides pushing for regulatory changes are intent on controlling what other people see, hear or listen to. They are not interested in free speech, but rather in controlling society so that it conforms to their ideal, be it a puritan hell, or a fascist hell.
In the context of the United States, the puritan hell and the fascist hell are often one and the same.
Re:
You don't persuade people by lumping all the people you hate together. They'd just hate you for it, perhaps even more than they hate each other.
A good strategy for a hatemonger.
Re:
Not quite, because like Lutherans and Calvinists, the two sides will fight wars over minor differences of interpretation.
Re:
"Both sides"?
Which exactly two sides are these?
Re: Re:
Those who want to decide what should be removed, and those wanting to force their speech onto platforms.
Re: Re: Re:
Those that exercise their 1A rights, and those that hate the first group.
This is some thoroughly exposited thinking material.
I would also agree that most of these ideas, indeed any possible ideas, for legally forcing moderation choices (including many of those currently extant) are not likely to be good (or constitutional in the US).
Some people just don't accept that everyone has the same rights to have the views and opinions that they do. ("How dare they be different from me!!!") It never seems to occur to them that they don't have the right to control the world... at all.
It's not all that difficult for any platform to provide tools that allow a subscriber to block whatever content they don't want to see, read, hear, whatever FOR THEIR OWN CONSUMPTION (the digital equivalent of hiding your head in the sand)--if they can't see it, then it won't frighten them. Or they can just leave.
Re:
It's just as easy for people who are not welcome on one site to go to a site where they are welcome. If those sites are not popular, well, you have a measure of the popularity of the speech and the people on them. To argue that a platform should allow your speech is imposing yourself on its users.
Re: Re:
"To argue that a platform should allow your speech is imposing yourself on its users."
But in the sniffling words of the Stormfront refugees miffed that the Facebook crowd gets to hang out in airy rooms of fluff and kittens while they all sit in the dank cave they chose to make their home where every corner is a toilet..."It's unfair! Everyone else should also live in a septic tank!".
Re:
4 sentences, 11 fallacies.
Impressive, but not surprising from this troll.
Re:
Some people just don't accept that everyone has the same rights to have the views and opinions that they do. ("How dare they be different from me!!!") It never seems to occur to them that they don't have the right to control the world... at all.
If you had typed that out in defense of companies being able to decide who gets to use their platforms, both because of property rights and because of the right of association, that might have been insightful, but following it up with a 'platforms should stop telling people they can't post stuff on their property!' was just priceless and well worth a chuckle, so thanks for the laugh.
It's not all that difficult for any platform to provide tools that allow a subscriber to block whatever content they don't want to see, read, hear,
It's even easier for platforms to tell people, 'Here are the rules regarding acceptable behavior and content, violate them and your ass is out the door'.
Or they can just leave.
Better idea: The platforms can tell the people posting content that people don't want to see to bugger off and find their own platforms, because the majority of their users don't want to deal with assholes and the platform doesn't want to be known as a welcome haven for that sort of person.
... like Lutherans and Calvinists, the two sides will fight wars over minor differences of interpretation.
Until the Anabaptists re-invented religious freedom ... then the Fascists came and violently oppressed them all--Barth and Bonhoeffer alike. (Actually, for that matter, so did the Bolsheviks, who in true antifa fashion, named themselves for what they were not.)
Please point to one group of antifascists in the United States — just one! — that is both (a) organized and active to the point where it can be considered an actual organization instead of just a bunch of people with similar ideologies, and (b) expressing fascist ideologies and showing support for fascist politicians.
I’ll wait.
Re:
"...who in true antifa fashion..."
You know how we can tell where you're coming from, Baghdad Bob? Last I checked even the FBI, after being pushed hard from above to find ANY way to link "antifa" to an organization or terror movement, gave up with empty hands and stated the overwhelming majority of violent US organizations are right-wing extremists.
If there's such a thing as "antifa" as a coherent organization then it's one so skilled and widespread no intelligence agency on the planet has unmasked it.
And I say this knowing that sooner or later that will just make some right-wing glue-sniffer exclaim "The Illuminati!".
If you wouldn't give the power to your worst enemy...
Ignoring for a moment the first amendment concerns, the real fatal flaw with having the government, any government, in a position of determining what speech is and is not allowed outside of very narrow categories like slander and threats of immediate impending violence is that it requires that you trust that they will wield that power responsibly, not just currently but as long as that power exists.
It's one thing for a person to believe that the people currently in office will make rules regarding speech that platforms should be required to keep or prohibit that they agree with, but with the passage of time it's not a question of whether someone you don't agree with will get that power, but how quickly. Given that, as flawed as it may be to leave it up to the platforms, that's still a much better option than having the government set the rules in that field.
End-user control problems
I'm sure we've all met the guy who tags every email, text or phone message as urgent as whatever platform he's using allows for, because he considers even the most trivial thing urgent - after all, if it wasn't urgent he wouldn't be sending it. Even if it's a picture of his cat playing with a laser dot.
There are the people who deliberately misuse tags or channels to reach a wider audience rather than using the ones topical to their message. Spammers in general tend to attach EVERY possible tag to their content in hopes of exactly that.
Going old school, Caller ID promised to let you screen your calls before picking up the handset. Blockers and spoofers put an end to that.
Even older school, shady senders routinely disguise envelopes to look like things you want or need to open. The fake subpoenas that some DAs have sent out are a recent example of that.
As long as any of that is possible, end-user controls won't be as effective as hoped.
Re: End-user control problems
"Even if it's a picture of his cat playing with a laser dot."
...I'd have to say that sort of picture at least serves to brighten one's day which gives it a leg up on much other communication which appears to serve no purpose other than to wear out your eyeballs, spare time and sanity.
But yeah, the end user desperately needs his communications provider or platform to moderate at scale because that Nigerian prince has an extended family.