Re: Re: Re: Re: Stephen T. Stone enters the game
Thanks for the link. Yes, I am very familiar with the difficulty of doing content moderation when the volume of content is so enormous. Have you seen this one, the Wall Street Journal reporting that Facebook executives scaled back a successful effort to make the site less divisive when they found that it was decreasing their audience share, which means fewer people seeing ads and less profit? https://www.wsj.com/articles/facebook-knows-it-encourages-division-top-executives-nixed-solutions-11590507499 With those kinds of extreme "for profit" motives, it will always be hard for FB to moderate content successfully -- don't you think? They have no real desire to do it, because it would decrease their profits. Isn't that obvious?
But here's what's so interesting to me about the article you referenced -- when I read about how impossible mass moderation is, how no amount of technology, or human moderation assisted by algorithms, will ever succeed, that to me is shouting, loud and clear: "Rein in this FB monster. Enforce product liability on this very dangerous 'machine.' Cut it down in size, force it to adopt a subscription model (which will cut the number of users), turn this digital infrastructure into an investor-owned utility with strict rules around taking user data, using hyper-targeted content and ads, etc." That's what that says to me.
But to the author of the article, it says just the opposite. He writes: "we are lucky to have Facebook and other major social media companies based in our country.... require that we protect and help Facebook and other social media companies as they do the best they can to preserve and expand American – and even global – democracy....How do we recognize the value to democracy and support our top social media companies which now dominate the world?"
WTF!! FB has been used to undermine democracy in the US, as well as in the Philippines, Brazil, and dozens of other countries. And saying "Well, at least it's better than China" is unacceptable -- we can send humans to the moon, we can transplant the human heart, but we can't do better than FB or China when it comes to the new media infrastructure?
This FB machine is hopelessly broken. And FYI, it cares nothing about free speech; it only cares about grabbing user attention with scandalous, conspiracy-hyped info that keeps them watching ads = more profits, even if democracies melt down in the process. There is nothing here to protect -- tear it down, start all over. Put rules around FB to return it to its early days, when people could find their long-lost college roommate, post their pet pics, and engage in "social connection," when reach was not unlimited and there was more friction in spreading disinformation. Remember those days? That's no longer FB's mission. Free speech is an accident of FB, not its mission -- it exists only because the more users post, the more ads they see = more profit.
In short, we need to rebuild FB, because FB won't do it by itself.
Re: Re: Stephen T. Stone enters the game
Hi Rocky, to clarify, it is the PLATFORMS that are immunized by Section 230 from what users post, even if users post libel, hate speech, incitement speech, etc. But even if the platforms lose Sec 230 protection, they are still protected by the First Amendment for all other speech that is not libelous, inciting violence, etc.
And I am very aware that there are smaller platforms out there. I share your concern about what might happen to them. But again, they too will still have First Amendment protection. And while I understand they don't have the resources of Big Tech, there is a part of me that thinks, "if you can't monitor your own platform sufficiently, either with humans or algorithms or both, to make sure that it doesn't have hate speech, incitement speech, kiddie porn, libel – in other words, that it doesn't have illegal speech – then maybe you shouldn't be in business." There are too many startups in Silicon Valley that go along with the "move fast and break things" mentality and don't build some reasonable degree of oversight into the core of their business model. That libertarian approach seemed OK in the early years of the Internet, when these platforms were new and Section 230 was put into place in 1996. Now there have been too many incidents and headlines about the problems that this mentality, and the broken media machine it has unleashed on the world, has fostered. It's time to hit reset, wouldn't you say?
BUT -- we also could put some conditions on WHICH platforms would be targeted by regulations, and just target those that are "systemically important," i.e. big enough that they define the market -- in other words Big Tech, defined by something like "number of users" or "annual revenue," etc. That is the approach the EU is taking in its recently proposed Digital Services Act and Digital Markets Act. What do you think of that approach?
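Just to make that idea concrete, here is a minimal sketch, in Python, of what such a "systemically important" threshold test might look like. The cutoff figures and the Platform fields below are my own illustrative assumptions, not numbers taken from Hill's op-ed or the EU texts (the EU proposals use figures of roughly this order, but a real law would define the criteria far more carefully):

from dataclasses import dataclass

# Purely illustrative thresholds -- assumptions for the sake of the example,
# not the actual DSA/DMA criteria.
USER_THRESHOLD = 45_000_000          # monthly active users
REVENUE_THRESHOLD = 6_500_000_000    # annual revenue, in euros

@dataclass
class Platform:
    name: str
    monthly_active_users: int
    annual_revenue_eur: float

def is_systemically_important(p: Platform) -> bool:
    # Count a platform as "systemically important" if it crosses either threshold.
    return (p.monthly_active_users >= USER_THRESHOLD
            or p.annual_revenue_eur >= REVENUE_THRESHOLD)

# Hypothetical examples:
print(is_systemically_important(Platform("BigSocial", 2_500_000_000, 8.0e10)))   # True
print(is_systemically_important(Platform("SmallForum", 50_000, 1.0e6)))          # False

The point is only that "systemically important" can be drawn as a bright line: cross the user or revenue threshold and the stricter rules apply, while smaller platforms stay outside them.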
Stephen T. Stone enters the game
Ah hah, Stephen T. Stone enters the game. And who might you be, sir, rushing to the defense of Masnick and Tech Dirt? Or are you actually Masnick himself (since you seem to be accusing people of masquerading as others)? You have the same sneering, snide-filled style. One thing you have going for you that Masnick did not is brevity. You succinctly outlined, at least, what he was having so much trouble articulating: his fears for what will happen if 230 is revoked. As you wrote: "Either sites would overmoderate speech (which would silence a lot more speech than you might think), undermoderate speech (which would end up chasing decent people off platforms and create another 'Worst People Problem' platform), or stop accepting third-party speech altogether."
You and Mr. Masnick put yourselves out there as "experts", but anyone with even a layperson's understanding of these matters knows that much speech on digital platforms is already protected by the First Amendment. So the real question is: what additional legal protections does Section 230 provide, beyond the First Amendment? And is it necessary?
Mostly it shields platforms from liability for what is often called "illegal speech," and also for "libelous speech." The First Amendment does not protect those, but Section 230 shields the platforms that carry them. In the US context, illegal speech has to do with things like child pornography, incitement to violence (crying "fire" in a crowded theater, etc.), and hate speech, which together make up a fairly narrow class of speech (in places like Germany, "illegal speech" has a slightly broader definition, due to its Nazi past). The First Amendment also has more limited protections for "commercial speech," such as advertising, and Section 230 provides some additional protection for that.
So revoking Section 230 really is about whether the platforms will be held responsible for illegal speech, libelous speech, and for when platforms' algorithms accept advertisements that veer into speech that is either illegal or libelous.
That’s it. That’s a pretty narrow range on the broad spectrum of “free speech” that you and Masnick are losing your knickers over. In fact, as I read Hill’s op-ed, I think he is probably right that “revoking Section 230 will likely not have as much impact as its proponents wish or its critics fear.”
Instead, your fears are highly speculative in nature -- about sites overmoderating, undermoderating, or refusing to accept third-party speech altogether. I'll take the last first.
The sites will never refuse third-party speech, because that's how they earn their profits. Whether the third-party speech is "user generated content", from which the platforms help themselves to all the private data of ours they want and monetize it in various ways, or the third-party speech is advertisements, this IS their revenue model. They will never stop accepting third-party speech.
Overmoderating speech? Also not going to happen. In fact, turning poor Masnick's vapid logic on its head, there are many countries around the world that do not have a Section 230-type protection, and yet in those countries Facebook, Twitter and other digital platforms are still a gutter effluent of disinformation and garbage, much of which is not even "illegal" speech. The platforms take next to nothing down. Indeed, whether you have a Section 230-type law or not seems to make no difference to whether these portals are a flood of all kinds of speech.
The only way that your and Masnick's fear about platforms "pulling down all sorts of legitimate content" would ever become realized is if these platforms had any fear of being sued by a particular user. Do you have any idea what resources it takes to sue Facebook, Google, or Twitter? The army of lawyers that they already employ to handle such issues? It takes years and millions of dollars to sue them, with little chance of winning. Heck, even the FTC only reluctantly takes on Facebook and Google, and even when it fined FB $5 billion – the largest fine in the history of the FTC – it was a small speedbump for Facebook. As are the legal costs. This is a company raking in $80 billion in revenue per year.
So your fear that the platforms will be so worried about lawsuits and liability that they will start taking down content is simply not real-world. Both the cost of litigation and even a legal settlement, in the extremely rare circumstance that some lawyer actually beats these platforms in the courtroom, would be just a rounding error in their revenue stream.
Finally, undermoderate speech? Good heavens, they already do that. They really couldn't undermoderate speech any more than they already do. If you don't believe me, see the New York Times exposé about how child pornographers, rapists and other evil users have been using various platforms to live-stream their horrendous criminal acts of violence in real time. And the companies have sometimes refused to take down some of this content, even when requested by the victims.
So, to my way of thinking, as I indicated in the subject line of my first message that you censored -- or hid with a "warning" (like Twitter did to Trump, eh?) -- I think we need to put some rules around Big Tech. In fact, I am not sure I agree with Steven Hill that we should waste time trying to revoke Section 230 because it will be a big fight and I don’t think it’s going to accomplish a whole heck of a lot.
Instead, we should move forward vigorously with the other proposals that he laid out in his op-ed – turning the business model into an investor-owned utility, and forbidding these platforms from taking our personal data without user consent.
I would add other things to that list, such as restricting the hyper-targeted advertising model, and further restrictions on how these platforms design their products to target young people. I note that Italy just mandated that TikTok block users whose age cannot be verified, after the death of a 10-year-old girl who was encouraged via that platform to engage in dangerous behavior. How about some product liability, ever heard of that?
Age verification is extremely important, yet the platforms are putting very little R&D into that because they don’t have any skin in this game. Tell me Mr. Stone/Masnick -- do you think the platforms should come up with a way to verify age? I await your answer....
Censorship on TechDirt
I am deeply shocked to discover that my contribution to this discussion has been deleted by TechDirt. You can see the remnants of my post in the original subject line of "Hill's op-ed sounds good to me -- put some rules around Big T" that other people have commented on. As my original subject line indicated, I disagreed with the viewpoint put forward by Mike Masnick, the author of this TechDirt article, who really offered little substantive critique of Steven Hill's op-ed in the Chicago Tribune, but instead engaged in mischaracterization. When I read Hill's article myself, instead of relying on Masnick, I found Hill's viewpoint to be lucid and compelling, and said so in my original post. I suggest all of you read Hill yourselves. Here's the link: https://www.chicagotribune.com/opinion/commentary/ct-opinion-section-230-big-tech-congress-pro-repeal-20210128-oxzxss4zqvbxniussz5yyt4g74-story.html
It is ironic that TechDirt would CENSOR a comment in a discussion over whether revoking Section 230 would LEAD TO MORE CENSORSHIP (Masnick: "a huge amount of stifling of perfectly fine speech"). Apparently Masnick and TechDirt are against censorship – except for when THEY WANT TO CENSOR SOMEONE!
The other important point I made in my original contribution was to speculate whether Masnick's unhinged hostility towards Hill's reasonable article wasn't perhaps a function of the fact that TechDirt is a project of Copia, which is itself sponsored by Google, Andreessen Horowitz and other "old boy" companies of Silicon Valley. See it for yourself: go to the Copia website and scroll to the bottom: https://copia.is/ I think that is fair game to point out. Apparently Masnick and TechDirt do not -- they don't want you to know that, so they deleted my post.
Masnick and TechDirt have revealed their true face. For those who didn’t know, now we do. This is a hack job website shilling for Silicon Valley!
Hill's op-ed sounds good to me -- put some rules around Big Tech!
After reading the unhinged diatribe from Mike Masnick, I couldn’t resist reading Steven Hill’s op-ed in the Chicago Tribune. I was prepared to read something that sounded, well, kind of idiotic, based on what Masnick said. But I actually found quite the opposite.
Hill clearly identified a major problem with Facebook and other digital media platforms, defined the problem fairly succinctly, and then set about proposing a number of solutions. The solutions ranged from something simple, like revoking Section 230, which is much in the news these days, to other more complicated solutions that Hill seems to think would be more effective.
Hill clearly acknowledges that (quoting here) “revoking Section 230 will likely not have as much impact as its proponents wish or its critics fear.” Including (I would say) unhinged critics like Mike Masnick, who seems to think the sky will fall if Sec 230 bites the dust. Such a Chicken Little!
So Hill then goes on to explore other possible solutions. His discussion of those solutions – including transforming the business model into an “investor-owned utility” – seems quite lucid, practical and rooted in US history. The US has a long history of turning certain systemically-important industries into some version of a utility, including (as Hill points out) telecoms, railroads and power companies. This all seems quite reasonable for discussion to me.
For reasonable people, that is. I wonder if Masnick's viewpoint is colored by who pays him? TechDirt is a wholly owned subsidiary of the Copia Institute. Go to its website at https://copia.is/ and see who some of its funders are. Want to take a guess? Google, Andreessen Horowitz and other troglodytes of Silicon Valley. You are who you hang out with.
Clearly Masnick is not interested in discussing real solutions. He would rather attack those who do, adding no substantive ideas of his own, in defense of the status quo. Tech Dirt should revoke his column and give it to writers like Hill.