How Regulating Platforms' Content Moderation Means Regulating Speech - Even Yours.
from the democratization-of-the-Internet dept
Imagine a scenario:
You have a Facebook page, on which you've posted some sort of status update. Maybe an update from your vacation. Maybe a political idea. Maybe a picture of your kids. And someone comes along and adds a really awful comment on your post. Maybe they insult you. Maybe they insult your politics. Maybe they insult your kids.
Would you want to be legally obligated to keep that ugly comment on your post? Of course not. You'd probably be keen to delete it, and why shouldn't you be able to?
Meanwhile, what if it were the other way around: what if someone had actually posted a great comment, maybe with travel tips, support for your political views, or compliments on how cute your kids are? Would you ever want to be legally obligated to delete that comment? Of course not. If you like it, why shouldn't you be able to keep sharing it with your readers?
Now let's expand this scenario. Instead of a Facebook page, you've published your own blog. And on your blog you allow comments. One day you get a really awful comment. Would you want to be legally obligated to keep that comment up for all to see? Of course not. Nor would you want to be legally obligated to delete one that was really good. Think about how violated you would feel, though, if the law could force these sorts of expressive decisions on you, requiring you either to host speech you hated or to remove speech you liked.
And now let's say that your website is not just a blog with comments but a larger site with a message board. And let's say the message board is so popular that you've figured out a way to monetize it to pay for the time and resources it takes to maintain it. Maybe you charge users, maybe you run ads, or maybe you take a cut from some of the transactions users are able to make with each other through your site.
And let's say that this website is so popular that you can't possibly run it all by yourself, so you run it with your friend. And now that there are multiple people and money involved, you and your friend decide to form a company to run it, which both gives you some protection and makes it easier to raise money to invest in better equipment and more staff. Soon the site is so popular that you've got dozens, hundreds, or even thousands of people employed to help you run it. And maybe now you've even been able to IPO.
And then someone comes along and posts something really awful on your site.
And someone else comes along and posts something you really like.
Which gets to the point of this post: if it was not OK for the law to force you to keep the bad comments, or to delete the good ones, when you were small, at what point did it become OK when you got big – if ever?
There is a very strong legal argument that it never became OK, and that the First Amendment interest you had in being able to exercise the expressive choices about what content to keep or delete on your website never went away – it's just that it's easier to see how the First Amendment prevents being forced to make those choices when the choices are so obviously personal (as in the original Facebook post example). But regardless of whether you host a small personal web presence, or are the CEO of a big commercial Internet platform, the principle is the same. There's nothing in the language of the First Amendment that says it only protects the editorial discretion of small websites and not big ones. They are all entitled to its protection against compelled speech.
Which is not to say that as small websites grow into big platforms there aren't issues that can arise due to their size. But it does mean that we have to be careful in how we respond to these challenges. Because in addition to the strong legal argument that it's not OK to regulate websites based on their expressive choices, there's also a strong practical argument.
Ultimately large platforms are still just websites out on the Internet, and ordinarily the Internet allows for an unlimited number of websites to come into being. Which is good, because, regardless of the business, we always want to ensure that it's possible for new entrants to provide the same services on terms the market might prefer. In the case of platform businesses, those may be editorial terms. Naturally we wouldn't want larger companies to be able to throw up obstacles that prevent competitors from becoming commercially viable, and to the extent that a large company's general business practices might unfairly prevent competition, targeted regulation of those specific practices may be appropriate. But editorial policies are not what may prevent another web-based platform from taking root. Indeed, the greater the discontent with the incumbent's editorial policies, the greater the public's appetite for other choices.
The problem is, if we regulate big platforms by targeting their editorial policies, then all of a sudden that loss of editorial freedom itself becomes a barrier to having those other choices come into being, because there's no way to make rules that would only apply to bigger websites and not also smaller or more personal ones, including all the nascent ones we're trying to encourage. After all, how could we? Even if we believed that only big websites should be regulated, how would we decide at what stage of the growth process website operators should lose their right to exercise editorial discretion over the speech appearing on their sites? Is it when they started running their websites with their friends? Incorporated? Hired? (And, if so, how many people?) Is it when they IPO'd? And what about large websites that are non-profits or remain privately run?
Think also about how chilling it would be if the law could make this sort of distinction. Would anyone have the incentive to grow their web presence if its success meant they would lose the right to control it? Who would want to risk building a web-based business, running a blog with comments, or even writing a personal Facebook post that might go viral, if its popularity meant you could no longer control what other expression appeared on it? Far from helping level the playing field to foster new websites seeking to be better platforms than the ones that came before, targeting editorial policies with regulation would only deter people from building them.
Filed Under: bias, content, free speech, regulations, social media