Forcing Internet Platforms To Police Content Will Never Work
from the it's-never-enough dept
For many years now, we've pointed out that whenever people -- generally legacy content companies and politicians -- started pushing for internet platforms like Google, Facebook and Twitter to "police" content, no matter what those platforms did, it was never going to be enough. For example, after years of resisting, Google finally caved to the demands of the MPAA and the RIAA and started using DMCA notices as a signal in its ranking mechanism. This was the first time Google ever allowed outside actors any direct level of control over how it ranked content in organic search. And in doing so, we feared two things would happen: (1) it would just encourage others to start demanding similar powers over Google, and (2) even those to whom Google caved would complain that the company wasn't doing enough. Indeed, that's exactly what happened.
With that in mind, it was great to see UK lawyer Graham Smith (of the excellent Cyberleagle blog, where he regularly opines on issues related to attempts to regulate internet platforms) recently come up with a simple set of rules for how this works, which he dubbed the Three Laws of Internet Intermediaries (might need some marketing polish on the name...).
First Law of Internet Intermediaries: Anything you agree to do will never be enough. 2/4
— Graham Smith (@cyberleagle) October 13, 2017
Second Law of Internet Intermediaries: Whatever you agree to do, you will be taken to task over transparency and accountability. 3/4
— Graham Smith (@cyberleagle) October 13, 2017
Third Law of Internet Intermediaries: However many you placate, there will be another one along tomorrow. 4/4
— Graham Smith (@cyberleagle) October 13, 2017
Note: the first and the last are basically identical to the concerns I raised (though stated more succinctly). But the middle one may be the most interesting here, and worth exploring. Every time we've seen internet platforms agree to start "policing" or "moderating" or "filtering" content, it fails. Often miserably. Sometimes hilariously. And people wonder, "how the hell could this happen?" How could YouTube -- pressured to stop terrorist content from appearing -- take down evidence of war crimes instead? How could Facebook -- pressured to stop harassment and abuse -- leave that content up, but silence those reporting harassment and abuse?
We've argued, many times, that much of the problem is an issue of scale. Most people have no idea just how many of these decisions platforms are forced to make, in a never-ending stream of demands to "do something." And even when they hire lots of people, actually sorting through this stuff and understanding the context takes time, knowledge, empathy and perspective. It's impossible to do that with any real speed -- and it's basically impossible to find enough people who want to dig through that context and who have the skills necessary to make the right decision most of the time. And, of course, that assumes there even is a "right decision" -- when, more often than not, there's a very fuzzy gray area.
And, thus, the end result is that these platforms get pressured into "doing something" -- which is never enough. Then more and more people come out of the woodwork, demanding that more be done (or that similar things be done on their behalf). And then the platforms make mistakes. Many, many mistakes. Mistakes that look absolutely ridiculous when put into context -- without recognizing that those who made the decisions were unlikely to know the context, or to have any realistic way of gaining it. And that leads to the second law Graham points out: not only is it never enough, and not only do more and more people demand things be done for them, but on top of all that, people will get mad about the total lack of "accountability" and "transparency" in how these decisions are made.
Now, I'd agree that many platforms could be much more transparent about how these decisions are made, but that creates another corollary to these laws: the more transparent and accountable you are, (1) the more people game the system and make things worse, and (2) the angrier people get that "bad stuff" hasn't disappeared.
At a time when the clamor for mandatory content moderation on the internet seems to be reaching a fever pitch, we should be careful what we wish for. It won't work well, and the end results may make just about everything worse.
Filed Under: cda 230, graham smith, intermediary liability, section 230