Content Moderation And Human Nature
from the unavoidably-human dept
It should go without saying that communication technologies don’t conjure up unfathomable evils all by themselves. They are a convenience-enhancer, a conduit, and a magnifying lens amplifying something that’s already there: our deeply flawed humanity. Try as we might to tame it (and boy have we tried), human nature will always rear its ugly head. Debates about governing these technologies should start by making the inherent tradeoffs more explicit.
Institutions
First, a little philosophizing. From the social contract onwards, significant resources have been devoted to attempts to subdue human nature’s predilection for self-preservation at all costs. Modern society is geared towards improving the human condition by striving to unlearn — or at least overpower — our more primitive responses.
One such attempt is the creation of institutions, with norms, rules, cultures and, on paper, inherently stronger principles than those rooted deep inside people.
It’s difficult to find ideologies that don’t allow for some need for institutions. Even the most ardent of free market capitalists acquiesce to the — limited, in their mindset — benefits of certain institutions. Beyond order and a sense of impartiality, institutions help minimize humans’ unchecked power in consequential choices that can impact wider society.
One ideal posits that institutions (corporations, parties, governments) given unfettered control over society could rid us of the aspects of our humanity that we’ve so intently tried to escape, bringing forth prosperity, equality, innovation, and progress. The fundamental flaw in that reasoning is that institutions are still intrinsically connected to humanity; created, implemented, and staffed by fallible human beings.
However strict the boundaries within which humans are expected to operate, the potential for partial or even total capture is very high. The boundaries are rarely entirely solid, and even if they were, humans always have the option not to comply. Bucking the system is not just an anomaly; in much of the non-totalitarian world it is revered as a sign of independence and strong individuality, and as the mark of those lauded as mavericks.
Institutional norms tasked with guarding against the worst that humans can offer have proven useless when challenged by people for whom self-preservation is paramount. A current and obvious example is the rise to power of Donald Trump and his relentless destruction of society-defining unwritten rules.
Even without openly challenging an institution, those within it can easily turn towards self-indulgence, reshaping the institution in their own image. The most obvious example is communism, wherein the lofty goal of equality is operationalized through a party-state apparatus meant to distribute the spoils of society’s labor equally. As history has shown, this is contingent on the sadly unlikely situation wherein all those populating the institutions are genuinely altruistic. Invariably, the best-case scenario dissipates, if it ever materialized, and inequality deepens — the opposite of the desired goal.
This is not a tacit endorsement of a rule-less, institution-less dystopia simply because rules and institutions fail at a practically impossible task. Instead, it should be read as a cautionary tale against overextending critical aspects of society and treating them as a panacea rather than as a suitable and mostly successful palliative.
Artificial Intelligence
Given the continual failure of institutions to overcome human nature, you’d think we would stop trying to remove our imperfect selves from the equation.
But what we’ve seen for more than a decade now has been technology that directly and distinctly promises to remove our worst impulses, if not humans entirely, from thinking, acting, or doing practically anything of consequence. AI, the ultimate and literal deus ex machina, is advertised as a solution for a large number of much smaller concerns. Fundamentally, its solution to these problems is ostensibly removing the human element.
Years of research, experiments, blunders, mistakes and downright evil deeds have led us to safely conclude that artificial intelligence is as successful at eliminating the imperfect human as the “you wouldn’t steal a car” anti-piracy campaign was at stopping copyright infringement. This is not to denigrate the important and beneficial work scientists and engineers have put into building intelligent automation tasked with solving complex problems.
Technology, and artificial intelligence in particular, is created, run and maintained by human beings with perspectives, goals, and inherent biases. Just as with institutions, once a glimpse of positive change or success is evident, we extrapolate it far beyond its limits and task the technology with the unachievable and unenviable goal of fixing humanity — by removing it from the equation.
Platforms
Communication technology is not directly tasked with fixing society; it is simply meant as a tool to connect us all. Much like AI, it has seemingly elegant solutions for messy problems. It’s easy to see that thanks to tech platforms, be they bulletin boards or TikTok, distance becomes a trivial obstacle to maintaining connection. Community can be built and fostered online, otherwise marginalized voices can be heard, and businesses can be set up and grown digitally. Even loneliness can be alleviated.
With such a slew of real and potential benefits, it’s no wonder that we started to ascribe to them increasingly consequential roles in society; roles these technologies were never built for, and which lie far beyond their technical and ethical capabilities.
The Arab Spring in the early 2010s wasn't just a liberation movement by oppressed and energized populations. It was also free PR for the now-tech-giants Twitter and Facebook, as various outlets and pundits branded revolutions with their names. It didn't help that CEOs and tech executives seized on this narrative and, in typical Silicon Valley fashion, took to making promises like a politician trying to get elected.
When you set the bar that high, expectations understandably follow. The aura of tech solutionism treats such earth-shattering advancements as ordinary.
Nearly everyone can picture the potential good these technologies can do for society. And while we may all believe in that potential, the reality is that, so far, communication technologies have mostly provided convenience. Sometimes this convenience is in fact life-saving, but mostly it’s just an added benefit.
Convenience doesn’t alter our core. It doesn’t magically make us better humans or create entirely different societies. It simply lifts a few barriers from our path. This article may be seen as an attempt to minimize the perceived role of technology in society, in order to subsequently deny it and its makers any blame for how society uses it. But that is not what I am arguing.
An honest debate about responsibility has to start with a clear understanding of the actual task something accomplishes, the perceived task others attribute to it, and its societal and historical context. A technology that provides convenience should not be fundamental to the functioning of a society. Yet convenience can easily become so commonplace that it ceases to be an added benefit and becomes an integral part of life, where the prospect of it being taken away is met with screams of bloody murder.
Responsibility has to be assigned to the makers, maintainers and users of communication technology, by examining which barriers are being lifted and why. There is plenty of responsibility there to be had, and I am involved in a couple of projects that try to untangle this complex mess. However, these platforms are not the reason for the negative parts of life, they are merely the conduit.
Yes, a sentient conduit can tighten or loosen its grip, divert, amplify, or temporarily block messages, but it isn’t the originator of those messages, or of the intent behind them. It can surely be extremely inviting to messages of hate and division, maybe because of business models, maybe because of engineering decisions, or maybe simply because growth and scale were never handled properly. But that hate and division is endemic to human nature, and to assume that platforms can do what institutions have persistently failed to do, namely entirely eradicate it, is nonsensical.
Regulation
It is clear that platforms, having reached their current size and ubiquity, require updated and smart regulations in order to properly balance their benefits and risks. But the push (and counter-push) to regulate has to start from a perspective that understands both of the fundamental relationships at play: platforms are to human nature what Section 230 (or any other national-level intermediary liability law) is to the First Amendment (or any national-level text that inscribes the social consensus on free speech).
If your issue is with hate and hate speech, the main things you have to contend with are human nature and the First Amendment, not just the platforms and Section 230. Without a doubt, both the platforms and Section 230 are choices and explicit actions built on top of the other two, and they are neither the only nor necessarily the best forms they could take.
But a lot of the issues that bubble up within the content moderation and intermediary liability space come from a concern over where the boundaries lie. That concern is entirely about the broader contexts, not the platforms or the specific legislation.
Regulating platforms has to start from the understanding that tradeoffs, most of which are cultural in nature, are inevitable. To be clear: there is no way to completely stop evil from happening on these platforms without making them useless.
If we were to simply ignore hate speech, the resulting toxicity would eliminate the convenience and in some instances invalidate the very existence of these platforms. That would not be an issue if these platforms were still seen as simple conveyors of convenience, but they are currently seen as much more than that.
Tech executives and CEOs have moved into the fascinating space wherein they must protect their market power to assuage shareholders, tout their products as mind-meltingly amazing to gain and keep users, and imply that their role in society is transient and insignificant to mollify policy-makers, all at the same time.
The convenience afforded by these technologies allows nefarious actors to cause substantial harm to a substantial number of people. Some users get death threats, and some even have their lives end tragically because of interactions on these platforms. Others have their most private information or documents exposed, or experience sexual abuse or trauma in a variety of ways.
Unfortunately, these things happen in the offline world as well, and they are fundamentally predicated on the regulatory and institutional context and on the tools that allow them to manifest. The tools are not off the hook. Their failure to minimize harm, online and off, is due for important conversations. But they are not the cause. They are the conduit.
Thus, the ultimate goal of “platforms existing without hate or violence” is very sadly not realistic. Neither are tradeoffs such as accepting the stripping of fundamental rights in exchange for a safer environment, or accepting that some people suffer immense trauma and pain simply because one believes in the concept of open speech.
Maybe the solution is to not have these platforms at all, or to ask them to change substantially. Or maybe it’s to calibrate our expectations, or, maybe yet, to address the underlying issues in our society. Once we see what the boundaries truly are, any debate becomes infinitely more productive.
This article is not advancing any new or groundbreaking ideas. What it does is identify crucial and seemingly misunderstood pieces of the subtext and spell them out. Sadly, the fact that these more or less evident issues needed to be stated in plain text should be the biggest takeaway.
As a qualitative researcher, I learned that there is no way to “de-bias” my work. Trying to remove myself from the equation results in a bland “view from nowhere” that is ignorant of the underlying power dynamics and inherent mechanisms of whatever I am studying. But that doesn’t mean we should take off our glasses for fear that they influence what we see; doing so would actually make us blind. Instead, we remedy the bias by acknowledging the glasses.
A communication platform (company, tech, product) without inherent biases is impossible. But that shouldn’t mean we can’t ask it to be better, whether through regulation, collaboration or hostile action. We just have to be cognizant of where we stand when asking, of the context and potential consequences, and, as this piece hopefully shows, of what the platform can’t actually do.
The conversation surrounding platform governance would benefit immensely from these tradeoffs being made explicit. Doing so would certainly dial down the rhetoric and the (genuine) visceral attitudes in the debate, as it would force those directly involved in, or invested in, one outcome to carefully assess the context and the general tradeoffs.
David Morar, PhD, is an academic with the mind of a practitioner, currently a Fellow at the Digital Interests Lab and a Visiting Scholar at GWU’s Elliott School of International Affairs.
A good critique
Over on Twitter, Chris Riley, formerly of Mozilla Policy, made a great point:
"Lots of great analysis of "sub-text" in this issue space here, to use your term. Where I'm losing you is what feels like too much sympathy for platforms as conduits, rather than (un?)intentional amplifiers that put more energy and attention into Bad Things, because engagement."
My response to him:
"That is a great critique. I had a part that I self-edited out that talked about how the misunderstanding of their own roles coupled with the lax regulatory environment led to a growth that didn’t bother itself with societal standing. [...] Even more, they aren’t or shouldn’t be off the hook for this. On the contrary. But engagement, sadly, is the metric for so much in our society (including offline news), that it’s also tied to human nature, in my perspective."
As Chris said in his own reply, that is "[f]ully tied to human nature but it has been exploited to the degree of repulsivity."
I think this exchange is an important piece of the puzzle, showing how the argument fits within the larger context.
Tools & Services
TwitterFacebook
RSS
Podcast
Research & Reports
Company
About UsAdvertising Policies
Privacy
Contact
Help & FeedbackMedia Kit
Sponsor/Advertise
Submit a Story
More
Copia InstituteInsider Shop
Support Techdirt