The Tech Policy Greenhouse is an online symposium where experts tackle the most difficult policy challenges facing innovation and technology today. These are problems that don't have easy solutions, where every decision involves tradeoffs and unintended consequences, so we've gathered a wide variety of voices to help dissect existing policy proposals and better inform new ones.

Content Moderation And Human Nature

from the unavoidably-human dept

It should go without saying that communication technologies don’t conjure up unfathomable evils all by themselves. They are a convenience-enhancer, a conduit, and a magnifying lens amplifying something that’s already there: our deeply flawed humanity. Try as we might to tame it (and boy have we tried), human nature will always rear its ugly head. Debates about governing these technologies should start by making the inherent tradeoffs more explicit.

Institutions

First, a little philosophizing. From the social contract onwards, a significant amount of resources has been devoted to subduing human nature’s predilection for self-preservation at all costs. Modern society is geared towards improving the human condition by striving to unlearn, or at least overpower, our more primitive responses.

One such attempt is the creation of institutions, with norms, rules, cultures and, on paper, inherently stronger principles than those rooted deep inside people.

It’s difficult to find ideologies that don’t allow for some need for institutions. Even the most ardent free-market capitalists acquiesce to the benefits (limited, in their view) of certain institutions. Beyond providing order and a sense of impartiality, institutions help minimize the unchecked power of individual humans over consequential choices that can impact wider society.

One ideal posits that institutions (corporations, parties, governments), given unfettered control over society, could rid us of the aspects of our humanity that we’ve so intently tried to escape, bringing forth prosperity, equality, innovation, and progress. The fundamental flaw in that reasoning is that institutions are still intrinsically connected to humanity: created, implemented, and staffed by fallible human beings.

However strict the boundaries within which humans are expected to operate, the potential for partial or even total capture is very high. The boundaries are rarely entirely solid, and even if they were, humans always have the option not to comply. Bucking the system is not just an anomaly; in much of the non-totalitarian world it is revered as a sign of independence and strong individuality, the mark of those lauded as mavericks.

Institutional norms tasked with guarding against the worst of what humans can offer have proven useless when challenged by people for whom self-preservation is paramount. A current and easy example is the rise to power of Donald Trump and his relentless destruction of society-defining unwritten rules.

Even without challenging an institution outright, those within it can easily turn toward self-indulgence, reshaping the institution in their own image. The most obvious example is communism, wherein the lofty goal of equality is operationalized through a party-state apparatus that ostensibly distributes the spoils of society’s labor equally. As history has shown, this is contingent on the sadly unlikely situation wherein everyone populating those institutions is genuinely altruistic. Invariably, the best-case scenario dissipates, if it ever materialized, and inequality deepens: the opposite of the desired goal.

This is not a tacit endorsement of a rule-less, institution-less dystopia simply because rules and institutions fail at a practically impossible task. Instead, it should be read as a cautionary tale against overextending critical aspects of society, treating them as a panacea rather than as a suitable and mostly successful palliative.

Artificial Intelligence

Given the continuous failure of institutions to overcome human nature, you’d think we would stop trying to remove our imperfect selves from the equation.

But for more than a decade now we’ve seen technology that directly and distinctly promises to remove our worst impulses, if not humans entirely, from thinking, acting, or doing practically anything of consequence. AI, the ultimate and literal deus ex machina, is advertised as the solution to a large number of much smaller concerns. Fundamentally, its solution to these problems is to remove the human element.

Years of research, experiments, blunders, mistakes and downright evil deeds have led us to safely conclude that artificial intelligence is as successful at eliminating the imperfect human as the “you wouldn’t steal a car” anti-piracy campaign was at stopping copyright infringement. This is not to denigrate the important and beneficial work scientists and engineers have put into building intelligent automation tasked with solving complex problems.

Technology, and artificial intelligence in particular, is created, run and maintained by human beings with perspectives, goals, and inherent biases. Just like institutions, once a glimpse of positive change or success is evident, we extrapolate it far beyond its limits and task it with the unachievable and unenviable goal of fixing humanity — by removing it from the equation.

Platforms

Communication technology is not directly tasked with solving society; it is simply meant as a tool to connect us all. Much like AI, it has seemingly elegant solutions for messy problems. Thanks to tech platforms, be they bulletin boards or TikTok, distance becomes a trivial obstacle to maintaining connection. Community can be built and fostered online, otherwise marginalized voices can be heard, and businesses can be set up and grown digitally. Even loneliness can be alleviated.

With such a slew of real and potential benefits, it’s no wonder that we started to ascribe to these technologies increasingly consequential roles in society; roles they were never built for, far beyond their technical and ethical capabilities.

The Arab Spring in the early 2010s wasn’t just a liberation movement by oppressed and energized populations. It was also free PR for the now-tech-giants Twitter and Facebook, as various outlets and pundits branded revolutions with their names. It didn’t help that CEOs and tech executives seized on this narrative and, in typical Silicon Valley fashion, took to making promises like a politician trying to get elected.

When you set the bar that high, expectations understandably follow. The aura of tech solutionism makes such earth-shattering advancements seem ordinary.

Nearly everyone can picture the potential good these technologies can do for society. And while we may all believe in that potential, the reality is that, so far, communication technologies have mostly provided convenience. Sometimes this convenience is in fact life-saving, but mostly it’s just an added benefit.

Convenience doesn’t alter our core. It doesn’t magically make us better humans or create entirely different societies. It simply lifts a few barriers from our path. This article may be read as an attempt to minimize the perceived role of technology in society in order to then deny it and its makers any blame for how society uses it. But that is not what I am arguing.

An honest debate about responsibility has to start with a clear understanding of the actual task something accomplishes, the task others perceive it to accomplish, and its societal and historical context. A technology that provides convenience should not be fundamental to the functioning of a society. Yet convenience can easily become so commonplace that it ceases to be an added benefit and becomes an integral part of life, one whose prospective removal is met with screams of bloody murder.

Responsibility has to be assigned to the makers, maintainers and users of communication technology by examining which barriers are being lifted and why. There is plenty of responsibility to go around, and I am involved in a couple of projects that try to untangle this complex mess. However, these platforms are not the cause of the negative parts of life; they are merely the conduit.

Yes, a sentient conduit can tighten or loosen its grip; it can divert, amplify, or temporarily block messages. But it isn’t the originator of those messages, or of the intent behind them. It can surely be extremely inviting to messages of hate and division, maybe because of business models, maybe because of engineering decisions, or maybe simply because growth and scale were never handled properly. But that hate and division are endemic to human nature, and to assume that platforms can do what institutions have persistently failed to do, namely eradicate them entirely, is nonsensical.

Regulation

It is clear that platforms, having reached their current size and ubiquity, require updated and smart regulation in order to properly balance their benefits and risks. But the push (and counter-push) to regulate has to start from a perspective that understands two fundamental relationships: platforms are to human nature what Section 230 (or any other national-level intermediary liability law) is to the First Amendment (or any national-level text that inscribes the social consensus on free speech).

If your issue is with hate and hate speech, the main things you have to contend with are human nature and the First Amendment, not just the platforms and Section 230. Without a doubt, both the platforms and Section 230 are choices, explicit constructions built on top of the other two, and neither is necessarily the only or best form of what it could be.

But many of the issues that bubble up within the content moderation and intermediary liability space come from concern over where the boundaries lie. That concern is really about those broader contexts, human nature and the consensus on free speech, rather than about the platforms or the specific legislation.

Regulating platforms has to start from the understanding that tradeoffs, most of which are cultural in nature, are inevitable. To be clear: there is no way to completely stop evil from happening on these platforms without making them useless.

If we were to simply ignore hate speech, we’d eliminate the convenience and in some instances invalidate the very existence of these platforms. That would not be an issue if these platforms were still seen as simple conveyors of convenience, but they are currently seen as much more than that.

Tech executives and CEOs have moved into a fascinating space wherein, all at the same time, they must protect their market power to assuage shareholders, tout their products as mind-meltingly amazing to gain and keep users, and imply that their role in society is transient and insignificant in order to mollify policy-makers.

The convenience afforded by these technologies allows nefarious actors to cause substantial harm to a substantial number of people. Some users receive death threats, or even have their lives end tragically because of interactions on these platforms. Others have their most private information or documents exposed, or experience sexual abuse or trauma in a variety of ways.

Unfortunately, these things happen in the offline world as well, and they are fundamentally predicated on the regulatory and institutional context and the tools that allow them to manifest. The tools are not off the hook: their failure to minimize harm, online and off, is due for important conversations. But they are not the cause. They are the conduit.

Thus, the ultimate goal of “platforms existing without hate or violence” is, very sadly, not realistic. Neither are the alternative tradeoffs tenable: being ok with stripping fundamental rights in exchange for a safer environment, or being ok with some people suffering immense trauma and pain simply because one believes in the concept of open speech.

Maybe the solution is to not have these platforms at all, or to ask them to change substantially. Or maybe it’s to calibrate our expectations, or, maybe yet, to address the underlying issues in our society. Once we see what the boundaries truly are, any debate becomes infinitely more productive.

This article is not advancing any new or groundbreaking ideas. What it does is identify crucial and seemingly misunderstood pieces of the subtext and spell them out. Sadly, the fact that these more or less evident issues needed to be said in plain text should be the biggest takeaway.

As a qualitative researcher, I learned that there is no way to “de-bias” my work. Trying to remove myself from the equation results in a bland “view from nowhere” that is ignorant of the underlying power dynamics and inherent mechanisms of whatever I am studying. However, that doesn’t mean we take off our glasses when trying to see, for fear of the glasses influencing what we see; that would actually make us blind. We remedy the problem by acknowledging our glasses.

A communication platform (company, tech, product) that doesn’t have inherent biases is impossible. But that shouldn’t mean that we can’t try to ask it to be better, either through regulation, collaboration or hostile action. We just have to be cognizant of where we stand when asking: the context, the potential consequences and, as this piece hopefully shows, what the platform actually cannot do.

The conversation surrounding platform governance would benefit immensely from these tradeoffs being made explicit. Doing so would certainly dial down the rhetoric and the (genuine) visceral attitudes in the debate, as it would force those directly involved in, or invested in, one outcome to carefully assess the context and the general tradeoffs.

David Morar, PhD, is an academic with the mind of a practitioner, currently a Fellow at the Digital Interests Lab and a Visiting Scholar at GWU’s Elliott School of International Affairs.


Filed Under: content moderation, human nature


Reader Comments



    ECA (profile), 24 Aug 2020 @ 4:22pm

    Technology, the un-needed burden, but...

    A society, starting from less tech, that loves independence, but sees tech as the solution to ?????.
    What is the solution?
    You take away from us the requirement that is/was demanded. You also take other things, while SOME may not see/need this tech.
    Many advancements take away jobs, which is good for the corps, and for control of resources.
    Look up the job of being a telephone operator. Over time it has become very automated, and the corps can keep the prices up and show big savings, with fewer employees needed.
    For all the savings from automated tech, who is winning or losing? The corps don’t seem to want to give back any savings. But the phone system is larger than ever, and supposedly cheaper and cheaper, and so are the paychecks for those on top, as there are so FEW lower than they are anymore.
    AI is interesting because of its uses, and how far do you want to go? If we created a computer dedicated to economics (a totally unscientific science), we could put numbers into it and watch how things change in a domino effect and spread around. We could watch while one agency changed and raised prices (for little to no reason) and affected everything around it. One little ripple of change that affects so many.
    We could create an AI that was fair and balanced in its perspective of life and times, and watch it work. Could it tell us what needs to be done to FIX THINGS?? Not really, as some idiot would mess everything up just to prove it can’t be done. Only if you could remove all human interaction/intervention from making things better could the AI itself be happy.
    We tend to beat down those that are a bit independent, until they can only do what THEY are told. And even when they fail to do as they were told, even after doing it the WAY the boss wanted, they can be FIRED.
    If we can’t NEED/WANT everyone to be responsible for themselves, as far as they can, we tend not to trust anyone or anything.
    I’ve dealt with many persons who declare they are RIGHT/CORRECT on the basis of a person in a church pounding into their heads, “we are right, and only This is right,” without so much as teaching the history of what is written: the history, the times, the reasoning of what and why things WERE that way. They take it to the point of irrationality, insisting things CAN’T be any other way. Then we go to the other side, where some feel they can make things better if we do this or that and take the HUMAN out of the equation. So what do you do after everything is automated? A lot. Because some idiot will try to destroy what has been created, even if you have made it so he can SIT and do nothing except roll over and die, or anything else he wants.

    We could create a huge utopia, and there are those who would still want control over the system, mostly to input THEIR OWN bias into it. Just watch the Jetsons and see how miserable we can make one person.


    Ben, 9 Oct 2020 @ 12:06pm

    Towards the end of the article you write, “A communication platform (company, tech, product) that doesn’t have inherent biases is impossible. But that shouldn’t mean that we can’t try to ask it to be better, either through regulation, collaboration or hostile action.”

    What do you mean by asking it to be better through “hostile action”?


