morar’s Techdirt Profile

morar

About morar


https://www.linkedin.com/in/davidmorar



Posted on Techdirt Greenhouse - 24 August 2020 @ 12:00pm

Content Moderation And Human Nature

from the unavoidably-human dept

It should go without saying that communication technologies don’t conjure up unfathomable evils all by themselves. They are a convenience-enhancer, a conduit, and a magnifying lens amplifying something that’s already there: our deeply flawed humanity. Try as we might to tame it (and boy have we tried), human nature will always rear its ugly head. Debates about governing these technologies should start by making the inherent tradeoffs more explicit.

Institutions

First, a little philosophizing. From the social contract onwards, a significant amount of resources has been allocated to attempting to subdue human nature’s predilection for self-preservation at all costs. Modern society is geared towards improving the human condition by striving to unlearn — or at least overpower — our more primitive responses.

One such attempt is the creation of institutions, with norms, rules, cultures and, on paper, inherently stronger principles than those rooted deep inside people.

It’s difficult to find ideologies that don’t allow for some need for institutions. Even the most ardent of free market capitalists acquiesce to the — limited, in their mindset — benefits of certain institutions. Beyond order and a sense of impartiality, institutions help minimize humans’ unchecked power in consequential choices that can impact wider society.

One ideal posits that institutions (corporations, parties, governments) given unfettered control over society could rid us of the aspects of our humanity that we’ve so intently tried to escape, bringing forth prosperity, equality, innovation, and progress. The fundamental flaw in that reasoning is that institutions are still intrinsically connected to humanity; created, implemented, and staffed by fallible human beings.

However strict the boundaries in which humans are expected to operate, the potential for partial or even total capture is very high. The boundaries are rarely entirely solid, and even if they were, humans always have the option not to comply. Bucking the system is not just an anomaly; in a large portion of non-totalitarian regimes it’s revered as a sign of independence, of strong individuality, and as a characteristic of those lauded as mavericks.

Institutional norms tasked with guarding against the worst of what humans can offer prove useless when challenged by people for whom self-preservation is paramount. A current and facile example is the rise to power of Donald Trump and his relentless destruction of society-defining unwritten rules.

Even without challenging the institution outright, a turn towards self-indulgence is easily achievable, forging a path to reshaping the institution in one’s own image. The most obvious example is communism, wherein the lofty goal of equality is operationalized through a party-state apparatus meant to distribute the spoils of society’s labor equally. As history has shown, this is contingent on the sadly unlikely situation wherein all those populating the institutions are genuinely altruistic. Invariably, the best-case scenario dissipates, if it ever materialized, and inequality deepens — the opposite of the desired goal.

This is not a tacit endorsement of a rule-less, institution-less dystopia simply because rules and institutions fall short at a practically impossible task. Instead, it should be read as a cautionary tale against overextending critical aspects of society and treating them as a panacea rather than as a suitable and mostly successful palliative.

Artificial Intelligence

Given the continual failure of institutions to overcome human nature, you’d think we would stop trying to remove our imperfect selves from the equation.

But what we’ve seen for more than a decade now has been technology that directly and distinctly promises to remove our worst impulses, if not humans entirely, from thinking, acting, or doing practically anything of consequence. AI, the ultimate and literal deus ex machina, is advertised as a solution for a large number of much smaller concerns. Fundamentally, its solution to these problems is ostensibly removing the human element.

Years of research, experiments, blunders, mistakes and downright evil deeds have led us to safely conclude that artificial intelligence is as successful at eliminating the imperfect human as the “you wouldn’t steal a car” anti-piracy campaign was at stopping copyright infringement. This is not to denigrate the important and beneficial work scientists and engineers have put into building intelligent automation tasked with solving complex problems.

Technology, and artificial intelligence in particular, is created, run and maintained by human beings with perspectives, goals, and inherent biases. Just like institutions, once a glimpse of positive change or success is evident, we extrapolate it far beyond its limits and task it with the unachievable and unenviable goal of fixing humanity — by removing it from the equation.

Platforms

Communication technology is not directly tasked with solving society; it is simply meant as a tool to connect us all. Much like AI, it offers seemingly elegant solutions to messy problems. It’s easy to see that thanks to tech platforms, be they bulletin boards or TikTok, distance becomes a trivial obstacle to maintaining connection. Community can be built and fostered online, otherwise marginalized voices can be heard, and businesses can be set up and grown digitally. Even loneliness can be alleviated.

With such a slew of real and potential benefits, it’s no wonder that we started to ascribe to these technologies increasingly consequential roles in society; roles they were never built for, and that lie far beyond their technical and ethical capabilities.

The Arab Spring in the early 2010s wasn’t just a liberation movement by oppressed and energized populations. It was also an opportunity for free PR for the now tech giants Twitter and Facebook, as various outlets and pundits branded revolutions with their names. It didn’t help that CEOs and tech executives seized on this narrative and, in typical Silicon Valley fashion, took to making promises like a politician trying to get elected.

When you set the bar that high, expectations understandably follow. The aura of tech solutionism treats such earth-shattering advancements as ordinary.

Nearly everyone can picture the potential good these technologies can do for society. And while we may all believe in that potential, the reality is that, so far, communication technologies have mostly provided convenience. Sometimes this convenience is in fact life-saving, but mostly it’s just an added benefit.

Convenience doesn’t alter our core. It doesn’t magically make us better humans or create entirely different societies. It simply lifts a few barriers from our path. This article may be seen as an attempt to minimize the perceived role of technology in society, in order to subsequently deny it and its makers any blame for how society uses it. But that is not what I am arguing.

An honest debate about responsibility has to fundamentally start with a clear understanding of the actual task something accomplishes, the perceived task others attribute to it, and its societal and historical context. A technology that provides convenience should not be fundamental to the functioning of a society. Yet convenience can easily become so commonplace that it ceases to be an added benefit and becomes an integral part of life, where the prospect of it being taken away is met with screams of bloody murder.

Responsibility has to be assigned to the makers, maintainers and users of communication technology, by examining which barriers are being lifted and why. There is plenty of responsibility there to be had, and I am involved in a couple of projects that try to untangle this complex mess. However, these platforms are not the reason for the negative parts of life; they are merely the conduit.

Yes, a sentient conduit can tighten or loosen its grip, divert, amplify, or temporarily block messages, but it isn’t the originator of those messages, or of the intent behind them. It can surely be extremely inviting to messages of hate and division, maybe because of business models, maybe because of engineering decisions, or maybe simply because growth and scale were never handled properly. But that hate and division are endemic to human nature, and to assume that platforms can do what institutions have persistently failed to do, namely eradicate them entirely, is nonsensical.

Regulation

It is clear that platforms, having reached the size and ubiquity that they have, require updated and smart regulation in order to properly balance their benefits and risks. But the push (and counter-push) to regulate has to start from a perspective that understands both fundamental relationships: platforms are to human nature what section 230 (or any other national-level intermediary liability law) is to the First Amendment (or any national-level text that inscribes the social consensus on free speech).

If your issue is with hate and hate speech, the main things you have to contend with are human nature and the First Amendment, not just the platforms and section 230. Without a doubt, both the platforms and section 230 are choices and explicit constructions built on top of the other two, and are not necessarily the only or best form of what they could be.

But a lot of the issues that bubble up within the content moderation and intermediary liability space come from a concern over the boundaries. That concern is entirely related to the broader contexts rather than the platforms or the specific legislation.

Regulating platforms has to start from the understanding that tradeoffs, most of which are cultural in nature, are inevitable. To be clear: there is no way to completely stop evil from happening on these platforms without making them useless.

Conversely, if we were simply to ignore hate speech, we’d eliminate convenience and in some instances invalidate the very existence of these platforms. That would not be an issue if these platforms were still seen as simple conveyors of convenience, but they are currently seen as much more than that.

Tech executives and CEOs have moved into a fascinating space wherein they have to, all at the same time, protect their market power to assuage shareholders, tout their products as mind-meltingly amazing to gain and keep users, and imply that their role in society is transient and insignificant in order to mollify policy-makers.

The convenience afforded by these technologies is allowing nefarious actors to cause substantial harm to a substantial number of people. Some users get death threats, or even have their lives end tragically, because of interactions on these platforms. Others have their most private information or documents exposed, or experience sexual abuse or trauma in a variety of ways.

Unfortunately, these things happen in the offline world as well, and they are fundamentally predicated on the regulatory/institutional context and the tools that allow them to manifest. The tools are not off the hook. Their failure to minimize harm, online and off, is due for important conversations. But they are not the cause. They are the conduit.

Thus, the ultimate goal of “platforms existing without hate or violence” is, very sadly, not realistic. Neither are the alternative tradeoffs acceptable: being ok with stripping fundamental rights in exchange for a safer environment, or being ok with some people suffering immense trauma and pain simply because one believes in the concept of open speech.

Maybe the solution is to not have these platforms at all, or to ask them to change substantially. Or maybe it’s to calibrate our expectations, or, maybe yet, to address the underlying issues in our society. Once we see what the boundaries truly are, any debate becomes infinitely more productive.

This article is not advancing any new or groundbreaking ideas. What it does is identify crucial and seemingly misunderstood pieces of the subtext and spell them out. Sadly, the fact that these more or less evident issues needed to be said in plain text should be the biggest take-away.

As a qualitative researcher, I learned that there is no way to “de-bias” my work. Trying to remove myself from the equation results in a bland “view from nowhere” that is ignorant of the underlying power dynamics and inherent mechanisms of whatever I am studying. However, that doesn’t mean we should take off our glasses, for fear that they influence what we see, because that would actually make us blind. We remedy the problem by acknowledging the glasses as well.

A communication platform (company, tech, product) that doesn’t have inherent biases is impossible. But that shouldn’t mean we can’t ask it to be better, whether through regulation, collaboration or hostile action. We just have to be cognizant of where we’re standing when we ask, of the context, of the potential consequences and, as this piece hopefully shows, of what it can’t actually do.

The conversation surrounding platform governance would benefit immensely from these tradeoffs being made explicit. It would certainly dial down the rhetoric and the (genuine) visceral attitudes towards the debate, as it would force those directly involved or invested in one outcome to carefully assess the context and the broader tradeoffs.

David Morar, PhD, is an academic with the mind of a practitioner and currently a Fellow at the Digital Interests Lab and a Visiting Scholar at GWU’s Elliott School of International Affairs.


Posted on Techdirt - 2 June 2020 @ 10:44am

Facebook's Oversight Board Can't Intervene, So Stop Asking

from the find-the-money-card dept

As Facebook employees stage a digital walk-out and make their thoughts known about the social media giant’s choice not to intervene in any way on “political posts”, especially those of President Donald Trump, some have called for the newly-created Oversight Board to step up and force a change in Facebook. While the official answer is that the Board can’t start yet (because supposedly laptops haven’t been handed out), the real and very simple reason the Facebook Oversight Board won’t get involved is that it can’t. It’s not created to function that way, it’s not staffed for something like this, and ultimately, due to its relationship with Facebook, anything it said on this matter right now would be taken in an advisory capacity at best. Facebook, understandably not wanting to actually give any of its power away, played confidence games with the idea of external, independent oversight, and it’s clear that it fooled a lot of people. Let me explain.

In three-card monte, the huckster keeps shuffling three playing cards until the victim is likely to guess wrong about where the “money card” is hiding, then flops the cards over one by one. For Facebook’s prestidigitation on content moderation, last month’s announcement of the initial 20 highly-regarded experts tapped as members of its independent oversight board is the second card flop, and predictably, the money card is not there.

The ongoing sleight of hand performed by Facebook is subtle but fundamental. The board was set up as truly independent, in every way, from member selection to case selection to the board’s internal governance. In terms of its scope and structure, it is guided by previously-released bylaws to primarily handle a small set of content removal cases (which come up to the board after exhausting the regular appeals process), and to direct Facebook to change its decisions in those cases. To a much lesser extent, the Board can, although time and resources are not allocated for this, provide input or recommendations about Facebook’s content moderation policies; however, Facebook is not obligated in any way to follow those policy recommendations, only to respond within 30 days and describe any action it may take.

In the pages of the San Francisco Chronicle’s Open Forum, and elsewhere, I and others have called attention to this empty action as far back as September 2019, at the first card flop: the public release of the Board’s charter and bylaws. The project continued unabated and unchanged as friendly experts extolled the hard work of the team and preached optimism. Glaring concerns over the Board’s advisory-at-best, non-binding overall power were not only left unaddressed but actually dismissed, with the assurance that board member selection, last month’s flop, would be where the money card is. Can you spot the inconsistency? It doesn’t matter if you have the smartest independent advisors if you’re not giving them the opportunity to actually impact what you do. Of course, the money card wasn’t there.

In early May, the Menlo Park-based company released the list of its Oversight Board membership, with impressive names (former heads of state, Nobel Prize laureates and subject matter experts from around the world). Because the Board is truly independent, Facebook’s role was minimal: beyond coming up with the structure and bylaws in consultation with experts from around the world (full disclosure: the author was involved in one round of consultations in mid-2019), it only directly chose the 4 co-chairs, who then were heavily involved in choosing the other 16 members. A lot of chatter around this announcement focused, predictably, on who the people are, whether the board is diverse, whether it is experienced enough, and so on, while some have even focused on how independent the board truly is. As the current crisis is showing, none of that matters.

As we witness the Board’s institutionalized, structural and political inability to perform oversight, it is becoming entirely clear that Facebook is not, at all, committed to fixing its content moderation problems in any meaningful way, and that political favor is more important than consistently applied policies. There is no best-case scenario anymore, as the Board can only fail or infect the rest of the industry. And what is a lose-lose for all of us will likely still be a win-win for Facebook.

The bad case scenario is the likeliest: the Board is destined to fail. While Zuckerberg’s original ideas of transparency and openness were great on paper, the Board quickly turned into just a potential shield against loud government voices (such as Big Tech antagonist Sen. Hawley). Not only is that not working (Sen. Hawley responded to the membership list with even harsher rhetoric), but the importance placed on the optics versus the reality of solving this problem is even more obvious now. Giving the Board few, if any, real leverage mechanisms over the company can at most build a shiny Potemkin village, not an oversight body. If we dispense with all the readily-available evidence to the contrary and give Facebook the benefit of the doubt that it tried, the alternative explanations for this rickety and impotent construction are not much better. It may be that having the final say over difficult cases, the Board’s main job, is not something Facebook was comfortable doing by itself anyway (and who can blame them, given the pushback the platform gets with any high-profile decision). Or it may be a bizarre allegiance to the flawed constitutional-law perspective that Facebook can build itself a Supreme Court, which makes the Board act as an appellate court of sorts, with a vague potential for creating precedent rather than truly providing oversight.

If the Board’s failure doesn’t tarnish the prospect of a legitimate private governance model for content moderation, there’s a lot to learn about how to avoid unforced errors. First, we can safely say that while corporations may be people, they are definitely not states. Creating a pseudo-judiciary without any of the accouterments of a liberal-democratic state, such as a hard-to-change constitution, co-equal branches and some sort of social contract, is a recipe for disaster.

Second is a fact that theory, literature and practice have long argued: structure fundamentally dictates how this type of private governance institution will run. And with an impotent Board left to mostly bloviate after the fact, without any real means to change the policies themselves, this structure clearly points to a powerless but potentially loud “oversight” mechanism, pushed to the front as a PR stunt but unequipped to deal with the real problems of the platform.

Finally, we see that even under intense pressure from numerous and transpartisan groups, and with a potential openness to fixing a wicked problem, platforms are very unwilling to actually give up, even partly, their role and control in moderating content, but will gladly externalize their worst headaches. If their worst headaches were aligned with the concerns of their users, that would be great, but creating “case law” for content moderation is an exercise in futility, as the company struggles to reverse-engineer Trump-friendly positions with its long-standing processes. We don’t have lower court judges who get to dutifully decide whether something is inscribed in the board’s previous actions. We have either overworked, underpaid and scarred people making snap decisions every minute, or irony- and nuance-illiterate algorithms poised to interpret these decisions mechanically. And more to the point, we have executives deciding to provide political cover to powerful players rather than enforce their own policies, knowing full well they’re not beholden to any oversight, since even if the Board were already up and running, by the time it ruled on this particular case, if ever, the situation would no longer be of national importance.

As always, there still is a solution. The Oversight Board may be beyond salvaging, but the idea of a private governance institution, where members of the public, civil society, industry and even government officials can come together and try to reach common ground on what the issues are and what the solutions might be, should still flourish, and should not be thrown away simply because Facebook’s initial attempt was highly flawed. Through continued vigilance and genuine, honest critique of its structure and real role in the Facebook ecosystem, the Oversight Board can, at best, register as just one experiment of many, not a defining one, and we can soldier on with more diverse, inclusive, transparent, and flexible industry-wide dialogues and initiatives.

The worst case scenario is if the Board magically coasts through without any strong challenge to its shaky legitimacy or its impotent role. The potential for this to happen is there, since there are more important things in the world to worry about than whether Facebook’s independent advisory body has any teeth. In that case Facebook intends to, one way or another, franchise it to the rest of the industry. And that would be the third, and final, flop. However, as I hope you’ve figured out by now, the money card wouldn’t be there either. The money card, the card that Facebook never actually intended to give away or even show us, the power over content moderation policies, was never embedded in the structure of the board, its membership or any potential industry copycats that could legitimize it. This unexpected event allowed us to take a peek at the cards: the money card is still where it was all along, in Facebook’s back pocket.

David Morar is an Associate Researcher at the Big Data Science Lab at the West University of Timisoara, Romania.


