from the the-mirror dept
When the New York Times reported Facebook’s plan to improve its reputation, the fact that the initiative was called “Project Amplify” wasn’t a surprise. “Amplification” is at the core of the Facebook brand, and “amplify the good” is a central concept in its PR playbook.
Amplify the good
Mark Zuckerberg initiated this talking point in 2018. “I think that we have a clear responsibility to make sure that the good is amplified and to do everything we can to mitigate the bad,” he said after the Russian election meddling and the killings in Myanmar.
Then, other Facebook executives adopted this notion regardless of the issue at hand. The best example is Adam Mosseri, Head of Instagram.
In July 2019, addressing online bullying, Mosseri said: “Technology isn’t inherently good or bad in the first place …. And social media, as a type of technology, is often an amplifier. It’s on us to make sure we’re amplifying the good and not amplifying the bad.”
In January 2021, after the January 6 Capitol attack, Mosseri said: “Social media isn’t good or bad, like any technology, it just is. But social media is specifically a great amplifier. It can amplify good and bad. It’s our responsibility to make sure that we amplify more good and less bad.”
In September 2021, after a week of exposés about Facebook in the WSJ’s “The Facebook Files” series, Mosseri was assigned to defend the company once again. “When you connect people, whether it’s online or offline, good things can happen and bad things can happen,” he said in his opening statement. “I think that what is important is that the industry as a whole tries to understand both those positive and negative outcomes, and do all they can to magnify the positive and to identify and address the negative outcomes.”
Mosseri clearly uses the same messaging document, but Facebook’s PR template contains more talking points. Facebook also asserts that there have always been bad people and bad behaviors, and that today’s connectivity simply makes them more visible.
A mirror for the ugly
According to the “visibility” narrative, tech platforms simply reflect the beauty and ugliness in the world. Thus, social media is sometimes a cesspool because humanity is sometimes a cesspool.
Mark Zuckerberg has addressed this issue several times, with the main message that it is simply human nature. Nick Clegg, VP of Global Affairs and Communications, has repeatedly shared the same mindset. “When society is divided and tensions run high, those divisions play out on social media. Platforms like Facebook hold up a mirror to society,” he wrote in 2020. “With more than 3 billion people using Facebook’s apps every month, everything that is good, bad and ugly in our societies will find expression on our platform.”
“Social media broadly, and messaging apps and technology, are a reflection of humanity,” Adam Mosseri repeated. “We communicated offline, and all of a sudden, now we’re also communicating online. Because we’re communicating online, we can see some of the ugly things we missed before. Some of the great and wonderful things, too.”
This “mirror of society” framing has been criticized as deliberately simplistic, because the ability to shape, not merely reflect, people’s preferences and behavior is also how Facebook makes money. Therefore, despite Facebook’s recurring statements, it stands accused of not merely reflecting the bad and ugly but increasing it.
Amplify the bad
“These platforms aren’t simply pointing out the existence of these dark corners of humanity,” John Paczkowski of BuzzFeed News told me. “They are amplifying them and broadcasting them. That is different.”
After an accumulation of deadly events, such as the Christchurch shooting, Kara Swisher wrote about amplified hate and “murderous intent that leaps off the screen and into real life.” She argued that “While this kind of hate has indeed littered the annals of human history since its beginnings, technology has amplified it in a way that has been truly destructive.”
Critics believe that bad behavior (e.g., disinformation) is induced by the way tech platforms are designed to maximize engagement. Thus, Facebook’s victim-centric approach refuses to acknowledge that perhaps bad actors don’t misuse its platform but rather use it as intended, as a “machine for virality.”
Ev Williams, the co-founder of Blogger, Twitter, and Medium, says he now believes he failed to appreciate the risks of putting such powerful tools in users’ hands with minimal oversight. “One of the things we’ve seen in the past few years is that technology doesn’t just accelerate and amplify human behavior,” he wrote. “It creates feedback loops that can fundamentally change the nature of how people interact and societies move (in ways that probably none of us predicted).”
So, things turned toxic in ways that tech founders didn’t predict. Should they have foreseen them? According to Mark Zuckerberg, an era of tech optimism led to unintended consequences. “For the first decade, we really focused on all the good that connecting people brings … But it’s clear now that we didn’t do enough,” he said after the Cambridge Analytica scandal. He admitted they didn’t think through “how people could use these tools to do harm as well.” Several years after the Techlash coverage began, there’s a consensus that these companies needed to “do more” to purposefully deny bad actors the ability to abuse their tools.
One of the reasons this was (and still is) a challenging task is scale. According to this theme, growth-at-all-costs “blinded” them, and they grew too big to be managed successfully at all. Because of their bigness, they are always in a game of cat-and-mouse with bad actors. “When you have hundreds of millions of users, it is impossible to keep track of all the ways they are using and abusing your systems,” Casey Newton, of the Platformer newsletter, explained in an interview. “They are always playing catch-up with their own messes.”
Due to the unprecedented scale at which Facebook operates, it depends on algorithms. It then claims that any perceived errors result from “algorithms that need tweaking” or “artificial intelligence that needs more training data.” But is it just an automation issue? It depends on who you ask.
The algorithms’ fault vs. the people who build them or use them
Critics say that machines are only as good as the rules built into them. “Google, Twitter, and Facebook have all regularly shifted the blame to algorithms, but companies write the algorithms, making them responsible for what they churn out.”
But platforms tend to avoid this responsibility. When ProPublica revealed that Facebook’s algorithms allowed advertisers to target users interested in “How to burn Jews” or “History of why Jews ruin the world,” Facebook’s response was: The anti-Semitic categories were created by an algorithm rather than by people.
At the same time, Facebook’s Nick Clegg argued that human agency should not be removed from the equation. In a post titled “You and the Algorithm: It Takes Two to Tango,” he criticized the dystopian depictions of their algorithms, in which “people are portrayed as powerless victims, robbed of their free will,” as if “humans have become the playthings of manipulative algorithmic systems.”
“Consider, for example, the presence of bad and polarizing content on private messaging apps - iMessage, Signal, Telegram, WhatsApp - used by billions of people around the world. None of those apps deploy content or ranking algorithms. It’s just humans talking to humans without any machine getting in the way,” Clegg wrote. “In many respects, it would be easier to blame everything on algorithms, but there are deeper and more complex societal forces at play. We need to look at ourselves in the mirror and not wrap ourselves in the false comfort that we have simply been manipulated by machines all along.”
Fixing the machine vs. the underlying societal problems
Nonetheless, there are various attempts to fix the “broken machine,” and some potential fixes are discussed more often than others. One of the loudest calls is for tougher regulation – legislation should be passed to implement reforms. Yet, many remain pessimistic about the prospects for policy rules and oversight because regulators tend not to keep pace with tech developments. Also, there’s no silver-bullet solution, and most of the recent proposals are overly simplistic.
“Fixing Silicon Valley’s problems requires a scalpel, not an axe,” said Dylan Byers. However, tech platforms are faced with a new ecosystem of opposition, including Democrats and Republicans, antitrust theorists, privacy advocates, and European regulators. They all carry axes.
For instance, there are many new proposals to amend Section 230 of the Communications Decency Act. But, as Casey Newton noted, “it won’t fix our politics, or our broken media, or our online discourse, and it’s disingenuous for politicians to suggest that it would.”
When self-regulation is proposed, there is an inherent commercial conflict, since platforms are in the business of making money for their shareholders. Facebook has only acted after problems escalated and caused real damage. For example, only after the mob violence in India (another problem that existed before WhatsApp, and may have been amplified by the app) did the company institute rules to limit WhatsApp’s “virality.” Other algorithms have been altered to stop conspiracy theories and their groups from being highly recommended.
Restoring more human control requires different remedies: from decentralization projects, which seek to shift the ownership of personal data away from Big Tech and back toward users, to media literacy, which seeks to formally educate people of all ages about the way tech systems function, as well as to encourage appropriate, healthy uses.
The proposed solutions could certainly be helpful, and they all should be pursued. Unfortunately, they are unlikely to be adequate. We will probably have an easier time fixing algorithms, or the design of our technology, than we will have fixing society, and humanity has to deal with humanity’s problems.
Techdirt’s Mike Masnick recently addressed the underlying societal problems that need fixing. “What we see - what Facebook and other social media have exposed – is often the consequences of huge societal failings.” He mentioned various problems with education, social safety nets, healthcare (especially mental healthcare), income inequality and corruption. Masnick concluded we should be trying to come up with better solutions for those issues rather than “insisting that Facebook can make it all go away if only they had a better algorithm or better employees.”
We saw that with COVID-19 disinformation. After President Joe Biden blamed Facebook for “killing people,” and Facebook responded by saying it is “helping save lives,” I argued that this dichotomous debate sucks. Charlie Warzel called it (in his Galaxy Brain newsletter) “an unproductive, false binary of a conversation,” and he is absolutely right. Complex issues deserve far more nuance.
I can’t think of a more complex issue than tech platforms’ impact on society, in general, and Facebook’s impact in particular. However, we seem to be stuck between the storylines discussed above, of “amplifying the good vs. the bad.” It is as if you can only think favorably or negatively about “the machine,” and you must pick a side and adhere to its intensified narrative.
Keeping to a single narrative can escalate rhetoric and impoverish the discussion, as evidenced by a recent Mother Jones article. The piece, “Why Facebook won’t stop pushing propaganda,” describes how a woman tried to become Montevallo’s first black mayor and lost. Montevallo is a very small town in Alabama (7,000 people) whose population is two-thirds white. Her loss was blamed on Facebook: the rampant misinformation and rumors about her affected the voting.
While we can’t know what got people to vote one way or another, we should consider that racism has been prevalent in places like Alabama for a long time. Facebook was the candidate’s primary campaign tool, highlighting the good things about her historic nomination. Then, racism was amplified in Facebook’s local groups. The article centered the fault on algorithmic amplification, on Facebook’s “amplification of the bad.” Facebook’s argument that it only “reflects the ugly” does not hold up here if the platform makes that ugliness stronger. Yet, the root cause in this case remains the same: racism. Facebook “doing better” and amending its algorithms will not be enough unless we also address the source of the problem. We can and should “do better” as well.
Dr. Nirit Weiss-Blatt is the author of The Techlash and Tech Crisis Communication
Filed Under: algorithms, amplification, mark zuckerberg, society
Companies: facebook