Facebook: Amplifying The Good Or The Bad? It's Getting Ugly
from the the-mirror dept
When the New York Times reported Facebook’s plan to improve its reputation, the fact that the initiative was called “Project Amplify” wasn’t a surprise. “Amplification” is at the core of the Facebook brand, and “amplify the good” is a central concept in its PR playbook.
Amplify the good
Mark Zuckerberg introduced this talking point in 2018. “I think that we have a clear responsibility to make sure that the good is amplified and to do everything we can to mitigate the bad,” he said after the Russian election meddling and the killings in Myanmar.
Other Facebook executives then adopted this notion, regardless of the issue at hand. The best example is Adam Mosseri, the Head of Instagram.
In July 2019, addressing online bullying, Mosseri said: “Technology isn’t inherently good or bad in the first place …. And social media, as a type of technology, is often an amplifier. It’s on us to make sure we’re amplifying the good and not amplifying the bad.”
In January 2021, after the January 6 Capitol attack, Mosseri said: “Social media isn’t good or bad, like any technology, it just is. But social media is specifically a great amplifier. It can amplify good and bad. It’s our responsibility to make sure that we amplify more good and less bad.”
In September 2021, after a week of exposés about Facebook by the WSJ, The Facebook Files, Mosseri was assigned to defend the company once again. “When you connect people, whether it’s online or offline, good things can happen and bad things can happen,” he said in his opening statement. “I think that what is important is that the industry as a whole tries to understand both those positive and negative outcomes, and do all they can to magnify the positive and to identify and address the negative outcomes.”
Mosseri clearly uses the same messaging document, but Facebook’s PR template contains more talking points. Facebook also asserts that there have always been bad people and behaviors, and that today’s connectivity simply makes them more visible.
A mirror for the ugly
According to the “visibility” narrative, tech platforms simply reflect the beauty and ugliness in the world. Thus, social media is sometimes a cesspool because humanity is sometimes a cesspool.
Mark Zuckerberg has addressed this issue several times, with the main message that it is just human nature. Nick Clegg, VP of Global Affairs and Communications, has repeatedly shared the same mindset. “When society is divided and tensions run high, those divisions play out on social media. Platforms like Facebook hold up a mirror to society,” he wrote in 2020. “With more than 3 billion people using Facebook’s apps every month, everything that is good, bad and ugly in our societies will find expression on our platform.”
“Social media broadly, and messaging apps and technology, are a reflection of humanity,” Adam Mosseri repeated. “We communicated offline, and all of a sudden, now we’re also communicating online. Because we’re communicating online, we can see some of the ugly things we missed before. Some of the great and wonderful things, too.”
This “mirror of society” narrative has been criticized as intentionally simplistic, because the ability to shape, not merely reflect, people’s preferences and behavior is also how Facebook makes money. Therefore, despite Facebook’s recurring statements, it stands accused of not reflecting the bad and ugly but amplifying them.
Amplify the bad
“These platforms aren’t simply pointing out the existence of these dark corners of humanity,” John Paczkowski of BuzzFeed News told me. “They are amplifying them and broadcasting them. That is different.”
After an accumulation of deadly events, such as the Christchurch shooting, Kara Swisher wrote about amplified hate and “murderous intent that leaps off the screen and into real life.” She argued that “While this kind of hate has indeed littered the annals of human history since its beginnings, technology has amplified it in a way that has been truly destructive.”
Critics believe that bad behavior (e.g., disinformation) is induced by the way tech platforms are designed to maximize engagement. Facebook’s victim-centric approach thus refuses to acknowledge that perhaps bad actors don’t misuse its platform but rather use it as intended, as a “machine for virality.”
Ev Williams, the co-founder of Blogger, Twitter, and Medium, now believes he failed to appreciate the risks of putting such powerful tools in users’ hands with minimal oversight. “One of the things we’ve seen in the past few years is that technology doesn’t just accelerate and amplify human behavior,” he wrote. “It creates feedback loops that can fundamentally change the nature of how people interact and societies move (in ways that probably none of us predicted).”
So, things turned toxic in ways that tech founders didn’t predict. Should they have foreseen them? According to Mark Zuckerberg, an era of tech optimism led to unintended consequences. “For the first decade, we really focused on all the good that connecting people brings … But it’s clear now that we didn’t do enough,” he said after the Cambridge Analytica scandal. He admitted they didn’t think through “how people could use these tools to do harm as well.” Several years after the Techlash coverage began, there’s a consensus that they needed to “do more” to purposefully deny bad actors the ability to abuse their tools.
One of the reasons this was (and still is) a challenging task is scale. According to this theme, growth-at-all-costs “blinded” them, and they grew too big to be successfully managed at all. Because of their bigness, they are locked in a constant game of cat-and-mouse with bad actors. “When you have hundreds of millions of users, it is impossible to keep track of all the ways they are using and abusing your systems,” Casey Newton, of the Platformer newsletter, explained in an interview. “They are always playing catch-up with their own messes.”
Due to the unprecedented scale at which Facebook operates, it is dependent on algorithms. It then claims that any perceived errors result from “algorithms that need tweaking” or “artificial intelligence that needs more training data.” But is it just an automation issue? It depends on who you ask.
The algorithms’ fault vs. the people who build them or use them
Critics say that machines are only as good as the rules built into them. “Google, Twitter, and Facebook have all regularly shifted the blame to algorithms, but companies write the algorithms, making them responsible for what they churn out.”
But platforms tend to avoid this responsibility. When ProPublica revealed that Facebook’s algorithms allowed advertisers to target users interested in “How to burn Jews” or “History of why Jews ruin the world,” Facebook’s response was: The anti-Semitic categories were created by an algorithm rather than by people.
At the same time, Facebook’s Nick Clegg argued that human agency should not be removed from the equation. In a post titled “You and the Algorithm: It Takes Two to Tango,” he criticized the dystopian depictions of their algorithms, in which “people are portrayed as powerless victims, robbed of their free will,” as if “humans have become the playthings of manipulative algorithmic systems.”
“Consider, for example, the presence of bad and polarizing content on private messaging apps - iMessage, Signal, Telegram, WhatsApp - used by billions of people around the world. None of those apps deploy content or ranking algorithms. It’s just humans talking to humans without any machine getting in the way,” Clegg wrote. “In many respects, it would be easier to blame everything on algorithms, but there are deeper and more complex societal forces at play. We need to look at ourselves in the mirror and not wrap ourselves in the false comfort that we have simply been manipulated by machines all along.”
Fixing the machine vs. the underlying societal problems
Nonetheless, there are various attempts to fix the “broken machine,” and some potential fixes are discussed more often than others. One of the loudest calls is for tougher regulation: legislation should be passed to implement reforms. Yet many remain pessimistic about the prospects for policy rules and oversight, because regulators tend not to keep pace with tech developments. There is also no silver-bullet solution, and most of the recent proposals are overly simplistic.
“Fixing Silicon Valley’s problems requires a scalpel, not an axe,” said Dylan Byers. However, tech platforms now face a new ecosystem of opposition, including Democrats and Republicans, antitrust theorists, privacy advocates, and European regulators. They all carry axes.
For instance, there are many new proposals to amend Section 230 of the Communications Decency Act. But, as Casey Newton noted, “it won’t fix our politics, or our broken media, or our online discourse, and it’s disingenuous for politicians to suggest that it would.”
When self-regulation is proposed, there is an inherent commercial conflict, since platforms are in the business of making money for their shareholders. Facebook has only acted after problems escalated and caused real damage. For example, only after the mob violence in India (another problem that existed before WhatsApp, and may have been amplified by the app) did the company institute rules to limit WhatsApp’s “virality.” Other algorithms have been altered to stop conspiracy theories and their groups from being highly recommended.
Restoring more human control requires different remedies: from decentralization projects, which seek to shift the ownership of personal data away from Big Tech and back toward users, to media literacy, which seeks to formally educate people of all ages about how tech systems function, as well as to encourage appropriate, healthy uses.
The proposed solutions could certainly be helpful, and they should all be pursued. Unfortunately, they are unlikely to be adequate. We will probably have an easier time fixing algorithms, or the design of our technology, than fixing society, and humanity has to deal with humanity’s problems.
Techdirt’s Mike Masnick recently addressed the underlying societal problems that need fixing. “What we see - what Facebook and other social media have exposed - is often the consequences of huge societal failings.” He mentioned various problems with education, social safety nets, healthcare (especially mental healthcare), income inequality, and corruption. Masnick concluded we should be trying to come up with better solutions for those issues rather than “insisting that Facebook can make it all go away if only they had a better algorithm or better employees.”
We saw that with COVID-19 disinformation. After President Joe Biden blamed Facebook for “killing people,” and Facebook responded by saying they are “helping save lives,” I argued that this dichotomous debate sucks. Charlie Warzel called it (in his Galaxy Brain newsletter) “an unproductive, false binary of a conversation,” and he is absolutely right. Complex issues deserve far more nuance.
I can’t think of a more complex issue than tech platforms’ impact on society, in general, and Facebook’s impact in particular. However, we seem to be stuck between the storylines discussed above, of “amplifying the good vs. the bad.” It is as if you can only think favorably or negatively about “the machine,” and you must pick a side and adhere to its intensified narrative.
Keeping to a single narrative can escalate rhetoric and produce an insufficient discussion, as evidenced by a recent Mother Jones article. The piece, “Why Facebook won’t stop pushing propaganda,” describes how a woman tried to become Montevallo’s first Black mayor and lost. Montevallo is a very small town in Alabama (7,000 people), whose population is two-thirds white. Her loss was blamed on Facebook: the rampant misinformation and rumors about her affected the voting.
While we can’t know what got people to vote one way or another, we should consider that racism has been prevalent in places like Alabama for a long time. Facebook was the candidate’s primary campaign tool, highlighting the good things about her historic nomination. Then, racism was amplified in Facebook’s local groups. In the article, the fault was centered on algorithmic amplification, on Facebook’s “amplification of the bad.” Facebook’s argument that it only “reflects the ugly” does not hold up here if the platform also makes the ugliness stronger. Yet the root cause in this case remains the same: racism. Facebook “doing better” and amending its algorithms will not be enough unless we also address the source of the problem. We can and should “do better” as well.
Dr. Nirit Weiss-Blatt is the author of The Techlash and Tech Crisis Communication.
Filed Under: algorithms, amplification, mark zuckerberg, society
Companies: facebook
Reader Comments
The problem with all these “social media companies should do better” calls is that they appoint the companies as arbiters of human thoughts and ideas.
Friday deep thoughts
All new cars are ugly, except for the ones that you can't afford.
-also-
Multiple choice tests are the easiest because the answers are already provided
Too Long and a bit of Choir Preaching
Dear Techdirt:
I think most of Techdirt's regulars are aware of 80% of the above; indeed, tools such as social media are powerful, amoral things, and that also goes for corporations and advertising. I'd have liked this article better if it were shorter, but also if it either cataloged some of the experiments to make things more pro-social or went into detail on some successes or failures.
For the author: I've seen the phrase "saying the quiet part out loud" used quite a bit lately. What was it, arising I assume around 1965/1970, that made that quiet part (anti-social attitudes) need to be quiet, and that has since disappeared?
The best way for Facebook to "do better" would be to stop trying to manipulate society. It's just a communication platform, not Big Brother. Or at least it's not supposed to be.
People talking about things other people don't like are not Facebook's messes. Facebook's bumbling attempts to enforce its own fickle, narrow-minded view of "good" are.
Especially when contrasted with its business need of driving more "engagement," which in practice means promoting the biggest and most heated arguments.
Re: what Facebook is...
Facebook is a machine designed to generate profits for its company. It does that by selling a product (users) to the customer (advertisers). There is no "reforming" that process because Facebook is a machine functioning as designed. It has nothing to do with social good or free speech. It's an ad-delivery system. If you don't like that, then just leave it and don't look back. If enough of the products leave, then Facebook will have nothing to sell and the problem is solved.
Facebook does little more than reflect the ones who use it. Media is only as social--or antisocial--as the ones promoting it. Of course, people still blame the Post Office for delivering junk mail.
Section 230 says don't blame the platform.
Right wing dream
I have to wonder, could "amplifying" speech be considered speech? Would that lose them 230 protection?
Re: Right wing dream
No.
Re: Re: Right wing dream
So, if it's not speech, Congress can make a law...
Re: Re: Re: Right wing dream
"No" to "would that lose them 230 protection?". The whole point of 230 is to protect platforms for their moderation decisions, which includes amplifying/promoting third party speech. A law punishing social media platforms for how they promote (or don't promote) the speech of their users would definitely violate the first amendment.
I agree with you that it is not an either-or situation.
Humans can be shitty and platforms can make them shittier.
What I don’t share is your optimism that all can get better.
Re:
Well, what can I say, I am an optimist.
Re:
The news reports the bad and largely ignores the good; it is therefore not a reliable measure of social media. Politicians react to the bad, which is why we cannot have nice things: the good is ignored and killed as collateral damage.
Re: Re:
That's not quite the bias of news reports -- news, like anyone else, is typically looking to grab eyeballs, and so the rare, bizarre events get reported on. Even Techdirt is trying to do that, and we like the long game being played.
Quick, go find me a news report on people committing suicide with guns...or dying of alcohol or tobacco-related conditions, or even of car wrecks. These things are rarely reported, but quite common.
It’s clear by now that although Facebook tries to amplify the good, it ends up amplifying the bad. We stopped listening to their statements.
Social Media
Social media is gossip. It is mostly relevant to the relatively illiterate and those who feel somehow disenfranchised. While better education would not solve all the problems, it would be a giant step in the right direction.
It would also result in a significant reduction in the power of social media - the more you know, the less you rely on gossip.
Re: Social Media
According to those who would destroy it. It is also a tool for self-help groups, special interest groups, teaching, oh, and organizing and coordinating efforts to deal with disasters, as for example the clean-up efforts after Hurricane Sandy.
Facebook is working as designed
Facebook is a machine working as designed. You can't reform a functioning machine but you can pull the plug. The government can't and shouldn't pull the plug. The users - aka "products" to Facebook, being sold to the true customers, the advertisers - can pull the plug by leaving.
So, if you enjoy being treated like a product, by all means continue to use Facebook. But if you develop a bit more self-respect, go ahead and leave. I did, years ago. I'm doing fine. What is everyone scared of?