from the the-list-is-growing dept
Late last year I published Part 1 of a project to map out all the complaints we hear about social media in particular and about internet companies generally. Now here's Part 2.
This Part should have come earlier; Part 1 was published in November. I'd hubristically imagined this was a project that might take a week or a month. But I didn't take into account the speed with which the landscape of criticism is changing. For example, just as you're trying to do more research into whether Google really is making us dumber, another pundit (Farhad Manjoo at the New York Times) comes along and argues that Apple -- a tech giant no less driven by commercial motives than Google and its parent company, Alphabet -- ought to redesign its products to make us smarter (by making them less addictive). That is, it's Apple's job to save us from Gmail, Facebook, Twitter, Instagram, and the other attention-demanding internet media we connect to through Apple's products, as well as many others.
In these same few weeks, Facebook has announced it's retooling its user experience in ways aimed at making that experience more personal and interactive and less passive. Is this an implicit admission that Facebook, up until now, has been bad for us? If so, is it responding to the charges that many observers have leveled at social-media companies -- that they're bad for us and bad for democracy?
And only this last week, social-media companies responded in Senate testimony to concerns about political extremists (foreign and domestic). Although the senators had broad concerns (ISIS recruitment, bomb-making information on YouTube), some time was allocated, of course, to the ever-present question of Russian "misinformation campaigns," which may not have altered the outcome of the 2016 elections but may still aim to affect the 2018 midterms and beyond.
These are recent developments, but coloring them all is a more generalized social anxiety about social media and big internet companies that is nowhere better summarized than in Senator Al Franken's last major public-policy address. Whatever you think of Senator Franken's tenure, I think his speech was a useful distillation of the growing sentiment among commentators that something about social media and internet companies is out of control and needs to be brought back under control.
Now, let's be clear: even if I'm skeptical here about some claims that social media and internet giants are bad for us, that doesn't mean these criticisms lack all merit. But it's always worth remembering that, historically, every new mass medium (and mass-medium platform) has been declared first to be wonderful for us, and then to be terrible for us. So it's always important to ask whether any particular claim about the harms of social media or internet companies is reactive and reflexive... or whether it's grounded in hard facts.
Here are reasons 4, 5, and 6 to believe social media are bad for us. (Remember, reasons 1, 2, and 3 are here.)
(4) Social media (and maybe some other internet services) are bad for us because they're super-addictive, especially on our sweet, slick handheld devices.
"It's Time for Apple to Build a Less Addictive iPhone," according to New York Times tech columnist Farhad Manjoo, who published a column to that effect recently. To be sure, although "Addictive" is in the headline, Manjoo is careful to say upfront that, although iPhone use may leave you feeling "enslaved," it's not "not Apple's fault" and it "isn't the same as [the addictiveness] of drugs or alcohol." Manjoo's column was inspired by an open letter from an ad-hoc advocacy group that included an investment-management firm and the California State Teachers Retirement System (both of which are Apple shareholders). The letter, available here at ThinkDifferentlyAboutKids.com (behind an irritating agree-to-these-terms dialog) calls for Apple to add more parental-control choices for its iPhones (and other internet-connected devices, one infers). After consulting with experts, the letter's signatories argue, "we note that Apple's current limited set of parental controls in fact dictate a more binary, all or nothing approach, with parental options limited largely to shutting down or allowing full access to various tools and functions." Per the letter's authors: "we have reviewed the evidence and we believe there is a clear need for Apple to offer parents more choices and tools to help them ensure that young consumers are using your products in an optimal manner."
Why Apple in particular? Obviously, the fact that two of the signatories own a couple of billion dollars' worth of Apple stock explains the choice to some extent. But one hard fact cuts against singling out Apple: its share of the smartphone market mostly stays in the 12-to-20-percent range. (Market leader Samsung has held 20 to 30 percent of the market since 2012.) Still, the implicit argument is that Apple's software and hardware designs for the iPhone will mostly lead the way for other phone-makers going forward, as they mostly have for the first decade of the iPhone era.
Still, why should Apple want to do this? The idea here is that Apple is primarily a hardware-and-devices company -- which distinguishes it from Google, Facebook, Amazon, and Twitter, all of which primarily deliver an internet-based service. Of course, Apple's an internet company too (iTunes, Apple TV, iCloud, and so on), but the company's not hooked on the advertising revenue streams that are the primary fuel for Google, Facebook, and Twitter, or on the sales of other, non-digital merchandise (like Amazon). That ad revenue creates what Manjoo argues are "misaligned incentives" -- because ad-driven businesses' economic interests lie in getting more users clicking on advertisements, he reasons, he's "skeptical" that (for example) Facebook is going to offer any real solution to the "addiction" problem. Ultimately, Manjoo agrees with the ThinkDifferentlyAboutKids letter: Apple's in the best position to fix iPhone "addiction" because of its design leadership and its independence from ad revenue.
Even so, Apple has other incentives to make iPhones addictive -- notably, pleasing its other investors. Still, investors may ultimately be persuaded that Apple-led fixes, rooted in our devices, will spearhead improvements in our social-media experience. (See, for example, this column: Why Investors May Be the Next to Join the Backlash Against Big Tech's Power.)
It's worth remembering that the idea that technology is addictive is itself an addictive idea -- not that long ago, it was widely (although not universally) believed that television was addictive. This New York Times story from 1990 advances that argument, although the reporter does quote a psychiatrist who cautions that "the broad definition" of addiction "is still under debate." (Manjoo's "less addictive iPhone" column inoculates itself, you'll recall, by saying iPhone addiction is "not the same.")
"Addiction" of course is an attractive metaphor, and certainly those of us who like using our electronics to stay connected can see the appeal of the metaphor. And Apple, which historically has been super-aware of the degree to which its products are attractive to minors, may conclude—or already have concluded, as the ThinkDifferentlyAboutKids folks admit — that more parental controls are a fine idea.
But is it possible that smartphones already incorporate a solution to addictiveness? Just the week before Manjoo's column, another Times writer, Nellie Bowles, asked whether we can make our phones less addictive just by playing with the settings. (The headline? "Is the Answer to Phone Addiction a Worse Phone?") Bowles argues, based on interviews with researchers, that simply setting your phone to use grayscale instead of color inclines users to respond less emotionally and impulsively -- in other words, more mindfully -- when deciding whether to respond to their phones. Bowles says she's trying the experiment herself: "I've gone gray, and it's great."
At first it seems odd to focus on the device's user interface (parental settings, or color palette) if the real source of addictiveness is internet content (social media, YouTube and other video, news updates, messages). One can imagine a Times columnist in 1962 -- in the opening years of widespread color TV -- responding to Newton Minow's famous "vast wasteland" speech by arguing that TV-set manufacturers should redesign sets so that they're somewhat more inconvenient (no remote controls, say) and less colorful to watch. (So much for NBC's iconic Peacock opening logo.)
In the interests of science, I'm experimenting with some of these solutions myself. For years now I've configured my iDevices not to bug me with every Facebook and Twitter update or new-email notice. And I was worried about trying this grayscale thing on my iPhone X -- one of the major features of which is a fantastic camera. But it turns out that, once you've set gray as the default, you can toggle easily between grayscale and color. I kind of like the novelty of all-gray -- no addiction-withdrawal syndrome yet, but we'll see how that goes.
(5) Social media are bad for us because they make us feel bad, alienating us from one another and causing us to be upset much of the time.
Manjoo says he's skeptical whether Facebook is going to fix the addictiveness of its content and its interactions with users, thanks to those "misaligned incentives." It should be said, of course, that Facebook's incentives -- to use its free services to create an audience for paying advertisers -- at least have the benefit of being straightforward. (Apple's not dependent on ads, but it still wants new products to be attractive enough for users to want to upgrade.) Still, Facebook's Mark Zuckerberg has announced that the company is redesigning Facebook's user experience (focusing first on its news feed) to emphasize quality time ("time well spent") over more "passive" consumption of the Facebook ads and video that may generate more hits for some advertisers. Zuckerberg maintains that Facebook, even as it has operated over the last decade-plus of general public access, has been good for many and maybe for most users:
"The research shows that when we use social media to connect with people we care about, it can be good for our well-being. We can feel more connected and less lonely, and that correlates with long term measures of happiness and health."
Even so, Zuckerberg writes (translating what Facebook has been hearing from some social-science researchers), "passively reading articles or watching videos -- even if they're entertaining or informative -- may not be as good." This is a gentler way of characterizing what some researchers have recently been arguing: that, for some people at least, using Facebook causes depression. This article, for example, relies on sociologist Erving Goffman's conceptions of how we distinguish between our public and private selves as we navigate social interactions. Facebook, it's argued, "collapses" our public and private presentations -- the result is what social-media researcher danah boyd calls "context collapse." A central idea here is that, because what we publish on Facebook for our circle is also to a high degree public, we are stressed by the need (or inability) to switch between versions of how we present ourselves. In addition to context collapse, the highly curated pages we see from other people on Facebook may suggest that their lives are happy in ways that ours are not.
I think both Goffman's and boyd's contributions to our understanding of the sociology of identity (both focus on how we present ourselves in context) are extremely useful, but it's important to think clearly about any links between Facebook (and other social media) and depression. To cut to the chase: there may in fact be strong correlations between social-media use and depression, at least for some people. But it's unclear whether social media actually cause depression; it seems just as likely that causation may go in the other direction. Consider that depression has also been associated with internet use generally (prior to the rise of social-media platforms), with television watching, and even, if you go back far enough, with what is perceived to be excessive consumption of novels and other fiction. Books, of course, are now regarded as redemptive diversions that may actually cure your depression.
So here's a reasonable alternative hypothesis: when you're depressed you seek diversion from depression—which may be Facebook, Twitter, or something else, like novels or binge-watching quality TV. It may be things that are genuinely good for you (books! Or The Wire!) or things that are unequivocally bad for you. (Don't try curing your depression with drinking!) Or it may be social media, which at least some users will testify they find energizing and inspiring rather than enervating and dispiriting.
As a longtime skeptic regarding studies of internet usage (a couple of decades ago I helped expose a fraudulent article about "cyberporn" usage), I don't think the research on social media and its potentially harmful side effects is any more conclusive than Facebook's institutional belief that its social-media platforms are beneficial. But I do think Facebook, as a dominant, highly profitable social-media platform, is under the gun. And, as I've written here and elsewhere, its sheer novelty may be generating a moral panic. So it's no wonder -- especially now that the U.S. Congress (as well as European regulators) is paying more attention to social media -- that we're seeing so many recent Facebook announcements aimed at showing the company's responsiveness to public criticism.
Whether or not you think anxiety about social media is merited, you may reasonably be cynical about whether a market-dominant for-profit company will refine itself to act more consistently in the public interest -- even in the face of public criticism or governmental impulses to regulate. But such a move is not unprecedented. The key question is whether Facebook's course corrections -- steering us toward personal interactions over "passive" consumption of things like news reports -- really do help us. (For example, if you believe in the filter-bubble hypothesis, it seems possible that Facebook's privileging of personal interactions over news may make filter bubbles worse.) This brings us to Problem Number 6, below.
(6) Social media are bad for us because they're bad for democracy.
There are multiple arguments that Facebook and other social media (Twitter's another frequent target) are bad for democracy. The Verge provides a good beginning list here. The article notes that Facebook's own personnel -- including its awesomely titled "global politics and government outreach director" -- are acknowledging the criticisms by publishing a series of blog postings. The first one is from the leader of Facebook's "civic engagement team," and the others are from outside observers, including Harvard law professor Cass Sunstein (who's been a critic of "filter bubbles" since long before that term was invented -- his preferred term is "information cocoons").
I briefly mentioned Sunstein's work in Part 1. Here in Part 2 I'll note mainly that Sunstein's essay for Facebook begins by listing ways in which social-media platforms are actually good for democracy. In fact, he writes, "they are not merely good; they are terrific." For all that goodness, Sunstein writes, they also exacerbate what he has discussed earlier (notably in a 1999 paper) as "group polarization." In short, he argues, the filter bubble leads like-minded people to hold their shared opinions in more extreme forms. The result? More extremism generally, unless deliberative forums are properly designed with appropriate "safeguards."
Perhaps unsurprisingly, given that Facebook is hosting his essay, Sunstein credits Facebook with taking steps to provide such safeguards -- steps that in his view include Facebook chief Mark Zuckerberg's declaration that the company is working to fight misinformation in its news feed. But I like Sunstein's implicit recognition that political polarization, while bad, may be no worse as a result of social media in particular, or even of this century's media environment as a whole:
"By emphasizing the problems posed by knowing falsehoods, polarization, and information cocoons, I do not mean to suggest that things are worse now than they were in 1960, 1860, 1560, 1260, or the year before or after the birth of Jesus Christ. Information cocoons are as old as human history."
(I made that argument, in similar form, in a debate with Farhad Manjoo—not then a Times columnist—almost a decade ago.)
Just as important, I think, is Sunstein's admission that we don't really have unequivocal data showing that social media are a particular problem even in relation to other modern media:
"Nor do I mean to suggest that with respect to polarization, social media are worse than newspapers, television stations, social clubs, sports teams, or neighborhoods. Empirical work continues to try to compare various sources of polarization, and it would be reckless to suggest that social media do the most damage. Countless people try to find diverse topics, and multiple points of view, and they use their Facebook pages and Twitter feeds for exactly that purpose. But still, countless people don't."
Complementing Sunstein's essay is a piece by Facebook's Samidh Chakrabarti, who underscores the company's new initiative to make News Feed contributions more transparent (so you can see who's funding a political ad or a seemingly authentic "news story"). Chakrabarti also expresses the company's hope that its "Trust Project for News On Facebook" will help users "sharpen their social media literacy." And Facebook has just announced its plan to use user rankings to rate media sources' credibility.
I'm all for more media literacy, and I love crowd-sourcing, and I support efforts to encourage both. But I share CUNY journalism professor Jeff Jarvis's concern that other components of Facebook's comprehensive response to public criticism may unintentionally undercut support, financial and otherwise, for trustworthy media sources.
Now, I'm aware that some critics argue the data really do show solidly that social media are undermining democracy. But I'm skeptical that "fake news" on Facebook or elsewhere in social media changed the outcome of the 2016 election, not least because a Pew Research Center study from a year ago suggests that digital news sources weren't nearly as important as traditional media sources. (Notably, Fox News was hugely influential among Trump voters; there was no counterpart news source for Clinton voters.)
That said, there's no reason to dismiss concerns about social media, which may play an increasing role -- as Facebook surely has -- as an intermediary of the news. Facebook's Chakrabarti may want to promote "social media literacy," and the company has been forced to acknowledge that "Russian entities" tried to use Facebook as an "information weapon." But Facebook doesn't want in the least to play the role a social-media-literate citizenry should be playing for itself. Writes Chakrabarti:
"In the public debate over false news, many believe Facebook should use its own judgment to filter out misinformation. We've chosen not to do that because we don't want to be the arbiters of truth, nor do we imagine this is a role the world would want for us."
Of course some critics may disagree. As I've said above, the data are equivocal, but that hasn't made the data's interpreters equivocal. Take, for example, a couple of recent articles -- one academic and another aimed at a popular audience -- that cast doubt on whether the radical democratization of internet access is a good thing, or at least whether it's as good a thing as we hoped a couple of decades ago. One is UC Irvine professor Richard Hasen's law-review article, posted last year and set for formal publication in the First Amendment Law Review this year, which he helpfully distilled into an LA Times op-ed here. The other is Wired's February 2018 cover story: "It's the (Democracy-Poisoning) Golden Age of Free Speech." (The Wired article is also authored by an academic, UNC Chapel Hill sociology professor Zeynep Tufekci.)
Both Hasen's and Tufekci's articles underscore that internet access has inverted an assumption that long informed free-speech law -- that the ability to reach mass audiences is necessarily expensive and scarce. In the internet era, what we have instead is what UCLA professor Eugene Volokh, in a Yale Law Journal article more than 20 years ago, memorably labeled "cheap speech." Volokh correctly anticipated back then that internet-driven changes in the media landscape would lead some social critics to conclude that the First Amendment's broad protections for speech would need to be revised:
"As the new media arrive, they may likewise cause some popular sentiment for changes in the doctrine. Today, for instance, the First Amendment rules that give broad protection to extremist speakers-Klansmen, Communists, and the like-are relatively low-cost, because these groups are politically rather insignificant. Even without government regulation, they are in large measure silenced by lack of funds and by the disapproval of the media establishment. What will happen when the KKK becomes able to conveniently send its views to hundreds of thousands of supporters throughout the country, or create its own TV show that can be ordered from any infobahn-connected household?"
There, in a nutshell, is a prediction of the world we're living in now (except that we, fortunately, failed to adopt the term "infobahn"). Hasen believes "non-governmental actors" -- that is, Facebook and Twitter and Google and the like -- may be "best suited to counter the problems created by cheap speech." I think that's a bad idea, not least because corporate decision-making may be less accountable than public law and regulation, and because of what Manjoo calls these companies' "misaligned incentives." Tufekci, I think, has the better approach. "[I]n fairness to Facebook and Google and Twitter," she writes in Wired, "while there's a lot they could do better, the public outcry demanding that they fix all these problems is mistaken." Because there are "few solutions to the problems of digital discourse that don't involve huge trade-offs," Tufekci insists that deciding on those solutions is necessarily a "deeply political decision" -- one involving difficult discussions about what we ask the government to do... or not to do.
She's got that right. She's also right that we haven't had those discussions yet. And as we begin them, we need to remember that radically democratic empowerment (all that cheap speech) may be part of the problem, but it's also got to be part of the solution.
Update: Part 3 is now available.
Mike Godwin is a Distinguished Senior Fellow at R Street Institute.
Filed Under: content, moderation, moral panics, social media
Companies: facebook, google, twitter, youtube