The Whole "YouTube Radicalizes People" Story Doesn't Seem To Have Much Evidence To Back It Up
from the myths,-myths-all-around-me dept
There seem to be a lot of "myths" about big internet companies that don't stand up to that much scrutiny, even as they're often accepted as common knowledge. There's the idea that Facebook's algorithm remains in place only because it makes Facebook more money (Facebook's own internal research suggests otherwise), or that disinformation goes viral on social media first (a detailed study showed cable news is a much bigger vector of virality).
Another big one is that YouTube "radicalizes" people via its algorithm. There are lots of stories about how someone went to YouTube to watch, like, video game clips, and within a week had become an alt-right edgelord troll shouting Trump slogans or whatever. Hell, this was a key plot point in The Social Dilemma, in which the young boy in the fictionalized sitcom family starts watching some videos on his phone and, a week later, is participating in an extremist political rally that turns into a riot.
However, a very thorough recent study (first highlighted by Ars Technica) found that there's really not much evidence to support any of this narrative. From the abstract:
Recently, YouTube’s scale has fueled concerns that YouTube users are being radicalized via a combination of biased recommendations and ostensibly apolitical “anti-woke” channels, both of which have been claimed to direct attention to radical political content. Here we test this hypothesis using a representative panel of more than 300,000 Americans and their individual-level browsing behavior, on and off YouTube, from January 2016 through December 2019. Using a labeled set of political news channels, we find that news consumption on YouTube is dominated by mainstream and largely centrist sources. Consumers of far-right content, while more engaged than average, represent a small and stable percentage of news consumers. However, consumption of “anti-woke” content, defined in terms of its opposition to progressive intellectual and political agendas, grew steadily in popularity and is correlated with consumption of far-right content off-platform. We find no evidence that engagement with far-right content is caused by YouTube recommendations systematically, nor do we find clear evidence that anti-woke channels serve as a gateway to the far right. Rather, consumption of political content on YouTube appears to reflect individual preferences that extend across the web as a whole.
Of course, this isn't the first study to find the same thing. A similar study that was released last year came to the same basic conclusion:
In conclusion, our study shows that one cannot proclaim that YouTube’s algorithm, at the current state, is leading users towards more radical content. There is clearly plenty of content on YouTube that one might view as radicalizing or inflammatory. However, the responsibility of that content is with the content creator and the consumers themselves. Shifting the responsibility for radicalization from users and content creators to YouTube is not supported by our data.
A study from two years ago... also found the same thing:
In short, the best quantitative evidence available demonstrates that any “radicalization” that occurs on YouTube happens according to the standard model of persuasion: people adopt new beliefs about the world by combining their prior beliefs with new information (Guess and Coppock, 2018). People select information about topics that interest them; if political, they prefer information that is at least somewhat congenial to their prior beliefs (Stroud, 2017). Persuasion happens at the margins when it does happen.
Indeed, that study showed that the classic story of someone watching a Trump-leaning "alt-lite" video and getting sucked down into alt-right extremism doesn't seem likely to happen that often.
A random walk algorithm beginning at an Alt-Lite video and taking 5 steps randomly selecting one of the ten recommended videos will only be recommended a video from the Alt-Right approximately one out of every 1,700 trips. For a random walker beginning at a “control” video from the mainstream media, the probability is so small that it is difficult to see on the graph, but it is certainly no more common than one out of every 10,000 trips.
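To make the random-walk procedure described above concrete, here is a minimal sketch (in Python) of that kind of simulation: start at an "Alt-Lite" video, take five hops, each time picking one of the top ten recommendations uniformly at random, and count how often the walk ever lands on an "Alt-Right" video. The recommendation graph and the labels below are entirely made up for illustration; the study itself crawled YouTube's actual recommendations and used its own channel classifications.

    # Hypothetical sketch of a recommendation random walk. The toy graph and
    # labels are invented for illustration, not taken from the study's data.
    import random

    def walk_hits_target(graph, labels, start, target_label, steps=5, fanout=10):
        """Take `steps` hops, each time choosing one of the top `fanout`
        recommendations at random; return True if any visited video
        carries the target label."""
        current = start
        for _ in range(steps):
            recs = graph.get(current, [])[:fanout]
            if not recs:
                break
            current = random.choice(recs)
            if labels.get(current) == target_label:
                return True
        return False

    def estimate_hit_rate(graph, labels, start, target_label, n_walks=100_000):
        hits = sum(walk_hits_target(graph, labels, start, target_label)
                   for _ in range(n_walks))
        return hits / n_walks

    # Toy example: each video ID maps to its list of recommended videos.
    graph = {
        "alt_lite_1":   ["mainstream_1"] * 9 + ["alt_right_1"],
        "mainstream_1": ["mainstream_2"] * 10,
        "mainstream_2": ["mainstream_1"] * 10,
        "alt_right_1":  ["alt_right_1"] * 10,
    }
    labels = {"alt_lite_1": "Alt-Lite", "mainstream_1": "Mainstream",
              "mainstream_2": "Mainstream", "alt_right_1": "Alt-Right"}

    rate = estimate_hit_rate(graph, labels, "alt_lite_1", "Alt-Right")
    print(f"Estimated chance of reaching Alt-Right within 5 steps: {rate:.4f}")

The study's headline number (roughly one in 1,700 walks from an Alt-Lite starting point) comes from running this sort of simulation over the real recommendation graph, which is far larger and far less tidy than the toy one above.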
And, not that you would necessarily trust research coming directly from YouTube itself, but the company recently released some information on this question as well. Contrary to the "common knowledge," the company says it hasn't seen more engagement on extremist content:
Actually, through surveys and feedback, we’ve found that most viewers do not want to be recommended borderline content, and many find it upsetting and off-putting. In fact, when we demoted salacious or tabloid-type content we saw that watchtime actually increased by 0.5% over the course of 2.5 months, relative to when we didn’t place any limits.
Also, we haven’t seen evidence that borderline content is on average more engaging than other types of content. Consider content from flat earthers. While there are far more videos uploaded that say the Earth is flat than those that say it’s round, on average, flat earth videos get far fewer views. Surveys show that borderline content is satisfying to only a very small portion of viewers on YouTube. We’ve invested significant time and money toward making sure it doesn’t find its way to broader audiences through our recommendations system. Today, borderline content gets most of its views from sources other than non-subscribed recommendations.
Now, I will note that YouTube recently changed that final line. I noticed it last month when Evelyn Douek tweeted about the post -- and the final sentence she highlighted said something different at the time:
From YouTube's blog yesterday: "borderline content gets most of its views from other platforms that link to YT"
Yup! That is why a) we need to consider the internet as an entire ecosystem not platform-by-platform & b) YT shd release more info abt this https://t.co/ypsT5CCd0K
— evelyn douek (@evelyndouek) September 16, 2021
In the version she tweeted, the final line reads: "Today, borderline content gets more of its views from other platforms that link to YouTube." But, sometime after she tweeted that, YouTube changed it to read: "Today, borderline content gets most of its views from sources other than non-subscribed recommendations." That's... a little different. At the very least, it makes me wonder why the change was made -- and it highlights that there may be a difference between "subscribed recommendations" and "non-subscribed recommendations."
Still, there are questions about how people are finding YouTube videos -- and they do come from all over. As Douek highlights in a follow-up tweet, the most widely viewed external links on Facebook point to YouTube. This doesn't absolve YouTube and its algorithm, but it once again highlights how (1) all of this is a lot more complicated than people make it out to be, (2) the internet is an interconnected ecosystem, not just a few giant sites, and (3) the common wisdom you may have heard... might not be supported by the data.
Filed Under: algorithms, myth debunking, outside links, radicalization
Companies: youtube