Deception & Trust: A Deep Look At Deep Fakes
from the don't-get-carried-away dept
With the recent focus on disinformation and “fake news,” new technologies used to deceive people online have sparked concerns among the public. While in the past only an expert forger could create realistic fake media, deceptive techniques built on the latest machine-learning research allow anyone with a smartphone to generate high-quality fake videos, or “deep fakes.”
Like other forms of disinformation, deep fakes can be designed to incite panic, sow distrust in political institutions, or produce myriad other harmful outcomes. Because of these potential harms, lawmakers and others have begun expressing concerns about deep-fake technology.
Underlying these concerns is the superficially reasonable assumption that deep fakes represent an unprecedented development in the ecosystem of disinformation, largely because deep-fake technology can create such realistic-looking content. Yet this argument assumes that the quality of the content carries the most weight in the trust evaluation. In other words, people making this argument believe that the highly realistic content of a deep fake will induce the viewer to trust it — and share it with other people in a social network — thus hastening the spread of disinformation.
But there are several reasons to be suspicious of that assumption. In reality, deep-fake technology operates similarly to other media that people use to spread disinformation. Whether content will be believed and shared may not be derived primarily from the content’s quality, but from psychological factors that any type of deceptive media can exploit. Thus, contrary to the hype, deep fakes may not be the techno-boogeyman some claim them to be.
Deceiving with a deep fake.
When presented with any piece of information — be it a photograph, a news story, a video, etc. — people do not simply take that information at face value. Instead, individuals in today’s internet ecosystem rely heavily on their network of social contacts when deciding whether to trust content online. In one study, for example, researchers found that participants were more likely to trust an article when it had been shared by people whom the individual already trusted.
This conclusion comports with an evolutionary understanding of human trust. In fact, humans likely evolved to believe information that comes from within their social networks, regardless of its content or quality.
At a basic level, one might expect such trust to be unfounded; individuals usually try to maximize their fitness (the likelihood they will survive and reproduce) at the expense of others. If an individual sees an incoming danger and fails to alert anyone else, that individual may have a better chance of surviving that specific interaction.
However, life is more complex than that. Studies suggest that in repeated interactions with the same individual, a person is more likely to place trust in the other individual because, without any trust, neither party would gain in the long term. When members of a group can rely on other members, individuals within the group gain a net benefit on average.
Of course, a single lie or selfish action could help an individual survive an individual encounter. But if all members of the group acted that way, the overall fitness of the group would decrease. And because groups with more cooperation and trust among their members are more successful, these traits were more likely to survive on an aggregate level.
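To make that intuition concrete, here is a minimal, hypothetical sketch (not from the article) using standard prisoner’s-dilemma-style payoffs: a single act of cheating beats a single act of trusting, but over repeated encounters mutual trust outscores mutual cheating. The specific payoff numbers are assumptions chosen only for illustration.

```python
# Hypothetical payoffs for one encounter, keyed by (my move, their move).
# Cheating a truster pays best once, but mutual trust pays best over time.
PAYOFFS = {
    ("trust", "trust"): 3,
    ("trust", "cheat"): 0,
    ("cheat", "trust"): 5,
    ("cheat", "cheat"): 1,
}

def total_payoff(my_moves, their_moves):
    """Sum one player's payoff across a sequence of repeated encounters."""
    return sum(PAYOFFS[(mine, theirs)] for mine, theirs in zip(my_moves, their_moves))

rounds = 10
print(total_payoff(["cheat"], ["trust"]))                    # 5: a lone lie wins one encounter
print(total_payoff(["trust"], ["trust"]))                    # 3: trusting earns less that one time
print(total_payoff(["trust"] * rounds, ["trust"] * rounds))  # 30: mutual trust compounds
print(total_payoff(["cheat"] * rounds, ["cheat"] * rounds))  # 10: mutual cheating stagnates
```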
Humans today, therefore, tend to trust those close to them in a social network because such behavior helped the species survive in the past. For a deep fake, then, the apparent authenticity of the video may matter less in deciding whether to trust that information than whether the individual trusts the person who shared it.
Further, even the most realistic, truthful-sounding information can fail to produce trust when the individual holds beliefs that contradict the presented information. The theory of cognitive dissonance contends that when an individual’s beliefs contradict his or her perception, mental tension — or cognitive dissonance — is created. The individual will attempt to resolve this dissonance in several ways, one of which is to accept evidence that supports his or her existing beliefs and dismiss evidence that does not. This leads to what is known as confirmation bias.
One fascinating example of confirmation bias in action came in the wake of President Donald Trump’s press secretary claiming that more people watched Trump’s inauguration than any other inauguration in history. Despite video evidence and a side-by-side photo comparison of the National Mall indicating the contrary, many Trump supporters claimed that a photo depicting turnout on Jan. 20, 2017, showed a fuller crowd than it actually did because they knew it was a photo of Trump’s inauguration. (Sean Spicer later clarified that he was including the television audience as well as the in-person audience, but the accuracy of that characterization is also debatable.) In other words, the Trump supporters either convinced themselves that the crowd size was larger despite observable evidence to the contrary, or they knowingly lied to support — or confirm — their bias.
The simple fact is that it does not require much convincing to deceive the human mind. For instance, multiple studies have shown that rudimentary disinformation can generate inaccurate memories in the targeted individual. In one study, researchers were able to implant fake childhood memories in subjects by simply providing a textual description of an event that never occurred.
According to these theories, then, when it comes to whether a person believes a deep fake is real, the quality of the fake matters less than whether the individual has pre-existing biases or trusts the person who shared it. In other words, existing beliefs, not the perceived “realness” of a medium, drive whether new information is believed. And, given the diminished role that the quality of a medium plays in the believability calculus, more rudimentary methods — like using Photoshop to alter photographs — can achieve the same results as a deep fake in terms of spreading disinformation. Thus, while deep fakes present a challenge generally, deep fakes as a class of disinformation do not present an altogether new problem as far as believability is concerned.
Sharing Deep Fakes Online.
With the rise of social media and the fundamental change in how we share information, some worry that the unique characteristics of deep fakes could make them more likely to be shared online regardless of whether they deceive the target audience.
People share information — whether it be in written, picture or video form — online for many different reasons. Some may share it because it is amusing or pleasing. Others may do so because it offers partisan political advantage. Sometimes the sharer knows the information is false. Other times, the sharer does not know whether the information is accurate but simply does not care enough to correct the record.
People also tend to display a form of herd behavior in which seeing others share content drives the individual to share the content themselves. This allows disinformation to spread across larger platforms like Facebook or Twitter as the content builds up a base of shares. The number of people who receive a piece of disinformation can therefore grow exponentially. As the popularity of a given piece of content increases, so too does its credibility as it reaches the edges of a network, exploiting the trust that individuals have in their social networks. And even if the target audience does not believe a given deep fake, widespread propagation of the content can still cause damage; simply viewing false content can reinforce beliefs that the user already has, even if the individual knows that the content is an exaggeration or a parody.
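As a rough, hypothetical illustration of how that sharing can compound (the contact counts and share rates below are invented for the example, not data from any study), a simple branching model shows how quickly reach grows once each wave of sharers exposes a new wave of contacts:

```python
# A toy branching model of content sharing. Assumes each sharer exposes a
# fixed number of contacts, a fixed fraction of whom share it onward.
def simulated_reach(contacts_per_share=20, share_rate=0.15, hops=6):
    """Return the total number of people exposed after a given number of sharing hops."""
    sharers = 1.0        # the original poster
    total_reached = 0.0
    for _ in range(hops):
        reached = sharers * contacts_per_share  # people exposed in this hop
        total_reached += reached
        sharers = reached * share_rate          # fraction who pass it along
    return int(total_reached)

for hops in (2, 4, 6):
    print(hops, "hops ->", simulated_reach(hops=hops), "people exposed")
# 2 hops -> 80, 4 hops -> 800, 6 hops -> 7280 with these assumed numbers
```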
Deep fakes, in particular, present the audience with rich sound and video that engage the viewer. A realistic deep fake that can target the user’s existing beliefs and exploit his or her social ties, therefore, may spread rapidly online. But so, too, do news articles and simple image-based memes. Even without the richness of a deep fake, still images and written text can target the psychological factors that drive content-sharing online. In fact, image-based memes already spread at alarming rates due to their simplicity and the ease with which they convey information. And while herd-behavior tendencies will drive more people to share content, this applies to all forms of disinformation, not just deep fakes.
Currently, a video still represents an undeniable record of events for many people. But as this technology becomes more commonplace and the limitations of video become more apparent, the psychological factors above will drive trust and sharing. And the tactics that bad actors use to deceive will exploit these social patterns regardless of medium.
When viewed in this context, deep fakes are not some unprecedented challenge society cannot adapt to; they are simply another tool of disinformation. We should of course remain vigilant and understand that deep fakes will be used to spread disinformation. But we also need to consider that deep fakes may not live up to the hype.
Jeffrey Westling (@jeffreywestling) is a Technology and Innovation Research Associate at the R Street Institute.
Filed Under: deep fakes, disinformation, trust
Reader Comments
Trust
When we see stories about how fast WhatsApp can spread the rumor "A rapist just entered town and is going to kidnap someone!!", what will happen when we are bombarded with fake pictures and video as well?
It's already frightening how fast rumors spread.
Deep fakes have been the choice tools of the government for eons.
Re:
So, what you're saying is that the government is a deep fake deep state?
AND??
what did we do in the past??
There have been tons of newspapers that have changed a bit to connect to the internet.. What happened in the past when this happened??
Are we at such a time that we are DEMANDING truth?? OMG!! FINALLY?? WHY IN HELL SHOULD WE... THE PEOPLE have been besieged for years with BS.. trying to get through all of it has been very entertaining.
Are we going to tell Christians that they are a Jewish sect??
Are we going to tell everyone HOW/WHY the USA has gone to war so many times since the late 1890s???
Are we going to explain that the biggest corp and job creator is the military complex??
How about explaining how the USA really didn't do much to the Southern nations in the Americas, except foster more corruption and install leaders that WANTED the corps to take over..
Or that corps have tried taking over most of those nations... one way or another..
GET OFF IT, we ain't doing any of that, and you know it.. this is grandstanding at its most glorious..
NOW we can finger-point at anything and everything... a "they did this, they did that, and WE didn't do anything" attitude..
Either we're going to start facing facts...
...and committing to evidence-based data and policy...
...or we're going to kill ourselves by shitting the pool until it's toxic.
That's how the Permian-Triassic Extinction Event killed 90% of all life on the planet (mostly microbe farts). That's what we're doing, only with industry as a population multiplier.
Either we're going to learn how to stop bullshitting ourselves and tossing fact-based science aside for more comfortable, simpler narratives, or we're going to blindly walk ourselves right into a massive population correction.
I'm not saying we can. We may be exactly that stupid. But those are our options at this point.
Re: Either we're going to start facing facts...
Or, we could demand multiple sources, and then vet those sources.
Believe what you want to believe?
[ link to this | view in thread ]
Re: Either we're going to start facing facts...
Sorry to drift off on a technicality here, but, ironically, your metaphor for false information is... well... not fully truthful.
There is no certain answer to either the exact amount of life lost in, or the cause of, the end-Permian extinction. The best guesses are 96% of marine species, 70% of land animal species, 83% of all species, but that's diversity, not the amount of life.
The reason it happened? It can't just be the microbes, because all they would do is create global warming. Five years ago that seemed like a great idea, because they discovered that a type of bacteria evolved the ability to eat acetate and fart methane at the same time as a huge increase in global temperatures, resulting in oceans as hot as 105 degrees Fahrenheit. Unfortunately for that theory, last year they discovered that this happened hundreds of thousands of years after most things had already died off within a sudden 30,000-year span.
What was the reason then? Still no smoking gun, but to summarize, the best theory is that an asteroid struck Antarctica, which made the opposite side of the planet -- Siberia -- into a giant volcano that erupted for a million years, spewing out millions of tons of rock and CO2. The rock allowed methane-farting bacteria to evolve, which caused intense global warming, killing off the land animals. The CO2 poisoned the oceans and killed off the marine life. The lack of competition allowed hydrogen-sulfide-pooping bacteria to flourish, destroying the ozone layer. The ultraviolet light that was allowed in killed off the plants.
In short? Yeah, the farting bacteria didn't help anything, but who's really to blame? Russia and aliens.
Re: AND??
and part and parcel..
comes from those that have created their own proof and facts..
"I can MAKE it happen".. isn't proof.
I can beat the hell out of it, and it should die.. just don't get it.
The biggest debate we can have is facts over Justice..
In some cases there are no facts in Justice..
And if facts had any Justice, MJ would be legal, hemp never would have been made illegal, currants would be legal.. We would be wearing more cotton and wool goods rather than plastic, and ropes would be predominantly made of hemp..
Ever hear that stinging nettle was used for clothing?? Yep.
WE have forgotten many truths.
Re: Re: AND??
And it's funny..
That long ago, we had a problem with painkillers..
Laudanum... was very common, and made illegal..
Made from??
Opium
Ethanol..
Seems we have the SAME problem..
Re: Re: Either we're going to start facing facts...
Wow. I stand... educated.
This is way more complex than I thought.
Re: Re:
Not fake. They spent trillions of dollars of America's wealth on a tunnel complex under the United States. They initiate false flag operations to sway public opinion in order to falsely justify agendas. They say one thing while doing another. They have been faking America out for at least six decades.
Deep fakes will be fuel for denialism
As with the problem we already know from Photoshopped images, the bigger issue with deep fakes will likely be the endless knee-jerk calls of "fake news".
We will distrust even more what we are shown by traditional and traditionally reputable sources, news outlets, etc.
Easily faked video will no doubt increase the trolling and the deluge of fake news. It assists the "bad actors" with their goal. We will feel the temptation to quickly (or ultimately) dismiss "inconvenient" raw footage as mere propaganda.
"Fake news"
We get allegations that videos are fake news even without deepfakes. If someone doesn't want to believe mounting evidence that their heartthrob representative committed a crime (or a social faux pas on camera), they'll just deny reality.
It happens daily in this era.
When deepfakes are produced as social commentary (say, Trump and Putin being convincingly faked into a sex scene, for an obvious example), at that point we can decide society has taken them to heart.