Get Ready For Deepfakes To Be Used In Financial Scams

from the it's-coming dept

Last month, scammers hijacked the Twitter accounts of former President Barack Obama and dozens of other public figures to trick victims into sending money. Thankfully, this brazen act of digital impersonation only fooled a few hundred people. But artificial intelligence (AI) is enabling new, more sophisticated forms of digital impersonation. The next big financial crime might involve deepfakes—video or audio clips that use AI to create false depictions of real people.

Deepfakes have inspired dread since the term was first coined three years ago. The most widely discussed scenario is a deepfake smear of a candidate on the eve of an election. But while this fear remains hypothetical, another threat is currently emerging with little public notice. Criminals have begun to use deepfakes for fraud, blackmail, and other illicit financial schemes.

This should come as no surprise. Deception has always existed in the financial world, and bad actors are adept at employing technology, from ransomware to robo-calls. So how big will this new threat become? Will deepfakes erode truth and trust across the financial system, requiring a major response by the financial industry and government? Or are they just an exotic distraction from more mundane criminal techniques, which are far more prevalent and costly?

The truth lies somewhere in between. No form of digital disinformation has managed to create a true financial meltdown, and deepfakes are unlikely to be the first. But as deepfakes become more realistic and easier to produce, they offer powerful new weapons for tech-savvy criminals.

Consider the most well-known type of deepfake, a “face-swap” video that transposes one person’s expressions onto someone else’s features. These can make a victim appear to say things she never said. Criminals could share a face-swap video that falsely depicts a CEO making damaging private comments—causing her company’s stock price to fall, while the criminals profit from short sales.

At first blush, this scenario is not much different from the feared political deepfake: a false video spreads through social or traditional media to sway mass opinion about a public figure. But in the financial scenario, perpetrators can make money on rapid stock trades even if the video is quickly disproven. Smart criminals will target a CEO already embroiled in some other corporate crisis, who may lack the credibility to refute a clever deepfake.

In addition to video, deepfake technology can create lifelike audio mimicry by cloning someone’s voice. Voice cloning is not limited to celebrities or politicians. Last year, a CEO’s cloned voice was used to defraud a British energy company out of $243,000. Financial industry contacts tell me this was not an isolated case. And it shows how deepfakes can cause damage without ever going viral. A deepfake tailored for and sent directly to one person may be the most difficult kind to thwart.

AI can generate other forms of synthetic media beyond video and audio. Algorithms can synthesize photos of fictional objects and people, or write bogus text that simulates human writing. Bad actors could combine these two techniques to create authentic-seeming fake social media accounts. With AI-generated profile photos and AI-written posts, the fake accounts could pass as human and earn real followers. A large network of such accounts could be used to denigrate a company, lowering its stock price due to false perceptions of a grassroots brand backlash.
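To see how low the barrier has become, consider a minimal sketch of machine-generated text using the openly available GPT-2 model via the Hugging Face transformers library. The model choice and seed phrase here are illustrative only; real influence operations would layer far more tooling on top:

    # A rough sketch: draft short, human-sounding posts from a seed phrase
    # using an off-the-shelf open-source language model.
    from transformers import pipeline

    generator = pipeline("text-generation", model="gpt2")

    outputs = generator(
        "I used to love this brand, but lately",  # illustrative seed phrase
        max_length=40,
        num_return_sequences=3,
        do_sample=True,  # sample so the three drafts differ
    )
    for out in outputs:
        print(out["generated_text"])

Even this off-the-shelf setup yields passable short posts, which is part of why fake-account networks are cheap to scale.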

These are just a few ways that deepfakes and other synthetic media can enable financial harm. My research highlights ten scenarios in total: one based in fact, plus nine hypotheticals. Remarkably, at least two of the hypotheticals have already come true in the few months since I first imagined them. A Pennsylvania attorney was scammed by imposters who reportedly cloned his own son’s voice, and women in India were blackmailed with synthetic nude photos. The threats may still be small, but they are rapidly evolving.

What can be done? It would be foolish to pin hopes on a silver bullet technology that reliably detects deepfakes. Detection tools are improving, but so are deepfakes themselves. Real solutions will blend technology, institutional changes, and broad public awareness.
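For a sense of what detection tooling looks like in practice, here is a minimal sketch of a frame-level video classifier pipeline. The scoring function below is a placeholder stub; a real system would plug in a trained model at that point, and even then no current model is reliable against the newest generators:

    # Sample frames from a video, score each with a classifier, aggregate.
    import cv2  # OpenCV, for reading video frames

    def frame_score(frame) -> float:
        """Placeholder for a trained fake-vs-real classifier that would
        return the probability that a frame is synthetic."""
        return 0.5  # stub value; a real model would go here

    def score_video(path: str, sample_every: int = 30) -> float:
        cap = cv2.VideoCapture(path)
        scores, index = [], 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if index % sample_every == 0:
                scores.append(frame_score(frame))
            index += 1
        cap.release()
        # Average per-frame scores into a single video-level estimate.
        return sum(scores) / len(scores) if scores else 0.0

    print(score_video("suspect_clip.mp4"))  # hypothetical file name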

Corporate training and controls can help inoculate workers against deepfake phishing calls. Methods of authenticating customers by their voices or faces may need to be re-examined. The financial industry already benefits from robust intelligence sharing and crisis planning for cyber threats; these could be expanded to cover deepfakes.
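As one illustration of such a control, a firm could require a challenge-response check for high-risk phone requests, so that a familiar voice alone is never sufficient. The sketch below is hypothetical, not an industry standard, and its key handling is deliberately simplified:

    # A caller must echo back a short code derived from a random nonce and
    # a secret shared out of band; a cloned voice alone cannot produce it.
    import hmac, hashlib, secrets

    SHARED_KEY = b"established-out-of-band"  # illustrative placeholder

    def make_challenge() -> str:
        # Random nonce the employee reads aloud to the caller.
        return secrets.token_hex(4)

    def expected_response(challenge: str) -> str:
        # Both sides derive the same short code from nonce plus shared key.
        digest = hmac.new(SHARED_KEY, challenge.encode(), hashlib.sha256)
        return digest.hexdigest()[:8]

    challenge = make_challenge()
    print("Read to caller:", challenge)
    print("Expect back:   ", expected_response(challenge))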

The financial sector must also collaborate with tech platforms, law enforcement agencies, journalists, and others. Many of these groups are already working to counter political deepfakes. But they are not yet as focused on the distinctive ways that deepfakes threaten the financial system.

Ultimately, efforts to counter deepfakes should be part of a broader international strategy to secure the financial system against cyber threats, such as the one the Carnegie Endowment is currently developing together with the World Economic Forum.

Deepfakes are hardly the first threat of financial deception, and they are far from the biggest. But they are growing and evolving before our eyes. To stay ahead of this emerging challenge, the financial sector should start acting now.

Jon Bateman is a fellow in the Cyber Policy Initiative of the Technology and International Affairs Program at the Carnegie Endowment for International Peace.


Filed Under: deepfakes, financial scams, scams


Reader Comments



  • Anonymous Coward, 10 Aug 2020 @ 4:16pm

    Let's hear it for digital signatures to be used even with phone or video calls.
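    A minimal sketch of the idea, using Ed25519 signatures from the Python cryptography library; key distribution and real-time streaming are left out, and the chunking is purely illustrative:

        # Sign chunks of call audio on the caller's device so the receiver
        # can verify they came from the holder of the private key.
        from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
        from cryptography.exceptions import InvalidSignature

        private_key = Ed25519PrivateKey.generate()  # stays on the caller's device
        public_key = private_key.public_key()       # shared with contacts beforehand

        audio_chunk = b"...raw audio bytes for one second of the call..."
        signature = private_key.sign(audio_chunk)

        try:
            public_key.verify(signature, audio_chunk)
            print("chunk verified: signed by the expected device")
        except InvalidSignature:
            print("chunk rejected: possible tampering or impersonation")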


  • anon, 10 Aug 2020 @ 6:13pm

    The financial sector must also collaborate with tech platforms?

    Really? This statement alone shows that the author really doesn't understand the financial sector. The financial sector only functions because of technology.


    • Anonymous Coward, 10 Aug 2020 @ 6:34pm

      Re: The financial sector must also collaborate with tech platforms

      ...


    • PaulT (profile), 11 Aug 2020 @ 2:12am

      Re: The financial sector must also collaborate with tech platforms

      That doesn't mean they typically collaborate with platforms outside their own, which is what the author was saying.


  • Anonymous Coward, 10 Aug 2020 @ 8:00pm

    Missing technique

    Scams are perpetrated because they are profitable. Scams are profitable because the risk of getting caught, versus the amount of money to be obtained, is favorable. In particular, scams are often profitable because even when the victim reports the scam, it is very difficult or outright impossible to reverse the damage, so the scammer walks away with the money. This suggests an obvious countermeasure: make it more difficult for a scam to remain profitable after it has been discovered and reported to the authorities.

    For the financial industry, this suggests having agreements in place to forcibly unwind bad transactions. Scammers try to send the money to non-cooperating banks as soon as possible, so that the transaction cannot be unwound.

    Counter this by an industry-wide policy that transfers to entities that refuse to participate in a forced-unwind protocol are delayed by such a margin that there is a good chance the victim will report the scam before the transfer to the non-cooperating entity. This will motivate more parties to join in, else their customers will be complaining about the long delay. Participating parties would in turn enforce that their customers can only use these funds for purposes that can be unwound. Notably, this would mean no cash withdrawals of transferred funds until after the delay period.

    We already have some protocols like this with paper checks, but that was done in case the check bounced. In cases where the sending bank confirms the account is funded, the receiving entity seems to be quite willing to release the funds more quickly. Reverse that, and it becomes much easier to recover from certain types of scams.
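    As a minimal sketch of how such a hold-and-unwind rule might look in code (the names, amounts, and hold period are illustrative, not any bank's actual protocol):

        # Transfers to non-participating institutions sit in a pending state
        # for a hold period, during which a reported scam can unwind them.
        from dataclasses import dataclass, field
        from datetime import datetime, timedelta

        HOLD_PERIOD = timedelta(days=3)  # illustrative

        @dataclass
        class Transfer:
            amount: float
            to_participating_bank: bool
            created: datetime = field(default_factory=datetime.utcnow)
            unwound: bool = False

            def is_settled(self, now: datetime) -> bool:
                if self.unwound:
                    return False
                if self.to_participating_bank:
                    return True  # releasable now; unwindable via the shared protocol
                return now >= self.created + HOLD_PERIOD

            def unwind(self, now: datetime) -> bool:
                # Participating banks honor unwinds at any time; others only
                # while the funds are still held.
                if self.to_participating_bank or now < self.created + HOLD_PERIOD:
                    self.unwound = True
                return self.unwound

        t = Transfer(amount=243000.0, to_participating_bank=False)
        now = datetime.utcnow()
        print(t.is_settled(now))  # False: still inside the hold window
        print(t.unwind(now))      # True: the victim reported in time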


    • Ninja (profile), 11 Aug 2020 @ 7:54am

      Re: Missing technique

      I like the idea of adding delays to transactions between sources owned by different people. In fact, adding delays, small or big, would solve problems in several areas.


  • Ninja (profile), 11 Aug 2020 @ 7:47am

    2FA

    I wonder if these accounts had 2-factor auth enabled and, if so, how the attackers managed to work around it. I'm assuming they had it enabled and that the attackers managed to get the token that keeps apps connected to the service or something similar. Any info circulating about this?
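    For context, here is a minimal sketch of how a standard TOTP second factor is computed (RFC 6238) using only Python's standard library. Note that a stolen session or app token lets an attacker skip this step entirely, which is one common way 2FA gets sidestepped:

        # Derive the current 6-digit code from a shared secret and the clock.
        import base64, hashlib, hmac, struct, time

        def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
            key = base64.b32decode(secret_b32, casefold=True)
            counter = int(time.time()) // period          # 30-second time step
            msg = struct.pack(">Q", counter)              # 8-byte big-endian counter
            digest = hmac.new(key, msg, hashlib.sha1).digest()
            offset = digest[-1] & 0x0F                    # dynamic truncation
            code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
            return str(code % (10 ** digits)).zfill(digits)

        print(totp("JBSWY3DPEHPK3PXP"))  # widely used demo secret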


