Dancing queen: Deepfakes purporting to depict Queen Elizabeth and Kim Jong Un went viral © Montage: Esan Swan/FT

Queen Elizabeth had never before included a dance routine in her annual Christmas message, nor had North Korean dictator Kim Jong Un previously warned Americans that “democracy is fragile”, but that is what they appeared to do in videos that went viral last year.

Both those videos — and thousands like them — were “deepfakes” that manipulate the speech and actions of politicians and celebrities using synthetic, artificial intelligence-generated faces.

Although many deepfakes are produced for their comedic or shock value with no intention of misleading viewers, such videos have become a tool for spreading misinformation.

A viral deepfake video featuring Kim Jong Un was used to encourage Americans to vote © RepresentUs/YouTube

“What is distinctive about deepfakes is that the audiovisual element has a more powerful effect on our psychology than other types of media,” says Jon Bateman, a researcher in the Cyber Policy Initiative at the Carnegie Endowment for International Peace.

In principle, AI could automate media editing and manipulation and put it within anyone’s reach, but this method of deception requires huge amounts of training data and technical skill. Other, more straightforward, ways of attacking targets or influencing public opinion already offer an easier pay-off.

Manipulated videos that do not use deep learning algorithms — referred to as “cheapfakes” or “shallowfakes” — are widespread on social media and can shift opinion even after they are debunked. A video of Nancy Pelosi, the US House Speaker, that made her appear drunk and slurring her words was created simply by slowing down the original footage, yet it went viral despite its obvious fakeness.

“Deepfakes are a reality but they are impractical to create,” says Andy Patel, an AI researcher at F-Secure, a cyber security group. “Shallowfakes are quickly debunked but people still believe them — so why would you put more effort into it?”

Beyond the eventual potential for anyone to create a convincing fake video using out-of-the-box AI tools, deepfakes do offer new opportunities in high-stakes, highly targeted contexts. Mr Bateman calls these “narrowcast” deepfakes, as opposed to deepfakes broadcast widely online. Criminal and state-sponsored hacking groups can use deepfakes to impersonate specific people for fraudulent purposes, he says.

Such attacks usually rely on email communication or human impersonation over a phone call, dubbed “vishing”. Attacks using voice calls tap into the emotive power of direct human interaction, and AI can make such impersonators harder to detect. In 2019, an unnamed UK energy company transferred £200,000 to fraudsters who used deepfake audio technology to impersonate the chief executive of its German parent company on a phone call.

Ed Bishop, co-founder and chief technology officer of cyber security company Tessian, says that while a deep learning model requires huge amounts of training data, it only needs a limited sample of audio or video data of the target to generate a personalised deepfake — about one minute of audio or 20 to 40 minutes of video.

Deepfake written communication is a big focus for Tessian: AI models can generate believable, casual correspondence, and Twitter, LinkedIn and public blogs provide readily available data sets of personal writing that hackers could harness.

Social media platforms are leading the fight against malicious deepfakes, although researchers and security experts see targeted attacks as a bigger threat than viral misinformation.

Both Facebook and Google have generated and shared huge data sets of their own deepfake audio and video clips, created to train AI-based deepfake detection models. But the deep learning models used to create deepfakes — generative adversarial networks (GANs) — work by pitting a generator that produces fakes against a discriminator that tries to detect them, refining the fakes until they pass. Developing AI tools that can beat these models at their own game is difficult.
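To illustrate that cat-and-mouse dynamic, the sketch below shows a stripped-down adversarial training loop in PyTorch. The toy networks, sizes and random data are placeholders for illustration only, not any real deepfake system.

```python
# Minimal sketch of a GAN training loop: a generator learns to produce
# samples that a discriminator cannot tell apart from real ones.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # assumed toy dimensions

generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(32, data_dim)        # stand-in for real media samples
    noise = torch.randn(32, latent_dim)
    fake = generator(noise)

    # 1. Train the discriminator to tell real samples from generated ones.
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2. Train the generator to make fakes the discriminator accepts as real.
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Because the generator is trained explicitly to fool its own detector, any external detection model is, in effect, chasing a moving target.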

Accenture’s Cyber Fusion Center, the US consultancy's cyber security R&D lab, used Facebook’s deepfake data set to develop a detection tool that runs several AI models, each analysing different deepfake features, and weights their indicators to produce an estimate of authenticity.
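As a rough illustration of that weighting step, the snippet below combines hypothetical per-detector scores into a single authenticity estimate. The detector names and weights are invented for the example and are not Accenture's actual models.

```python
# Illustrative only: weighting several detector outputs into one estimate.

def authenticity_score(scores, weights):
    """Combine per-detector 'looks authentic' probabilities (0..1) into one figure."""
    total = sum(weights.values())
    return sum(scores[name] * w for name, w in weights.items()) / total

scores = {
    "metadata_check": 0.90,   # file and container metadata look consistent
    "face_physiology": 0.35,  # blinking, head pose and lighting plausibility
    "audio_sync": 0.60,       # lip movement vs audio alignment
}
weights = {"metadata_check": 1.0, "face_physiology": 2.0, "audio_sync": 1.5}

print(f"estimated authenticity: {authenticity_score(scores, weights):.2f}")
```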

Malek Ben Salem, a cyber security researcher who worked on the project, says detection tools analyse metadata and technical features, physical and physiological integrity, and audio and semantic irregularities.

Further advances could be made by embedding a “proof of origin” functionality at the source of media creation — phones and other devices — that would encode a digital stamp specifying when and where the media was recorded.

This could enable social media and news platforms to automatically verify and label the authenticity of media that they publish. Microsoft has partnered with media organisations on Project Origin to develop tools to create and detect “digital fingerprints” on media samples.
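The sketch below captures the general shape of such a scheme: a capture device signs a hash of the file together with a small provenance manifest, which a platform can later verify. It uses the third-party `cryptography` package and is an assumption about how such a stamp might work, not Project Origin's actual format.

```python
# Minimal sketch of a "proof of origin" stamp: the capturing device signs a
# hash of the media plus a provenance manifest; a platform verifies it later.
import hashlib
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

device_key = Ed25519PrivateKey.generate()      # would live in device hardware

media_bytes = b"...raw video bytes..."         # placeholder for the recording
manifest = json.dumps({
    "sha256": hashlib.sha256(media_bytes).hexdigest(),
    "captured_at": "2021-03-01T10:00:00Z",     # hypothetical capture time
    "device_id": "camera-1234",                # hypothetical device identifier
}).encode()

stamp = device_key.sign(manifest)              # the "digital stamp"

# Later, a platform checks the stamp against the device's public key
# and re-hashes the file to confirm it has not been altered.
try:
    device_key.public_key().verify(stamp, manifest)
    unaltered = hashlib.sha256(media_bytes).hexdigest() == json.loads(manifest)["sha256"]
    print("stamp valid; file unaltered" if unaltered else "stamp valid but file changed")
except InvalidSignature:
    print("stamp invalid: provenance cannot be confirmed")
```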

Yet the looming challenge is to strike a balance between appropriate awareness and radical scepticism, says Mr Bateman. Deepfakes are unlikely to spread misinformation both convincingly and undetected, but they do not need to. The mere threat of convincing deepfakes — along with viral shallowfakes and unconvincing deepfakes — is enough to undermine traditional sources of authority.

John Conwell, principal data scientist at internet forensics group DomainTools, emphasises the need for public education and the use of tools such as reverse media searches, but also notes their limitations. “Relying on individuals to verify media authenticity doesn’t work when the fake information matches their perception of reality, which is why political memes are so effective.”

