Real-World Application: Political Deepfakes and Democratic Integrity

Written by: Nick Nguyen

Introduction: Deepfakes as a Threat to Democracy

Political deepfakes represent a transformative threat to democratic integrity, weaponizing AI to erode the foundation of shared reality and warp public perception. Unlike traditional propaganda, deepfakes can fabricate reality outright, producing realistic video and audio of world leaders doing or saying things they never did. The technology also creates a “liar’s dividend”: the mere existence of deepfakes enables politicians to dismiss genuine, incriminating evidence as fake. This muddies what viewers and voters can know, making it difficult to distinguish truth from fabrication. When the public can no longer trust their own eyes and ears, the resulting cynicism and confusion can suppress voter turnout, incite social unrest, and shift election outcomes even after a correction is issued.

Figure 1: Conceptual illustration of AI deepfake technology.

Source: Moor Studio (2024) iStock

Case Study 1: The 2024 Biden Robocall Incident

In January 2024, just days before the New Hampshire primary, thousands of voters received an automated phone call featuring a voice identical to that of President Joe Biden. Using phrasing familiar from Biden’s speeches, the voice urged Democrats to stay home and “save their vote” for the general election in November rather than participate in the primary. It was a sophisticated audio deepfake designed specifically to suppress voter turnout.

The call was later traced to a political consultant, Steve Kramer, who hired a “magician” to create the AI voice clone using software from ElevenLabs. Kramer claimed his goal was to expose vulnerabilities in the electoral system, but in practice he demonstrated an immediate threat to it. The audio was realistic enough to fool casual listeners, and because it was distributed via private phone calls rather than public media, it was difficult to detect and fact-check in real time.

The consequences were significant. The incident prompted the Federal Communications Commission (FCC) to declare AI-generated voices in robocalls illegal under the Telephone Consumer Protection Act. While the deepfake did not ultimately flip the primary outcome, it served as a “canary in the coal mine” for the 2024 election cycle, highlighting how easily and cheaply a single actor can use AI to interfere with the voting behavior of thousands.

Figure 2: Conceptual image of robocaller deepfakes.

Source: McMillan, R., Corse, A., & Volz, D. (2024) The Wall Street Journal

Case Study 2: The Zelenskyy Surrender Deepfake

In March 2022, during the early weeks of the Russian invasion of Ukraine, a video appeared on Ukrainian news websites and social media showing President Volodymyr Zelenskyy standing behind a podium and telling his soldiers to lay down their arms and surrender to Russian forces. The video was, of course, a deepfake, and the first of its kind deployed in a high-stakes war in an attempt to undermine military morale and national resolve.

The video was created using Generative Adversarial Networks (GANs) to map Zelenskyy’s face onto an actor’s body. The quality was low (Zelenskyy’s head appeared slightly too large and his voice was deeper than usual), yet the video still spread after hackers compromised the “Ukraine 24” news network. The goal was to create immediate chaos and trigger a mass surrender of Ukrainian troops during a critical defensive period.
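The adversarial training behind GANs can be sketched in miniature. The toy below is a sketch only, not any real deepfake pipeline: a two-parameter generator and a logistic discriminator play the adversarial game over simple 1-D numbers standing in for images, with the discriminator learning to separate real samples from fakes while the generator learns to fool it. All distributions and hyperparameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
REAL_MU, REAL_SIGMA = 4.0, 1.25  # the "real data" distribution to imitate

# Generator: x = a*z + b with noise z ~ N(0, 1).
# Discriminator: D(x) = sigmoid(w*x + c), probability that x is real.
a, b = 1.0, 0.0
w, c = 0.1, 0.0
lr = 0.02

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

for step in range(5000):
    real = rng.normal(REAL_MU, REAL_SIGMA, 64)
    z = rng.normal(0.0, 1.0, 64)
    fake = a * z + b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    d_real = sigmoid(w * real + c)
    d_fake = sigmoid(w * fake + c)
    w += lr * (np.mean((1 - d_real) * real) - np.mean(d_fake * fake))
    c += lr * (np.mean(1 - d_real) - np.mean(d_fake))

    # Generator update (non-saturating loss): push D(fake) toward 1,
    # i.e. adjust (a, b) so fakes look real to the current discriminator.
    d_fake = sigmoid(w * fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

samples = a * rng.normal(0.0, 1.0, 10_000) + b
print(f"generator mean={samples.mean():.2f}, std={samples.std():.2f}")
```

In a real face-swap system the generator is a deep convolutional network producing images and the discriminator a classifier over those images, but the alternating gradient updates follow this same pattern: each network improves only by exploiting the other's current weaknesses.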

The attempt largely failed thanks to a pre-emptive educational campaign by the Ukrainian government and Zelenskyy’s own rapid response. Within minutes of the video’s spread, Zelenskyy posted a real video of himself, straight from the streets of Kyiv, debunking the fake. Even so, the incident set a dangerous precedent for information warfare, showing how deepfakes can bypass traditional diplomacy and military force to attack a nation’s psychological will to fight.

Figure 3: The deepfaked Zelenskyy surrender video.

Source: Bond, S. (2022) NPR

Why These Situations Are Complex and Challenging

Combating political deepfakes is an uphill battle because of the “asymmetry of speed”. A deepfake can be generated in minutes and go viral across global platforms in seconds, whereas professional fact-checking and technical verification can take hours or even days. By the time a video is debunked, its initial emotional impact, whether anger, fear, or outrage, has already taken root in the viewer’s mind; the damage is done. Confirmation bias compounds the problem: individuals are statistically less likely to question the authenticity of a video that aligns with their existing beliefs, making disinformation self-propagating within digital echo chambers.

Technical detection also faces a “cat and mouse” problem: as detection algorithms improve, the AI models used to create deepfakes learn to bypass the specific markers those detectors rely on. In a political context, this is further complicated by the risk of censorship. If a platform moves too quickly to remove a suspected deepfake that turns out to be a real but poorly shot video, it can face accusations of political interference. This gray area of content moderation gives malicious actors an opening to exploit during sensitive periods such as election night.
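The “cat and mouse” dynamic can be illustrated with a deliberately simplified sketch. Here a toy detector flags audio clips whose sample variance is suspiciously low, standing in for a real statistical artifact of synthetic speech, and the generator evades it by adding noise that masks exactly that marker. Every threshold, distribution, and function name below is a hypothetical illustration, not a real detection method.

```python
import numpy as np

rng = np.random.default_rng(1)

def detector_v1(clip):
    # Toy detector: flag clips whose variance is too low, a stand-in
    # for one specific statistical marker of synthetic audio.
    return clip.std() < 0.8

def make_fake(n, evade=False):
    clip = 0.5 * rng.standard_normal(n)  # "synthetic" clip: variance below the marker
    if evade:
        # Next-generation fake: add noise tuned to erase exactly the
        # marker that detector_v1 checks for.
        clip = clip + 0.7 * rng.standard_normal(n)
    return clip

naive  = [detector_v1(make_fake(1000)) for _ in range(200)]
evaded = [detector_v1(make_fake(1000, evade=True)) for _ in range(200)]
print(f"caught naive fakes:   {np.mean(naive):.0%}")
print(f"caught evading fakes: {np.mean(evaded):.0%}")
```

The naive fakes are caught reliably, while the evading fakes slip through almost entirely; the defender must now find a new marker, and the cycle repeats. Real detectors and generators are vastly more complex, but the arms-race structure is the same.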

Broader Implications for Democracy

The long-term danger of political deepfakes is not just a single stolen election, but the collapse of public trust in information itself. If any video can be faked, then any real video of corruption or misconduct can simply be dismissed as AI-generated. This leads to a state where voters cannot agree on the basic set of facts required for healthy debate. Democracy requires an informed electorate; deepfakes threaten to replace information with a permanent state of skepticism, in which the loudest voice or most convincing algorithm, rather than the truth, dictates the political landscape.

References

  1. Allyn, B. (2024). A political operative behind AI-generated Biden robocall faces $6M fine. NPR. https://www.npr.org/2024/05/23/nx-s1-4977582/fcc-ai-deepfake-robocall-biden-new-hampshire-political-operative
  2. Associated Press. (2024). Political consultant behind fake Biden robocall indicted in New Hampshire. The Guardian. https://www.theguardian.com/us-news/article/2024/may/23/biden-robocall-indicted-primary
  3. Wakefield, J. (2022). Deepfake Zelenskyy video taken down by social media platforms. BBC News. https://www.bbc.com/news/technology-60780142
  4. Bond, S. (2022). A deepfake video of Zelenskyy telling Ukrainians to surrender is circulating on social media. NPR. https://www.npr.org/2022/03/16/1087062648/deepfake-video-zelenskyy-experts-war-manipulation-ukraine-russia
  5. Kerr, D. (2024). The FCC says AI voices in robocalls are illegal. NPR. https://www.npr.org/2024/02/08/1230052884/the-fcc-says-ai-voices-in-robocalls-are-illegal
  6. Citron, D., & Chesney, R. (2018). Deepfakes and the New Disinformation War. Foreign Affairs. https://www.foreignaffairs.com/articles/world/2018-12-11/deepfakes-and-new-disinformation-war
  7. McMillan, R., Corse, A., & Volz, D. (2024). New Era of AI Deepfakes Complicates 2024 Elections. The Wall Street Journal. https://www.wsj.com/tech/ai/new-era-of-ai-deepfakes-complicates-2024-elections-aa529b9e