Holding AI-Generated Deepfakes Accountable in Election Misinformation
This article examines how AI-generated deepfakes fuel election misinformation, surveys the current U.S. legal landscape and recent examples worldwide, and offers policy recommendations for transparency and accountability.
How Deepfakes Are Created
Generative AI models, primarily generative adversarial networks (GANs) and autoencoders, enable highly realistic fake media by training on real images, video, or audio of a target person. A GAN pairs a generator, which produces synthetic images, with a discriminator, which tries to tell fakes from real data; the two improve each other through iterative training. Autoencoder-based face swapping learns a shared encoding of two faces and decodes the target's face onto the source video. Open-source tools such as DeepFaceLab and FaceSwap dominate video face-swapping, while voice-cloning tools can mimic a person's speech from just minutes of recorded audio. Commercial platforms such as Synthesia convert text into video avatars and have been misused in disinformation campaigns, and mobile apps such as FaceApp and Zao make face-swapping fast and simple. Together these advances have made deepfakes cheaper and easier to produce than ever.
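The adversarial training loop at the heart of a GAN can be summarized in a few lines. The sketch below is illustrative only: it assumes PyTorch, small fully connected networks, a flattened 64x64 RGB input, and a `real_batch` tensor of real training images; these names and sizes are assumptions, and production deepfake tools use far larger convolutional models.

```python
# Minimal GAN training-loop sketch (illustrative, not a working deepfake pipeline).
import torch
import torch.nn as nn

latent_dim = 100
img_dim = 64 * 64 * 3  # assumed: flattened 64x64 RGB frames

# Generator: maps random noise to a synthetic image.
G = nn.Sequential(
    nn.Linear(latent_dim, 512), nn.ReLU(),
    nn.Linear(512, img_dim), nn.Tanh(),
)

# Discriminator: scores how likely an image is to be real.
D = nn.Sequential(
    nn.Linear(img_dim, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1), nn.Sigmoid(),
)

loss = nn.BCELoss()
opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)

def train_step(real_batch):
    """One adversarial update: D learns to spot fakes, G learns to fool D."""
    batch = real_batch.size(0)
    ones, zeros = torch.ones(batch, 1), torch.zeros(batch, 1)

    # 1) Discriminator step: real images labeled 1, generated images labeled 0.
    fake = G(torch.randn(batch, latent_dim))
    d_loss = loss(D(real_batch), ones) + loss(D(fake.detach()), zeros)
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # 2) Generator step: try to make D label the fakes as real.
    g_loss = loss(D(fake), ones)
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
    return d_loss.item(), g_loss.item()
```

With each pass, the discriminator gets better at flagging synthetic frames and the generator gets better at evading it, which is why output quality keeps improving as training continues.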
Detection and authentication are the key defenses. Detection relies on AI models that spot inconsistencies such as irregular blinking or metadata mismatches, while authentication embeds invisible watermarks or cryptographically signed provenance metadata at the point of creation. The EU AI Act will require machine-readable markings on synthetic media, though detection remains an ongoing arms race.
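As a concrete illustration of signed provenance metadata, the sketch below hashes a media file and signs the hash together with a disclosure label, so any later edit to the file or the label fails verification. It is a minimal toy under stated assumptions: the shared `SECRET_KEY`, function names, and file path are placeholders, and real provenance schemes (e.g., C2PA-style content credentials) use public-key certificates and embed the manifest in the file itself.

```python
# Toy provenance-signing sketch: sign a media hash plus a disclosure label,
# verify both later. Not any standard's actual mechanism.
import hashlib, hmac, json

SECRET_KEY = b"placeholder-signing-key"  # assumption: stand-in for real key material

def sign_media(path: str, metadata: dict) -> dict:
    digest = hashlib.sha256(open(path, "rb").read()).hexdigest()
    record = {"sha256": digest, **metadata}
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_media(path: str, record: dict) -> bool:
    claimed_sig = record.get("signature", "")
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    # Fail if the file bytes no longer match the signed hash.
    if hashlib.sha256(open(path, "rb").read()).hexdigest() != unsigned.get("sha256"):
        return False
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(claimed_sig, expected)

# Example: label a clip as AI-generated at publication, verify downstream.
# record = sign_media("ad_clip.mp4", {"generator": "synthetic", "disclosed": True})
# verify_media("ad_clip.mp4", record)  # True unless the file or label changed
```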
Deepfakes in Recent Elections
Deepfakes have surfaced in election cycles worldwide, often aiming to mislead voters or discredit candidates. In the 2024 U.S. primaries, an AI-generated robocall mimicking President Biden's voice urged voters to stay home; it led to a proposed $6 million FCC fine and a criminal indictment of the consultant behind it. Former President Trump shared AI-generated images implying an endorsement from pop singer Taylor Swift, drawing widespread media criticism. Internationally, AI-generated content appeared in Indonesia's presidential election, in attacks on Bangladesh's opposition, in disinformation campaigns in Moldova, amid Taiwan's election tensions, and in a fabricated audio clip released just before Slovakia's 2023 parliamentary election.
Notably, many viral "deepfakes" are openly shared memes or cheaply doctored clips rather than covert deceptions. Yet even unsophisticated fakes can shift voter attitudes, underscoring the growing influence of synthetic media on elections worldwide.
U.S. Legal Framework and Accountability
The U.S. lacks a comprehensive federal deepfake law. Existing statutes on impersonation, electioneering, and fraud apply in principle but map awkwardly onto AI-generated misinformation. The Department of Justice and state attorneys general are increasingly relying on broad fraud and voting-rights-interference statutes. The Federal Election Commission has weighed rules that would bar political ads depicting falsified candidate media, and agencies such as the FTC and DOJ have signaled that deceptive commercial deepfakes can trigger liability.
Legislation and Proposals
Federal proposals such as the DEEPFAKES Accountability Act would require disclaimers on manipulated political ads and stiffen penalties for false election content. More than 20 states have enacted election-related deepfake laws, with mixed results owing to First Amendment challenges. Some statutes let targeted candidates sue, and a few even allow disqualification of candidates who knowingly spread deepfakes, but many cases still proceed as defamation or intellectual-property claims rather than under election-specific laws.
Policy Recommendations
Experts advocate transparency: clear labels or watermarks on AI-synthesized political content alert audiences and keep campaigns accountable. Narrow bans on specific harms (e.g., deceptive robocalls) are likely defensible, whereas outright bans risk infringing free speech. Most proposals favor liability tied narrowly to an intent to mislead voters.
Technical solutions like watermarking and open-source detection tools should be deployed alongside international cooperation to trace disinformation campaigns. Education and a robust independent press remain critical to building public resilience against misinformation.
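To make the detection side concrete, the sketch below shows one crude consistency heuristic of the kind open-source detection tools build on: comparing the noise residual of each video frame against the video's overall profile, since spliced or synthesized regions often carry a different noise signature. The function names, the 3x3 box filter, and the z-score threshold are illustrative assumptions, not any particular tool's method.

```python
# Toy noise-consistency check across video frames (illustrative heuristic only).
import numpy as np

def noise_residual(frame: np.ndarray) -> float:
    """Rough per-frame noise estimate: std of the high-frequency residual."""
    gray = frame.mean(axis=2) if frame.ndim == 3 else frame
    # Subtract a blurred copy (simple 3x3 box filter) to isolate fine detail.
    padded = np.pad(gray, 1, mode="edge")
    blurred = sum(
        padded[i:i + gray.shape[0], j:j + gray.shape[1]]
        for i in range(3) for j in range(3)
    ) / 9.0
    return float((gray - blurred).std())

def flag_inconsistent_frames(frames, z_threshold: float = 3.0):
    """Return indices of frames whose noise level deviates from the video's norm."""
    levels = np.array([noise_residual(f) for f in frames])
    z = (levels - levels.mean()) / (levels.std() + 1e-9)
    return [i for i, score in enumerate(z) if abs(score) > z_threshold]
```

Real detectors combine many such signals with trained models, which is why deploying them alongside provenance watermarking, rather than relying on either alone, is the common recommendation.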
As the technology evolves, policy must deter malicious use without stifling innovation or satire, with the ultimate aim of informed voters and trustworthy elections.