Verify any claim · lenz.io
Claim analyzed
Politics
“The increasing use of deepfake technology poses a significant threat to democratic elections.”
The conclusion
The claim is largely accurate. Multiple credible sources — including Brookings, the Brennan Center, and legislative testimony — document real election-linked deepfake incidents (voter-suppression robocalls, fabricated candidate videos, incidents across 38 countries). However, the 2024–2025 global election super-cycle did not produce the catastrophic "deepfake election" many feared, and controlled experiments show minimal direct persuasion effects on voters. The threat is real and growing — particularly through trust erosion and procedural disinformation — but its demonstrated electoral impact remains more limited than the claim implies.
Based on 23 sources: 19 supporting, 2 refuting, 2 neutral.
Caveats
- The 2024–2025 global super election cycle, covering nearly half the world's population, did not produce the large-scale deepfake-driven electoral disruption many experts predicted, suggesting the threat has not yet materialized at the scale often implied.
- Controlled experimental research (Yale ISPS) found deepfake videos have 'minimal effects' on direct voter persuasion, though this does not address indirect harms like trust erosion or procedural disinformation.
- Several sources supporting the claim come from advocacy organizations or cybersecurity firms with institutional incentives to emphasize the threat, which may inflate the perceived severity.
Sources
Sources used in the analysis
Source 1 — These deepfakes are no longer crude forgeries. They are often indistinguishable from reality. In recent election cycles, AI-generated robocalls have falsely instructed voters to stay home. It is one of the fastest growing threats to election security today.
Source 2 — The capacity of this technology to synthesize highly convincing media systematically erodes public confidence, facilitates sophisticated political manipulation, and poses a direct, existential risk to free and fair elections. As the technology underpinning deepfakes continues its trajectory toward greater accessibility and complexity, these associated risks are projected only to escalate further.
Source 3 — Instances of manipulated or wholly generated content have surfaced, posing a threat to democratic discourse and electoral integrity. In September 2023, generative AI-based political interference upended Slovakia’s parliamentary elections with an audio clip allegedly featuring voices discussing election manipulation. For example, deepfakes and voice cloning have already been used to imitate candidates running for office, such as an AI-generated robocall purporting to be U.S. President Joe Biden discouraging voting.
Source 4 — Deepfakes can directly alter voter preferences and spread disinformation about candidates by making them appear to take policy positions they do not hold or engage in illegal behavior. Coordinated disinformation campaigns utilizing deepfake videos could prevent citizens from voting by spreading false information about election procedures or intimidating voters through blackmail.
Source 5 — With artificial intelligence-based methods for creating deepfakes becoming increasingly sophisticated and accessible, deepfakes are raising a set of challenging policy, technology, and legal issues. Candidates in a political campaign can be targeted by manipulated videos in which they appear to say things that could harm their chances for election.
Source 6 — As deepfake tools become more sophisticated and accessible, they pose a significant threat to the democratic process around the world. Policymakers must recognize the urgency of the situation and take proactive measures to address this unprecedented challenge.
Source 7 — Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) pose significant risks, particularly in the realm of online election interference. Deepfake technology can generate realistic videos of political figures saying or doing things they never did, eroding public trust in authentic information sources.
Source 8 — Deepfakes and generative AI used for disinformation undoubtedly pose a risk to fair democratic processes globally. Mitigating that risk will take a concentrated effort from stakeholders, including AI developers and operators, the press, governments, political candidates, social media platforms, and voters themselves.
Source 9 — Problematically, however, concern about deepfakes poses a threat of its own: unscrupulous public figures or stakeholders can use this heightened awareness to falsely claim that legitimate audio content or video footage is artificially generated and fake. The fact is, many deepfakes are poorly made and easy for humans to spot.
Source 10 — Deepfakes amplify misinformation's impact in unprecedented ways, erasing the line between truth and fabrication. In July 2024, a deepfake video of Kamala Harris spread rapidly, raising concerns about undetectable deceptions damaging elections. As deepfakes erode trust in content, the foundation of democracy—an informed electorate—will be at risk.
Source 11 — A new political ad in Georgia's U.S. Senate race is raising concerns about the use of artificial intelligence in elections after Rep. Mike Collins' campaign released a deepfake video showing Sen. Jon Ossoff mocking farmers and defending a government shutdown. Ossoff never said any of it. The video, posted last week on social media, was created using artificial intelligence and features computer-generated audio of Ossoff claiming to support the shutdown and that he'd 'only seen a farm on Instagram.'
Source 12 — Advanced AI-generated content can now create highly convincing videos of political figures saying things they've never said, or doing things they've never done. The timing of such releases, especially during election cycles, can change public opinion before the deepfake can be discredited. The real danger lies not just in the immediate impact of false content, but in the erosion of trust in politics, creating a 'liar's dividend' where genuine footage can be dismissed as fake.
Source 13 — Specifically, in the political realm, the misuse of deepfakes has accelerated worldwide. According to a 2024 report by the cybersecurity firm Recorded Future, at least 38 countries experienced deepfake incidents targeting public figures within a single year. Most of these cases were linked to elections. Forged audio and video are now regularly attributed to political candidates, eroding public trust and manipulating opinions.
Source 14 — The fundamental concern with deepfakes is that their verisimilitude between reality and the manipulated audio, video, or images can mislead individuals into thinking that the content is real. And yet, election after election in 2024 showed that these warnings were overblown or at least premature.
Source 15 — The malicious use of deepfake technology can lead to violations of human rights and freedoms, or even facilitate criminal activities such as financial fraud. However, creating manipulated images can also pose other threats, including those to democratic states and the principles that govern them.
Source 16 — While deepfakes have dominated global headlines as an existential threat to democracy, the evidence from the 2024–2025 “super election cycle,” during which nearly half of the world's population voted, reveals a more complex reality. The much-feared “deepfake election” did not materialise; however, the convergence of cheapfakes, synthetic media, and algorithmic amplification continues to erode public trust in elections, journalism, and institutions.
Source 17 — The technology — which can do everything from streamlining mundane campaign tasks to creating fake images, video or audio — already has been deployed in some national races around the country and has spread far more widely in elections across the globe. Despite its power as a tool to mislead, efforts to regulate it have been piecemeal or delayed, a gap that could have the potential to significantly impact election outcomes.
Source 18 — Now any individual with basic tech skills and a malicious desire and intent to harm, can sow chaos and potentially influence the outcome of an election. The deepfake could be a video of a president postponing the election due to a cybersecurity attack, a candidate accepting a bribe, or an election official stuffing a ballot box.
Source 19 — The observed intensification of the use of deep fakes to intervene in the will-forming process in open societies or to influence election results is of particular importance in the super-election year. Fear, mistrust, and insecurity thus might be the most important effects of deep fakes.
Source 20 — From the rapid spread of misinformation to the rise of deepfake technology, AI is reshaping democratic processes with significant impact on democracy.
Source 21 — It also examines various deepfake detection methods, legal regulations, and policy measures that can lower the risk of deepfake-related scandals in elections.
Source 22 — Policymakers have expressed concern that deepfakes could mislead voters, but prior research has found that such videos have minimal effects. Our findings suggest that even if deepfakes are not themselves persuasive, information about deepfakes can nevertheless be weaponized to dismiss real political videos.
Source 23 — The EU AI Act, effective 2024, classifies deepfakes as high-risk AI and requires labeling for synthetic content, including political uses, to mitigate election interference risks while balancing innovation.
Expert review
How each expert evaluated the evidence and arguments
Expert 1 — The Logic Examiner
The logical chain from evidence to claim is largely sound but contains an important scope distinction: the claim asserts a "significant threat," not a realized catastrophic outcome. Sources 1, 3, 4, 6, 11, and 13 provide direct evidence of documented election-linked deepfake incidents (AI robocalls suppressing votes, deepfake political ads, 38 countries experiencing deepfake incidents targeting public figures), which logically supports the existence of a real and growing threat mechanism. The opponent's strongest counter-evidence — Source 16 (Civicus) and Source 14 (Diálogo Político) — establishes that a catastrophic "deepfake election" did not materialize in 2024–2025, and Source 22 (Yale ISPS) finds minimal persuasive effects from deepfake videos in controlled experiments. However, these refutations commit a scope-mismatch fallacy: disproving that deepfakes have already caused electoral collapse does not disprove that they pose a "significant threat," especially when Source 16 itself concedes that synthetic media "continues to erode public trust in elections." The proponent correctly identifies that the opponent conflates "threat not yet fully realized" with "threat not significant," while the opponent correctly flags that multi-source consensus from advocacy-oriented sources risks overstating certainty. But the experimental evidence from Yale (Source 22) is narrowly scoped to persuasion effects and does not address procedural disinformation or trust erosion, which are the primary threat mechanisms cited. The claim is therefore Mostly True: the evidence logically supports a significant and growing threat to democratic elections through documented incidents and credible mechanisms, though the magnitude and realized impact remain contested by empirical evidence from the 2024 super-cycle.
Expert 2 — The Context Analyst
The claim is broadly supported by a wide range of sources documenting real incidents (AI robocalls suppressing votes, deepfake political ads, and election-linked deepfake incidents in 38 countries per Source 13). But critical context is omitted: the most comprehensive empirical review of the 2024–2025 super election cycle (Source 16, Civicus) found the feared "deepfake election" did not materialize; Source 14 (Diálogo Político) concluded warnings were "overblown or at least premature"; Source 22 (Yale ISPS) found deepfake videos have "minimal effects" on voters in controlled experiments; and Source 9 (Brennan Center) notes many deepfakes are easy to spot. The claim's framing of deepfakes as a "significant threat" is directionally accurate: real incidents have occurred, trust erosion is documented, and the technology is advancing. Without acknowledging that empirical evidence of large-scale electoral harm remains limited, however, and that the most recent comprehensive election cycle did not see the catastrophic outcomes predicted, the claim overstates certainty and omits the nuanced, contested nature of the actual demonstrated impact.
Expert 3 — The Source Auditor
The most authoritative sources in this pool — Maryland General Assembly testimony (Source 1, 0.92), the Brookings Institution (Sources 3 & 5, 0.85), arXiv preprints (Sources 4 & 7, 0.80–0.85), and the Brennan Center for Justice (Sources 6 & 9, 0.75–0.82) — all independently confirm that deepfake technology poses a significant and growing threat to democratic elections, citing documented real-world incidents (AI robocalls suppressing votes, deepfake political ads, Slovakia election interference) and structural mechanisms such as trust erosion and procedural disinformation. The opponent's strongest counter-sources are Civicus.org (Source 16, 0.70) and Diálogo Político (Source 14, 0.70), which note the feared "deepfake election" did not fully materialize in 2024–2025, and Yale ISPS (Source 22, 0.60, undated), which found "minimal effects." But these are lower-authority sources, and critically, Civicus itself acknowledges ongoing erosion of public trust in elections from synthetic media, while the Yale study's unknown date and narrow experimental scope limit its weight against the broader multi-source consensus. The claim is "Mostly True": the threat is real, documented, and confirmed by the most reliable independent sources, though the nuance that catastrophic electoral disruption has not yet fully materialized in a single election cycle is a legitimate caveat that prevents a score of 9–10.
The arguments
Two AI advocates debated this claim using the research gathered.
Argument for
Multiple high-authority, independent sources converge on the finding that as deepfakes become more sophisticated and accessible, they can convincingly impersonate candidates, spread procedural voting disinformation, and erode trust—core mechanisms that directly undermine electoral integrity (Source 1 Maryland General Assembly; Source 3 Brookings; Source 4 arXiv; Source 6 Brennan Center). This is not hypothetical: documented election-linked incidents (e.g., AI robocalls discouraging voting and deepfake political ads) and global reporting of accelerating, election-targeted deepfake misuse demonstrate a significant and growing threat to democratic elections (Source 1 Maryland General Assembly; Source 3 Brookings; Source 11 CBS News; Source 13 Stimson Center).
Rebuttal: Your reliance on isolated incidents — a single robocall and one political ad from Source 11 (CBS News) and Source 3 (Brookings) — commits the fallacy of hasty generalization: cherry-picking anecdotal cases does not establish a "significant threat" when the most comprehensive empirical review available, Source 16 (Civicus.org), examined nearly half the world's population voting across the 2024–2025 super election cycle and explicitly concluded that the feared deepfake election "did not materialise." Furthermore, your convergence-of-sources argument collapses under scrutiny because Source 22 (Yale ISPS) — the only source grounded in controlled experimental evidence rather than speculation or advocacy — directly found that deepfake videos have "minimal effects" on voters, meaning your multi-source consensus reflects theoretical alarm, not demonstrated electoral harm.
Argument against
The empirical record from the most recent and comprehensive election cycle directly undermines this claim: Source 16 (Civicus.org) explicitly states that "the much-feared 'deepfake election' did not materialise" during the 2024–2025 super election cycle when nearly half the world's population voted, and Source 14 (Diálogo Político) corroborates this, concluding that "election after election in 2024 showed that these warnings were overblown or at least premature." Furthermore, Source 22 (Institution for Social and Policy Studies) from Yale found that deepfake videos have "minimal effects" on voters, and Source 9 (Brennan Center for Justice) acknowledges that "many deepfakes are poorly made and easy for humans to spot," collectively demonstrating that the claimed "significant threat" remains largely theoretical rather than a proven, realized danger to democratic elections.
Rebuttal: You're committing a false dichotomy by treating Civicus's point that a single “deepfake election” didn't materialize as proof there's no significant threat, even though that same report says synthetic media plus algorithmic amplification is still eroding trust in elections and institutions (Source 16 Civicus.org), and multiple independent sources document real election-linked incidents such as voter-suppression robocalls and deepfake candidate impersonation (Source 1 Maryland General Assembly; Source 3 Brookings Institution; Source 11 CBS News). And your reliance on “minimal effects” and “poorly made” deepfakes cherry-picks narrow findings while ignoring the core threat mechanism emphasized across the brief—procedural disinformation and trust erosion—where even a debunked or obvious fake can still depress turnout or poison discourse (Source 1 Maryland General Assembly; Source 2 IJFMR; Source 4 arXiv.org; Source 9 Brennan Center for Justice).