Claim analyzed

Politics

“The increasing use of deepfake technology poses a significant threat to democratic elections.”

The conclusion

Reviewed by Kosta Jordanov, editor · Feb 15, 2026
Mostly True
7/10
Created: February 15, 2026
Updated: March 01, 2026

The claim is largely accurate. Multiple credible sources — including Brookings, the Brennan Center, and legislative testimony — document real election-linked deepfake incidents (voter-suppression robocalls, fabricated candidate videos, incidents across 38 countries). However, the 2024–2025 global election super-cycle did not produce the catastrophic "deepfake election" many feared, and controlled experiments show minimal direct persuasion effects on voters. The threat is real and growing — particularly through trust erosion and procedural disinformation — but its demonstrated electoral impact remains more limited than the claim implies.

Based on 23 sources: 19 supporting, 2 refuting, 2 neutral.

Caveats

  • The 2024–2025 global super election cycle, covering nearly half the world's population, did not produce the large-scale deepfake-driven electoral disruption many experts predicted, suggesting the threat has not yet materialized at the scale often implied.
  • Controlled experimental research (Yale ISPS) found deepfake videos have 'minimal effects' on direct voter persuasion, though this does not address indirect harms like trust erosion or procedural disinformation.
  • Several sources supporting the claim come from advocacy organizations or cybersecurity firms with institutional incentives to emphasize the threat, which may inflate the perceived severity.

Sources

Sources used in the analysis

#1
Maryland General Assembly 2026-01-19 | Committee Testimony on S.B. 141 - January 19, 2026
SUPPORT

These deepfakes are no longer crude forgeries. They are often indistinguishable from reality. In recent election cycles, AI-generated robocalls have falsely instructed voters to stay home. It is one of the fastest growing threats to election security today.

#2
IJFMR 2026-01-15 | The Critical Threat of Deepfakes: Vulnerability and Resilience in Democratic Elections - IJFMR
SUPPORT

The capacity of this technology to synthesize highly convincing media systematically erodes public confidence, facilitates sophisticated political manipulation, and poses a direct, existential risk to free and fair elections. As the technology underpinning deepfakes continues its trajectory toward greater accessibility and complexity, these associated risks are projected only to escalate further.

#3
Brookings Institution 2024-01-01 | The impact of generative AI in a global election year
SUPPORT

Instances of manipulated or wholly generated content have surfaced, posing a threat to democratic discourse and electoral integrity. In September 2023, generative AI-based political interference upended Slovakia’s parliamentary elections with an audio clip allegedly featuring voices discussing election manipulation. For example, deepfakes and voice cloning have already been used to imitate candidates running for office, such as an AI-generated robocall purporting to be U.S. President Joe Biden discouraging voting.

#4
arXiv.org 2024-06-20 | Examining the Implications of Deepfakes for Election Integrity - arXiv.org
SUPPORT

Deepfakes can directly alter voter preferences and spread disinformation about candidates by making them appear to take policy positions they do not hold or engage in illegal behavior. Coordinated disinformation campaigns utilizing deepfake videos could prevent citizens from voting by spreading false information about election procedures or intimidating voters through blackmail.

#5
Brookings Institution 2024-01-01 | Artificial Intelligence, Deepfakes, and the Uncertain Future of Truth
SUPPORT

With artificial intelligence-based methods for creating deepfakes becoming increasingly sophisticated and accessible, deepfakes are raising a set of challenging policy, technology, and legal issues. Candidates in a political campaign can be targeted by manipulated videos in which they appear to say things that could harm their chances for election.

#6
Brennan Center for Justice 2025-01-01 | The Effect of AI on Elections Around the World and What to Do About It
SUPPORT

As deepfake tools become more sophisticated and accessible, they pose a significant threat to the democratic process around the world. Policymakers must recognize the urgency of the situation and take proactive measures to address this unprecedented challenge.

#7
arXiv 2025-04-04 | Charting the Landscape of Nefarious Uses of Generative Artificial Intelligence for Online Election Interference - arXiv
SUPPORT

Generative Artificial Intelligence (GenAI) and Large Language Models (LLMs) pose significant risks, particularly in the realm of online election interference. Deepfake technology can generate realistic videos of political figures saying or doing things they never did, eroding public trust in authentic information sources.

#8
NCC Group 2024-03-12 | Combating Deepfakes and Disinformation in 2024 | NCC Group
SUPPORT

Deepfakes and generative AI used for disinformation undoubtedly pose a risk to fair democratic processes globally. Mitigating that risk will take a concentrated effort from stakeholders, including AI developers and operators, the press, governments, political candidates, social media platforms, and voters themselves.

#9
Brennan Center for Justice 2024-01-23 | Deepfakes, Elections, and Shrinking the Liar's Dividend | Brennan Center for Justice
NEUTRAL

Problematically, however, concern about deepfakes poses a threat of its own: unscrupulous public figures or stakeholders can use this heightened awareness to falsely claim that legitimate audio content or video footage is artificially generated and fake. The fact is, many deepfakes are poorly made and easy for humans to spot.

#10
Columbia Science and Technology Law Review 2024-07-15 | Deepfakes and Democracy: Free Speech vs. Election Integrity
SUPPORT

Deepfakes amplify misinformation's impact in unprecedented ways, erasing the line between truth and fabrication. In July 2024, a deepfake video of Kamala Harris spread rapidly, raising concerns about undetectable deceptions damaging elections. As deepfakes erode trust in content, the foundation of democracy—an informed electorate—will be at risk.

#11
CBS News 2025-11-21 | Georgia Rep. Mike Collins' campaign uses AI-generated deepfake of Senator Jon Ossoff in tight Senate showdown - CBS News
SUPPORT

A new political ad in Georgia's U.S. Senate race is raising concerns about the use of artificial intelligence in elections after Rep. Mike Collins' campaign released a deepfake video showing Sen. Jon Ossoff mocking farmers and defending a government shutdown. Ossoff never said any of it. The video, posted last week on social media, was created using artificial intelligence and features computer-generated audio of Ossoff claiming to support the shutdown and that he'd 'only seen a farm on Instagram.'

#12
Mea Digital Evidence Integrity 2026-01-01 | 8 Deepfake Threats to Watch in 2026
SUPPORT

Advanced AI-generated content can now create highly convincing videos of political figures saying things they've never said, or doing things they've never done. The timing of such releases, especially during election cycles, can change public opinion before the deepfake can be discredited. The real danger lies not just in the immediate impact of false content, but in the erosion of trust in politics, creating a 'liar's dividend' where genuine footage can be dismissed as fake.

#13
Stimson Center 2026-02-23 | AI in the Age of Fake (Imagined) Content - Stimson Center
SUPPORT

Specifically, in the political realm, the misuse of deepfakes has accelerated worldwide. According to a 2024 report by the cybersecurity firm Recorded Future, at least 38 countries experienced deepfake incidents targeting public figures within a single year. Most of these cases were linked to elections. Forged audio and video are now regularly attributed to political candidates, eroding public trust and manipulating opinions.

#14
Diálogo Político 2025-02-04 | Artificial Intelligence and elections: premature threats? - Diálogo Político
REFUTE

The fundamental concern with deepfakes is that their verisimilitude between reality and the manipulated audio, video, or images can mislead individuals into thinking that the content is real. And yet, election after election in 2024 showed that these warnings were overblown or at least premature.

#15
Biblioteka Nauki 2024-06-26 | The Impact of Deepfakes on Elections and Methods of Combating Disinformation in the Virtual World - Biblioteka Nauki
SUPPORT

The malicious use of deepfake technology can lead to violations of human rights and freedoms, or even facilitate criminal activities such as financial fraud. However, creating manipulated images can also pose other threats, including those to democratic states and the principles that govern them.

#16
Civicus.org 2025-03-01 | Future-Proofing Elections Against Deepfake Disinformation - Civicus.org
NEUTRAL

While deepfakes have dominated global headlines as an existential threat to democracy, the evidence from the 2024–2025 “super election cycle,” during which nearly half of the world's population voted, reveals a more complex reality. The much-feared “deepfake election” did not materialise; however, the convergence of cheapfakes, synthetic media, and algorithmic amplification continues to erode public trust in elections, journalism, and institutions.

#17
PBS News 2024-06-17 | How deepfake tech is a high risk, high reward tool for local political campaigns | PBS News
SUPPORT

The technology — which can do everything from streamlining mundane campaign tasks to creating fake images, video or audio — already has been deployed in some national races around the country and has spread far more widely in elections across the globe. Despite its power as a tool to mislead, efforts to regulate it have been piecemeal or delayed, a gap that could have the potential to significantly impact election outcomes.

#18
NextGen Policy 2025-01-01 | Tackling the Threat of Deepfakes in our Elections
SUPPORT

Now any individual with basic tech skills and a malicious desire and intent to harm, can sow chaos and potentially influence the outcome of an election. The deepfake could be a video of a president postponing the election due to a cybersecurity attack, a candidate accepting a bribe, or an election official stuffing a ballot box.

#19
Konrad-Adenauer-Stiftung 2024-05-01 | The Influence of Deep Fakes on Elections - Konrad-Adenauer-Stiftung
SUPPORT

The observed intensification of the use of deep fakes to intervene in the will-forming process in open societies or to influence election results is of particular importance in the super-election year. Fear, mistrust, and insecurity thus might be the most important effects of deep fakes.

#20
Colorado State University Source 2024-09-10 | How artificial intelligence is reshaping democracy
SUPPORT

From the rapid spread of misinformation to the rise of deepfake technology, AI is reshaping democratic processes with significant impact on democracy.

#21
Shodhsagar Law Journal 2024-06-01 | The Impact of Deepfake Technology on Elections and Democratic ...
SUPPORT

It also examines various deepfake detection methods, legal regulations, and policy measures that can lower the risk of deepfake-related scandals in elections.

#22
Institution for Social and Policy Studies | The Negative Consequences of Informing Voters about Deepfakes: Evidence from Two Survey Experiments | Institution for Social and Policy Studies
REFUTE

Policymakers have expressed concern that deepfakes could mislead voters, but prior research has found that such videos have minimal effects. Our findings suggest that even if deepfakes are not themselves persuasive, information about deepfakes can nevertheless be weaponized to dismiss real political videos.

#23
LLM Background Knowledge 2024-08-01 | EU AI Act Regulations on Deepfakes in Elections
SUPPORT

The EU AI Act, effective 2024, classifies deepfakes as high-risk AI and requires labeling for synthetic content, including political uses, to mitigate election interference risks while balancing innovation.

Full Analysis

Expert review

How each expert evaluated the evidence and arguments

Expert 1 — The Logic Examiner

Focus: Inferential Soundness & Fallacies
Mostly True
7/10

The logical chain from evidence to claim is largely sound but contains an important scope distinction: the claim asserts a "significant threat," not a "realized catastrophic outcome." Sources 1, 3, 4, 6, 11, and 13 provide direct evidence of documented election-linked deepfake incidents (AI robocalls suppressing votes, deepfake political ads, 38 countries experiencing deepfake incidents targeting public figures), which logically supports the existence of a real and growing threat mechanism. The opponent's strongest counter-evidence — Source 16 (Civicus) and Source 14 (Diálogo Político) — establishes that a catastrophic "deepfake election" did not materialize in 2024–2025, and Source 22 (Yale ISPS) finds minimal persuasive effects from deepfake videos in controlled experiments; however, these refutations commit a scope mismatch fallacy: disproving that deepfakes have already caused electoral collapse does not disprove that they pose a "significant threat," especially when Source 16 itself concedes that synthetic media "continues to erode public trust in elections." The proponent correctly identifies that the opponent conflates "threat not yet fully realized" with "threat not significant," while the opponent correctly flags that multi-source consensus from advocacy-oriented sources risks overstating certainty — but the experimental evidence from Yale (Source 22) is narrowly scoped to persuasion effects and does not address procedural disinformation or trust erosion, which are the primary threat mechanisms cited. The claim is therefore Mostly True: the evidence logically supports a significant and growing threat to democratic elections through documented incidents and credible mechanisms, though the magnitude and realized impact remain contested by empirical evidence from the 2024 super-cycle.

Logical fallacies

  • Scope mismatch (Opponent): The opponent conflates 'the worst-case deepfake election did not materialize' with 'no significant threat exists' — these are not logically equivalent, and Source 16 (Civicus) itself acknowledges ongoing trust erosion even while noting the feared catastrophe did not occur.
  • Hasty generalization (Proponent): Citing a handful of documented incidents (one robocall, one political ad) as proof of a broad 'significant threat' overgeneralizes from limited cases, though the Stimson Center's 38-country data point (Source 13) partially corrects this.
  • Cherry-picking (Opponent): Emphasizing Yale ISPS's finding of 'minimal persuasion effects' (Source 22) while ignoring that this study measures only direct persuasion, not procedural disinformation or trust erosion — the primary threat mechanisms documented across Sources 1, 4, and 12.
  • Appeal to authority without scope qualification (Proponent): Citing multiple high-authority sources that largely agree does not substitute for direct causal evidence that deepfakes have demonstrably altered election outcomes, which remains unproven.
Confidence: 8/10

Expert 2 — The Context Analyst

Focus: Completeness & Framing
Mostly True
7/10

The claim is broadly supported by a wide range of sources documenting real incidents (AI robocalls suppressing votes, deepfake political ads, global election-linked deepfake incidents in 38 countries per Source 13), but critical context is omitted: the most comprehensive empirical review of the 2024–2025 super election cycle (Source 16, Civicus) found the feared "deepfake election" did not materialize, Source 14 (Diálogo Político) concluded warnings were "overblown or premature," Source 22 (Yale ISPS) found deepfake videos have "minimal effects" on voters in controlled experiments, and Source 9 (Brennan Center) notes many deepfakes are easy to spot. The claim's framing of deepfakes as a "significant threat" is directionally accurate — real incidents have occurred, trust erosion is documented, and the technology is advancing — but without acknowledging that empirical evidence of large-scale electoral harm remains limited and that the most recent comprehensive election cycle did not see the catastrophic outcomes predicted, the claim overstates certainty and omits the nuanced, contested nature of the actual demonstrated impact.

Missing context

  • The 2024–2025 global super election cycle, covering nearly half the world's population, did not produce the feared 'deepfake election,' suggesting the threat has not yet materialized at the scale implied (Source 16, Civicus).
  • Controlled experimental research (Yale ISPS, Source 22) found deepfake videos have 'minimal effects' on actual voter persuasion, complicating the claim of a 'significant' direct threat.
  • Many deepfakes remain poorly made and detectable by humans, limiting their real-world impact in some contexts (Source 9, Brennan Center).
  • The claim does not acknowledge the 'liar's dividend' counter-threat — that awareness of deepfakes can itself be weaponized to dismiss authentic footage, a distinct but related democratic harm (Sources 9, 12).
  • Regulatory responses such as the EU AI Act (Source 23) and state-level legislation (Source 1) are already being enacted, partially mitigating the threat trajectory implied by the claim.
Confidence: 8/10

Expert 3 — The Source Auditor

Focus: Source Reliability & Independence
Mostly True
8/10

The most authoritative sources in this pool — Maryland General Assembly testimony (Source 1, 0.92), Brookings Institution (Sources 3 & 5, 0.85), arXiv peer-reviewed preprints (Sources 4 & 7, 0.80–0.85), and the Brennan Center for Justice (Sources 6 & 9, 0.75–0.82) — all independently confirm that deepfake technology poses a significant and growing threat to democratic elections, citing documented real-world incidents (AI robocalls suppressing votes, deepfake political ads, Slovakia election interference) and structural mechanisms like trust erosion and procedural disinformation. The opponent's strongest counter-sources are Civicus.org (Source 16, 0.70) and Diálogo Político (Source 14, 0.70), which note the feared "deepfake election" did not fully materialize in 2024–2025, and Yale ISPS (Source 22, 0.60, undated) finding "minimal effects" — but these are lower-authority, and critically, Civicus itself acknowledges ongoing erosion of public trust in elections from synthetic media, while the Yale study's unknown date and narrow experimental scope limit its weight against the broader, multi-source consensus. The claim is "Mostly True": the threat is real, documented, and confirmed by the most reliable independent sources, though the nuance that catastrophic electoral disruption has not yet fully materialized in a single election cycle is a legitimate caveat that prevents a score of 9–10.

Weakest sources

  • Source 22 (Yale ISPS) is undated, has a low authority score of 0.60, and its experimental finding of 'minimal effects' is narrow in scope — it does not address trust erosion or procedural disinformation, limiting its relevance to the broader claim.
  • Source 23 (LLM Background Knowledge) is not an independent external source but rather internal model knowledge presented as a citation, making it methodologically unreliable and carrying no evidentiary weight.
  • Source 18 (NextGen Policy, 0.68) is an advocacy organization with a clear policy agenda, introducing potential conflict of interest that reduces its independence.
  • Source 12 (Mea Digital Evidence Integrity, 0.72) is a commercial cybersecurity firm with a financial interest in amplifying threat narratives around deepfakes, which undermines its objectivity.
Confidence: 7/10

Expert summary

The claim is
Mostly True
7/10
Confidence: 8/10 · Spread: 1 pt

The arguments

Two AI advocates debated this claim using the research gathered.

Argument for

P
Proponent Argues TRUE

Multiple high-authority, independent sources converge that as deepfakes become more sophisticated and accessible, they can convincingly impersonate candidates, spread procedural voting disinformation, and erode trust—core mechanisms that directly undermine electoral integrity (Source 1 Maryland General Assembly; Source 3 Brookings; Source 4 arXiv; Source 6 Brennan Center). This is not hypothetical: documented election-linked incidents (e.g., AI robocalls discouraging voting and deepfake political ads) and global reporting of accelerating, election-targeted deepfake misuse demonstrate a significant and growing threat to democratic elections (Source 1 Maryland General Assembly; Source 3 Brookings; Source 11 CBS News; Source 13 Stimson Center).

O
Opponent Rebuttal

Your reliance on isolated incidents — a single robocall and one political ad from Source 11 (CBS News) and Source 3 (Brookings) — commits the fallacy of hasty generalization: cherry-picking anecdotal cases does not establish a "significant threat" when the most comprehensive empirical review available, Source 16 (Civicus.org), examined nearly half the world's population voting across the 2024–2025 super election cycle and explicitly concluded the feared deepfake election "did not materialise." Furthermore, your convergence of sources argument collapses under scrutiny because Source 22 (Yale ISPS) — the only source grounded in controlled experimental evidence rather than speculation or advocacy — directly found that deepfake videos have "minimal effects" on voters, meaning your multi-source consensus reflects theoretical alarm, not demonstrated electoral harm.

Argument against

O
Opponent Argues FALSE

The empirical record from the most recent and comprehensive election cycle directly undermines this claim: Source 16 (Civicus.org) explicitly states that "the much-feared 'deepfake election' did not materialise" during the 2024–2025 super election cycle when nearly half the world's population voted, and Source 14 (Diálogo Político) corroborates this, concluding that "election after election in 2024 showed that these warnings were overblown or at least premature." Furthermore, Source 22 (Institution for Social and Policy Studies) from Yale found that deepfake videos have "minimal effects" on voters, and Source 9 (Brennan Center for Justice) acknowledges that "many deepfakes are poorly made and easy for humans to spot," collectively demonstrating that the claimed "significant threat" remains largely theoretical rather than a proven, realized danger to democratic elections.

P
Proponent Rebuttal

You're committing a false dichotomy by treating Civicus's point that a single “deepfake election” didn't materialize as proof there's no significant threat, even though that same report says synthetic media plus amplification is still eroding trust in elections and institutions (Source 16 Civicus.org) and multiple independent sources document real election-linked incidents like voter-suppression robocalls and deepfake candidate impersonation (Source 1 Maryland General Assembly; Source 3 Brookings Institution; Source 11 CBS News). And your reliance on “minimal effects” and “poorly made” deepfakes cherry-picks narrow findings while ignoring the core threat mechanism emphasized across the brief—procedural disinformation and trust erosion—where even a debunked or obvious fake can still depress turnout or poison discourse (Source 1 Maryland General Assembly; Source 2 IJFMR; Source 4 arXiv.org; Source 9 Brennan Center for Justice).
