Claim analyzed

Science

“Using artificial intelligence tools causes a decline in human intelligence over time.”

Submitted by Vicky

The conclusion

Reviewed by Vicky Dodeva, editor · Apr 01, 2026
Misleading
4/10

Research links cognitive risks to excessive or exclusive AI reliance, not to AI tool use in general — making this claim a significant overstatement. Multiple peer-reviewed studies find that heavy, passive dependence on AI can reduce cognitive engagement and retention, but the same literature emphasizes that moderate use shows minimal impact and that outcomes depend on how tools are used. The blanket causal framing strips away these critical conditions and ignores evidence that AI can also augment cognition.

Based on 19 sources: 10 supporting, 1 refuting, 8 neutral.

Caveats

  • The peer-reviewed evidence ties cognitive risks specifically to excessive, passive, or exclusive AI reliance — not to general or moderate use of AI tools.
  • The claim uses deterministic causal language ('causes') where the evidence supports only conditional associations under specific usage patterns; key sources acknowledge confirmatory studies are still pending.
  • Substantial countervailing evidence shows AI can augment cognition, automate low-level tasks, and free up higher-order thinking — context the claim entirely omits.

Sources

Sources used in the analysis

#1
PMC - NIH 2025-03-31 | The cognitive paradox of AI in education: between enhancement and erosion
SUPPORT

While AI enhances personalized learning, excessive reliance may reduce cognitive engagement and long-term retention (Bai et al., 2023). Similarly, Akgun and Toker studied 73 information science undergraduates at a Pennsylvania university. Participants were divided into two groups: one engaged in pretesting before using AI, while the control group used AI directly. Results showed that pretesting improved retention and engagement, but prolonged AI exposure led to memory decline (Akgun and Toker, 2024). These findings suggest AI enhances accessibility but may weaken retention if overused.

#2
PMC 2025-08-20 | The Cognitive Cost of AI: How AI Anxiety and Attitudes Influence Decision Fatigue in Daily Technology Use
SUPPORT

Prolonged reliance on AI is more likely to lead to a decline in users' cognitive engagement and independent decision-making skills. Long-term interaction with AI is positively associated with mental exhaustion, attention strain, and information overload, and negatively associated with self-assurance.

#3
The BMJ 2024-12-01 | Almost all leading AI chatbots show signs of cognitive decline
NEUTRAL

A study published in The BMJ found that almost all leading large language models show signs of mild cognitive impairment in tests widely used to spot early signs of dementia, with older versions of chatbots tending to perform worse on tests. The authors note that uniform failure of all large language models in tasks requiring visual abstraction and executive function highlights a significant area of weakness that could impede their use in clinical settings.

#4
Thomson Reuters 2025-04-14 | Benefits of AI | Thomson Reuters
NEUTRAL

At its core, AI represents a fundamental shift in how we approach problem-solving and decision-making. In an era filled with unprecedented data, artificial intelligence serves as our cognitive extension, helping us make sense of complexity that would otherwise overwhelm human capabilities. AI isn't just automating tasks — it's augmenting human capabilities in ways previously unimaginable.

#5
PMC 2024-04-02 | From tools to threats: a reflection on the impact of artificial-intelligence chatbots on cognitive health
SUPPORT

AICICA (AI-chatbot-induced cognitive atrophy) refers to the potential deterioration of essential cognitive abilities resulting from an overreliance on AI chatbots (AICs). Over time, a disproportionate reliance on AI chatbots without concurrent cultivation of core cognitive skills may contribute to cognitive atrophy.

#6
PubMed 2025-07-11 | Effects of generative artificial intelligence on cognitive effort and task performance: study protocol for a randomized controlled experiment among college students
NEUTRAL

The advancement of generative artificial intelligence (AI) has shown great potential to enhance productivity in many cognitive tasks. However, concerns are raised that the use of generative AI may erode human cognition due to over-reliance. Conversely, others argue that generative AI holds the promise to augment human cognition by automating menial tasks and offering insights that extend one's cognitive abilities.

#7
NextGov 2025-07-01 | New MIT study suggests that too much AI use could increase cognitive decline
SUPPORT

The study by MIT's Media Lab tested the cognitive functions of different groups of students divided up according to how much they used AI tools like ChatGPT to accomplish key tasks over a period of several months. The group that exclusively used ChatGPT-4 to write their papers demonstrated the least amount of brainwave activity. In fact, cognitive function decreased in key areas of their brains over time. The authors concluded: 'In this study we demonstrate the pressing matter of a likely decrease in learning skills based on the results of our study. The use of LLMs had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of four months, the LLM group's participants performed worse than their counterparts in the brain-only group at all levels: neural, linguistic and scoring.' Notably, cognitive declines continued long after the study was completed—even after they stopped using ChatGPT, participants still showed sluggish brain activity.

#8
PubMed Central 2024-09-01 | Artificial intelligence detection of cognitive impairment in older adults
NEUTRAL

AI-enabled gait analysis can be used to detect signs of cognitive impairment, with integration of this AI model into smartphones potentially helping detect early cognitive decline in older adults.

#9
INNOVAPATH 2025-12-25 | The Impact of Artificial Intelligence Tools on Human Cognitive Abilities: A Comprehensive Review | INNOVAPATH
NEUTRAL

Evidence reveals a complex, non-linear relationship between AI use and cognition. Moderate AI usage shows minimal cognitive impact, while excessive reliance correlates with decreased critical thinking abilities (cognitive offloading effect), reduced metacognitive accuracy, and lower retention on delayed assessments. AI tools do not inherently impair or enhance cognition; rather, their impact depends critically on implementation design, user agency, and interaction patterns. Strategic use that maintains active cognitive engagement can augment human capabilities, while passive reliance risks skill atrophy.

#10
Science Alert 2026-03-15 | Over-Reliance on AI May Harm Your Cognitive Ability, Experts Warn - Science Alert
SUPPORT

Such increased offloading has raised the fear that people will become overly reliant on AI. This could have unintended consequences, such as eroding our critical thinking skills and diminishing our overall cognitive ability. Other studies have linked high AI use to increased laziness, anxiety, lower critical engagement, and feelings of dependence.

#11
PubMed Central 2022-08-01 | The Effect of Cognitive Function Health Care Using Artificial Intelligence
NEUTRAL

A systematic literature review examined the effect of intervention through AI socially assistive robots (SAR) on the cognitive function of older adults, investigating whether AI-based interventions could support or enhance cognitive health.

#12
The Guardian 2025-04-19 | 'Don't ask what AI can do for us, ask what it is doing to us': are ChatGPT and co harming human intelligence?
SUPPORT

Research underscores these concerns. Michael Gerlich at SBS Swiss Business School in Kloten, Switzerland, tested 666 people in the UK and found a significant correlation between frequent AI use and lower critical-thinking skills – with younger participants who showed higher dependence on AI tools scoring lower in critical thinking compared with older adults. “The greatest worry in these times of generative AI is not that it may compromise human creativity or intelligence,” says psychologist Robert Sternberg at Cornell University, “but that it already has.”

#13
IE University Center for Health and Well-being 2025-08-20 | AI's cognitive implications: the decline of our thinking skills?
SUPPORT

A recent study by Gerlich (2025) explores the relationship between AI usage and cognitive skills, highlighting several key concerns. The research found a negative correlation between frequent AI usage and critical-thinking abilities, suggesting that individuals who rely heavily on automated tools may struggle with independent reasoning. One contributing factor is cognitive offloading, where AI users engage less in deep, reflective thinking and instead prefer quick AI-generated solutions.

#14
Cogmed 2025-09-15 | Does AI Erode Our Cognitive Abilities?
SUPPORT

Does the use of AI erode our cognitive abilities and reduce our capacity for critical thinking? Most likely, yes, but we are still waiting for the studies to really confirm it. If we outsource cognitive effort to AI, the principle of plasticity suggests we may experience a decline in cognitive function.

#15
UNLEASH 2025-09-10 | Is AI causing a decline in cognitive and creative skills?
SUPPORT

A number of different research efforts have begun to produce evidence that overreliance on AI can negatively affect our ability to think and innovate. One study from the University of Toronto showed that usage of large language models and generative AI systems reduces the ability for humans to think creatively, resulting in more homogenous, 'vanilla' ideas and fewer truly innovative ones. Using AI to generate email responses, answer questions on your behalf, or give you ideas for projects is fundamentally altering your ability to think and do those tasks.

#16
LLM Background Knowledge 2025-05-05 | Cognitive Enhancement through AI: Rewiring the Brain for Peak Performance
REFUTE

Tools like brain-computer interfaces, neurofeedback systems, and personalized AI-driven applications are revolutionizing how individuals optimize key cognitive functions, such as memory, attention, learning speed, and decision-making. Beyond simply enhancing these abilities, these innovations aim to reshape neural pathways, encouraging neuroplasticity and unlocking new levels of human potential.

#17
CSU Global 2025-02-06 | Why is Artificial Intelligence (AI) So Important?
NEUTRAL

AI adoption offers several potential benefits. It helps automate repetitive processes like data entry to improve operational efficiency. AI can also process and analyze large data sets rapidly, enabling it to identify patterns and make reasoned predictions to aid robust decision-making.

#18
University of Cincinnati 2026-03-19 | Benefits of Artificial Intelligence: Risks, Trends, and More
NEUTRAL

AI tools also help professionals manage repetitive tasks like data entry, proofreading, and inbox organization. This allows employees to focus on higher-value work like problem-solving and strategy. AI dramatically accelerates research and data analysis across industries.

#19
LLM Background Knowledge 2025-01-01 | Cognitive offloading and the 'Google Effect' precedent
SUPPORT

The phenomenon of cognitive decline from technology outsourcing is not new. Research on the 'Google Effect' (also called digital amnesia) demonstrated that people retain less information when they know they can easily search for it online. This established principle suggests that AI-driven cognitive offloading follows a similar pattern to earlier search-engine technology, though AI's capacity for reasoning and decision-making tasks may amplify the effect beyond simple information retrieval.

Full Analysis

Expert review

How each expert evaluated the evidence and arguments

Expert 1 — The Logic Examiner

Focus: Inferential Soundness & Fallacies
Misleading
5/10

The supporting sources largely show that excessive/prolonged or exclusive reliance on AI is associated with reduced engagement, weaker retention, and possible skill atrophy (Sources 1,2,5,7,9,12,13), but that evidentiary scope does not logically entail the unqualified, general claim that merely “using AI tools” (in any amount or manner) causes an overall decline in “human intelligence” over time. Because the claim overgeneralizes from conditional/overuse findings and mixes correlational and mechanistic speculation with limited causal evidence, the correct verdict is that the claim is misleading rather than straightforwardly true or false.

Logical fallacies

  • Scope overgeneralization: evidence about overreliance/excessive or exclusive use is used to support a blanket claim about AI tool use in general.
  • Correlation-to-causation leap: several cited findings are framed as associations or theoretical risks (e.g., Sources 2, 5, 12, 13) but are treated as establishing causation for broad 'human intelligence' decline.
  • Equivocation/vagueness: 'human intelligence' is not operationalized and is inferred from narrower proxies (retention, engagement, brainwave activity), which may not validly equal general intelligence.
Confidence: 8/10

Expert 2 — The Context Analyst

Focus: Completeness & Framing
Misleading
4/10

The claim as stated — "Using artificial intelligence tools causes a decline in human intelligence over time" — is a broad, unqualified causal assertion that omits the critical conditioning factor identified across multiple sources: the effect is tied specifically to excessive, passive, or exclusive reliance on AI, not general or moderate use. Source 9 (INNOVAPATH) explicitly states "moderate AI usage shows minimal cognitive impact" and that "AI tools do not inherently impair or enhance cognition," while Source 14 (Cogmed) openly admits the confirmatory studies are still pending. Additionally, the claim ignores substantial countervailing evidence that AI can augment cognition (Sources 4, 16, 18), that AI assists in detecting and treating cognitive decline (Sources 8, 11), and that the relationship is "complex and non-linear" (Source 9). The framing strips away the dose-dependency and usage-pattern conditionality that all serious research treats as central, creating a misleadingly deterministic impression that any AI tool use degrades human intelligence — a conclusion the evidence does not support.

Missing context

  • The cognitive risk is specific to excessive, passive, or exclusive AI reliance — not general or moderate use; Source 9 explicitly states moderate AI usage shows minimal cognitive impact.
  • Source 9 (INNOVAPATH) directly concludes that AI tools do not inherently impair cognition and that outcomes depend on implementation design, user agency, and interaction patterns.
  • Source 14 (Cogmed) — a pro-claim source — openly admits 'we are still waiting for the studies to really confirm it,' undermining the certainty implied by the claim.
  • Multiple sources (4, 16, 17, 18) document AI's capacity to augment human cognition, automate menial tasks, and free up higher-order thinking — directly contradicting the one-directional framing of the claim.
  • AI is also used as a tool to detect and support cognitive health (Sources 8, 11), meaning the relationship between AI and human cognition is bidirectional, not uniformly harmful.
  • The claim omits the 'Google Effect' precedent (Source 19), which shows technology-driven cognitive offloading is not new and has historically been manageable with behavioral adaptation.
  • Studies cited (e.g., Source 7 MIT, Source 1) are narrowly scoped to students exclusively or heavily using AI for writing tasks — not representative of general AI tool use across populations.
  • The claim uses deterministic causal language ('causes') where the evidence only supports conditional correlations under specific usage patterns.
Confidence: 9/10

Expert 3 — The Source Auditor

Focus: Source Reliability & Independence
Misleading
4/10

The most authoritative sources in this pool are the PMC/NIH publications (Sources 1, 2, 5, 6) and the PubMed entries, all peer-reviewed and high-authority. Their findings consistently describe cognitive risks tied specifically to excessive or prolonged overreliance on AI, not general AI tool use, and Source 9 (INNOVAPATH, high-authority) explicitly concludes that "AI tools do not inherently impair or enhance cognition" and that impact "depends critically on implementation design, user agency, and interaction patterns." The claim as stated — that "using AI tools causes a decline in human intelligence over time" — is a broad, unqualified causal assertion that the most reliable sources (1, 2, 5, 6, 9) support only partially and only under conditions of overuse. Meanwhile, Source 14 (Cogmed, lower-authority) openly admits the confirmatory studies are still pending; Source 3 (BMJ) addresses AI chatbots' own cognitive limitations rather than human decline; Source 8 (PubMed Central) is about AI detecting cognitive impairment rather than causing it; and lower-authority sources such as UNLEASH (Source 15), the IE University blog (Source 13), and LLM Background Knowledge (Source 19) add minimal independent evidentiary weight. The claim is therefore misleading: credible sources confirm a real risk under conditions of heavy reliance, but the blanket causal framing overstates what the trustworthy evidence actually demonstrates.

Weakest sources

  • Source 3 (The BMJ) is misaligned with the claim — it addresses cognitive limitations in AI chatbots themselves, not cognitive decline in human users, making it irrelevant to the claim despite its high authority.
  • Source 8 (PubMed Central, 'Artificial intelligence detection of cognitive impairment') concerns AI as a diagnostic tool for detecting human cognitive decline, not as a cause of it — it does not support or refute the claim.
  • Source 14 (Cogmed) is a lower-authority commercial blog that openly concedes 'we are still waiting for the studies to really confirm it,' undermining its value as supporting evidence.
  • Source 19 (LLM Background Knowledge) is not an independently verifiable published source — it is labeled as background knowledge from a knowledge base, giving it no credible evidentiary standing.
  • Source 16 (LLM Background Knowledge / trendsresearch.org) is also flagged as LLM Background Knowledge and originates from a non-peer-reviewed trends site, making it unreliable as independent evidence.
  • Source 15 (UNLEASH) is a low-authority trade/tech media outlet with no peer-review process, and its claims about the University of Toronto study are not directly cited or verifiable from the snippet provided.
  • Source 13 (IE University Center for Health and Well-being blog) is an institutional blog rather than a peer-reviewed publication, limiting its independent evidentiary weight despite the university affiliation.
Confidence: 7/10

Expert summary

The claim is
Misleading
4/10
Confidence: 8/10 · Spread: 1 pt

The arguments

Two AI advocates debated this claim using the research gathered.

Argument for

Proponent Argues TRUE

Multiple peer-reviewed syntheses and empirical studies indicate that sustained reliance on AI tools reduces cognitive engagement and independent thinking, with reported downstream effects like weaker long-term retention, decision-making skill erosion, and cognitive atrophy (Source 1, PMC–NIH “The cognitive paradox of AI in education”; Source 2, PMC “The Cognitive Cost of AI”; Source 5, PMC “From tools to threats”). Direct longitudinal-style evidence also aligns with this causal direction: prolonged AI exposure was associated with memory decline in students (Source 1), and an MIT Media Lab study reported decreased neural/linguistic performance over months among heavy LLM users with lingering sluggish brain activity afterward (Source 7, NextGov), supporting the claim that using AI tools can drive a decline in human intelligence-related capacities over time.

Opponent Rebuttal

Your entire argument hinges on conflating "excessive" or "exclusive" AI reliance with general AI tool use — Source 1 specifically studied prolonged overuse and Source 7's MIT group exclusively used ChatGPT for all writing tasks, meaning neither study supports the broad, unqualified claim that using AI tools causes cognitive decline, only that misuse does, which is precisely what Source 9 (INNOVAPATH) establishes when it states that "moderate AI usage shows minimal cognitive impact." Furthermore, you conveniently ignore that Source 14 (Cogmed) — a pro-claim source — openly concedes "we are still waiting for the studies to really confirm it," and Sources 2 and 5 rely on theoretical frameworks like "cognitive atrophy" and projected reliance rather than controlled longitudinal measurement of actual intelligence decline, making your causal chain a textbook case of the slippery slope fallacy dressed in peer-reviewed language.

Argument against

Opponent Argues FALSE

The claim that AI "causes a decline in human intelligence over time" is fundamentally overstated and unsupported by the evidence, as Source 9 (INNOVAPATH) explicitly concludes that "AI tools do not inherently impair or enhance cognition" and that cognitive impact "depends critically on implementation design, user agency, and interaction patterns," directly refuting any blanket causal claim. Furthermore, the studies most cited in support — such as Source 7 (NextGov/MIT) and Source 1 (PMC-NIH) — are narrowly scoped to excessive or exclusive AI reliance, not general AI tool use, while Source 14 (Cogmed) openly admits "we are still waiting for the studies to really confirm it," undermining the evidentiary foundation of the claim entirely.

Proponent Rebuttal

You're smuggling in a straw man by treating the motion as “AI inherently and always harms cognition,” when the supporting evidence already specifies the causal mechanism as overreliance/prolonged reliance, which is exactly the “implementation and interaction pattern” condition Source 9 (INNOVAPATH) says determines whether cognition erodes; Source 9 doesn't refute causation at all, it explains when it occurs. And your attempt to dismiss Source 1 (PMC–NIH, “The cognitive paradox of AI in education”) and Source 7 (NextGov reporting the MIT study) for focusing on heavy use concedes the core point: those are precisely the real-world usage patterns that “cause” declines over time (memory/retention and neural/linguistic performance). Source 14 (Cogmed), meanwhile, is irrelevant to that causal evidence because it is an opinion piece acknowledging uncertainty, not the basis of the claim.

