Verify any claim · lenz.io
Claim analyzed
Science
“Using artificial intelligence tools causes a decline in human intelligence over time.”
Submitted by Vicky
The conclusion
Research links cognitive risks to excessive or exclusive AI reliance, not to AI tool use in general — making this claim a significant overstatement. Multiple peer-reviewed studies find that heavy, passive dependence on AI can reduce cognitive engagement and retention, but the same literature emphasizes that moderate use shows minimal impact and that outcomes depend on how tools are used. The blanket causal framing strips away these critical conditions and ignores evidence that AI can also augment cognition.
Based on 19 sources: 10 supporting, 1 refuting, 8 neutral.
Caveats
- The peer-reviewed evidence ties cognitive risks specifically to excessive, passive, or exclusive AI reliance — not to general or moderate use of AI tools.
- The claim uses deterministic causal language ('causes') where the evidence supports only conditional associations under specific usage patterns; key sources acknowledge confirmatory studies are still pending.
- Substantial countervailing evidence shows AI can augment cognition, automate low-level tasks, and free up higher-order thinking — context the claim entirely omits.
Sources
Sources used in the analysis
While AI enhances personalized learning, excessive reliance may reduce cognitive engagement and long-term retention (Bai et al., 2023). Similarly, Akgun and Toker studied 73 information science undergraduates at a Pennsylvania university. Participants were divided into two groups: one engaged in pretesting before using AI, while the control group used AI directly. Results showed that pretesting improved retention and engagement, but prolonged AI exposure led to memory decline (Akgun and Toker, 2024). These findings suggest AI enhances accessibility but may weaken retention if overused.
Prolonged reliance on AI is likely to lead to a decline in users' cognitive engagement and independent decision-making skills. Long-term interaction with AI is positively associated with mental exhaustion, attention strain, and information overload, and negatively associated with self-assurance.
A study published in The BMJ found that almost all leading large language models show signs of mild cognitive impairment in tests widely used to spot early signs of dementia, with older versions of chatbots tending to perform worse on tests. The authors note that uniform failure of all large language models in tasks requiring visual abstraction and executive function highlights a significant area of weakness that could impede their use in clinical settings.
At its core, AI represents a fundamental shift in how we approach problem-solving and decision-making. In an era filled with unprecedented data, artificial intelligence serves as our cognitive extension, helping us make sense of complexity that would otherwise overwhelm human capabilities. AI isn't just automating tasks — it's augmenting human capabilities in ways previously unimaginable.
AICICA refers to the potential deterioration of essential cognitive abilities resulting from an overreliance on AICs. Over time, a disproportionate reliance on AI-chatbots without concurrent cultivation of core cognitive skills may contribute to cognitive atrophy.
The advancement of generative artificial intelligence (AI) has shown great potential to enhance productivity in many cognitive tasks. However, concerns are raised that the use of generative AI may erode human cognition due to over-reliance. Conversely, others argue that generative AI holds the promise to augment human cognition by automating menial tasks and offering insights that extend one's cognitive abilities.
The study by MIT's Media Lab tested the cognitive functions of different groups of students divided up according to how much they used AI tools like ChatGPT to accomplish key tasks over a period of several months. The group that exclusively used ChatGPT-4 to write their papers demonstrated the least amount of brainwave activity. In fact, cognitive function decreased in key areas of their brains over time. The authors concluded: 'In this study we demonstrate the pressing matter of a likely decrease in learning skills based on the results of our study. The use of LLMs had a measurable impact on participants, and while the benefits were initially apparent, as we demonstrated over the course of four months, the LLM group's participants performed worse than their counterparts in the brain-only group at all levels: neural, linguistic and scoring.' Notably, cognitive declines continued long after the study was completed—even after they stopped using ChatGPT, participants still showed sluggish brain activity.
AI-enabled gait analysis can be used to detect signs of cognitive impairment, with integration of this AI model into smartphones potentially helping detect early cognitive decline in older adults.
Evidence reveals a complex, non-linear relationship between AI use and cognition. Moderate AI usage shows minimal cognitive impact, while excessive reliance correlates with decreased critical thinking abilities (cognitive offloading effect), reduced metacognitive accuracy, and lower retention on delayed assessments. AI tools do not inherently impair or enhance cognition; rather, their impact depends critically on implementation design, user agency, and interaction patterns. Strategic use that maintains active cognitive engagement can augment human capabilities, while passive reliance risks skill atrophy.
Such increased offloading has raised the fear that people will become overly reliant on AI. This could have unintended consequences, such as eroding our critical thinking skills and diminishing our overall cognitive ability. Other studies have linked high AI use to increased laziness, anxiety, lower critical engagement, and feelings of dependence.
A systematic literature review examined the effect of intervention through AI socially assistive robots (SAR) on the cognitive function of older adults, investigating whether AI-based interventions could support or enhance cognitive health.
Research underscores these concerns. Michael Gerlich at SBS Swiss Business School in Kloten, Switzerland, tested 666 people in the UK and found a significant correlation between frequent AI use and lower critical-thinking skills – with younger participants who showed higher dependence on AI tools scoring lower in critical thinking compared with older adults. “The greatest worry in these times of generative AI is not that it may compromise human creativity or intelligence,” says psychologist Robert Sternberg at Cornell University, “but that it already has.”
A recent study by Gerlich (2025) explores the relationship between AI usage and cognitive skills, highlighting several key concerns. The research found a negative correlation between frequent AI usage and critical-thinking abilities, suggesting that individuals who rely heavily on automated tools may struggle with independent reasoning. One contributing factor is cognitive offloading, where AI users engage less in deep, reflective thinking and instead prefer quick AI-generated solutions.
Does the use of AI erode our cognitive abilities and reduce our capacity for critical thinking? Most likely, yes, but we are still waiting for the studies to really confirm it. If we outsource cognitive effort to AI, the principle of plasticity suggests we may experience a decline in cognitive function.
A number of different research efforts have begun to produce evidence that overreliance on AI can negatively affect our ability to think and innovate. One study from the University of Toronto showed that usage of large language models and generative AI systems reduces humans' ability to think creatively, resulting in more homogeneous, 'vanilla' ideas and fewer truly innovative ones. Using AI to generate email responses, answer questions on your behalf, or give you ideas for projects is fundamentally altering your ability to think and do those tasks.
Tools like brain-computer interfaces, neurofeedback systems, and personalized AI-driven applications are revolutionizing how individuals optimize key cognitive functions, such as memory, attention, learning speed, and decision-making. Beyond simply enhancing these abilities, these innovations aim to reshape neural pathways, encouraging neuroplasticity and unlocking new levels of human potential.
AI adoption offers several potential benefits. It helps automate repetitive processes like data entry to improve operational efficiency. AI can also process and analyze large data sets rapidly, enabling it to identify patterns and make reasoned predictions to aid robust decision-making.
AI tools also help professionals manage repetitive tasks like data entry, proofreading, and inbox organization. This allows employees to focus on higher-value work like problem-solving and strategy. AI dramatically accelerates research and data analysis across industries.
The phenomenon of cognitive decline from technology outsourcing is not new. Research on the 'Google Effect' (also called digital amnesia) demonstrated that people retain less information when they know they can easily search for it online. This established principle suggests that AI-driven cognitive offloading follows a similar pattern to earlier search-engine technology, though AI's capacity for reasoning and decision-making tasks may amplify the effect beyond simple information retrieval.
Expert review
How each expert evaluated the evidence and arguments
Expert 1 — The Logic Examiner
The supporting sources largely show that excessive/prolonged or exclusive reliance on AI is associated with reduced engagement, weaker retention, and possible skill atrophy (Sources 1, 2, 5, 7, 9, 12, 13), but that evidentiary scope does not logically entail the unqualified, general claim that merely “using AI tools” (in any amount or manner) causes an overall decline in “human intelligence” over time. Because the claim overgeneralizes from conditional/overuse findings and mixes correlational and mechanistic speculation with limited causal evidence, the correct verdict is that the claim is misleading rather than straightforwardly true or false.
Expert 2 — The Context Analyst
The claim as stated — "Using artificial intelligence tools causes a decline in human intelligence over time" — is a broad, unqualified causal assertion that omits the critical conditioning factor identified across multiple sources: the effect is tied specifically to excessive, passive, or exclusive reliance on AI, not general or moderate use. Source 9 (INNOVAPATH) explicitly states "moderate AI usage shows minimal cognitive impact" and that "AI tools do not inherently impair or enhance cognition," while Source 14 (Cogmed) openly admits the confirmatory studies are still pending. Additionally, the claim ignores substantial countervailing evidence that AI can augment cognition (Sources 4, 16, 18), that AI assists in detecting and treating cognitive decline (Sources 8, 11), and that the relationship is "complex and non-linear" (Source 9). The framing strips away the dose-dependency and usage-pattern conditionality that all serious research treats as central, creating a misleadingly deterministic impression that any AI tool use degrades human intelligence — a conclusion the evidence does not support.
Expert 3 — The Source Auditor
The most authoritative sources in this pool are the PMC/NIH publications (Sources 1, 2, 5, 6) and PubMed entries, all of which are peer-reviewed and high-authority. However, their findings consistently describe cognitive risks tied specifically to excessive or prolonged overreliance on AI, not general AI tool use, and Source 9 (INNOVAPATH, high-authority) explicitly concludes that "AI tools do not inherently impair or enhance cognition" and that impact "depends critically on implementation design, user agency, and interaction patterns." The claim as stated, that "using AI tools causes a decline in human intelligence over time," is a broad, unqualified causal assertion that the most reliable sources (Sources 1, 2, 5, 6, 9) only partially support, and only under specific conditions of overuse. Meanwhile, Source 14 (Cogmed, lower-authority) openly admits the confirmatory studies are still pending; Source 3 (BMJ) addresses AI chatbots' own cognitive limitations rather than human decline; Source 8 (PubMed Central) is about AI detecting cognitive impairment rather than causing it; and lower-authority sources like UNLEASH (Source 15), the IE University blog (Source 13), and LLM Background Knowledge (Source 19) add minimal independent evidentiary weight. The claim is therefore misleading: credible sources confirm a real risk under conditions of heavy reliance, but the blanket causal framing overstates what the trustworthy evidence actually demonstrates.
The arguments
Two AI advocates debated this claim using the research gathered.
Argument for
Multiple peer-reviewed syntheses and empirical studies indicate that sustained reliance on AI tools reduces cognitive engagement and independent thinking, with reported downstream effects like weaker long-term retention, decision-making skill erosion, and cognitive atrophy (Source 1, PMC–NIH “The cognitive paradox of AI in education”; Source 2, PMC “The Cognitive Cost of AI”; Source 5, PMC “From tools to threats”). Direct longitudinal-style evidence also aligns with this causal direction: prolonged AI exposure was associated with memory decline in students (Source 1), and an MIT Media Lab study reported decreased neural/linguistic performance over months among heavy LLM users with lingering sluggish brain activity afterward (Source 7, NextGov), supporting the claim that using AI tools can drive a decline in human intelligence-related capacities over time.
Your entire argument hinges on conflating "excessive" or "exclusive" AI reliance with general AI tool use — Source 1 specifically studied prolonged overuse and Source 7's MIT group exclusively used ChatGPT for all writing tasks, meaning neither study supports the broad, unqualified claim that using AI tools causes cognitive decline, only that misuse does, which is precisely what Source 9 (INNOVAPATH) establishes when it states that "moderate AI usage shows minimal cognitive impact." Furthermore, you conveniently ignore that Source 14 (Cogmed) — a pro-claim source — openly concedes "we are still waiting for the studies to really confirm it," and Sources 2 and 5 rely on theoretical frameworks like "cognitive atrophy" and projected reliance rather than controlled longitudinal measurement of actual intelligence decline, making your causal chain a textbook case of the slippery slope fallacy dressed in peer-reviewed language.
Argument against
The claim that AI "causes a decline in human intelligence over time" is fundamentally overstated and unsupported by the evidence, as Source 9 (INNOVAPATH) explicitly concludes that "AI tools do not inherently impair or enhance cognition" and that cognitive impact "depends critically on implementation design, user agency, and interaction patterns," directly refuting any blanket causal claim. Furthermore, the studies most cited in support — such as Source 7 (NextGov/MIT) and Source 1 (PMC-NIH) — are narrowly scoped to excessive or exclusive AI reliance, not general AI tool use, while Source 14 (Cogmed) openly admits "we are still waiting for the studies to really confirm it," undermining the evidentiary foundation of the claim entirely.
You're smuggling in a straw man by treating the motion as “AI inherently and always harms cognition,” when the supporting evidence already specifies the causal mechanism as overreliance/prolonged reliance—exactly the “implementation and interaction pattern” condition Source 9 (INNOVAPATH) says determines whether cognition erodes, so it doesn't refute causation at all, it explains when it occurs. And your attempt to dismiss Source 1 (PMC–NIH, “The cognitive paradox of AI in education”) and Source 7 (NextGov reporting the MIT study) for focusing on heavy use concedes the core point: those are precisely the real-world usage patterns that “cause” declines over time (memory/retention and neural/linguistic performance), while Source 14 (Cogmed) is irrelevant to that causal evidence because it's an opinion piece acknowledging uncertainty, not the basis of the claim.