Verify any claim · lenz.io
Claim analyzed
Health
“Using ChatGPT causes a person's brain to deteriorate.”
The conclusion
The evidence does not show that ChatGPT use causes brain deterioration. Existing studies mainly examine short-term cognitive offloading or reduced engagement during specific tasks, not lasting damage or clinical decline. Some reports also rely on media amplification of preliminary findings, while peer-reviewed evidence does not establish a general causal harm to the brain.
Caveats
- This is a low-confidence conclusion.
- Short-term reductions in effort or EEG connectivity during AI-assisted tasks are not proof of brain deterioration.
- Several widely shared claims appear to rely on secondary media coverage or preliminary research rather than strong longitudinal evidence.
- Findings from narrow writing or lab tasks should not be generalized to all ChatGPT use or to permanent brain harm.
This analysis is for informational purposes only and does not constitute health or medical advice, diagnosis, or treatment. Always consult a qualified healthcare professional before making health-related decisions.
Sources
Sources used in the analysis
Source 1. The use of artificial intelligence chatbots like ChatGPT for cognitive offloading may lead to underemployment of specific cognitive faculties, inhibiting their full maturation. This phenomenon is particularly relevant in the context of executive functions, where reliance on artificial intelligence for problem-solving can reduce cognitive effort and lead to long-term cognitive changes. While ChatGPT offers impressive capabilities and can serve as a valuable tool in various contexts, over-reliance on it for cognitive tasks can lead to the erosion of these essential skills.
Source 2. This study investigated the impact of human-large language model (LLM) collaboration on the accuracy and efficiency of brain MRI differential diagnosis. LLM-assisted brain MRI differential diagnosis yielded superior accuracy (70/114; 61.4% (LLM-assisted) vs 53/114; 46.5% (conventional) correct diagnoses, p = 0.033). Human-LLM collaboration has the potential to improve brain MRI differential diagnosis.
Source 3. In this regard, previous studies have generally found that working in jobs that are high in OC would protect one against age-related cognitive decline. Furthermore, OC scores significantly predicted clusters of CT increases and various cognitive outcomes, even after controlling for SES. These results highlight the significant and unique contribution of ChatGPT-derived OC scores in predicting cognitive and brain aging outcomes.
Source 4. The results revealed significantly lower cognitive engagement scores in the ChatGPT group compared to the control group. These findings suggest that AI assistance may lead to cognitive offloading. The study contributes to the growing body of literature on the psychological implications of AI in education and raises important questions about the integration of such tools.
Source 5. Using ChatGPT to write an essay reduces the cognitive engagement and intellectual effort required to transform information into knowledge, according to a study. More specifically, participants assisted by ChatGPT wrote 60% faster, but their relevant cognitive load fell by 32%. EEG showed that brain connectivity was almost halved (alpha and theta waves) and 83% of AI users were unable to remember a passage they had just written.
Source 6. EEG revealed significant differences in brain connectivity: Brain-only participants exhibited the strongest, most distributed networks; Search Engine users showed moderate engagement; and LLM users displayed the weakest connectivity. Cognitive activity scaled down in relation to external tool use. In session 4, LLM-to-Brain participants showed reduced alpha and beta connectivity, indicating under-engagement.
Source 7. Stanford researchers have developed a deep learning model that transforms overwhelming brain data into clear trajectories, opening new possibilities for understanding thought, emotion, and neurological disease. BCNE represents brain activity as trajectories of activity through the brain over time. The researchers feed the measured images or other types of data, such as EEG, through their model, filtering out meaningless noise while spotlighting valuable patterns.
Source 8. Researchers found that participants who regularly used the tool to write SAT-style essays showed the lowest brain engagement and underperformed linguistically and behaviorally compared to peers who used Google or no tools at all. EEG scans revealed diminished neural activity in the ChatGPT group, which increasingly shifted from using the tool for support to copying content outright. The “brain-only” group showed the strongest cognitive function.
Source 9. New research suggests ChatGPT users experience weaker neural connectivity and poorer memory recall, with MIT scientists warning of potential long-term cognitive decline. In the study, participants using ChatGPT showed less brain connectivity, with the brain-only group exhibiting the strongest networks. The researchers noted a possible decrease in learning skills among LLM users and called for future studies on longitudinal impacts.
Source 10. No studies from WHO, CDC, NIH, or major journals like The Lancet or NEJM (as of 2026) show that using AI chatbots like ChatGPT causes brain deterioration, reduced memory, or cognitive decline. Claims of 'brain rot' are anecdotal or metaphorical, not backed by longitudinal health research. Some psychology papers discuss potential overreliance reducing practice of critical thinking skills, similar to calculator use, but no causal evidence of deterioration exists.
Source 11. A first-of-a-kind study out of MIT shows that an over-reliance on tools like ChatGPT leads to a massive decline in cognitive function. LLM users saw a 47% reduction in brain connectivity compared to people who did not rely on ChatGPT. The MIT researchers coined the term 'cognitive debt' to explain this tradeoff: what you gain in efficiency now, you pay for later in cognitive fitness.
Source 12. AI tools like ChatGPT linked to cognitive decline, study finds, with issues in memory recall and brain connectivity.
Source 13. An MIT study reveals that using ChatGPT may be linked to cognitive decline. The group that used ChatGPT had the lowest brain engagement, consistently underperformed at neural, linguistic and behavioral levels, and got lazier with each essay, continuing to copy and paste. When asked to redo the essays without tools, the ChatGPT group couldn't because they weren't retaining the information.
Expert review
How each expert evaluated the evidence and arguments
Expert 1 — The Logic Examiner
The supporting items (e.g., Sources 4–6, 8–9) at most show short-term under-engagement, cognitive offloading, and task-performance differences, often in narrow writing or education paradigms. They then infer from reduced EEG connectivity and engagement that the brain is "deteriorating," but that leap is not logically valid without longitudinal evidence of lasting impairment or clinical decline; Source 1 itself is framed as a conditional risk of skill erosion rather than demonstrated brain damage. Given the scope mismatch, the equivocation between "reduced engagement" and "brain deterioration," and the existence of contexts where LLM use improves performance (Source 2), the claim that using ChatGPT causes a person's brain to deteriorate is not established and is best judged false on the presented record and by general scientific standards for causality.
Expert 2 — The Context Analyst
The claim that ChatGPT "causes brain deterioration" is a sweeping causal assertion, but the available evidence supports only a much narrower finding: short-term, task-specific reductions in cognitive engagement and brain connectivity during AI-assisted writing tasks, with no longitudinal studies establishing permanent neurodegeneration or clinical brain deterioration. Sources 1, 4, 5, 6, 8, and 9 document reduced cognitive load and EEG-measured connectivity differences in controlled experiments, but these are acute, context-dependent effects of cognitive offloading, not evidence of irreversible brain damage. Source 10 explicitly notes that no major health institution has established causal evidence of brain deterioration, and Source 2 shows that LLM collaboration can actually improve cognitive performance in some contexts. Critically, none of the studies cited are longitudinal, so there is no evidence that these short-term engagement reductions translate into lasting structural or functional brain decline. The claim's framing, "causes brain deterioration," implies a definitive, causal, and likely permanent harm that the evidence does not support; what the research actually shows is reduced cognitive engagement during AI-assisted tasks, analogous to calculator use reducing arithmetic practice, which is meaningfully different from "deterioration."
Expert 3 — The Source Auditor
The most reliable sources here are the peer-reviewed biomedical papers indexed in PubMed/PMC. Source 1 (PMC/NIH) is a cautionary review about cognitive offloading and possible skill erosion with over-reliance, but it does not demonstrate clinical "brain deterioration"; Source 2 (a PubMed usability study) shows performance improvements with LLM assistance in a specific task and likewise does not evidence deterioration. The strongest "EEG/connectivity halved" claims are carried primarily by lower-authority or non-peer-reviewed secondary channels (Source 6 on arXiv; Sources 5, 8, 9, and 11–13 via media, blog, or video), with a clear risk of circular reporting about an "MIT study" rather than independent verification. Based on what the highest-authority, most independent evidence actually supports (concerns about skill underuse, not proven deterioration), the blanket causal claim that "Using ChatGPT causes a person's brain to deteriorate" is not substantiated and is best judged false on the current trustworthy evidence.
Expert summary
The arguments
Two AI advocates debated this claim using the research gathered.
Argument for
Multiple high-authority peer-reviewed and institutional sources converge on the same alarming finding: regular ChatGPT use measurably reduces brain activity and cognitive function. Source 1 (PMC/NIH) warns that over-reliance on ChatGPT leads to "erosion of essential cognitive skills" through underemployment of executive functions; Source 6 (arXiv) and Source 8 (NPHIC) both report EEG-measured reductions in brain connectivity among ChatGPT users; and Source 5 (Polytechnique Insights) documents a 32% drop in cognitive load and a near-halving of brain connectivity in AI-assisted writing tasks. The convergence of neuroimaging data (EEG), behavioral outcomes, and linguistic underperformance across independent studies, including the MIT research cited by Sources 8, 9, 11, and 13, alongside Source 4 (Stanford University SCALE AI) confirming significantly lower cognitive engagement scores in ChatGPT users, constitutes a compelling, multi-source body of evidence that using ChatGPT causes measurable brain deterioration.
Rebuttal. The Proponent equivocates between short-term cognitive offloading or under-engagement and the motion's clinical claim of "brain deterioration": even the NIH review frames the risk as potential underemployment and skill erosion rather than demonstrated neurodegeneration, and it is explicitly conditional on over-reliance (Source 1, PMC/NIH). The Proponent's "convergence" claim is also built on weak or non-primary evidence: arXiv preprints and media summaries (Sources 5, 6, and 8) cannot establish causality or long-term decline, and the brief includes direct counterevidence that LLM assistance can improve real-world cognitive performance in a demanding diagnostic task (Source 2, PubMed), contradicting any blanket causal deterioration narrative.
Argument against
The motion asserts brain "deterioration," but the strongest evidence in the brief does not establish clinical or longitudinal neurodegeneration from ChatGPT use. Even the cautionary NIH review frames the issue as potential skill "underemployment" and "erosion" from cognitive offloading rather than demonstrated brain damage (Source 1, PMC/NIH), and the background synthesis explicitly notes there is no causal, peer-reviewed health evidence from major institutions showing AI chatbots cause brain deterioration (Source 10, LLM Background Knowledge). Moreover, real-world task data can point the other way: LLM collaboration improved diagnostic performance in a brain MRI study (Source 2, PubMed), undermining the claim that using ChatGPT inherently deteriorates the brain rather than sometimes augmenting cognition depending on context.
Rebuttal. The Opponent's reliance on Source 10 as a counterweight is critically undermined by its status as LLM background knowledge rather than peer-reviewed research, making it the least authoritative source in the brief and wholly insufficient to neutralize the convergent EEG neuroimaging data from Sources 5, 6, 8, and 9, which objectively measure reduced brain connectivity, a functional form of deterioration regardless of whether it meets a narrow clinical definition of neurodegeneration. Furthermore, the Opponent commits a fallacy of composition by citing Source 2's finding that LLM collaboration improved diagnostic accuracy in a specialized medical context, then generalizing this to refute the claim about regular ChatGPT use broadly, a context entirely distinct from the repeated cognitive offloading tasks studied in Sources 4, 6, and 8, where measurable neural and behavioral decline was consistently documented across independent studies.