Claim analyzed
Science
“The majority of online misinformation is spread by human users rather than automated bots.”
Submitted by Vicky
The conclusion
The weight of available research supports the claim that human users remain the primary drivers of online misinformation spread, though the picture is more nuanced than the claim suggests. The most rigorous large-scale studies show that false news diffusion patterns persist even after removing bot accounts, and human behavioral mechanisms — habitual sharing, platform incentives, superspreaders — remain dominant factors. However, bots punch well above their weight in specific contexts, and the rapid rise of AI-generated content since 2023 is narrowing the gap in ways not yet fully measured.
Based on 23 sources: 7 supporting, 13 refuting, 3 neutral.
Caveats
- Key supporting studies (MIT 2018, Internet Society 2018) predate the large language model era; AI-generated misinformation is scaling rapidly and may be shifting the human-bot balance in ways current research has not fully captured.
- Bots representing as little as 1–6% of users can account for 30%+ of misinformation content in specific political contexts, meaning the human “majority” is smaller than it appears when measured by impact rather than user count (see the worked figures after this list).
- The claim does not distinguish between misinformation creation and diffusion: bots may play a larger role in origination and amplification than in person-to-person sharing, and during crises roughly 47% of misinformation originates from anonymous or bot accounts, approaching parity with content from identifiable human users.
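As a rough illustration of that disproportionality, consider the 6%-of-accounts / 31%-of-content figures from the Indiana University election analysis cited under Sources below. This back-of-the-envelope calculation is a sketch for intuition, not a figure reported by any study:

\[
\text{over-representation factor} = \frac{\text{share of misinformation content}}{\text{share of accounts}} = \frac{0.31}{0.06} \approx 5.2
\]

By the same arithmetic, the remaining 94% of accounts (human) produced the other 69% of that content, a factor of \(0.69 / 0.94 \approx 0.73\). Dividing the two, the average bot account in that dataset was roughly seven times as prolific a spreader of low-credibility content as the average human account, even though humans still produced the outright majority of it.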
Sources
Sources used in the analysis
Source 1: Chatter on social media about global events comes from 20% bots and 80% humans. The chatter by bots and humans is consistently different: bots tend to use linguistic cues that can be easily automated (e.g., increased hashtags and positive terms), while humans use cues that require dialogue understanding (e.g., replying to post threads).
Source 2: The rapid adoption of large language models introduces AI-generated fake news that can be produced at scale, rapidly adapted, and optimized for fluency, significantly amplifying the volume and potential impact of misinformation.
Source 3: Results show that social bots play a central role in misinformation dissemination, participating in news discussions to amplify echo chambers, manipulate public sentiment, and reverse public stances. Among the 5,750 users publishing misinformation, there are 3,799 active users, of which social bots account for 20.19%.
Source 4: Despite the fact that they represent less than 1 percent of all users, the social media bots posted over 30% of all impeachment-related content on X, formerly known as Twitter, according to a study, “Bots, disinformation, and the first impeachment of U.S. President Donald Trump,” published in PLOS One in May 2023.
Source 5: Overall, most online misinformation originates from a small minority of “superspreaders,” but social media amplifies their reach and influence. Psychological factors contribute significantly to this process: people are more likely to share misinformation when it aligns with personal identity or social norms, when it is novel, and when it elicits strong emotions.
Source 6: Our analysis of information shared on Twitter during the 2016 U.S. presidential election has found that social bots played a disproportionate role in spreading misinformation online. A mere 6 percent of Twitter accounts that the study identified as bots were enough to spread 31 percent of the low-credibility information on the network.
Source 7: The spread of false information is essentially not due to bots that are programmed to disseminate inaccurate stories. Instead, false news speeds faster around Twitter due to people retweeting inaccurate news items. “When we removed all of the bots in our dataset, [the] differences between the spread of false and true news stood,” says Soroush Vosoughi, a co-author of the new paper.
Source 8: An estimated two-thirds of tweeted links to popular websites are posted by automated accounts, not human beings. Many are concerned that bots are used maliciously and negatively affect how well-informed Americans are about current events.
Source 9: Fake news spreads much faster than real news, and real people, not bots, are to blame, according to a recent study. The team found that bots do accelerate the spread of fake news, but they also accelerate the spread of true news at about the same rate. “Bots cannot explain this massive difference between how fast and far and deeply and broadly false news spreads compared to the truth,” he said. “Human beings are responsible for that.”
Source 10: False information spreads much faster and farther than the truth on Twitter, and although it is tempting to blame automated “bot” programs for this, human users are more at fault. When the researchers used an algorithm to weed out tweets likely posted and circulated by bots, both false and true news continued to circulate at the same rates.
Source 11: The role of AI was highlighted, both as a tool for fact-checking and building truthiness-identification bots, and as a potential amplifier of false narratives. Moreover, AI systems developed and deployed by online platforms to enhance their users' engagement significantly contribute to the effective and rapid dissemination of disinformation online, with specific bots potentially designed as fake-news super-spreaders.
Source 12: This work examines the influence of misinformation and the role of AI agents, called bots, on social network platforms. Results show that misinformation was more prevalent during the first two periods studied. Human-generated misinformation tweets tend to have higher appeal and scope compared to bot-generated ones.
Source 13: A USC-led study of more than 2,400 Facebook users suggests that platforms — more than individual users — have a larger role to play in stopping the spread of misinformation online. The research found that users' social media habits doubled and, in some cases, tripled the amount of fake news they shared, making habits more influential than political beliefs or lack of critical reasoning.
Source 14: A study analyzing 35 million Facebook posts between 2017 and 2020 found that approximately 75% of news links shared on Facebook are reposted without the users ever reading the content. This phenomenon, termed “shares without clicks,” suggests that careless human sharing is a significant driver of misinformation spread.
Source 15: AI is fundamentally changing how misinformation and disinformation are developed and spread. A recent NewsGuard report found that leading AI chatbots spread false information 35% of the time when prompted with questions about controversial news topics. This rate is nearly twice the observed rate just a year earlier.
Source 16: AI tools themselves sometimes contribute to this phenomenon. A study by NewsGuard shows that in August 2025, the leading AI chatbots relayed false claims in 35% of cases, compared to 18% the previous year. Perplexity went from a 100% false-information refutation rate in 2024 to a 46.67% error rate in 2025.
Source 17: A study examining the spread of true and false news online suggested that falsehood spreads further and faster than truth across every topic, and that this was mostly down to humans, not the automated “bots” that many believed were largely responsible for disseminating the material.
Source 18: New research from Gizem Ceylan, a postdoctoral scholar at Yale SOM, suggests that the reward systems of social media platforms inadvertently encourage users to spread misinformation. A majority of false stories are spread by a small number of frequent human users who are largely unconcerned with the content they post, driven by the platform's reinforcement of sharing with likes and comments.
Source 19: During crises, approximately 47% of misinformation originates from anonymous or bot accounts, making source-tracking difficult for platforms.
Source 20: These automated social media accounts can disseminate false or misleading information, leading to the spread of misinformation among social media users. By generating a large volume of posts, comments, and likes, social media bots can create the illusion of widespread support or opposition for a particular viewpoint, swaying public opinion in a desired direction.
Source 21: AI-powered bots are automated accounts designed to mimic human behavior, and in 2017, it was estimated that approximately 23 million social bots accounted for 8.5% of total users on X (formerly Twitter). More than two-thirds of tweets originated from these automated accounts, amplifying the reach of disinformation and muddying the waters of public discourse.
Source 22: Since the 2016 U.S. presidential election, social media bots have been identified as a factor in the spread of misinformation, with about two-thirds of Americans having heard about them and a large majority concerned about their malicious use.
Source 23: Bots on social media play a major role in spreading misinformation and disinformation, influencing public opinion by shaping topics, creating false urgency, and using generative AI to create compelling, tailored content.
Expert review
How each expert evaluated the evidence and arguments
Expert 1 — The Logic Examiner
The proponent's case rests on a logically sound chain: Source 1 establishes that 80% of social media chatter is human-generated; Sources 7, 9, 10, and 17 provide direct experimental evidence that removing bots from datasets does not alter the differential spread of false vs. true news; and Sources 5 and 18 identify human behavioral mechanisms (superspreaders, platform reward systems) as primary drivers. Together, this constitutes both direct and mechanistic support for the claim. The opponent's rebuttal introduces two significant logical problems. First, it draws a false equivalence between disproportionate bot content generation in narrow political contexts (Sources 4, 6) and majority responsibility for overall misinformation spread; these are distinct metrics, and the opponent conflates them. Second, the “outdated data” argument is partially valid but overstated: Sources 2, 15, and 16 document AI chatbot error rates and the potential for scalable fake-news production, but none of them empirically demonstrates that automated agents now account for a majority of online misinformation spread, leaving the opponent's burden of proof unmet. The claim is “Mostly True”: the preponderance of evidence, including the most methodologically rigorous studies (MIT/Vosoughi et al.), logically supports human primacy in misinformation diffusion, though the opponent correctly identifies that bot disproportionality in specific contexts and the emerging AI landscape introduce genuine inferential uncertainty that prevents a clean “True” verdict.
Expert 2 — The Context Analyst
The claim that “the majority of online misinformation is spread by human users rather than automated bots” is supported by several studies (Sources 1, 7, 9, 10, 17) but omits important context: (1) the distinction between volume of chatter and disproportionate impact: Sources 4 and 6 show that bots representing tiny fractions of users can account for 30%+ of misinformation content in specific contexts; (2) the claim conflates “majority of users spreading misinformation” with “majority of misinformation spread,” which are different metrics; (3) the most supportive studies (Sources 7, 9, 10) are from 2018 and predate the LLM-driven surge in AI misinformation documented in Sources 2, 15, and 16, in which AI chatbots now relay false information 35% of the time (nearly double the rate of a year prior); and (4) in crisis contexts, roughly 47% of misinformation originates from anonymous or bot accounts (Source 19), narrowing the human majority considerably. That said, the core finding, that human behavior is the primary driver of misinformation diffusion overall, is still supported by recent evidence (Source 1, 2025; Source 12, 2025; Source 14, 2025), and the claim does not assert that bots play no role, only that humans account for the majority. The weight of evidence, even accounting for bot disproportionality, still supports the claim as mostly true, though its framing omits the nuanced and rapidly evolving role of AI-generated misinformation.
Expert 3 — The Source Auditor
The most reliable and directly relevant evidence in the pool is mixed. High-authority peer-reviewed and academic items indicate substantial human involvement: Source 1 (PubMed) reports that overall social-media “chatter” is roughly 80% human vs. 20% bots, and Source 7 (MIT News, summarizing a major Science study) finds that false-news diffusion advantages persist after removing bots. Other credible academic sources show that bots can be disproportionately responsible for a large share of misinformation content in specific contexts (Source 4, the GWU/PLOS One impeachment study; Source 6, the IU OSoMe 2016 election analysis), and some newer work emphasizes bots' “central role” despite their being a minority of accounts (Source 3, arXiv 2024). Because the claim is a broad “majority of online misinformation” statement, and the strongest sources either measure different things (overall chatter vs. misinformation) or are context-specific or outdated for today's AI environment, trustworthy evidence does not clearly establish that humans rather than bots account for a majority of misinformation spread overall, making the claim at best only partially supported.
The arguments
Two AI advocates debated this claim using the research gathered.
Argument for
Multiple independent studies and syntheses indicate that humans are the primary drivers of misinformation diffusion: a global comparison finds social-media “chatter” is about 80% human vs 20% bots (Source 1, PubMed), and large-scale Twitter analyses report that removing bots does not change the faster, farther spread of false news—implicating human retweeting behavior as the main cause (Source 7, MIT News; Source 9, Internet Society; Source 10, PBS; Source 17, Imperial College London). Complementing this, behavioral research attributes most misinformation propagation to a small set of frequent human “superspreaders” and platform reward dynamics that incentivize human sharing (Source 5, American Psychological Association; Source 18, Yale Insights), so even where bots can be disproportionately influential in some contexts, the overall majority of misinformation spread is best supported as human-driven.
Rebuttal
Your reliance on Source 1's 80/20 human-to-bot ratio commits a volume fallacy: raw chatter volume is not equivalent to misinformation spread, and Sources 4 and 6 directly demonstrate that a tiny fraction of bots can account for 30–31% of misinformation content, meaning proportional presence tells us nothing about disproportionate impact. Furthermore, your citation of Sources 7, 9, and 10 is dangerously outdated: all are from 2018 and predate the AI-generated misinformation explosion documented in Source 2 (arXiv) and Sources 15 and 16, which show AI chatbots now spread false information at nearly double the rate of just a year prior, fundamentally undermining any conclusion drawn from pre-LLM-era data about who bears primary responsibility for misinformation spread today.
Argument against
The claim that humans are the majority spreaders of misinformation is fundamentally undermined by disproportionate bot activity: Source 4 (The George Washington University) shows that bots representing less than 1% of users posted over 30% of all impeachment-related content, and Source 6 (Observatory on Social Media - Indiana University) found that just 6% of bot accounts spread 31% of low-credibility information — demonstrating that bots punch far above their weight in misinformation dissemination. Furthermore, Source 2 (arXiv) highlights that AI-generated fake news can now be produced at massive scale with optimized fluency, while Source 15 (Stimson Center) and Source 16 (Polytechnique Insights) confirm AI chatbots spread false information at rapidly escalating rates, collectively shifting the balance of misinformation spread decisively away from human users alone.
Rebuttal
You're committing a scope-and-metric fallacy: citing niche cases where bots generate a large share of content (Source 4, George Washington University; Source 6, Indiana University) doesn't overturn broader evidence about who drives misinformation diffusion overall, especially when a global comparison still finds the bulk of social-media chatter is human (Source 1, PubMed) and large-scale diffusion analyses show false news' advantage persists even after removing bots (Source 7, MIT News; Source 9, Internet Society; Source 10, PBS; Source 17, Imperial College London). And your AI pivot is largely speculative or misapplied: Source 2 (arXiv) argues potential scale, while Source 15 (Stimson Center) and Source 16 (Polytechnique Insights) discuss chatbot error rates rather than measuring whether automated agents now account for a majority of online misinformation spread, so you haven't actually met the claim's “majority” burden.