Claim analyzed

Science

“The majority of online misinformation is spread by human users rather than automated bots.”

Submitted by Vicky

The conclusion

Mostly True
7/10

The weight of available research supports the claim that human users remain the primary drivers of online misinformation spread, though the picture is more nuanced than the claim suggests. The most rigorous large-scale studies show that false news diffusion patterns persist even after removing bot accounts, and human behavioral mechanisms — habitual sharing, platform incentives, superspreaders — remain dominant factors. However, bots punch well above their weight in specific contexts, and the rapid rise of AI-generated content since 2023 is narrowing the gap in ways not yet fully measured.

Based on 23 sources: 7 supporting, 13 refuting, 3 neutral.

Caveats

  • Key supporting studies (MIT 2018, Internet Society 2018) predate the large language model era; AI-generated misinformation is scaling rapidly and may be shifting the human-bot balance in ways current research has not fully captured.
  • Bots making up as little as 1% to 6% of users can account for 30%+ of misinformation content in specific political contexts, meaning the human “majority” is smaller than it appears when measured by impact rather than user count.
  • The claim does not distinguish between misinformation creation and diffusion — bots may play a larger role in origination and amplification than in person-to-person sharing, and during crises bot-driven misinformation approaches parity with human-driven content.
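The disproportionality described in the second caveat can be made concrete with quick arithmetic. Below is a minimal sketch using the figures cited from Sources 4 and 6; the per-account "amplification" ratio is a derived illustration, not a statistic reported by either study:

```python
# Bot share of users vs. bot share of misinformation content, per the
# cited studies. The amplification ratio (content share / user share) is
# an illustrative derivation, not a figure from the studies themselves.
cases = {
    "IU 2016 election (Source 6)": (0.06, 0.31),  # 6% of accounts, 31% of low-credibility content
    "GWU impeachment (Source 4)":  (0.01, 0.30),  # <1% of users, 30%+ of related content
}

for name, (user_share, content_share) in cases.items():
    # If bots behaved like average accounts, content share would equal
    # user share; the ratio shows how far above average they produce.
    amplification = content_share / user_share
    print(f"{name}: ~{amplification:.0f}x the output of an average account")
```

Read as: if bots were average accounts, 6% of users would yield 6% of content; the cited 31% implies roughly five times the per-account output, and for the GWU case at least thirty times, since "less than 1 percent" is an upper bound on the bot user share.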

Sources

Sources used in the analysis

#1
PubMed 2025-03-31 | A global comparison of social media bot and human characteristics
NEUTRAL

Chatter on social media about global events comes from 20% bots and 80% humans. The chatter by bots and humans is consistently different: bots tend to use linguistic cues that can be easily automated (e.g., increased hashtags and positive terms), while humans use cues that require dialogue understanding (e.g., replying to post threads).

#2
arXiv 2026-04-10 | Human vs. Machine Deception: Distinguishing AI-Generated and Human-Written Fake News Using Ensemble Learning
NEUTRAL

The rapid adoption of large language models introduces AI-generated fake news that can be produced at scale, rapidly adapted, and optimized for fluency, significantly amplifying the volume and potential impact of misinformation.

#3
arXiv 2024-08-18 | How Do Social Bots Participate in Misinformation Spread? A Comprehensive Dataset and Analysis
REFUTE

Results show that social bots play a central role in misinformation dissemination, participating in news discussions to amplify echo chambers, manipulate public sentiment, and reverse public stances. Among the 5,750 users publishing misinformation, there are 3,799 active users, of which social bots account for 20.19%.

#4
The George Washington University 2023-10-17 | Quantifying the Impact of Bots on Online Political Discussions
REFUTE

Despite the fact that they represent less than 1 percent of all users, the social media bots posted over 30% of all impeachment-related content on X, formerly known as Twitter, according to a study, “Bots, disinformation, and the first impeachment of U.S. President Donald Trump,” published in “PLOS One” in May 2023.

#5
American Psychological Association 2023-11-29 | How and why does misinformation spread?
SUPPORT

Overall, most online misinformation originates from a small minority of “superspreaders,” but social media amplifies their reach and influence. Psychological factors contribute significantly to this process: People are more likely to share misinformation when it aligns with personal identity or social norms, when it is novel, and when it elicits strong emotions.

#6
Observatory on Social Media - Indiana University 2018-01-01 | Twitter bots spread misinformation
REFUTE

Our analysis of information shared on Twitter during the 2016 U.S. presidential election has found that social bots played a disproportionate role in spreading misinformation online. A mere 6 percent of Twitter accounts that the study identified as bots were enough to spread 31 percent of the low-credibility information on the network.

#7
MIT News 2018-03-08 | Study: On Twitter, false news travels faster than true stories
SUPPORT

The spread of false information is essentially not due to bots that are programmed to disseminate inaccurate stories. Instead, false news spreads faster around Twitter because people retweet inaccurate news items. “When we removed all of the bots in our dataset, [the] differences between the spread of false and true news stood,” says Soroush Vosoughi, a co-author of the new paper.

#8
Pew Research Center 2018-04-09 | Bots in the Twittersphere
REFUTE

An estimated two-thirds of tweeted links to popular websites are posted by automated accounts – not human beings. Many are concerned that bots are used maliciously and negatively affect how well-informed Americans are about current events.

#9
Internet Society 2018-03-21 | Fake News Spreads Fast, But Don't Blame the Bots
SUPPORT

Fake news spreads much faster than real news, and real people – not bots – are to blame, according to a recent study. The team found that bots do accelerate the spread of fake news, but they also accelerate the spread of true news at about the same rate. “Bots cannot explain this massive difference between how fast and far and deeply and broadly false news spreads compared to the truth,” he said. “Human beings are responsible for that.”

#10
PBS 2018-03-09 | False news travels 6 times faster on Twitter than truthful news
SUPPORT

False information spreads much faster and farther than the truth on Twitter, and although it is tempting to blame automated “bot” programs for this, human users are more at fault. When the researchers used an algorithm to weed out tweets likely posted and circulated by bots, both false and true news continued to circulate at the same rates.

#11
PLOS One 2024-05-31 | Mapping automatic social media information disorder. The role of bots and AI in spreading misleading information in society
SUPPORT

The role of AI was highlighted, both as a tool for fact-checking and building truthiness identification bots, and as a potential amplifier of false narratives. Moreover, AI systems developed and deployed by online platforms to enhance their users' engagement significantly contribute to the effective and rapid dissemination of disinformation online, with specific bots potentially designed as fake-news super-spreaders.

#12
arXiv 2025-05-07 | [2505.04028] Appeal and Scope of Misinformation Spread by AI Agents and Humans
SUPPORT

This work examines the influence of misinformation and the role of AI agents, called bots, on social network platforms. Results show that misinformation was more prevalent during the first two periods. Human-generated misinformation tweets tend to have higher appeal and scope compared to bot-generated ones.

#13
USC 2023-01-17 | USC study reveals the key reason why fake news spreads on social media
REFUTE

A USC-led study of more than 2,400 Facebook users suggests that platforms — more than individual users — have a larger role to play in stopping the spread of misinformation online. The research found that users' social media habits doubled and, in some cases, tripled the amount of fake news they shared, making habits more influential than political beliefs or lack of critical reasoning.

#14
UF College of Journalism and Communications - University of Florida 2025-01-10 | Spreading Misinformation with Careless Sharing
REFUTE

A study analyzing 35 million Facebook posts between 2017 and 2020 found that approximately 75% of news links shared on Facebook are reposted without the users ever reading the content. This phenomenon, termed “shares without clicks,” suggests that human careless sharing is a significant driver of misinformation spread.

#15
Stimson Center 2026-02-23 | AI in the Age of Fake (Imagined) Content
REFUTE

AI is fundamentally changing how misinformation and disinformation are developed and spread. A recent NewsGuard report found that leading AI chatbots spread false information 35% of the time when prompted with questions about controversial news topics. This rate is nearly twice the observed rate just a year earlier.

#16
Polytechnique Insights 2025-12-09 | How AI is affecting quality of factual information
REFUTE

AI tools themselves sometimes contribute to this phenomenon. A study by NewsGuard shows that in August 2025, the leading AI chatbots relayed false claims in 35% of cases, compared to 18% the previous year. Perplexity went from a 100% false information refutation rate in 2024 to a 46.67% error rate in 2025.

#17
Imperial College London | Do bots help to spread fake news?
REFUTE

A study examining the spread of true and false news online suggested that falsehood spreads further and faster than truth across every topic, and that this was mostly down to humans – not the automated “bots” that many believed were largely responsible for disseminating the material.

#18
Yale Insights 2023-03-31 | How Social Media Rewards Misinformation
SUPPORT

New research from Gizem Ceylan, a postdoctoral scholar at Yale SOM, suggests that the reward systems of social media platforms inadvertently encourage users to spread misinformation. A majority of false stories are spread by a small number of frequent human users who are largely unconcerned with the content they post, driven by the platform's reinforcement of sharing with likes and comments.

#19
SQ Magazine 2025-10-03 | Social Media Misinformation Statistics 2026: How Social Platforms Amplify False Content (with Data)
REFUTE

During crises, approximately 47% of misinformation originates from anonymous or bot accounts, making source-tracking difficult for platforms.

#20
Cheq AI 2023-07-12 | Social Bots: How Do They Shape Public Opinion?
REFUTE

These automated social media accounts can disseminate false or misleading information, leading to the spread of misinformation among social media users. By generating a large volume of posts, comments, and likes, social media bots can create the illusion of widespread support or opposition for a particular viewpoint, swaying public opinion in a desired direction.

#21
Scroll.in 2024-04-14 | How AI bots spread fabricated information to sway public discourse
REFUTE

AI-powered bots are automated accounts designed to mimic human behavior, and in 2017, it was estimated that approximately 23 million social bots accounted for 8.5% of total users on X (formerly Twitter). More than two-thirds of tweets originated from these automated accounts, amplifying the reach of disinformation and muddying the waters of public discourse.

#22
Pew Research Center 2018-10-15 | Social Media Bots Draw Public's Attention and Concern
NEUTRAL

Since the 2016 U.S. presidential election, social media bots have been identified as a factor in the spread of misinformation, with about two-thirds of Americans having heard about them and a large majority concerned about their malicious use.

#23
UVA Today 2024-10-30 | Q&A: Is That Real? Bots Make It Hard To Recognize Truth
REFUTE

Bots on social media play a major role in spreading misinformation and disinformation, influencing public opinion by shaping topics, creating false urgency, and using generative AI to create compelling, tailored content.

Full Analysis

Expert review

How each expert evaluated the evidence and arguments

Expert 1 — The Logic Examiner

Focus: Inferential Soundness & Fallacies
Mostly True
7/10

The proponent's case rests on a logically sound chain: Source 1 establishes that 80% of social media chatter is human-generated; Sources 7, 9, 10, and 17 provide direct experimental evidence that removing bots from datasets does not alter the differential spread of false vs. true news; and Sources 5 and 18 identify human behavioral mechanisms (superspreaders, platform reward systems) as primary drivers. This constitutes both direct and mechanistic support for the claim. The opponent's rebuttal introduces two significant logical problems. First, it draws a false equivalence between disproportionate bot content generation in narrow political contexts (Sources 4, 6) and majority responsibility for overall misinformation spread; these are distinct metrics, and the opponent conflates them. Second, the "outdated data" argument is partially valid but overstated: Sources 2, 15, and 16 document AI chatbot error rates and the potential for scalable fake-news production, but none of them empirically demonstrates that automated agents now account for a majority of online misinformation spread, leaving the opponent's burden of proof unmet. The claim is "Mostly True" — the preponderance of evidence, including the most methodologically rigorous studies (MIT/Vosoughi et al.), logically supports human primacy in misinformation diffusion, though the opponent correctly identifies that bot disproportionality in specific contexts and the emerging AI landscape introduce genuine inferential uncertainty that prevents a clean "True" verdict.

Logical fallacies

  • False equivalence (Opponent): Conflating bots' disproportionate share of content volume in specific political contexts (Sources 4, 6) with majority responsibility for overall misinformation spread — these are categorically different metrics and the opponent treats them as interchangeable.
  • Hasty generalization (Opponent): Extrapolating from narrow case studies (impeachment tweets, 2016 election) to a universal claim about who bears primary responsibility for all online misinformation spread.
  • Appeal to novelty / genetic fallacy (Opponent's rebuttal): Dismissing Sources 7, 9, and 10 as 'dangerously outdated' without providing replacement evidence that actually measures majority spread attribution — the age of evidence does not automatically invalidate its logical conclusions, especially when no newer equivalent study is cited to contradict them.
  • Scope mismatch (Opponent): Sources 15 and 16 measure AI chatbot error rates when prompted, not the proportion of total online misinformation attributable to automated agents — the opponent uses these as if they prove the latter when they only demonstrate the former.
  • Volume fallacy (partially valid, raised by Opponent in rebuttal): The proponent's use of the 80/20 chatter ratio from Source 1 as a proxy for misinformation spread is an inferential gap — general chatter volume is not equivalent to misinformation diffusion share, though the proponent supplements this with more direct diffusion evidence from Sources 7, 9, and 10.
Confidence: 8/10

Expert 2 — The Context Analyst

Focus: Completeness & Framing
Mostly True
7/10

The claim that "the majority of online misinformation is spread by human users rather than automated bots" is supported by several studies (Sources 1, 7, 9, 10, 17) but critically omits important context: (1) the distinction between volume of chatter and disproportionate impact — Sources 4 and 6 show bots representing tiny fractions of users can account for 30%+ of misinformation content in specific contexts; (2) the claim conflates "majority of users spreading misinformation" with "majority of misinformation spread," which are different metrics; (3) the most supportive studies (Sources 7, 9, 10) are from 2018 and predate the LLM-driven AI misinformation explosion documented in Sources 2, 15, and 16, where AI chatbots now spread false information at 35% rates (nearly double a year prior); and (4) crisis contexts show bots account for ~47% of misinformation (Source 19), narrowing the human majority considerably. That said, the core finding — that human behavior is the primary driver of misinformation diffusion overall — is still supported by recent evidence (Source 1, 2025; Source 12, 2025; Source 14, 2025), and the claim does not assert bots play no role, only that humans are the majority; the weight of evidence, even accounting for bot disproportionality, still supports this as mostly true, though the framing omits the nuanced and rapidly evolving role of AI-generated misinformation.

Missing context

  • The claim conflates 'majority of users spreading misinformation' with 'majority of misinformation spread by volume or impact' — bots can account for 30%+ of misinformation content despite being a tiny fraction of users (Sources 4, 6).
  • Key supporting studies (Sources 7, 9, 10) are from 2018 and predate the large-scale AI/LLM misinformation era; AI chatbots now spread false information at 35% rates, nearly double the prior year (Sources 15, 16).
  • During crises, approximately 47% of misinformation originates from anonymous or bot accounts, significantly narrowing the human majority in high-stakes contexts (Source 19).
  • The claim does not distinguish between misinformation origination (creation) and diffusion (spreading), which have different human vs. bot profiles.
  • The rapidly escalating capability of AI-generated fake news at scale (Source 2, 2026) may be shifting the balance in ways not yet fully captured by current research.
Confidence: 8/10

Expert 3 — The Source Auditor

Focus: Source Reliability & Independence
Misleading
5/10

The most reliable and directly relevant evidence in the pool is mixed. High-authority peer-reviewed/academic items indicate substantial human involvement (e.g., Source 1, PubMed, reports overall social-media “chatter” is ~80% human vs ~20% bots, and Source 7, MIT News — summarizing a major Science study — finds false-news diffusion advantages persist after removing bots), while other credible academic sources show bots can be disproportionately responsible for a large share of misinformation content in specific contexts (Source 4, GWU/PLOS One impeachment study; Source 6, IU OSoMe 2016 election analysis), and some newer work emphasizes bots' “central role” despite their being a minority of accounts (Source 3, arXiv 2024). Because the claim is a broad “majority of online misinformation” statement, and the strongest sources either measure different things (overall chatter vs misinformation) or are context-specific/outdated for today's AI environment, trustworthy evidence does not clearly establish that humans (rather than bots) account for a majority of misinformation spread overall, making the claim at best only partially supported.

Weakest sources

  • Source 19 (SQ Magazine) is low-reliability because it is a non-academic magazine-style compilation with unclear methodology and a hard-to-verify statistic presented without primary data context.
  • Source 20 (Cheq AI) is weak/possibly conflicted because it is a vendor blog with marketing incentives and provides general claims about bot influence without quantifying majority vs minority contribution.
  • Source 21 (Scroll.in) is weaker for this adjudication because it appears to recycle older estimates and makes strong quantitative claims (e.g., “more than two-thirds of tweets”) without clear linkage to primary, independently verifiable measurement in the article itself.
Confidence: 6/10

The claim is
Mostly True
7/10
Confidence: 7/10 | Spread: 2 pts
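The "Spread" figure is consistent with the gap between the highest and lowest expert scores (7, 7, 5). Below is a minimal sketch of one plausible aggregation, assuming the verdict is the median expert score and spread is max minus min; the platform's actual method is not documented here, so this merely reproduces the displayed figures under those assumptions:

```python
# Hypothetical aggregation of the three expert scores shown in the
# Expert review section. Assumes verdict = median and spread = max - min,
# which reproduces the displayed figures; the real method may differ.
from statistics import median

expert_scores = [7, 7, 5]  # Logic Examiner, Context Analyst, Source Auditor

verdict = median(expert_scores)
spread = max(expert_scores) - min(expert_scores)

print(f"Verdict: {verdict}/10 | Spread: {spread} pts")
```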

The arguments

Two AI advocates debated this claim using the research gathered.

Argument for

Proponent Argues TRUE

Multiple independent studies and syntheses indicate that humans are the primary drivers of misinformation diffusion: a global comparison finds social-media “chatter” is about 80% human vs 20% bots (Source 1, PubMed), and large-scale Twitter analyses report that removing bots does not change the faster, farther spread of false news—implicating human retweeting behavior as the main cause (Source 7, MIT News; Source 9, Internet Society; Source 10, PBS; Source 17, Imperial College London). Complementing this, behavioral research attributes most misinformation propagation to a small set of frequent human “superspreaders” and platform reward dynamics that incentivize human sharing (Source 5, American Psychological Association; Source 18, Yale Insights), so even where bots can be disproportionately influential in some contexts, the overall majority of misinformation spread is best supported as human-driven.

Opponent Rebuttal

Your reliance on Source 1's 80/20 human-to-bot ratio commits a volume fallacy — raw chatter volume is not equivalent to misinformation spread, and Sources 4 and 6 directly demonstrate that a tiny fraction of bots can account for 30–31% of misinformation content, meaning proportional presence tells us nothing about disproportionate impact. Furthermore, your citation of Sources 7, 9, and 10 is dangerously outdated — all from 2018 and predating the AI-generated misinformation explosion documented in Source 2 (arXiv, 2026) and Sources 15 and 16, which show AI chatbots now spread false information at nearly double the rate of just a year prior, fundamentally invalidating any conclusion drawn from pre-LLM era data about who bears primary responsibility for misinformation spread today.

Argument against

Opponent Argues FALSE

The claim that humans are the majority spreaders of misinformation is fundamentally undermined by disproportionate bot activity: Source 4 (The George Washington University) shows that bots representing less than 1% of users posted over 30% of all impeachment-related content, and Source 6 (Observatory on Social Media - Indiana University) found that just 6% of bot accounts spread 31% of low-credibility information — demonstrating that bots punch far above their weight in misinformation dissemination. Furthermore, Source 2 (arXiv) highlights that AI-generated fake news can now be produced at massive scale with optimized fluency, while Source 15 (Stimson Center) and Source 16 (Polytechnique Insights) confirm AI chatbots spread false information at rapidly escalating rates, collectively shifting the balance of misinformation spread decisively away from human users alone.

Proponent Rebuttal

You're committing a scope-and-metric fallacy: citing niche cases where bots generate a large share of content (Source 4, George Washington University; Source 6, Indiana University) doesn't overturn broader evidence about who drives misinformation diffusion overall, especially when a global comparison still finds the bulk of social-media chatter is human (Source 1, PubMed) and large-scale diffusion analyses show false news' advantage persists even after removing bots (Source 7, MIT News; Source 9, Internet Society; Source 10, PBS; Source 17, Imperial College London). And your AI pivot is largely speculative or misapplied—Source 2 (arXiv) argues potential scale, while Source 15 (Stimson Center) and Source 16 (Polytechnique Insights) discuss chatbot error rates rather than measuring that automated agents now account for a majority of online misinformation spread, so you haven't actually met the motion's “majority” burden.
