Library

966 published verifications · avg. score 4.7/10 · 329 rated true or mostly true · 629 rated false or misleading
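The headline counts above imply a rough split across verdict buckets. A minimal sketch of that arithmetic (the "other" remainder is an assumption covering verifications outside both named buckets):

```python
# Counts taken from the page header; "other" is an assumed residual bucket.
published = 966
true_or_mostly_true = 329
false_or_misleading = 629

true_share = true_or_mostly_true / published * 100
false_share = false_or_misleading / published * 100
other = published - true_or_mostly_true - false_or_misleading

print(f"{true_share:.1f}% true/mostly true")   # 34.1%
print(f"{false_share:.1f}% false/misleading")  # 65.1%
print(f"{other} verifications in other buckets")  # 8
```

Roughly two-thirds of published verifications fall on the false/misleading side, consistent with the below-midpoint 4.7/10 average score.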

“AI coding tools do not significantly improve real-world software developer productivity as of March 15, 2026.”

Misleading · 500+ views

This claim oversimplifies a genuinely mixed picture. At the individual and task level, AI coding tools deliver measurable productivity gains — 30–55% faster task completion in controlled settings and hours saved weekly. However, at the organizational level, delivery metrics like DORA remain largely flat, review queues have ballooned, and one rigorous RCT found experienced developers were actually 19% slower. Even the most skeptical multi-study synthesis acknowledges ~10% organizational gains. Saying tools "do not significantly improve" productivity ignores real individual-level improvements while overstating organizational-level stagnation.

“An AI-generated podcast network publishes over 11,000 episodes per day by repurposing content from local news outlets without attribution.”

Mostly True

The claim is largely accurate. Multiple credible sources confirm that an AI podcast network (identified as "Daily News Now" or "Podcasts.ai") has been reported to produce approximately 11,000 episodes per day by repurposing local news content, often without crediting original outlets. However, the specific episode count traces back to a single investigation and has not been independently audited. The "without attribution" characterization applies to many — but not necessarily all — episodes, making the claim's absolute framing slightly overstated.

“Thousands of TikTok and Instagram videos promoting the Jenni AI study app did not disclose that they were paid advertisements.”

False

The claim that "thousands" of TikTok and Instagram videos promoting Jenni AI failed to disclose paid partnerships is not supported by available evidence. While Jenni AI did operate an affiliate/micro-influencer program, and one blogger noted suspected undisclosed affiliate links in "many" reviews, no audit, dataset, enforcement action, or quantitative analysis confirms non-disclosure at the scale of "thousands" of videos. The leap from anecdotal observations to a specific large-scale claim is unsupported speculation.

“NBC News correspondent Richard Engel was injured while reporting in Israel in early March 2026.”

False

This claim is false. Richard Engel was not injured while reporting in Israel in early March 2026. Engel himself called the injury rumors "totally not true" on a March 10 podcast and posted a video on March 12 showing him healthy and working. Snopes confirmed the rumor originated as AI-generated misinformation spread on Facebook. Multiple sources document Engel actively reporting from Israel throughout early March with no signs of injury, and NBC News issued no injury announcement.

“George Soros was placed under house arrest by United States federal authorities in March 2026.”

False

This claim is false. There is no credible evidence that George Soros was placed under house arrest by U.S. federal authorities in March 2026. Multiple independent fact-checks found no DOJ or FBI statements, no court filings, and no reporting from any major news outlet supporting this claim. The only source backing it is an anonymous, uncorroborated crypto social media post. While the DOJ did direct prosecutors to investigate Soros-linked organizations in 2025, that activity involved foundations — not any personal detention of Soros himself.

“A digitally altered or fake image depicting Ian Huntley in a hospital bed circulated online in March 2026.”

Mostly True

The claim is well-supported. UKNIP, a credible news source, reported on March 10, 2026, that misleading images falsely depicting Ian Huntley on his deathbed circulated online and appeared to be AI-generated or taken from unrelated medical imagery. This was corroborated by additional outlets. The fake image emerged amid widespread misinformation following a real prison attack on Huntley in late February 2026. The only caveat is that the exact origin and scale of circulation remain unclear.

“AI deepfake detection technology is highly accurate and reliable as of March 15, 2026.”

Misleading · 50+ views

While some leading deepfake detection tools report 92–98% accuracy in controlled lab settings, these figures come largely from vendor benchmarks, not independent real-world testing. Multiple sources — including academic challenge benchmarks and forensic experts — document that detection accuracy drops by 45–50% under real-world conditions such as compression, low-quality media, and novel AI generators. Some deployed systems are only ~80% effective. Calling the technology "highly accurate and reliable" as a blanket characterization significantly overstates its current operational performance.

“AI chatbots, such as ChatGPT, provide medical advice that is consistently reliable and safe for users.”

False · 50+ views

The claim that AI chatbots like ChatGPT provide "consistently reliable and safe" medical advice is not supported by the evidence. Multiple high-quality studies from 2024–2026 show ChatGPT gave incorrect advice in over 51% of medical emergencies, exhibited hallucination rates of 50–82%, and correctly identified conditions in fewer than 34.5% of real-world cases. ECRI designated AI chatbot misuse as the top health technology hazard for 2026. While chatbots show promise in narrow, controlled tasks, their performance is neither consistent nor safe for general medical advice.

“The United States military conducted a missile strike on an Iranian girls' school in March 2026.”

Misleading

A U.S. missile did reportedly strike an Iranian girls' school, according to multiple credible outlets citing a preliminary Pentagon assessment. However, the claim omits critical context: the strike was a targeting error made while attacking an adjacent IRGC military base, not a deliberate strike "on" the school. Outdated targeting data reportedly caused the misidentification. The phrasing "conducted a missile strike on a girls' school" implies intentional targeting, which no credible source supports. A Pentagon investigation remains ongoing.

“Video footage circulating in March 2026 purportedly showing Iranian missiles striking Tel Aviv is authentic and depicts current events.”

False

While Iranian missiles did strike or target the Tel Aviv area in March 2026 — confirmed by multiple credible outlets — the specific viral footage circulating online is not authentic. Snopes traced one widely shared clip to June 2025 events, Lead Stories identified another as AI-generated, and BOOM independently confirmed multiple circulating videos were old or fabricated. The real conflict does not validate the fake footage. The claim falsely presents debunked viral clips as genuine current-event video.

“A viral video shows Benjamin Netanyahu with six fingers, which is cited as evidence that the footage is AI-generated.”

Misleading

A viral video from Netanyahu's March 12 press conference did circulate widely, with social media users claiming a freeze-frame showed a sixth finger as proof of AI generation. However, multiple fact-checkers (PolitiFact, dedicated forensic analyses) confirmed the video shows five fingers — the "sixth" was an optical illusion caused by palm anatomy, lighting, and compression. AI detection tools found no evidence of synthetic media. The claim accurately describes a real social media event but misleadingly frames a debunked illusion as though the video genuinely depicts six fingers.

“Trent Reznor stated that he thinks there should be separate bathrooms for supporters of Make America Great Again (MAGA) because he does not feel comfortable with them around women and children.”

False

Trent Reznor never said this. Snopes traced the "separate bathrooms for MAGA" quote to an anonymous Instagram user and rated it "Incorrect Attribution." The official Nine Inch Nails website explicitly denied Reznor ever made such a statement, and no verified interview or social media post contains it. While Reznor has a well-documented history of criticizing Trump, that does not validate a fabricated quote attributed to him.

“Long-term use of wireless earbuds may negatively affect brain function due to electromagnetic field exposure.”

False · 50+ views

No peer-reviewed study has demonstrated that wireless earbuds impair brain function. Bluetooth earbuds emit roughly 100–1,000 times less RF radiation than cell phones held to the head. The WHO, CDC, and Bluetooth-specific research consistently find no adverse neurological effects at these power levels. The claim's key supporting evidence comes from cell phone studies on children — a fundamentally different exposure scenario. While long-term earbud-specific research is limited, the claim extrapolates speculatively from that evidence, and current science does not support the asserted risk.

“Wireless earbuds communicate with each other by transmitting signals through the human brain.”

False

Wireless earbuds do not communicate by transmitting signals through the human brain. They use Bluetooth radio waves transmitted through the air, with one earbud typically relaying audio to the other. Even advanced technologies like Near-Field Magnetic Induction (NFMI) create a body-area network around the user — not through brain tissue. The only source making the "through the brain" claim is a low-credibility EMF-concern blog contradicted by every authoritative technical source reviewed.

“Eating chocolate every day reduces the risk of heart disease.”

Misleading

The claim overstates the evidence. While observational studies link moderate chocolate consumption to lower cardiovascular risk, the strongest randomized trial (COSMOS) found no significant reduction in total cardiovascular events. Benefits appear limited to modest amounts of high-flavanol dark chocolate — not "chocolate every day" broadly. The claim conflates correlation with causation, ignores dose-dependent risks (a J-shaped curve where excess intake may be harmful), and equates cocoa flavanols with everyday commercial chocolate.

“Listening to Mozart's music increases cognitive intelligence in babies.”

False · 50+ views

This claim is false. The "Mozart effect" originated from a 1993 study on college students — not babies — and produced only a brief, temporary boost in spatial reasoning, not general cognitive intelligence. Multiple meta-analyses and peer-reviewed reviews have found no persuasive evidence that passively listening to Mozart increases cognitive intelligence in infants. The original researcher herself stressed the effect does not extend to general intelligence. The widespread belief persists as a popular myth unsupported by scientific evidence.

“Multitasking reduces productivity.”

True · 100+ views

The claim is well-supported by robust scientific evidence. Research from the APA, NIH, Stanford, and peer-reviewed experimental studies consistently shows that what people call "multitasking" — rapidly switching between tasks — imposes measurable cognitive costs, increasing errors and reducing output by an estimated 20–40%. While a tiny fraction (~2.5%) of people may be immune to these effects, and simple compatible tasks may not suffer the same penalties, the claim accurately reflects the strong scientific consensus for the vast majority of real-world work contexts.

“Vaccines contain ingredients that are harmful to human health.”

Misleading

This claim is misleading. While it's true that rare allergic reactions to vaccine excipients (like gelatin or PEG) occur in roughly 1 per million doses, the unqualified statement implies vaccines are broadly dangerous. The overwhelming scientific consensus — including WHO, the CDC, the AAP, and a landmark study of 1.2 million children — confirms that vaccine ingredients like aluminum adjuvants and thimerosal are safe at the doses used, with no causal link to autism, neurological disorders, or systemic harm.

“Most human decisions are made unconsciously and are rationalized after the fact.”

Misleading

Unconscious processes do influence many decisions, and post-hoc rationalization is a documented psychological phenomenon. However, the claim that "most" decisions are made unconsciously and rationalized afterward significantly overstates the evidence. Key neuroscience findings come from narrow lab tasks (e.g., simple button presses), not everyday decision-making. Critical peer-reviewed reviews warn that unconscious influence claims have been systematically inflated. The popular "95%" statistic lacks rigorous scientific backing. The claim contains a real kernel of truth but its sweeping framing is not supported.

“The 10,000-hour rule reliably predicts the attainment of expertise in a given field.”

False · 100+ views

The 10,000-hour rule does not reliably predict expertise. Meta-analyses show deliberate practice explains only 18–26% of skill variance across domains. Individual variation is enormous — chess masters have achieved mastery in as few as 3,016 hours while others never reached it after 25,000+. The "rule" is a popularized oversimplification of one violinist study's average, and its originator, K. Anders Ericsson, distanced himself from this framing. Genetics, instruction quality, and learning rates matter significantly.