Library

27 published verifications about artificial intelligence

“Artificial intelligence poses a risk of causing human extinction.”

Mostly True

The claim that AI poses a risk of causing human extinction is supported by credible sources, including peer-reviewed research, the International AI Safety Report 2026, and statements signed by hundreds of leading AI scientists. Even skeptical analyses (e.g., Brookings) do not deny the risk exists — they argue it is speculative and should not dominate policy priorities. The claim is accurate as a statement about the existence of a recognized risk, but readers should understand that no established scientific consensus quantifies this risk as probable or imminent.

“At least one AI-powered video face-swap tool offers a free tier that supports video durations of up to 5 minutes.”

Misleading

The only evidence for a free 5-minute video face-swap tier comes from a single vendor's own marketing page (VoidMagic), with no independent review or test confirming the claim. Across the broader evidence, free tiers from comparable tools consistently cap video length at 10 to 120 seconds. Without third-party corroboration, the 5-minute assertion remains unverified and likely overstated, making the claim as presented misleading.

“More than 50% of online content is generated by artificial intelligence rather than written by humans.”

False

The available evidence does not support the statement that most online content is AI-generated. The strongest broad estimate cited is below 50%, while the higher numbers refer to narrower categories such as newly published pages, English-language articles, pages containing some AI text, or automated traffic rather than human-versus-AI authorship. That makes the claim an overstatement of what current evidence shows.

“Coupang, Naver, and Gmarket have made substantial investments in AI-driven retail infrastructure in South Korea.”

Mostly True

The available evidence supports the broad point that all three companies are investing meaningfully in AI capabilities that support retail in South Korea. Coupang’s case is the strongest, while Naver’s spending is partly broader AI infrastructure and Gmarket’s evidence relies more on announced budgets and rollout plans. The statement is directionally accurate but somewhat overstated as fully realized, retail-specific spending across all three.

“Artificial intelligence systems can produce high confidence scores for predictions that are actually incorrect.”

True

Extensive empirical research confirms that AI models sometimes output very high confidence scores for answers that are wrong. Demonstrations span image, language, and clinical systems from 2017 to 2026, establishing miscalibration as a known risk. That corrective techniques exist does not negate the documented fact that such overconfident errors occur.
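The gap between a model's stated confidence and its actual accuracy is what calibration research measures. A minimal sketch, using invented numbers rather than data from any cited study, shows how a simple expected calibration error (ECE) computation flags a model that is confidently wrong:

```python
# Toy calibration check: all confidence values and correctness flags below
# are made up for illustration, not taken from any study cited above.

def expected_calibration_error(confidences, correct, n_bins=5):
    """Weighted average of |accuracy - mean confidence| per confidence bin."""
    bins = [[] for _ in range(n_bins)]
    for conf, ok in zip(confidences, correct):
        idx = min(int(conf * n_bins), n_bins - 1)
        bins[idx].append((conf, ok))
    total = len(confidences)
    ece = 0.0
    for b in bins:
        if not b:
            continue
        avg_conf = sum(c for c, _ in b) / len(b)
        accuracy = sum(ok for _, ok in b) / len(b)
        ece += (len(b) / total) * abs(accuracy - avg_conf)
    return ece

# A model reporting ~97% confidence while only 60% of answers are correct.
confs = [0.97, 0.96, 0.98, 0.95, 0.99]
hits = [True, False, True, False, True]
print(round(expected_calibration_error(confs, hits), 3))  # large gap -> miscalibrated
```

Here the mean confidence (0.97) exceeds the accuracy (0.60) by 0.37, which is exactly the kind of overconfident-error pattern the studies document.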

“High accuracy in an artificial intelligence model does not guarantee fair outcomes, as some demographic groups may be systematically disadvantaged even when overall model accuracy is high.”

True

Extensive research shows overall model accuracy can hide large subgroup errors, allowing racial, gender, or age groups to be disadvantaged even when headline accuracy is high. Because fairness depends on distributional impacts, not aggregate accuracy, high performance provides no assurance of equitable treatment. Evidence from healthcare, finance, and vision systems consistently confirms this gap.

“In contemporary AI systems, deferring a decision to a human operator is regarded as an advantage.”

Mostly True

Deferring decisions to human operators is indeed widely regarded as an advantage in contemporary AI systems, supported by binding regulations like the EU AI Act, major technology companies, and peer-reviewed research. However, the claim omits significant qualifications: authoritative sources document that human-in-the-loop oversight is prone to automation bias, can create false security, and may degrade over time as human decision-making skills atrophy. The claim accurately reflects the dominant institutional and regulatory posture but presents an incomplete picture by not acknowledging these well-documented limitations.

“Artificial intelligence will cause widespread job loss among software engineers.”

False

The available evidence does not support the prediction that AI will cause widespread job loss among software engineers. High-authority sources from Morgan Stanley, MIT Sloan, arXiv, and Snowflake consistently point toward augmentation, productivity gains, and net job growth rather than broad displacement. The evidence cited in favor of the claim — worse outcomes for recent graduates in AI-exposed fields, economy-wide self-reports — does not isolate software engineers, does not establish AI as the causal driver, and conflates hiring difficulty with job destruction.

“Doctronic, an AI company, is prescribing renewal medications to patients in Utah without physician involvement.”

Misleading

Utah's Doctronic pilot is designed to eventually allow AI-driven prescription renewals without routine physician sign-off, but the claim significantly overstates current reality. As of early 2026, the program's active phase requires physician review of all renewals before they reach pharmacies. Even in later phases, escalation pathways to licensed physicians remain structurally embedded. The present-tense assertion of "no physician involvement" conflates the program's future autonomous design with its current operational requirements.

“Oxford University has predicted that the percentage of jobless people will decline as artificial intelligence advances.”

False

No Oxford University source has made the specific prediction attributed to it. Oxford-affiliated research discusses AI's complex labor market effects — noting that mass displacement fears may be overstated and that AI could create new roles — but none of these findings constitute a forecast that the percentage of jobless people will decline as AI advances. The claim conflates cautious, nuanced commentary with a definitive institutional prediction that does not exist in the evidence.

“Artificial intelligence will displace more jobs than it creates on a net basis.”

Misleading

The claim that AI will displace more jobs than it creates on a net basis overstates the available evidence. While documented displacement exists in specific sectors (e.g., computer systems design, entry-level roles, AI-vulnerable occupations), the most authoritative aggregate assessments — from the Federal Reserve, World Economic Forum, PwC, and Goldman Sachs — show near-zero net headcount effects or project net job creation. The claim treats localized displacement as proof of an economy-wide net loss, which current evidence does not support.

“Researchers deliberately fabricated a fictitious disease called Bixonimania using AI-generated preprints and found that AI systems subsequently treated it as a legitimate medical condition.”

Mostly True

The Bixonimania experiment is documented in an arXiv preprint and echoed by a Johns Hopkins-affiliated post, and no source contradicts its account. However, the specific claim rests on a single non-peer-reviewed preprint with no independent high-authority confirmation. The broader phenomenon — AI systems confidently elaborating on fabricated medical content — is well-established across multiple peer-reviewed studies, lending plausibility. The claim accurately reflects what was reported but should be understood as describing a preprint finding, not a peer-reviewed, independently replicated result.

“Artificial intelligence will replace the majority of human jobs.”

False

No credible economic or labor market research supports the claim that AI will replace the majority of human jobs. Leading institutions — including BCG, Goldman Sachs, Forrester, MIT Sloan, and Anthropic — project job displacement in the 6–15% range, with AI reshaping and augmenting far more roles than it eliminates. Even the most pessimistic long-run forecast in the evidence (~10 million jobs by 2050) falls far short of the "majority" threshold. No systematic increase in unemployment has been observed since AI's mainstream adoption.

“The majority of startup failures are primarily caused by issues related to artificial intelligence.”

False

This claim is not supported by the evidence. Large-scale startup failure databases consistently show the leading causes are no market need (42%), running out of cash (29%), wrong team (23%), and competition (19%) — none of which are AI-related. While AI startups do fail at high rates, even those failures are largely attributed to classic business problems like poor product-market fit. The claim conflates "AI startups failing" with "startup failures caused by AI," which are fundamentally different statements.

“Artificial intelligence is responsible for generating the majority of software code being written as of 2026.”

False

The claim that AI generates the majority of software code as of 2026 is not supported by the evidence. The most rigorous measurements place AI-authored code at 22–29% of actual code output, while the often-cited 41% figure from JetBrains refers to lines "touched" by AI — not independently generated. High adoption rates for AI coding tools do not equate to AI writing most code. No credible primary dataset shows AI-generated code exceeding 50% globally.

“TurboQuant compression technology can optimize AI memory usage by more than 5 times.”

Mostly True

Google Research confirms TurboQuant achieves at least 6x memory reduction — exceeding the claimed 5x threshold — but this figure applies specifically to the LLM key-value (KV) cache during inference, not total system memory. The KV cache is the dominant memory bottleneck in LLM inference, making the claim substantially accurate in that context. However, the phrasing "AI memory usage" is broader than what the evidence strictly supports, and results remain benchmark-based with real-world deployment unconfirmed.
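Why KV-cache compression matters can be seen with back-of-envelope arithmetic. The model dimensions below are assumed for illustration (they do not describe TurboQuant's method or any specific model): cache size scales with layers, KV heads, head dimension, and sequence length, so a 6x reduction there translates directly into serving headroom.

```python
# Rough KV-cache sizing sketch. Model dimensions are assumed, not taken
# from TurboQuant or Google Research; the 6x factor mirrors the reported
# reduction but the arithmetic here is purely illustrative.

def kv_cache_bytes(layers, kv_heads, head_dim, seq_len, bytes_per_value):
    # 2x accounts for storing both key and value tensors per token, per layer.
    return 2 * layers * kv_heads * head_dim * seq_len * bytes_per_value

fp16 = kv_cache_bytes(layers=32, kv_heads=8, head_dim=128,
                      seq_len=8192, bytes_per_value=2)
# A ~6x reduction implies an effective ~2.7 bits per stored value vs. 16.
compressed = fp16 / 6

print(f"fp16 KV cache: {fp16 / 2**30:.2f} GiB")
print(f"after 6x reduction: {compressed / 2**30:.2f} GiB")
```

Note this covers only the KV cache, consistent with the verdict above: model weights and activations are untouched, so total system memory falls by less than 6x.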

“As of March 29, 2026, artificial intelligence systems outperform humans in general computer use tasks.”

False

The claim that AI systems outperform humans in general computer use tasks as of March 29, 2026 is not supported by the evidence. The strongest supporting data comes from a narrow benchmark of "economically valuable tasks" (GDPVal), which does not represent the full breadth of general computer use. Independent academic sources indicate AI systems still show significant performance gaps on harder, open-ended tasks. Speculative forecasts about enterprise applications do not constitute demonstrated across-the-board superiority over humans.

“Artificial intelligence will result in a net loss of jobs, replacing more jobs than it creates.”

Misleading

The claim presents a contested, speculative outcome as settled fact. Current measured data shows AI-linked job creation outpacing AI-linked cuts by roughly 2-to-1, and leading academic institutions (Stanford, Anthropic) find no systematic unemployment increase for AI-exposed workers. Frequently cited figures like "300 million jobs" represent exposure or risk, not confirmed net losses. The long-run net effect remains genuinely uncertain, with major forecasters disagreeing on direction — making a definitive "net loss" assertion unsupported by the evidence.

“AI-generated deepfake X-ray images are sufficiently realistic to cause radiologists to make incorrect diagnoses.”

Misleading

The evidence confirms that AI-generated deepfake X-rays can deceive radiologists — with only 41% spontaneously detecting fakes in a major 2026 study — but it does not demonstrate that this deception causes incorrect diagnoses. The same study found comparable diagnostic accuracy on real versus synthetic images (91.3% vs. 92.4%), undermining the claim's causal assertion. The claim conflates "hard to detect" with "causes misdiagnosis," an inferential leap the available research does not support.

“Using artificial intelligence tools causes a decline in human intelligence over time.”

Misleading

Research links cognitive risks to excessive or exclusive AI reliance, not to AI tool use in general — making this claim a significant overstatement. Multiple peer-reviewed studies find that heavy, passive dependence on AI can reduce cognitive engagement and retention, but the same literature emphasizes that moderate use shows minimal impact and that outcomes depend on how tools are used. The blanket causal framing strips away these critical conditions and ignores evidence that AI can also augment cognition.