Library

16 published verifications about artificial intelligence

“Artificial intelligence will cause widespread job loss among software engineers.”

False
· 100+ views

The available evidence does not support the prediction that AI will cause widespread job loss among software engineers. High-authority sources from Morgan Stanley, MIT Sloan, arXiv, and Snowflake consistently point toward augmentation, productivity gains, and net job growth rather than broad displacement. The evidence cited in favor of the claim — worse outcomes for recent graduates in AI-exposed fields, economy-wide self-reports — does not isolate software engineers, does not establish AI as the causal driver, and conflates hiring difficulty with job destruction.

“Doctronic, an AI company, is prescribing renewal medications to patients in Utah without physician involvement.”

Misleading

Utah's Doctronic pilot is designed to eventually allow AI-driven prescription renewals without routine physician sign-off, but the claim significantly overstates current reality. As of early 2026, the program's active phase requires physician review of all renewals before they reach pharmacies. Even in later phases, escalation pathways to licensed physicians remain structurally embedded. The present-tense assertion of "no physician involvement" conflates the program's future autonomous design with its current operational requirements.

“Oxford University has predicted that the percentage of jobless people will decline as artificial intelligence advances.”

False

No Oxford University source has made the specific prediction attributed to it. Oxford-affiliated research discusses AI's complex labor market effects — noting that mass displacement fears may be overstated and that AI could create new roles — but none of these findings constitute a forecast that the percentage of jobless people will decline as AI advances. The claim conflates cautious, nuanced commentary with a definitive institutional prediction that does not exist in the evidence.

“Artificial intelligence will displace more jobs than it creates on a net basis.”

Misleading

The claim that AI will displace more jobs than it creates on a net basis overstates the available evidence. While documented displacement exists in specific sectors (e.g., computer systems design, entry-level roles, AI-vulnerable occupations), the most authoritative aggregate assessments — from the Federal Reserve, World Economic Forum, PwC, and Goldman Sachs — show near-zero net headcount effects or project net job creation. The claim treats localized displacement as proof of an economy-wide net loss, which current evidence does not support.

“Researchers deliberately fabricated a fictitious disease called Bixonimania using AI-generated preprints and found that AI systems subsequently treated it as a legitimate medical condition.”

Mostly True

The Bixonimania experiment is documented in an arXiv preprint and echoed by a Johns Hopkins-affiliated post, and no source contradicts its account. However, the specific claim rests on a single non-peer-reviewed preprint with no independent high-authority confirmation. The broader phenomenon — AI systems confidently elaborating on fabricated medical content — is well-established across multiple peer-reviewed studies, lending plausibility. The claim accurately reflects what was reported but should be understood as describing a preprint finding, not a peer-reviewed, independently replicated result.

“The majority of startup failures are primarily caused by issues related to artificial intelligence.”

False

This claim is not supported by the evidence. Large-scale startup failure databases consistently show the leading causes are no market need (42%), running out of cash (29%), wrong team (23%), and competition (19%) — none of which are AI-related. While AI startups do fail at high rates, even those failures are largely attributed to classic business problems like poor product-market fit. The claim conflates "AI startups failing" with "startup failures caused by AI," which are fundamentally different statements.

“Artificial intelligence is responsible for generating the majority of software code being written as of 2026.”

False

The claim that AI generates the majority of software code as of 2026 is not supported by the evidence. The most rigorous measurements place AI-authored code at 22–29% of actual code output, while the often-cited 41% figure from JetBrains refers to lines "touched" by AI — not independently generated. High adoption rates for AI coding tools do not equate to AI writing most code. No credible primary dataset shows AI-generated code exceeding 50% globally.

“TurboQuant compression technology can optimize AI memory usage by more than 5 times.”

Mostly True

Google Research confirms TurboQuant achieves at least 6x memory reduction — exceeding the claimed 5x threshold — but this figure applies specifically to the LLM key-value (KV) cache during inference, not total system memory. The KV cache is the dominant memory bottleneck in LLM inference, making the claim substantially accurate in that context. However, the phrasing "AI memory usage" is broader than what the evidence strictly supports, and results remain benchmark-based with real-world deployment unconfirmed.
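To see why a 6x reduction in the KV cache matters so much for inference, it helps to estimate how large that cache actually gets. The sketch below is illustrative arithmetic only: the model dimensions are hypothetical (roughly a 70B-class model with grouped-query attention), and it does not implement TurboQuant's actual quantization method, only the memory accounting.

```python
# Illustrative estimate of LLM key-value (KV) cache memory and the effect
# of a 6x compression factor. All model dimensions are assumptions chosen
# for demonstration; they are not taken from the TurboQuant work.

def kv_cache_bytes(n_layers, n_kv_heads, head_dim, seq_len, bytes_per_elem):
    # Factor of 2 covers the key tensor plus the value tensor per layer.
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

# Hypothetical 70B-class model serving a 32k-token context in fp16.
baseline = kv_cache_bytes(n_layers=80, n_kv_heads=8, head_dim=128,
                          seq_len=32_768, bytes_per_elem=2)
compressed = baseline / 6  # the claimed >=6x reduction applies to this cache only

print(f"fp16 KV cache:       {baseline / 2**30:.1f} GiB")   # -> 10.0 GiB
print(f"6x-compressed cache: {compressed / 2**30:.1f} GiB")  # -> 1.7 GiB
```

Even at these modest illustrative dimensions, the cache runs to gigabytes per sequence, which is why compressing it (rather than total system memory) is where the benchmark gains concentrate.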

“As of March 29, 2026, artificial intelligence systems outperform humans in general computer use tasks.”

False

The claim that AI systems outperform humans in general computer use tasks as of March 29, 2026, is not supported by the evidence. The strongest supporting data comes from a narrow benchmark of "economically valuable tasks" (GDPVal), which does not represent the full breadth of general computer use. Independent academic sources indicate AI systems still show significant performance gaps on harder, open-ended tasks. Speculative forecasts about enterprise applications do not constitute demonstrated across-the-board superiority over humans.

“AI-generated deepfake X-ray images are sufficiently realistic to cause radiologists to make incorrect diagnoses.”

Misleading

The evidence confirms that AI-generated deepfake X-rays can deceive radiologists — with only 41% spontaneously detecting fakes in a major 2026 study — but it does not demonstrate that this deception causes incorrect diagnoses. The same study found comparable diagnostic accuracy on real versus synthetic images (91.3% vs. 92.4%), undermining the claim's causal assertion. The claim conflates "hard to detect" with "causes misdiagnosis," an inferential leap the available research does not support.

“Using artificial intelligence tools causes a decline in human intelligence over time.”

Misleading

Research links cognitive risks to excessive or exclusive AI reliance, not to AI tool use in general — making this claim a significant overstatement. Multiple peer-reviewed studies find that heavy, passive dependence on AI can reduce cognitive engagement and retention, but the same literature emphasizes that moderate use shows minimal impact and that outcomes depend on how tools are used. The blanket causal framing strips away these critical conditions and ignores evidence that AI can also augment cognition.

“The Apple Watch can predict heart failure with high accuracy using an AI model that analyzes peak oxygen uptake (pVO2) data.”

Misleading
· 50+ views

The claim overstates what current evidence supports. While the TRUE-HF AI model uses Apple Watch data to estimate daily fitness surrogates correlated with pVO2, the Apple Watch does not directly measure peak oxygen uptake — it estimates submaximal VO2max with known error and bias. Published findings show promising risk associations (e.g., threefold higher event risk per 10% fitness drop), but no validated "high accuracy" prediction metrics (AUC, sensitivity, specificity) for heart failure have been reported for this specific pVO2-based approach. The research is promising but preliminary.

“Artificial intelligence will have a net positive impact on the climate.”

Misleading
· 100+ views

This claim overstates the certainty of AI's climate benefits. Leading authorities like the IEA and UNFCCC describe AI's potential emissions reductions as conditional — dependent on widespread adoption, smart governance, and clean energy supply. Meanwhile, AI-driven data center growth is already increasing emissions, with energy demand projected to reach ~1,050 TWh by 2026, much of it fossil-powered. AI could be net positive for the climate under the right conditions, but the unconditional claim that it will be is not supported by current evidence.

“Artificial intelligence will not fully replace human accountants in the accounting profession by 2036.”

Mostly True
· 100+ views

The claim is well-supported. No credible source predicts the complete elimination of human accountants by 2036. Multiple authoritative sources — including Stanford GSB, Deloitte leadership, PwC research, and WEF-linked analyses — consistently project that AI will automate routine accounting tasks but that human judgment, ethical oversight, and advisory roles will persist. However, the claim's "not fully replace" framing sets a very high bar that can obscure the reality: the profession faces steep declines, with most transactional work potentially automated by 2035 and significant job displacement well before 2036.

“It is possible to use artificial intelligence to develop an investment strategy that consistently outperforms the stock market.”

False
· 250+ views

The claim that AI can "consistently" outperform the stock market is not supported by the available evidence. While AI-driven strategies have shown impressive results in specific contexts — competition rankings, single strong years, and research frameworks — no source demonstrates durable, net-of-fees outperformance across multiple market regimes. Academic research and institutional analysis indicate that as AI adoption spreads, the very edges it exploits tend to erode through increased market efficiency, transaction costs, and crowding effects.

“Some major software companies currently report that the majority of their source code is written by artificial intelligence.”

Mostly True
· 500+ views

The claim is largely accurate. Google and Anthropic — both major software companies — have publicly stated that a majority of their new code is AI-generated (Google citing over 50% of weekly production check-ins, Anthropic citing 70–90% company-wide). However, these are self-reported figures from AI-focused firms, the metric typically refers to new code check-ins rather than entire codebases, and industry-wide averages remain well below 50%. The claim is true as stated but could easily be misread as an industry-wide trend.