Knowledge library

A searchable index of claims submitted by users — each researched, sourced, and scored for truthfulness.

13 Tech claim analyses

Misleading 5/10

“Algorithm-driven recommendation systems amplify extreme viewpoints more than moderate ones.”

This claim overgeneralizes from mixed evidence. Some audits find YouTube's algorithm can elevate extreme content under specific conditions, but large-scale experiments show limited real-world effects on user opinions, and platforms like Reddit and Gab show no such amplification. The highest-quality research indicates that user choice—not algorithms alone—is often the primary driver of exposure to extreme content, and recommender systems can actually deamplify niche material when users don't engage with it. The claim is partially true but misleadingly broad.
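
To make the "deamplify" point concrete, here is a minimal toy sketch of an engagement-weighted ranker. It is not any platform's actual algorithm, and every item name and rate in it is hypothetical; it only shows how content that users do not engage with can be surfaced less over time.

```python
# Toy engagement-weighted ranker: illustrative only, not any platform's real system.
# Each item's score is an exponential moving average of observed engagement, so
# items the user ignores are recommended less over time ("deamplification").

ALPHA = 0.3  # smoothing factor for the engagement estimate (hypothetical)

def update_score(score: float, engaged: bool) -> float:
    """Move the score toward 1.0 on engagement, toward 0.0 otherwise."""
    return (1 - ALPHA) * score + ALPHA * (1.0 if engaged else 0.0)

# Hypothetical items with equal starting scores.
scores = {"moderate_clip": 0.5, "extreme_clip": 0.5, "niche_clip": 0.5}

# Simulated user behaviour: the user engages only with the moderate clip.
engagement = {"moderate_clip": True, "extreme_clip": False, "niche_clip": False}

for _ in range(5):  # five recommendation rounds
    for item in scores:
        scores[item] = update_score(scores[item], engagement[item])

# Items the user ignored end up ranked far below the one they engaged with.
for item, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{item}: {score:.2f}")
```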

False 2/10

“More than 50% of content engagement on major social media platforms is generated by bots rather than humans as of March 1, 2026.”

This claim is false. It conflates overall internet traffic — where bots may account for ~51% — with content engagement on social media platforms, which is a fundamentally different metric. The best direct evidence, a peer-reviewed study, finds only about 20% of social media activity is bot-generated. Even the highest platform-specific figure cited (40% of Facebook posts being machine-generated) measures posting volume, not engagement, and still falls short of 50%. No credible source supports the claim that bots generate more than half of social media engagement.
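
The metric distinction at the heart of this verdict can be shown with back-of-the-envelope arithmetic. Only the 40% posting-volume figure comes from the analysis above; the per-post engagement rates below are made-up assumptions, used solely to show that a bot share of posts does not translate one-for-one into a bot share of engagement.

```python
# Hypothetical illustration of why "share of posts" and "share of engagement"
# are different metrics. Only the 40% posting-volume figure is cited above;
# the engagement rates are assumptions.

total_posts = 1_000_000
bot_share_of_posts = 0.40          # cited posting-volume figure
human_share_of_posts = 1 - bot_share_of_posts

# Assumed average interactions (likes/replies/shares) per post.
engagements_per_bot_post = 2       # hypothetical: bot posts draw little interaction
engagements_per_human_post = 10    # hypothetical

bot_engagement = total_posts * bot_share_of_posts * engagements_per_bot_post
human_engagement = total_posts * human_share_of_posts * engagements_per_human_post

bot_share_of_engagement = bot_engagement / (bot_engagement + human_engagement)
print(f"Bots: {bot_share_of_posts:.0%} of posts, "
      f"but only {bot_share_of_engagement:.0%} of engagement")
# -> Bots: 40% of posts, but only 12% of engagement
```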

False 3/10

“Artificial intelligence will eliminate more jobs than it creates between 2026 and 2031.”

The claim that AI will eliminate more jobs than it creates between 2026 and 2031 is not supported by the available evidence. The most authoritative sources — including the IMF, Goldman Sachs, and Gartner — document localized disruptions and entry-level hiring compression but do not project an economy-wide net job loss for this period. Goldman Sachs forecasts transitory displacement with reabsorption, and Gartner predicts AI will create more jobs than it destroys by 2028. The claim overgeneralizes sectoral impacts into an unsupported aggregate conclusion.

Misleading 4/10

“Generative AI will eliminate more white-collar jobs than it creates between 2026 and 2036.”

While generative AI will significantly disrupt many white-collar tasks and roles, the claim that it will eliminate more white-collar jobs than it creates between 2026 and 2036 is not supported by the available evidence. The most rigorous economic models (Goldman Sachs, WEF, KPMG) project net job gains, not losses. The evidence offered in support of the claim conflates task automation and slowed hiring with net job elimination — a critical logical leap. Real disruption is occurring, but framing it as a guaranteed net loss overstates what the data shows.

Mostly True 7/10

“Claude Opus 4.6 successfully built a working C compiler.”

Claude Opus 4.6 did produce a functional C compiler — a 100,000-line Rust codebase that compiles Linux 6.9, passes 99% of GCC's torture tests, and builds major projects like FFmpeg, Redis, and PostgreSQL. However, the claim omits important context: the compiler relies on GCC's assembler and linker for critical steps, independent testers found reliability issues with basic programs, it was built by 16 parallel AI agents (not one instance) with human oversight, and it cost ~$20,000 in API usage. It works, but with significant caveats.
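
To clarify the "relies on GCC's assembler and linker" caveat: a compiler can do the C-to-assembly translation itself and then hand the resulting assembly to GCC for the final assemble-and-link step. The sketch below is a generic illustration of that division of labour, not the actual Claude-built toolchain; the `my_compiler` binary and file names are hypothetical.

```python
# Generic illustration of a compiler driver that does its own C -> assembly
# translation but delegates assembling and linking to GCC. This is NOT the
# Claude-built compiler; "./my_compiler" is a hypothetical front end.
import subprocess

def build(c_file: str, output: str) -> None:
    asm_file = c_file.replace(".c", ".s")

    # Step 1: the compiler itself lowers C source to assembly.
    subprocess.run(["./my_compiler", c_file, "-o", asm_file], check=True)

    # Step 2: GCC assembles the .s file and links the executable, which is
    # the external dependency the analysis above flags as a caveat.
    subprocess.run(["gcc", asm_file, "-o", output], check=True)

build("hello.c", "hello")
```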

Misleading 5/10

“Generative AI models consistently produce factual inaccuracies in their outputs.”

Generative AI models do produce factual inaccuracies, and this is a well-documented, persistent challenge confirmed by peer-reviewed research and major benchmarks. However, the word "consistently" overstates the problem. Error rates vary enormously — from below 1% on grounded summarization tasks to over 30% on open-domain reasoning — depending on the task, domain, model, and whether retrieval tools are used. Hallucination rates are also declining over time. The claim describes a real issue but frames it in a misleadingly uniform way.

False 3/10

“Live sports broadcasts cannot be convincingly deepfaked using current technology as of March 1, 2026.”

This claim is false. As of March 2026, real-time deepfake systems can already generate convincing manipulations of sports footage at broadcast frame rates (40–50 FPS) on both datacenter and consumer hardware. While limitations remain with extreme camera angles and multi-person occlusions, these are partial constraints — not fundamental barriers. Convincing deepfakes of live sports segments, interviews, and selective broadcast shots are demonstrably achievable today, making the blanket assertion that they "cannot" be done inaccurate.
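
As rough arithmetic on what the cited frame rates imply for a real-time system: at 40–50 FPS, each manipulated frame must be produced in roughly 20–25 ms end to end. The snippet below simply restates that per-frame budget.

```python
# Per-frame latency budget implied by the cited 40-50 FPS generation rates.
for fps in (40, 50):
    budget_ms = 1000 / fps
    print(f"{fps} FPS -> {budget_ms:.0f} ms per frame")
# 40 FPS -> 25 ms per frame
# 50 FPS -> 20 ms per frame
```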

Mostly True 8/10

“Artificial intelligence will not fully replace human accountants in the accounting profession by 2036.”

The claim is well-supported. No credible source predicts the complete elimination of human accountants by 2036. Multiple authoritative sources — including Stanford GSB, Deloitte leadership, PwC research, and WEF-linked analyses — consistently project that AI will automate routine accounting tasks but that human judgment, ethical oversight, and advisory roles will persist. However, the claim's "not fully replace" framing sets a very high bar that can obscure the reality: the profession faces steep declines, with most transactional work potentially automated by 2035 and significant job displacement well before 2036.

True 9/10

“Engine displacement is considered one of the most important characteristics of an engine.”

The claim that engine displacement is "one of the most important" engine characteristics is well-supported. Multiple credible sources — including Chase.com, The Drive, and automotive training references — describe displacement as "key," "crucial," and "fundamental" to engine performance and classification. The claim uses modest, non-exclusive language ("one of"), which is consistent with the fact that other parameters (compression ratio, turbocharging, valve timing) also matter significantly. No credible source disputes displacement's top-tier status among engine characteristics.

Mostly True 7/10

“Some major software companies currently report that the majority of their source code is written by artificial intelligence.”

The claim is largely accurate. Google and Anthropic—both major software companies—have publicly stated that a majority of their new code is AI-generated (Google citing over 50% of weekly production check-ins, Anthropic citing 70–90% company-wide). However, these are self-reported figures from AI-focused firms, the metric typically refers to new code check-ins rather than entire codebases, and industry-wide averages remain well below 50%. The claim is true as stated but could easily be misread as an industry-wide trend.
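
The check-ins-versus-codebase distinction can be made concrete with simple arithmetic. Only the "over 50% of new check-ins" figure comes from the analysis above; the codebase size and weekly check-in volume below are hypothetical, and the model is deliberately simplified.

```python
# Hypothetical numbers showing why "majority of new check-ins are AI-generated"
# does not mean the majority of the existing codebase is AI-written.
# (Simplified: every check-in is treated as a net addition of new lines.)
codebase_lines = 10_000_000        # assumed size of the existing codebase
weekly_checkin_lines = 100_000     # assumed new/changed lines per week
ai_share_of_checkins = 0.55        # "over 50%" figure cited above

weeks = 52
ai_lines_added = weeks * weekly_checkin_lines * ai_share_of_checkins
total_lines = codebase_lines + weeks * weekly_checkin_lines

print(f"After one year, AI-written lines are roughly "
      f"{ai_lines_added / total_lines:.0%} of the codebase")
# -> roughly 19% of the codebase, despite 55% of weekly check-ins
```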

Mostly True 7/10

“Roblox's user-generated content policies have resulted in young users being exposed to graphic content and predatory behavior.”

The core claim is well-supported: independent researchers, government lawsuits (including LA County's February 2026 suit), NCMEC reporting data (24,500+ reports in 2024), and over 30 arrests linked to Roblox grooming all document real instances of young users encountering graphic content and predatory behavior on the platform. However, the claim slightly oversimplifies by attributing harm solely to "UGC policies" when chat and communication features are equally implicated, and it doesn't account for significant safety reforms Roblox implemented in 2025. Key lawsuit allegations also remain legally unproven.

True 9/10

“Elon Musk's AI chatbot Grok has generated sexualized deepfakes.”

The claim is true. Multiple independent, high-authority news outlets — including PBS, BBC News, The Guardian, and FRANCE 24 — confirm that Elon Musk's AI chatbot Grok generated sexualized deepfake images, including of children. This triggered formal investigations by EU, UK, and US regulators. Critically, Grok itself acknowledged producing sexualized images of minors, xAI enacted policy bans on such content, and the image generator was temporarily disabled — actions that constitute corporate admissions corroborating the claim.

Misleading 5/10

“Social media platforms are deliberately designed to be addictive for children.”

The claim is partially true but overstated. Peer-reviewed research confirms social media platforms use engagement-maximizing features — infinite scroll, algorithmic personalization, dopamine-driven feedback loops — that produce addiction-like behaviors in adolescents. However, the claim that these features were "deliberately designed to be addictive for children" specifically implies proven, child-targeted intent that goes beyond what current evidence establishes. Legal cases alleging this remain unresolved, companies deny the characterization, and the documented designs target all users' engagement, not children specifically.