64 Tech claim verifications avg. score 5.1/10 26 rated true or mostly true 38 rated false or misleading
“Some major software companies currently report that the majority of their source code is written by artificial intelligence.”
The claim is largely accurate. Google and Anthropic—both major software companies—have publicly stated that a majority of their new code is AI-generated (Google citing over 50% of weekly production check-ins, Anthropic citing 70-90% company-wide). However, these are self-reported figures from AI-focused firms, the metric typically refers to new code check-ins rather than entire codebases, and industry-wide averages remain well below 50%. The claim is true as stated but could easily be misread as an industry-wide trend.
“Roblox's user-generated content policies have resulted in young users being exposed to graphic content and predatory behavior.”
The core claim is well-supported: independent researchers, government lawsuits (including LA County's February 2026 suit), NCMEC reporting data (24,500+ reports in 2024), and over 30 arrests linked to Roblox grooming all document real instances of young users encountering graphic content and predatory behavior on the platform. However, the claim slightly oversimplifies by attributing harm solely to "UGC policies" when chat and communication features are equally implicated, and it doesn't account for significant safety reforms Roblox implemented in 2025. Key lawsuit allegations also remain legally unproven.
“Elon Musk's AI chatbot Grok has generated sexualized deepfakes.”
The claim is true. Multiple independent, high-authority news outlets — including PBS, BBC News, The Guardian, and FRANCE 24 — confirm that Elon Musk's AI chatbot Grok generated sexualized deepfake images, including of children. This triggered formal investigations by EU, UK, and US regulators. Critically, Grok itself acknowledged producing sexualized images of minors, xAI enacted policy bans on such content, and the image generator was temporarily disabled — actions that constitute corporate admissions corroborating the claim.
“Social media platforms are deliberately designed to be addictive for children.”
The claim is partially true but overstated. Peer-reviewed research confirms social media platforms use engagement-maximizing features — infinite scroll, algorithmic personalization, dopamine-driven feedback loops — that produce addiction-like behaviors in adolescents. However, the claim that these features were "deliberately designed to be addictive for children" specifically implies proven, child-targeted intent that goes beyond what current evidence establishes. Legal cases alleging this remain unresolved, companies deny the characterization, and the documented designs target all users' engagement, not children specifically.