Tech

64 Tech claim verifications · avg. score 5.1/10 · 26 rated true or mostly true · 38 rated false or misleading

“An AI-generated podcast network publishes over 11,000 episodes per day by repurposing content from local news outlets without attribution.”

Mostly True

The claim is largely accurate. Multiple credible sources confirm that an AI podcast network (identified as "Daily News Now" or "Podcasts.ai") has been reported to produce approximately 11,000 episodes per day by repurposing local news content, often without crediting original outlets. However, the specific episode count traces back to a single investigation and has not been independently audited. The "without attribution" characterization applies to many — but not necessarily all — episodes, making the claim's absolute framing slightly overstated.

“Thousands of TikTok and Instagram videos promoting the Jenni AI study app did not disclose that they were paid advertisements.”

False

The claim that "thousands" of TikTok and Instagram videos promoting Jenni AI failed to disclose paid partnerships is not supported by available evidence. While Jenni AI did operate an affiliate/micro-influencer program, and one blogger noted suspected undisclosed affiliate links in "many" reviews, no audit, dataset, enforcement action, or quantitative analysis confirms non-disclosure at the scale of "thousands" of videos. The leap from anecdotal observations to a specific large-scale claim is unsupported speculation.

“AI deepfake detection technology is highly accurate and reliable as of March 15, 2026.”

Misleading

While some leading deepfake detection tools report 92–98% accuracy in controlled lab settings, these figures come largely from vendor benchmarks, not independent real-world testing. Multiple sources — including academic challenge benchmarks and forensic experts — document that detection accuracy drops by 45–50% under real-world conditions such as compression, low-quality media, and novel AI generators. Some deployed systems are only ~80% effective. Calling the technology "highly accurate and reliable" as a blanket characterization significantly overstates its current operational performance.

“A viral video shows Benjamin Netanyahu with six fingers, which is cited as evidence that the footage is AI-generated.”

Misleading

A viral video from Netanyahu's March 12 press conference did circulate widely, with social media users claiming a freeze-frame showed a sixth finger as proof of AI generation. However, multiple fact-checkers (PolitiFact, dedicated forensic analyses) confirmed the video shows five fingers — the "sixth" was an optical illusion caused by palm anatomy, lighting, and compression. AI detection tools found no evidence of synthetic media. The claim accurately describes a real social media event but misleadingly frames a debunked illusion as though the video genuinely depicts six fingers.

“Wireless earbuds communicate with each other by transmitting signals through the human brain.”

False

Wireless earbuds do not communicate by transmitting signals through the human brain. They use Bluetooth radio waves transmitted through the air, with one earbud typically relaying audio to the other. Even advanced technologies like Near-Field Magnetic Induction (NFMI) create a body-area network around the user — not through brain tissue. The only source making the "through the brain" claim is a low-credibility EMF-concern blog contradicted by every authoritative technical source reviewed.

“Smartphones use their microphones to actively listen to users' conversations in order to serve targeted advertisements.”

False

No credible, independent evidence supports the claim that smartphones actively listen through microphones to serve targeted ads. The primary supporting evidence — a leaked CMG marketing pitch deck — was walked back by the company itself. Independent scientific studies, including a Northeastern University analysis of 17,000+ Android apps, found no unauthorized microphone activation. The "eerily accurate" ads people experience are well-explained by extensive metadata collection: location data, browsing history, app usage, purchase records, and cross-device tracking — no eavesdropping required.

“5G networks operate on some of the same frequency bands that have been used in military-developed directed energy weapons.”

Mostly True

The claim is technically accurate but lacks important context. Military high-power microwave weapons do operate across broad frequency ranges (L through K band) that encompass 5G bands like 28 GHz and 39 GHz. However, the most commonly cited weapon — the Active Denial System — operates at 95 GHz, which is NOT a 5G frequency. Crucially, sharing a frequency band does not imply any functional similarity: 5G signals and directed energy weapons differ by orders of magnitude in power, beam focus, and intent.
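The "orders of magnitude" point can be made concrete with a back-of-envelope comparison. The wattages below are illustrative assumptions for the sketch, not sourced measurements: roughly 100 W of effective radiated power for a 5G mmWave small cell versus the roughly 100 kW output commonly reported for the Active Denial System.

```python
# Back-of-envelope comparison of radiated power (figures are
# illustrative assumptions, not measured values).
FIVE_G_SMALL_CELL_W = 100      # assumed 5G mmWave small-cell EIRP, ~100 W
ADS_OUTPUT_W = 100_000         # commonly reported ADS output, ~100 kW

ratio = ADS_OUTPUT_W / FIVE_G_SMALL_CELL_W
print(f"The ADS emits roughly {ratio:.0f}x the power of a 5G small cell")
# Note: the ADS also runs at 95 GHz and uses a tightly focused beam,
# so even shared bands (28/39 GHz vs. L-through-K band weapons) imply
# nothing about comparable effects.
```

Even this rough ratio, before accounting for beam focusing, shows why band overlap alone says nothing about functional similarity.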

“Automated bots account for more than 50% of global internet traffic.”

Mostly True

The claim is largely supported by Imperva/Thales' 2025 Bad Bot Report, which found automated bots made up 51% of global web traffic in 2024 — the first time bots surpassed humans. However, this figure comes from a single cybersecurity vendor with commercial incentives, and most sources citing it are echoing the same dataset rather than providing independent confirmation. The 50% threshold is crossed by just one percentage point, and the broad definition of "bots" includes legitimate crawlers and API calls, which may overstate the threat implied by the claim.

“Social media algorithms are intentionally designed to amplify outrage and contribute to the spread of cancel culture.”

Misleading

The claim has a real empirical core: engagement-optimizing algorithms do amplify emotionally charged and outrage-driven content, as demonstrated by randomized experiments. However, the claim overstates the evidence in two key ways. First, "intentionally designed to amplify outrage" conflates engagement optimization (a documented design goal) with deliberate outrage engineering (not established). Second, the link to cancel culture is plausible but not rigorously demonstrated — cancel culture is driven by multiple social, cultural, and media factors beyond algorithmic design.

“Windows 12 is scheduled to launch in 2026.”

False

Windows 12 is not scheduled to launch in 2026. The rumor traces back to a single PCWorld article that was retracted by its own publisher for failing editorial standards. The highest-authority tech outlets — Windows Central and PC Gamer — cite direct Microsoft sources confirming there is no plan to ship Windows 12 this year. The "Hudson Valley" codename fueling speculation was actually Windows 11 24H2, which already shipped. Microsoft has made zero official announcements about Windows 12; expert projections point to 2027 at the earliest.

“Algorithm-driven recommendation systems amplify extreme viewpoints more than moderate ones.”

Misleading

This claim overgeneralizes from mixed evidence. Some audits find YouTube's algorithm can elevate extreme content under specific conditions, but large-scale experiments show limited real-world effects on user opinions, and platforms like Reddit and Gab show no such amplification. The highest-quality research indicates that user choice — not algorithms alone — is often the primary driver of exposure to extreme content, and recommender systems can actually deamplify niche material when users don't engage with it. The claim is partially true but misleadingly broad.

“More than 30% of code written in 2026 is generated by AI tools.”

False

The claim that more than 30% of code written in 2026 is generated by AI tools is not supported by the strongest available evidence. The largest empirical study — covering 4.2 million developers from November 2025 through February 2026 — found AI-authored production code at 26.9%, below the 30% threshold. Higher estimates (41–42%) come from surveys that conflate "AI-assisted" with "AI-generated" code, inflating the figure. While AI coding tool adoption is widespread, usage rates do not equate to code generation share.

“More than 50% of content engagement on major social media platforms is generated by bots rather than humans as of March 1, 2026.”

False

This claim is false. It conflates overall internet traffic — where bots may account for ~51% — with content engagement on social media platforms, which is a fundamentally different metric. The best direct evidence, a peer-reviewed study, finds only about 20% of social media activity is bot-generated. Even the highest platform-specific figure cited (40% of Facebook posts being machine-generated) measures posting volume, not engagement, and still falls short of 50%. No credible source supports the claim that bots generate more than half of social media engagement.

“Jeffrey Epstein created Bitcoin.”

False

This claim is false. Bitcoin was created by the pseudonymous Satoshi Nakamoto, who published its whitepaper in October 2008 and launched the network in January 2009. Jeffrey Epstein's documented involvement in cryptocurrency — investments in Coinbase, Blockstream, and MIT's Digital Currency Initiative — all occurred in 2014–2015, years after Bitcoin already existed. Viral emails claiming Epstein was Satoshi Nakamoto were confirmed to be doctored fakes. No credible evidence links Epstein to Bitcoin's creation.

“Generative AI will eliminate more white-collar jobs than it creates between 2026 and 2036.”

Misleading

While generative AI will significantly disrupt many white-collar tasks and roles, the claim that it will eliminate more white-collar jobs than it creates between 2026 and 2036 is not supported by the available evidence. The most rigorous economic models (Goldman Sachs, WEF, KPMG) project net job gains, not losses. Supporting evidence conflates task automation and slowed hiring with net job elimination — a critical logical leap. Real disruption is occurring, but framing it as guaranteed net loss overstates what the data shows.

“Claude Opus 4.6 successfully built a working C compiler.”

Mostly True

Claude Opus 4.6 did produce a functional C compiler — a 100,000-line Rust codebase that compiles Linux 6.9, passes 99% of GCC's torture tests, and builds major projects like FFmpeg, Redis, and PostgreSQL. However, the claim omits important context: the compiler relies on GCC's assembler and linker for critical steps, independent testers found reliability issues with basic programs, it was built by 16 parallel AI agents (not one instance) with human oversight, and it cost ~$20,000 in API usage. It works, but with significant caveats.

“Generative AI models consistently produce factual inaccuracies in their outputs.”

Misleading

Generative AI models do produce factual inaccuracies, and this is a well-documented, persistent challenge confirmed by peer-reviewed research and major benchmarks. However, the word "consistently" overstates the problem. Error rates vary enormously — from below 1% on grounded summarization tasks to over 30% on open-domain reasoning — depending on the task, domain, model, and whether retrieval tools are used. Hallucination rates are also declining over time. The claim describes a real issue but frames it in a misleadingly uniform way.

“Live sports broadcasts cannot be convincingly deepfaked using current technology as of March 1, 2026.”

False

This claim is false. As of March 2026, real-time deepfake systems can already generate convincing manipulations of sports footage at broadcast frame rates (40–50 FPS) on both datacenter and consumer hardware. While limitations remain with extreme camera angles and multi-person occlusions, these are partial constraints — not fundamental barriers. Convincing deepfakes of live sports segments, interviews, and selective broadcast shots are demonstrably achievable today, making the blanket assertion that they "cannot" be done inaccurate.
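The broadcast frame rates cited above translate directly into the per-frame latency budget a real-time deepfake pipeline must meet, which a one-line calculation makes explicit:

```python
# Per-frame time budget implied by the broadcast frame rates
# cited above (40-50 FPS): total generation latency per frame
# must fit inside this window.
for fps in (40, 50):
    budget_ms = 1000 / fps
    print(f"{fps} FPS -> {budget_ms:.0f} ms per frame")
# 40 FPS -> 25 ms per frame
# 50 FPS -> 20 ms per frame
```

In other words, "broadcast frame rate" means the entire manipulation must complete in 20 to 25 milliseconds per frame, a bar the cited systems reportedly clear.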

“Artificial intelligence will not fully replace human accountants in the accounting profession by 2036.”

Mostly True

The claim is well-supported. No credible source predicts the complete elimination of human accountants by 2036. Multiple authoritative sources — including Stanford GSB, Deloitte leadership, PwC research, and WEF-linked analyses — consistently project that AI will automate routine accounting tasks but that human judgment, ethical oversight, and advisory roles will persist. However, the claim's "not fully replace" framing sets a very high bar that can obscure the reality: the profession faces steep declines, with most transactional work potentially automated by 2035 and significant job displacement well before 2036.

“Engine displacement is considered one of the most important characteristics of an engine.”

True

The claim that engine displacement is "one of the most important" engine characteristics is well-supported. Multiple credible sources — including Chase.com, The Drive, and automotive training references — describe displacement as "key," "crucial," and "fundamental" to engine performance and classification. The claim uses modest, non-exclusive language ("one of"), which is consistent with the fact that other parameters (compression ratio, turbocharging, valve timing) also matter significantly. No credible source disputes displacement's top-tier status among engine characteristics.
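Displacement's standing as a headline specification follows from its simple geometric definition: swept cylinder volume times cylinder count. A minimal sketch of the standard formula (the 86 mm bore and stroke are just a familiar "square" 2.0 L example, not a figure from the sources above):

```python
import math

def displacement_cc(bore_mm: float, stroke_mm: float, cylinders: int) -> float:
    """Total engine displacement in cubic centimetres.

    Displacement = pi/4 * bore^2 * stroke * cylinder count,
    with bore and stroke converted from mm to cm.
    """
    bore_cm = bore_mm / 10
    stroke_cm = stroke_mm / 10
    return math.pi / 4 * bore_cm ** 2 * stroke_cm * cylinders

# An inline-four with 86 mm bore and 86 mm stroke works out
# to just under 2000 cc, i.e. a typical "2.0 L" engine.
print(round(displacement_cc(86, 86, 4)))  # -> 1998
```

Because displacement falls straight out of bore, stroke, and cylinder count, it summarizes an engine's size in one number, which is why it anchors both performance discussion and regulatory classification.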