2 claim verifications about Artificial Intelligence Chatbots

“Five major tech companies, including Anthropic, OpenAI, and Microsoft, have launched AI chatbots specifically for consumer health support in 2026.”

False

The specific claim that five major tech companies launched consumer health chatbots in 2026 is not supported by the evidence. Multiple credible sources confirm dedicated health AI products from only three companies: Anthropic (Claude for Healthcare), OpenAI (ChatGPT Health), and Microsoft (Copilot Health). A possible fourth (Amazon) is weakly documented by a single source describing a different type of tool, and no fifth company's launch is substantiated. The numerical assertion, the claim's defining element, is unverified.

“AI chatbots, such as ChatGPT, provide medical advice that is consistently reliable and safe for users.”

False

The claim that AI chatbots like ChatGPT provide "consistently reliable and safe" medical advice is not supported by the evidence. Multiple high-quality studies from 2024–2026 found that ChatGPT gave incorrect advice in over 51% of medical emergency scenarios, exhibited hallucination rates of 50–82%, and correctly identified conditions in fewer than 34.5% of real-world cases. ECRI designated AI chatbot misuse as the top health technology hazard for 2026. While chatbots show promise in narrow, controlled tasks, their performance is neither consistent nor safe enough for general medical advice.