2 published verifications about ChatGPT
“A technology executive used ChatGPT to help develop a personalized cancer vaccine for his dog, which had been diagnosed with cancer.”
The core claim is accurate: Sydney-based tech professional Paul Conyngham used ChatGPT, alongside other AI tools, to help plan and develop a personalized mRNA cancer vaccine for his dog Rosie after her cancer diagnosis. However, "technology executive" is a loose description (sources variously call him a tech entrepreneur, AI consultant, or data engineer), and ChatGPT's role was primarily that of a research and planning assistant: human scientists at UNSW performed the actual genome sequencing, vaccine synthesis, and treatment.
“AI chatbots, such as ChatGPT, provide medical advice that is consistently reliable and safe for users.”
The claim that AI chatbots like ChatGPT provide "consistently reliable and safe" medical advice is not supported by the evidence. Multiple high-quality studies from 2024–2026 found that ChatGPT gave incorrect advice in over 51% of medical emergency scenarios, exhibited hallucination rates of 50–82%, and correctly identified conditions in fewer than 34.5% of real-world cases. ECRI designated the misuse of AI chatbots as the top health technology hazard for 2026. While chatbots show promise in narrow, controlled tasks, their performance is neither consistent nor safe enough for general medical advice.