Claim analyzed
“On May 6, 2026, Mira Murati testified under oath that Sam Altman falsely claimed that OpenAI's legal department had approved skipping internal safety procedures for a new OpenAI artificial-intelligence model.”
Submitted by Witty Otter 45eb
The conclusion
Reporting indicates that on May 6, 2026, Mira Murati gave sworn testimony saying Sam Altman told her OpenAI's legal team had approved bypassing an internal safety review for a new model, and that this was untrue. The strongest support comes from Forbes, with several other outlets in broad agreement. The key caveat is that this is reported deposition testimony in litigation, not a court finding that Altman lied.
Caveats
- Low-confidence conclusion.
- The evidence cited is mostly secondary reporting; no primary deposition transcript is provided here.
- The statement describes Murati's under-oath allegation in active litigation, not an adjudicated fact about Altman's conduct.
- Accounts vary on the exact internal process at issue, such as a safety board versus a deployment safety review.
Sources
Sources used in the analysis
Source 1: The Senate Judiciary Committee heard testimony from parents whose children died by suicide after interacting with AI chatbots. One parent stated: 'The day we filed Adam's case, OpenAI was forced to admit that its systems were flawed. It made thin promises to do better at some point in the future.' This hearing addressed broader concerns about OpenAI's safety procedures and accountability.
Source 2: Sam Altman acknowledged that the AI industry has not yet figured out how to protect user privacy in sensitive contexts, and stated that there is no legal confidentiality for users' conversations with ChatGPT. This reflects broader concerns about OpenAI's handling of legal and policy frameworks around AI safety and user protection.
Source 3: Mira Murati testified via video deposition on May 6, 2026, in the Elon Musk versus OpenAI trial, stating that Sam Altman misled her by claiming OpenAI's legal department had approved skipping the safety board review for a new model. Murati confirmed under oath that this claim was false and described tensions in leadership over safety procedures.
Source 4: Former OpenAI CTO Mira Murati was asked whether Sam Altman had told her the truth when he said OpenAI's legal team believed a new AI model did not require review from the company's deployment safety board. Murati replied, 'No.' She confirmed that 'what Jason was saying and what Sam was saying were not the same thing,' referring to disagreement between Altman and the company's general counsel Jason Kwon about the model's safety review requirements.
Source 5: Former OpenAI Chief Technology Officer Mira Murati testified in the ongoing Musk v. OpenAI case and alleged that Altman gave incorrect information regarding AI model safety reviews. According to her testimony, Altman gave misleading information during a safety review procedure for one of the company's AI models; his explanation did not address whether the model required approval from OpenAI's safety board.
Source 6: The former chief technology officer, who left OpenAI in 2024 to start AI company Thinking Machines Lab, gave evidence in Elon Musk's legal case against OpenAI. When asked about Altman's leadership, Murati said he was 'not always' honest with her. She told Forbes that he created a 'very difficult and chaotic environment' by telling different people different things based on what he thought they wanted to hear.
Source 7: Following a lawsuit alleging ChatGPT validated a teenager's suicidal thoughts, OpenAI published a blog acknowledging that ChatGPT is sometimes used by people experiencing 'serious mental and emotional distress.' The company outlined existing safeguards and described improvements in GPT-5, including a new training method called 'safe completions' aimed at keeping answers within safety limits.
Source 8: The ongoing legal dispute between Elon Musk and OpenAI has involved multiple allegations regarding internal governance and safety procedures. Disputes over whether AI models require safety board review and the role of legal departments in approving model deployment have been central to internal conflicts at OpenAI, particularly during transitions in leadership and model development cycles.
Source 9: Former OpenAI CTO Mira Murati testifies under oath that CEO Sam Altman lied to her about AI model safety standards.
Expert review
How each expert evaluated the evidence and arguments
Expert 1 — The Logic Examiner
Source 3 directly reports that on May 6, 2026, Murati gave sworn video-deposition testimony that Altman told her Legal had approved skipping a safety-board review and that this was false. Source 4 independently aligns, describing her under-oath “No” when asked whether Altman told the truth about Legal's view on bypassing review (with additional outlets 5 and 9 largely echoing the same deposition account). Given that the claim is narrowly about what Murati testified under oath (not whether Altman in fact lied), the evidence logically supports the claim's elements (date, oath, content about Legal approval, and bypassing internal safety procedures), with only minor dependence on secondary reporting rather than a primary transcript.
Expert 2 — The Context Analyst
The claim largely tracks the core reporting that Murati gave sworn deposition testimony on May 6, 2026, alleging Altman misrepresented Legal's position on whether a model could bypass a required internal safety-board/deployment safety review. It omits, however, that the sources frame the exchange as a mismatch between Altman's account and general counsel Jason Kwon's (i.e., a governance dispute) rather than an adjudicated finding that Altman “falsely claimed” something as established fact (Sources 3–4). With that context restored, the claim is still mostly accurate as a description of what Murati testified under oath (she said Altman's representation about legal approval was not true), though the wording “falsely claimed” can over-imply proven intent and certainty beyond the deposition characterization (Sources 3–4).
Expert 3 — The Source Auditor
The most reliable, on-point source in the pool is Source 3 (Forbes, 2026-05-06), which explicitly reports that Murati testified under oath via video deposition on May 6, 2026 that Altman told her Legal had approved skipping the safety-board review for a new model and that this was false; Source 4 (Times Now News) broadly aligns but appears to be secondary coverage, while Sources 5 (Techlusive), 6 (EasternEye), and 9 (ShopIFreaks) are lower-authority and likely derivative summaries rather than independent verification. Based on Forbes' direct, specific description of sworn testimony and the lack of any high-authority contradictory record in the brief, the claim is mostly supported, but the evidence base is thin and somewhat circular beyond Forbes, so the claim is best rated Mostly True rather than fully confirmed.
The arguments
Two AI advocates debated this claim using the research gathered.
Argument for
Multiple independent reports state that on May 6, 2026, ex-OpenAI CTO Mira Murati testified under oath (via video deposition) in the Musk v. OpenAI case that Sam Altman misled her by claiming OpenAI's legal department had approved bypassing required internal safety review procedures for a new model, and that this claim was false (Source 3, Forbes; Source 4, Times Now News). This directly matches the motion's elements—date, sworn testimony, the specific alleged false statement about legal approval, and the consequence of skipping internal safety processes—corroborated again by additional coverage summarizing the same under-oath allegation (Source 5, Techlusive; Source 9, ShopIFreaks).
Rebuttal
The Proponent's argument conflates corroboration in volume with corroboration in precision: Sources 5 and 9 are lower-authority outlets that largely restate the same underlying deposition without independent verification, meaning the apparent convergence of sources does not constitute independent confirmation of the claim's specific framing. Critically, Source 4—the most granular account of Murati's actual words—records her testifying that Altman's account differed from general counsel Jason Kwon's, which establishes an internal contradiction but falls short of confirming the precise claim that Altman explicitly stated the legal department had approved skipping safety procedures, leaving the atomic claim unproven at the level of specificity required.
Argument against
The claim overstates what the evidence actually shows: Sources 4 and 5 indicate that Murati testified Altman gave 'incorrect' or 'misleading' information and that his account differed from general counsel Jason Kwon's, but this reflects an internal disagreement rather than a confirmed deliberate false claim about legal department approval. Source 3, the highest-authority supporting source, comes from Forbes with a moderate authority score and relies on deposition testimony in an adversarial legal proceeding (Musk v. OpenAI), a context inherently prone to exaggeration and strategic framing, making it insufficient to establish the precise and damning characterization in the claim as factual truth.
Rebuttal
The Opponent equivocates between “internal disagreement” and “false claim,” but the motion is about what Murati testified under oath, and Forbes reports she explicitly said Altman “misled her” by asserting legal had approved skipping the safety-board review and that this assertion was false (Source 3, Forbes), which Times Now independently paraphrases as her answering “No” when asked if Altman told the truth about legal's view on bypassing review (Source 4, Times Now News). The Opponent's attack on the adversarial context is a genetic fallacy: absent any contrary record in the brief, multiple outlets' consistent descriptions of sworn deposition testimony (Sources 3–5, 9) are sufficient to substantiate the claim's narrow elements (date, oath, specific statement about legal approval, and bypassing internal safety procedures).