Claim analyzed

Tech

“On May 6, 2026, Mira Murati testified under oath that Sam Altman falsely claimed that OpenAI's legal department had approved skipping internal safety procedures for a new OpenAI artificial-intelligence model.”

Submitted by Witty Otter 45eb

The conclusion

Mostly True
8/10

Reporting indicates that on May 6, 2026, Mira Murati gave sworn testimony saying Sam Altman told her OpenAI's legal team had approved bypassing an internal safety review for a new model, and that this was untrue. The strongest support comes from Forbes, with several other outlets in broad agreement. The key caveat is that this is reported deposition testimony in litigation, not a court finding that Altman lied.

Caveats

  • Low confidence conclusion.
  • The evidence cited is mostly secondary reporting; no primary deposition transcript is provided here.
  • The statement describes Murati's under-oath allegation in active litigation, not an adjudicated fact about Altman's conduct.
  • Accounts vary on the exact internal process at issue, such as a safety board versus a deployment safety review.

Sources

Sources used in the analysis

#1
Tech Policy Press | Transcript: US Senate Hearing On 'Examining the Harm of AI Chatbots'
NEUTRAL

The Senate Judiciary Committee heard testimony from parents whose children died by suicide after interacting with AI chatbots. One parent stated: 'The day we filed Adam's case, OpenAI was forced to admit that its systems were flawed. It made thin promises to do better at some point in the future.' This hearing addressed broader concerns about OpenAI's safety procedures and accountability.

#2
TechCrunch 2025-07-25 | Sam Altman warns there's no legal confidentiality when using ChatGPT as a therapist
NEUTRAL

Sam Altman acknowledged that the AI industry has not yet figured out how to protect user privacy in sensitive contexts, and stated that there is no legal confidentiality for users' conversations with ChatGPT. This reflects broader concerns about OpenAI's handling of legal and policy frameworks around AI safety and user protection.

#3
Forbes 2026-05-06 | Ex-OpenAI CTO Mira Murati Testifies Sam Altman Pitted Leaders Against Each Other
SUPPORT

Mira Murati testified via video deposition on May 6, 2026, in the Elon Musk versus OpenAI trial, stating that Sam Altman misled her by claiming OpenAI's legal department had approved skipping the safety board review for a new model. Murati confirmed under oath that this claim was false and described tensions in leadership over safety procedures.

#4
Times Now News 2026-05-06 | OpenAI Trial: Did Sam Altman Lie About AI Safety? Mira Murati Testifies 'Yes'
SUPPORT

Former OpenAI CTO Mira Murati was asked whether Sam Altman had told her the truth when he said OpenAI's legal team believed a new AI model did not require review from the company's deployment safety board. Murati replied, 'No.' She confirmed that 'what Jason was saying and what Sam was saying were not the same thing,' referring to disagreement between Altman and the company's general counsel Jason Kwon about the model's safety review requirements.

#5
Techlusive 2026-05-06 | Sam Altman vs Elon Musk: OpenAI trial takes dramatic turn after Mira Murati's testimony
SUPPORT

Former OpenAI Chief Technology Officer Mira Murati testified during the ongoing Sam Altman vs Elon Musk case, alleging that Altman gave incorrect information regarding AI model safety reviews. According to her account, Altman gave misleading information during the safety review process for one of the company's AI models, and his explanation did not accurately convey whether the model required approval from OpenAI's safety board.

#6
EasternEye 2026-05-06 | OpenAI Tensions Grow After Mira Murati's Claims
SUPPORT

The former chief technology officer, who left OpenAI in 2024 to start AI company Thinking Machines Lab, gave evidence in Elon Musk's legal case against OpenAI. When asked about Altman's leadership, Murati said he was 'not always' honest with her. She told Forbes that he created a 'very difficult and chaotic environment' by telling different people different things based on what he thought they wanted to hear.

#7
EdTech Innovation Hub | Lawsuit filed against OpenAI and Sam Altman as company responds with new safety measures
NEUTRAL

Following a lawsuit alleging ChatGPT validated a teenager's suicidal thoughts, OpenAI published a blog acknowledging that ChatGPT is sometimes used by people experiencing 'serious mental and emotional distress.' The company outlined existing safeguards and described improvements in GPT-5, including a new training method called 'safe completions' aimed at keeping answers within safety limits.

#8
LLM Background Knowledge 2026-05-07 | Context: Musk v. OpenAI litigation and safety governance disputes
NEUTRAL

The ongoing legal dispute between Elon Musk and OpenAI has involved multiple allegations regarding internal governance and safety procedures. Disputes over whether AI models require safety board review and the role of legal departments in approving model deployment have been central to internal conflicts at OpenAI, particularly during transitions in leadership and model development cycles.

#9
ShopIFreaks 2026-05-06 | Former OpenAI CTO Mira Murati testifies under oath that CEO Sam Altman lied to her about AI model safety standards
SUPPORT

Former OpenAI CTO Mira Murati testifies under oath that CEO Sam Altman lied to her about AI model safety standards.

Full Analysis

Expert review

How each expert evaluated the evidence and arguments

Expert 1 — The Logic Examiner

Focus: Inferential Soundness & Fallacies
Mostly True
8/10

Source 3 directly reports that on May 6, 2026 Murati gave sworn video-deposition testimony that Altman told her Legal had approved skipping a safety-board review and that this was false, and Source 4 independently aligns by describing her under-oath “No” to whether Altman told the truth about Legal's view on bypassing review (with additional outlets 5 and 9 largely echoing the same deposition account). Given the claim is narrowly about what Murati testified under oath (not whether Altman in fact lied), the evidence logically supports the claim's elements (date, oath, content about Legal approval, and bypassing internal safety procedures) with only minor dependence on secondary reporting rather than a primary transcript.

Logical fallacies

  • Genetic fallacy (Opponent): discounting the deposition account primarily because it occurred in adversarial litigation, which does not by itself negate what was testified.
  • Conflation of independence (Proponent risk): treating multiple write-ups that may derive from the same deposition reporting as fully independent corroboration, though this affects strength more than the core logical fit.
Confidence: 7/10

Expert 2 — The Context Analyst

Focus: Completeness & Framing
Mostly True
8/10

The claim largely tracks the core reporting that Murati gave sworn deposition testimony on May 6, 2026, alleging Altman misrepresented Legal's position on whether a model could bypass a required internal safety-board/deployment safety review. However, it omits that the described exchange is framed as a mismatch between Altman's account and general counsel Jason Kwon's (i.e., a governance dispute) rather than an adjudicated finding that Altman “falsely claimed” something as established fact (Sources 3–4). With that context restored, the claim is still mostly accurate as a description of what Murati testified under oath (she said Altman's representation about legal approval was not true), though the wording “falsely claimed” can over-imply proven intent and certainty beyond the deposition characterization (Sources 3–4).

Missing context

  • The testimony is reported from a deposition in the adversarial Musk v. OpenAI litigation; the claim doesn't clarify that it's Murati's allegation under oath, not a court finding that Altman lied.
  • Some accounts frame the issue as Altman's statement conflicting with general counsel Jason Kwon's view of whether safety-board/deployment safety review was required, which could reflect internal interpretation disputes rather than a definitively established falsehood (Sources 3–4).
  • The claim doesn't specify which model or procedure was at issue (e.g., safety board vs. deployment safety board), which matters for precision about what was allegedly bypassed (Sources 3–4).
Confidence: 7/10

Expert 3 — The Source Auditor

Focus: Source Reliability & Independence
Mostly True
8/10

The most reliable, on-point source in the pool is Source 3 (Forbes, 2026-05-06), which explicitly reports that Murati testified under oath via video deposition on May 6, 2026 that Altman told her Legal had approved skipping the safety-board review for a new model and that this was false; Source 4 (Times Now News) broadly aligns but appears to be secondary coverage, while Sources 5 (Techlusive), 6 (EasternEye), and 9 (ShopIFreaks) are lower-authority and likely derivative summaries rather than independent verification. Based on Forbes' direct, specific description of sworn testimony and the lack of any high-authority contradictory record in the brief, the claim is mostly supported, but the evidence base is thin and somewhat circular beyond Forbes, so the claim is best rated Mostly True rather than fully confirmed.

Weakest sources

  • Source 8 (LLM Background Knowledge) is not an independent, citable primary or secondary source and cannot verify what was said under oath.
  • Source 9 (ShopIFreaks) is a low-authority aggregator-style outlet and appears to restate other reporting without demonstrating independent access to the deposition or court record.
  • Source 5 (Techlusive) provides vague paraphrase and does not clearly evidence independent verification beyond repeating the same alleged deposition content.
Confidence: 6/10

Expert summary

The claim is
Mostly True
8/10
Confidence: 7/10 Unanimous

The arguments

Two AI advocates debated this claim using the research gathered.

Argument for

Proponent Argues TRUE

Multiple independent reports state that on May 6, 2026, ex-OpenAI CTO Mira Murati testified under oath (via video deposition) in the Musk v. OpenAI case that Sam Altman misled her by claiming OpenAI's legal department had approved bypassing required internal safety review procedures for a new model, and that this claim was false (Source 3, Forbes; Source 4, Times Now News). This directly matches the motion's elements—date, sworn testimony, the specific alleged false statement about legal approval, and the consequence of skipping internal safety processes—corroborated again by additional coverage summarizing the same under-oath allegation (Source 5, Techlusive; Source 9, ShopIFreaks).

Opponent Rebuttal

The Proponent's argument conflates corroboration in volume with corroboration in precision: Sources 5 and 9 are lower-authority outlets that largely restate the same underlying deposition without independent verification, meaning the apparent convergence of sources does not constitute independent confirmation of the claim's specific framing. Critically, Source 4—the most granular account of Murati's actual words—records her testifying that Altman's account differed from general counsel Jason Kwon's, which establishes an internal contradiction but falls short of confirming the precise claim that Altman explicitly stated the legal department had approved skipping safety procedures, leaving the atomic claim unproven at the level of specificity required.

Argument against

Opponent Argues FALSE

The claim overstates what the evidence actually shows: Sources 4 and 5 indicate that Murati testified Altman gave 'incorrect' or 'misleading' information and that his account differed from general counsel Jason Kwon's, but this reflects an internal disagreement rather than a confirmed deliberate false claim about legal department approval. Source 3, the highest-authority supporting source, comes from Forbes with a moderate authority score and relies on deposition testimony in an adversarial legal proceeding (Musk v. OpenAI), a context inherently prone to exaggeration and strategic framing, making it insufficient to establish the precise and damning characterization in the claim as factual truth.

Proponent Rebuttal

The Opponent equivocates between “internal disagreement” and “false claim,” but the motion is about what Murati testified under oath, and Forbes reports she explicitly said Altman “misled her” by asserting legal had approved skipping the safety-board review and that this assertion was false (Source 3, Forbes), which Times Now independently paraphrases as her answering “No” when asked if Altman told the truth about legal's view on bypassing review (Source 4, Times Now News). The Opponent's attack on the adversarial context is a genetic fallacy: absent any contrary record in the brief, multiple outlets' consistent descriptions of sworn deposition testimony (Sources 3–5, 9) are sufficient to substantiate the claim's narrow elements (date, oath, specific statement about legal approval, and bypassing internal safety procedures).

