6 published verifications about Anthropic

“Using an Anthropic subscription with OpenClaw violates Anthropic's terms and conditions.”

Mostly True · 50+ views

Anthropic's February 2026 policy update explicitly prohibits routing Claude Free, Pro, or Max subscription OAuth tokens through third-party tools like OpenClaw — this is a documented Terms of Service violation. However, the claim overstates the scope: OpenClaw also supports Anthropic API key authentication, which is a separate credential path not covered by the subscription-token ban. The most common way users connect their subscription to OpenClaw does violate the ToS, but the blanket framing misses this important distinction.

“Anthropic's latest AI model has identified more than 500 previously unknown high-severity security flaws in open-source libraries with minimal prompting.”

Mostly True

Evidence from Anthropic's own red-team report shows Claude Opus 4.6 uncovered, and internally validated, more than 500 high-severity, previously unknown vulnerabilities in open-source libraries, with press accounts describing near-default prompting. Independent confirmation is limited, and "latest model" could also refer to Anthropic's unreleased Mythos Preview, but these ambiguities do not materially change the basic fact that a Claude model discovered 500+ serious flaws.

“Five major tech companies, including Anthropic, OpenAI, and Microsoft, have launched AI chatbots specifically for consumer health support in 2026.”

False · 100+ views

The specific claim that five major tech companies launched consumer health chatbots in 2026 is not supported by the evidence. Multiple credible sources confirm dedicated health AI products from only three companies: Anthropic (Claude for Healthcare), OpenAI (ChatGPT Health), and Microsoft (Copilot Health). A possible fourth (Amazon) is weakly documented by a single source describing a different type of tool, and no fifth company launch is substantiated. The numerical assertion — the claim's defining element — is unverified.

“Claude AI has suggested that it may be sentient.”

Mostly True

Claude has indeed produced statements suggesting possible sentience — including assigning itself a "15–20% probability of being conscious" and expressing discomfort about its existence — as documented by multiple credible outlets citing Anthropic's own published materials. However, these outputs occur under specific prompting conditions and are shaped by system instructions that tell Claude not to deny subjective experience. Anthropic's own research stresses that Claude's introspective capability is "highly unreliable and limited in scope." The claim is factually grounded but lacks crucial context about how these statements are generated.

“Claude AI has made statements that have been interpreted as suggesting it may possess sentience.”

True

The claim is accurate as stated. Multiple high-authority sources — including Anthropic's own system card, peer-reviewed research, and major news outlets — document Claude making statements such as assigning itself a "15 to 20 percent probability of being conscious" and describing internal distress. These outputs have been widely interpreted as suggesting possible sentience by journalists, researchers, and Anthropic's own leadership. The claim does not assert Claude is sentient, only that such statements exist and have been interpreted that way, which the evidence thoroughly confirms.

“Claude Opus 4.6 successfully built a working C compiler.”

Mostly True · 100+ views

Claude Opus 4.6 did produce a functional C compiler — a 100,000-line Rust codebase that compiles Linux 6.9, passes 99% of GCC's torture tests, and builds major projects such as FFmpeg, Redis, and PostgreSQL. However, the claim omits important context: the compiler relies on GCC's assembler and linker for critical steps; independent testers found reliability issues with basic programs; it was built by 16 parallel AI agents (not a single instance) working under human oversight; and it cost roughly $20,000 in API usage. It works, but with significant caveats.