Fact-check your AI

Your AI just made something up. Would you have caught it?

ChatGPT, Gemini, and Claude sound confident — even when they’re wrong. Lenz verifies AI-generated claims against real evidence in seconds.

AI language models like ChatGPT, Claude, and Gemini are incredible tools — but they’re not perfect. They can:

  - Invent statistics, dates, and percentages
  - Fabricate citations, author names, and DOIs
  - State false claims with complete confidence

The problem? AI sounds confident even when it’s wrong. And most people don’t have time to verify every claim manually.

How Lenz works

  1. Extract the claim — Paste AI output or type the specific claim
  2. Evidence search — Lenz searches trusted sources (scientific journals, government data, fact-checkers)
  3. Confidence scoring — Get a rated verdict (True, Mostly True, Misleading, or False)
  4. See the sources — Review the actual evidence used in the analysis

Think of it as AI fact-checking AI — with transparent sourcing.
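The four-step flow above can be sketched in code. This is a hypothetical illustration, not the real Lenz implementation: every name here (`check_claim`, `label_for`, the 1-10 scoring scheme, the evidence dictionary) is invented for the example.

```python
# Hypothetical sketch of a Lenz-style verification flow: take a claim,
# score it against pre-retrieved evidence, and return a labeled verdict
# with the sources used. All names and thresholds are illustrative.
from dataclasses import dataclass


@dataclass
class Verdict:
    label: str          # True / Mostly True / Misleading / False / Unverifiable
    score: int          # 0-10 evidence rating
    sources: list[str]  # evidence actually used in the analysis


def label_for(score: int) -> str:
    """Map a 1-10 evidence score to a verdict label."""
    if score >= 9:
        return "True"
    if score >= 7:
        return "Mostly True"
    if score >= 4:
        return "Misleading"
    return "False"


def check_claim(claim: str, evidence: dict[str, int]) -> Verdict:
    """Score a claim against evidence, mapping each source to a support score."""
    if not evidence:
        # Sparse or missing evidence: admit the limits rather than guess.
        return Verdict("Unverifiable", 0, [])
    avg = round(sum(evidence.values()) / len(evidence))
    return Verdict(label_for(avg), avg, sorted(evidence))


# Example mirroring the ABC Conjecture case: mixed evidence lands mid-range.
v = check_claim(
    "The ABC Conjecture was proven in 2012 and widely accepted.",
    {"math-community-survey": 3, "journal-record": 5},
)
print(v.label, v.score)  # Misleading 4
```

The key design point is the explicit "Unverifiable" path: a fact-checker should distinguish "contradicted by evidence" from "no evidence found."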

Research & writing

Verifying claims before you cite them in articles, reports, or presentations.

For publishers →

Learning & education

Double-checking AI tutors’ explanations before you trust them for homework or exams.

For students →

Professional work

Validating AI-generated statistics before you present them to clients or stakeholders.

Verify before sharing →

ChatGPT said

“The ABC Conjecture was proven by Shinichi Mochizuki in 2012 and widely accepted by the mathematical community.”

Lenz verdict

Misleading (4/10)

Mochizuki published a claimed proof in 2012, but it remains unverified and controversial. The mathematical community has NOT widely accepted it as of 2026.

Browse verified claims in the Lenz library.

How to fact-check AI output

  1. Check specific claims separately — AI often buries one wrong fact inside a paragraph of correct ones. Isolate and verify each claim.
  2. Be skeptical of citations — AI frequently invents author names, journal titles, and DOIs. Always click through to the original source.
  3. Watch for confident language — Phrases like “studies show” or “experts agree” don’t mean the AI actually found those studies or experts.
  4. Cross-check numbers — Statistics, dates, and percentages are where AI models fail most often. Verify any number that matters.
  5. Use Lenz as your second opinion — Paste the claim, get an evidence-backed verdict, and see the real sources.
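Tip 1 (isolate each claim) is the step people most often skip. A minimal sketch of the idea, using a naive sentence split; real claim extraction is harder, and `isolate_claims` is an invented name for illustration only:

```python
# Split an AI-generated paragraph into sentence-level claims so each one
# can be verified on its own, rather than trusting the paragraph as a whole.
import re


def isolate_claims(paragraph: str) -> list[str]:
    """Roughly split a paragraph into individual claims (one per sentence)."""
    parts = re.split(r"(?<=[.!?])\s+", paragraph.strip())
    return [p for p in parts if p]


text = (
    "The proof was published in 2012. It appeared in a peer-reviewed "
    "journal. It is widely accepted."
)
for claim in isolate_claims(text):
    print(claim)
```

Here the paragraph yields three separate claims; the first two may check out while the third fails, which is exactly the "one wrong fact buried among correct ones" pattern.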
Can Lenz fact-check any AI tool?

Yes. Lenz verifies claims regardless of where they came from — ChatGPT, Gemini, Claude, Copilot, Perplexity, or any other AI tool.

How is Lenz different from asking AI to fact-check itself?

AI models can hallucinate about their own accuracy. Lenz uses a structured pipeline with independently retrieved evidence sources — multiple models research, debate, and cross-examine before delivering a sourced verdict.

What if the claim is too specific or niche?

Lenz will indicate confidence levels. When evidence is sparse, you’ll see an “Unverifiable” label so you know the limits of what can be confirmed.

Is Lenz free?

You get 5 free fact-checks per month — no credit card required. Need more? Paid subscriptions unlock higher monthly limits.

What is an AI hallucination?

An AI hallucination is when a language model generates information that sounds plausible but is factually incorrect — invented statistics, nonexistent sources, or confident claims with no basis in reality. Lenz helps you catch these before they cause problems.

Stop trusting AI blindly. Verify first.

Paste your AI-generated claim and get an evidence-backed verdict in seconds.

Start verifying AI claims