How It Works

Lenz is a fact-checking tool that uses AI to automate research, select independent sources, and analyse claims for truthfulness. A human editor later reviews every result.

Every claim you submit passes through six steps designed to be thorough, transparent, and impartial. Here’s what happens behind the scenes.

Step 1: Claim Framing

Your claim is received and cleaned up. We strip away emotional language and bias, distill the core factual statement, and prepare targeted search queries — so the rest of the process starts from a neutral, testable hypothesis.
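To make this concrete, here is a minimal sketch of what a framing stage can look like. The `FramedClaim` structure, the prompt wording, and the `complete()` helper are illustrative assumptions, not our production code.

```python
from dataclasses import dataclass

@dataclass
class FramedClaim:
    original: str               # the claim exactly as submitted
    neutral_statement: str      # emotional language and bias stripped out
    search_queries: list[str]   # targeted queries for the research step

FRAMING_PROMPT = """Rewrite the claim below as one neutral, testable factual
statement on the first line, then list 3-5 targeted web search queries,
one per line.

Claim: {claim}"""

def frame_claim(claim: str, complete) -> FramedClaim:
    """Turn a raw, possibly loaded claim into a neutral hypothesis.

    `complete` is any text-in/text-out LLM call; parsing here is naive.
    """
    raw = complete(FRAMING_PROMPT.format(claim=claim))
    statement, *queries = [ln for ln in raw.splitlines() if ln.strip()]
    return FramedClaim(original=claim, neutral_statement=statement,
                       search_queries=queries)
```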

Step 2: Research

We search the web in parallel using multiple queries, collecting diverse, high-quality sources. Each source is scored for authority, recency, and relevance, and tagged as supporting, refuting, or neutral — building a balanced research brief.
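A simplified sketch of the scoring and balancing idea follows. The weights and the top-k cutoff are illustrative assumptions; the real scoring model is more involved.

```python
from dataclasses import dataclass
from enum import Enum

class Stance(Enum):
    SUPPORTING = "supporting"
    REFUTING = "refuting"
    NEUTRAL = "neutral"

@dataclass
class Source:
    url: str
    authority: float   # 0-1, e.g. domain reputation
    recency: float     # 0-1, newer is higher
    relevance: float   # 0-1, similarity to the framed claim
    stance: Stance

def score(source: Source, weights=(0.4, 0.2, 0.4)) -> float:
    """Weighted quality score; the weights are illustrative, not Lenz's."""
    wa, wr, wv = weights
    return wa * source.authority + wr * source.recency + wv * source.relevance

def research_brief(sources: list[Source], k: int = 5) -> dict[Stance, list[Source]]:
    """Keep the top-k sources per stance so the brief stays balanced."""
    brief: dict[Stance, list[Source]] = {s: [] for s in Stance}
    for src in sorted(sources, key=score, reverse=True):
        if len(brief[src.stance]) < k:
            brief[src.stance].append(src)
    return brief
```

Keeping the three stances in separate, size-capped buckets is what prevents a flood of one-sided sources from drowning out the other perspective.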

Step 3: The Debate

We use two separate AI models to argue opposing sides in two rounds. First, one builds the strongest case that the claim is true while the other constructs the most compelling argument against it. Then each reads the opponent’s argument and writes a targeted rebuttal, exposing weak points in reasoning or evidence. Both draw exclusively from the collected evidence, ensuring every angle is stress-tested before the conclusion is reached.
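The two-round structure can be sketched like this. The prompts are abbreviated for clarity, and `model_a` and `model_b` stand in for text-in/text-out calls to two different providers.

```python
def debate(evidence: str, model_a, model_b) -> dict[str, str]:
    """Two-round adversarial debate over a fixed evidence brief."""
    # Round 1: opening arguments, each side drawing only on the evidence.
    case_for = model_a(f"Using ONLY this evidence, argue the claim is TRUE:\n{evidence}")
    case_against = model_b(f"Using ONLY this evidence, argue the claim is FALSE:\n{evidence}")

    # Round 2: each side reads the opponent's argument and rebuts it.
    rebut_for = model_a(f"Rebut this argument, citing the evidence:\n{case_against}")
    rebut_against = model_b(f"Rebut this argument, citing the evidence:\n{case_for}")

    return {"for": case_for, "against": case_against,
            "rebuttal_for": rebut_for, "rebuttal_against": rebut_against}
```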

Step 4: Adjudication

Three separate AI models — each evaluating a different axis — independently review the evidence and debate arguments. One audits source reliability and independence, another examines whether the evidence logically supports the claim, and a third checks for missing context or misleading framing. Each scores the claim and explains its reasoning.
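Here is one way the three-axis review might be wired up. The axis names, the reply format, and the `Verdict` structure are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Verdict:
    axis: str
    score: float      # 1-10
    reasoning: str

AXES = {
    "source_reliability": "Audit source reliability and independence.",
    "logical_support": "Does the evidence logically support the claim?",
    "context_and_framing": "Check for missing context or misleading framing.",
}

def adjudicate(brief: str, debate_log: str,
               judges: dict[str, Callable[[str], str]]) -> list[Verdict]:
    """One independent judge model per axis; none sees the others' output."""
    verdicts = []
    for axis, instruction in AXES.items():
        raw = judges[axis](f"{instruction}\nEvidence:\n{brief}\nDebate:\n{debate_log}\n"
                           "Reply as '<score 1-10>|<reasoning>'.")
        score, _, reasoning = raw.partition("|")
        verdicts.append(Verdict(axis=axis, score=float(score), reasoning=reasoning))
    return verdicts
```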

Step 5: The Conclusion

All analyses are synthesised into a single clear conclusion — True, Mostly True, Misleading, or False — with a Lenz Score from 1 to 10. A concise summary explains where the reviewers agreed or disagreed, and surfaces any important bias or logic warnings.
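As a rough sketch, the per-axis scores could be collapsed into a label like this. The simple averaging and the band thresholds are illustrative assumptions; the real synthesis also weighs reviewer disagreement and warnings.

```python
def conclude(axis_scores: list[float]) -> tuple[str, float]:
    """Collapse per-axis scores (1-10) into one label and a Lenz Score."""
    lenz_score = sum(axis_scores) / len(axis_scores)
    if lenz_score >= 8.5:
        label = "True"
    elif lenz_score >= 6.5:
        label = "Mostly True"
    elif lenz_score >= 4.0:
        label = "Misleading"
    else:
        label = "False"
    return label, round(lenz_score, 1)

# e.g. conclude([8.0, 7.5, 6.2]) -> ("Mostly True", 7.2)
```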

Step 6: Human Verification

Every claim is later reviewed and verified by a human editor on our team. The editor independently checks the automated analysis — reviewing the sources, reasoning, and conclusion — and confirms that the report meets our quality and accuracy standards. Claims that pass this review are clearly marked as human-verified on their claim page, giving you an extra layer of confidence in the result.

Why not just ask ChatGPT?

You can ask any chatbot whether something is true — and you’ll get a confident-sounding answer. But that answer draws on whatever the model absorbed during training, with no obligation to check its own claims against real sources. When it doesn’t know, it guesses — and it never tells you it’s guessing.

Lenz is built differently:

  • Source-first, not memory-first. Instead of relying mostly on what a model “remembers,” every claim is checked against independently retrieved, scored, and cited sources. The evidence drives the conclusion, not the model’s prior beliefs.
  • A panel, not a single voice. Multiple AI models from different providers evaluate each claim separately. Because they have different training data and different blind spots, one model’s hallucination is another’s red flag — dramatically reducing the chance of a confidently wrong answer slipping through.
  • Engineered rigour at every step. The pipeline doesn’t just “ask” a model for its opinion. Each stage — framing, research, debate, adjudication, conclusion — follows carefully designed prompts that enforce structured reasoning, demand citations, detect bias, and penalise unsupported assertions. The process is systematic by design, not by luck; the sketch after this list shows how the stages compose.
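Putting it together: the illustrative functions from the step descriptions above chain into a single pipeline, with each stage consuming the previous stage’s structured output. The `web_search` helper and the `models` registry here are hypothetical.

```python
def check_claim(claim: str, web_search, models) -> tuple[str, float]:
    """End-to-end composition of the stages sketched above (illustrative).

    Because every stage passes structured output forward, the final
    verdict stays traceable back to specific sources and arguments.
    """
    framed = frame_claim(claim, models["framer"])
    sources = web_search(framed.search_queries)      # hypothetical search helper
    brief = research_brief(sources)                  # balanced, scored sources
    arguments = debate(str(brief), models["debater_a"], models["debater_b"])
    verdicts = adjudicate(str(brief), str(arguments), models["judges"])
    return conclude([v.score for v in verdicts])
```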

Oh, one more thing — the pipeline, the product, the code, and yes, even this page were built with AI. With a human in the loop to keep things on course, of course.