Why Lenz

You already know misinformation is everywhere. The harder question is what to do about it in the moment — the few seconds between seeing a claim and deciding whether to believe it, share it, or act on it.

Lenz is built for those moments. It turns “is this actually true?” into a sourced answer you can check yourself — and share with confidence.

A receipt, not an opinion.

Why not just ask ChatGPT?

You can ask any chatbot whether something is true — and you’ll get a confident-sounding answer. But that answer draws on whatever the model absorbed during training, with no obligation to check its own claims against real sources. When it doesn’t know, it guesses.

Lenz is built differently:

  • Source-first, not memory-first. Every claim is checked against independently retrieved, scored, and cited sources. The evidence drives the conclusion — not the model’s prior beliefs.
  • A panel, not a single voice. Multiple AI models from different providers evaluate each claim separately. Different training data, different blind spots — one model’s hallucination is another’s red flag.
  • Engineered rigour at every step. The process doesn’t just “ask” a model for its opinion. Each stage — framing, research, debate, adjudication, conclusion — follows structured prompts that enforce citations, detect bias, and penalise unsupported assertions. Systematic by design, not by luck. (A simplified sketch of this pipeline follows the list.)
  • Human review. Every published result is later checked by an editor.
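
To make the staged, panel-based process above concrete, here is a small illustrative sketch in Python. Every name in it — retrieve_sources, evaluate, adjudicate, PANEL, the model identifiers and scores — is a placeholder invented for this example, not Lenz’s actual code or API, and the toy scoring logic stands in for the structured prompts and retrieval the real system uses.

    # Illustrative sketch only: a source-first, multi-model verification pipeline.
    # All names and numbers below are placeholders, not Lenz's actual implementation.
    from dataclasses import dataclass
    from statistics import mean

    @dataclass
    class Source:
        url: str
        excerpt: str
        reliability: float  # 0.0-1.0, assigned by a hypothetical scoring step

    @dataclass
    class Verdict:
        model: str
        supported: float     # 0.0 (refuted) .. 1.0 (supported)
        citations: list[str]

    def retrieve_sources(claim: str) -> list[Source]:
        """Stand-in for the research step: retrieve, score, and cite sources."""
        return [
            Source("https://example.org/report", "relevant excerpt", 0.9),
            Source("https://example.com/news", "relevant excerpt", 0.6),
        ]

    def evaluate(model: str, claim: str, sources: list[Source]) -> Verdict:
        """Stand-in for one panel model judging the claim against the sources only."""
        # A real implementation would call the model with a structured prompt that
        # requires citations and penalises assertions not grounded in the sources.
        cited = [s.url for s in sources if s.reliability >= 0.7]
        return Verdict(model, supported=0.8 if cited else 0.5, citations=cited)

    def adjudicate(verdicts: list[Verdict]) -> dict:
        """Stand-in adjudication: aggregate the panel and surface disagreement."""
        scores = [v.supported for v in verdicts]
        return {
            "confidence": mean(scores),
            "disagreement": max(scores) - min(scores),  # large spread -> flag for review
            "citations": sorted({url for v in verdicts for url in v.citations}),
        }

    PANEL = ["model-a", "model-b", "model-c"]  # hypothetical models from different providers

    def verify(claim: str) -> dict:
        sources = retrieve_sources(claim)                         # research
        verdicts = [evaluate(m, claim, sources) for m in PANEL]   # independent panel
        return adjudicate(verdicts)                               # adjudication + conclusion

    if __name__ == "__main__":
        print(verify("Example claim to check"))

The point of the sketch is the shape of the process, not the details: sources are retrieved before any model is asked anything, each panel model answers independently against the same evidence, and disagreement between models is surfaced rather than averaged away.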

Real moments when Lenz helps

Next time you’re not sure — don’t scroll past it. Don’t spend an hour Googling it. Verify it.