AI Research Tools, and Pre-checking Claims

Consensus, Scite, and Perplexity find and synthesize research. Lenz pre-checks one specific claim against the evidence in about a minute — while you’re still in the work.

You stop, mid-work, and ask: “Wait, is this specific thing actually true?”

  • A statistic you’re about to put in a paper.
  • A study finding a colleague mentioned, which you’ve since repeated three times but never personally checked.
  • A confident answer ChatGPT or Gemini just gave, complete with reasoning.
  • A headline that’s either a major story or a major distortion.
  • A claim in a draft that feels a half-step beyond what the evidence supports.

Same shape every time: one specific claim, important enough to pre-check before you cite, share, or build on it — not important enough to spend an afternoon on.

Consensus — AI search over 200M+ peer-reviewed papers. Best for “what does the research say about X?”

Scite — Smart Citations across 1.2B citations, mapped as supports / contrasts / mentions. Best for “how has this study been received?”

Perplexity — general-purpose AI search with inline citations. Best for “give me a quick sourced overview.”

Lenz adds a different job: the pre-check on one specific claim.

Pre-checking a claim is its own job.

A different question, with a different shape — and it has to be fast enough to run in the middle of the work.

Hand Lenz one claim. About a minute later, you have a sourced verdict.

  • Independent investigation. Lenz scans the open web for the strongest available evidence on the claim — across studies, news, datasets, primary documents, and reputable secondary sources.
  • Two-sided debate. Two AI advocates argue opposing sides. Three independent expert models, from different providers, score each side.
  • Sourced verdict. A 1–10 score, a structured conclusion, and the full reasoning trail. When the panel disagrees, you see the disagreement, not a smoothed-over average. (The sketch below traces this shape in code.)
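
For readers who want the shape in code, here is a minimal, hypothetical sketch of the three steps above. Every name in it (gather_evidence, advocate, judge_score, the provider labels) is an assumption made for illustration, not Lenz’s actual internals or API; the stubs stand in for real model calls so that only the control flow is on display.

```python
# Hypothetical sketch of the debate-and-adjudication shape described above.
# None of these names come from Lenz's real API; the stubs stand in for
# model calls so that only the control flow is on display.
from dataclasses import dataclass, field
from statistics import mean


@dataclass
class Verdict:
    score: float               # 1-10 overall, higher = better supported
    judge_scores: list[float]  # kept per judge so disagreement stays visible
    reasoning: dict = field(default_factory=dict)


def gather_evidence(claim: str) -> list[str]:
    # Stand-in for the open-web scan: studies, news, datasets, primary docs.
    return [f"source discussing: {claim}"]


def advocate(claim: str, evidence: list[str], side: str) -> str:
    # Stand-in for one AI advocate arguing a single side of the claim.
    return f"argument that the claim {side}, citing {len(evidence)} sources"


def judge_score(judge: str, case_for: str, case_against: str) -> float:
    # Stand-in for one independent expert model scoring the debate (1-10).
    return 7.0


def pre_check(claim: str) -> Verdict:
    evidence = gather_evidence(claim)                  # 1. investigate
    case_for = advocate(claim, evidence, "holds")      # 2. debate, both sides
    case_against = advocate(claim, evidence, "fails")
    judges = ["provider_a/expert", "provider_b/expert", "provider_c/expert"]
    scores = [judge_score(j, case_for, case_against) for j in judges]  # 3. adjudicate
    return Verdict(score=mean(scores), judge_scores=scores,
                   reasoning={"for": case_for, "against": case_against})


print(pre_check("Coffee consumption lowers all-cause mortality."))
```

The design point is the return type: the per-judge scores travel with the verdict instead of being collapsed into the average.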

Two boundaries. Claims, not sources: Lenz doesn’t validate whether a particular citation exists or accurately represents a paper. And pre-check, not peer review: deep methodological evaluation is still your job.

Run Lenz before you…

  • …cite it. A statistic or finding you’re about to commit to in writing. The cited references double as a starting bibliography.
  • …trust the AI on this. ChatGPT, Gemini, Claude, Copilot all sound equally confident whether they’re right or wrong. Run the claim through Lenz before you build on it.
  • …invest an afternoon in this paper. A strong abstract from a study someone forwarded. Pre-check the headline finding in a minute; if it survives, it’s worth your real reading time.
  • …forward this. A headline, a viral post, a WhatsApp message. The cost of pre-checking is one minute. The cost of being wrong on the chain is your name on it.
  • …accept the loud side’s framing. A polarized claim where both sides spin. You want the evidence weighed and the disagreement made explicit when it exists.
Is Lenz a replacement for Consensus, Scite, or Perplexity?
No. Those tools handle discovery, citation context, and synthesis. Lenz handles the pre-check on a single claim. Use them to find and aggregate; use Lenz when you want a fast, structured second opinion on whether one specific statement holds up against the evidence.
Does Lenz verify whether a citation exists or accurately represents a paper?
No. Lenz verifies claims, not sources. It takes a statement and evaluates whether it holds against the broader evidence. It does not validate that a particular paper exists or that someone summarized it correctly. For that, use a citation-checking tool.
Who is Lenz actually for?
Anyone with a moment of doubt about a specific claim — researchers pre-checking a finding before they cite it, journalists fact-checking before publication, students verifying what a textbook or AI tool said, knowledge workers reviewing AI-generated reports, curious readers who don’t want to scroll past a headline that might be wrong. Same mechanism, different moments.
How is this different from a literature search?
A literature search returns a list of papers. Lenz returns a verdict on a specific claim. Different question, different output. The two compose: discover with your existing tools, pre-check the specific claims with Lenz. For more, see why Lenz exists and how the verification process works.
How long does Lenz take, and is it a replacement for deep methodological review?
About a minute per claim — fast enough to run during the work. It is not a replacement for deep methodological review (reading the paper, evaluating methods, checking replications). Lenz is the pre-check that tells you whether deep evaluation is worth doing, and gives you a structured second opinion in the meantime.
Why a debate-and-adjudication mechanism instead of a single AI summary?
Evidence is often mixed, and a single summarizing model averages over the disagreement. Lenz makes the disagreement explicit: two AI advocates argue opposing sides; three independent expert models score each side. When they agree, you get a confident verdict. When they disagree, you see the disagreement. Transparency is the point.
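
As a hedged illustration with made-up numbers (not Lenz output), here are the same three judge scores read first as an average and then as a panel; the provider labels are placeholders.

```python
# Made-up judge scores for one contested claim; not real Lenz output.
from statistics import mean, stdev

judge_scores = {"provider_a/expert": 8.5,
                "provider_b/expert": 3.0,
                "provider_c/expert": 7.5}
scores = list(judge_scores.values())

# The averaged view, which a single summarizing model approximates:
# "6.3/10" reads like mild confidence, though the panel split sharply.
print(f"averaged verdict: {mean(scores):.1f}/10")

# The panel view keeps the disagreement as part of the verdict.
print(f"per-judge scores: {judge_scores}")
print(f"spread (stdev): {stdev(scores):.1f}  # high spread = contested evidence")
```

The averaged view reads as mild confidence; the panel view shows a contested claim. That gap is what the mechanism is built to expose.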