AI Research Tools, and Pre-checking Claims
Consensus, Scite, and Perplexity find and synthesize research. Lenz pre-checks one specific claim against the evidence in about a minute — while you’re still in the work.
The pre-check moment
You stop, mid-work, and ask: “Wait, is this specific thing actually true?”
- A statistic you’re about to put in a paper.
- A study finding a colleague mentioned, which you've repeated three times but never personally checked.
- A confident answer ChatGPT or Gemini just gave, complete with reasoning.
- A headline that’s either a major story or a major distortion.
- A claim in a draft that feels a half-step beyond what the evidence supports.
Same shape every time: one specific claim, important enough to pre-check before you cite, share, or build on it — not important enough to spend an afternoon on.
What today’s AI research tools do well
Consensus — AI search over 200M+ peer-reviewed papers. Best for “what does the research say about X?”
Scite — Smart Citations across 1.2B citations, mapped as supports / contrasts / mentions. Best for “how has this study been received?”
Perplexity — general-purpose AI search with inline citations. Best for “give me a quick sourced overview.”
Lenz adds a different job: the pre-check on one specific claim.
Pre-checking a claim is its own job
A different question, with a different shape — and it has to be fast enough to run in the middle of the work.
What Lenz does
Hand Lenz one claim. About a minute later, you have a sourced verdict.
- Independent investigation. Lenz scans the open web for the strongest available evidence on the claim — across studies, news, datasets, primary documents, and reputable secondary sources.
- Two-sided debate. Two AI advocates argue opposing sides. Three independent expert models, from different providers, score each side.
- Sourced verdict. A 1–10 score, a structured conclusion, and the full reasoning trail. When the panel disagrees, you see the disagreement — not a smoothed-over average.
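The debate-and-score pattern above can be sketched in a few lines. This is a hypothetical illustration, not Lenz's actual implementation: the function name `verdict`, the stubbed scores, and the disagreement threshold are all assumptions. In practice the advocate arguments and judge scores would come from LLM calls to different providers.

```python
from statistics import median

def verdict(judge_scores):
    """Combine per-judge scores (1-10, higher = better supported) into a
    verdict that preserves disagreement instead of averaging it away."""
    spread = max(judge_scores) - min(judge_scores)
    return {
        "score": median(judge_scores),  # robust to one outlier judge
        "judges": judge_scores,         # full trail, not just the summary
        "contested": spread >= 3,       # assumed threshold: flag real splits
    }

# Three independent judges, e.g. expert models from different providers.
result = verdict([8, 7, 3])
print(result)  # {'score': 7, 'judges': [8, 7, 3], 'contested': True}
```

The design point is the `contested` flag: a panel that splits 8/7/3 surfaces the split to the reader rather than reporting a smoothed 6.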
Two boundaries. Claims, not sources — Lenz doesn’t validate whether a particular citation exists or accurately represents a paper. And pre-check, not peer review — deep methodological evaluation is still your job.
Pre-check before you
- …cite it. A statistic or finding you’re about to commit to in writing. The cited references double as a starting bibliography.
- …trust the AI on this. ChatGPT, Gemini, Claude, and Copilot all sound equally confident whether they're right or wrong. Run the claim through Lenz before you build on it.
- …invest an afternoon in this paper. A strong abstract from a study someone forwarded. Pre-check the headline finding in a minute; if it survives, it’s worth your real reading time.
- …forward this. A headline, a viral post, a WhatsApp message. The cost of pre-checking is one minute. The cost of being wrong on the chain is your name on it.
- …accept the loud side’s framing. A polarized claim where both sides spin. You want the evidence weighed and the disagreement made explicit when it exists.