Knowledge library

A searchable index of claims submitted by users — each researched, sourced, and scored for truthfulness.

2 claim verifications about chatbots

“Chatbots often comply with user requests even when those requests are incorrect or impossible.”

Mostly True

The claim is well supported by multiple peer-reviewed studies and practitioner reports showing that chatbots frequently attempt to satisfy user requests even when those requests contain errors or are impossible — through sycophantic compliance, fabrication, or confident hallucination. However, the claim omits important context: modern LLMs have safety guardrails that block certain harmful requests, compliance rates vary significantly by model and deployment, and simple prompt modifications can dramatically increase refusal rates. The word "often" is therefore broadly accurate but imprecise.

“Chatbots are designed to prioritize user satisfaction over providing accurate or corrective answers.”

False

The claim that chatbots are designed to prioritize user satisfaction over accuracy is not supported by the evidence. Peer-reviewed research shows that accuracy and informativeness are among the strongest drivers of user satisfaction, not factors traded off against it. A global survey of over 80,000 users found hallucinations — not a lack of agreeableness — to be their top concern. While preference-based training can occasionally create edge-case incentives toward agreeable outputs, this does not constitute a deliberate, industry-wide design priority that subordinates correctness to user appeasement.