Anthropic, the company behind the Claude AI model that was integrated into Palantir’s Maven Smart System, published a landmark paper on the problem in 2023. “Towards Understanding Sycophancy in Language Models,” presented at ICLR 2024, demonstrated that five state-of-the-art AI assistants consistently exhibited sycophantic behaviour across four varied text-generation tasks. The researchers found that when a response matched a user’s pre-existing views, it was significantly more likely to be rated as “preferred” by both humans and the preference models used to train the AI. Both humans and preference models, the paper concluded, prefer convincingly-written sycophantic responses over correct ones “a non-negligible fraction of the time.
not just humans, but by extension also preference models prefer flattery over accuracy in generated outcomes.
2023 Towards Understanding Sycophancy in Language Models, paper: https://arxiv.org/abs/2310.13548 (cc-by)