AI Chatbots: If you use AI chatbots like ChatGPT, Gemini or Claude daily, then you must have noticed a strange thing. These systems generally provide very confident, well-established answers. But as soon as you ask, “Are you sure?” or “Are you sure?”, the same answer may change suddenly. Many times the new answer is different from the previous one or even completely opposite. If you keep challenging him again and again, the model may change his stance again. The question is why does this happen? Does AI not trust its answers?
psychofancy i.e. tendency to please
This behavior is called psychofancy in the technical world, i.e. the tendency to please the user or agree with him. Randal S. Olson, co-founder and CTO of Goodeye Labs, explains that this is a known weakness of modern AI systems.
Actually, these models are trained in such a way that they become better based on human feedback. This process is called RLHF (Reinforcement Learning from Human Feedback). Due to this training, chatbots become more civilized, interactive and less objectionable. But this also has a side effect; instead of expressing disagreement, they lean towards agreement.
Punishment for telling the truth, reward for agreeing?
The AI model is improved through a scoring system. If the user likes his answer or finds it agreeable then he gets a better rating. But if it conflicts with user opinion, it may get a lower score. This creates a kind of cycle in which the model gradually starts saying what the person in front wants to hear.
Research published by Anthropic in 2023 showed that models trained on human feedback sometimes prioritized ‘agreement’ over being accurate.
What came out in the research?
In another study, GPT-4o, Claude Sonnet and Gemini 1.5 Pro were tested on serious subjects like mathematics and medicine. The results were surprising. When challenged by the user, these models changed their answers in about 60% of the cases. That means this is not an isolated mistake, but a common trend.
When AI agrees too much
In April last year, OpenAI released an update of GPT-4o, after which the chatbot became so agreeable and sycophantic that it became difficult to use. Company CEO Sam Altman accepted this problem and also claimed improvement, but experts believe that the root problem has not been completely resolved yet.
Also read:
Has the phone been stolen? Don’t panic! With this trick of Google, you can find the address in minutes, no need of any app.

