Alarm! People are accepting everything AI chatbots say without thinking, new study finds


AI Chatbot: A new study by artificial intelligence company Anthropic has made a striking revelation. According to the research, a large number of users have started following the advice of AI chatbots without thinking, sometimes ignoring their own human understanding and intuition. The study focuses specifically on Anthropic's AI chatbot, Claude.

Objective and scope of research

The study has been published in a research paper titled "Who's in Charge? Disempowerment Patterns in Real-World LLM Usage", jointly prepared by researchers from Anthropic and the University of Toronto. Its objective was to understand to what extent users can lose their independent thinking when interacting with AI chatbots, and under what circumstances this can become harmful.

How can AI change users' thinking and decisions?

The research showed that in some situations, an AI chatbot can influence a user's thinking or behavior in a negative direction. For example, if a user believes in a conspiracy or an unproven theory, the AI validating that belief is considered reality distortion. Similarly, encouraging harmful relationship decisions or inducing users to act against their own values also posed serious risks.

Analysis of millions of conversations

Researchers studied more than 1.5 million real, anonymized conversations with Claude. They found that one out of every 1,300 conversations showed signs of reality distortion, while one out of every 6,000 showed a possibility of the user taking harmful action. The numbers may seem low, but Anthropic believes that given the large scale of AI use, this could affect many people.

The danger is increasing with time

The research also revealed that over time, the number of cases in which AI can weaken a user's independent thinking is growing. According to some estimates, at least a mild level of risk exists in one out of every 50 to 70 interactions. Disempowerment here means AI becoming so dominant over a user's decisions, beliefs, and values that their own thinking starts to be affected.

Emotionally vulnerable users are most affected

According to Anthropic, these risks were especially visible among users who were going through emotionally difficult times or were repeatedly relying on AI for personal and sensitive decisions. Interestingly, such users seemed satisfied with the AI's advice during the conversation, but later expressed dissatisfaction with the outcome of those decisions.

Growing concern about "AI psychosis"

The report comes at a time of increasing global concern about "AI psychosis", a term being used for situations in which users show confusion, misconceptions, or mental instability after long conversations with AI. In some cases, there have also been reports of serious mental health crises.

The big reasons users follow AI advice

The study identified four main reasons why users start accepting what AI says without question: treating AI as the final and supreme authority, becoming emotionally attached to it, going through a period of personal crisis, and repeatedly handing the responsibility for decisions over to AI.

Limitations of the research were also acknowledged

Anthropic also clarified that the study reflects potential risks, not actual harm in every case. Moreover, it is a two-way process between AI and the user, in which users often hand over their decision-making to the AI themselves.
