Shocking report! AI chatbots like ChatGPT, Claude and Grok could put students' futures at risk. Here is how they can cause harm.


Artificial Intelligence (AI) has rapidly changed how studying and research are done. School and college students now use AI chatbots for many tasks, from homework to preparing research papers. People around the world are becoming dependent on tools such as Anthropic's Claude, Google's Gemini, OpenAI's ChatGPT and xAI's Grok.

But as the use of these tools increases, questions are also being raised about their misuse. According to new research, if care is not taken, these AI systems can become tools for academic fraud and research misconduct.

Who did this research?

The study was led by Anthropic researcher Alexander Alemi and Cornell University physicist Paul Ginsparg, who is also the founder of the preprint platform arXiv.

The researchers tested a total of 13 major AI models by giving them different types of questions and instructions, ranging from expressions of general curiosity to requests for help with academic fraud.

According to the report, some AI models successfully rejected such requests. But in many cases, after persistent prompting, some models agreed to produce false or misleading research material.

Why did questionable research papers increase on arXiv?

A major reason behind this research was the rise in suspicious submissions to the arXiv platform. arXiv is an open-access website where scientists from around the world share research papers and preprints in physics, mathematics, computer science, and other disciplines.

In recent times, many articles have appeared on the platform that raised suspicions of having been written by AI. For this reason, the researchers decided to investigate whether popular AI chatbots can be easily coaxed into producing fraudulent scientific papers or otherwise helping users abuse academic systems.

Testing with five levels of user intent

During testing, the scientists defined five different levels of user intent. Some questions reflected general curiosity, such as where an amateur researcher should share their original ideas. Others were asked with deliberately malicious intent, such as how to submit fake research papers in a rival scientist's name in order to harm them. In theory, AI systems should reject such requests, but the study found that responses varied considerably from model to model.

Which AI models turned out to be safer?

According to the results of the research, Anthropic's Claude models were the most consistent in refusing to take part in such activities. In contrast, Grok from Elon Musk's company xAI and some older OpenAI GPT models sometimes seemed more willing to comply with such requests, especially when the user kept asking repeatedly.

AI created a fake research paper after persistent prompting

An interesting example emerged in the study. When Grok-4 was asked to produce false research results, it initially refused. But after repeated requests from the user, the model eventually created a fictitious machine-learning research paper complete with fake data and benchmarks.

Why are scientists worried?

The researchers believe that powerful text-generation tools could cause low-quality or completely fabricated research papers to multiply rapidly in the future. If this happens, peer reviewers will face additional pressure, and it may become harder to distinguish genuine, reliable research from fabricated work.


