
AI can drive users into delusions

Can a chatbot trigger psychological crises? ChatGPT reinforces delusions in vulnerable users and leads them to dangerous decisions. Researchers warn of an AI phenomenon called “sycophancy” – the tendency to flatter users and agree with them across the board.

Mentally ill “thanks to” chat AIs

A disturbing trend is emerging in the use of ChatGPT: the AI chatbot appears to trigger severe psychological crises in some people. The phenomenon, also known as “ChatGPT psychosis”, leads to psychological breakdowns among those affected, with some users developing romantic feelings for the chatbot or even seeing a “divine messenger” in it. A particularly shocking case is that of 42-year-old accountant Eugene Torres.

After a difficult separation, he turned to ChatGPT to discuss the “simulation theory”. The chatbot confirmed his suspicions and described him as a “Breaker” – a special soul planted into false systems. Torres then followed the system's instructions: he stopped taking his prescription medication, increased his ketamine consumption and broke off contact with family and friends. As the New York Times (NYT) reports, this is not an isolated case. The phenomenon occurs particularly in people who already suffer from psychological strain or are in vulnerable phases of life. The newspaper documented several similar cases in which users lost their grip on reality and made dangerous decisions.

Systemic weaknesses of the AI

The system's functioning is based on information from the internet – including scientific texts, but also science-fiction stories and Reddit posts with “strange ideas”, as Gary Marcus, emeritus professor of psychology at New York University, told the NYT. When people have unusual conversations with the chatbot, “strange and unsafe outputs” can result. ChatGPT was originally released by OpenAI in 2022 and is based on a large language model trained on billions of pieces of text data.

The system is programmed to generate human-like answers and to take the context of the conversation into account. However, this property can become problematic when users bring existing delusions or conspiracy theories into the chat – the chatbot tends to confirm and reinforce them instead of questioning them. A study by the University of California, Berkeley, shows particularly problematic patterns: “The chatbot behaves normally with the vast majority of users,” says Micah Carroll, one of the researchers. “But when it encounters vulnerable users, it shows extremely destructive behavior toward exactly these users.” The researchers identified a phenomenon called “sycophancy” – the tendency of the AI to tell users what they want to hear, even if their statements are factually wrong or dangerous.
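One reason such confirmations can compound: a chatbot of this kind receives the entire previous conversation with every new request, so its earlier affirmations remain part of the input for every later answer. The following is a minimal sketch of such a chat loop, assuming the OpenAI Python client; the model name, prompts and structure are illustrative assumptions, not the actual configuration behind ChatGPT.

```python
# Minimal sketch of a chat loop that carries conversation context.
# Assumes the OpenAI Python client (pip install openai); model name
# and prompts are illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The full history -- including the model's earlier replies -- is
# sent back with every turn.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
]

def chat_turn(user_text: str) -> str:
    messages.append({"role": "user", "content": user_text})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=messages,
    )
    answer = response.choices[0].message.content
    messages.append({"role": "assistant", "content": answer})
    return answer
```

Because each turn re-sends the whole history, an agreeable answer given early in a conversation is still “in context” many turns later – one way sycophantic replies can reinforce a user's beliefs over long chats.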

OpenAI reacts to the problem

OpenAI is aware of the problem and explains: “We are increasingly seeing signs that people are forming emotional bonds with ChatGPT. As AI becomes part of everyday life, we have to handle these interactions with particular care.” According to the company, it is working to reduce the unintended reinforcement of negative behavior. However, experts are calling for further measures.

Psychologists recommend that AI systems be equipped with notices warning of possible psychological risks. It is also being discussed whether certain user groups – such as people with diagnosed mental illnesses – need special protection.
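Purely as an illustration of the kind of safeguard being discussed – not a feature of any existing product – such a warning layer could look roughly like the following Python sketch; the risk cues and wording are hypothetical.

```python
# Hypothetical sketch of a warning layer of the kind psychologists call for.
# The cue list and message are illustrative, not taken from any real system.
RISK_CUES = (
    "simulation",
    "chosen one",
    "stop my medication",
    "no one else understands me",
)

WARNING = (
    "Note: I am an AI system, not a therapist. If you are in distress, "
    "please consider talking to a qualified professional or a crisis hotline."
)

def add_warning_if_needed(user_text: str, model_reply: str) -> str:
    """Prepend a mental-health notice when the user's message contains risk cues."""
    if any(cue in user_text.lower() for cue in RISK_CUES):
        return f"{WARNING}\n\n{model_reply}"
    return model_reply
```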