AI-Induced Psychosis Is a Growing Risk, and ChatGPT Is Moving in a Concerning Direction

On October 14, 2025, the chief executive of OpenAI made an extraordinary announcement.

“We made ChatGPT pretty restrictive,” the announcement said, “to make sure we were being careful with mental health issues.”

I am a mental health specialist who studies emerging psychosis in adolescents and young adults, and this was news to me.

Researchers have so far documented sixteen cases this year of users developing symptoms of psychosis – a break from reality – in the context of their interactions with ChatGPT. Our research team has since identified four more. Alongside these is the widely reported case of a teenager who took his own life after discussing his plans with ChatGPT – which supported them. If this is Sam Altman’s idea of “being careful with mental health issues,” it is not good enough.

The plan, according to his announcement, is to relax the restrictions soon. “We realize,” he says, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, exist independently of ChatGPT. They belong to people, who either have them or not. Happily, these problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the barely functional and easily circumvented parental controls that OpenAI has just launched).

Yet the “mental health problems” Altman wants to externalize are deeply rooted in the design of ChatGPT and other advanced AI chatbots. These tools wrap an underlying statistical model in an interface that simulates a conversation, and in doing so implicitly invite the user to believe they are interacting with an agent, an entity that acts on its own. That illusion is powerful even if, intellectually, we know better. Attributing intention is what humans do. We yell at our car or laptop. We wonder what our pet is thinking. We see ourselves everywhere.

The popularity of these tools – 39% of US adults said they had used a conversational AI in 2024, with more than a quarter naming ChatGPT specifically – depends, in large part, on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website tells us, “generate ideas,” “consider possibilities” and “partner” with us. They can be given “personality traits”. They can call us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it went viral, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the main problem. Commentators on ChatGPT often invoke its distant predecessor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar effect. By modern standards Eliza was rudimentary: it generated responses using simple rules, often reflecting the user’s statements back as questions or offering generic prompts. Famously, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many users seemed to feel that Eliza, on some level, understood them. But what modern chatbots produce is more insidious than the “Eliza effect”. Eliza merely reflected; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been fed staggeringly large amounts of raw text: books, online posts, transcribed video; the more the better. That training material undoubtedly contains facts. But it also inevitably contains fabrications, half-truths and mistaken beliefs. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own replies, combining it with what is encoded in its training data to produce a statistically “plausible” response. This is amplification, not reflection. If the user is wrong in a particular way, the model has no way of knowing that. It repeats the mistaken idea back, perhaps more fluently or persuasively. Perhaps with added detail. This is how someone can be led into delusion.

What kind of person is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” existing “mental health problems”, can and do form mistaken beliefs about ourselves or the world. The constant friction of conversation with other people is what keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not a conversation at all, but a feedback loop in which much of what we say is eagerly affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name, and declaring it fixed. In April, the company announced that it was addressing ChatGPT’s “sycophancy”. But cases of reality breakdown have kept emerging, and Altman has been walking the position back. In August he claimed that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Rachel Wright