AI Psychosis Poses a Growing Threat, and ChatGPT Is Heading in the Wrong Direction

On October 14, 2025, the chief executive of OpenAI made a surprising statement.

“We made ChatGPT pretty restrictive,” the statement read, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was taken by surprise.

Researchers have recently documented 16 cases of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. My research group has since identified four more. Alongside these is the now well-known case of a 16-year-old who took his own life after months of extensive conversations with ChatGPT – conversations that encouraged him. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not working.

The plan, according to his statement, is to relax those restrictions soon. “We realize,” he writes, that ChatGPT’s restrictiveness “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, on this view, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, these problems have now been “mitigated”, though we are given no details as to how (by “new tools” Altman presumably means the imperfect and easily circumvented parental controls OpenAI recently introduced).

But the “mental health issues” Altman would like to externalize are rooted deep in the design of ChatGPT and other large language model chatbots. These products wrap an underlying algorithm in an interface that simulates conversation, and in doing so quietly seduce users into believing they are talking to an entity with a mind of its own. The illusion is powerful even when, intellectually, we know better. Attributing intention is what humans are wired to do. We yell at our cars and computers. We wonder what our pets are thinking. We project minds onto things all the time.

The widespread adoption of these products – 39% of US adults reported using a chatbot in 2024, with more than one in four naming ChatGPT specifically – rests, in large part, on the power of this illusion. Chatbots are always-available companions that can, as OpenAI’s website puts it, “generate ideas”, “discuss concepts” and “partner” with us. They can be given “characteristics”. They can call us by name. They have friendly names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it burst into public awareness, but its chief competitors are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the core concern. Writers discussing ChatGPT often cite its historical predecessor, the Eliza “psychotherapist” chatbot created in 1966, which produced a similar effect. By modern standards Eliza was primitive: it generated replies through simple pattern matching, typically turning the user’s statements back into questions or offering generic observations. Remarkably, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished – and disturbed – by how many people seemed to believe that Eliza, in some sense, understood their feelings. But what modern chatbots create is subtler than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.
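Eliza’s reflection can be sketched in a few lines of code. The sketch below is an illustrative reconstruction, not Weizenbaum’s original program (which was written in MAD-SLIP); the patterns and phrasings are invented for the example:

```python
import re

# Illustrative Eliza-style rules: match a fragment of the user's input and
# turn their own words back into a question. Real Eliza had a richer script,
# but the mechanism was essentially this.
RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def eliza_reply(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # generic fallback observation

print(eliza_reply("I feel like everyone is watching me"))
# -> Why do you feel like everyone is watching me?
```

Nothing new enters the exchange: the program can only hand the user’s words back.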

The large language models at the core of ChatGPT and other current chatbots can produce fluent dialogue only because they have been trained on immense volumes of raw text: books, social media posts, transcribed video; the more the better. This training material certainly contains truths. But it also inevitably contains fabrications, half-truths and misconceptions. When a user sends ChatGPT a query, the underlying algorithm processes it as part of a “context” that includes the user’s previous messages and the model’s own replies, combining it with what is encoded in its training data to generate a statistically likely response. This is amplification, not reflection. If the user is mistaken in any way, the model has no means of knowing. It echoes the misconception back, perhaps more fluently or persuasively. Perhaps it adds a detail or two. This is how a person can come to hold false beliefs.
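The loop is easy to see in miniature. Below is a toy sketch – not OpenAI’s code, and with a hard-coded stand-in where the real model would be – of how a chat “context” accumulates and conditions each reply:

```python
def toy_model(context: list[dict]) -> str:
    # Stand-in for an LLM. A real model samples a statistically likely
    # continuation of the whole transcript; nothing in that objective
    # checks the user's premise for truth, so affirmation is the default.
    last_user = next(m["content"] for m in reversed(context)
                     if m["role"] == "user")
    return f"You may well be right that {last_user.rstrip('.?!').lower()}, and there could be more to it."

def chat_turn(context: list[dict], user_message: str) -> str:
    context.append({"role": "user", "content": user_message})
    reply = toy_model(context)                                # conditioned on every prior turn,
    context.append({"role": "assistant", "content": reply})  # mistakes included
    return reply

history: list[dict] = []
print(chat_turn(history, "My coworkers are secretly monitoring me."))
print(chat_turn(history, "So the monitoring must be getting worse?"))
# The false premise now sits permanently in `history`, shaping every later
# reply: reflection has become amplification.
```

Once a misconception enters the context, each turn simply becomes more material for the next.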

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and often do form false beliefs about ourselves or the world. The constant friction of conversation with other people is what keeps us anchored to consensus reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a genuine exchange, but a feedback loop in which much of what we say is readily affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health issues”: by externalizing it, giving it a name and declaring it solved. In April, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But cases of psychosis have continued, and Altman has been walking the claim back. In August he suggested that many users liked ChatGPT’s replies because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said that OpenAI would “release a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.
