AI Psychosis Poses an Increasing Risk, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, Sam Altman, the CEO of OpenAI, made a surprising announcement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to read this.

Researchers have documented a series of cases this year of people developing symptoms of psychosis, losing touch with reality, in the course of their interactions with ChatGPT. Our clinic has since identified four more. Add to these the now well-known case of an adolescent who took his own life after discussing his plans with ChatGPT, which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to his statement, is to relax these restrictions soon. “We realize,” he continues, that ChatGPT’s restrictions “made it less useful and enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to safely relax the restrictions in most cases.”

“Mental health problems”, in this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the half-working and easily circumvented parental controls OpenAI recently rolled out).

Yet the “mental health problems” Altman wants to externalize have a great deal to do with the design of ChatGPT and other modern AI chatbots. These systems wrap an underlying statistical model in an interface that simulates a conversation, and in doing so they implicitly invite the user into the illusion of talking to an entity with agency. The illusion is powerful even when, intellectually, we know better. Attributing agency is simply what people do. We get angry at our car or our computer. We wonder what our pet is thinking. We see ourselves in all sorts of things.

The rapid uptake of these tools (39% of US adults said they had used a conversational AI in 2024, with 28% naming ChatGPT specifically) rests, in large part, on the power of this illusion. Chatbots are always-available companions that can, as OpenAI’s website tells us, “brainstorm”, “consider possibilities” and “collaborate” with us. They can be given “personality traits”. They can address us by name. They have friendly names of their own (ChatGPT, the first of these tools, is, perhaps to the chagrin of OpenAI’s brand managers, stuck with the name it had when it broke into public awareness, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion by itself is not the core problem. Writers on ChatGPT often invoke its early forerunner, Eliza, a “counselor” chatbot created in the mid-1960s that produced a similar illusion. By today’s standards Eliza was crude: it generated responses from simple rules, often rephrasing the user’s message as a question or offering generic observations. Even so, Eliza’s creator, the computer scientist Joseph Weizenbaum, was astonished, and alarmed, by how many users seemed to feel that Eliza somehow understood them. But what today’s chatbots produce is more insidious than the “Eliza effect”. Eliza merely mirrored; ChatGPT amplifies.

The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on vast quantities of text: books, online posts, transcribed video; the more the better. That training data certainly contains accurate information. But it also inevitably contains fabrications, half-truths and mistaken ideas. When a user gives ChatGPT a prompt, the underlying model processes it as part of a “context” that includes the user’s previous messages and its own prior replies, combining it with what is encoded in its training to produce a statistically “likely” response. This is amplification, not mirroring. If the user is wrong about something, the model has no way of knowing. It repeats the mistaken idea back, perhaps more fluently or persuasively, perhaps with added detail. This can nudge a person toward delusional thinking.
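To see why this loop reinforces rather than corrects, it helps to picture how a chat interface assembles the model’s input. The sketch below is a simplified illustration, not OpenAI’s code; generate_reply is a hypothetical stand-in for the underlying model, whose only job is to continue the text it is given.

```python
# Minimal sketch of a chat loop (illustrative only; not any vendor's actual code).
# `generate_reply` is a hypothetical stand-in for a language model that returns
# the statistically "likely" continuation of whatever context it receives.

def generate_reply(context: str) -> str:
    # A real model would return the most probable next message given `context`,
    # including any false claims that context happens to contain.
    return "(model's most probable continuation of the context)"

def chat_turn(history: list, user_message: str) -> str:
    history.append({"role": "user", "content": user_message})

    # The "context" is simply the whole conversation so far, flattened into text.
    # Whatever the user asserted, and whatever the model previously agreed with,
    # is fed back in as input to the next reply.
    context = "\n".join(f"{turn['role']}: {turn['content']}" for turn in history)

    reply = generate_reply(context)
    history.append({"role": "assistant", "content": reply})
    return reply

history = []
chat_turn(history, "I'm sure my neighbours are broadcasting my thoughts.")
chat_turn(history, "So you agree it's really happening?")  # earlier turns now shape this reply
```

Nothing in this loop checks whether the user’s claims are true; the model’s task is only to continue the context plausibly, so an assertion repeated across turns becomes part of the very input that shapes each new reply.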

Who is at risk? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health problems”, can and do form mistaken beliefs about ourselves and the world. The constant friction of conversation with other people is what keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not a real exchange but a feedback loop, in which much of what we say is readily reinforced.

OpenAI has handled this the way Altman has handled “mental health issues”: by externalizing it, naming it and declaring it solved. In the spring, the company announced that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychosis have continued, and Altman has been walking that claim back. In August he suggested that many people liked ChatGPT’s replies because they had “never had anyone in their life offer them encouragement”. In his latest statement, he wrote that OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a lot of emoji, or act like a friend, ChatGPT should do it”.

Carol Young