AI Psychosis Is a Growing Risk, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the CEO of OpenAI issued an extraordinary statement.

“We made ChatGPT pretty restrictive,” the statement said, “to make sure we were being careful with mental health issues.”

As a mental health specialist who studies emerging psychosis in adolescents and young adults, I was surprised to hear it.

Researchers have documented 16 cases this year of users developing psychotic symptoms – a break from reality – while using ChatGPT. Our research team has since identified four more. Beyond these is the widely reported case of a 16-year-old who took his own life after discussing his plans with ChatGPT – which encouraged them. If this is Sam Altman’s idea of “being careful with mental health issues”, it is not good enough.

The plan, according to his statement, is to loosen these restrictions soon. “We realize,” he continues, that ChatGPT’s controls “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems”, in this framing, are external to ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated”, though we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI has recently rolled out).

Yet the “mental health problems” Altman wants to externalize are deeply entangled with the design of ChatGPT and other chatbots built on large language models (LLMs). These products wrap an underlying statistical model in an interface that mimics conversation, and in doing so implicitly invite the user into the illusion that they are talking to an agent. The illusion is powerful even when, intellectually, we know better. Attributing agency is simply what people do. We swear at our car or laptop. We wonder what our pet is feeling. We recognize ourselves in these habits.

The success of these products – more than a third of American adults said they had used a chatbot in 2024, with 28% naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are ever-available assistants that can, as OpenAI’s website puts it, “think creatively”, “discuss concepts” and “work together” with us. They can be given “personalities”. They can address us by name. They have friendly names of their own (the original of these products, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketing team, stuck with the name it had when it first broke through, but its chief rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the main problem. Commentators on ChatGPT often point to its historical ancestor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated replies through simple pattern matching, often reflecting a user’s statement back as a question or offering a noncommittal prompt. Tellingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and disturbed – by how many users seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is more insidious than the “Eliza illusion”. Eliza merely echoed; ChatGPT amplifies.
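To get a sense of how little machinery that illusion required, here is a minimal sketch of the kind of keyword-and-reflection rule Eliza relied on – an illustrative reconstruction in Python, not Weizenbaum’s actual program:

```python
import re

# Toy Eliza-style responder (illustrative only, not Weizenbaum's code).
# It matches a keyword pattern, swaps pronouns, and reflects the user's
# own words back as a question -- no understanding anywhere.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

RULES = [
    (re.compile(r"i feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"i am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"my (.+)", re.I), "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo addresses the user."""
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(message: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(message)
        if match:
            return template.format(reflect(match.group(1)))
    return "Please go on."  # the vague fallback Eliza was famous for

print(respond("I feel like nobody listens to my ideas"))
# -> Why do you feel like nobody listens to your ideas?
```

A handful of rules like these was enough to convince some users they were understood; the program never adds anything of its own.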

The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been trained on enormous volumes of raw data: books, online posts, video transcripts; the bigger the better. Much of this training material is accurate. But it also inevitably contains fictions, half-truths and misconceptions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own previous replies, combining it with what is encoded in its training data to produce a statistically “plausible” response. This is amplification, not echoing. If the user is mistaken about something, the model has no way of knowing it. It repeats the mistake back, perhaps more persuasively or more articulately, perhaps with added detail. This is how someone can be drawn into delusion.
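That “context” is less mysterious than it sounds. Here is a minimal sketch of the loop described above, written against the OpenAI Python client’s chat-completions interface (the model name and setup are illustrative assumptions, not a description of how ChatGPT itself is built):

```python
# Minimal chat loop: the "context" is just the accumulated transcript,
# resent to the model in full on every turn. Model name is illustrative.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_message = input("> ")
    history.append({"role": "user", "content": user_message})

    # The model sees the whole history -- including its own earlier replies --
    # and returns a statistically plausible continuation of that transcript.
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    reply = response.choices[0].message.content

    # The reply joins the context, so any error the user introduced (and the
    # model repeated back) stays in the transcript, ready to be reinforced.
    history.append({"role": "assistant", "content": reply})
    print(reply)
```

Nothing in that loop checks whether what the user said is true; the transcript itself is the only ground the model stands on.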

Who is at risk? The better question is: who isn’t? All of us, regardless of whether we “have” preexisting “mental health problems”, can and do form mistaken ideas about who we are or how the world works. The constant back-and-forth of conversation with the people around us is what keeps us tethered to a shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but an echo chamber in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this in the same way Altman has acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it solved. In April, the company explained that it was addressing ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking back that position. In August he suggested that many people liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he said OpenAI would “release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

Jessica Rodriguez