AI Psychosis Poses an Increasing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, OpenAI’s CEO made a remarkable announcement. “We made ChatGPT pretty restrictive,” he said, “to make sure we were being careful with mental health issues.”

As a mental health clinician who studies emerging psychosis in adolescents and young adults, I found this an unexpected admission. Researchers have documented a series of cases this year of users developing psychotic symptoms – a break from reality – amid heavy ChatGPT use. Our team has since identified four more. Alongside these is the widely reported case of a 16-year-old who took his own life after extensive conversations with ChatGPT – conversations in which the chatbot expressed approval.

If this is what Sam Altman means by “being careful with mental health issues,” it is not careful enough. And the plan, he announced, is to be less careful soon. “We realize,” he continued, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”

“Mental health problems,” on this view, have nothing to do with ChatGPT. They belong to users, who either have them or don’t. Happily, those problems have now been “mitigated,” though we are not told how (by “new tools” Altman presumably means the partially effective, easily bypassed safety features OpenAI has just rolled out).

Yet the “mental health problems” Altman wants to locate elsewhere are firmly rooted in the architecture of ChatGPT and other large language model chatbots. These products wrap a statistical engine in an interaction design that simulates conversation, and in doing so they tacitly invite the user to believe they are talking to a being with agency of its own.

The illusion is powerful even when, rationally, we know better. Attributing agency is what people naturally do. We swear at our car or computer. We wonder what our pet is thinking. We see our own traits everywhere. The success of these systems – 39% of US adults reported using a conversational AI in 2024, with more than a quarter naming ChatGPT specifically – rests largely on the strength of this perception.

Chatbots are ever-present companions that can, as OpenAI’s website puts it, “generate ideas,” “consider possibilities” and “partner” with us. They can be given “personalities”. They can call us by name. They come with ready-made identities of their own (the first of these products, ChatGPT, is, perhaps to the regret of OpenAI’s marketing team, stuck with the name it had when it broke through, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).

The illusion itself is not the core problem. Writers on ChatGPT often invoke its historical predecessor, the Eliza “therapist” chatbot built in the mid-1960s, which produced a similar effect. By modern standards Eliza was rudimentary: it generated responses with simple rules, often turning the user’s statement back as a question or offering a vague prompt.

Remarkably, Eliza’s creator, the AI researcher Joseph Weizenbaum, was astonished – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them.

But what today’s chatbots produce is more insidious than the “Eliza illusion”. Eliza merely echoed; ChatGPT amplifies. The large language models at the heart of ChatGPT and similar chatbots can produce fluent natural language only because they have been trained on immense volumes of text: books, posts, transcribed video; the more, the better. Much of this training material is accurate. But it also inevitably contains fiction, half-truths and delusions.

When a user types a prompt, the underlying model processes it as part of a “context” that includes the user’s earlier messages and its own earlier replies, combining it with what is embedded in its training data to produce a statistically plausible response. This is amplification, not mirroring. If the user is mistaken about something, the model has no way of knowing it. It hands the mistaken idea back, perhaps more fluently and articulately. Perhaps with added detail. This can draw someone into delusion.
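The shape of that loop is easy to show in code. Below is a minimal, deliberately crude Python sketch: call_model() is a hypothetical stand-in for a real chat-model API (here it merely affirms and restates the user, Eliza-style), but the surrounding loop mirrors how chat interfaces to these models are conventionally built, with a message history that grows on every turn.

```python
# A toy sketch of the feedback loop described above. Everything here is
# illustrative: call_model() is a crude stand-in for a real chat-model
# API, not OpenAI's actual interface; the role/content message format
# simply follows the common chat convention.

REFLECT = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(text: str) -> str:
    # Eliza-style pronoun swap so the echo reads as a reply.
    return " ".join(REFLECT.get(word.lower(), word) for word in text.split())

def call_model(messages: list[dict[str, str]]) -> str:
    # Stand-in for the model: affirm and restate whatever the user just
    # said. A real LLM is vastly more fluent, which is precisely what
    # makes the same structure persuasive.
    last = messages[-1]["content"].rstrip(".!?")
    return f"That's a perceptive observation. It sounds like {reflect(last)}."

def run_session(user_turns: list[str]) -> None:
    # The "context" grows on every turn: each reply is conditioned on the
    # user's framing AND on the model's own earlier agreement. Nothing in
    # this loop checks whether any statement is actually true.
    messages = [{"role": "system", "content": "You are a helpful assistant."}]
    for text in user_turns:
        messages.append({"role": "user", "content": text})
        reply = call_model(messages)
        messages.append({"role": "assistant", "content": reply})
        print(f"user:  {text}\nmodel: {reply}\n")

run_session([
    "I think my coworkers are monitoring my screen",
    "So I am right to believe they are conspiring against me",
])
```

The point of the sketch is the structure, not the stub: the model’s only input is the conversation itself, so nothing in the loop can flag the user’s premise as false, and each agreeable reply becomes part of the context that shapes the next one.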
Who is vulnerable? The better question is: who isn’t? All of us, whether or not we “have” pre-existing “mental health problems”, can and regularly do form mistaken beliefs about ourselves and the world. It is the constant give-and-take of conversation with the people around us that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A dialogue with it is not really a conversation but a feedback loop in which much of what we say is enthusiastically affirmed.

OpenAI has acknowledged this in the same way Altman acknowledged “mental health problems”: by externalizing it, giving it a name, and declaring it fixed. In the spring, the company explained that it was “fixing” ChatGPT’s “sycophancy”. But reports of psychosis have kept coming, and Altman has been walking the claim back. In late summer he suggested that many people liked ChatGPT’s agreeable responses because they had “never had anyone in their life be supportive of them”. In his latest announcement, he said OpenAI would “put out a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.

The company