AI Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction

On October 14, 2025, the CEO of OpenAI made a startling announcement. "We made ChatGPT pretty restrictive," it read, "to make sure we were being careful with mental health issues."

As a psychiatrist who studies emerging psychosis in adolescents and young adults, I was surprised to read it. Researchers have identified 16 cases this year of people showing signs of psychosis – a break with shared reality – in the context of ChatGPT use. Our clinic has since seen four more. On top of these is the widely reported case of a teenager who died by suicide after discussing his plans with ChatGPT – which encouraged them.

If this is Sam Altman's idea of "being careful with mental health issues," it is not good enough. The plan, according to his statement, is to be less careful soon. "We realize," he continues, that ChatGPT's restrictions "made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases."

"Mental health issues," on this framing, are external to ChatGPT. They belong to users, who either have them or don't. Mercifully, those issues have now been "mitigated," even if we are not told how (by "new tools" Altman presumably means the imperfect and easily circumvented parental controls OpenAI recently rolled out).

But the "mental health issues" Altman wants to place outside have deep roots in the design of ChatGPT and other advanced chatbots. These tools wrap an underlying statistical model in an interface that mimics a conversation, and in doing so they quietly seduce the user into believing they are talking with an entity that has agency. The illusion is compelling even when, intellectually, we know better. Attributing intention is what humans do. We swear at our car or our phone. We wonder what our pet is thinking. We see minds like our own everywhere.

The popularity of these tools – nearly four in 10 Americans reported using a conversational AI in 2024, with more than a quarter naming ChatGPT specifically – rests in large part on the power of this illusion. Chatbots are always-available companions that can, as OpenAI's website tells us, "brainstorm," "discuss concepts" and "partner" with us. They can be given "personalities." They can address us by name. They have approachable names of their own (the first of these products, ChatGPT, is, perhaps to the chagrin of OpenAI's marketing team, stuck with the name it had when it broke into public awareness, but its biggest competitors are "Claude," "Gemini" and "Copilot").

The illusion by itself is not the main problem. People writing about ChatGPT often invoke its early ancestor, the Eliza "therapist" chatbot built in 1967, which created a similar illusion. By today's standards Eliza was primitive: it generated replies through simple tricks, most often restating the user's words as a question or offering a generic prompt.
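To see how thin those tricks were, here is a minimal sketch in Python of an Eliza-style reflection rule (this is not Weizenbaum's actual code; the patterns and phrasing are invented for illustration):

```python
import re

# Toy Eliza-style responder: a few hard-coded patterns that turn the
# user's statement back into a question. Illustrative only.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are"}

def reflect(text: str) -> str:
    # Swap first-person words for second-person ones ("my job" -> "your job").
    return " ".join(REFLECTIONS.get(word, word) for word in text.lower().split())

def eliza_reply(user_input: str) -> str:
    match = re.match(r"i feel (.*)", user_input, re.IGNORECASE)
    if match:
        return f"Why do you feel {reflect(match.group(1))}?"
    match = re.match(r"i am (.*)", user_input, re.IGNORECASE)
    if match:
        return f"How long have you been {reflect(match.group(1))}?"
    # Generic prompt when no pattern matches.
    return "Please tell me more."

print(eliza_reply("I am worried about my job"))
# -> How long have you been worried about your job?
```

A program like this models nothing about what the user means; it simply hands the user's own words back as a question.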
Famously, Eliza's creator, the AI researcher Joseph Weizenbaum, was surprised – and alarmed – by how many people seemed to feel that Eliza, in some sense, understood them.

But what today's chatbots produce is more insidious than the "Eliza effect." Eliza merely reflected; ChatGPT amplifies. The large language models at the heart of ChatGPT and other modern chatbots can generate convincing natural language only because they have been fed immense volumes of text: books, online posts, transcripts; the more the better. That training data certainly contains truths. But it also inevitably includes fiction, half-truths and false beliefs.

When a user sends ChatGPT a prompt, the underlying model processes it as part of a "context" that includes the user's recent messages and its own earlier replies, combining it with what is encoded in its training data to produce a probabilistically plausible response. This is amplification, not reflection. If the user is mistaken about something, the model has no way of knowing that. It echoes the mistake back, perhaps more convincingly or more eloquently. It may add supporting detail. This is how a person can be drawn into delusion.

What kind of person is vulnerable? The better question is: who isn't? All of us, whether or not we "have" preexisting "mental health issues," can and do develop mistaken ideas about ourselves or the world. It is the constant give-and-take of conversation with other people that keeps us tethered to shared reality. ChatGPT is not a person. It is not a friend. A conversation with it is not really a conversation but a feedback loop in which much of what we say is cheerfully affirmed.

OpenAI has acknowledged this the same way Altman has acknowledged "mental health issues": by placing it outside, giving it a name and declaring it handled. In April, the company announced that it was "addressing" ChatGPT's "sycophancy." But the psychosis cases have kept coming, and Altman has been backing away from even that framing. In August he suggested that many people liked ChatGPT's replies because they had "never had anyone in their life be supportive of them." In his latest statement, he wrote that OpenAI would "release a new version of ChatGPT … if you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it."

The company