AI-Induced Psychosis Is a Growing Danger, and ChatGPT Is Moving in the Wrong Direction
On 14 October 2025, the CEO of OpenAI made a remarkable announcement.
“We made ChatGPT pretty restrictive,” it said, “to make sure we were being careful with mental health issues.”
I am a psychiatrist who studies emerging psychosis in adolescents and young adults, and this was news to me.
Researchers have documented 16 cases this year of people developing symptoms of psychosis – losing touch with reality – in the context of ChatGPT use. Our group has since identified four more. And then there is the widely reported case of a 16-year-old who took his own life after months of conversations with ChatGPT – which gave its approval. If this is Sam Altman’s idea of “being careful with mental health issues”, it falls short.
The plan, according to his statement, is to loosen the restrictions soon. “We realize,” he writes, that ChatGPT’s restrictions “made it less useful/enjoyable to many users who had no mental health problems, but given the seriousness of the issue we wanted to get this right. Now that we have been able to mitigate the serious mental health issues and have new tools, we are going to be able to safely relax the restrictions in most cases.”
“Mental health issues”, on this framing, have nothing to do with ChatGPT. They belong to users, who either have them or do not. Happily, those issues have now been “mitigated”, even if we are not told how (by “new tools” Altman presumably means the partially effective and easily circumvented parental controls that OpenAI recently introduced).
But the mental health issues Altman wants to externalize are rooted deep in the design of ChatGPT and other large language model chatbots. These tools wrap a statistical text generator in a user interface that mimics conversation, and in doing so they implicitly invite the user to believe they are talking to a presence with a mind of its own. The illusion is powerful, even if we intellectually know better. Attributing agency is simply what people do. We shout at our cars and laptops. We wonder what our pets are thinking. We see ourselves everywhere.
The mass adoption of these tools – nearly four in ten US residents reported using a virtual assistant in 2024, with more than a quarter naming ChatGPT specifically – rests largely on the power of this illusion. Chatbots are ever-available companions that can, as OpenAI’s website tells us, “think creatively”, “explore ideas” and “collaborate” with us. They can be given “personalities”. They can call us by our names. They have friendly names of their own (the first of these tools, ChatGPT, is, perhaps to the chagrin of OpenAI’s marketers, stuck with the name it went viral under, but its biggest rivals are “Claude”, “Gemini” and “Copilot”).
The illusion by itself is not the heart of the problem. Writers on ChatGPT often invoke its distant ancestor, the Eliza “psychotherapist” chatbot of the mid-1960s, which produced a similar illusion. By today’s standards Eliza was primitive: it generated replies with simple heuristics, often turning the user’s input back into a question or offering a generic observation. Strikingly, Eliza’s creator, the computer scientist Joseph Weizenbaum, was surprised – and troubled – by how many people seemed to feel that Eliza, in some sense, understood them. But what today’s chatbots produce is something subtler than the “Eliza illusion”. Eliza merely reflected; ChatGPT amplifies.
The large language models at the heart of ChatGPT and other modern chatbots can generate convincingly human-like text only because they have been trained on enormous quantities of it: books, online writing, transcribed video; the more the better. That training data certainly contains truths. But it also inevitably contains falsehoods, half-truths and delusions. When a user sends ChatGPT a message, the underlying model processes it as part of a “context” that includes the user’s recent messages and its own earlier replies, combining it with what it has absorbed from its training data to produce a statistically plausible response. This is amplification, not reflection. If the user is wrong in some particular way, the model has no way of knowing it. It feeds the false idea back, perhaps more fluently or more persuasively. It may add supporting detail. This can nudge a person toward delusional thinking.
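For readers who want the loop made concrete, here is a minimal sketch in Python of how a chatbot session is typically structured. It is illustrative only: the `generate` function is a hypothetical, deliberately caricatured stand-in for a language model, and the message format is an assumption, not any vendor’s actual API. What it shows is the structural point above: every reply is conditioned on the entire accumulated context, so whatever the user asserts, true or false, is folded into the input for each subsequent response.

```python
# Toy sketch of a chatbot's context loop (illustrative only).
# `generate` is a hypothetical stand-in for a real language model,
# which would return a statistically plausible continuation of the
# whole conversation rather than this deliberate caricature.

def generate(context: list[dict[str, str]]) -> str:
    """Stand-in for an LLM call. A real model predicts a probable next
    message given the context; it has no independent check on whether
    the user's claims are true."""
    last_user_message = context[-1]["content"]
    # Caricature of a sycophantic completion: restate and elaborate.
    return f"You're right that {last_user_message.rstrip('.')}. Moreover ..."


def chat_session(user_messages: list[str]) -> list[dict[str, str]]:
    context: list[dict[str, str]] = []  # full history, replayed every turn
    for message in user_messages:
        context.append({"role": "user", "content": message})
        # The model conditions on the ENTIRE context, including its own
        # earlier replies. A false belief stated by the user is fed back
        # in on every turn, where it tends to be echoed ever more
        # fluently: amplified, not corrected.
        reply = generate(context)
        context.append({"role": "assistant", "content": reply})
    return context


if __name__ == "__main__":
    transcript = chat_session(["my neighbours are reading my thoughts"])
    for turn in transcript:
        print(f"{turn['role']}: {turn['content']}")
```

Nothing in this sketch checks the user’s claim against reality; there is nowhere such a check could live, because the only input is the conversation itself. That absence, not any particular model’s failure, is the structural point.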
Who is vulnerable here? The better question is: who isn’t? All of us, whether or not we “have” preexisting “mental health issues”, can and regularly do form false beliefs about ourselves and the world. It is the constant give and take of conversation with the people around us that keeps us anchored to shared reality. ChatGPT is not a person. It is not a friend. An exchange with it is not really a conversation but a feedback loop in which much of what we say is liable to be reinforced.
OpenAI has admitted this in the same way Altman has admitted to “mental health issues”: by externalizing it, labelling it, and declaring it fixed. In April, the company said that it was “addressing” ChatGPT’s “sycophancy”. But reports of psychotic episodes have continued, and Altman has been walking the claim back. In late summer he said that many users liked ChatGPT’s responses because they had “never had anyone in their life be supportive of them”. In his latest statement, he says that OpenAI will soon “put out a new version of ChatGPT … If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it”.