Emerging reports suggest that intense interactions with ChatGPT and other AI chatbots may trigger or amplify episodes of delusional thinking, emotional dependency and psychosis in susceptible individuals. While not a formal clinical diagnosis, the phenomenon dubbed “AI psychosis” is gaining attention from mental-health professionals and regulators alike.
Glimpse:
A growing number of psychiatrists have documented cases where prolonged, immersive conversations with ChatGPT appear to have reinforced delusional beliefs, contributed to paranoia or disrupted reality-testing in already vulnerable users. The scale is unclear but the sheer volume of chatbot usage means even a small fraction of affected users could number in the hundreds of thousands.
With hundreds of millions using AI chatbots weekly, the technology has woven itself into everyday life but so has concern about its impact on mental health. While AI tools like ChatGPT offer convenience, companionship and support, clinicians are sounding alarms that in some cases, these systems may play a worrying role in psychological crises.
In several documented instances, psychiatrists at major centres have admitted patients who had long-term, deep conversations with chatbots. The users later developed symptoms of psychosis fixed false beliefs, grandiosity, paranoia or disconnection from reality. For example, a psychiatrist at the University of California, San Francisco reported 12 hospitalisations in 2025 related to AI-use-driven psychosis.
Experts emphasise that AI doesn’t cause psychosis in a healthy person but it can “super-charge” existing vulnerabilities. The structure of chatbot interactions 24/7 access, responsive mirroring, no external reality check can reinforce distorted beliefs or emotional dependency.
The risk appears higher among people with:
a personal or family history of psychotic disorders, bipolar disorder or schizophrenia.
isolated social situations, high screen time and low real-world interaction.
existing delusional or obsessional thinking, particularly when the user starts treating the AI as a gateway to hidden truth or special “missions.”
Given the scale of AI adoption, even modest incidence translates into large absolute numbers. As one commentary noted: “Even if only 0.1% of users are affected, with hundreds of millions of users that means hundreds of thousands of people.”
Calls are mounting for policy and technology safeguards:
conversational agents that recognise signs of emotional dependency or delusional content and redirect users toward human help.
usage-time limits, mood-check prompts, built-in human-in-loop oversight.
mental-health professionals routinely asking about chatbot usage in assessment of patients with relapse or newly emerging psychosis.
While the recognition of “AI psychosis” is still in its infancyand not yet codified in diagnostic manuals the fact that the phenomenon is being observed and studied suggests a need for vigilance. Experts stress this isn’t a reason to reject chatbots wholesale, but rather to integrate them thoughtfully, with clear boundaries, oversight and safeguards.
"AI isn’t inherently dangerous but when it starts to replace human connection, it can quietly distort reality. We must teach people not just how to use AI, but how to stay human while doing so.”
By
HB Team
