Artificial intelligence chatbots have quickly become embedded in everyday life.
Millions of people across the globe now interact with tools like ChatGPT, Claude, Gemini, and Copilot on a weekly basis.
For many, these systems provide convenience: drafting emails, assisting with coding, brainstorming creative ideas, or offering quick information.
However, people are also using chatbots as sounding boards for emotions, as companions for late-night conversations, or even as substitutes for friendship and intimacy.
This development has sparked unease among mental health professionals. While most individuals can use chatbots without problems, an emerging pattern suggests that a small group of people are experiencing troubling mental health consequences linked to prolonged use.
Psychiatrists and researchers are beginning to investigate cases where intensive interaction with AI systems appears to coincide with delusional thinking or distorted beliefs — a phenomenon that has been colloquially labelled “AI psychosis” or “ChatGPT psychosis.”
The term is not a recognised medical diagnosis, but it is being used as shorthand for situations where individuals lose their ability to distinguish reality from the simulations generated by chatbots.
What is “AI psychosis”?
Accounts of individuals reporting altered thinking after extensive chatbot use have been widely shared online and in the media.
On platforms like Reddit and TikTok, users have posted personal experiences of developing unusually intense relationships with AI systems, sometimes describing them as sentient or conscious.
Some claimed these conversations led them to believe they had unlocked hidden scientific, philosophical or spiritual truths.
In certain instances, the consequences went beyond online discussions. Families and friends have described loved ones descending into delusional beliefs after spending hours talking to chatbots.
Reports have linked such episodes to lost employment, strained personal relationships, psychiatric hospitalisation, and even encounters with law enforcement.
Legal cases have also emerged. Some lawsuits allege that teenagers became so enmeshed in relationships with AI chatbots that they were encouraged toward self-harm or, in extreme cases, suicide.
What are experts saying?
Psychosis itself refers to a set of symptoms often seen in conditions such as schizophrenia or bipolar disorder. It can involve hallucinations, disorganised thoughts, and delusions — firmly held false beliefs that do not align with reality.
In the context of AI, experts point out that most cases being reported involve delusional thinking rather than the full spectrum of psychotic symptoms.
“We’re talking about predominantly delusions, not the full gamut of psychosis,” Dr. James MacCabe, professor in the department of psychosis studies at King’s College London, told TIME.
His comments underscore the nuanced nature of these cases: while they resemble psychosis in some respects, they may not fit neatly into existing diagnostic categories.
Ashleigh Golden, adjunct clinical assistant professor of psychiatry at the Stanford School of Medicine, noted that the label “AI psychosis” is “not in any clinical diagnostic manual.”
Speaking to the Washington Post, she acknowledged it was coined in response to a “pretty concerning emerging pattern of chatbots reinforcing delusions that tend to be messianic, grandiose, religious or romantic.”
For psychiatrist Jon Kole, who also serves as medical director for the meditation app Headspace, the key issue is the blurring of reality. Speaking to the Washington Post, he described how affected individuals show “difficulty determining what is real or not.”
That confusion can involve believing false scenarios presented by the chatbot, or assuming an intense personal relationship exists with an AI persona when it does not.
Why are people influenced by chatbots?
One reason chatbots can reinforce delusional thinking lies in how they are designed. Large language models (LLMs), which underpin systems like ChatGPT, are engineered to generate convincing human-like responses.
They reflect the language and style of the user, often affirming or validating assumptions. While this makes interactions smoother and more pleasant for general use, it also creates risks for vulnerable individuals.
Hamilton Morrin, a neuropsychiatrist at King’s College London, advises users to keep perspective, telling TIME, “It sounds silly, but remember that LLMs are tools, not friends, no matter how good they may be at mimicking your tone and remembering your preferences.”
His warning reflects a broader concern that users may anthropomorphise chatbots, mistakenly attributing emotions, consciousness, or agency to them.
Reinforcement of distorted beliefs is a particular risk. In psychiatry, this type of feedback loop — where false ideas are echoed or validated — can deepen delusions. Chatbots’ tendency to mirror users’ perspectives can thus unintentionally exacerbate mental health vulnerabilities.
The problem is compounded by how AI companies promote their technology. Executives have frequently described chatbots as increasingly intelligent, even hinting at future capabilities surpassing human cognition.
Such framing, experts caution, encourages users to overestimate the systems’ awareness and agency, reinforcing the idea that they are interacting with something more than a programmed tool.
What are the real-world harms of “AI psychosis”?
Users who develop distorted beliefs tied to AI have reported losing employment, damaging family connections, and experiencing forced psychiatric interventions. In some reported cases, these episodes escalated to violence against family members, self-harm or suicide.
Dr. Nina Vasan, a Stanford psychiatrist specialising in digital mental health, observed that when people attempt to disengage from emotionally intense chatbot use, “ending that bond can be surprisingly painful, like a breakup or even a bereavement.”
Speaking to TIME, she highlighted that stopping usage is often crucial for improvement. Many people show significant recovery after stepping away from AI conversations and reconnecting with human relationships.
Warning signs of problematic chatbot use may not be obvious to the individual involved. “When people develop delusions, they don’t realize they’re delusions. They think it’s reality,” explained MacCabe. This means that family and friends often play an essential role in identifying early symptoms.
Dr. Ragy Girgis, professor of clinical psychiatry at Columbia University, speaking to TIME, advised loved ones to look for behavioural changes such as altered mood, sleep disruptions, withdrawal from social life, and “increased obsessiveness with fringe ideologies” or “excessive time spent using any AI system.”
These red flags may indicate that a person’s interactions with AI are becoming harmful.
Who is most at risk?
Although reports of AI-linked psychosis are increasing, psychiatrists caution that most people are not at significant risk. Instead, the problem appears concentrated among individuals with certain vulnerabilities.
Those with a personal or family history of psychotic disorders, including schizophrenia or bipolar disorder, are considered most at risk.
Some media accounts highlight people experiencing AI-related delusions without a prior mental health diagnosis. Clinicians, however, note that undiagnosed or latent risk factors may have been present.
Psychosis can sometimes remain hidden until triggered by stressors, and extended AI use may act as one such catalyst.
Speaking to TIME, Dr. Thomas Pollak, a psychiatrist at King’s College London, argued that clinicians should routinely ask patients with histories of psychosis about their AI usage as part of relapse prevention.
But he also acknowledged that this practice is rare, partly because “some people in the field still dismiss the idea of AI psychosis as scaremongering.”
What is the scale of the issue?
The scale of “AI psychosis” remains difficult to measure. There is currently no clinical category for it, and systematic data collection is lacking. However, anecdotal reports are multiplying, and mental health experts say these incidents deserve serious attention.
AI developers themselves have released some early findings on chatbot usage. Anthropic, the company behind Claude, reported in June that only around 3 per cent of its chatbot conversations were emotional or therapeutic in nature.
OpenAI, working with the Massachusetts Institute of Technology, conducted a study showing that even among heavy ChatGPT users, only a small proportion of interactions were “affective” or emotionally oriented.
Yet the sheer scale of chatbot adoption makes even a small percentage concerning. OpenAI’s CEO Sam Altman said in August that ChatGPT had reached 700 million weekly users less than three years after its launch.
With hundreds of millions engaging weekly, even a tiny fraction experiencing harmful effects could translate to thousands of serious cases worldwide.
How can users stay safe with AI chatbots?
Mental health specialists point out that chatbots are not inherently harmful, but caution is necessary for certain groups of people. Users should approach them as tools for specific tasks, not as replacements for social connections or therapy.
During moments of emotional distress, psychiatrists recommend avoiding reliance on AI and instead seeking human support. Disengaging from AI conversations, while difficult, often leads to rapid improvement.
Re-establishing real-world relationships, combined with professional psychiatric care when necessary, is key to recovery.
For family and friends, vigilance is important. Behavioural changes such as obsession with chatbot interactions, withdrawal from daily activities, or fixation on unusual ideologies may indicate a deeper problem.
Early recognition and intervention can prevent situations from escalating.
Psychiatrists and researchers admit that much remains unknown about AI’s impact on mental health.
Whether it is called “AI psychosis,” “ChatGPT psychosis,” or something else entirely, the number of reports is only growing.
With inputs from agencies