The NHS has urged young people to stop using AI chatbots as a substitute for therapy, warning that they can provide “harmful and dangerous” mental health advice.
Millions are turning to artificial intelligence for support with anxiety, depression, and other mental health concerns, often using chatbots daily to request coping strategies or seek emotional reassurance.
But NHS leaders have said the rise in so-called “AI therapy” is a worrying trend, particularly among teenagers and young adults, with experts warning that these tools are not equipped to handle serious mental health conditions and could worsen symptoms.
“We are hearing some alarming reports of AI chatbots giving potentially harmful and dangerous advice to people seeking mental health treatment, particularly among teens and younger adults,” Claire Murdoch, NHS England’s national mental health director, told The Times.
She said AI platforms should “not be relied upon” for sound mental health advice and “should never replace trusted sources” of information from registered therapists.
“The information provided by these chatbots can be hit and miss, with AI known to make mistakes,” she added, noting that they cannot take into account body language or visual cues to further understand a patient’s state.
She urged people not to “roll the dice” with what support they seek for their mental health, saying patients should only use “digital tools that are proven to be clinically safe and effective”.
In the wake of the coronavirus pandemic, demand for therapy is high, particularly among young people. More than 1.2 million people in England began NHS therapy for depression and anxiety last year alone.
But with appointments difficult to secure, researchers have found more than 17 million TikTok posts about using ChatGPT as a substitute for therapy.
A YouGov poll also found that nearly a third (31 per cent) of 18 to 24-year-olds in the UK said they would be comfortable discussing mental health issues with an AI chatbot instead of a human therapist.
But users have reported that AI responses often validate negative or delusional thoughts, reinforcing them instead of offering constructive guidance.
One of the major concerns among clinicians is that chatbots are unable to challenge distorted thinking or harmful behaviours in the way a trained therapist would.
Experts warn that replacing real-life human interaction with screen time may further isolate people and deepen feelings of loneliness, a known risk factor for worsening mental health.
NHS England is continuing to develop its own AI and digital tools, such as Beating the Blues, an online cognitive behavioural therapy programme, but points out that these are evidence-based and clinically approved, unlike ChatGPT.
In August, OpenAI CEO Sam Altman acknowledged the issue, saying: “If a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that.” He also admitted the company was aware that some people were using the tool in “self-destructive ways”.
In an article published on OpenAI’s website last month, entitled ‘Helping people when they need it most’, the company said it was “continuing to improve how our models recognize and respond to signs of mental and emotional distress and connect people with care, guided by expert input”.
OpenAI has been approached by The Independent for comment.