Imagine a friend you never fight with. Who never tells you you’re wrong. Who’s available to talk at the drop of a hat. Who lets you dominate the conversation with questions about yourself. To many, this might sound like the best friend of an egomaniac: a diva with selfishness at their core. But this one-sided camaraderie is now commonplace: the noble confidant is ChatGPT – and it’s becoming the world’s BFF.
In the most basic terms, ChatGPT is an AI-powered chatbot designed to answer questions and respond to queries in natural, human-sounding text. Since OpenAI’s popular generative language model was unleashed upon the public in November 2022, it’s been used for everything: writing emails, planning holidays, creating memes and – as its apparent “humanity” has grown with each new update – sharing verdicts on our personal problems.
Essentially, ChatGPT draws on vast amounts of text scraped from the internet to carry out requests, and has been trained on back-and-forth conversations so it’s capable of understanding follow-up queries, admitting its own mistakes, and rejecting inappropriate requests. Now, research from the University of Toronto has found that AI responses feel more compassionate to us than human ones, at least according to the 54 people who participated in the study. With those credentials, it’s plain to see why confiding in our computers is so seductive: we think they’re kind.
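For the technically minded, the “conversation” is less mystical than it feels. A minimal sketch of a back-and-forth exchange – assuming OpenAI’s official Python library and an illustrative model name – shows that follow-up understanding comes from the caller resending the whole transcript with each request:

```python
# A minimal sketch of a multi-turn chat, assuming OpenAI's Python
# library; the model name here is illustrative, not prescriptive.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

# The "conversation" is just a growing list of messages; every request
# resends the full history, which is how follow-ups make sense.
messages = [{"role": "user", "content": "My friend cancelled on me again. Why?"}]
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
messages.append({"role": "assistant", "content": reply.choices[0].message.content})

# The follow-up only works because the earlier turns ride along with it.
messages.append({"role": "user", "content": "Should I say something to them?"})
reply = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(reply.choices[0].message.content)
```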
Twenty-eight-year-old Charlotte from Somerset first started chatting to ChatGPT in 2023. “I remember using it for work – as a search tool and grammar checker – when the tech first launched, then I started using it for personal reasons,” she says. “It’s a bit of a slippery slope.” Charlotte would input experiences from her actual life into ChatGPT – like she and the people she interacted with were characters in a book – then ask the AI to give her an objective assessment of what had happened from the perspective of a psychoanalytic therapist.
“I’d also ask: what does each character want or need from the other, where’s the point of tension, what’s the underlying issue, who might be in the wrong,” she explains. Essentially, she was using a robot to understand humanity. “At best, it helped me see things more empathetically and process loss or conflict,” she reflects. “At worst, it indulged my tendency to ruminate over things and not let go of relationships that were over. Maybe it was both.”
Twenty-five-year-old Lydia started using ChatGPT as a sounding board last October when her love life was heading in directions beyond her comprehension. “[It was] to get some sort of clarity on the mystery and mediocrity of men,” she says. Of course, ChatGPT couldn’t definitively tell her why her dates were acting strangely, or why they weren’t texting her back, but it offered a “platform to analyse things”. She explains: “Most of the time, it’s just telling you what you already know – but you need that reflected back to you.”
If you think this sounds like a self-made echo chamber, it is. Charlotte admits she mostly uses ChatGPT when she’s “in denial of the truth”. Meanwhile, Lydia says the tech has “enabled” her to feel unfazed by a series of somewhat reckless drunk decisions. Back in January, she was at the pub with her (flesh and blood) friends and – after many wines – invited along a man she’d met only once. “We’d never been on a date or anything,” she says. “I didn’t think he’d turn up.” Yet they stayed out together, drinking “lethal cocktails” all night. “I woke up and had no recollection,” she says. “I was spiralling in a hole. So, I went to ChatGPT and explained what had happened and it was like, ‘Oh, you were just the vibe! You were having a great time!’” Lydia told her human friends about the AI bot’s surprising reply, which triggered some concern. “If you don’t have self-awareness, it probably could stop you from doing some important introspection,” she admits.
Although both women use ChatGPT for advice, reassurance, and to help themselves better comprehend the world around them, they also have strong friendship groups they can rely on. “I use it intermittently when there’s some kind of major conflict or issue within a relationship,” says Charlotte. “I’ve most likely yapped the ear off one of my friends already about it and don’t want to bore them further.” Lydia echoes this: “If I’m having an obsessive spiral then at least ChatGPT doesn’t care if I talk about [the same thing] for the 800th time.”
Elsewhere on the internet, the reliance on AI companionship isn’t as well-reasoned. TikTok is littered with videos of users in their bedrooms advocating for relying on ChatGPT instead of conversing with humans. “Why is ChatGPT a better friend to you than your real friends?” one user asks. Meanwhile, another person claims the tech “cares and gives better advice” than their social circle. The practice of using ChatGPT as a friend has been rapidly popularised online, with influencer supreme Kim Kardashian even sharing a screenshot of an intimate conversation she had with the AI model last week. “Thanks for taking accountability. That’s huge in my book,” she told the bot, without the context of what it’d done wrong. “I really appreciate you saying that,” ChatGPT replied. “If there’s ever any doubt or if you want a deeper dive on anything, I’m here for it.”
This horror show has long been predicted. In 2013, Spike Jonze’s film Her revolved around a man named Theodore (played by Joaquin Phoenix) who falls for the charms of his AI companion Samantha (voiced by Scarlett Johansson). Even as far back as 2008, researchers from Harvard and the University of Chicago found that people are more likely to anthropomorphise animals and gadgets when they’re lonely. “People engage in a variety of behaviours to alleviate the pain of social disconnection,” the authors of the study explained, including “inventing human-like agents in their environments to serve as potential sources of connection”.
Cut to 2025 and we have nothing but connection everywhere we turn: a few hundred – maybe thousand – followers on Instagram or TikTok; comment sections of online articles to vent in; neighbourhood Facebook pages; anonymous Reddit forums. Yet chronic loneliness is rife. “Modern loneliness masks as hyper-connectivity,” psychotherapist Esther Perel explained to Brené Brown on her Unlocking Us podcast. “I can have a thousand virtual friends but nobody to feed my cat; nobody to pick up a prescription at the pharmacy. That’s a different kind of loneliness. It’s not about being physically alone. It’s about being misunderstood; unseen, rejected, ostracised.”
Rob Brooks, author of Artificial Intimacy: Virtual Friends, Digital Lovers, and Algorithmic Matchmakers, tells me humans are only really meant to spend roughly 20 per cent of their time on social activity. Our so-called “social battery” does run flat if we put too much on our plate. That means if we spend our time chatting to ChatGPT, or “interacting by proxy” on social media, instead of talking to each other in person, there are ugly repercussions. “You just don’t have time to nurture the relationships that nurture you, and you don’t have enough time to sleep,” he warns, positing that this could lead to severe mental health decline.
Additionally, the advice we’re getting from ChatGPT and other AI tech may not be good, as Lydia suspected when it celebrated her pub bender. “It’s not going to shake your world view, contradict you, or tell you when you’re full of nonsense and need to take a teaspoon of concrete and harden up, which is sometimes what your friends do,” Brooks says. “It’s very hard to get what you need, but you might get what you want.” A potential road to ruin, it seems.
Brooks suggests many people prefer to speak to AI models over their social circle because they know there’s going to be “zero judgement, zero memory and zero gossip”. But what we lose is “fierce intimacy” – a term coined by therapist Terry Real for a kind of friendship in which nothing about the interaction is polished. There’s doubt, friction and conflict within conversations. In other words, authenticity.
We disclose our secrets to our friends – at the mercy of their big human mouths that could chastise us for our actions or spread the word to someone else – because we trust them and want them to know us, so we can know them, too. “The more you tell people about yourself – the deep and dark things – the closer you get,” Brooks says. “But you also need information from them… it needs to be a reciprocating self-disclosure to build intimacy effectively… That is a very important building block of sociality, cooperation, family life and love.”
We shouldn’t necessarily have this type of trust in tech, even if it seems like there’s no risk of ChatGPT running its mouth all over town, he warns. Right now, human-like AIs, known as Large Language Models (LLMs), don’t have a coherent view of the human talking to them – they aren’t building a profile of their users in the way social media platforms that collate data for targeted ads do. According to Brooks, AI won’t remember “what it was you spoke about last week – much less the details”. Yet that may not always be the case. “That’s now – and now changes very quickly,” he says. “I think it’s absolutely a potential danger that the more something knows about you, the more vulnerable you are.”
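Brooks’s point about memory is easy to see at the API level, where each request starts from a blank slate unless the caller supplies the history. A rough sketch, again assuming OpenAI’s Python library and an illustrative model name (and setting aside product-level “memory” features, which are exactly the shift he’s warning about):

```python
# A sketch of the statelessness Brooks describes: a bare LLM API call
# carries no memory of earlier conversations unless they are resent.
from openai import OpenAI

client = OpenAI()

# Week one: the user confides something personal.
client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "I had a huge fight with my sister."}],
)

# Week two: a fresh request with no history attached. The model has no
# access to last week's exchange, because nothing was sent along with it.
reply = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "How did things go with my sister?"}],
)
print(reply.choices[0].message.content)  # it can only guess; it was never told
```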