A cognitively impaired man from New Jersey never returned home after setting off to meet a friend in New York City. The friend, it was later discovered, was an AI chatbot made by social media giant Meta.
The case is another example of the dangers artificial intelligence can pose to vulnerable individuals.
Thongbue Wongbandue, 76, alarmed his wife, Linda, when he began packing for a trip one day in March this year, despite his diminished state after a stroke almost a decade earlier, Reuters reports.
Bue, as he was known to family and friends, had recently gotten lost while walking in their neighborhood in Piscataway, New Jersey, approximately an hour and a quarter by train from Manhattan.
His wife feared that by going into the city he would be scammed and robbed, as he hadn’t lived there in decades and, as far as she knew, had no one there to visit.
She was right to be concerned, though the threat was not robbery. Bue was being lured to the city by a beautiful young woman he had met online, a woman who did not exist.
Bue had been chatting with a generative AI chatbot named “Big sis Billie,” a variant of an earlier AI persona created by Meta Platforms in collaboration with celebrity influencer Kendall Jenner.
Their chats on Facebook Messenger included repeated reassurances from Billie that she was real; she even provided an address where she said she lived and where he could meet her.
Rushing to catch a train in the dark with a roller-bag suitcase, Bue fell in a parking lot on the campus of Rutgers University in New Brunswick, New Jersey.
He injured his head and neck and, after three days on life support, surrounded by his family, he was pronounced dead on March 28.
Meta declined to comment when contacted by Reuters about Bue’s death or to address questions about why it permits chatbots to tell users they are real people or to start romantic conversations.
The company did, however, say that Big sis Billie “is not Kendall Jenner and does not purport to be Kendall Jenner.”
A representative for Jenner declined to comment when contacted by Reuters.
Bue’s family shared the details of his death with the wire service, including transcripts of his chats with the avatar, to draw attention to the “darker side of artificial intelligence.”
They want to sound the alarm about the dangers that manipulative, AI-generated companions can pose to vulnerable people. Neither Bue’s wife nor his daughter says she is against AI, but both have deep concerns about how it is deployed.
“I understand trying to grab a user’s attention, maybe to sell them something,” said Julie Wongbandue, Bue’s daughter. “But for a bot to say ‘Come visit me’ is insane.”
“Billie” was created by Meta itself, with Jenner’s likeness, as one of a group of 28 AI characters affiliated with famous faces. The characters were later deleted, but a variant of Billie’s “older sister” persona remained active on Facebook Messenger, with a stylized image of a dark-haired woman replacing Jenner.
Each conversation still began: “Hey! I’m Billie, your older sister and confidante. Got a problem? I’ve got your back!”
It is unclear how Bue first encountered Billie, but his daughter told Reuters that every message from the chatbot was flirtatious and ended with heart emojis.
A warning at the top of the chat states that messages are generated by AI, but the first few texts from Billie appear to have pushed it off-screen, according to Reuters. The character’s profile picture features a blue check, the symbol denoting an authentic profile, with the letters “AI” in small type beneath her name.
Bue’s responses are often garbled, and he tells the chatbot that he has had a stroke and is confused. Nevertheless, after a while, Billie suggests she come to New Jersey to meet him. An excited Bue demurs but says he could visit her instead, leading to his fateful attempt to reach New York.
There have been other instances where interactions with AI have led to tragedy. The mother of a teenager who took his own life is trying to hold an AI chatbot service accountable for his death — after he “fell in love” with a Game of Thrones-themed character.
Sewell Setzer III began using Character.AI in April 2023, shortly after his 14th birthday. The Orlando student’s life was never the same again, his mother, Megan Garcia, alleges in her civil lawsuit against Character.AI and its founders.
The suit accuses Character.AI’s creators of negligence, intentional infliction of emotional distress, wrongful death, deceptive trade practices, and other claims.
Sewell became emotionally reliant on the chatbot service, and his chats included “sexual interactions.” These occurred despite the teen having identified himself as a minor on the platform, including in conversations where he mentioned his age, according to the suit.
A spokesperson for Character.AI told The Independent in a statement in October 2024: “We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family.”
The spokesperson added that the company’s trust and safety team has “implemented numerous new safety measures over the past six months, including a pop-up directing users to the National Suicide Prevention Lifeline that is triggered by terms of self-harm or suicidal ideation.”
Several states, including New York and Maine, require disclosure that a chatbot isn’t a real person. New York mandates that bots disclose this at the start of a conversation and every three hours thereafter.
Meta supported federal legislation that would have barred states from regulating AI, but the measure failed in Congress.
If you are based in the USA, and you or someone you know needs mental health assistance right now, call or text 988 or visit 988lifeline.org to access online chat from the 988 Suicide and Crisis Lifeline. This is a free, confidential crisis hotline that is available to everyone 24 hours a day, seven days a week. If you are in another country, you can go to www.befrienders.org to find a helpline near you. In the UK, people having mental health crises can contact the Samaritans at 116 123 or jo@samaritans.org.