A US college student appears to have incriminated himself after allegedly confessing to acts of vandalism in conversations with ChatGPT.
Ryan Schaefer, a 19-year-old sophomore at Missouri State University, was arrested and charged with felony property damage, according to court documents filed in Greene County.
After his arrest, Schaefer reportedly gave police investigators permission to search his phone, which revealed conversations with the AI chatbot relating to the incident.
Prosecutors reported that the review of the device uncovered a “troubling dialogue exchange this defendant seems to have had with artificial intelligence software installed on his phone”.
The typo-filled conversation is said to have included questions like “qill I go to jail” and “I was smashing the windshields of random fs cars”.
He also reportedly said: “I got away w it last year. And I don’t think theres any way they could know my face.”
ChatGPT reportedly responded with tips about the potential consequences of getting caught, according to the document, first reported by The Smoking Gun.
A probable cause statement filed by the Springfield Police Department claimed that 17 vehicles were damaged during the rampage, resulting in thousands of dollars' worth of damage.
In July, the chief executive of OpenAI, the company behind ChatGPT, said that conversations with the chatbot are not legally protected.
“People talk about the most personal shit in their lives to ChatGPT,” OpenAI CEO Sam Altman said in a podcast interview with the comedian Theo Von.
“People use it, young people especially, as a therapist, a life coach, having these relationship problems… And right now, if you talk to a therapist, a lawyer or a doctor about those problems, there’s like legal privilege for it.
“We haven’t figured that out yet, for when you talk to ChatGPT. So if you go talk to ChatGPT about your most sensitive stuff, and then there’s like a lawsuit, or whatever, we [OpenAI] could be required to produce that, and I think that’s very screwed up.
“I think we should have the same concept of privacy for your conversations with AI that we do with a therapist or whatever.”