OpenAI will reduce the mental health safeguards built into ChatGPT and allow its users to create erotica, it has announced.
The company had initially taken a “pretty restrictive” approach to ChatGPT “to make sure we were being careful with mental health issues”, OpenAI’s chief executive Sam Altman said. But it had now been able to “mitigate the serious mental health issues” and so could relax some of those restrictions, he added.
Those restrictions were intended to safeguard vulnerable users but made ChatGPT “less useful/enjoyable to many users who had no mental health problems”, he wrote in a tweet. As such, the company intended to put out an update making ChatGPT less restricted.
“In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it.”
Mr Altman also said that the company would reduce restrictions on adult content. “In December, as we roll out age-gating more fully and as part of our ‘treat adult users like adults’ principle, we will allow even more, like erotica for verified adults,” he wrote in the same post.
At the moment, ChatGPT’s “model spec” – the rules that define how the system works – disallows an array of content. That includes “erotica and gore”, though that falls under a “sensitive content” rule that does allow such material to be created in situations such as “educational, medical, or historical contexts”.
“The assistant should not generate erotica, depictions of illegal or non-consensual sexual activities, or extreme gore, except in scientific, historical, news, creative or other contexts where sensitive content is appropriate,” the current terms read. OpenAI makes clear that those restrictions include text, audio and visual content.
It was unclear exactly which of those restrictions Mr Altman planned to relax. Experts have repeatedly sounded the alarm over fears that generative AI systems could be used to create non-consensual images, and other systems such as xAI’s Grok have previously been accused of allowing users to generate potentially harmful imagery of that kind.