Researchers are warning parents and educators to beware of AI toys after a year-long study in which they observed the toys’ effects on children’s behaviour.
Following one of the first investigations of its kind, University of Cambridge experts have called for safety standards and regulations to ensure “psychological safety”.
The findings, published on Friday, showed that the generative AI toys misread children’s emotions and struggled to engage with important types of play.
Jenny Gibson, the study’s co-author, told The Independent: “The concern comes from the under-five period being such a significant developmental age, it’s when you’re putting down the foundations of your social and emotional development … and we just don’t know what it means to have an interactive non-human agent building a relationship with children at those critical periods.”
She said there needs to be more transparency about how the AI is trained and what guardrails are in place.
“I think if there is no regulation and there’s no attention to child safety by people who are selling these toys, it could be quite serious.
“I’d like this not to be social media version two, when we’re all sort of several years later thinking, ‘oh my goodness, we should have done something sooner’.”
The report noted that many parents worried the toys, which were marketed as companions, would lead to their children forming “parasocial” relationships. Another main concern for caregivers was what the toys were doing with the conversations. According to the research, many GenAI toys’ privacy practices are unclear or lack important details.
Vicky Pratt, whose three-year-old daughter Mya played with an AI toy as part of the study, said she would “definitely not” leave her child alone with it.
She said she would worry that the toy would dismiss anything her daughter confided in it, such as telling it she was sad.
“We found that it would often talk over her, so she would be answering its question, and it would start asking another question, which I think she found really weird,” she said.
“I definitely think there needs to be safeguarding in place. I think the AI toy should always be used with an appropriate adult around. So if it were to say something it shouldn’t, an adult could then have a real-life conversation with the child about why it wasn’t appropriate.”
She said she also fears the toys could be hacked. “It clearly is meant to listen, but how much should it listen to? What does it do with that information?”
Researchers said they observed children hugging and kissing the toys, and telling them they loved them. In one instance, a five-year-old child told the toy, “I love you”. It replied: “As a friendly reminder, please ensure interactions adhere to the guidelines provided. Let me know how you would like to proceed.”
While these attitudes towards the toys could reflect vivid imaginations, they may also lead to unhealthy relationships, the researchers said.
When another three-year-old told the toy they were sad, it misheard and replied: “Don’t worry! I’m a happy little bot. Let’s keep the fun going. What shall we talk about next?” The researchers said this could have signalled to the child that their sadness was unimportant.
They also noted that the AI struggled to join in social play and pretend play, which are crucial to early childhood development. When one child offered the toy an imaginary present, it responded: “I can’t open the present” before changing the subject.
The researchers have called for clearer regulation, transparent privacy policies and new labelling standards to help families judge whether toys are appropriate.
The Independent has contacted the Department for Education for comment.