Meta, the parent company of Facebook and Instagram, announced a new tool to allow parents to see what their children are discussing with its AI bots. While parents are already given alerts if their children engage with topics like suicide or self-harm, the new tool will give them a more detailed overview of their children’s AI discussions.
Beginning on April 23, parents using the supervision tools offered by Facebook, Messenger, and Instagram will have access to an “Insights” tab. One of the options within the tab is labeled “Their AI Interactions” and provides a list of topics their children have discussed with Meta’s chatbots over the previous seven days.
The topics are broad categories such as school, travel, writing, entertainment, lifestyle, and health and wellbeing, each with its own sub-topics, the company said.
Sub-topics under health and wellbeing, for example, might include mental health or physical health, while lifestyle might list fashion or food.
To use the Insights tab, parents will need to ensure their children are using Teen accounts, which are available on Meta’s platforms, PC Mag reports. The new tool will be available for parents in the U.S., U.K., Australia, Canada, and Brazil. The company says it will roll out a global version of the tool in the coming weeks.

The new tool comes on the heels of a lawsuit that saw Meta ordered to pay $375 million for failing to block child exploitation on its apps.
Meta has also announced the creation of an AI Wellbeing Expert Council, which it describes as a “group of experts who will provide ongoing input on our AI experiences for teens, to help make sure they continue to be safe and age-appropriate.”
Company employees working on AI projects will reportedly have regular meetings with the council to discuss updates to its features and to hear feedback on its products.
The safety and health of children on social media has become a standout issue in recent months.
In March, both Meta and Google were found negligent for their roles in contributing to the depression and anxiety of a woman who sued the companies, claiming their products were addictive and had kept her compulsively using them since she was a small child.
A court in California awarded her $6 million. The ruling marks the first time social media companies have been held liable for the ways their products affect individuals, especially children and teenagers.
The jury determined that Meta’s and Google’s apps (in Google’s case, YouTube) were designed to be addictive and that appropriate measures to protect younger users were not put in place.



