Graphic and violent videos of women being tortured and murdered are being created using Google’s AI generator and shared across the internet, intensifying concerns that the technology is fuelling misogynistic abuse.
One YouTube account, called WomanShotA.I, uploaded dozens of such videos showing women pleading for their lives before being shot; they had garnered nearly 200,000 views since June. The account was removed only after the tech news site 404 Media alerted the platform.
The videos were generated with Google’s AI video tool Veo 3 from prompts written by human users, then shared across the internet.
Some of the videos were titled “captured girls shot in head”, “Japanese schoolgirls shot in breast” and “female reporter tragic end”.
Durham University law professor Clare McGlynn, a leading expert on violence against women and girls and on gender equality, said of spotting the channel: “It lit a flame inside of me that just struck me so immediately that this is exactly the kind of thing that is likely to happen when you don’t invest in proper trust and safety before you launch products.”

Professor McGlynn stressed that Google and other AI developers must build in stronger safeguards before releasing their tools, rather than addressing problems only as they arise, and condemned the industry’s rush to bring products to market.
She told The Independent: “Google says that this sort of material does not fall within their terms and conditions. They supposedly don’t allow material with graphic violence and sexual violence, etc, yet this was able to be produced.
“What that says to me is they simply didn’t care enough; they didn’t have enough guardrails in place to stop this being produced.”
She said it was worrying that such content could be shared on YouTube, a platform popular with young people, as she fears it could normalise the behaviour.
YouTube, which is owned by Google, said in a statement that its generative AI follows user prompts, and that the channel had been terminated for breaching its terms of service. The same channel had been removed once before.

Google’s generative AI policies state that users must not engage in sexually explicit, violent, hateful or harmful activities, or generate or distribute content that facilitates or incites violence. Google did not respond to questions about how many videos of this nature it was aware had been created using its AI.
Alexandra Deac, a researcher at the Child Online Harms Policy Think Tank, believes the issue needs to be treated as a public health priority.
She said: “The fact that AI-generated violent content of this nature can be created and shared so easily is deeply worrying. For children and young people, exposure to this material can have lasting effects, including impacting mental health and wellbeing.”
Ms Deac said new threats continue to emerge online, and “the scale of exposure to violent, sexualised, or AI-generated harmful content is too great to leave to parents alone”.
Since June, the UK’s Internet Watch Foundation has identified 17 incidents of AI-generated child sexual abuse material on a chatbot website, which it has not named.

It said users were able to interact with chatbots that simulate “abhorrent” sexual scenarios with children, some as young as seven.
Olga Jurasz, a law professor and director of the Centre for Protecting Women Online, said the videos “hugely contribute towards the maintenance of a culture of sexism, of misogyny, a world where gender stereotypes thrive and are promoted, where women are seen as inferior, as ones that can be beaten, that can be violated, whose dignity does not really count”.
Dr Jurasz said a growing number of videos depicting extreme violence against women were being created and circulated online, often encouraging similar acts of violence both on and off the internet.
“It is a huge problem when we see videos or images that are AI-generated that portray sexualised violence against women and sexualised torture,” she added.

A Department for Science, Innovation and Technology spokesperson said: “This government is determined to do everything we can to end violence against women and girls – including online-based gender violence – which is why we have set out an unprecedented mission to halve it within a decade.
“Social media sites, search engines and AI chatbots that fall under the Online Safety Act must protect users from illegal content, such as extreme sexual violence, and children from harmful content, including where it is AI-generated.
“In addition to criminalising the creation of non-consensual intimate images, we have also introduced laws which will make it illegal to possess, create or distribute AI tools designed to generate heinous child sex abuse material.”
The Online Safety Act came into effect in March this year after long delays and is designed to make the internet “safer”, especially for children. A central element of the act is the new duties it places on social media firms and the powers it gives Ofcom to enforce them.
Earlier this year, prime minister Sir Keir Starmer vowed to make Britain an AI “superpower”, promising breakthroughs that it would export to the rest of the world. There have been several advances since then, particularly in the health sector, where AI has been rapidly adopted.
Last month, AI giant Nvidia announced a £2bn investment in the UK’s AI sector as part of the “biggest-ever tech agreement” between the UK and the United States.