Cybercriminals have been struggling to use artificial intelligence (AI) to benefit their work, suggests new research that analysed around 100 million posts from underground and dark web cybercrime communities.
The researchers found that most cybercriminals lack the skills or resources to use such innovation in their criminal activities.
The team of researchers from the universities of Edinburgh, Strathclyde and Cambridge analysed discussions from the CrimeBB database, which contains more than 100 million posts scraped from dark web and underground cybercrime forums.
A combination of machine learning tools and manual sampling techniques was used to analyse the conversations.
The researchers searched for posts discussing how cybercriminals, often dubbed hackers, were experimenting with AI technologies from November 2022 onwards, the month in which ChatGPT was released.
They found that rather than lowering the skill barrier for committing cybercrime, AI coding assistants are mostly proving useful to those who are already skilled, as significant skills and knowledge are needed to use the AI tools effectively.
The researchers found that AI was used most successfully for running social media bots that conduct misogynistic harassment and make money from fraud, and for hiding patterns that cybersecurity defenders can often detect.
However, they noted that, reassuringly, guardrails on the major chatbots are having a significant impact in reducing harm.
Dr Ben Collier, senior lecturer in digital methods at the University of Edinburgh’s School of Social and Political Science, said: “Cybercriminals are experimenting with these tools, but as far as we can tell it’s not delivering them real benefits in their own work.
“Our message to industry is: don’t panic yet.
“The immediate danger comes from companies and members of the public adopting poorly secured AI systems themselves, opening them up to catastrophic new attacks that can be performed by cybercriminals with little effort or skill.”
The researchers found that many of those in cybercrime communities were panicking about potentially losing their “day jobs” in IT due to the impact of AI in mainstream software industries.
This could then drive them and others towards more cybercriminal activity, the research suggests.
The report authors warn that the main risks to industry are likely to come from adopting poorly secured agentic AI systems – a form of AI that can act autonomously, carrying out specific tasks and making decisions.
They also warn of risks from insecure “vibecoded” products – where computer code has been written using AI – released by legitimate industry.
The findings have been peer reviewed and will be presented at the Workshop on the Economics of Information Security in Berkeley, US, in June.