Artificial intelligence that rivals or surpasses human intelligence is “coming into view”, according to the head of ChatGPT creator OpenAI, though its benefits will not be distributed equally.
The creation of human-level AI, known as artificial general intelligence (AGI), is the core mission of OpenAI, though its development has led to concerns that such technology could pose an existential threat to humanity.
OpenAI CEO Sam Altman wrote in a lengthy blog post that AGI will be available to everyone within the next decade – but its arrival will likely bring major disruption to society and the economy.
“Systems that start to point to AGI are coming into view, and so we think it’s important to understand the moment we are in,” he wrote.
“The future will be coming at us in a way that is impossible to ignore, and the long-term changes to our society and economy will be huge.”
Among those changes could be a shift in “the balance of power between capital and labour”, according to Mr Altman, which could exacerbate wealth inequality.
Advanced AI could increasingly be used by authoritarian governments “to control their population through mass surveillance and loss of autonomy”, Mr Altman also warned, though he added that AGI’s biggest benefits will likely come in the scientific domain.
“We expect the impact of AGI to be uneven,” he wrote. “Although some industries will change very little, scientific progress will likely be much faster than it is today; this impact of AGI may surpass everything else.”
Mr Altman’s blog post comes less than a month after a former OpenAI safety researcher revealed that he had left the company amid concerns about the trajectory of AGI development.
“Honestly I’m pretty terrified by the pace of AI development these days,” Steven Adler, who joined the firm eight months before the release of ChatGPT, wrote in a series of posts on X.
“An AGI race is a very risky gamble, with huge downside. No lab has a solution to AI alignment [ensuring AI’s objectives match those of humans]. And the faster we race, the less likely that anyone finds one in time.
“Today, it seems like we’re stuck in a really bad equilibrium. Even if a lab truly wants to develop AGI responsibly, others can still cut corners to catch up, maybe disastrously. And this pushes all to speed up. I hope labs can be candid about real safety regs needed to stop this.”
AGI development and regulation will likely be discussed at the Artificial Intelligence Action Summit, taking place in Paris this week, which Mr Altman will be attending.