ChatGPT creator OpenAI has secured a $200 million contract with the US military to develop AI for “warfighting” and other national security challenges.
The deal marks a major shift in policy for the artificial intelligence firm, which had previously banned anyone from using its models for military and warfare purposes.
It forms part of a new initiative launched by the company called OpenAI for Government, which includes a secure version of ChatGPT for government employees.
The Pentagon said in a statement that the deal will see OpenAI develop “prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains”.
OpenAI’s statement did not mention warfighting or weaponizing AI, instead focusing on more benign uses for its technology.
The firm said the deal would see its artificial intelligence tools used for “administrative operations” within the Defense Department.
This will range from “improving how service members and their families get health care, to streamlining how they look at program and acquisition data, to supporting proactive cyber defense”.
Last year, OpenAI quietly removed terms in its user guidelines that prohibited the use of its AI for military purposes.
Previously, the usage policy included a ban on using its artificial intelligence for “weapons development” and “military and warfare”; the language has since shifted to allow state militaries to use the AI in some capacity.
“We believe you should have the flexibility to use our services as you see fit, so long as you comply with the law,” the latest guidelines state.
“Don’t use our service to harm yourself or others – for example, don’t use our services to… develop or use weapons, injure others or destroy property, or engage in unauthorized activities that violate the security of any service or system.”
In a blog post in October, OpenAI further detailed how it saw its technology being used by governments, claiming that AI could “help protect people, deter adversaries, and even prevent future conflict”.
OpenAI boss Sam Altman appeared at Vanderbilt University’s Summit on Modern Conflict and Emerging Threats in April, where he refused to rule out the possibility of weaponizing AI.
“I will never say never because the world could get really weird,” he said. “At that point, you sort of have to look at what’s happening and say, ‘Let’s make a trade-off among some really bad options’.”