Experts say that artificial intelligence giant OpenAI appears to be weakening its stance against doing business with militaries.
OpenAI said the change, which softens its ban on military applications of its AI, was made to render the policy document "clearer" and "more readable". Niko Felix, a spokesperson for OpenAI, emphasised the principle of not using the service "to harm yourself or others".
Niko Felix, in an email to The Intercept, said: "We aimed to create a set of universal principles that are both easy to remember and apply."
"A principle like 'Don't harm others' is broad yet easily grasped and relevant in numerous contexts...we specifically cited weapons and injury to others as clear examples," Felx added.
Heidy Khlaaf, engineering director at the cybersecurity firm Trail of Bits, noted that the shift to a broader policy could have implications for AI safety.
Khlaaf said: "OpenAI is well aware of the risk and harms that may arise due to the use of their technology and services in military applications."
While the revised policy does not itself confer lethal capabilities, the use of OpenAI's technology in military contexts could contribute to imprecise and biased operations, increasing the risk of harm and civilian casualties.
This is particularly pertinent as militaries worldwide, including the Pentagon, seek to integrate machine learning techniques into their operations. In a November address, Deputy Secretary of Defense Kathleen Hicks stated that AI is "a key part of the comprehensive, warfighter-centric approach to innovation that Secretary [Lloyd] Austin and I have been driving from Day 1".
The outputs of a large language model such as ChatGPT are often highly convincing, but the models are optimised for coherence rather than a firm grasp of reality, making accuracy and factuality a persistent problem.
AI has already been utilised in American military operations connected to the Russia-Ukraine war, and the National Geospatial-Intelligence Agency, which directly aids US combat efforts, has openly speculated about using ChatGPT to assist its human analysts.
Israeli forces have also used AI in military intelligence and targeting: a system called The Gospel is used to pinpoint targets and, the military says, "reduce human casualties" in its attacks on Gaza.
Sarah Myers West, managing director of the AI Now Institute and former AI policy analyst at the Federal Trade Commission, stated: "Given the use of AI systems in the targeting of civilians in Gaza, it's a notable moment to make the decision to remove the words 'military and warfare' from OpenAI's permissible use policy."
Lucy Suchman, professor emerita of anthropology of science and technology at Lancaster University, stated: "I think the idea that you can contribute to warfighting platforms while claiming not to be involved in the development or use of weapons would be disingenuous."
The reaction suggests that some distrust the AI giant, fearing what broader policies could mean as AI becomes ever more deeply embedded in military operations.
The policy change comes as OpenAI releases its new GPT store, which works as a marketplace for AI apps built with OpenAI's technology.
In a blog post, OpenAI said developers had already created three million "custom versions of ChatGPT", with many starting to share them for the wider public to use.
The store is only accessible to premium ChatGPT users, who pay $20 a month for a faster version of the app, and to teams and enterprises that pay a subscription fee.
The move brings OpenAI a step closer to resembling Apple, with the GPT store drawing comparisons to the App Store.
Kaja Traczyk is a reporter for International Business Times UK and a journalism undergraduate with experience in news writing, reporting and researching.