Fortune
Sage Lazzaro

What OpenAI's hiring of a top Palantir executive means

Marines unload Amphibious Combat Vehicles at Camp Pendleton, CA on Wednesday, October 16, 2024 following a 6-month deployment. The vehicles are part of the first ACV platoon to be used by the Marines in a deployment and to train with military partners in the Indo-Pacific. (Credit: Paul Bersebach—MediaNews Group/Orange County Register via Getty Images)

Hello and welcome to Eye on AI. In this edition…OpenAI leans into its military ambitions; Amazon goes nuclear, too; Mistral releases AI models for laptops and phones; and AI companies fall short on assessment for EU compliance. 

OpenAI has been bleeding executives and top talent, but this week, it made a big hire. Well, a couple of them. In Tuesday's newsletter, Jeremy covered the hiring of prominent Microsoft AI researcher Sébastien Bubeck. But today, I want to talk about a different hire this week: Dane Stuckey announced on X that he's joining OpenAI as its newest chief information security officer (CISO) after a decade at Palantir, where he worked on the information security team and most recently served as CISO.

For many in the tech world, any mention of Palantir raises red flags. The secretive firm—cofounded by Peter Thiel and steeped in military contracts—has garnered intense scrutiny over the years for its surveillance and predictive policing technologies, its takeover of the controversial Project Maven contract that inspired walkouts at Google, and its long-running contract with U.S. Immigration and Customs Enforcement (ICE) to track undocumented immigrants.

Taken by itself, Stuckey’s hiring could just be that—a new hire. But it comes as OpenAI appears to be veering into the world of defense and military contracts.   

OpenAI’s military moment

In January, OpenAI quietly removed language from its usage policies that prohibited the use of its products for "military and warfare." A week later, it was reported that the company was working on software projects for the Pentagon. More recently, OpenAI partnered with Carahsoft, a contractor that helps government agencies buy services from private companies quickly and with little administrative burden, in hopes of securing work with the Department of Defense, according to Forbes.

Meanwhile, Fortune’s Kali Hays reported this week that the Department of Defense has 83 active contracts with various companies and entities for generative AI work, with the amounts of each contract ranging from $4 million to $60 million. OpenAI was not specifically named among the contractors, but its work may be obscured through partnerships with other firms that are listed as the primary contractor.

OpenAI's GPT-4 model was at the center of a recent partnership between Microsoft, Palantir, and various U.S. defense and intelligence agencies. The partnership, formed in August, makes a variety of AI and analytics services available to those agencies in classified environments.

Of all the debates around how AI should and should not be used, its use for war and military purposes is easily the most controversial. Many, such as former Google CEO and prominent defense industry figure Eric Schmidt, have compared the arrival of AI to the advent of nuclear weapons. Advocacy groups have warned about the risks—especially considering the known biases in AI models and their tendency to make up information. And many have mused over the morality of autonomous weapons, which could take lives without any human input or direction.

The big picture

These types of pursuits have proven to be a major flash point for tech companies. In 2018, thousands of Google employees protested the company’s pursuit of a Pentagon contract known as Project Maven, fearing the technology they create would be used for lethal purposes and arguing they didn’t sign up to work with the military. 

While OpenAI has maintained it will still prohibit use of its technologies for weapons, we've already seen that it's a slippery slope. The company is not only allowing, but actively seeking out, military uses it forbade this time last year. Plus, there are many concerning ways models could be used to support deadly military operations without functioning directly as weapons.

There's no telling if the march of exits from OpenAI this year is related in any part to its military ambitions. While some who left cited concerns over safety, most offered only boilerplate fodder about pursuing new opportunities in their public resignations. What's clear, however, is that the OpenAI of 2024 and the foreseeable future is a very different company than the one they joined years ago.

Now, here’s more AI news. 

Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
