A hacker gained access to the internal messaging systems of artificial intelligence developer OpenAI and “stole details” of its technologies, it has been revealed.
The data breach occurred early last year, though the company chose not to make it public or inform authorities because it did not consider the incident a threat to national security.
Sources close to the matter told The New York Times that the hacker lifted details of the technologies from discussions in an online forum where employees talked about OpenAI’s latest work.
The hacker did not, however, get into the systems where the company houses and builds its artificial intelligence, the sources said.
OpenAI executives revealed the incident to employees during a meeting at the company’s San Francisco offices in April 2023. The board of directors was also informed.
However, the sources told the newspaper that executives decided not to share the news publicly because no information about customers or partners had been stolen.
The incident was not considered a threat to national security because executives believed the hacker was a private individual with no known ties to a foreign government. As such, OpenAI’s bosses allegedly did not inform the FBI or other law enforcement.
But for some employees, The Times reported, the news raised fears that foreign adversaries such as China could steal AI technology that could eventually endanger US national security.
It also led to questions about how seriously OpenAI was treating security, and exposed fractures inside the company over the risks of artificial intelligence.
After the breach, Leopold Aschenbrenner, an OpenAI technical program manager whose work focused on ensuring that future AI technologies do not cause serious harm, sent a memo to the company’s board of directors.
Aschenbrenner argued that the company was not doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.
He also said OpenAI’s security was not strong enough to protect against the theft of key secrets if foreign actors were to infiltrate the company.
Aschenbrenner later alleged that OpenAI had fired him this spring for leaking other information outside the company and argued that his dismissal had been politically motivated. He alluded to the breach on a recent podcast, but details of the incident have not been previously reported.
“We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation,” an OpenAI spokeswoman, Liz Bourgeois, told The New York Times.
“While we share his commitment to building safe AGI, we disagree with many of the claims he has since made about our work.
“This includes his characterizations of our security, notably this incident, which we addressed and shared with our board before he joined the company.”