What’s in a name? Take OpenAI, for instance, the game-changing creator of ChatGPT that was founded as a nonprofit with the goal of bringing safety and transparency to the development of artificial intelligence. This week, it unveiled a new and improved A.I. tool called GPT-4 without revealing much about the long-anticipated program’s inner workings. Critics, including Elon Musk, are questioning the direction of the more secretive “capped-profit” venture heavily backed by Microsoft.
Musk, the Tesla and Twitter CEO who cofounded OpenAI when it launched in 2015—and made a hefty donation—noted the change last month, tweeting:
“OpenAI was created as an open source (which is why I named it ‘Open’ AI), non-profit company to serve as a counterweight to Google, but now it has become a closed source, maximum-profit company effectively controlled by Microsoft. Not what I intended at all.”
Of course, this might be dismissed as the grumblings of a mercurial billionaire—he also complained this week about how “a non-profit to which I donated ~$100M somehow became a $30B market cap for-profit”—but Musk isn’t alone in noting the incongruity now lurking in the name OpenAI.
Ben Schmidt, an A.I. expert and executive at information cartography firm Nomic AI, joined in with a tweet on Tuesday: “I think we can call it shut on 'Open' AI: the 98-page paper introducing GPT-4 proudly declares that they're disclosing *nothing* about the contents of their training set.” Schmidt shared a portion of the paper, titled “GPT-4 Technical Report,” that reads, “Given both the competitive landscape and the safety implications of large-scale models like GPT-4, this report contains no further details about the architecture (including model size), hardware, training compute, dataset construction, training method, or similar.”
“I don’t mind sharing nothing about the technical details of your model, but at least rename the company from @OpenAI to ClosedAI,” tweeted Walid Magdy, a fellow at the A.I.-focused Alan Turing Institute in London.
When podcaster Lex Fridman tweeted on Thursday that he would interview OpenAI CEO Sam Altman next week and asked for suggestions on what to ask him, the irony of the company’s name came up several times.
Anton Wiehe, cofounder of German A.I. startup AdaLab, suggested asking whether OpenAI would “consider a name change” and why it’s “against open-source now.”
“We were wrong”
Ilya Sutskever, OpenAI’s chief scientist and cofounder, recently addressed the company’s change of approach to sharing its research, citing the potential dangers posed by A.I. and, eventually, artificial general intelligence (AGI), the point at which machines will supposedly be able to understand or learn any intellectual task a human can. In an interview with The Verge published on Wednesday, he said:
“We were wrong. Flat out, we were wrong. If you believe, as we do, that at some point, A.I.—AGI—is going to be extremely, unbelievably potent, then it just does not make sense to open-source. It is a bad idea…I fully expect that in a few years it’s going to be completely obvious to everyone that open-sourcing A.I. is just not wise.”
He added that “at some point it will be quite easy, if one wanted, to cause a great deal of harm with those models. And as the capabilities get higher it makes sense that you don’t want to disclose them.”
In one small sign of the malicious intent out there, scammers are now using voice-cloning A.I. tools to impersonate victims’ relatives in desperate need of financial help.
Yet Sutskever also acknowledged that “the safety side is not yet as salient a reason as the competitive side,” noting that “it’s competitive out there” and “there are many, many companies who want to do the same thing.”
Actually open A.I.
But if the company is now focused on keeping its secrets for competitive reasons, does the name OpenAI still make sense?
In its founding announcement, OpenAI, then still a nonprofit, declared, “Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.”
Another reason for OpenAI not sharing GPT-4’s inner workings might be legal liability. A.I. language models and image generators are trained on vast amounts of data from the internet, some of which may be protected by copyright.
A group of artists is currently suing three A.I. startups whose products let users generate sophisticated images simply by entering text prompts. They argue the companies make use of a data set that indexes billions of images from across the internet, among them works by artists who did not give consent and were not compensated.
According to The Verge, Sutskever did not answer when asked whether OpenAI could state definitively that its training data doesn’t include pirated material.
Fortune reached out to OpenAI for comment but did not receive a reply.
Among those criticizing OpenAI over its name is Emad Mostaque, the founder and CEO of Stability AI, the company behind image generator Stable Diffusion, which competes with OpenAI’s DALL-E 2 and is among the tools targeted in the artists’ lawsuit.
Stable Diffusion is open source, and Mostaque has been using that fact to rib OpenAI for its name and for not taking the same approach. On Thursday, he tweeted an appeal to OpenAI employees, writing: “Open offer to anyone @OpenAI who actually wants to work on Open AI: We will match your salary, benefits etc but you can work on any open source AI projects you like, ours or others. Collaborate, be open and prioritise good outcomes over self interest.”
In his Twitter bio, he also changed the URL linking to his company’s website to read “actuallyopenai.com.”
The view of OpenAI’s Sutskever that “open-sourcing A.I. is just not wise” may well gain more adherents as instances of bad actors misusing A.I. increase. But even if that does happen, doubts about OpenAI’s name will likely persist.