Fortune
David Meyer

Sam Altman calls time on the giant A.I. model party

Sam Altman, CEO of OpenAI, walks from lunch during the Allen & Company Sun Valley Conference on July 06, 2022 in Sun Valley, Idaho. (Credit: Kevin Dietsch—Getty Images)

Hi there—David Meyer here in Berlin, filling in for Jeremy again.

Ever-bigger large language models are not the future. So says none other than Sam Altman, whose OpenAI has set the world on fire with the superstar of large language models, GPT.

“I think we’re at the end of the era where it’s gonna be these giant models, and we’ll make them better in other ways,” Altman said at an MIT event last week, according to a TechCrunch report. His stated reasoning is that it’s better to focus on “rapidly increasing capability” rather than parameter count, and if it’s possible to achieve capability improvements with lower parameter counts or by harnessing multiple smaller models together, then great.

As VentureBeat pointed out, there is likely a cost driver behind Altman’s thinking. LLMs are really, really expensive—GPT-4’s training reportedly cost $100 million. This cost is one reason why Microsoft is reportedly developing its own finely tuned A.I. chip, and it has probably been a factor in Google’s rapidly crumbling reluctance to dive headfirst into the generative-A.I. lake.

But while OpenAI is in no hurry to develop GPT-5, the competition continues to pile in. Amazon just unveiled its Titan family of LLMs (one for generating text and another for translating text into representations of semantic meaning). And Elon Musk, fresh from signing that six-month moratorium letter, is also up to something—he has reportedly incorporated a company called X.AI and bought thousands of Nvidia GPUs to build his own LLM. Musk also told Fox News’ Tucker Carlson that he plans to take on OpenAI’s “politically correct” ChatGPT with something he called TruthGPT, a “maximum truth-seeking A.I. that tries to understand the nature of the universe.” (No biggie.)

Whether these next-generation LLMs gain their power through girth or through other means, they are most definitely in policymakers’ sights.

Partly inspired by the moratorium letter—although they called it “unnecessarily alarmist”—some of the members of the European Parliament who are working on the bloc’s A.I. Act said in an open letter yesterday that they “are determined to provide…a set of rules specifically tailored to foundation models, with the goal of steering the development of very powerful artificial intelligence in a direction that is human-centric, safe, and trustworthy.”

The lawmakers called for a high-level summit between U.S. President Joe Biden and European Commission President Ursula von der Leyen, “with the view to agree on a preliminary set of governing principles for the development, control, and deployment of very powerful artificial intelligence.” They acknowledged that the EU’s A.I. Act could serve as a blueprint for other countries’ regulations—and given that recent tweaks to the bill reportedly include forcing OpenAI et al. to declare the use of copyrighted material in training their A.I. models, and making vendors liable for the misuse of those models, that blueprint could have seismic repercussions across the industry.

In the end, size may indeed not matter when compared to what you do—and don’t do—with your foundation models. And that’s something you can expect regulators to increasingly have a say in.

David Meyer
Twitter: @superglaze
david.meyer@fortune.com
