TechRadar
Eric Hal Schwartz

OpenAI has a new scale for measuring how smart its AI models are becoming – which is not as comforting as it should be


OpenAI has developed an internal scale for charting the progress of its large language models toward artificial general intelligence (AGI), according to a report from Bloomberg.

AGI usually means AI with human-like intelligence, and it is considered the broad goal for AI developers. In earlier references, OpenAI defined AGI as "a highly autonomous system surpassing humans in most economically valuable tasks." That's a point far beyond current AI capabilities. The new scale aims to provide a structured framework for tracking progress and setting benchmarks in that pursuit.

The scale introduced by OpenAI breaks the path to AGI into five levels, or milestones. ChatGPT and its rival chatbots sit at Level 1. OpenAI claims it is on the brink of reaching Level 2, an AI system capable of matching a human with a PhD at solving basic problems. That may be a reference to GPT-5, which OpenAI CEO Sam Altman has said will be a "significant leap forward." Beyond Level 2, the milestones grow more ambitious: Level 3 would be an AI agent capable of handling tasks on your behalf without your involvement, while a Level 4 AI would actually invent new ideas and concepts. At Level 5, the AI could take over work not just for an individual but for entire organizations.

Level Up

A leveled scale makes sense for OpenAI, or really any AI developer. A comprehensive framework doesn't just help OpenAI track its own progress internally; it could also become a shared standard for evaluating other AI models.

Still, achieving AGI is not going to happen immediately. Previous comments from Altman and others at OpenAI suggest it could arrive in as little as five years, but timelines vary significantly among experts. The computing power required, along with the financial and technological hurdles, is substantial.

That's on top of the ethical and safety questions raised by AGI. There is very real concern about what AI at that level would mean for society, and OpenAI's recent moves may not reassure anyone. In May, the company dissolved its safety team following the departure of its leader and OpenAI co-founder Ilya Sutskever. High-profile researcher Jan Leike also quit, citing concerns that OpenAI's safety culture was being ignored. Nonetheless, by offering a structured framework, OpenAI aims to set concrete benchmarks for its models and those of its competitors – and maybe help all of us prepare for what's coming.
