Tom’s Guide
Technology
Amanda Caswell

The ‘Hapsburg AI’ effect: Why the next generation of models may be faster, but not smarter

An artificial intelligence (AI) brain glowing next to a smartphone screen.

Every few months, there’s a new model, a new breakthrough, a new benchmark beaten. ChatGPT gets smarter, then Gemini gets faster and takes the crown. Suddenly, Claude quietly becomes more capable. Reasoning models are arriving in droves, AI agents are promised, and the curve just keeps climbing.

But here’s a question that doesn’t get asked enough: What if it doesn’t?

What if AI eventually hits a wall — not because companies stop trying, but because intelligence itself has limits? It’s a deceptively simple question with enormous implications. And the more I’ve dug into it, the more I’ve realized: this isn’t just a tech question. It’s a scientific, philosophical and deeply human one.

AI is only as smart as its ingredients

(Image credit: Future)

At the most basic level, today’s AI systems are built from three things:

  • Data: books, articles, code, images, videos, and conversations
  • Compute: massive amounts of processing power
  • Human design: the architectures, objectives, and training methods created by researchers

Right now, we tend to treat these as if they’re limitless. But they aren’t.

That leads to an uncomfortable thought: if AI learns from human-created data, can it ever truly move beyond the boundaries of human knowledge?

Large language models don’t “discover” the world the way humans do. They don’t run experiments in a lab, go outside or have lived experiences. They are incredibly sophisticated pattern-matching machines trained on what we’ve already produced.

That raises a real possibility: AI might get better at using human knowledge — but not necessarily go beyond it in a fundamental way.

The data problem: are we running out of “new” knowledge?

(Image credit: Yuichiro Chino/Getty Images)

One of the biggest bottlenecks in AI progress is something surprisingly mundane: data. If OpenAI does buy Pinterest, for instance, you can bet a big driver of that decision would be more data.

That's because the best AI models have already “read” nearly everything humans have put online. But that pool is finite. Researchers are openly discussing a potential “data wall” — the point at which we’ve largely exhausted high-quality, human-generated text.

The industry’s workaround? Synthetic data — AI trained on data created by other AI. But the danger here is what some researchers call the “Hapsburg AI” effect: a form of inbreeding in which models train too heavily on their own output. The risk is model collapse — losing nuance, creativity and the messy edge cases that make human thought valuable.

The result could be AI that keeps improving at narrow skills, but stops making the kind of broad, surprising leaps we’ve seen in recent years.

Could AI create new intelligence?

(Image credit: Shutterstock)

Here’s where things get more interesting. Some researchers argue that AI won’t need human data forever. They believe future systems could:

  • Run their own experiments
  • Simulate environments
  • Generate new scientific hypotheses
  • Discover patterns humans haven’t noticed
  • Even design better AI systems than humans can build

We've already seen what AI agents have done with Moltbook, so maybe the next frontier isn’t just better code — it’s robotics and AI-driven scientific labs, where machines can interact with the physical world instead of just reading about it.

If this happens, AI might break free from the “human ceiling” and enter a new phase of machine-driven intelligence.

The 'surpasser' paradox

(Image credit: Moonvalley)

But this creates a deeper tension. If an AI is trained primarily on human knowledge, can it ever truly surpass us?

Right now, models are brilliant at interpolation — connecting dots within the known human experience. They’re incredible at summarizing, synthesizing and reorganizing what we already know.

They are far weaker at extrapolation — inventing entirely new “dots.” In other words, they aren't very creative. To truly surpass humans, AI may need to stop being a library of everything we’ve written and start being an independent explorer of reality.

The wit machine vs. the bureaucrat

(Image credit: Getty Images)

There’s another, more human kind of wall AI might hit: the difference between calculation and wit.

As AI scales, it often drifts toward the “mean.” It becomes an ultra-efficient bureaucrat — precise, reliable and safe, but less sharp, weird or surprising.

Wit isn’t just about being funny. It’s about the lateral leap — connecting two unrelated ideas in a way that feels fresh, insightful, or slightly subversive.

So, if AI hits a wall, it might be here. We could end up with machines that can calculate the trajectory of a star or optimize global supply chains — yet still struggle to write a joke that truly lands, or craft a metaphor that makes you see the world differently.

The “Wit Machine” becomes the ultimate test: can AI learn to be interesting or will it become the world’s most knowledgeable, yet oddly boring assistant?

Is intelligence built into the universe?

(Image credit: Shutterstock)

Let’s zoom out from tech for a moment.

Some scientists believe intelligence — whether biological or artificial — may be constrained by the laws of physics. Two big ideas support this:

  • Computational irreducibility. Some problems (like predicting the weather or modeling the human brain in full detail) may be impossible to shortcut. You can’t “solve” them faster than real time — you simply have to watch them unfold. If that’s true, then no amount of smarter AI can fully bypass certain limits of prediction and understanding.
  • The energy ceiling. Intelligence requires energy. If the next leap in AI requires the power of a small city — or even a small sun — to process a single thought, we hit a physical wall long before a cognitive one.

In that case, the real limit isn’t “how smart can AI get?” but “how much energy can intelligence consume?”

So… will AI hit a wall? The honest answer is we don’t know. For what it’s worth, I tried asking AI itself — it doesn’t know either.

But here are three plausible futures. In each, if progress slows, it won’t be because we failed — it will be because intelligence itself has boundaries.

  • The slow plateau. Progress continues, but becomes incremental. AI turns into a utility like electricity — indispensable and powerful, but no longer delivering shocking leaps in “smartness.”
  • Escape velocity. AI breaks free from human data by running experiments, simulating worlds, and discovering new scientific or mathematical truths humans haven’t conceived of.
  • A universal ceiling. We eventually discover that there is a maximum intelligence allowed by the universe — and both humans and machines are already approaching it.

Bottom line

Right now, AI feels unstoppable, even if not everyone likes it or wants to use it. But history shows that every technology eventually encounters constraints — whether technical, physical or conceptual.

The real question for the next decade isn’t just: “How much smarter can AI get?” It’s: “Is there a point where ‘smarter’ no longer exists?”

And that might be one of the most important questions we ask in the AI era.

