Where AI evolves from here

Microsoft researchers say the latest model of OpenAI's GPT "is a significant step towards AGI" — artificial general intelligence, the longtime grail for AI developers.

The big picture: If you think of AI as a technology ascending (or being pushed up) a ladder, Microsoft's paper claims that GPT-4 has climbed several rungs higher than anyone thought.

Driving the news: Microsoft released the "Sparks of Artificial General Intelligence" study in March, and it resurfaced in a provocative New York Times story Tuesday.

  • The researchers found that "GPT-4 can solve novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more, without needing any special prompting."
  • Among other remarkable responses, they asked GPT-4 how to stack "a book, 9 eggs, a laptop, a bottle and a nail," and it provided a plausible plan.

Catch up quick: Three key terms to understand in this realm are generative AI, artificial general intelligence (AGI), and sentient AI.

  • Generative AI sounds like a person.
  • AGI reasons like a person.
  • Sentient AI thinks it's a person.

GPT-4, ChatGPT, Dall-E and the other AI programs that have led the current industry wave are all forms of generative AI.

  • These are big software programs — mostly, "large language models" or LLMs — that are trained on troves of text, images or other data to perform one trick over and over: filling in the next word or pixel in a pattern that the user has requested. Developers then fine-tune these models for more specific applications. (A toy sketch of that next-word trick follows this list.)
  • Generative AI does amazing things today, but experts are divided over whether it is on course to evolve toward the loftier status of AGI.
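For readers who want a concrete feel for that "fill in the next word" trick, here is a deliberately tiny Python sketch (not anything from the paper or from OpenAI) that predicts each next word by counting which word most often follows the current one in a short training text. Real LLMs learn these probabilities with neural networks trained on vast datasets rather than by counting word pairs, but the basic move, predicting the next token from what came before, is the same.

```python
# Toy illustration of "predict the next word": count which word tends to
# follow each word in a small training text, then pick the likeliest one.
# Real LLMs learn these probabilities with neural networks over huge corpora.
from collections import Counter, defaultdict

training_text = "the cat sat on the mat the cat ate the fish"
words = training_text.split()

# Build a table: for each word, how often does each other word come next?
next_word_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the training text."""
    followers = next_word_counts[word]
    return followers.most_common(1)[0][0] if followers else "<unknown>"

# Generate a short continuation, one predicted word at a time.
output = ["the"]
for _ in range(4):
    output.append(predict_next(output[-1]))
print(" ".join(output))  # e.g. "the cat sat on the"
```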

AGI has a variety of definitions, all centering on the notion of human-level intelligence that can evaluate complex situations, apply common sense, and learn and adapt.

  • The "Sparks" paper authors define AGI as "systems that demonstrate broad capabilities of intelligence, including reasoning, planning, and the ability to learn from experience, and with these capabilities at or above human-level."
  • OpenAI's stated mission is to build AGI and ensure that it "benefits all of humanity."

Many experts, like Microsoft's authors, see a clear path from the context-awareness of today's generative AI to building a full AGI.

  • Another expert contingent believes that generative AI is likely to plateau at some point, and the quest for AGI will need to explore different avenues to advance.

Beyond the goal of AGI lies the more speculative notion of "sentient AI," the idea that these programs might cross some boundary to become aware of their own existence and even develop their own wishes and feelings.

  • Last year, a Google engineer went public with a claim that the firm's LaMDA language model had become sentient enough that it should be granted the AI equivalent of human rights. Google later fired him.

Virtually no one else is arguing that ChatGPT or any other AI today has come anywhere near sentience. But plenty of experts and tech leaders think that might happen someday, and that there's a slim chance such a sentient AI could go off the rails and wreck the planet or destroy the human species.

  • These are the concerns that drove many industry insiders to sign an open letter in March calling for a six-month pause in the development of the next generation of AI.
  • Other experts discount that worry as a distant and unlikely scenario — and believe that it distracts from closer-to-reality harms stemming from actual AI systems in use today that make biased decisions, disrupt employment and confuse fiction with fact.

Our thought bubble: The questions these categories raise divide people into two camps.

  • Pragmatists argue that at some point, generative AI will get good enough at pattern-matching the real world that it will function as well as, or better than, a human being.
  • Then it will get equipped with enough sensors and robotics to sense and act in the physical world — and eventually it will become futile to try to exclude such technological creations from the "sentient beings" category.
  • Essentialists, on the other hand, argue that there will always be something about human beings, and about being human, that's distinct from artificial life — rooted in biology (our bodies), spirituality (the idea of a soul) or epistemology (our self-knowledge).

The bottom line: For help navigating this landscape, you're likely to find as much value in the science fiction novels of Philip K. Dick as in the day's news.
