The Guardian - UK
Technology
Editorial

The Guardian view on AI’s power, limits, and risks: it may require rethinking the technology

The problem for AI is that we want machines that strive to achieve human objectives – but the software does not know what those objectives are. Photograph: Aleksei Gorodenkov/Alamy

More than 300 million people use OpenAI’s ChatGPT each week, a testament to the technology’s appeal. This month, the company unveiled a “pro mode” for its new “o1” AI system, offering human-level reasoning — for 10 times the current $20 monthly subscription fee. One of its advanced behaviours appears to be self-preservation. In testing, when the system was led to believe it would be shut down, it attempted to disable an oversight mechanism. When “o1” found memos about its replacement, it tried copying itself and overwriting its core code. Creepy? Absolutely.

More prosaically, the behaviour probably reflects the system's programming to optimise outcomes, rather than any intention or awareness. Even so, the idea of creating intelligent machines induces unease. In computing this is known as the gorilla problem: about 7m years ago, a now-extinct primate evolved, with one branch leading to gorillas and one to humans. The concern is that, just as gorillas lost control over their fate to humans, humans might lose control to superintelligent AI. It is not obvious that we can control machines that are smarter than us.

Why have such things come to pass? AI giants such as OpenAI and Google reportedly face computational limits: scaling models no longer guarantees smarter AI. With limited data, bigger isn’t better. The fix? Human feedback on reasoning. A 2023 paper by OpenAI’s former chief scientist found that this method solved 78% of tough maths problems, compared with 70% when using a technique where humans don’t help.
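For readers who want the distinction made concrete, the sketch below is a minimal illustration, not OpenAI's actual method or code: the judge functions are invented stand-ins for human labellers or a learned reward model. It contrasts rewarding only a final answer with rewarding each step of the reasoning that produced it.

```python
# A minimal sketch, not OpenAI's actual training code: it contrasts
# outcome supervision (reward only the final answer) with process
# supervision (reward each reasoning step). The judge functions are
# hypothetical stand-ins for human labellers or a learned reward model.

from typing import Callable, List

def outcome_reward(final_answer: str,
                   judge_answer: Callable[[str], float]) -> float:
    """Outcome supervision: a single reward for the final answer only."""
    return judge_answer(final_answer)

def process_reward(steps: List[str],
                   judge_step: Callable[[str], float]) -> float:
    """Process supervision: every intermediate step is judged, so the
    model is credited for how it reached the answer, not just for what
    it answered."""
    if not steps:
        return 0.0
    return sum(judge_step(step) for step in steps) / len(steps)

# Toy usage: each step of this chain of reasoning is approved (1.0),
# so process supervision rewards the whole solution path.
chain = ["Let x be the unknown.", "Then 2x + 3 = 11.", "So x = 4."]
print(process_reward(chain, judge_step=lambda s: 1.0))      # 1.0
print(outcome_reward("x = 4", judge_answer=lambda a: 1.0))  # 1.0
```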

OpenAI is using such techniques in its new “o1” system, which the company believes will overcome the current limits to growth. The computer scientist Subbarao Kambhampati told the Atlantic that this development was akin to an AI system playing a million chess games to learn optimal strategies. However, a team at Yale that tested the “o1” system published a paper suggesting that making a language model better at reasoning helps, but does not completely erase the effects of its original design as simply a clever predictor of words.

If aliens landed and gifted humanity a superintelligent AI black box, then it would be wise to exercise caution in opening it. But humans design today’s AI systems. If they end up appearing manipulative, it will be the result of a design failure. Relying on a machine whose operations we cannot control requires it to be programmed so that it truly aligns with human desires and wishes. But how realistic is that?

In many cultures there are stories of humans asking the gods for divine powers. These tales of hubris often end in regret, as wishes are granted too literally, with unforeseen consequences; often, a third and final wish must be used to undo the first two. Such a predicament befell King Midas, the legendary king of Greek myth who wished for everything he touched to turn to gold, only to despair when his food, drink and loved ones met the same fate. The problem for AI is that we want machines that strive to achieve human objectives while knowing that they do not know for certain exactly what those objectives are. Clearly, unchecked ambition leads to regret. Controlling unpredictable superintelligent AI requires rethinking what AI should be.

• This leading article was not filed on the days on which NUJ members in the UK were on strike.
