Key lines from Elon Musk, others’ call to pause AI development

Dozens of scientists, experts and tech leaders, including Twitter and Tesla CEO Elon Musk, recently signed a letter calling on artificial intelligence (AI) labs to slow down development so potential risks can be studied.

Why it matters: AI programs like ChatGPT and GPT-4 have come a long way in capturing public interest, but they still have trouble convincing tech's biggest leaders that society is ready for them.

Driving the news: Musk, Apple co-founder Steve Wozniak, 2020 presidential candidate Andrew Yang and more than 1,000 others signed an open letter urging AI labs to "immediately pause" training of AI models more powerful than GPT-4, OpenAI's most recent text-generation model, for at least six months.

  • "This does not mean a pause on AI development in general," the letter states, but rather "a stepping back from the dangerous race to ever-larger unpredictable black-box models with emergent capabilities."
  • "If such a pause cannot be enacted quickly, governments should step in and institute a moratorium," adds the letter, which comes from the Future of Life Institute, a nonprofit that campaigns for responsible use of artificial intelligence.

Context: The letter specifically names GPT-4, OpenAI's latest generative AI model, which is more capable than the model that originally powered ChatGPT.

Here are some key lines from the open letter:

  • On misinformation: “Contemporary AI systems are now becoming human-competitive at general tasks, and we must ask ourselves: Should we let machines flood our information channels with propaganda and untruth?"
  • On replacing humans: "Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?"
  • On the purpose of AI: "Powerful AI systems should be developed only once we are confident that their effects will be positive and their risks will be manageable."

The letter urges AI labs and experts to work together "to jointly develop and implement" safety protocols for AI design and development, which should then be "audited and overseen by independent outside experts."

Our thought bubble via Axios' Peter Allen Clark: Few, if any, tech advancements are coupled with the level of forethought and even-mindedness the letter’s authors request. In the U.S., market forces have long been the primary driver for the growth of specific innovations.

  • Furthermore, outreach to policymakers seems likely to fall on deaf ears. U.S. lawmakers are woefully behind on how technological advancements affect the country; they're still struggling to deal with the advent of social media.
