The "godfather of AI" quit Google and joined a growing chorus of experts warning that the rush to deploy artificial intelligence could lead to disaster.
Why it matters: When some of the smartest people building a technology warn it could turn on humans and shred our institutions, it's worth listening.
Driving the news: Geoffrey Hinton, a pioneer of machine learning, says he left Google so he could speak freely about the dangers of rushing generative AI products to market.
- "It is hard to see how you can prevent the bad actors from using it for bad things," Hinton, 75, told The New York Times.
Axios asked AI experts — developers, researchers and regulators — to sketch their most plausible disaster fears. Their top 5:
1. Cyberattacks explode. With the right prompts, generative AI can now produce working malicious code, which means more, bigger and increasingly diverse cyberattacks.
- Dario Amodei, CEO of Anthropic, which makes a rival to ChatGPT, told Axios CEO Jim VandeHei that a massive expansion of such attacks is his biggest near-term worry.
2. Scams sharpen. Forget clumsy phishing emails: Drawing on your social media posts and other personal information, AI-assisted phishing and fraud schemes will arrive as real-sounding pleas for help in the cloned voices of your friends and relatives. (The "bad actors" are already at it.)
3. Disinformation detonates. Propaganda and partisan assault will be optimized by algorithms and given mass distribution by tech giants.
- Multimodal AI — text, speech, video — could make it impossible for the public to separate fact from fiction. (This one's already happening too.)
- Workers displaced by AI, meanwhile, could turn to violent protest or isolationist politics.
4. Surveillance locks in. America’s 70 million CCTV cameras and unregulated personal data already enable authorities to match people to footage. Israel uses facial recognition technology to monitor Palestinians, while China uses AI tools to target its Uyghur minority.
- AI can supercharge this kind of tracking for both corporations and governments, enabling behavior prediction on a mass scale but with personalized precision.
- That creates opportunities for “incentivizing conformity, and penalizing dissent,” Elizabeth Kerley, of the International Forum for Democratic Studies, told Axios.
5. Strongmen crack down. Mass digital data collection can give would-be autocrats a means to anticipate and defuse social anger that bypasses democratic debate — “with no need to tolerate the messiness of free speech, free assembly, or competitive politics,” per Kerley.
- MIT’s Daron Acemoglu, author of "Why Nations Fail" and "Redesigning AI," told Axios he worries “democracy cannot survive” such a concentration of power without guardrails.
- India’s Narendra Modi, who is already presiding over democratic backsliding, could be the next digital strongman to weaponize AI against democracy. Indians reported the highest acceptance of AI among the 17 countries in a KPMG survey.
What's next: Democracies have a limited window to act — for instance, by imposing legal constraints on AI providers.
- Seth Dobrin, president of the Responsible AI Institute, says the U.S. needs an FDA for AI.
- Others think progress is more likely to be achieved via a lighter-touch oversight body that could conduct audits and raise red flags.
Yes, but: The tech industry's AI product race shows no sign of slowing.
- Although Google CEO Sundar Pichai has warned there is a "mismatch" between how fast AI is developing and how quickly our institutions can adapt, he has also responded to competition from Microsoft and OpenAI by flooring the gas pedal on the company's AI product launches.
The bottom line: Those setting the AI pace are “trying to move fast to pretend that they're not breaking things,” Marietje Schaake — the former EU official who is now international policy advisor at Stanford’s Institute for Human-Centered AI — told Axios.
- “The idea that this stuff could actually get smarter than people ... I thought it was way off,” Hinton told the Times. “I thought it was 30 to 50 years or even longer away. Obviously, I no longer think that.”