Top AI experts at OpenAI, Anthropic and other companies warn of the rising dangers of their technology, with some quitting in protest or going public with grave concerns.
Why it matters: Leading AI models, including Anthropic's Claude and OpenAI's ChatGPT, are getting a lot better, a lot faster, and even building new products themselves.
- That excites AI optimists — and scares the hell out of several people tasked with policing their safety for society.
Driving the news: On Monday, an Anthropic researcher announced his departure, in part to write poetry about "the place we find ourselves."
- An OpenAI researcher also left this week citing ethical concerns. Another OpenAI employee, Hieu Pham, wrote on X: "I finally feel the existential threat that AI is posing."
- Jason Calacanis, tech investor and co-host of the All-In podcast, wrote on X: "I've never seen so many technologists state their concerns so strongly, frequently and with such concern as I have with AI."
- The biggest talk among the AI crowd Wednesday was entrepreneur Matt Shumer's post comparing this moment to the eve of the pandemic. It went mega-viral, gathering 56 million views in 36 hours, as he laid out the risks of AI fundamentally reshaping our jobs and lives.
Reality check: Most people at these companies remain bullish that they'll be able to steer the technology smartly, without societal damage or big job loss. But the companies themselves admit the risk.
- Anthropic published a report showing that AI can be used in heinous crimes, including the creation of chemical weapons, though it judged the risk low. This so-called "sabotage report" examined the risks AI poses entirely on its own, without human intervention.
- At the same time, OpenAI dismantled its mission alignment team, which was created to ensure AGI (artificial general intelligence) benefits all of humanity, Platformer author Casey Newton reported Wednesday.
Between the lines: While the business and tech worlds are obsessed with this topic, it hardly registers in the White House and Congress.
Threat level: The latest round of warnings follows evidence that these new models can build complex products themselves and then improve their work without human intervention.
- OpenAI's latest model helped train itself. Anthropic's viral Cowork tool built itself, too.
- These revelations — in addition to signs that AI threatens big categories of the economy, including software and legal services — prompted a lot of real-time soul-searching.
The bottom line: The AI disruption is here, and its impact is happening faster and more broadly than many anticipated.