OpenAI CEO Sam Altman is doing something no tech titan has ever done: He's publishing a detailed blueprint for how government should tax, regulate and redistribute the wealth from the very technology he's racing to build and spread.
Why it matters: Altman told us in a half-hour interview that AI superintelligence is so close, so mind-bending, so disruptive that America needs a new social contract — on the scale of the Progressive Era in the early 1900s, and the New Deal during the Great Depression.
The big picture: The threats of inaction or slow action are grave, Altman warns — widespread job loss, cyberattacks, social upheaval, machines man can't control. The two most immediate threats, he said, are cyberattacks and biological attacks:
- We've told you that top tech, business and government officials fear profound advances in soon-to-be-released AI models could enable a world-shaking cyberattack this year. "I think that's totally possible," Altman said. "I suspect in the next year, we will see significant threats we have to mitigate from cyber."
- AI companies know some random idiot, or some rogue nation, could use their models to conjure the next pandemic. "Wonderful things are going to happen there — we'll see a bunch of diseases get cured," Altman said. But he also knows terrorist groups could use the models to try to create novel pathogens: "[T]hat's no longer a theoretical thing, or it's not going to be for much longer."
OpenAI's 13-page blueprint, "Industrial Policy for the Intelligence Age: Ideas to Keep People First," aims to reset America's social contract.
- Altman told us the document isn't a prescription but a starting point: "We want to put these things into the conversation. Some will be good. Some will be bad. But ... we do feel a sense of urgency. And we want to see the debate of these issues really start to happen with seriousness."
Here are Altman's most provocative ideas:
- A Public Wealth Fund. OpenAI proposes giving every American citizen a direct stake in AI-driven economic growth through a nationally managed fund, seeded in part by AI companies themselves, that "could invest in diversified, long-term assets that capture growth in both AI companies and the broader set of firms adopting and deploying AI." This is the most radical idea in the document.
- Robot taxes. The document floats "taxes related to automated labor," and shifting the tax base from payroll toward capital gains and corporate income — since AI could hollow out the wage-and-payroll revenue that funds Social Security, Medicaid and SNAP.
- A four-day workweek. OpenAI suggests incentivizing companies and unions to run pilots of 32-hour workweeks at full pay, converting AI-driven efficiency to time back for workers — an "efficiency dividend."
- "Right to AI." The plan frames AI access as being as foundational as literacy, electricity and the internet — and says access should be affordable for workers, small businesses, schools, libraries and underserved communities.
- Containment playbooks for rogue AI. In the most chilling passage, OpenAI acknowledges scenarios where dangerous AI systems "cannot be easily recalled" because they're autonomous and capable of replicating themselves. Its answer: coordination that includes government.
- Auto-triggering safety net. The blueprint envisions tripwires tied to economic data. When AI displacement metrics hit preset thresholds, temporary increases in public support — unemployment benefits, wage insurance, cash assistance — automatically kick in. When conditions stabilize, the measures phase out.
Between the lines: Let's stipulate that Altman has every reason to hype the technology to raise more money at higher valuations — and to position himself as a thoughtful architect of a plan to protect us from the AI he's rushing to market. But OpenAI's models are among the best-funded, best-performing, fastest-selling on Earth.
- "There's many companies developing this," Altman told us. "I'm only one voice inside [this] company — obviously, a big one. But this is an unbelievable honor, cool thing, scary thing altogether to get to be in this moment."
- Asked why people should trust him, Altman said: "I think almost everybody involved in our industry feels the gravity of what we're doing. ... We all take that responsibility very seriously. We feel that way every day. We also think it's very important that no one person is making the decisions by themselves that are going to impact all of us."
The document is as much corporate strategy as policy paper. OpenAI is trying to position itself as the responsible actor in the room — the company that warned you and offered solutions — a lane Anthropic first filled.
- It's also a play to shape regulation before regulation shapes the company.
The bottom line: The man betting everything on superintelligence is telling the world that this thing is coming so fast, and so hard, that capitalism as we know it won't be enough. Whether you believe the altruism or see the strategy, the admission alone is historic — and worth deep reflection.
- 👀 Watch a video of Mike's interview with Sam … Read the blueprint.
(Disclosure: Axios and OpenAI have a licensing and technology agreement that allows OpenAI to access part of Axios' story archives while helping fund the launch of Axios into several local cities and providing some AI tools. Axios has editorial independence.)