
OpenAI is urging governments to rethink the foundations of the economy, including how people work, earn, and pay taxes, as the shift to artificial intelligence (AI) continues.
The company outlined “initial ideas” in a policy document on Monday to mitigate the disruption that AI is bringing to the United States and the global economy.
One of the major policy suggestions is to create a public wealth fund that gives all citizens a stake in AI-driven economic growth. The fund could invest in diversified, long-term assets that capture growth both in AI companies and in the broader set of companies adopting and deploying the technology, with the returns going directly to citizens, the document said.
Governments should also incentivise companies to launch four-day workweek pilot programmes with “no loss in pay,” as a way to pass on the productivity gains from using AI, the company said.
Lawmakers should also consider modernising the American tax system to rely more on taxing corporate income and capital gains, and less on labour income, which could shrink amid a wave of AI-related job losses and undermine the funding of social benefits. Governments could also consider taxing companies that use automated labour in place of human workers, the report said.
It also suggests that benefit systems such as retirement pensions and healthcare access be built into “portable accounts” that would follow people across jobs, industries and entrepreneurial ventures.
This is not the first time that OpenAI or the CEOs of other AI companies have offered suggestions on how to mitigate changes coming to the job market from AI growth.
Tech leaders, including xAI’s Elon Musk and OpenAI’s Sam Altman, have advocated for a universal basic income as a likely necessity in a world where traditional work is being replaced by AI.
Others, such as Nvidia’s Jensen Huang and Zoom’s Eric Yuan, have supported a four- or even three-day workweek given the gains from AI productivity.
Anthropic CEO Dario Amodei wrote a sweeping essay in January warning that superintelligent AI, which could outpace humans and prove difficult to control, is a “recipe for existential danger.”
In that essay, he suggested controlling the export of key technologies, such as the semiconductor chips used to train large language models (LLMs), as one key solution.
He also called for transparency laws that compel AI companies to disclose how they guide their models’ behaviour.