For most of 2023, the tech world has been waiting for something, anything, to happen on AI regulation. Thought leaders from the industry, from Elon Musk to Bill Gates to OpenAI kingpin Sam Altman, even visited Congress multiple times to advise DC on how to act on the technology widely seen as revolutionary. On Monday, the White House took action with an executive order that reaffirms its commitment to regulating AI. The procedures outlined in the document echo what other governments have proposed for regulating AI, but experts in the field say it goes further.
Called “comprehensive” and “smart” by AI experts who spoke with Fortune, the document also has legal teeth the government can sink into AI powerhouses that don't comply with its directives. “This is the first executive order we’ve seen that actually has a legal requirement in addition to something that’s just highly recommended,” said Neil Serebryany, CEO of CalypsoAI, which helps organizations like the Air Force and Lockheed Martin secure their AI models.
The order is wide-ranging, addressing the biggest problems that ethics experts and consumers say have arisen in recent months, including worker displacement, nefarious use cases, and the potential for AI to exacerbate discrimination. It offers a mix of solutions, from creating new channels for reporting and remedying harmful AI practices to developing best practices across the industries that interact with the technology.
Under the executive order—which is a directive issued by the president, i.e., the executive branch, and can be enforced as law—the most powerful AI companies must share their safety test results and other information with the government. President Joe Biden in July secured voluntary commitments to share information from AI companies including Google, Meta, Microsoft, and OpenAI, but the new order is legally binding. It holds companies to this standard through the Defense Production Act, a Cold War-era law that allows presidents emergency control over domestic industries. Presidents Trump and Biden both invoked the law to manage the Covid-19 response. Under this law, a government body—likely the Solicitor General—can sue companies that don’t comply, Serebryany told Fortune. In tying this section of the order to the Defense Production Act, Biden has prepped the industry for additional regulation, he said.
‘Ahead of things’
Responses from AI experts and industry personnel who spoke with Fortune have been largely positive. “We’re grateful to President Biden, Vice President Harris, and the Biden Administration for their leadership and work to ensure that the federal government harnesses the potential of AI, and that its benefits reach all Americans,” an OpenAI spokesperson said in an emailed statement to Fortune.
“It’s really the first time where the government is ahead of things,” said Michael Santoro, a business professor at Santa Clara University who works in public policy and tech ethics.
In addition to the section on reporting results to the government, a few other points from the order stood out to the AI community. The Departments of Energy and Homeland Security will address the chemical, biological, and nuclear risks of AI, which Santoro called an “aggressive” approach. The National Institute of Standards and Technology will also set standards for red-team testing, in which ethical hackers probe a technology for weak points to prevent future attacks.
Many of the actions Biden lists in his order aren’t specific. The administration writes that it will “develop tools” and “develop best practices” without much additional detail, but that’s how it should be, Serebryany told Fortune. AI innovation moves so quickly that concrete goals might be outdated within a few months; under these deliberately broad measures, the government can continue to address new issues as they arise, he said.
Infringing on EU territory
While the executive order sets standards within the U.S., its implications reverberate across the Atlantic. Until now, the EU has largely been the world’s chief technology regulator. Companies that operate across multiple regions, including all the Big Tech companies, want one set of rules to follow, and that means complying with the strictest governing body. In tech, that’s the EU.
The U.S. has largely regulated tech through state-level legislation and standards set by class action lawsuits, leaving limited federal regulation around existing tech issues like privacy. Biden’s executive order sets up a battle between the U.S. and the EU over who will regulate AI, and how, said Santoro.
“There’s going to be a time when the United States and Europe are going to be staring across the table at each other and negotiating how tech is going to be regulated,” said Santoro.
Companies might prefer the U.S. approach, he said, because it acknowledges AI’s potential to positively transform the world. In the order, Biden lists ways AI can aid the development of life-saving drugs and serve as a resource for educators. The EU largely focuses on the technology’s risks rather than its upsides, Santoro said. The U.S. economy also relies in part on its technology sector, home to major AI companies like Microsoft, OpenAI, and Google, so the government can’t regulate so heavily that it limits gross domestic product.
“Europe isn’t always as effective as it would like to be in setting regulation for the entire world,” Santoro told Fortune. “So here is an example where the stakes are way too high for companies to sit back and wait for Europe to regulate and have the U.S. be dormant.”