It will likely take an AI-related catastrophe before any international rulebook or organization begins regulating AI technologies.
Why it matters: AI innovators and researchers worry about both the doomsday scenario of a runaway super-AI and the less science-fictional but more likely harms that could follow from hasty deployment of the technology, in the form of cyberattacks, scams, disinformation, surveillance, and bias.
Driving the news: Tech policymakers meet in Sweden Tuesday, at the edge of the Arctic Circle, for the twice-yearly EU-U.S. Trade and Technology Council.
- They're mostly skirting the calls for regulation from leading CEOs working on AI, and are instead focused on what they can do to limit China's access to chips and critical minerals, alongside baby steps towards shared terminology around AI risks.
- Microsoft president Brad Smith told "Face the Nation" he expects U.S. regulation within a year.
What’s happening: CEOs say they support global governance of the most serious risks associated with AI.
- The founders of OpenAI, the company behind ChatGPT, think the International Atomic Energy Agency — which exists to ensure nuclear tech is used for peaceful purposes — is a good model for governing AI systems that approach "superintelligence."
- The Organization for Economic Cooperation and Development — an economic think tank for governments — called for global technical standards for trustworthy AI in principles published in 2019.
The big picture: There's no precedent for global regulation of a potentially dangerous field or specific technology without the cue of some catastrophic event.
- The United Nations was built from the ashes of World War II.
- It took the U.S.'s use of nuclear weapons against civilians and a nuclear arms race that threatened global devastation to eventually prompt the adoption of guardrails in that field.
Between the lines: The IAEA opened 12 years after nuclear bombs were dropped on Hiroshima and Nagasaki.
- It took another 13 years for the Nuclear Non-Proliferation Treaty to come into effect, and even then it didn't stop India, Pakistan and, most notoriously, North Korea from developing warheads.
What they're saying: Sam Altman and his OpenAI co-founders want to see “an international authority that can inspect systems, require audits, test for compliance with safety standards, place restrictions on degrees of deployment and levels of security.”
- Given that neither national nor international authorities can keep pace with AI innovation, the founders suggest companies "begin implementing elements of what such an agency might one day require," followed by national governments and, eventually, a global authority.
Microsoft's Smith is in lockstep with OpenAI (which Microsoft funds) in wanting "proper control over AI," including both government-licensed models and privately watermarked content.
- Smith supports specific regulations for three layers of the AI technology stack — applications, models and infrastructure — without getting into details about how this could work globally.
Sundar Pichai, Google's CEO, told "60 Minutes" he supports a global treaty system for managing AI.
BSA, a software trade association that includes Adobe, Cisco, IBM, Oracle and Salesforce as members, has been advocating for AI regulation since 2021.
Flashback: The speediest modern example of international action in the face of a technological threat was set by the negotiators of the Montreal Protocol in the 1980s, who took four years to ban around 100 chemicals that had created a dangerous hole in the Earth's ozone layer.
- Work began in 1985, the United States Senate unanimously ratified the deal in 1988, and it came into effect in 1989.
- Some argue COVAX, the global COVID vaccine delivery partnership, represents a more rapid global mobilization. But its results were mixed, and the International Health Regulations that guide pandemic responses remain largely toothless.
Reality check: While CEOs have offered unusually strong support for regulation in theory, their actions are often inconsistent and echo the efforts of social media platforms to resist regulation in the 2010s.
- ChatGPT doesn't comply with the OECD AI principles, which call for explainable AI. Altman last week floated the idea of pulling out of EU markets because of "over-regulation," before backtracking on Friday.
- Google is declining to offer its Bard chatbot in the EU and Canada for unstated reasons — but it might have something to do with privacy investigations of ChatGPT underway in Italy, Germany, France, Spain and Canada.