Yesterday’s mass call for a six-month safety moratorium on the development of next-generation A.I. models has not elicited much response from the companies at the center of the fray, such as OpenAI, Microsoft and Google. But the letter, signed by everyone from Elon Musk to the philosopher and historian Yuval Noah Harari, has certainly sparked a great deal of commentary, much of it critical.
The thing is, those criticizing the call for a pause on A.I. development have quite different reasons for doing so. Here’s a rough guide to the various arguments we’re seeing.
1) The signatories over-hype A.I. and generally have the wrong motivations. Computational linguistics expert Emily Bender mocks the letter for claiming that “contemporary A.I. systems are now becoming human-competitive at general tasks.” While she agrees with some of what the letter calls for, she accuses its authors of “unhinged A.I. hype, helping those building this stuff sell it,” and argues that policymakers should instead focus on how technology is being used to “concentrate and wield power.” The German tech critic Jürgen Geuter (alias “tante”) makes a similar critique, dismissing the letter as the “wishful worries of a group of people who read way too much science fiction and way too little about the political economy and structures of reality.” Both Bender and Geuter view the call with suspicion because it was published by the Future of Life Institute, which espouses a controversial ethical stance called longtermism that focuses on humanity’s very-long-term survival.
2) The pause would do nothing to mitigate existing A.I. threats. Princeton computer scientists Sayash Kapoor and Arvind Narayanan similarly argue that the letter focuses on speculative threats while proposing nothing to mitigate the harms that can arise from today’s A.I. technology: the spread of misinformation through careless use, the unpaid exploitation of existing artworks and writings, and security risks such as the leaking of personal data and the propagation of worms.
3) Business is business. Tim Hwang, the CEO of regulatory-data outfit FiscalNote (which is now letting OpenAI’s ChatGPT draw on some of its repository), told me yesterday that he thought the letter’s authors had valid concerns, but: “It’s hard to put the genie back in the bottle at this point. You’ve got an arms race of tens of billions of dollars between the world’s largest technology companies. There’s too much at stake at this point in multiple different geographies to be able to roll back time here. It’s also somewhat impractical, I think, to tell an entire industry to stop making money.”
4) Why hand an advantage to China? “Let’s pretend magically that OpenAI, Amazon, Microsoft, and Google stop, do you really think the Chinese are going to stop? Or the Russians? There’s no way,” veteran tech investor Daniel Petre told the Australian Financial Review. The Center for Data Innovation, a reliably pro-Big Tech think tank, also raised the specter of China racing ahead in its argument that the U.S. should accelerate rather than pause its A.I. development.
5) It could set a dangerous precedent. The influential computer scientist Andrew Ng called the proposed moratorium “a terrible idea” because government intervention would be the only possible way to enforce it. “Having governments pause emerging technologies they don’t understand is anti-competitive, sets a terrible precedent, and is awful innovation policy,” he tweeted. Arati Prabhakar, director of the White House Office of Science and Technology Policy, offered a sort-of counterpoint at an Axios conference yesterday: “There's a lot of conversation about, 'let's pull the plug,' but I'm not sure there is a single plug.”
A moratorium doesn't appear to be coming anytime soon. However, the Federal Trade Commission just received an official complaint asking it to freeze ChatGPT's development, and a similar call has just gone out to European regulators, so let's see. More news below.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
David Meyer
Data Sheet’s daily news section was written and curated by Andrea Guzman.