Hello, and welcome to Eye on A.I.
I spent most of last week at Viva Tech, the big technology trade show in Paris. Unsurprisingly, the hottest topic in the exhibition hall was generative A.I. Elon Musk, whose appearance at Viva Tech was a major coup for the show’s organizers, said A.I. “is probably the most disruptive technology ever.” He mentioned Tesla’s work on two A.I.-enabled technologies, self-driving cars and humanoid robots, and said he thought it was these technologies (both of which are a subset of what he called “autonomy”), not anything that Tesla produced today, that explained the company’s $816 billion market cap. Musk said that, in an optimistic scenario, A.I. will usher in “an age of abundance where any goods or services that you want, you can just have.” But he also said even if there were “some sort of A.I. apocalypse, I think I would still want to be alive at this time to see it,” before adding, in a joking aside, “and hopefully not cause it.”
At the same time Musk was speaking on Friday, I was moderating a panel on “Media Literacy in the Age of A.I.” One of the panelists, Claire Leibowicz, who is head of A.I. and media integrity at the Partnership on A.I., argued that the “volume, accessibility and reach” that generative A.I. potentially gives to misinformation does represent a difference from previous forms of manipulated content. She said she was particularly worried that a flood of A.I.-generated content would lead to a phenomenon known as “the liar’s dividend”—the idea that in a media environment where concerns about deepfakes and other A.I.-manipulated content degrade public trust in all information, the real winners are politicians, governments, and corporations who gain the ability to escape accountability simply by disputing any true accusation.
Another panelist, Sonja Solomun, deputy director of the Centre for Media, Technology and Democracy at McGill University in Montreal, agreed, comparing exposure to disinformation to cocaine—snort it a few times, and there’s no lasting harm. But repeated exposure, she warned, fundamentally alters your brain chemistry.
Meanwhile, Charlie Beckett, professor of media and communications at the London School of Economics, said that while he supported ideas such as digital watermarking that would make it easier to tell when content was A.I.-generated or when photos had been manipulated, people should not be lulled into thinking these technologies are silver bullets for what is actually a complex, societal issue. He also pointed out that even the value of trust in media and institutions is context-dependent. The country where surveys consistently find the public most trusts both the media and politicians? China. “Personally, I don’t see China as a great model for an information ecosystem,” he said.
Elsewhere at the conference, I heard Nick Thompson, the CEO of The Atlantic, who is known for his daily “the most interesting thing in tech” videos on LinkedIn and other social media, opining on A.I. and regulation. He said it remained to be seen whether regulation would be “large and dumb” (a complex web of rules and requirements that would enable regulatory capture by the biggest and best-funded technology players), “small and dumb” (too light-touch, allowing harms to proliferate), or “small and smart” (getting the balance just about right). After his talk, I asked him whether he thought calls for an international agency to monitor the development of advanced A.I. systems, perhaps along the lines of the International Atomic Energy Agency, fell into the “large and dumb” camp or the “small and smart” camp. He mulled it over for a long few seconds, then replied “small and smart,” before walking off.
Well, the A.I. regulation that is actually closest to being on the books is the European Union’s A.I. Act, which cleared a key milestone last week when the European Parliament passed its version of the legislation. Now the text will be further negotiated between the Parliament, the Council of the European Union, which represents the interests of the bloc’s national governments, and the European Commission, which is the bloc’s executive arm. A final version is likely to be enacted later this year and come into force in 2025. But many experts think the basic shape of the law is unlikely to change too radically.
Exactly how the current draft of the A.I. Act came about was the subject of a Time magazine scoop today. Using documents obtained through freedom of information requests, along with other reporting, Time’s Billy Perrigo showed that even as OpenAI CEO Sam Altman has been on a global tour repeatedly calling for regulation, his company quietly lobbied the European Commission and Council for less onerous requirements. Specifically, Time obtained a lobbying document in which OpenAI argued against a draft proposal that would have seen generative A.I. foundation models, such as OpenAI’s GPT large language models, automatically classified as “high risk” under the A.I. Act’s regulatory schema. The lobbying apparently succeeded: In the version passed by the European Parliament, foundation models are not subject to the same risk assessments as other forms of A.I. (Instead, companies developing foundation models must meet several transparency requirements and ensure their models comply with existing European law. Even that, Altman has implied, may be hard for OpenAI to do, given questions about whether the company has a legitimate legal basis for processing European citizens' data, both in training its A.I. models and, perhaps, when EU citizens use its products, such as ChatGPT.)
Just how tough it may be for companies to comply with the EU A.I. Act is apparent in a flowchart that Brian Wong, a partner, and Tom Whittaker, a senior associate, at the law firm Burges Salmon created to help clients prepare for the new law. (You can have a look at it here.) It depicts a Rube Goldberg-esque pathway that companies will need to navigate in order to comply. Whittaker told me that for companies already operating in heavily regulated industries, such as financial services, health care, transportation, and the government sector, complying with the new A.I. Act might not be too much of a stretch. But for others, and for many smaller businesses, he acknowledged, “the economic burden will be greater.”
Whittaker also told me he thinks the new law will probably slow the adoption of A.I. by companies in the EU and could delay the roll-out of A.I. systems to EU customers. (Like Europe’s strict data protection law, GDPR, the new EU A.I. Act applies to any company with customers or employees in Europe, not just those headquartered there, and there are already signs that companies are pushing back plans to make generative A.I. chatbots available to EU consumers because of concerns about complying with EU laws.) He said one of the biggest uncertainties will be which specific regulator within each EU country has responsibility for enforcing the A.I. Act, as different regulators are likely to place emphasis on different parts of the law. He also said there is a real need for global standards around A.I. fairness, transparency, accountability, and safety. But, in the end, he said it was better to go slow on A.I. adoption and “ensure that fundamental rights are protected.”
With that, here’s the rest of this week’s news in A.I.
Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com