Fortune
Jenn Brice

Google DeepMind director calls for clarity and consistency in AI regulations

(Credit: Kristy Walker/Fortune)

In the wake of California’s governor vetoing what would have been sweeping AI safety legislation, a Google DeepMind executive is calling for consensus on what constitutes safe, responsible, and human-centric artificial intelligence.

“That’s my hope for the field, is that we can get to consistency, so that we can see all of the benefits of this technology,” said Terra Terwilliger, director of strategic initiatives at Google DeepMind, the company’s AI research unit. She spoke at the Fortune Most Powerful Women Summit on Wednesday along with January AI CEO and cofounder Noosheen Hashemi, Eclipse Ventures general partner Aidan Madigan-Curtis, and Dipti Gulati, CEO for audit and assurance at Deloitte & Touche LLP U.S.

The women addressed SB-1047, the much-discussed California bill that would have required developers of the largest AI models to meet certain safety testing and risk mitigation requirements. Madigan-Curtis suggested that if companies like OpenAI are building models that really are as powerful as they say they are, there should be some legal obligation to develop them safely.

“That is kind of how our system works, right? It’s the push and the pull,” Madigan-Curtis said. “The thing that makes being a doctor scary is that you can get sued for medical malpractice.”

She noted the now-dead California bill’s “kill-switch” provision, which would have required companies to create a way to turn their model off if it was being used for something catastrophic, such as building weapons of mass destruction.

“If your model is being used to terrorize a certain population, shouldn’t we be able to turn it off, or, you know, prevent the use?” she asked.

DeepMind’s Terwilliger wants to see regulation that accounts for the different levels of the AI stack. She said foundation models have different responsibilities from the applications that use them.

“It’s really important that we all lean into helping regulators understand these distinctions so that we have regulation that will be stable and will make sense,” she said.

But the push to build responsibly shouldn’t have to come from the government, Terwilliger said. Even with regulatory requirements in flux, building AI responsibly will be key to the long-term adoption of the technology, she added. That applies to every level of the stack, from making sure data is clean to setting up guardrails for the model.

“I think we have to believe that responsibility is a competitive advantage, and so understanding how to be responsible at all levels of that stack is going to make a difference,” she said.
