Businessweek
Lucy Papachristou and Jillian Deutsch

ChatGPT Advances Are Moving So Fast Regulators Can’t Keep Up

Calls for governments to regulate artificial intelligence far predate OpenAI’s release of ChatGPT in late 2022. But officials haven’t come up with an approach to deal with AI’s potential to enable mass surveillance, exacerbate long-standing inequities or put humans in physical danger. With those challenges looming, the sudden emergence of so-called generative AI—systems such as chatbots that create content on their own—is presenting a host of new ones.

“We need to regulate this, we need laws,” says Janet Haven, executive director of Data & Society, a nonprofit research organization in New York. “The idea that tech companies get to build whatever they want and release it into the world and society scrambles to adjust and make way for that thing is backwards.”

The most developed proposal today for regulating AI comes from the European Union, which first issued its Artificial Intelligence Act in 2021. The legislation, whose final form is still being debated, would put aggressive safeguards in place when the technology is being used for “high risk” cases, including employment decisions or in some law enforcement operations, while leaving more room for experimentation with lower-risk applications. Some of the lawmakers behind the act want to designate ChatGPT as high risk, an idea others object to. As it’s written, the bill focuses on how technologies are used rather than on the specific technologies themselves.

In the US, local, state and federal officials have all begun to take some steps toward developing rules. The Biden administration last fall presented its blueprint for an “AI Bill of Rights,” which addresses issues such as discrimination, privacy and the ability for users to opt out of automated systems. But the guidelines are voluntary, and some experts say generative AI has already raised issues—including the potential for mass-produced disinformation—that the blueprint doesn’t address. There’s growing concern that chatbots will make it harder for people to trust anything they encounter online. “This is part of the trajectory towards a lack of care for the truth,” says Will McNeill, a professor at the University of Southampton in the UK who specializes in AI ethics.

A few public agencies in the US are trying to limit how generative AI tools are used before they take hold: The New York City Department of Education prohibits ChatGPT on its devices and networks. Some US financial institutions have also banned the tool. For AI more broadly, companies have been rapidly adopting the technology in recent years with “no substantial increases” in risk mitigation, according to a 2022 survey by McKinsey & Co.

Without clear policies, the main thing holding back AI seems to be the limits the companies building the tech place on themselves. “For me, the thing that will raise alarm bells is if organizations are driving towards commercializing without equally talking about how they are ensuring it’s being done in a responsible way,” says Steven Mills, chief AI ethics officer at Boston Consulting Group Inc. “We’re still not sure yet what these technologies can do.”

Companies such as Google, Microsoft and OpenAI that are working on generative AI have been vocal about how seriously they take the ethical concerns about their work. But tech leaders have also cautioned against overly stringent regulations, with US-based companies warning Western governments that an overreaction will give China, which is aggressively pursuing AI, a geopolitical advantage. Former Google Chief Executive Officer Eric Schmidt, now chair of the nonprofit Special Competitive Studies Project, testified at a congressional hearing on March 8 that it’s important AI tools reflect American values and that the government should primarily “work on the edges where you have misuse.”

For its part, China is already planning rules to limit generative AI and has stopped companies from using apps or websites that route to ChatGPT, according to local news reports. Some experts believe these measures are an attempt to implement a censorship regime around the tools or to give Chinese competitors a leg up. But technologists may be pushing ahead too rapidly for officials to keep up. On March 14, OpenAI released a new version of the technology that powers ChatGPT, describing it as more accurate, creative and collaborative.

©2023 Bloomberg L.P.
