Fortune
Lionel Lim

Governments may take a softer approach to encourage responsible AI: 'Over regulation will stifle AI innovation'

(Credit: Graham Uden for Fortune)

Governments are trying to navigate a tricky balance with generative AI. Regulate too hard, and you risk stifling innovation. Regulate too lightly, and you open the door to disruptive threats like deepfakes and misinformation. Generative AI can augment the capabilities both of nefarious actors and of those trying to defend against them.

During a breakout session on responsible AI innovation last week, speakers at Fortune Brainstorm AI Singapore acknowledged that a global one-size-fits-all set of AI rules would be difficult to achieve. 

Governments already differ in terms of how much they want to regulate. The European Union, for example, has a comprehensive set of rules that govern how companies develop and apply AI applications.

Other governments, like the U.S., are developing what Sheena Jacob, head of intellectual property at CMS Holborn Asia, calls “framework guidance”: no hard laws, but nudges in a preferred direction.

“Over regulation will stifle AI innovation,” Jacob warned.

She cited Singapore as an example of innovation happening outside the U.S. and China. While Singapore has a national AI strategy, the city-state does not have laws that directly regulate AI. Instead, the overall framework counts on stakeholders like policymakers and the research community to “collectively do their part” to facilitate innovation in a “systemic and balanced approach.”

Speakers at last week's breakout, like many others at Brainstorm AI Singapore, acknowledged that smaller countries can still compete with larger ones in AI development.

“The whole point of AI is to level the playing field,” said Phoram Mehta, APAC chief information security officer at PayPal. (PayPal was a sponsor of last week's breakout session.)

But experts also warned against the dangers of neglecting AI’s risks.

“What people really miss out is that AI cyber hacking is a cybersecurity risk at a board level that’s bigger than anything else,” said Ayesha Khanna, co-founder of Addo AI and a co-chair of Fortune Brainstorm AI Singapore. “If you were to do a prompt attack and just throw hundreds of prompts that were…poisoning the data on the foundational model, it can completely change the way an AI works.”

Microsoft announced in late June that it had discovered a way to jailbreak a generative AI model, causing it to ignore its guardrails against generating harmful content related to topics like explosives, drugs, and racism.

But when asked how companies can block malicious actors from their systems, Mehta suggested that AI can help the “good guys” too.

AI is “helping the good guys level the playing field…it’s better to be prepared and use AI in those defences, rather than waiting for it and seeing what types of responses we can get.”
