Fortune
Steve Mollman

Sure, A.I. has some ‘real risks,’ but the human extinction fears are a ‘distraction,’ says CEO of a $2 billion unicorn backed by Oracle and Nvidia

Aidan Gomez, co-founder and chief executive officer of Cohere (Credit: Christinne Muschi/Bloomberg via Getty Images)

Artificial intelligence destroying humanity used to be the stuff of sci-fi blockbusters. More recently, billionaires, lawmakers, and large swaths of the public have fretted about it for real.

But Aidan Gomez, cofounder and CEO of Cohere—a red-hot A.I. startup recently backed by database giant Oracle and chipmaker Nvidia—thinks such fears are overblown. Worse, they’re distracting us from “real risks with this technology,” he said in a Financial Times interview published Thursday. 

Oracle said this week it will use Cohere’s technology to let its business customers build their own generative A.I. apps. Cohere is in some ways to Oracle what OpenAI is to Microsoft, with each startup receiving hefty investments from its Big Tech partner, which in turn uses its A.I. technology. The difference is that Cohere is designed for corporate customers that want to train A.I. models on their own data without sharing it, whereas OpenAI has tapped more readily available information to train its buzzy A.I. chatbots ChatGPT and GPT-4. 

Gomez, previously a researcher at Google Brain, one of Google’s A.I. arms, sharply criticized the open letter signed in March by tech luminaries—including Tesla CEO Elon Musk and Apple cofounder Steve Wozniak—that called for a six-month pause on development of A.I. systems more advanced than GPT-4 to give policymakers a chance to catch up. Aside from it being “not plausibly implementable,” Gomez told the FT, the letter talked “about a superintelligent artificial general intelligence (AGI) emerging that can take over,” a scenario he considers exceptionally improbable.

(The letter asking for the pause reads in part, “Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”)

“To spend all of our time debating whether our species is going to go extinct because of a takeover by a superintelligent AGI is an absurd use of our time and the public’s mindspace,” Gomez argued. 

Real A.I. risks

Instead, he said, “there are real risks” that need to be addressed today. One immediate concern is that “we can now flood social media with accounts that are truly indistinguishable from a human, so extremely scalable bot farms can pump out a particular narrative.”

Asked about the danger of such capabilities undermining democratic processes—with the U.S. presidential election looming—he replied:

“Things get normalized just by exposure, exposure, exposure, exposure. So, if you have the ability to just pump people the same idea again and again and again, and you show them a reality in which it looks like there’s consensus—it looks like everyone agrees X, Y and Z—then I think you shape that person and what they feel and believe, and their own opinions. Because we’re herd animals.”

To address the problem, he said, we need mitigation strategies such as human verification, so we “can filter our feeds to only include the legitimate human beings who are participating in the conversation.”

He credits Musk for the blue-check revamp at Twitter, despite its rough start and surrounding controversy. Under Musk, the marks previously given to notable figures for free are now given to anyone for a monthly subscription, with Musk describing it in late March as the “only realistic way to address advanced A.I. bot swarms taking over.” 

“You can complain about the price or whatever—maybe it should be free if you upload your driver’s license, or something,” said Gomez. But “it’s really important that we have some degree of human verification on all our major social media.”

Another pressing risk, he said, is people trusting A.I. chatbots for medical advice. ChatGPT and its ilk are known to “hallucinate,” or basically make things up, which can be problematic when your health is on the line. Gomez didn’t offer specific ways to address the danger, but warned: “We shouldn’t have reckless deployment of end-to-end medical advice coming from a bot without a doctor’s oversight. That’s just not the right way to deploy these systems… They’re not at that level of maturity where that’s an appropriate use of them.”

Given the “real risks” and “real room for regulation” with A.I. technology, he said, he hopes the public takes the “fantastical stories” from A.I. doom merchants with a grain of salt. 

“They’re distractions,” he said, “from the conversations that should be going on.”
