As artificial intelligence advances at an unprecedented pace, factions are emerging within Silicon Valley.
One camp, often dubbed "doomers," frets about an apocalyptic scenario in which AI destroys humanity. On the flip side, proponents of effective accelerationism (e/acc) firmly believe in AI's ability to transform the world for the better and advocate hastening its development to unlock those benefits.
Entrepreneur and venture capitalist Vinod Khosla, who cofounded Sun Microsystems four decades ago, views the "doomers" as conspiracy theorists donning tinfoil hats.
"The doomers are focusing on the wrong risks," Khosla said on stage at Fortune‘s Brainstorm AI conference in San Francisco on Tuesday, adding that while believes the risk of sentient AI killing humanity exists, it's about the same risk as an asteroid hitting our planet and destroying us all. "By far, orders of magnitude, higher risk to worry about, is China, not sentient AI killing us off.”
"It's sort of not worthy of a conversation to be honest," Khosla added, regarding the risks of sentient AI.
Khosla was an early backer of the high-profile AI startup OpenAI, which recently went through a tumultuous period after its board, empowered by an unusual corporate structure in which OpenAI's nonprofit entity governs its for-profit subsidiary, ousted CEO Sam Altman. Although Altman swiftly reclaimed his position as OpenAI's CEO, Khosla believes the episode underscores the problem with the wary mindset toward AI that has become popular in some circles, including among the former OpenAI board members who orchestrated Altman's ouster.
"There were a bunch of misinformed board members applying the wrong religion instead of making rational decisions," Khosla said. "The company is much better off today than it was a month ago."
Earlier on Tuesday, LinkedIn cofounder Reid Hoffman—another early OpenAI investor—called Altman's ousting "a failure of board governance." Altman is "an amazing CEO" and "I'm really glad he's back in place," Hoffman said.
Hoffman also shares Khosla's attitude that the benefits of AI outweigh the risks, although he expressed it in far less strident terms: "Yes, we need to pay attention—and be in the dialogue about—the risks, but the real important thing is to not fumble the future."
For Khosla, the worry isn't a Terminator scenario. The real risks of artificial intelligence are more practical, he said, such as China using advanced AI to influence elections by targeting individual voters with thousands of bots.
"We should be worrying about the longer term over the next 25 years whoever wins the AI race will win the economic race," Khosla said.