Hi there, it’s Rachyl Jones with the tech team. On Tuesday, OpenAI CEO Sam Altman spoke at the WSJ Tech Live conference in Laguna Beach, Calif.—just another day in this year of AI, in which AI experts, especially the crew from OpenAI, have become seemingly ubiquitous on the tech conference circuit and in the news.
With new technologies, and specifically artificial general intelligence, humanity will be able to “solve all sorts of problems,” said Altman. In 10 years, he predicted, “people are going to say, ‘How can we say we didn’t want this?’”
This argument might sound familiar to those who braved the 5,200-word manifesto published a day earlier by venture capitalist and billionaire Marc Andreessen (or Fortune’s 700-word summary of it). The tract made a splash online, and not for its shiny view of the future. The writing—which paints a world in which technology solves all of humanity’s problems—has been criticized for lacking data, historical context, and alternative perspectives. And its combative tone seemed intended to strike a nerve. In an opinion piece for the Washington Post, columnist Adam Lashinsky, a former executive editor at Fortune, called it a “self-serving cry for help.” Fortune found some of Andreessen’s points extreme and implausible (previously covered in Data Sheet), while social media users called it culty, horrifying, and dangerously misguided.
When Altman took the stage at the WSJ conference, about 24 hours after Andreessen hit “Publish,” the OpenAI CEO fielded questions on contentious topics such as OpenAI’s safety features, the regulatory environment, and how his technology will change the job market. But Altman’s comments, though often not significantly different in substance from Andreessen’s, caused far less outcry.
Where Andreessen says universal basic income could “turn people into zoo animals to be farmed by the state,” Altman says, more simply, that these kinds of regular government payments are “not enough” to drive innovation. They make the same point, but one is easier to digest. (Also, are zoo animals farmed, Marc?)
Andreessen says trust and safety measures are the “enemy,” whereas Altman says safety can’t be determined in a lab but must be influenced by public use. Both argue safety shouldn’t stand in the way of releasing new technologies, but Altman didn’t go out of his way to peeve the ESG crowd. The two also agreed humanity needs both abundant intelligence and energy to move forward, though Andreessen coupled it with the claim that slowing AI innovation will cost lives, and that “is a form of murder.”
Altman isn’t the only one in Silicon Valley rushing to defend tech optimism. At TED AI this week, Andrew Ng, who cofounded the Google Brain AI research team and teaches computer science at Stanford, acknowledged fears around AI but promised a remedy. The anxiety is misplaced, he said during his presentation. To the critics who want to stop AI, “You’re wrong; AI is not the problem but the solution,” Ng said. Compare that with Andreessen’s opening line to anyone with concerns about AI: “We are being lied to.”
It’s not just a matter of style. The kind of optimism espoused by Andreessen leaves no room for questions, doubts, or debate. It’s more like blind faith.
It doesn’t have to be that way.
“I am a techno-optimist, too,” said Gary Marcus, who founded AI companies Robust.AI and Geometric.AI, in a tweet on X, responding to Andreessen’s manifesto. “But being long-term optimistic about technology doesn’t mean you have to ignore the short-term (or long-term) risks of technology.
“It means working to recognize the risks, and working to address them, so that we can reach the positive outcomes that are promised, not casting those who recognize those risks as the enemy,” he wrote.
Rachyl Jones
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
Today’s edition was curated by David Meyer.