The Street
Ian Krietzberg

Majority of people at odds with tech billionaires over artificial intelligence

When OpenAI chief Sam Altman testified before the Senate in May, he discussed the myriad threats posed by artificial intelligence and touted the importance of regulating his exponentially growing industry.

But Altman's perspective on AI boils down to a brief line in his opening statement: "We believe that the benefits of the tools we have deployed so far vastly outweigh the risks." 

To Altman, the risks that these supposed benefits outweigh are existential in nature. 

OpenAI has said explicitly on multiple occasions that it is on a mission to create artificial general intelligence (AGI), AI that is generally more intelligent than humans. Despite the skepticism of many experts who don't think AGI will ever be possible, OpenAI is certain that the creation of AGI could boost the economy and allow humanity to flourish "to a degree that is impossible for any of us to visualize yet." 

Altman has previously discussed such benefits as curing cancer and solving climate change as some of the potential results of a generally intelligent model. 

But the risks of such a model, the company itself has said, are "existential" and "extraordinary. A misaligned superintelligent AGI could cause grievous harm to the world; an autocratic regime with a decisive superintelligence lead could do that too."

Despite the severity of these risks, however unlikely they might be, OpenAI and its peers are racing ahead to achieve systems with greater and greater capabilities. 

But according to a new poll by the Institute for AI Policy, the vast majority of people feel the opposite way when it comes to AI. 

71% of those surveyed said that AI's potential risks outweigh its potential benefits and that developers should "err on the side of caution." 

A further 66% of those surveyed said that AI companies should be required to have regulatory representation on their boards. 

Earlier polling by the AIPI found that more than 80% of respondents surveyed don't trust tech executives to regulate themselves when it comes to AI; 82% of people want to slow down the development of the technology, while 83% believe AI could accidentally cause a catastrophic incident. 

This highlights the less dramatic, though no less severe, concerns that exist around AI: worker exploitation, enhanced fraud, the dramatic exacerbation of social inequities, and potential political instability. 

"There's going to be a hell of a lot of abuse of these technologies, and not necessarily always deliberately, but simply out of complacency or out of ignorance," Nell Watson, a prominent AI researcher and tech ethicist, told TheStreet. "And that's really the problem with AI. It will give you what you think you want, but it turns out that's not actually what you intended." 

On Sept. 13, prominent tech executives were in Washington to discuss the potential regulation of artificial intelligence. The participants included OpenAI CEO Sam Altman, Elon Musk, Meta's Mark Zuckerberg, Google's Chief Executive Sundar Pichai and Nvidia's CEO Jensen Huang, among others.
