What you need to know
- Sam Altman claims AI will be smart enough to mitigate the dangers posed by its own rapid advancement, including the potential destruction of humanity.
- The CEO hopes researchers figure out how to prevent AI from destroying humanity.
- Altman also indicated that AGI might be achieved sooner than anticipated, adding that the safety concerns people have expressed won't manifest at that moment because AGI will whoosh by with "surprisingly little" societal impact.
Beyond the security and privacy concerns surrounding the rapid advancement of generative AI, the possibility of further breakthroughs remains a major risk. Top tech companies, including Microsoft, Google, Anthropic, and OpenAI, are heavily invested in the space, but the lack of policies governing AI development is deeply concerning: it might be difficult to establish control if and when AI veers off the guardrails and spirals out of control.
When asked at the New York Times Dealbook Summit whether he has faith that someone will figure out a way to avert the existential threats posed by superintelligent AI systems, OpenAI CEO Sam Altman said:
“I have faith that researchers will figure out how to avoid that. I think there’s a set of technical problems that the smartest people in the world are going to work on. And, you know, I’m a little bit too optimistic by nature, but I assume that they’re going to figure that out.”
The executive further suggested that by then, AI might be smart enough to solve the crisis itself.
Perhaps more concerning, a separate study placed the probability of AI ending humanity at 99.999999%. For context, that figure is known as p(doom): the probability that AI takes control of humanity or, worse, destroys it. Roman Yampolskiy, the AI safety researcher behind the study, further indicated that it would be virtually impossible to control AI once it reaches the superintelligence benchmark, and that the only way around the issue is not to build AI in the first place.
However, OpenAI seemingly remains on track to check AGI off its bucket list. Sam Altman recently indicated that the coveted milestone might arrive sooner than anticipated. Contrary to popular belief, the executive claims it will whoosh by with "surprisingly little" societal impact.
At the same time, Sam Altman recently wrote an essay suggesting superintelligence might be only "a few thousand days away." However, the CEO indicated that the safety concerns people have expressed won't materialize at the AGI moment.
Building toward AGI might be an uphill task
OpenAI was recently on the verge of bankruptcy, projected to post a $5 billion loss within months. Multiple investors, including Microsoft and NVIDIA, extended its lifeline through a funding round that raised $6.6 billion, ultimately pushing its valuation to $157 billion.
However, the funding round came with strings attached, including pressure to transform into a for-profit venture within two years or risk refunding the money to investors. This could expose the ChatGPT maker to issues like outside interference and hostile takeovers from companies like Microsoft, which some analysts predict could acquire OpenAI within the next three years.
Related: Sam Altman branded "podcasting bro" for absurd AI vision
OpenAI might have a long day at the office trying to convince stakeholders to support this change. OpenAI co-founder and Tesla CEO Elon Musk has filed two lawsuits against OpenAI and Sam Altman, citing a stark betrayal of the company's founding mission and alleging involvement in racketeering activities.
Market analysts and experts predict investor interest in the AI bubble is fading, and that investors might eventually pull their money and channel it elsewhere. A separate report corroborates this theory, indicating that 30% of AI projects will be abandoned after proof of concept by the end of 2025.
There are also claims that top AI labs, including OpenAI, are struggling to build more advanced AI models due to a lack of high-quality data for training. OpenAI CEO Sam Altman dismissed the claims, stating "There's no wall" blocking further scaling and advances in AI development. Former Google CEO Eric Schmidt echoed Altman's sentiment, indicating "There's no evidence scaling laws have begun to stop."