What you need to know
- Microsoft CEO Satya Nadella and OpenAI's Sam Altman recently touched base with The Economist's Editor-in-Chief to talk about the future of AI, what's next for ChatGPT, and more.
- Sam Altman said he measures the success of ChatGPT by the percentage of human work it can accomplish, and expressed his satisfaction at the technology's widespread adoption as a workplace productivity tool.
- OpenAI is still working toward AGI superintelligence, with Altman suggesting that predictions it will cause more harm than good may not come to pass.
- Altman noted that there's no big red button to press if everything spirals out of control; rather, it's the small but significant decisions made along the way that mitigate such risks.
Now that the OpenAI fiasco, in which the board of directors ousted Sam Altman and then rehired him as CEO within a week, is well behind us, a lot of people wonder what's next for Microsoft and OpenAI as far as generative AI is concerned.
Well, the closest answer you'll probably get comes from a recent interview in which The Economist's Editor-in-Chief, Zanny Minton Beddoes, sat down with OpenAI's Sam Altman and Microsoft CEO Satya Nadella to talk about the future of AI, the direction of ChatGPT moving forward, the potential and dangers of AGI superintelligence, and, finally, regulation of the technology.
What does 2024 look like for ChatGPT?
The session starts with The Economist's editor-in-chief (EIC) asking Sam Altman what the future holds for ChatGPT. Altman responded, half-jokingly, that significant milestones like the launch of GPT-4, along with subtle breakthroughs that could eventually lead to superintelligence, tend to alarm concerned users.
He added that users tend to believe these milestones will change everything overnight, citing job losses and more. Interestingly, this intense interest in the technology is short-lived (a "two-week freakout"), with most users then shifting to critiquing advancements like GPT-4 over performance issues such as slow speeds, declining accuracy, and more.
Microsoft's Satya Nadella defines generative AI as a groundbreaking technology whose diffusion occurred almost instantaneously across the globe, adding significant value across various sectors, including education and medicine. The CEO cited that people now have free access to "better health advice and a better-personalized tutor" through AI-powered chatbots like Microsoft Copilot and ChatGPT.
Sam Altman disclosed that he measures the success of LLMs by the percentage of human tasks they can accomplish. He added that it's pretty satisfying to see the technology adopted en masse as "a companion for knowledge work" across organizations and integrated into workflows as a productivity tool.
OpenAI is still big on achieving AGI superintelligence
Sam Altman said that OpenAI is still working toward AGI superintelligence, though he didn't indicate whether the company is taking a radical or an incremental trajectory to get there. Such a breakthrough could surpass the cognitive abilities of humans, putting it miles ahead of already impressive chatbots like Microsoft Copilot.
Altman believes they'll be able to hit the superintelligence benchmark, and if history is anything to go by, users will have the standard "two-week freakout" about it, and then things will roll back to normal. According to the CEO:
"One thing I say a lot is that no one knows what happens next, and I can't see the other side of that horizon with any detail. But it does seem like the deep human motivations will not go anywhere."
Regulation of generative AI
The Economist's Zanny Minton Beddoes pointed out that users are alarmed by the prospect of a technology that supersedes human knowledge, as it could cause a great deal of harm if no guardrails or elaborate measures are put in place to keep it under control.
Altman cited contemporary accounts of past technological revolutions, noting that expert predictions turned out to be wrong in most cases. He suggested the same might hold for speculation that AGI superintelligence will cause more harm than good.
As the industry forges toward this benchmark, regulators like the US government, under the Biden administration, have already made their first move to establish control over the technology by issuing an executive order addressing some concerns around AI safety and privacy.
Beyond that, US export restrictions also impede chipmakers by preventing them from shipping advanced GPUs to China over security concerns, further fueling the long-standing rivalry between the US and China, to the point that Microsoft is debating whether to retain its Beijing-based AI research lab.
The US government has previously indicated that the export rules barring chipmakers from shipping GPUs to China are not meant to run down China's economy, but rather to address the possible exploitation of the technology for military use.
Concluding the interview, Altman was asked whether he'd pull the plug on AI advances and superintelligence if he sensed danger looming. He said there's no "big magic red button" that can be pressed to blow up the data center in such a scenario, contrary to what most people presume.
He added that safety instead boils down to the small yet significant decisions made along the way, with elaborate measures put in place to control how far users can push the technology, ultimately mitigating such risks.
OpenAI might already be well on its way to hitting the superintelligence benchmark: the company's staffers reportedly wrote a letter to the board of directors highlighting a potential breakthrough in the space, one that could see the company hit the benchmark within a decade.
Do you think there's enough regulation to ensure that AI technological advances don't spiral out of control? Share your thoughts with us in the comments.