OpenAI's CEO Sam Altman testified before a Senate hearing on AI oversight earlier in the week. He laid out a passionate case for quick and expansive government regulation of the exponentially growing technology.
"If this technology goes wrong, it could go quite wrong," he noted at the time.
Despite Altman's fervent calls for regulation, oversight and licensing, OpenAI is showing no signs of slowing things down.
A New Way to Use ChatGPT
The Microsoft (MSFT)-backed tech company said May 18 that it is bringing ChatGPT to mobile.
The mobile rollout will begin in the U.S. with a ChatGPT app for iOS, and the app will expand globally in the coming weeks.
OpenAI said that an Android version of the app will be available soon, though it didn't offer a date.
The app -- which is free to use -- syncs users' conversation history across all their devices. It also integrates Whisper, an automatic speech recognition (ASR) system that can recognize, translate and transcribe spoken input.
The mobile launch comes just a week after OpenAI brought web browsing and plugins in a new beta version update to ChatGPT Plus subscribers.
These updates make the chatbot significantly more powerful by giving it access to the internet, up-to-date events and proprietary data sources. At the same time, that increased functionality through enhanced internet access poses an even greater threat.
The Risks of AI
OpenAI -- which, as Altman's testimony underscored, has been up-front about the risks posed by its own software from the beginning -- listed some of these threats on its website.
"There’s a risk that plugins could increase safety challenges by taking harmful or unintended actions, increasing the capabilities of bad actors who would defraud, mislead, or abuse others," OpenAI wrote in March. "For example, our red teamers discovered ways for plugins -- if released without safeguards -- to perform sophisticated prompt injection, send fraudulent and spam emails, bypass safety restrictions, or misuse information sent to the plugin."
The company, however, said that it has used these findings to enact certain safeguards, though it didn't specify what those safeguards are.
Speaking at the same Senate oversight hearing, AI expert Professor Gary Marcus testified that the introduction of plugins could have "profound security consequences."
"A month after GPT-4 was released, OpenAI released ChatGPT plugins," Marcus said. This "quickly led to something called AutoGPT, with direct access to the internet, the ability to write source code, and increased powers of automation."
"We have built machines that are like bulls in a china shop," Marcus added, "powerful, reckless, and difficult to control."