The Street
Ian Krietzberg

Huge new ChatGPT update highlights the dangers of AI hype

OpenAI dramatically changed the public discourse around artificial intelligence when it launched ChatGPT last year; for the first time, the public had access to an AI model that could seemingly do everything from penning essays to helping investors with stock picks. The result has been a feverish game of catch-up among other tech giants, as excitement over the budding technology has consistently lifted tech stocks higher.

The viral chatbot is now getting a major update. OpenAI said Sept. 25 that it is beginning to roll out new voice and image capabilities within ChatGPT, which allow the model to process visual and auditory data in addition to text.


Users can upload images and "troubleshoot why your grill won’t start, explore the contents of your fridge to plan a meal, or analyze a complex graph for work-related data," OpenAI said in a blog post.

Plus and Enterprise users will gain access to the new update over the next two weeks; the feature will gradually roll out to other ChatGPT users over an undisclosed timeline. 

"However, these capabilities also present new risks, such as the potential for malicious actors to impersonate public figures or commit fraud," OpenAI added. 

The update — and the way it was announced — highlights an ongoing issue of hype that has permeated the industry since ChatGPT first went live. 

Several prominent AI researchers took to X to explain that, despite OpenAI's phrasing, no, ChatGPT cannot see, hear or speak. 

As Suresh Venkatasubramanian, an AI researcher and professor who in 2021 served as a White House tech advisor, told TheStreet in an interview last month: "It's the rhetoric and the dressing around (AI), that carries these tools into places where they don't yet have a justification for being used, but are still being used. And that's where we see the problems."

Safety concerns around new capabilities 

Dr. John Licato, an AI researcher and computer science professor, said that this push into more modalities has been expected for a while and could lead to a litany of powerful applications. His concern, however, is that OpenAI has kept its models closed off so that researchers remain unable to understand the actual capabilities of these models. 

"What worries me is that these new releases are coming under OpenAI's business model, which is not very open at all," Licato told TheStreet. "So long as OpenAI refuses to disclose the data they use to train their models, and the ways in which they update their models with user interaction data, we can never have any substantial guarantee of their safety."

For AI expert and Ivanti CPO Dr. Srinivas Mukkamala, the update further highlights concerns about ever-increasing AI-driven fraud, concerns that have largely been pushed aside in favor of baseless predictions by several prominent tech executives about the hypothetical threat of an artificial superintelligence.

“In the grand push for more data to power AI and machine learning, we’ve entirely forgotten that we are still figuratively throwing ourselves into those machines," Mukkamala told TheStreet, noting that OpenAI's acknowledgment of the risks posed by the update is a good thing. He remains concerned, however, that the company does not have strong enough guardrails in place. 

"The initial offering of ChatGPT has been abused regularly and guardrails have been circumvented with just a little creativity and some additional prompts, proving that this new technology is vulnerable to abuse by threat actors and scammers," he said. "As an industry, we’ve been so focused on gathering more data that we haven’t spent enough time making sure that we are using the right people to gather the data, and especially that we are using data in the right way. 

"The solution for this boils down to accountability for how we gather, store and use the data that has been used in these powerful models and algorithms."

If you work for OpenAI, contact Ian by email at ian.krietzberg@thearenagroup.net or on Signal at 732-804-1223.
