TechRadar
Alex Blake

OpenAI is worried that GPT-4o users are developing feelings for the chatbot

(Image: OpenAI unveils GPT-4o)

The introduction of GPT-4o has been seen as a major step up in the abilities of OpenAI’s ChatGPT chatbot: it can now produce more lifelike responses and work with a wider range of inputs. But this increased sophistication may have a downside, with OpenAI itself warning that GPT-4o’s capabilities appear to be leading some users to grow increasingly attached to the chatbot, with potentially worrying consequences.

Writing in a recent 'system card' blog post for GPT-4o, OpenAI outlined many of the risks associated with the new chatbot model. One of them is “anthropomorphization and emotional reliance,” which “involves attributing human-like behaviors and characteristics to nonhuman entities, such as AI models.”

When it comes to GPT-4o, OpenAI says that “During early testing … we observed users using language that might indicate forming connections with the model. For example, this includes language expressing shared bonds, such as ‘This is our last day together’.”

As the blog post explained, such behavior may seem innocent on the surface, but it has the potential to lead to something more problematic, both for individuals and for society at large. To skeptics, it will come as further evidence of the dangers of AI and of the rapid, unregulated development of the technology.

Falling in love with AI


As OpenAI’s blog post admits, forming attachments to an AI might reduce a person’s need for human-to-human interaction, which in turn may affect the health of their relationships. OpenAI also notes that ChatGPT is “deferential,” allowing users to interrupt and take over conversations. That kind of behavior is expected with an AI, but it’s rude when done to other humans, and if it becomes more normalized, OpenAI believes it could affect ordinary human interactions.

The subject of AI attachment is not the only warning that OpenAI issued in the post. OpenAI also noted that GPT-4o can sometimes “unintentionally generate an output emulating the user’s voice” – in other words, it could be used to impersonate someone, giving everyone from criminals to malicious ex-partners opportunities to engage in nefarious activities.

Yet while OpenAI says it has put measures in place to mitigate this and other risks, it doesn’t appear to have any specific safeguards yet against users becoming emotionally attached to ChatGPT. The company merely said that “We intend to further study the potential for emotional reliance, and ways in which deeper integration of our model’s and systems’ many features with the audio modality may drive behavior.”

Considering the clear risks of people becoming overly dependent on an artificial intelligence, and the potential wider ramifications if this happens on a large scale, one would hope that OpenAI has a plan that it's able to deploy sooner rather than later. Otherwise, we could be looking at another example of an insufficiently regulated new technology having worrying unintended consequences for individuals, and for society as a whole.

