The Street
Samuel O'Brient

Apple AI tool falsely announces death of trending murder suspect

Over the past quarter, Apple  (AAPL)  has been in sharp focus after releasing Apple Intelligence, its highly anticipated artificial intelligence (AI) platform.


This product suite is part of the iOS 18, iPadOS 18, and macOS Sequoia operating systems and brings a range of new features, including generative AI tools for writing and image generation. Apple also lets users reduce the interruptions they receive throughout the day by having the system summarize their iPhone notifications.

However, this new feature still appears to have some glitches, one of which has landed the company in a difficult situation.

Last week, iPhone users received a notification that quickly caught their attention, as it featured a shocking news announcement. The story has since been revealed to be false, and Apple is now facing the fallout as its customers react to what appears to be a new public failure of AI.


The newest Apple iPhone comes with Apple Intelligence, an AI platform that recently caused some problems. (Image: Getty)

Apple Intelligence may not be so smart after all

Since the launch of ChatGPT sparked the AI frenzy that has overtaken markets, questions have been raised about how well AI can truly process information. While many chatbots have been praised for their ability to answer questions and summarize data, Apple’s new AI problem suggests that some platforms are still struggling to digest and summarize news.


On Friday, December 13, Apple Intelligence sent out an alert that summarized three breaking news stories from BBC News, at least as the AI platform interpreted them, each separated by a semicolon. The first one featured four words: “Luigi Mangione shoots himself.”

Mangione, the man accused of shooting and killing former UnitedHealthcare  (UNH)  CEO Brian Thompson on December 4, 2024, has been the subject of many news stories since his arrest in Altoona, Pennsylvania, on December 9. The BBC quickly filed a complaint, clarifying that it had not published any article claiming that Mangione had shot himself, and has since reported that it is urging Apple to scrap the generative AI tool.

So, how did this happen? According to one expert, the reason can be traced back to the large language models (LLMs) that Apple Intelligence utilizes. Komninos Chatzipapas, founder of HeraHaven AI, walked TheStreet through what likely happened with Apple Intelligence, stating:

“LLMs like GPT-4o (which is what powers Apple Intelligence) don't really have any inherent understanding of what's true and what's not. They're statistical models that have been trained on billions of words of text.

Through this process, they've become really good at predicting the next set of words that would come after an instruction, which lets them generate coherent & convincing but potentially misleading info, because of bias that could be introduced during training.”

In this case, Apple used an LLM to generate a summary of a BBC article on Mangione. Chatzipapas hypothesizes that when Apple trained the summarization model used here, its workers fed the model multiple examples of articles about someone shooting themselves.

This likely resulted in the model learning a "shooters shoot themselves" pattern, a narrative it incorrectly recited when summarizing a BBC report that involved both Mangione and a shooting.
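Chatzipapas’s explanation is easier to see in a concrete example. Below is a minimal sketch of LLM-based notification summarization, assuming the OpenAI Python client and GPT-4o (the model he says powers Apple Intelligence). Apple’s actual pipeline is not public; the headline and prompt here are invented for illustration.

```python
# Minimal sketch of LLM-based notification summarization.
# Assumes the OpenAI Python client (pip install openai) and an
# OPENAI_API_KEY in the environment. Apple's real pipeline is not
# public; this only illustrates the technique Chatzipapas describes.
from openai import OpenAI

client = OpenAI()

# Hypothetical headline; not the actual BBC notification text.
headline = "Luigi Mangione: Suspect in CEO shooting appears in court"

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": "Compress this news notification into five words "
                       "or fewer. Do not add facts.",
        },
        {"role": "user", "content": headline},
    ],
)

# The model emits whatever continuation is statistically likely. If its
# training data over-represents "X shoots himself" phrasings, aggressive
# compression can drift into that pattern even when the source says no
# such thing; the model has no inherent notion of what is true.
print(response.choices[0].message.content)
```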

Apple and the latest failure of AI

With AI technology advancing at its current pace, it can be easy to forget that it has limitations. But recently, multiple failures of AI have become apparent. Several health insurance providers, including UnitedHealthcare, have come under scrutiny for using AI systems to assess medical claims, a practice that some believe has led to people being denied care or treatments they need.

Now, Apple Intelligence’s botched news notification further highlights the kinds of problems that AI systems can cause. Lars Nyman, chief marketing officer of cloud computing company CUDO Compute, provided context on this topic.


“When a generative AI pushes a patently false — and emotionally charged — notification, it’s more than a glitch,” he states. “It's an early warning about speed-over-precision decision-making in AI rollouts. And I think it's fair to say that Apple have found themself on the back foot with regards to the AI revolution.”

Nyman adds that he believes that, due to the lack of guardrails surrounding Apple’s AI rollout, this type of blunder was likely inevitable. “In a rush to stay ahead of the competition, Apple may have skipped over some of the ‘slow down and think’ steps,” he said. “Possibly also a little bit of hubris to boot.”

As Chatzipapas highlights, the biases that affect AI models can sometimes be reduced through experimental techniques, such as having another model fact-check the first LLM’s output. “However,” he adds, “work on this front is still ongoing.”
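For illustration, here is a hedged sketch of that mitigation: a second model checks whether the first model’s summary is actually supported by the source article before the notification ships. The function name, prompts, and model choice are assumptions made for this sketch, not any vendor’s actual safeguard.

```python
# Hedged sketch of the mitigation Chatzipapas mentions: a second LLM
# fact-checks a generated summary against the source text. The prompts
# and pass/fail logic are illustrative assumptions, not a real
# product's safeguard.
from openai import OpenAI

client = OpenAI()

def summary_is_supported(source: str, summary: str) -> bool:
    """Ask a verifier model whether every claim in the summary is
    stated in the source article."""
    verdict = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {
                "role": "system",
                "content": "Answer YES only if every claim in the summary "
                           "is stated in the source. Otherwise answer NO.",
            },
            {
                "role": "user",
                "content": f"Source:\n{source}\n\nSummary:\n{summary}",
            },
        ],
    )
    answer = verdict.choices[0].message.content.strip().upper()
    return answer.startswith("YES")

# A summary that fails the check can be suppressed instead of being
# pushed to millions of lock screens. The verifier is itself an LLM,
# so this reduces, rather than eliminates, the risk of hallucinated
# notifications.
```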

