Tom’s Guide
Technology
Christoph Schwaiger

Apple just revealed an AI technique to better compete against ChatGPT

(Image: iPhones showing Apple Intelligence features.)

When an AI lab updates its underlying large language model, the result can be unexpected behavior, including a complete change in the way the model responds to queries. Researchers at Apple have developed new ways to improve the user experience when an AI model people are used to working with gets upgraded.

In a paper, Apple's researchers note that users develop their own systems for interacting with an LLM, including prompt styles and techniques. Switching to a newer model can be a draining task that dampens their experience of using the AI.

An update could force users to change the way they write prompts, and while early adopters of tools like ChatGPT might accept this, a mainstream iOS audience will likely find it unacceptable.

To solve this issue, the team looked into creating metrics to measure regressions and inconsistencies between different model versions, and also developed a training strategy to keep those inconsistencies from arising in the first place.

While it isn't clear whether this will be part of a future version of Apple Intelligence in iOS, it's clear Apple is preparing for what happens when it does update its underlying models, ensuring Siri responds the same way to the same queries in the future.

Making AI backwards compatible 

Using their new method, the researchers said they managed to reduce negative flips, cases where an old model gives a correct answer while a newer model gives an incorrect one, by up to 40%.
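
To make the metric concrete, here is a minimal sketch (not Apple's published code) of how a negative flip rate could be computed, assuming you have the old model's answers, the new model's answers and the ground truth for the same set of questions:

```python
# Hypothetical illustration: counting "negative flips" between two model versions.
# A negative flip is a question the old model answered correctly but the new model gets wrong.

def negative_flip_rate(old_answers, new_answers, correct_answers):
    """Fraction of examples where the old model was right and the new model is wrong."""
    assert len(old_answers) == len(new_answers) == len(correct_answers)
    flips = sum(
        1
        for old, new, truth in zip(old_answers, new_answers, correct_answers)
        if old == truth and new != truth
    )
    return flips / len(correct_answers)

# Toy example: the new model regresses on one of four questions -> 25% negative flip rate.
old = ["4", "9", "16", "25"]
new = ["4", "9", "15", "25"]
truth = ["4", "9", "16", "25"]
print(f"Negative flip rate: {negative_flip_rate(old, new, truth):.0%}")
```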

The paper’s authors also argued in favor of ensuring that mistakes a new model makes are consistent with those you might see an older model make.

“We argue that there is value in being consistent when both models are incorrect,” they said, adding that, “A user may have developed coping strategies on how to interact with a model when it is incorrect.” Inconsistencies would therefore lead to user dissatisfaction.

Flexing their MUSCLE

They called the method used to overcome these obstacles MUSCLE (an acronym for Model Update Strategy for Compatible LLM Evolution). It does not require changing how the base model is trained; instead, it relies on training adapters, which are basically plugins for LLMs. The researchers refer to these as compatibility adapters.
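
Apple's compatibility adapters aren't publicly available, but as a rough illustration of the general adapter idea, here is a sketch of attaching a small LoRA adapter to a frozen base model using the open-source Hugging Face PEFT library; the model name and settings are placeholders rather than anything taken from the paper:

```python
# Rough sketch of the adapter idea (LoRA via Hugging Face PEFT), not Apple's actual method.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Placeholder base model; any causal LM would do for this illustration.
base = AutoModelForCausalLM.from_pretrained("microsoft/phi-2")

# The adapter adds a small number of trainable weights on top of the frozen base model.
adapter_config = LoraConfig(
    r=8,                     # rank of the low-rank update matrices
    lora_alpha=16,           # scaling factor for the adapter's contribution
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # placeholder: which layers get adapters varies by model
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, adapter_config)
model.print_trainable_parameters()  # only the adapter weights train; the base model stays fixed
```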

To test whether their system worked, the research team updated LLMs like Llama and Phi and, on some tasks, found negative flips of up to 60%. Their tests included asking the updated models math questions to see whether they still answered a particular problem correctly.

Using their proposed MUSCLE system, the researchers say they managed to mitigate quite a number of those negative flips, in some cases by up to 40%.

Given the fast pace with which chatbots like ChatGPT and Google’s Gemini are being updated, Apple’s research has the potential to make newer versions of these tools more dependable. It would be a pity if users had to trade off switching to a newer model against suffering a worse experience.
