Apple's new AI language modeling system, called ReaLM (Reference Resolution As Language Modeling), is expected to compete with OpenAI's GPT systems. ReaLM could make interactions with virtual assistants like Siri more intuitive, because the system can "see" and interpret on-screen content.
As reported by Business Insider, the new Apple system "outperforms other large language models like GPT-4 when determining context and what linguistic expressions refer to," at least according to the researchers who created ReaLM. Because ReaLM is a small language model, it's less complex than large language models like GPT, which makes it faster at on-device tasks than larger models that require more computing power.
What you need to know
If Apple's research team is correct, ReaLM's grasp of context will make it a better option for gleaning information from screenshots and other on-screen content, such as recipes or phone numbers. While OpenAI's GPT-4 can contextualize images, it was mostly trained on real-world photos rather than screenshots, so it is likely to be less accurate at extracting that kind of information than ReaLM.
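The system's name, "Reference Resolution As Language Modeling," hints at the core trick: turn what is on screen into plain text that a language model can read, then ask the model which entity a request like "call the one at the bottom" refers to. Here is a minimal, purely illustrative Python sketch of that framing; the entity fields, the prompt format, and the serialize_screen/build_prompt helpers are all hypothetical, not Apple's actual implementation:

```python
# Illustrative sketch only (not Apple's code): serialize on-screen
# entities into text so a language model can resolve references
# such as "the number at the bottom".

from dataclasses import dataclass


@dataclass
class ScreenEntity:
    label: str  # e.g. "phone_number" (hypothetical field)
    text: str   # the visible text, e.g. "555-0117"
    top: int    # vertical position; smaller = higher on screen


def serialize_screen(entities: list[ScreenEntity]) -> str:
    """Flatten on-screen entities into text, top to bottom,
    so positional references ("top", "bottom") stay meaningful."""
    ordered = sorted(entities, key=lambda e: e.top)
    return "\n".join(
        f"[{i}] {e.label}: {e.text}" for i, e in enumerate(ordered)
    )


def build_prompt(screen_text: str, user_request: str) -> str:
    """Build a prompt asking a (hypothetical) small on-device model
    to pick the entity index the user's request refers to."""
    return (
        "Screen contents (top to bottom):\n"
        f"{screen_text}\n\n"
        f"User request: {user_request}\n"
        "Which entity index does the request refer to?"
    )


entities = [
    ScreenEntity("business_name", "Joe's Pizza", top=40),
    ScreenEntity("phone_number", "555-0117", top=120),
    ScreenEntity("phone_number", "555-0199", top=480),
]

prompt = build_prompt(serialize_screen(entities), "Call the one at the bottom")
print(prompt)  # in a real system, this prompt would go to the on-device model
```

In a real system, the model's answer (an entity index) would then be handed back to the assistant, which could dial the number or open the recipe without the user ever naming it explicitly.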
While reading screenshots isn't the most glamorous use of artificial intelligence, it is one that plenty of people can find a use for in their daily lives. More so than image or text generation, in any case.
Running on-device as a small language model should also make ReaLM-powered AI assistants much faster at returning that information than GPT-powered systems. It is unclear at this time whether ReaLM will be integrated into Apple's Siri assistant or other Apple products.
While this is a small step compared to the AI ventures of Google, Microsoft, and Amazon, it does fit Apple's general MO. The company tends to be slower to adopt new systems than its competition, which often results in smoother integration. So we will likely see Apple step up its AI game in small ways going forward.