
Apple claims its new AI outperforms GPT-4 on some tasks by including on-screen content and background context

Credit: Armand Valendez from Pexels

A team of AI researchers at Apple claims that its AI system, Reference Resolution As Language Modeling (ReALM), can outperform GPT-4 on some kinds of queries. The team has published a paper on the arXiv preprint server describing the system and its information-gathering abilities.

Over the past couple of years, LLMs such as GPT-4 have dominated the computing landscape as companies compete to improve their products and win more users. Apple has noticeably lagged in this area; its Siri assistant has gained little in the way of new AI capabilities.

In this new effort, the team at Apple claims that its ReALM system is not just an attempt to catch up; on certain types of queries, it outperforms all the other LLMs currently available to the general public.

In its paper, the team at Apple explains that the LLM provides more accurate answers to user questions because it can resolve ambiguous on-screen references and draw on both conversational and background information. Put another way, it can treat the user's screen as part of its search process, looking for clues about what the user was doing before they posed the query.

It can also examine other processes currently running on the device from which queries are made, gaining further insight into the train of thought that led to the query. By adding such information to traditional LLM processing techniques, the team claims, its system becomes much more likely to give users the information they are looking for.
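The paper's core idea, treating reference resolution as a language-modeling problem, lends itself to a short illustration. The minimal Python sketch below shows one plausible way to serialize on-screen entities and recent dialogue into a single text prompt that a language model could use to resolve a vague reference such as "call that one." The Entity structure, prompt wording, and build_prompt helper are illustrative assumptions for this article, not Apple's actual implementation.

```python
# Illustrative sketch only: reference resolution framed as language modeling,
# in the spirit of the ReALM paper. All names and formats here are assumed.
from dataclasses import dataclass


@dataclass
class Entity:
    id: int     # index the model will answer with
    kind: str   # e.g. "phone_number", "address", "business_name"
    text: str   # the string as it appears on screen


def build_prompt(entities, conversation, query):
    """Serialize on-screen entities and the dialogue into one text prompt,
    so a language model can pick the entity a vague reference points to."""
    lines = ["On-screen entities:"]
    for e in entities:
        lines.append(f"  [{e.id}] ({e.kind}) {e.text}")
    lines.append("Conversation so far:")
    lines.extend(f"  {turn}" for turn in conversation)
    lines.append(f"User query: {query}")
    lines.append("Which entity id does the query refer to? Answer with the id.")
    return "\n".join(lines)


# Example: a screen showing a business listing, then a vague follow-up query.
entities = [
    Entity(1, "business_name", "Luigi's Pizzeria"),
    Entity(2, "phone_number", "415-555-0123"),
    Entity(3, "address", "220 Mission St, San Francisco"),
]
conversation = [
    "User: find me a pizza place nearby",
    "Assistant: Luigi's Pizzeria is 0.3 miles away.",
]
print(build_prompt(entities, conversation, "call that one"))
# A model trained on such prompts would be expected to output "2",
# resolving "that one" to the phone number shown on screen.
```

The point of the sketch is that once screen content is flattened into text, disambiguating "that one" becomes an ordinary next-token prediction task, which is why a relatively small, fine-tuned model can compete with a much larger general-purpose one on this kind of query.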

The team says it has tested its system against multiple LLMs, including GPT-4, and claims that it scored better than all of them on certain types of tasks. The researchers also note that Apple plans to insert ReALM between its devices and Siri, allowing Siri to provide much better answers than it does now, though the feature will likely only be available to users who upgrade to iOS 18 when it is released this summer.

More information: Joel Ruben Antony Moniz et al, ReALM: Reference Resolution As Language Modeling, arXiv (2024). DOI: 10.48550/arXiv.2403.20329

Journal information: arXiv

© 2024 Science X Network

Citation: Apple claims its new AI outperforms GPT-4 on some tasks by including on-screen content and background context (2024, April 9) retrieved 30 April 2024 from https://techxplore.com/news/2024-04-apple-ai-outperforms-gpt-tasks.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.
