The Conversation
John Chiverton, Senior Lecturer in Electronic and Computer Engineering, University of Portsmouth

How AI features in smartphones are reducing their dependence on the cloud

Karlis Dambrans / Shutterstock

New phones are being launched with features enabled by artificial intelligence (AI). The latest of these is Google’s flagship Pixel 9. Samsung’s Galaxy S24, released at the beginning of 2024, also offers a range of AI-enabled photo editing features.

The hidden story behind devices like these is how companies have managed to migrate the processing required for these AI features from the cloud to the device in the palm of your hand.

In the Google Pixel 9, a feature called Magic Editor allows users to “re-imagine” their photos using generative AI. In practice, this means being able to reposition the subject of a photo, erase someone from the background, or turn a grey sky blue. The user provides suitable prompts and lets the app do the rest.

The phone’s generative AI features also allow you to add people or objects to your pictures by typing in a text prompt.

Of course, users have always been able to do this using photo editing software, but making the result look natural, rather than obviously edited, takes some skill. Magic Editor promises to use AI to perform these complex photo edits with “simple and intuitive actions”.

Another feature, called “Add Me”, allows users to appear in a group photo without having to hand their phone to a stranger. The phone’s owner takes a photo of the group, then hands the phone to a friend and steps into the scene they have just photographed. The phone then stitches the two shots together.
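
Google has not published the details of how Add Me works, but the basic idea of compositing two aligned shots can be sketched in a few lines of Python. In this sketch the file names and the rectangular mask are purely illustrative; a real pipeline would align the frames and segment the person automatically.

```python
# A toy illustration of stitching two aligned group shots together.
# A real pipeline would align the frames and segment the person
# automatically; here we assume pre-aligned shots and a hand-placed mask.
from PIL import Image

shot_group = Image.open("group_shot.jpg")  # owner's shot of the group
shot_owner = Image.open("owner_shot.jpg")  # friend's shot, with the owner in frame

# Mask: white wherever the friend's shot (containing the owner) should show.
mask = Image.new("L", shot_group.size, 0)
mask.paste(255, (1200, 400, 1800, 1600))   # illustrative rectangle

combined = Image.composite(shot_owner, shot_group, mask)
combined.save("group_everyone.jpg")
```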

Another feature, called “Best Take”, can be used to select the best elements from a series of very similar images and combine them into one picture. Google’s chatbot technology powers a digital assistant and other features on the phone.
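
Google does not say exactly how Best Take scores candidate frames either. As a simplistic stand-in, the sketch below ranks a burst of near-identical shots by overall sharpness using the OpenCV library; the real feature evidently works on individual face regions rather than whole frames.

```python
# A simplistic stand-in for "best frame" selection: score a burst of
# near-identical shots by sharpness and keep the highest-scoring one.
import cv2

def sharpness(path: str) -> float:
    """Variance of the Laplacian, a common proxy for image sharpness."""
    grey = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    return cv2.Laplacian(grey, cv2.CV_64F).var()

burst = ["frame1.jpg", "frame2.jpg", "frame3.jpg"]  # illustrative file names
best = max(burst, key=sharpness)
print(f"Best take: {best}")
```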

Phone features have come a long way since the first digital handsets, or since phones first gained their own integrated cameras.

Video: 2001 BBC Archive report, “First Smart Phone” (cameras on phones).

To the edge

Traditionally, the processing required for such AI-based functions has been too demanding to host on a device like a phone. Instead, it is offloaded to online cloud services powered by large, powerful computer servers.

However, companies are increasingly recognising the benefits of moving much of this processing onto customer devices, potentially putting greater control in the hands of consumers.

This involves migrating significant amounts of AI computational processing to what companies call the “edge”. The edge refers to the devices at the outer boundary of a network: typically consumer devices such as phones, which have far less processing power than cloud servers.

Diagram: the difference between how cloud-based and edge-based AI work.
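
In code, that difference comes down to where the model runs. The sketch below is purely schematic: the web address and the local_model argument are placeholders, not real services.

```python
import requests  # used only by the cloud-based path

# Cloud-based AI: ship the data over the network and wait for the answer.
def classify_in_cloud(image_bytes: bytes) -> str:
    # Placeholder endpoint; a real service would also need authentication.
    response = requests.post("https://example.com/classify", data=image_bytes)
    return response.json()["label"]

# Edge-based AI: the model already lives on the device, so no data leaves it.
def classify_on_device(image_bytes: bytes, local_model) -> str:
    # local_model stands in for an on-device network, e.g. a TFLite model.
    return local_model(image_bytes)
```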

To make this possible, the power demands of the processing need to be reduced. Companies have achieved this migration with specialised microprocessors tailored specifically to AI-based workloads.

For instance, Google’s Tensor AI processors, referred to as Tensor Processing Units (TPUs), appear to be central to the features available on its Pixel phones. These edge-based processors use specialised software to efficiently apply AI models to data captured or stored on the device.
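
Google has not documented the Pixel’s internal software stack, but its publicly available TensorFlow Lite runtime gives a flavour of how a pre-trained model can be applied entirely on a device. The model file name below is a placeholder.

```python
# Running a pre-trained model on-device with TensorFlow Lite.
import numpy as np
import tensorflow as tf

interpreter = tf.lite.Interpreter(model_path="mobilenet_v2.tflite")  # placeholder model
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Dummy input with the shape the model expects, e.g. one 224x224 RGB image.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)

interpreter.invoke()  # all of the computation happens locally, not in the cloud
scores = interpreter.get_tensor(output_details[0]["index"])
print("Top class:", int(scores.argmax()))
```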

These TPUs include grids of processing elements called systolic arrays, which enable large amounts of data to be processed simultaneously. This efficient design saves power and computation time.
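
A systolic array computes something like a matrix multiplication by pumping data through a grid of simple multiply-accumulate cells, each passing its inputs on to its neighbours at every clock tick. The toy simulation below reproduces that data flow one step at a time in Python; in real hardware, every cell works in parallel.

```python
import numpy as np

def systolic_matmul(A: np.ndarray, B: np.ndarray) -> np.ndarray:
    """Simulate an output-stationary systolic array computing A @ B.

    Cell (i, j) accumulates one output value. Skewing the inputs means
    row i of A and column j of B reach the cell after a delay of i + j
    clock ticks; in hardware, all cells fire at once on every tick.
    """
    m, k = A.shape
    k2, n = B.shape
    assert k == k2, "inner dimensions must match"
    C = np.zeros((m, n))
    for t in range(m + n + k - 2):        # total clock ticks needed
        for i in range(m):                # in hardware this double loop
            for j in range(n):            # happens simultaneously
                step = t - i - j          # which element arrives at (i, j) now
                if 0 <= step < k:
                    C[i, j] += A[i, step] * B[step, j]
    return C

A = np.arange(6).reshape(2, 3)
B = np.arange(12).reshape(3, 4)
assert np.array_equal(systolic_matmul(A, B), A @ B)
```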

This efficiency is crucial because of the huge number of calculations that need to be performed to make even a single AI decision. It is something that processors such as Google’s TPUs have become much better at handling in the last few years.
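
A rough back-of-envelope count shows the scale involved; the layer sizes below are illustrative rather than Pixel specifics.

```python
# Rough operation count for a single fully connected neural-network layer.
inputs, outputs = 1024, 1024
macs_per_layer = inputs * outputs                  # multiply-accumulate operations
print(f"{macs_per_layer:,} MACs for one layer")    # 1,048,576
# A model chaining 50 such layers needs over 52 million MACs per decision.
print(f"{50 * macs_per_layer:,} MACs for 50 layers")
```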

Indeed, the initial TPUs, first designed in 2015, were created to help speed up the computations performed by large, cloud-based servers during the training of AI models. In 2018, the first TPUs designed to be used by computers at the “edge” were released by Google. Then, in 2021, the first TPUs designed for phones appeared – again, for the Google Pixel.

Competition to integrate greater amounts of AI onto mobile phones is growing. That means we’re likely to see even more innovative technology arrive on the market in the coming years.


John Chiverton received in-kind funding from Google in 2018 to use their Google Compute Engine servers.

This article was originally published on The Conversation. Read the original article.
