In 2024, smartphones seem to be truly living up to the ‘smart’ aspect of their names. Back in 2021, when the Google Pixel 6 Pro was released, we saw the beginnings
of explicit AI integration with features such as Google’s Magic Eraser. And,
at last year’s Mobile World Congress, Qualcomm treated me
to an exclusive preview of the generative AI model Stable Diffusion running on a mobile device for the first time.
If you’re familiar with generative AI tools such as Stable Diffusion, you’ll know how mind-blowing it is that a smartphone could muster the computing power required for such complex generative AI tasks a whole year ago.
Oppo’s latest flagship device, the Find X7 Ultra, is pushing things forward with the world’s first Quad Main Camera system, comprising four 50MP cameras, two of which feature periscope zoom lens systems. At the heart of the Find X7 Ultra’s camera system is the Sony LYT-900 sensor.
It’s a one-inch type sensor, designed specifically with computational photography in mind, and is powered by the Snapdragon 8 Gen 3, which has a custom, dedicated Image Signal Processor. Sharing a similarly classy chrome and leather aesthetic, this device strikes me as the spiritual successor to the Panasonic Lumix DMC-CM1, which was well ahead of its time as the first mobile to feature a one-inch sensor way back in 2015. The Find X7 Ultra takes the torch from the CM1 and runs with it, offering what is effectively four flagship smartphone cameras in one device. With a combined equivalent optical focal range of 14-270mm, it’s an impressive package.
We saw a liberal use of the term ‘AI’ during the Samsung Unpacked launch event in mid-January, where the company’s latest smartphones were announced. The S24 series devices rely heavily on AI to deliver some of their key features. For example, you can select and move objects within your pictures on the device, and it will cleverly use content-aware generative fill to replace the gaps left by the removed objects. It’s a similar implementation to the tools deployed in the latest Google Pixels.
Galaxy flagships are particularly famous for their extensive zoom ranges. However, this year’s Ultra device ditches 10x optical zoom for a maximum of 5x optical magnification. Instead, the Galaxy S24 Ultra achieves the rest of its upper zoom range with AI-powered digital zoom, eking out detail from its ‘Quad Tele’ camera setup.
An especially interesting addition to the Galaxy S24 series is the device’s ability to take advantage of distributed computing power. This means that photos and videos captured on the device can be processed with cloud-based AI as you shoot and edit, spreading the computing load around and enabling a whole raft of groundbreaking functions.
Many of the new AI-powered features that we’re starting to see rolling out in smartphones – or ‘connected cameras’, as I like to call them – this year will undoubtedly have an effect on the way people capture and create content on the go. We’ve all seen the predominantly negative impact that the introduction
of filters had on portraiture, so I’m intrigued to see what new trends emerge as these latest AI-driven creative tools become standard.
How do you feel about the inevitable consequences of cameras being developed around the use of AI? Does it take something away from the art of creating photos and videos, or are you excited about what’s ahead? Personally, I’m sick of hearing about ‘AI’, but it’s firmly a thing now, so we have to talk about it – do forgive me!