Some way down the long list of upgrades in the new Canon EOS R1 and Canon EOS R5 Mark II, headlined by stacked sensors, new autofocus skills and a second processor, were a couple of all-new editing features: in-camera neural network upscaling and noise reduction.
Powered by what Canon calls deep learning, and what is colloquially known as AI, the upscaling feature lets you quadruple an image's pixel count – a 400% upscale – while noise reduction can effectively reduce noise by two stops. The potential of this tech is huge.
Upscaling increases the EOS R1's 24MP images to 96MP, and the EOS R5 Mark II's 45MP shots to 180MP (both currently for JPEG or HEIF images only, not raw images).
In another example, you can apply noise reduction (available in each camera's raw editor) to, say, ISO 51,200 images and clean up these high-ISO shots so they look like they were shot at ISO 12,800. Sounds amazing, right? But are these in-camera editing tools any good?
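That two-stop claim is just exposure arithmetic: each stop halves the apparent ISO, so two stops takes an ISO 51,200 shot down to the noise level of ISO 12,800. A quick back-of-the-envelope sketch in Python (my own illustration, nothing to do with Canon's firmware):

```python
def effective_iso(iso: int, stops_of_nr: int) -> int:
    """Each stop of effective noise reduction halves the apparent ISO."""
    return iso // (2 ** stops_of_nr)

# Two stops of noise reduction on an ISO 51,200 shot:
print(effective_iso(51_200, 2))  # 12800
```

One stop would get you to ISO 25,600; it's the second stop that makes the quoted ISO 12,800 claim.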
I was one of the few to have a hands-on session with both cameras ahead of their launch and eagerly tried out the tools. It's early days, and I was working on images from beta samples of each camera, but the results were surprisingly good.
A computer inside your camera
Upscaling is nothing new – it's been in Photoshop for years, and Topaz Photo AI is a popular option too, with an upscale tool of up to 600%. These third-party tools increase the resolution of images and work pretty well, but they do so by interpolation – essentially, educated guesswork using surrounding pixels.
Canon's neural network tech isn't exactly new, either. Canon announced it was working on this image-processing tech well over a year ago and we covered it in an article about how AI is transforming how cameras work. However, until now, this tech was only available in Canon's paid-for Cloud Raw service, which supports files from its latest mirrorless cameras like the EOS R6 Mark II and EOS R10, and its DPP software (which, by the way, is painfully slow and can't touch the likes of Adobe Lightroom).
That all changed with the EOS R1 and EOS R5 Mark II, the first cameras to feature these editing tools. You can't upscale raw images yet, just JPEG or HEIF files (I asked Canon why, but didn't get a clear answer); however, neural network noise reduction is available in the raw processing editor.
Upscaling is applied to images you've already shot, unlike the 'high-res shot' mode found in other cameras like the OM System OM-1 II. High-res shot uses pixel shift, rapidly shooting multiple frames that are combined into one image 2x or 4x the size, but it's susceptible to ghosting if there's any movement in your shot. Upscaling doesn't suffer from ghosting, and is hands down the better option for most people.
Canon's in-camera upscaling and noise reduction should, at least in theory, outperform third-party software, because it's fine-tuned to the camera and lens used for the shot – including their settings – and uses that information to make its improvements.
How good are Canon's in-camera AI editing tools?
Typically I'd expect the best photo editing software with a powerful upscaling tool to take a few minutes on a single image using a computer – there's a lot of processing power involved. I was surprised, then, that it took under a minute on both the EOS R1 and EOS R5 Mark II.
Either Canon's cameras are incredibly powerful, they're performing a lower-quality upscale, or they simply have less work to do because they're fine-tuned for their own images, including the camera and lens settings used.
By the way, I tried to use the in-camera tools on images from another camera on a memory card, and it wouldn't work: the EOS R1 and EOS R5 Mark II only upscale and apply AI noise reduction to their own files.
The EOS R1's upscaled images are 12,000 x 8,000 pixels, up from 6,000 x 4,000 pixels, and you can see for yourself what a 4x bigger image looks like alongside the original, above. The same scaling applies to the EOS R5 Mark II.
A neat hack for upscaled images is that you can crop to a quarter of the frame and still have the same number of pixels as the original file: handy for filling your frame with otherwise distant subjects, like the breakdancer (see below).
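The resolution and crop figures are straightforward arithmetic: quadrupling the pixel count doubles each side of the image, and cropping back down to the original pixel count works out to a 2x linear crop. A quick sketch (my own worked example, using the EOS R1 figures quoted above):

```python
import math

def upscaled_dims(width: int, height: int, pixel_factor: int = 4) -> tuple[int, int]:
    """Quadrupling the pixel count scales each dimension by sqrt(4) = 2."""
    scale = math.isqrt(pixel_factor)
    return width * scale, height * scale

# EOS R1: 24MP (6000 x 4000) upscales to 96MP (12000 x 8000)
w, h = upscaled_dims(6000, 4000)
print(w, h, (w * h) // 1_000_000)  # 12000 8000 96

# Cropping the 96MP file back to the original 24MP keeps a
# quarter of the frame area, i.e. a 2x linear crop factor
crop_factor = math.sqrt((w * h) / (6000 * 4000))
print(crop_factor)  # 2.0
```

The same arithmetic applies to the EOS R5 Mark II's files; only the starting dimensions differ.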
As far as I can tell, detail in upscaled images is surprisingly good, but the tool feels held back by the JPEG-only limitation, and when you're pixel peeping, detail looks a little processed.
Noise reduction is a subtler effect, but from the limited samples I've tried it on, it can clean up the patchiness you get in high-ISO images without smudging detail in the way regular noise reduction can. It can't recover lost detail – that's what shooting in raw is for, since you can brighten or darken raw images in-camera – but it cleans up noise impressively.
Practically speaking, employing these tools right now is a bit fiddly. Images can only be upscaled one at a time and once you've completed the upscale you're taken to the end of the image gallery in playback. This means lots of scrolling to find the next image to work on, especially if you have thousands of images on the memory card. I hope Canon improves the user experience.
The tool is also being rolled out to DPP and, more helpfully, to the likes of Lightroom via a Neural network Image Processing (NnIP) plug-in, so you'll soon be able to make the most of upscaling on the desktop using your favorite Adobe software.
What does in-camera AI mean for cameras and lenses moving forward?
Ever since I saw that neural network tech is in the EOS R1 and EOS R5 Mark II, I've been pondering this: if in-camera upscaling is on the money for making 400% bigger images, does this put to bed the megapixel race?
After all, it transforms the EOS R1 from a 24MP camera into a 96MP one, and the EOS R5 Mark II from a 45MP camera into a 180MP one. Does it really matter whether your camera has 24MP or 45MP when you can quadruple the size of your picture? Will you ever need more than 96MP?
And if you have 400% bigger images, you can crop to a quarter of the frame to fill it with distant subjects – a 2x crop that effectively doubles your lens' reach, e.g. making your 28mm lens behave like a 56mm one. (A bit like how the 61MP Leica Q3 has digital crop modes for its 28mm fixed lens.) What does this mean for lens design going forward? Will we still need bulky super-telephoto lenses?
On the flip side, will you still need a lower-megapixel camera like the EOS R1 for better low-light performance when you can reduce noise by two stops on a higher-resolution camera like the EOS R5 Mark II? Does sensor resolution even matter anymore?
These are all big questions for camera brands: neural network tech could have a profound impact on sensor design and the kinds of lenses we use in the future. I haven't had my hands on full-production versions of the cameras for a definitive verdict, and it's unclear how Canon will develop the tech further, including how it's implemented in-camera. However, it's surely another way that AI is transforming how our cameras work.