Edinburgh Live
Martyn Landi, PA Technology Correspondent & Sophie Buchan

Parents praise new Apple phone safety feature that scans messages for nudity

Apple has confirmed that an AI-powered tool will soon come to iPhones in the UK as part of a new child safety feature.

First launched in the US last year, the tool, known as "communication safety in Messages", allows parents to turn on warnings for their children's phones. When enabled, all photos sent or received by the child in the Messages app are scanned for nudity.

The scanning is carried out by the device itself, using so-called on-device artificial intelligence, meaning no one at Apple sees the photos. The feature does, however, include an option to alert a trusted contact.


With the feature turned on, if a scanned photo is found to be inappropriate it will be blurred and the child will be shown a warning that it may contain nudity, along with resources from child safety groups reassuring them that it is okay to choose not to view the image. A similar alert appears if a child attempts to send photos which contain nudity.

The tool is opt-in and can only be used for child accounts set up in iCloud as part of a Family Sharing plan. Apple confirmed it will be rolled out in a software update in the coming weeks and will be available in the Messages app on iPhone, iPad and Mac.

However, a third proposed protection tool – initially announced alongside the others last year before being delayed after campaigners raised privacy concerns – remains in development, with Apple saying it is continuing to collect feedback and make improvements.

That feature would detect child sexual abuse imagery when someone tries to upload it to iCloud, and report it to the authorities.

Apple said the process would be carried out securely and would not involve routinely scanning a user's camera roll. However, privacy campaigners have raised concerns over the plans, with some suggesting the technology could be hijacked by authoritarian governments to look for other types of imagery – something Apple said it would not allow.

In response to the announcement, online safety group the Internet Watch Foundation (IWF) said it welcomed the move to boost child safety.

IWF communications director Emma Hardy said: "We’re pleased to see Apple expanding the communication safety in Messages feature to the UK. At IWF, we’re concerned about preventing the creation of ‘self-generated’ child sexual abuse images and videos.

"Research shows that the best way to do this is to empower families, and particularly children and young people, to make good decisions. This, plus regularly talking about how to use technology as a family are good strategies for keeping safe online."
