Online safety bill: why making the UK the 'safest place to go online' is not as easy as the government claims

Andy Phippen, Professor of IT Ethics and Digital Rights, Bournemouth University

The government’s online safety bill, a reform years in the making, will now become law.

Among the bill’s key aims is to ensure it is more difficult for young people (under the age of 18) to access content that is considered harmful – such as pornography and content that promotes suicide or eating disorders. It places a “duty of care” on tech companies to ensure their users, especially children, are safe online. And it aims to provide adults with greater control over the content they interact with, for example if they wish to avoid seeing sexual content.

The legislation puts the onus on service providers (such as social media companies and search engines) to enforce minimum age requirements, publish risk assessments, ensure young people cannot access harmful content (while still granting adults access) and remove illegal content, such as material encouraging self-harm and deepfake intimate images.

The government has said the new law will make the UK the “safest place to be online”, but this isn’t something that can happen overnight. Ofcom, the UK’s communications regulator, is in charge of turning the legislation into something it can actually regulate. By the regulator’s own calculations, this process will take months.

There are many who view the bill as poorly thought out, with potential overreach that could conflict with fundamental human rights. The Open Rights Group has raised serious concerns around privacy and freedom of expression.

The challenges of regulating the internet

There are also aspects of the bill that are, currently, impossible to implement. Take, for example, the expectation that platforms will inspect the content of private, end-to-end encrypted messages to ensure there is no criminal activity (such as sexual communication with children) on their platforms. This cannot be done without violating the privacy these technologies are designed to provide.

If platforms are required to provide “back doors” into technology designed to keep communications private, this may contradict privacy and human rights law. At present, there is no way to grant some parties access to encrypted communications without weakening the security of those communications for everyone. Some platforms have said they will leave the UK if such erosions of encryption are enacted.
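To make that technical point concrete, here is a minimal sketch in Python using the PyNaCl library. The names and message are illustrative, not any platform's actual implementation. The relaying service only ever handles ciphertext; without the recipient's private key there is nothing to inspect, which is why any inspection mechanism must weaken the scheme itself.

```python
# Minimal end-to-end encryption sketch using PyNaCl (illustrative only).
from nacl.public import PrivateKey, Box

# Each user generates a key pair; private keys never leave their devices.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts a message so that only Bob's private key can open it.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"a private message")

# The platform relaying this sees only ciphertext. To "inspect" it, the
# platform would need Bob's private key, i.e. a back door that breaks
# the privacy guarantee for every user of the system.
plaintext = Box(bob_key, alice_key.public_key).decrypt(ciphertext)
assert plaintext == b"a private message"
```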

There is a long history of governments wrongly assuming that encrypted communications can be selectively accessed, a history that is rarely reflected upon in current debates.


Read more: The UK just passed an online safety law that could make people less safe


Furthermore, age verification and estimation technology is not yet foolproof, nor accurate enough to determine someone’s exact age. Yoti, a leading provider of age verification and estimation technology, has stated that its technology can correctly identify a user aged 13-17 as being “under 25” 99.9% of the time. It remains entirely possible that many young adults would be falsely identified as minors, which might prevent them from accessing legal content. Previous attempts to legislate age verification for pornography providers, such as in the Digital Economy Act 2017, were ultimately abandoned due to the complexities of implementation.
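To see why this matters at scale, consider a back-of-the-envelope calculation. The user count and error rate below are hypothetical assumptions for illustration, not figures Yoti or anyone else has published; the point is that the headline 99.9% claim concerns minors being correctly flagged, and says nothing about how often adults are wrongly flagged.

```python
# Hypothetical illustration of age-estimation false positives.
# Both numbers below are assumptions, not published error rates.
young_adult_users = 1_000_000     # assumed 18- to 24-year-old user base
assumed_error_rate = 0.05         # assume 5% are misjudged as minors

wrongly_blocked = young_adult_users * assumed_error_rate
print(f"{wrongly_blocked:,.0f} adults wrongly denied legal content")
# Even a small per-user error rate, applied at platform scale, can lock
# tens of thousands of adults out of content they may legally view.
```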

While the technology continues to develop, perfect implementations of these requirements seem unlikely to arrive anytime soon.

What is ‘harmful’ content?

The other major argument against the bill is that, even with the best of intentions, the protections designed to keep children safe could have a chilling impact on freedom of speech and freedom of expression.

Previous versions of the bill placed expectations on platforms to explicitly tackle “legal but harmful” content for adults. This was defined at the time as content that would be viewed as offensive by a “reasonable person of ordinary sensibilities”. While these provisions have now been removed, there is still a great deal of ambiguity around what it means to protect children from “harmful” content.

Outside of illegal content, who decides what is harmful?

Platforms will be expected to make rules around content they deem potentially harmful to certain users, and to censor it before it can be published. This might also prevent children from accessing legitimate information, for example content related to gender and sexuality, that gets caught up in the filtering and monitoring systems platforms put in place. Without a clear definition of what harmful content is, it will be down to platforms to guess, with goalposts that may move depending on the government of the day.
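As a deliberately naive sketch of how blunt such filtering can be (the blocklist and posts below are hypothetical; real moderation systems are more sophisticated, but face the same boundary problem):

```python
# A deliberately naive keyword filter of the kind a platform might reach
# for when told to block "harmful" content (hypothetical example).
BLOCKED_TERMS = {"suicide", "self-harm", "sexuality"}

def is_blocked(post: str) -> bool:
    """Flag a post if it contains any blocked term."""
    text = post.lower()
    return any(term in text for term in BLOCKED_TERMS)

# Both of these support-seeking posts are swept up by the filter,
# illustrating how protective rules can block helpful information.
posts = [
    "Where can I find a suicide prevention helpline?",
    "Advice for young people with questions about their sexuality",
]
for post in posts:
    print(is_blocked(post), post)
```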

Young people want adult support in dealing with what they see online – not regulation banning them from seeing it. Prostock-studio/Shutterstock

What would actually make the internet safe?

As someone who researches the ethics of technology and the habits of young people online, my concern is that this bill will be viewed as the solution to online harms – it clearly is not.

These measures, if effectively implemented, will make it more difficult for young people to stumble across content meant for adults, but they will not stop a determined teenager. Furthermore, a lot of intimate content shared by young people is exchanged between peers rather than accessed via platforms, and the legislation will do nothing to tackle that.

I often speak to young people about what help they would like to be safer online. They rarely ask for risk assessments and age verification technologies; they want better education and more informed adults to help them when things go wrong. Far better, young people tell me, to give people the knowledge to understand the risks and how to mitigate them than to demand that platforms stop them.

I am reminded of a quote from the American cybersecurity researcher Marcus Ranum: “You can’t solve social problems with software.”


Andy Phippen does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.

This article was originally published on The Conversation. Read the original article.
