The Guardian - UK
Technology
Dan Milmo, Global technology editor

Age checks, trolls and deepfakes: what’s in the online safety bill?

Platforms will have to explain in their terms of service how they enforce their age limits. Photograph: NurPhoto/Rex/Shutterstock

The online safety bill returns to the House of Commons on Tuesday with the government pledging to introduce an important change: criminal liability for tech executives whose platforms persistently fail to protect children from harm online.

It is the latest alteration to a piece of legislation that has triggered debates about a range of issues, from free speech to dealing with trolls and proper age checking for pornography sites. Here is a quick run-through of the bill as it stands.

How does the bill work?

The cornerstone of the bill is the duties of care it will place on tech companies to protect users from harmful content. The legislation will apply to platforms that host user-generated content, which covers social media services such as Twitter, TikTok and Facebook, and search engines such as Google. Although many of these services are based outside the UK, they fall within the scope of the bill if they are accessible to UK users.

All tech firms covered by the bill will have to protect all users from illegal content. The sort of content that platforms will need to remove includes child sexual abuse material, revenge pornography, material facilitating the sale of illegal drugs or weapons, and terrorist content.

Tech platforms will also have a duty of care to keep children safe online. This will involve preventing children from accessing harmful content and ensuring that age limits on social media platforms – the minimum age is typically 13 – are enforced. Platforms will have to explain in their terms of service how they enforce these age limits and what technology they use to police them.

In relation to both of these duties, tech firms will have to carry out risk assessments detailing the threats their services might pose in terms of illegal content and keeping children safe. They will then have to explain how they will mitigate those threats – for example through human moderators or using artificial intelligence tools – in a process that will be overseen by Ofcom, the communications regulator. This is expected to come into force by the end of the year.

What are the punishments for companies under the legislation?

Ofcom will have a range of regulatory powers under the bill. At the top end, it will be able to impose fines of up to £18m, or 10% of global turnover – a big number if it is a company such as Meta, which generated revenue of just under $118bn in 2021. In the most extreme cases, rogue sites can be blocked from operating by ordering payment providers, advertisers and internet service providers to stop working with them. Ofcom will also have the power to issue enforcement notices under the bill, telling companies and platforms to improve how they operate.

Can executives go to jail under the legislation?

Even before the government conceded to backbench rebels on Monday, tech executives faced the threat of a two-year jail sentence under the legislation if they hindered an Ofcom investigation or a request for information.

Now, they also face the threat of a two-year jail sentence if they persistently ignore Ofcom enforcement notices telling them they have breached their duty of care to children. In the face of tech company protests about criminal liability, the government is stressing that the new offence will not criminalise executives who have “acted in good faith to comply in a proportionate way” with their duties.

Nonetheless, it will sharpen the minds of social media executives. The new offence will target senior managers who “connive” in “ignoring enforceable requirements”.

Are there other criminal offences?

The bill will introduce a range of criminal offences for England and Wales. These include encouraging people to self-harm, sharing pornographic “deepfake” images, taking and sharing “downblousing” images, cyberflashing (sending an unsolicited sexual image), and sending or posting a message that conveys a threat of serious harm.

How does it deal with pornography and age verification?

If a platform publishes pornography, it will need to have “robust” processes in place to check that a user is not underage. How that is to be done is up to the platform – there are a number of tools that can be used to check a user’s age – but it will be vetted by Ofcom. The government has said any age assurance method used by pornography sites would have to protect users’ data, reflecting privacy campaigners’ concerns that requiring users of porn websites to log in could make it easier to collect – and leak – data on an individual’s viewing habits.

Will it protect adults from online trolls and abuse?

Under a previous iteration, the bill placed a duty of care on large platforms to address content that was harmful but not illegal. This alarmed free speech advocates on the Conservative backbenches and elsewhere, so it has been removed. Instead, tech firms will be required to remove certain types of “legal but harmful” content if it is already banned under their terms of service, under a clause that tries to ensure platforms pay more than just lip service to their content rules. Adults will also have the option of screening out certain types of harmful content if they so choose. This includes posts that are abusive, or incite hatred on the basis of race, ethnicity, religion, disability, sex, gender reassignment or sexual orientation.
