The government's new 'world-leading' online safety laws were brought before Parliament for the first time on Thursday (March 17). The Online Safety Bill, which has been in progress for around five years, aims to make the internet a safer place for all users - especially children - while protecting freedom of speech.
The new laws will require social media platforms, search engines and other apps and websites that allow people to post their own content to protect children, tackle illegal activity and uphold their stated terms and conditions. The regulator Ofcom will also have the power to fine companies that fail to comply with the laws up to 10 per cent of their annual global turnover.
In recent months, the Bill has faced a number of changes, and a raft of new offences have been added to it ahead of its introduction to Parliament. The Bill was first published in draft in May 2021, and the government says the changes have 'significantly strengthened' it since.
Among the changes, executives whose companies fail to cooperate with Ofcom's information requests could face prosecution or jail time within two months of the Bill becoming law, rather than after two years as previously drafted. The government states that the new laws will make companies 'proactively tackle the most harmful illegal content and criminal activity quicker.'
The changes come after MPs, peers and campaigners warned the initial proposals failed to offer the expected user protection. Here are the changes that have been made to the Bill - and what they all mean.
Paid-for scam adverts
The Bill is set to combat online fraud by bringing paid-for scam adverts on social media and search engines into scope. It will require the largest and most popular social media platforms and search engines to prevent paid-for fraudulent adverts appearing on their services, the government says.
The change will improve protections for internet users from the potentially devastating impact of fake ads, including where criminals impersonate celebrities or companies to steal people’s personal data, peddle dodgy financial investments or break into bank accounts.
Pornography
All websites which publish or host pornography - including commercial sites and social media - must put robust checks in place to ensure users are 18 or over. This could include adults using secure age verification technology to verify that they possess a credit card and are over 18, or having a third-party service confirm their age against government data.
If sites fail to act, then Ofcom will be able to fine them up to 10 per cent of their annual worldwide turnover, or block them from being accessible in the UK. Bosses of these websites could also be held criminally liable if they fail to cooperate with Ofcom.
The government said: "A large amount of pornography is available online with little or no protections to ensure that those accessing it are old enough to do so. There are widespread concerns this is impacting the way young people understand healthy relationships, sex and consent."
Age verification controls are one of the technologies websites may use to demonstrate to Ofcom that they can fulfil their duty of care and prevent children accessing pornography.
Anonymous trolls
New measures have been added to clamp down on anonymous trolls, to give people more control over who can contact them and what they see online. Companies with the largest number of users and highest reach must offer ways for their users to verify their identities and control who can interact with them.
This may include options for users to tick a box in their settings to receive direct messages and replies only from verified accounts. The onus will be on the platforms to decide which methods to use to fulfil this identity verification duty but they must give users the option to opt in or out.
When it comes to verifying identities, some platforms may choose to provide users with an option to verify their profile picture to ensure it is a true likeness. Or they could use two-factor authentication, where a platform sends a prompt to a user's mobile number for them to verify themselves. Alternatively, verification could include people using a government-issued ID such as a passport to create or update an account.
"While this will not prevent anonymous trolls posting abusive content in the first place - providing it is legal and does not contravene the platform’s terms and conditions - it will stop victims being exposed to it and give them more control over their online experience," the government said of the new rules. It also said that 'banning anonymity online entirely' would negatively affect 'those who have positive online experiences,' including people who use it for their personal safety. This might include domestic abuse victims, activists living in authoritarian countries or young people exploring their sexuality.
Cyberflashing
Cyberflashing - where offenders send unsolicited sexual images to others online - will be criminalised in England and Wales via the Bill. The change means that anyone who sends a photo or film of a person's genitals, for the purpose of their own sexual gratification or to cause the victim humiliation, alarm or distress, may now face up to two years in prison.
Photos are usually sent through social media or dating apps, but the law also covers images sent over data-sharing services such as Bluetooth and AirDrop. The new offence will ensure cyberflashing is captured clearly by the criminal law - giving the police and Crown Prosecution Service greater ability to bring more perpetrators to justice.
It follows similar recent action to criminalise upskirting and breastfeeding voyeurism with the government determined to protect people, particularly women and girls, from these emerging crimes.