Newcastle Herald
Lucy Arundell

'Punitive' measure or safety net: what is the social media ban?

A social media ban would punish young people for merely existing in online spaces, according to 19-year-old Jemma Rule.

The youth council representative believes a minimum age for social media won't protect Australian kids.

"The language and messaging is that this ban is to protect young people, but it's problematic. It doesn't address safety or the issues of use," she said.

"It's almost victim-blaming, in that it doesn't hold the perpetrators to account, neither the companies nor the users."

It's the debate where everyone has an opinion: how do we keep young people safe on social media?

The federal government wants to ban children from the platforms, and Meta has announced new "teen accounts" for Instagram, all while parents say they're running out of options.

But are any of these ideas actually going to keep children safe? And what do young people themselves think?

What is the social media ban?

The federal government announced in early September it wanted to bring in a social media ban for Australian children, in a bid to address concerns over safety risks on the platforms. The proposed legislation follows a similar move in South Australia earlier in 2024, as well as a recommendation from the eSafety Commissioner for a legislated minimum age.

Details on what exactly the ban would look like, what platforms would be included, and what age would be mandated are sparse, but the prime minister has indicated he'd like to see the age limit set to 16.

A young person scrolling on a social media platform. Picture by Marina Neil

Opposition Leader Peter Dutton has also supported an age limit for under-16s, while the Greens have condemned the plan as "knee jerk politics that lacks evidence".

How would the ban work?

Australians signing up to social media accounts would need to prove their age, likely through some form of government-issued ID. The government hasn't specified how verification would work: whether it would be through a government-operated portal, or via the apps themselves.

The federal government has announced funding for an age assurance trial, calling for experiments with age verification technology to prevent children from seeing concerning content, such as pornography, online.

The tender for the trial set out that the technology could use facial age estimation for low-risk services. For high-risk content, the technology could require a government ID or other authentication methods.

Experts have raised concerns about the viability of age verification when children could borrow parents' or older siblings' IDs, or age up their own photos using AI.

Dr Faith Gordon from the Australian National University said while most people agree something has to be done to keep kids safe on social media, there's little evidence to suggest a ban would work.

Social media age limits have been implemented in France, where parents must approve the accounts of children under 15, and in China, with children's use of social media regulated by time and content.

"Children and young people do report being able to, in those jurisdictions, bypass the rules, use VPNs or use their parent or guardians' IDs," she said.

"It's also members of our community that are producing this disturbing content often, and the platforms are obviously facilitating that, so it's much more complex than banning or restricting kids from the platforms."

What do young people think about the ban?

In recent discussions on the proposed ban, the ACT Youth Council concluded that the federal government's plan was a "punitive approach to a nuanced issue".

The council, which is made up of young people aged 16 to 23, said that more needs to be done to stop the platforms and people making online spaces unsafe for children.

"Regardless of the age, the same issues are present ... children will create fake social media accounts which will be unregulated and lacking parental oversight," Ms Rule said.

"We need to educate young people on the specifics of safety online, and have more oversight of social media platform's reporting mechanisms."

Other children told this masthead that young people are more critical of what they see online than politicians give them credit for, and that parents and educators are helping them understand bias on social media.

How have the social media giants reacted?

Meta has announced new "teen accounts" will be rolled out on Instagram over September and October to help parents monitor their children's use of the platform.

Under the new teen accounts, children under 16 will automatically be made private users, meaning they can only be messaged and tagged in posts by people they're connected to. They'll also be covered by Instagram's sensitive content tools, which limit access to content like people fighting and cosmetic procedures.

They'll also receive a daily time-limit notification after 60 minutes of use, and the app will automatically mute notifications overnight. The new accounts apply to all teens under 16 (including those already on Instagram and those signing up) and to teens under 18 when they sign up for the app.

Executives from Meta, which owns Facebook, Instagram and WhatsApp, revealed to a Senate inquiry on September 11 that posts and photos from Australian accounts were being used to train AI models.
