A first-of-its-kind agreement between big tech, Netsafe and NZTech will see companies like Meta and Google commit to reporting on how they intend to get a grip on digital harm
A joint agreement between social media platforms and digital stakeholders in New Zealand hopes to set an industry standard for online safety.
Meta (which includes Facebook and Instagram), Google (including YouTube), TikTok, Amazon (including Twitch) and Twitter have signed the Code of Practice for Online Safety and Harms, which goes live from today.
The agreement, which the companies signed alongside online safety regulator Netsafe and tech sector advocates NZTech, commits them to reducing harmful content by instituting clear public complaints mechanisms and reporting yearly on their safety standards.
The companies are asked to look at seven areas: child sexual exploitation and abuse, cyberbullying or harassment, hate speech, incitement of violence, violent and graphic content, misinformation, and disinformation.
It’s a voluntary programme of self-regulation, which Netsafe CEO Brent Carey hopes will lead to more companies getting on board.
He compared it to the Christchurch Call, the set of voluntary commitments to eliminate violent extremist content from the internet established by New Zealand and France in the wake of March 15, which has since been signed by more than 50 countries and international organisations.
“It’s now open for others to go, 'Well, what is this, and should we also be signing on as a co-signatory?'” Carey said.
While it’s a voluntary set of rules rather than legal regulation, Carey said it was the quickest way to get immediate reductions in digital harm while the Department of Internal Affairs’ content regulatory review is underway.
The new legal framework for media and online content is still being scoped and refined, with public consultation set to open in September.
But there’s a wait ahead for new laws on this issue: drafting will only begin after Cabinet considers the department’s proposals, which is still a year off.
Carey said the code and any new laws decided on later can co-exist.
“We’re saying that there’s an industry voluntary approach that can tackle these issues now,” he said. “Of course, legislative process trumps voluntary ... but in the meantime, New Zealanders are suffering harm online now and want people to respond.”
Nick McDonnell, head of public policy for the New Zealand and Pacific Island division of signatory Meta, said the company has long supported calls for regulation to address online safety and has been working collaboratively with industry, government and safety organisations to advance the code.
“This is an important step in the right direction and will further complement the Government’s work on content regulation in future,” he said. “We’re looking forward to working with stakeholders to ensure the code sets in place a framework to keep Kiwis safe across multiple platforms by preventing, detecting and responding to harmful content.”
Meta, formerly known as Facebook, has come under fire for being a platform for harmful content such as the Christchurch mosque shooting video, and for hosting scammers on Facebook Marketplace.
It’s also the most popular social media platform in the country, with around 81 percent of New Zealanders using it.
Whether signing up to yearly reporting on safety standards is more an attempt to rehabilitate a contentious relationship with the public or a sincere move toward a new level of public accountability remains to be seen.
Carey said it was good to have the old guard of social media signing up for the code along with the new. Platforms like Facebook and Twitter are in the mix alongside the channels quickly growing in prominence like Twitch and TikTok.
TikTok in particular has seen a meteoric rise in New Zealand, especially among the under-25 crowd. According to the company, it has 1.3 million monthly users in New Zealand, who spend an average of 85 minutes a day on the app.
A TikTok spokesperson said the company welcomes the introduction of the code.
“As the first of its kind in New Zealand, the code provides an inclusive, self-regulatory framework committing signatories to meet crucial safety and transparency outcomes for tackling harmful online content,” they said.
But just like Facebook, TikTok is held to account mainly by its own word, given the lack of legal muscle behind the agreement.
Carey was optimistic about it making a difference, likening code membership to a mark of quality consumers will take notice of.
“It could be a reputational risk to that particular company if they weren’t doing what the industry was expecting from them. They’d be a bit of a lone wolf, and that comes with some accountability,” he said. “And also if they fail to comply with their commitments they could be booted out of the code and that would actually make them a bit of a pariah.”
Perhaps, much as a can of tuna needs its dolphin-friendly sticker, social media platforms will feel a bit naked without this kind of commitment in future.
The code will be monitored by an oversight committee, which can recommend the removal of a signatory should it fail to live up to the agreement.
Other jurisdictions are moving in a similar direction, with comparable initiatives underway in countries like Australia and Singapore, where governments are considering new codes of practice for social media platforms.
NZTech CEO Graeme Muller said his association will take over the establishment and administration of the code once it's up and running.
One advantage of the code over a traditional legal framework is its flexibility: the agreement can be adapted relatively easily as circumstances change.
“The code will be a living document, it can be amended biannually and we hope the governance framework will enable it to evolve alongside local conditions, while at the same time respecting the fundamental rights of freedom of expression,” Muller said.
The code goes live during New Zealand’s Netsafety Week, which carries the overarching theme of diversity.
Netsafe is running a series of in-person and online events this week, which will see groups including Māori, LGBTQI+ people, young people, women, and seniors share the impact digital harm has on them.
The agreement comes as reports to Netsafe of online harm have increased 25 percent year on year since 2020.
The significant increase in reports has been put down to a number of factors, including increased awareness of Netsafe's ability to help out and the social isolation and economic instability of recent times.