If you’ve spent any time on Instagram or Facebook lately, you have probably encountered concerned citizens sharing posts that purport to deny Meta, the parent company of both platforms as well as WhatsApp, the right to use their data to train AI systems.
If it wasn’t your slightly oddball old schoolfriend who posted the message, which begins “Goodbye Meta AI” and concludes “I do not give Meta or anyone else permission to use any of my personal data, profile information or photos”, then it’ll have been a Hollywood celebrity. Actor James McAvoy and former NFL player Tom Brady are among those who have posted the disclaimer.
But can sharing a post really stop Meta from mining your data? The answer, I’m afraid, is a resounding no.
The fear of AI hoovering up all our personal data and mulching it into training material to help improve the systems of billion-dollar companies is, on the face of it, a valid concern. And given that the companies behind some of the biggest AI systems have shown themselves to be at best blasé about things such as copyright and personal autonomy, it’s little surprise there’s a groundswell of public opprobrium at the prospect of our data being used in this way. A third of British people told a government-commissioned survey carried out about a year ago that they did not think data use benefits all groups in society.
Versions of the message, which appears to have been composed in response to a June announcement by Meta that it will use public posts to train its AI systems, have been circulating for three weeks now. But the post has snowballed in recent days, and Google searches for “Goodbye Meta AI” have surged.
It is important to put logic before emotion. Are you really going to be able to opt out of a mass data-gathering system by simply copying and pasting what your great-aunt with questionable views posted on her own Facebook profile?
Don’t take my word for it. A Meta spokesperson has also rubbished the post: “Sharing this story does not count as a valid form of objection,” they said. Meta’s factchecking teams are labelling it “false information” on Instagram.
Such so-called copypasta has been doing the rounds on the internet for years. Similar block messages of legalese text have been circulating since at least 2012, when Facebook was falsely rumoured to be about to share private photos and messages publicly. The same concern arose four years later, with the Guardian gently knocking down the panic. Both of those messages had similar wording, including citing “UCC 1-308- 1 1 308-103” and the Rome Statute.
Even if these messages seem harmless and sharing them might feel like hedging your bets, I’d encourage you not to be drawn in. Digital literacy in the age of AI is more important than ever, and it’s vital that we are able to identify copypasta nonsense for what it is.
Not only does sharing false information like this single you out as being gullible, but it is also a fruitless exercise when there are real ways of pushing back against big tech using your data. Meta says it will be sending out notifications informing users it plans to train its AI systems on user data and giving people the option to opt out. You fill out a short form and send it to Meta, and any public data will be removed (Meta has already confirmed it won’t train its systems on anything you have not shared publicly).
But if you miss that, or you’re still keen to get ahead of it, you can opt out proactively. Click on “Settings & Privacy” in Facebook, then on “Privacy Centre”, and you’ll be met with some text about how Meta uses your information for AI. The second paragraph begins: “You have the right to object.” Click on it to be taken to a form that allows you to express your dissent.
As AI develops, it’s vital we all stay abreast of the real threats to our data and how to combat them – and that we resist being drawn in by distractions such as “Goodbye Meta AI”.
Chris Stokel-Walker is the author of TikTok Boom: China’s Dynamite App and the Superpower Race for Social Media