It's fair to say that Meta's legal department is having a pretty gloomy summer so far.
The Big Tech giant behind the world's most used social media platforms (Facebook, Instagram, and WhatsApp) has been hit by a wave of privacy complaints, orders from data protection regulators, and huge fines in the last few months. July has been especially hard for the company, with the latest blow being a $220 million fine in Nigeria for breaching local laws on data sharing.
Meta's privacy problems certainly aren't new, nor is it the first time that data protection bodies have fined the company for mishandling users' data. Nonetheless, this latest regulatory push might be a sign that the data protection landscape is catching up. The question now is whether it will be enough to really shake the surveillance capitalism model to its core.
Meta's latest privacy missteps
A contentious point since last November has been the company's launch of a new advertising model. Facebook and Instagram users in the EU, EEA, and Switzerland can now opt out of all advertisements across the two platforms by paying a pretty pricey subscription fee.
In its official announcement, Meta describes it as a way to balance "the requirements of European regulators while giving users choice and allowing Meta to continue serving all people in the EU, EEA, and Switzerland." Needless to say, neither privacy advocates nor regulators shared the same view.
Austria-based digital rights group Noyb (stylized as noyb) filed a privacy complaint a couple of weeks after the launch, warning that such a model would make privacy an exclusive right "for the rich." At least 18 local consumer groups issued similar complaints in the following months.
This week came the ultimatum: Meta has until September 1 to respond to the concerns raised by consumer protection groups in Europe. The decision followed the European Commission's preliminary view that the new "pay or consent" model is not compliant with Digital Markets Act (DMA) rules.
(Preliminary) noyb win: @Meta stops using EU/EEA data for "Meta AI" after 11 complaints by noyb and mounting criticism by other EU/EEA data protection authorities! 😀🥂 More: https://t.co/bTpIAhBVkI https://t.co/czThdIcNca — June 14, 2024
Meta's plan to use people's data to train its artificial intelligence (AI) models without consent wasn't welcomed, either. This time, the backlash spread beyond the borders of the European bloc.
According to changes to Meta's privacy policy that came into force on June 26, the company can use years of personal posts, private images, or online tracking data to train AI.
After noyb filed 11 complaints with various Data Protection Authorities (DPAs) in Europe, the Irish DPA requested in June that the company pause its plans to use EU/EEA users' data (see tweet above).
Meta said it was disappointed by the decision, dubbing it a "step backward for European innovation" in AI. The company canceled the launch of Meta AI in Europe rather than offer people "a second-rate experience."
On July 17, the Big Tech giant had to withdraw Meta AI from another big market: Brazil. The move came after the Brazilian Data Protection Authority ordered the company to stop processing personal data to train AI and threatened a fine of R$50,000 (around $9,200) per day of non-compliance.
A wind of change?
As we have seen, data regulators finally seem to be showing some teeth and giving the US social media giant a hard time.
Jurgita Miseviciute, Head of Public Policy and Government Relations at Proton - the privacy company behind one of the best VPN and encrypted email services - is hopeful to see governments taking more concrete action to protect people's privacy.
"When we talk to people, they always say they want more privacy, not less, so it’s heartening to see governments and regulators taking the side of consumers, rather than big tech," she told me.
Speaking to The Verge, Meta spokesperson Kate McLaughlin described this privacy mess as a by-product of "the unpredictable nature of the European regulatory environment." Experts believe, however, that this is simply a sign that data protection laws have matured enough to be effective. As AI and privacy expert Luiza Jarovsky wrote: "The EU regulatory environment is not unpredictable. It's just that it's not 2004 anymore."
Yet, Miseviciute believes today's fines and regulations are still too weak to really stop Big Tech from exploiting people's personal data for profit.
However large multimillion-dollar fines may look, they are just a drop in the ocean for the likes of Meta, Google, or Amazon. According to Proton's data, these companies needed just one week into 2024 to earn enough to pay off all the fines issued against them the year before.
"If we want an internet that respects people’s privacy, the fastest way to get there is to convince Big Tech that abusing people’s data is not only wrong, it’s unprofitable," Miseviciute told me. "That’s why we’re pleased to see even greater fines from the EU and other jurisdictions that can attempt to bring Big Tech into line."
Remember, though: governments aren't the only ones that can influence the market. We can all actively decide to ditch data-hungry apps and services in favor of more private alternatives.
You can ditch Gmail or Outlook for a more private and secure email service, for example - offering an alternative to Big Tech is the long-term mission of a few privacy companies, including Proton. Likewise, I recommend replacing WhatsApp and Facebook Messenger with a privacy-first messaging app like Signal.
Commenting on this point, Miseviciute said: "The only thing that can drive Big Tech to reconsider the business model is if it sees tens or hundreds of millions of people leaving for privacy-focused alternatives."