The U.S. Federal Trade Commission just came down on Meta like a ton of bricks for allegedly violating the privacy promises it previously made to the agency. The FTC claims the company failed to implement an effective privacy program, misled users about third parties being able to access their data, and violated kids’ privacy rules.
As a result, the FTC wants to ban Meta from monetizing under-18s’ data, or from even using it to enrich its own data models and algorithms. It also wants the Facebook and Instagram parent to pause the release of “any new or modified product, service, or feature” until it has first satisfied an independent assessor that its privacy program is solid. And just for good measure, Meta “would be required to disclose and obtain users’ affirmative consent for any future uses of facial recognition technology.”
This is clearly going to be an epic battle between Meta and the regulator, so let’s have a quick run-through of the FTC’s claims—at least, the bits that aren’t redacted in its enforcement proposal—and Meta’s furious rebuttal:
FTC: An independent assessor looked at Meta’s privacy program and found numerous gaps and weaknesses, the most serious of which “present substantial risks to the public.”
Meta: The FTC rushed its new proposal through “just before an independent assessor was scheduled to update them on our compliance work and program enhancements we made over the past two years.” By the end of this year, Meta will have invested $5 billion since 2019 in developing a “rigorous privacy program.”
FTC: Meta failed (again—after the Cambridge Analytica fiasco) to comply with its promise to tell users how it makes their information available to third parties. In 2018, Meta said it would remove a third-party app developer’s access to a user’s nonpublic data if that user hadn’t used the relevant app in the previous 90 days, but until mid-2020 Meta “in some instances continued to share users’ nonpublic information with Expired Apps.”
Meta: A “coding oversight” did sometimes give apps “access to data for longer than our additional 90-day limitation measure,” but “did not result in any apps accessing data contrary to people’s privacy settings.”
FTC: From 2017 to 2019, Meta told parents they could approve all their children’s contacts in Messenger Kids, but “coding errors allowed children to participate in group text chats and group video calls with unapproved contacts under certain circumstances”: specifically, in group text chats on Android devices, and when participants were added to ongoing video calls.
Meta: The unapproved people were friends of approved friends, and the company disclosed the “coding error” to the FTC and parents in 2019. “The technical errors impacted a very small subset of group chat and video threads due to the overlapping controls we have in place to protect against errors like this.”
Meta also accused the FTC of pulling a “political stunt” and attempting to “usurp the authority of Congress to set industrywide standards and instead single out one American company while allowing Chinese companies like TikTok to operate without constraint on American soil.”
That is presumably a reference to the bipartisan Kids Online Safety Act bill, which its sponsors, Senators Marsha Blackburn (R-Tenn.) and Richard Blumenthal (D-Conn.), reintroduced this week after it failed to clear Congress last year—and perhaps also to the recently introduced Protecting Kids on Social Media Act bill, which enjoys less support from child advocates owing to inherent privacy problems. Coincidentally, Senators Ed Markey (D-Mass.) and Bill Cassidy (R-La.) yesterday also reintroduced a bill called the Children and Teens’ Online Privacy Protection Act, which would crack down on Big Tech’s use of kids’ data, so now there are three competing proposals on the table.
I’m not convinced by Meta’s claim that the FTC is trying to pick on it while ignoring its rivals. The agency would have nothing to gain by favoring TikTok, and it seems simply to be using the law currently available to it to crack down on Meta for repeatedly breaking previous agreements. I’d say Meta’s alleged infringements demonstrate why new, across-the-board protections are needed, but bills that may not pass should not preempt the possibility of more urgent enforcement.
Speaking of privacy law, the EU’s highest court issued a couple of rulings this morning that clarify the impact of the General Data Protection Regulation, which came into effect five years ago this month (they grow up so fast). I sadly lack the space to discuss them in much depth, but here’s the gist:
1) An infringement of the GDPR doesn’t necessarily cause damage to a person, so not every infringement gives someone the right to claim compensation. However, there is no “threshold of seriousness” that a person’s emotional damage must cross before it gives them a right to compensation.
2) When the GDPR says people have a right to get a copy of their personal data, that means everything, not just a summary. Yes, even whole documents and database extracts.
More news below.
Want to send thoughts or suggestions to Data Sheet? Drop a line here.
David Meyer
Data Sheet’s daily news section was written and curated by Andrea Guzman.