We open this episode of the Cyberlaw Podcast with some actual news about the debate over renewing section 702 of FISA. That's the law that allows the government to target foreigners for a national security purpose and to intercept their communications in and out of the U.S. A lot of attention has been focused on what happens to those communications after they've been intercepted and stored, with some arguing that the FBI should get a second court authorization—maybe even a warrant based on probable cause—to search for records about an American. Michael J. Ellis reports that the Office of the Director of National Intelligence has released new data on such FBI searches. Turns out, they've dropped from almost 3 million last year to nearly 120 thousand this year. In large part the drop reflects the tougher restrictions imposed by the FBI on such searches. Those restrictions were made public this week. It has also emerged that the government is using the database millions of times a year to identify the victims of cyberattacks. That's the kind of problem 702 is made for: some foreign hackers are a national security threat, and their whole business model is to use U.S. infrastructure to communicate (in a very special way) with U.S. networks. So it turns out that all those civil libertarians who want to make it hard for the government to search the 702 database for the names of Americans are actually proposing ways to slow down and complicate the process of warning hacking victims. Thanks a bunch, folks!
Justin Sherman covers China's plans to attack and even take over enemy (i.e., U.S.) satellites. The story is apparently drawn from the Discord leaks, and it has the ring of truth. I opine that DOD has gotten a little too comfortable waging war against people who don't really have an army, and that the Ukraine conflict shows how much tougher things get when there's an organized military on the other side. (Again, credit for our artwork goes to Bing Image Creator.)
Adam Candeub flags the next Supreme Court case to nibble away at the problem of social media and the law. The Court will hear argument next year on the constitutionality of public officials blocking people who post mean comments on the officials' Facebook pages.
Justin and I break down a story about whether Twitter is complying with more government demands now that Elon Musk is in charge. The short answer is yes. This leads me to ask why we expect social media companies to spend large sums fighting government takedown and surveillance requests when it's so much cheaper just to comply. So far, the answer has been that mainstream media and Good People Everywhere will criticize companies that don't fight. But with criticism of Elon Musk's Twitter already turned up to 11, that's not likely to persuade him.
Adam and I are impressed by Citizen Lab's report on search censorship in China. We'd both like to see Citizen Lab do the same thing for U.S. censorship, which somehow gets less attention. If you suspect that's because there's more U.S. censorship than U.S. companies want to admit, here's a bit of supporting evidence: Citizen Lab reports that the one American company still providing search services in China, Microsoft Bing, is actually more aggressive about stifling Chinese political speech than China's main search engine, Baidu. This jibes with my experience: Bing's Image Creator refused to construct an image using Taiwan's flag. (It was OK using U.S. and German flags, but it also balked at China's.) To be fair, though, Microsoft has fixed that particular bit of overreach: You can now create images with both Taiwanese and Chinese flags.
Adam covers the EU's enthusiasm for regulating other countries' companies. It has designated 19 tech giants as subject to its online content rules. Of the 19, one is a European company, and two are Chinese (counting TikTok). The rest are American.
I introduce a case that I think could be a big problem for the Biden administration as it ramps up its campaign for cybersecurity regulation. Iowa and a couple of other states are suing to block the EPA's effort to impose cybersecurity requirements on public water systems. The problem from EPA's standpoint is that it used an "interpretation" of a statute that doesn't actually say much about cybersecurity.
Michael Ellis and I cover a former NSA director's business ties to Saudi Arabia – and confess our unease at the number of generals and admirals moving from command of U.S. forces abroad to a consulting gig with the countries where they just served. Recent restrictions on the revolving door for intelligence officers get a mention.
Adam covers the Quebec decision awarding $500 thousand to a man who couldn't get Google to consistently delete a false story portraying him as a pedophile and con man.
Justin and I debate whether Meta's Reels feature has what it takes to be a plausible TikTok competitor. Justin is skeptical. I'm a little less so. Meta's claims about the success of Reels aren't entirely persuasive, but I think it's too early to tell.
The D.C. Circuit has killed off the state antitrust case trying to undo Meta's long-ago acquisition of WhatsApp and Instagram. The states waited too long, the court held. That doctrine doesn't apply the same way to the FTC, which will get to pursue the same lonely battle against long odds for years. If the FTC is going to keep sending its lawyers into dubious battles as though they were conscripts in Bakhmut, I ask, when will the Commission start recruiting in Russian prisons?
Well, that was fast. Adam tells us that the Brazil court order banning Telegram because it wouldn't turn over information on neo-Nazi groups has been overturned on appeal. But Telegram isn't out of the woods. The appeals court left in place fines of $200 thousand a day for noncompliance. That seems unsustainable for Telegram.
And in another regulatory walkback, Italy's privacy watchdog is letting ChatGPT return to the country. I suspect the Italian government is cutting a deal to save face as it abandons its initial position that ChatGPT violated data protection principles when it scraped public data to train the model.
Finally, in policies I wish they would walk back, four U.S. regulatory agencies claimed (plausibly) that they had authority to bring bias claims against companies using AI in a discriminatory fashion. Since I don't see any way to bring those claims without arguing that any deviation from proportional representation constitutes discrimination, this feels like a surreptitious introduction of quotas into several new parts of the economy, just as the Supreme Court seems poised to cast doubt on such quotas in higher education.
You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
The post Why would the government need a warrant to warn me I'm about to be hacked? appeared first on Reason.com.