Stewart Baker

When AI poses an existential risk to your law license

This episode of the Cyberlaw Podcast features the second half of my interview with Paul Stephan, author of The World Crisis and International Law. But it begins the way many recent episodes have begun, with the latest AI news. And, since the story is squarely in scope for a cyberlaw podcast, we devote some time to the so-appalling-you-have-to-laugh-to-keep-from-crying story of the lawyer who relied on ChatGPT to write his brief. As Eugene Volokh noted in his post on the story, the AI returned exactly the case law the lawyer wanted – because it made up the cases, the citations, and even the quotes. The lawyer said he had no idea that AI would do such a thing.

I cast a skeptical eye on that excuse, since when challenged by the court to produce the cases he relied on, the lawyer turned not to LexisNexis or Westlaw but to ChatGPT, which this time made up eight cases on point. And when the lawyer asked ChatGPT, "Are the other cases you provided fake?" the model denied it. Well, all right then. Who among us has not asked Westlaw, "Are the cases you provided fake?" and accepted the answer without checking? Somehow, I can't help suspecting that the lawyer's claim to be an innocent victim of ChatGPT is going to get a closer look before this story ends. So if you're wondering whether AI poses an existential risk, the answer for at least one law license is almost certainly "yes."
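The lesson generalizes: citations produced by a language model should be checked against an authoritative source, not against the model that produced them. As a purely illustrative sketch (the flag_unverified_citations helper, the citation regex, and the KNOWN_CASES set standing in for a real Westlaw or LexisNexis lookup are all hypothetical), a minimal verification pass might look like this:

```python
import re

# Hypothetical stand-in for a query against a trusted legal database
# (in practice, a Westlaw or LexisNexis lookup, not another LLM call).
KNOWN_CASES = {
    "Brown v. Board of Education, 347 U.S. 483 (1954)",
}

# Rough pattern for a reporter citation such as "347 U.S. 483 (1954)".
CITATION_RE = re.compile(r"\b\d+\s+[A-Za-z0-9.\s]+?\d+\s+\(\d{4}\)")

def flag_unverified_citations(brief_text: str) -> list[str]:
    """Return citations that cannot be confirmed in the trusted database.

    Anything a model "cites" that fails this lookup should be treated as
    potentially fabricated until a human confirms it exists.
    """
    suspects = []
    for match in CITATION_RE.finditer(brief_text):
        citation = match.group(0).strip()
        # Flag the citation unless it appears in a known, real case entry.
        if not any(citation in case for case in KNOWN_CASES):
            suspects.append(citation)
    return suspects

# A fabricated citation fails the lookup and gets flagged;
# the real Brown citation passes.
brief = "See Smith v. Jones, 123 F.3d 456 (1999); Brown v. Board of Education, 347 U.S. 483 (1954)."
print(flag_unverified_citations(brief))  # ['123 F.3d 456 (1999)']
```

The point isn't the regex; it's that the verification step runs against a source of truth rather than a second round-trip to the same model.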

But the bigger stories of the week were the cries from Google and Microsoft leadership for government regulation of their new AI tools. Microsoft's president, Brad Smith, has, as usual, written a thoughtful policy paper on what AI regulation might look like. Jeffery Atik and Richard Stiennon point out that, as usual, Brad Smith is advocating for a process that Microsoft could master pretty easily. Google's Sundar Pichai also joins the "regulate me" party, but a bit half-heartedly. I argue that the best measure of Silicon Valley's confidence in the accuracy of AI is easy to find: Just ask when Google and Apple will let their AI models identify photos of gorillas. Because if there's anything close to an extinction event for those companies, it would be rolling out an AI that once again fails to differentiate between people and apes.

Moving from policy to tech, Richard and I talk about Google's integration of AI into search; I see some glimmer of explainability and accuracy in Google's willingness to provide citations (real ones, I presume) for its answers. And on the same topic, the National Academy of Sciences has posted research suggesting that explainability might not be quite as impossible as researchers once thought.

Jeffery takes us through the latest chapters in the U.S.-China decoupling story. China has retaliated, surprisingly weakly, for U.S. moves to cut off high-end chip sales to China. It has banned sales of U.S.-based Micron memory chips to critical infrastructure companies. In the long run, the chip wars may be the disaster that Nvidia's CEO foresees. Certainly, Jeffery and I agree, Nvidia has much to fear from a Chinese effort to build a national champion in AI chipmaking. Meanwhile, the Biden administration is building a new model for international agreements in an age of decoupling and industrial policy. Whether the effort to build a China-free IT supply chain will succeed is an open question, but we agree that it marks an end to the old free-trade agreements rejected by both former President Trump and President Biden.

China, meanwhile, is overplaying its hand in Africa. Richard notes reports that Chinese hackers attacked the Kenyan government when Kenya looked like it wouldn't be able to repay China's infrastructure loans. As Richard points out, lending money to a friend rarely works out. You are likely to lose both the money and the friend, even if you don't hack him.

Finally, Richard and Jeffery both opine on Ireland's imposing, under protest, a $1.3bn fine on Facebook for sending data to the United States despite the Court of Justice of the European Union's (CJEU) two Schrems decisions. We agree that the order simply sets a deadline for the U.S. and the EU to close their third deal to satisfy the CJEU that U.S. law is "adequate" to protect the rights of Europeans. Speaking of which, anyone who's enjoyed my rants about the EU will want to tune in for a June 15 Teleforum in which Max Schrems and I will debate the latest privacy framework. If we can, we'll release it as a bonus episode of this podcast, but listening live should be even more fun!

Download 459th Episode (mp3)

You can subscribe to The Cyberlaw Podcast using iTunes, Google Play, Spotify, Pocket Casts, or our RSS feed. As always, The Cyberlaw Podcast is open to feedback. Be sure to engage with @stewartbaker on Twitter. Send your questions, comments, and suggestions for topics or interviewees to CyberlawPodcast@gmail.com. Remember: If your suggested guest appears on the show, we will send you a highly coveted Cyberlaw Podcast mug! The views expressed in this podcast are those of the speakers and do not reflect the opinions of their institutions, clients, friends, families, or pets.
