There have been a couple of developments in the last day or two regarding what is and isn’t permissible with AI. The first comes courtesy of the U.K.’s Supreme Court, and it’s a bitter blow to those who want to use AI to invent things.
There’s an outfit called the Artificial Inventor Project that’s been trying—through a series of test cases—to get countries to recognize that AI-derived inventions can be patented. The cases hinge on inventions made by an AI called DABUS (“Device for the Autonomous Bootstrapping of Unified Sentience”), created by Missouri-based inventor Stephen Thaler. Specifically, Thaler claims DABUS has come up with a novel kind of food container that increases rather than stifles heat transfer, and a new kind of “neural flame” light source for a flashing beacon, all by itself.
Thaler and his colleagues hit the end of the British road yesterday, following years of rejection by the U.K. Intellectual Property Office and then the courts. The Supreme Court unanimously dismissed Thaler’s appeal, essentially because DABUS is not a natural person and, legally speaking, only people can invent things. No inventor, no patent.
The Artificial Inventor Project’s efforts have not gone completely unrewarded—DABUS’s food container and beacon were granted patents in South Africa in 2021. But patent offices and courts in the U.S., Australia, and Taiwan have all definitively rejected the patent applications. Appeals are pending in Europe, Germany, Israel, Korea, Japan, and New Zealand, and the original applications are still pending in a bunch more countries (including China).
AI’s it-ain’t-a-person legal issue isn’t limited to the patent world—it’s the same reason that, as things stand, the U.S. won’t grant copyright to AI-created works. But the Artificial Inventor Project’s big concern is that, if someone uses AI to invent things that don’t go on to be patented, the specifics of those inventions will likely become trade secrets rather than being publicly disclosed. Advocates for AI patent rights argue this will prove particularly harmful when it comes to AI drug discovery.
Artificial Inventor Project chief Ryan Abbott, who represented Thaler in the U.K. case, told me he found the ruling “unfortunate,” but he took heart from the fact that the Supreme Court said Parliament could fix the problem. “Hopefully lawmakers act quickly to extend protection to encourage the use of AI in research and development,” he said.
Separately, but still on the subject of AI and intellectual property rights, a Japanese government panel reckons it may be a copyright violation for companies to train their AIs on copyright-protected works. The panel’s draft report will feed into new guidelines that should clear up currently murky rules around the ability of rights holders to limit what AI companies can do with protected IP.
This is of course a live issue around the world—that big copyright lawsuit against Microsoft and OpenAI in the U.S. just gained nearly a dozen new litigants, including the Pulitzer Prize-winning coauthors of the J. Robert Oppenheimer biography American Prometheus, which was the basis for Christopher Nolan’s Oppenheimer. “The defendants are raking in billions from their unauthorized use of nonfiction books, and the authors of these books deserve fair compensation and treatment for it,” said the writers’ attorney, Rohit Nath.
These are truly precedent-setting times. More news below—and see you in a month. I’m taking vacation back home in sunny South Africa, and am leaving you in my colleagues’ ever-capable hands. Totsiens!
David Meyer
Want to send thoughts or suggestions to Data Sheet? Drop a line here.