Generative AI is a legal minefield

New generative AI systems like ChatGPT and Dall-E raise a host of novel questions for a legal system that has always assumed people, rather than machines, are the creators of content.

Why it matters: The courts will have to sort out knotty problems like whether AI companies had rights to use the data that trained their systems, whether the output of generative engines can be copyrighted, and who is responsible if an AI engine spits out defamatory or dangerous information.

Between the lines: New laws specific to AI don't yet exist in most of the world (although Europe is in the process of drafting a wide-ranging AI Act). That means most of these issues, at least for now, will have to be addressed through existing law.

  • Meanwhile, critics say that as the field has accelerated, companies are taking more risks.
  • "The more money that flows in, the faster people are moving the goal posts and removing the guardrails," says Matthew Butterick, an attorney whose firm is involved in lawsuits against several companies over how their generative AI systems operate, including Microsoft's GitHub.

Here are four broad areas of legal uncertainty around AI:

Should AI developers pay for rights to training data?

One big question is whether the latest AI systems are on safe legal ground in having trained their engines on all manner of information found on the internet, including copyrighted works.

  • At issue is whether such training falls under the copyright doctrine known as "fair use," the scope of which is currently under consideration by the Supreme Court.
  • Many of the early legal battles center on this issue. Getty, for example, is suing Stability AI, saying the company trained its open source image generator, Stable Diffusion, on 12 million images from Getty's database without getting permission or providing compensation.
  • CNN and The Wall Street Journal have raised similar legal issues about articles they say were used to train OpenAI's ChatGPT text generator.

It's not just about copyright. In a lawsuit against GitHub, for example, the question is also whether the Copilot system, which offers coders AI-generated help, violates the open source licenses that cover much of the code it was trained on.

  • Nor are the potential IP infringement issues limited to the data that trains such systems. Many of today's generative AI engines are prone to spitting out code, writing and images that appear to directly copy from one specific work or several discernible ones.

Can generative AI output be copyrighted?

Works entirely generated by a machine, in general, can't be copyrighted. It's less clear how the legal system will view human/AI collaborations.

  • The US Copyright Office this week said that images created by the AI engine Midjourney and then used in a graphic novel could not be copyrighted, Reuters reported.

Can AI slander or libel someone?

AI systems aren't people, and as such may not be capable of committing libel or slander. But according to some legal experts, the creators of those systems could be held liable if they were reckless or negligent in building them.

  • ChatGPT or Microsoft's new AI-powered Bing, for example, could face a new kind of lawsuit if the information they serve up about a person is false and damaging enough to be defamatory.

The problem is trickier still because AI shows different results to different people.

  • Unlike traditional apps and websites, which generally return similar information given the same query, generative AI systems can serve up completely different results each time.

Courts will also have to decide how, if at all, the controversial Section 230 liability protections apply to content generated by AI systems.

  • Supreme Court Justice Neil Gorsuch recently sounded a skeptical note as to whether Section 230 would protect ChatGPT-created content.

Who's responsible if AI systems offer private or dangerously false info?

Another question is whether the makers of AI systems could be found liable for the consequences of providing dangerously wrong information.

  • Companies like Microsoft, Google and OpenAI have recently detailed their efforts to improve the accuracy of their generative AI programs, while also warning customers that the AI engines may offer information that's fabricated or wrong.
  • Private information could also be exposed by generative engines, something that anecdotal reports suggest is already occurring.

What to watch: Both courts and lawmakers will likely play a role in determining how these and other legal issues play out. The courts will seek to apply existing law, while legislators may face pressure from several directions to write laws more directly addressing AI.

Go deeper: Read more in Axios' AI Revolution series.
