Artificial intelligence tools such as ChatGPT are, at their core, language models. To generate their responses, these models were trained on an enormous amount of written material, without the knowledge, consent or compensation of the people who created it, notably artists and writers.
Author and comedian Sarah Silverman sued OpenAI and Meta in July, claiming that OpenAI's ChatGPT and Meta's LLaMA were trained illegally on her work. In June, a class action lawsuit was filed against OpenAI alleging that the company scraped "every piece of data on the internet it could take" without the consent of millions of users.
The Atlantic revealed in a recent report that LLaMA was trained on more than 170,000 books, all without the knowledge, permission or compensation of the authors who penned them.
The companies behind this technology are looking for ways to address these copyright-infringement concerns as they work to get more people on board with AI.
Microsoft (MSFT) said in a statement Sept. 7 that it will foot the legal bills of any customer who is sued over copyright claims arising from content generated with its AI services.
"As customers ask whether they can use Microsoft’s Copilot services and the output they generate without worrying about copyright claims, we are providing a straightforward answer: yes, you can, and if you are challenged on copyright grounds, we will assume responsibility for the potential legal risks involved," Brad Smith, Microsoft's vice chair and president, said in a statement.
So long as the customer in question uses the built-in guardrails and content filters, Microsoft will "defend the customer and pay the amount of any adverse judgments or settlements that result from the lawsuit."
The pledge, dubbed the "Copilot Copyright Commitment," covers the output of all of Microsoft's Copilot programs, across its Office suite, GitHub Copilot and Bing Chat Enterprise.
The reasoning behind the pledge is severalfold. Because Microsoft charges its customers to use Copilot, Smith said, the company ought to assume responsibility for any legal problems that ensue. He also expressed confidence in Microsoft's guardrails and content filters, saying they reduce the likelihood that the services will return "infringing content."
The move, according to Nell Watson, a prominent AI researcher and ethics scientist, could help address concerns over intellectual-property infringement and make these models more accessible.
"This is an important move by Microsoft towards professionalizing the Large Language Model space, and should hopefully put some fears to rest," Watson told TheStreet. "If only problems of potential data leakage can be (cryptographically) guaranteed also. I imagine that will come in time."