Fortune
Jeremy Kahn

Are A.I. regulatory fears just an 'upsell' opportunity for Big Tech?

FTC Chair Lina Khan (Credit: Chip Somodevilla—Getty Images)

Hello and welcome to Eye on A.I. At Fortune’s Brainstorm Tech conference in Deer Valley, Utah, last week, generative A.I. was threaded through every conversation I had. I’ve been reporting from the Bay Area this week, where it is pretty much the same story. Companies large and small are racing to put generative A.I. into practice.

But it is also clear from the conversations I’ve been having that a lot of companies are struggling to figure out exactly what the best use cases for the technology are. “People are throwing everything at the bazooka right now, hoping magic comes out,” Sean Scott, the chief product officer at PagerDuty, said during a morning breakout session at Brainstorm Tech on A.I. and data privacy. The bazooka he’s referring to is large language models (LLMs), and Scott’s point was that sometimes such firepower isn’t necessary. Often, a smaller A.I. model, or some good old-fashioned rule-based coding, will do the job just as well, or maybe even better, at a much lower cost. “At the end of the day, it’s about what problem you are trying to solve and what is the best way to solve that problem,” he said.

It was also clear that many companies are still struggling to put governance controls around the use of generative A.I. within their own organizations. And it doesn’t help that the regulatory picture remains uncertain.

The biggest news last week in A.I. was probably the Federal Trade Commission’s decision to open an investigation into OpenAI. The agency is probing whether ChatGPT might violate consumer protection laws by sometimes generating false and potentially defamatory statements about individuals. It's also looking into whether OpenAI broke laws when, due to a software bug it disclosed and fixed in March, it failed to secure users’ payment details and chat history data. OpenAI CEO Sam Altman tweeted that his company would “of course” comply with the investigation and that OpenAI was “confident we follow the law.” He also expressed “disappointment” that news of the investigation had immediately leaked to the press.

The FTC investigation is not unexpected. The agency has been signaling its intention to crack down on potentially deceptive practices among A.I. companies for months. FTC Chair Lina Khan is also eager to demonstrate that her agency and existing consumer protection laws can be a prime vehicle for A.I. regulation at a time when lawmakers are contemplating passing new regulations, and perhaps even creating a new federal entity to oversee A.I. But the FTC probe will have far-reaching ramifications. Right now, it is difficult for the creators of any LLM-based A.I. system to be 100% certain it won’t hallucinate and possibly say something reputationally damaging about a person. It is also difficult to ensure these very large LLMs don’t ingest any personal data during training and won’t leak it to a third party if fed the right prompt. If the FTC insists on iron-clad guarantees, it could have a chilling effect on the deployment of generative A.I. As a result, the companies selling generative A.I. systems to business customers are rushing to offer them assurance that they aren’t about to be stranded in a legal and ethical minefield.

As my colleague David Meyer wrote last week in this newsletter, one of the biggies in this regard is copyright. (David called it generative A.I.’s “Achilles’ heel.” I think some other dangers, such as data privacy and hallucination, may be equally troubling, but the general idea is right.) A number of companies creating A.I. foundation models, including OpenAI and Stability AI, have been hit with copyright infringement lawsuits. Scott Belsky, Adobe’s chief strategy officer, recently told me in London that he found many enterprise customers were unwilling to use a generative A.I. model unless the creator of that model could vouch for it being “commercially safe,” which meant providing assurances that copyrighted material had not been used to train the A.I. Under the new EU A.I. Act, companies deploying foundation models will have to disclose whether any copyrighted material was used in their creation, making it easier for IP rights holders to pursue them legally if they have violated the law.

Adobe, which created a text-to-image generation system called Firefly, has offered to indemnify Firefly users against copyright infringement lawsuits. “If you get sued, we’ll pay your legal fees,” Belsky said. Adobe is able to do this because it trained Firefly on Adobe Stock images, and the company’s position is that the terms of that service allow it to use those images to train A.I.

But that indemnification may not help companies escape the ethical quandary entirely. Some creators who uploaded images to the service are angry about Adobe’s position and claim it was wrong of the tech giant to use their images without explicit consent and compensation. As Neil Turkewitz, who has emerged as a leading advocate for the rights of creators in the face of generative A.I., told me a few months ago, beyond what’s legal, the question is whether we want to encourage a system that has a lack of explicit consent at its core. And while Adobe has promised a tag that creators can apply to their work to prevent it from being used for A.I. training, as well as a system to compensate creators for the use of their data, the specifics have not yet been announced.

Microsoft, meanwhile, has announced a raft of measures designed to help its cloud customers get comfortable using its generative A.I. offerings. These include sharing its own expertise in setting up responsible A.I. frameworks and governance procedures, as well as access to the same responsible A.I. training curriculum Microsoft uses for its own employees. It also plans to attest to how it has implemented the National Institute of Standards and Technology’s A.I. framework, which may help customers with government contracts. It has said it will offer customers a “dedicated team of A.I. legal and regulatory experts in regions around the world” to help support their own A.I. implementations. And Microsoft is partnering with the global consulting firms PwC and EY to help customers create responsible A.I. programs.

It's actually enough to make you wonder, somewhat cynically, whether the current questions swirling around generative A.I.’s commercial safety are actually a bug—or a feature. Maybe all this uncertainty and angst isn’t bad for business after all—if it helps you upsell anxious customers on premium consulting and hand-holding services. At the very least, you can say that Microsoft knows how to turn lemons into lemonade.

With that, here's the rest of this week's A.I. news.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com
