Fortune
Sage Lazzaro

Can A.I. companies move fast without breaking things?

(Credit: SAUL LOEB/AFP via Getty Images)

Hello and welcome to this week's Eye on A.I. 

In a recent story in the Associated Press, artificial intelligence researchers and industry leaders interrogated generative A.I.’s “hallucination” problem, wherein tools like ChatGPT make up information and write it as fact. The problem stems from the very way that ChatGPT and other generative A.I. tools function—they aren’t really “thinking,” but rather use patterns to predict the next word in a sequence. ChatGPT writes according to what makes sense in a pattern, not according to what’s true.
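
To make that mechanism concrete, here is a deliberately tiny Python sketch of pattern-based next-word prediction. It bears no resemblance to ChatGPT’s actual architecture, and the training text, prompt, and continue_text helper are all invented for illustration; the point is simply that a model choosing the statistically likeliest next word can produce fluent text that is flatly wrong.

```python
# Toy illustration only: a bigram "language model" that predicts the next word
# from how often word pairs appear in its training text. Nothing here checks truth.
from collections import Counter, defaultdict

training_text = (
    "the capital of france is paris . "
    "the capital of france is paris . "
    "the capital of australia is canberra ."
)

# Count, for each word, which words follow it and how often.
follows = defaultdict(Counter)
words = training_text.split()
for current_word, next_word in zip(words, words[1:]):
    follows[current_word][next_word] += 1

def continue_text(prompt, length=2):
    """Extend the prompt by repeatedly picking the most frequent next word."""
    out = prompt.split()
    for _ in range(length):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(candidates.most_common(1)[0][0])
    return " ".join(out)

# "is paris" is the most common pattern in the training text, so the model
# confidently completes a false statement about Australia.
print(continue_text("the capital of australia"))  # -> the capital of australia is paris
```

Real models use neural networks over much longer contexts rather than simple pair counts, but the basic objective is the same: predict plausible continuations, which is not the same thing as predicting true ones.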

“This isn’t fixable,” Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory, told the AP. “It’s inherent in the mismatch between the technology and the proposed use cases.”

Even OpenAI CEO Sam Altman acknowledges ChatGPT’s penchant for spewing out falsehoods. “I probably trust the answers that come out of ChatGPT the least of anybody on Earth,” he said to a crowd on his university tour, to laughter, according to the article. 

Altman went on to say he thinks the hallucination problem will improve eventually. But even if it can be fixed, how long will it take? How consequential will the errors it makes in the meantime be? And how will A.I. have already upended our world, before we even had a chance to truly contend with the changes? 

There are already several documented cases of generative A.I. hallucinating defamatory information about people or otherwise being used to threaten people’s reputations, such as one instance in which Meta’s BlenderBot 3 falsely labeled a former member of the European Parliament a terrorist. And of course, hallucinations are just one of many concerns surrounding A.I., alongside issues around privacy, copyright, faulty datasets, the technology’s ability to perpetuate discrimination and uproot our current systems of work (and thus much of how society operates), and likely more we can’t yet imagine. 

When OpenAI released ChatGPT, it opened the floodgates. Google, which had been holding back on productizing its competing tech over concerns it gave too many wrong answers, released Bard to avoid falling behind and is now experimenting with how it can recreate its search business in the image of generative A.I. Many companies quickly followed suit, rewriting their product roadmaps to meet the new generative A.I. moment. And as was made clear by artificial intelligence’s explosive prominence in Q2 earnings calls, it’s a big opportunity for business. 

Companies of all stripes now face a common dilemma: the competitive pressure not to fall behind in the A.I. race versus the responsibility to ensure the safety of the tools they release. A survey of 400 global senior A.I. professionals just released by Databricks and Dataiku gets at this tension, with a majority of respondents reporting both positive ROI from their A.I. initiatives and that they’re more worried than excited about the future of A.I. 

We’ve already seen what happens when technology that has the ability to uproot how we experience the world is unleashed prematurely or integrated widely despite known issues—for example, Facebook’s role in the genocide in Myanmar, ethnic violence in Sri Lanka, and election disinformation in the U.S. This moment feels like an inflection point for A.I. where we decide if we’re going to “move fast and break things” yet again.

Abhishek Gupta, founder and principal researcher at the Montreal AI Ethics Institute, said “this is one of the biggest issues that is causing a lot of strife within many companies” as they decide how to move forward with generative A.I. plans, specifically citing friction between the business side and the privacy and legal functions. 

“We are struggling with similar, if not identical, issues around governance approaches to a powerful general-purpose technology that is even more distributed and deep-impacting than other technological waves,” he said when asked whether he fears we’ll repeat the mistakes made with social media platforms, where potential harms weren’t properly mitigated before products were released and widely used.

The complexity of A.I. systems is itself a complicating factor, Gupta said, but he argues that we need to think ahead about how the landscape of problems and solutions will evolve.

As if to drive home this point, Facebook parent company Meta—the originator of the now-infamous "move fast and break things" phrase—last month openly released its powerful Llama 2 large language model for research and commercial use, raising alarms about its potential misuse.

One clear difference between this moment and the early days of the social media era is that policymakers are moving quickly to consider a variety of regulations around A.I. Individuals like comedian Sarah Silverman and companies like Getty have also taken legal aim at companies including Meta, OpenAI, and Stability AI, alleging they broke copyright laws by training models on their work without payment or consent. The Federal Trade Commission is also currently investigating whether OpenAI violated consumer protection laws when it scraped people’s data to train its models. 

“One of the upsides of the current zeitgeist is that there is a lot more discussion around issues of ethics and societal impacts compared to the era when social media platforms were just taking off,” Gupta said. “That said, history may not repeat itself, but it certainly rhymes.”

With that, here’s the rest of this week’s A.I. news.

Sage Lazzaro
sagelazzaro.com
