Fortune
Sheryl Estrada

How proper training for employees can help stem AI hallucinations

Good morning. CFOs are exploring use cases for generative AI in areas like reducing costs and assisting in decision-making. But hallucinations—the term for those unexpected moments when AI starts making stuff up—are still a lurking risk.

Jeff Dean, chief scientist of Google DeepMind and Google Research, touched on the topic of hallucinations while speaking this week at the Fortune Brainstorm Tech 2024 conference in Park City, Utah. “I do think we’re making progress,” Dean said. “It’s a difficult problem because the models are trained to generate probabilistically plausible sentences, and those are not always true.” Google’s latest Gemini models have shown promise, with “glimmers of hope” in reducing hallucinations about information provided directly by users in prompts, he said.

But if you don’t have deep AI experience and you’re trying to incorporate generative AI into workflows, how do you address the problem of hallucinations producing wrong or misleading results? Yesterday, I discussed this topic with Ilana Golbin Blumenfeld, Responsible AI Lead at PwC.

“I think everybody within an organization really has to have a baseline set of training,” Blumenfeld told me. 

How a person interacts with a generative AI model is an important part of stemming the risk of hallucinations, she said. For example, before implementing a tool like ChatGPT, employees should have answers to questions such as: How are we using it? How is it being incorporated into our day-to-day workflow? How are we expected to make decisions from it? What types of prompts should we use? And what does an undesirable output look like versus a desirable one?

“If you can't answer those questions, then it's going to be very difficult for you to assess that [the technology] is actually doing what you expect it to, at scale, when it's deployed to a number of users,” she said. 

And it may come down to creating a simple tool for employees to practice on. When PwC realized some team members didn’t know how to interact with a generative AI model, the firm created a prompt template. Blumenfeld said it was similar to the classic game Mad Libs, where one player prompts the others for a list of words to fill in the blanks of a story before reading it aloud.
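PwC hasn’t published what its template looks like, but as a rough sketch, a fill-in-the-blanks prompt can be as simple as a string with named slots that an employee completes before sending it to a chat tool. Everything below (the slot names, the wording, the guardrail sentence) is a hypothetical illustration, not PwC’s actual tool:

    # A minimal, hypothetical sketch of a Mad Libs-style prompt template.
    # The slot names (role, task, source_text, output_format) are illustrative
    # assumptions, not PwC's actual template.
    TEMPLATE = (
        "You are a {role}. {task}, using only the information in the text below. "
        "If the text does not contain the answer, say so instead of guessing.\n\n"
        "Text:\n{source_text}\n\n"
        "Respond as {output_format}."
    )

    def build_prompt(role: str, task: str, source_text: str, output_format: str) -> str:
        """Fill in the blanks and return a complete prompt string."""
        return TEMPLATE.format(role=role, task=task,
                               source_text=source_text, output_format=output_format)

    # Example: an employee fills in the blanks, then pastes the result into a chat tool.
    print(build_prompt(
        role="financial analyst",
        task="Summarize the key cost drivers",
        source_text="Q2 spending rose 8%, driven by cloud costs and travel.",
        output_format="three short bullet points",
    ))

The “say so instead of guessing” line is the kind of hallucination guardrail such training is meant to instill: employees practice what a desirable prompt, and a desirable output, looks like before they rely on the tool.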

In 2023, PwC US announced plans to invest $1 billion over three years to expand its AI offerings, including generative AI. And in May, the firm signed an agreement with OpenAI that made PwC the tech company’s first reseller of ChatGPT Enterprise and the largest user of the product. That version of ChatGPT, built for large organizations with enterprise-grade security and privacy, will be rolled out to PwC’s U.S. and U.K. employees, Blumenfeld said.

Blumenfeld has been working in AI research and analytics at PwC for almost a decade, with prior roles including manager of the artificial intelligence accelerator. 

“I will tell you that prior to the launch of ChatGPT, specifically, most of the conversations we had around AI involved trying to get people to understand the basics of what data and AI can actually do for them,” Blumenfeld said. “And the big shift I have seen is that it's not a push, it's a pull. Everyone wants it.” 

And most companies have “stopped trying to block every website that has generative AI models in it because it was just a giant game of Whac-A-Mole,” she said. But companies have to get the right messaging out to staff about governance and training so employees don’t unintentionally expose the business to risk, Blumenfeld said.

Sheryl Estrada
sheryl.estrada@fortune.com
