Good morning.
Risk management around generative A.I. is creating anxiety for some corporate leaders.
What in particular is keeping them up at night? “One thing is the fear of missing out and falling behind,” says Reid Blackman, author of the book Ethical Machines. “A lot of leaders are feeling pressure to figure out, how do we use this new technology to innovate, increase efficiencies—make money or save money? The other concern is, how do we do this safely?”
Blackman is also the founder and CEO of Virtue, a digital ethical risk consultancy. He advises the government of Canada on federal A.I. regulations and corporations on how to implement digital ethical risk programs. We talked about what he’s hearing from corporate leaders, and his new article in Harvard Business Review, titled “Generative AI-nxiety.”
“The anxiety is justified when you don't actually have the systems and structure in place to adequately account for these risks,” Blackman tells me. “It’s not like risk management is a new thing to enterprises. This is just another area of risk management that they haven’t done before.”
When OpenAI launched ChatGPT on Nov. 30, 2022, generative A.I. was thrust to the forefront. Some companies had already been working with large language models (LLMs), which specialize in processing and generating text, Blackman says. But ChatGPT, and subsequently Microsoft’s Bing and Google’s Bard, made generative A.I. available to everyone within an organization, not just data scientists, he says.
That’s a “double-edged sword,” Blackman explains. “Lots of different people can see different ways of using technology to drive the bottom line,” he says. But used without parameters, “anyone could potentially do damage to the brand,” he adds. “This makes leaders nervous.”
Still, many companies are moving in the direction of generative A.I. PwC’s latest pulse survey of more than 600 C-suite leaders found that, overall, 59% intend to invest in new technologies in the next 12–18 months. Fifty-two percent of CFOs surveyed plan to prioritize investment in generative A.I. and advanced analytics. But respondents also cited challenges to their companies’ ability to transform, including achieving measurable value from new tech (88%), the cost of adoption (85%), and training talent (84%).
“What CFOs should be aiming for is how do we create what you would call a responsible A.I. program or an A.I. ethical risk program?” Blackman says. “How do we as an organization, from a governance perspective, manage these risks?”
Creating policies and metrics
Standard A.I. ethics concerns include bias, privacy violations, and black-box problems, Blackman says. But in his article, he points to at least four cross-industry risks he says are unique to generative A.I.:
—The hallucination problem: For example, a chatbot produces a response that sounds plausible but is factually incorrect or unrelated to the context.
—The deliberation problem: Generative A.I. does not deliberate or decide. It simply predicts the likelihood of the next word. It may fabricate reasons behind its outputs, which, to an unsuspecting user, look genuine.
—The sleazy salesperson problem: A company could undermine its own trustworthiness if, for example, it develops an LLM sales chatbot that’s very good at manipulating people.
—The problem of shared responsibility: Generative A.I. models are built by a small number of companies, so a feasibility analysis is necessary when your company sources and then fine-tunes a generative A.I. model. Part of that analysis should include what the benchmarks are for “sufficiently safe for deployment.”
The remedies for the hallucination and deliberation problems are due diligence processes, monitoring, and human intervention, according to Blackman.
“When we build A.I. ethical risk programs, it goes all the way from high-level statements to augmenting governance structures, new policies, procedures, workflows, and then, crucially, we have metrics and KPIs to track the rollout, compliance, and impact of the program,” he says.
What are some examples of KPIs that may be useful in this process? Blackman offers a few:
—How many A.I. models were deployed (for example, in the past quarter) that caused discriminatory outcomes?
—How many A.I. models caused concerns about insider trading?
—How many A.I. models were deployed that caused short-term financial gain but long-term damage to reputation/stakeholders?
One thing companies shouldn’t do is ban generative A.I. use, Blackman says.
“There are probably genuine opportunities to innovate in ways that are meaningful to the company,” he says. And an outright ban assumes that people aren’t really going to use it. “They will,” Blackman says. “It’s best to train people on how to use it safely.”
And potentially ease some anxiety.
Sheryl Estrada
sheryl.estrada@fortune.com