Leaders have a love-hate relationship with AI: the technology is at once promising and scary.
At the Fortune Most Powerful Women Summit in Laguna Niguel, Calif., executives from PwC, Google DeepMind, Slack, and IBM addressed the risks and potential rewards of AI for leaders and companies.
Slack CEO Lidiane Jones said she is optimistic that AI will improve worker productivity by automating mundane tasks, freeing up more space for creativity, and easing burnout. She said the AI revolution is really a “productivity revolution.”
Kate Woolley, general manager of IBM’s partner ecosystem, seemed to echo Jones, adding that AI provides an opportunity to accelerate innovation.
“I believe in the transformational power that AI can have in the future, and the world that it can enable for my daughters and their generation,” Google DeepMind’s chief operating officer, Lila Ibrahim, said.
Mitra Best, partner and technology impact officer at PwC, stressed that we’re at an inflection point, one that few expected to arrive this year. Though AI has been around for a long time, it’s generative AI that’s a “game changer” because of how it redefines work and permeates every aspect of our lives, she said.
“It’s so powerful…It could change the future in a good way or in a bad way,” Best said.
Given those possibilities, it’s important that AI be used to enhance human experience, not replace it, Best added. It’s not yet clear how this technological transformation, led by AI, will play out—or how companies will mitigate the risks involved.
“Bias, misinformation, your risk of widening the digital divide, and the erosion of trust,” Best said. “Those are the risks that we want to avoid.”
Protection from AI risks
Ibrahim later said there are three major risks associated with AI: misinformation and bias, misuse by bad actors, and long-term risks. Companies need to collaborate through efforts such as the Frontier Model Forum, where leading AI firms develop best practices and meet with government entities.
It’s crucial that they bring civil society in on the collaboration, so that companies don’t “further propagate some of the biases,” Ibrahim said.
Best’s team developed a tool called bias analyzer, which takes the outputs of decision-making algorithms and models, benchmarks them against a library of expected results, and flags potential areas of bias, she explained.

Woolley said that although we have to be careful when regulating AI, companies should be held accountable for the technology they’re creating.
“Because I think when companies are accountable for what they’re creating, they’re very likely to create the right thing,” she said.
Toward the end of the panel discussion, an audience member asked about the very concept of humanizing AI, a reference to the session’s title.
Best answered: “I actually don’t like humanizing AI, because I don’t think AI is human, and I think the narrative needs to change…AI is a tool that can extend our capabilities, our capacity, and our creativity.”