There’s a growing disparity between organizations whose boardrooms are well versed in generative artificial intelligence and those that need to play catch-up, warns Florin Rotar, chief AI officer at Avanade.
“I’m a little bit worried that we’ll see this building divergence, and some will be left behind,” says Rotar, during a virtual conversation hosted by Fortune in partnership with Diligent for The Modern Board series.
Avanade, an IT services and consulting firm, has worked with hundreds of organizations and in those conversations found that some boardrooms are getting “quite sophisticated in terms of using AI themselves,” Rotar says. Use cases already deployed include relying on generative AI to better prepare for board meetings, piloting and prototyping simulated activist-investor exercises, and running AI-enabled tabletop exercises to better plan for risks to the business.
Risks without proper AI governance
But as board members dive into applying generative AI to their workflows, they could expose companies to risk if the proper AI governance isn’t in place, including clear guidelines on how to adhere to safety policies and procedures without exposing sensitive company information. Over the past two years, employers have had to quickly set up policies for safe AI use, especially after the explosion of consumer interest in AI following the debut of the chatbot ChatGPT.
The thinking was that employees were going to use generative AI whether it was blessed by management or not, so HR and IT teams had to set up restrictions, upskilling classes and other forms of training, and internal AI playgrounds to allow for safe exploration. Experts say the same logic should apply to board members, too.
“I think what we’re seeing is there’s definitely a need for a more fundamental understanding of the basics with the board,” says Nithya Das, chief legal officer and chief administrative officer at Diligent, a governance, risk, and compliance SaaS company. “You should assume that they are going to find their own tools, and that may raise different security and privacy concerns for you as an organization, just given the sensitivity of board work and board materials.”
Das says training classes can be helpful to get boards up to speed on AI, similar to the education that had to be done when cybersecurity threats came into focus in recent years. One such course, recommended by Rotar, is Stanford University’s “The AI Awakening: Implications for the Economy and Society.”
AI is a growing priority for corporate directors
Diligent previewed a forthcoming survey from its research arm showing that generative AI ranks sixth on the priority list for board directors at U.S.-based public companies in 2025, trailing pursuing growth and optimizing financials but ranking higher than cybersecurity and workforce planning.
Sixth may not sound very high, but Das says it is an indication that AI is top of mind. Leaders are still sorting out how well versed their management teams are on AI, working through concerns about data privacy, and worrying about hallucinations, in which AI models confidently generate incorrect information.
“We do think that most boards and companies are at the beginning of their AI journeys, but they're definitely very AI curious,” says Das. “We expect to see this to be a continued area of focus for 2025.”
Fiona Tan, chief technology officer at Wayfair, an e-commerce furniture and home goods retailer, says even at digitally native companies, management had to explain to their boards the difference between generative AI technologies and more traditional use of AI and machine learning that was already deployed.
“For a board, it’s actually realizing some of the nuances between what was predictive…what are the generative capabilities, what are the large language model capabilities, and what are the risks,” says Tan. From there, boards can think through where to deploy generative AI. For a company like Wayfair, that could include content generation and creating more personalized content for each shopper’s needs.
The management team, Tan says, should be responsible for identifying the various opportunities to enhance the business with generative AI and articulating that vision to the board. That should also include keeping a close eye on emerging AI startups building solutions that may be better to buy than to build internally from scratch.
Looking for ways to disrupt your own company
“For the board, it is pushing to ensure that we are taking a little bit of an outside-in approach,” says Tan. “Where do we need to go in and disrupt ourselves?”
Omar Khawaja, chief information security officer at Databricks, a data and AI software company, says board members and management should not conflate being an avid user of AI with actually understanding how these systems work and how they can be applied to the business.
“In fact, a trap that I often see boards and other leaders falling into is, ‘I’ve used AI, I know how it works, it’s been three months, why haven’t you magically solved problems x, y, and z,’” says Khawaja.
He likens this common miscalculation of AI readiness to watching a cooking video on TikTok: it may take only a few minutes to watch an influencer whip up a dish, but doing the same at home could take hours.
“The challenge of managing, governing, curating, and organizing your data is where 90% of the work happens,” says Khawaja. The rest, he says, is training a model and applying it to the appropriate use cases.
Correction, November 22, 2024: A previous version of this article misspelled Omar Khawaja's last name.