Nine out of 10 executives say that artificial intelligence and generative AI will be a top-three tech priority for this year, and 85% of top leaders say they intend to boost their spending on the technology in 2024.
The survey, released today by consulting firm BCG, also showed that 95% of executives now allow the use of AI and generative AI at work, a huge leap from July 2023, when more than half were actively discouraging such use. The survey reflects the views of 1,406 executives from 50 markets globally.
“A lot of money is being pumped into this,” says BCG CEO Christoph Schweizer. “We are at the stage where the technology is maturing fast, but we are now getting to the human factor being the limiting factor for success.”
BCG says the scale of companies’ intentions for generative AI has outpaced that of any other technology advance in the firm’s 61-year history. Over the course of 2023, companies evolved from acknowledging that generative AI would be a massively disruptive force to testing pilot programs and, eventually, deploying the tech. And while a vast majority of companies plan to boost spending on AI and generative AI, only 9% say they will spend more than $50 million on the technology in 2024.
What executives are struggling with, BCG says, is that 66% of leaders are “ambivalent or outright dissatisfied” with their AI and generative AI progress thus far. That’s because of a lack of talent and skills, an unclear road map on AI investments and priorities, and a lack of strategy. Humans are getting in the way of the progress AI promises, just as executives are clamoring for returns on the billions they’ve invested in AI thus far.
“I would predict that this becomes about fundamentally transforming your business as much as it is a technology question,” says Schweizer, regarding what’s ahead in 2024.
Booking.com CEO Glenn Fogel says the way a CEO approaches AI should mirror how they manage the business more broadly.
“You have to recognize that you don’t have nearly the knowledge and the details as with almost everything else,” says Fogel. “You have to set the right tone. You have to understand where you’re putting the big picture resources, but you have to be willing to give away the control and the agency to the people who are actually doing the work.”
After ChatGPT made its splash, the travel company gave its brands permission to approach the technology from different angles. Booking.com launched a ChatGPT-powered AI tool to help travelers build their perfect itinerary. ChatGPT plug-ins were added to brands like OpenTable and Kayak. Agoda, an online travel agency that operates primarily in the Asia-Pacific region, focused on productivity.
Why would Booking.com pursue such disparate use cases? “Because nobody really knows what’s the best way to go forward,” says Fogel. “What’s the best thing for the company? Which one is going to produce the best results fastest? Let’s use all of our abilities from different directions and see what works best.”
Fogel also advises his teams to share AI learnings across the organization, but says that information doesn’t need to flow to the top of the company before filtering back down to another brand. “Things are happening too fast,” says Fogel. “The need for speed is very, very critical.”
Food tech startup NotCo has used AI algorithms for eight years, helping it sort through around 300,000 plant species to better understand which combinations would create the best-tasting plant-based milk or chicken.
“If we feed enough data into AI, AI should definitely be able to crack the code and start to establish the connections between the molecular components in food and the human perceptions of taste, textures, and color,” says Matias Muchnick, cofounder and CEO of NotCo.
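To make that idea concrete, here is a minimal, purely illustrative sketch of the kind of model Muchnick is describing: a supervised learner that maps molecular features of a plant blend to a human taste score. The toy data, column names, and choice of a random-forest regressor are assumptions for illustration, not NotCo’s actual system.

```python
# Illustrative sketch only: predicting a human taste score from molecular features.
# All data, column names, and the model choice are hypothetical, not NotCo's system.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Each row: one plant-ingredient blend described by molecular-level features,
# labeled with a tasting panel's average "milkiness" score (0-10).
df = pd.DataFrame({
    "protein_pct":    [3.2, 1.1, 4.5, 2.8, 0.9, 3.9],
    "fat_pct":        [1.8, 0.4, 2.6, 1.2, 0.3, 2.1],
    "sugar_pct":      [4.1, 6.3, 2.2, 5.0, 7.1, 3.3],
    "volatile_aroma": [0.7, 0.2, 0.9, 0.5, 0.1, 0.8],
    "taste_score":    [7.9, 4.2, 8.6, 6.5, 3.1, 8.1],
})

X, y = df.drop(columns="taste_score"), df["taste_score"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# Error on held-out blends, plus which molecular traits the model leaned on most.
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
print(dict(zip(X.columns, model.feature_importances_.round(2))))
```

At production scale, the same pattern, trained on far more blends and far richer molecular descriptors, is what lets a system surface promising ingredient combinations before anyone mixes a physical sample.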
NotCo has inked partnerships with food giants like Mars and Kraft Heinz, resulting in product innovations like the first-ever plant-based Kraft macaroni and cheese. Muchnick says NotCo’s AI platform helps its partners “create innovation, better gross margins in products, and to control the supply chain.”
As he looks forward to 2024, Muchnick says foodmakers will be leaning more on AI, especially for supply chains. Major events like the Ukraine war show just how quickly the food system can be undermined by disruptions.
“It is way better to utilize AI as a chaperone when supply-chain problems arise and you want to map out the potential risks and figure out the solution beforehand,” says Muchnick.
Felix Van de Maele, cofounder and CEO of software company Collibra, says organizations must build a level of AI literacy across their workforce to develop the skills necessary to execute use cases for generative AI. He also advises that employers create a new role—that of chief data citizen, someone who can serve as a change agent in building a culture around AI data.
“Data has become its own business function,” says Van de Maele. “You need a leader to manage your data, and that has started to separate from a pure technology infrastructure function, which is typically under the CIO or CTO.”
A chief data citizen can help cascade the importance of training around data use across a company, says Van de Maele. Any employee who touches data should understand the intent behind it, how to treat sensitive information about employees or customers, and how to handle thorny issues like copyright infringement. “Organizations need to make that as easy as possible,” says Van de Maele.
Prose, a custom beauty products brand, says the category encompasses more than 200,000 products on shelves that are formulated from 16,000 ingredients. “Both as a brand, and as a chemist, it is very hard to formulate products with such a large diversity of ingredients,” says Prose cofounder and CEO Arnaud Plas. “And for the customer, it is hard for them to know what to use.”
That’s where AI has become the backbone of Prose’s business as a personalized beauty company offering made-to-order shampoos and moisturizers. “Because to scale what we do, we had to use AI to formulate products, and we had to use automation to make the products,” says Plas.
Prose, like many companies, isn’t yet ready to offer a generative AI–powered tool to the market. But it does have a lot of data, including 2 million points of feedback from customers, that could be put to better use. Today, Prose is internally testing a chatbot that could answer beauty questions like, “If it is raining today, should I use more conditioner?” or “I have a party tonight. What should I do to ensure my hair is shiny?”
“At some point, our goal would be to give that beauty assistance to our customers,” Plas explains.
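For a sense of how such an assistant could lean on that customer feedback, here is a minimal, hedged sketch: a toy keyword-retrieval step over feedback snippets, followed by a prompt handed to a language model. The feedback lines, the retrieve helper, and the llm() stub are all illustrative assumptions, not Prose’s implementation.

```python
# Illustrative sketch of a feedback-grounded beauty chatbot; nothing here is
# Prose's actual code or data.
from typing import List

FEEDBACK = [
    "Conditioner felt too heavy on humid, rainy days.",
    "Shine serum worked best applied to damp hair before a night out.",
    "The lighter conditioner formula kept frizz down in wet weather.",
]

def retrieve(question: str, docs: List[str], k: int = 2) -> List[str]:
    """Toy keyword-overlap retrieval; a real system would likely use embeddings."""
    q_words = set(question.lower().split())
    return sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)[:k]

def llm(prompt: str) -> str:
    # Placeholder for whatever hosted model a team might test against.
    return f"[model response to a {len(prompt)}-character prompt]"

def answer(question: str) -> str:
    # Ground the model's answer in the most relevant feedback snippets.
    context = "\n".join(retrieve(question, FEEDBACK))
    prompt = (
        "You are a beauty advisor. Using only the customer feedback below, "
        f"answer the question.\n\nFeedback:\n{context}\n\nQuestion: {question}"
    )
    return llm(prompt)

print(answer("If it is raining today, should I use more conditioner?"))
```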
As leaders weigh the future possibilities of AI, executives admit a lot of unanswered questions linger. What use cases will actually add value for customers? Are there any regulatory or formal constraints ahead? And what does the future of responsible AI look like?
“I do think all of us, at this point, are unable to anticipate how much AI and [generative] AI are going to evolve,” Schweizer says. “But it is important to imagine it is going to be an incredibly strong, powerful technology that can also have pretty scary use cases.”
With more generative AI applications being deployed to the market in 2024 and beyond, Schweizer says trust is especially important to maintain. He points to research from Edelman showing that while trust in government and the media, along with economic optimism, continues to fall, business is still viewed as competent and ethical.
“Now is the critical moment to make sure that you don’t spoil that trust and kill it over the next couple of years with irresponsible use of AI,” says Schweizer. “You need to be responsible for what gets done in the machine, and you also need to be responsible for what comes out of the machine.”