Fortune
François Candelon, Lisa Krayer, Saravanan Rajendran, Leonid Zhukov

Generative A.I.: How to future-proof your company's strategy


Companies that raced to be the first to adopt the next big thing in generative A.I. did so on the presumption that, because the technology was so user-friendly, it would also be easy to implement. That has turned out not to be the case.

Incorporating generative A.I. for strategic purposes, even in straightforward applications, has so far proven resource- and capability-intensive. Accounting for data security, company-specific functionality, and a minuscule global talent pool has meant that only the big players have been able to even attempt meaningful pilot programs beyond basic use cases (and those use cases don’t create substantial differentiation from the competition). That’s the exact opposite of the future the technology appeared to promise.

Because the barrier to entry is higher than expected for generative A.I., it’s even more important for companies to develop a strategy to maximize its potential while not wasting valuable resources. That strategy will need to take advantage of what generative A.I. offers now, but also must take account of what it could offer in the future. Jumping on the bandwagon to secure short-term gains could lead to unintended consequences and long-term costs.

Making a strategic assessment can be difficult, given that the scale of the disruption could very well outpace our imaginations. Here are four strategic moves executives should make to define and future-proof their generative A.I. strategy.

Focus company resources on using generative A.I. to create competitive advantage 

When it comes to investing technical resources, companies should take advantage of the rise of plug-and-play third-party solutions, like Copilot and ChatGPT, that require less effort. When implementing in-house solutions, which can require significant resources and capabilities, companies should prioritize generative A.I. efforts that have the potential to create competitive advantage.

Companies can do this either through “generative A.I. unicorns”—solutions that create new sources of competitive advantage, such as inverse design for drug discovery in pharmaceutical companies—or through the application of generative A.I. to transform an entire function—such as marketing or customer service—and enhance its productivity. Creating competitive advantage is often linked to maximizing the value of the company’s proprietary data.

A regional bank we studied did just that. After determining that off-the-shelf products didn’t have the functionality, data security, or model accuracy it was looking for, the bank made the strategic decision to invest in building its own tool to extract detailed information from its proprietary financial documents. The bank used a combination of internal staff and external resources while trying to minimize the number of data scientists needed, as those with relevant generative A.I. training are extremely hard to find. Third-party data-labeling A.I. products were deployed to train the model, and a couple of dedicated data scientists were ultimately needed to manage the full life cycle of the new product. The homegrown solution provides new cost-effective insights that enable bank employees to make better, faster decisions on customers’ loan applications while minimizing defaults.

Have a centralized data strategy 

Before a company can devise a generative A.I. strategy, it must first have a data strategy. When a leading fast-moving consumer goods company was beginning its A.I. transformation, 80% of the company’s own consumer data didn’t even exist in digital form, requiring a painstaking, months-long data excavation to locate and digitize hard copies of years of payment records. The value and importance of such high-quality data is only increasing. A growing body of research suggests that smaller models such as Falcon and BloombergGPT, trained on clean, carefully selected data, can outperform larger models trained on everything from the internet.

The advance of generative A.I. has crystallized that a centralized, organization-wide strategy for collecting and governing data is essential. Generative A.I. can support data excavation and programmatically help companies clean large and unstructured data sets. Executives should prioritize creating these centralized datasets and work with data scientists to find new and creative data sources to fine-tune generative A.I. models. Where there are holes in the data, generative A.I. can manufacture synthetic data that helps companies explore novel data sets.
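To make the synthetic-data idea concrete, here is a deliberately simple sketch (not an LLM-based method, and with purely hypothetical field names) that fabricates new records by resampling the values observed in an existing data set:

```python
import random

def synthesize_rows(rows, n, seed=0):
    """Fabricate n synthetic records by sampling each field
    independently from the values observed in `rows` -- a crude
    stand-in for model-generated synthetic data."""
    rng = random.Random(seed)
    columns = {key: [r[key] for r in rows] for key in rows[0]}
    return [
        {key: rng.choice(values) for key, values in columns.items()}
        for _ in range(n)
    ]

# Hypothetical example: pad a sparse loan data set with synthetic rows.
observed = [
    {"region": "north", "default": 0},
    {"region": "south", "default": 1},
    {"region": "north", "default": 0},
]
synthetic = synthesize_rows(observed, n=5)
```

Real generative models go much further, capturing the correlations between fields rather than sampling each one independently, but the payoff is the same: plausible records where the real data has holes.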

A benefit of having a data strategy with centralized datasets is its adaptability. Basic versions of LLMs—such as LLaMA released by Meta AI in February or Falcon LLM released by Technology Innovation Institute in May—can be repurposed for various use cases. Companies must, however, consider the specific licenses of such models. LLaMA, an open-source high-quality base model, is released under a noncommercial license, while Falcon LLM, currently leading the Hugging Face open LLM leaderboard, can also be applied in commercial use cases. 
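The license point can even be enforced mechanically. A minimal sketch, assuming a hand-maintained license table (the entries below reflect the mid-2023 terms described above and should be verified against each model’s actual license):

```python
# Illustrative license metadata; verify against each model's actual terms.
MODEL_LICENSES = {
    "llama": {"commercial_use": False},  # released under a noncommercial license
    "falcon": {"commercial_use": True},  # permits commercial use cases
}

def usable_models(commercial=True):
    """Return the base models whose license permits the intended use."""
    return sorted(
        name for name, meta in MODEL_LICENSES.items()
        if meta["commercial_use"] or not commercial
    )
```

A gate like this in the model-selection process keeps a noncommercial base model from quietly ending up inside a revenue-generating product.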

To maximize the data’s potential, companies should move to a shared services model managed by a single team—because only a handful of LLMs will be needed across an organization. Providing access to centrally managed LLMs also allows companies to derive value from employee experiments in a controlled and secure environment. The pharmaceutical company Merck recently announced the launch of an internal LLM tool, called myGPT, so that all of its employees could experiment directly with the company’s data in a secure setting, making them more likely to discover high-impact use cases for the company.

Treat the choice of LLM like a choice of strategic partners

Given that a single or a small handful of LLMs can be put to a variety of uses, choosing the right one is a critical decision. While executives don’t need to be heavily involved in the technical evaluation of each LLM, they do need to understand the broader impact, such as recognizing that their choice of LLMs is also a choice between strategic partnerships. When selecting an LLM provider for a given use case, three strategic factors should guide the decision-making process.

Degree of data confidentiality needed. LLMs are often fine-tuned by transporting encrypted data to a public cloud tenant run by the provider, where it is decrypted for processing in the LLM. Executives need to carefully evaluate the options an LLM provider offers for data security in public, securely managed, hybrid, or fully private environments and determine what meets the company’s data security requirements.

Internal resources required to implement. Fully private implementations are the most resource-intensive option, often requiring a team with state-of-the-art expertise, on top of operations engineers and IT professionals, to build out a high-quality solution—and then continuously manage and update it end-to-end. The skill set to do this is exceedingly rare: There are only an estimated few thousand people in the U.S. capable of creating a fully bespoke generative A.I. model. Adapting a model in a cloud environment, on the other hand, requires far less: a handful of full-time data scientists or machine learning operations engineers. This type of model would typically be fully managed by the LLM provider and automatically updated, a convenience that also limits a company’s ability to adapt the model to fit its needs as they evolve.

Implications for customer engagement. In a recent article, we assessed companies’ different options for building generative A.I. applications and the implications for customer engagement. For example, a decision to integrate customer-facing services into a third-party LLM platform, such as ChatGPT, could risk commoditization through intermediation and reduce direct customer engagement. Building an in-house, customer-facing chatbot with open-source models might avoid those pitfalls, but risks missing out if the third-party LLM platform becomes a popular source of customer engagement and sales.
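One way to weigh the three factors together is a simple scorecard. The sketch below is purely illustrative; the deployment options, ratings, and weights are hypothetical placeholders, not recommendations:

```python
def score_provider(ratings, weights):
    """Weighted sum of 1-5 ratings across the three strategic factors."""
    return sum(ratings[factor] * weights[factor] for factor in weights)

# Hypothetical weights: this company prizes confidentiality above all.
WEIGHTS = {"confidentiality": 0.5, "internal_resources": 0.3, "engagement": 0.2}

# Hypothetical deployment options, rated 1-5 on each factor.
candidates = {
    "fully_private":   {"confidentiality": 5, "internal_resources": 1, "engagement": 4},
    "managed_cloud":   {"confidentiality": 3, "internal_resources": 4, "engagement": 3},
    "public_platform": {"confidentiality": 2, "internal_resources": 5, "engagement": 2},
}

best = max(candidates, key=lambda name: score_provider(candidates[name], WEIGHTS))
```

Changing the weights, say, to favor low internal resource requirements, can flip the ranking, which is exactly why setting them is an executive decision rather than a technical one.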

Experiment to predict the future of strategic workforce planning

In the world of generative A.I., leaders need to develop proactive and agile approaches to workforce planning that balance current adoption of new technology with its forecasted evolution. A recent MIT research study on generative A.I.’s impact on productivity found that when generative A.I. was used for writing tasks, it not only made workers assigned to the task more productive, reducing their time spent by more than a third; it also changed how they worked—and how they felt about that work. Companies should start experimenting immediately with generative A.I. products across their organizations to better understand how adoption will impact employees in the near term, which will provide valuable inputs for longer-term forecasts.

Productivity, however, should not be executives’ sole focus. Companies will need to manage not just the technology, but the human part of the equation, and the relationship between employees and generative A.I. The technology will advance, and the humans interacting with it will change too, creating new capabilities and new deficits. These changes will require companies to find a new equilibrium between employees and A.I. How, for instance, does an organization preserve the skills it values in employees? It’s widely accepted that when we no longer exercise a skill, like creative thinking, the skill itself atrophies. For every executive, addressing these sorts of questions will require experimentation and broad-minded, critical thinking.

How will junior employees master the capabilities necessary to someday supervise A.I. when entry-level employees may no longer be tasked with the mundane jobs (now assigned to A.I.) that make up a project? Where will the instincts for the job be developed without a traditional organizational learning curve? How will all of this impact employees’ professional identity? And how will that affect how humans and A.I. relate, and produce, as a pair? These questions, for which there are not yet answers, will determine what a company’s future workforce will look like and how it will operate: the types and numbers of employees it will need to recruit, and, crucially, how companies design and operate upskilling and reskilling programs.

Executives should be continually forecasting what the working pair—generative A.I. and human employees—could look like in six months’ to two years’ time. Beyond experimenting internally, companies can draw on tools such as Metaculus, an online prediction engine that aggregates and weights multiple forecasts, as one form of workforce forecasting. The goal of these forecasts is not to predict exactly what the future holds, but to provide insights into how to prioritize workforce resources and what guardrails will both encourage employee experimentation and prepare the organization and its human resources.

Conclusion

It’s vital for companies to develop a future-proofed A.I. strategy before setting out in the marketplace, whether they do so now or later. To be strategic, executives will need to stop thinking of generative A.I. as just a new tool and instead embrace it as a revolution that will touch every aspect of how we live and work. Even the savviest leaders can’t know the future, but they do need to start thinking and preparing now to adapt to what it brings.

Read other Fortune columns by François Candelon

François Candelon is a managing director and senior partner in the Paris office of Boston Consulting Group and the global director of the BCG Henderson Institute (BHI).

Lisa Krayer is a project leader in BCG’s Washington, D.C. office and an ambassador at BHI.

Saravanan Rajendran is a consultant in BCG’s San Francisco office and an ambassador at BHI.

Leonid Zhukov is the director of the BCG Global A.I. Institute and is based in BCG’s New York office.

Some of the companies featured in this column are past or current clients of BCG.
