Olga Glasson

Why Access Is Not Activation and What New Research Says about the Hidden Cost of AI Non-Adoption

Meta and Amazon now mandate AI use for every employee. A field assessment from the Fellows.tech Association shows why mandates alone are failing — and what the companies generating real AI value are doing differently.

In 2026, many of the world's largest companies don't just ask their employees to use AI — they require it, turning AI usage into a target metric. Meta tied every employee's performance review to "AI-driven impact," making AI proficiency a formal condition of career advancement for engineers, marketers, and executives alike. Amazon built an internal monitoring system called Clarity that tracks exactly which AI tools individual developers use and how often, with a target of 80% using AI for coding at least once per week. Accenture went further still: senior staff who cannot demonstrate regular AI adoption through weekly login data are now blocked from promotion.

The logic is clear enough. But the results so far suggest that mandates alone are not solving the problem they were designed for. Researchers at the Fellows.tech Association for Applied Technology and Engineering, which has spent the past two years studying enterprise AI deployments, give this disconnect a precise name: the Activation Gap. Their field assessment, drawing on direct client observations alongside large-scale survey data, argues that the gap between deploying AI tools and actually using them is structural, not temporary, and that closing it requires a fundamentally different intervention than either training programs or performance mandates.

The problem hiding inside the adoption numbers

The data support that diagnosis. PwC's 29th Global CEO Survey, released at Davos in January 2026 and drawing on responses from 4,454 executives across 95 countries, found that 56% of companies have seen neither increased revenue nor reduced costs from AI over the past year. Only 12% — the group PwC calls the "vanguard" — report meaningful returns on both fronts. CEO confidence in near-term revenue growth has fallen to a five-year low. The investment is real; for most organizations, the results are not.

The surface-level adoption narrative looks like a success story — nearly nine in ten organizations now use AI in at least one business function. But adoption numbers count deployment, not practice. PwC's own workforce data reveals the gap: only 14% of workers globally use AI on a daily basis, a figure that barely moved in a year, even as access expanded by 50%. PwC itself describes the pattern as "isolated, tactical AI projects" that "often don't deliver measurable value." Organizations deploy a chatbot here, a predictive model there, without integrating AI into how work actually gets done. The tools are available, the licenses are paid for, and the habits remain unchanged.

When the same role is split into two

The practical consequences of the Activation Gap are already visible inside organizations. George Andronchik, a Fellow of the British Computer Society and a data infrastructure architect with over a decade of enterprise experience, led the Fellows.tech research. He describes a pattern his team has observed repeatedly across client engagements in different industries.

“What surprised me was not the size of the gap itself, but how consistent the pattern was across different organizations and industries. While some practitioners rebuild their workflows around AI, others don't change anything — and it is not a technology story. It is about habits and incentives, and it plays out the same way whether you are in fintech, logistics, or healthcare,” Andronchik says.

In one case, his team directly observed a large organization with a mature data infrastructure function automating a significant portion of its support workflows. The automation was being built by data engineers proficient in AI tooling: prompt engineering, agentic workflow design, and LLM-assisted code generation. The roles being eliminated through that automation belonged to data engineers with the same job title, at the same seniority level, and with access to the same tools. The only difference was whether they had built a practice around those tools.

In other words, the displacement is not vertical. AI adoption does not make a role disappear from the organization. Instead, a lateral shift happens: a colleague with the same title but a different practice becomes the person the organization builds around.

It is not enough to tell people to use AI

The Fellows.tech findings complicate Meta's mandate-driven approach and identify three mechanisms that explain why access does not become activation.

The first is workflow inertia. Practitioners who have spent years optimizing how they work face a real, immediate productivity cost when rebuilding those workflows around new tools. The workflow disruption is immediate, while the benefit is deferred. Rational people under performance pressure tend to revert to what works.

The second is identity resistance. Senior engineers and technical professionals have professional identities built around judgment and expertise. Tools that appear to shortcut that judgment can feel like a threat rather than an aid. At the practitioner level, this means they treat AI as a command-based tool rather than a thinking partner.

The third is an unclear task-level ROI. AI rollouts produce capability at the platform level — the tool is available; the license is paid for — without clear integration at the task level. A worker who cannot see how a given tool makes their specific daily work better has a rational reason not to change how they work.

Andronchik is direct about what this means for organizations betting on mandates alone: “Issuing a policy does not rebuild a workflow. The companies we see generating real value from AI are not the ones with the strictest adoption targets — they are the ones that went back to basics and redesigned how work actually gets done. That is a much harder intervention than a performance review requirement, and a much more effective one.”

The window is closing, but has not closed

The pressure on workers is intensifying faster than most organizations' capacity to support the transition. As AI tooling matures, workforce decisions are increasingly being made on a new criterion: whether practitioners are building automation or being overtaken by it. That logic is consistent across industries and company sizes, and it is more durable than any single round of performance reviews or layoffs suggests.

For the 56% of companies, and the far larger proportion of individual workers, who have not yet crossed from access to activation, the window to do so before activation becomes a baseline expectation rather than a differentiator is closing. It has not closed yet. But the gap between those building that capability now and those planning to start later is already measurable, and it is compounding.
