Tom’s Hardware
Paul Alcorn

Intel confirms Microsoft's Copilot AI will soon run locally on PCs, next-gen AI PCs require 40 TOPS of NPU performance

We've previously reported on industry rumors that Microsoft's Copilot AI service will soon run locally on PCs instead of in the cloud, and that Microsoft would require 40 TOPS of performance from the Neural Processing Unit (NPU), but we had been unable to get on-the-record confirmation of those rumors. That changed today at Intel's AI Summit in Taipei, where Intel executives, in a question-and-answer session with Tom's Hardware, said that elements of Copilot will soon run locally on PCs. Company representatives also mentioned a 40 TOPS NPU requirement for next-gen AI PCs.

Microsoft has been largely silent about its plans for AI PCs, even allowing Intel to officially announce the new definition of an AI PC. The co-developed Microsoft and Intel definition states that an AI PC will have a CPU, GPU, and NPU, along with Microsoft's Copilot and a physical Copilot key directly on the keyboard.

PCs meeting those requirements are already shipping, but that is just the first wave of the AI PC initiative. Intel divulged future AI PC requirements in response to my questions about potential memory criteria. 

"But to your point, there's going to be a continuum or an evolution, where then we're going to go to the next-gen AI PC with a 40 TOPS requirement in the NPU," said Todd Lewellen, the Vice President of Intel's Client Computing Group. "We have our next-gen product that's coming that will be in that category." 

"[..]And as we go to that next gen, it's just going to enable us to run more things locally, just like they will run Copilot with more elements of Copilot running locally on the client. That may not mean that everything in Copilot is running local, but you'll get a lot of key capabilities that will show up running on the NPU."

Currently, Copilot computation occurs in the cloud, but executing the workload locally will provide latency, performance, and privacy benefits. Notably, Intel's shipping Meteor Lake chips offer up to 10 TOPS from the NPU, while AMD's competing Ryzen Hawk Point platform has a 16 TOPS NPU; both fall shy of the 40 TOPS requirement. Qualcomm will bring its oft-delayed X Elite chips, with 45 TOPS of NPU performance, to market later this year.

Lewellen explained that Microsoft is focused on the customer experience with the new platforms. As such, Microsoft insists that Copilot runs on the NPU instead of the GPU to minimize the impact on battery life. 

"We had a lot of discussions over the course of the last year[with Microsoft], and we asked, 'Why can't we just run it on the GPU?' They said they want to make sure that the GPU and the CPU are freed up to do all this other work. But also, we want to make sure it's a great battery life experience. If we started running Copilot and some of those workloads on the GPU, suddenly you're going to see a huge hit on the battery life side," Lewellen explained. 

While AMD holds a slight lead in NPU TOPS performance, and Qualcomm claims a much bigger advantage in its not-yet-shipped chips, Intel says its roadmap includes next-gen processors to address every segment of the AI market.

"We have our product roadmap and plan on where we're at in mobile with premium and mainstream, but then you also go down into entry. And so we have plans around entry. From a desktop standpoint, we have plans on the desktop side, what we would say [is an] AI PC. And then there's also the next-gen AI PC, the 40 TOPS requirements; we have all of our different steps in our roadmap on how we cover all the different segments."

Intel's Lunar Lake processors will come to market later this year with three times more AI performance on both the GPU and the NPU than the existing Meteor Lake chips. Intel is already sampling those chips to its partners in preparation for a launch later in the year. Those chips will face off with Qualcomm's X Elite and AMD's next-gen processors. 

In the meantime, Intel is working to expand the number of AI features available on its silicon. As we covered in depth earlier today, the company plans to support 300 new AI-enabled features on its Meteor Lake processors this year. 

Many of those features will be optimized specifically for Intel's silicon. The company told me that roughly 65% of the developers it engages with use Intel's OpenVINO, which means those applications are tuned for Intel's NPU. The remainder of the developers use a 'mix of splits' between ONNX, DirectML, and WebNN, and Intel says it is happy to work with developers using any framework.
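
For a concrete sense of what that split means, here's a minimal sketch of how a developer might target the NPU through OpenVINO's Python API. The model file, input shape, and device string are illustrative assumptions on our part, not anything Intel or the developers mentioned above have shared.

```python
# Minimal OpenVINO sketch. The model file and input shape are placeholders,
# not code from Intel or any developer cited in this article.
import numpy as np
import openvino as ov

core = ov.Core()
print(core.available_devices)  # On a Meteor Lake machine this can include 'NPU'

# Load an exported model and compile it specifically for the NPU device.
model = core.read_model("model.onnx")                     # placeholder model file
compiled = core.compile_model(model, device_name="NPU")   # Intel-specific target

# Run a single inference with dummy data to show the call pattern.
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
result = compiled([dummy_input])[compiled.output(0)]
print(result.shape)
```

Compiling for the 'NPU' device string is what ties an application to Intel's hardware path; the same exported model can also run through a vendor-neutral runtime instead.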

However, the work with OpenVINO could provide Intel with a nice runway of Intel-specific AI features as it heads into the Lunar Lake generation. Those are the types of advantages the company is clearly looking to enable through its AI PC Accelerator Program. The company says it has seen plenty of enthusiasm from the developer community, particularly in light of Intel's stated goal of selling 100 million AI PCs by 2025, which represents a big market opportunity for new AI software.

That said, Microsoft's Copilot will run on NPUs from all vendors through DirectML, and having more TOPS will undoubtedly result in higher performance. That means we can expect a TOPS war to unfold, both in silicon and in marketing, over the coming years.
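
As a rough illustration of that vendor-agnostic route, the sketch below runs a model through ONNX Runtime's DirectML execution provider, which hands the work to whatever compatible accelerator the Windows drivers expose. The model file and input shape are placeholders, and this is emphatically not Microsoft's Copilot code, just a generic example of the API in question.

```python
# Hedged sketch of the vendor-agnostic path: ONNX Runtime with the DirectML
# execution provider (requires the onnxruntime-directml package on Windows).
# The model file and input shape are placeholders, not from the article.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession(
    "model.onnx",
    providers=["DmlExecutionProvider", "CPUExecutionProvider"],  # DirectML first, CPU fallback
)

input_name = session.get_inputs()[0].name
dummy_input = np.random.rand(1, 3, 224, 224).astype(np.float32)
outputs = session.run(None, {input_name: dummy_input})
print(outputs[0].shape)
```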

Update 3/27/2024 6:50am PT: corrected Intel Meteor Lake and Ryzen Hawk Point NPU TOPS specifications. 
