Microsoft’s Copilot AI set to operate locally on future PCs, says Intel

news
Mar 28, 2024 | 3 mins
Artificial Intelligence | CPUs and Processors | Generative AI

AI-ready PCs will soon be able to handle more genAI tasks directly on the machine, potentially boosting performance and privacy.

Image: AI chip (Credit: Shutterstock)

Microsoft’s Copilot AI could soon run locally on PCs rather than relying on the cloud.

Intel told Tom’s Hardware that the chatbot could run on future AI-enabled PCs, which would need to incorporate neural processing units (NPUs) capable of more than 40 trillion operations per second (TOPS), a performance level no consumer processor currently available can match.

Intel said these AI PCs would be equipped to handle “more elements of Copilot” directly on the machine. Copilot currently relies on cloud processing for most tasks, which introduces noticeable delays, even for small requests. Shifting more of that work onto local hardware is expected to reduce those delays, potentially boosting both performance and privacy.

Intel did not immediately respond to a request for comment from Computerworld.

Copilot on your PC

As previously reported, Copilot on Windows 11, ChatGPT, Adobe Firefly, and similar genAI tools don’t actually process tasks on your computer; the work happens in remote data centers that consume significant resources and electricity. It is possible to run applications like the Stable Diffusion text-to-image model or the Llama language model locally, but achieving high-quality results typically requires a PC with substantial processing power, especially the kind of high-speed GPU once sought after for cryptocurrency mining.
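To give a sense of what “running locally” looks like in practice, here is a minimal sketch using the open-source llama-cpp-python bindings, one common way to run a Llama-family model on your own machine. The library choice, the quantized model file name, and the generation settings are illustrative assumptions, not details from Intel or Microsoft.

```python
# Minimal local-inference sketch using llama-cpp-python
# (pip install llama-cpp-python). The GGUF file below is a
# placeholder for any quantized model you have downloaded locally.
from llama_cpp import Llama

llm = Llama(
    model_path="llama-2-7b-chat.Q4_K_M.gguf",  # placeholder: a local quantized model
    n_gpu_layers=-1,  # offload every layer to the GPU, if one is present
    n_ctx=2048,       # context window
)

result = llm("Q: What does an NPU do? A:", max_tokens=64, stop=["Q:"])
print(result["choices"][0]["text"])
```

Even a modest 7-billion-parameter model at 4-bit quantization needs several gigabytes of memory, which is why dedicated low-power accelerators matter for making this routine on consumer laptops.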

Recent advancements in hardware, particularly the inclusion of neural processing units in the latest Intel Meteor Lake chips and AMD’s Ryzen AI processors, have sparked discussions about AI-powered PCs. These NPUs are dedicated, low-power components designed to run generative AI models locally and efficiently. The expectation is that NPUs will become a standard feature in future PCs, enabling genAI tasks to operate seamlessly in the background, even on battery power.
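As a concrete illustration, Intel’s OpenVINO toolkit already exposes the Meteor Lake NPU as a compile target alongside the CPU and GPU. The sketch below shows the general pattern; the model file name is a placeholder, and nothing here describes how Copilot itself would be deployed.

```python
# Sketch: targeting an NPU through Intel's OpenVINO toolkit
# (pip install openvino). "model.xml" is a placeholder for an
# OpenVINO IR file you have converted or downloaded yourself.
import openvino as ov

core = ov.Core()
print(core.available_devices)  # on a Meteor Lake laptop this list can include 'NPU'

model = core.read_model("model.xml")         # placeholder IR model
compiled = core.compile_model(model, "NPU")  # compile for the NPU; "CPU" works as a fallback
```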

For example, MSI’s latest AI engine recognizes your laptop activities and automatically adjusts the battery profile, fan speed, and screen settings to suit your task. When you’re gaming, it boosts performance to the max; switch to working on Word documents, and it dials everything back.

AI on the go

The local AI trend isn’t limited to PCs. Google’s Pixel 8 and Pixel 8 Pro smartphones, for instance, are equipped with the Tensor G3 chip, which Google says sets the stage for on-device generative AI. The chip already powers AI-driven features like audio summarization in the Recorder app and smart reply generation in the Gboard keyboard. Despite these advances, however, such hardware cannot yet run large models like Google’s Bard, Copilot, or ChatGPT locally; instead, these devices run more compact models.

One benefit of local AI processing is that it could enhance cybersecurity. Cybersecurity consultant John Bambenek pointed out that one of the biggest risks companies face when applying AI to intellectual property is controlling where data flows and who can access it.

“We’ve seen enough third-party breaches of cloud services to know that even with promises, the data can be lost,” he added. “If organizations can do Microsoft’s Copilot AI locally, the CISOs still feel they have control of their data, and it will remove what is likely the largest barrier to adoption that exists.”