Ignite 2025: Furthering Windows as the premier platform for developers, governed by security
https://blogs.windows.com/windowsdeveloper/2025/11/18/ignite-2025-furthering-windows-as-the-premier-platform-for-developers-governed-by-security/ (Nov. 18, 2025)

Continuing Windows' evolution as a secure, open platform for AI and agents

At Build, we laid out our vision for the future of development on Windows, announcing new tools that empower developers to do their best work with the ultimate flexibility.
  • We open-sourced Windows Subsystem for Linux, making it easier than ever for developers to contribute, customize and help us integrate Linux more seamlessly into Windows.
  • With Microsoft Foundry on Windows, formerly known as Windows AI Foundry, we introduced a unified and reliable AI platform to support AI development across CPU, GPU and NPU.
  • And we announced native support for Model Context Protocol (MCP), which offers a standardized framework for AI agents to connect with apps.
Today, we expand on these foundations, evolving Windows to give developers a platform to build the next generation of software experiences that empower people and organizations at scale. As AI transforms the way we work, agents are becoming powerful tools that make users more productive, handling routine tasks and taking away the drudgery so users can focus on what matters most.

To empower developers and organizations on this journey, Windows is evolving as an operating system with the foundational structures to make agents on Windows more effective, secure and governable—with flexibility for developers and peace of mind for organizations to embrace this trend with confidence.

To realize this vision, we've spent the past year listening closely to developers and actively engaging with the broader community: learning about pain points, tracking emerging needs and identifying opportunities to make Windows a secure platform for the future of AI and agents. That feedback and those community insights have directly shaped the updates we are introducing today.

What’s new for Windows Platform at Ignite:

  • Public preview of native support for Model Context Protocol (MCP) on Windows, a standardized framework for AI agents to connect with apps and tools to automate routine scenarios and perform tasks for users securely with user consent.
    • Public preview of Windows On-Device Registry (ODR), a secure, manageable repository of agent connectors (which are simply MCP servers).
    • Public preview of built-in agent connectors for File Explorer and System Settings. Agents can use the File Explorer connector to manage, organize and retrieve local files with user consent. With the System Settings connector, agents will be able to adjust Windows system settings, like changing from light mode to dark mode, or troubleshoot issues, while keeping the user in full control.
  • Private preview of Agent Workspace, a contained, policy-controlled and auditable environment where agents can interact with software and complete tasks for users in a parallel, separate desktop, without disrupting the user's primary session.
  • Introducing Agent ID—a unique ID, distinct from the user ID, that makes it possible to audit every action taken by an agent. Agent ID also helps IT distinguish agent interactions from user actions.
  • Secure-by-default policies for developers building agents and agent connectors, along with security controls for end users—keeping their data secure.
  • Enterprise manageability controls for IT admins to configure basic policies for agent adoption and use across their organizations, delivered through typical configuration channels for Configuration Service Provider (CSP) policies and Group Policy (GP), starting with Intune in public preview.
  • Public preview of new AI APIs—video super resolution (VSR) and Stable Diffusion XL (SDXL)—in Microsoft Foundry on Windows, formerly known as Windows AI Foundry. Developers can use these APIs, powered by Windows on-device models, to add AI-powered video enhancement (VSR) and image generation (SDXL) features to their apps.
These updates lay the foundation for a new generation of experiences, providing developers and enterprises with enhanced protection, transparency and governance—introducing platform-level security guardrails to help organizations begin adopting agent-powered workflows.

Announcing public preview of native support for Model Context Protocol (MCP) on Windows

The Model Context Protocol (MCP) is an open standard introduced by Anthropic in late 2024 to give AI agents a universal way to connect with external tools, data sources and services. By creating a common language for context exchange, MCP accelerated innovation and set the foundation for richer, more capable agentic workflows.

On Windows, we are taking MCP even further by catering to the needs of developers, IT professionals and end users. Users need easy discoverability and consistent controls that minimize security risks. IT professionals need robust security and manageability controls to deploy agents confidently across the organization. Developers need tools and libraries to build their servers and make them easily discoverable to agents without doing bespoke work for each platform.

To build these AI experiences and agents at scale, you need an OS that's built for it. This infrastructure can't be delivered through middleware or applications alone—it demands OS-level integration for security, consent and control, and we are building that secure OS-level integration and native agent infrastructure directly into Windows. That's why today we are announcing the public preview of native support for MCP on Windows: a standardized framework for AI agents to connect with apps and tools to automate routine scenarios and complete tasks for users.

Developers can build MCP servers to expose their app's functionality as agent connectors and register them in the Windows on-device registry.

Agent connectors are essentially MCP servers built by app developers and made available in the Windows on-device registry. These are agent-aware tools that agents can connect to in order to acquire new and unique skills and complete tasks for users. This includes built-in agent connectors from Windows, as well as local and remote connectors from our developer community.

Agents can discover and connect to these tools and other agents via the secure, manageable Windows on-device registry (ODR). By default, all agent connectors in the registry are contained in a secure environment with their own identity and audit trail. All communication between agents and agent connectors goes through the MCP proxy, a trusted gateway enabled by Windows to ensure secure communication. The proxy handles authentication (verifying the MCP client, the originator of the call), authorization (enforcing permissions and policies) and auditing (logging every interaction for compliance) for both local and remote MCP servers. Under the standard security policy, each agent connector has its own identity, and secure communication enforced through the MCP proxy ensures that agents and connectors can trust each other's provenance.

We are also introducing support for remote agent connectors. Developers can register remote endpoints, such as their cloud-based MCP servers, with the on-device registry, making them discoverable to any compatible agent alongside local agent connectors and exposing their apps' functionality to agents.
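
As a concrete illustration of what an agent connector is at its core, here is a minimal MCP server sketch using the official MCP Python SDK. The server name and tool logic are illustrative assumptions, not the built-in File Explorer connector, and registering the server with the Windows on-device registry is a separate step covered in the documentation linked below.

```python
# Minimal agent connector sketch using the official MCP Python SDK
# (pip install "mcp"). The server name and tool are illustrative only;
# registration with the Windows on-device registry is a separate step.
from pathlib import Path

from mcp.server.fastmcp import FastMCP

mcp = FastMCP("receipt-finder")

@mcp.tool()
def find_receipts(folder: str, keyword: str) -> list[str]:
    """Return paths of PDFs under `folder` whose names contain `keyword`."""
    root = Path(folder).expanduser()
    return [str(p) for p in root.rglob("*.pdf") if keyword.lower() in p.name.lower()]

if __name__ == "__main__":
    # stdio transport: the MCP host launches this process and exchanges
    # JSON-RPC messages over stdin/stdout.
    mcp.run()
```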

Get started: dive into the documentation. Platform capabilities in preview are coming soon.

  • You can build MCP servers that offer your apps' unique functionality as agent connectors and register them in the Windows on-device registry to be discovered by agents, enhancing reach and driving engagement for your apps. To get started building and registering agent connectors, check our documentation—https://aka.ms/RegisterMCPServer
  • You can package your agent connectors as either MSIX or MCPB (MCP Bundles). To package and register agent connectors, check our documentation—https://aka.ms/RegisterMCPBundle
  • As an agent developer, you can leverage agent connectors and benefit from apps' functionality to complete tasks for your users. To connect to, list and interact with agent connectors, check our documentation—https://aka.ms/MCPHostQuickstart (see the host sketch below).
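
To make the host side concrete, here is a hedged sketch of an MCP host connecting to a local connector with the same Python SDK. Discovery through the Windows on-device registry and the MCP proxy are not modeled here, and the server command and tool name are assumptions carried over from the server sketch above.

```python
# Sketch of an MCP host: connect to a local MCP server over stdio,
# enumerate its tools and invoke one.
import asyncio

from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main() -> None:
    params = StdioServerParameters(command="python", args=["receipt_finder.py"])
    async with stdio_client(params) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()
            print("tools:", [tool.name for tool in tools.tools])
            result = await session.call_tool(
                "find_receipts", {"folder": "~/Documents", "keyword": "hotel"}
            )
            print(result.content)

asyncio.run(main())
```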

Announcing public preview of Windows built-in agent connectors for File Explorer and System Settings

We are thrilled to announce the public preview of two agent connectors built into Windows: File Explorer and System Settings. These connectors are available via the on-device registry for agents to leverage to complete tasks for users on Windows.
  • File Explorer connector: Agents can use the File Explorer connector to manage, organize and retrieve local files on a user's device with their consent. On Copilot+ PCs, the connector can also perform natural language search to retrieve the exact file based on descriptions, content, metadata and, for images, enhanced search based on image classification.
  • System Settings connector: This connector helps agents adjust Windows system settings, like changing from light mode to dark mode, or troubleshoot issues, while keeping the user in full control.

Announcing private preview of Agent Workspace

In addition to using tools like agent connectors, agents can also interact with existing software or line-of-business applications to complete tasks. We are excited to announce the private preview of Agent Workspace: a contained, policy-controlled and auditable environment where agents can interact with software, just like people, to complete tasks for the user in a parallel and separate desktop, without disrupting the user's primary session.

Introducing Agent ID

When agents are allowed to use software like people do, it becomes more critical for IT professionals to clearly audit and distinguish between agent and user actions. To deliver transparency and control, we have built security primitives that require agents to operate with their own unique identity, completely distinct from the user's identity, governed by strict guardrails set by IT. This ensures every task, workflow and change is clearly tracked, making it easy to differentiate between what agents do and what users initiate. With these core primitives, agentic interactions on Windows are a step function more secure and contained than traditional apps.

Announcing public preview of Windows 365 for Agents

These platform primitives apply not just to agents running locally on the Windows client, but also in the cloud with Windows 365. To date, Windows 365 Cloud PCs have been designed for people, delivering the full Windows experience to power employee productivity on any device, anywhere. Today, we are thrilled to announce Windows 365 for Agents, which extends the local Agent Workspace concept to the cloud so agents can interact with existing software or line-of-business applications to complete tasks.

The key distinction is simple: on local PCs, agents operate in a secure workspace on the user's device; with Windows 365 for Agents, the Cloud PC itself becomes the agent's secure, policy-controlled environment. Agent developers can build and deploy agents with Windows 365 for Agents, which provides a comprehensive set of APIs for managing and utilizing compute resources. Agents running in Windows 365 can also use agent connectors and the Windows on-device registry. Learn more about Windows 365 for Agents—https://aka.ms/W365forAgentsIgniteBlog

Securing agentic interactions on Windows

In line with Microsoft's Secure Future Initiative commitment, security is our top priority as we expand MCP-powered capabilities and Agent Workspace on Windows. At Build this year, we outlined the principles guiding this structure, and last month we expanded on our foundational security principles for agentic AI experiences. These new capabilities in Windows adhere to a strong set of durable security and privacy principles:
  • Distinct agent accounts: Agents in Windows operate with dedicated agent accounts, separate from the user account on your device. This enables agent-specific policies and lets you share access to files and resources with agents in a secure manner, just as you would with other users on your device. IT admins using Agent 365 to build digital agents can manage Entra identity, policies, registry and observability through a single unified control plane.
  • Restricted agent privileges: By default, agents will start with minimal permissions and only gain access to resources you explicitly grant. Their actions are strictly bounded, and they cannot make changes to your device without your authorization. You can revoke access at any time.
  • Operational trust: Agents must be signed by a trusted source. Malicious or poorly behaved agents can be revoked and blocked using a range of defense-in-depth measures like certificate validation and antivirus.
  • Privacy-preserving design: Windows helps agents adhere to Microsoft’s commitments in the Microsoft Privacy Statement and Responsible AI Standard. Windows will support agents to collect and process data only for clearly defined purposes, enabling transparency and trust. See the Microsoft Privacy Report for details on our commitments to advancing AI responsibly while safeguarding privacy and other fundamental rights.
Today we begin to deliver on these commitments, and we will continuously learn and refine our approach as we gather real-world feedback from the public preview.

Secure by default agent policies

In alignment with the principles above, the standard security policy for agent connectors on Windows follows Microsoft's Secure Future Initiative (SFI) principle of "Secure by Default," ensuring every connector meets strict requirements for packaging, identity and containment. Agent connectors and agents running on Windows must meet the platform security bars around packaging, identity, provenance, containment and consent. The on-device registry will only return agent connectors and agents that meet the criteria below.
  • Packaging and identity: All applications must be packaged and have an identity established through trusted signing. This ensures that any connector available to an agent has an identity that can be asserted by Windows.
  • Private capabilities manifested: Developers are required to declare the minimum capabilities their agent connectors require in their package manifest.
  • Containment: Agents and connectors will run in a contained environment as an agent user.
Windows also provides developers with settings and tools to help ensure existing agent connectors work under the default security policy, including testing with fewer restrictions.
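
To illustrate the minimum-capabilities requirement, here is a sketch of a capability declaration in an MSIX appxmanifest. The <Capabilities> element and the capability names shown are standard MSIX; whether agent-connector-specific capabilities extend this schema is an assumption, and the exact manifest shape is defined in the registration documentation linked earlier.

```xml
<!-- Sketch: declare only what the connector actually needs. The
     capabilities below are standard MSIX examples; any connector-specific
     capability names are not shown and would follow the official docs. -->
<Capabilities>
  <Capability Name="internetClient" />
  <uap:Capability Name="documentsLibrary" />
</Capabilities>
```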

Security controls to manage agentic workflows

To protect user data during agentic operations, we provide key security controls: experimental agentic feature toggles, mandatory user consent and a dedicated Settings page to enable or disable access to agent connectors.
  • Experimental agentic features toggle: All agentic experiences powered by agent connectors and Agent Workspace are disabled by default, and are only enabled when the user turns on the toggle under Settings > System > AI components > Agent tools > Experimental agentic features.
  • Mandatory user consent: Under the standard security policy, whenever an agent wants to access sensitive information such as your files, applications or resources, Windows will always ask for your consent on the first occurrence.
  • Windows Settings for agent connectors: A dedicated Settings page for each agent lets users manage file access permissions and enable the connectors an agent needs to perform tasks.

Enterprise management policies and capabilities to ensure IT is always in control

IT admins can manage basic policies for agent adoption and use across their organizations through typical configuration channels for Configuration Service Provider (CSP) policies and Group Policy (GP), starting with Intune in public preview.
  • IT admins can enable or disable both local and remote agent connectors at the device level, using Intune or other MDM apps, through Configuration Service Provider (CSP) and Group Policy Object (GPO).
  • IT admins can enable or disable Agent Workspace at the device level, using Intune or other MDM apps, through both CSP and GPO.
  • IT admins can set minimum security policy levels for agent connectors at the device level through both CSP and GPO.
  • Agent connectors packaged using MSIX can be deployed and managed using existing enterprise-grade mechanisms such as Intune, Conditional Access and Managed Installers, already familiar to IT teams. Policy support for MCPB will be available in the coming months.
  • IT admins can access event logs, which enumerate key Agent connector events such as invocations in agent workspace, errors and registry updates.
Additional advanced manageability controls are planned for later in 2026. This CSP includes certain settings currently under development, available only in Windows Insider Preview builds; these settings may change based on the feedback we receive during preview.

Building next-gen AI experiences with our partners and developer community

We are excited to be partnering with many agent builders and app developers who are already leveraging the agentic platform on Windows to deliver next-gen AI experiences. Several partner experiences built with MCP are shown below.

Dynamics 365 is redefining expense management with MCP on Windows. Today, filing an expense report is a tedious, error-prone manual process, often taking a dozen steps and 30 or more minutes. With the Dynamics 365 agent in Microsoft 365 Copilot, this process is reduced to one sentence with high accuracy, saving you time to focus on the next important task. Under the hood, Microsoft 365 Copilot uses the File Explorer connector to securely access local files and find relevant receipts, powered by semantic search, in seconds. It then extracts details, generates expense lines and submits the expense, streamlining approvals and reducing friction to just one prompt with MCP on Windows. https://youtu.be/rv7kq4CFpko

Manus is an AI-powered general productivity agent that helps users with varied tasks such as creating websites, organizing files and generating content through simple prompts and secure integrations. Manus leverages MCP on Windows to let users build a website in minutes, directly from content stored on their PC, without uploading files or switching apps. The agent uses the File Explorer connector to fetch content and execute tasks entirely within the Windows security model. Beyond website creation, Manus can organize files, generate content and manage information through simple prompts and explicit user approval. This demonstrates the core value of MCP on Windows: enabling agents to act intelligently while keeping enterprise data protected and workflows more seamless. https://youtu.be/oWtm_dgtbAc

Claude by Anthropic is an AI productivity agent on platforms including Windows that helps users handle multi-step tasks efficiently. By connecting to File Explorer, with user consent, Claude can quickly find relevant documents like meeting notes and status updates, then generate summaries or reports in minutes. In a typical use case, Claude gathers all necessary files and produces an executive summary of a project, which can be sent directly through Outlook. This saves time while maintaining user privacy and control, showing how intelligent agents can streamline everyday work. https://youtu.be/oNAEW6N0aRg

On Windows, Dropbox Dash streamlines storage by merging files from sources like Dropbox and OneDrive into a single searchable hub. With MCP integration, agents in any application can quickly access curated content without manual searching, enabling faster execution, real-time collaboration and built-in compliance. Dropbox Dash simplifies cross-app workflows for enterprises seeking a unified experience.

New capabilities coming to Microsoft Foundry on Windows

An AI-native platform and machine learning models are essential to enabling advanced agentic experiences. Microsoft Foundry on Windows, formerly known as Windows AI Foundry and first introduced at Build 2025, is a unified and reliable AI platform that supports the AI developer lifecycle, from model selection and optimization to fine-tuning and deployment, across CPU, GPU and NPU. Microsoft Foundry on Windows gives developers the tools to build AI experiences on-device, whether they choose to use AI APIs powered by the inbox models that ship with Windows or access a rich catalog of pre-optimized open-source models in Foundry Local. At the foundation of Microsoft Foundry on Windows is Windows ML, which is generally available and simplifies the deployment of custom, proprietary models across varied Windows hardware. Thanks to deep collaboration with silicon partners like AMD, Intel, NVIDIA and Qualcomm, Windows ML offers unified execution, hardware mapping and power-aware performance, so models run efficiently on the local device.

Announcing new Windows AI APIs – Video super resolution (VSR) and Stable Diffusion XL (SDXL) – powering on-device AI through Microsoft Foundry on Windows

Today we are excited to announce new Windows AI APIs, now in public preview, that developers can leverage to bring local AI experiences to their apps: video super resolution (VSR) to upscale low-resolution streams, and Stable Diffusion XL (SDXL) for high-quality image generation. App content search also enters public preview as an API that enables fast, intelligent in-app search experiences, making it easy for developers to surface relevant content within their Windows apps.

Many leading app developers are already leveraging Microsoft Foundry on Windows to deliver innovative, secure and high-performance AI experiences locally on Windows.

Windows ML is driving innovation across industries, with partners like Roboflow leading the way. Roboflow, a Microsoft for Startups Pegasus Program participant, provides visual AI tools used by millions of developers and over half the Fortune 100 for computer vision applications both in the cloud and on-device. With Windows ML integration, Roboflow is able to deploy the RF-DETR model for state-of-the-art detection and instance segmentation on the edge, in scenarios ranging from cargo container tracking to manufacturing quality assurance.

Infosys, a global leader in digital services and consulting, has integrated Windows ML with Infosys Agentic Foundry, part of Infosys Topaz™. By leveraging custom models tailored with business data, Infosys is bringing a cloud-based invoice classification agentic AI system on-device. This advanced agentic application is designed to help Infosys business operations teams effortlessly understand the status of invoices from data embedded in emails, so they can quickly determine the actions needed to move those invoices through the workflow. This integration aims to significantly enhance and expedite the end-to-end process, while ensuring sensitive data remains secure without being transmitted to the cloud.

Many partners are leveraging open-source models from Foundry Local to power local AI workflows in their organization.

HCLTech is exploring a proctoring solution that monitors a user's presence, gaze and phone usage during assessments, using a custom YOLOv8 model and the Phi-4-mini-reasoning model from Foundry Local, ensuring privacy and enhanced monitoring. Cognizant is developing an offline plant disease detection solution that, based on leaf images, identifies the disease, describes the associated symptoms and recommends prevention and remediation steps, using the Phi-4-mini-reasoning model from Foundry Local and a custom plant disease classification model with Windows ML. Kahua is redefining field productivity, using locally run AI and agentic workflows to keep construction teams productive even when offline. Kahua is experimenting with Foundry Local models like Phi-4-mini-reasoning to analyze construction photos, detect defects like unpainted areas or uncapped pipes, and automatically generate structured data entries documenting those defects inside the Kahua application.

AnythingLLM, powered by Foundry Local, provides enterprises with secure, on-device document intelligence and agent automation through models like DeepSeek, Mistral, Phi and Qwen, while Belt triages and analyzes legal contracts and other sensitive email attachments using model families such as Phi-4 and Qwen through Foundry Local, all processed locally for privacy and efficiency. Cephable empowers users with its suite of AI productivity tools, such as summarization and rewrite, leveraging Foundry Local to offer on-device AI with state-of-the-art models like Phi and Qwen. By running these advanced AI workloads locally, Cephable not only enhances productivity but also ensures user data remains private, significantly reducing the risk of data leakage and minimizing cloud computing costs. Raycast integrates Foundry Local models, bringing privacy-first, on-device AI to the desktop to streamline automation and give users fast, secure access to their tasks and workflows.

Looking ahead

We are committed to building an even more robust and secure Windows platform for developers to build secure, intelligent, next-gen AI solutions. This is just the beginning. With Windows as the foundation, we're empowering each of you to unlock the full potential of next-gen computing, and we invite you to explore, build and help us shape the future of Windows.

Editor's note (Nov. 20, 2025): The enterprise management policies and capabilities section was updated to clarify how IT admins can enable or disable agent connectors and Agent Workspace; a sentence was also added about the availability of certain settings in this version of the CSP.

Windows ML is generally available: Empowering developers to scale local AI across Windows devices
https://blogs.windows.com/windowsdeveloper/2025/09/23/windows-ml-is-generally-available-empowering-developers-to-scale-local-ai-across-windows-devices/ (Sept. 23, 2025)

The future of AI is hybrid, utilizing the respective strengths of cloud and client while harnessing every Windows device to achieve more.
Windows ML is now generally available for production use, helping developers deploy production AI experiences in an evolving landscape. First introduced at Build 2025, Windows ML is the built-in AI inferencing runtime optimized for on-device model inference, with streamlined model dependency management across CPUs, GPUs and NPUs. It serves as the foundation for Windows AI Foundry and is utilized by Foundry Local to enable the expanded silicon support being released today.

By harnessing the power of CPUs, GPUs and NPUs from our vibrant silicon partner ecosystem and building on ONNX's strong momentum, Windows ML empowers developers to deliver real-time, secure and efficient AI workloads right on the device. This ability to run models locally enables developers to build AI experiences that are more responsive, private and cost-effective, reaching users across the broadest range of Windows hardware. https://youtu.be/Mow9UY_9Ab4

Bring your own model and deploy efficiently across silicon – securely and locally on Windows

Windows ML is compatible with ONNX Runtime (ORT), allowing developers to use familiar ORT APIs and enabling an easy transition for existing production workloads. Windows handles distribution and maintenance of ORT and the execution providers, taking that responsibility off the app developer. Execution providers (EPs) are the bridge between the core runtime and the powerful, diverse silicon ecosystem, enabling independent optimization of model execution on the different chips from AMD, Intel, NVIDIA and Qualcomm. With ONNX as its model format, Windows ML integrates smoothly with current models and workflows. Developers can easily use their existing ONNX models, or convert and optimize their source PyTorch models through the AI Toolkit for VS Code, and deploy across Windows 11 PCs.

Windows ML stack diagram

While AI developers work with various models, Windows ML acts as a hardware abstraction layer offering several benefits:
  • Simplified Deployment: Our infrastructure APIs allow developers to support various hardware architectures without multiple app builds by leveraging execution providers available on the device or by dynamically downloading them. Developers also have the flexibility to precompile their models ahead-of-time (AOT) for a streamlined end-user experience.
  • Reduced App Overhead: Windows ML automatically detects the user's hardware and downloads the appropriate execution providers, eliminating the need to bundle the runtime or EPs in a developer's application. This streamlined approach saves developers tens to hundreds of megabytes in app size when targeting a broad range of devices.
  • Compatibility: Through collaboration with our silicon partners, Windows ML aims to maintain conformance and compatibility, supporting ongoing updates while ensuring model accuracy across different builds through a certification process.
  • Advanced Silicon Targeting: Developers can assign device policies to optimize for low power (NPU), high performance (GPU) or specify the silicon used for a model.
For a more technical deep dive on Windows ML, learn more here.

Windows ML, optimized for the latest hardware in collaboration with our silicon partners

Windows 11 has a diverse hardware ecosystem that includes AMD, Intel, NVIDIA and Qualcomm and spans the CPU, GPU and NPU. Consumers can choose from a range of Windows PCs, and this variety empowers developers to create innovative local AI experiences. We worked closely with our silicon partners to ensure that Windows ML can fully leverage their latest CPUs, GPUs and NPUs for AI workloads. Silicon partners build and maintain execution providers that Windows ML distributes, manages and registers to run AI workloads performantly on-device, serving as a hardware abstraction layer for developers and a way to get optimal performance on each specific silicon.

AMD has integrated Windows ML support across its Ryzen AI platform, enabling developers to harness the power of AMD silicon via AMD's dedicated Vitis AI execution provider on NPU, GPU and CPU. Learn more.

"By integrating Windows ML support across our Ryzen AI platform, AMD is making it easier for developers to harness the combined power of our CPUs, GPUs and NPUs. Together with Microsoft, we're enabling scalable, efficient and high-performance AI experiences that run seamlessly across the Windows ecosystem." - John Rayfield, corporate vice president, Computing and Graphics Group, AMD

Intel's EP combines OpenVINO AI software performance and efficiency with Windows ML, empowering AI developers to easily choose the optimal XPU (CPU, GPU or NPU) for their AI workloads on Intel Core Ultra processor-powered PCs. Learn more.

"Intel's collaboration with Microsoft on Windows ML empowers developers to effortlessly deploy their custom AI models and applications across CPUs, GPUs and NPUs on Intel's AI-powered PCs. With the OpenVINO framework, Windows ML accelerates the delivery of cutting-edge AI applications, enabling faster innovation with unmatched efficiency, unlocking the full potential of Intel Core Ultra processors." - Sudhir Tonse Udupa, vice president, AI PC Software Engineering, Intel

NVIDIA's TensorRT for RTX EP enables AI models to be executed on NVIDIA GeForce RTX and RTX PRO GPUs using NVIDIA's dedicated Tensor Core libraries for maximum performance. This lightweight EP generates optimized inference engines — instructions on how to run the AI model — for the system's specific RTX GPU. Learn more.

"Windows ML with TensorRT for RTX delivers over 50% faster inferencing on NVIDIA RTX GPUs compared to DirectML in an easy-to-deploy package, enabling developers to scale generative AI across over 100 million Windows devices. This combination of speed and reach empowers developers to create richer AI experiences for Windows users." - Jason Paul, vice president, Consumer AI, NVIDIA

Qualcomm Technologies and Microsoft worked together to optimize Windows ML AI models and apps for the Snapdragon X Series NPU using the Qualcomm Neural Network Execution Provider (QNN EP), as well as GPU and CPU through integration with ONNX Runtime EPs. Learn more.

"With Windows ML now live and the preview of Foundry Local, this is a pivotal moment for AI developers on Windows. The new Windows ML runtime not only delivers cutting-edge on-device inference but also simplifies deployment, enabling developers to fully harness advanced AI processors on Snapdragon X Series platforms. Its unified framework and support for NPUs, GPUs and CPUs ensure exceptional performance and efficiency across Snapdragon Windows PCs. As agentic AI experiences become mainstream, our deep collaboration with Microsoft is accelerating innovation and bringing the best AI experiences to Windows Copilot+ PCs and soon to our next-generation Snapdragon X2 platform." - Upendra Kulkarni, VP, Product Management, Qualcomm Technologies, Inc.

Enabling local AI in the Windows software ecosystem

While developing Windows ML, we prioritized feedback from app developers building AI-powered features, and we worked with them to test the integration during public preview. Leading software developers such as Adobe, BUFFERZONE, Dot Inc., McAfee, Reincubate, Topaz Labs and Wondershare are among the many working to adopt Windows ML in their upcoming releases, accelerating the proliferation of local AI capabilities across a broad spectrum of applications. By leveraging Windows ML, our software partners can focus on building unique AI-powered features without worrying about hardware differences. Their early adoption and feedback show strong momentum toward local AI, enabling faster development and unlocking new local AI experiences across a variety of use cases:
  • Adobe Premiere Pro and Adobe After Effects – accelerated semantic search of content in the media library, tagging audio segments by type, and detecting scene edits, all powered by local NPU in upcoming releases; with plans to progressively migrate the full library of existing on-device models to Windows ML.
  • BUFFERZONE enables real-time secure web page analysis, protecting users from phishing and fraud without sending sensitive data to the cloud.
  • Camo by Reincubate leverages real-time image segmentation and other ML techniques to improve webcam video quality when streaming and presenting while using the NPU across all silicon providers.
  • Dot Vista by Dot Inc. supports hands-free voice control and optical character recognition (OCR) for accessibility scenarios, including deployments in healthcare environments using NPUs in Copilot+ PCs.
  • Filmora by Wondershare uses AI-powered body effects optimized for NPU acceleration on AMD, Intel and Qualcomm platforms, including real-time preview and application of Body effects such as Lightning Twined, Neon Ring and Particle Surround.
  • McAfee uses automatic detection of deepfake videos and other scam vectors that can be encountered on social networks.
  • Topaz Photo by Topaz Labs is a professional-grade image enhancement application that lets photographers sharpen details, restore focus and adjust levels on every shot they take - all powered by AI.

Simplified tooling for Windows ML

Developers can take advantage of Windows ML by starting with a robust set of tools for simplified model deployment. AI Toolkit for VS Code provides powerful tools for model and app preparation, including ONNX conversion from PyTorch, quantization, optimization, compilation and evaluation, all in one place. These features make it easier to prepare and deploy efficient models with Windows ML, eliminating the need for multiple builds and complex logic. Starting today, developers can also try custom AI models with Windows ML in AI Dev Gallery, an interactive workspace that makes it easier to discover and experiment with AI-powered scenarios using local models.
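
For a sense of what the conversion step in that pipeline does, here is a minimal sketch of exporting a PyTorch model to ONNX with the standard torch.onnx.export API. The toy model is an assumption for illustration; a real AI Toolkit workflow would additionally quantize and optimize the result.

```python
# Export a PyTorch model to ONNX so an ONNX-based runtime can load it.
# The two-layer model here is a stand-in for a real network.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10)).eval()
example_input = torch.randn(1, 128)

torch.onnx.export(
    model,
    example_input,
    "model.onnx",
    input_names=["input"],
    output_names=["logits"],
    dynamic_axes={"input": {0: "batch"}},  # allow variable batch size
)
```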

Get started today

With Windows ML now generally available, Windows 11 provides a local AI inference framework that’s ready for production apps. Windows ML is included in the Windows App SDK (starting with version 1.8.1) and supports all devices running Windows 11 24H2 or newer. To get started developing with Windows ML:
  • Update your project to use the latest Windows App SDK
  • Call the Windows ML APIs to initialize EPs, then load any ONNX model and start inferencing in just a few lines of code (a sketch of this flow follows the list). For detailed tutorials, API reference and sample code, visit aka.ms/TryWinML
  • For interactive samples of custom AI models with Windows ML, try the AI Dev Gallery at aka.ms/ai-dev-gallery
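
Since Windows ML is compatible with ONNX Runtime APIs, the load-and-infer flow can be sketched with the plain onnxruntime Python package. The provider names below are real ORT execution providers, but which ones are present depends on the device, and Windows ML itself handles EP download and registration for you.

```python
# EP selection + inference, ONNX Runtime style: prefer the NPU (QNN),
# then the GPU (DirectML), then fall back to the CPU.
import numpy as np
import onnxruntime as ort

preferred = ["QNNExecutionProvider", "DmlExecutionProvider", "CPUExecutionProvider"]
providers = [ep for ep in preferred if ep in ort.get_available_providers()]

session = ort.InferenceSession("model.onnx", providers=providers)
input_name = session.get_inputs()[0].name
logits = session.run(None, {input_name: np.random.rand(1, 128).astype(np.float32)})
print(logits[0].shape)
```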

Develop local AI solutions with Windows ML

Windows development has always been about enabling developers to do more with software and hardware. Windows ML lets both new and experienced developers build AI-powered apps easily, focusing on innovation while reducing app size. We at Microsoft are excited to see what new experiences you will create using Windows ML across Windows 11 PCs. The era of intelligent, AI-enhanced Windows apps is here, and it's available to every developer. Let's usher in this new wave of innovation together with Windows ML!

Editor's note (Sept. 24, 2025): Updated to reflect announcements at Qualcomm's Snapdragon Summit on Sept. 24 and to correct the link for McAfee.

Available today: DeepSeek R1 7B & 14B distilled models for Copilot+ PCs via Azure AI Foundry – further expanding AI on the edge
https://blogs.windows.com/windowsdeveloper/2025/03/03/available-today-deepseek-r1-7b-14b-distilled-models-for-copilot-pcs-via-azure-ai-foundry-further-expanding-ai-on-the-edge/ (March 3, 2025)

At Microsoft, we believe the future of AI is happening now — spanning from the cloud to the edge. Our vision is bold: to build Windows as the ultimate platform for AI innovation, where intelligence isn't just in the cloud but seamlessly woven throughout.
Following our recent announcement bringing NPU-optimized versions of the DeepSeek-R1 1.5B distilled model directly to Copilot+ PCs, we're taking the next step forward with the availability of DeepSeek R1 7B and 14B distilled models for Copilot+ PCs via Azure AI Foundry. This milestone reinforces our commitment to delivering cutting-edge AI capabilities that are fast, efficient and built for real-world applications, helping developers, businesses and creators push the boundaries of what's possible. https://www.youtube.com/watch?v=GotHKdBQPw4

Availability starts with Copilot+ PCs powered by Qualcomm Snapdragon X, followed by Intel Core Ultra 200V and AMD Ryzen. The ability to run 7B and 14B parameter reasoning models on Neural Processing Units (NPUs) is a significant milestone in the democratization and accessibility of artificial intelligence. It allows researchers, developers and enthusiasts to leverage the substantial power of large-scale machine learning models directly from their Copilot+ PCs, which include an NPU capable of over 40 trillion operations per second (TOPS).

NPUs are purpose-built to run AI models locally on-device with exceptional efficiency 

NPUs like those built into Copilot+ PCs are purpose-built to run AI models with exceptional efficiency, balancing speed and power consumption. They ensure sustained AI compute with minimal impact on battery life, thermal performance and resource usage, leaving CPUs and GPUs free to perform other tasks and allowing reasoning models to operate longer and deliver superior results, all while keeping your PC running smoothly.

Efficient inferencing has heightened significance due to a new scaling law for language models: chain-of-thought reasoning during inference can improve response quality across various tasks. The longer a model can "think," the better its quality will be. Instead of increasing parameters or training data, this approach taps into additional computational power for better outcomes. DeepSeek distilled models exemplify how even small pretrained models can shine with enhanced reasoning capabilities, and when coupled with the NPUs on Copilot+ PCs, they unlock exciting new opportunities for innovation. Reasoning emerges in models of a certain minimum scale, and models at that scale must generate a large number of tokens to excel at complex multi-step reasoning. Although the NPU hardware helps reduce inference costs, it is equally important to maintain a manageable memory footprint for these models on consumer PCs with, say, 16GB of RAM.
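
To see why that memory footprint concern is real, here is a back-of-the-envelope sketch of weight memory at different precisions for a 14B-parameter model. The numbers ignore activations and the KV cache, so they are a lower bound on actual usage.

```python
# Approximate weight memory for a 14B-parameter model at various precisions.
params = 14e9

for name, bits in [("fp16", 16), ("int8", 8), ("int4", 4)]:
    gb = params * bits / 8 / 1e9
    print(f"{name}: ~{gb:.0f} GB of weights")
# fp16: ~28 GB -> cannot fit on a 16GB consumer PC
# int8: ~14 GB -> marginal once the OS and apps are counted
# int4: ~7 GB  -> leaves practical headroom
```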

Pushing the boundaries of what’s possible on Windows

Our research investments have enabled us to push the boundaries of what's possible on Windows at both the system level and the model level, leading to innovations like Phi Silica. With our work on Phi Silica, we were able to create a scalable platform for low-bit inference on NPUs, enabling powerful performance with minimal memory and bandwidth tax. Combined with the data privacy offered by local compute, this puts advanced scenarios like retrieval-augmented generation (RAG) and model fine-tuning at the fingertips of application developers.

We reused techniques such as QuaRot, a sliding window for fast first-token responses, and many other optimizations to enable the DeepSeek 1.5B release. We used Aqua, an internal automatic quantization tool, to quantize all the DeepSeek model variants to int4 weights with QuaRot while retaining most of the accuracy. Using the same toolchain we used to optimize Phi Silica, we quickly integrated all the optimizations into an efficient ONNX QDQ model with low-precision weights. Like the 1.5B model, the 7B and 14B variants use 4-bit block-wise quantization for the embeddings and language model head and run these memory-access-heavy operations on the CPU. The compute-heavy transformer block containing the context processing and token iteration uses int4 per-channel quantization for the weights alongside int16 activations.

We already see about 8 tokens/s on the 14B model (the 1.5B model, being very small, demonstrated close to 40 tokens/s), and further optimizations are coming as we leverage more advanced techniques. With all this in place, these nimble language models can think longer and harder. This durable path to innovation has made it possible for us to more quickly optimize larger variants of DeepSeek models (7B and 14B), and it will continue to enable us to bring more new models to run efficiently on Windows.
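
A minimal sketch of the per-channel int4 idea described above, in NumPy. This is a deliberate simplification: it omits QuaRot, block-wise grouping and the int16 activation path, and the one-scale-per-output-channel layout is an assumption about how the rows map to channels.

```python
# Symmetric per-channel int4 weight quantization: one scale per output
# channel, values clamped to the int4 range [-8, 7].
import numpy as np

def quantize_int4_per_channel(W: np.ndarray):
    scales = np.abs(W).max(axis=1, keepdims=True) / 7.0
    q = np.clip(np.round(W / scales), -8, 7).astype(np.int8)  # int4 stored in int8
    return q, scales

def dequantize(q: np.ndarray, scales: np.ndarray) -> np.ndarray:
    return q.astype(np.float32) * scales

W = np.random.randn(4, 8).astype(np.float32)
q, s = quantize_int4_per_channel(W)
print("max abs error:", np.abs(W - dequantize(q, s)).max())
```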

Get started today

Developers can access all distilled variants (1.5B, 7B and 14B) of DeepSeek models and run them on Copilot+ PCs by simply downloading the AI Toolkit VS Code extension. The DeepSeek model optimized in the ONNX QDQ format is available in AI Toolkit’s model catalog, pulled directly from Azure AI Foundry. You can download it locally by clicking the “Download” button. Once downloaded, experimenting with the model is as simple as opening the Playground, loading the “deepseek_r1_1_5” model and sending it prompts.

Run models across Copilot+ PCs and Azure

Copilot+ PCs offer local compute capabilities that extend the capabilities enabled by Azure, giving developers even more flexibility to train and fine-tune small language models on-device and leverage the cloud for larger, more intensive workloads. In addition to the ONNX model optimized for Copilot+ PCs, you can also try the cloud-hosted source model in Azure AI Foundry by clicking the "Try in Playground" button under "DeepSeek R1." AI Toolkit is part of your developer workflow as you experiment with models and get them ready for deployment. With this playground, you can effortlessly test the DeepSeek models available in Azure AI Foundry for local deployment too. Through this, developers now have access to the most complete set of DeepSeek models available through Azure AI Foundry, from cloud to client.

Copilot+ PCs pair efficient local compute with the near-infinite compute Microsoft offers via its Azure services. With reasoning able to span the cloud and the edge, running in sustained loops on the PC and invoking the much larger brains in the cloud as needed, we are on to a new paradigm of continuous compute creating value for our customers. The future of AI compute just got brighter! We can't wait to see the new innovations from our developer community taking advantage of these rich capabilities. Please keep the feedback coming!

Running Distilled DeepSeek R1 models locally on Copilot+ PCs, powered by Windows Copilot Runtime
https://blogs.windows.com/windowsdeveloper/2025/01/29/running-distilled-deepseek-r1-models-locally-on-copilot-pcs-powered-by-windows-copilot-runtime/ (Jan. 29, 2025)
Update (Feb. 3, 2025): Today, we are pleased to announce that the distilled DeepSeek R1 models optimized using ONNX are now available to use on your Snapdragon-powered Copilot+ PCs. With further optimizations in place, the model is capable of a time to first token of less than 70 ms for short prompts (<64 tokens) and a throughput rate of up to ~40 tokens/s. The time to first token scales with the length of the input prompt, and the throughput rate varies based on the complexity of the task specified in the prompt; responses exhibit a throughput range of ~25-40 tokens/s, with longer responses especially likely to enjoy higher rates. Get started today by downloading the AI Toolkit extension in VS Code.

AI is moving closer to the edge, and Copilot+ PCs are leading the way. With cloud-hosted DeepSeek R1 available on Azure AI Foundry, we're bringing NPU-optimized versions of DeepSeek-R1 directly to Copilot+ PCs, starting with Qualcomm Snapdragon X, followed by Intel Core Ultra 200V and others. The first release, DeepSeek-R1-Distill-Qwen-1.5B (Source), will be available in AI Toolkit for VS Code, with the 7B (Source) and 14B (Source) variants arriving soon. These optimized models let developers build and deploy AI-powered applications that run efficiently on-device, taking full advantage of the powerful NPUs in Copilot+ PCs.

The Neural Processing Unit (NPU) on Copilot+ PCs offers a highly efficient engine for model inferencing, unlocking a paradigm where generative AI can execute not just when invoked, but as semi-continuously running services. This empowers developers to tap into powerful reasoning engines to build proactive and sustained experiences. With our work on Phi Silica, we were able to harness highly efficient inferencing, delivering very competitive time to first token and throughput rates while minimally impacting battery life and consumption of PC resources. Running models on the NPU is about speed and efficiency. For example, as mentioned in previous posts, the Phi Silica token iterator on the NPU exhibits a 56% improvement in power consumption compared to operating on the CPU. Such efficiency enables new experiences that demand state-of-the-art models in the main loop of the program, without draining your battery or overheating your device.

The optimized DeepSeek models for the NPU take advantage of several key learnings and techniques from that effort, including how we separate out the various parts of the model to drive the best tradeoffs between performance and efficiency, low-bit-rate quantization and mapping transformers to the NPU. Additionally, we take advantage of Windows Copilot Runtime (WCR) to scale across the diverse Windows ecosystem with the ONNX QDQ format.

Get ready to play! First things first, let's give it a whirl. To see DeepSeek in action on your Copilot+ PC, simply download the AI Toolkit VS Code extension. The DeepSeek model optimized in the ONNX QDQ format is available in AI Toolkit's model catalog, pulled directly from Azure AI Foundry. You can download it locally by clicking the "Download" button. Once downloaded, experimenting with the model is as simple as opening the Playground, loading the "deepseek_r1_1_5" model and sending it prompts. In addition to the ONNX model optimized for Copilot+ PCs, you can also try the cloud-hosted source model in Azure AI Foundry by clicking the "Try in Playground" button under "DeepSeek R1."
AI Toolkit is part of your developer workflow as you experiment with models and get them ready for deployment. With this playground, you can effortlessly test the DeepSeek models available in Azure AI Foundry for local deployment. https://youtu.be/CFzH0sekxYI

Silicon Optimizations

The distilled Qwen 1.5B consists of a tokenizer, an embedding layer, a context processing model, a token iteration model, a language model head and a de-tokenizer. We use 4-bit block-wise quantization for the embeddings and language model head and run these memory-access-heavy operations on the CPU. We focus the bulk of our NPU optimization efforts on the compute-heavy transformer block containing the context processing and token iteration, where we employ int4 per-channel quantization for the weights alongside int16 activations. The precision mix is detailed in the table below.
Component            Precision          Host
Embeddings           w: int4, a: fp32   CPU
Context processing   w: int4, a: int16  NPU
Token iteration      w: int4, a: int16  NPU
Language model head  w: int4, a: fp32   CPU
While the Qwen 1.5B release from DeepSeek does have an int4 variant, it does not directly map to the NPU due to the presence of dynamic input shapes and behaviors, all of which required optimization to make the model compatible and extract the best efficiency. Additionally, we use the ONNX QDQ format to enable scaling across the variety of NPUs in the Windows ecosystem. We work out an optimal operator layout between the CPU and NPU for maximum power efficiency and speed.

To achieve the dual goals of low memory footprint and fast inference, much like Phi Silica, we make two key changes. First, we leverage a sliding window design that unlocks super-fast time to first token and long context support despite not having dynamic tensor support in the hardware stack. Second, we use the 4-bit QuaRot quantization scheme to truly take advantage of low-bit processing. QuaRot employs Hadamard rotations to remove outliers in weights and activations, making the model easier to quantize. QuaRot significantly improves quantization accuracy compared to existing methods such as GPTQ, particularly for low-granularity settings such as per-channel quantization. The combination of low-bit quantization and hardware optimizations such as the sliding window design helps deliver the behavior of a larger model within the memory footprint of a compact model.

With these optimizations in place, the model is capable of a time to first token of 130 ms and a throughput rate of 16 tokens/s for short prompts (<64 tokens). We compared responses from the original and quantized models to verify the minor differences between the two variants, with the latter being both fast and power-efficient.

Figure 1: Qualitative comparison. Sample responses from the original model (top) vs. the NPU-optimized model (bottom) for the same prompt, including the model's reasoning capability. The model follows a similar reasoning pattern and reaches the same answer, demonstrating that the optimized model retains the reasoning ability of the original model.

With the speed and power characteristics of the NPU-optimized version of the DeepSeek R1 models, users will be able to interact with these groundbreaking models entirely locally. We are excited about what this capability enables for the future of the PC experience and look forward to innovations from our developer community.
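
To make the QuaRot intuition concrete, here is a small NumPy sketch showing how an orthogonal Hadamard rotation spreads a weight outlier across channels, which is what makes the rotated tensor friendlier to low-bit quantization. The real method also rotates activations and folds the rotations into adjacent layers, which this sketch omits.

```python
# Hadamard rotation flattens outliers: a single large weight becomes
# many small contributions after W_rot = H @ W @ H.T with orthogonal H.
import numpy as np
from scipy.linalg import hadamard

n = 64
H = hadamard(n) / np.sqrt(n)   # orthogonal: H @ H.T == identity

W = np.random.randn(n, n)
W[0, 0] = 50.0                 # inject an outlier

W_rot = H @ W @ H.T            # invertible, so the computation is preserved
print("max |W|    :", np.abs(W).max())      # ~50
print("max |W_rot|:", np.abs(W_rot).max())  # far smaller, easier to quantize
```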

Elevating the developer experience on Windows with new AI tools and productivity tools
https://blogs.windows.com/windowsdeveloper/2023/11/15/elevating-the-developer-experience-on-windows-with-new-ai-tools-and-productivity-tools/ (Nov. 15, 2023)

With the latest Windows 11 update on Sept. 26, we released a host of developer features as core components of the Windows OS with the intent of making every developer more productive on Windows.
Windows AI Studio simplifies generative AI app development

Many developers and enterprises want to bring AI-differentiated experiences to their apps, and we have heard from these developers that they need an easier, trusted way to get started with local AI development. With many tools, frameworks and open-source models available, it is difficult to pick the right set of tools to test, fine-tune and optimize models, or to select the most trusted models that best fit diverse business needs. That's why we are thrilled to announce Windows AI Studio, a new experience for developers that extends the tooling of Azure AI Studio to jumpstart AI development locally on Windows.

Getting started with AI development locally on Windows is easier and faster than ever

Windows AI Studio simplifies generative AI app development by bringing together cutting-edge AI development tools and models from Azure AI Studio and other catalogs like Hugging Face, enabling developers to fine-tune, customize and deploy state-of-the-art small language models (SLMs) for local use in their Windows apps. This includes an end-to-end guided workspace setup with a model configuration UI and guided walkthroughs to fine-tune popular SLMs like Phi. Developers can then rapidly test their fine-tuned model using the Prompt Flow and Gradio templates integrated into the workspace (see the sketch below for the Gradio half of that flow).

Windows AI Studio brings us closer to supporting Hybrid Loop development patterns and enabling hybrid AI scenarios across Azure and client devices. This gives developers the choice to run their models in the cloud on Azure, on the edge locally on Windows, or across the two, to meet their needs. Prompt Flow makes it easier than ever to implement this hybrid pattern by switching between local SLMs and cloud LLMs.

In a typical fine-tuning workflow, developers bring their own datasets for fine-tuning; see our fine-tuning guide for details on how to get started. Note that the fine-tuning and model evaluation steps are iterative, repeating until the model meets the developer's evaluation criteria.

In the coming weeks, developers can access Windows AI Studio as a VS Code extension, a familiar and seamless interface to help you get started with AI development. The guided interface allows you to focus on what you do best, coding, while we do the heavy lifting of setting up your developer environment with all the tools needed. Learn more about Windows AI Studio.
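
The Gradio half of that testing flow amounts to wrapping the fine-tuned SLM in a chat UI. Here is a minimal sketch assuming a Hugging Face transformers pipeline and a placeholder path to a fine-tuned checkpoint; both are assumptions, since the workspace generates its own template.

```python
# Minimal Gradio test harness for a locally fine-tuned SLM. The model
# path is a placeholder for your own fine-tuned checkpoint.
import gradio as gr
from transformers import pipeline

generate = pipeline("text-generation", model="./phi-finetuned")

def chat(message, history):
    # pipeline output includes the prompt text; fine for a quick smoke test
    return generate(message, max_new_tokens=200)[0]["generated_text"]

gr.ChatInterface(chat).launch()
```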

Windows optimized state-of-the-art models

In addition to fine-tuning capabilities, Windows AI Studio will also highlight state-of-the-art (SOTA) models optimized specifically for Windows GPUs and NPUs, starting with Llama 2-7B, Mistral-7B, Falcon-7B and Stable Diffusion XL.

Earlier this year, we talked about how ONNX Runtime is the gateway to Windows AI. DirectML is the native Windows machine learning API, and together they give developers access to a simplified yet highly performant AI development experience. With Olive, a powerful optimization tool for ONNX models, developers can ensure that their models run as performantly as possible with the DirectML + ONNX Runtime combo. At Inspire this year, we shared details on how developers will be able to run Llama 2 with DirectML and ONNX Runtime, and we have been hard at work to make this a reality. We now have a sample showing our progress with Llama 2 7B: after an Olive optimization pass, it shows how developers can run this versatile LLM locally and performantly on varied Windows hardware. We're excited about this milestone, and this is only a first peek. Stay tuned for future enhancements to support even larger models, fine-tuning and lower-precision data types. Learn more.

Windows Subsystem for Linux (WSL) offers a robust platform for AI development on Windows by making it easy to run Windows and Linux workloads simultaneously. Developers can easily share files, GUI apps, GPUs and more between environments with no additional setup. WSL has now been enhanced to meet enterprise-grade security requirements, so enterprise customers can confidently deploy WSL for their developers to take advantage of both Windows and Linux operating systems on the same Windows device and accelerate AI development efficiently.

Windows Subsystem for Linux now offers new enterprise features that enhance security and simplify deployment

It’s now easier than ever to securely deploy WSL to your company with the latest enterprise features. These include:
  • Microsoft Defender for Endpoint released a new plug-in for WSL that enables security teams to continuously monitor events in all running distributions – delivering unparalleled visibility into systems once considered a critical blind spot.
  • Access to WSL and its key security settings are now controllable with Intune. Admins can configure access to WSL entirely, or control access to specific security settings like custom kernels, nested virtualization and more, to ensure security while using WSL.
  • Advanced networking controls in WSL let you specify firewall rules that apply to the WSL virtual machine and improve network compatibility in complex enterprise environments (several of these controls also surface as client-side settings; see the sketch below). Learn more to get started with WSL today!
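
On the client side, several of these features correspond to documented WSL settings in the user's .wslconfig file. A sketch follows, with illustrative values; enterprise policy delivered via Intune or Group Policy takes precedence over anything a user sets locally.

```ini
# %UserProfile%\.wslconfig — documented [wsl2] settings related to the
# features above; the values here are illustrative.
[wsl2]
firewall=true                 # apply Windows Firewall rules to WSL traffic
networkingMode=mirrored       # better network compatibility in enterprise setups
nestedVirtualization=false    # one of the security settings IT can lock down
# kernel=C:\\kernels\\custom  # custom kernel path, if policy allows
```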
We want to ensure Windows is optimized for developers and helps you be productive across any development you do: desktop, web, AI or cross-platform. That's why we introduced Dev Home, your ultimate productivity companion, at Build 2023. Dev Home is a new experience for developers on Windows 11 that helps you get back in the zone and streamlines your workflows, boosting your productivity. It assists you in setting up your dev environment by downloading apps, packages or repositories, and lets you connect to your developer accounts and tools such as GitHub. Today, Dev Home is getting even better.

Dev Home now has an Azure DevOps extension so you can stay on top of your daily tasks

We are thrilled to release Dev Home v0.7 with Azure DevOps (ADO) support, powered by the new Dev Home Azure extension. This extension allows you to easily clone your Azure repositories using Dev Home to get your machine to a code-ready state, manage your ADO projects and get productive right away from the Windows desktop. Additionally, you can pin ADO widgets to display query results and query tiles that provide easily glanceable information for the projects you care about most. Enterprises can take advantage of Dev Home to onboard new team members and projects faster, and developers can stay on top of projects, queries and relevant tasks from Dev Home.

With our focus on empowering every developer to be an AI developer and continued investments in developer productivity, we believe Windows now provides the best platform for you to jumpstart local AI development and create cutting-edge experiences for your customers. We are humbled and excited to be on this journey with you. We love hearing from our developer community and want to continue working with you to build the experiences and features you want. Share your feedback with us by reaching out on our social channels @WindowsDev on LinkedIn, X (formerly known as Twitter), Facebook and Instagram.

Editor's note (Nov. 15, 2023): This post was updated to more accurately reflect what an SLM is.