Microsoft Agent 365 gives security teams a governance layer for AI agents operating inside your Microsoft 365 environment: discovery, identity controls, Intune-based blocking. What it does not cover is the browser. Every time an employee opens ChatGPT in Chrome, pastes source code into Claude from a personal account, or installs an AI extension on a device that is not Intune-enrolled, that activity happens outside Agent 365’s visibility entirely.
Shadow AI refers to AI tools, agents, and workflows that employees use without IT awareness or formal approval. In a Microsoft 365 environment specifically, this includes unauthorized local agents like OpenClaw, consumer AI tools accessed through personal accounts, AI-connected MCP servers, third-party Copilot plugins, and AI-enabled browser extensions running across any browser employees choose to use.
The challenge is not that employees are trying to create security problems. They are trying to meet deadlines. A developer installs a local AI coding assistant. A sales rep connects a personal ChatGPT account to their workflow. A marketing manager pastes a strategy document into Gemini to get a first draft. None of these require IT approval, none get logged, and none are visible to the security team until something goes wrong.
According to LayerX’s Browser Security Report 2025, nearly 90% of AI logins in enterprise environments bypass oversight entirely, with 67% of employees accessing GenAI tools via personal accounts. That is not a visibility gap at the edge of your environment. That is the center of your environment.
Microsoft Agent 365 is a control plane for AI agents operating within the Microsoft 365 ecosystem. It integrates three existing Microsoft security platforms to provide agent-specific governance: Microsoft Entra handles agent identity and access control, Microsoft Purview manages data security and compliance for agent interactions, and Microsoft Defender provides threat detection and posture management.
On the shadow AI side specifically, Agent 365 includes a dedicated Shadow AI (Frontier) page in the Microsoft 365 admin center. This feature focuses on detecting and governing unapproved local AI agents. When an organization enables the detection policy for a known shadow AI agent, Agent 365 can identify which managed Windows devices have that agent installed and push a blocking policy through Intune.
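The detection flow just described can be approximated from the outside through the Microsoft Graph Intune "discovered apps" inventory. The sketch below is illustrative, not Agent 365's internal mechanism: it assumes a Graph access token with the `DeviceManagementManagedDevices.Read.All` permission, uses the `deviceManagement/detectedApps` endpoint, and omits token acquisition.

```javascript
// Illustrative sketch (assumed flow, not Agent 365's internals): querying the
// Intune "discovered apps" inventory via Microsoft Graph to find managed
// Windows devices running a named agent such as OpenClaw.

// Pure matcher over inventory rows, case-insensitive on app name.
function matchAgent(apps, agentName) {
  const needle = agentName.toLowerCase();
  return apps.filter((a) => a.displayName.toLowerCase().includes(needle));
}

// Graph call (requires DeviceManagementManagedDevices.Read.All consent).
// Note the coverage boundary: only Intune-enrolled devices ever appear here.
async function listDetectedApps(token) {
  const res = await fetch(
    "https://graph.microsoft.com/v1.0/deviceManagement/detectedApps",
    { headers: { Authorization: `Bearer ${token}` } },
  );
  if (!res.ok) throw new Error(`Graph request failed: ${res.status}`);
  return (await res.json()).value;
}

// Offline demo with sample inventory rows:
const sampleApps = [
  { id: "1", displayName: "OpenClaw", deviceCount: 12 },
  { id: "2", displayName: "Microsoft Teams", deviceCount: 4210 },
];
console.log(matchAgent(sampleApps, "openclaw").map((a) => a.displayName));
// prints [ 'OpenClaw' ]
```

The same inventory query is also a quick way to audit the boundary itself: any device that never shows up in `detectedApps` is, by definition, outside this detection model.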
The Agent 365 security architecture also surfaces agent sprawl risks that emerge from over-privileged agents, misconfigured agents, and tool misuse patterns including prompt injection. These are genuine governance capabilities that address a real and growing problem in enterprise AI environments.
This is where security architects need to read carefully. The Agent 365 Shadow AI detection feature is not available to all Microsoft 365 customers by default. As of the current preview, it requires a Microsoft 365 E3 license minimum, enrollment in the Frontier preview program, and critically, Microsoft Intune enrollment for managed Windows devices.
That last prerequisite carries significant weight. Detection and blocking through Agent 365 currently applies only to managed Windows devices enrolled with Microsoft Intune. A user on a Mac, on a personal laptop, on a contractor device, or on any Windows device not enrolled in Intune sits entirely outside this detection boundary. Additionally, the current public preview of the Shadow AI (Frontier) feature supports detection and blocking for a single known agent: OpenClaw.
Microsoft has signaled the feature set will expand. But as it stands today, the architectural constraint is real: Agent 365’s shadow AI controls require Intune management, Windows devices, and known agent signatures to do their work.
Agent 365 governs AI agents at the identity and endpoint layer. It can manage what registered agents can access, enforce conditional access policies tied to agent identities, detect known shadow agents on managed endpoints, and audit agent activity flowing through Microsoft’s own security toolchain. That is a meaningful security layer.
The boundary sits at the browser session. Agent 365 has no mechanism to observe what an employee types into ChatGPT in a browser tab, what they paste into Claude or Gemini during a work session, which AI tools they access through personal accounts on managed or unmanaged devices, or what AI-enabled browser extensions are doing inside active sessions on any browser other than Edge for Business.
Microsoft Edge for Business addresses part of this gap through Purview prompt-level DLP, which can audit or block sensitive content submitted to select AI tools. But this protection applies only when employees are signed into Edge for Business with their Entra ID credentials. Switch to Chrome, Firefox, or any other browser, and the coverage stops. For organizations with BYOD policies, contractor workforces, or mixed-browser environments, this creates a structural blind spot that no combination of Agent 365 and Edge for Business can fully close on its own.
Three risk categories emerge consistently when organizations look at the surface Agent 365 does not cover.
The first is personal account access to sanctioned and unsanctioned AI tools. LayerX research shows that 71.6% of enterprise access to GenAI tools happens through non-corporate accounts. When an employee accesses ChatGPT, Claude, or Gemini through a personal Gmail account, that session is invisible to Agent 365, Entra, and Purview. The user may be on a fully Intune-managed device with all policies applied. The data they are moving into that AI tool is completely ungoverned at the session level.
The second is copy-paste activity. File-based DLP has existed for years. What it cannot catch is the paste. LayerX’s Browser Security Report 2025 found that 77% of employees paste data into GenAI prompts, with 50% of that paste activity including corporate data. No endpoint tool sees a paste event. No network tool sees what content was carried in it. This is the primary data exfiltration vector in modern enterprise environments, and it happens entirely inside the browser.
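To see why only a browser-resident control can observe a paste, consider a minimal content-script sketch. The patterns and policy below are illustrative stand-ins, not any vendor's actual detection rules:

```javascript
// Illustrative content-script sketch: the paste payload is only observable
// in-page, via the browser's clipboard events. The patterns below are
// stand-in examples of "corporate data", not real DLP rules.

const SENSITIVE_PATTERNS = [
  /\b\d{3}-\d{2}-\d{4}\b/,                  // US SSN-shaped string
  /-----BEGIN (RSA |EC )?PRIVATE KEY-----/, // pasted private key
  /\bAKIA[0-9A-Z]{16}\b/,                   // AWS access key ID shape
];

function isSensitivePaste(text) {
  return SENSITIVE_PATTERNS.some((re) => re.test(text));
}

// Inside a browser extension's content script, the paste can be inspected,
// and blocked, before the AI prompt box ever receives it:
if (typeof document !== "undefined") {
  document.addEventListener(
    "paste",
    (e) => {
      const text = e.clipboardData?.getData("text") ?? "";
      if (isSensitivePaste(text)) {
        e.preventDefault(); // the paste never reaches the page
        console.warn("Paste blocked by DLP policy");
      }
    },
    true, // capture phase: runs before the page's own handlers
  );
}

console.log(isSensitivePaste("SSN 123-45-6789")); // true
console.log(isSensitivePaste("quarterly roadmap draft")); // false
```

The key point is architectural: the `ClipboardEvent` fires inside the page, so no endpoint agent or network proxy is in a position to intercept it, only code running in the session.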
The third is AI access on unmanaged devices. Security architects at large enterprises know their managed device population is not their entire employee population. Contractors, part-time workers, remote employees on personal machines, and BYOD users all represent real vectors for AI data exposure. Agent 365’s Intune requirement means these users fall entirely outside its shadow AI governance model.
AI-enabled browser extensions are one of the fastest-growing and least-understood shadow AI vectors in enterprise environments. These extensions run inside the browser session, with access to page content, text inputs, clipboard data, and in many cases cookies and identity information. They do not require IT approval, do not appear in Intune inventories, and are not covered by Agent 365’s current shadow AI detection capabilities.
The scale of the risk is not hypothetical. LayerX’s Enterprise Browser Extension Security Report 2026 found that one in six enterprise users runs at least one AI-enabled browser extension, with 73% of those extensions carrying high or critical permission scope. AI extensions are 60% more likely to have a known CVE than the average extension, three times more likely to have access to cookies, and nearly six times more likely to change or expand their permissions over time after installation.
An employee using an AI writing assistant extension has granted that extension access to everything they type in their browser. That includes drafts pasted into email, content entered into internal tools, and prompts submitted to any AI platform they use during the workday. From a security perspective, this is a live, persistent data access grant that sits entirely below Agent 365’s detection threshold.
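A rough sense of what "high or critical permission scope" means can be had by scoring an extension's Manifest V3 declaration. The permission names below are real Chrome extension permissions, but the weights and threshold are illustrative assumptions, not LayerX's scoring methodology:

```javascript
// Illustrative sketch: scoring a Chrome extension manifest (MV3) for risky
// permission grants. Weights and the "critical" threshold are assumptions.

const RISKY_PERMISSIONS = {
  cookies: 3,       // session/identity theft potential
  clipboardRead: 3, // can read whatever the user copies
  webRequest: 2,    // can observe network traffic
  scripting: 2,     // can inject code into pages
  tabs: 1,          // can enumerate open tabs and URLs
};

function scoreManifest(manifest) {
  const perms = [
    ...(manifest.permissions ?? []),
    ...(manifest.optional_permissions ?? []),
  ];
  let score = perms.reduce((s, p) => s + (RISKY_PERMISSIONS[p] ?? 0), 0);
  // "<all_urls>" host access means every page's content is readable.
  const hosts = manifest.host_permissions ?? [];
  if (hosts.includes("<all_urls>")) score += 3;
  return { score, critical: score >= 5 };
}

// Sample manifest resembling a typical AI writing-assistant extension:
const sampleManifest = {
  name: "AI Writing Helper",
  manifest_version: 3,
  permissions: ["scripting", "clipboardRead", "cookies"],
  host_permissions: ["<all_urls>"],
};
console.log(scoreManifest(sampleManifest)); // { score: 11, critical: true }
```

Even this crude heuristic makes the asymmetry visible: a writing assistant that combines `scripting`, `clipboardRead`, and `<all_urls>` host access holds broader data reach than most installed desktop software, yet appears in no Intune inventory.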
The security team cannot govern what it cannot see, and Agent 365’s visibility does not extend to extension behavior inside browser sessions.
A complete shadow AI governance posture for organizations running Microsoft 365 requires two distinct layers, each covering a different part of the risk surface.
The first layer is the agent identity and endpoint layer. Agent 365, Entra, Purview, and Defender operate here. This layer governs known and registered AI agents, enforces least-privilege access for agents acting within the M365 ecosystem, detects known shadow agents on managed Windows endpoints, and audits agent activity within Microsoft’s security telemetry. For organizations deeply invested in the Microsoft stack, this layer is worth deploying and maturing.
The second layer is the browser session layer. This is where human-driven AI activity happens: employees accessing ChatGPT, Claude, Perplexity, Grammarly, and Gemini in real time, across any browser they use, on any device, through any account type. The browser session layer is where copy-paste exfiltration happens, where AI extensions operate, and where personal account access bypasses every identity governance control in the first layer.
These two layers are not redundant. They address structurally different threat vectors. A security architecture that has invested in Agent 365 without a browser-level AI governance layer has strong coverage for registered agents and a largely unmonitored surface for human-driven AI activity. A governance strategy that addresses both layers covers the full shadow AI problem in a Microsoft 365 environment.
Security teams running Agent 365 have strong coverage for known, registered AI agents operating through managed Windows endpoints. The surface that still needs coverage is the browser, where employees access ChatGPT, Claude, Gemini, Grammarly, and hundreds of other AI tools through personal accounts, on BYOD devices, across any browser they choose. LayerX’s Enterprise Browser Extension addresses this layer through Shadow AI Discovery and AI DLP: it surfaces every AI tool accessed in the browser regardless of account type or device management status, and applies real-time enforcement on prompts, pastes, and file uploads without requiring Intune enrollment or Edge for Business adoption.
Because LayerX operates at the browser session level rather than the identity or endpoint layer, it covers what Agent 365 was not designed to reach. Security teams get last-mile visibility into AI usage across Chrome, Firefox, Edge, and any other browser in the environment, with granular controls that range from monitor-only through warn, prevent, and redact depending on data classification and policy. Together, Agent 365 and LayerX address the full shadow AI surface in a Microsoft 365 environment: one governing AI agents at the identity layer, the other governing human AI sessions at the browser layer.
The most useful mental model is a coverage map rather than a product comparison. Agent 365 and browser-level AI security controls are not alternatives to each other. They address different threat surfaces at different layers of the stack.
Agent 365 owns the agent identity and lifecycle layer: registered agents, M365-integrated workflows, Copilot Studio agents, Intune-managed endpoints, and the Entra-Purview-Defender telemetry chain. It is the right tool for governing AI agents that operate within Microsoft’s ecosystem and that security teams have some prior awareness of.
Browser-level controls own the session layer: real-time activity across all browsers, personal account access, BYOD devices, AI extensions, copy-paste flows, and the long tail of consumer AI tools employees bring into the workplace without IT knowledge. This is the surface that generates the most data exposure events in practice, because it requires no formal agent deployment and no IT approval process to activate.
Security architects evaluating their shadow AI posture should ask two questions: first, can we see and govern AI agents operating within our M365 ecosystem at the identity level? Agent 365 answers that question. Second, can we see and govern AI activity happening in the browser, across all browsers, on all devices, through all account types? That second question requires a different layer of control, purpose-built for the browser session where most enterprise AI activity actually occurs.
Does Agent 365’s shadow AI detection cover unmanaged or non-Windows devices?
Agent 365’s Shadow AI detection and blocking currently applies only to managed Windows devices enrolled with Microsoft Intune. Unmanaged devices, personal laptops, BYOD endpoints, contractor machines, and any non-Windows device fall outside Agent 365’s current shadow AI detection scope. This is a design constraint of the Intune-based enforcement model, not a configuration issue.
Can Agent 365 see what employees type into ChatGPT, Claude, or Gemini in the browser?
No. Agent 365 governs AI agents at the identity and endpoint layer through Entra, Purview, and Defender. It does not have visibility into browser session activity, including prompts submitted to ChatGPT, Claude, Gemini, or other web-based AI tools. Microsoft Edge for Business can apply Purview DLP to prompts in select AI tools, but only when employees are signed in with Entra ID credentials on Edge for Business specifically. Any session on another browser falls outside this coverage.
What is the difference between shadow AI at the identity layer and at the browser layer?
Shadow AI at the identity layer refers to AI agents and tools that have been granted access to organizational data or systems without proper IT governance, such as an unauthorized local agent with Entra permissions or a third-party Copilot plugin with excessive access rights. Shadow AI at the browser layer refers to AI activity that happens inside browser sessions without IT visibility: employees accessing ChatGPT or Gemini through personal accounts, pasting sensitive data into AI prompts, or running AI browser extensions with broad page permissions. Agent 365 addresses the identity layer. Browser-level controls are needed for the session layer.
Does Agent 365’s shadow AI detection require Microsoft Intune?
Yes. As of the current public preview, Agent 365 Shadow AI detection requires Microsoft Intune enrollment for managed Windows devices. Detection and blocking policies are propagated through Intune and apply only to devices within that management scope. Organizations without comprehensive Intune coverage, or those with significant BYOD or contractor device populations, should plan for additional coverage layers to address the devices and sessions outside Intune’s reach.
Which AI agents can Agent 365 detect and govern today?
As of the public preview, Agent 365’s Shadow AI (Frontier) feature supports detection and blocking for OpenClaw, an unauthorized local AI coding agent. Microsoft has indicated the supported agent list will expand over time. The broader Agent 365 platform supports governance for Microsoft-native agents including Copilot and Copilot Studio agents, as well as third-party agents registered within the M365 ecosystem. Consumer AI tools accessed through web browsers, such as ChatGPT, Claude, and Gemini, are not within Agent 365’s current governance scope.
How can organizations govern AI usage on unmanaged or BYOD devices?
Agent 365 and the broader Microsoft security stack do not currently provide comprehensive AI governance for unmanaged or BYOD devices. Governing AI access on these devices requires controls that operate below the Intune enrollment requirement, specifically at the browser session level. A browser-based security layer deployed as an extension can enforce AI usage policies across any browser, on any device, regardless of whether the device is enrolled in Intune, which operating system it runs, or which account the employee uses to access AI tools.
If your organization is running Agent 365 and wants to understand what your current AI governance coverage map actually looks like, LayerX can show you exactly what is visible at the browser layer that Agent 365 cannot see.