
What Does Agent 365 Miss About Shadow AI in Your Environment?

Microsoft Agent 365 gives security teams a governance layer for AI agents operating inside your Microsoft 365 environment: discovery, identity controls, Intune-based blocking. What it does not cover is the browser. Every time an employee opens ChatGPT in Chrome, pastes source code into Claude from a personal account, or installs an AI extension on a device that is not Intune-enrolled, that activity happens outside Agent 365’s visibility entirely.

What is shadow AI in a Microsoft 365 environment?

Shadow AI refers to AI tools, agents, and workflows that employees use without IT awareness or formal approval. In a Microsoft 365 environment specifically, this includes unauthorized local agents like OpenClaw, consumer AI tools accessed through personal accounts, AI-connected MCP servers, third-party Copilot plugins, and AI-enabled browser extensions running across any browser employees choose to use.

The challenge is not that employees are trying to create security problems. They are trying to meet deadlines. A developer installs a local AI coding assistant. A sales rep connects a personal ChatGPT account to their workflow. A marketing manager pastes a strategy document into Gemini to get a first draft. None of these require IT approval, none get logged, and none are visible to the security team until something goes wrong.

According to LayerX’s Browser Security Report 2025, nearly 90% of AI logins in enterprise environments bypass oversight entirely, with 67% of employees accessing GenAI tools via personal accounts. That is not a visibility gap at the edge of your environment. That is the center of your environment.

What does Microsoft Agent 365 actually do to govern shadow AI?

Microsoft Agent 365 is a control plane for AI agents operating within the Microsoft 365 ecosystem. It integrates three existing Microsoft security platforms to provide agent-specific governance: Microsoft Entra handles agent identity and access control, Microsoft Purview manages data security and compliance for agent interactions, and Microsoft Defender provides threat detection and posture management.

On the shadow AI side specifically, Agent 365 includes a dedicated Shadow AI (Frontier) page in the Microsoft 365 admin center. This feature focuses on detecting and governing unapproved local AI agents. When an organization enables the detection policy for a known shadow AI agent, Agent 365 can identify which managed Windows devices have that agent installed and push a blocking policy through Intune.

The Agent 365 security architecture also surfaces agent sprawl risks that emerge from over-privileged agents, misconfigured agents, and tool misuse patterns including prompt injection. These are genuine governance capabilities that address a real and growing problem in enterprise AI environments.
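To make the detection flow above concrete, here is a minimal sketch of how a security team might sift an Intune detected-apps inventory for a known shadow agent. The object shape loosely mirrors the Microsoft Graph `detectedApp` resource (`displayName`, `deviceCount`), but treat the field names, the endpoint mentioned in the comment, and the agent name as assumptions to verify against your own tenant; this is illustrative logic, not Microsoft's implementation.

```typescript
// Sketch: filtering a detected-apps inventory for a known shadow agent.
// Field names are assumed from the Graph `detectedApp` resource shape;
// verify against real responses from your tenant before relying on them.

interface DetectedApp {
  displayName: string;
  deviceCount: number;
}

function findShadowAgent(apps: DetectedApp[], agentName: string): DetectedApp[] {
  const needle = agentName.toLowerCase();
  return apps.filter((a) => a.displayName.toLowerCase().includes(needle));
}

// Example inventory, as it might come back (abbreviated) from a query
// such as GET /deviceManagement/detectedApps.
const inventory: DetectedApp[] = [
  { displayName: "OpenClaw", deviceCount: 12 },
  { displayName: "Visual Studio Code", deviceCount: 480 },
];

const hits = findShadowAgent(inventory, "OpenClaw");
console.log(hits);
```

Note what this kind of inventory query cannot tell you, which is the article's point: it only enumerates software installed on Intune-enrolled Windows devices, so an agent on a Mac or an unenrolled laptop never appears in the result set.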

What are the prerequisites Agent 365 requires to detect shadow AI?

This is where security architects need to read carefully. The Agent 365 Shadow AI detection feature is not available to all Microsoft 365 customers by default. As of the current preview, it requires a Microsoft 365 E3 license minimum, enrollment in the Frontier preview program, and critically, Microsoft Intune enrollment for managed Windows devices.

That last prerequisite carries significant weight. Detection and blocking through Agent 365 currently applies only to managed Windows devices enrolled with Microsoft Intune. A user on a Mac, on a personal laptop, on a contractor device, or on any Windows device not enrolled in Intune sits entirely outside this detection boundary. Additionally, the current public preview of the Shadow AI (Frontier) feature supports detection and blocking for a single known agent: OpenClaw.

Microsoft has signaled the feature set will expand. But as it stands today, the architectural constraint is real: Agent 365’s shadow AI controls require Intune management, Windows devices, and known agent signatures to do their work.

Where does Agent 365’s shadow AI coverage stop?

Agent 365 governs AI agents at the identity and endpoint layer. It can manage what registered agents can access, enforce conditional access policies tied to agent identities, detect known shadow agents on managed endpoints, and audit agent activity flowing through Microsoft’s own security toolchain. That is a meaningful security layer.

The boundary sits at the browser session. Agent 365 has no mechanism to observe what an employee types into ChatGPT in a browser tab, what they paste into Claude or Gemini during a work session, which AI tools they access through personal accounts on managed or unmanaged devices, or what AI-enabled browser extensions are doing inside active sessions on any browser other than Edge for Business.

Microsoft Edge for Business addresses part of this gap through Purview prompt-level DLP, which can audit or block sensitive content submitted to select AI tools. But this protection applies only when employees are signed into Edge for Business with their Entra ID credentials. Switch to Chrome, Firefox, or any other browser, and the coverage stops. For organizations with BYOD policies, contractor workforces, or mixed-browser environments, this creates a structural blind spot that no combination of Agent 365 and Edge for Business can fully close on its own.

What shadow AI risks exist outside Agent 365’s detection boundary?

Three risk categories emerge consistently when organizations look at the surface Agent 365 does not cover.

The first is personal account access to sanctioned and unsanctioned AI tools. LayerX research shows that 71.6% of enterprise access to GenAI tools happens through non-corporate accounts. When an employee accesses ChatGPT, Claude, or Gemini through a personal Gmail account, that session is invisible to Agent 365, Entra, and Purview. The user may be on a fully Intune-managed device with all policies applied. The data they are moving into that AI tool is completely ungoverned at the session level.

The second is copy-paste activity. File-based DLP has existed for years. What it cannot catch is the paste. LayerX’s Browser Security Report 2025 found that 77% of employees paste data into GenAI prompts, with 50% of that paste activity including corporate data. No endpoint tool sees a paste event. No network tool sees what content was carried in it. This is the primary data exfiltration vector in modern enterprise environments, and it happens entirely inside the browser.

The third is AI access on unmanaged devices. Security architects at large enterprises know their managed device population is not their entire employee population. Contractors, part-time workers, remote employees on personal machines, and BYOD users all represent real vectors for AI data exposure. Agent 365’s Intune requirement means these users fall entirely outside its shadow AI governance model.
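Catching the paste path described above ultimately means inspecting paste payloads at the session layer, before they reach a prompt field. The sketch below shows the shape of a classifier a browser-extension content script might call on a paste event; the rule patterns and verdicts are illustrative placeholders, not any vendor's actual DLP ruleset.

```typescript
// Sketch: classifying pasted text before it reaches a GenAI prompt field.
// The patterns below are illustrative placeholders, not a production ruleset.

type Verdict = "allow" | "warn" | "block";

const RULES: { name: string; pattern: RegExp; verdict: Verdict }[] = [
  // Private key material: block outright.
  { name: "private-key", pattern: /-----BEGIN [A-Z ]*PRIVATE KEY-----/, verdict: "block" },
  // Strings that look like API keys or cloud credentials: block.
  { name: "api-key", pattern: /\b(sk|AKIA)[-_A-Za-z0-9]{16,}\b/, verdict: "block" },
  // Internal hostnames (hypothetical domain): warn, but let the user proceed.
  { name: "internal-host", pattern: /\b[\w.-]+\.corp\.example\.com\b/, verdict: "warn" },
];

function classifyPaste(text: string): Verdict {
  let verdict: Verdict = "allow";
  for (const rule of RULES) {
    if (rule.pattern.test(text)) {
      if (rule.verdict === "block") return "block"; // strictest rule wins
      verdict = "warn";
    }
  }
  return verdict;
}
```

In a real extension, a content script would wire this to the DOM `paste` event and read `event.clipboardData.getData("text/plain")`, which is exactly the vantage point no endpoint or network tool has.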

How do AI-enabled browser extensions create shadow AI risks Agent 365 cannot see?

AI-enabled browser extensions are one of the fastest-growing and least-understood shadow AI vectors in enterprise environments. These extensions run inside the browser session, with access to page content, text inputs, clipboard data, and in many cases cookies and identity information. They do not require IT approval, do not appear in Intune inventories, and are not covered by Agent 365’s current shadow AI detection capabilities.

The scale of the risk is not hypothetical. LayerX’s Enterprise Browser Extension Security Report 2026 found that 1-in-6 enterprise users run at least one AI-enabled browser extension, with 73% of those extensions carrying high or critical permission scope. AI extensions are 60% more likely to have a known CVE than the average extension, 3 times more likely to have access to cookies, and nearly 6 times more likely to change or expand their permissions over time after installation.

An employee using an AI writing assistant extension has granted that extension access to everything they type in their browser. That includes drafts pasted into email, content entered into internal tools, and prompts submitted to any AI platform they use during the workday. From a security perspective, this is a live, persistent data access grant that sits entirely below Agent 365’s detection threshold.

The security team cannot govern what it cannot see, and Agent 365’s visibility does not extend to extension behavior inside browser sessions.

What does a complete shadow AI governance posture look like for Microsoft 365 environments?

A complete shadow AI governance posture for organizations running Microsoft 365 requires two distinct layers, each covering a different part of the risk surface.

The first layer is the agent identity and endpoint layer. Agent 365, Entra, Purview, and Defender operate here. This layer governs known and registered AI agents, enforces least-privilege access for agents acting within the M365 ecosystem, detects known shadow agents on managed Windows endpoints, and audits agent activity within Microsoft’s security telemetry. For organizations deeply invested in the Microsoft stack, this layer is worth deploying and maturing.

The second layer is the browser session layer. This is where human-driven AI activity happens: employees accessing ChatGPT, Claude, Perplexity, Grammarly, and Gemini in real time, across any browser they use, on any device, through any account type. The browser session layer is where copy-paste exfiltration happens, where AI extensions operate, and where personal account access bypasses every identity governance control in the first layer.

These two layers are not redundant. They address structurally different threat vectors. A security architecture that has invested in Agent 365 without a browser-level AI governance layer has strong coverage for registered agents and a largely unmonitored surface for human-driven AI activity. A governance strategy that addresses both layers covers the full shadow AI problem in a Microsoft 365 environment.
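The two-layer model above can be sketched as a decision function: given how an AI interaction happens, which governance layer (if any) can see it? The attribute names and rules are simplified illustrations of the constraints described in this article (Intune-managed Windows plus a known signature for Agent 365's shadow-agent detection; any browser session for a browser-layer control), not a formal coverage specification.

```typescript
// Sketch: the two-layer coverage map as a decision function.
// Attributes and rules are illustrative simplifications, not a formal model.

interface AiActivity {
  kind: "registered-agent" | "local-agent" | "browser-session";
  os: "windows" | "macos" | "linux";
  intuneManaged: boolean;
  knownAgentSignature: boolean; // only meaningful for local agents
}

type Layer = "agent-identity-layer" | "browser-session-layer" | "uncovered";

function governingLayer(a: AiActivity): Layer {
  if (a.kind === "registered-agent") return "agent-identity-layer";
  if (a.kind === "local-agent") {
    // Agent 365 shadow-AI detection: Intune-enrolled Windows + known signature.
    return a.os === "windows" && a.intuneManaged && a.knownAgentSignature
      ? "agent-identity-layer"
      : "uncovered";
  }
  // Human-driven browser activity needs a session-level control to be visible.
  return "browser-session-layer";
}
```

Walking sample activities through a function like this is a quick way to surface the "uncovered" cell: a local agent on an unmanaged Mac falls through both conditions, which is precisely the gap the second layer exists to close.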

How does LayerX address the browser-level shadow AI gap?

Security teams running Agent 365 have strong coverage for known, registered AI agents operating through managed Windows endpoints. The surface that still needs coverage is the browser, where employees access ChatGPT, Claude, Gemini, Grammarly, and hundreds of other AI tools through personal accounts, on BYOD devices, across any browser they choose. LayerX’s Enterprise Browser Extension addresses this layer through Shadow AI Discovery and AI DLP: it surfaces every AI tool accessed in the browser regardless of account type or device management status, and applies real-time enforcement on prompts, pastes, and file uploads without requiring Intune enrollment or Edge for Business adoption.

Because LayerX operates at the browser session level rather than the identity or endpoint layer, it covers what Agent 365 was not designed to reach. Security teams get last-mile visibility into AI usage across Chrome, Firefox, Edge, and any other browser in the environment, with granular controls that range from monitor-only through warn, prevent, and redact depending on data classification and policy. Together, Agent 365 and LayerX address the full shadow AI surface in a Microsoft 365 environment: one governing AI agents at the identity layer, the other governing human AI sessions at the browser layer.
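The monitor, warn, prevent, redact spectrum mentioned above can be pictured as a simple mapping from data classification to enforcement action. The labels and the mapping below are an illustrative sketch for reasoning about policy design, not LayerX's actual policy schema.

```typescript
// Sketch: mapping a data-classification label to an enforcement action.
// Labels and mapping are illustrative, not any vendor's actual schema.

type Action = "monitor" | "warn" | "prevent" | "redact";
type Classification = "public" | "internal" | "confidential" | "restricted";

const POLICY: Record<Classification, Action> = {
  public: "monitor",       // log the event only
  internal: "warn",        // show an interstitial; the user may proceed
  confidential: "redact",  // strip the sensitive span, allow the rest through
  restricted: "prevent",   // block the submission entirely
};

function enforcementAction(label: Classification): Action {
  return POLICY[label];
}
```

The design point is that enforcement is graduated rather than binary: most real policies start in monitor-only mode to build a usage baseline, then tighten toward redact and prevent for the narrower set of genuinely sensitive classifications.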

Request a Demo

How should security architects think about Agent 365 and browser-level AI controls together?

The most useful mental model is a coverage map rather than a product comparison. Agent 365 and browser-level AI security controls are not alternatives to each other. They address different threat surfaces at different layers of the stack.

Agent 365 owns the agent identity and lifecycle layer: registered agents, M365-integrated workflows, Copilot Studio agents, Intune-managed endpoints, and the Entra-Purview-Defender telemetry chain. It is the right tool for governing AI agents that operate within Microsoft’s ecosystem and that security teams have some prior awareness of.

Browser-level controls own the session layer: real-time activity across all browsers, personal account access, BYOD devices, AI extensions, copy-paste flows, and the long tail of consumer AI tools employees bring into the workplace without IT knowledge. This is the surface that generates the most data exposure events in practice, because it requires no formal agent deployment and no IT approval process to activate.

Security architects evaluating their shadow AI posture should ask two questions: first, can we see and govern AI agents operating within our M365 ecosystem at the identity level? Agent 365 answers that question. Second, can we see and govern AI activity happening in the browser, across all browsers, on all devices, through all account types? That second question requires a different layer of control, purpose-built for the browser session where most enterprise AI activity actually occurs.

Frequently Asked Questions

Does Microsoft Agent 365 block shadow AI on all devices, or only managed ones?

Agent 365’s Shadow AI detection and blocking currently applies only to managed Windows devices enrolled with Microsoft Intune. Unmanaged devices, personal laptops, BYOD endpoints, contractor machines, and any non-Windows device fall outside Agent 365’s current shadow AI detection scope. This is a design constraint of the Intune-based enforcement model, not a configuration issue.

Can Agent 365 see what employees type into ChatGPT or other web-based AI tools?

No. Agent 365 governs AI agents at the identity and endpoint layer through Entra, Purview, and Defender. It does not have visibility into browser session activity, including prompts submitted to ChatGPT, Claude, Gemini, or other web-based AI tools. Microsoft Edge for Business can apply Purview DLP to prompts in select AI tools, but only when employees are signed in with Entra ID credentials on Edge for Business specifically. Any session on another browser falls outside this coverage.

What is the difference between shadow AI at the identity layer and shadow AI at the browser layer?

Shadow AI at the identity layer refers to AI agents and tools that have been granted access to organizational data or systems without proper IT governance, such as an unauthorized local agent with Entra permissions or a third-party Copilot plugin with excessive access rights. Shadow AI at the browser layer refers to AI activity that happens inside browser sessions without IT visibility: employees accessing ChatGPT or Gemini through personal accounts, pasting sensitive data into AI prompts, or running AI browser extensions with broad page permissions. Agent 365 addresses the identity layer. Browser-level controls are needed for the session layer.

Do I need Intune to use Agent 365 Shadow AI detection?

Yes. As of the current public preview, Agent 365 Shadow AI detection requires Microsoft Intune enrollment for managed Windows devices. Detection and blocking policies are propagated through Intune and apply only to devices within that management scope. Organizations without comprehensive Intune coverage, or those with significant BYOD or contractor device populations, should plan for additional coverage layers to address the devices and sessions outside Intune’s reach.

What AI tools does Agent 365 currently support for shadow AI governance?

As of the public preview, Agent 365’s Shadow AI (Frontier) feature supports detection and blocking for OpenClaw, an unauthorized local AI coding agent. Microsoft has indicated the supported agent list will expand over time. The broader Agent 365 platform supports governance for Microsoft-native agents including Copilot and Copilot Studio agents, as well as third-party agents registered within the M365 ecosystem. Consumer AI tools accessed through web browsers, such as ChatGPT, Claude, and Gemini, are not within Agent 365’s current governance scope.

How do security teams govern AI access on unmanaged or BYOD devices in a Microsoft 365 environment?

Agent 365 and the broader Microsoft security stack do not currently provide comprehensive AI governance for unmanaged or BYOD devices. Governing AI access on these devices requires controls that operate below the Intune enrollment requirement, specifically at the browser session level. A browser-based security layer deployed as an extension can enforce AI usage policies across any browser, on any device, regardless of whether the device is enrolled in Intune, which operating system it runs, or which account the employee uses to access AI tools.

See How LayerX Covers the Browser-Level Shadow AI Gap

If your organization is running Agent 365 and wants to understand what your current AI governance coverage map actually looks like, LayerX can show you exactly what is visible at the browser layer that Agent 365 cannot see.

Request a Demo

 

Boaz Yona · Published 2026
