GenAI has become an essential component of the employee toolkit. Yet beyond the bombastic headlines, actual quantitative data about how AI applications are being used remains surprisingly scarce. At LayerX, our enterprise browser extension provides us with visibility into users’ activities in the browser, where GenAI application usage takes place. We can use that visibility to derive comprehensive insights into the usage of GenAI tools and AI-enabled SaaS applications.
To shed light on GenAI usage for the benefit of the security community, we’ve compiled and analyzed that data, creating a report with surprising findings about how enterprise users consume AI in the workplace. Based on real-life data and telemetry collected from LayerX Security’s customer base, this is a first-of-its-kind report on enterprise use of GenAI.
Below, we present some of the key findings. To read the full “Enterprise GenAI Data Security Report 2025,” including all data points and advanced analysis, click here.
How Widespread Is GenAI Usage?
While GenAI tools like ChatGPT, Gemini, and Copilot are becoming regular watercooler talk, LayerX has found that actual consumption remains casual. Only 14.5% of enterprise users engage with these tools weekly. Most of this activity is through ChatGPT. Software developers are the most frequent users (39%), followed by sales and marketing (28%).
This suggests that while GenAI has made significant inroads since late 2022, it has not yet become an integral part of daily workflows for most. However, adoption is expected to grow, and organizations should track usage to better understand how AI is being leveraged across different roles.
Which AI SaaS Applications are in Use?
LayerX data reveals that AI application usage is heavily concentrated among a few dominant tools, with ChatGPT alone accounting for nearly 52% of all AI-related website requests. The top five AI applications, including Gemini, Claude, and Copilot, make up 86% of AI usage, while the bottom 50 collectively account for less than 1%.
From a security and IT perspective, this creates a “shadow AI” problem, where numerous lesser-known AI applications operate under the radar with little oversight, potentially exposing sensitive data and creating security risks. To mitigate this, security teams should implement monitoring and governance that tracks AI usage across all SaaS applications, not just the dominant few.
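As a starting point, shadow AI discovery can be as simple as matching web-request telemetry against a watchlist of known GenAI domains. The sketch below illustrates the idea in Python; the domain list, log format, and `audit_requests` helper are illustrative assumptions, not LayerX’s actual detection logic.

```python
# A minimal sketch of shadow-AI discovery: match web-request telemetry
# against a watchlist of known GenAI domains. The domain list, log format,
# and audit_requests helper are illustrative assumptions only.

from collections import Counter
from urllib.parse import urlparse

# Illustrative watchlist; a real deployment needs a continuously
# maintained feed, since the long tail of AI tools changes constantly.
KNOWN_AI_DOMAINS = {
    "chatgpt.com", "chat.openai.com", "gemini.google.com",
    "claude.ai", "copilot.microsoft.com",
}

def audit_requests(request_urls):
    """Count how many logged requests hit each known GenAI domain."""
    hits = Counter()
    for url in request_urls:
        host = urlparse(url).netloc.lower()
        if host in KNOWN_AI_DOMAINS:
            hits[host] += 1
    return hits

if __name__ == "__main__":
    sample_log = [
        "https://chatgpt.com/c/123",
        "https://claude.ai/chat/abc",
        "https://chatgpt.com/c/456",
    ]
    for host, count in audit_requests(sample_log).most_common():
        print(f"{host}: {count} requests")
```

The hard part in practice is not the matching but the watchlist itself: the long tail of lesser-known tools is exactly where shadow AI hides, so a static list goes stale quickly.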
Do Organizations Have Visibility Into Workplace AI Usage?
The vast majority of workplace AI usage occurs outside organizational oversight, with 71.6% of GenAI tool access happening via non-corporate accounts. Even where corporate accounts are used, only 11.7% of total logins meet the security standard of a corporate account backed by single sign-on (SSO). This effectively leaves nearly 90% of AI tool usage invisible to organizations.
Employees using personal accounts for GenAI tools bypass corporate safeguards, exposing company data to potential security risks, including unauthorized use of that data for AI model training. Even when corporate accounts are used, organizations lose visibility into AI interactions if those accounts are not federated through SSO.
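To make the visibility gap concrete, here is a hedged sketch of how login events might be bucketed into the three tiers the data describes: corporate accounts with SSO, corporate accounts without SSO, and personal accounts. The event fields (`email`, `auth_method`) and the corporate domain are hypothetical placeholders for whatever an identity or browser-telemetry log actually exposes.

```python
# A minimal sketch that buckets GenAI logins into three visibility tiers.
# The event fields ("email", "auth_method") and the corporate domain are
# hypothetical placeholders, not a real telemetry schema.

CORPORATE_DOMAIN = "example.com"  # placeholder corporate email domain

def classify_login(event: dict) -> str:
    """Classify one login event by account type and auth method."""
    email = event.get("email", "").lower()
    is_corporate = email.endswith("@" + CORPORATE_DOMAIN)
    uses_sso = event.get("auth_method") == "sso"
    if is_corporate and uses_sso:
        return "corporate-sso"     # fully visible and governable
    if is_corporate:
        return "corporate-no-sso"  # corporate account, limited visibility
    return "personal"              # effectively invisible to the org

logins = [
    {"email": "dev@example.com", "auth_method": "sso"},
    {"email": "dev@example.com", "auth_method": "password"},
    {"email": "dev@gmail.com",   "auth_method": "password"},
]
for event in logins:
    print(event["email"], "->", classify_login(event))
```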
What Information is Being Shared?
Only a small percentage of enterprise users share large amounts of data with GenAI tools, but those who do tend to do so frequently: heavy users paste data an average of 6.8 times per day, and over 50% of those pastes contain corporate information. File uploads, though less common, also occur at a notable rate.
Copy/paste is the primary exposure channel, since it allows large amounts of sensitive corporate data to be shared in seconds. File uploads compound the risk, as they often carry entire datasets that can leave the organization unnoticed if unmonitored. To mitigate potential data exposure, organizations should closely monitor user interactions with GenAI tools, track data-sharing activity, and implement controls to prevent unauthorized disclosure of corporate information.
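One way to operationalize that monitoring is a lightweight DLP-style check on captured paste events. The sketch below is a minimal illustration that assumes paste text is already being collected; the patterns and the `flag_paste` helper are hypothetical, and a production deployment would rely on a full DLP engine and data classifier.

```python
# A minimal DLP-style sketch: flag pasted text that matches simple
# corporate-data patterns. The patterns below are illustrative
# assumptions, not a real data-classification policy.

import re

CORPORATE_PATTERNS = {
    "internal_email": re.compile(r"\b[\w.+-]+@example\.com\b"),  # placeholder domain
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
    "confidential_marker": re.compile(r"\b(?:confidential|internal only)\b", re.I),
}

def flag_paste(text: str) -> list[str]:
    """Return the names of corporate-data patterns found in a pasted snippet."""
    return [name for name, pat in CORPORATE_PATTERNS.items() if pat.search(text)]

paste = "Attaching Q3 forecast (INTERNAL ONLY), contact finance@example.com"
print(flag_paste(paste))  # ['internal_email', 'confidential_marker']
```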
The Problem With AI Browser Extensions
GenAI-enabled browser extensions pose a significant but often overlooked security risk for enterprises. Our research indicates that over 20% of users have installed at least one AI-powered browser extension, and nearly half of them use multiple extensions. Alarmingly, 58% of these extensions request ‘high’ or ‘critical’ permissions, granting access to sensitive data such as cookies, browsing activity, and user identities. Even more concerning, 5.6% of AI-powered extensions are classified as malicious, making them a potential vector for data theft.
Browser extension security is an integral part of the organization’s overall GenAI security strategy. Security teams must implement controls over AI-enabled browser extensions, ensuring they receive the same scrutiny as direct GenAI access to mitigate potential threats.
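As an illustration of what that scrutiny might look like, the sketch below scores an extension’s risk from the permissions declared in its manifest.json. Chrome’s `permissions` and `host_permissions` manifest fields are real; the severity buckets are an assumption for demonstration, not the classification methodology used in the report.

```python
# A minimal sketch: score a browser extension's risk from the permissions
# in its manifest.json. The severity buckets below are an illustrative
# assumption; each organization would apply its own policy here.

HIGH_RISK = {"cookies", "history", "webRequest", "tabs", "identity"}
CRITICAL_RISK = {"<all_urls>", "debugger", "proxy"}

def score_extension(manifest: dict) -> str:
    """Return 'critical', 'high', or 'low' based on declared permissions."""
    perms = set(manifest.get("permissions", [])) | set(manifest.get("host_permissions", []))
    if perms & CRITICAL_RISK:
        return "critical"
    if perms & HIGH_RISK:
        return "high"
    return "low"

manifest = {
    "name": "Example AI Helper",  # hypothetical extension
    "permissions": ["cookies", "tabs"],
    "host_permissions": ["<all_urls>"],
}
print(score_extension(manifest))  # critical
```

Reading declared permissions is only a first filter: as the malicious-extension figure shows, some extensions abuse permissions they legitimately requested, so behavioral analysis belongs in the same workflow.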
What’s Next for Organizations
The report shows that organizations lack visibility into logins and access to AI SaaS applications, into the use of long-tail applications beyond the most popular ones, and into AI browser extensions. Gaining visibility into each of these hidden threat surfaces should be a priority for every organization. Since consumption occurs through the browser, preventing GenAI data leakage requires a browser-first security strategy: one that focuses on where and how users consume GenAI tools, and that deploys security tooling to enforce protections accordingly.