As an industry leader in browser security, we are committed to shedding light on the risks associated with revolutionary technologies like GenAI tools. Our latest research, “Revealing the True GenAI Data Exposure Risk”, provides critical insights into the scope and nature of these risks. Specifically, we examine the troubling risk of data exfiltration through GenAI.

To create the report, we analyzed how 10,000 employees used ChatGPT and other generative AI apps. The data came from devices with the LayerX browser extension installed. In this blog post, we delve into the key findings of the report and discuss how organizations can leverage this information to protect sensitive data. To read the full report with more details, click here.

5 Key GenAI Usage Findings

1. 44% Increase in GenAI app usage over the past 3 months

Not surprisingly, GenAI usage has increased significantly over the past few months. This is mainly attributable to the skyrocketing popularity of ChatGPT, and it’s reasonable to assume usage will only grow.

2. 15% of employees have pasted data into GenAI

To make the most of GenAI apps, employees are not only typing prompts but also pasting in data. As a reminder, this data may be used to train these apps, meaning it could surface in a response to other users. That makes paste actions a highly likely source of data exposure.
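To make the paste vector concrete, here is a minimal sketch of how a browser-layer control could inspect paste events on a GenAI site before the data reaches the prompt box. The patterns and the blocking policy are illustrative assumptions for this post, not LayerX’s actual detection logic:

```typescript
// Minimal illustrative sketch (not LayerX's actual implementation):
// a browser-extension content script that inspects paste events on a
// GenAI site and flags text matching simple sensitive-data patterns.

// Hypothetical patterns; real DLP engines use far richer detection.
const SENSITIVE_PATTERNS: { label: string; pattern: RegExp }[] = [
  { label: "email address", pattern: /[\w.+-]+@[\w-]+\.[\w.]+/ },
  { label: "credit card number", pattern: /\b(?:\d[ -]?){13,16}\b/ },
  { label: "private key", pattern: /-----BEGIN [A-Z ]*PRIVATE KEY-----/ },
  { label: "source code", pattern: /\b(function|class|import|def|#include)\b/ },
];

function findSensitiveMatches(text: string): string[] {
  return SENSITIVE_PATTERNS
    .filter(({ pattern }) => pattern.test(text))
    .map(({ label }) => label);
}

// Intercept paste events before the data reaches the GenAI prompt box.
document.addEventListener("paste", (event: ClipboardEvent) => {
  const pasted = event.clipboardData?.getData("text") ?? "";
  const matches = findSensitiveMatches(pasted);
  if (matches.length > 0) {
    // Block the paste and warn the user; a real policy might instead
    // log, redact, or require a business justification.
    event.preventDefault();
    alert(`Paste blocked: this looks like it contains ${matches.join(", ")}.`);
  }
});
```

Even a simple sketch like this shows why the browser is the natural enforcement point: the paste event can be inspected before any data ever leaves the device.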

3. R&D, Marketing & Sales, and Finance are the heaviest GenAI users

50% of ‘heavy’ GenAI users come from R&D, 23.8% from Marketing & Sales, and 14.3% from Finance. This means these departments represent the highest exposure risk for organizations.

4. 6% of employees have pasted sensitive data into GenAI; 4% do so weekly

Take note of this stat; it’s quite alarming: a significant share of GenAI users are exposing sensitive company data to GenAI tools. While the intent is probably innocent, the behavior is recurring, which compounds the risk of data exposure.

5. 31% of exposed data is source code

43% is internal business data and 12% is PII. Organizations invest significant effort and resources in protecting their crown jewels, yet on the other side of the open-plan office, those same crown jewels are being pasted into public tools.

Read the Complete Report

GenAI is opening up new opportunities for productivity, creativity, and innovation. However, it’s important to acknowledge the risks that come with it, specifically those associated with the security and privacy of sensitive data.

The purpose of this report is to expose the potential dangers of pasting sensitive data into GenAI tools. As we’ve shown, business plans, source code, and PII are being pasted into these public tools, and the numbers are only expected to grow.

Read the entire survey to get more details that will help you build your GenAI security strategy. Get additional insights like:

  • How many employees paste sensitive data on a daily basis
  • How many times a day data is pasted into GenAI apps
  • How to protect against sensitive data pasting
  • And more

Read the complete report.