A recent study has found that 15% of employees regularly paste company data into ChatGPT, and more than 25% of that data is considered sensitive, posing a security risk to their employers.

The research report, titled “Revealing the True genAI Data Exposure Risk,” analyzed the behavior of more than 10,000 employees to examine how they use ChatGPT and other generative AI applications in the workplace.

The findings revealed that at least 15% of workers use ChatGPT and other generative AI tools on the job, and nearly 25% of those instances involve pasting data.

On average, employees paste data into GenAI tools 36 times per day, and these numbers are expected to rise as the use of AI for productivity continues to grow.


The report highlighted that this behavior is recurring, with many employees pasting sensitive data on a weekly, and sometimes even daily, basis.

LayerX, in their 10-page report, stated, “Soon, we predict, employees will be using GenAI as part of their daily workflow, just like they use email, chats (Slack), video conferencing (Zoom, Teams), project management, and other productivity tools.”

While GenAI opens up new opportunities, it also presents significant risks to organizations, particularly in terms of the security and privacy of sensitive data, the report warned.

[Chart: Sensitive data pasted by workers into ChatGPT. Source: LayerX]

Furthermore, the top categories of confidential information being input into GenAI tools include internal business data at 43%, source code at 31%, and personally identifiable information (PII) at 12%.

LayerX noted, “Organizations might be unknowingly sharing their plans, product, and customer data with competitors and attackers.”

Because GenAI platforms operate in the browser, existing security solutions are unable to address risks such as the pasting of sensitive data, according to the study.

Key findings revealed that 4% of employees paste sensitive data into GenAI on a weekly basis, thereby increasing the chances of sensitive data exfiltration.

Moreover, the study found that 50% of the most active GenAI users are from Research and Development (R&D), followed by Sales & Marketing at over 23% and Finance at over 14%.

“For example, a Sales manager using GenAI to produce an executive summary of their quarterly performance would have to provide the GenAI tool with the actual sales results data,” the report explained.

The study found that a significant portion of GenAI users not only rely on prompt instructions but also paste data to generate the desired text, inadvertently exposing sensitive company data to GenAI.

ChatGPT amassed more than 100 million active users by January 2023, just two months after its release.

By April, ChatGPT’s reported growth had accelerated further, to more than 800 million active users per month.

The report also found that 44% of workers have used GenAI in the past three months, and a small percentage of them visit AI sites and apps more than 50 times per month.

Pointing to its prediction that GenAI will become part of the daily workflow, LayerX stated, “This assumption might be reinforced by the fact that even in the last month, GenAI users accounted for less than 20% of the entire workforce.”
