Data Reveals IT Leaders Unaware of Generative AI Threats

by Mayniaga

ExtraHop®, a prominent leader in cloud-native network detection and response (NDR), has unveiled a new research report titled "The Generative AI Tipping Point."

This study reveals that many enterprises are grappling with the challenges of understanding and mitigating the security risks associated with employee use of generative AI, shedding light on the cognitive dissonance experienced by security leaders as generative AI increasingly integrates into the workplace.

The report delves into organizations' strategies for securing and overseeing generative AI tools, finding that many remain uncertain about the best approaches to addressing the associated security risks.

The findings indicate that 73% of IT and security leaders acknowledge that their employees occasionally or frequently use generative AI tools or large language models (LLMs) at work, even though about 32% of respondents reported that their organizations have banned the use of such tools.

Interestingly, security leaders seem to prioritize concerns about obtaining inaccurate or nonsensical responses (40%) over security-related issues such as the exposure of customer and employee personally identifiable information (PII) (36%), trade secrets (33%), and financial loss (25%).

A similar proportion (36%) nonetheless expressed high confidence in their ability to safeguard against AI-related threats.
