Privacy Daily is a service of Warren Communications News.
Can't Outsource Risk

Shadow AI Dominates IBM Data Breach Report, Lawyers Say

Shadow AI remains a significant challenge for businesses, said panelists during a Practising Law Institute webinar Monday. They discussed an IBM Cost of a Data Breach 2025 report that focused on the problem of employees using AI platforms that aren't sanctioned by the workplace.

Shadow AI is “still a really significant problem that companies have not gotten their arms around and is having financial and reputational impact,” said panelist Gail Gottehrer, founder and CEO of a privacy and cybersecurity consultancy.

Even before the prevalence of AI, “there was a reason that IT departments" blocked certain material from reaching employees' devices, she said. “We all know anything you can download for free over the internet could be troubling.” As AI use has become widespread, the cybersecurity risk has morphed from deployed AI to shadow AI use.

Leeza Garber, a cybersecurity and privacy attorney, said the report showed that “the true focus [for everyone] should be on how we deploy and implement AI,” no matter its intended use. “You can have [an] AI platform that is completely company sanctioned,” she said, since for what is “within your four walls, you have complete control.”

“But the second somebody downloads a platform … that's when the problems start.”

The report, Gottehrer said, showed shadow AI was the reason for 20% of data breaches, which is “a huge number.”

In addition, it indicates that although “companies have rushed to roll out AI and adopt AI in their companies,” they “have not kept pace with governance” and have failed to “put governance in place either before putting AI into place or contemporaneously,” Gottehrer said.

She also said that company-sanctioned AI tools and systems usually have undergone a vetting process in the IT department, and “hopefully” a legal and cybersecurity review as well. The “hope” is that “the risks associated with it are lower because you vetted it,” and a “security incident could be headed off or anticipated” since the IT department is monitoring it, she added.

Garber noted that “this isn't necessarily some high-end malicious hacker that's getting into the nuts and bolts” of an AI platform, but rather someone “taking advantage of the fact that if somebody is using an unsanctioned, unprotected AI platform for work purposes, they are inputting data that is potentially extremely confidential.”

Also, employees who turn to shadow AI are typically not malicious, Gottehrer said. These are people who “maybe feel pressure to move faster, or to write emails that sound better, or for whatever reason, are just trying to use this to do their job.”

An example of this is AI notetakers, because in an “average conversation, you don't know who's going to say what.” Someone could “inadvertently discuss something confidential, and once it gets out of the bag, there's no getting it back,” she said.

The report also found that customer personally identifiable information (PII) was the most common data compromised in shadow AI incidents, which Garber said “should frighten everybody.” Not only can customer PII “arguably be called the crown jewels for many organizations,” but its exposure also reveals a lack of understanding of the privacy implications of using freely available AI tools.

“We're not saying” that if you eliminate shadow AI use “you'll never have an AI-related cybersecurity incident … but your chances of avoiding one will be better,” Gottehrer said. “If you can look at any one thing and get rid of 20% of your breaches and security incidents, that's like a dream come true.”

Though the report revealed that shadow AI security incidents cost around $200,000 -- a figure both lawyers called surprisingly low -- the bigger problem for businesses is that such incidents anger their customers, Garber said. She also noted that the 2025 report found that the cost of a data breach fell for the first time in around five years, though “overall, the costs are sky high compared to what they have been over the past decade.”

Despite the decline, “costs of breaches were still highest in the United States as compared to the rest of the world,” Gottehrer said.

To combat breaches, it’s important for companies to understand their data flows and supply chains, Garber said. “Outsourcing [an AI tool] doesn't mean you are outsourcing the liability” or “risk.”

The report examined around 600 organizations of various sizes that experienced a data breach between March 2024 and February 2025. They were situated across 16 regions, with about one-tenth of the companies in the U.S. and another 10% in India. Additionally, the report spanned 17 industry areas, with the top four being financial, industrial, professional services and technology. The number of organizations involved “make[s] [the report] one of the biggest in this area and most trustworthy,” Garber said.