Health Providers Must Address AI Shadow Use Now to Safeguard PHI, Panelist Says
More than 70% of health care employees are engaging in shadow AI use, and these unauthorized tools account for more than 80% of data policy violations, a panelist said during the Health Care Compliance Association event Wednesday. Accordingly, providers should address the issue with staff immediately and implement measures that make employees' AI tools less of a privacy risk.
“71% of health care workers are using their personal AI accounts” at work, tapping ChatGPT, Gemini, Claude or other platforms to write “clinical notes, draft patient letters and transcribe documentation,” said Jennifer Sommer, president of Foresight Consulting Services.
In addition, "81% of data policy violations in health care involved regulated data being uploaded to these unauthorized tools,” she said, citing a Netskope study in the HIPAA Journal. “AI algorithms inadvertently retain PHI from training data, unless you've taken specific technical measures and contractual steps to prevent it,” Sommer added.
While she urged health care providers and organizations to address shadow AI use now, she advised against giving employees a hard no on using these tools. Instead, “Give them a yes with conditions,” such as conducting a risk analysis and implementing basic controls like multi-factor authentication, so that employees' AI use safeguards patients' personal information.
AI tech is “fast and easy,” which is why it's used nearly ubiquitously in health care, Sommer said. However, it's “almost too easy to sign on to those platforms and not think about the consequences” of entering certain information, such as protected health information (PHI). Such AI tools are not HIPAA-compliant, nor will their developers and owners sign business associate agreements, she added.
Sehar Meraj, senior director of compliance programs at Matter Health, agreed. “The one problem that we see very routinely with AI is that everybody wants to use it, but” there's a lack of “clear benchmarks” for its use.
“The important thing to remember” is that privacy professionals must be constantly vigilant concerning AI-based tools. It's not “one and done,” Meraj said. As AI technology develops, it will “constantly” be changing and “innovating, and you will have to innovate your approach” to it.