Privacy Fundamentals Can Work for AI Governance, Panelists Say
Emphasizing fundamentals and ensuring staff working with AI understand its risks are key to protecting privacy, panelists said at a privacy risk event Tuesday. A second panel discussed the challenges of complying with varying global AI regulations. DataGrail, a compliance vendor, sponsored the event.
“In a world where data makes it possible for agentic systems to be useful, there's an opportunity for anything that we think of as an ‘agentic’ system to fail in the same ways that humans make mistakes,” said Jason Clinton, deputy chief information security officer at Anthropic, an AI company.
“People frequently fall for phishing attacks” and send “spreadsheets of customer data out the front door ... and an AI [system] is vulnerable in the same way that people are,” he added. “We have to assume that we need to put the right guardrails in place in the same way that we put guardrails in place for people.”
Whitney Merrill, data protection, privacy and compliance head for software company Asana, agreed. A “strong foundation has been established over the last 20 years around privacy,” and much of it “applies to AI and to the innovation happening right now,” she said. Data governance basics hold “regardless of the pace of innovation,” making privacy fundamentals as crucial as ever.
Merrill noted that AI, in some form, has existed for years and that it's “just another version of processing.” Accordingly, “After you build the fundamentals, you can start to focus in on the other pieces of regulation” and “what's changing.”
Glean Chief Information Security Officer Sunil Agrawal said privacy didn't keep pace with innovation, as it took almost 15 years for the GDPR to be implemented. However, “when it comes to AI, we have been [a] lot … faster.”
Within generative AI, he said, there are a few main ways data leakage can occur: within the model when it is provided with data, when the model is used incorrectly, and through a side channel, where the AI company itself may not be secure or trustworthy.
Merrill emphasized issues with newer companies. It’s likely that they “have immature processes, or processes that rely on manual or human intervention in order to complete them; meaning the chances for something not being completed or not being checked or not actually happening is really high.”
It’s also harder for newer companies to keep up with regulations, she said, so verifying that vendors are actually implementing zero retention and other promised practices is key.
Additionally, it’s not okay for employees to use shadow AI, where they “anonymize [the] data and stick it in some other random, unapproved tool” if they encounter problems with company-approved AI systems, Merrill said. “The important thing here is to really set out those guardrails and provide robust solutions for employees."
Clinton noted that draft frameworks are available to build on that can help “organizations to manage their risks around AI systems.”
“Guardrails don't slow you down” but “actually allow you to go faster,” said Agrawal, because you don't need to worry about whether you have “sufficient security.”
Merrill agreed. “Controls actually help you move faster.”
Varying Regulations
Turning to global AI issues, a second panel of speakers discussed the difficulty of compliance in an unpredictable regulatory atmosphere.
For example, Gabriela Zanfir-Fortuna, vice president for global privacy at the Future of Privacy Forum, said, “We have a lot of uncertainty around the EU AI Act.” Currently, “only specific provisions” of it are in effect. “This is not an ideal environment to operate in,” she added.
But the so-called “Brussels effect,” in which the GDPR influenced regulation in the rest of the world, is not occurring with the EU AI Act, said Zanfir-Fortuna. She said this is likely because the AI Act is a “complex piece of legislation” and “heavily bureaucratic” in a way that would make it difficult to transplant elsewhere. However, other countries are developing AI regulation tailored to their needs.
Attorneys on a separate webinar the same day argued that the EU AI Act will influence U.S. regulation just as the GDPR did (see 2510210038).
Within the U.S., companies must navigate both state and federal laws, including privacy as well as AI statutes, said Shannon Yavorsky, an Orrick privacy lawyer.
Omer Tene, a Goodwin privacy attorney, said “the uncertainty is much broader than just with respect to the specter of regulation." For Tene, questions remain about whether AI will "even work."
The regulatory uncertainty around AI reminds Yavorsky of when global privacy laws were emerging. As such, she said adopting “principles-based governance frameworks” is a good starting place.
Zanfir-Fortuna suggested internally mapping AI use to figure out the extent to which a company is using the technology. Such mapping is also useful for privacy compliance, she added.
Andy Dale, chief privacy officer for OpenAP, a television advertising group, said many companies lack the resources for such mapping, while others have already mapped their AI use “and know they were doing it” because it was “part of product development” or something else.
Yavorsky said risk assessments are the next step. Tene added that it’s important to “talk to your counsel,” because they understand risks.