AI Transparency Needed from Companies and Government, Panelists Say

Government intervention can either advance or harm human rights, so there needs to be transparency and accountability from governments and companies developing and deploying AI, said experts during a panel at a Center for Democracy & Technology (CDT) event Tuesday.

Privacy laws, human rights assessments, risk management and rights-respecting procurement guidelines are examples of actions governments can take to help protect people's rights, said Min Aung, Global Network Initiative's accountability and innovation manager. On the other hand, things like “mandating overbroad censorship” or “imposing requirements for ideological alignment,” such as the Trump administration’s Preventing Woke AI executive order, can damage rights, Aung said.

Becca Branum, deputy director of CDT’s Free Expression Project, noted the tension in Trump's July 23 executive order: it "demanded that government-procured LLMs" be "objective and ideologically neutral," yet it also said such LLMs should "refrain" from producing outputs with which the administration disagrees.

The executive order's framing “gave a lot of people heartburn, including me,” said Neil Chilson, head of AI policy at the Abundance Institute. But “the actual operative language of the executive order” is not that bad, he said. Though it includes conditions for AI models, one of the ways to fulfill the conditions is “by explaining exactly any of the intentional manipulations you might be making to the outputs of the model.”

That disclosure requirement “makes it much more practical for companies to do,” Chilson said. “While directionally, it might be a little bit concerning … the fact that you can comply with this through transparency” made it “a much more practical executive order than it might have been otherwise.”

“A lot of the devil will be in the details,” said Eugene Volokh, a law professor at the University of California, Los Angeles. Whatever guidelines come out of the executive order will be the true test of its impact on free speech, he said.

There is a “sort of squishiness around this ideal of ideological neutrality,” said Cody Venzke, senior policy counsel at the American Civil Liberties Union. When it comes to AI and civil rights, “explainability is such a key component for accountability,” but sometimes it's hard for developers to say how “they got from an input to a particular output.”

“We are going to run into problems in ensuring that the large language models -- structured as they are -- adhere to whatever safeguards that the developers and deployers try to build in compliance with this guidance that we’ll ultimately see,” he added.

Miranda Bogen, founding director of the CDT AI Governance Lab, said the key questions are the quality of the data AI models are trained on and “how do we know what's happening within a model?” Privacy implications are something to consider, she said, especially if a person is in distress while using a chatbot. Additionally, whether liability exists could shape the implications of AI data or outputs and the controls placed on them, Bogen said.

Aung said it’s all about “accountability.” This means “companies participating in mandatory or even voluntary risk assessments, companies engaging with a civil society” and companies “being transparent about all these activities.”