Regulate AI to Protect Human Rights, UK Lawmakers Told

Privacy and other civil society advocates are the "regulators of last resort" seeking to uphold human rights in the age of AI, speakers said Wednesday during a hearing of the U.K. Parliament Joint Human Rights Committee. Lawmakers are considering recommendations to the government on AI regulation and rights protections.

Current laws don't meaningfully restrict facial recognition technology (FRT), so nongovernmental organizations like Big Brother Watch are effectively acting as regulators by filing judicial challenges, said the group's director, Silkie Carlo. There is "accidental legislation" around AI use in the U.K.: Existing rights-based protections such as the U.K. GDPR apply, but they weren't designed to govern AI, said Temitope Lasade-Anderson, executive director of Glitch, which advocates for Black women on issues of racial and gender injustice.

AI is causing harm in the U.K., Privacy International Director of Strategy Alex Pirlot de Corbion said. She cited the "mad dash" to vacuum up data into AI systems, government departments' use of automated decision-making technology for such things as determining welfare benefits, and the lack of clarity about how government agencies use AI tools.

One controversial use of AI is live FRT, said Carlo. The technology is operating on a large scale nationally, with police having scanned the faces of around 3 million people.

One problem with advocating against AI harm is the sense of "inevitability and exceptionalism" surrounding the technology, said Javier Ruiz Diaz, Amnesty International technology and human rights lead. It's portrayed as an unstoppable force, he said.

Amnesty International is also concerned about the anthropomorphic aspects of AI technology, Ruiz Diaz said. Individuals' struggles to deal with such things as chatbots haven't been sufficiently acknowledged. AI technologies aren't neutral, he added: They're building racism into policing systems and into social scoring for benefits.

Asked whether AI systems such as FRT offer benefits, such as enabling police to more accurately identify criminals, de Corbion said Privacy International is concerned about the scale of surveillance, the scope of FRT, and the harms experienced by people affected by it. Amnesty International believes FRT "fundamentally violates human rights," said Ruiz Diaz.

Lawmakers sought information on what laws are needed to mitigate AI risks. While AI systems differ, one common denominator is data, said de Corbion. Safeguards are needed at every stage of the AI lifecycle, from data collection to processing activities to data use and applications, she said.

Ruiz Diaz called for additional safeguards and for specific rights allowing people to obtain more information about particular AI decisions. Not all AI tools carry equal risk, so there should be red lines beyond which AI can't be used, said Lasade-Anderson. The government should weigh AI risks sufficiently and temper its optimistic commercial outlook with regulation, said de Corbion.

Big Tech a Concern

The public shouldn't be kept in the dark about how AI technologies process their data, de Corbion said, adding that Privacy International is concerned about the dependency being created on Big Tech companies.

AI differs from other technological waves in its ability to portray itself as a human actor, said Kevin Fong, a professor who's investigating the human, regulatory and ethical aspects of AI. For instance, AI applications can convince humans of things that aren't always helpful, such as when people in need turn to chatbots for psychological help, he said. He urged lawmakers to focus on the people they serve and not let large tech companies simply drop products on the public.

The time to regulate is now, said Michael Birtwhistle, associate director of law and policy at the Ada Lovelace Institute, a research organization focused on the use of AI and data for societal good. Enough is known about AI and its risks to create rules without stifling innovation, said Birtwhistle: Regulation is urgently needed to ensure public trust.