Privacy Daily is a service of Warren Communications News.
Not a 'Magic Bullet'

Age Assurance Requires Balancing Privacy and Safety, Panelists Say

While age assurance is a useful tool to ensure children have appropriate access to the internet, it's not a silver bullet and must be implemented thoughtfully, a Google official and other privacy experts said during a Family Online Safety Institute (FOSI) event Monday.

Age assurance “comes down to delivering age-appropriate experiences for youth online,” and “putting the right kinds of protections in place for them,” said Kate Charlet, senior director for privacy, safety and security at Google. “We want to treat teens as teens, and we want to treat adults as adults.”

Charlet noted that many companies offer protections like safe search, but they must be activated. That's where age assurance is useful, she said: knowing when those protections should be toggled on.

For example, “machine learning” lets Google take “signals” from users “to estimate whether” the person is younger or older than 18, and “no additional information needs to be collected,” Charlet said. Regardless of the age a user claims, “if our estimation capability assesses that they're likely to be a minor, we will turn on all of those protections that we have for minors.”

She also noted that Google engineers use a modified zero-knowledge proof, which “is a privacy-enhancing technology.”

However, potential risks remain, said Charlet: Since providing a government ID is more privacy-invasive than other methods and can cut access to those who may not have an ID, it “should be reserved for the higher-risk use cases.”

Future of Privacy Forum CEO Jules Polonetsky echoed this sentiment on a later panel. “There are tiers of risk” and an age-assurance framework needs to reflect that, because if we start “applying intrusive methods" to the mass population, "we're going to have people flipping it off and freaking out,” which would be a “mess.”

Sydney Saubestre, a New America senior policy analyst, noted the tension between privacy and safety: When safety requires users to surrender their data, privacy issues inevitably arise. Accordingly, age estimation requires “context” and “proportionality.” The world is getting closer to that point, but it's not there yet, she added.

Polonetsky said that with age assurance, we need a “baseline that covers everybody with [specific] definitions in a logical way,” otherwise “it ends up being leaky and faulty” or even “a chaotic situation.”

Almudena Lara, Ofcom’s online safety policy director for child safety, said “age assurance is not conceived as an end in and of itself” but as “a means to an end.” The thought process needs to be “if I'm going to be much more age aware of where my children are, it needs to be for a purpose.”

The question of “what type of experience do we want for children?” needs to be thought out broadly, which Lara said makes it “a bit more technology resilient.”

But panelists agreed that there are other options besides age assurance. “There are a lot of baseline protections that" can be implemented without "need[ing] to know [a user's] age," Charlet said, adding that an increase in education and literacy can also help.

For Saubestre, the issue is that “we are trying to solve a not solely technical problem with tech.” As such, “We really need to be treating kids as independent agents that we want to develop, and not as people who can be locked out of systems and then suddenly, when they're 18 years old,” given access to everything on the internet, she said.

Privacy Regulation Can Help

Regulation can bolster age assurance, said Andrew Zack, policy manager at FOSI. With data privacy laws on the books, “people would be more likely to give over their data” for “an age check if there were collection, use and retention limitations” on that data, he added.

“If you bring in the privacy folks … that's when you make progress," agreed Google's Charlet. However, while “all the legislation ... out there is well-intentioned,” her employer hasn't supported all of it, she said.

For example, Google doesn't believe “everybody should have to verify with an ID just to access basic online services” like applying for a job, said the official. And whatever is chosen “needs to be scalable” to work on mobile apps, websites and app stores.

Lara said that when it comes to age assurance and the law, “I don't think that anybody is getting it right yet,” but “we are all moving forward in, hopefully, ... the right direction.” Implementing a “really big societal change … takes a lot of effort,” as well as trial and error, so it will take time.

Saubestre said that “we are trying to regulate the equipment today, but ... it's advancing very quickly,” which poses some issues. But at the same time, “a lot of the solutions that we have are the same solutions.” For example, she said, “if you pass a data privacy law, you are helping to figure out a lot of the issues that you have with AI" as well.

“I don't think that privacy and security have to be in opposition, but I do think that they need to be contended with,” she added.

When it comes to AI and large language models, the goal should be to “provide the right experience for different age groups,” rather than imposing regulations based on what the interaction with the technology looks like, said Polonetsky.

Saubestre also noted that “a lot of times when we talk about assurance or age estimation, we're talking about a very specific type of young person who has parents who are relatively digitally literate or involved.” But systems really need to be built that “work for everyone,” she said.

“Age assurance might not quite be a destination, but just part of a journey ... in the right direction,” Lara said.