Teens Aware of AI Privacy Concerns, Study Says

Eleven percent of teens aged 15 to 18 consider lack of privacy and data protection the biggest concern with generative AI, according to a study from the Family Online Safety Institute (FOSI) published Monday.

The institute conducted a national survey of 1,000 U.S. teen generative AI users to examine how they view and use the technology in their day-to-day lives. Almost half of the teens surveyed used generative AI at least once a week and were able to articulate both its benefits and their concerns about it.

The survey showed 29% of respondents said AI was neither safe nor unsafe, “a pretty high number in the neutral zone,” said Alanna Powers-O’Brien, a report co-author and FOSI research & program specialist. She spoke during a FOSI event on online safety Monday. Three in five teens said they felt safe when using AI, and just 12% said they felt unsafe.

Gina Bell, a youth partner with equity firm InTandem and a participant in the survey, said at the FOSI event that she feels both safe and unsafe using AI. Logging in means the AI “actually saves your data, and the first thing that comes to mind would be data leaks,” she said. But Bell also understood that the data is saved to improve future models.

Powers-O’Brien said other study participants “told us that they aren't really considering safety when they're using these tools ... it's not safe or unsafe, it just is.”

A final takeaway, said Bell, is that teens want to be included in the AI conversation. “The world is constantly changing, and technology is changing with it,” so if young people “aren't a part of the conversation, AI might not reflect the needs of the next generation.”