Report Links Level of Schools' AI Use With Privacy Concerns
Use of AI in U.S. schools is linked to increased privacy risk, the nonpartisan Center for Democracy and Technology (CDT) said in a report released Wednesday that surveyed more than 1,000 students, teachers and parents between June and August. The report also found that parents were the group most worried about school-related data privacy, and that transgender and immigrant students face heightened privacy risks.
“Our research clearly shows a connection between how much AI is used in schools and large-scale data breaches,” said CDT's Elizabeth Laird in an email to Privacy Daily.
While just 18% of teachers who limit their AI use at school reported experiencing a large-scale data breach, 28% of teachers who use AI for many classroom purposes reported experiencing one.
There are at least two possible explanations for this, said Laird, an author of the report and director of CDT's Equity in Civic Technology Project. One is that “teachers who are more likely to use AI are more attuned to issues related to data and technology generally, including data breaches,” and the second is that “tools that incorporate AI are data-intensive, and any new system that collects and/or produces data necessarily introduces heightened privacy and security risks.”
The report also found that parents are the group most concerned about student data privacy and security, followed by students, with faculty the least concerned.
Across the board, those who use AI more frequently report more concern about student privacy and security. Laird noted that “the explosion of AI use in schools is coinciding with long-standing concerns and emerging risks to students' privacy.”
According to the report, 76% of parents who use AI frequently are concerned about student data privacy issues, compared with 65% of parents who never use AI.
Of students who use AI for many school-related reasons, 67% have privacy concerns, compared to just 41% who use AI for some school-related reasons and 38% who use it for few school reasons.
Just 40% of teachers who use AI for many reasons are concerned about student privacy, a figure that drops to 28% among teachers who limit their AI use for school purposes.
“Additionally, immigrant students and transgender students are experiencing heightened risks to their privacy,” Laird said. “Seventeen percent of teachers say that their school has shared student information with immigration enforcement, and 13 percent of teachers say that they know of a staff member who has proactively reported a member of the school community (e.g. student, their family) to ICE.”
Around seven in 10 teachers report having received training on student privacy policies in the 2024-2025 school year, and four in 10 were trained on school data privacy policies related to immigration enforcement.
Immigration data is a key privacy point, as “the new Administration has rescinded a previous policy that protected sensitive locations from immigration enforcement, including school campuses,” the report said. Historically, enforcement of the U.S. Supreme Court's Plyler v. Doe decision has meant schools should not discourage student enrollment, for example by asking about immigration status, though “there are funding and assessment reasons” for limited data collection on immigration.
“Understanding the data schools collect about immigrant students, and how they are applying best practices in data governance, is critical for local and state education agencies to fulfill their legal obligations to comply with Plyler v. Doe along with federal and state student privacy laws,” the report added.
“The best way to address these issues is to move beyond rhetoric about whether AI is altogether good or bad and instead get specific about the unique privacy risks of using AI in schools,” said Laird.
To that end, Laird advocates training teachers on protecting student data when it's shared with AI-driven tools, centralized vetting of tools and software that include AI to ensure best practices in privacy and security, and privacy-forward contracts with edtech providers that incorporate AI. Those contracts should include “robust provisions around data retention and deletion, prohibition on secondary uses, and clear business rules around de-identification procedures.”