Privacy Daily is a service of Warren Communications News.

Former FTC, CFPB Officials: Privacy Laws Can Regulate AI Chatbots

Former officials at the FTC and Consumer Financial Protection Bureau co-authored a guide on using existing privacy and consumer protection laws to address AI chatbot harms to minors, the Electronic Privacy Information Center said Monday. EPIC co-wrote the white paper.

“There is no law that exempts AI,” opens the guide. “As enforcers work to apply laws to emergent tech, including AI chatbots, this reference guide is designed to highlight ways to tackle these issues using longstanding precedent on unprecedented harms.” The paper’s authors include Sam Levine, former FTC Consumer Protection director; Stephanie Nguyen, former FTC chief technologist; and Erie Meyer, former CFPB chief technologist.

As examples of existing legal frameworks for regulating AI, the guide highlights COPPA’s requirements for data collection, retention and parental consent, as well as state privacy laws’ child-specific restrictions on targeted advertising and on selling or sharing data. The paper also notes that the FTC and the states can invoke their authority over unfair or deceptive acts or practices, and that states are developing legislation on AI mental health tools and companion chatbots.

“For years, federal policymakers let Big Tech police itself -- allowing sweeping data collection with little oversight,” said Levine, now a senior fellow at the University of California-Berkeley Center for Consumer Law & Economic Justice. “With the same pattern now emerging around AI, states are once again leading the charge to safeguard privacy.”

Nguyen, now a senior fellow at the Vanderbilt Policy Accelerator, said, “We’re seeing tech companies turning to the same strategies now that they’ve used time and time again: premature launches, rapid deployment, public harm, and quiet rollbacks with no accountability.”

“Companies shouldn’t get a free pass to ignore the law just because they call something ‘AI,’” said Kara Williams, EPIC counsel, in the press release. “The same consumer protection and privacy laws that have existed for years also apply to chatbots, and enforcers can and should use these laws to address the harms that come from tech companies putting out dangerous technologies.”