42 Bipartisan AGs Ask AI Companies to Enhance Chatbot Safety

AI companies should exercise more quality control over chatbots, Pennsylvania Attorney General Dave Sunday (R), New Jersey AG Matthew Platkin (D) and 41 other AGs said Wednesday. The AGs sent a letter Tuesday to OpenAI, Google, Meta, Microsoft and other major companies that produce and distribute AI software, Sunday’s office said.

The bipartisan group urged companies to implement more warnings, testing and recall procedures, among other consumer protections. Companies should commit to changes by Jan. 16, the AGs said.

“Our support for innovation and America’s leadership in A.I. does not extend to using our residents, especially children, as guinea pigs while A.I. companies experiment with new applications,” the letter said. “Nor is our support for innovation an excuse for noncompliance with our laws, misinforming parents, and endangering our residents, particularly children.”

States are showing increasing interest in reining in AI chatbots, which have been at the center of broad privacy concerns. Earlier this year, California Gov. Gavin Newsom (D) signed legislation concerning AI chatbots that could be used by children, while vetoing a separate bill (see 2510140010). Meanwhile, amid intensifying regulatory pressure about kids' online safety, the chatbot company Character.AI said in October that it will limit children’s ability to have an open-ended chat with AI on its platform (see 2510300015).

Three consumer advocates Tuesday released a model state bill aimed at preventing privacy harms from AI chatbots (see 2512090048). Also, a recent Duke University study found people are increasingly using general-purpose AI chatbots for emotional and mental health support, with many unaware that privacy regulations like HIPAA fail to cover these sensitive conversations (see 2508070022). Additionally, Meta AI users have been posting what's typically private information for everyone to see on the app, raising questions about whether they realize when they’re sharing AI queries with the world (see 2506120082).