FPF Sees State Legislators Experimenting With Frontier AI Regulation
A comparison of California’s new frontier AI law with a similar New York state bill highlights “how state legislators are experimenting with comparable, yet distinct, approaches to AI frontier model regulation,” Future of Privacy Forum AI Policy Analyst Justine Gluck blogged Friday.
Gov. Gavin Newsom (D) earlier this week signed SB-53, which requires transparency about safety and security protocols and provides whistleblower protections for employees at AI developers (see 2509290064). Earlier this year, New York passed the Responsible AI Safety and Education (RAISE) Act (S-6953/A-6453), but it hasn’t yet been transmitted to Gov. Kathy Hochul (D) for her signature.
The California frontier AI law “is more detailed in content -- requiring frameworks, transparency reports, and whistleblower protections -- while RAISE is stricter in enforcement, with higher penalties and liability provisions,” wrote Gluck. “Both bills share core elements, such as compute thresholds, catastrophic risk definitions, and mandatory frameworks/protocols.”
New York Assemblymember Alex Bores (D), sponsor of the RAISE Act, told Privacy Daily in August that his AI safety bill could still be revised through the state’s chapter-amendment process before it becomes law (see 2508060059).
“Given Newsom’s signature of SB 53, a central question is whether RAISE will be amended to more closely align with the California law,” said Gluck. “Other states, including Michigan, have introduced proposals of their own, illustrating the potential for a patchwork of requirements across jurisdictions.”