French data protection authority CNIL and the German Federal Office for Information Security jointly published a paper Tuesday on applying zero-trust architecture principles to the design of large language model-based systems.
SB-318 is a legislative proposal to update SB-205, Colorado’s AI Act (see 2508070039).
States show growing interest in privacy laws covering neural and neurotechnology data, Future of Privacy Forum (FPF) said Tuesday. Four states have enacted laws so far: Montana, California, Connecticut, and Colorado.
The California Privacy Protection Agency's rules on automated decision-making technology (ADMT) and other subjects could receive Office of Administrative Law approval before the end of September.
Privacy Daily is providing readers with the top stories from last week, in case you missed them. All articles can be found by searching the title or clicking on the hyperlinked reference number.
People are increasingly using general-purpose AI chatbots like ChatGPT for emotional and mental health support, but many don't realize that regulations like the Health Insurance Portability and Accountability Act (HIPAA) fail to cover these sensitive conversations, a Duke University paper published last month found. Industry self-regulation seems unlikely to solve the problem, which may disproportionately affect vulnerable populations, said Pardis Emami-Naeini, a computer science professor at Duke and one of the report's authors.
American AI developers and deployers should determine whether they could be subject to the European Union’s general-purpose AI (GPAI) requirements under the AI Act, attorneys at Arnold & Porter said Monday.
New Jersey’s proposed privacy rules might be the most “aggressive” in the country, particularly the potential limitations on AI-related data scraping, attorneys and a tech industry official said in interviews.
California’s Judicial Council adopted a rule July 18 requiring court staff who use generative AI in their work to do so within the parameters of a use policy. A task force that developed the rule understood "the rapid evolution" of generative AI and, instead of "prescribing whether and how" courts can deploy the technology, attempted "to situate" its use "into a framework reflecting and applying broad legal, ethical, and professional principles," Morgan Lewis lawyer Jeffrey Niemczura said in a Thursday blog post.
The world has changed so dramatically since the EU AI Act took effect last August that its assumptions have been upended and its focus on rights has shifted, two digital rights advocates wrote in an op-ed Thursday in Tech Policy.