Facial recognition technology (FRT) deployment in stadiums can facilitate safety and security, though its use raises privacy and cybersecurity concerns, said Orrick lawyers in a Monday blog post.
Government intervention can either advance or harm human rights, so there needs to be transparency and accountability from governments and companies developing and deploying AI, said experts during a panel at a Center for Democracy & Technology (CDT) event Tuesday.
Companies should be ready to comply with European AI regulations because the EU AI Act will influence U.S. regulation just as the GDPR did, attorneys at Marashlian & Donahue said during a Tuesday webinar.
Companies developing or deploying AI systems in products aimed at children should consider safeguards such as privacy by design practices and limiting data collection, according to guidelines issued Monday by the Children’s Advertising Review Unit (CARU) of BBB National Programs.
A comparison of California’s new frontier AI law with a similar New York state bill highlights “how state legislators are experimenting with comparable, yet distinct, approaches to AI frontier model regulation,” Future of Privacy Forum AI Policy Analyst Justine Gluck blogged Friday.
State legislatures have passed 20 bills this year that could directly or indirectly affect private-sector AI development and deployment, the Future of Privacy Forum said in a report released Thursday.
There remains great uncertainty over how aggressively the federal government will try to preempt state AI regulations under the Trump administration’s AI Action Plan, said Robert McBlain, global data protection and AI compliance lead at consultancy Thoughtworks.
While few states have laws crafted specifically to regulate AI, all of them have measures that cover the technology, Troutman Pepper lawyers said during a podcast episode. Adding to the uncertainty are the Trump administration's anti-regulatory stance and rumblings that the failed federal moratorium blocking state AI laws will be revived. As a result, the attorneys recommended that companies prepare for a range of scenarios.
Health care providers must weigh the benefits of deploying AI chatbots against the need for legal safeguards that protect patient privacy, Womble Bond research consultant Amy Hill said in a blog post Monday. In particular, providers must comply with the Health Insurance Portability and Accountability Act (HIPAA), she added.
As employees increasingly integrate AI tools into everyday workflows without formal oversight, security incidents rise as well, said Monday's MoFo Privacy Minute blog post. However, training and technical guardrails can help mitigate the risks of AI use, Morrison Foerster lawyers Linda Clark and Dan Alam added.