Meta Oversight Board Member Frustrated at Lack of AI Consideration
Suzanne Nossel, a member of Meta’s independent oversight board, told an audience in Washington Tuesday that she feels a “sense of frustration” at the board’s inability to address emerging AI issues like chatbots interacting with children.
“As a board, we’re acutely aware of the constraints of our modus operandi,” Nossel said during the Center for Industry Self-Regulation Soft Law Summit. “We often feel like a single traffic cop operating on a superhighway of information.”
Meta CEO Mark Zuckerberg announced the oversight board's creation in 2018 as an independent entity focused on reviewing content moderation decisions and their impact on free speech across platforms like Facebook and Instagram.
AI chatbots have been at the center of a controversy involving children and the promotion of self-harm or sexually suggestive content. State lawmakers have tried to regulate AI-driven harms to children. For instance, California Assemblymember Rebecca Bauer-Kahan (D) crafted AB-1064, which targets chatbots (see 2509260040). The bill passed the legislature last month but needs a signature from Gov. Gavin Newsom (D).
Board members are following articles about the “latest outrage online” surrounding chatbots interacting with children, she said. “We do feel a sense of frustration at what we can get our hands on as a board.”
The board’s decisions and recommendations apply to the use of AI on Meta platforms but not directly to the inner workings of Meta’s large-language model (LLM) technology, she said. “That’s something we’re talking about, and we have had over time some expansion of our jurisdiction.”
When the board was created, it could hear only appeals of content the company had removed, she noted. Eventually, the board's jurisdiction widened to cover content that the company left online, she said. “Our hope over time is that that scope will continue to expand.”
Nossel said the board’s decisions on individual posts, photos and videos are “honestly, rarely all that consequential,” considering there are no guarantees a single position on a matter can be applied in future cases. If political speech is suppressed in the run-up to an election, “we can’t undo that once the votes have been cast,” and if a damaging post goes viral, “that’s already happened.”
The board’s most significant impact is in making policy recommendations, though Meta isn’t required to follow them, she said. Nossel estimated Meta implements board recommendations about 60% of the time.
Nossel said the board, which formally began its work in 2020, has been a five-year “experiment.” As the online world moves toward an environment dominated by LLMs, the board’s body of work can be applied when trying to anticipate harms and putting guardrails in place, she said: “My hope is Meta stays invested in this experiment, and the only way an experiment works is if you can iterate over time and evolve in response to what you’re learning. I hope in the next few years we’ll get the opportunity to do just that.”