Privacy Daily is a service of Warren Communications News.

Dutch LinkedIn Users Urged to Disable AI Setting to Avoid Model Training Use

LinkedIn users must actively opt out of a default-on AI setting or risk having the platform use their data to train large language models, the Dutch privacy regulator and a lawyer said.


"By default, and quietly introduced, all LinkedIn user data is being used for AI training purposes," Pinsent Masons technology attorney Nienke Kingma said in an article Tuesday.

Her comment followed a Sept. 24 warning from the Dutch DPA that LinkedIn users in the Netherlands should deactivate the AI setting before Nov. 3 to prevent their data from being used to train models.

LinkedIn said it plans to use public posts, comments and profile data -- including names, photos, roles and skills -- for “generative AI improvement” from that date, the DPA noted.

The company wants to use data dating back to 2003, a time when people shared information without foreseeing that it would be used for AI training, said Dutch DPA Vice President Monique Verdier. Once that data is in an AI model, users lose control of it, and the consequences aren't easy to gauge, she added.

The watchdog also said it's working with the DPA in Ireland, where LinkedIn's European headquarters is located, and with other regulators across the continent after receiving complaints.