Privacy Daily is a service of Warren Communications News.

China Developing Safe AI Applications, Academic Says

China is crafting guardrails for AI development and applications and has spoken with the U.S. about AI safety issues, Lan Xue, a Brookings Institution visiting nonresident fellow, said Thursday at a streamed Forum Global International AI Summit in Brussels.

To achieve its goal of being the world's AI leader by 2030, China is focused on a global effort to develop large language models (LLMs), and is exploring their use in various applications, as described in the country's AI Plus plan, said Xue, a dean and professor of arts, humanities and social sciences at China's Tsinghua University. He described AI Plus as a policy plan aimed at encouraging international cooperation and promoting AI applications in the service and manufacturing sectors.

The Chinese government recently updated its AI rules to enable development of models with guardrails, said Xue. Its AI safe development plan is a set of guidelines, policies and laws developed by several government agencies and some industrial organizations, he added.

Asked to what extent China is participating in global AI safety efforts, Xue said it's working with in-country institutions. It wanted to join the international network of AI safety institutes that held its inaugural meeting in November 2024 but wasn't given the opportunity, he added.

The country has, however, been active in United Nations platforms and in multilateral and bilateral discussions, including with the U.S., he said. It's absolutely necessary that China and the U.S. cooperate on AI to guard against malicious use, malfunction and systemic risk, he added. Both countries should identify red lines for companies to avoid in developing AI, he said; their safety institutions could then collaborate to ensure AI models are safe.

There are also opportunities for China and the EU to collaborate, he said, pointing to Europe's well-developed AI Act (see 2510210038) and the fact that China has many open-source AI models, which, he said, might benefit small and mid-size European enterprises.