China's AI Agent Boom Triggers Official Security Warnings: Risks Behind OpenClaw's Viral Adoption
AI agent tools like OpenClaw go viral in China, enabling natural language task automation. But officials warn of security risks including data leaks, prompt injection attacks, and the 'ClawJacked' vulnerability.
China's AI agent market is experiencing an unprecedented explosive growth, but the accompanying security risks have drawn heightened vigilance from authorities. According to Taipei Times, China's Cyberspace Administration issued the "Interim Measures for the Security Management of Generative AI Agents (Draft for Comments)" on March 12, 2026 — the world's first regulatory document specifically targeting AI agents.
An in-depth report by the South China Morning Post reveals the context behind this policy: over the past six months, more than 500 AI agent products and platforms have emerged in the Chinese market, spanning programming and development, financial analysis, customer service, educational tutoring, legal consulting, and many other fields. Among them, tools such as OpenClaw, Coze (owned by ByteDance), Baidu's Agent Platform, and Alibaba's Tongyi Agent have rapidly gained popularity among developers.
A commentary published by Xinhua News Agency points out that AI agents are fundamentally different from traditional conversational AI — they can not only generate text but also autonomously execute operations, access the internet, call APIs, read and write file systems, and even control other software. This "action capability" brings tremendous productivity gains but also means security risks have expanded from the information layer to the operational layer. The article cites examples of users reporting that AI agents accidentally deleted important files, sent erroneous emails, and even made unauthorized network requests while executing tasks.
An investigative report by Wired China exposes deeper security concerns. Journalists found that the permission control mechanisms of some AI agent platforms have serious defects: system-level permissions granted to agents by users could be exploited through malicious prompt injection, causing agents to perform unintended operations. An anonymous security researcher demonstrated to reporters how carefully crafted prompts could cause a well-known platform's agent to bypass safety guardrails and access users' private file listings.
The China Academy of Information and Communications Technology (CAICT) released a "White Paper on AI Agent Security" in March, categorizing agent security risks into five major types: prompt injection attacks, tool invocation abuse, data leakage risks, autonomous decision-making deviations, and multi-agent coordination risks. The white paper recommends establishing the "principle of least privilege," meaning agents should only be granted the minimum set of permissions needed to complete specific tasks, along with comprehensive operation audit logs.
Notably, OpenClaw, as an open-source AI agent framework, is particularly popular among Chinese developers due to its flexibility and powerful tool ecosystem. GitHub statistics show that OpenClaw's Chinese user base grew by 340% over the past three months, making it the fastest-growing market globally. OpenClaw's design philosophy emphasizes security-first, with built-in security mechanisms such as permission tiering, operation confirmation, and sandbox execution, and was cited in the CAICT white paper as a "security practice reference case."
However, the strengthening of regulation has also raised concerns within the industry. Several Chinese tech company executives stated in anonymous interviews that overly strict regulation could stifle innovation. A CEO of an AI startup in Shenzhen told the South China Morning Post: "AI agents represent the biggest opportunity for China's AI application-layer innovation. If regulation is excessive, China could be overtaken by the United States in this field." But authorities clearly believe safety comes first. In its policy interpretation, the Cyberspace Administration emphasized: "Development and security must go hand in hand. The safety boundaries of agents must be established before widespread adoption, rather than trying to fix things after a major incident occurs."
From a global perspective, China is at the forefront of AI agent regulation. Although the EU's AI Act began phased implementation in 2026, it has yet to include specific provisions targeting AI agents. On the U.S. side, the White House Office of Science and Technology Policy is also monitoring this area, but currently remains at the level of issuing guidance. China's regulatory practices could serve as important reference for other countries.
From an international comparative standpoint, China's regulatory response to AI agents has been far faster than that of Europe and the United States. The U.S. currently has no federal-level regulatory legislation targeting AI agents, with only California and New York drafting state-level bills; while the EU's AI Act has taken effect, its provisions concerning AI agents are still in the technical standards development phase, with specific compliance requirements not expected until 2027. China launched its legislative process just three months after the mass proliferation of AI agents, and this model of "regulation closely following innovation" has attracted international attention.
On the technical security front, a research team from Tsinghua University's Department of Computer Science released an AI agent security assessment report in early March, testing the performance of 12 mainstream AI agent products under extreme scenarios. The results were concerning: under carefully designed adversarial prompt attacks, 9 products exhibited unauthorized operational behaviors, including accessing files not explicitly authorized by users, sending API requests containing sensitive information to third-party services, and "jailbreak" behavior that bypassed security restrictions during task execution. The report recommends that all AI agent products implement the "principle of least privilege" and "operational sandbox" mechanisms.
Regarding the impact on industrial development, industry reactions show clear divergence. Large companies generally expressed support for reasonable regulation — Tencent, Alibaba, and Baidu all publicly stated they "welcome the government setting clear rules," as a regulatory framework actually helps alleviate user concerns about security. However, small and medium-sized developers worry about excessive compliance costs. A community contributor involved in OpenClaw development wrote in a GitHub Discussion: "If every agent operation requires secondary confirmation and 90-day log retention, this essentially excludes small teams from the AI agent track." Finding the optimal balance between security and innovation will be the core challenge facing China's AI agent regulation.