OpenClaw, an open-source artificial intelligence (AI) agent formerly known as Moltbot and Clawdbot, has gone viral in China in recent weeks, partly boosted by promotional campaigns from Tencent and Alibaba. More than a chatbot, it can handle emails, schedules and payments on a user’s behalf.
The momentum mirrors a broader shift first seen in the United States at the start of the year, in which developers moved beyond conversational models toward agents capable of performing real-world actions. That wave has now reached China, triggering debate within industry and government over governance, safeguards and the risks of delegating sensitive tasks to software that may operate with limited transparency.
The Chinese government has warned that OpenClaw, with access to email and bank accounts, could expose sensitive personal and financial data. In China, deploying OpenClaw is nicknamed “raising lobsters,” a nod to the project’s lobster mascot.
“The OpenClaw technology is spreading rapidly across society, from enterprises to individual users, bringing efficiency gains alongside rising security risks,” the Ministry of State Security said in providing “guidelines on raising lobsters” on social media on Tuesday. “Agent systems operate with broad permissions and can interact across multiple platforms, creating new vulnerabilities if not properly controlled.”
“‘Lobsters’ lack professional maintenance and patching mechanisms, and attackers may use malicious plugins to bypass their controls and actively exfiltrate users’ core sensitive data, often with stealth exceeding traditional trojans,” it said. “Users should remain vigilant and avoid exposing critical resources to uncontrolled agent access.”
The ministry suggested that users:
- check public exposure, permissions, credentials and plugin trust;
- apply least privilege, limit scope, encrypt data, keep audit logs, run in a sandbox virtual machine and restrict core access;
- treat it as a digital employee, enforce governance and keep use compliant, secure and controlled.
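The ministry’s checklist can be sketched as a small configuration audit. The field names and policy below are illustrative assumptions for the sake of the example, not official OpenClaw settings.

```python
# Illustrative audit of an agent's configuration against the ministry's
# checklist. Every config field here is hypothetical; OpenClaw's real
# settings may use different names and structures.

def audit_agent_config(config: dict) -> list[str]:
    """Return a list of warnings for settings that violate the guidelines."""
    warnings = []
    # Public exposure: the agent should not listen on all interfaces.
    if config.get("bind_address") == "0.0.0.0":
        warnings.append("agent is exposed to the public internet")
    # Credentials: defaults must be changed.
    if config.get("credentials") == "default":
        warnings.append("default credentials left unchanged")
    # Sandboxing: run inside a VM or container, not on the host directly.
    if not config.get("sandboxed", False):
        warnings.append("agent is not running in a sandbox or VM")
    # Least privilege: flag anything beyond a minimal read-only scope.
    broad = set(config.get("permissions", [])) - {"read_email", "read_calendar"}
    if broad:
        warnings.append(f"permissions beyond least privilege: {sorted(broad)}")
    # Audit logs: required for the "digital employee" governance the ministry urges.
    if not config.get("audit_log", False):
        warnings.append("audit logging is disabled")
    return warnings
```

A fully locked-down configuration would produce an empty list of warnings, while a publicly exposed agent with payment permissions would trip several checks at once.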
Prior to this, the National Computer Network Emergency Response Technical Team/Coordination Center of China (CNCERT/CC) had warned on March 10 that OpenClaw can control computers via natural language, but weak default security leaves users exposed to “prompt injection,” an attack in which hidden instructions trick the AI agent into harmful actions.
“Hidden malicious instructions can be embedded in web pages to trick OpenClaw into executing them, potentially exposing system keys. Some plugins have also been identified as malicious or high-risk, and may steal credentials or carry out harmful actions once installed,” it said.
It warned that excessive permissions could allow attackers to seize control of systems and expose sensitive personal and financial data.
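The attack CNCERT/CC describes can be illustrated with a deliberately naive filter: text fetched from a web page is handed to an agent as context, and instructions hidden inside it can be mistaken for user commands. The pattern list and delimiter tags below are assumptions for the demonstration; keyword filtering alone is not a real defense, and production systems need far stronger isolation.

```python
# Naive illustration of defending against prompt injection. The suspicious
# patterns and the <untrusted_content> delimiter are illustrative choices,
# not anything OpenClaw itself uses.
import re

SUSPICIOUS = re.compile(
    r"(ignore (all )?previous instructions|reveal .*key|delete .*files)",
    re.IGNORECASE,
)

def wrap_untrusted(page_text: str) -> str:
    """Mark fetched web content as data, rejecting obvious injection payloads."""
    if SUSPICIOUS.search(page_text):
        raise ValueError("possible prompt injection in fetched content")
    # Delimit untrusted text so the model can treat it as data, not commands.
    return f"<untrusted_content>\n{page_text}\n</untrusted_content>"
```

Marking external text as data rather than commands is the core idea; the hidden-instruction payloads CNCERT/CC describes succeed precisely when an agent makes no such distinction.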
‘Lobsterization’
Large language models (LLMs) such as ChatGPT and DeepSeek can answer questions, write articles and suggest travel plans, but they act only when prompted. By contrast, an AI agent can connect a messenger (WhatsApp, Telegram or WeChat), an LLM, an email account, a storage device, and an e-wallet to operate on a schedule and execute tasks end-to-end, from brainstorming ideas to making payments, with minimal human input.
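The end-to-end loop described above can be sketched in a few lines: a model picks the next tool, the loop executes it, and the cycle repeats until the task is done. The “LLM” here is a stub returning canned decisions, and the tool names are invented for the demo; real agents call a model API and wire in actual email, messaging and payment integrations.

```python
# Minimal sketch of an agent loop. stub_llm stands in for a model call;
# the tool names and decision logic are hypothetical.

def stub_llm(task: str, history: list[str]) -> dict:
    """Stand-in for a model call: pick the next tool for a demo task."""
    if not history:
        return {"tool": "draft_email", "args": {"topic": task}}
    return {"tool": "done", "args": {}}

# Registry of actions the agent may take, keyed by tool name.
TOOLS = {
    "draft_email": lambda topic: f"Draft email about: {topic}",
}

def run_agent(task: str) -> list[str]:
    """Loop: ask the model for the next step, execute it, stop on 'done'."""
    history: list[str] = []
    while True:
        step = stub_llm(task, history)
        if step["tool"] == "done":
            return history
        result = TOOLS[step["tool"]](**step["args"])
        history.append(result)
```

The security debate in this article turns on what goes into that tool registry: an entry for reading a calendar is low-stakes, while one that sends payments gives the loop real-world power.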
A year ago, Manus, developed by Beijing-based startup Butterfly Effect, emerged in China as an early example. Its AI platform can complete tasks in seconds, including planning trips, searching for overseas housing and analyzing financial statements.
Compared with Manus or other agentic AI platforms, OpenClaw offers two additional advantages. It can be downloaded to a personal computer and deployed locally for free, and its “lobsters” can automatically generate and test their own code to complete tasks through multiple approaches.
Some commentators say that using Manus is like renting a robot while OpenClaw is akin to owning and running the system yourself, with greater flexibility but also greater complexity and responsibility.
OpenClaw has gained rapid traction in China, with Tencent Cloud and Alibaba Cloud actively pushing adoption. On March 6, Tencent Cloud’s engineers offered on-site “install and play” services in Shenzhen, helping hundreds of users open accounts on Tencent Cloud servers, deploy OpenClaw, configure models and connect messaging tools.
OpenClaw creator Peter Steinberger initially criticized Tencent for copying content from the official ClawHub marketplace without coordination.
“They copy yet they don’t support the project in any way,” he wrote on X. Tencent later became a sponsor via GitHub Sponsors on March 15, after which Steinberger signaled satisfaction with the support.
“The rise of ‘lobsters’ plays to Tencent’s strengths in cloud and AI,” Tencent Chief Executive Pony Ma said at the company’s annual results briefing on Wednesday. “By integrating agents with instant messaging apps, users no longer need to wait for responses. Tasks can run in the background, delivering a more ‘human-like’ experience that learns and adapts to individual preferences over time.”
He added that agentic AI represents a new deployment model, opening fresh opportunities across Tencent’s ecosystem. He said the approach is also shaping the company’s plans for WeChat AI, where mini software programs could undergo “lobsterization” and become increasingly intelligent, extending automation across a wide range of services.
AI governance
Scientists broadly describe AI development in stages, from static LLMs to generative AI (creating songs and videos), and now to early agentic systems that can plan and act with tools. More advanced systems are expected to add memory and enable agent-to-agent collaboration, while artificial general intelligence (AGI), the longer-term goal of systems that operate across domains as humans do, remains further off.
Today’s systems are still in the early agentic phase. Users must decide how much access to grant, such as emails, documents and e-wallets, while recognizing that greater autonomy also brings higher cybersecurity risks.
In Europe, AI governance has taken shape through the EU AI Act, adopted in May 2024, which sets out responsibilities and penalties for AI providers and users. China has yet to introduce comparable rules, and authorities have told government bodies, state firms and schools to avoid installing “lobsters.”
Summer Yue, director of alignment at Meta Superintelligence Labs, said in a post on X last month that OpenClaw failed to follow her request to review emails for deletion and instead began deleting messages from her inbox. She said she was unable to stop the process and ultimately had to shut down her computer to halt the agent.
“Many users lack basic security awareness when deploying OpenClaw,” said Wang Liejun, a security expert at QAX Technology Group, a cybersecurity firm in China. “They expose their application programming interface (API) keys, the credentials used to access emails and data, to the public internet, keep default credentials unchanged and leave unnecessary ports open. This lets hackers scan and take over these agents, then use them to access networks or steal sensitive data.”
He added that a user should deploy OpenClaw on a virtual machine or a separate device to reduce data risks, noting that cloud-based environments provide isolation so any breach or system failure would be contained without affecting local data or home networks.
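A first step toward the exposure check Wang describes is simply testing whether an agent’s port answers at all from a given address; a service reachable only on loopback is far less exposed than one listening on every interface. The function below is a generic TCP reachability probe, and any specific port number one would test is an assumption, since OpenClaw’s defaults may differ.

```python
# Generic check for whether a TCP service is reachable at host:port.
# Useful for verifying that an agent listens only on 127.0.0.1, not on
# a publicly routable address.
import socket

def is_port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Running such a probe from outside the home network, against one’s own public IP, is a quick way to confirm that an agent has not been accidentally exposed.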
Such caution may hold for now, but innovation is accelerating as US tech giants push to make their LLMs better at executing real-world tasks.
On January 12, Apple and Google said that future Apple Foundation Models will be built on Google’s Gemini and cloud infrastructure, powering a more personalized Siri. On February 14, Steinberger said that he was joining OpenAI to help improve ChatGPT.
Follow Jeff Pao on Twitter at @jeffpao3
